Re: Spacing and formatting across multiple math tags
From: Stan Devitt <jsdevitt@stratumtek.com> Date: Tue, 28 Oct 2003 09:22:41 -0500 Message-ID: <3F9E7BB1.4040205@stratumtek.com> To: silverbanana@gmx.de Cc: www-math@w3.org
In some sense, xhtml serves much the same purpose for browsers that
postscript does for printers. One generally just writes the printer
code necessary to cause the printer to show (say) the appropriate
thing. It is generally an information-losing transformation,
although conventions and a clever use of div, span with class
attributes can do better (as can a clever use of postscript functions).
Thus, I'm really seeing two requests, at least one of which can be
easily modelled right now.
1. The capability of embedding (some) xhtml in some part of mathml (so
that we can drive the printer better) - This has significant
implications for mathml display engines and is an extension to
presentation mathml.
2. Introduction of one or more meta-math structures to describe
multi-step solutions (an extension to content mathml).
There is nothing to prevent modelling this meta-structure in xml and
mapping it to tables (or using CSS directly) for presentation right
now. Such a thing could move into the spec once it stabilized.
Stan Devitt
Bernd Fuhrmann wrote:
> > However I think a more portable solution would be to use an xhtml
> > table to get the layout and use individual math elements in the xhtml
> > table cells to typeset the math fragments.
> Using tables for formatting would break the semantic structure of an
> XHTML+MathML-document. Furthermore there are cases that would require
> the use of presentation instead of content markup, which would be bad
> as well. Besides, using tables would force a renderer to put certain
> data into certain places, while I think that one should be able to
> "recommend" formatting. This is especially important when there are
> very long terms and/or narrow pages that require line breaks. A table
> could never solve that!
> If there is indeed no way to do this in a reasonable way with MathML
> 2.0, someone should add a mechanism to the next MathML specs.
> Regards,
> Bernd Fuhrmann
Received on Tuesday, 28 October 2003 09:21:56 GMT
This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 20 February 2010 06:12:55 GMT
Comments on atdotde: Not quite infinite (newest first)

cecil kirksey:

Robert: Interesting subject. I think I can accept the "mathematical" definition of summing divergent series because the "sum" can be defined in a potentially consistent manner.

However, in any real world situation the question is: does it EVER make sense to use such a divergent series? Would it ever make sense to add (sum?) an infinite number of measurable quantities? If not, exactly what is being added in ST? Thanks.

amused (2007-09-19 14:36):

Thanks Robert. You seem to have in mind the nonperturbative lattice formulation used in computer simulations, but there is also a perturbative version which does expand in the coupling constant - see, e.g., T. Reisz, NPB 318 (1989) 417, where perturbative renormalizability of lattice QCD was proved to all orders in the loop expansion. However, it is not clear to me that this will always give the correct physical results for any choice of lattice QCD formulation. There must surely be some conditions on the formulation; in particular some minimal locality condition. That's why I was surprised by the claim that any regularization (preserving the symmetries) must lead to the same end results.

(Btw, extraction of physics results from the lattice involves perturbative calculations as well as the computer simulations. I recall some nice posts about this on the "life on the lattice" blog at some point..)
Robert (2007-09-19 11:57):

I know there is some literature about different regularisation/renormalisation schemes giving identical results but trying to locate some using google scholar was unsuccessful. I know for sure that BPHZ and Epstein-Glaser have been shown to be equivalent and would be surprised if the ones more often used in practical calculations (i.e. dim reg) had not been connected as well. Step zero for such a proof (which in character is mathematical and not very physics oriented) is to define what exactly you mean by scheme X. That would have to be a prescription that works at all loop orders for all graphs, and not like in QFT textbooks where a few simple graphs are calculated (most often only one loop so they do not encounter overlapping divergences) and then a "you proceed along the same lines for other graphs" instruction is given.

Lattice regularisation, however, is very different in spirit as it is not perturbative (it does not expand in the coupling constant) so it is not supposed to match a perturbative calculation up to some fixed loop order. Thus it does not compare directly with Feynman graph calculations. Only the continuum limit of the lattice theory is supposed to match with an all loop calculation that also takes into account non-perturbative effects.

In fact, the lattice version of gauge theories is probably the best definition of what you mean by "the full quantum theory including non-perturbative effects" as those are not computed directly in perturbation theory and there are only indirect hints from asymptotic expansions and of course S-duality.

OTOH, starting from the lattice theory, you have to show that the continuum limit in fact has Lorentz symmetry and is causal, two properties that this regularisation destroys. Once you have managed this, it's likely you are not too far from claiming the 1 million dollars: http://www.claymath.org/millennium/Yang-Mills_Theory/
amused (2007-09-19 10:55):

Hi Robert, Lubos, and anyone else,

I have a question/doubt about something Lubos wrote in his post on this topic and would appreciate your views or clarifications. (Normally I would post this on the blog of the person who wrote it, but seeing as in this case it's Lubos... hope you don't mind me posting it here instead.)

LM wrote: "The fact that different regularizations lead to the same final results is a priori non-trivial but can be mathematically demonstrated to be inevitably true by the tools of the renormalization group."

Is this really true? E.g., I don't recall any mention of this in Peskin & Schroeder's book, even though they discuss the RG in detail.. To explain my doubts, consider the case of perturbative QCD: two different regularizations which preserve gauge invariance are dimensional reg. and the lattice formulation. In fact there are a whole lot of different possible lattice discretizations, and not all of them can be expected to produce results which agree with the physical ones obtained using dimensional regularization. E.g., there must at least be some kind of locality condition on the lattice QCD formulation that one uses, and I don't think anyone knows at present what the mildest possible locality requirement is that guarantees that the lattice formulation will produce correct results. In light of this, I don't see how it can be asserted that different regularizations (which preserve the appropriate symmetries) are always guaranteed to give the same final results...

Thomas Larsson (2007-09-19 09:16):

Why are zeta-function techniques better than simply calculating the action of the Virasoro generators on some state? It is very easy to compute [L_m, L_-m] |0>, and you can read off the central charge from this, without ever having to introduce any infinities.

What is less trivial is how to generalize this to d dimensions, where the diffeomorphism generators are labelled by vectors m = (m_0, m_1, ...) in Z^d rather than a scalar integer m in Z. In fact, I was stuck on this problem for many years (and ran out of funding in the meantime), before it was solved in a seminal paper by Rao and Moody (http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.cmp/1104254598).
Lumo (2007-09-19 07:41):

Dear Robert, if you exactly agree with Joe's derivation, why do you exactly write that this derivation is based on an "obscure analogy with minimal subtraction"?

There is nothing obscure about it and, if looked at properly, there is nothing obscure about the minimal subtraction either. One can easily prove why it works whenever it works.

I agree that one must be careful about infinite quantities but we seem to disagree what it means to be careful. In my picture, it means that you must carefully include them whenever they are nonzero. In the polymer LQG string that you researched, for example, they are very careful to throw all these important terms arising as infinities away, which is wrong, and your work is an interpolation between the correct result and the wrong result, which is thus also wrong, at least partially. ;-)

I disagree that your "nonserious" comment is not serious. It is absolutely serious. Don't try to erase this comment because of it. The comment that you call "nonserious" is the standard insight - certainly taught in QFT courses at most good graduate schools - that power law divergences are zero in dim reg. In the case of the log divergence it is still true as long as you consistently extract the finite part by taking correct limits of the integral.

Anonymous (2007-09-19 03:42):

When physicists proceed 'formally', it's usually explicitly stated as such.

There are many examples throughout history where this actually turns out to be wrong when done rigorously.

The interesting thing (for mathematicians) is when it turns out to be correct, as it usually means there's some hidden principle in there somewhere, and often it can lead to new and nontrivial mathematics (eg distribution theory).

-Haelfix

Robert (2007-09-18 21:55):

My final comment for tonight: For those readers who did not get this from my comments above: I completely agree with Joe's derivation of including a regularisation and imposing Weyl invariance. Do not try to convince me it is correct. It is.

My point about Sabine's calculation was that you can of course (and nobody I believe doubts this) produce non-sense if you are not careful about infinite quantities. Once you regulate, the error is obvious.

My final remark (and this is not serious, thus I will delete any comments referring to it) is that there is a shorter version of Sabine's argument which goes: "int dx/x is always zero in dimensional regularisation" (this is how I learned to actually apply dim reg from a particle phenomenologist: bring your integrals to the form finite + int dx/x and set the second term to zero).
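[Editorial aside, not part of the original thread: the dim reg rule Robert quotes can be made explicit in one line. Splitting the scaleless integral,

$$\int_0^\infty x^{s-1}\,dx = \int_0^1 x^{s-1}\,dx + \int_1^\infty x^{s-1}\,dx = \frac{1}{s} - \frac{1}{s} = 0,$$

the first piece converges for Re(s) > 0, the second for Re(s) < 0, each is defined for all other s by analytic continuation, and their sum vanishes identically.]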
Lumo (2007-09-18 21:43):

Let me say more physically what she actually did. In order to calculate a convergent integral in the momentum space (x), she wrote it as a difference of two divergent ones. That would be perfectly compatible with physics and nothing wrong could follow from it. The error only occurs when she rescales the "x" by a factor of 1/2 or 2 in the two terms. This is equivalent to confusing what her cutoff is - by a factor of two up or down. Because her integral is logarithmically divergent, it is a standard example of a running coupling. So she has effectively added "g(2.lambda) - g(lambda/2)" - the difference of gauge couplings at two different scales, pretending that it is zero. Of course, it is not zero: this is exactly the way running couplings arise.

An experienced physicist would never make this error - using inconsistent cutoffs for different contributions in the same expression. Hers is just a physics error, if we interpret it as a physics calculation. One can't say that her calculation is analogous to the correct calculations such as Joe's subtractions of the vacuum energy, even though it seems that this is precisely what you're saying.

There is a very clear a priori difference between correct and wrong calculations: correct ones have no physical errors of this or other kinds.
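[Editorial aside, not part of the original thread: the arithmetic behind the "inconsistent cutoffs" point. For a logarithmically divergent quantity $g(\Lambda) = \int_\mu^\Lambda dx/x = \ln(\Lambda/\mu)$, subtracting the "same" divergence with the cutoff rescaled leaves a finite remainder,

$$g(2\Lambda) - g(\Lambda/2) = \ln\frac{2\Lambda}{\mu} - \ln\frac{\Lambda}{2\mu} = \ln 4,$$

which is precisely a coupling evaluated at two different scales, as described above.]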
Lumo (2007-09-18 21:36):

Dear Robert, I disagree that one can only trust a theory if infinities never occur. A particular regularization that replaces infinities by finite numbers as the intermediate results is just a mathematical trick, but the actual physical result is independent of all details of the regularization, which really means that it directly follows from a correct calculation inside the theory that contains these infinities.

In other words, you only need the Lagrangian of standard QCD (one that leads to divergent Feynman diagrams) plus correct physical rules that constrain/dictate how to deal with infinities to get the right QCD predictions. You don't need any theory that is free of infinities. Such a theory is just a psychological help if one feels uncertain.

I agree with you that one should be able to decide whether an argument is correct before the result is compared with another one. And indeed, it is possible. This is what this discussion is about. You argue that it is impossible to decide whether an argument or calculation is correct as long as it started with an infinite expression, and others are telling you that it is possible.

If you rederive the same physics in what you call the "LQG string", why do you talk about the "LQG string" as opposed to just a "string"? Can't you reformulate your argument in normal physics as opposed to one of the kinds of LQG physics?

Sabine's calculation you linked to is manifestly wrong because she doubles one of the infinities in order to subtract them and get a wrong finite part. There was no symmetry principle that would constrain the right result in her calculation. The original integral was perfectly convergent and she just added (2-1) times infinity (by rescaling the cutoff by a factor of 2 in one term), pretending that 2-1=0. I don't quite know why you think that I am prone to such arguments. ;-) Maybe Sabine is but I am not.

She didn't make any proper analysis of counterterms, any proper analysis of any symmetries, and she didn't make any analytical continuation of anything to a convergent region either. Why do you think it's analogous to a valid calculation?

If you mentioned it because of the relationship between 1+2+3+... and 1-2+3-4+..., the derived relationship between them may remind you of Sabine's wrong calculation. But it is not analogous. These rescalings and alternating sums can be calculated by the zeta function regularization that allows me to make these arguments, adding subseries and rescaling them.

For example, the correct sum for antiperiodic fields, 1/2 + 3/2 + 5/2 + ..., can also be calculated by taking the normal sum 1+2+3+... and subtracting a multiple of it from itself.

So if the zeta-function reg gives a Weyl-invariant value of the alternating sum, it also gives the right value of the normal sum as well as the shifted Neveu-Schwarz sum and others.
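[Editorial aside, not part of the original thread: the zeta-regularized values being invoked, written out. Assigning $1+2+3+\dots \to \zeta(-1) = -\tfrac{1}{12}$, the subtraction trick for the antiperiodic (Neveu-Schwarz) sum reads

$$\tfrac12 + \tfrac32 + \tfrac52 + \dots = \tfrac12(1+3+5+\dots) = \tfrac12\left[(1+2+3+\dots) - 2(1+2+3+\dots)\right] = -\tfrac12\,\zeta(-1) = +\tfrac{1}{24},$$

since the even terms 2+4+6+... are twice the full sum.]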
Robert (2007-09-18 21:16):

By "LQG string" I meant our version where we (in a slightly mathematically more careful language) re-derive the usual central charge (same content, different formalism) rather than the polymer version (different content of which you know I do not approve).
Robert (2007-09-18 21:11):

All I am saying is you should have a way (fine if done retroactively) to treat infinities without them actually occurring. And if you do that by adding an epsilon dependent counter term (that diverges by itself when you take epsilon to 0) that's fine with me. As long as you can physically justify it.

Otherwise you are prone to arguments like http://backreaction.blogspot.com/2007/08/after-work-chill-out.html

And sorry, "an argument is correct if it gives the correct result" is not good enough. I would like to have a way to decide if an argument is valid before I know the answer from somewhere.
Lumo:

Dear Robert, concerning your comment, I understood pretty well that you wanted to define the whole theory for complex unphysical values of "s".

That's exactly why I pre-emptively wrote that it is wrong to try to define the whole theory for wrong values of "s", just like it is wrong to define a theory in a complex dimension "d" in dimreg. Such a theory probably doesn't exist, especially not in the dimreg case.

But you don't need the full theory in 3.98+0.2i spacetime dimensions in order to prove that dimreg preserves gauge invariance, do you? In the same way, you don't need to define the operator algebra in a CFT for complex values of "s" or something like that.

I don't understand how to combine this discussion with the "LQG version of a string". The texts I wrote above were trying to help to clarify how the quantities actually behave in correct physics, while LQG is a supreme example of how the divergences and other things are treated physically incorrectly.

Of course the things I write are incompatible with the LQG quantization. But the reason is that the LQG quantization is wrong while e.g. Joe's arguments are correct. Your conclusion that physics is ambiguous is not a correct conclusion.
Lumo (2007-09-18 20:53):

More generally about your comments, Robert.

I think that it is entirely wrong to say "this argument is dodgy blah blah blah" (in the context of the vacuum energy subtraction) because the argument is transparent and rigorous when looked at properly. Both of them, in fact.

Also, I disagree with your general statement that an infinity means that we have asked a wrong question. Only IR divergences are about wrong questions. UV divergences are about a theory being effective. But even QCD that is UV finite gives UV divergences - they're responsible e.g. for the running. There's no way to ask a better question about the exact QCD theory that we know and love that would remove the infinity.

QCD also falsifies your statement that "the integral over all p is unphysical". It's not unphysical. QCD is well-defined at arbitrarily high values of "p" but it still requires one to deal with and subtract the infinities properly.

Sorry to say, but the comments that physicists are always expected to say "we're dodgy, everything is unreliable, we need experiments" just mean that you don't quite understand the technology. Your comments are Woit-Lite comments. In each case, there is a completely well-defined answer to the questions whether a particular symmetry constrains the terms or not, whether a given regularization preserves the symmetry or not, and consequently, whether a given regularization gives a correct result or not. There is no ambiguity here whatsoever and the examples listed are guaranteed to give the right results.
Robert (2007-09-18 20:47):

Lubos,

you misunderstand me. I have no doubt that in field theory calculations where for example you want to compute tr(log(O)) for some operator O, as this gives you the 1 loop effective action, zeta function regularisation of log(O) works as well as any other regularisation (and often nicer as it preserves more symmetries than more ad hoc versions).

What I am looking for is a version where you not only reinterpret n as 1/n^s for s=-1 once you encounter an obviously divergent expression but start out with something that includes s from the beginning such that for say Re(s)>1 everything is finite at all stages and in the end you can take s->-1 analytically. Can you come up with (s dependent) definitions of a_n and their commutation relations or L_n such that the commutator of L_n's (which is something you calculate rather than define) gives the expression including s?

BTW, in the LQG version of the string, the correct constant appears as Tr([A_2,B_2]) where A and B are generators of diffeomorphisms and the subscript 2 refers to

A_2 = (A + JAJ)/2

where J multiplies positive modes by i and negative modes by -i. Thus it's the 'beta'-part in the language of Bogoliubov transformations. Needless to mention, this expression is in fact finite even though there is a trace in an infinite dimensional Hilbert space, as it can be shown that A_2 is a Hilbert-Schmidt operator (that is, the product of two such operators has a finite trace). Of course you need an infinite dimensional space for a commutator to have a non-vanishing trace.
Lumo (2007-09-18 20:28):

Dear robert, I am somewhat confused by your skepticism. A similar comment to yours by ori - I suppose it could even be Ori Ganor - appeared on my blog.

Why am I confused? Because I think that Joe's argument is, at the level of physics, a rigorous argument. Let me start with the vacuum energy subtraction.

We require Weyl invariance of the physical quantities. So the total zero-point function must vanish. It is clearly the case because such a result is dimensionful and any dimensionful quantity has a scale and breaks scale invariance.

So one exactly needs to add a counterterm to have the total vacuum energy vanish and this counterterm thus exactly has the role of killing the 1/epsilon^2 term. Joe has a lot of detailed extra factors of length etc. in his formulae to make it really transparent how the terms depend on the length. This makes the mathematical essence of the regularization more convoluted than it is but it should make the physical interpretation much more unambiguous.

Now the zeta function.

You ask about the "hope" that physics is analytical in complex "s". I don't know why you call it a hope. It is an easily demonstrable fact that is, as you correctly hint, analogous to the case of dim reg. Just substitute a complex "s" and calculate what the result is. You only get nice functions so of course the result is locally holomorphic in "s".

Just like in the case of dimreg, one doesn't have to have an interpretation of complex values of "s". The only thing we call "physics for complex s" are the actual formulae and their results and they are clearly holomorphic.

Beisert and Tseytlin (http://scholar.google.com/scholar?q=beisert+tseytlin+zeta) have checked a highly nontrivial zeta-function regularization of some AdS/CFT spinning calculation up to four loops. That's where they argued to understand the three-loop discrepancy as an order of limits issue.

See also a 600+ citation paper by Hawking (http://scholar.google.com/scholar?q=hawking+zeta) who checks curved spaces in all dimensions etc. These regularizations work and it's no…
Robert (2007-09-18 20:08):

For those readers who don't have Joe's book at hand, let me reproduce his argument: In the cut-off version, epsilon is in fact dimensionful, and a constant, n independent term would as well be the consequence of a world sheet cosmological constant. Thus the 1/epsilon^2 is in fact a renormalisation of the world-sheet cosmological constant. This would be in conflict with Weyl invariance and thus one has to add a counter term which makes it vanish.

This is what I should have written instead of calling the argument "obscure".

This leaves me still looking for a physical justification for the introduction of s in the zeta regularisation and the hope that physics is actually analytic in s. Maybe this could be related to dimensional regularisation on the world sheet?

Joe (2007-09-18 19:06):

In chapter 1 of my book, eq. 1.3.34, I derive the `correct' value of this infinite sum by the requirement that one cancel the Weyl anomaly introduced by the regulator by a local counterterm; this fixes the finite value completely.

At various points later in the book (see index item `normal ordering constants') I derive the constant by a fully finite calculation that respects the Weyl symmetry throughout.
CS1500 Algorithms and Data Structures for Engineering, FALL 2012
HW2 Gradient Descent
Implement the Gradient Descent algorithm for finding the minimum of a polynomial function of one variable f(x). Your program has to:
• Allow the user to define the function: read the degree and polynomial coefficients from a file (in any readable format, we suggest this format). Ask the user for the filename. In order to avoid using arrays, you can define the 5 coefficients (assume maximum degree 4) as global variables, that is, outside any function. Alternatively, you can use an array of coefficients, but we won't have a lecture on arrays until next week.
• Implement a C++ function double userfunction(double) that computes f(x) for any real numbers x.
• Implement a C++ function double userfunction_diff(double) that computes the first derivative f'(x) on real arguments x.
• Implement the Gradient descent loop for finding the minimum value of f using repeated calls to the two functions above, starting with an initial guessed argument x. Make sure the termination
condition causes the loop to finish eventually.
• Try several learning rates λ, see what works best.
• EXTRA CREDIT: You can also try to vary λ across loop iterations: start with a larger value, and decrease it when x approaches the optimum.
• EXTRA CREDIT: Try to identify the cases where, from the starting x, following gradient descent leads to minus infinity.
Try your program with the following functions (other functions could be used by TA for testing):
f(x)= (x-2)^2+1, start with x=3
f(x) = 5x^3 - 16.25x^2 - 205x +6.57, start with x=0 , use λ=0.002
A description of the complete algorithm can be found here; however lecture notes are sufficient for this assignment. As usual for homeworks, you have to write the pseudocode and submit it together
with your code.
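For illustration, here is a minimal sketch of the descent loop in Python (the assignment itself must be written in C++, and all names below are placeholders rather than anything required by the assignment):

    def f(x):
        # first test function from the assignment: f(x) = (x - 2)^2 + 1
        return (x - 2)**2 + 1

    def f_prime(x):
        # its first derivative: f'(x) = 2(x - 2)
        return 2 * (x - 2)

    def gradient_descent(x, lam=0.1, tol=1e-8, max_steps=100000):
        # step downhill until the slope is (almost) zero, or give up
        for _ in range(max_steps):
            g = f_prime(x)
            if abs(g) < tol:        # termination condition
                break
            x = x - lam * g         # the gradient descent update
        return x

    print(gradient_descent(3.0))    # converges to x = 2, the minimum

The same loop with lam = 0.002 and a start of x = 0 handles the cubic example; a learning rate that is too large can overshoot the minimum or diverge, which is the point of the λ experiments above.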
EXTRA CREDIT: Implement Gradient Descent for polynomials up to degree 4 with two variables f(x[1],x[2]).
EXTRA CREDIT: Implement Gradient Descent for other functions (not polynomials) of two variables, like Radial-Basis Functions.
ABSTRACT. We construct a hierarchy of spectral sequences for a filtered complex under a left-exact functor. As applications we prove (1) the existence of a Leray spectral sequence for de Rham
cohomology, (2) the equivalence of this sequence with the “usual” Leray spectral sequence under the comparison isomorphism and (3) the isomorphism of the Bloch-Ogus spectral sequence with the Leray
spectral sequence for the morphism from the fine site to the Zariski site.
Systems of Non-Linear Equations:
Graphical Considerations (page 2 of 6)
Suppose you have the following:
• Solve the system by graphing:
y = x^2
y = 8 – x^2
I can graph each of these equations separately:
y = x^2
y = 8 – x^2
...and each point on each graph is a solution to that graph's equation.
Now look at the graph of the system:
y = x^2
y = 8 – x^2
A solution to the system is any point that is a solution for both equations. In other words, a solution point for this system is any point that is on both graphs. In other words:
"SOLUTIONS" FOR SYSTEMS ARE INTERSECTIONS OF THE LINES
Then, graphically, the solutions for this system are the red-highlighted points at right:
That is, the solutions to this system are the points (–2, 4) and (2, 4).
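(A quick algebraic check, added here for reference: since both equations give y, you can set the two right-hand sides equal to each other:

x^2 = 8 – x^2
2x^2 = 8
x^2 = 4
x = –2 or x = 2

...and then y = x^2 = 4 either way, which confirms the two intersection points (–2, 4) and (2, 4).)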
So when you're trying to solve a system of equations, you're trying to find the coordinates of the intersection points.
The system shown above has two solutions, because the graph shows two intersection points. A system can have one solution:
...lots of solutions:
...or no solutions at all:
(In this last situation, where there was no solution, the system of equations is said to be "inconsistent".)
When you look at a graph, you can only guess at an approximation to the solution. Unless the solution points are nice neat numbers (and unless you happen to know this in advance), you can't get
the solution from the picture. For instance, you can't tell what the solution to the system graphed at right might be:
...because you're having to guess from a picture. As it happens, the solution is (x, y) = (1 3/7, 9/14), but you would have no possible way of knowing that from this picture.
Advisory: Your text will almost certainly have you do some "solve by graphing" exercises. You may safely assume for these exercises that answers are nice and neat, because the solutions must be if
you are to be able to have a chance at guessing the solutions from a picture.
This "solving by graphing" can be useful, in that it helps you get an idea in picture form of what is going on when solving systems. But it can be misleading, too, in that it implies that all
solutions will be "neat" ones, when most solutions are actually rather messy.
Cite this article as: Stapel, Elizabeth. "Systems of Non-Linear Equations: Graphical Considerations." Purplemath.
Available from http://www.purplemath.com/modules/syseqgen2.htm.
Python CSV Blank Cells Read as strings
Original poster:
Hi there,
My problem involves reading in and working with an excel file that contains blank cells randomly throughout the spreadsheet. The reason there are blank cells is because I have expression data for
various cell types, but for some of the cell types the expression data was not available. My program works great (finding standard deviations, averages, etc.) when I fill the empty cells with 0, but
when I leave them blank I get the following error:
Resistance.append(map(float, row[4:]))
ValueError: could not convert string to float:
Basically when I try to append the data to a new array and convert it to floats I get the error because of the blank cells. If I don't do the float function, I get errors everywhere in the program
because I can't do the Stdev, for example I will have the following error:
withinA1 = numpy.std(a1)
File "C:\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 2433, in std
return _wrapit(a, 'std', axis, dtype, out, ddof)
File "C:\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 37, in _wrapit
result = getattr(asarray(obj),method)(*args, **kwds)
TypeError: cannot perform reduce with flexible type
How can I read through and tell python that the empty cells should just be skipped over? I've tried:
    for label in Reader:
        while i < len(label):
            # if either the item in col1 or col2 is empty remove both of them
            if label[i] == '':
                del label[i]
            # otherwise, increment the index
            i += 1
        if len(label) == 0 or label[j] == '':
            Resistance.append(map(float, label[4:]))
        i += 1

and

    for row in Reader:
        if len(row) == 0 or row[i] == '':
            Resistance.append(map(float, row[4:]))
To no avail. Any help would be greatly appreciated, thank you.
Respondent:
You could insert 0.0 when there is no data:

    Resistance.append(tuple((float(x) if x.strip() else 0.0) for x in row[4:]))
Original poster:
That doesn't work because adding 0 changes the standard deviation
Respondent:
Well, here is something which should always work
    def to_float(sequence):
        """Generate a sequence of floats by converting every item in
        a given sequence to float, and ignoring failing conversions"""
        for x in sequence:
            try:
                yield float(x)
            except ValueError:
                pass

Now if you want to append all the values in columns 4, 5, ... for all rows,
you can write a generator

    def sheet_values(reader):
        """Generate the cell values that we want to convert and append to Resistance"""
        for row in reader:
            for x in row[4:]:
                yield x

Finally, here is how to append all the values

    Resistance.extend(to_float(sheet_values(Reader)))

Perhaps you could describe what this variable Resistance is. Is it a list, a list of lists? What is your expected content for Resistance?
Original poster:
Thank you for your help.
The code does indeed work; the only problem is that now it is one large array, and I need to be able to distinguish the original rows from one another so that I may calculate standard deviations as I see fit. Sorry if that was not clear.
Original poster:
Yes, Resistance is a list of lists. My raw data is in the form of an excel spreadsheet. I have A1, A2, B1..B6, C1-C15, etc... My goal is to calculate a running standard deviation of each column (index) from A1 to A2 - so STDEV of A1[0] and A2[0], A1[1] and A2[1], etc. - for each of the subclasses. I will then find the average standard deviation within each subclass.
Respondent:
The only thing you need to do is determine which lists you want to append to Resistance. For example if you want to append the values from a column numbered k, you can write

    def column_values(rows, k):
        for row in rows:
            yield row[k]

    rows = list(Reader)  # convert to list in case Reader is a generator.
    Resistance.append(list(to_float(column_values(rows, 0))))
Original poster:
Great, I guess I'm more of a beginner than I thought.
I am now getting this error:
ValueError: setting an array element with a sequence.
and I believe it is because, since we took out some zeros, the arrays are of different sizes and the stdev indexing gets thrown off. Is there a way to tell python to completely ignore an index position, rather than treating it as if it were never there?
So, having something like:
[[[-1.0, -1.38, na, 0.24, 0.01, 0.47, -0.38, -0.64, -0.47, -0.71, 0.79, 1.61, 0.88, -0.99, -0.46, -0.23, -0.23, -1.0, -0.01, -0.52, -0.58, -1.17, 1.3, -1.08, -0.53, -0.75, 0.8, -0.23, -0.19, -0.83,
-0.43, -0.07, 1.84, -0.4, -0.54, -0.4, 1.82, -0.61, 0.17]], [[-0.89, -0.59, -0.27, -0.63, 0.14, 1.35, -0.28, -0.51, -0.36, -0.52, 0.1, 1.58, -0.78, -0.86, -0.92, -0.41, -0.22, -1.0, -0.24, -0.2,
-0.15, -0.73, 0.05, -0.68, -0.52, -0.27, 1.36, -0.44, -0.06, -0.78, -0.02, -0.22, 2.04, -0.16, -0.45, -0.34, 1.33, -0.71, 1.14]]]
I put an na up there as an example just so that the boxes are the same dimension
Respondent:
You should have a look at numpy masked arrays. I cannot help you much here, because I never tried them.
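For example, along those lines (an untested editorial sketch, not from the thread: it assumes rows = list(Reader) from the earlier post, and padding the ragged rows with nan is my own choice):

    import numpy as np

    # keep only the numeric columns, turning blank cells into nan
    numeric = [[float(x) if x.strip() else np.nan for x in row[4:]] for row in rows]
    width = max(len(r) for r in numeric)
    data = np.array([r + [np.nan] * (width - len(r)) for r in numeric])

    masked = np.ma.masked_invalid(data)   # nan entries become masked
    print(masked.std(axis=0))             # per-column std dev, ignoring masked cells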
parabolas
how do you find the focus and the directrix line if all you have is the equation in parabolic form?
woah i have no idea where that p came from.
ughh i wish my teacher would do his job. i have no idea what you mean. like if i have y=(x-2)+3, i know how to get the vertex but not the focus or the directrix
wait.... what??? why not?
the x is not squared. i suppose you meant to type $y = (x - 2)^2 + 3$
now, you should know that if a parabola is in the form $y = a(x - h)^2 + k$, the vertex is given by $(h,k)$...oh, you do know that. ok, moving on.
now we want to get this parabola to look like the form i used in the post i directed you to.
the form must look like $(x - h)^2 = 4p(y - k)$
in this form, the focus is: $(h, k + p)$
and the directrix is: $y = k - p$
now, $y = (x - 2)^2 + 3$
$\Rightarrow (x - 2)^2 = y - 3$
the coefficient of y is 1, so where would the 4 come from? well, we can notice that $1 = \frac 44 = 4 \cdot \frac 14$
thus, we have $(x - 2)^2 = 1 \cdot (y - 3) = 4 \cdot \frac 14(y - 3)$
so, we have $(x - 2)^2 = 4 \cdot \frac 14(y - 3)$ ....look familiar?
omg i think i actually get it now. thank you so much!
hmm.. what do i do with the p again?
k+p, but i don't know what to plug in for p
it seems you are not reading through my posts. first of all, i never said anything was k + p. neither the focus nor the directrix is k + p. the focus is a coordinate and the directrix is a
horizontal line.
i am sorry, but i really do not see the problem you are having. i went through the algebraic manipulations and tried to make it plain. but let's try again.
$(x - {\color{red}h})^2 = 4{\color{blue}p}(y - {\color{green}k})$ ...........this is the form we want
....... $\downarrow$......... $\downarrow$....... $\downarrow$
$(x - {\color{red}2})^2 = 4 {\color{blue}\frac 14}(y - {\color{green}3})$ ............this is the form we got our equation in
now, the focus is: $(h, k + p)$
and the directrix is: $y = k - p$
can you tell me what h, p and k are?
i meant k+p as the y coordinate. and i didn't realize that p is 1/4.... here i'll try again now
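(For completeness, finishing the arithmetic where the thread leaves off: with $h = 2$, $p = \frac 14$ and $k = 3$, the focus is $(h, k + p) = \left(2, \frac{13}4\right)$ and the directrix is $y = k - p = \frac{11}4$.)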
François Levrier
Ph.D. Thesis [2000-2004]
Désordre et cohérence dans les structures du milieu interstellaire: analyse statistique, filtrage interférométrique et transfert radiatif (Disorder and coherence in the structures of the interstellar medium: statistical analysis, interferometric filtering and radiative transfer)
Supervisors: Edith Falgarone & François Viallefond
Observations of the molecular phase of the interstellar medium reveal a hierarchy of complex structures, both in density and velocity, over more than four decades. The understanding of physical
phenomena leading to this structuration and to the first stages of star formation, calls for a proper description of fields which observers do not have direct access to. It is therefore necessary to
understand the chain of physical, observational and instrumental processes providing the observer with meaningful quantities. Three aspects of this problem are considered here.
The observed structures are the result of a complex projection of three-dimensional fields onto a position-position-velocity space, from which two-dimensional maps of intensity and velocity centroids
are constructed. We establish the relationships between the statistical properties of these maps and those of the original fields. In particular, we show that the spectral index of the velocity
centroids' map is equal to that of the velocity field, in the approximation of small density fluctuations, which is discussed numerically.
Also, interferometric observations impose a filtering of spatial frequencies, which degrades the brightness distributions over the plane of the sky. In the purpose of evaluating the performances of
the forthcoming ALMA instrument, we show, using numerical simulations, that the power spectrum is the most suited tool to recover structural characteristics from the dirty maps. We furthermore
introduce a new and promising method of analysis, based on the increments of Fourier phases.
Finally, we consider the problem of interstellar line formation in the framework of a one-dimensional stochastic radiative transfer formalism. We introduce the case of correlations between the
velocity and density fields through a polytropic relation, and we come up with a generalized transfer equation. We show that the probability distribution function of line-of-sight velocities is
non-Gaussian, contrary to the uncorrelated case, which strongly suggests that these correlations should be taken into account in the interpretation of line profiles.
Complete manuscript PDF PS
Part I: Structure of the interstellar medium PDF PS
Part II: Mathematical elements and analysis tools PDF PS
Part III: Statistics of velocity centroids PDF PS
Part IV: Interferometric filtering of structures PDF PS
Part V: Radiative transfer in complex media PDF PS
Bibliography PDF PS
Defence PDF PS
Melbourne on transit
The user time element in public transport service planning
(Pt 2)
Can it be done?
Still think that substantial travel time reductions are impossible without building high-speed railways under each suburb?
The following examples show how significant travel time savings could be possible, largely with existing infrastructure.
Example 1: Inter-suburban bus trip
Consider a short bus trip where the bus runs every 40 minutes. This service level is typical for middle and outer suburban routes and even some major inner suburban routes (eg 246 on Sundays).
The trip takes ten minutes, with five minutes allowed to reach and leave the stop at either end. That adds up to a best case 20 minutes travel time, if you arrive at the stop just as the bus arrives.
The worst case, if a bus has just been missed, is 60 minutes, while the average, assuming random arrival, is 40 minutes.
This comparison shows high variability, or +/- 20 minutes from the average. The 3:1 difference between maximum and minimum trip times is entirely due to the attempt to make a short trip by randomly
arriving to catch a low frequency service.
Increasing frequency to 20 minutes reduces the average time to 30 minutes, or a 25% reduction. Variability is reduced to +/- 10 minutes, or 20 to 40 minutes. For a ten minute frequency the average
drops to 25 minutes, or an average 37% time saving. Variability drops to +/- 5, or a ratio of maximum to minimum of 1.4:1.
The above example shows that frequency has a dramatic effect on random arrival end-to-end travel times. Higher frequency also cuts variability, effectively increasing travel reliability. The
minority willing to plan around timetables can enjoy the benefits today, without any frequency increase. That’s unless they depend on connections, which are discussed next.
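To put Example 1 in general terms (a simplifying model of the figures above, not a quote from any timetable): if T is the fixed walk-plus-ride time and h is the headway, then with random arrival

best case = T, worst case = T + h, average = T + h/2

so variability is always +/- h/2 around the average. Halving the headway halves both the average waiting component and the variability, which is why the gains are steep at low frequencies and taper off around the ten minute mark.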
Example 2: Bus + train trip
Consider a suburb about 20km from the CBD. Its local buses run every 30 minutes and the trains every 20 minutes. The passenger is 5 minutes walk away from the bus stop. The bus trip (via a meandering
route) takes 15 minutes to a stop near the station. When passengers alight the bus they need to cross a busy road to the station, which has only one entrance at the far end of the platform (assume 5
min). After that the train takes 40 minutes to the city.
The best case travel time is 5 min (walk) + 0 min (wait for bus) + 15 min (bus travel) + 5 min (station access) + 0 min (wait for train) + 40 min (train travel). Or a total of 65 minutes for the 20km
The worst case travel time is 5 min (walk) + 30 min (wait for bus) + 15 min (bus travel) + 5 min (station access – assuming 2 min traffic light cycle at crossing) + 20 min (wait for train) + 40 min
(train travel). That’s a total of 115 minutes for the 20km trip.
There’s three things to note. Firstly the worst case represents an overall speed of just over 10km/h or about double walking speed. Secondly the variability is high – the worst case taking twice as
long as the best case. And, in the worst case example, the passenger is in motion for barely half the time.
Now consider the same trip above, with the following modest service improvements:
· Bus frequency upgraded from 30 to 20 minutes, to provide a harmonised connection to each train, with a consistent 6 minute connection · Bus route made more direct, to reduce travel time from 15 to
10 minutes and fund the higher frequency, but with 5 minutes added walking time · Additional station platform entrance and zebra crossing installed (to reduce station access time from 5 to 2 minutes)
The best case travel time following these improvements is as follows: 10 min (walk) + 0 min (wait for bus) + (10 min bus travel) + 2 min (station access) + 4 min (wait for train) + 40 min (train
travel). Or a total of 66 minutes for the 20km trip. The worst case following these improvements is as follows: 10 min (walk) + 20 min (wait for bus) + 10 min (bus travel) + 2 min (station access) +
4 min (wait for train) + 40 min (train travel). Or a total of 86 minutes for the 20km trip.
The difference is dramatic. The average travel time has fallen from 90 to 76 minutes, while the ‘worst case’ is nearly 30 minutes quicker. Variability also fell; from +/- 25 minutes of the mean to +/
- 10 minutes. Better connectivity and higher bus frequency contributed most to the gain. However more direct bus routing and better pedestrian access also added smaller but no less cost-effective
Example 3: Bus + Bus trip
Finally we’ll examine a cross-suburban trip involving a change between two bus routes. This is typical for journey types in which public transport has a low modal share.
I’ll use similar assumptions to the first example. Eg 5 minute walk to and from the bus and 10 minute travel time in each bus. The first route runs every 60 minutes while the one being changed to is
every 40 minutes.
The first leg involves 5 min (walk time) + 30 min (average wait) + 10 min (travel time), or a total of 45 minutes. The best case is 15 minutes, while worst case is 75 minutes. Or a variability of +/-
30 minutes.
The wait to the second bus will be anywhere between 0 and 40 minutes. Because the frequencies are unharmonised the best connections will recur every two hours. We’ll assume an average of half its
frequency, or 20 minutes.
The second leg involves 20 min (average wait) + 10 min (travel time) + 5 min walk time, or a total of 35 minutes average. But it could range from 15 to 55 minutes, or a variability of +/- 20 min from
the average.
The very shortest time that the overall trip can be made is 30 minutes, with the longest 130 minutes. The average time is 80 minutes. Because time savings and delays average out, the traveller is
unlikely to experience the extreme shortest and longest trip times. But if they do, that’s a ratio of over 4:1, or a variation of +/- 50 minutes.
While people may tolerate a higher variability for a short trip (Eg a 10 minute trip taking 20 minutes), it is probably true that tolerance declines for longer trips (especially if routine). Hence
the letters in the paper complaining about suburban trips that take an hour by public transport but only 20 minutes driving.
There’s a couple of things that can be done to reduce variability.
Firstly the passenger could forego flexibility and use a timetable. Instead of waiting an average of 30 minutes for the first bus, they wait an average 5 minutes. This reduced variability of +/- 20
minutes is solely due to the connection between the first and second bus, which is beyond the passenger’s control. Average travel time is also reduced – by 25 minutes, which is the difference between
the planned wait and the random arrival wait (ie half the frequency of the first service).
There’s also the contribution of service planning, which unlike the first response, assumes no accommodation on the part of the passenger.
Suppose the frequency of the first service was upgraded from 60 to 40 minutes. The first benefit of this is to reduce the average wait, from 30 to 20 minutes. Variability contributed by the wait for
the first service is thus reduced.
The second benefit is that it matches the frequency of the second service. Such matching does not guarantee good connections but does dramatically slash variability. Let’s look at the numbers.
The first leg involves 5 min (walk time) + 20 min (average wait) + 10 min (travel time), or a total of 35 minutes. The best case is 15 minutes, while worst case is 55 minutes. Or a variability of +/-
20 minutes.
The wait to the second bus will be anywhere between 0 and 40 minutes. Because the frequencies are now harmonised the connections will recur every 40 minutes. We’ll assume there’s been no special
planning and the wait for the second service is half its frequency, or 20 minutes.
The second leg therefore involves 20 min (average wait) + 10 min (travel time) + 5 min walk time, or a total of 35 minutes. Because the first leg has been harmonised to it the wait for it is now
constant, variability has been reduced to zero.
Add the two legs and we have an average of 70 minutes. That’s 10 minutes down on the first case of 80 minutes. But the real gain has been in reduced variability. At best it’s 50 minutes and at worst
it takes 90 minutes. This is a variability of +/- 20 minutes – well down on the earlier +/- 50 minutes. Also the ratio of maximum to minimum journey time has fallen from over 4:1 to under 2:1.
Although average travel times are still slower than many would like, the improvement made from adjusting one route from a non-harmonised 60 minutes to a harmonised 40 minutes cannot be overstated.
Again, with the earlier example one can do better. If one sacrifices flexibility and uses a timetable to catch the first service, the average travel time falls by 15 minutes (70 to 55 minutes) and
variability virtually eliminated. Secondly, if planners consider that the connection between the two services is sufficiently important to be worth adjusting timetables, the connection time could be
reduced from the 20 minutes average assumed here to 10 minutes. This contributes another 10 minutes, meaning a total average trip time of 60 minutes for those who don’t use a timetable and a reliable
45 minutes for those who do.
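For readers who want to check the averaging assumptions in Example 3, here is a small brute-force sketch in Python (my own illustration, not from any timetable; 5 minute walks, 10 minute rides, and the relative timetable offset of the second route treated as unknown and averaged over):

    WALK, RIDE = 5, 10

    def average_trip(h1, h2):
        total = count = 0
        cycle = h1 * h2                     # a whole number of both headways
        for offset in range(h2):            # relative offset of route 2's timetable
            for t in range(cycle):          # arrival instant at the first stop
                wait1 = (h1 - (t + 0.5) % h1) % h1
                arrive = t + 0.5 + wait1 + RIDE
                wait2 = (offset + 0.5 - arrive) % h2
                total += 2 * WALK + wait1 + 2 * RIDE + wait2
                count += 1
        return total / count

    print(average_trip(60, 40))   # unharmonised first leg: 80.0 minutes
    print(average_trip(40, 40))   # harmonised legs: 70.0 minutes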
Summary and Conclusion
I have demonstrated the effect of frequency on cutting journey time. It is at first dramatic, with a point of diminishing returns being reached as frequency rises to around ten minutes. Beyond that
point, unless it is needed for capacity or for very short trips, its impact drops.
Also discussed has been travel time variability. Public debate on this normally concerns train reliability, and this is especially important for those connecting to less frequent buses. However the
examples demonstrate it can be very high for bus trips, especially those involving random arrival and connections between non-harmonised services. Harmonised bus frequencies can greatly
reduce variability and make public transport more useful for trips where it’s currently weakest.
The planning approach presented here focuses most on service frequency and its harmonisation. There is less attention to infrastructure and capacity.
The former is cheap and quick, while the latter is expensive and long-term.
Both have their place in a growing city. But introducing the latter without the former means that the latter is poorly utilised and public transport's potential to fully contribute to the
overall transport effort is unrealised.
Labels: buses, co-ordination, fares, service levels, service planning, trains, trams
5 Comments:
IkaInk said...
Well thought-out posts.
I agree that passengers should be able to leave without consulting a timetable, but I disagree that frequencies should always be so high that they can assume a short wait. Which is where clock-face timetables come into play.
Setting timetables to repeat every 10, 15, 20, 30 or in some rare cases 60 minutes means that people can simply remember "oh, the bus arrives 2 minutes past the quarter hour".
@IkaInk completely agree that where frequency can't be high a clock-face pattern is the next best thing.
Especially if (i) it harmonises with other routes and (ii) the most common local trips in the area can be made without transferring.
Trying to do the latter with network design has problems because it makes routes less direct, but if it's done with urban design (eg locating jobs and shopping around railway stations/bus
interchanges) then it works.
Hence the inherent transport network superiority of centralised Sunbury, Werribee, Pakenham, Lilydale, Belgrave over fragmented Lalor, Melton, Craigieburn (proposed town centre) and Cranbourne.
Of course, it also depends on what the role of the bus network is. A bus network designed as a rail feeder service could still be of a lower frequency, but harmonised to train headways and
running on a 'pulse' timetable to ensure connections to and from train services. This would work well at both metropolitan outer termini (e.g. Cranbourne, Pakenham, Frankston) as well as the
regional cities such as Geelong, Ballarat and Bendigo with their baseline hourly headways.
But if the bus network serves other purposes such as bus-bus or bus-tram transfers, or serves activity centres away from railway stations, other measures will have to suffice, such as increased service
frequencies, straighter, more direct routes and longer stop spacing to improve travel times. Rail feeder buses would all benefit from these as well.
I think that ultimately the key consideration of your post is the tendency for interchange to be poorly coordinated due to inconsistent frequencies and pedestrian hostile interchange. To use a
local example, the connection between train and 566 at Watsonia station is mostly appalling (20 minute train, 24 minute bus). It can take me almost as long to get from home to Watsonia station (5
min walk + av. 12 min wait + 3 min bus ride + 4-5 min walk - used to be better but the traffic lights have been made pedestrian unfriendly = 25 min) as it does to get from Watsonia to the city on
the train(33 mins Watsonia to Flinders St), compared to a 4 minute drive from home to station car park (+1 min walk to the platform).
If the bus was harmonised to connect with the train (eg every 20 minutes with 6 minutes connection time), this would reduce.
However, even without interchange public transport can shoot itself in the foot. I have investigated the options in travelling to a new job in the new year, from Bundoora to Doncaster. I can
choose between a 20-30 minute drive, or the Metlink journey planner offer of an hour single seat journey on the 902 high frequency (for Melbourne) Smartbus. Whilst I am slightly tempted as I can
work on the bus where I can't by driving, 40-50% journey time by car is almost enough to convince me to buy a second car for work journeys. Clearly there are other answers to saving time related
to reducing bus journey times through better priority, vigorously promoting prepurchasing and so on even before better frequencies and interchange...
LS: The lower frequency but harmonised bus network (effectively the Transperth model) is probably the second-best option. And the only cost-effective option in a low density area.
While it's cheaper to run than a high frequency service it oddly requires a lot more planning effort with regards to route length, frequency and timed connections.
Especially where the train is a regional type frequency (eg RFR lines and even Pakenham, Cranbourne, Belgrave, Lilydale).
It also requires a Perth level of train punctuality and procedures to enforce connectivity (at least in peak direction flows, most notably train to bus in the pm peak).
I think the calculation method outlined successfully provides an assessment system that identifies 'gold standard' service levels, especially through its emphasis on random arrival.
Yet it also sufficiently separates a middle level of service (ie a Perth-style timed transfer system with reasonable train reliability) from a low level of service (ie no harmonisation and low frequencies).
Spinnerstown Math Tutor
Find a Spinnerstown Math Tutor
My knowledge of economics and mathematics stems from my master's degree in economics from Lehigh University. I specialize in micro- and macroeconomics, from an introductory level up to an advanced
level. I have master's degree work in labor economics, financial analysis and game theory.
19 Subjects: including calculus, Microsoft Excel, precalculus, statistics
...I value and respect the incredible variety of personalities that I encounter while teaching, and I greatly enjoy meeting students from various backgrounds. I understand that each student
deserves specialized consideration in order to succeed, and I happily adapt my teaching methods to best meet ...
17 Subjects: including ACT Math, SAT math, English, reading
...I am eager to help students from elementary to high school level understand the subject matter and improve grades. I will tailor the lessons according to my student's needs to achieve the best
possible results and I am enthusiastic to see the effects of our combined efforts! I am a native Turkish speaker.
8 Subjects: including algebra 2, calculus, trigonometry, algebra 1
...My favorite subjects are chemistry, physics, and any math subject, but would be willing to step outside my comfort zone as I am exposed to many math/science subjects as a chemical engineer. I
have found in my studies that to be good at anything, practice is necessary. I find running through example problems to be the most effective way of learning a subject.
10 Subjects: including calculus, geometry, algebra 2, precalculus
...As a teacher, I aim to perpetuate knowledge and inspire learning. I believe, every student, regardless of background, can improve his or her ability to read and write. My goal is to help my
students learn so that they can excel in every step of their life.Organization is the key to success.
18 Subjects: including algebra 2, special needs, study skills, discrete math | {"url":"http://www.purplemath.com/spinnerstown_pa_math_tutors.php","timestamp":"2014-04-18T21:23:24Z","content_type":null,"content_length":"23960","record_id":"<urn:uuid:ff194b3e-3efe-4025-a325-0ef5a61986c6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Envision Math
Envision Math was found to have potentially positive effects on math achievement for elementary school students.
Program Description
This program aims to help pupils develop an understanding of math concepts through problem-based instruction, small-group interaction, and visual learning, with a focus on modeling and reasoning.
Ongoing assessment and differentiated instruction are used to meet the needs of pupils at all skill levels.
Envision Math has extensive print and digital components, giving educators choices in how they teach each topic. Envision Math combines strong visual learning strategies, which make meaningful connections between known and new maths ideas for students, with breakthrough digital teaching and learning tools suited to each instructor's technological expertise.
Differentiated teaching and learning strategies let you tailor your approach to enhance pupils' learning. Program components can be adapted to any primary maths classroom around Australia and can be taught in almost any order.
The text is well organised, with a predictable structure for every topic. There are Going Digital exercises to be completed on e-Tools.
It has much more: printable pages of all exercises in the book, plus additional practice pages, tests, answers to the exercises, ideas and printable pages for centres, and other teaching resources.
In many lessons, new concepts are presented visually across the top of the spread. This is followed by Guided Practice, with problems based on the new concept. Next is Independent Practice, where all problems still relate to the new concept.
Word problems and practical application situations pervade this program, meaning that pupils must move beyond computation skills to recognise which mathematical operations to use in various situations.
In each grade, lessons are split across 20 topical units (just 16 for kindergarten), with numerous lessons in each unit. These lessons develop each topic throughout a unit rather than mixing topics as is done in Saxon Math. Each unit starts with a brief set of review problems and ends with a test, all included in the student book. While the amount of work in the student text should be adequate for most students, there are additional worksheets that can be printed from the teacher's edition CD-ROM.
East Chicago Algebra Tutor
...I have several years of experience as a teacher which required effective public speaking for many hundreds of hours. As the manager of a business for several years, both marketing and employee
management required effective public speaking. Several years of group tutoring and many years of group training have given me additional experience as a speaker.
49 Subjects: including algebra 2, algebra 1, reading, English
...Thus I bring first hand knowledge to your history studies. I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational
Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes.
41 Subjects: including algebra 2, algebra 1, English, chemistry
...Since graduating with a dissertation from the University of Chicago, I have started a life-long learning network and I teach students of all ages. I believe that World History, Art History, and
Archaeology are important because they tell the story of our world. World History develops critical thinking and asks students to think through issues unique to each time and place.
10 Subjects: including algebra 1, algebra 2, calculus, trigonometry
I am a teacher's assistant who is looking to tutor students part-time. I have been a teacher's assistant for 1st and 3rd grade for the past 2 years. I am currently a student at Purdue University
Calumet, and I am majoring in math education.
9 Subjects: including algebra 2, calculus, precalculus, elementary (k-6th)
...In addition to a Bachelor of Arts in Spanish and Great Books from the University of Notre Dame, I studied pre-med in a post-baccalaureate program at Northwestern University and earned a master's degree in counseling psychology from Lewis University. Though I do not have the same competency as with...
20 Subjects: including algebra 1, Spanish, writing, English | {"url":"http://www.purplemath.com/East_Chicago_Algebra_tutors.php","timestamp":"2014-04-17T13:40:44Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:0fa999ca-7d31-4df6-8f77-ce89e481cb38>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunday evening, stupid games…
April 1, 2012
By arthur charpentier
This evening, while I was about to wash the dishes, I heard my elders starting a game (call them Him and Her):
Him: "
I have picked - in my head - a number, lower than 50. Try to guess...
Her: "
No way, too difficult...
Him: "
You can try five different numbers...
Her: "
... um ... No, no way...
Me: "
Wait... each time we suggest a number, you tell us if yours is either above, or below ?
You can see me coming clearly, can't you ? Using a simple subdivision rule, we have a fast algorithm (and indeed, if I have to choose between washing the dishes and playing with the kids...)
Him: "
um.... ok
Her: "
Daddy, are you sure we will win ?
Me: "
Well... I cannot promise that we will win... but I am rather sure
that we will win quite frequently: more gains than losses...
" (I guess).
Her: "
Great ! I am playing with daddy...
Him: "
um.... wait, is it one of your tricks, again ? I don't want to play anymore... Do you want to see the books we've chosen at the library ?
Her: "
Me: "
What ? no one wants to see if I was right ? that we have indeed more than 50% chances to win...
Him and her: "
No !
The point of that story ? If we listen to kids, science will not go forward, trust me. But I am curious... I want to see if my intuition was correct. Actually, the intuition was based on the fact that 50 sits between 2^5 = 32 and 2^6 = 64:
> 2^5
[1] 32
> 2^6
[1] 64
To be sure, let us substitute my laptop for my son... to pick up numbers, randomly (yes, sometimes I feel like I am Doctor Tenma). The algorithm is simple: there are bounds, and at each step I should suggest the middle of the interval. If the middle is not an integer, I suggest either the integer below or the integer above (with equal probabilities).
m=(a+b)/2                                        # midpoint of the current bounds a, b (names assumed)
if(m %% 1 == 0){m=m}                             # integer midpoint: keep it
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}   # otherwise round up or down at random
The following function runs 10,000 simulations, and tells us how many times, out of 5 numbers suggested, we got the good one.
for(simul in 1:NS){
for(i in 1:tries){
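Only the two loop headers survive here; the following sketch reconstructs the rest of the simulation, consistent with the midpoint rule and the output below (everything beyond NS and tries is an assumption on my part, not the original code):
winning=function(NS=10000,tries=5,N=50){
win=rep(NA,NS)
for(simul in 1:NS){
x=sample(1:N,size=1)                           # the number picked "in his head"
a=1; b=N; found=FALSE
for(i in 1:tries){
m=(a+b)/2                                      # midpoint rule from above
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}
if(m==x){found=TRUE; break}
if(m<x){a=m+1} else {b=m-1}                    # shrink the bounds
}
win[simul]=found
}
mean(win)                                      # proportion of games won
}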
It looks like the probability that we got the good number is higher than 60%,
> winning()
[1] 0.61801
Which is not bad. And if the upper limit was not 50, but something else, the probability of winning would have been the following.
Actually, after losing a couple of times, I am rather sure that my son would have told us that we can suggest only four numbers. In that case, the probability would have been close to 30%, as shown on the blue curve below (where only four numbers can be suggested).
Anyway, as intuited, with five possible suggestions, we were quite likely to win frequently. Actually with a probability of almost 2 out of 3... and 1 out of 3 if my son had decided to pick a number between 1 and 100, or with only 4 possible suggestions... Those are quite large actually, when we think about it. It reminds me of that story I mentioned a few months ago... Anyway, calculating probabilities is nice, but I still have to wash the dishes...
Seyit Temir
Harran University, Faculty of Arts and Sciences, Department of Mathematics, 63200 Sanliurfa, Turkey
Abstract: Direct product of two amenable groups is considered. If one of them is $\sum\limits_{i=1}^\infty Z_2$, then $\sum\limits_{i=1}^\infty Z_2\times G$ is amenable. Subadditive processes on such
a direct product are studied.
Keywords: amenable groups; direct product; subadditive process
Classification (MSC2000): 28D05; 22D40
From MozillaWiki
Project Summary
The ability for Web authors to make dynamic pages is important and was requested by many users. In the case of MathML it allows such things as writing interactive pages, e.g. a math quiz or similar.
For the simplest interactions, the MathML specification provides the maction element, which is partially implemented in Mozilla's MathML engine.
However, Mozilla's maction implementation has several bugs/spec violations, and some features requested by users, such as the tooltip actiontype, are still not implemented. Similarly, the rendering of
MathML formulas created by Javascript is not always updated correctly. This is problematic, for example to render the MathML code generated by the MathJax library.
In this project I intend to do several things. Firstly, I intend to develop full maction support (for example, to implement the tooltip actiontype) and to fix known maction bugs. I intend to write
tests for maction and dynamic MathML rendering in general to reveal previously unknown bugs and make the implementation more reliable. Also Mozilla's MathML demo pages are incomplete and out of date,
so I want to update them to follow our current maction implementation and to use MathML syntax instead of HTML+CSS.
Finally, there are some things in MathML REC that are not clear and cases when Mozilla's implementation differs from the REC, e.g. bug 749044. During this summer I plan to discuss these issues with
MathML WG, fix them in Mozilla's MathML engine and possibly implement more features from the MathML spec.
Project Goals
• To implement the "tooltip" actiontype. "Tooltip" actiontype provides the ability to display a tooltip when the cursor is pointed over the expression.
[bug 544001]
• To improve implementation of maction, fix known maction bugs. [bug 734729][bug 748779][bug 749044][tracking bug 544036]
• To improve handling of attribute/child changes of MathML frames. [bug 734729][bug 750169][tracking bug 744783]
• To verify that Javascript works well with MathML, report and fix related bugs.
• To rewrite MathML demo pages using MathML REC syntax (this should be done after implementing "tooltip"). [bug 700440][bug 749103]
[MathML demo pages tracking bug 585142]
• To write reftests to compare dynamically generated MathML / static MathML.
Work done before the coding period
• maction statusline syntax was changed to follow MathML specification. [Gecko-specific notes][bug 729924]
• Fixed the bug related to maction selection attribute. According to MathML spec, it should be taken into account only with actiontype toggle. [bug 739556]
• Detected and fixed the bug when dynamic change of the maction actiontype attribute didn't work. [bug 745535]
Weekly updates
21/05 - 27/05
This week I've been doing two things. I started to work on MathML tooltip implementation and started to work on bug 749044.
It seems that there are 3 options for tooltip implementation:
• First option is implementing it using tooltip implementation in browser.js (and also mirrored here). The problem with this option may be that this part of the code is marked as temporary, and I
don't know if I can modify it. Now I am trying to contact UI team to find it out.
• Second option is about modifying mathml.css file, similar to this implementation (maction is in the bottom). The advantage of this implementation is that it allows to implement both text and
MathML messages, but the disadvantage is that we will get the message in the top right corner instead of placing it near the object.
• Finally, the third option is implementing tooltip using XUL tooltip implementation. I haven't quite considered this option, so I just leave it for now.
Along with the tooltip, I am also working with bug 749044, which is a regression from bug 739556, where I fixed an issue related to maction selection attribute. By MathML REC it shouldn't be taken
into account when actiontype="statusline" or "tooltip". This was done in the patch, but the behavior of the unknown actiontype was implemented improperly. There was some discussion in the MathML
list, and though there was no answer from MathML WG yet, we think that selection attribute should be considered by default.
28/05 - 03/06
This week I was continuing my work on the MathML tooltip implementation and on bug 749044.
Investigation of the HTML tooltip implementation revealed that MathML tooltip can be implemented in browser.js simply by attaching the <math> element to the <tooltip> element that is shown. This will make the code in browser.js a bit more complex, but I hope that UI reviewers will accept the code. Otherwise, I'll have to switch to other options, listed in my previous update.
I uploaded the patch to bug 749044 and now I'm waiting for a review. List of changes:
• Selection attribute on <maction> is now considered by default, i.e. when actiontype is unknown.
• MathML error is now generated when there is no actiontype attribute in <maction> element.
• MathML error is now generated when <maction> selection attribute is out of range (except in cases when actiontype attribute allows to ignore selection, e.g. statusline or tooltip).
• Fixed a bug when <maction> element could never throw a MathML error.
Also I updated my project summary, to give out some more information about the project. Next week I will continue to work on tooltip implementation, and if everything goes well, next week I could
also work on MathML tooltip demo pages as well.
04/06 - 10/06
Another GSoC week has passed. During this week I was finishing work on previously mentioned bugs and starting to write Mochitest tests and demo pages for <maction> element. Bug 749044 was already
pushed into the tree, but unfortunately some Android tests have failed. However, it needs only a small fix, which I will upload shortly. (update: patch is pushed into m-c)
I am sorry to announce that after an attempt to implement MathML tooltip we agreed to abandon this enhancement due to increased code complexity and security issues. For now only <mtext> tooltip will
be supported, patch is already uploaded and is waiting for a review. I want to mention that MathML tooltip is not a MathML REC requirement:
QUOTE: "The renderer displays the first child. When the pointer pauses over the expression for a long enough delay time, the renderer displays a rendering of the message in a pop-up "tooltip" box
near the expression. Many systems may limit the popup to be text, so the second child should be an mtext element in most circumstances. For non-mtext messages, renderers may provide a natural
language translation of the markup if full MathML rendering is not practical, but this is not required." [Presentation Markup]
Next week I am going to finish my work on bug 544001, write Mochitest tests and update MathML demo pages according to our current syntax. | {"url":"https://wiki.mozilla.org/SummerOfCode/2012/DynamicMathML","timestamp":"2014-04-21T04:36:27Z","content_type":null,"content_length":"26956","record_id":"<urn:uuid:a830ad45-853d-4d4d-ba0f-81401eeed817>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symmetry energy, neutron star crust, neutron skin thickness, and the r-mode instability
Isaac Vidana
Univ. of Coimbra
We analyze different aspects of the nuclear symmetry energy by using both microscopic and phenomenological approaches to the nuclear equation of state. Microscopic approaches include Brueckner-Hartree-Fock, the variational form due to Akmal, Pandharipande, and Ravenhall, and a parametrization of recent Auxiliary Field Diffusion Monte Carlo calculations. For phenomenological approaches we use Skyrme forces and relativistic mean field models. Specifically, we study correlations of the symmetry energy parameters, the slope L and the curvature K_symm, with the neutron skin thickness (in neutron-rich isotopes) and the crust-core transition point (in neutron stars). We confirm that there is an inverse correlation between the neutron skin thickness and the transition density. The role of L in the r-mode instability of neutron stars is also studied. The r-mode instability region is smaller for models which give larger values of L. Using the measured spin frequency and estimated core temperature of the pulsar in the low-mass X-ray binary 4U 1608-52, we show that observational data seem to favor L values larger than ~50 MeV.
Calculus Videos
In this video I just show what it would be like to have an interactive session online with me or to have me make a video lesson on a topic of your choice.
Video describing a basic approach to solving Related Rates with an example.
A video describing the steps of logarithmic differentiation with an example.
Review how to take the derivative of logarithmic functions. The sample problem is taken from IB Math SL, Section 23C. Recorded November 2013.
You can see how I use skype to interact in real-time and the student can see everything that is being explained. | {"url":"http://www.wyzant.com/resources/video/calculus","timestamp":"2014-04-17T22:19:42Z","content_type":null,"content_length":"37118","record_id":"<urn:uuid:2f4098ee-38db-4786-bbd6-c721022fd0f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Winning the Netflix Prize: A Summary
How was the Netflix Prize won? I went through a lot of the Netflix Prize papers a couple years ago, so I’ll try to give an overview of the techniques that went into the winning solution here.
Normalization of Global Effects
Suppose Alice rates Inception 4 stars. We can think of this rating as composed of several parts:
• A baseline rating (e.g., maybe the mean over all user-movie ratings is 3.1 stars).
• An Alice-specific effect (e.g., maybe Alice tends to rate movies lower than the average user, so her ratings are -0.5 stars lower than we normally expect).
• An Inception-specific effect (e.g., Inception is a pretty awesome movie, so its ratings are 0.7 stars higher than we normally expect).
• A less predictable effect based on the specific interaction between Alice and Inception that accounts for the remainder of the stars (e.g., Alice really liked Inception because of its particular
combination of Leonardo DiCaprio and neuroscience, so this rating gets an additional 0.7 stars).
In other words, we’ve decomposed the 4-star rating into: 4 = [3.1 (the baseline rating) - 0.5 (the Alice effect) + 0.7 (the Inception effect)] + 0.7 (the specific interaction)
So instead of having our models predict the 4-star rating itself, we could first try to remove the effect of the baseline predictors (the first three components) and have them predict the specific
0.7 stars. (I guess you can also think of this as a simple kind of boosting.)
More generally, additional baseline predictors include:
• A factor that allows Alice’s rating to (linearly) depend on the (square root of the) number of days since her first rating. (For example, have you ever noticed that you become a harsher critic
over time?)
• A factor that allows Alice’s rating to depend on the number of days since the movie’s first rating by anyone. (If you’re one of the first people to watch it, maybe it’s because you’re a huge fan
and really excited to see it on DVD, so you’ll tend to rate it higher.)
• A factor that allows Alice’s rating to depend on the number of people who have rated Inception. (Maybe Alice is a hipster who hates being part of the crowd.)
• A factor that allows Alice’s rating to depend on the movie’s overall rating.
• (Plus a bunch of others.)
And, in fact, modeling these biases turned out to be fairly important: in their paper describing their final solution to the Netflix Prize, Bell and Koren write that
Of the numerous new algorithmic contributions, I would like to highlight one – those humble baseline predictors (or biases), which capture main effects in the data. While the literature mostly
concentrates on the more sophisticated algorithmic aspects, we have learned that an accurate treatment of main effects is probably at least as significant as coming up with modeling breakthroughs.
(For a perhaps more concrete example of why removing these biases is useful, suppose you know that Bob likes the same kinds of movies that Alice does. To predict Bob’s rating of Inception, instead of
simply predicting the same 4 stars that Alice rated, if we know that Bob tends to rate movies 0.3 stars higher than average, then we could first remove Alice’s bias and then add in Bob’s: 4 + 0.5 +
0.3 = 4.8.)
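To make this concrete, here is a minimal sketch in R of the first three baseline components (the ratings data frame layout and all names are illustrative assumptions, not actual Netflix Prize code):

mu = mean(ratings$score)                              # global baseline
b_u = tapply(ratings$score - mu, ratings$user, mean)  # per-user offsets
res = ratings$score - mu - b_u[as.character(ratings$user)]
b_i = tapply(res, ratings$movie, mean)                # per-movie offsets
baseline = function(u, m) mu + b_u[as.character(u)] + b_i[as.character(m)]

Later models can then be trained on the residuals ratings$score - baseline(ratings$user, ratings$movie); in practice these means are also shrunk toward zero, which is the regularization discussed further below.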
Neighborhood Models
Let’s now look at some slightly more sophisticated models. As alluded to in the section above, one of the standard approaches to collaborative filtering is to use neighborhood models.
Briefly, a neighborhood model works as follows. To predict Alice’s rating of Titanic, you could do two things:
• Item-item approach: find a set of items similar to Titanic that Alice has also rated, and take the (weighted) mean of Alice’s ratings on them.
• User-user approach: find a set of users similar to Alice who rated Titanic, and again take the mean of their ratings of Titanic.
(See also my post on item-to-item collaborative filtering on Amazon.)
The main questions, then, are (let’s stick to the item-item approach for simplicity):
• How do we find the set of similar items?
• How do we weight these items when taking their mean?
The standard approach is to take some similarity metric (e.g., correlation or a Jaccard index) to define similarities between pairs of movies, take the K most similar movies under this metric (where
K is perhaps chosen via cross-validation), and then use the same similarity metric when computing the weighted mean.
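In code, that standard approach might look something like this sketch (M is an assumed user-by-movie matrix with NA for missing ratings; none of this comes from the actual prize entries):

cos_sim = function(a, b){                    # similarity over co-rated users
  ok = !is.na(a) & !is.na(b)
  sum(a[ok]*b[ok]) / sqrt(sum(a[ok]^2) * sum(b[ok]^2))
}
predict_rating = function(M, u, i, K=20){
  s = sapply(1:ncol(M), function(j) cos_sim(M[,i], M[,j]))
  nb = setdiff(order(s, decreasing=TRUE), i) # movies ranked by similarity to i
  nb = head(nb[!is.na(M[u,nb])], K)          # the K most similar that u rated
  sum(s[nb]*M[u,nb]) / sum(abs(s[nb]))       # similarity-weighted mean
}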
This has a couple problems:
• Neighbors aren’t independent, so using a standard similarity metric to define a weighted mean overcounts information. For example, suppose you ask five friends where you should eat tonight. Three
of them went to Mexico last week and are sick of burritos, so they strongly recommend against a taqueria. Thus, your friends’ recommendations have a stronger bias than what you’d get if you asked
five friends who didn’t know each other at all. (Compare with the situation where all three Lord of the Rings Movies are neighbors of Harry Potter.)
• Different movies should perhaps be using different numbers of neighbors. Some movies may be predicted well by only one neighbor (e.g., Harry Potter 2 could be predicted well by Harry Potter 1
alone), some movies may require more, and some movies may have no good neighbors (so you should ignore your neighborhood algorithms entirely and let your other ratings models stand on their own).
So another approach is the following:
• You can still use a similarity metric like correlation or cosine similarity to choose the set of similar items.
• But instead of using the similarity metric to define the interpolation weights in the mean calculations, you essentially perform a (sparse) linear regression to find the weights that minimize the
squared error between an item’s rating and a linear combination of the ratings of its neighbors. Note that these weights are no longer constrained, so that if all neighbors are weak, then their
weights will be close to zero and the neighborhood model will have a low effect.
(A slightly more complicated user-user approach, similar to this item-item neighborhood approach, is also useful.)
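A dense toy version of that regression, for a single target movie i with neighbor set nb as above (real implementations handle the sparsity far more carefully; the names are again illustrative):

ok  = complete.cases(M[, c(i, nb)])  # users who rated i and all its neighbors
fit = lm(M[ok, i] ~ M[ok, nb] - 1)   # unconstrained, no-intercept least squares
w   = coef(fit)                      # weak neighbors end up with weights near 0
pred = sum(w * M[u, nb])             # prediction for user u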
Implicit Data
Adding on to the neighborhood approach, we can also let implicit data influence our predictions. The mere fact that a user rated lots of science fiction movies but no westerns suggests that the user likes science fiction better than cowboys. So using a similar framework as in the neighborhood ratings model, we can learn for Inception a set of offset weights associated to Inception's movie neighbors.
Whenever we want to predict how Bob rates Inception, we look at whether Bob rated each of Inception’s neighbors. If he did, we add in the corresponding offset; if not, then we add nothing (and, thus,
Bob’s rating is implicitly penalized by the missing weight).
Matrix Factorization
Complementing the neighborhood approach to collaborative filtering is the matrix factorization approach. Whereas the neighborhood approach takes a very local approach to ratings (if you liked Harry
Potter 1, then you’ll like Harry Potter 2!), the factorization approach takes a more global view (we know that you like fantasy movies and that Harry Potter has a strong fantasy element, so we think
that you’ll like Harry Potter) that decomposes users and movies into a set of latent factors (which we can think of as categories like “fantasy” or “violence”).
In fact, matrix factorization methods were probably the most important class of techniques for winning the Netflix Prize. In their 2008 Progress Prize paper, Bell and Koren write
It seems that models based on matrix-factorization were found to be most accurate (and thus popular), as evident by recent publications and discussions on the Netflix Prize forum. We definitely
agree to that, and would like to add that those matrix-factorization models also offer the important flexibility needed for modeling temporal effects and the binary view. Nonetheless,
neighborhood models, which have been dominating most of the collaborative filtering literature, are still expected to be popular due to their practical characteristics - being able to handle new
users/ratings without re-training and offering direct explanations to the recommendations.
The typical way to perform matrix factorizations is to perform a singular value decomposition on the (sparse) ratings matrix (using stochastic gradient descent and regularizing the weights of the
factors, possibly constraining the weights to be positive to get a type of non-negative matrix factorization). (Note that this “SVD” is a little different from the standard SVD learned in linear
algebra, since not every user has rated every movie and so the ratings matrix contains many missing elements that we don’t want to simply treat as 0.)
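A minimal stochastic gradient descent sketch of such a factorization (the learning rate, regularization strength, and dimensionality are illustrative choices rather than the winners' settings, and users/movies are assumed to be coded as small integer ids):

sgd_mf = function(ratings, k=20, epochs=30, lr=0.01, reg=0.05){
  P = matrix(rnorm(max(ratings$user)*k, sd=0.1), ncol=k)  # user factors
  Q = matrix(rnorm(max(ratings$movie)*k, sd=0.1), ncol=k) # movie factors
  for(e in 1:epochs) for(r in sample(nrow(ratings))){
    u = ratings$user[r]; m = ratings$movie[r]
    err = ratings$score[r] - sum(P[u,]*Q[m,]) # error on one observed rating
    pu = P[u,]                                # keep the old value for Q's step
    P[u,] = pu + lr*(err*Q[m,] - reg*pu)      # regularized gradient steps
    Q[m,] = Q[m,] + lr*(err*pu - reg*Q[m,])
  }
  list(P=P, Q=Q)  # predict a rating with sum(P[u,]*Q[m,])
}

Note that the loop only ever touches observed ratings, which is how this "SVD" sidesteps the missing entries just mentioned.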
Some SVD-inspired methods used in the Netflix Prize include:
• Standard SVD: Once you’ve represented users and movies as factor vectors, you can dot product Alice’s vector with Inception’s vector to get Alice’s predicted rating of Inception.
• Asymmetric SVD: Instead of users having their own notion of factor vectors, we can represent users as a bag of items they have rated (or provided implicit feedback for). So Alice is now
represented as a (possibly weighted) sum of the factor vectors of the items she has rated, and to get her predicted rating of Titanic, we can dot product this representation with the factor
vector of Titanic. From a practical perspective, this model has an added benefit in that no user parameterizations are needed, so we can use this approach to generate recommendations as soon as a
user provides some feedback (which could just be views or clicks on an item, and not necessarily ratings), without needing to retrain the model to factorize the user.
• SVD++: Incorporate both the standard SVD and the asymmetric SVD model by representing users both by their own factor representation and as a bag of item vectors.
Some regression models were also used in the predictions. The models are fairly standard, I think, so I won’t spend too long here. Basically, just as with the neighborhood models, we can take a
user-centric approach and a movie-centric approach to regression:
• User-centric approach: We learn a regression model for each user, using all the movies that the user rated as the dataset. The response is the movie’s rating, and the predictor variables are
attributes associated to that movie (which can be derived from, say, PCA, MDS, or an SVD).
• Movie-centric approach: Similarly, we can learn a regression model for each movie, using all the users that rated the movie as the dataset.
Restricted Boltzmann Machines
Restricted Boltzmann Machines provide another kind of latent factor approach that can be used. See this paper for a description of how to apply them to the Netflix Prize. (In case the paper’s a
little difficult to read, I wrote an introduction to RBMs a little while ago.)
Temporal Effects
Many of the models incorporate temporal effects. For example, when describing the baseline predictors above, we used a few temporal predictors that allowed a user’s rating to (linearly) depend on the
time since the first rating he ever made and on the time since a movie’s first rating. We can also get more fine-grained temporal effects by, say, binning items into a couple months’ worth of ratings
at a time, and allowing movie biases to change within each bin. (For example, maybe in May 2006, Time Magazine nominated Titanic as the best movie ever made, which caused a spurt in glowing ratings
around that time.)
In the matrix factorization approach, user factors were also allowed to be time-dependent (e.g., maybe Bob comes to like comedy movies more and more over time). We can also give more weight to recent
user actions.
Regularization was also applied throughout pretty much all the models learned, to prevent overfitting on the dataset. Ridge regression was heavily used in the factorization models to penalize large
weights, and lasso regression (though less effective) was useful as well. Many other parameters (e.g., the baseline predictors, similarity weights and interpolation weights in the neighborhood
models) were also estimated using fairly standard shrinkage techniques.
Ensemble Methods
Finally, let’s talk about how all of these different algorithms were combined to provide a single rating that exploits the strengths of each model. (Note that, as mentioned above, many of these
models were not trained on the raw ratings data directly, but rather on the residuals of other models.)
In the paper detailing their final solution, the winners describe using gradient boosted decision trees to combine over 500 models; previous solutions used instead a linear regression to combine the predictors.
Briefly, gradient boosted decision trees work by sequentially fitting a series of decision trees to the data; each tree is asked to predict the error made by the previous trees, and is often trained
on slightly perturbed versions of the data. (For a longer description of a similar technique, see my introduction to random forests.)
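For the simpler linear blend mentioned above, the whole idea fits in a couple of lines (a sketch: preds is assumed to be a data frame of each model's predictions on a held-out probe set, and truth the observed ratings there):

blend = lm(truth ~ ., data=cbind(data.frame(truth=truth), preds))
combined = predict(blend, newdata=preds)  # the blended prediction

The GBDT ensemble replaces this single linear fit with a sequence of trees, which is what lets it treat different slices of the data differently, as described next.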
Since GBDTs have a built-in ability to apply different methods to different slices of the data, we can add in some predictors that help the trees make useful clusterings:
• Number of movies each user rated
• Number of users that rated each movie
• Factor vectors of users and movies
• Hidden units of a restricted Boltzmann Machine
(For example, one thing that Bell and Koren found (when using an earlier ensemble method) was that RBMs are more useful when the movie or the user has a low number of ratings, and that matrix
factorization methods are more useful when the movie or user has a high number of ratings.)
Here’s a graph of the effect of ensemble size from early on in the competition (in 2007), and the authors’ take on it:
However, we would like to stress that it is not necessary to have such a large number of models to do well. The plot below shows RMSE as a function of the number of methods used. One can achieve
our winning score (RMSE=0.8712) with less than 50 methods, using the best 3 methods can yield RMSE < 0.8800, which would land in the top 10. Even just using our single best method puts us on the
leaderboard with an RMSE of 0.8890. The lesson here is that having lots of models is useful for the incremental results needed to win competitions, but practically, excellent systems can be built
with just a few well-selected models. | {"url":"http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/","timestamp":"2014-04-18T00:15:13Z","content_type":null,"content_length":"27235","record_id":"<urn:uuid:f1c5719b-a4d5-4191-81e3-84baae4c3802>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lower bound on number of tetrahedra needed to triangulate a knot complement
Following along a similar line to the question asked here: Is there an explicit bound on the number of tetrahedra needed to triangulate a hyperbolic 3-manifold of volume V?
Let $K$ be a (hyperbolic) knot in $S^3$. Let $n$ be the minimal number of crossings of any diagram of $K$ and let $M = S^3 \backslash K$ be its complement. By Moise’s theorem the 3-manifold $M$ can
be triangulated by tetrahedra. But is there any known bound on the number of tetrahedra needed to triangulate $M$ as a function of $n$? I am particularly interested in any known lower bounds, but
upper bounds would also be interesting.
3-manifolds knot-theory
There's an upper bound coming from the number of vertices in a planar knot diagram -- this comes from the algorithm SnapPea uses to construct a topological ideal triangulation of the complement.
You can then subdivide appropriately to construct a proper triangulation. I believe the number of tetrahedra needed should be linear in n although I haven't thought it through precisely. – Ryan
Budney Nov 15 '10 at 21:42
Point of clarification -- do you want your triangulation to be a hyperbolic ideal triangulation? I think it's an open problem as to whether or not cusped hyperbolic manifolds admit hyperbolic ideal
triangulations, isn't it? They have cusped polyhedral (Epstein-Penner) decompositions but they're generally not triangulations. – Ryan Budney Nov 15 '10 at 21:45
And you can't have a lower bound of the number of tetrahedra needed in terms of $n$ since you can always unneccessarily complicate your knot diagram. Perhaps you want $n$ to be the minimal number
of crossings in a diagram for the knot? – Ryan Budney Nov 15 '10 at 21:46
Yes Ryan you are correct, by number of crossings of a knot K I mean the minimal number of crossings of ANY diagram of K. I'll change the question to reflect this. – Mark Bell Nov 15 '10 at 22:58
1 Answer
Let $t(K)$ be the minimal number of tetrahedra needed to triangulate a knot complement. Let $c(K)$ be the minimal crossing number of a knot.
As Ryan Budney points out in the comments, $t(K)\leq C c(K)$ for some constant $C$. One may show that this lower bound is optimal, indirectly using a result of Lackenby. For alternating
knots, one obtains a linear lower bound on the hyperbolic volume of the knot complement, and therefore on the number of tetrahedra needed to triangulate the complement (as noted in
Thurston's answer to the other question), in terms of the twist number of the alternating knot, by work of Lackenby. One may find hyperbolic alternating knots $K_i$ where the twist number
is equal to the crossing number $c(K_i)$, and thus $ c(K_i)\leq c_1 Vol(S^3-K_i) \leq c_2 t(K_i)$ for some constants $c_1, c_2$. This shows that the linear lower bound is optimal.
For the other direction, of course, there is some sort of upper bound, since there are only finitely many knot complements with a triangulation by $\leq n$ tetrahedra, so just take the one
with the maximal crossing number! We may attempt to make this relation more explicit, using a result of Simon King. I think I can show that $c(K)\leq e^{p(t(K))}$, for some polynomial $p(n)$. Inverting this inequality gives a lower bound on $t(K)$ in terms of $c(K)$. King shows that if one has a triangulation $\tau$ of $S^3$ with $m$ tetrahedra, and a knot $K$ in the 1-skeleton of $\tau$, then the crossing number $c(K)$ is bounded by $C^{m^2}$ for some $C$. Given a triangulation of the knot complement, one wants to estimate the crossing number. To apply King's result, one must extend the triangulation of $S^3-K$ to a triangulation of $S^3$. The difficulty is that the meridian of the knot may be a very complicated curve in the triangulation of the knot complement. Work of Jaco and Sedgwick allow one to (in principle) estimate the combinatorial length of the meridian in the triangulation of the boundary torus. Their work also allows one to extend the triangulated knot complement along a solid torus to get a triangulation of $S^3$ with the knot in the 1-skeleton. I think one could obtain from their work an estimate on the number of tetrahedra of $\tau$ which is polynomial in the number of tetrahedra $t(K)$. Together with Simon King's result, this should give a bound $c(K)\leq e^{p(t(K))}$ ($p(n)$ a polynomial) on the crossing number in terms of the number of tetrahedra. I expect the answer to be exponential though.
Exponential lower bounds are realized, for example, by the torus knots. The crossing number is bounded linearly below by the genus. One may find sequences of torus knot complements
triangulated by $n$ tetrahedra, but with genus growing exponentially in $n$, and therefore crossing number growing exponentially. Estimates of the number of tetrahedra needed to triangulate
Seifert fibered spaces were given by Martelli and Petronio. The point is that you can get a triangulation of a $(p,q)$ torus knot with the number of tetrahedra growing like the continued
fraction expansion of $p/q$, which can be like $log(|p|+|q|)$, but the genus is $(p-1)(q-1)/2$.
Benoit Mandelbrot, RIP
Benoit Mandelbrot
, the father of fractal geometry, has died. He was 85.
"Fractal geometry is not just a chapter of mathematics, but one that helps Everyman to see the same world differently." -Benoit Mandelbrot (1924-2010)
Benoit Mandelbrot the Maverick, 1924-2010
(The Atlantic)
Long Live Mandelbrot (Imaginary Foundation)
David Pescovitz is Boing Boing's co-editor/managing partner. He's also a research director at Institute for the Future. On Instagram, he's @pesco.
44 Responses to “Benoit Mandelbrot, RIP”
1. benher says:
What Hokusai saw in his waves Mandelbrot saw in fractals.
Thank you for your hard work and the inspiration you provided to humanity.
2. charlesj says:
One way to honor his memory is to serve fractal broccoli for dinner:
3. bartoncasey says:
One way to honor his memory is to serve fractal broccoli for dinner
The problem with this is that the infinite surface area of the first piece uses up all the cheese sauce.
4. Anonymous says:
“Mandelbrot’s in heaven.”
– Jonathan Coulton
Mandelbrot Set
5. whizse says:
I guess JoCo needs to update his song…
http://www.jonathancoulton.com/songdetails/Mandelbrot Set
6. BobbyMike says:
Not many people know that Mandelbrot did his best thinking about Fractal Geometry while he was standing at his bathroom sink, staring at himself in the mirror.
8. lava says:
I zoomed in on that photo and I’m pretty sure his hair is a fractal.
9. Anonymous says:
Thank god he wasn’t murdered. The police would have taken forever to draw the chalk outline!
…too soon?
10. Lobster says:
The man may be gone but his contributions go on forever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and
ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever, and ever. And ever.
11. cjp says:
“My fate has been that what I undertook was fully understood only after the fact.”
Benoit Mandelbrot
12. Dave McCaig says:
Such a shame that he didn’t get more mainstream recognition – his work truly revolutionized the understanding of how the world assembles itself, and without him computer graphics would not be the
same. Every CG cloud, every mountain, etc. should be wearing a little black arm band.
While he coined the term “fractal”, extended the concept’s use and understanding, and won the hearts of the masses with his super cool computer representations – it’s good to remember that he
stood on the shoulders of great mathematicians who came before him in the field, such as Weierstrass, Koch, Cantor, and Gaston Julia – my 2nd favorite scientist without a nose. I don’t know if
“father of fractal geometry” is a fair term here. He certainly cranked a quiet signal up to 11 though.
□ Tynam says:
Rest peacefully in the infinite depths.
Arguably Koch and Julia have more claim to being the ‘father’ of fractal geometry, but Mandelbrot was unquestionably the man who brought it into public use.
Age 17 I spent all of a particularly boring physics lesson coding a crude Mandelbrot generator on my Casio fx-7700. I still have that calculator in a top cupboard; I’ve never had the heart to
throw it away since.
13. agnot says:
Dr. Mandelbrot becomes the Mandelbrot. When we miss him we can visit him there.
14. ackpht says:
Back in the 90s I worked at IBM manufacturing in Poughkeepsie. From time to time we interfaced with R&D in Yorktown Heights. One day we were having lunch there, and I idly looked around the
cafeteria- and there’s Mandelbrot, alone at a small table.
“Is that…?”
It was hard not to stare.
15. kaini says:
benoit ignited a passion for maths in me that’s still there sixteen years later. jeff buckley once described nusrat fateh ali khan as ‘my elvis’. well, that’s sort of the way i feel about doctor
mandelbrot. a sad day.
16. Anonymous says:
One of the best math classes I ever took was ‘Chaotic Dynamic Systems’. One of the first java applets I ever wrote (back in the e .8 beta days) was a Mandlebrot set applet.
17. kaini says:
‘Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.’ – Benoit Mandelbrot.
18. Forteto says:
“Take a point called Z in a complex plane….”
19. jes5199 says:
A friend of mine suggests that JoCo should just replace the lyrics about Mandelbrot still being alive and teaching at Yale with several beats of silence
20. edthehippie says:
RIP ~ i wrote my first mandelbrot program for the motorola 68000 cpu in the Commodore Amiga , around 1986 or so , in forth and assembler language ~ i wrote the inner ” butterfly ” complex
arithmetic multiply routine and also implemented a scaled integer square root algorithm myself ~~ so many lovely pictures , and real world physics applications too !!
21. Gunn says:
Rudy Rucker posted a neat remembrance of Mandelbrot on his blog: http://www.rudyrucker.com/blog/2010/10/16/remembering-benoit-mandelbrot/
22. kaini says:
double MinRe = -2.0;
double MaxRe = 1.0;
double MinIm = -1.2;
double MaxIm = MinIm+(MaxRe-MinRe)*ImageHeight/ImageWidth;
double Re_factor = (MaxRe-MinRe)/(ImageWidth-1);
double Im_factor = (MaxIm-MinIm)/(ImageHeight-1);
unsigned MaxIterations = 30;
for(unsigned y=0; y<ImageHeight; ++y)
{
    double c_im = MaxIm - y*Im_factor;
    for(unsigned x=0; x<ImageWidth; ++x)
    {
        double c_re = MinRe + x*Re_factor;
        // iterate z -> z^2 + c, starting from z = c
        double Z_re = c_re, Z_im = c_im;
        bool isInside = true;
        for(unsigned n=0; n<MaxIterations; ++n)
        {
            double Z_re2 = Z_re*Z_re, Z_im2 = Z_im*Z_im;
            if(Z_re2 + Z_im2 > 4)   // escaped the radius-2 disk
            {
                isInside = false;
                break;
            }
            Z_im = 2*Z_re*Z_im + c_im;
            Z_re = Z_re2 - Z_im2 + c_re;
        }
        if(isInside) { putpixel(x, y); }
    }
}
24. Anonymous says:
RIP. For my part, the first Mandelbrot program I wrote from scratch was on my TI-85 (which has a Z80 µP) in high school–I started it around 10 AM, and it had rendered most of the set by 3 PM.
Actually, I still have that calculator . . .
Off to measure the coast of Britain in memory.
25. Lady Katey says:
File under “people I didn’t know were still alive until they died.”
When I was a kid in the mid 1990s, we had a CD-ROM of Encyclopedia Encarta. It had a number of interactive ‘games’, one of which was Fractal Trees, which introduced me to the concept and the name
(That Encarta also had some random music videos on it- I remember Changes by David Bowie in particular.)
26. Anonymous says:
RIP MANDLEBROT, YOU WILL BE MISSED AND YOUR WORK REMEMBERED.
27. desiredusername says:
Thanks for making my video games look awesome! RIP Mandelbrot!
28. rtresco says:
My first exposure to Mandelbrot was Arthur C. Clarke’s “The Colors of Infinity”. To hear Clarke describe it, I remember thinking I was on the verge of unlocking some kind of secret of the
universe, with infinite worlds exisitng within each other. But then it ended and I was left thinking, “….”. The a-ha never came. It was just trippy graphics set to a Sci-Fi storyteller.
29. DWittSF says:
Looking at his hair next to the fractal image, the mirror story makes perfect sense.
I’d like to think that Dr. Mandelbrot has just zoomed into the macrocosm…or the microcosm. He was a modern day Hermes Trismegistus. RIP mapping that eternal shoreline!
30. Phlip says:
Here’s a Buddhabrot:
Someone in Bronze Age India has a _lot_ of explaining to do there! C-:
31. Phlip says:
Benoit M. was the winner of a fight between orthodox mathematicians, who claimed that Proofs must come first…
…and programmer-mathematicians, who used computers to explore formulaic spaces BEFORE proving things about the landscapes they discovered.
We who now take computers for granted forget that mathematicians using them were once scorned as “relying on crutches”.
□ Joe says:
Philip, you’re creating a straw man. How can proofs come first? Any statement that is eventually proved has to start as a conjecture. Mathematical exploration isn’t something that Mandelbrot
☆ Phlip says:
Actually I’m citing the Scientific American article on the Mandelbrot, from the early 1980s, IIRC.
Mandelbrot’s eulogies are calling him a “maverick”. I’m pointing out why – because traditionalists did not like programmers leaning on their computer programs.
The issue here is not “strawman” arguments but mythification. Mandelbrot was considered a maverick for a very specific reason.
32. DJBudSonic says:
I was fortunate enough to meet the man many years ago at a fractals in engineering symposium. I am certain that most of the real scientific breakthroughs in the years ahead will come about as a
result of studying, in any field, the boundaries. All the action is at the edges! RIP
33. Stjohn says:
Rest in peace, sir. Thanks to Benoit Mandelbrot and the really smart guys behind Fractint, I was able to make a little money selling videotapes of color-cycling Mandelbrot tendrils at the Ann
Arbor Art Fair in 1992. Fractals got me into computer graphics and computer graphics got me out trouble and into a fascinating and rewarding career.
34. Sxe says:
Dr. Steve Brule?!?
35. shutz says:
In my last year of high school, I did a science project where I attempted to calculate the fractal dimension of the edges of tree leaves (maple leaves had the most fractal-like features, and, it
so happens, the highest fractal dimension). The math behind my Basic program was a bit over my head, but I did get interesting results.
When presenting the project at a science project expo, I would use a PC with Fractint and would zoom into a Mandelbrot set to demonstrate the internal symmetry of fractals, as an introduction to
my project.
Since then, whenever I’ve learned to program graphics on a new platform, I programmed a Mandelbrot set “browser” with zooming capability as one of my first exercises. My most recent attempt is a
Mandelbrot set browser for the Nintendo DS. Anyone with a DS flash cart should be able to run this:
The controls are on-screen. Note that selecting a zoom-in area will modify your aspect ratio… I haven’t bothered to correct for that, yet. When not in the mode that lets you select zoom-in
coordinates, the L and R buttons cycle the colors.
Also note that if you zoom in too far, you reach the limits of the Double-precision float data type I used, so don’t expect to do any extreme deep-zooming. Anyway, the rendering would take hours
at that point, especially since I didn’t optimize the rendering much.
36. CfM says:
I got into mathematics because of Mandelbrot, I got into computer science because of Mandelbrot…
Kind of a shock…
37. Anonymous says:
When experimenting with hallucinogenic substances it quickly became a frequent thing for me to visit a flatland like plane where all history is laid out as a Mandelbrot fractal. It was so scary
while under the influence to think that every sensory experience, from every nervous twitch to every Luftwaffe bomber takeoff was somehow already graphed into that fractal and we are just
experiencing the sensory phonograph stylus playing the pattern back to us.
Benoit Mandelbrot, we will miss you, thanks for the dreams and the bad trips.
38. hemanz346 says:
I like Julia.
.. but Quaternions, Newton, IFS are cool kind of fractals as well.
I listed some fractal types here:
of course you’re welcome to view examples and vote your favorite!
RIP Ben, you surely deserve a nice shaped stone.
39. Lex10 says:
I hear he originally wanted to call fractals Benoit Balls but then he thought better of it……
40. sapere_aude says:
And let’s not forget that Barbara “Excuse me stewardess, I speak Jive” Billingsley also died today at age 94.
41. theLadyfingers says:
I guess Mandelbrots don’t go on forever. | {"url":"http://boingboing.net/2010/10/16/benoit-mandelbrot-ri.html","timestamp":"2014-04-18T23:20:53Z","content_type":null,"content_length":"95624","record_id":"<urn:uuid:3ab44dd0-071d-49a1-83ff-7b0af1fcea4a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rancho Park, CA Math Tutor
Find a Rancho Park, CA Math Tutor
...Both qualifications were earned in the United States. More than 12 years of experience teaching chemistry, from middle-school to graduate-school students. Extensive experience with one-on-one tutoring as well.
10 Subjects: including precalculus, statistics, MCAT, probability
...When we met, she was struggling in an SAT prep academy, scoring in the low 500's. She opted for private tutoring sessions. After one month of intense tutoring, she was scoring mid 700's in her
practice exams.
9 Subjects: including algebra 1, algebra 2, vocabulary, geometry
...I grew up in Idaho, and moved to California to attend UC Berkeley, where I studied Chinese history. After returning from Taiwan, I spent six years as a journalist before returning to teaching
in 2000. I am married, and have two children.
41 Subjects: including linear algebra, reading, prealgebra, English
...The way I teach will be by showing you step by step how to do a problem for a few times. Then I will let you take on the problem itself and make sure you understand why each step is necessary.
Depending on your work load I can assign homework that will challenge your ability to solve hard problems so you can keep progressing.
9 Subjects: including calculus, precalculus, differential equations, physical science
I feel students learn best in an environment that lifts them up and doesn't put them down. I have been teaching ESL for over twenty years. I have taught all nationalities and ages in both Paris
and Los Angeles.
13 Subjects: including algebra 1, statistics, reading, writing
Related Rancho Park, CA Tutors
Rancho Park, CA Accounting Tutors
Rancho Park, CA ACT Tutors
Rancho Park, CA Algebra Tutors
Rancho Park, CA Algebra 2 Tutors
Rancho Park, CA Calculus Tutors
Rancho Park, CA Geometry Tutors
Rancho Park, CA Math Tutors
Rancho Park, CA Prealgebra Tutors
Rancho Park, CA Precalculus Tutors
Rancho Park, CA SAT Tutors
Rancho Park, CA SAT Math Tutors
Rancho Park, CA Science Tutors
Rancho Park, CA Statistics Tutors
Rancho Park, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Baldwin Hills, CA Math Tutors
Bicentennial, CA Math Tutors
Briggs, CA Math Tutors
Century City, CA Math Tutors
Dockweiler, CA Math Tutors
Farmer Market, CA Math Tutors
Mar Vista, CA Math Tutors
Miracle Mile, CA Math Tutors
Oakwood, CA Math Tutors
Pico Heights, CA Math Tutors
Playa Vista, CA Math Tutors
Playa, CA Math Tutors
Preuss, CA Math Tutors
Westwood, LA Math Tutors
Wilcox, CA Math Tutors | {"url":"http://www.purplemath.com/rancho_park_ca_math_tutors.php","timestamp":"2014-04-17T01:21:44Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:76b8f8af-0ba3-45ea-a647-04bacaeb36fc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Maxwell Equations as axioms over all of physics and math #9 Textbook 2nd ed.: TRUE CALCULUS; without the phony limit concept
Replies: 6   Last Post: May 26, 2013 1:35 AM
Subject: picketfence model is irreplaceable Re: Maxwell Equations as axioms over all of physics and math #9 Textbook 2nd ed.: TRUE CALCULUS; without the phony limit concept
Posted: May 25, 2013 2:04 PM
I was hoping I could replace the picketfence model with a two-pure-triangle model.
Remember the sawtooth function of F(x) = 0 for even numbered x and
F(x) = 10^603 for odd numbered x. Here we can do the integration, not
by summation of thin picketfences but rather by summation of two thin
pure triangles along the leftside and rightside of each point of the
function graph.
From that sawtooth function I was hoping to eliminate the rectangle portion of the picketfence model, but I cannot. So that leaves me with having to prove that the calculus's best model is the picketfence and that it is irreplaceable for the calculus.
Proof: I would use the identity function; obviously the picketfence model gives the derivative 1 and the integral (1/2)x^2. But can I get a derivative of 1 via two pure triangles? So far, no. If the picketfence model is the only way to get the derivative and integral of the identity function, then it is irreplaceable.
More than 90 percent of AP's posts are missing in the Google
newsgroups author search archive from May 2012 to May 2013. Drexel
University's Math Forum has done a far better job and many of those
missing Google posts can be seen here:
Archimedes Plutonium
whole entire Universe is just one big atom
where dots of the electron-dot-cloud are galaxies | {"url":"http://mathforum.org/kb/message.jspa?messageID=9127319","timestamp":"2014-04-16T22:30:21Z","content_type":null,"content_length":"25643","record_id":"<urn:uuid:1a45d58c-141d-44d3-9f08-95e6eb6307cc>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Provide an example of an arithmetic series which totals zero. Using complete sentences, explain how you created the example.
Use all zeros \[\{0,0,0,0,\ldots\}\]
-8, -6, -4, -2, 0, 2, 4, 6, 8 I created it by picking 0 as the middle term and the ones below 0 as the opposites of the ones above 0. bye
| {"url":"http://openstudy.com/updates/4f78f5f0e4b0ddcbb89e7fab","timestamp":"2014-04-20T08:32:53Z","content_type":null,"content_length":"30121","record_id":"<urn:uuid:8541e8e5-7451-4906-934b-1a82c2f30101>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
geometric series
To find the middle term of a G.P. with positive terms, you don’t need to find the common ratio: just multiply the two end terms and take the square root (i.e. take the geometric mean).
Similarly to find the middle term of an A.P., you don’t need the common difference: just add the two end terms and divide by 2 (arithmetic mean).
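As a quick check of why both tricks work: for the GP a, ar, ..., ar^{2k} and the AP a, a+d, ..., a+2kd, the middle terms are
$$ar^{k}=\sqrt{a\cdot ar^{2k}}\qquad\text{and}\qquad a+kd=\frac{a+(a+2kd)}{2},$$
i.e., exactly the geometric and arithmetic means of the two end terms.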
Last edited by Nehushtan (2013-10-21 22:36:14) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=288097","timestamp":"2014-04-16T21:53:49Z","content_type":null,"content_length":"9833","record_id":"<urn:uuid:45ddaec5-b957-4704-9a71-f8163c2042e0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
As described in the paper, during each phase JPM performs a stable matching driven by the outputs. At the end of each phase, cells are transferred from inputs to outputs according to the match. The description in the paper leaves room for interpretation of (1) which cells the inputs consider in granting the requests, and (2) which cells are actually transferred. Our intended interpretation was the following:
• A. (1) Each input considers the outputs' cells that are most preferred by the input. (2) A matched input transfers the output's most preferred cell.
In addition, JPM admits two other interpretations:
• B. (1) Same as A.1. (2) A matched input transfers the most preferred cell belonging to the output, according to the input's preference list.
• C. (1) Each input considers only cells requested by the outputs, i.e., the cells most preferred by the outputs. (2) Same as A.2.
As it was intended, JPM does not work correctly when (1) there are more cells destined to the same output enqueued at the same input, and when (2) the relative order of these cells in the input
preference list is different from their relative order in the output preference list (this is also true for interpretation B). This was pointed out by Balaji Prabhakar. For a detailed counterexample
by Balaji Prabhakar and Ashish Goel click here.
Independent work done by Chuang, Goel, McKeown, and Prabhakar includes the results in our paper; their paper is available as CSL-TR-98-758. Subsequently, and motivated by a counterexample given by
Balaji Prabhakar (similar to the example we use below) we have arrived at the following change to our algorithm.
Note: Assuming interpretation C, the algorithm works. However, the proof as described in the paper has a gap: it does not handle the case when a cell is not transferred and no cells more preferred by either the input or the output are transferred. For a complete proof click here.
After the matching is done, swap the matched cell in the input's preference list with the cell destined to the same output that is most preferred by that input.
Note: This change does not affect the example, the complexity claims and the main body of the proofs presented in the paper. However, in addition we need to show that this fix indeed enforces the
basic assumption on which our proofs are constructed. For the proof click here.
For clarity consider the following example. Assume that input x of a switch with outputs a, b, and c holds the following three cells a.2, b.1, and a.1 (in this order). In the cell notation the letter
represents the output to which the cell is destined, while the number represents the order of the cell in the output preference list.
Next assume that during a matching phase outputs a and b request their most preferred cells from input x, i.e., a.1 and b.1, respectively. (Note that while a.1 is the most preferred cell by output a,
the cell destined to a that is the most preferred by input x is a.2)
Among outputs a and b, input x will grant the request to the output with the most preferred cell according to x's preference list. This will be output a, because the cell destined to a that is most preferred by x (i.e., a.2) appears before the cell destined to b that is most preferred by x (i.e., b.1).
Once this matching is done cells a.1 and a.2 are swapped and a.1, which is the most preferred cell by output a, is transferred to a. As a result, after the transfer, the preference list of input x
will be b.1, a.2 (in this order).
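For concreteness, a minimal Python sketch of the swap fix applied to this example (our illustration, not code from the paper; representing a cell as an (output, rank-at-output) tuple is our assumption):

# Minimal sketch of the post-match swap fix (illustration only).
# An input's preference list is ordered most-preferred first; a cell
# is modeled as an (output, rank_at_output) tuple, e.g. ('a', 2) is a.2.

def apply_swap_fix(pref, matched):
    """Swap the matched cell with the cell destined to the same
    output that the input itself prefers most."""
    out = matched[0]
    # Position of the input's most preferred cell for this output.
    first = next(i for i, c in enumerate(pref) if c[0] == out)
    j = pref.index(matched)
    pref[first], pref[j] = pref[j], pref[first]

x = [('a', 2), ('b', 1), ('a', 1)]   # input x holds a.2, b.1, a.1
apply_swap_fix(x, ('a', 1))          # output a's request for a.1 is granted
x.remove(('a', 1))                   # a.1 is then transferred to output a
print(x)                             # [('b', 1), ('a', 2)], i.e. b.1, a.2

Note that when the matched cell is already the input's most preferred cell for that output, the swap is a no-op, so the fix only reorders the list in the problematic case described above.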
We are very thankful to Balaji Prabhakar and Ashish Goel for pointing out the above problem, as well as for providing the counterexample. Ion Stoica. Last modified: Sat Aug 8 16:43:36 EDT 1998 | {"url":"http://www.cs.berkeley.edu/~istoica/IWQoS98-fix.html","timestamp":"2014-04-16T16:19:34Z","content_type":null,"content_length":"5016","record_id":"<urn:uuid:bfb58956-4863-4071-8439-bf4d52689e0e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Modeling
These notes are available online at http://www.mat.univie.ac.at/~neum/papers.html#model
A slightly revised version appeared as
A. Neumaier, Mathematical Model Building, Chapter 3 in: Modeling Languages in Mathematical Optimization (J. Kallrath, ed.), Applied Optimization, Vol. 88, Kluwer, Boston 2004.
Abstract. Some notes on mathematical modeling, listing motivations, applications, a numerical toolkit, general modeling rules, modeling conflicts, useful attitudes, and structuring the modeling work
into 16 related activities by means of a novel modeling diagram.
1 Why mathematical modeling?
Mathematical modeling is the art of translating problems from an application area into tractable mathematical formulations whose theoretical and numerical analysis provides insight, answers, and
guidance useful for the originating application.
Mathematical modeling
• is indispensable in many applications
• is successful in many further applications
• gives precision and direction for problem solution
• enables a thorough understanding of the system modeled
• prepares the way for better design or control of a system
• allows the efficient use of modern computing capabilities
Learning about mathematical modeling is an important step from a theoretical mathematical training to an application-oriented mathematical expertise, and makes the student fit for mastering the
challenges of our modern technological culture.
2 A list of applications
In the following, I give a list of applications whose modeling I understand, at least in some detail. All areas mentioned have numerous mathematical challenges.
This list is based on my own experience; therefore it is very incomplete as a list of applications of mathematics in general. There are an almost endless number of other areas with interesting
mathematical problems.
Indeed, mathematics is simply the language for posing problems precisely and unambiguously (so that even a stupid, pedantic computer can understand it).
Archeology
• Modeling, classifying and reconstructing skulls
• Reconstruction of objects from preserved fragments
• Classifying ancient artifacts
Artificial intelligence
• Computer vision
• Image interpretation
• Robotics
• Speech recognition
• Optical character recognition
• Reasoning under uncertainty
Arts
• Computer animation (Jurassic Park)
Astronomy
• Detection of planetary systems
• Correcting the Hubble telescope
• Origin of the universe
• Evolution of stars
Biology
• Protein folding
• Human genome project
• Population dynamics
• Morphogenesis
• Evolutionary pedigrees
• Spreading of infectious diseases (AIDS)
• Animal and plant breeding (genetic variability)
Chemical engineering
• Chemical equilibrium
• Planning of production units
• Chemical reaction dynamics
• Molecular modeling
• Electronic structure calculations
Computer science
• Image processing
• Realistic computer graphics (ray tracing)
Criminalistic science
• Finger print recognition
• Face recognition
Electrical engineering
• Stability of electric circuits
• Microchip analysis
• Power supply network optimization
Finance
• Risk analysis
• Value estimation of options
Fluid mechanics
Geosciences
• Prediction of oil or ore deposits
• Map production
• Earthquake prediction
• Web search
• Optimal routing
Materials Science
• Microchip production
• Microstructures
• Semiconductor modeling
Mechanical engineering
• Stability of structures (high rise buildings, bridges, air planes)
• Structural optimization
• Crash simulation
Medicine
• Radiation therapy planning
• Computer-aided tomography
• Blood circulation models
Meteorology
• Weather prediction
• Climate prediction (global warming, what caused the ozone hole?)
Music
• Analysis and synthesis of sounds
Neuroscience
• Neural networks
• Signal transmission in nerves
Pharmacology
• Docking of molecules to proteins
• Screening of new compounds
Physics
• Elementary particle tracking
• Quantum field theory predictions (baryon spectrum)
• Laser dynamics
Political Sciences
Psychology
• Formalizing diaries of therapy sessions
Space Sciences
• Trajectory planning
• Flight simulation
• Shuttle reentry
Transport Science
• Air traffic scheduling
• Taxi for handicapped people
• Automatic pilot for cars and airplanes
3 Basic numerical tasks
The following is a list of categories containing the basic algorithmic toolkit needed for extracting numerical information from mathematical models.
Due to the breadth of the subject, this cannot be covered in a single course. For a thorough education one needs to attend courses (or read books) at least on numerical analysis (which usually covers
some numerical linear algebra, too), optimization, and numerical methods for partial differential equations.
Unfortunately, there appear to be few good courses and books on (higher-dimensional) numerical data analysis.
Numerical linear algebra
• Linear systems of equations
• Eigenvalue problems
• Linear programming (linear optimization)
• Techniques for large, sparse problems
Numerical analysis
• Function evaluation
• Automatic and numerical differentiation
• Interpolation
• Approximation (Padé, least squares, radial basis functions)
• Integration (univariate, multivariate, Fourier transform)
• Special functions
• Nonlinear systems of equations
• Optimization = nonlinear programming
• Techniques for large, sparse problems
Numerical data analysis (= numerical statistics)
• Visualization (2D and 3D computational geometry)
• Parameter estimation (least squares, maximum likelihood)
• Prediction
• Classification
• Time series analysis (signal processing, filtering, time correlations, spectral analysis)
• Categorical time series (hidden Markov models)
• Random numbers and Monte Carlo methods
• Techniques for large, sparse problems
Numerical functional analysis
• Ordinary differential equations (initial value problems, boundary value problems, eigenvalue problems, stability)
• Techniques for large problems
• Partial differential equations (finite differences, finite elements, boundary elements, mesh generation, adaptive meshes)
• Stochastic differential equations
• Integral equations (and regularization)
Non-numerical algorithms
• Symbolic methods (computer algebra)
• Sorting
• Compression
• Cryptography
• Error correcting codes
4 The modeling diagram
The nodes of the following diagram represent information to be collected, sorted, evaluated, and organized.
The edges of the diagram represent activities of two-way communication (flow of relevant information) between the nodes and the corresponding sources of information.
S. Problem Statement
• Interests of customer/boss
• Often ambiguous/incomplete
• Wishes are sometimes incompatible
M. Mathematical Model
• Concepts/Variables
• Relations
• Restrictions
• Goals
• Priorities/Quality assignments
T. Theory
• of Application
• of Mathematics
• Literature search
N. Numerical Methods
• Software libraries
• Free software from WWW
• Background information
P. Programs
• Flow diagrams
• Implementation
• User interface
• Documentation
R. Report
• Description
• Analysis
• Results
• Validation
• Visualization
• Limitations
• Recommendations
Using the modeling diagram
• The modeling diagram breaks the modeling task into 16=6+10 different processes.
• Each of the 6 nodes and each of the 10 edges deserve repeated attention, usually at every stage of the modeling process.
• The modeling is complete only when the 'traffic' along all edges becomes insignificant.
• Generally, working on an edge enriches both participating nodes.
• If stuck along one edge, move to another one! Use the general rules below as a check list!
• Frequently, the problem changes during modeling, in the light of the understanding gained by the modeling process. At the end, even a vague or contradictory initial problem description should
have developed into a reasonably well-defined description, with an associated precisely defined (though perhaps inaccurate) mathematical model.
5 General rules
• Look at how others model similar situations; adapt their models to the present situation.
• Collect/ask for background information needed to understand the problem.
• Start with simple models; add details as they become known and useful or necessary.
• Find all relevant quantities and make them precise.
• Find all relevant relationships between quantities ([differential] equations, inequalities, case distinctions).
• Locate/collect/select the data needed to specify these relationships.
• Find all restrictions that the quantities must obey (sign, limits, forbidden overlaps, etc.). Which restrictions are hard, which soft? How soft?
• Try to incorporate qualitative constraints that rule out otherwise feasible results (usually from inadequate previous versions).
• Find all goals (including conflicting ones)
• Play the devil's advocate to find out and formulate the weak spots of your model.
• Sort available information by the degree of impact expected/hoped for.
• Create a hierarchy of models: from coarse, highly simplifying models to models with all known details. Are there useful toy models with simpler data? Are there limiting cases where the model
simplifies? Are there interesting extreme cases that help discover difficulties?
• First solve the coarser models (cheap but inaccurate) to get good starting points for the finer models (expensive to solve but realistic)
• Try to have a simple working model (with report) after 1/3 of the total time planned for the task. Use the remaining time for improving or expanding the model based on your experience, for making
the programs more versatile and speeding them up, for polishing documentation, etc.
• Good communication is essential for good applied work.
• The responsibility for understanding, for asking the questions that lead to it, for recognizing misunderstanding (mismatch between answers expected and answers received), and for overcoming them
lies with the mathematician. You cannot usually assume your customer to understand your scientific jargon.
• Be not discouraged. Failures inform you about important missing details in your understanding of the problem (or the customer/boss) - utilize this information!
• There are rarely perfect solutions. Modeling is the art of finding a satisfying compromise. Start with the highest standards, and lower them as the deadline approaches. If you have results early,
raise your standards again.
• Finish your work in time.
Lao Tse: ''People often fail on the verge of success; take care at the end as at the beginning, so that you may avoid failure.''
6 Conflicts
Most modeling situations involve a number of tensions between conflicting requirements that cannot be reconciled easily.
• fast - slow
• cheap - expensive
• short term - long term
• simplicity - complexity
• low quality - high quality
• approximate - accurate
• superficial - in depth
• sketchy - comprehensive
• concise - detailed
• short description - long description
Einstein: ''A good theory'' (or model) ''should be as simple as possible, but not simpler.''
• perfecting a program - need for quick results
• collecting the theory - producing a solution
• doing research - writing up
• quality standards - deadlines
• dreams - actual results
The conflicts described are creative and constructive, if one does not give in too easily. As a good material can handle more physical stress, so a good scientist can handle more stress created by such conflicts.
''We shall overcome'' - a successful motto of the black liberation movement, created by a strong trust in God. This generalizes to other situations where one has to face difficulties, too.
Among other qualities it has, university education is not least a long term stress test - if you got your degree, this is a proof that you could overcome significant barriers. The job market pays for
the ability to persist.
7 Attitudes
• Do whatever you do with love. Love (even in difficult circumstances) can be learnt; it noticeably improves the quality of your work and the satisfaction you derive from it.
• Do whatever you do as a service to others. This will improve your attention, the feedback you'll get, and the impact you'll have.
• Take responsibility; ask if in doubt; read to confirm your understanding. This will remove many impasses that otherwise would delay your work.
Jesus: ''Ask, and you will receive. Search, and you will find. Knock, and the door will be opened for you.''
8 References
For more information about mathematics, software, and applications, see, e.g., my home page, at http://www.mat.univie.ac.at/~neum/ | {"url":"http://www.mat.univie.ac.at/~neum/model.html","timestamp":"2014-04-19T02:11:29Z","content_type":null,"content_length":"20642","record_id":"<urn:uuid:7c6cb175-97e1-4570-8acb-7089e1ab217a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
CSE 2011: Data: A Centuries-old Revolution in Science, Part I
July 15, 2011
Ed Seidel
Four hundred years ago, Galileo ushered in a true revolution in science by combining painstaking observations---data, which he collected in notebooks---with deep thinking to articulate mathematical
descriptions of the observed systems. Building on this data-driven foundation, Newton developed a modern theory of gravitation, as well as calculus, which laid the groundwork for a comprehensive
worldview governed by partial differential equations (that many of us have spent our careers trying to solve!).
Clearly, Galileo and Newton taught us well: The four centuries of modern science that followed have been a time of amazing discoveries. And less than one hundred years ago, Albert Einstein fueled
this data revolution when he extended Newton's theory with his theory of general relativity built on a system of PDEs. Unfortunately, these PDEs were so complex that Einstein himself was not equipped
to solve them! Nonetheless, observations generating just a notebook full of data confirmed that his theory was indeed true. Half a century later, Stephen Hawking's groundbreaking work on black holes
resulted in output that can now be quantified as kilobytes of digital data.
Indeed, the methodologies of Galileo's and Newton's data-driven science, and the culture of science, with small groups thinking deeply about fundamental problems, have been at the center of the
time-honored tradition of scientific research for centuries.
But if we fast-forward just 30 years from Hawking's work on black holes, we see that the world has changed tremendously. Advances of about 9 orders of magnitude in computing capability, along with
deep advances in algorithms, have made many of the most complex PDEs solvable. Suddenly, we are generating data by the petabyte, in quantities that could no longer be stored in Galileo's notebooks.
Dramatic, fundamental, and pervasive changes are upon us as we enter the data-intensive age of science.
The New Age of Data
A profound shift is occurring across all fields of research as technological advances enable us to tackle many truly complex challenges facing science and society. For example, not only can we now
solve Einstein's complex PDEs, we can also begin to integrate other parts of physics and astronomy into studies of real-world phenomena, such as gamma-ray bursts, across the universe. At the same
time, we are developing the capability to observe phenomena through all channels known to science, resulting in a diversity of data sources brought to bear on a single event. Now, all this knowledge,
held in different communities, and all this data must be integrated, so that new knowledge can emerge.
Indeed, two key trends have begun to emerge:
• The new frontier is seriously about data. While computing capability has grown according to Moore's law, data volumes are growing at a much higher rate. Sensors, telescopes, accelerators,
experiments, and other means are generating data at astonishing rates. While a cosmology simulation can generate a petabyte of data, a machine like the LHC (Large Hadron Collider) already
generates tens of petabytes that must be served to thousands of scientists across the planet; planned survey telescopes like the Large Synoptic Survey Telescope will generate hundreds of times
this much data for analysis by astronomers, computers, and school kids around the planet; DNA sequencers in any biologist's lab are capable of generating an LHC's worth of data (at a rate of a
terabyte per minute). And it is not just the volume, but also the great diversity of the data, that challenges us.
• Grand Challenge communities will be needed to address complex problems. Challenges---such as understanding gamma-ray bursts, predicting hurricanes, or forecasting climate change---will require
not only advancement and integration of numerous and diverse data- and compute-intensive activities, but also critical collaborations among scientists from different communities at scales never
before possible.
Fundamentally, data is becoming not only the output of most scientific inquiry, but also the dominant and fundamental medium of exchange among researchers across all disciplines.
Implications for Science
Galileo's vision of modern science as a data-driven activity guiding mathematical description remains, but exponential growth of the data volumes, along with their ubiquity and diversity, will
require completely new thinking---specifically, new mathematical and statistical methods---to describe not only the systems under study but the
data themselves
. Like computation, data-intensive science will drive revolutions in mathematics: How are features, let alone new laws of nature, to be found in the vast volumes of data being collected? How can
disparate data, from different instruments and multiple communities, be combined to advance knowledge? These questions will drive new discoveries in mathematics and statistics, and new techniques in
computer science and machine learning, just as they will be required for progress in the underlying science domains that pose them.
Furthermore, these changes in the culture and methods of science will call for a reconsideration of policies and practices as they relate to scientific research. As knowledge creation occurs rapidly
at community boundaries, as data is increasingly the main output of science, and as scientists will need to share data to collaborate, policy must be carefully developed to enable collaboration. And
traditional modes of communication---namely, scientific publications---will need to develop a richer set of tools and software to support and accelerate the flow of information and to support the
reproducibility of results. Openness and sharing of data will be critical to an accelerating advancement of science.
And so, we have arrived at yet another scientific revolution---a revolution in the scope, use, and production of data. The scientific rationale and implications for policy will be explored in the
second part of this article.
Ed Seidel, assistant director for mathematical and physical sciences at the National Science Foundation, discussed the ideas presented here in a panel session at CSE 2011 in Reno. At NSF, he was
previously director of the Office of Cyberinfrastructure. He is on leave from Louisiana State University, where he is the Floating Point Systems Professor in the Departments of Physics and Astronomy
and Computer Science. | {"url":"https://www.siam.org/news/news.php?id=1899","timestamp":"2014-04-18T13:07:49Z","content_type":null,"content_length":"13297","record_id":"<urn:uuid:c7583155-1372-4a92-b829-059834234748>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Read about some Spiked Math updates after the comic!
Thanks to
Jack and Beth
for providing the ideas for today's comic. This is based on true events! Check out the version Beth really wrote
(note that Delta-Delta-Delta and FU are University type stuff).
Spiked Math Updates!
• Spiked Math is now available on the Swoopy app (your daily cartoon comics all in one app for iPhone and iPad).
• Spiked Math now has a YouTube channel and I've partnered with a buddy named Maiu who will be helping roll out some mathy videos for you all! Any feedback is much appreciated, and if you have any ideas for mathy videos send them my way to: spikedmath@gmail.com. Quality and mathiness will improve as a function of time.
15 Comments
aaawww :>
I thought F is gonna be a slightly different set ;)
We all have a dirty mind, don't we?
C, the set of pleasant conversationalists, is equivalent to C, the set of complex numbers?
One could define a conversation with another person as the real part and the pleasance as the imaginary part.
Let's take all future-existing people as possible persons and assume that there are infinitely many, and let's give each person an interval which represents their drunkenness. Since there are also monologues, the pure imaginary axis is also defined. ;)
Spiked Math was apparently aware of sexual orientation issues and extended the domain and range of the date function.
This is how Sheldon Cooper dates... BAZINGAAA!!! :D
A friend of mine is showing me this site right now, and I'd like to say a big, loud "OMG! Geek!" Except I understood everything, so I'll just say : aww, so cute!
I think there's a mistake in the proof of Theorem 5.3. The existence of such an element G is not obvious, is it?
Haha I'm the Jack of this comic and you're right. He had to pare down my proof just a tad to format it well I think. In my original response I said something about my limited time, power, and money,
and thus there existing a finite number of things which I could do on that night. Now it follows directly from the well-ordering principle.
It's surely a finite set and so G would necessarily exist.
M is empty. That is the mistake in the proof of Theorem 5.3.
Hover over the capital Psi at the bottom center, below the comic. LOL
In Beth's original note: does the S with a vertical line through it stand for "suppose"? Is this a common abbreviation?
| {"url":"http://spikedmath.com/538.html","timestamp":"2014-04-18T05:32:01Z","content_type":null,"content_length":"50664","record_id":"<urn:uuid:3cca3ba2-18e3-4041-96d2-255b56fa0afa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Archive | January 26, 2010 | Chegg.com
Physics Archive: Questions from January 26, 2010
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• gammabeta asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
An electron is traveling through a region between two metal plates in which there is a constant electric field of magnitude E directed along the y direction, as sketched in the figure below. This region has a total length of L, and the electron has an initial velocity of v_0 in the x direction. (Use the following variables as necessary: e, E, L, m, and v_0.)
(a) How long does it take the electron to travel the length of the plates?
(b) What are the magnitude and direction of the electric force on the electron while it is between the plates?
(c) What is the acceleration of the electron?
(d) By what distance is the electron deflected when it leaves the plates?
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
The figure below shows the electric field lines for two point charges separated by a small distance. Determine the ratio of the two charges. (1 point)
The figure below shows a test charge q between the two positive charges. Find the force (in newtons) on the test charge for q = 3 µC. Give a positive answer if the force is to the right and a negative answer if the force is to the left. (1 point)
For the previous question, find the electric field (in newtons/coulomb) at the position of the test charge. Again, supply a positive value if the electric field points to the right and a negative value if it points to the left. (0.5 points)
How much work (in joules) is done in moving a charge of 2.5 µC a distance of 42 cm along an equipotential at 7 V? (0.5 points)
Which of the following are valid units for electric field strength? (1 point)
A. N/m
B. N/C
C. V/m
D. V/C
The field lines associated with a uniform electric field in a region of space are shown in the figure below. Imagine drawing two equipotentials in the plane of the figure, one passing through point A and one passing through point B. If the electric field is 400 N/C, how much does the potential change in going from A to B? Provide your answer in volts and use a positive value if it increases and a negative value if it decreases.
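A useful check for the equipotential question above: moving a charge along an equipotential involves zero potential difference, so
$$W = q\,\Delta V = (2.5\ \mu\mathrm{C})(0\ \mathrm{V}) = 0\ \mathrm{J}.$$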
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• shaydudu88 asked
3 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
Traumatic brain injury such as concussion results when the head undergoes a very large acceleration. Generally, an acceleration less than 800 m/s^2 lasting for any length of time will not cause injury, whereas an acceleration greater than 1000 m/s^2 lasting for at least 1 ms will cause injury. Suppose a small child rolls off a bed that is 0.37 m above the floor. If the floor is hardwood, the child's head is brought to rest in approximately 2.1 mm. If the floor is carpeted, this stopping distance is increased to about 1.0 cm. Calculate the magnitude and duration of the deceleration in both cases, to determine the risk of injury. Assume that the child remains horizontal during the fall to the floor. Note that a more complicated fall could result in a head velocity greater or less than the speed you calculate.
Hardwood floor: magnitude ______, duration ______
Carpeted floor: magnitude ______, duration ______
What would the duration for the hardwood floor be?
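A sketch of the standard constant-deceleration estimate (our working, not a posted answer): the head hits the floor with speed
$$v=\sqrt{2gh}=\sqrt{2(9.8\ \mathrm{m/s^2})(0.37\ \mathrm{m})}\approx 2.7\ \mathrm{m/s},$$
and stopping over a distance d gives
$$a=\frac{v^{2}}{2d},\qquad t=\frac{2d}{v}.$$
For hardwood (d = 2.1 mm) this yields a ≈ 1.7 × 10^3 m/s^2 lasting t ≈ 1.6 ms, above the injury threshold; for carpet (d = 1.0 cm), a ≈ 3.6 × 10^2 m/s^2 lasting t ≈ 7.4 ms, below it.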
2 answers
• Anonymous asked
1 answer
• Anonymous asked
Jane goes to a juice bar with her friend Neil. She is thinking of ordering her favorite drink, 7/8 orange juice and 1/8 cranberry juice, but the drink is not on the menu, so she decides to order a glass of orange juice and a glass of cranberry juice and do the mixing herself. The drinks come in two identical tall glasses; to avoid spilling while mixing the two juices, Jane shows Neil something she learned that day in class. She drinks about 1/8 of the orange juice, then takes the straw from the glass containing cranberry juice, sucks up just enough cranberry juice to fill the straw, and while covering the top of the straw with her thumb, carefully bends the straw and places the end over the orange juice glass. After she releases her thumb, the cranberry juice flows through the straw into the orange juice glass. Jane has successfully designed a siphon.
Assume that the glass containing cranberry juice has a very large diameter with respect to the diameter of the straw and that the cross-sectional area of the straw is the same at all points. Let the atmospheric pressure be p_0.
A) Consider the end of the straw from which the cranberry juice is flowing into the glass containing orange juice, and find the speed of the juice at that end. (Part A was solved.)
B) Given the information found in Part A, find the time t for the transfer.
t = ______________ s
3 answers
• Anonymous asked
3 answers
• Sasper asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Empire asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• kitsuneouji asked
0 answers
• Hegemon asked
0 answers
• vloraime asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
An electron is traveling through a region between two metal plates in which there is a constant electric field of magnitude E directed along the y direction, as sketched in the figure below. This region has a total length of L, and the electron has an initial velocity of v_0 in the x direction. (Use the following variables as necessary: e, E, L, m, and v_0.)
(a) How long does it take the electron to travel the length of the plates?
(b) What are the magnitude and direction of the electric force on the electron while it is between the plates?
(c) What is the acceleration of the electron?
(d) By what distance is the electron deflected when it leaves the plates?
(e) What is the final velocity of the electron?
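A sketch of the standard solution (our working; the exact signs depend on the figure, which did not survive extraction):
$$t=\frac{L}{v_0},\qquad F=eE\ \text{(directed opposite the field, since the electron is negative)},\qquad a=\frac{eE}{m},$$
$$d=\tfrac{1}{2}at^{2}=\frac{eEL^{2}}{2mv_0^{2}},\qquad v_x=v_0,\quad v_y=at=\frac{eEL}{mv_0}.$$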
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
INTRODUCTION: BODY TEMPERATURE
Just as in the experiment, the temperature of the body is a balance between the rate of energy production (in this case metabolic) and energy loss. For the body, important mechanisms of heat loss are radiation, convective heat loss to the air (the dominant mechanism in the experiment), and evaporation.
READING 1
METABOLIC RATE: The Basal Metabolic Rate (BMR) is the rate at which the internal energy of an organism is decreasing when it is fasting and completely at rest [8]. The metabolic energy production in resting mammals is surprisingly constant when divided by their surface area. For instance, from a white mouse weighing 34 g to an elephant weighing 3700 kg, the rate only varies from 42 to 100 in units of power per unit surface area.
If one makes a graph of the basal metabolic rate of mammals of various sizes as a function of the mass of the animal, one discovers that the metabolic rate (dE/dt) "scales" with mass M in accordance with the formula dE/dt = C M^(3/4). Using this formula, one can compute rather accurately the metabolic rate of any size of mammal. The fact that dE/dt varies as M^(3/4) is called Kleiber's Law.
This is but one example of many such scaling laws in biology. For example, experimental measurements show that the height of trees is proportional to the two-thirds power of the diameter of the tree. The study of such scaling laws, and an effort to find theoretical bases for them, represents an active branch of the sciences of zoology and comparative physiology [8].
Kleiber's Law can be reproduced theoretically by making the following assumptions:
(a) All mammals can be represented as consisting of body parts that are cylinders of length l and diameter d.
(b) The limiting length of a cylindrical column that can support itself is proportional to the 2/3 power of its diameter, i.e., l ∝ d^(2/3).
(c) Since all mammals have roughly the same density, the mass M is proportional to volume.
(d) The power output of a muscle depends on its cross-sectional area, P ∝ d^2. This is known as Hill's law of muscular power. Hill found that the strength of muscle fiber and the rate of contraction of muscles Δx/Δt vary little from species to species, but that the number of fibers in the muscle is proportional to the cross-sectional area. Since power equals F Δx/Δt (F is the force), this implies P ∝ d^2.
(e) The rate of heat production dE/dt is proportional to the power output of the animal's muscles.
Using these assumptions, you can derive Kleiber's Law, dE/dt ∝ M^(3/4). This suggests that our analysis may be on track [11].
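A sketch of the algebra, using only assumptions (a)-(e) above: combining (a), (b), and (c) gives
$$M \propto d^{2}l \propto d^{2}\,d^{2/3}=d^{8/3}\;\Rightarrow\; d \propto M^{3/8},$$
and then (d) and (e) give
$$\frac{dE}{dt}\propto P \propto d^{2}\propto M^{3/4}.$$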
[8] George B. Benedek and Felix M. H. Villars, PHYSICS WITH ILLUSTRATIVE EXAMPLES FROM MEDICINE AND BIOLOGY, Vol. 1. (Menlo Park: Addison-Wesley Publishing Co., 1974).
[16] S. R. Wilk, R. E. McNair, and M. S. Feld, AM. J. PHYS. 51, 783 (1983).
Links: Basal Metabolic Rate; more information on the history of Kleiber's law and scaling laws in biology.
To make sure you understand the derivation based on the five assumptions (a-e), select the correct order in which the assumptions were used in the derivation above.
A. c, a, b, e, d
B. c, e, b, a, d
C. a, b, c, d, e
D. a, c, b, e, d
E. None of the above
0 answers
• Anonymous asked
RADIATIVE HEAT LOSS: In the resting condition, the primary mechanism of heat loss is radiation [6]. The Stefan-Boltzmann law states that heat loss depends on the radiating area as P = e σ A (T_1^4 - T_2^4), where σ is the Stefan-Boltzmann constant (about 5.67 x 10^-8 W m^-2 K^-4), T_1 is the absolute temperature of the surface of the body, and T_2 is the absolute temperature of the surroundings. The emissivity, e, of the skin in the infrared region is approximately 1. This means that the skin is an excellent absorber (which means it is also an excellent radiator) that reflects no radiation; in the visible light range an object with e = 1 would appear black, and thus a perfect radiator is referred to as a black body [10]. A man of surface area 1.8 m^2 and body temperature 34 °C therefore loses energy to a 25 °C room at the rate of about 88 kcal/hr.
and body temperature 34 ?C therefore losesenergy to a 25 ?C room at the rate
Skin temperature is lower than internal body core temperature buthigher than normal room temperature. It is therefore possible tomeasure the infrared radiation from a person. Since this
radiationis proportional to the absolute temperature to the fourth power,the amount of infrared radiation is a sensitive indicator ofsurface temperature. The technique of measuring infrared
radiationand thereby mapping temperature is called thermography [11].Thermography gives an indication of blood supply, since one of themain methods of heat transfer in the body is blood flow.
Adepressed skin temperature indicates a deficiency in blood flow toa given region. This could be caused by clotting, stroke, etc. Alocally elevated temperature can indicate the presence of
amalignant (cancerous) tumor. Such tumors grow very rapidly comparedto other tissues and thus require an increased blood supply[11].
[6] Jerry B. Marion, William F. Hornyak, GENERAL PHYSICS WITH BIOSCIENCE ESSAYS, 2nd Ed. (New York: John Wiley & Sons, 1985).
[11] Peter Paul Urone, PHYSICS WITH HEALTH SCIENCE APPLICATIONS (San Francisco: Harper & Row Publishers, 1986).
A man of surface area 1.8 m^2 and body temperature 34 °C was determined to lose energy to a 25 °C room at the rate of 88 kcal/hr. Compare this to the basal metabolic rate of a 65 kg man using Kleiber's Law. What does this suggest about radiative heat loss?
A. It is probably one of several major loss mechanisms while at rest.
B. It provides only a minor contribution to the energy lost while at rest.
C. It is the dominant loss mechanism at rest.
D. This suggests nothing about radiative heat loss.
0 answers
• Anonymous asked
VASODILATION AND VASOCONSTRICTION: During times of thermal stress, heat must be dissipated into the environment or contained within the body. The body's temperature is monitored and controlled by special neurons in the hypothalamus that respond to the temperature of the surrounding blood. This has been demonstrated by experiments which found that when implanted electrodes are used to change the temperature of the hypothalamus, the body's temperature-regulating mechanisms are fully activated, even though the temperature of the rest of the body is unchanged. When the temperature of the hypothalamus is above 37 °C, the heat loss mechanisms, such as vasodilation and sweating, are activated, and when the temperature is below 37 °C, the heat conserving and heat generating mechanisms, such as vasoconstriction and shivering, are activated [12].
The body uses changes in blood circulation and changes in the thickness of the skin to control heat loss under different temperature conditions. The rate of heat conductance to the surrounding environment is influenced by the "countercurrent" system of blood flow [13]. This employs two systems of venous blood flow in the limbs: the deep venae and the peripheral veins. In addition, subcutaneous fat enhances the control of insulation achieved by changes of peripheral blood flow [14,15]. In a cold environment most of the venous return from arms and legs is through the deep venae comitantes that receive heat from blood flowing through the arteries and thereby minimize heat loss. Thus heat conductance to the periphery is low, yet actual blood flow to the limbs may be high, protecting tissues of the limbs from cold injury and hypoxia. In a hot environment most of the venous blood flow returns through the peripheral veins, and because they are close to the surface, heat loss to the environment is increased. By this pathway, external heat loss is maximized with high conductance [13]. When the blood vessels of the skin are dilated, the subcutaneous fat can have little effect, since warm blood is moving through to the surface, where heat exchange with the surroundings can take place [14]. The dermis-epidermis combination changes effective thickness by virtue of involuntary, lateral muscle movements which govern the depth of blood capillaries carrying the heat energy to be thrown away: in the cold these capillaries retract, thus increasing the effective thickness of the insulation [15]. After vasoconstriction of the blood vessels at the dermis-epidermis, this shunt no longer exists, and heat exchange must take place through the fat [14].
[12] Alan H. Cromer, PHYSICS FOR THE LIFE SCIENCES, 2nd ed. (New York: McGraw-Hill, 1977).
[13] Roberto A. Frisancho, HUMAN ADAPTATION (St. Louis: C.V. Mosby Co., 1979).
[14] Anthony H. Rose, THERMOBIOLOGY (New York: Academic Press, 1967).
[15] E. J. Casey, BIOPHYSICS (New York: Reinhold Publishing Co., 1965).
Body Temperature Regulation
The skin and subcutaneous fat provide little insulation in hot weather but very effective insulation in cold weather due to what?
A. Lateral muscle movements
C. Capillary retraction
D. All of the above causes the skin and subcutaneous fat to insulate effectively in cold weather.
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• SwankyHippo4595 asked
2 answers
• Dolores asked
1 answer
• Dolores asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
Hello!
I am trying to do problem 33 in chapter 3 on the WileyPLUS web site. It's the same thing as the one in Cramster but I can't find the result. Can somebody please help me with it?
The results are:
(a) Number 17.822738285684 Units km
(b) Number -50.088219499693 Units ° (degrees)
I am pasting the problem below.
Oasis B is a distance d = 6 km east of oasis A, along the x axis shown in the figure. A confused camel, intending to walk directly from A to B, instead walks a distance W_1 = 21 km west of due south by angle θ_1 = 15.0°. It then walks a distance W_2 = 34 km due north. If it is to then walk directly to B, (a) how far (in km) and (b) in what direction should it walk (relative to the positive direction of the x axis)?
Thank you!
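A sketch of the usual vector approach (our working, reading "west of due south by angle θ_1" as the direction θ_1 west of south; WileyPLUS randomizes the inputs, so the quoted answers may correspond to slightly different numbers): after the two legs the camel's displacement from A is
$$\vec r = \left(-W_1\sin\theta_1,\; W_2 - W_1\cos\theta_1\right),$$
so the remaining walk to B at (d, 0) is
$$\Delta\vec r = \left(d + W_1\sin\theta_1,\; -(W_2 - W_1\cos\theta_1)\right),$$
with distance $|\Delta\vec r|$ and direction $\arctan(\Delta y/\Delta x)$ measured from the +x axis. The numbers shown give roughly 17.9 km at about 50° below the +x axis, consistent with the quoted results.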
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• mahwill asked
3 answers
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
3 answers
• Anonymous asked
4 answers
• Anonymous asked
0 answers
• Anonymous asked
10 answers
• Poochi asked
0 answers
• distitegqboi asked
1 answer
• distitegqboi asked
1 answer
• distitegqboi asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
Two charges are placed on the x axis. One charge (q_1 = +16.5 µC) is at x = +3.0 cm and the other (q_2 = -17 µC) is at x = +9.0 cm. Find the net electric field (magnitude and direction) at the following locations.
(a) x = 0 cm
magnitude ________ N/C
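A sketch of the superposition calculation at x = 0 (our working, taking k ≈ 8.99 × 10^9 N m^2/C^2):
$$E_1=\frac{k q_1}{(0.030\ \mathrm{m})^2}\approx 1.65\times10^{8}\ \mathrm{N/C}\ \text{(in the } -x \text{ direction, away from the positive charge)},$$
$$E_2=\frac{k |q_2|}{(0.090\ \mathrm{m})^2}\approx 1.89\times10^{7}\ \mathrm{N/C}\ \text{(in the } +x \text{ direction, toward the negative charge)},$$
$$E_{\rm net}\approx 1.65\times10^{8}-1.89\times10^{7}\approx 1.5\times10^{8}\ \mathrm{N/C}\ \text{in the } -x \text{ direction}.$$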
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
3 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• EgyCopt asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• pratikp89 asked
2 answers
• pratikp89 asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• soilmole asked
1 answer
• soilmole asked
1 answer
• Anonymous asked
0 answers
• soilmole asked
3 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• frank801 asked
1 answer
• frank801 asked
1 answer
• frank801 asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• DeepTide asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• DeepTide asked
0 answers
• Anonymous asked
0 answers
• George1769 asked
1 answer
• George1769 asked
3 answers
• George1769 asked
Part A
The junction rule describes the conservation of which quantity? Note that this rule applies only to circuits that are in a steady state.
Part B
Apply the junction rule to the junction labeled with the number 1 (at the bottom of the resistor of resistance R_2). Answer in terms of given quantities, together with the meter readings.
Part C
Apply the loop rule to loop 2 (the smaller loop on the right). Sum the voltage changes across each circuit element around this loop going in the direction of the arrow. Remember that the current meter is ideal. Express the voltage drops in terms of the given quantities.
Part D
Now apply the loop rule to loop 1 (the larger loop spanning the entire circuit). Sum the voltage changes across each circuit element around this loop going in the direction of the arrow. Express the voltage drops in terms of the given quantities.
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Novice asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
An isolated conductor of arbitrary shape has a net charge of +9.5 x 10^-6 C. Inside the conductor is a cavity within which is a point charge q = +2.4 x 10^-6 C. In coulombs, what is the charge (a) on the cavity wall and (b) on the outer surface of the conductor?
(a) Number: -0.0000024 Units: C
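(Added note, not part of the archived answer: by Gauss's law the cavity wall carries the induced charge -q = -2.4 x 10^-6 C, matching the value given for (a); since the conductor's net charge is +9.5 x 10^-6 C, the outer surface then carries 9.5 + 2.4 = +11.9 µC for (b).)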
1 answer
• Anonymous asked
An electron is released from rest at a perpendicular distance of 10.8 cm from a line of charge on a very long nonconducting rod. That charge is uniformly distributed, with 8.1 µC per meter. What is the magnitude of the electron's initial acceleration?
Number: ??????? Units: m/s^2
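A sketch of the standard computation, added here (not part of the archived page); it uses E = 2*k*lambda/r for the field of a long uniform line charge:

    k, e, m_e = 8.99e9, 1.602e-19, 9.109e-31   # Coulomb constant, electron charge and mass (SI)
    lam, r = 8.1e-6, 0.108                     # linear charge density (C/m) and distance (m)

    E = 2 * k * lam / r                        # field magnitude of a long line charge
    a = e * E / m_e                            # Newton's second law for the electron
    print(f"E = {E:.3e} N/C, a = {a:.3e} m/s^2")   # about 1.35e6 N/C and 2.4e17 m/s^2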
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Dolores asked
0 answers
• EnchantingElbow925 asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
(a) What is the magnitude, in meters, represented by the area of the shaded box?
____________ m per box
(b) Estimate the displacement of the particle for the two 1-s intervals, one beginning at t = 1.0 s and the other at t = 2.0 s.
displacement in 1 ≤ t ≤ 2: __________ m
displacement in 2 ≤ t ≤ 3: __________ m
(c) Estimate the average velocity for the interval 1.0 s ≤ t ≤ 3.0 s.
(d) The equation of the curve is v = (0.4 m/s ...). Find the displacement of the particle for the interval by integration.
____________ m
Compare this answer with your answer for Part (b).
a. smaller than the sum of the displacements found in Part (b)
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Jalyth asked
0 answers
• EnchantingElbow925 asked
2 answers
• Anonymous asked
3 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Sidewinder asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• limddavid asked
0 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
6 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• SillyChicken9880 asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
4 answers
• Anonymous asked
1 answer
• Anonymous asked
I need some help; I need to know how to calculate the force, magnitude, and direction for this problem.
Particles of charge Q1 = +73 µC, Q2 = +47 µC, and Q3 = -80 µC are placed in a line (Fig. 16-49). The center one is 0.35 m from each of the others.
Figure 16-49
Calculate the net force on each charge due to the other two. (State both magnitude and direction.)
Force on Q1: ____ N, direction: ____
Force on Q2: ____ N, direction: ____
Force on Q3: ____ N, direction: ____
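Added here (not part of the archived thread): a sketch of the pairwise Coulomb sum, assuming the three charges sit left to right in the order listed, since the figure is not reproduced:

    k = 8.99e9
    q = {1: 73e-6, 2: 47e-6, 3: -80e-6}          # charges in coulombs
    x = {1: 0.0, 2: 0.35, 3: 0.70}               # assumed left-to-right positions (m)

    for i in q:
        F = sum(k * q[i] * q[j] * (x[i] - x[j]) / abs(x[i] - x[j]) ** 3
                for j in q if j != i)            # signed 1-D Coulomb sum on charge i
        print(f"Force on Q{i}: {abs(F):.0f} N, toward {'+x' if F > 0 else '-x'}")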
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
Two forces are applied to a tree stump to pull it out of the ground. One force has a magnitude of 2400 newtons and points 33.5° south of east, while the other has a magnitude of 3160 newtons and points due south. Using the component method, find the magnitude and direction of the resultant force that is applied to the stump. Specify the direction with respect to due east.
magnitude: ____ N
direction: ____ ° ____ of east
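A minimal component-method sketch, added here (not part of the archived page):

    from math import cos, sin, radians, hypot, atan2, degrees

    f1, a1 = 2400.0, radians(33.5)       # first force: 2400 N at 33.5 degrees south of east
    f2 = 3160.0                          # second force: due south
    rx = f1 * cos(a1)                    # east component
    ry = -f1 * sin(a1) - f2              # south taken as negative y
    mag = hypot(rx, ry)
    ang = degrees(atan2(-ry, rx))        # angle measured south of east
    print(f"magnitude = {mag:.0f} N, direction = {ang:.1f} deg south of east")  # ~4911 N, ~66.0 deg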
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Tinkertoes01 asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• gammabeta asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• gammabeta asked
1 answer
• Anonymous asked
3 answers
• gammabeta asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
3 answers
• Anonymous asked
0 answers
• Anonymous asked
3 answers
• Loco415 asked
2 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Revtek asked
0 answers
• Anonymous asked
0 answers
• Revtek asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
An electron is placed in a uniform electric field of strength 246 N/C. If the electron is at rest at the origin of a coordinate system at t = 0 and the electric field is in the positive x-direction, what are the x-y coordinates of the electron at t = 2.62 ns?
x-coordinate: ____ mm
y-coordinate: ____ mm
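A sketch of the kinematics, added here (not part of the archived page); the electron accelerates opposite to the field, so its displacement is along -x and y stays zero:

    e, m = 1.602e-19, 9.109e-31          # electron charge magnitude and mass (SI)
    E, t = 246.0, 2.62e-9                # field strength (N/C) and elapsed time (s)

    a = e * E / m                        # acceleration magnitude
    x = -0.5 * a * t ** 2                # field along +x, so the electron moves along -x
    print(f"x = {x * 1e3:.3f} mm, y = 0 mm")   # about -0.149 mm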
1 answer
• Anonymous asked
The figure below shows, in cross section, a central metal ball, two spherical metal shells, and three spherical Gaussian surfaces of radii R, 2R, and 3R, all with the same center. The uniform charges on the three objects are: ball, Q; smaller shell, 3Q; larger shell, 5Q. Rank the Gaussian surfaces according to the magnitude of the electric field at any point on the surface, greatest first. (Use only the symbols > or =, for example R=2R>3R.)
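(Added note, not part of the archived page: by Gauss's law E(r) = k*Q_enc/r^2, so E(R) ∝ Q/R^2, E(2R) ∝ (Q+3Q)/(2R)^2 = Q/R^2, and E(3R) ∝ (Q+3Q+5Q)/(3R)^2 = Q/R^2. The three magnitudes are equal, i.e. R=2R=3R.)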
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
Two tiny objects with equal charges of 7.35 µC are placed at the two lower corners of a square with sides of 0.370 m, as shown. Find the electric field at the upper left corner. (Assume the positive x-direction points to the right.)
magnitude: ____ N/C
direction: ____ ° CCW from the positive x-axis
0 answers
• Anonymous asked
3 answers
• Anonymous asked
0 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
0 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
Question Details:
A fountain shoots a jet of water 10 m into the air. The fountain has a round nozzle of diameter 1.84 cm at ground level, fed by a pump located 6 m below ground through a round pipe of diameter 5.19 cm. What is the gauge pressure of the water at the pump? Assume steady, laminar flow of water, no viscosity, and no air drag on the water jet. The acceleration of gravity is 9.81 m/s^2.
Answer in units of kPa.
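A sketch of the Bernoulli-plus-continuity computation, added here (not part of the archived page):

    from math import sqrt

    g, rho = 9.81, 1000.0                      # gravity (m/s^2), water density (kg/m^3)
    h_jet, h_pump = 10.0, 6.0                  # jet height above nozzle, pump depth (m)
    d_noz, d_pipe = 0.0184, 0.0519             # nozzle and pipe diameters (m)

    v_noz = sqrt(2 * g * h_jet)                # nozzle speed needed for the jet to reach 10 m
    v_pipe = v_noz * (d_noz / d_pipe) ** 2     # continuity of volume flow, A*v constant
    p_gauge = 0.5 * rho * (v_noz**2 - v_pipe**2) + rho * g * h_pump   # Bernoulli, pump to nozzle
    print(f"{p_gauge / 1e3:.0f} kPa")          # about 155 kPa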
1 answer
• ElectricLamp3694 asked
2 answers
• Anonymous asked
1 answer
• OrangeSheep7761 asked
1 answer
• OrangeSheep7761 asked
1 answer
• OrangeSheep7761 asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• SweetZipper7785 asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• tc7033 asked
1 answer
• Anonymous asked
1 answer
• tc7033 asked
4 answers
Get the most out of Chegg Study | {"url":"http://www.chegg.com/homework-help/questions-and-answers/physics-archive-2010-january-26","timestamp":"2014-04-16T09:59:27Z","content_type":null,"content_length":"602679","record_id":"<urn:uuid:84c83e76-8719-4f3b-a298-f68bae6bd841>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Firdale, WA Math Tutor
Find a Firdale, WA Math Tutor
...I've completed 3 quarters of college level chemistry with a 4.0 GPA. I teach by breaking problems down into simple steps and keeping careful track of all quantities as we work. Working as a
technical writer in the software industry, I wrote, edited, illustrated, and published professional documentation.
18 Subjects: including algebra 1, algebra 2, biology, chemistry
I have worked as a classroom teaching assistant and tutor since 2007. In the classroom, I have helped teach introductory physics classes at the University of Washington and Washington University
in St Louis. I also have worked with these students individually on homework problems or test preparation.
17 Subjects: including prealgebra, English, linear algebra, algebra 1
...Simplicity and consistency are key elements of a good PP presentation. I started teaching prealgebra in 1984, my very first teaching job. I had a split 7/8th grade class.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
...I have been a drama teacher for 33 years at a Seattle public high school. I have directed 33 productions over the course of 30 years of teaching. Of the 5 times that my students entered the
English Speaking Union's Monologue and Sonnet Contest, two of my students won first place in 2002 and 2003 and one student won second place in 2004.
30 Subjects: including SAT math, Russian, algebra 1, algebra 2
...I've presented a fuel-cell powered car at a national AIChE conference. I've researched and presented UREX designs. I've worked for Boeing PhantomWorks on fuel tank anti-corrosive agents.
62 Subjects: including precalculus, ACT Math, ACT English, discrete math
Related Firdale, WA Tutors
Firdale, WA Accounting Tutors
Firdale, WA ACT Tutors
Firdale, WA Algebra Tutors
Firdale, WA Algebra 2 Tutors
Firdale, WA Calculus Tutors
Firdale, WA Geometry Tutors
Firdale, WA Math Tutors
Firdale, WA Prealgebra Tutors
Firdale, WA Precalculus Tutors
Firdale, WA SAT Tutors
Firdale, WA SAT Math Tutors
Firdale, WA Science Tutors
Firdale, WA Statistics Tutors
Firdale, WA Trigonometry Tutors
Nearby Cities With Math Tutor
Chase Lake, NY Math Tutors
Clearview, WA Math Tutors
Earlmount, WA Math Tutors
Fort Lawton, WA Math Tutors
Kennard Corner, WA Math Tutors
Larimers Corner, WA Math Tutors
Little Boston, WA Math Tutors
Maxwelton, WA Math Tutors
Picnic Point, WA Math Tutors
Possession, WA Math Tutors
Richmond Beach, WA Math Tutors
Richmond Highlands, WA Math Tutors
Thrashers Corner, WA Math Tutors
Wedgwood, WA Math Tutors
Westwood, WA Math Tutors | {"url":"http://www.purplemath.com/firdale_wa_math_tutors.php","timestamp":"2014-04-18T18:32:47Z","content_type":null,"content_length":"23794","record_id":"<urn:uuid:4553063d-fff5-42c4-a22e-7b8ba4ae9f6d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is a similar lesson with the emphasis on determining slope steepness and rates of change
view a plan
This is a similar lesson with the emphasis on determining slope steepness and rates of change
9, 10, 11, 12
Title – Slopes and Rate-of-Change
By – Mikel Whiting
Primary Subject – Math
Grade Level – 9-12
Duration: 90
Overview: This lesson is about determining the steepness of slopes and the rates of change.
Lesson Plan Objectives:
MA.C.3.4.2 Using a rectangular coordinate system (graph), applies and algebraically verifies properties of two- and three-dimensional figures, including distance, midpoint, slope, parallelism, and
Measurable Objective(s)
In this lesson students will learn to: 1. investigate real-world situations that relate to slopes and rates of change, and 2. determine the steepness of slopes by viewing and recording data.
Teaching and Instructional Strategy
(15 min.) MODELING
(10 min.) DISCUSSION
(20 min.) SEE-SAY-DO
(10 min.) LECTURE/BRAINSTORM
(20 min.) QUESTIONING
(15 min.) TEXTBOOK EXERCISE
Motivational Activity
(15 min.) MODELING: Tell the students that on each table are construction paper and markers. Assign students to teams of four and have them work as a team to make paper airplanes. Have half of the group members fly the planes and the other members time the flights in seconds. Have the students graph their data in the form of sloping lines. Have students turn in their airplanes in fifteen minutes. Students are to find three sloping objects and determine the difference in their "rise" and "run".
(10 min.) DISCUSSION: Ask the students: "Who can explain what they think 'slope' means?" (List their ideas on the board.) Give examples such as roads, walkways, ramps, stairs, or slides. Explain what the words "vertical" and "horizontal" mean. Ask the students: "Who can describe ways in which the steepness of a slope might be measured?" Ask students: "What terms have you used in the past to describe how quickly or how slowly something changes?"
(20 min.) SEE-SAY-DO: Read the sentence: "Slope equals vertical change (rise) divided by horizontal change (run)." Have students read the sentence aloud. Read the "slope equation" in sentence form: "Slope equals y2 minus y1 divided by x2 minus x1 (where x2 minus x1 is not equal to 0)." Have the students read the sentence aloud. Read the sentence: "Rate of change equals change in dependent variable (vertical change) divided by change in independent variable (horizontal change)." Have the students read the sentence aloud. Therefore, "Rate of change equals slope." Ask the students: "Find the slope of the line passing through the pair of points (-3,-1) and (-1,5)." Ask the students how they would place these coordinates in the slope equation. Model the equation on the board: "Slope equals y2 minus y1 divided by x2 minus x1 (where x2 minus x1 is not equal to 0)." Have the class repeat the equation sentence: ((5) - (-1)) / ((-1) - (-3)). Ask if there are any questions. Remind the students that a number cannot be divided by zero. Explain to the students that if y2 - y1 = 0, then the line is horizontal (slope 0), and if x2 - x1 = 0, then the line is vertical and the slope is considered "undefined".
(10 min.) LECTURE/BRAINSTORM: Tell the students that today’s lesson is to review “Slopes” and “Rates of Change”. Have students brainstorm other situations, such as airplane flight landings or
takeoffs, in which it is more practical to explain slopes and rates of change, and to determine the steepness of slopes and rates of change by viewing and recording data. Remind students that the dependent variable goes in the numerator and the independent variable goes in the denominator. Point out that by convention we read slopes from left to right. (When a line rises from left to right, its slope is positive; when it falls from left to right, its slope is negative.)
LEP, IEP, ESE, ESOL students will be allowed more time to answer questions during discussion and will be given extra time to complete the five individual seatwork problems. These students will also
be allowed to complete seatwork with peer tutor.
(15 min) TEXTBOOK EXERCISE: Students will correctly solve five slope and rate of change problems provided by teacher.
E-Mail Mikel Whiting ! | {"url":"http://lessonplanspage.com/mathslopesandrateofchange912-htm/","timestamp":"2014-04-19T01:49:11Z","content_type":null,"content_length":"46531","record_id":"<urn:uuid:d4597f3b-0998-4701-82cb-912e5b3e994b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contents Next: Verification Up: A mathematical experiment Previous: A mathematical experiment
We now turn to a more concrete example of a mathematical experiment. Our meta-goal in devising this experiment was to investigate the similarities and differences between experiments in mathematics
and in the natural sciences, particularly in physics. We therefore resolved to examine a conjecture which could be approached by collecting and investigating a huge amount of data: the conjecture
that every non-rational algebraic number is normal in every base (see box). It is important to understand that we did not aim to prove or disprove this conjecture; our aim was to find evidence
pointing in one or the other direction. We were hoping to gain insight into the nature of the problem from an experimental perspective.
The actual experiment consisted of computing to 10,000 decimal digits the square roots and cube roots of the positive integers smaller than 1000 and then subjecting these data to certain statistical
tests (again, see box). Under the hypothesis that the digits of these numbers are uniformly distributed (a much weaker hypothesis than normality of these numbers), we expected the probability values
of the statistics to be distributed uniformly between 0 and 1. Our first run showed fairly conclusively that the digits were distributed uniformly. In fact, the Anderson-Darling test, which we used
to measure how uniformly distributed our probabilities were, suggested that the probabilities might have been `too uniform' to be random. We therefore ran the same tests again, only this time for the
first 20,000 decimal digits, hoping to detect some non-randomness in the data. The data were not as interesting on the second run.
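The digit-uniformity part of this pipeline is easy to reproduce; here is a minimal sketch in Python (an illustration under simplifying assumptions only: a single chi-square statistic per number, whereas the experiment used several statistics and then examined the resulting probability values with the Anderson-Darling test):

    from math import isqrt
    from collections import Counter

    def sqrt_digits(n, ndigits):
        """First ndigits decimal digits of sqrt(n), via an exact integer square root."""
        return str(isqrt(n * 10 ** (2 * ndigits)))[:ndigits]

    def chi_square_uniform(digits):
        """Chi-square statistic for uniformity of the ten digit frequencies (9 d.o.f.)."""
        counts = Counter(digits)
        expected = len(digits) / 10
        return sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))

    for n in (2, 3, 5, 7):  # non-squares only; rational roots fall outside the conjecture
        stat = chi_square_uniform(sqrt_digits(n, 10_000))
        print(f"sqrt({n}): chi2 = {stat:.2f}  (5% critical value is about 16.92 for 9 d.o.f.)")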
Contents Next: Verification Up: A mathematical experiment Previous: A mathematical experiment | {"url":"http://oldweb.cecm.sfu.ca/organics/vault/expmath/expmath/html/node13.html","timestamp":"2014-04-17T18:25:37Z","content_type":null,"content_length":"4869","record_id":"<urn:uuid:28780c95-f60c-4722-8906-b5304f092ea7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Match the Matches
Copyright © University of Cambridge. All rights reserved.
'Match the Matches' printed from http://nrich.maths.org/
Why do this problem?
This problem could be used at the start of a series of lessons on data handling, or as an assessment opportunity at the end of the unit. It will get children talking meaningfully about mathematics, presenting and justifying arguments.
Possible approach
As an introduction to this task, you may choose to ask general questions about the different forms of data. This might be most helpful in the case of the pie chart if the class is not so familiar
with this method of representation. For example, you could ask questions such as:
• Looking at the pie chart, in approximately what fraction of the total number of games did the team score one goal?
• What does the tally chart show us?
This activity would be ideal to tackle in pairs or threes. You could print off
this Word document or this pdf containing the six different forms of data which could be cut up to create six cards. In this way, children would be encouraged to talk to each other as they interpret
the data and the richness of their discussion will allow you to assess their understanding.
In the plenary, you can focus on how pupils knew which forms of data go together.
Key questions
What is the total number of goals each team scored over the fifteen matches?
Have you tried comparing two of the charts with each other?
Do you think they represent the same team's goals? Why or why not?
Possible extension
You could challenge children to make their own version of this problem in pairs.
Possible support
It might be helpful for children to be encouraged to make jottings on the cards as they work on this task. | {"url":"http://nrich.maths.org/4937/note?nomenu=1","timestamp":"2014-04-21T14:44:49Z","content_type":null,"content_length":"6151","record_id":"<urn:uuid:0f1f4d1c-3db2-43bb-ae24-8584254ed3bd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
int8 (MathScript RT Module Function)
LabVIEW 2011 MathScript RT Module Help Edition Date:
June 2011
Part Number:
Owning Class: support
Requires: MathScript RT Module
c = int8(a)
Converts the input elements to 8-bit signed integers.
Name Description
a Specifies a numeric scalar, vector, or matrix.
Name Description
Returns elements of a as 8-bit signed integers. c is a scalar, vector, or matrix of the same size as a. If a is a complex number, this function converts the real and imaginary parts of the
c number separately and then returns a complex number that consists of those parts. To return the 8-bit signed representation of a when the input is complex, use the real function to extract the
real part of a, then use the int8 function to convert the real part.
If an element in a is greater than 2^7-1, the corresponding element in c is 2^7-1. If an element in a is less than -2^7, the corresponding element in c is -2^7.
The following table lists the support characteristics of this function.
Supported in the LabVIEW Run-Time Engine Yes
Supported on RT targets Yes
Suitable for bounded execution times on RT Yes
A = [34.333, 123456.4, -38]
C = int8(A)
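Working this example through the rules stated above (the expected result is inferred here, not quoted from the original documentation): 34.333 is in range and converts to 34; 123456.4 exceeds 2^7-1 and saturates to 127; and -38 converts directly. So C should come out as [34 127 -38].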
Related Topics | {"url":"http://zone.ni.com/reference/en-XX/help/373123B-01/lvtextmath/msfunc_int8/","timestamp":"2014-04-16T16:51:57Z","content_type":null,"content_length":"13378","record_id":"<urn:uuid:f9c133f8-7604-460e-98da-0fb6628baebb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Raising from the dead, black cleaver vs last whisper (With math! YAY!)
TL;DR version: if your opponent has over 143.5 armor and your allies do not have armor reduction debuffs, get last whisper over the black cleaver; otherwise black cleaver is a better armor reduction item. if your allies do have armor reduction debuffs, you'll need to read further for how to calculate the break-even point.
ok I just noticed quite a few people out there putting last whisper as a "core" item in their build, but I know for a fact that last whisper is a conditional item. why is it conditional? because it can provide a lot less armor reduction than black cleaver does.
now for those of you who still dont know: flat is applied before percentage, reduction is applied before penetration, and penetration can not bring armor values below 0
TA = Target's armor, FAR = flat armor reduction, PAR = Percentage armor reduction, FAP = flat armor pen, PAP = percent armor pen
all the values of each subcategory are added together, so if you had last whisper (40% armor pen) and weapon expertise (10% armor pen) you would have a total of 50% armor pen
(((TA-FAR)*(1-PAR))-FAP)*(1-PAP) = final armor, though like I said earlier, penetration can not bring the value below 0: if the FAR and PAR bring it down to 0 or lower, the rest of the calculation is ignored, and if FAP and PAP bring the value below 0 then the value remains 0
now that we know how to calculate this value, we can use it to determine what armor value the enemy must have in order for last whisper to reduce more armor than black cleaver. we know that black cleaver reduces the target's armor by 45 at max stacks, so at what armor value does 45 = 40%? it is 112.5, meaning that ((TA-FAR)*(1-PAR))-FAP = 112.5, not TA = 112.5
most people who buy last whisper will have armor pen runes and the sunder mastery, which add about 31 armor pen (not including yellow and blue armor pen runes); since that flat pen is applied before last whisper's percentage, it raises the break-even, which means that (TA-FAR)*(1-PAR) must equal 143.5 armor.
but wait there are still many other champions out there that also have armor reduction abilities as well, that apply a debuff thus must be counted in, and theres way too many to calculate, but you
now have the base of how to calculate how much armor your opponent must have before last whisper becomes a better choice in terms of armor reduction.
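here's a quick sanity check of those numbers in Python (a sketch using only the rules quoted in this post: 45 flat reduction for a stacked black cleaver, 40% pen for last whisper, and the ~31 flat pen from runes/mastery assumed above):

    def effective_armor(ta, far=0.0, par=0.0, fap=0.0, pap=0.0):
        armor = (ta - far) * (1 - par)               # reduction first; may go negative
        if armor <= 0:
            return armor                             # penetration is skipped at or below zero
        return max((armor - fap) * (1 - pap), 0.0)   # penetration cannot push armor below zero

    # at the quoted break-even of 143.5 armor, both items land on the same value:
    print(effective_armor(143.5, far=45, fap=31))      # black cleaver (45 flat reduction): 67.5
    print(effective_armor(143.5, fap=31, pap=0.40))    # last whisper (40% pen): 67.5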
or you can just pick up a black cleaver and skip the math since black cleaver is a constant value. or if you have no allies who apply an armor reduction debuff on your opponents, then you know that
your opponent must have at least 143.5 armor before last whisper becomes a viable option over black cleaver. but then again black cleaver is a debuff and benefits everyone as well and that cant be
ignored, dont forget this is a team game, and thus you fight in a team. | {"url":"http://forums.na.leagueoflegends.com/board/showthread.php?t=2143859","timestamp":"2014-04-24T12:36:03Z","content_type":null,"content_length":"42031","record_id":"<urn:uuid:35438796-2a31-4cb7-bca3-76895287d87b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math::Cephes::Polynomial - Perl interface to the cephes math polynomial routines
use Math::Cephes::Polynomial qw(poly);
# 'poly' is a shortcut for Math::Cephes::Polynomial->new
require Math::Cephes::Fraction; # if coefficients are fractions
require Math::Cephes::Complex; # if coefficients are complex
my $a = poly([1, 2, 3]); # a(x) = 1 + 2x + 3x^2
my $b = poly([4, 5, 6, 7]); # b(x) = 4 + 5x + 6x^2 + 7x^3
my $c = $a->add($b); # c(x) = 5 + 7x + 9x^2 + 7x^3
my $cc = $c->coef;
for (my $i=0; $i<4; $i++) {
print "term $i: $cc->[$i]\n";
my $x = 2;
my $r = $c->eval($x);
print "At x=$x, c(x) is $r\n";
my $u1 = Math::Cephes::Complex->new(2,1);
my $u2 = Math::Cephes::Complex->new(1,-3);
my $v1 = Math::Cephes::Complex->new(1,3);
my $v2 = Math::Cephes::Complex->new(2,4);
my $z1 = Math::Cephes::Polynomial->new([$u1, $u2]);
my $z2 = Math::Cephes::Polynomial->new([$v1, $v2]);
my $z3 = $z1->add($z2);
my $z3c = $z3->coef;
for (my $i=0; $i<2; $i++) {
print "term $i: real=$z3c->{r}->[$i], imag=$z3c->{i}->[$i]\n";
$r = $z3->eval($x);
print "At x=$x, z3(x) has real=", $r->r, " and imag=", $r->i, "\n";
my $a1 = Math::Cephes::Fraction->new(1,2);
my $a2 = Math::Cephes::Fraction->new(2,1);
my $b1 = Math::Cephes::Fraction->new(1,2);
my $b2 = Math::Cephes::Fraction->new(2,2);
my $f1 = Math::Cephes::Polynomial->new([$a1, $a2]);
my $f2 = Math::Cephes::Polynomial->new([$b1, $b2]);
my $f3 = $f1->add($f2);
my $f3c = $f3->coef;
for (my $i=0; $i<2; $i++) {
print "term $i: num=$f3c->{n}->[$i], den=$f3c->{d}->[$i]\n";
$r = $f3->eval($x);
print "At x=$x, f3(x) has num=", $r->n, " and den=", $r->d, "\n";
$r = $f3->eval($a1);
print "At x=", $a1->n, "/", $a1->d,
", f3(x) has num=", $r->n, " and den=", $r->d, "\n";
This module is a layer on top of the basic routines in the cephes math library to handle polynomials. In the following, a Math::Cephes::Polynomial object is created as
my $p = Math::Cephes::Polynomial->new($arr_ref);
where $arr_ref is a reference to an array which can consist of one of
• floating point numbers, for polynomials with floating point coefficients,
• Math::Cephes::Fraction or Math::Fraction objects, for polynomials with fractional coefficients,
• Math::Cephes::Complex or Math::Complex objects, for polynomials with complex coefficients,
The maximum degree of the polynomials handled is set by default to 256 - this can be changed by setting $Math::Cephes::Polynomial::MAXPOL.
A copy of a Math::Cephes::Polynomial object may be done as
my $p_copy = $p->new();
and a string representation of the polynomial may be gotten through
print $p->as_string;
The following methods are available.
my $c = $p->coef;
This returns an array reference containing the coefficients of the polynomial.
This sets the coefficients of the polynomial identically to 0, up to $p->[$n]. If $n is omitted, all elements are set to 0.
$c = $a->add($b);
This sets $c equal to $a + $b.
$c = $a->sub($b);
This sets $c equal to $a - $b.
$c = $a->mul($b);
This sets $c equal to $a * $b.
$c = $a->div($b);
This sets $c equal to $a / $b, expanded by a Taylor series. Accuracy is approximately equal to the degree of the polynomial, with an internal limit of about 16.
$c = $a->sbt($b);
If a(x) and b(x) are polynomials, then
c(x) = a(b(x))
is a polynomial found by substituting b(x) for x in a(x). This method is not available for polynomials with complex coefficients.
$s = $a->eval($x);
This evaluates the polynomial at the value $x. The returned value is of the same type as that used to represent the coefficients of the polynomial.
$b = $a->sqt();
This finds the square root of a polynomial, evaluated by a Taylor expansion. Accuracy is approximately equal to the degree of the polynomial, with an internal limit of about 16. This method is
not available for polynomials with complex coefficients.
$b = $a->sin();
This finds the sine of a polynomial, evaluated by a Taylor expansion. Accuracy is approximately equal to the degree of the polynomial, with an internal limit of about 16. This method is not
available for polynomials with complex coefficients.
$b = $a->cos();
This finds the cosine of a polynomial, evaluated by a Taylor expansion. Accuracy is approximately equal to the degree of the polynomial, with an internal limit of about 16. This method is not
available for polynomials with complex coefficients.
$c = $a->atn($b);
This finds the arctangent of the ratio $a / $b of two polynomial, evaluated by a Taylor expansion. Accuracy is approximately equal to the degree of the polynomial, with an internal limit of about
16. This method is not available for polynomials with complex coefficients.
my $w = Math::Cephes::Polynomial->new([-2, 0, -1, 0, 1]);
my ($flag, $r) = $w->rts();
for (my $i=0; $i<4; $i++) {
print "Root $i has real=", $r->[$i]->r, " and imag=", $r->[$i]->i, "\n";
This finds the roots of a polynomial. $flag, if non-zero, indicates a failure of some kind. $roots in an array reference of Math::Cephes::Complex objects holding the real and complex values of
the roots found. This method is not available for polynomials with complex coefficients.
Termination depends on evaluation of the polynomial at the trial values of the roots. The values of multiple roots or of roots that are nearly equal may have poor relative accuracy after the
first root in the neighborhood has been found.
Please report any bugs to Randy Kobes <randy@theoryx5.uwinnipeg.ca>
The C code for the Cephes Math Library is Copyright 1984, 1987, 1989, 2002 by Stephen L. Moshier, and is available at http://www.netlib.org/cephes/. Direct inquiries to 30 Frost Street, Cambridge, MA
The perl interface is copyright 2000, 2002 by Randy Kobes. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | {"url":"http://search.cpan.org/~adoptme/Math-Cephes-0.47/lib/Math/Cephes/Polynomial.pm","timestamp":"2014-04-20T04:10:17Z","content_type":null,"content_length":"21208","record_id":"<urn:uuid:2a7cb718-b45b-4ed1-b026-581a1baf4ff5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nanophotonic Sensor Based on Photonic Crystal Structure Using Negative Refraction for Effective Light Coupling
In this paper, using the 2-D finite-difference time-domain (FDTD) method, we study a novel biosensor based on collimation effects in photonic crystals (PCs) with negative refractive index. Coupling
the collimated beam to a line of air holes (sensing region) filled with normal air, dry air, liquid, and gas is thoroughly investigated. It is shown that by an appropriate selection of design
parameters, such as the air cylinder radii and coupling distance, it is possible to achieve ultracompact sensing platforms. The collimation effect features channel allocation in nanosystems and high
sensitivity for biomolecules sensing applications.
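The 2-D FDTD scheme named above is not reproduced in this abstract; for orientation, here is a generic, minimal 1-D Yee-grid FDTD update in Python (an illustration only, with an arbitrary high-index region standing in for the photonic-crystal slab; the paper's actual simulations are 2-D with PML absorbing boundaries):

    import numpy as np

    # Minimal 1-D Yee/FDTD leapfrog update, normalized units, Courant number S = 0.5.
    nx, nt, S = 400, 1000, 0.5
    ez = np.zeros(nx)          # E field at integer grid points
    hy = np.zeros(nx - 1)      # H field at staggered half-integer points
    eps = np.ones(nx)          # relative permittivity (air background)
    eps[200:240] = 11.9        # arbitrary high-index region (stand-in for the PC slab)

    for n in range(nt):
        hy += S * np.diff(ez)                      # update H from the curl of E
        ez[1:-1] += S * np.diff(hy) / eps[1:-1]    # update E from the curl of H
        ez[20] += np.exp(-((n - 40) / 12.0) ** 2)  # soft Gaussian source injection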
© 2009 IEEE
F. Ouerghi, F. AbdelMalek, S. Haxha, R. Abid, H. Mejatty, and I. Dayoub, "Nanophotonic Sensor Based on Photonic Crystal Structure Using Negative Refraction for Effective Light Coupling," J. Lightwave
Technol. 27, 3269-3274 (2009)
1. E. Yablonovitch, "Inhibited spontaneous emission in solid-state physics and electronics," Phys. Rev. Lett. 58, 2059-2062 (1987).
2. S. John, "Strong localization of photons in certain disordered dielectric superlattices," Phys. Rev. Lett. 58, 2486-2489 (1987).
3. M. Skorobogatiy, A. V. Kabashin, "Photon crystal waveguide-based surface plasmon resonance biosensor," Appl. Phys. Lett. 89, 143518-14351 (2006).
4. S. Boutami, B. B. Bakir, J.-L. Leclercq, X. Letartre, C. Seassal, P. Rojo-Romeo, P. Regreny, M. Garrigues, P. Viktorovitch, "Photonic crystal-based MOEMS devices," IEEE J. Sel. Topics Quantum Electron. 13, 244-252 (2007).
5. W. C. L. Hopman, P. Pottier, D. Yudistira, J. van Lith, P. V. Lambeck, R. M. De La Rue, A. Driessen, H. J. W. M. Hoekstra, R. M. de Ridder, "Quasi-one-dimensional photonic crystal as a compact
building-block for refractometric optical sensors," IEEE J. Sel. Topics Quantum Electron. 11, 11-16 (2005).
6. S. Haxha, W. Belhadj, F. AbdelMalek, H. Bouchriha, "Analyses of wavelength demultiplexer based on photonic crystals," IEE Proc., Optoelectron. 152, 193-198 (2005).
7. W. Aroua, F. Ouerghi, S. Haxha, F. AbdelMalek, M. Mejatty, H. Bouchriha, V. Haxha, "Analysis and optimisation of high density photonic crystal devices in a subystem by use of finite difference
time domain," IET Proc., Optoelectron. 2, 10-15 (2008).
8. R. D. Meade, A. Devenyi, J. D. Joannopoulos, O. L. Alerhand, D. A. Smith, K. Kash, "Novel applications of photonic band gap materials: Low-loss bends and high Q cavities," J. Appl. Phys. 75,
4753-4755 (1994).
9. W. Bogaerts, R. Baets, P. Dumon, V. Wiaux, S. Beckx, D. Taillaert, B. Luyssaert, J. V. Campenhout, P. Bienstman, D. V. Thourhout, "Nanophotonic waveguides in silicon-on-insulator fabricated with
CMOS technology," J. Lightw. Technol. 23, 401-412 (2005).
10. H. Nakamura, Y. Sugimoto, K. Kanamoto, N. Ikeda, Y. Tanaka, Y. Nakamura, S. Okouchi, Y. Watanabe, K. Inoue, H. Ishikawa, K. Asakawa, "Ultra-fast photonic crystal/quantum dot all-optical switch
for future photonic networks," Opt. Exp. 12, 6606-6614 (2004).
11. A. Mekis, J. C. Chen, I. Kurland, S. Fan, P. R. Villeneuve, J. D. Joannopoulos, "High transmission through sharp bends in photonic crystal waveguides," Phys. Rev. Lett. 77, 3787-3790 (1996).
12. M. Notomi, "Theory of light propagation in strongly modulated photonic crystals: refraction-like behaviour in the vicinity of the photonic band gap," Phys. Rev. B. 62, 10696-10705 (2000).
13. M. Qiu, L. Thylen, M. Swillo, B. Jaskorzynska, "Wave propagation through a photonic crystal in a negative phase refractive-index region," IEEE J. Sel. Topics Quantum Electron. 9, 106-110 (2000).
14. E. Cubukcu, K. Aydin, E. Ozbay, S. Foteinopoulou, C. M. Soukoulis, "Electromagnetic waves: Negative refraction by photonic crystals," Nature 423, 604 (2003).
15. X. Wang, Z. F. Ren, K. Kempa, "Unrestricted superlensing in a triangular two dimensional photonic crystal," Opt. Exp. 12, 2919-2924 (2004).
16. V. G. Veselago, "The electrodynamic of substances with simultaneously negative values of $\varepsilon$ and $\mu$," Sov. Phys. USPEKI 10, 509-514 (1968).
17. J. B. Pendry, "Negative refraction makes a perfect lens," Phys. Rev. Lett. 85, 3966-3969 (2000).
18. M. Notomi, "Theory of light propagation in strongly modulated photonic crystals: Refraction like behavior in the vicinity of the photonic band gap," Phys. Rev. B. 62, 10696-10705 (2000).
19. S. Foteinopoulou, E. N. Economou, C. M. Soukoulis, "Refraction in media with a negative refractive index," Phys. Rev. Lett. 90, 107402-1-107402-4 (2003).
20. X. Ao, S. He, "Three-dimensional photonic crystal of negative refraction achieved by interference lithography," Opt. Lett. 29, 2542-2544 (2004).
21. R. Moussa, S. Foteinopoulou, L. Zhang, G. Tuttle, K. Guven, E. Ozbay, C. M. Soukoulis, "Negative refraction and superlens behavior in a two-dimensional photonic crystal," Phys. Rev. B. 71,
085106-1-085106-5 (2005).
22. F. AbdelMalek, W. Belhadj, S. Haxha, H. Bouchriha, "Realization of high coupling efficiency by employing a concave lens based on 2-D photonic crystals with negative refractive index," J. Lightw.
Technol. 25, 3168-3174 (2007).
23. C. Y. Luo, S. G. Johnson, J. D. Joannopoulos, J. B. Pendry, "All-angle negative refraction without negative effective index," Phys. Rev. B. 65, 201104-201107 (2002).
24. C. Y. Luo, S. G. Johnson, J. D. Joannopoulos, J. B. Pendry, "Subwavelength imaging in photonic crystals," Phys. Rev. B. 68, 045115-045129 (2003).
25. W. Belhadj, D. Gamra, F. AbdelMalek, S. Haxha, H. Bouchriha, "Design of photonic crystal structure based in all-angle negative refractive effect for application in focusing systems," IET Proc.,
Optoelectron. 1, 91-95 (2007).
26. P. A. Belov, C. R. Simovski, "Canalization of subwavelength images by electromagnetic crystals," Phys. Rev. B. 71, 193105-193108 (2005).
27. J. Garcia-Pomar, M. Nieto-Vesperinas, "Waveguiding, collimation and subwavelength concentration in photonic crystals," Opt. Exp. 13, 7997-8007 (2005).
28. H. Zhang, H. Zhu, L. Qian, D. Fan, "Collimations and negative refraction by slabs of 2-D photonic crystals with periodically-aligned tube-type air holes," Opt. Exp. 15, 3519-3530 (2007).
29. S. K. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antennas Propag. 14, 302-307 (1966).
30. K. M. Leung, Y. F. Liu, "Full vector wave calculation of photonic band structures in face-centered-cubic dielectric media," Phys. Rev. Lett. 65, 2646-2649 (1990).
31. H. M. Masmoudi, M. A. Al-Sunaidi, J. M. Arnold, "Efficient time-domain beam-propagation method for modeling integrated optical devices," IEEE J. Lightw. Technol. 19, 759-771 (2001).
32. J. P. A. Berenger, "Perfectly matched layer for the absorption of electromagnetic waves," J. Comput. Phys. 114, 185-200 (1994).
33. N. J. Florous, K. Saitoh, S. K. Varsheney, M. Koshiba, "Fluidic sensors based on photonic crystal fiber gratings: Impact of the ambient temperature," IEEE Photon. Technol. Lett. 18, 2206-2208
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/jlt/abstract.cfm?uri=jlt-27-15-3269","timestamp":"2014-04-16T11:03:32Z","content_type":null,"content_length":"100262","record_id":"<urn:uuid:3a4ee634-96bd-43e6-8ce9-b155f21b35cc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
injective module
Homological algebra
For $R$ a ring, let $R$Mod be the category of $R$-modules.
An injective module over $R$ is an injective object in $R Mod$.
This is the dual notion of a projective module.
Equivalent characterizations
Let $R$ be a commutative ring and $C = R Mod$ the category of $R$-modules. We discuss injective modules over $R$ (see there for more).
If the axiom of choice holds, then a module $Q \in R Mod$ is an injective module precisely if for $I$ any left $R$-ideal regarded as an $R$-module, any homomorphism $g : I \to Q$ in $C$ can be
extended to all of $R$ along the inclusion $I \hookrightarrow R$.
Sketch of proof
Let $i \colon M \hookrightarrow N$ be a monomorphism in $R Mod$, and let $f \colon M \to Q$ be a map. We must extend $f$ to a map $h \colon N \to Q$. Consider the poset whose elements are pairs $(M',
f')$ where $M'$ is an intermediate submodule between $M$ and $N$ and $f' \colon M' \to Q$ is an extension of $f$, ordered by $(M', f') \leq (M'', f'')$ if $M''$ contains $M'$ and $f''$ extends $f'$.
By an application of Zorn's lemma, this poset has a maximal element, say $(M', f')$. Suppose $M'$ is not all of $N$, and let $x \in N$ be an element not in $M'$; we show that $f'$ extends to a map
$M'' = \langle x \rangle + M' \to Q$, a contradiction.
The set $\{r \in R: r x \in M'\}$ is an ideal $I$ of $R$, and we have a module homomorphism $g \colon I \to Q$ defined by $g(r) = f'(r x)$. By hypothesis, we may extend $g$ to a module map $k \colon
R \to Q$. Writing a general element of $M''$ as $r x + y$ where $y \in M'$, it may be shown that
$f''(r x + y) = k(r) + f'(y)$
is well-defined and extends $f'$, as desired.
Assume that the axiom of choice holds.
Let $R$ be a Noetherian ring, and let $\{Q_j\}_{j \in J}$ be a collection of injective modules over $R$. Then the direct sum $Q = \bigoplus_{j \in J} Q_j$ is also injective.
By Baer’s criterion, it suffices to show that for any ideal $I$ of $R$, a module homomorphism $f \colon I \to Q$ extends to a map $R \to Q$. Since $R$ is Noetherian, $I$ is finitely generated as an
$R$-module, say by elements $x_1, \ldots, x_n$. Let $p_j \colon Q \to Q_j$ be the projection, and put $f_j = p_j \circ f$. Then for each $x_i$, $f_j(x_i)$ is nonzero for only finitely many summands.
Taking all of these summands together over all $i$, we see that $f$ factors through
$\prod_{j \in J'} Q_j = \bigoplus_{j \in J'} Q_j \hookrightarrow Q$
for some finite $J' \subset J$. But a product of injectives is injective, hence $f$ extends to a map $R \to \prod_{j \in J'} Q_j$, which completes the proof.
This is due to Bass and Papp. See (Lam, Theorem 3.46).
Existence of enough injectives
We now discuss the fact that, at least in the presence of the axiom of choice, the category $R$Mod has enough injectives, in that every module is a submodule of an injective one. We first consider this for $R = \mathbb{Z}$. We do assume prop. 4, which may be proven using Baer's criterion.
mathbb{Z}$. We do assume prop. 4, which may be proven using Baer's criterion.
By prop. 4 an abelian group is an injective $\mathbb{Z}$-module precisely if it is a divisible group. So we need to show that every abelian group is a subgroup of a divisible group.
To start with, notice that the group $\mathbb{Q}$ of rational numbers is divisible and hence the canonical embedding $\mathbb{Z} \hookrightarrow \mathbb{Q}$ shows that the additive group of integers
embeds into an injective $\mathbb{Z}$-module.
Now by the discussion at projective module every abelian group $A$ receives an epimorphism $(\oplus_{s \in S} \mathbb{Z}) \to A$ from a free abelian group, hence is the quotient group of a direct sum
of copies of $\mathbb{Z}$. Accordingly it embeds into a quotient $\tilde A$ of a direct sum of copies of $\mathbb{Q}$.
$\array{ ker &\stackrel{=}{\to}& ker \\ \downarrow && \downarrow \\ (\oplus_{s \in S} \mathbb{Z}) &\hookrightarrow& (\oplus_{s \in S} \mathbb{Q}) \\ \downarrow && \downarrow \\ A &\hookrightarrow& \
tilde A }$
Here $\tilde A$ is divisible because the direct sum of divisible groups is again divisible, and also the quotient group of a divisible group is again divisible. So this exhibits an embedding of any
$A$ into a divisible abelian group, hence into an injective $\mathbb{Z}$-module.
The proof uses the following lemma.
Write $U\colon R Mod \to Ab$ for the forgetful functor that forgets the $R$-module structure on a module $N$ and just remembers the underlying abelian group $U(N)$.
The functor $U\colon R Mod \to Ab$ has a right adjoint
$R_* : Ab \to R Mod$
given by sending an abelian group $A$ to the abelian group
$U(R_*(A)) \coloneqq Ab(U(R),A)$
equipped with the $R$-module struture by which for $r \in R$ an element $(U(R) \stackrel{f}{\to} A) \in U(R_*(A))$ is sent to the element $r f$ given by
$r f : r' \mapsto f(r' \cdot r) \,.$
This is called the coextension of scalars along the ring homomorphism $\mathbb{Z} \to R$.
The unit of the $(U \dashv R_*)$-adjunction
$\epsilon_N : N \to R_*(U(N))$
is the $R$-module homomorphism
$\epsilon_N : N \to Hom_{Ab}(U(R), U(N))$
given on $n \in N$ by
$j(n) : r \mapsto r n \,.$
Proof of prop. 3
Let $N \in R Mod$. We need to find a monomorphism $N \to \tilde N$ such that $\tilde N$ is an injective $R$-module.
By prop. 2 there exists a monomorphism
$i \colon U(N) \hookrightarrow D$
of the underlying abelian group into an injective abelian group $D$.
Now consider the adjunct $N \to R_*(D)$ of $i$, hence the composite
$N \stackrel{\eta_N}{\to} R_*(U(N)) \stackrel{R_*(i)}{\to} R_*(D)$
with $R_*$ and $\eta_N$ from lemma 1. On the underlying abelian groups this is
$U(N) \stackrel{U(\eta_N)}{\to} Hom_{Ab}(U(R), U(N)) \stackrel{Hom_{Ab}(U(R),i)}{\to} Hom_{Ab}(U(R),U(D)) \,.$
One checks on components that this is a monomorphism. Therefore it is now sufficient to see that $Hom_{Ab}(U(R), U(D))$ is an injective $R$-module.
This follows from the existence of the adjunction isomorphism given by lemma 1
$Hom_{Ab}(U(K),U(D)) \simeq Hom_{R Mod}(K, Hom_{Ab}(U(R), U(D)))$
natural in $K \in R Mod$ and from the injectivity of $D \in Ab$.
$\array{ U(K) &\to& D \\ \downarrow & \nearrow \\ U(L) } \;\;\;\;\; \leftrightarrow \;\;\;\;\; \array{ K &\to& R_*D \\ \downarrow & \nearrow \\ L } \,.$
Injective $\mathbb{Z}$-modules / abelian groups
Let $C = \mathbb{Z} Mod \simeq$Ab be the abelian category of abelian groups.
An abelian group $A$ is injective as a $\mathbb{Z}$-module precisely if it is a divisible group, in that for all positive integers $n$ we have $n A = A$.
Using Baer’s criterion, prop. 1.
By prop. 4 the following abelian groups are injective in Ab.
The group of rational numbers $\mathbb{Q}$ is injective in Ab, as is the additive group of real numbers $\mathbb{R}$ and generally that underlying any field. The additive group underlying any vector
space is injective. The quotient of any injective group by any other group is injective.
Not injective in Ab is for instance the cyclic group $\mathbb{Z}/n\mathbb{Z}$ for $n \gt 1$.
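For concreteness (a spelled-out check added here, not part of the original entry), take $n = 2$: consider the inclusion $\mathbb{Z}/2 \simeq 2\mathbb{Z}/4 \hookrightarrow \mathbb{Z}/4$ and the identity map $\mathbb{Z}/2 \to \mathbb{Z}/2$. Any homomorphism $h \colon \mathbb{Z}/4 \to \mathbb{Z}/2$ satisfies $h(2) = 2 h(1) = 0$, so $h$ vanishes on that copy of $\mathbb{Z}/2$ and cannot extend the identity; hence $\mathbb{Z}/2$ is not injective. Equivalently, $2 \cdot \mathbb{Z}/2 = 0 \neq \mathbb{Z}/2$, so divisibility fails.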
The notion of injective modules was introduced in
(The dual notion of projective modules was considered explicitly only much later.)
A general discussion can be found in
The general notion of injective objects is in section 9.5, the case of injective complexes in section 14.1.
Baer’s criterion is discussed in many texts, for example
• N. Jacobsen, Basic Algebra II, W.H. Freeman and Company, 1980.
See also
• T.-Y. Lam, Lectures on modules and rings, Graduate Texts in Mathematics 189, Springer Verlag (1999).
Section 4.2 of
For abelian sheaves over the etale site: | {"url":"http://www.ncatlab.org/nlab/show/injective+module","timestamp":"2014-04-20T08:51:04Z","content_type":null,"content_length":"79243","record_id":"<urn:uuid:c6f2ff52-148d-460a-9e8d-1c16145e8e27>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
acceleration, rate at which velocity changes with time, in terms of both speed and direction. A point or an object moving in a straight line is accelerated if it speeds up or slows down. Motion on a circle is accelerated even if the speed is constant, because the direction is continually changing. For all other kinds of motion, both effects contribute to the acceleration.
Because acceleration has both a magnitude and a direction, it is a vector quantity. Velocity is also a vector quantity. Acceleration is defined as the change in the velocity vector in a time interval, divided by the time interval. Instantaneous acceleration (at a precise moment and location) is given by the limit of the ratio of the change in velocity during a given time interval to the time interval as the time interval goes to zero (see analysis: Instantaneous rates of change). For example, if velocity is
be expressed in metres per second per second. | {"url":"http://media-2.web.britannica.com/eb-diffs/810/2810-14260-3469.html","timestamp":"2014-04-18T10:41:55Z","content_type":null,"content_length":"3210","record_id":"<urn:uuid:8d31972e-8632-49f0-bb02-35621ad8ffe4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decades of Dollars Lottery General Information
Decades of Dollars is a multiple-state (NOT Multi-State Lottery Association (MUSL)) lottery game played in 4 states with a grand prize of $250,000 per year for 30 years, or $4 million cash.
The Decades of Dollars game consists of one set of numbers, where the player selects 6 numbers from 1 through 47. Decades of Dollars costs $2 per game.
Decades of Dollars drawing schedule: Decades of Dollars drawings are held at approximately 11 p.m. EST every Monday and Thursday night.
Decades of Dollars drawing method: The winning Decades of Dollars numbers are drawn using mechanical machines and ball sets. Six balls are drawn out of a drum with 47 numbered balls.
How to play Decades of Dollars
Just like any multi-number game, Decades of Dollars is a very easy game to play. Two sample panels from a Decades of Dollars playslip are shown here; the actual playslip may contain 5 or 10 panels.
Select 6 numbers from 1 through 47. Alternatively, you can mark the Quick Pick (QP) (Easy Pick (EP) in some states) to let the Lottery terminal randomly select your numbers for you. If you want to
play more numbers, then fill out panels B, C, etc. Each one costs $2.
If you make an error, DO NOT ERASE, instead mark the VOID box for that play.
A typical winning number is reported as,
Since the order in which the numbers are drawn doesn't matter, the winning numbers are reported in their natural order from smallest to highest.
How Do You Win Decades of Dollars? - Prizes and Odds
A summary winner prizes and the odds of winning:
Match Prize Odds
6-of-6 $250,000/year for 30 years 1 in 10,737,573
5-of-6 $10,000 1 in 43,649
4-of-6 $100 1 in 873
3-of-6 $10 1 in 50
2-of-6 $2 1 in 7
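These odds can be checked directly with elementary combinatorics (a quick verification sketch, not material from the lottery sites):

    from math import comb

    total = comb(47, 6)                        # 10,737,573 ways to draw 6 numbers from 47
    print(f"6-of-6: 1 in {total:,}")
    for k in (5, 4, 3, 2):
        ways = comb(6, k) * comb(41, 6 - k)    # k matches among the 6 drawn, rest among the other 41
        print(f"{k}-of-6: 1 in {total / ways:,.0f}")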
Decades of Dollars top prize winners will select at the time of prize claim to receive the prize as $250,000 each year for 30 years or the one-time cash option amount of $4,000,000.
If there are more than two jackpot winners in a single drawing, the top prize may become pari-mutuel and the prize pool would be shared equally amongst the top prize winners.
Also, if there are more than 25 second prize winners of $10,000 in a single drawing, the prize may become pari-mutuel and would be shared equally amongst those winners.
Claiming Decades of Dollars Prizes: Usually prizes of less than $600 can be claimed at any retailer. However, every state has its own rules for claiming lottery prizes. Please see your state's info
page on this web site or go to the Decades of Dollars page of the official site of your state by clicking on the map further below.
Also, how long you have to collect a Decades of Dollars prize depends on the state where you bought your tickets. The time period for claiming a prize ranges from 180 days to 1 year from the draw
Decades of Dollars States
The map shows the states where Decades of Dollars is currently played. They are Arkansas, Georgia, Kentucky, and Virginia.
You can click on a Decades of Dollars state on the map to be directed to that state's Decades of Dollars page.
Links to some Decades of Dollars Web Site Pages
Although we have tried to cover all the pertinent information, and although there is no official web site of the Decades of Dollars lottery game, we recommend that you also look at some additional
pages at the Arkansas, Georgia, Kentucky, and Virginia Lottery official web sites.
Decades of Dollars at US-Lotteries.com
The main purpose of this web site (US-Lotteries.com) is to present a well formatted latest and past results; analysis and statistics of past winning numbers, digits, pairs, triads; search to see if
your favorite numbers have ever won or have come close to winning the jackpot; mathematical tools such as combinations generators and wheels. The complete list of all Decades of Dollars pages of this
web site are listed below. | {"url":"http://www.us-lotteries.com/decades_of_dollars/decades_of_dollars-info.asp","timestamp":"2014-04-19T04:20:01Z","content_type":null,"content_length":"33066","record_id":"<urn:uuid:b8a8beb1-381e-477b-92e3-8065f9c72a45>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Liverpool, TX Math Tutor
Find a Liverpool, TX Math Tutor
...As a certified bilingual teacher, I am required to take 30 hours a year of professional development, which I cover by taking teaching workshops. I started the university by doing 3 years of
Engineering and then switched to Architecture, graduating with a B Arch. I then did an interdisciplinary Master of Arts in Arid and Semi-arid Land Studies at Texas Tech.
41 Subjects: including algebra 2, study skills, trigonometry, government & politics
...I've helped countless students of all ages with math of all levels. I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing.
34 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I received the AP Scholar award, became a member of the California Scholarship Federation, and received the scholarship athlete award. I have my soccer coaching license and have coached middle
and high school teams. I gained tutoring experience as a math teaching assistant and as a private tutor for students at my high school.
22 Subjects: including trigonometry, SAT math, photography, public speaking
...Doctoral students must have excellent reading/writing skills and my experience in the program at Texas A&M has only strengthened these skills. As a doctoral student in political science/public
policy at Texas A&M University, I have a great deal of knowledge about the social sciences and social s...
23 Subjects: including prealgebra, chemistry, reading, biology
I have been a private math tutor for over ten(10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including algebra 2, calculus, Microsoft Excel, prealgebra
Related Liverpool, TX Tutors
Liverpool, TX Accounting Tutors
Liverpool, TX ACT Tutors
Liverpool, TX Algebra Tutors
Liverpool, TX Algebra 2 Tutors
Liverpool, TX Calculus Tutors
Liverpool, TX Geometry Tutors
Liverpool, TX Math Tutors
Liverpool, TX Prealgebra Tutors
Liverpool, TX Precalculus Tutors
Liverpool, TX SAT Tutors
Liverpool, TX SAT Math Tutors
Liverpool, TX Science Tutors
Liverpool, TX Statistics Tutors
Liverpool, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Cedar Lane Math Tutors
Clear Lake Shores, TX Math Tutors
Danbury, TX Math Tutors
Freeport, TX Math Tutors
Hillcrest, TX Math Tutors
Jamaica Beach, TX Math Tutors
Jones Creek, TX Math Tutors
Kemah Math Tutors
Old Ocean Math Tutors
Oyster Creek, TX Math Tutors
Shoreacres, TX Math Tutors
Surfside Beach, TX Math Tutors
Sweeny Math Tutors
Taylor Lake Village, TX Math Tutors
Thompsons Math Tutors | {"url":"http://www.purplemath.com/liverpool_tx_math_tutors.php","timestamp":"2014-04-21T07:47:18Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:bafc92d3-41f1-4bcd-8368-10b9ae1f61d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support.
Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with
the best way to expand their education.
Below is a small sample set of documents:
Penn State - STAT - 200
Complement, Independence, Mutually Exclusive Complement and Mutually Exclusive Consider a course where, if completed, you can receive one of the following letter grades: A, B, C, D, or F and each has
an equal probability of occurring. That is, P(A) read p
Penn State - STAT - 200
All confidence intervals follow the same formula: Sample Statistic +/- Multiplier*Standard Error For example in problem 7.25 on page 343 since we are talking about a 95% confidence interval to
estimate a population mean the above formula would consist of:
Penn State - STAT - 462
Finding P-values. For hypothesis testing for proportions, where we calculate a Z test statistic (I'll call it Zstat), and for testing means we use a t test statistic (I'll call it tstat). Finding the p-value (or probability value) is based on the alternative hy
Penn State - STAT - 500
Probability Independence and Test of Two Categorical Variables The idea of independence between two categorical variables may sound familiar as we spoke of independence when we discussed probability.
Recall that in that lesson we stated that two events, c
Penn State - STAT - 500
Variances, N vs N-1 (1995) Seal.<- file 95varqn.html -> . divide by N or N-1 for variance? Seal =David Seal, 06 Nov 1995=ssm, .From: dseal@armltd.co.uk (David Seal)Newsgroups:
sci.math.num-analysis,sci.math,sci.stat.mathSubject: Re: What's Standard D
Penn State - STAT - 200
SOLUTIONS TO ACTIVITY SET 8. Activity 8.1 Suppose the amount students at PSU spent on textbooks this semester is a normal random variable with mean = $360 and standard deviation = $90. a. Use the empirical rule for bell-shaped data to determine intervals t
Penn State - STAT - 462
Solution to Homework 1 The following data were collected for a class of 20 students on reading readiness stanines (X) at the end of kindergarten and reading achievement stanines (Y) at the end of
first grade. You want to examine how well the readiness sco
Penn State - STAT - 462
Solution - Homework 2. Use the data from Homework 1 to complete this assignment and regress Y on X and store the residuals. 1. Create boxplots for both X and Y. Are there any outliers? No outliers identified. See boxplot below. [Boxplot of X, Y]
Penn State - STAT - 462
Homework 3 - Solutions. Covers Chapters 5 and 6, Matrix Methods. 1. Using the data for Homework 1 and 2, write in proper matrix form the matrices for Y and X: Y is the 20 x 1 column vector (3, 1, 1, 3, 5, 4, 7, 6, 7, 8, 5, 2, 7, 6, 9, 8, 4, 6, 9, 7)^T, and X is the 20 x 2 matrix whose first column is all ones and whose second column begins 2, 2, 1, 1, 3
Penn State - STAT - 462
Homework 4 Covers Chapters 7 Use the Project Talent data set. 1. Perform a multiple regression by regressing Math on Gender, SES, Sociability, Reading, and Mechanical Reasoning entering the
predictors in this order. a. What is the interpretation of the t-
Penn State - STAT - 462
Homework 5 Covers Chapter 8 Use the Project Talent data set. The variable School Size is interpreted as follows: 1 = number of students is less than 100 2 = number of students is from 100 to 399 3 =
number of students is 400 or more 1. A lack-of-fit test
Penn State - STAT - 462
Homework 6 Covers Chapters 9 and 10 Use the High School and Beyond (HSB) data set. The data is explained in the HSB Read Me file. USE MATH AS PREDICTOR 1. Some researchers feel an interaction exists
between Gender and Writing ability. Create a Gender*Wri
Penn State - STAT - 500
STAT500 HW#2_solutions. Solutions to Homework 2. 1) (fifth) a) A={6}, n(A) / n(S) = 1/6 b) B={2,4,6}, n(B) / n(S) = 3/6 = 1/2 c) C={3,4,5,6}, n(C) / n(S) = 4/6 = 2/3 d) D={4,6}, n(D) / n(S) = 2/6 = 1/3 (sixth) a) A={6}, n(A) / n(S) = 1/6 b) B={1,3,5}, n(B) / n
Penn State - STAT - 500
Stat 500 Homework 3 Solutions. STAT500 HW#3 Solutions. 1) Random sample of 25 generated without replacement from Minitab (Will most likely differ from yours): 2 240 125 409 783 526 219 364 584 789 84 341 296 652 134 708 104 155 21 562 378 522 86 667 428 2)
Penn State - STAT - 500
STAT500 HW#4_solutions. STAT500 HW#4 Solutions. 1) a) 95% T Confidence Intervals: Variable speed, N 20, Mean 9.100, StDev 2.573, SE Mean 0.575, 95.0 % CI (7.896, 10.304). b) The normal probability plot for reading speed suggests no reason to believe that the data
Penn State - STAT - 500
STAT500 HW#5_solutions. STAT500 HW#5 Solutions. 1) Chicago Title Company problem: π0 = 0.831; because nπ0 = 2544*(0.831) = 2114.1 > 5 and n*(1 - 0.831) = 429.9 > 5, thus the one-proportion z-test can be used. Ho: π = 0.831; Ha: π ≠ 0.831; = 0.024; two
Penn State - STAT - 500
STAT500 HW#6_solutions. Homework 6 Solutions. 1. a. This poses an interesting question since we never discussed intuitive decisions prior to performing an analysis. So here is an intuitive thought process. Since one assumption is that each population fo
Penn State - STAT - 500
Solution to Homework 1 The following data were collected for a class of 20 students on reading readiness stanines (X) at the end of kindergarten and reading achievement stanines (Y) at the end of
first grade. You want to examine how well the readiness sco
Penn State - STAT - 200
Solutions Displaying Data. 2.3 Identify the variable type: a) quantitative b) categorical c) categorical d) quantitative 2.6 Discrete or continuous?: a) continuous b) discrete c) continuous d) discrete 2.8 Number of children: a) The variable, number of chi
Penn State - STAT - 200
Solutions Gathering Data 4.5 School testing for drugs : Although this study found similar levels of drug use in schools that used drug testing and schools that did not, lurking variables might have
affected the results. For example, it is possible that sc
Penn State - STAT - 200
Solutions - Probability Distributions 6.3 Boston Red Sox hitting: a) The probabilities give a legitimate probability distribution because each one is between 0 and 1 and the sum of all of them is 1. b) μ = 0P(0) + 1P(1) + 2P(2) + 3P(3) + 4P(4) = 0(0.718) +
Penn State - STAT - 200
Solutions - Sampling Distributions 7.6 Exit poll and n: a) The interval of values within the sample proportion will almost certainly fall within three standard errors of the mean: 0.53 to 0.59. b)
Based on the interval calculated in (a), it would be unusu
Penn State - STAT - 200
Solutions - Hypothesis Testing. 9.4 Iowa GPA: H0: μ = 2.80, Ha: μ ≠ 2.80. In the above hypotheses, H0 is the notation for the null hypothesis, Ha is the notation for the alternative hypothesis, and μ is the parameter, the mean GPA of the population, about w
Penn State - STAT - 200
Solutions - Categorical Variables. 11.3 FBI statistics: a) These distributions refer to those of x at given categories of y. RACE OF VICTIM by RACE OF MURDERER (Blacks, Whites): victim Blacks: 91%, 9%; victim Whites: 17%, 83%. b) x and y are dependent because the probability of a mu
Penn State - STAT - 200
Confidence Intervals Proportion and One Mean. 1. The term sampling frame refers to the group that actually had a chance to get into the sample. Ideally, this is the same as the population of interest, but sometimes it isn't. In the following situation, descr
Penn State - PSYCH - 412
Growth Spurt: Muscle Mass and Body Fat Rapid acceleration in height and weight; significant boost in muscle and body fat By the end of puberty, the muscle to fat ratio is: Boys = 3:1 Girls = 5:4 Boys
gain muscle at a faster rate than girls Girls gain fat
Penn State - PSYCH - 412
Changes in Cognition Logical Thinking Transductive Thinking: connects two particular events into a cause-effect relationship simply because they occurred close in time Inductive Thinking: we make
inferences about the world based on a limited set of experi
Penn State - PSYCH - 412
Brain Development neuron: single cell unit of the central nervous system nerve: single cell unit of the peripheral nervous system dendrites: top of the neuron; extensions, branches; gather incoming
information to pass it along to be processed soma (or cel
Penn State - PSYCH - 412
Social Redefinition a period when the individual is being redefined by their society, from that of a child towards that of an adult (increased privileges and responsibilities) social redefinition
tends to be more pronounced in traditional societies compar
Penn State - PSYCH - 412
The Generation Gap occurs when older and younger people fail to understand one another due to their different experiences, opinions, habits and overall behavior. Parent-Adolescent Conflict: G. Stanley Hall (1904) believed in storm and stress Anna Freud (194
Penn State - PSYCH - 412
Adolescents and their Peers high school students spend twice as much time with their peers as their parents for boys, time spent with family is replaced with time spent alone for girls, time spent with family is replaced with time with friends. Origins of
Penn State - PSYCH - 238
What do we know when we know a person? What we know about someone depends on how well we know them. However, you can never truly know everything about someone even ones own spouse or child. We often
develop first impressions about individuals based on the
Penn State - PSYCH - 238
1. Say that you want to create a self-judgment questionnaire to measure extroversion. You know from reading the Lesson Commentary that research by Dr. Johnson identified seven kinds of statements in
self-judgment personality questionnaires. Which of the s
Penn State - PSYCH - 238
1. Name a personality trait that you think would be more validly measured with I-data than S-data and explain why. Validity is the extent to which a measurement actually reflects or measures what you think it does. Likability is one personality trait that
Penn State - PSYCH - 238
List four personality traits that you think describe you well. Make sure that one trait is a behavioral trait, one, an emotional trait, one, a cognitive trait, and one, a social impression. Label what kind of trait each of your personality traits
Penn State - PSYCH - 238
A rebellious person is someone who defies or resists established rules and authority. Write an item for an S-data personality test of rebelliousness and describe examples of L-data and B-data, not to be collected by self-judgment, that should be predict
Penn State - PSYCH - 238
After reading about the research on the authoritarian personality, right-wing authoritarianism, and Jost's and Block's research on the personality traits of conservatives, do you agree with some critics that politically liberal psychologists ar
Penn State - PSYCH - 256
Data summary. Type of selected items: In original list, Normal distractor (not in list), Special distractor (not in list). Percentage of reports: 42.857143, 0.0, 16.666666. Trial-by-trial data. Each trial has possible reports of: 7 words 'In list', 8 words 'Not in
Penn State - PSYCH - 256
Data summary. Size of line without wings (pixels): 85.0, 88.0, 91.0, 94.0, 97.0, 100.0, 103.0, 106.0, 109.0, 112.0, 115.0, 118.0, 121.0, 124.0, 127.0. Proportion of line w/o wings bigger: 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.4, 0.4, 0.8, 1.0, 1.0, 0.9. Trial-by-trial data. The M
Penn State - PSYCH - 256
Data summary. Condition: Name and font same, RT (ms) 953.3333; Name and font different, RT (ms) 1055.7333. Trial-by-trial data. On each trial, a color word (second column) was shown in a colored font (third column). You identified the color of the font as quickly as p
Penn State - PSYCH - 256
1. Which of the stages represent attention according to the information-processing model presented in Chapter 1 of the textbook? A) Filter and selection. B) Sensory store and filter. C) Selection and short-term memory. D) Filter and pattern recogniti
Penn State - PSYCH - 256
1. If you described a friend by saying that he has dark hair, blue eyes and freckles, which pattern recognition theory would you be using? A) Structural theory B) Geon theory C) Feature theory D) Template theory. Points Earned: 1.0. 2. The in
Penn State - PSYCH - 256
1. Which of the stages represent attention according to the information-processing model presented in Chapter 1 of the textbook? A) Filter and selection. B) Sensory store and filter. C) Selection and short-term memory. D) Filter and pattern recognit
Penn State - PSYCH - 256
1. According to the attenuation model of attention, which of the following words is most likely to be recognized on an unattended channel? A) Your name B) Danger words, such as "fire" C) Words that fit the context of what is recognized on the atte
Penn State - PSYCH - 256
1. The term mental effort is defined as: A) Investing attention in more than one task B) The amount of mental capacity required to perform a task C) Selecting information when a stage of information processing becomes overloaded D) Limitations in the
Penn State - PSYCH - 256
1. The term automatic processing is used to describe: A) Learning that occurs with conscious intent B) Amount of activation required for conscious awareness of information C) Description of how quickly people can react to a target D) Performance of ment
Penn State - PSYCH - 256
1. Which of the following is not an acquisition strategy? A) Rehearsal. B) Priming. C) Coding. D) Imaging. Points Earned: 1.0. 2. When allocating study time, students tend to: A) Rely only on STM when the test is the same day. B) Spend more ti
Penn State - PSYCH - 256
1. Which of the following is not an acquisition strategy? A) Rehearsal. B) Priming. C) Coding. D) Imaging. Points Earned: 1.0. 2. Memory for the context in which a word occurs is unimportant when people are tested by: A) Recognition tests. B)
Penn State - PSYCH - 256
1. Memory for the context in which a word occurs is unimportant when people are tested by: A) Recognition tests. B) Recall tests. C) Indirect memory tests. D) Direct memory tests. Points Earned: 1.0. 2. Accurate eyewitness identification depends o
Penn State - ECON - 002
Points Awarded: 9.00. Points Missed: 0.00. Percentage: 100%. 1. The idea in economics that "there is no free lunch" means A) Businesses would go bankrupt if they offered free lunches. B) The thought of a free lunch is often better than the reality of consuming
Penn State - ECON - 002
Points Awarded: 10.00. Points Missed: 0.00. Percentage: 100%. 1. In a competitive market for corn, the law of demand indicates that, other things equal, as A) The demand for corn decreases, the price will increase. B) Income decreases, the quantity of corn dema
Penn State - ECON - 002
Points Awarded: 9.00. Points Missed: 1.00. Percentage: 90.0%. 1. Official unemployment rates may underestimate the true rate of unemployment because the official rate A) includes those workers who only work part time. B) may include some individuals who are
Penn State - ECON - 002
Points Awarded: 12.00. Points Missed: 0.00. Percentage: 100%. 1. The largest component of GDP is: A) Government purchases B) Personal consumption expenditures. C) Gross private domestic investment. D) Net foreign factor income earned in the United States.
Penn State - ECON - 002
Review Sheet for Exam 1. Your exam is 30 multiple choice questions worth 2 points each and 2 short answer questions worth 20 points each. The exam is worth a total of 100 points. Ch. 1: Know the difference between micro and macroeconomics: what sorts of things do microeconomists study? What sorts o Ch. 2:
Penn State - ECON - 002
Review Sheet for Exam 2. Ch. 10: Real GDP and the Price Level in the Long Run. LRAS: why is it vertical? Because people have complete information and input prices adjust fully in the long run. What do we call the level of GDP where the LRAS is
ReviewSheetforExam3Ch.15 Whatarethefunctionsofmoney?(beabletolistanddefineeach)p367368 (a) MoneyasaMediumofExchange:Asamediumofexchange,moneyallows
individualstospecializeinproducingthosegoodsforwhichtheyhaveacomparative advantageandtoreceivemoneypayment
Penn State - ECON - 002
https:/elearning.la.psu.edu/econ004/lesson-5/print_viewLesson 5Lesson 5Long Run Aggregate Supply and DemandIn the first part of this course you learned about the most fundamental and powerful of the
tools of economic analysis: supply and demand. The t
Penn State - ECON - 002
https:/elearning.la.psu.edu/econ004/lesson_6/print_viewLesson 6Lesson 6Classical and Keynesian EconomicsIn this unit, we will add the third part of the aggregate supply and demand model, namely short
run aggregate supply. Furthermore, we will discuss
Penn State - ECON - 002
https:/elearning.la.psu.edu/econ004/lesson_7/print_viewLesson 7Lesson 7Aggregate Demand, Consumption, and the MultiplierIn this unit you will carefully investigate aggregate demand. The tool for
exploring aggregate demand is the Keynesian model . Amon | {"url":"http://www.coursehero.com/file/5974051/Quiz06B/","timestamp":"2014-04-20T03:36:22Z","content_type":null,"content_length":"50978","record_id":"<urn:uuid:f8a6368a-1d51-451b-81c8-6e4630e3a0af>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lithonia Algebra 2 Tutor
Find a Lithonia Algebra 2 Tutor
Hello, my name is Kristen, and I am extremely excited to work with your child this year! My previous work experience includes working in the public school setting as a classroom teacher in the
areas of Math and Science for grades 6-8. I have also worked as the lead teacher of special education ...
30 Subjects: including algebra 2, English, writing, reading
...Patience is necessary since students learn complex ideas at different rates. Precalculus is always a welcome challenge. I am an avid reader.
23 Subjects: including algebra 2, reading, English, algebra 1
...I don't believe in an inability to do math, I'm a firm believer in the fact that some haven't done a great job teaching math. I earned my BS in mathematics from Bethune-Cookman college, my
master's in educational administration and leadership from Florida A&M University, and I'm almost done with...
10 Subjects: including algebra 2, calculus, geometry, algebra 1
...I've also taught deaf and hard-of-hearing students about the Bible and basic American Sign Language grammar. My family and I visit several deaf and hard-of-hearing individuals at their homes
to provide free Bible and ASL lessons each month. I completed a Tax Preparation course as part of my Accounting program at Ogeechee Technical College.
30 Subjects: including algebra 2, Spanish, reading, biology
...Many students' difficulties begin in Algebra I and I wanted to be sure my students knew the concepts of the first course so they could be successful in the higher level mathematics courses. I have found that lack of understanding of some basic math concepts such as fractions and factoring keeps...
5 Subjects: including algebra 2, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/lithonia_ga_algebra_2_tutors.php","timestamp":"2014-04-18T08:30:45Z","content_type":null,"content_length":"23829","record_id":"<urn:uuid:c38e0214-1545-4f9d-ab49-618e6e14420b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Lexicon Leponticum
From Lexicon Leponticum
Remarks on any statistical information.
• There are some major restrictions on any statistical calculation in this page system. No statistics can be better than the data it is based on. Additionally, calculations had to be performed with
the means available (i.e. the data structure and the installed extensions). Any statistical data presented within this page system has to be seen within this frame.
• Since we do not know how MediaWiki works in detail, we can not guarantee that any information given here is always complete and accurate. Changes may take a while to affect all pages within the
system, so statistics may not always be up to date.
• It is not useful to break down statistical analysis into smaller sections than the accuracy or error range. For instance, the dating of many inscriptions and objects can not be determined more
exactly than one or even two centuries. Therefore, any statistical analysis of the Property:sortdate that is more detailed than about 150 years will only show how the values for this property
have been calculated, and not the actual date of inscriptions or objects.
• Some statistics turned out to be too complex to be performed live. They are rather time-consuming, report an error, reach a software limit, or had to be omitted for some other reason.
Pages containing Statistical Information
Simple statistics can be found on many pages, mostly simple occurrences per value. The following pages provide lists of pages containing statistical information of any kind: | {"url":"http://www.univie.ac.at/lexlep/index.php?title=Statistics&oldid=17309","timestamp":"2014-04-17T09:48:42Z","content_type":null,"content_length":"16956","record_id":"<urn:uuid:2c4bd78a-92ae-4f6e-b1c3-bd575e4003fa>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Residual and backward error bounds in minimum residual Krylov subspace methods
- SIAM J. Sci. Comput
"... Abstract. We study Krylov subspace methods for solving unsymmetriclinear algebraicsystems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are
often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present sev ..."
Cited by 14 (2 self)
Add to MetaCart
Abstract. We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the
general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace.
Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to
different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y.
Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As
demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571–581], the best orthogonalization technique used for computing the basis does not
compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than
- SIAM J. Matrix Anal. Appl
"... Statist. Comput., 7 (1986), pp. 856–859], when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to
scaled Jordan blocks. This problem has been studied previously by Ipsen [BIT, 40 (2000), pp. 524–535] and ..."
Cited by 10 (4 self)
Add to MetaCart
Statist. Comput., 7 (1986), pp. 856–869], when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to
scaled Jordan blocks. This problem has been studied previously by Ipsen [BIT, 40 (2000), pp. 524–535] and Eiermann and Ernst [Private communication, 2002], but we formulate and prove our results in a
different way. We then extend the (lower) bidiagonal Jordan blocks to tridiagonal Toeplitz matrices and study extensions of our bidiagonal analysis to the tridiagonal case. Intuitively, when a scaled
Jordan block is extended to a tridiagonal Toeplitz matrix by a superdiagonal of small modulus (compared to the modulus of the subdiagonal), the GMRES residual norms for both matrices and the same
initial residual should be close to each other. We confirm and quantify this intuitive statement. We also demonstrate principal difficulties of any GMRES convergence analysis which is based on
eigenvector expansion of the initial residual when the eigenvector matrix is ill-conditioned. Such analyses are complicated by a cancellation of possibly huge components due to close eigenvectors,
which can prevent achieving well-justified conclusions. Key words. Krylov subspace methods, GMRES, minimal residual methods, convergence analysis,
, 2006
"... The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of
least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation ..."
Cited by 9 (1 self)
Add to MetaCart
The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of least
squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is modified Gram–Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The
result depends on a more general result on the backward stability of a variant of the MGS algorithm applied to solving a linear least squares problem, and uses other new results on MGS and its loss
of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.
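The implementation pattern this abstract refers to, Arnoldi orthogonalization by modified Gram-Schmidt plus a small least squares solve over the growing Krylov subspace, is compact enough to sketch. The Python version below is illustrative only: it is not the authors' code, it assumes the zero initial guess, and it omits restarting and preconditioning.

import numpy as np

def mgs_gmres(A, b, m=50):
    """Minimize ||b - A x|| over the m-dimensional Krylov subspace,
    starting from the zero initial guess."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))            # MGS-orthonormal Krylov basis
    H = np.zeros((m + 1, m))            # upper Hessenberg matrix
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:          # happy breakdown: exact solution found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    rhs = np.zeros(m + 1)
    rhs[0] = beta                       # beta * e_1
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], rhs, rcond=None)
    return V[:, :m] @ y                 # x_m = V_m y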
- SIAM J. Sci. Comput
"... (1986), pp. 856–869] is applied to streamline upwind Petrov–Galerkin (SUPG) discretized convectiondiffusion problems, it typically exhibits an initial period of slow convergence followed by a
faster decrease of the residual norm. Several approaches were made to understand this behavior. However, the ..."
Cited by 8 (1 self)
Add to MetaCart
(1986), pp. 856–869] is applied to streamline upwind Petrov–Galerkin (SUPG) discretized convection-diffusion problems, it typically exhibits an initial period of slow convergence followed by a faster
decrease of the residual norm. Several approaches were made to understand this behavior. However, the existing analyses are solely based on the matrix of the discretized system and they do not take
into account any influence of the right-hand side (determined by the boundary conditions and/or source term in the PDE). Therefore they cannot explain the length of the initial period of slow
convergence which is right-hand side dependent. We concentrate on a frequently used model problem with Dirichlet boundary conditions and with a constant velocity field parallel to one of the axes.
Instead of the eigendecomposition of the system matrix, which is ill conditioned, we use its orthogonal transformation into a block-diagonal matrix with nonsymmetric tridiagonal Toeplitz blocks and
offer an explanation of GMRES convergence. We show how the initial period of slow convergence is related to the boundary conditions and address the question why the convergence in the second stage
accelerates. Key words. convection-diffusion problem, streamline upwind Petrov–Galerkin discretization, GMRES, rate of convergence, ill-conditioned eigenvectors, nonnormality, tridiagonal Toeplitz
, 2007
"... The original TPABLO algorithms are a collection of algorithms which compute a symmetric permutation of a linear system such that the permuted system has a relatively full block diagonal with
relatively large nonzero entries. This block diagonal can then be used as a preconditioner. We propose and an ..."
Cited by 5 (0 self)
Add to MetaCart
The original TPABLO algorithms are a collection of algorithms which compute a symmetric permutation of a linear system such that the permuted system has a relatively full block diagonal with
relatively large nonzero entries. This block diagonal can then be used as a preconditioner. We propose and analyze three extensions of this approach: we incorporate a nonsymmetric permutation to
obtain a large diagonal, we use a more general parametrization for TPABLO, and we use a block Gauss-Seidel preconditioner which can be implemented to have the same execution time as the corresponding
block Jacobi preconditioner. Experiments are presented showing that for certain classes of matrices, the block Gauss-Seidel preconditioner used with the system permuted with the new algorithm can
outperform the best ILUT preconditioners in a large set of experiments.
- SIAM J. Matrix Anal. Appl , 2008
"... Abstract. Matrices with a skew-symmetric part of low rank arise in many applications, including path following methods and integral equations. This paper explores the properties of the Arnoldi
process when applied to such a matrix. We show that an orthogonal Krylov subspace basis can be generated wi ..."
Cited by 5 (1 self)
Add to MetaCart
Abstract. Matrices with a skew-symmetric part of low rank arise in many applications, including path following methods and integral equations. This paper explores the properties of the Arnoldi
process when applied to such a matrix. We show that an orthogonal Krylov subspace basis can be generated with short recursion formulas and that the Hessenberg matrix generated by the Arnoldi process
has a structure, which makes it possible to derive a progressive GMRES method. Eigenvalue computation is also considered. Key words. computation low-rank perturbation, iterative method, solution of
linear system, eigenvalue AMS subject classifications. 65F10, 65F15 DOI. 10.1137/060668274
- Contemp. Math , 1997
"... Abstract. There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per
iteration than methods using an orthonormal basis (optimal methods), but the convergence may be delayed. Truncated ..."
Cited by 3 (1 self)
Add to MetaCart
Abstract. There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per iteration
than methods using an orthonormal basis (optimal methods), but the convergence may be delayed. Truncated Krylov subspace methods and other examples of non-optimal methods have been shown to converge
in many situations, often with small delay, but not in others. We explore the question of what is the effect of having a non-optimal basis. We prove certain identities for the relative residual gap,
i.e., the relative difference between the residuals of the optimal and nonoptimal methods. These identities and related bounds provide insight into when the delay is small and convergence is
achieved. Further understanding is gained by using a general theory of superlinear convergence recently developed. Our analysis confirms the observed fact that in exact arithmetic the orthogonality
of the basis is not important, only the need to maintain linear independence is. Numerical examples illustrate our theoretical results.
"... Abstract The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the data to make the corrected system compatible. In ordinary least squares (LS)
the correction is restricted to the right hand side b, while in scaled total least squares (Scaled TLS) [ ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the data to make the corrected system compatible. In ordinary least squares (LS) the
correction is restricted to the right hand side b, while in scaled total least squares (Scaled TLS) [10; 7] corrections to both b and A are allowed, and their relative sizes are determined by a real
positive parameter γ. As γ → 0, the Scaled TLS solution approaches the LS solution. Fundamentals of the Scaled TLS problem are analyzed in our paper [7] and in the contribution in this book entitled
Unifying least squares, total least squares and data least squares. This contribution is based on the paper [8]. It presents a theoretical analysis of the relationship between the sizes of the LS and
Scaled TLS corrections (called the LS and Scaled TLS distances) in terms of γ. We give new upper and lower bounds on the LS distance in terms of the Scaled TLS distance, compare these to existing
bounds, and examine the tightness of the new bounds. This work can be applied to the analysis of iterative methods which minimize the residual norm [9; 6].
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4374935","timestamp":"2014-04-21T04:41:39Z","content_type":null,"content_length":"37644","record_id":"<urn:uuid:3f75bd10-e785-43d1-b68a-770065b7aa4d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum information theory, quantum many-body theory, and quantum optics
AG Eisert
Quantum many-body theory, quantum information theory, and quantum optics
Prof. Dr. Jens Eisert
Dahlem Center for Complex Quantum Systems
Address Arnimallee 14
Room 1.3.06
14195 Berlin-Dahlem
Office Annette Schumann-Welde, Room 1.3.11
Telephone +49-(0)30-838-54781
Fax +49-(0)30-838-53741
Email Prof. Dr. Jens Eisert
Annette Schumann-Welde
Our group is concerned with research in quantum information theory, quantum optical implementations of quantum information ideas, and quantum many-body theory.
• We ask what information processing tasks are possible using single quantum systems as carriers of information. On the one hand, we think about the mathematical-theoretical foundations of quantum
information, specifically about the theory of entanglement and questions of tomography.
• A main emphasis of our theoretical research is on the theory of quantum systems with many degrees of freedom, particularly in the condensed matter context, concerning their static properties,
their efficient numerical simulation, as well as their quantum dynamics in non-equilibrium. Tensor networks play a special role here.
• We are also involved in identifying quantum optical realizations of such ideas, specifically using light modes or cold atoms in optical lattices.
Characteristic of our work is to be guided by the rigor of mathematical physics, but at the same time to be pragmatically and physically motivated, which often leads to collaborations with experimental groups.
Non-equilibrium dynamics of strongly correlated quantum many-body systems
Quantum information theory
Quantum system identification, compressed sensing, and tomography
Tensor network approaches to solving condensed matter models
Mathematical physics
Correlations in condensed-matter systems
Quantum optics
Open quantum systems and opto-mechanics
For more recent publications and a complete list of older publications, see this link.
Last Update: Mar 06, 2014 | {"url":"http://www.physik.fu-berlin.de/en/einrichtungen/ag/ag-eisert/index.html","timestamp":"2014-04-21T02:02:21Z","content_type":null,"content_length":"27065","record_id":"<urn:uuid:8be08178-8d32-49dd-a7d9-ed0c1c6f6e7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Purpose and objectives
To present and apply a new approach to the analysis of annual maximum floods for countries that experience floods of both types, i.e. of snowmelt and of rainfall origin. That is the prerequisite for employing the Guidelines to estimate the annual maximum flood with a given probability of exceedance (T-year return period) for the design of water management structures.
2. Description
The proposed Guidelines for calculating annual maximum discharges with given probability of exceedance Qmax,p are based on the following major assumptions and principles:
· Correctness of the annual maximum discharges of the summer and winter seasons, determined from reliable rating curves;
· Maximum use of non-statistical information to verify the reliability of measurement series for statistical calculations;
· Maximum use of information about the statistical properties of measurement series to select the most credible function of probability distribution.
Principles and procedures for carrying out calculations relate to two issues, namely: analysis of measurement series and calculation of maximum discharges with a given probability of exceedance.
The measurement data analysis procedure includes the following:
· Examination of homogeneity of maximum discharge series with the use of genetic (physical) methods:
- Set up of observations from a period of N >= 30 years and plotting a graph of the run of two series of floods of various origin, i.e. annual maximum flood in winter season and maximum flood in
summer season; floods consolidated in each of these series are homogeneous in terms of origin (a priori homogeneous),
- in each series, checking and elimination of measurement nonhomogeneity, which could have resulted from any change in measurement method or instruments,
- in each series, checking and elimination of time nonhomogeneity, which could have resulted from any change in basin or river bed development over the observation period,
· Examination of homogeneity of maximum discharge series with the use of statistical methods:
- investigation of so-called outliers with the use of the Grubbs-Beck test,
- investigation of independence of elements of series with the use of the test of series,
- investigation of stationarity of the series of maximum discharges with the use of three non-parametric tests: the Kruskal-Wallis test, the Spearman rank correlation coefficient test for trend of
mean value and the Spearman rank correlation coefficient test for trend of variance.
In case of recognising non-homogeneity of the series, such a series cannot be subject to further processing, i.e. it cannot be used as a basis for calculating Qmax,p.
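For illustration only, the statistical part of this screening can be approximated with standard library routines. The sketch below (Python with SciPy) is not the Guidelines' FLOODS ANALYSIS software: the function names are invented here, the Grubbs-Beck test is applied in its usual log-space two-sided form, the Kruskal-Wallis test is run on the two halves of the record, and the independence (runs) test is omitted.

import numpy as np
from scipy import stats

def grubbs_beck_outliers(q, alpha=0.05):
    """Flag suspected outliers among log-transformed maxima (two-sided
    Grubbs test; the Guidelines' exact variant may differ in detail)."""
    y = np.log(np.asarray(q, dtype=float))
    n = len(y)
    t = stats.t.ppf(1.0 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    g = np.abs(y - y.mean()) / y.std(ddof=1)
    return g > g_crit                       # boolean mask, one entry per year

def stationarity_tests(q, alpha=0.05):
    """Kruskal-Wallis on the two halves of the record, plus Spearman rank
    trend tests for the mean and (via squared deviations) the variance."""
    q = np.asarray(q, dtype=float)
    years = np.arange(len(q))
    half = len(q) // 2
    p_values = {
        "Kruskal-Wallis": stats.kruskal(q[:half], q[half:]).pvalue,
        "Spearman trend of mean": stats.spearmanr(years, q).pvalue,
        "Spearman trend of variance":
            stats.spearmanr(years, (q - q.mean()) ** 2).pvalue,
    }
    # The series passes when no test rejects stationarity at level alpha.
    return all(p >= alpha for p in p_values.values()), p_values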
The Qmax,p calculating procedure may be used solely for homogeneous measurement series of size N >= 30 years of observation:
· The following four types of probability distribution functions have been adopted as models of statistical properties of each studied series: Gamma, log-Normal, Weibull, log-Gamma.
· For all the above listed types of distributions it is assumed that the left-side lower bound (location parameter) may take values from the interval between 0 and the lowest value of maximum discharge observed in the studied series,
· For each considered value of the lower bound, the two remaining parameters are estimated with the maximum likelihood method,
· Testing of the hypothesis of goodness of fit of the theoretical probability distribution function to the empirical distribution, with the use of the Chi2 Pearson test at the α = 0.05 significance level, leads to the selection of non-rejected distribution functions, which form a set of uncontradicted probability distribution functions acceptable as theoretical distributions of the studied measurement series,
· Selection of the best fitted distribution function, one for each adopted type of distribution, is done with the use of the minimum Dmax Kolmogorov distance criterion between the theoretical and empirical distributions,
· Selection of one most credible function of probability distribution, from the set of best fitted distribution functions, is done with the use of Akaike Information Criterion (AIC),
· The function of probability of exceedance of annual maximum discharges is defined as the probability of the union (alternative) of the two independent seasonal events, based on the most credible function of probability distribution of the annual maximum discharges of the winter season and the most credible function of probability distribution of the annual maximum discharges of the summer season; for a discharge q, the annual probability of exceedance p(q) satisfies 1 - p(q) = (1 - pW(q)) * (1 - pS(q)), where pW and pS are the winter and summer seasonal probabilities of exceedance,
· The upper bound of the confidence interval, resulting from the randomness of the maximum discharge series, is calculated by a simulation method,
· Checking whether the size of the measurement series is sufficient to calculate the annual maximum discharge with probability p, i.e. Qmax,p, where the error resulting from sample randomness does
not exceed 20 %,
· The procedure is completed with estimation of the zone of uncertainty of the quantile Qmax,p of the annual maximum discharge distribution, selected from among the best fitted functions in the specific types of distributions.
The computing program FLOODS ANALYSIS has been implemented for microcomputers operating in Windows environment.
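Purely for illustration (the FLOODS ANALYSIS program itself is distributed as described in sections 8-9 below), the core of the Qmax,p stage can be sketched in Python with SciPy. The distribution set, the scan of the lower bound, the minimum Kolmogorov-distance selection within each type, the AIC choice across types and the seasonal combination follow the description above; the Chi2 screening and the simulation-based confidence bound are omitted, and the function names are invented for this sketch.

import numpy as np
from scipy import stats
from scipy.optimize import brentq

CANDIDATES = {"Gamma": stats.gamma, "log-Normal": stats.lognorm,
              "Weibull": stats.weibull_min}      # log-Gamma left out here

def most_credible_fit(q):
    q = np.asarray(q, dtype=float)
    fits = []
    for name, dist in CANDIDATES.items():
        trials = []
        for lb in np.linspace(0.0, 0.99 * q.min(), 25):  # lower-bound scan
            params = dist.fit(q, floc=lb)                # ML, bound held fixed
            d_max = stats.kstest(q, dist.cdf, args=params).statistic
            trials.append((d_max, params))
        d_max, params = min(trials)              # best fitted within the type
        aic = 2 * 2 - 2 * dist.logpdf(q, *params).sum()  # 2 free parameters
        fits.append((aic, name, dist, params))
    return min(fits)                             # smallest AIC across types

def q_max_p(q_winter, q_summer, p):
    """Quantile with annual exceedance probability p; the seasonal maxima
    are treated as independent, so 1 - p = (1 - pW(q)) * (1 - pS(q))."""
    _, _, dw, pw = most_credible_fit(q_winter)
    _, _, ds, ps = most_credible_fit(q_summer)
    f = lambda x: dw.cdf(x, *pw) * ds.cdf(x, *ps) - (1.0 - p)
    hi = 10.0 * max(np.max(q_winter), np.max(q_summer))
    return brentq(f, 0.0, hi)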
3. Input
Two measurement series of size N >= 30 years of annual maximum floods of snowmelt and rainfall origin, i.e. one series for winter season and one for summer season.
4. Output
The report contains statistical characteristics and graphs of measurement series, tables of quantiles Qmax,p and probability distribution plots for winter and summer seasons as well as for the year.
5. Operational requirements and restrictions
The program can be implemented on computers running Windows 98/NT/2000/XP. For editing the final report, Microsoft Word ver. 2000 or higher is necessary.
6. Form of presentation
The Guidelines are in English; they consist of 44 pages plus the computing program.
7. Operational experience
During the study period, both the assumptions and technical basis for the Guidelines as well as research results were presented and discussed during various scientific seminars. A series of discussion articles was also published in the professional journal Water Management.
The present Guidelines were developed following a year of practical application, a working seminar at the Cracow Branch of IMGW in March 2001, and the incorporation of technically justified remarks. The Guidelines have been in operational use at the Institute of Meteorology and Water Management (IMGW) in Poland since 2002.
8. Originator and technical support
Institute of Meteorology and Water Management (IMGW), Warsaw, Poland.
9. Availability
The Guidelines are available as a HOMS component free of charge from HNRC for Poland at the Institute of Meteorology and Water Management, ul. Podlesna 61, 01-673 Warszawa, Poland (tel. +48 22
5694326, fax +48 22 5694317, e-mail bogdan.ozga-zielinski@imgw.pl).
10. Conditions on use
Cost of reproduction (if any) and mailing. | {"url":"http://www.wmo.int/pages/prog/hwrp/homs/Components/English/i81301.htm","timestamp":"2014-04-18T10:42:23Z","content_type":null,"content_length":"11478","record_id":"<urn:uuid:395814bc-49cd-4d0b-a33d-cf46460fa518>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math resources - DaemonForums
I probably have one of the worst levels of education in mathematics, out of everyone on this forum.... so anything I can write should probably be taken with a grain of salt. But honestly, the only thing I've needed to know about math in order to work with programs is how to read what's in front of me, and to understand how numbers can work together. The only real exceptions to the above statements have usually involved some form of cryptography or a derivative subject, where the need for math can really kick in.
One could argue that virtually everything interesting in Computer Science can fit (more tersely) into Mathematics at some level; CS is, in and of itself, a branch of math, I believe.
Many of the things that I've read about data structures and algorithms over the years are really not that different than what I've found on topics related to mathematics or physics; just a different application of thought. The software side of computer science needs to address the issue of design and maintainability of the software, which is probably not necessary in most branches of mathematics. The parts that seem to fall into math & logic should be easy enough to come to grips with, but a good education in math is well worth it!!!
Originally Posted by Unknown
The study of mathematics should be fun for its own sake; applying it, more of an exercise in making life easier. | {"url":"http://daemonforums.org/showthread.php?t=2684","timestamp":"2014-04-17T13:13:22Z","content_type":null,"content_length":"118244","record_id":"<urn:uuid:fc1ef171-59bd-4979-87b5-b4cf0a68e36d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving equations graphically.
October 30th 2012, 12:23 PM #1
Oct 2012
United Kingdom
Solving equations graphically.
Before I start, if this is the wrong section, I'm sorry, I don't really know which section to post it on.
How do I solve these equations graphically: $x+2y = 7$ and $x+y = 3$?
This is how I work it out, but for some reason I'm not getting the correct answer, even though I've used it to solve other equations like this :
Let x=0
Co-ordinate points=(0,3.5)
Let y=0
Co-ordinate points=(7,0)
Let x=0
Co-ordinate points=(0,3)
Let y=0
Co-ordinate points=(3,0)
So, as we can see, for the first equation the co-ordinate points I plot are (0,3.5) and (7,0), and for the second equation I plot (0,3) and (3,0).
The problem I'm having is when I plot them on the graph, they don't intersect, which means I don't have an answer. If they don't intersect, it must be wrong, but how? I think I've done everything correctly. Also, I've looked at other examples in my maths textbook and they work, but for some reason this one doesn't.
Please, please help. I have a test tomorrow and I really need to know how to do this.
Re: Solving equations graphically.
They do intersect, just not in the first quadrant.
Re: Solving equations graphically.
How do I solve these equations graphically :
$x+2y = 7$
change to slope-intercept form ...
$2y = -x + 7$
$y = -\frac{x}{2} + \frac{7}{2}$
$x+y = 3$
change to slope-intercept form ...
$y = -x+3$
graph them ...
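(For the record, the algebra behind the picture: subtracting the second equation from the first gives $(x+2y)-(x+y) = 7-3$, i.e. $y = 4$, and then $x = 3 - y = -1$. So the two lines cross at $(-1,\,4)$, outside the first quadrant, which is why a plot window that starts at the origin never shows the intersection.)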
{"url":"http://mathhelpforum.com/algebra/206406-solving-equations-graphically.html","timestamp":"2014-04-19T00:15:09Z","content_type":null,"content_length":"39825","record_id":"<urn:uuid:823b4677-1c60-49d9-a21d-59e04824682d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 1998 [00145]
Re: help!!
• To: mathgroup at smc.vnet.net
• Subject: [mg15109] Re: [mg15065] help!!
• From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>
• Date: Sat, 12 Dec 1998 03:59:09 -0500
• Organization: Universitaet Leipzig, Institut fuer Informatik
• References: <199812100812.DAA04207@smc.vnet.net.>
• Sender: owner-wri-mathgroup at wolfram.com
Hi Peter,
> I have collected a set of points which i now want to include in a
> scatter diagram. The result was ok I used ListPlot[data] to do this
> However, some of the points were placed beneath the x-axis (these
> points did not have a negative y-axis value) The points were loaded as
> data={{1,18},{2,17.6},{3,14.5},{4,12.6},{5,12.5},{6,14}}; and the
> command for plotting the graph was
> plotdata=ListPlot[data,PlotStyle->PointSize[0.01]]; what i did next was
> try to fix the axes problem with AxesOrigin command which i set to 0,12
> and what came out was the same graph but with no intersection of the
> axes The axes seemed not be be interesecting at any point Instead, at
> the point at which the axes should be intersecting there was blank
> space What could be wrong?
Set the PlotRange option. Something like this will do what you want
(the exact ranges below are only an example chosen to cover your data):

ListPlot[data, PlotStyle -> PointSize[0.01],
  AxesOrigin -> {0, 12}, PlotRange -> {{0, 7}, {12, 18}}]
> Furthermore I experienced some problems when
> exporting the graph to another application in this case microsoft word
> What i did was execute the copy as bitmap
Why a bitmap ? Copy it as MetaFile or ExtendedMetaFile.
> command and then i pasted the
> graph in a document in word Everything was fine until i tried to resize
> the object which had as a result the quality to change due to the fact
> that it was a bitmap My question about that is can i set the graphs in
> mathematica to be presented at a certain size so as to be consistent
> with the size that they will be presented in word? In other words is
> there any relation between the size of the graphs in mathematica and
> that of the pasted bitmap in word? How can i set a default size for the
> graphs i create?
The ImageSize option ?
> Why when i execute the copy (not copy as) and then
> paste the object in word the graph is presented at very small scale
> Notice that when i resize the particular pasted object (that is when i
> have only execute the copy command in mathematica) the quality is the
> same as in the notebook i.e. great! Why though is the size so small?
A bug in Word ?
> What is the best solution in exporting a graph?
Encapsulate PostScript.
> I then tried to label the axes of the graph and i realised that the
> fonts in the graph were barely readable How can i change the size
> and/or the fonts IN the graphs?
There are so many versions in Mathematica 3.0, here is one with 12 pt
Times Bold Italic:

ListPlot[data, AxesLabel -> {"some x-values", "some y-values"},
  TextStyle -> {FontFamily -> "Times", FontSize -> 12, FontWeight -> "Bold", FontSlant -> "Italic"}]
> How can i make my changes to be default
> so as i wont need to change them everytime?
Place:

SetOptions[#, TextStyle -> {FontFamily -> "Times", FontSize -> 12,
    FontWeight -> "Bold"}] & /@ {Plot, ListPlot, Graphics}

in your init.m file and be sure that it is the right one.
> I then tried to statistically analyse the set of points What statistical
> models (scalemethod) are available in mathematica?
Look in the nice book "Mathematica 3.0 Standard Add-on Packages" that
comes with your Mathematica copy.
> Where can i find
> them listed in the online documentation?
You will be surprised it is in
Main Menu | Help | Help ... | Add-ons | Standard Packages | Statistics
> Can i use the bestfit command
> for non-linear regression?
What is the "bestfit" command ? Fit[] can only used in linear models.
> Again what models are available in the
> nonlinearfit command and in what wasy can i use it for the data shown
> above?
Mathematica is a computer algebra system, so you can use any mathematical combination of the special functions implemented in Mathematica.
> I appreciate your understanding on the fact that i have just
> begun to explore mathematica Your help will be most valuable for me
Hope that helps
• References:
□ help!!
☆ From: "Peter Manaras" <divvy@hol.gr> | {"url":"http://forums.wolfram.com/mathgroup/archive/1998/Dec/msg00145.html","timestamp":"2014-04-21T04:48:00Z","content_type":null,"content_length":"38820","record_id":"<urn:uuid:07fabe4d-d4bb-4af5-b209-03d545490580>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
My primary current interest is Revenue Management.
More generally, I am interested in Dynamic Optimization and the analysis of complex stochastic systems.
Some popular press coverage: [NYT]; [HBR]; [BB] …
1. N. Bhat, V. F. Farias, and C. C. Moallemi. “ Non-parametric Approximate Dynamic Programming via the Kernel Method. ” Submitted. [pdf]
Preliminary version:
□ N. Bhat, V. F. Farias, and C. C. Moallemi. “ Non-parametric Approximate Dynamic Programming via the Kernel Method. ” Advances in Neural Information Processing Systems 25, 2012.
2. Y. C. Chen and V. F. Farias. “ What's On The Table: Revenue Management And The Welfare Gap In The US Airline Industry. ” Submitted. [pdf]
3. V. F. Farias, S. Jagabathula, and D. Shah. “ Sparse Choice Models. ” Submitted. [pdf]
4. D. F. Ciocan, V. F. Farias. “ Dynamic Allocation Problems with Volatile Demand ” Mathematics of Operations Research (forthcoming) [pdf]
5. ^4 D. Bertsimas, V. F. Farias, and N. Trichakis. “ Fairness, Efficiency and Flexibility in Organ Allocation for Kidney Transplantation. ” Operations Research (forthcoming). [pdf]
6. V. V. Desai, V. F. Farias, and C. C. Moallemi. “ Pathwise Optimization for Optimal Stopping Problems. ” Management Science (forthcoming). [pdf]
7. D. Bertsimas, V. F. Farias, and N. Trichakis. “ A Characterization of the Efficiency-Fairness Tradeoff. ” Management Science (forthcoming). [pdf]
8. Y. Chen, V. F. Farias. “ Simple Policies for Dynamic Pricing with Imperfect Forecasts ” Operations Research (forthcoming). [pdf]
9. C. W. Chan, V. F. Farias, N. Bambos, and G. J. Escobar. “ Maximizing Throughput of Hospital Intensive Care Units with Patient Readmissions.” Operations Research (forthcoming). [pdf]
10. ^3 V. F. Farias, S. Jagabathula, and D. Shah. “ A New Approach to Modeling Choice with Limited Data. ” Management Science (forthcoming). [pdf]
Preliminary version:
□ V. F. Farias, S. Jagabathula, and D. Shah. “ A Data-Driven Approach to Modeling Choice. ” Advances in Neural Information Processing Systems 22, 2009.
11. ^5 V. V. Desai, V. F. Farias, and C. C. Moallemi. “ Approximate Dynamic Programming via a Smoothed Approximate Linear Program. ” Operations Research (forthcoming). [pdf]
Preliminary version:
□ V. V. Desai, V. F. Farias, and C. C. Moallemi. “ The Smoothed Approximate Linear Program. ” Advances in Neural Information Processing Systems 22, 2009.
12. D. Bertsimas, V. F. Farias, and N. Trichakis. “ The Price of Fairness. ” Operations Research, Vol. 59, No. 1, January-February 2011, pp. 17-31. [pdf]
13. ^2 V. F. Farias, D. Saure, and G. Y. Weintraub. “ An Approximate Dynamic Programming Approach to Solving Dynamic Oligopoly Models ” RAND Journal of Economics (forthcoming) [pdf]
14. V. F. Farias, R. Madan. “ Irrevocable Multi-Armed Bandit Policies. ” Operations Research, Vol. 59, No. 2, March-April 2011, pp. 383-399. [pdf]
15. C. W. Chan, V. F. Farias. “ Stochastic Depletion Problems: Effective Myopic Policies for a class of Dynamic Optimization Problems.” Mathematics of Operations Research 34:2 (May 2009) [pdf]
16. V. F. Farias, B. Van Roy. “An Approximate Dynamic Programming Approach to Network Revenue Management.” Submitted. [pdf]
17. ^1 V. F. Farias, B. Van Roy. “Dynamic Pricing with a Prior on Market Response.” Operations Research, Vol. 58, No. 1, January-February 2010, pp. 16-29. [pdf]
18. V. F. Farias, C. C. Moallemi, B. Van Roy, and T. Weissman. “ Universal Reinforcement Learning.” IEEE Transactions on Information Theory, Vol. 56, No. 5, May 2010, pp 2441-2454. [pdf]
Preliminary version:
□ V. F. Farias, C. C. Moallemi, B. Van Roy, and T. Weissman. “ A Universal Scheme for Learning.” Proceedings of the IEEE International Symposium on Information Theory, Adelaide, Australia,
September 2005.
19. V. F. Farias, C. C. Moallemi, and B. Prabhakar. “Load Balancing with Migration Penalties.” Proceedings of the IEEE International Symposium on Information Theory, Adelaide, Australia, September
2005. [pdf]
20. V. F. Farias, B. Van Roy. “Approximation Algorithms for Dynamic Resource Allocation.” Operations Research Letters, Vol. 34, No. 2, March 2006, pp. 180-190. [pdf]
21. V. F. Farias, B. Van Roy. “Tetris: A Study of Randomized Constraint Sampling.” Probabilistic and Randomized Methods for Design Under Uncertainty, Springer-Verlag [pdf] [Tetris Demo]
1. V.F. Farias. “Revenue Management Beyond "Estimate, Then Optimize"” Stanford University Ph. D. Thesis, 2007.[pdf]
^5 2011 INFORMS JFIG Paper Competition, first place.
^4 2011 INFORMS Pierskalla Award, finalist.
^3 2010 INFORMS MSOM Student Paper Competition, first place.
^2 2009 INFORMS JFIG Paper Competition, second place.
^1 2006 INFORMS MSOM Student Paper Competition, second place. | {"url":"http://web.mit.edu/~vivekf/www/mypapers2.html","timestamp":"2014-04-21T02:05:06Z","content_type":null,"content_length":"7802","record_id":"<urn:uuid:d35d9aa7-db9a-4b3b-adad-d6bf1269ca6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
You enjoy being vague don't you...
What starting points do I use for the transformation then?
Do I have the function right?
Re: Method of Transformations - Trig - Sin
This is what I have thus far ...?...
Re: Exponential Growth (WORD PROBLEM: bacterial growth)
I'm lost....... HELP!
Using the method of transformations I got the following equation:
p(t) = 2^t
p(t) = 100 x 2^t
p(t) = 100 x 2^(2t)
100 being the Distortion on Y
2 being the Base
Re: Method of Transformations - Trig - Sin
So I leave the (8π/3) alone then?
And set the graph to points of π/3, 2π/3, 3π/3, 4π/3, 5π/3, etc...
How does the (t) affect the (8π/3)?
Usually there is an addition or subtraction that affects...
Method of Transformations - Trig - Sin
Using the method of transformations, I have to graph the following function p(t), and label the amplitude & period.
p(t) = 25 sin [ (8π/3) t ] + 115
I'm assuming I use p(t) = A sin (ωt + φ) + d...
Re: Exponential Growth (WORD PROBLEM)
So I got some points on the graph starting at 0 for time and 100 for infection.
(0,100) (0.5,200) (1,400) (1.5,800)
Because the infection doubles every half an hour
How am I to use the...
Exponential Growth (WORD PROBLEM: bacterial growth)
E. coli is a bacterium that reproduces by dividing. Each bacterium splits into two parts every half hour.
This is an example of exponential growth and can be modeled as follows:
Assuming that none...
Trigonometric Functions (WORD PROBLEM)
Consider the rhythm of your heartbeat. A person’s blood pressure is a measure of the pressure exerted on the walls of the arteries by the blood as it is propelled by the rhythmic contractions of
Re: Applications of Polynomial Functions (Word Problem)
GOOD! DONE! Huge thanks to everyone that helped - I have studying to do, I'm pretty sure they won't give me 11 hours on the exam ... lol
Re: Applications of Polynomial Functions (Word Problem)
This is where I am at
Re: Applications of Polynomial Functions (Word Problem)
My final equation has to be in the format: a(x-h)^2 + k
I was told the final equation would be y = - (x/4)^2 + 4 which would mean a = 1/4 yes?
Is there a way I can check my work, assuming the...
Re: Applications of Polynomial Functions (Word Problem)
This is what I have come up with
I'm feeling slightly less crazy
Am I on the right track?
I thought "a" would = -1/4 .... not -1/12
I really appreciate everyone's help - this site is a godsend!...
Re: Applications of Polynomial Functions (Word Problem)
OMG - I don't understand why I can't get this - I am wasting hours and getting nowhere
How do you figure out what anything is with 4 = a(4-0)^2 + k ?
How do I find a ?
I understand the...
Re: Applications of Polynomial Functions (Word Problem)
How do you get that you want to make y = 4 and x = 4 ?
Does k = 32 ?
Applications of Polynomial Functions (Word Problem)
A wildlife corridor must be designed to arch over a major highway. The arch over the highway is to be constructed in the shape of a parabola. The highway is 8m wide and will be centered under
Re: Applications of Polynomial Functions (Word Problem)
That is exactly what my graph looked like - so that's a good sign!
I tried using transformations to get a function out of it:
Y = x^2 (-2, 4) ( 0, 0) ( 2, 4)
Y1 = -1(x^2) ...
Applications of Polynomial Functions (Word Problem)
We have just started this chapter, and I was grouped with 2 others that didn't want to work 'together', so the questions were divided up ... this is mine ... I have attempted it, and done the
Ok, I have a new question! =)
06-23-2000, 02:38 PM
My apologies for asking so many questions. And my thanks to everyone who's helped me so far.
Here's the question:
Is it possible to replace multiple calls to glRotate*() with a single matrix?
I mean: can I take three rotation matrices (one for each axis), multiply them together (on paper), and thereby have ONE matrix that does all three rotations at once, and just update that one matrix?
So I have a matrix. I put arbitrary angles into the rotation angles (I named them a, b and c), and then I do the calculations I have on my angles, upon which I multiply it with the current matrix. Will
that work?
Can I multiply the product of three rotation matrices with the current matrix and expect to get the same result as multiplying the current matrix with the three rotation matrices sequentially?
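Yes: matrix multiplication is associative, so multiplying the current matrix once by the precomputed product of the three rotation matrices gives exactly the same result as applying them one after another. The only caveat is order, since rotations about different axes do not commute. A minimal numpy sketch of the check (the helper names and angles here are illustrative, not part of the OpenGL API):

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a, b, g = 0.3, -1.1, 2.0       # arbitrary angles
current = np.eye(3)            # stand-in for the current matrix

# Applying the three rotations one after another...
sequential = current @ rot_z(g) @ rot_y(b) @ rot_x(a)

# ...matches multiplying once by their precomputed product.
combined = rot_z(g) @ rot_y(b) @ rot_x(a)
assert np.allclose(sequential, current @ combined)

In practice you would embed the 3x3 block in a 4x4 matrix and hand it to glMultMatrixf (which expects column-major order) in place of the three glRotate* calls.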
Difficult Question Help (Probability/Statistics) Joint Probability Distribution
May 5th 2013, 07:32 PM #1
Hi Guys,
Hope everyone is well. I'm wondering if someone can help me with a difficult problem presented during lecture today.
Two friends plan to meet to go to a nightclub. Each of them arrives at a time uniformly distributed between midnight and 1am and independently of the other. Denote by X (respectively Y) the
random variable representing the arrival time of the first person (respectively, the second). The joint probability distribution is given by
f_{X,Y}(x,y) = 2 if 0 ≤ x ≤ y ≤ 1, and 0 otherwise.
a) Find the probability that the first person is waiting for his friend for more than 10 minutes.
b) Determine the marginal probability density functions of X and Y. Check that they are indeed probability density functions.
c) Calculate the means E[X] and E[Y].
d) Calculate the variances V(X) and V(Y).
e) Find the conditional density function of X given that Y = y, for 0 ≤ y ≤ 1. Check that it is indeed a probability density function.
f) Repeat part (e) for the conditional density of Y given that X = x.
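(A worked sketch of part (a), assuming times are measured in hours, so ten minutes is 1/6, and using the support 0 ≤ x ≤ y ≤ 1:)
P(Y − X > 1/6) = ∫ from y = 1/6 to 1 [ ∫ from x = 0 to y − 1/6 of 2 dx ] dy = ∫ from 1/6 to 1 of 2(y − 1/6) dy = (5/6)² = 25/36.
The marginals in (b) follow the same way: f_X(x) = ∫ from x to 1 of 2 dy = 2(1 − x) and f_Y(y) = ∫ from 0 to y of 2 dx = 2y on [0, 1]; both integrate to 1.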
Is anyone of you learning statistical mechanics? Come and let's learn together!!!
Well, I'll get the ball rolling. Below is my understanding of the basic ideas underlying statistical mechanics. Attacks on, and criticism of, my understanding are very welcome!!!
For a system (or assembly) of weakly-interacting particles, the detail of the interaction (be it collision, Coulomb force, gravitational force, etc.) is not a concern of statistical mechanics, as long
as the particles can exchange energy with each other yet do not alter each other's energy levels. In fact, statistical mechanics is not able to tell the details of the interaction or what the
interaction is. In this sense the complexity of atomic dynamics is veiled.
In Statistical mechanics’ picture, the existence of thermodynamic equilibrium, at first, is just a statistical consequence of
The system is composed of a very large number of particles
This is because the large number of particles will create an overwhelming number of microstates that give the same macrostate M. Statistical mechanics identifies this macrostate M as the
"equilibrium-macrostate". In other words, at thermodynamic equilibrium the system is always in one of these microstates, i.e., in macrostate M. Now, let's re-examine the arguments and see what the
underlying principles are.
1) There is an overwhelming number of microstates giving the same macrostate M
Implication: For the overwhelming number of microstates to exist, there must be a large number of particles. This implies the number of particles has to be conserved at thermodynamic equilibrium. This
is in turn related to the assumption that the number of charges is conserved. This assumption reveals gauge symmetry.
2) This macrostate M is the equilibrium-macrostate
Implication: For the overwhelming microstates to give overwhelming probability (or, most of the time the assembly is in that region), we are assuming "all the accessible microstates are equally
probable". This reveals the symmetry in time's direction (reversibility in time) as pointed out by Callen (pg 468).
3) The accessible microstates (the accessible region in Γ-space) do not change with time
Implication: If the accessible microstates change with time, the probability will keep re-distributing over new sets of macrostates as time goes on. This makes the probability distribution over energy
and other parameters change with time, and so do the statistical results. Here we are assuming conservation of energy, momentum and angular momentum. This reveals the symmetry of physical laws under
space-time translation and rotation.
4) At thermodynamic equilibrium (the equilibrium-macrostate) there will be some steady thermodynamic variables characterizing the system
Implication: Some thermodynamic variables show steady values at equilibrium as a result of the steady equilibrium-macrostate. The steady value in this case is the mean value given by the
equilibrium-macrostate. However, some thermodynamic variables show steady values simply because they are the ones that survive the temporal averaging process during measurement. This kind of
thermodynamic variable has zero rate of change, hence revealing broken symmetry. An example is the volume of a solid.
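(A quick numerical illustration of point 1, for the toy case of N independent two-state particles; this is a generic sketch added for illustration, not tied to any particular system above. The fraction of all 2^N microstates whose macrostate n, the number of "up" particles, lies within 1% of N/2 grows toward 1 as N increases, which is exactly why the equilibrium macrostate dominates.)

from math import comb

# Fraction of all 2**N microstates whose macrostate n (number of "up"
# particles) lies within 1% of N/2; the concentration sharpens as N grows.
for N in (100, 1000, 10000):
    window = [n for n in range(N + 1) if abs(n - N / 2) <= 0.01 * N]
    frac = sum(comb(N, n) for n in window) / 2 ** N
    print(f"N = {N:5d}   fraction near equilibrium = {frac:.3f}")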
How far will a random walk on the integers go?
up vote 3 down vote favorite
Let $S_n$ be the distance walked at time $n$ in a simple symmetric random walk on $\mathbb{Z}$ (with steps $=\pm1$).
It is well known that $\lim_{n\to\infty} E(|S_n|)/\sqrt{n} = \sqrt{2/\pi}$.
Can we describe the set $F$ of monotonic "growth" functions with probability 1 of being crossed by $|S_n|$ for $n>1$?
Clearly $f(x)=\sqrt{x}$ is in $F$ and I think I could prove that $g(x)=2f(x/2)$ is in $F$ whenever $f$ is. I have no idea beyond that.
5 Have you heard about the Law of iterated logarithm? en.wikipedia.org/wiki/Law_of_the_iterated_logarithm I think that this may help you. – Leonid Petrov Jul 31 '11 at 8:36
How is $f(x) = \sqrt{x}$ clearly in $F$? Among other things, $f$ can only be hit by $|S_n|$ at perfect squares. – Ricky Demer Jul 31 '11 at 9:18
2 Does it makes sense for a limit on $n$ to be a function of $n$? Perhaps you mean the expected value is asymptotic to that function of $n$? – Gerry Myerson Jul 31 '11 at 9:21
Gerry Myerson: yes, I edited accordingly. @Ricky Demer: I replaced "hit" with "crossed" - I hope that clarifies that I'm not looking for equality of two functions, but for $f$'s such that $\limsup
|S_n|/f(n) \ge 1$ with probability 1 – Yaakov Baruch Jul 31 '11 at 11:23
@Leonid: I think your comment is the answer to the question (or at least the INTENDED question). – Yaakov Baruch Jul 31 '11 at 11:37
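(A quick Monte Carlo sanity check of the quoted limit, a sketch with arbitrary seed and sizes, using the fact that $S_n = 2\,\mathrm{Binomial}(n,1/2) - n$:)

import numpy as np

# Estimate E|S_n| / sqrt(n) for the simple symmetric random walk.
rng = np.random.default_rng(0)
n, trials = 100_000, 200_000
s_n = 2 * rng.binomial(n, 0.5, size=trials) - n
print(np.abs(s_n).mean() / np.sqrt(n))   # ~ 0.798 = sqrt(2/pi)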
User Martin Gisser
bio website florifulgurator.wordpress.com
location Bavaria, Germany
age 46
Hobby Mathematician. PhD drop-out for many reasons 15y ago, but actually can't let go of math.
Interested in Riemannian analysis/geometry - doing it my way...
Unpublished theorem of 1996: A complete Riemannian manifold with strictly stochastically positive Ricci curvature is compact and has finite fundamental group.
(Exact formulation and 1 page proof to be written up one day. I still owe this to KD Elworthy)
1/8 acre equals how many square feet
One acre is 43,560 square feet, so 1/8 acre is 5,445 square feet.
nesc0325 2-DB, 2-D MultiGroup Diffusion, X-Y, R-Theta, Hexagonal Geometry Fast Reactor, Criticality Search
nesc0567 3-DB, 3-D MultiGroup Diffusion, X-Y-Z, R-Theta-Z, Triangular-Z Geometry, Fast Reactor Burnup
nea-0912 ABLEIT-TRANS, Isotope Concentration and Sensitivities on Cross-Sections Data
nea-1839 ACAB-2008, ACtivation ABacus Code
psr-0190 ADENA, Fission Products Beta Spectra and Gamma Spectra in 19 Group from U235 Pu239 Mixture
nea-0321 ANDROMEDA, 1-D Burnup for Fuel Cycle Analysis of FBR
nea-1638 ANITA-2000, Isotope Inventories from Neutron Irradiation, for Fusion Applications
nea-1343 ANITA-4, Isotope Inventories from Neutron Irradiation, for Fusion Applications
ccc-0519 AUS, Neutron Transport and Gamma Transport System for Fission Reactors and Fusion Reactors
nea-0373 BEST-4, Fuel Cycle and Cost Optimization for Discrete Power Levels
nea-0404 BEST-5, Power Reactor Fuel Cycle Optimization by Bellman Method
ccc-0657 BETA-S, Multi-Group Beta-Ray Spectra
nea-0591 BEVE, Isotope Buildup in LWR Fuel Pin with Self-Shielding in Pellet
nea-0870 BISON, 1-D Burnup and Transport in Slab, Cylindrical, Spherical Geometry
ccc-0459 BOLD/VENTURE-4, Reactor Analysis System with Sensitivity and Burnup
nea-0236 BOLERO, 2 Group Burnup for PWR and BWR in R-Z Geometry with Restart and Recycle
nea-1187 BOREAS, Nuclear and Thermohydraulic LWR Burnup Simulation
nea-1523 BOXER, Fine-flux cross section condensation, 2D few group diffusion and transport burnup calculations
nea-0237 BURNY, 5 Group BWR and PWR Burnup in X-Y Geometry by Diffusion Calculation
nea-0350 BURNY-SQUID, 2-D Burnup of UO2 and Mix UO2 PuO2 Fuel in X-Y or R-Z Geometry
nea-1735 CARL 2.3, radiotoxicity, activity, dose and decay power calculations for spent fuel
ests1071 CECP, Decommissioning Costs for PWR and BWR
ccc-0544 CEPXS ONELD, 1-D Coupled Electron Photon MultiGroup System
ccc-0604 CHAINS-PC, Decay Chain Atomic Densities
nea-0451 CICLON, Neutronics Calculation for PWR Transition Fuel Cycle Management
ccc-0755 CINDER 1.05, Actinide Transmutation Calculations Code
nesc0313 CINDER, Depletion and Decay Chain Calculation for Fission Products in Thermal Reactors
nesc0387 CITATION, 3-D MultiGroup Diffusion with 1st Order Perturbation and Criticality Search
ccc-0643 CITATION-LDI2, 2-D MultiGroup Diffusion, Perturbation, Criticality Search, for PC
nesc0540 CLOTHO, Mass Flow Data Calculation for Program PACTOLUS
iaea0883 CLUB, Cell Calculation PF Candu PWR Fuel Clusters
nesc0873 COAST-4, Design and Cost of Tokamak Fusion Reactors
ests0135 COBRA-SFS CYCLE3, Thermal Hydraulic Analysis of Spent Fuel Casks
nea-1578 COMRAD96, Nuclear Fuel Burnup and Depletion Calculation System
iaea0928 COMTA, Ceramic Fuel Elements Stress Analysis
nesc0498 CONCEPT-5, Cost and Economics Analysis for Nuclear Fuel or Fossil Fuel Power Plant
nea-0427 CONDOR-3, Local and Spectrum Dependent Burnup with Mesh-Wise Depletion
iaea1226 CORD, PWR Core Design and Fuel Management
nea-0463 CRACKLE, Fast Reactor Pu Fuel Management
iaea0873 CRITIC, In-Core Fuel Management for CANDU PWR
nea-0151 DANCOFF-3, Dancoff Correction for Cylindrical Fuel Rod at H2O Gaps and for Fuel Clusters
ccc-0640 DCHAIN, Isotope Buildup and Isotope Decay by 1 Point Approximation
nea-0664 DCHAIN, Isotope Buildup and Isotope Decay by 1 Point Approximation
nea-1603 DCHAIN-SP 2001, Code System for Analyzing Decay and Build-up Characteristics of Spallation Products
nea-0446 DELIGHT-7, Point Reactivity Burnup for HTGR Lattice with P1 Neutron Scattering Approximation
psr-0523 DEPLETOR Version 2, provides depletion capability to the Purdue Advanced Reactor Core Simulator (PARCS) code
nea-0298 DISCOUNT-G, Nuclear Power Program with Cost Analysis and Pu Production Optimization
nesc0579 DWARF, 1-D Few-Group Neutron Diffusion with Thermal Feedback for Burnup and Xe Oscillation
nea-1683 ERANOS 2.0, Modular code and data system for fast reactor neutronics analyses
nea-0534 EREBUS, Burnup by 2-D MultiGroup Neutron Diffusion with Criticality Search
nea-0341 ERUPT, 2-D 2 Group Fuel Management in R-Z Geometry with Fuel Shuffling
nea-0617 FAPMAN-IC, LWR Fuel Cost Analysis with Program ORSIM Interface
nea-0693 FAPMAN-ORSIM, General Cost Optimization for System of Nuclear Power Plants
nea-1080 FEMAXI-6, Thermal and Mechanical Behaviour of LWR Fuel Rods
nea-0897 FISP-6, Fission Products Inventory and Energy Release in Irradiated Fuel
nea-0706 FISPIN, Isotope Buildup and Isotope Decay for Actinides, Fission Products, Structure Materials
nea-0235 FLARE-JAERI, 3-D BWR and ATR Simulation
ccc-0603 FPZD, Reactor Burnup by MultiGroup Neutron Diffusion
nesc0301 FREVAP-6, Metal Fission Products Release from HTGR Fuel Elements
nea-0314 FURNACE-J, 2-D Diffusion Burnup for Fast Reactors from JAERI Fast-Set
nesc0223 GAD-2, Fuel Cycle Depletion Calculation with Partial Refueling and Fuel Recycling
nesc0576 GEM, Fuel Cycle Cost and Economics for Thermal Reactor, Present Worth Analysis
nesc0711 GEOCOST-BC, Geothermal Power Plant Electricity Generator Cost, Thermodynamics Calculation
iaea1222 HAMCIND, Cell Burnup with Fission Products Poisoning
nea-0176 HETERO, Flux and Power Distribution in Thermal Reactor by 3-D, 2 Group Line Source Sink Method
nea-0100 HYTHEST, Dependence of Fuel Fabrication Tolerances on Hydraulics of BWR, PWR
nea-0353 ICON, Reactor Operation Fission Products Inventory Calculation
nea-1340 INVENT-STUDSVIK, Fission Products Abundances in U235, U238, Pu239 Samples
nea-0434 ISOTEX-1, Time-Dependent Heavy Isotope and Fission Products Concentration in U Reactor or Pu Reactor
nea-0624 JOSHUA, Neutronics, Hydraulics, Burnup, Refuelling of LWR
nea-0288 KERBREK, Fuel Cycle Cost Analysis for Power Reactor
nea-1001 KORIGEN, Isotope Inventory, Radiation Heat from PWR Burnup
nea-0417 KOSAK, Power Plant Cost Optimization with Pu Availability Option
nea-0441 KPD, Time-Dependent Fuel Cycle Cost Calculation for Various Reactor Types
nesc0249 LASER, Slowing-Down Neutron Spectra and Burnup for Thermal Reactors, Neutron Transport Theory
nea-0573 LASER-PNC, Neutron Spectra in Uniform Lattice with Burnup Calculation
ccc-0343 LEOPARD-MICRO, Spectrum-Dependent Non-Spatial Fuel Depletion
nea-0965 LOLA-SYSTEM, JEN-UPM PWR Fuel Management System Burnup Code System
nesc9449 LPGC, Levelized Steam Electric Power Generator Cost
ccc-0631 LWRARC, PWR and BWR Spent Fuel Decay Heat Generator
nea-1643 MCB1C, Monte-Carlo Continuous Energy Burnup Code
iaea0889 MCRAC, In Core Fuel Management, Program of PFMP System
nesc9479 MGA, Pu Isotope Abundance from Multichannel Analyzer Gamma Spectra
psr-0455 MONTEBURNS 2.0: An Automated, Multi-Step Monte Carlo Burnup Code System
nesc0798 MSF21/VTE21, Desalination Plant Heat, Mass Balance, Design, Cost Optimization
nea-1845 MURE, MCNP Utility for Reactor Evolution: couples Monte-Carlo transport with fuel burnup calculations
iaea1411 NAAPRO, Neutron Activation Analysis Prognosis and Optimization code
nesc0146 NPRFCCP, Fuel Cycle Cost and Economics for Multi-Region Reactor
nesc0683 NUFUEL, Conditions for Power Production, U Fuel, Pu Recycle and Reprocessing
nesc0588 ORCOST-2, PWR, BWR, HTGR, Fossil Fuel Power Plant Cost and Economics
nea-1324 OREST, LWR Burnup Simulation Using Program HAMMER and ORIGEN
ccc-0371 ORIGEN-2.2, Isotope Generation and Depletion Code Matrix Exponential Method
ccc-0702 ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability
nea-0622 ORIGEN-JR, Radiation Source and Nuclide Transmutation with In-Core Burnup
nea-1880 ORIP-XXI, isotope transmutation simulations
nesc0699 ORSIM, Nuclear Fuel, Fossil Fuel Hydroelectric Power Plant Cost and Economics
nesc0540 PACTOLUS, Nuclear Power Plant Cost and Economics by Discounted Cash Flow Method
nea-0521 PAS-1, 2-D, 3-D Linear Static and Dynamic Stress Analysis with 2-D Steady-State Temperature Distribution
iaea0819 PELINOMIC, Power Plant Cost Optimization for Dispersed Load Centres
nea-1339 PEPIN, Methodology for Computing Concentrations, Activities, Gamma-Ray Spectra, and Residual Heat from Fission Products.
nesc0454 PHENIX, 2-D MultiGroup Diffusion Fast Reactor Burnup Calculation and Fuel Cycle Analysis
nea-1663 PLUTON, Isotope Generation and Depletion in Highly Irradiated LWR Fuel Rods
nesc0340 POWERCO, Nuclear Power Plant Electricity Cost and Economics
nea-1675 PPICA, Power Plant Investment Cost Analysis
iaea0888 PSU-LEOPARD, Program LEOPARD in PFMP System, Fast Neutron and Thermal Neutron Spectra Calculation
nesc0441 PWCOST, Fuel Cycle Cost and Economics by Present Worth Levelized Method
ccc-0639 RACC-PULSE, Neutron Activation in Fusion Reactor System
ccc-0627 RADAC, Radioactive Decay and Accumulation of Long Lived Isotopes
nea-0475 RASPA, Burnup with Fission Products Inventory, Gamma Spectra, Isotopic Power Density
ccc-0443 REAC*3, Isotope Activation and Transmutation in Fusion Reactors
ccc-0708 REBUS-PC 1.4, Code System for Analysis of Research Reactor Fuel Cycles
ccc-0653 REBUS3/VARIANT8.0, Code System for Analysis of Fast Reactor Fuel Cycles
ests0176 RECAP, Replacement Energy Cost for Short-Term Reactor Plant Shut-Down
nesc1065 REFCO83, Nuclear Fuel Cycle Cost Economics Using Discounted Cash Flow Analysis
nea-0262 REFLOS, Fuel Loading and Cost from Burnup and Heavy Atomic Mass Flow Calculation in HWR
nea-1231 REFREP, Near-Field Model for Spent Fuel Repository
nea-0101 REP-3, Time-Dependent Xe and Sm Poisoning from Space-Dependent Flux Distribution
ccc-0137 RIBD, Fission Products Inventory and Delay Heat in Fast Reactors, with Data Library
ccc-0382 RIBD-IRT, Isotope Buildup and Isotope Decay from Fission Source
nea-0239 RIBOT-5, 0-D Burnup for 5 Group BWR or PWR Lattice
nea-0589 RICE-CEGB, Long-Term Actinides and Fission Products Inventory of Irradiated Fuel
nesc0831 RO-75, Reverse Osmosis Plant Design Optimization and Cost Optimization
nea-0598 RSYST, Modular System for Reactor Core and Shielding Problems
nea-1078 SACHET, Dynamic Fission Products Inventory in PWR Multiple Compartment System
nea-1779 SAGEP-FR, Sensitivity Analysis of Fast Reactor Parameters
ccc-0785 SCALE 6.1.2, Modular system for criticality, shielding, source term, fuel depletion/decay, inventories, reactor physics
iaea0913 SCENARIOS, Simulation of Reactor Introduction and Operation Scenario Needs
nea-0235 SCOPERS-2, BWR and PWR Core Performance Simulation
nea-1840 SERPENT 1.1.7, 3-D continuous-energy Monte Carlo reactor physics burnup calculation, lattice physics applications
iaea0925 SHARDA, Thermal Reactor Isotope Irradiation Analysis
nea-1767 SMAFS, Steady-state analysis Model for Advanced Fuelcycle Schemes
nea-0450 SOTHIS, PWR Fuel Cycle Equilibrium Cost Evaluation
nea-0374 SPES, Fuel Cycle Optimization for LWR
nea-0842 SRAC-95, Cell Calculation with Burnup, Fuel Management for Thermal Reactors
iaea0882 STAR, Fuel Management of BWR
iaea0900 STOFFEL-1, Steady-State In-Pile Behaviour of Cylindrical H2O Cooled Oxide Fuel Rod
nea-1151 SUSD, Sensitivity and Uncertainty in Neutron Transport and Detector Response
nea-1628 SUSD3D, 1-, 2-, 3-Dimensional Cross Section Sensitivity and Uncertainty Code
nea-1698 SWAT, Step-Wise Burnup Analysis Code System to Combine SRAC-95 Cell Calculation Code and ORIGEN2
iaea0872 TACHY, BWR Fuel Management by 2-D Coarse Mesh Neutron Diffusion
iaea1338 TEMPUL, Temperature Distribution in Fuel Element after Pulse
nea-0486 TOTEM, Demand Assessment for Nuclear Power Plants and Conventional Power Plants
iaea1214 TRIGAC, Flux and Power Distribution and Burnup for TRIGA Reactor
nea-0415 TRITON, 3-D Multi-Region Neutron Diffusion Burnup with Criticality Search
iaea0884 TRIVENI, 3-D Fuel Management for PHWR CANDU
ccc-0654 VENTURE-PC 1.1, Reactor Analysis System with Sensitivity and Burnup
nea-1856 VESTA 2.1.5, Monte Carlo depletion interface code and AURORA 1.0.0, Depletion analysis tool
iaea0871 VPI-NECM, Nuclear Engineering Program Collection for College Training
nea-0655 VSOP, Neutron Spectra, 2-D Flux Synthesis, Fuel Management, Thermohydraulics Calculation
nea-0072 ZADOC, 2 Group Time-Dependent Burnup in X-Y Geometry with Fuel Management
iaea0912 ZZ AMZ, 70-Group 40 Isotope Multigroup Library for Fast Reactor Calculation
dlc-0089 ZZ LUMP, Lumped Fission Product Cross-Section Library for Fast Reactor Analysis from ENDF/B-V
dlc-0038 ZZ ORYX-E/38B, Group Constant Library from ENDF/B Fission Product Data for ORIGEN Calculation
Pembroke Pines Calculus Tutor
...I graduated from college in 3 years with a double major in Math and Psychology. I was a Magna Cum Laude graduate as an undergrad while playing basketball for an NCAA National Champion. My
oldest son graduated in the top 1% of his class while being a 4 year starter in basketball at the varsity level.
30 Subjects: including calculus, geometry, ASVAB, GRE
...Algebra 2 is a very important subject and has the foundation for all future math. If this subject is mastered, students can learn easier higher levels of math. With the experience I have
teaching Algebra 2, in high school and also one on one with my students, I am able to see exactly where the student is at.
48 Subjects: including calculus, chemistry, reading, French
...You can sit in the tutorial sessions and be involved in the learning process. This is highly encouraged. The basic concepts of BIOLOGY, CHEMISTRY, and PHYSICS will be addressed.
41 Subjects: including calculus, chemistry, Spanish, physics
...I was a symphony cellist for 10 years and am currently a teacher for Broward County Public Schools. I played for Chicago youth symphony, Sangamon Valley youth symphony, and professionally for
The Jacksonville Symphony in Illinois. I have excellent pitch.
27 Subjects: including calculus, chemistry, physics, statistics
...You're never too old or too young to laugh while learning math! Calculus I is primarily concerned with understanding the idea of a derivative, techniques of differentiation, and applications of
derivatives. Calculus 2 covers integration in the same manner.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
Mplus Discussion >> Number of cases for WLSMV estimation
Dieter Urban posted on Monday, November 22, 1999 - 3:11 pm
In a SEMNET discussion this year G. Gregorich stated: "It (the WLSMV estimator) appears to have good small sample performance for relatively small models, but I believe that it still requires N > p*
(where p = the number of manifest variables and p* = p[p+1]/2). That is because a central matrix of order p* x p* needs to be inverted and thus must be positive-definite." Would the authors of Mplus
support that fully?
Bengt O. Muthen posted on Wednesday, November 24, 1999 - 5:07 pm
It is not the case that we must have n > p* for p* = p(p+1)/2 for WLSM or WLSMV. Although the weight matrix has p* rows, this matrix need not be inverted in WLSM or WLSMV, only in WLS. However, the quality of estimates may not be good for n < p*. Simulation studies are needed.
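(To make the quantity concrete, a throwaway Python sketch, not Mplus syntax, of how fast p* = p(p+1)/2, the order of the full weight matrix, grows with the number of manifest variables p:)

# p* = p(p+1)/2 is the number of distinct second-order moments, i.e. the
# order of the full weight matrix that full WLS must invert.
for p in (5, 10, 20, 40):
    p_star = p * (p + 1) // 2
    print(f"p = {p:2d}   p* = {p_star:4d}")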
Anonymous posted on Saturday, August 16, 2003 - 9:00 am
Why doesn't the weight matrix need to be inverted for WLSM and WLSMV?
Is WLSM equivalent to applying the Satorra-Bentler correction with the WLS estimator?
How is WLSMV different?
Bengt O. Muthen posted on Saturday, August 16, 2003 - 4:43 pm
Weight matrices are inverted in the generalized least squares fitting function. WLSM and WLSMV however do not use GLS but a diagonal weight matrix. In the standard error and chi-square computation,
the full weight matrix is involved but not inverted.
Yes, WLSM is analogous to the Satorra-Bentler correction to the WLS estimator with continuous outcomes.
WLSMV is different because it improves the chi-square approximation by not only adjusting the mean but also adjusting the variance.
Valeriana posted on Monday, March 06, 2006 - 11:52 am
ADF or WLS estimation requires very large samples.
WLSM / WLSMV / DWLS methods share the characteristic of using only the diagonal of the weight matrix, so they are better suited to small samples.
I'd like to know how I can compute the sample size for these methods that use only the diagonal of the matrix.
Do you have any reference?
Linda K. Muthen posted on Monday, March 06, 2006 - 2:18 pm
Are you asking how many subjects you would need for WLSMV for example? Or are you asking for a study where the number of subjects needed has been studied?
Valeriana posted on Monday, March 06, 2006 - 2:31 pm
I'd like to know how many subjects I need. Just as we have the formula for the sample size needed for ADF estimation, p + p(p+1)/2, don't we have something similar for the methods which use only the
diagonal of the weight matrix?
Linda K. Muthen posted on Monday, March 06, 2006 - 4:58 pm
I am not aware of such a formula. We recommend a simulation study to determine the number of subjects needed because the number of subjects needed depends on so many factors. See the following paper:
Muthén, L.K. & Muthén, B.O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599-620.
[FOM] Fictionalism About Mathematics
Harry Deutsch hdeutsch at ilstu.edu
Sat Mar 10 18:36:58 EST 2012
I didn't mean to imply that fictionalism is dominant. But it is a respectable doctrine--or so it is supposed by many philosophers of mathematics, including those who are not engaged in the debate. A survey of prominent views in current philosophy of mathematics would have to include fictionalism. What Richard says about the debate on fictionalism--that some defend it, some oppose it, some ignore it, and so on,-- could be said about any controversial doctrine in philosophy.
On Mar 10, 2012, at 4:30 PM, Richard Heck wrote:
> On 03/10/2012 03:29 PM, Harry Deutsch wrote:
>> The view that mathematical objects are fictitious and that "strictly speaking" seemingly true mathematical statements such as 5 + 6 = 11 are false, though they are true in the "story" of mathematics, is currently a very popular philosophy of mathematics among philosophers. The claim is that such fictionalism solves the epistemological problem of how mathematical knowledge is possible, and it solves the semantical problem of providing a uniform semantics for both mathematical and non-mathematical discourse. Fictionalists have also tried to address the obvious question of how, if mathematics is pure fiction, it nonetheless manages to be so useful in the sciences and in daily life. But I won't go into that here. My question is this: How do mathematical logicians and mathematicians in general react to this fictionalist doctrine? I realize that it may not be clear whether or how the doctrine might affect foundations or one's view of foundations. But I thought I would address
>> this question to the FOM group since work in foundations and work in the philosophy of mathematics are intertwined. Let me put it this way: This fictionalism about mathematics is taken very seriously by philosophers of mathematics, but I doubt that mathematicians would find it at all appealing.
> For what it's worth, I don't know how true this characterization is. There are philosophers of mathematics, some of them quite prominent, who defend fictionalist views, and there are others, also quite prominent, who oppose them. Then there are others who ignore the whole debate, who find the fictionalist line sufficiently implausible, or what have you, to be bothered with it. And among those, you will likely find many who would agree with Lewis's famously funny rebuttal of fictionalism and its kin in *Parts of Classes*.
> One would obviously have to take some kind of formal poll to find out what the percentages are, but I'm not convinced myself that fictionalism is taken seriously by anything like the majority of philosophers of mathematics.
> Richard
> --
> -----------------------
> Richard G Heck Jr
> Romeo Elton Professor of Natural Theology
> Brown University
> Check out my book Frege's Theorem:
> http://tinyurl.com/fregestheorem
> Visit my website:
> http://frege.brown.edu/heck/
Algorithmic Thermodynamics
Posted by John Baez
Mike Stay and I would love comments on this paper:
John Baez and Mike Stay, Algorithmic Thermodynamics.
We got into writing this because Mike has worked a lot on algorithmic information theory. Gregory Chaitin has described this subject as “the result of putting Shannon’s information theory and
Turing’s computability theory into a cocktail shaker and shaking vigorously.” In particular, this theory has revealed a deep link between ‘algorithmic entropy’ and the notion of entropy used in
information theory and thermodynamics.
The algorithmic entropy of a string of bits is a measure of how much information that string contains. It's a bit subtler than the length of the shortest program that does the job. But it's close: the
difference between these is bounded by a constant. Anyway, the idea is that a random string like this:
110100111001011100010101101100101101
has more algorithmic entropy than one with a simple pattern, like this:
010101010101010101010101010101010101
On the other hand, in thermodynamics we define the Gibbs entropy of a probability measure $p$ on a countable set $X$ to be
$S(p) = - \sum_{x \in X} p(x) \ln p(x)$
If we think of $X$ as a set of possible outcomes of an experiment, the entropy says how much information we gain — on average — by learning the result of the experiment, if beforehand all we knew was
that each outcome $x$ happened with probability $p(x)$.
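(A throwaway Python version of this formula, just to fix conventions: natural log, so entropy comes out in nats.)

import math

def gibbs_entropy(p):
    """S(p) = -sum_x p(x) ln p(x); terms with p(x) = 0 contribute nothing."""
    return -sum(px * math.log(px) for px in p if px > 0)

print(gibbs_entropy([0.5, 0.5]))   # ln 2 ~ 0.693: a fair coin flip
print(gibbs_entropy([0.9, 0.1]))   # ~ 0.325: a biased coin is less surprising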
If the set of outcomes is a set of bit strings, or some other set of ‘signals’, Gibbs entropy has another name: Shannon entropy. Gibbs was interested in chemistry and steam engines. Shannon was
interested in communication, and measuring the amount of information in a signal. But the math is the same. This is a great story in itself! If you don’t know this story, try the links.
On the other hand, algorithmic entropy seems really different. Why? Both algorithmic entropy and Shannon entropy can be used to measure the amount of information in bit strings. But algorithmic
entropy can measure the information in a single bit string, while Shannon entropy applies only to probability measures on bit strings.
Nonetheless, there are deep relationships between these two kinds of entropy. So, when I started talking about thermodynamics as part of a big set of analogies between different theories of physics
in week289, Mike naturally wondered if algorithmic information theory should become part of this story. And that’s how this paper was born!
Here’s the idea:
Abstract: Algorithmic entropy can be seen as a special case of entropy as studied in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics
to the subject of algorithmic information theory. In particular, suppose we fix a universal prefix-free Turing machine and let $X$ be the set of programs that halt for this machine. Then we can
regard $X$ as a set of ‘microstates’, and treat any function on $X$ as an ‘observable’. For any collection of observables, we can study the Gibbs ensemble that maximizes entropy subject to
constraints on expected values of these observables. We illustrate this by taking the log runtime, length, and output of a program as observables analogous to the energy $E$, volume $V$ and
number of molecules $N$ in a container of gas. The conjugate variables of these observables allow us to define quantities which we call the ‘algorithmic temperature’ $T$, ‘algorithmic pressure’
$P$ and ‘algorithmic potential’ $\mu$, since they are analogous to the temperature, pressure and chemical potential. We derive an analogue of the fundamental thermodynamic relation
$d E = T d S - P d V + \mu d N,$
and use it to study thermodynamic cycles analogous to those for heat engines. We also investigate the values of $T, P$ and $\mu$ for which the partition function converges. At some points on the
boundary of this domain of convergence, the partition function becomes uncomputable. Indeed, at these points the partition function itself has nontrivial algorithmic entropy.
As the first sentence hints, one of the fun things we noticed is that algorithmic entropy is a special case of Gibbs entropy — but only if we generalize a bit and use relative entropy. They say
“everything is relative”. I don’t know if that’s true, but it’s sure true for entropy. Here’s the entropy of the probability measure $p$ relative to the probability measure $q$:
$S(p,q) = - \sum_{x \in X} p(x) \ln \left(\frac{p(x)}{q(x)}\right)$
This says how much information we gain when we learn that the outcomes of an experiment are distributed according to $p$, if beforehand we had thought they were distributed according to $q$.
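(Extending the sketch above. Note that with this sign convention, $S(p,q)$ is minus the usual Kullback-Leibler divergence, so it is zero when $p = q$ and negative otherwise.)

import math

def relative_entropy(p, q):
    """S(p, q) = -sum_x p(x) ln(p(x)/q(x)), i.e. minus the KL divergence."""
    return -sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

print(relative_entropy([0.5, 0.5], [0.5, 0.5]))   # 0.0: nothing learned
print(relative_entropy([0.5, 0.5], [0.9, 0.1]))   # ~ -0.511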
Another fun thing about this paper is how it turns around an old idea of Charles Babbage. His Analytical Engine was a theoretical design for a computer powered by a steam engine. We describe a
theoretical design for a heat engine powered by programs!
Posted at February 14, 2010 6:37 PM UTC
Re: Algorithmic Thermodynamics
Quoting from the paper:
“Charles Babbage described a computer powered by a steam engine; we describe a heat engine powered by programs! We admit that the significance of this line of thinking remains a bit mysterious.
However, we hope it points the way toward a further synthesis of algorithmic information theory and thermodynamics. We call this hoped-for synthesis ‘algorithmic thermodynamics’.”
Though programs can be appreciated abstractly, I think people lose sight of the physical implementation of a program: the unfolding of a stored electromagnetic configuration on a hard drive. I think
it was Bennett who said that a computable operation was no longer reversible after a release of heat had occurred.
This reminded me of the other paper you guys collaborated on, something like: Does the HUP imply Gödelian Incompleteness?
Posted by: Stephen Harris on February 15, 2010 5:46 AM
Re: Algorithmic Thermodynamics
Does the HUP imply Gödelian Incompleteness?
That was actually Cris Calude and me.
In this latest paper, John and I don’t try to connect physical thermodynamics to algorithmic thermodynamics; I certainly intend to look into it, though!
Posted by: Mike Stay on February 15, 2010 6:15 AM
Re: Algorithmic Thermodynamics
Li and Vitanyi talk about an algorithmic approach to physical thermodynamics by considering the shortest program that outputs a description of the physical state. See chapter 8 of their book we cite.
Posted by: Mike Stay on February 15, 2010 6:31 AM
Re: Algorithmic Thermodynamics
Apparently John’s having trouble posting, so he asked me to post this for him:
Mike wrote:
In this latest paper, John and I don’t try to connect physical thermodynamics to algorithmic thermodynamics…
People could find this confusing. After all, we show that the math of algorithmic thermodynamics is formally identical to the math of physical thermodynamics! So we do connect them in that way.
But I think you mean this: nothing in our paper has anything to do with the physical implementation of programs on hardware, or the application of thermodynamics to the study of hardware. So in that
sense, algorithmic thermodynamics remains unconnected to the physical thermodynamics of computers.
Which is indeed something that would be good to fix, someday….
Posted by: Mike Stay on February 15, 2010 8:04 AM
Re: Algorithmic Thermodynamics
Yes, I did read too much into the paper to arrive at a stronger physical connection. Perhaps I also over-interpreted the Wikipedia article on Shannon entropy and Information Theory. I also remember
reading somewhere that Shannon constructed his entropy to look like Boltzmann entropy.
“But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics
and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the
thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a
description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of
possible microscopic states that it could be in, thus making any complete state description longer.”
Introduction to Algorithmic Information Theory by Nick Szabo
“Recent discoveries have unified the fields of computer science and information theory into the field of algorithmic information theory. This field is also known by its main result, Kolmogorov
complexity. Kolmogorov complexity gives us a new way to grasp the mathematics of information, which is used to describe the structures of the world.”
SH: I guess I jumped to my conclusion because I’ve read some descriptions not worded precisely enough.
Posted by: Stephen Harris on February 15, 2010 9:18 AM
Re: Algorithmic Thermodynamics
An aside…
I am a big fan of Jaynes, so I’m happy whenever I see his name pop up.
Posted by: Eric Forgy on February 15, 2010 9:32 AM
Re: Algorithmic Thermodynamics
I haven’t read the paper yet, but while I have a passing moment here, I hope to quickly relate this to a recent challenge I posed in a series of articles:
Information is physical and the ability to transmit wireless information is related to the ability to transmit electromagnetic waves. It would be interesting to study information theory in relation
to electromagnetic theory.
Time's up…
Posted by: Eric Forgy on February 15, 2010 7:03 AM
Re: Algorithmic Thermodynamics
Nice paper!
When talking about "programs", you do not distinguish executable code and data - it is customary not to make this distinction in the present context, but maybe some of your readers will be thankful if
you explain this in the introduction :-)
A prefix-free Turing machine is one whose halting programs form a prefix-free set.
There is something here that I don’t get: If the machine halts if fed with a program X, it is supposed not to halt for every other program that starts with X?
(That sounds strange to me, maybe it’s easier for me to think of equivalence classes of programs instead, I’ll try that).
Some misprints:
p.8, double “the”: We use μ/T to stand for the the conjugate variable of N.
p.10, verb should denote plural: Then simple calculations, familiar from statistical mechanics [17], shows…
p.10, missing “to” in “useful to think”: To build intuition, it is useful think of the entropy S
p.10: One “the” too much: The algorithmic temperature, T , is the roughly the number of
[John Baez: Thanks for all the corrections! For some reason I’m not being allowed to post comments. Until this gets fixed, I will have to manifest myself as ‘the ghost in the machine’.
You write:
When talking about “programs”, you do not distinguish executable code and data…
Yeah, sorry. Our way of speaking would probably be understood as convenient slang among people who work on algorithmic information theory. But since we’re hoping to expand our audience beyond those
folks, we should be clearer.
If everyone blogged about their papers before publishing them, everyone would see how many obstacles to understanding their papers contain!
There is something here that I don’t get: If the machine halts if fed with a program X, it is supposed not to halt for every other program that starts with X?
The issue of ‘halting’ is not really the main point here. The key idea is that a program should not contain another as an initial segment. So intuitively, just imagine that every program must end
with a string that says
THE END
written out in binary.
As explained in the quote by Mike here, this is just a convenient trick for making
$\sum_X 2^{-|X|}$
converge when we sum over all programs $X$. This lets us define a probability measure on programs — or more precisely, on the set of things we're allowed to feed our universal Turing machine! That's
very convenient.
Mike’s thesis attributes this trick to Chaitin, while the Wikipedia article says “There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on
self-delimiting programs and is mainly due to Leonid Levin (1974).”]
Posted by: Tim van Beek on February 15, 2010 9:41 AM
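(To make the ghost's point concrete: a small Python check that a prefix-free set of codewords satisfies Kraft's inequality, i.e. the weights $2^{-|x|}$ sum to at most 1 and so can serve as probabilities. The example codewords are arbitrary.)

def is_prefix_free(strings):
    """No string in the set is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in strings for b in strings)

def kraft_sum(strings):
    """Sum of 2^(-|x|) over the codewords; at most 1 for a prefix-free set."""
    return sum(2.0 ** -len(s) for s in strings)

programs = ["0", "10", "110", "111"]
print(is_prefix_free(programs), kraft_sum(programs))   # True 1.0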
Re: Algorithmic Thermodynamics
When you show the two strings above and say that the uppermost has more algorithmic entropy, my first thought is that this is just a new name for Kolmogorov complexity, an impression that sticks with
me for a while as I'm reading. On page 6 you make the connection to Kolmogorov (algorithmic information), but I wonder if it wouldn't have been more intuitive to some if Kolmogorov was there from
page 1.
This is the problem with uniting ideas in different fields I guess. What’s “intuitive” depends on your vantage point :-)
[John Baez: For some reason I’m not being allowed to post comments. But I can still edit other people’s comments. So, I’ll manifest myself in this unconventional way. Sorry.
You’re right: ‘Algorithmic entropy’ is almost the same as ‘Kolmogorov complexity’, and we should mention this.
The phrases ‘descriptive complexity’, ‘Kolmogorov-Chaitin complexity’, ‘Solomonoff-Kolmogorov-Chaitin complexity’, ‘stochastic complexity’, and ‘program-size complexity’ are other near-synonyms.
Unfortunately, I don’t think everyone agrees on what all these terms mean! And in our work the details matter. Mike’s thesis summarizes the story:
In the mid-1960’s, Kolmogorov, Solomonoff, and Chaitin independently proposed the idea of using programs to describe the complexity of strings. This gave birth to the field of algorithmic
information theory (AIT). The Kolmogorov complexity KU(s) of a string s is the length of the shortest program in the programming language U whose output is s. Solomonoff proposed a weighted sum
over the programs, but it was dominated by the shortest program, so his approach and Kolmogorov’s are roughly equivalent. Chaitin added the restriction that the programs themselves must be
codewords in an instantaneous code, giving rise to prefix-free AIT. In this model, complexities become true probabilities, and Shannon’s information theory applies directly.
In our paper, we fix a universal prefix-free Turing machine $U$, and define the algorithmic entropy of a natural number $n$ to be
$- \ln \left( \sum_{x : U(x) = n} 2^{-|x|} \right)$
Here we are summing over all bit strings $x$ that give $n$ as output, and we use $|x|$ to mean the length of the bit string $x$.
If one program x having n as output is a lot shorter than all the rest, then the algorithmic entropy of n is about (ln2) |x|. And in this case its Kolmogorov complexity is just |x|.
More generally, the Kolmogorov complexity is just $|x|$ where $x$ is the shortest program with output $n$, and there's a bound on the difference between the Kolmogorov complexity and the algorithmic entropy divided by ln 2. The ln 2 is no big
deal: it shows up merely because physicists prefer natural logarithms, while computer scientists prefer base 2.
To make the above ideas precise, let me define the concept of a ‘universal prefix-free Turing machine’. This idea plays a basic role in modern algorithmic information theory.
Let me use string to mean a bit string, that is, a finite, possibly empty, list of 0’s and 1’s. If x and y are strings, let xy be the concatenation of x and y. A prefix of a string z is a substring
beginning with the first letter, that is, a string x such that z = xy for some y.
A prefix-free set of strings is one in which no element is a prefix of any other. The domain dom(M) of a Turing machine M is the set of strings that cause M to eventually halt. We call the strings in
dom(M) programs. We assume that when M halts on the program x, it outputs a natural number M(x). Thus we may think of the machine M as giving a function M from dom(M) to the natural numbers.
A prefix-free Turing machine is one whose halting programs form a prefix-free set. A prefix-free machine U is universal if for any prefix-free Turing machine M there exists a constant c such that for
each string x, there exists a string y with
U(y) = M(x)
|y| ≤ |x| + c.
This gobbledygook says that U can simulate M.
All this stuff and more is in our paper!
Challenge for category theorists: create a category where a ‘universal’ Turing machine actually has some universal property. If the Turing machine M can simulate the Turing machine N, we should have
a morphism from M to N. Then a universal Turing machine should be something like a weak initial object: it has a morphism, not necessarily unique, to every other Turing machine.]
Posted by: M on February 15, 2010 12:15 PM
Re: Algorithmic Thermodynamics
cf. What’s “explicit” depends on your vantage point. :-) Which leads to controversies such as: my explicit is more explicit than your explicit. For a more reasoned treatment, see Barry Mazur’s
`Visions, Dreams and Mathematics’ available from his web page.
Posted by: jim stasheff on February 15, 2010 1:00 PM
Re: Algorithmic Thermodynamics
M wrote: “When you show the two strings above and say that the uppermost has more algorithmic entropy, my first thought is that this is just a new name for kolmogorov complexity, an impression that
sticks with me for a while as I’m reading. On page 6 you make the connection to kolmogorov (algorithmic information) but I wonder if it wouldn’t have been more intuitive to some, if kolmogorov was
there from page 1.”
SH: Your intuition is correct, at least as far as what is usually meant by algorithmic entropy is concerned.
In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov-Chaitin complexity, stochastic complexity,
*algorithmic entropy*, or program-size complexity) of an object such as a piece of text is a measure of the computational resources needed to specify the object.
Posted by: Stephen Harris on February 15, 2010 8:30 PM
Re: Algorithmic Thermodynamics
When I started my post (above) which covers part of JB’s reply, JB’s post had not yet appeared or I wouldn’t have duplicated his superior effort.
Posted by: Stephen Harris on February 15, 2010 8:46 PM
Re: Algorithmic Thermodynamics
The ghost wrote:
So intuitively, just imagine that every program must end with a string that says THE END written out in binary.
Ok, I think I get what you mean - frankly that is something I already suspected - and I thought about allowing a program to continue after the end marker, let the machine ignore everything after the
endmarker, consider two programs as equivalent iff the part before the endmarker is equal and work with these equivalence classes.
Somehow that seemed more “natural” to me than demanding that the machine is prefix-free (a matter of taste, I guess).
Everything after the endmarker would be called "dead code" by a practitioner; maybe I see too much of that every day :-)
Posted by: Tim van Beek on February 16, 2010 6:18 AM
Re: Algorithmic Thermodynamics
Challenge for category theorists: create a category where a ‘universal’ Turing machine actually has some universal property. If the Turing machine M can simulate the Turing machine N, we should
have a morphism from M to N. Then a universal Turing machine should be something like a weak initial object: it has a morphism, not necessarily unique, to every other Turing machine.
This is a challenge you guys solved a long, long time ago! :)
Universal computers actually need a stronger property than the one you’ve described. Since programs can treat data as programs, we don’t just want morphisms from TMs to TMs, we want the category to
have a universal object – that is, it should have an object $U$ such that every object $A$ is a retract of $U$.
Dana Scott exhibited such a model for the untyped lambda calculus way back in the late 60s (the famous $D_\infty$ construction, which gives a $D$ such that $D \simeq D \to D$). This model
construction applies to Turing machines, too, because of Kleene's $s$-$m$-$n$ theorem from recursion theory, which basically says that Turing machines/Gödel codes/whatever form a partial combinatory
algebra, which is a fancy way of saying that they can model the S and K combinators of the untyped lambda calculus.
John Longley has a nice paper, “Universal Types And What They Are Good For”, which is available at
Posted by: Neel Krishnaswami on February 16, 2010 9:33 AM
Re: Algorithmic Thermodynamics
1. I think the term “cross-entropy” is much better established than “relative entropy”. (Let’s pass over “Kullback-Leibler distance”.)
2. Would it be meaningful to ask what a phase transition means, on the algorithmic side?
[John Baez: Sorry, I’ll have to reply this way.
1. Cross-entropy may be more commonly discussed than relative entropy, but unfortunately it means something else. Given probability measures p and q on a countable set X, the cross-entropy of p with
respect to q is:
$-\sum_x p(x) \, \ln q(x)$
while the relative entropy of p with respect to q is:
$\sum_x p(x) \, \ln(p(x)/q(x))$
They’re closely related: the relative entropy of p with respect to q is the cross-entropy of p with respect to q minus the entropy of p. I like relative entropy better because I know how to define it
for any pair of probability measures p and q on a measure space:
$\int_X \ln(dp/dq) \, dp$
as long as p is absolutely continuous with respect to q, so the Radon-Nikodym derivative dp/dq is well-defined.
But some good may come of all this terminological nonsense: maybe some of our formulas would simplify if we used cross-entropy.
2. I’m definitely interested in finding and studying critical points in algorithmic thermodynamics. Manin did something similar in the papers Zoran mentioned here. Unfortunately Manin’s functions are
all uncomputable! Our partition function is computable except in the T → ∞ limit: the ‘high-temperature limit’, where programs with long runtimes count a lot so the halting problem kicks in. I would
love to extract interesting information from studying the behavior of the partition function as T → ∞, but we haven’t figured out how yet.]
Posted by: Allen Knutson on February 15, 2010 1:48 PM | Permalink | Reply to this
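The identity John states (relative entropy equals cross-entropy minus entropy) is easy to check numerically; a small Python sketch with made-up distributions on a three-point set:

from math import log

p = [0.5, 0.25, 0.25]   # two arbitrary example distributions
q = [0.25, 0.25, 0.5]

entropy       = -sum(pi * log(pi) for pi in p)
cross_entropy = -sum(pi * log(qi) for pi, qi in zip(p, q))
relative      =  sum(pi * log(pi / qi) for pi, qi in zip(p, q))

# relative entropy = cross-entropy - entropy
assert abs(relative - (cross_entropy - entropy)) < 1e-12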
Re: Algorithmic Thermodynamics
Some ideas on connections between computation/information and statistical physics/thermodynamics are contained in 3 recent papers of Yuri Manin
* Yuri Manin, Renormalization and computation I: motivation and background, arxiv 0904.4921, Renormalization and Computation II: Time Cut-off and the Halting Problem arxiv 0908.3430
* Yuri Manin, Matilde Marcolli, Error-correcting codes and phase transitions, arxiv 0910.5135
Posted by: Zoran Skoda on February 15, 2010 1:49 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
I was wondering what you get when you restrict the possible programs to cellular automata: programs that change pieces of data only according to “neighbour” data.
Posted by: Gerard Westendorp on February 15, 2010 6:02 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Since Stephen Wolfram proved (or got very, very close to proving) that some 1D cellular automata are Turing machines, what you suggest wouldn’t change much, as far as I can see.
Posted by: M on February 16, 2010 11:47 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
I haven’t had time yet to think about this deeply, but it seems to me that cellular automata do have some features that make computation look more like physics than more general programs do.
For example, the ‘algorithmic volume’ associated with cellular automata is just a constant times the number of cells.
I don’t know how you should define the halting time of a cellular automaton. You could say a cellular automaton always goes on forever, even if the data turns extremely boring after a while. On the other hand, you could also say that the program always halts after just one step: it processes a state (t) into a state (t+1). If you want to know state (t+2), you have to run the program again.
Posted by: Gerard Westendorp on February 17, 2010 10:43 PM | Permalink | Reply to this
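For concreteness, the “halts after one step” reading can be written down directly; here is a sketch of one synchronous update of an elementary (1D, two-state) cellular automaton in Python, with the Wolfram rule number as a parameter (110 gives the Rule 110 that comes up below):

def ca_step(cells, rule):
    # One synchronous update with periodic boundary conditions. Each new cell
    # depends only on the (left, self, right) neighbourhood, looked up as a
    # 3-bit index into the 8-bit rule table.
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

state = [0] * 20 + [1] + [0] * 20
for _ in range(10):              # running the "one-step program" repeatedly
    state = ca_step(state, 110)  # Rule 110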
Re: Algorithmic Thermodynamics
Posted by: Mike Stay on February 18, 2010 2:19 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
This seems a bit arbitrary for the equivalent of energy. Also, the program
For i = 1 to N: Next i
has a halting time that is proportional to N. But that seems equally unsatisfying as the equivalent of energy.
If you fill in random points in Conway’s Game of Life, you often see an initial stage in which things get tidied up. After a while, you often get a ‘boring’ state. If you could somehow quantify this, it would seem to be a better analog of energy.
Posted by: Gerard Westendorp on February 18, 2010 7:32 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Wolfram’s conjecture that the “Rule 110” CA is capable of universal computation was proved by Matthew Cook. The politics and drama surrounding that proof are, I’d say, beyond the scope of this
comment thread.
Posted by: Blake Stacey on February 16, 2010 10:05 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Do you find analogues to all the usual laws of thermodynamics? I wonder what the equivalent of a perpetual motion machine would be.
Posted by: David Corfield on February 16, 2010 9:05 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Sorry for my exceedingly naive thoughts: Let’s take a universal prefix-free Turing machine U, as in the paper.
If the set X of programs that make U halt is analogous to the set of microstates, we could define “U executes program x” as the analog of “an ideal gas is in microstate y”.
We would then need a process that takes our system from one microstate to the next. Naive idea: We could feed U the output of the program that it executed (assuming that input and output are bit
strings, i.e. when the paper says “the output is a natural number” I assume that the output is a bit string interpreted as a natural number by a fixed encoding).
If the energy of the system is the log of the runtime of the program it executes, a perpetuum mobile of the first kind would be a U and a program $x_0$ such that, if $x_{n+1}$ is the output of $U(x_n)$ for n = 1, 2, 3, …, then the execution time of $U(x_{n+1})$ is bigger than that of $U(x_n)$.
(Sorry for my lack of proficiency in tex).
After googling a bit I discovered that Michael Stay already wrote a paper about simple concrete models of prefix-free Turing machines (comes up as the first and second hit),
• Michael Stay: Very Simple Chaitin Machines for Concrete AIT
I’ll try to read it, I promise! But do the models allow the situation I described?
Posted by: Tim van Beek on February 16, 2010 5:33 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Sorry for my exceedingly naive thoughts: Let’s take a universal prefix-free Turing machine U, as in the paper.
If the set X of programs that make U halt is analogous to the set of microstates, we could define “U executes program x” as the analog of “an ideal gas is in microstate y”.
Yep, that’s the analogy.
We would then need a process that takes our system from one microstate to the next.
Not in order to do thermodynamics; one just assumes that all accessible states (i.e. within some delta of the given $E, V, N$) are equiprobable. Then the entropy is just the log of the number of
accessible states.
We do need an equation of state; that’s given by the particular choice of universal Turing machine.
After googling a bit I discovered that Michael Stay already wrote a paper about simple concrete models of prefix-free Turing machines (comes up as the first and second hit),
The languages I describe there are useful for doing things like computing exactly how many programs of a given length halt, for small lengths. At one point I considered putting an explicit
calculation alongside the description of the thermodynamic cycle in the new paper.
Posted by: Mike Stay on February 16, 2010 10:02 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Mike wrote:
At one point I considered putting an explicit calculation alongside the description of the thermodynamic cycle in the new paper.
Even if you don’t, you could add the reference to your paper and note that it explains prefix-free Turing machines in more detail; that small amount of immodest cross-reference will be forgiven :-)
(at least by people like me who did not know this notion).
Or if there is a more appropriate reference, it would be useful to have it somewhere between “To make this precise, we recall the concept of a universal prefix-free Turing machine.” and “Then we can
define some probability measures on X = dom(U) as follows.” on page 6.
Posted by: Tim van Beek on February 17, 2010 8:52 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
David Corfield wrote:
Do you find analogues to all the usual laws of thermodynamics?
Interesting question. Before I answer it, though, I should emphasize this:
In practice what really matters for a lot of thermodynamics is not those famous numbered ‘laws’, but the way we can use the formula for the Gibbs ensemble to derive a host of formulas relating the
entropy, the partition function, various observables, their mean values, variances and higher moments, and the conjugate variables of these observables! That is what we study in our paper.
But anyway, now that you mention it, I guess a lot of people will be curious about those laws. Here’s the story:
The zeroth law is true in our setup, but we haven’t mentioned that in our paper yet, since it’s mainly concerned with a single system in equilibrium — a single ensemble of programs — rather than two
in contact.
The first law is conservation of energy. We don’t have that, since we don’t have any concept of time in our setup. We could remedy this by treating computers as systems that evolve in time with time
evolution generated by a Hamiltonian.
Nonetheless we do derive an analogue of the fundamental thermodynamic relation, which is closely related to energy conservation, namely
$d E = T d S - P d V$
or the fancier version in my blog entry above. This says that the change in energy of our ensemble of programs is the change in heat minus the work done by this ensemble.
We also lack the second law, namely the idea that entropy also increases, again because we have no concept of time.
I haven’t checked, but I believe we might be able to derive the third law, namely that entropy reaches a minimum at absolute zero.
Posted by: John Baez on February 17, 2010 7:44 PM | Permalink | Reply to this
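A toy numerical illustration of that machinery, with invented runtimes standing in for a real machine’s halting programs: taking $E(x) = \ln(\mathrm{runtime}(x))$ and inverse temperature $\beta = 1/T$, the Gibbs weight of a program is $\mathrm{runtime}^{-\beta}$, and the standard identity $S = \beta \langle E \rangle + \ln Z$ can be checked directly in a few lines of Python:

from math import log, exp

runtimes = [1, 2, 2, 4, 8, 8, 8, 16]  # made-up runtimes of halting programs
beta = 1.5                            # inverse algorithmic temperature

energies = [log(t) for t in runtimes]        # E(x) = log runtime
Z = sum(exp(-beta * E) for E in energies)    # partition function
p = [exp(-beta * E) / Z for E in energies]   # Gibbs ensemble

E_mean  = sum(pi * E for pi, E in zip(p, energies))
entropy = -sum(pi * log(pi) for pi in p)

# For a Gibbs state, S = beta * <E> + ln Z (equivalently F = <E> - T S = -T ln Z).
assert abs(entropy - (beta * E_mean + log(Z))) < 1e-12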
Re: Algorithmic Thermodynamics
John wrote:
…since we don’t have any concept of time in our setup.
Surely I’m missing something obvious again, but wouldn’t the analog to the ideal gas be this: after the Turing machine finishes with one program, let it pick the next program x that it executes according to the probability p(x) given by the Gibbs ensemble?
This provides you with a discrete notion of time, without specifying microdynamics, and the analog to the coupling to a heat bath would be the provider of the programs that lets the Turing machine pick the next program. (Too easy to be true, but I do not see what is wrong with the picture nevertheless.)
Posted by: Tim van Beek on February 18, 2010 4:03 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Tim van Beek writes:
Surely I’m missing something obvious again, but wouldn’t the analog to the ideal gas be this: After the Turing machine finishes with one program, let it pick the next program $x$ that it executes
according to the probability $p(x)$ given by the Gibbs ensemble?
You can do this for any Gibbs ensemble, or indeed any probability measure whatsoever. But this is not how the dynamics of an ideal gas actually works. The molecules in the gas do not jump discretely
from one state to another in a random manner: they move continuously following some differential equations describing the laws of physics, either classical or quantum. It’s these underlying dynamical
laws that are based on the concept of energy, and automatically yield conservation of energy. In a random jump dynamics there would be no conservation of energy except in an average statistical sense.
In short: inventing a random jump dynamics based on the probabilistic description is too easy to be much fun, and it doesn’t get you much. A deeper analogy between algorithmic thermodynamics and
ordinary thermodynamics might involve treating a piece of hardware as an actual physical system obeying some differential equations. Then there would really be a god-given concept of energy, time
evolution, energy conservation and so on.
But anyway, that’s not what we’re studying in this paper.
Posted by: John Baez on February 18, 2010 9:04 PM | Permalink | Reply to this
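John’s point about the “average statistical sense” is easy to see numerically (the energies below are invented): a jump dynamics that samples each next state independently from the Gibbs distribution violates energy conservation freely at each step, while the running average stays put.

import random
from math import log, exp

energies = [log(t) for t in [1, 2, 4, 8, 16]]  # toy E = log runtime values
beta = 1.0
weights = [exp(-beta * E) for E in energies]

# Jump dynamics: every "time step" draws a fresh state from the Gibbs ensemble.
trajectory = random.choices(energies, weights=weights, k=10000)

print(max(trajectory) - min(trajectory))  # large: E is not conserved step to step
print(sum(trajectory) / len(trajectory))  # stable: only <E> is conserved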
Re: Algorithmic Thermodynamics
John wrote:
A deeper analogy between algorithmic thermodynamics and ordinary thermodynamics might involve treating a piece of hardware as an actual physical system obeying some differential equations.
These are the dots I did not connect; I thought it was all about describing the Turing machine by some sort of microdynamics - which would have to be a discrete-time system, assuming that the set of atomic operations it can perform is finite, i.e. that there is a program x where the runtime of U(x) attains its minimum - which is the part where the analogy to the ideal gas breaks down.
Sorry for being slow.
Posted by: Tim van Beek on February 19, 2010 11:54 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
You don’t really need the concept of time to derive the second law. All you need is that the phase-space volume is the same at two times.
Posted by: tytung on February 25, 2010 4:14 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Thank you for writing this article, as I am bent on penetrating what has been written in A New Kind of Science by Stephen Wolfram.
While I appreciate the use of formulas here as well as at this link as a way to comprehend what Wolfram writes, I would also note that this author claims: “Above 1D no systematic method seems to exist for finding exact formulas for entropies” (p.959) and “So the fact that traditional science and mathematics tends to concentrate on equations that operate like constraints provides yet another reason for their failure to identify the fundamental phenomenon of complexity that I discuss in this book.” (p.221)
For the 1D cellular automata, Wolfram suggests the closest analog to be Limit[Sum[UnitStep[p[i]], {i,k^n}]/n, n → ∞]
Do you recognize this notation or does a 1D entropy not apply significantly when using formulas such as the one you write about?
Posted by: Brenda Tucker on February 16, 2010 7:15 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
The formula shown in the link below looks like the standardish formula I often see.
Posted by: Stephen Harris on February 16, 2010 9:15 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Wolfram’s NKOS is probably not the place you should start from when trying to learn mathematical physics. Try this list of books instead.
Posted by: Mike Stay on February 17, 2010 3:37 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Speaking of good books, everybody has heard of “The Feynman Lectures on Physics”. I chanced into another volume dedicated to Richard Feynman with contributions by a bunch of prestigious guys.
Feynman and Computation: Exploring the Limits of Computers, edited by Anthony J. G. Hey, which contained:
W. H. Zurek [SH: The first two sentences are quoted because I found them very funny.]
The cost of erasure eventually offsets whatever thermodynamic advantages the demon’s information gain might offer. This point (which has come to be known as “Landauer’s principle”) is now widely recognized as a key ingredient of thermodynamic demonology.
The ultimate analysis of Maxwell’s demon must involve a definition of intelligence, a characteristic which has been all too consistently banished from discussions of demons carried out by physicists.
Using the Church-Turing thesis as a point of departure, the present author has demonstrated that even this intelligent threat to the second law can be eliminated — the original “smart” Maxwell’s
demon can be exorcized. This is easiest to establish when one recognizes that the net ability of demons to extract useful work from systems depends on the sum of measures of two distinct aspects
of disorder:
Physical entropy is the sum of the statistical entropy and of the algorithmic information content: Z(p) = H(p) + K(p)
Besides your and JB’s splendid paper, it is surprising how much interest this topic generates and how many well-written papers it has produced.
Posted by: Stephen Harris on February 17, 2010 5:22 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
The Feynman Lectures on Computation, to which that volume is a sequel, give a conceptual construction for “a heat engine powered by programs” — or, in Feynman’s case, a data tape. The gist of the
argument, as I recall: Each cell of the “tape” contains an atom bouncing around on one side of a partition; if one knew which side of the partition the atom was on, one could position a piston
properly and extract energy from the cell. Re-inserting the partition after the energy extraction leaves the cell in a randomized state. If the tape is randomized, there’s no way to know how to
configure the piston; a random tape has no fuel value.
Posted by: Blake Stacey on February 17, 2010 5:37 PM | Permalink | Reply to this
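The fuel value of one known bit in this construction can be quantified by the standard Szilard-engine estimate (a textbook calculation, included here only for orientation). Letting the one-atom gas expand isothermally from half the cell to the whole cell, with $p V = k_B T$, extracts

$W = \int_{V/2}^{V} \frac{k_B T}{V'} \, dV' = k_B T \ln 2$

per bit whose value is known; erasing or re-randomizing a bit costs at least as much by Landauer’s principle, which is why a random tape has no fuel value.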
Re: Algorithmic Thermodynamics
I carefully read the paper many times and would just ask these two questions:
1. What’s the difference in your mind between the # of programs which can be doubled in Algorithmic Temperature and the Mean Length of the programs? It seems as if the length of a program is relative
to the number of programs on initial conditions.
2. Why does Tadaki categorize your algorithmic pressure as his algorithmic temperature (or why do you elect to discuss algorithmic temperature as something that he calls pressure?) Does it make
little difference?
In Wolfram’s NKS he shows mechanical work is possible due to the orderly cyclical repetitions of his 2 dimensional CA as long as no object is present to distract the simple repetitions. (p.446)
Posted by: Brenda Tucker on February 17, 2010 1:16 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Brenda wrote:
What’s the difference in your mind between the # of programs which can be doubled in Algorithmic Temperature and the Mean Length of the programs?
I don’t understand what you mean by “the number of programs which can be doubled in algorithmic temperature”. That phrase doesn’t quite parse.
So, I’ll just say some stuff:
In thermodynamics we have:
$T = d E / d S$
In words: temperature $T$ is the rate of increase of energy $E$ as you increase the Gibbs entropy $S$. In many situations the Gibbs entropy is approximately the logarithm of the number of microstates
with a given energy. For an explanation of this approximation see the Wikipedia article on Boltzmann’s entropy formula. Boltzmann came before Gibbs and his formula for entropy is now seen as an
approximation to Gibbs’ exact formula.
If we make this approximation, the above equation says roughly that “temperature is how much we need to increase the energy of a system to multiply the number of microstates having this energy by a
factor of $e$.”
Or if we don’t mind being off by a factor of $ln 2$: “temperature is how much we need to increase the energy of a system to double the number of microstates having this energy.”
In the introduction of our paper we make this approximation. But we’re talking about an analogy where we treat:
• a program as analogous to a microstate,
• the logarithm of the runtime of a program as analogous to the energy of a microstate,
• the ‘algorithmic temperature’ as analogous to the temperature.
So, we may say: “the algorithmic temperature is how much we need to increase the log runtime of a program to double the number of programs having this log runtime.”
Or, again ignoring a factor of $ln 2$, which actually compensates for the one we ignored before: “the algorithmic temperature is how many times we need to double the runtime in order to double the
number of programs having this runtime.”
You can think of this sentence as a rough verbal definition of ‘algorithmic temperature’. The actual definition is $T = d E /d S$.
Why does Tadaki categorize your algorithmic pressure as his algorithmic temperature (or why do you elect to discuss algorithmic temperature as something that he calls pressure?) Does it make
little difference?
So far it seems to make little difference. As we say in our paper:
Before proceeding, we wish to emphasize that the analogies here were chosen somewhat arbitrarily. They are merely meant to illustrate the application of thermodynamics to the study of algorithms.
There may or may not be a specific ‘best’ mapping between observables for programs and observables for a container of gas! Indeed, Tadaki has explored another analogy, where length rather than
log run time is treated as the analogue of energy. There is nothing wrong with this. However, he did not introduce enough other observables to see the whole structure of thermodynamics, as
developed in Sections 4.1–4.2 below.
I would like to know which analogy is ‘best’, but so far I don’t see a strong argument for one being ‘best’.
Posted by: John Baez on February 19, 2010 2:01 AM | Permalink | Reply to this
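A micro-example of the verbal definition, with invented numbers: suppose that in some ensemble, each doubling of the runtime also doubles the number of programs with that runtime. Doubling the runtime raises $E = \ln(\mathrm{runtime})$ by $\Delta E = \ln 2$, and doubling the count raises $S \approx \ln(\mathrm{number\ of\ programs})$ by $\Delta S = \ln 2$, so

$T = \Delta E / \Delta S = \ln 2 / \ln 2 = 1,$

that is, one doubling of runtime per doubling of program count; the two factors of $\ln 2$ cancel, just as described above.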
Re: Algorithmic Thermodynamics
Also, I have to ask because it is starting to bug me.
How do you know there are two relative probability measures (p,q)? Since it is one dimensional, is it even possible to use the relative part of the equation? Maybe there is only one variable for
information to be learned?
Posted by: Brenda Tucker on February 17, 2010 1:37 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
You say ‘since it is one dimensional’, but you don’t say what ‘it’ is. So, I don’t understand what you’re saying.
Posted by: John Baez on February 18, 2010 9:06 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Interesting idea. But I think a better name would be “Thermodynamic Algorithms”?
Posted by: tytung on February 25, 2010 3:58 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
The fact that the energy (and the other variables, for that matter) is entirely arbitrary makes me rather leery of any strong connection to thermodynamics. Saying instead “statistical mechanics”
alleviates some of this, as that truly is maximization of Shannon entropy relative to some prior measure, consistent with observed constraints.
The first question that springs to my mind is: “what are reasons to consider particular constraints as useful?”. Physically, there are certain macroscopic variables we care about, and the ones you
pick are ones we do tend to care about in CS. The second is: “what are good priors to consider?” In physics, of course conservation of phase-space volume gives us that prior, but anything analogous
is much harder to think about without some notion of time to give us conserved quantities.
Posted by: Aaron Denney on February 26, 2010 6:05 AM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
Aaron writes:
The fact that the energy (and the other variables, for that matter) is entirely arbitrary makes me rather leery of any strong connection to thermodynamics. Saying instead “statistical mechanics”
alleviates some of this…
That’s true; we say ‘thermodynamics’ merely because we expect that relations like $d E = T d S - P d V$ and the Maxwell relations are likely to be familiar to most people from classes on
thermodynamics. But it’s important to emphasize that such relations arise automatically from any collection of observables. I thought we made this clear — but now I think it should be made even clearer.
Posted by: John Baez on February 26, 2010 8:29 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
I haven’t yet read the paper, but I prefer a two-column text layout, as opposed to one column with 2-inch margins on each side.
Posted by: Greg Buchholz on February 26, 2010 8:02 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
The TeX file is here and you’re free to reformat it, but you’ll need tikz to get the diagrams, and you’ll also need this eps file to get the picture of the piston.
Posted by: John Baez on February 26, 2010 8:21 PM | Permalink | Reply to this
Re: Algorithmic Thermodynamics
The kind of margins that make sense for printed output generally don’t make sense for online reading (amongst other things, fit-to-window-width doesn’t do what you want it to). I wrote a program which renders the pages of a pdf to bitmap images, figures out the margins, and re-renders the bitmap at screen size (designed to be used for those cases where the author doesn’t make the raw source available, unlike here), so that you can use an image viewer to display them for reading. It’s a real hack job (for Linux/BSD), written primarily so I could read papers on a 7 inch tablet device whilst on the bus. I don’t have a web space at the moment, but if anyone thinks it’d be any use I can try and figure out a way to make it available.
[I did look at the pdf spec to see if there was a way to modify a pdf file directly to achieve this (amongst other things, the bitmap images are much larger than the pdfs). Unfortunately the pdf spec doesn’t even make listing a bounding-box-for-actual-content mandatory (in addition to the bounding-box-giving-paper-size), and so most pdf creation programs don’t bother to provide it. This was even before considering how to move the content, so I quickly gave up on the idea of tweaking the pdf. This is one of those problems that really ought to have been foreseen, but clearly the pdf specification writers cared more about cool things like being able to embed active content into pdf documents.]
Posted by: bane on March 2, 2010 1:05 PM | Permalink | Reply to this
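For readers who want the effect without the full tool, here is a minimal sketch of the same idea in Python (it assumes the poppler pdftoppm utility and the Pillow imaging library are installed, that the page background is uniform, and the file names are placeholders):

import glob
import subprocess
from PIL import Image, ImageChops

# Render each PDF page to a PNG bitmap (150 dpi is an arbitrary choice).
subprocess.run(["pdftoppm", "-png", "-r", "150", "paper.pdf", "page"], check=True)

for path in sorted(glob.glob("page-*.png")):
    im = Image.open(path).convert("RGB")
    # Bounding box of everything that differs from the corner colour, i.e. the
    # actual content; cropping to it removes the margins.
    background = Image.new("RGB", im.size, im.getpixel((0, 0)))
    bbox = ImageChops.difference(im, background).getbbox()
    if bbox:
        im.crop(bbox).save(path)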
Re: Algorithmic Thermodynamics
A revised version of our paper on algorithmic thermodynamics is now available on the arXiv, and I’ve written an introduction to it here.
Posted by: John Baez on October 12, 2010 1:32 PM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2010/02/algorithmic_thermodynamics.html","timestamp":"2014-04-16T07:57:10Z","content_type":null,"content_length":"109982","record_id":"<urn:uuid:2aeebc21-bc16-4cfc-bd7b-60f297a10032>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oxidation number of Fe in Fe4[Fe(CN)6]3?
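One standard way to work it out (a worked note, since no answer survives on the page; Prussian blue is a mixed-valence compound, so the two kinds of iron carry different oxidation numbers): treat the bracketed unit as the ferrocyanide ion [Fe(CN)6]4-, in which each CN- ligand counts as -1, so the inner iron is y = -4 - 6(-1) = +2. Neutrality of the whole compound then fixes the outer iron x: 4x + 3(-4) = 0, so x = +3. The four outer Fe are +3 and the three Fe inside the complexes are +2.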
{"url":"http://openstudy.com/updates/512b3f26e4b098bb5fbad5e4","timestamp":"2014-04-17T01:40:31Z","content_type":null,"content_length":"114705","record_id":"<urn:uuid:ab363081-509c-4674-8661-279294e48c7f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Happy birthday to them!
As you might know, tomorrow is February 12th. It's also the 199th anniversary of Charles Darwin's birth. It's also the 149th anniversary of the publication of Darwin's "The Origin of Species." It's
also the 199th anniversary of Abraham Lincoln's birth.
So many things to celebrate tomorrow; where to start?
How about we just concentrate on the wonderful coincidence that Darwin and Lincoln share the same birthday AND year!
I share a birthday with my mother, Bruce Lee and Jimi Hendrix. Not the same year, obviously, just the same day. Every time I go out to a restaurant to celebrate my birthday, it seems someone else is
there doing the same thing, stealing my thunder, and often my piece of free birthday cake. An actuary friend explained that if you got 23 people together in a room, there's a 50-50 chance of at least
one coincidental birthday.
After the jump, you'll find a complete breakdown for those who are curious to see the math involved. But first, with whom do you share a birthday? We'd love to know, especially if it's the same day
AND year.
To figure out the exact probability of finding two people with the same birthday in a given group, it turns out to be easier to ask the opposite question: what is the probability that NO two will
share a birthday, i.e., that they will all have different birthdays? With just two people, the probability that they have different birthdays is 364/365, or about .997. If a third person joins
them, the probability that this new person has a different birthday from those two (i.e., the probability that all three will have different birthdays) is (364/365) x (363/365), about .992. With
a fourth person, the probability that all four have different birthdays is (364/365) x (363/365) x (362/365), which comes out at around .983. And so on. The answers to these multiplications get
steadily smaller. When a twenty-third person enters the room, the final fraction that you multiply by is 343/365, and the answer you get drops below .5 for the first time, being approximately
.493. This is the probability that all 23 people have a different birthday. So, the probability that at least two people share a birthday is 1 - .493 = .507, just greater than 1/2.
Statistics courtesy of Math Guy over at NPR.
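For anyone who wants to check the arithmetic, a few lines of Python reproduce the quoted numbers (using the same simplifying assumptions as the quote: a 365-day year and uniformly distributed birthdays):

def p_all_distinct(n):
    # Probability that n people all have different birthdays.
    p = 1.0
    for i in range(n):
        p *= (365 - i) / 365
    return p

for n in (2, 3, 4, 23):
    print(n, round(p_all_distinct(n), 3), round(1 - p_all_distinct(n), 3))
# n = 23 gives 0.493 all-distinct, hence 0.507 that at least two share a birthday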
February 11, 2008 - 6:07am | {"url":"http://www.mentalfloss.com/article/18024/happy-birthday-them","timestamp":"2014-04-16T11:04:31Z","content_type":null,"content_length":"85494","record_id":"<urn:uuid:2f1ed79c-2b50-482a-a546-a8d078f5ca27>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Collingswood, NJ Geometry Tutor
Find a West Collingswood, NJ Geometry Tutor
...I have successfully passed the GRE's (to get into graduate school) as well as the Praxis II content knowledge test for mathematics. Therefore, I am qualified to tutor students in SAT Math. I
have a bachelor's in mathematics from Rutgers University.
16 Subjects: including geometry, English, algebra 2, calculus
...As with my other lessons, MCAT Verbal lessons are modified to best attend to my student's individual needs. My master's and doctoral work has been in reading and literacy, so I am uniquely
equipped and qualified to help students in the verbal section. I have spent years working with high school...
47 Subjects: including geometry, English, chemistry, reading
...I like to have fun and I tailor my teaching style to meet each student's specific needs and way of learning. I scored a 1910 on the SAT on my first try. I received a perfect 5 on my AP English Language and Composition test and a 3 on my AP Human Geography test. I'm very good at math, reading and writing, as well as other areas.
7 Subjects: including geometry, ESL/ESOL, algebra 1, GED
...While getting my Master's degree I worked with children with special needs as a Teacher's Assistant, so I am comfortable and experienced with all types of learners. I am Pennsylvania state
certified to teach K-6. I am currently working as a 4th grade teacher.
15 Subjects: including geometry, reading, writing, algebra 1
...As an illustrator, I have spent plenty of time learning and analyzing the different parts and pieces of an image, from anatomy to composition to color, and can easily teach and critique. I
took Algebra 1 in middle school. I love math and I'm good at explaining it to people, especially people who don't like math.
19 Subjects: including geometry, calculus, algebra 2, trigonometry
{"url":"http://www.purplemath.com/West_Collingswood_NJ_geometry_tutors.php","timestamp":"2014-04-19T06:59:02Z","content_type":null,"content_length":"24695","record_id":"<urn:uuid:43809851-9bf2-468c-81ff-517876c2ef2a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
On the uniqueness of the solution of the evolution dam problem
Carrillo Menéndez, José (1994) On the uniqueness of the solution of the evolution dam problem. Nonlinear Analysis: Theory, Methods & Applications, 22 (5). pp. 573-607. ISSN 0362-546X
Official URL: http://www.sciencedirect.com/science/article/pii/0362546X94900841
The purpose here is to investigate some aspects of the evolution dam problem $\partial(\chi+\alpha u)/\partial t=\Delta u+{\rm div}(\chi e)$ in $Q$, $\chi\in H(u)$ in $Q$, $u\geq 0$ in $Q$, $u=\phi$ on $\Sigma_2$, $\partial u/\partial\nu+\chi e\nu\leq 0$ on $\Sigma_2\cap \{\phi=0\}$, $e\in {\bf R}^n$, $e=(0,\cdots, 0,1)$, $\partial u/\partial\nu+\chi e\nu=0$ on $\Sigma_1$. Here $H$ is the Heaviside
maximal graph $H(s)=1$ if $s>0$, $H(s)=[0,1]$ if $s=0$, $\alpha$ is the storativity constant: $\alpha=0$ if the fluid is incompressible and $\alpha>0$ if the fluid is compressible. Also $Q=\Omega\
times (0,T)$ where $\Omega$ is a bounded domain in ${\bf R}^n$ ($\Omega$ represents a porous medium separating a finite number of reservoirs) with Lipschitz boundary $G$ where $G$ is divided into two
parts: $G_1$, which is the impervious part, and $G_2$, which is the pervious part, with the following assumptions: $G_2$ is an open subset of $G$, $G_2\not=\emptyset$, $G_2\cap G_1=\emptyset$, $G_2\
cup G_1=G$ and $e\nu\leq 0$ a.e. on $G_1$; $\Sigma_i$ denotes $G_i\times (0,T)$, $i=1,2$. The pressure at $\Sigma_2$ is known and is represented by a nonnegative function $\phi\in C^{0,1}(\overline
Q)$. The above problem can be completed by prescribing an initial value to $\chi+\alpha u$. The existence of a weak solution of the evolution dam problem is well known; however, many properties of
the solutions of the initial value dam problem, in particular, comparison and uniqueness theorems, continue to be open for a general domain $\Omega$. The results of this paper give precise
information about the comparison (Theorem 4.7) and the uniqueness (Corollary 4.9) statements. The key point to prove comparison and uniqueness results is that the function $h\mapsto \chi(x',x_n+h,
t-h)$, $x'\in{\bf R}^{n-1}$, is a.e. nonincreasing.
{"url":"http://eprints.ucm.es/15598/","timestamp":"2014-04-18T11:16:20Z","content_type":null,"content_length":"29014","record_id":"<urn:uuid:02ac11bf-dbd6-4d39-aaf0-d7de53aafe26>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Research Papers
Dynamical Resolution of Redundancy for Robot Manipulators
Abstract: A large class of work in the robot manipulator literature deals with the kinematical resolution of redundancy based on the pseudo-inverse of the manipulator Jacobian. In this paper an
alternative dynamical approach to redundancy resolution is developed which utilizes the mapping between the actuator torques and the acceleration of the end-effector, at a given dynamic state of the
manipulator. The potential advantages of the approach are discussed and an example of a planar 3R manipulator following a circular end-effector trajectory is used to illustrate the proposed approach
as well as to compare it with the more well-known approach based on the pseudo-inverse.
Energy-Based Stability Measures for Reliable ocomotion of Statically Stable Walkers: Theory and Application
Abstract: To plan safe, reliable walker motions, it is important to assess the stability of a walker. In this article, two basic modes of walker stability are defined and developed: stance stability and walker stability. For slowly moving, statically stable walkers, it is convenient to use the magnitude of the work required to destabilize a walker as a measure of the stability of that walker. Furthermore, as shown in this article, the compliance of the walker and/or terrain can significantly affect the work necessary to destabilize the walker. Consideration of compliance and the two modes of walker stability leads to the definition and development of four energy-based stability measures: the rigid stance stability measure, the compliant stance stability measure, the rigid walker stability measure, and the compliant walker stability measure. (The rigid stance stability measure is identical to the energy stability margin reported in Messuri and Klein [1985].) Several examples are used to demonstrate the application and use of these stability measures in type selection, gait planning, and control of the walker. The outcome of the present work is a more complete approach to using stability measures to ensure reliable walker gait planning and control.
"Chaos" by Design in Mobile Robots
Abstract: A large class of tasks for mobile robots require the robot to cover an enclosed space that might contain several objects (obstacles). The usual approach to this problem is to design a
sensor-intensive, computer-controlled robot that usually has a relatively simple kinematic form (type). The control of such a robot is difficult, expensive, and frequently unreliable. This paper
demonstrates how the complexity in sensing and control can be circumvented by synthesizing a "chaotic" kinematic motion that, when appropriately embodied by a kinematic form, covers the space and
easily deals with obstacles. The evolution of the "chaotic", sensorless, mechanical mobile robot from concept, through analysis, numerical simulation and form embodiment, to realization is described.
The testing of the prototype clearly demonstrates how, for the present task, complex "chaotic" motions do considerably simplify the control of the robot. A computer controlled prototype capable of
mimicking the behavior of its mechanical counterpart is also described.
The Definition, and Characterization of Acceleration Sets for Spatial Manipulators
Abstract: In this article, the approach developed by the authors for systematically studying the acceleration capability and properties of the end effector of a planar manipulator is extended to the general, serial, spatial manipulator possessing three degrees of freedom. The acceleration of the end effector at a given configuration of the manipulator is a linear function of the actuator torques and a (nonlinear) quadratic function of the joint velocities. By decomposing the functional relationships between the inputs (actuator torques and joint velocities) and the output (acceleration of the end effector) into two fundamental mappings, namely a linear mapping between the actuator torques and the acceleration space of the end effector and a quadratic (nonlinear) mapping between the joint velocities and the acceleration space of the end effector, and by deriving the properties of these two mappings, it is possible to determine the properties of all acceleration sets that are the images of the appropriate input sets under the two fundamental mappings. A central feature of this article is the determination of the properties of the quadratic mapping, which then makes it possible to obtain analytic expressions for most acceleration properties of interest. We show that a fundamental way of studying these quadratic mappings is in terms of the mapping of (input) line congruences into (output) line congruences. The article concludes with the application of the analytical results to the computation of the various acceleration properties of an actual spatial manipulator.
Abstract: It is well known that so-called concurrent engineering is a desirable alternative to the largely sequential methods which tend to dominate most product-development methods. However, the
proper implementation of a concurrent engineering method is still relatively rare, due in part to unreliable guesswork, poor knowledge structuring and utilization, and the lack of integrated, global
decision making. Thus, to remedy the current shortcomings, we propose an initial, straightforward theory for product development, which is based on properly-defined concurrent engineering principles.
After establishing the overall goal for product development, we formulate an objective that a product-development method must meet. This objective then leads to three fundamental criteria, which
basically govern where concurrent engineering must be implemented, how consistent communication between different domains must be carried out, and how to structure and network the vast amount of
expert knowledge in an effective, feasible manner. The product-development objective and the three criteria guide the establishment of a feasible computer-based implementation of concurrent
engineering, called virtual concurrent engineering (VCE). The effectiveness of VCE is demonstrated by applying it to refine a method, called design for producibility (DFP), that integrates the design
and manufacturing stages of product development. Two elements that are crucial to the success of DFP are the producibility cost function network, and the software package AUTOPROD (Automated
Producibility). The refined DFP method has been successfully applied to concurrent product and process design in three domains: stamping, forming, and machining.
Development and Implementation of a Design for Producibility Method for Precision Planar Stamped Products
Abstract: In this paper, we establish the Design for Producibility Design Development Meth¬odology for stamped products which integrates both the design and manufacture of the stamped product. The
general concept of considering manufacturing tech¬niques at the initial design stages, often termed Design for Manufacturability, is certainly not novel. However, the proper implementation of this
concept is not so widely understood. Our proposed method permits the product designer to control both product functionality ( which he has incorporated into an initial design) and subsequent
manufacturing costs through an iterative re-design process. The key step in this method is the mapping scheme from the final product design domain to the manufacturing domain; this scheme captures
the cause-effect relations between de-sign ( and thus functional) specifications and manufacturing requirements and costs. We also describe the implementation of the design methodology in a
knowledge-based computer environment called the Producibility Evaluation Package (P.E.P. ) that, as a result of the design methodology, also automatically generates much of the process plan.
An Optimization-Based Framework for Simultaneous Plant-Controller Redesign
Abstract: In this paper we develop a framework for the redesign of computer-controlled, closed-loop, mechanical systems for improved dynamic performance. A central notion which underlies the redesign
framework is that, in order to achieve the best possible performance from a constrained closed-loop system, the plant and controller should be designed simultaneously. The framework is presented as
the formulation and solution of a progression of optimization problems which establish the limits of performance of the dynamic system under various conditions of interest, thereby enabling the
engineer to systematically establish the various redesign possibilities. Using a second order linear dynamic system and a nonlinear controller as an example, we demonstrate the application of the
framework and substantiate the idea that in order to achieve the best possible performance from a constrained closed-loop system, the plant and controller should be redesigned simultaneously. We then
show how the redesign framework can be used to select the best control strategy for a robotic manipulator from a dynamic performance standpoint. Finally, in order to demonstrate that the redesign
framework yields solutions which the engineer can implement with confidence, we present the experimental verification of the numerical solution of a manipulator redesign optimization problem.
Reactor-scale models for rf diode sputtering of metal thin films
Abstract: This article describes the development of an integrated physical model for the rf diode sputtering of metal thin films. The model consists of: (1) a computational fluid dynamic finite
element model for the velocity and pressure distribution of the working gas Ar flow in the chamber, (2) a steady-state plasma model for the flux and energy of Ar ions striking the target and the
substrate, (3) a molecular dynamics sputtering model for the energy distribution, angle distribution, and yield of the sputtered atoms (Cu) from the target, and (4) a direct simulation Monte Carlo
(DSMC) model for the transport of Cu atoms through the low-pressure argon gas to the deposition substrate. The individual models for gas flow, plasma discharge, Cu sputtering, and DSMC-based Cu atom
transport are then integrated to create a detailed, steady-state, input–output model capable of predicting thin-film deposition rate and uniformity as a function of the process input variables:
power, pressure, gas temperature, and electrode spacing. Deposition rate and uniformity in turn define the characteristics of thin films exploited in applications, for example, the saturation
magnetic field for a giant magnetoresistive multilayer. This article also describes the development of an approximate input–output model whose CPU time is several orders-of-magnitude faster than that
of the detailed model. Both models were refined and validated against experimental data obtained from rf diode sputtering experiments.
The Fast Resource Evaluation Grid
Abstract: The success of product design, development and delivery in a technological enterprise crucially depends on the optimal integration of enterprise resources. The Fast Resource Evaluation Grid
permits the comprehension of the strengths and importances of these resources as a rational basis for the allocation of effort to close resource gaps.
A Value-Centered Model of Product Design, Development and Delivery
Abstract: A value-centered model of product design, development and delivery (PD3) can enable the proper assessment and allocation of resources in order to maximize the value of the product. To
develop such a model, a two-dimensional grid that unifies the function and process elements of PD3 is described. The grid leads to a rational definition of enterprise resources. Then, useful
attributes of a resource (strength, importance, cost) are defined and used to develop the notion of resource value. The determination of the resource attributes, including value, is illustrated by
an elementary example.
Distributed Nonlinear Model Predictive Control for Dynamic Supply Chain Management (presented at the International Workshop on Assessment and Future Directions of Model Predictive Control,
Freudenstadt-Lauterbad, Germany, August 26-30, 2005.)
Abstract: The purpose of this paper is to demonstrate the application of a new theory developed for distributed nonlinear model predictive control (NMPC) to a promising and exciting future domain for
NMPC – that of dynamic management of supply chain networks. Recent work by the first author provides a distributed implementation of NMPC for application in large scale systems comprised of
cooperative dynamic subsystems [1, 2]. The generic control objective is for the subsystems to cooperatively meet mutual objectives, e.g., a set of unmanned vehicles are required to realize a
predefined formation while avoiding collisions. The nonlinear subsystems can be coupled through performance objectives and state constraints [1], as well as through their dynamics [2]. By the
implementation, each subsystem optimizes locally for its own policy, and communicates the most recent policy to those subsystems to which it is coupled. Stabilization is guaranteed for arbitrary
interconnection topologies, provided each subsystem not deviate too far from the previous policy (consistent with traditional control deviation penalties), and that the updates happen sufficiently
fast. Simulations have demonstrated performance comparable to a centralized implementation [1]. A contribution of this paper will be to demonstrate the relevance and efficacy of the distributed NMPC
approach in the venue of supply chain management. A supply chain can be defined as the interconnection (coupling) and evolution (dynamics) of a demand network. Example subsystems include raw
materials, distributors of the raw materials, manufacturers, distributors of the manufactured products, retailers, and customers. Key elements of an efficient supply chain (performance) are accurate
pinpointing of process flows and timing of supply needs at each subsystem, both of which enable subsystems to request items as they are needed, thereby reducing safety stock levels to free space and
capital. Recently, Braun et al. [3] demonstrated the effectiveness of MPC in realizing these elements for management of a linear dynamic semiconductor chain, citing benefits over traditional
approaches and robustness to model and demand forecast uncertainties. In this context, the chain is isolated from competition, and so a cooperative approach is appropriate. It turns out that Braun et
al. employ a heuristic version of the distributed implementation in [1], with no theoretical justification, by sharing policies with downstream echelons in an acyclic single-chain topology. Since
there are no cycles, implemented policies can be shared in real-time (ignoring communication delay). When uncertainty is present, control deviation penalties are critical to yield stability, a fact
consistent with the theory in [1]. Limitations of their approach are that it requires acyclic topologies and sequential updates from upstream to downstream, while realistic supply chains contain cycles
and do not in general operate sequentially. The second main contribution of this paper will be to demonstrate the scalability, stability and robustness properties of our distributed implementation in
a supply chain simulation example, permitting nonsequential updates and cycles in the interconnection network topology. We will also briefly examine the implications of incorporating game theoretic
approaches so that the local management mechanisms can simultaneously handle cooperative and competitive objectives, as this occurs in more realistic settings. | {"url":"http://users.soe.ucsc.edu/~sdesa/papers.htm","timestamp":"2014-04-19T10:41:30Z","content_type":null,"content_length":"29244","record_id":"<urn:uuid:22b7a198-81c2-4618-9c28-107eb7bf54bf>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodinville Algebra 1 Tutor
Find a Woodinville Algebra 1 Tutor
...I do a lot of creative writing in my spare time (poetry and fictional prose), as well as a lot of different kinds of writing for school (including research papers, essays, newspaper and
magazine columns, & newspaper and magazine features). In high school, I took 2 years of Rhetoric, and I have t...
15 Subjects: including algebra 1, English, reading, algebra 2
...I believe that everyone is smart and enjoy helping students discover and understand themselves as learners. I have degrees in both Biology and English Literature and have been certified as an
athletic trainer as well as a massage therapist. I have taught anatomy, physiology, kinesiology, pathology, medical terminology, athletic training, and massage therapy.
10 Subjects: including algebra 1, English, writing, elementary (k-6th)
...I discovered that most students that I helped had trouble in economics because they have had trouble in math previously. If you feel this is your problem too, I am confident I can help you. I
have been playing piano since I was 6 years old.
20 Subjects: including algebra 1, reading, calculus, statistics
...I passed the military intelligence electronics specialist course quite quickly. I received a 3.7 or higher in my electrical engineering / electronic devices courses at Washington State
University. I have used electrical engineering principles in a variety of theoretical and practical applications: fluid flow, transport phenomena, and electric devices.
62 Subjects: including algebra 1, English, chemistry, reading
...I gave tours of the capital and taught students of all levels the workings of government. I received a minor in Classical Studies as well as a minor in Ancient History from Western Washington
University. The classes included, among others, Greek mythology, literature, and ancient history, as well as Roman literature and history.
34 Subjects: including algebra 1, chemistry, English, reading | {"url":"http://www.purplemath.com/Woodinville_algebra_1_tutors.php","timestamp":"2014-04-21T07:41:58Z","content_type":null,"content_length":"24203","record_id":"<urn:uuid:89715a32-2a22-44e7-bfd9-08b2d1726565>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Definitions for conjecture (kənˈdʒɛk tʃər)
This page provides all possible meanings and translations of the word conjecture
Random House Webster's College Dictionary
con•jec•ture* kənˈdʒɛk tʃər (n.; v.) -tured, -tur•ing.
1. (n.)the formation or expression of an opinion or theory without sufficient evidence for proof.
2. an opinion or theory so formed or expressed; speculation; surmise.
3. Obs. the interpretation of omens.
4. (v.t.)to conclude or suppose from evidence insufficient to ensure reliability.
5. (v.i.)to form conjectures.
* Syn: See guess.
Origin of conjecture:
1350–1400; ME (< MF) < L conjectūra inferring, reasoning =conject(us) ptp. of conjicere to throw together, form a conclusion (con-con - +-jicere, comb. form of jacere to throw) +-ūra -ure
Princeton's WordNet
1. speculation, conjecture(noun)
a hypothesis that has been formed by speculating or conjecturing (usually with little hard evidence)
"speculations about the outcome of the election"; "he dismissed it as mere conjecture"
2. guess, conjecture, supposition, surmise, surmisal, speculation, hypothesis(noun)
a message expressing an opinion based on incomplete evidence
3. conjecture(verb)
reasoning that involves the formation of conclusions from incomplete evidence
4. speculate, theorize, theorise, conjecture, hypothesize, hypothesise, hypothecate, suppose(verb)
to believe especially on uncertain or tentative grounds
"Scientists supposed that large dinosaurs lived in swamps"
1. conjecture(Noun)
A statement or an idea which is unproven, but is thought to be true; a guess.
I explained it, but it is pure conjecture whether he understood, or not.
2. conjecture(Noun)
A supposition based upon incomplete evidence; a hypothesis.
The physicist used his conjecture about subatomic particles to design an experiment.
3. conjecture(Noun)
A statement likely to be true based on available evidence, but which has not been formally proven.
4. conjecture(Noun)
Interpretation of signs and omens.
5. conjecture(Verb)
To guess; to venture an unproven idea.
I do not know if it is true; I am simply conjecturing here.
Origin: From coniectura, from coniectus, perfect passive participle of conicio, from con- + iacio; see jet. Compare adjective, eject, inject, project, reject, subject, object, trajectory.
Webster Dictionary
1. Conjecture(noun)
an opinion, or judgment, formed on defective or presumptive evidence; probable inference; surmise; guess; suspicion
2. Conjecture(verb)
to arrive at by conjecture; to infer on slight evidence; to surmise; to guess; to form, at random, opinions concerning
3. Conjecture(verb)
to make conjectures; to surmise; to guess; to infer; to form an opinion; to imagine
1. Conjecture
A conjecture is a proposition that is unproven. Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is contrasted with hypothesis, which is a testable
statement based on accepted grounds. In mathematics, a conjecture is an unproven proposition that appears correct.
Translations for conjecture
Kernerman English Multilingual Dictionary
(an) opinion formed on slight evidence; a guess
He made several conjectures about where his son might be.
{"url":"http://www.definitions.net/definition/conjecture","timestamp":"2014-04-16T11:39:12Z","content_type":null,"content_length":"40855","record_id":"<urn:uuid:64ab8185-d0ca-4360-98f3-d708cb8d0860>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Separate digits in C++
A very large array of pointers to strings representing the digits of the index into the array of pointers would work. With a mere 1,188,888,826 bytes (1.189 GB) of RAM, 4 bytes per pointer, 1 byte
per digit, using sequential pointers to calculate string lengths (no string terminators), you could quickly convert the numbers from 0 to 99,999,999 into strings. | {"url":"http://www.physicsforums.com/showthread.php?t=179547","timestamp":"2014-04-17T21:40:36Z","content_type":null,"content_length":"69869","record_id":"<urn:uuid:0db777a1-dba8-4cce-b3ca-2786add8ffea>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
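For comparison, here is a minimal sketch of the direct approach, with no lookup table: peel digits off with repeated modulo and division. It is standard C++ (the function name is just illustrative) and trades the table's gigabyte of memory for a handful of divisions per number:

    #include <vector>
    #include <cstdio>

    // Return the decimal digits of n, most significant digit first.
    std::vector<int> digits(unsigned long n) {
        std::vector<int> d;
        do {
            d.insert(d.begin(), static_cast<int>(n % 10)); // peel off the last digit
            n /= 10;                                       // drop that digit
        } while (n > 0);
        return d;
    }

    int main() {
        for (int digit : digits(90210))
            std::printf("%d ", digit);  // prints: 9 0 2 1 0
        std::printf("\n");
        return 0;
    }

Unless the same values are being formatted millions of times, the divisions are usually the better deal.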
An Overview of AFFIRM: A Specification and Verification System. Information Processing 80
, 1989
"... Kernel Implements Processes The relationship between the abstract kernel and an individual task is pictured in Figure 4, and is formalized by the theorem AK-IMPLEMENTS-PARALLEL-TASKS.
Intuitively, this theorem says that for a given good abstract kernel state AK and abstract kernel oracle ORACLE, th ..."
Cited by 59 (0 self)
Kernel Implements Processes The relationship between the abstract kernel and an individual task is pictured in Figure 4, and is formalized by the theorem AK-IMPLEMENTS-PARALLEL-TASKS. Intuitively,
this theorem says that for a given good abstract kernel state AK and abstract kernel oracle ORACLE, the final state reached by task I can equivalently be achieved by running TASK-PROCESSOR on the
initial task state, with an oracle constructed by the function CONTROL-ORACLE. The oracle constructed for TASK-PROCESSOR accounts for the precise sequence of delays to task I in the abstract kernel.
Task project AK Figure 4: AK Implements Parallel Tasks THEOREM AK-IMPLEMENTS-PARALLEL-TASKS (IMPLIES (AND (GOOD-AK AK) (FINITE-NUMBERP I (LENGTH (AK-PSTATES AK)))) (EQUAL (PROJECT I (AK-PROCESSOR AK
ORACLE)) (TASK-PROCESSOR (PROJECT I AK) I (CONTROL-ORACLE I AK ORACLE)))) 6. The Target Machine The target machine TM is a simple von Neumann computer. It is not based on an existing physical machine
, 2001
"... Over the past decade, software architecture research has emerged as the principled study of the overall structure of software systems, especially the relations among subsystems and components.
From its roots in qualitative descriptions of useful system organizations, software architecture has mature ..."
Cited by 39 (2 self)
Over the past decade, software architecture research has emerged as the principled study of the overall structure of software systems, especially the relations among subsystems and components. From
its roots in qualitative descriptions of useful system organizations, software architecture has matured to encompass broad explorations of notations, tools, and analysis techniques. Whereas initially
the research area interpreted software practice, it now offers concrete guidance for complex software design and development. We can understand the evolution and prospects of software architecture
research by examining the research paradigms used to establish its results. These are, for the most part, the paradigms of software engineering. We advance our fundamental understanding by posing
research questions of several kinds and applying appropriate research techniques, which differ from one type of problem to another, yield correspondingly different kinds of results, and require
different methods of validation. Unfortunately, these paradigms are not recognized explicitly and are often not carried out correctly; indeed not all are consistently accepted as valid. This
retrospective on a decade-plus of software architecture research examines the maturation of the software architecture research area by tracing the types of research questions and techniques used at
various stages. We will see how early qualitative results set the stage for later precision, formality, and automation and how results build up over time. This generates advice to the field and
projections about future impact. Keywords: Software architecture, research paradigms 1.
- Theoretical Computer Science , 1992
"... We define algebraic systems called concurrent regular expressions which provide a modular description of languages of Petri nets. Concurrent regular expressions are extension of regular
expressions with four operators - interleaving, interleaving closure, synchronous composition and renaming. This a ..."
Cited by 15 (2 self)
We define algebraic systems called concurrent regular expressions which provide a modular description of languages of Petri nets. Concurrent regular expressions are extension of regular expressions
with four operators - interleaving, interleaving closure, synchronous composition and renaming. This alternative characterization of Petri net languages gives us a flexible way of specifying
concurrent systems. Concurrent regular expressions are modular and hence easier to use for specification. The proof of equivalence also provides a natural decomposition method for Petri nets. 1
Introduction Formal models proposed for specification and analysis of concurrent systems can be categorized roughly into two groups: algebra based and transition based. The algebra based models
specify all possible behaviors of concurrent systems by means of expressions that consist of algebraic operators and primitive behaviors. Examples of such models are path expressions[3], behavior
expressions[21] and extend...
, 2001
"... Data Types and Software Validation ," Communications of the ACM, Vol. 21, No. 12, 1978, pp. 1048-1064. ..."
, 1993
"... This article presents an example of correct circuit design through interactive transformation. Interactive transformation differs from traditional hardware design transformation frameworks in
that it focuses on the issue of finding suitable hardware architecture for the specified system and the issu ..."
Cited by 10 (1 self)
This article presents an example of correct circuit design through interactive transformation. Interactive transformation differs from traditional hardware design transformation frameworks in that it
focuses on the issue of finding suitable hardware architecture for the specified system and the issue of architecture correctness. The transformation framework divides every transformation in designs
into two steps. The first step is to find a proper architecture implementation. Although the framework does not guarantee existence of such an implementation, nor its discovery, it does provide a
characterization of architectural implementation so that the question "is this a correct implementation?" can be answered by equational rewriting. The framework allows a correct architecture
implementation to be automatically incorporated with control descriptions to obtain a new system description. The significance of this transformation framework lies in the fact that it requires
simpler mechanism o...
- Proc. 2nd International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols , 1989
"... this paper, we propose an algebraic model called concurrent regular expressions for modeling of concurrent systems. These expressions can be converted automatically to Petri nets, and thus all
analysis techniques that are applicable to Petri nets can be used. Conversely, any Petri net can be convert ..."
Cited by 5 (3 self)
this paper, we propose an algebraic model called concurrent regular expressions for modeling of concurrent systems. These expressions can be converted automatically to Petri nets, and thus all
analysis techniques that are applicable to Petri nets can be used. Conversely, any Petri net can be converted to a concurrent regular expression providing further insights into its language
, 1982
"... One of the main reasons why constructing deductive proofs that programs satisfy their specifications can be very expensive in practice is the absence of reusable problem domain theories. These
theories contain functions that define relevant concepts in the application area of the program, and they c ..."
Cited by 1 (0 self)
One of the main reasons why constructing deductive proofs that programs satisfy their specifications can be very expensive in practice is the absence of reusable problem domain theories. These
theories contain functions that define relevant concepts in the application area of the program, and they contain properties that are deduced from these definitions. Presently, the cost of proving
programs is highly inflated by the fact that we usually have to build a new problem domain theory for each new application. If we can develop reusable problem domain theories, the cost of specifying
and proving programs in actual practice can be greatly reduced. The development of these theories also would have significant benefits for other aspects of computing science. This paper discusses the
composition of problem domain theories and their relation to program specification and proof. REUSABLE PROBLEM DOMAIN THEORIES 2 Acknowledgements During its eight year existence, well over 50 people
have contr... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1422236","timestamp":"2014-04-17T07:52:11Z","content_type":null,"content_length":"31279","record_id":"<urn:uuid:1a45b074-68af-4407-9130-491e33ce0503>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Semirings with subtractive primes
Let $S$ be a commutative semiring with identity such that each prime ideal of $S$ is subtractive. Does this imply all ideals of $S$ to be subtractive?
By a commutative semiring with identity I mean an algebraic structure, consisting of a nonempty set $S$ with two operations of addition and multiplication such that the following conditions are
$(S,+)$ is a commutative monoid with identity element $0$; $(S,.)$ is a commutative monoid with identity element $1 \not= 0$; Multiplication distributes over addition, i.e. $a(b+c) = ab + ac$ for all
$a,b,c \in S$; The element $0$ is the absorbing element of the multiplication, i.e. $s.0=0$ for all $s\in S$.
A nonempty subset $I$ of a semiring $S$ is said to be an ideal of $S$, if $a+b \in I$ for all $a,b \in I$ and $sa \in I$ for all $s \in S$ and $a \in I$.
A nonempty subset $P$ of a semiring $S$ is said to be a prime ideal of $S$, if $P \not= S$ is an ideal of $S$ such that $ab \in P$ implies either $a\in P$ or $b\in P$ for all $a,b \in S$.
An ideal $I$ of a semiring $S$ is said to be subtractive, if $a+b \in I$ and $a \in I$ implies $b \in I$ for all $a,b \in S$.
semirings prime-ideals
1 Answer
I think the answer is no. Let $S=\{0\}\cup[1,\infty)$ be the subsemiring of the (usual) reals. A non-zero ideal of $S$ is of the form $\{0\}\cup[a,\infty)$ where $a\geq 1$. Clearly, the
only prime ideal of $S$ (according to your definition) is $\{0\}$ and it is subtractive. But no proper non-zero ideal of $S$ is subtractive.
Correction: My argument is not right: actually every non-zero ideal of $S$ is either of the form $\{0\}\cup[a,\infty)$ or $\{0\}\cup(a,\infty)$ where $a\geq 1$. Hence $P=\{0\}\cup(1,\infty)$ is a non-zero prime ideal which is not subtractive. And in general a unitary semiring always has a maximal ideal (as a unitary ring does).
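(A concrete check of this: $2\in P$ and $2+1=3\in P$ while $1\notin P$, so $P$ is indeed not subtractive; and $P$ is prime, since $a,b\geq 1$ with $ab>1$ forces $a>1$ or $b>1$.)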
Not the answer you're looking for? Browse other questions tagged semirings prime-ideals or ask your own question. | {"url":"http://mathoverflow.net/questions/105026/semirings-with-subtractive-primes","timestamp":"2014-04-20T09:02:27Z","content_type":null,"content_length":"50187","record_id":"<urn:uuid:e1dac040-512d-4e2c-9e52-3af27fe8747f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
[BioC] limma - interpreting factorial design
john seers (IFR) john.seers at bbsrc.ac.uk
Tue Feb 24 19:05:53 CET 2009
Hi Bjoern
Thanks again for taking the time to reply.
>if you are just concerned about the numerical values,
I am not concerned specifically with the numerical values. I am just looking at them to make sure I am interpreting and understanding correctly.
>just take the equations and "interpret" them:
That is difficult if you are not quite sure what you are looking at.
>Thus, the estimated coefficient in example 2 is a quarter of that in
>example 1. (And the interaction effect should be (Mu.S-Mu.U)-(WT.S-WT.U) )
So I multiply it by 4 to show they are equivalent. Check!
>However the grand mean is already directly estimated. (So there is no
>need to multiply it by four.
OK, maybe we are getting close to my problem. What do you mean it is already directly estimated? Both the interaction and the grand mean are shown to be divided by 4 in the comparisons. How do you know that the first coefficient does not have to be multiplied by 4? But the 4th coefficient does? I can see that by looking at the actual numerical figures but I cannot get that from the documentation. What about coefficients 2 and 3? Are they directly estimated?
>Again try interpreting the equation given)
Do you mean the (WT.U+WT.S+Mu.U+Mu.S)/4?
That is what I am trying to interpret. This says to me that the coefficient (divided or multiplied or left as it is) will give the grand mean. How do I work out which to do? It does not match what we did with the interaction above.
If I set up a contrast matrix like this that would extract the "directly estimated" grand mean as a contrast and the "not directly estimated" interaction, but why?
contrast.matrix<-cbind(gm=c(1,0,0,0), dp=c(0,1,0,0), TNF=c(0,0,1,0), Interaction=c(0,0,0,4))
>Otherwise "The R book" has a good section on contrasts.
I'll have a look, and at the statsoft.
>But it would be best to have a look at linear models and
>parameterizations first.
I am doing but it does not seem to help with this question. As I said I have run anova and regressions against the data and it all comes out roughly as expected. But I must be having a blind spot about this divide by 4.
>Potentially, it would be helpful to add a few comments to the
>limmaUsersguide here?
I saw an email from Gordon Smyth in the mailings that said he would like feedback on this section.
-----Original Message-----
From: Bjoern Usadel [mailto:usadel at mpimp-golm.mpg.de]
Sent: 24 February 2009 16:29
To: john seers (IFR)
Cc: bioconductor at stat.math.ethz.ch
Subject: Re: [BioC] limma - interpreting factorial design
Hi John,
if you are just concerned about the numerical values, you can really
just take the equations and "interpret" them:
Interaction effect
example 1: (Mu.S-Mu.U)-(WT.S-WT.U)
example 2: ((Mu.S-Mu.U)-(WT.S-WT.U))/4
Thus, the estimated coefficient in example 2 is a quarter of that in
example 1. (And the interaction effect should be (Mu.S-Mu.U)-(WT.S-WT.U) )
However the grand mean is already directly estimated. (So there is no
need to multiply it by four. Again try interpreting the equation given)
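To see it concretely with the gene-1 values quoted further down in this
thread (nothing here beyond numbers already in the thread): the mean of
the 12 expression measurements is 4.7312, and the reported "gm"
coefficient is 18.9249 = 4 * 4.7312. The intercept column of the
sum-to-zero design therefore already estimates the grand mean directly,
and the weight of 4 in the contrast matrix simply inflates it fourfold.
The interaction column instead estimates the quarter-effect
((Mu.S-Mu.U)-(WT.S-WT.U))/4, so there the factor of 4 is exactly what
makes it match the treatment-contrast interaction (0.1588333 in both
runs).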
But it would be best to have a look at linear models and
parameterizations first.
Otherwise "The R book" has a good section on contrasts.
If you didn't want to pursue that further: use approach 1 in the limma
guide, as this is usually the easiest one and helps you formulating the
question you really want.
Potentially, it would be helpful to add a few comments to the
limmaUsersguide here?
john seers (IFR) wrote:
> Hi Bjoern
> Thanks for the reply.
> I am following the example on page 47 exactly, the only difference being using dp as Strain and TNF as Treatment.
> Here are my factors which gives you which measurements correspond to which treatment:
>> dp
> [1] Yes Yes Yes No No No Yes Yes Yes No No No
> Levels: No Yes
>> TNF
> [1] No No No No No No Yes Yes Yes Yes Yes Yes
> Levels: No Yes
>> If you then compare these values with the ones you really want to
>> extract you can come up with some simple transformations to do so.
> I have not got to that stage yet of what I "really" want to extract. I am trying to understand exactly why these two approaches are equivalent and what the figures actually represent.
>> In your example you also seem to extract different things from the
>> treatment-contrast parametrization than from the sum to zero
>> parametrization.
> In both cases I am extracting the major/primary coefficients and seeing how they relate. So they will be different. I am not extracting anything specific yet. I am having trouble with a description of a coefficient that is described as the "Grand mean" but is 4 times too big for what I think of as a Grand mean.
> The only directly comparable coefficient in these two approaches is the interaction and they are the same in the example. (If multiplied by 4). So, assuming it is correct to multiply by 4 what is the interpretation of the Grand mean coefficient at 18.9249361? If it is not correct to multiply by 4 what is the interpretation of an interaction coefficient that is 4 times smaller than the treatment contrasts coefficient?
> I have run an anova on this gene and with a bit of fiddling I can derive all the figures supplied by limma in both approaches and how they are linked. Except for when they should be 4 times bigger or 4 times smaller.
> Regards
> John
> ---
> -----Original Message-----
> From: Bjoern Usadel [mailto:usadel at mpimp-golm.mpg.de]
> Sent: 24 February 2009 12:18
> To: john seers (IFR)
> Subject: Re: [BioC] limma - interpreting factorial design
> Dear John,
> could you please also post
> which of your measurements correspond to which treatment?
> What helps a lot in interpretation is regrouping the terms on page 47 of
> the user guide e.g. (WT.U-WT.S+Mu.U-Mu.S)/4 and then comparing these to
> other contrasts or the contrast of interest.
> If you then compare these values with the ones you really want to
> extract you can come up with some simple transformations to do so.
> In your example you also seem to extract different things from the
> treatment-contrast parametrization than from the sum to zero
> parametrization.
> contrast.matrix<-cbind(Intercept=c(1, 0, 0, 0), dp=c(0,1,0,0),
> TNF=c(0,0,1,0), Interaction=c(0,0,0,1))
> If tnf is a factor exactly like in the limma example would most likely
> not extract the TNF main effect.
> Also the intercept has a different meaning which might cause the
> differences.
> Best Wishes,
> Björn
> john seers (IFR) wrote:
>> Hello All
>> Can someone help me with unravelling a bit of confusion I have about the
>> limma factorial design?
>> 8.7 Factor Designs (Page 47 approx) in the user guide has three
>> approaches that are basically equivalent. I am comparing the "sum to
>> zero" and the "treatment contrast" approaches. In the sum to zero
>> approach the comparisons are divided by 4 and this is where my
>> misunderstanding lies.
>> Just looking at the first gene as an example. I have put the expression
>> values below to give an idea of the magnitudes.
>> With the treatment contrast just extracting the coefficients straight I
>> get the following (code below):
>> eb$coef[1,]
>> # Intercept dp TNF Interaction
>> # 4.84942088 0.05031631 -0.36610669 0.15883329
>> With the sum to zero the comparisons are divided by 4. So one way to
>> extract the coefficients is below in the code. Using this way (in effect
>> multiplying by 4) I get the following:
>> eb$coef[1,]
>> # gm dp TNF Interaction
>> # 18.9249361 -0.2594659 0.5733801 0.1588333
>> So here is my problem. The grand mean looks 4 times too large but the
>> interaction matches the interaction from the treatments contrast
>> approach. So I can have one "looking" right but not both. i.e. To
>> multiply by 4 or not to multiply by 4, that is the question. How do I
>> interpret this? What am I missing in my understanding?
>> Thanks for any help
>> Regards
>> John
>> # Sum to zero code
>> fit<-lmFit(eset, design)
>> contrast.matrix<-cbind(gm=c(4,0,0,0), dp=c(0,4,0,0), TNF=c(0,0,4,0),
>> Interaction=c(0,0,0,4))
>> #contrast.matrix<-cbind(Interaction=c(0,0,-2,-2))
>> fit2<-contrasts.fit(fit, contrast.matrix)
>> eb<-eBayes(fit2)
>> # Treatment contrasts code
>> design<-model.matrix(~dp*TNF)
>> fit<-lmFit(eset, design)
>> contrast.matrix<-cbind(Intercept=c(1, 0, 0, 0), dp=c(0,1,0,0),
>> TNF=c(0,0,1,0), Interaction=c(0,0,0,1))
>> # Gene 1 expression level
>> exprs1<-exprs[1,]
>> # 4.865401 5.114202 4.719609 4.882969 4.857923
>> # 4.807370 4.538509 4.759865 4.779017 4.430844
>> # 4.519123 4.499975
>> _______________________________________________
>> Bioconductor mailing list
>> Bioconductor at stat.math.ethz.ch
>> https://stat.ethz.ch/mailman/listinfo/bioconductor
>> Search the archives: http://news.gmane.org/gmane.science.biology.informatics.conductor
>> .
Björn Usadel, PhD
Max Planck Institute of Molecular Plant Physiology
AG Integrative Carbon Biology
Am Muehlenberg 1
14476 Potsdam-Golm
Tel.: +49 331 5678153
email usadel at mpimp-golm.mpg.de
More information about the Bioconductor mailing list | {"url":"https://stat.ethz.ch/pipermail/bioconductor/2009-February/026469.html","timestamp":"2014-04-20T08:17:10Z","content_type":null,"content_length":"14875","record_id":"<urn:uuid:56b22ad4-278c-4270-80b6-807df5d1b2bf>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Revista de la Unión Matemática Argentina
Print version ISSN 0041-6932
Rev. Unión Mat. Argent. vol.50 no.1 Bahía Blanca jun. 2009
Functional versions of the Caristi-Kirk theorem
Mihai Turinici
Abstract. Many functional versions of the Caristi-Kirk fixed point theorem are nothing but logical equivalents of the result in question.
2000 Mathematics Subject Classification. Primary 47H10. Secondary 54H25.
Key words and phrases. Metric space; lsc function; Fixed/periodic point; Normality; Local boundedness; Feng-Liu property; Semimetric; Maximal element; Cauchy and asymptotic sequence; Regularity;
Non-expansive map.
Let also fixed (periodic) under 10] is basic for us.
(1b) , for all .
Then, necessarily,
hence, in particular,
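For the reader's orientation, the classical statement of the Caristi-Kirk result [10] reads: if $(X,d)$ is a complete metric space, $\varphi: X \to [0,\infty)$ is a lsc function, and the selfmap $T$ of $X$ satisfies
$$d(x,Tx) \leq \varphi(x) - \varphi(Tx), \quad \text{for all } x \in X,$$
then $T$ has at least one fixed point in $X$.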
The original proof of this result is by transfinite induction; see also Wong [40]. (It works as well for highly specialized versions of Theorem 1; cf. Kirk and Saliga [19]). Note that, in terms of
the associated (to
the contractivity condition (1b) reads
(1c) progressive).
So, by the Bourbaki meta-theorem [7], the underlying result is logically equivalent with the Zorn maximality principle subsumed to the precise order; i.e., with Ekeland's variational principle [12].
This tells us that the sequential type argument used in its proof (cf. Section 6) is also working in our framework; see also the paper of Pasicki [25]. A proof of Theorem 1 involving the chains of
the structure 33]; and its sequential translation has been developed in Dancs, Hegedus and Medvegyev [11]. Further aspects involving the general case may be found in Brunner [9] and Manka [21]; see
also Taskovic [32], Valyi [39], Nemeth [23] and Isac [14].
Now, the Caristi-Kirk fixed point theorem found (especially via Ekeland's approach) some basic applications to control and optimization, generalized differential calculus, critical point theory and
normal solvability; see the above references for details. So, it must be not surprising that, soon after its formulation, many extensions of Theorem 1 were proposed. (These involve its standard
version related to (1.2); and referred to as Theorem 1(st). But, only a few are concerned with the extended version of the same, related to (1.1); and referred to as Theorem 1(ex)). For example, in
the 1982 paper by Ray and Walker [28], the following result of this type was obtained. Call the function semi-normal when
and normal provided (in addition)
(Further aspects involving these notions will be developed in Section 2).
Theorem 2. Assume that there exists a normal function and some with
(1d) , for all .
Then, has at least one fixed point in .
Clearly, Theorem 2 includes Theorem 1(st), to which it reduces when 24]). Summing up, Theorem 2 is but a logical equivalent of Theorem 1 (st). (For a different proof of this, we refer to Section 4
On the other hand, in the 2005 paper by Turinici [38] the following fixed point result was established. Call the function right locally bounded from above at
If this holds for each right locally bounded from above (on locally bounded (above) in case: the image of each bounded part in
Theorem 3. Let the right locally bounded from above (on ) and the locally bounded (above) be such that
(1e) , .
Then, conclusions of Theorem 2 are retainable.
This result extends the one in Bae, Cho and Yeom [4]; see also Bae [3]. But, as precise there, all these extend Theorem 1(st); hence, so does Theorem 3. (In fact, a direct verification is available,
via 3 is a logical equivalent of Theorem 1 (st). For a technical extension of such facts we refer to Section 4.
Further, in Section 5, we show that our developments include as well the fixed point statement (comparable with Theorem 1(ex)) due to Feng and Liu [13]. And in Section 6, some extensions of Theorem 1
are given, in terms of maximality principles over metrical structures comparable with the 1976 Brezis-Browder ordering principle [8]. In fact, the obtained statements are extendable beyond the metrizable context; we
shall discuss them elsewhere.
(A) Let
Some basic facts involving the couple
Lemma 1. The following are valid
(i) ,
(ii) is a topological order isomorphism from to ; hence so is (between and )
(iii) is almost concave: is decreasing on ,
(iv) is concave: , for all with and all
(v) is sub-additive and is super-additive.
The proof is evident, by (2.1) above; so, we do not give details. Note that iii) and iv) are equivalent to each other, under ii). This follows from the (non-differential) mean value theorem in Bantas
and Turinici [5].
(B) Now, let
Further, let
Given the function
(Here, by convention,
The definition is consistent, via (2.2); moreover,
Lemma 2. Under these conventions,
Proof. Let the points 1 (i) and the implicit formula (b2), this gives 1(iii): 2.2 ) into account to get the desired conclusion.
In particular, the nonexpansivity condition (2a) holds under
In such a case, Lemma 2 includes directly the statement in Park and Bae [24]. Further aspects may be found in Suzuki [30]; see also Turinici [37].
(A) The former of these requires the normality setting (of (c1)).
Theorem 4. Assume that is normal and
(3a) , for each .
Then, is strongly fp-admissible (); hence fp-admissible ().
Proof. Let 2 give (1b) (modulo 1 applies to
As in Theorem 1, we have a couple of fixed point statements under this formulation; referred to as Theorem 4(ex) and Theorem 4(st). Clearly, Theorem 4 includes Theorem 1 (for 1. On the other hand,
when 4(st) is just Theorem 2 above. Further aspects may be found in Zhong, Zhu and Zhao [41]; see also Lin and Du [20].
(B) Another answer to the same is to be stated in the original semi-normality setting (with (c1) not accepted). Roughly speaking, this is to be obtained via "surrogates" of (c1); like, e.g.,
(3b) there exists
(Here, as already precise,
Theorem 5. Assume that is semi-normal and (3a)+(3b) hold. Then has at least one fixed point in .
Proof. Without loss, one may assume 1(i) and the choice of
This shows that 1(st) applies to
Now, Theorem 5 includes Theorem 1(st) (for 5 is a logical equivalent of Theorem 1 (st). Combining with a preceding fact, one therefore derives that Theorem 4 (st) includes Theorem 5 . Concerning this
aspect, note that the function
is semi-normal; but not normal. Hence, this inclusion is not obtainable in a direct way. Further technical aspects may be found in Turinici [36]; see also Jachymski [16].
Let again 1.2 ) is by using contractivity condition like in Theorem 3. A natural extension of these is as below. Let right locally -proper at
If right locally -proper (on
Theorem 6. Let the contractivity condition (1e) be true. Then, has at least one fixed point in .
Proof. If
where, as usually, 1 (st) is applicable to
Now, Theorem 6 includes Theorem 1(st), to which it reduces when 6 is logically equivalent with Theorem 1(st); hence to Theorem 3 as well. Concerning this last aspect, note that Theorem 6 includes
directly Theorem 3 (by taking
Clearly, 6 works here. On the other hand, Theorem 3 is not (directly) applicable when
Finally, combining this with the construction of Theorem 4, we may state a "hybrid" fixed point result as follows. In addition to
Theorem 7. Let the above data be such that
(4b) .
Then, has at least one fixed point in .
Proof. (Sketch) By the same way as in Theorem 6 , we get (3a) over 4
This result extends Theorem 6; hence, Theorem 1(st) as well. On the other hand, Theorem 7 follows from Theorem 1(st); because, so does Theorem 4(st). Summing up, Theorem 7 is but a logical equivalent
of Theorem 1(st); note that this conclusion also includes the fixed point statement in Suzuki [31]. Further, we may ask whether the construction in Theorem 5 may be used as well in deriving a fixed
point result extending Theorem 6. The answer is positive; further aspects will be delineated elsewhere. Some extensions of the obtained facts to multivalued maps 22]; see also Petrusel and
Sîntamarian [27].
Let 13], an interesting fixed point result (comparable with Theorem 1 ) was established. Let
it will be referred to as a Feng-Liu function. Clearly,
For, if this fails, there must be an
Theorem 8. Suppose that some Feng-Liu function may be found with
(5a) , for all .
Then, conclusions of Theorem 1 are retainable.
This result includes Theorem 1 for 15]; we do not give details). But, the reciprocal is also true; this will follow from the
Proof. (of Theorem 8) Denote for simplicity 5.1), we have the generic equivalencies
In particular, this shows that 8. This, added to (5a), tells us that Theorem 1 is applicable to
Summing up, Theorem 8 is but a logical equivalent of Theorem 1. Further technical aspects may be found in Jachymski [15], Petrusel [26], Bîrsan [6] and Rozoveanu [29]; see also Kada, Suzuki and
Takahashi [17].
(A) Let quasi-order (i.e.: reflexive and transitive relation) over it. By a pseudometric on reflexive [triangular [sufficient [almost metric (over maximal, in case: 7]. To state one of these, one may
proceed as below. Call the ascending sequence Cauchy when: asymptotic, provided:
(6a) each ascending sequence is
(6b) each ascending sequence is
Either of these will be referred to as: regular (modulo 34] is available.
Proposition 1. Assume that is regular (modulo ) and
(6c) is sequentially inductive:
each ascending sequence is bounded above (modulo ).
Then, for each there exists a -maximal with .
Note that the non-sufficient version of this result extends the Brezis-Browder ordering principle [8]. Further statements in the area were obtained in Altman [1] and Anisiu [2]; see also Kang and
Park [18]. However, all these are (mutually) equivalent; see Turinici [35] for details.
(B) A basic application of these facts may be given along the following lines. Let
Theorem 9. Let the precise condition be admitted; and let be a selfmap with the property (1b). Then, (1.1) is retainable in the stronger sense: for each there exists in with
Proof. Let 1 applies for
The sequence
This, by the relation above, yields (passing to limit as
Now, the regularity condition (6d) holds whenever
In particular, (6f) holds when 9 is just the fixed point statement in Caristi and Kirk [10] (Theorem 1). Some related aspects may be found in Isac [14] and Nemeth [23].
Acknowledgement. The author is very indebted to the referee for the careful reading of the manuscript and a number of useful suggestions.
[1] Altman M., A generalization of the Brezis-Browder principle on ordered sets, Nonlinear Analysis, 6 (1981), 157-165. [ Links ]
[2] Anisiu M. C., On maximality principles related to Ekeland's theorem, Seminar Funct. Analysis Numer. Meth. (Faculty of Math. Research Seminars), Preprint No. 1 (8 pp), "Babes-Bolyai" Univ.,
Cluj-Napoca (România), 1987. [ Links ]
[3] Bae J. S., Fixed point theorems for weakly contractive multivalued maps, J. Math. Analysis Appl., 284 (2003), 690-697. [ Links ]
[4] Bae J. S., Cho E. W. and Yeom S. H., A generalization of the Caristi-Kirk fixed point theorem and its applications to mapping theorems, J. Korean Math. Soc., 31 (1994), 29-48. [ Links ]
[5] Bantas G. and Turinici M., Mean value theorems via division methods, An. St. Univ. "A. I. Cuza" Iasi (S. I-a, Mat.), 40 (1994), 135-150. [ Links ]
[6] Bîrsan T., New generalizations of Caristi's fixed point theorem via Brezis-Browder principle, Math. Moravica, 8 (2004), 1-5. [ Links ]
[7] Bourbaki N., Sur le theoreme de Zorn, Archiv der Math., 2 (1949/1950), 434-437. [ Links ]
[8] Brezis H. and Browder F. E., A general principle on ordered sets in nonlinear functional analysis, Advances in Math., 21 (1976), 355-364. [ Links ]
[9] Brunner N., Topologische Maximalprinzipien, Zeitschrift Math. Logik Grundlagen Math., 33 (1987), 135-139. [ Links ]
[10] Caristi J. and Kirk W. A., Geometric fixed point theory and inwardness conditions, in "The Geometry of Metric and Linear Spaces" (Michigan State Univ., 1974), pp. 74-83, Lecture Notes Math. vol.
490, Springer, Berlin, 1975. [ Links ]
[11] Dancs S., Hegedus M. and Medvegyev P., A general ordering and fixed-point principle in complete metric space, Acta Sci. Math. (Szeged), 46 (1983), 381-388. [ Links ]
[12] Ekeland I., Nonconvex minimization problems, Bull. Amer. Math. Soc. (New Series), 1 (1979), 443-474. [ Links ]
[13] Feng Y. and Liu S., Fixed point theorems for multi-valued contractive mappings and multi-valued Caristi type mappings, J. Math. Anal. Appl., 317 (2006), 103-112. [ Links ]
[14] Isac G., Un theoreme de point fixe de type Caristi dans les espaces localement convexes, Review Res. Fac. Sci. Univ. Novi Sad (Math. Series), 15 (1985), 31-42. [ Links ]
[15] Jachymski J. R., Caristi's fixed point theorem and selections of set-valued contractions, J. Math. Anal. Appl., 227 (1998), 55-67. [ Links ]
[16] Jachymski J. R., Converses to the fixed point theorems of Zermelo and Caristi, Nonlinear Analysis, 52 (2003), 1455-1463. [ Links ]
[17] Kada O., Suzuki T. and Takahashi W., Nonconvex minimization theorems and fixed point theorems in complete metric spaces, Math. Japonica, 44 (1996), 381-391. [ Links ]
[18] Kang B. G. and Park S., On generalized ordering principles in nonlinear analysis, Nonlinear Analysis, 14 (1990), 159-165. [ Links ]
[19] Kirk W. A. and Saliga M., The Brezis-Browder order principle and extensions of Caristi's theorem, Nonlinear Analysis, 47 (2001), 2765-2778. [ Links ]
[20] Lin L. J. and Du W. S., Ekeland's variational principle, minimax theorems and existence of nonconvex equilibria in complete metric spaces, J. Math. Anal. Appl., 323 (2006), 360-370. [ Links ]
[21] Manka R., Turinici's fixed point theorem and the Axiom of Choice, Reports Math. Logic, 22 (1988), 15-19. [ Links ]
[22] Mizoguchi N. and Takahashi W., Fixed point theorems for multivalued mappings on complete metric spaces, J. Math. Anal. Appl., 141 (1989), 177-188. [ Links ]
[23] Nemeth A. B., A nonconvex vector minimization principle, Nonlinear Analysis, 10 (1986), 669-678. [ Links ]
[24] Park S. and Bae J. S., On the Ray-Walker extension of the Caristi-Kirk fixed point theorem, Nonlinear Analysis, 9 (1985), 1135-1136. [ Links ]
[25] Pasicki L., A short proof of the Caristi-Kirk fixed point theorem, Comment. Math. (Prace Mat.), 20 (1977/1978), 427-428. [ Links ]
[26] Petrusel A., Caristi type operators and applications, Studia Universitatis "Babes-Bolyai" (Mathematica), 48 (2003), 115-123. [ Links ]
[27] Petrusel A. and Sîntamarian A., Single-valued amd multi-valued Caristi type operators, Publ. Math. Debrecen, 60 (2002), 167-177. [ Links ]
[28] Ray W. O. and Walker A., Mapping theorems for Gateaux differentiable and accretive operators, Nonlinear Analysis, 6 (1982), 423-433. [ Links ]
[29] Rozoveanu P., Ekeland's variational principle for vector valued functions, Math. Reports (St. Cerc. Mat.), 2(52) (2000), 351-366. [ Links ]
[30] Suzuki T., Generalized distance and existence theorems in complete metric spaces, J. Math. Anal. Appl., 253 (2001), 440-458. [ Links ]
[31] Suzuki T., Generalized fixed point theorems by Bae and others, J. Math. Anal. Appl., 302 (2005), 502-508. [ Links ]
[32] Taskovic M. R., The Axiom of Choice, fixed point theorems and inductive ordered sets, Proc. Amer. Math. Soc., 116 (1992), 897-904. [ Links ]
[33] Turinici M., Maximal elements in a class of order complete metric spaces, Math. Japonica, 5 (1980), 511-517. [ Links ]
[34] Turinici M., Pseudometric extensions of the Brezis-Browder ordering principle, Math. Nachr. 130 (1987), 91-103. [ Links ]
[35] Turinici M., Metric variants of the Brezis-Browder ordering principle, Demonstr. Math., 22 (1989), 213-228. [ Links ]
[36] Turinici M., Mapping theorems and general Newton-Kantorovich processes, Ann. Sci. Math. Quebec, 14 (1990), 85-95. [ Links ]
[37] Turinici M., Pseudometric versions of the Caristi-Kirk fixed point theorem, Fixed Point Theory (Cluj-Napoca), 5 (2004), 147-161. [ Links ]
[38] Turinici M., Functional type Caristi-Kirk theorems, Libertas Math., 25 (2005), 1-12. [ Links ]
[39] Valyi I., A general maximality principle and a fixed point theorem in uniform space, Periodica Math. Hungarica, 16 (1985), 127-134. [ Links ]
[40] Wong C. S., On a fixed point theorem of contractive type, Proc. Amer. Math. Soc., 57 (1976), 283-284. [ Links ]
[41] Zhong C. K., Zhu J. and Zhao P. H., An extension of multi-valued contraction mappings and fixed points, Proc. Amer. Math. Soc., 128 (1999), 2439-2444. [ Links ]
Mihai Turinici
"A. Myller" Mathematical Seminar
"A. I. Cuza" University
11, Copou Boulevard
700506 Iasi, Romania
Received: March 5, 2008
Accepted: May 31, 2009 | {"url":"http://www.scielo.org.ar/scielo.php?script=sci_arttext&pid=S0041-69322009000100010&lng=es&nrm=iso","timestamp":"2014-04-19T07:47:49Z","content_type":null,"content_length":"105475","record_id":"<urn:uuid:734edce7-8deb-4e7f-a75f-bfb5f0a06a2d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic transformation operations which preserve computed answer substitutions of logic programs
- Theoretical Computer Science , 1994
"... In this paper, we present a specialised semantics for logic programs. It is a generalization of the s-semantics [16] and it is intended to describe program behaviour whenever some constraints on
procedure calls are assumed. Both operational and fixpoint constructions are defined. They characterize s ..."
Cited by 69 (19 self)
In this paper, we present a specialised semantics for logic programs. It is a generalization of the s-semantics [16] and it is intended to describe program behaviour whenever some constraints on
procedure calls are assumed. Both operational and fixpoint constructions are defined. They characterize successful derivations of programs where only atoms satisfying a given call-condition are
selected. The concept of specialisable call correct (s.c.c., in short) program with respect to a given call-condition is introduced. We show that specialisable call correct programs can be
transformed into call-correct ones. A sufficient condition to verify specialisable call correctness is stated.
- Theoretical Computer Science , 1995
"... We propose a transformation system for Constraint Logic Programming (CLP) programs and modules. The framework is inspired by the one of Tamaki and Sato for pure logic programs [37]. However, the
use of CLP allows us to introduce some new operations such as splitting and constraint replacement. We pr ..."
Cited by 37 (7 self)
We propose a transformation system for Constraint Logic Programming (CLP) programs and modules. The framework is inspired by the one of Tamaki and Sato for pure logic programs [37]. However, the use
of CLP allows us to introduce some new operations such as splitting and constraint replacement. We provide two sets of applicability conditions. The first one guarantees that the original and the
transformed programs have the same computational behaviour, in terms of answer constraints. The second set contains more restrictive conditions that ensure compositionality: we prove that under these
conditions the original and the transformed modules have the same answer constraints also when they are composed with other modules. This result is proved by first introducing a new formulation, in
terms of trees, of a resultants semantics for CLP. As corollaries we obtain the correctness of both the modular and the non-modular system w.r.t. the least model semantics. AMS Subject Classification
- Handbook of Logic in Artificial Intelligence and Logic Programming , 1998
"... Program transformation is a methodology for deriving correct and efficient programs from specifications. In this chapter, we will look at the so called 'rules + strategies' approach, and we will
report on the main techniques which have been introduced in the literature for that approach, in the case ..."
Cited by 34 (3 self)
Program transformation is a methodology for deriving correct and efficient programs from specifications. In this chapter, we will look at the so called 'rules + strategies' approach, and we will
report on the main techniques which have been introduced in the literature for that approach, in the case of logic programs. We will also present some examples of program transformation, and we hope
that through those examples the reader may acquire some familiarity with the techniques we will describe.
- JOURNAL OF LOGIC PROGRAMMING , 1995
"... ..."
, 1999
"... Needed narrowing is a complete operational principle for modern declarative languages which integrate the best features of (lazy) functional and logic programming. We define a transformation
methodology for functional logic programs based on needed narrowing. We provide (strong) correctness results ..."
Cited by 19 (13 self)
Needed narrowing is a complete operational principle for modern declarative languages which integrate the best features of (lazy) functional and logic programming. We define a transformation
methodology for functional logic programs based on needed narrowing. We provide (strong) correctness results for the transformation system w.r.t. the set of computed values and answer substitutions
and show that the prominent properties of needed narrowing -- namely, the optimality w.r.t. the length of derivations and the number of computed solutions -- carry over to the transformation process
and the transformed programs. We illustrate the power of the system by taking on in our setting two well-known transformation strategies (composition and tupling). We also provide an implementation
of the transformation system which, by means of some experimental results, highlights the benefits of our approach.
- PROC. OF THE INTERNATIONAL CONFERENCE ON ALGEBRAIC AND LOGIC PROGRAMMING, ALP'97, SOUTHAMPTON (ENGLAND , 1997
"... Functional logic languages with a complete operational semantics are based on narrowing, a generalization of term rewriting where unification replaces matching. In this paper, we study the
semantic properties of a general transformation technique called unfolding in the context of functional logic l ..."
Cited by 11 (9 self)
Functional logic languages with a complete operational semantics are based on narrowing, a generalization of term rewriting where unification replaces matching. In this paper, we study the semantic
properties of a general transformation technique called unfolding in the context of functional logic languages. Unfolding a program is defined as the application of narrowing steps to the calls in
the program rules in some appropriate form. We show that, unlike the case of pure logic or pure functional programs, where unfolding is correct w.r.t. practically all available semantics,
unrestricted unfolding using narrowing does not preserve program meaning, even when we consider the weakest notion of semantics the program can be given. We single out the conditions which guarantee
that an equivalent program w.r.t. the semantics of computed answers is produced. Then, we study the combination of this technique with a folding transformation rule in the case of innermost
conditional narrowing, and prove that the resulting transformation still preserves the computed answer semantics of the initial program, under the usual conditions for the completeness of innermost
conditional narrowing. We also discuss a relationship between unfold/fold transformations and partial evaluation of functional logic programs.
- ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS , 1993
"... An Unfold/Fold transformation system is a source-to-source rewriting methodology devised to improve the efficiency of a program. Any such transformation should preserve the main properties of
the initial program: among them, termination. To this end, in the field of logic programming, the class of ..."
Cited by 10 (3 self)
An Unfold/Fold transformation system is a source-to-source rewriting methodology devised to improve the efficiency of a program. Any such transformation should preserve the main properties of the
initial program: among them, termination. To this end, in the field of logic programming, the class of acyclic programs plays an important role, as it is closely related to the one of terminating
programs. The two classes coincide when negation is not allowed in the bodies of the clauses. In this paper it is proven that the Unfold/Fold transformation system defined by Tamaki and Sato
preserves the acyclicity of the initial program. As corollaries, it follows that when the transformation is applied to an acyclic program, then finite failure set for definite programs is preserved;
in the case of normal programs, all major declarative and operational semantics are preserved as well. These results cannot be extended to the class of left terminating programs without modifying the
, 1993
"... The simultaneous replacement transformation operation, is here defined and studied wrt normal programs. We give applicability conditions able to ensure the correctness of the operation wrt the
set of logical consequences of the completed database. We consider separately the cases in which the underl ..."
Cited by 10 (4 self)
The simultaneous replacement transformation operation, is here defined and studied wrt normal programs. We give applicability conditions able to ensure the correctness of the operation wrt the set of
logical consequences of the completed database. We consider separately the cases in which the underlying language is infinite and finite; in this latter case we also distinguish according to the kind
of domain closure axioms adopted. As corollaries we obtain results for Fitting's and Kunen's semantics. We also show how simultaneous replacement can mimic other transformation operations such as
thinning, fattening and folding, thus producing applicability conditions for them too.
"... We consider the replacement transformation operation, a very general and powerful transformation, and study under which conditions it preserves universal termination besides computed answer
substitutions. With this safe replacement we can significantly extend the safe unfold/fold transformation sequ ..."
Cited by 9 (3 self)
We consider the replacement transformation operation, a very general and powerful transformation, and study under which conditions it preserves universal termination besides computed answer
substitutions. With this safe replacement we can significantly extend the safe unfold/fold transformation sequence presented in [11]. By exploiting typing information, more useful conditions can be
defined and we may deal with some special cases of replacement very common in practice, namely switching two atoms in the body of a clause and the associativity of a predicate. This is a first step
in the direction of exploiting a Pre/Post specification on the intended use of the program to be transformed. Such specification can restrict the instances of queries and clauses to be considered and
then relax the applicability conditions on the transformation operations.
- Proceedings of the 1995 International Logic Programming Symposium (ILPS '95 , 1995
"... We study the relationships between the correctness of logic program transformation and program termination. We consider definite programs and we identify some `invariants' of the program
transformation process. The validity of these invariants ensures the preservation of the success set semantics, p ..."
Cited by 5 (4 self)
We study the relationships between the correctness of logic program transformation and program termination. We consider definite programs and we identify some `invariants' of the program
transformation process. The validity of these invariants ensures the preservation of the success set semantics, provided that the existential termination of the initial program implies the
existential termination of the final program. We also identify invariants for the preservation of the finite failure set semantics. We consider four very general transformation rules: definition
introduction, definition elimination, iff-replacement, and finite failure. Many versions of the transformation rules proposed in the literature, including unfolding, folding, and goal replacement,
are instances of the iff-replacement rule. By using our proposed invariants which are based on Clark completion, we prove, for our transformation rules, various results concerning the preservation of
both the success set and finite ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=86907","timestamp":"2014-04-20T21:35:53Z","content_type":null,"content_length":"38605","record_id":"<urn:uuid:97ea2712-6e43-4d6c-a7bc-b2b39e4567e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is dx?
Date: 08/25/2001 at 13:31:38
From: Michael Morse
Subject: dx
What does dx mean and where does it come from?
Date: 08/26/2001 at 06:10:56
From: Doctor Jeremiah
Subject: Re: dx
Hi Michael,
I assume by "dx" you mean the calculus version of dx.
Calculus is all about how to measure the slope of any arbitrary line,
especially curved ones.
Consider y = 2x^2 (the x^2 means "x squared").
If you used the "normal" method to get the slope you would pick two
points (lets pick (1,2) and (3,18) for this example) and then you
would make a ratio of the "rise" (the difference in the y values) and
the "run" (the difference in the x values)
If you did this you would have a slope of:
m = (y2-y1)/(x2-x1) = (18-2)/(3-1) = 16/2 = 8
The problem with this method is that it produces the wrong answer.
The only time it's right is for a straight line. For example, pick two
different points: (2,8) and (3,18)
Then you would have this slope:
m = (y2-y1)/(x2-x1) = (18-8)/(3-2) = 10/1 = 10
A curved line is not straight, so the slope will never be right except
for one thing: the closer the two points are to each other, the more
accurate the slope is. The curve gets flatter and flatter as the two
points get closer and closer. When they get infinitely close to each
other we get the most accurate answer because essentially the points
are so close to each other that there is no room for any curvy bits.
If we define "dx" to be the difference between two x-values that are
infinitely close to each other (an infinitely small difference in x
values), and we define "dy" to be the difference between two y-values
that are infinitely close to each other (an infinitely small
difference in y values), then we can pick two infinitely close points
and do this:
m = (y2-y1)/(x2-x1) = dy/dx
So dy/dx is the slope of a line. If we use the rules of calculus to
"differentiate" our equation (using the mythical d function):
y = 2x^2
d(y) = d(2x^2)
dy = 2 d(x^2)
dy = 2*2x*d(x)
dy = 2*2x*dx
dy = 4x*dx
We find that an infinitely small difference in y can be measured with
this equation: dy = 4x*dx. But if we rearrange it slightly:
dy = 4x*dx
dy/dx = 4x*dx/dx
dy/dx = 4x*1
dy/dx = 4x
We find that the slope of y = 2x^2 is 4x. Notice that the slope is
not a number; it actually changes depending on where in the graph we
are; you can see that the slope changes by graphing y = 2x^2.
So the slope at any point on the graph can be found with this equation
because the two points that we use to calculate with are infinitely
close together (for all intents and purposes they are the same point)
And since we know the definitions of dx and dy, we could say that the
slope at any point equals an infinitely small difference in y (dy)
divided by an infinitely small difference in x (dx). This is
absolutely true. And for a straight line graph it is the same as
taking the difference of any two points.
- Doctor Jeremiah, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/53768.html","timestamp":"2014-04-20T22:09:59Z","content_type":null,"content_length":"7794","record_id":"<urn:uuid:4261a20f-0cbc-435f-b27a-ee7df5bac028>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laguna Niguel Geometry Tutor
...I've been tutoring since 2003. I have a BS in Math (Education). I'm also passionate about grammar. I minored in Spanish.
12 Subjects: including geometry, Spanish, algebra 1, algebra 2
...And with my help, you or your child will too! Many overseas students are sometimes first and second generation, and one burden is that their family knows hardly any English. So all the
contracts and job descriptions fall on their shoulders; this is where I come in.
39 Subjects: including geometry, chemistry, writing, English
...In order to optimize my tutoring sessions, I will guide the student through a brief self-assessment of his/her learning strengths and weaknesses. I also seek to build on these strengths by
creating a learning environment that minimizes rote learning and focuses on stretching the student's thinki...
20 Subjects: including geometry, calculus, physics, ESL/ESOL
...I have a degree in math and have utilized this in teaching and tutoring students ranging from elementary school to high school. Having learned several foreign languages, my English grammar
skills have vastly improved. I have taught English grammar through teaching Latin and Spanish.
30 Subjects: including geometry, Spanish, English, reading
...I'm highly motivated and encouraging as a teacher. I graduated with a Master of Arts in Applied Linguistics from Biola University with High Honors and Cum Laude from the University of
California, Los Angeles, with a Bachelor of Arts in English and a minor in Social Thought and departmental honor...
22 Subjects: including geometry, English, ACT Reading, ACT Math | {"url":"http://www.purplemath.com/Laguna_Niguel_geometry_tutors.php","timestamp":"2014-04-21T02:21:57Z","content_type":null,"content_length":"24027","record_id":"<urn:uuid:f486ce3e-3406-4142-8051-5b9bbb15b588>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coasters101: Coaster Physics Calculations - Coaster101
Coasters101: Coaster Physics Calculations
Welcome to another session of Coasters101! Have you always wondered what type of calculations and mathematics engineers must do to design a roller coaster? Today, we are going to talk about the
physics of roller coasters and run a few calculations in order to compute:
•mass of train
•lift incline length
•force required to pull the train up the lift
•time required to reach the top of the lift
•maximum speed of the train
•radius of the bottom of the first drop to limit g-forces
Please keep in mind that everything is generally simplified. Please comment if you have any questions!
First, draw a quick sketch of the center-line of your intended layout. Since this is my first attempt, I will make it rather simple. I will use Excel for calculations and a CAD program for drawing
(though not required). I plan on linking the two together later on. There are a few basic parameters I decide to start out with: height of the station track from the ground is ten feet or 3.048
meters, height of the first hill is 120 feet or 36.576 meters, and the angle of the lift is at 42 degrees. Pictured below is my first quick sketch.
Now, it's time to start some calculations. Let's begin by calculating the force needed to pull the train up the lift hill incline, and thus the power needed for the motor. In order to do so, we need to
estimate the maximum mass of a fully loaded train to ensure our lift hill motor can pull even the heaviest train of gravy-loving coaster enthusiasts up the massive hill. We will plan on having a six-car
train. Each car holds two riders at 100 kg each, for a maximum mass of 735 kg per car (535 kg car + 2 x 100 kg riders). Therefore, the fully loaded coaster train will have a total mass of about
4,410 kg, which we round up to 4,500 kg (roughly 10,000 pounds) for the rest of the calculations.
We selected the angle of the lift to be at 42 degrees. This means that the train is going to be pulled up vertically a distance of 36.576m - 3.048m = 33.528m. The length of the incline will be 33.528m/sin(42°) = 50.106m.
We will finish calculating the force required to pull the train up the incline. There are two other assumptions we will make at this point: the velocity of the train as it exits the station and the
velocity at the top of the lift hill. The velocity coming out of the station will be 10mph or 4.4704 m/s. The speed at the top of the lift will be 18mph or 8.04672 m/s.
Energy is never destroyed; it is simply transferred from one body to another. Thus, we use an energy balance equation:
Kinetic Energy + Potential Energy + Work = Kinetic Energy + Potential Energy
Which is also written as: KE1 + PE1 + W = KE2 +PE2
Kinetic energy is a function of the velocity (KE=(1/2)mv^2). Potential energy is a function of the height (PE=mgh). Work=Force*distance.
Substitute the KE, PE, and W equations into our energy balance equation and we get this resulting equation:
(1/2)mv^2 + mgh + Fd = (1/2)mv^2 + mgh
Now we can insert our values and solve: .5(4500)(4.4704^2)+4500(9.8)(3.048)+F(50.106)=.5(4500)(8.046^2)+4500(9.8)(36.576)
F = 31,518.8 N of force, or about 7,086 lbf. Now we know how big of a motor we will need. One detail to note: we did not include the mass of the lift hill chain in our calculation.
How much time will it take for the train to reach the top of the lift hill? The acceleration of the train can be found using this equation: (vf)^2 = (vo)^2 + 2ad. Inserting our values: (8.04672)^2 = (4.4704)^2 + 2(a)(50.106). Now solve for a and we get 0.45 m/s^2.
Next, use this equation to compute the time: vf = vo + at. Input values and solve for t: t = (8.05 - 4.47)/0.45 = 8.01 sec. That's pretty quick for a lift hill, but it is only going up about one hundred feet and at a 42 degree angle.
Now it's time to calculate the maximum velocity of the ride. Since the 1st drop is the longest, the velocity at the bottom will be the greatest. Energy relationships will be used to calculate the
velocity: KE1 + PE1 = KE2 + PE2, which rearranges to v2 = sqrt(v1^2 + 2g(h1 - h2)).
Solve for v2 and we get 27.42 m/s, or 61.34 mph!
Finally, we want to figure out what the radius of the curve at the bottom of the first drop should be in order to keep the g-forces felt by the riders at 2.5 g's or less. At the bottom of the drop, riders feel the centripetal acceleration plus 1 g of gravity:
G's felt = G's + 1
2.5 = g + 1, so g = 1.5
A_centripetal = 1.5 * 9.8 = 14.7 m/s^2
Radius = v^2/A_centripetal = 27.42^2/14.7 = 51.14 m, or 167.78 feet.
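(Editor's note: the steps above are easy to reproduce mechanically. The short C++ program below is a sketch added for reference, not code from the original article; the bottom-of-drop height hBottom is an assumption, since the article never states where the first drop ends.)

#include <cmath>
#include <cstdio>
int main()
{
    const double PI = 3.14159265358979;
    const double g  = 9.8;      // m/s^2
    const double m  = 4500.0;   // rounded train mass from the article, kg
    const double v0 = 4.4704;   // 10 mph leaving the station, m/s
    const double v1 = 8.04672;  // 18 mph at the top of the lift, m/s
    const double h0 = 3.048;    // station track height, m
    const double h1 = 36.576;   // lift hill height, m
    double d = (h1 - h0) / sin(42.0 * PI / 180.0);           // incline length
    double F = (0.5*m*(v1*v1 - v0*v0) + m*g*(h1 - h0)) / d;  // energy balance
    double a = (v1*v1 - v0*v0) / (2.0*d);                    // vf^2 = vo^2 + 2ad
    double t = (v1 - v0) / a;                                // vf = vo + at
    double hBottom = 0.0;  // ASSUMED height of the bottom of the first drop
    double vmax = sqrt(v1*v1 + 2.0*g*(h1 - hBottom));
    double r = vmax*vmax / (1.5*g);  // 2.5 g felt = 1.5 g centripetal + 1 g
    printf("incline %.2f m, chain force %.0f N\n", d, F);
    printf("lift accel %.3f m/s^2, time to top %.1f s\n", a, t);
    printf("max speed %.2f m/s, pull-out radius %.1f m\n", vmax, r);
    return 0;
}

With hBottom = 0 the computed top speed comes out slightly above the article's 27.42 m/s, which simply reflects the assumed drop height.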
Stay tuned for the next session of Coasters101!
3 Responses
1. Wow – it’s not as easy as Roller Coaster Tycoon makes it out to be! Great article.
2. Thanks so much for creating all these articles!!
They were extremely helpful for a physics project, especially the article on banking turns and calculating the ideal banked turn!!!
Keep the great work coming!
Thanks so much,
3. This site is really helpful!
I am working on a project for my school, but I have a question:
about the radius of the drop, what do you mean by G’s=Acentripetal/9.8?
What exactly is Acentripetal? | {"url":"http://www.coaster101.com/2010/11/16/coasters101-coaster-physics-calculations/","timestamp":"2014-04-19T17:12:27Z","content_type":null,"content_length":"65232","record_id":"<urn:uuid:f81fb4d5-51e6-4268-a9ea-50bbe8a5d5af>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Word Problem
A rectangular page is designed to contain 72 square inches of print. The margins at the top and bottom of the page are each 4 inches deep. The margins on each side are 2 inches wide. The
dimensions of the page are such that the least possible amount of paper is used.
Thus the width of the page is ( ) inches, its height is ( ) inches, and its total area is ( ) square inches. There's a whole lot of white space on that page, its minimum area notwithstanding!
1. Draw a rough sketch.
2. If the printed area is determined by the width x and the length y the area of the sheet of paper is:
$A = (x+4)(y+8)$
3. Additional condition: $x \cdot y = 72$
Solve this equation for y and plug the term into the first equation. You'll get the function $A(x) = (x+4)\left(\frac{72}{x} + 8\right)$.
4. Differentiate A wrt x and solve the equation A'(x) = 0 for x.
I'll leave the rest for you.
This can be done with algebra, but I'm told it's straightforward as a calculus problem:
Let x be the width of the page and y be the height. Then we're given that (x - 4)(y - 8) = 72, so y = 72/(x - 4) + 8. Hence the area is xy = 72x/(x - 4) + 8x. Take the derivative of this quantity
with respect to x and set it equal to zero to find its minimum value for x > 4. Then use this info to calculate y and the total area xy.
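(For reference, working out the step both replies leave to the reader: $A(x) = (x+4)\left(\frac{72}{x}+8\right) = 8x + \frac{288}{x} + 104$, so $A'(x) = 8 - \frac{288}{x^2} = 0$ gives $x = 6$ and hence $y = 72/6 = 12$. The page is then $6+4 = 10$ inches wide and $12+8 = 20$ inches tall, with a total area of $200$ square inches; either parametrization yields the same result.)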
graph question...
Sorry about the lack of a graph shown.
Fig. 10 shows a sketch of the graph $y=\frac{1}{x}$.
i) sketch the graph $y=\frac{1}{x-2}$
ii) find the x-coordinates of the points of intersection of the graphs of $y=x$ and $y=\frac{1}{x-2}$
I'm not sure about the translation of this graph for question i.
For question ii I got as far as $(x-1)^2-1=1$,
but then got $(x-1)^2=0$,
which is wrong. Can someone explain why $(x-1)^2=2$ is right?
Also, how would I position these points on the sketch drawn in i?
Thanks a lot if you can help, as this is the last bit of revision I'm doing for tomorrow!
1. You would transform this using the same rules as normal graphs. Say, for example, you have $y=x$ and $y=x-2$: what translation would the second one be? Apply this rule to your graph; this is a basic transformation.
2. At points of intersection the two graphs are equal, so we can say:
$x=\frac{1}{x-2}$, therefore $x(x-2)=1$.
$x^2 - 2x -1=0$. I am sure you can solve the resulting quadratic.
Hope this helps
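(An editorial note, since the original poster asked why $(x-1)^2 = 2$ is right: completing the square on $x^2 - 2x - 1 = 0$ gives $x^2 - 2x + 1 = 2$, i.e. $(x-1)^2 = 2$. Hence $x = 1 \pm \sqrt{2}$, so the graphs meet at $x \approx 2.41$ and $x \approx -0.41$, one on each branch of the translated hyperbola.)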
Monmouth Junction Algebra Tutor
...Since I have been employed, I have been exposed to 3rd-7th grade curriculum. I excel at test preparation, always making my own practice tests for the children - especially math. I have the
children's book report books so that I may assist them in their writing activities.
19 Subjects: including algebra 1, reading, English, anatomy
...My style is to break down math to the simplest form and build up from there. I am also very patient as I am aware that math is not easy for everyone. I very much emphasize the importance of
practicing and make sure my students do homework to apply the lessons taught.
11 Subjects: including algebra 1, algebra 2, geometry, SAT math
...As a recent test taker, I have lots of fresh experience with these standardized tests, which many tutors lack. I am also a Class A United States Chess Federation player. My USCF rating is 1960.
11 Subjects: including algebra 1, SAT math, ACT Math, chess
...I will go the extra mile to help your son or daughter succeed in class. This means staying 30 - 60 minutes extra until my student understands the material. I am also getting my Associates in
Biology and took many of the science classes.I took many classes in biology, and I am going to get my associates degree in Biology.
8 Subjects: including algebra 2, algebra 1, chemistry, biology
...I have a very strong academic math background, including a math degree from Johns Hopkins. I have over 3000 hours tutoring experience. I can tutor the most difficult high school subjects,
including calculus and precalculus.
32 Subjects: including algebra 2, algebra 1, English, GRE | {"url":"http://www.purplemath.com/monmouth_junction_algebra_tutors.php","timestamp":"2014-04-16T04:45:01Z","content_type":null,"content_length":"24095","record_id":"<urn:uuid:629a2a89-5c1a-4c6a-b781-344da4b0571e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Influence of Photon Attenuation on Tumor-to-Background and Signal-to-Noise Ratios for SPECT Imaging.
Department of Mathematics and Computer Science, College of the Holy Cross, Worcester, MA 01610.
IEEE Nuclear Science Symposium conference record. Nuclear Science Symposium
02/2007; 5:3609-3615. DOI:10.1109/NSSMIC.2007.4436905 In proceeding of: Nuclear Science Symposium Conference Record, 2007. NSS '07. IEEE, Volume: 5
ABSTRACT Expanding on the work of Nuyts et al. [1], Bai et al. [2], and Bai and Shao [3], who all studied the effects of attenuation and attenuation correction on tumor-to-background ratios and
signal detection, we have derived a general expression for the tumor-to-background ratio (TBR) for SPECT attenuated data that have been reconstructed with a linear, non-iterative reconstruction
operator O. A special case of this is when O represents discrete filtered back-projection (FBP). The TBR of the reconstructed, uncorrected attenuated data (TBR(no-AC)) can be written as a weighted
sum of the TBR of the FBP-reconstructed unattenuated data (TBR(FBP)) and the TBR of the FBP-reconstructed "difference" projection data (TBR(diff)). We evaluated the expression for TBR(no-AC) for a
variety of objects and attenuation conditions. The ideal observer signal-to-noise ratio (SNR(ideal)) was also computed in projection space, in order to obtain an upper bound on signal detectability
for a signal-known-exactly/background-known-exactly (SKE/BKE) detection task. The results generally show that SNR(ideal) is lower for tumors located deeper within the attenuating medium and increases
for tumors nearer the edge of the object. In addition, larger values for the uniform attenuation coefficient μ lead to lower values for SNR(ideal). The TBR for FBP-reconstructed, uncorrected
attenuated data can both under- and over-estimate the true TBR, depending on several properties of the attenuating medium, including the shape of the attenuator, the uniformity of the attenuator, and
the degree to which the data are attenuated.
Math Forum Discussions
infinite countable total orderings embeddable in the reals
Posted: Jun 22, 2013 5:25 PM
I'm looking at infinite countable total orderings (S, <_{S} )
with | S | = aleph_0
--- embeddable in the reals.
For example, any countably infinite ordinal beta < omega_1
is such.
In ordinal logic, some "large" countable ordinals have
been defined:
-- Veblen hierarchy,
< http://en.wikipedia.org/wiki/Veblen_function > .
The Feferman–Schütte ordinal is denoted Gamma_0:
< http://en.wikipedia.org/wiki/Feferman%E2%80%93Sch%C3%BCtte_ordinal > .
The Bachmann-Howard ordinal,
Then, one with a rather forbidding name is known by
its name, namely:
Psi_{0}(Omega_{omega}) ,
< http://en.wikipedia.org/wiki/%CE%A8%E2%82%80%28%CE%A9%CF%89%29 > .
Maybe some set theories without the full axioms can define a countable
total ordering embeddable in the reals, yet be incapable of proving
that that ordering is not complete in the way the real numbers are.
I don't know...
David Bernier
On Hypnos,
More sets and functions
(1) Let $f: X \to Y$ be a function and $A_{1}, A_{2} \in \mathcal{P}(X)$. (i) Prove that $A_{1} \subseteq A_{2} \Rightarrow \overrightarrow{f}(A_{1}) \subseteq \overrightarrow{f}(A_{2})$. Prove
that the converse is not universally true. Give a simple condition on $f$ which is equivalent to the converse. (ii) Prove that $\overrightarrow{f}(A_{1} \cap A_{2}) \subseteq \overrightarrow{f}
(A_{1}) \cap \overrightarrow{f}(A_{2})$. Prove that equality is not universally true. (iii) Prove that $\overrightarrow{f}(A_{1} \cup A_{2}) = \overrightarrow{f}(A_{1}) \cup \overrightarrow{f}(A_
(i) So $x \in A_1 \Rightarrow x \in A_2$. Then $\overrightarrow{f}(A_{1}) = \{f(x) | x \in A_1 \} \Rightarrow \overrightarrow{f}(A_{2}) = \{f(x) | x \in A_2 \}$. Thus $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2)$. The converse is $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2) \Rightarrow A_1 \subseteq A_2$. Suppose that $A_{1} = \{c \}$ and $A_{2} = \{d \}$.
Let $f(a) = a$. Then $\overrightarrow{f}(A_1) = \overrightarrow{f}(A_2) = \emptyset$ but $A_1 \not\subseteq A_2$. Is this correct? What would be the condition on $f$ which is equivalent to its converse?
(ii) $\overrightarrow{f}(A_{1} \cap A_{2}) = \{ f(x) | x \in A_{1} \cap A_2 \}$ which is a subset of $\overrightarrow{f}(A_{1}) \cap \overrightarrow{f}(A_{2})$. What would be a counterexample to
show inequality?
(iii) This is basically the same as part (ii) except with an $\text{or}$?
(i) Prove that $A_{1} \subseteq A_{2} \Rightarrow \overrightarrow{f}(A_{1}) = \overrightarrow{f}(A_{2})$.
That is not true!
This is true.
(i) Prove that $A_{1} \subseteq A_{2} \Rightarrow \overrightarrow{f}(A_{1}) \subseteq \overrightarrow{f}(A_{2})$.
I meant to say that $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2)$.
For 1(i) the converse $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2)$ is not universally true.
Suppose that $A_{1} = \{a, b \}$ and $A_2 = \{c,d \}$. Define a function $f(A) = \emptyset$. Then $\overrightarrow{f}(A_1) = \overrightarrow{f}(A_2) = \emptyset$ but $A_{1} \not\subseteq A_{2}$.
A condition on $f$ which is equivalent to its converse is that $f(A) \neq \emptyset$.
Is this correct?
Ok, if $A_1 = \{ a,b,c,d \} \ \text{and} \ A_2 = \{c,d,e,f,g,l,m \}$ and we define a function $f(c) = 6$. Then $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2) = 6$ but $A_1 \not\subseteq A_2$.
Suppose that $A = \{ 1,2,3,4\} ,\;B = \{ h,j,k,m\}$ then define a function $f:A \mapsto B$ by $f = \left\{ {\left( {1,j} \right),\left( {2,m} \right),\left( {3,k} \right),\left( {4,j} \right)} \
Let $A_1 = \left\{ {1,3} \right\}\;\& \;A_2 = \left\{ {2,3,4} \right\}$; note that $\overrightarrow f \left( {A_1 } \right) \subseteq \overrightarrow f \left( {A_2 } \right)\quad \text{but}\quad A_1 \not\subseteq A_2$
Was my example correct?
Do you see how different in detail my example is from the one you tried to give? To be honest, I would not accept it. It lacks the detail that shows that you really understand how functions operate.
But I am not grading you, so I cannot say if it is correct or not; I just do not find it so.
How do you become better at these types of problems and math in general? Do you just need to read a lot of books and do a lot of problems and ask interesting questions? I mean, is math meant to be
learned slowly?
I have always advised students to simply copy out proofs from well-written textbooks, a book at the correct level. Set theory & logic is the basis of all proofs.
Well, it certainly is not meant to be approached in the shotgun way that your assortment of problems indicates you are trying.
My own take on this is that Math is learned in basically the same manner as practically any other field. You need to work problems, ask questions, and work more problems. Some time for the
information to "gel" helps, as well as seeing how the material applies to either real-life problems or how it is applied to other fields.
Just keep at it and be patient. You'll get it.
Maybe the problem is that this is very abstract to you. The first book on math having serious proofs I ever read was on number theory, which is much less abstract. And perhaps this will make you
understand what proofs are about. All set theory is, is doing it much more formally.
For 1(i) the condition on $f$ is that it has to be injective?
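(An editorial note to close the thread: the final conjecture is correct; the condition making the converse of (i) hold is precisely that $f$ be injective. And a standard counterexample for strict containment in (ii), which the thread never returned to, is $f(x) = x^2$ on $\mathbb{R}$ with $A_1 = \{-1\}$ and $A_2 = \{1\}$: then $\overrightarrow{f}(A_1 \cap A_2) = \overrightarrow{f}(\emptyset) = \emptyset$, while $\overrightarrow{f}(A_1) \cap \overrightarrow{f}(A_2) = \{1\}$. The same function also defeats the converse of (i), since $\overrightarrow{f}(A_1) \subseteq \overrightarrow{f}(A_2)$ but $A_1 \not\subseteq A_2$.)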
Classical Mechanics/Central Field
From Wikibooks, open books for an open world
Consider a central potential V(r). A central potential is one that depends only on the field point's distance from the origin; in other words, the potential is isotropic.
The Lagrangian of the system can be written as
$\mathcal{L} = \frac{1}{2} m\dot{\vec{x}}^{2} - V(r)$
Since the potential is spherically symmetric, it makes sense to write the Lagrangian in spherical coordinates.
$\dot{\vec{x}}^{2} = \left(\frac{d}{dt}\left(r \sin\phi \sin\theta, r \cos\phi \sin\theta, r \cos\theta\right)\right)^{2}$
It can then be worked out that:
$\dot{\vec{x}}^{2} = \dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\dot{\phi}^{2}\sin^{2} \theta$
Hence the equation for the Lagrangian is
$\mathcal{L} = \frac{1}{2} m\left(\dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\dot{\phi}^{2}\sin^{2} \theta\right) - V(r)$
One can then extract three laws of motion from the Lagrangian using the Euler-Lagrange formula
$\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{r}}\right) - \frac{\partial\mathcal{L}}{\partial r} = 0 \Rightarrow \frac{d}{dt}\left(m\dot{r}\right) - \left(mr\dot{\theta}^{2} + mr\dot{\
phi}^{2}\sin^{2}\theta - \frac{\partial V}{\partial r}\right) = 0 \Rightarrow m\frac{d^{2}r}{dt^{2}} = mr\dot{\theta}^{2} + mr\dot{\phi}^{2}\sin^{2}\theta - \frac{\partial V}{\partial r}$
This looks messy, but when we look at the Euler-Lagrange relation for $\phi$, we have
$\frac{d}{dt}\left(mr^{2}\dot{\phi}\sin^{2}\theta\right) = 0$
Hence $mr^{2}\dot{\phi}\sin^{2}\theta$ is a constant throughout the motion. | {"url":"http://en.wikibooks.org/wiki/Classical_Mechanics/Central_Field","timestamp":"2014-04-20T13:32:43Z","content_type":null,"content_length":"26519","record_id":"<urn:uuid:584dac11-f395-4510-ae30-73b2ddde8f02>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
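As a quick numerical sanity check (not part of the original derivation), the C++ sketch below integrates motion in an assumed central potential $V(r) = -k/r$ and prints $m(x\dot{y} - y\dot{x})$, which up to sign equals the conserved quantity $mr^{2}\dot{\phi}\sin^{2}\theta$ in the parametrization above. The constant $k$, the time step, and the initial data are arbitrary choices.

#include <cmath>
#include <cstdio>
int main()
{
    const double m = 1.0, k = 1.0, dt = 1.0e-4;
    double x = 1.0, y = 0.0, z = 0.3;     // arbitrary initial position
    double vx = 0.0, vy = 0.9, vz = 0.1;  // arbitrary initial velocity
    for (long i = 0; i <= 200000; ++i)
    {
        if (i % 50000 == 0)  // the printed value stays (nearly) constant
            printf("t = %5.1f   m(x vy - y vx) = %.8f\n", i*dt, m*(x*vy - y*vx));
        double r3 = pow(x*x + y*y + z*z, 1.5);
        double ax = -k*x/(m*r3), ay = -k*y/(m*r3), az = -k*z/(m*r3);
        vx += ax*dt; vy += ay*dt; vz += az*dt;  // symplectic Euler step
        x  += vx*dt; y  += vy*dt; z  += vz*dt;
    }
    return 0;
}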
I need some help here.
I've got a 2D array (vector<vector<double>> v), and I need to find the determinant. I'm at work and have to convert an old MATLAB program over, but I've run into this little problem... I've been
googling and haven't found a good definition of what it is. Can someone help me out?
Go to www.gamedev.net and read their articles and resources concerning matrices and quaternions - I think it's in the math and physics section.
They have several articles about the determinant and sample code that will find the determinant of a matrix.
OK, I'm reading it; a lot of it is going way over my head. Right now I'm trying to sort out the 2D array (which I guess is a matrix) in my other thread.
OK, I get how to multiply them to get one, but I have a question.
If you scroll down to where he multiplies and has that huge 4-chunk matrix, are those multiplied together or added together to produce the final matrix?
Matrix Matrix_Multiply(Matrix mat1, Matrix mat2)
int i, j, k;
Matrix mat;
// For speed reasons, use the unroll-loops option of your compiler.
for(i = 0; i < 4; i++)
for(j = 0; j < 4; j++)
for(k = 0, mat[i][j] = 0; k < 4; k++)
mat[i][j] += mat1[i][k] * mat2[k][j];
return mat;
Multiplied. You concatenate matrices by multiplying them together.
I would first try this outside of the STL vector. Use a standard 2D array and attempt to find its determinant. Then, when you fully understand it, use the STL. Personally, I'd stick with what is
straightforward and not worry about all the razzamatazz of the STL. There are times to use it and it is a great tool, but this is not one of those times.
Also, I would not recommend returning a matrix as this can get confusing. If you know the data type of your matrices then there is really no reason to make the function work with all data types -
unless you are coding an abstract generic matrix class that has a determinant function. I'm not sure why he (the author) is creating a matrix object, but remember it's only one approach to the problem.
If you do not like the 2D array because of the inherent multiplies to fetch elements and set elements, turn the 2D array into a 1D array like the author states.
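Since the thread never actually shows determinant code, here is a sketch of one standard approach - Gaussian elimination with partial pivoting - written against the OP's vector<vector<double>> type. This is my addition, not code from the gamedev.net articles. Elimination runs in O(n^3), far cheaper than the O(n!) cofactor expansion most definitions describe.

#include <cmath>
#include <utility>
#include <vector>

// Determinant via Gaussian elimination with partial pivoting.
// The matrix is passed by value because elimination destroys it.
double determinant(std::vector<std::vector<double> > a)
{
    const std::size_t n = a.size();
    double det = 1.0;
    for (std::size_t col = 0; col < n; ++col)
    {
        // Pick the largest pivot in this column for numerical stability.
        std::size_t pivot = col;
        for (std::size_t row = col + 1; row < n; ++row)
            if (std::fabs(a[row][col]) > std::fabs(a[pivot][col]))
                pivot = row;
        if (std::fabs(a[pivot][col]) < 1e-12)
            return 0.0;  // (numerically) singular matrix
        if (pivot != col)
        {
            std::swap(a[pivot], a[col]);  // a row swap flips the sign
            det = -det;
        }
        det *= a[col][col];
        // Zero out the entries below the pivot.
        for (std::size_t row = col + 1; row < n; ++row)
        {
            double f = a[row][col] / a[col][col];
            for (std::size_t c = col; c < n; ++c)
                a[row][c] -= f * a[col][c];
        }
    }
    return det;  // product of pivots, sign-adjusted for the swaps
}

For example, calling it on the 2x2 matrix {{1,2},{3,4}} returns -2.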
Figure 4.
Thermal conductance of the DNW and phonon dispersion relation. (left) Thermal conductance as a function of the diameter of the DNW without vacancy defects, for several temperatures. Inset: the
exponent n of the diameter dependence of the thermal conductance, for several temperatures. (right) Phonon dispersion relation of the 〈100〉 DNW with 1.0 nm diameter, against the wave vector q. Here
a = 3.567 Å. Green and purple solid lines show the weight functions in the thermal conductance for 300 K and 5 K.
Yamamoto et al. Nanoscale Research Letters 2013 8:256 doi:10.1186/1556-276X-8-256
A general formula for valuing defaultable securities
- Journal of Finance, 2007
Cited by 44 (2 self)
We test the doubly stochastic assumption under which firms ’ default times are correlated only as implied by the correlation of factors determining their default intensities. Using data on U.S.
corporations from 1979 to 2004, this assumption is violated in the presence of contagion or “frailty ” (unobservable explanatory variables that are correlated across firms). Our tests do not depend
on the time-series properties of default intensities. The data do not support the joint hypothesis of well-specified default intensities and the doubly stochastic assumption. We find some evidence of
default clustering exceeding that implied by the doubly stochastic model with the given intensities. WHY DO CORPORATE DEFAULTS CLUSTER IN TIME? Several explanations have been explored. First, firms
may be exposed to common or correlated risk factors whose co-movements cause correlated changes in conditional default probabilities. Second, the event of default by one firm may be “contagious, ” in
that one such event may directly induce other corporate failures, as with the collapse of Penn
, 2008
"... This paper shows that the probability of extreme default losses on portfolios of U.S. corporate debt is much greater than would be estimated under the standard assumption that default
correlation arises only from exposure to observable risk factors. At the high confidence levels at which bank loan p ..."
Cited by 33 (2 self)
Add to MetaCart
This paper shows that the probability of extreme default losses on portfolios of U.S. corporate debt is much greater than would be estimated under the standard assumption that default correlation
arises only from exposure to observable risk factors. At the high confidence levels at which bank loan portfolio and CDO default losses are typically measured for economic-capital and rating
purposes, our empirical results indicate that conventionally based estimates are downward biased by a full order of magnitude on test portfolios. Our estimates are based on U.S. public non-financial
firms existing between 1979 and 2004. We find strong evidence for the presence of common latent factors, even when controlling for observable factors that provide the most accurate available model of
firm-by-firm default probabilities. ∗ We are grateful for financial support from Moody’s Corporation and Morgan Stanley, and for research assistance from Sabri Oncu and Vineet Bhagwat. We are also
grateful for remarks from Torben Andersen, André Lucas, Richard Cantor, Stav Gaon, Tyler Shumway, and especially Michael Johannes. This revision is much improved because of suggestions by a referee,
an associate editor, and Campbell Harvey. We are thankful to Moodys and to Ed Altman for generous assistance with data. Duffie is at The Graduate School of Business, Stanford University. Eckner and
Horel are at Merrill Lynch. Saita is at Lehman
, 2006
"... This article presents a framework for studying the role of recovery on defaultable debt prices for a wide class of processes describing recovery rates and default probability. These debt models
have the ability to differentiate the impact of recovery rates and default probability, and can be employe ..."
Cited by 26 (0 self)
Add to MetaCart
This article presents a framework for studying the role of recovery on defaultable debt prices for a wide class of processes describing recovery rates and default probability. These debt models have
the ability to differentiate the impact of recovery rates and default probability, and can be employed to infer the market expectation of recovery rates implicit in bond prices. Empirical
implementation of these models suggests two central findings. First, the recovery concept that specifies recovery as a fraction of the discounted par value has broader empirical support. Second,
parametric debt valuation models can provide a useful assessment of recovery rates embedded in bond prices.
- CONTAGION IN PORTFOLIO CREDIT RISK 25 , 2007
"... We value synthetic CDO tranche spreads, index CDS spreads, k th-to-default swap spreads and tranchelets in an intensity-based credit risk model with default contagion. The default dependence is
modelled by letting individual intensities jump when other defaults occur. The model is reinterpreted as ..."
Cited by 17 (5 self)
Add to MetaCart
We value synthetic CDO tranche spreads, index CDS spreads, k th-to-default swap spreads and tranchelets in an intensity-based credit risk model with default contagion. The default dependence is
modelled by letting individual intensities jump when other defaults occur. The model is reinterpreted as a Markov jump process. This allows us to use a matrix-analytic approach to derive
computationally tractable closed-form expressions for the credit derivatives that we want to study. Special attention is given to homogenous portfolios. For a fixed maturity of five years, such a
portfolio is calibrated against CDO tranche spreads, index CDS spread and the average CDS spread, all taken from the iTraxx Europe series. After the calibration, which renders perfect fits, we
compute spreads for tranchelets and k th-to-default swap spreads for different subportfolios of the main portfolio. Studies of the implied tranche-losses and the implied loss distribution in the
calibrated portfolios are also performed. We implement two different numerical methods for determining the distribution of the Markov-process. These are applied in separate calibrations in order to
verify that the matrix-analytic method is independent of the numerical approach used to find the law of the process. Monte Carlo simulations are also performed to check the correctness of the
numerical implementations.
, 2005
"... We propose a two-sided jump model for credit risk by extending the Leland-Toft endogenous default model based on the geometric Brownian motion. The model shows that jump risk and endogenous
default can have significant impacts on credit spreads, optimal capital structure, and implied volatility of e ..."
Cited by 13 (0 self)
Add to MetaCart
We propose a two-sided jump model for credit risk by extending the Leland-Toft endogenous default model based on the geometric Brownian motion. The model shows that jump risk and endogenous default
can have significant impacts on credit spreads, optimal capital structure, and implied volatility of equity options: (1) The jump and endogenous default can produce a variety of non-zero credit
spreads, including upward, humped, and downward shapes; interesting enough, the model can even produce, consistent with empirical findings, upward credit spreads for speculative grade bonds. (2) The
jump risk leads to much lower optimal debt/equity ratio; in fact, with jump risk, highly risky firms tend to have very little debt. (3) The two-sided jumps lead to a variety of shapes for the implied
volatility of equity options, even for long maturity options; and although in generel credit spreads and implied volatility tend to move in the same direction under exogenous default models, but this
may not be true in presence of endogenous default and jumps. In terms of mathematical contribution, we give a proof of a version of the “smooth fitting ” principle for the jump model, justifying a
conjecture first suggested by Leland and Toft under the Brownian model. 1
, 2007
"... We model dynamic credit portfolio dependence by using default contagion in an intensity-based framework. Two different portfolios (with 10 obligors), one in the European auto sector, the other
in the European financial sector, are calibrated against their market CDS spreads and the corresponding CD ..."
Cited by 6 (3 self)
Add to MetaCart
We model dynamic credit portfolio dependence by using default contagion in an intensity-based framework. Two different portfolios (with 10 obligors), one in the European auto sector, the other in the
European financial sector, are calibrated against their market CDS spreads and the corresponding CDS-correlations. After the calibration, which are perfect for the banking portfolio, and good for the
auto case, we study several quantities of importance in active credit portfolio management. For example, implied multivariate default and survival distributions, multivariate conditional survival
distributions, implied default correlations, expected default times and expected ordered defaults times. The default contagion is modelled by letting individual intensities jump when other defaults
occur, but be constant between defaults. This model is translated into a Markov jump process, a so called multivariate phase-type distribution, which represents the default status in the credit
portfolio. Matrix-analytic methods are then used to derive expressions for the quantities studied in the calibrated portfolios.
, 2006
"... We study a model for default contagion in intensity-based credit risk and its consequences for pricing portfolio credit derivatives. The model is specified through default intensities which are
assumed to be constant between defaults, but which can jump at the times of defaults. The model is transla ..."
Cited by 6 (2 self)
Add to MetaCart
We study a model for default contagion in intensity-based credit risk and its consequences for pricing portfolio credit derivatives. The model is specified through default intensities which are
assumed to be constant between defaults, but which can jump at the times of defaults. The model is translated into a Markov jump process which represents the default status in the credit portfolio.
This makes it possible to use matrix-analytic methods to derive computationally tractable closed-form expressions for single-name credit default swap spreads and k th-to-default swap spreads. We
”semi-calibrate” the model for portfolios (of up to 15 obligors) against market CDS spreads and compute the corresponding k th-to-default spreads. In a numerical study based on a synthetic portfolio
of 15 telecom bonds we study a number of questions: how spreads depend on the amount of default interaction; how the values of the underlying market CDS-prices used for calibration influence k
th-th-to default spreads; how a portfolio with inhomogeneous recovery rates compares with a portfolio which satisfies the standard assumption of identical recovery rates; and, finally, how well k
th-th-to default spreads in a nonsymmetric portfolio can be approximated by spreads in a symmetric portfolio.
, 2008
"... We study default contagion in large homogeneous credit portfolios. Using data from the iTraxx Europe series, two synthetic CDO portfolios are calibrated against their tranche spreads, index CDS
spreads and average CDS spreads, all with five year maturity. After the calibrations, which render perfe ..."
Cited by 3 (2 self)
Add to MetaCart
We study default contagion in large homogeneous credit portfolios. Using data from the iTraxx Europe series, two synthetic CDO portfolios are calibrated against their tranche spreads, index CDS
spreads and average CDS spreads, all with five year maturity. After the calibrations, which render perfect fits, we investigate the implied expected ordered defaults times, implied default
correlations, and implied multivariate default and survival distributions, both for ordered and unordered default times. Many of the numerical results differ substantially from the corresponding
quantities in a smaller inhomogeneous CDS portfolio. Furthermore, the studies indicate that market CDO spreads imply extreme default clustering in upper tranches. The default contagion is introduced
by letting individual intensities jump when other defaults occur, but be constant between defaults. The model is translated into a Markov jump process. Expressions for the investigated quantities are
derived by using matrix-analytic methods.
, 2006
"... We develop a structural model of credit risk in a network economy, where any firm can lend to any other firm, so that each firm is subject to counterparty risk either from direct borrowers or
from remote firms in the network. This model takes into account the role of each firm’s cash management. We ..."
Cited by 3 (0 self)
Add to MetaCart
We develop a structural model of credit risk in a network economy, where any firm can lend to any other firm, so that each firm is subject to counterparty risk either from direct borrowers or from
remote firms in the network. This model takes into account the role of each firm’s cash management. We show that we can obtain a semi-closed form formula for the price of debt and equity when cash
accounts are buffers to bankruptcy risk. As in other structural models, the strategic bankruptcy decision of shareholders drives credit spreads, and differentiates debt from equity. Cash flow risk
also causes credit risk interdependencies between firms. Our model applies to the case where not only financial flows but also operations are dependent across firms. We use queueing theory to obtain
our semi-closed form formulae in steady state. We perform a simplified implementation of our model to the US automotive industry and show how we infer the impact on a supplier’s credit spreads of
revenue changes in a manufacturer or even in a large car dealer. (Credit Risk; Contagion; Queueing Networks) 1
, 2009
"... This study develops a stress-testing framework to assess liquidity risk of banks, where liquidity and default risks can stem from the crystallisation of market risk arising from a prolonged
period of negative asset price shocks. In the framework, exogenous asset price shocks increase banks ’ liquidi ..."
Cited by 3 (0 self)
Add to MetaCart
This study develops a stress-testing framework to assess liquidity risk of banks, where liquidity and default risks can stem from the crystallisation of market risk arising from a prolonged period of
negative asset price shocks. In the framework, exogenous asset price shocks increase banks ’ liquidity risk through three channels. First, severe mark-to-market losses on the banks ’ assets increase
banks ’ default risk and thus induce significant deposits outflows. Secondly, the ability to generate liquidity from asset sales continues to evaporate due to the shocks. Thirdly, banks are exposed
to contingent liquidity risk, as the likelihood of drawdowns on their irrevocable commitments increases in such stressful financial environments. In the framework, the linkage between market and
default risks of banks is implemented using a Merton-type model, while the linkage between default risk and deposit outflows is estimated econometrically. Contagion risk is also incorporated through
banks ’ linkage in the interbank and capital markets. Using the Monte Carlo method, the framework quantifies liquidity risk of individual banks by estimating the expected cash-shortage time and the
expected default time. Based on publicly available data as at the end of 2007, the framework is applied to a group of banks in Hong Kong. The simulation results suggest that liquidity risk of the
banks would be contained in the face of a prolonged period of asset price shocks. However, some banks would be vulnerable when such shocks coincide with interest rate hikes due to monetary
tightening. Such tightening is, however, relatively unlikely in a context of such shocks. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3772471","timestamp":"2014-04-21T07:57:41Z","content_type":null,"content_length":"42272","record_id":"<urn:uuid:3531a422-3c2d-4203-b64d-383adb89766e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Online Math Homework Help
Discount and market price are two important concepts in mathematics. They are also studied in finance and economics and play an important role in real-life market scenarios. Let's try to understand both concepts in this post.
Market price is the price written on the price tag of a product. It is the amount that one needs to pay to buy the product. At times, customers are allowed to buy the product at a lower price
than the market price. For example: the market price of a cot mobile for kids was Rs. 50. Maria was allowed to buy the same for Rs. 45. Therefore, Rs. 5 is the discount offered to Maria while
buying the cot mobile for kids. The mathematical formula for finding the discount is Discount = Market Price – Selling Price (MP – SP).
Example 1: The market price of an action figure toy is Rs. 199. It is available in online shops for Rs. 190. What is the discount offered?
Discount = Market Price – Selling Price
Discount = 199 – 190
= Rs. 9 is the discount offered on the action figure toy in online shops.
Example 2: Mary bought kids' toy guns at Rs. 50 and sold them for Rs. 40. What is the discount offered?
Discount = Market Price – Selling Price
Discount = 50 – 40
= Rs. 10 is the discount offered on kids' toy guns by Mary.
Discount Percentage
The discount percentage is calculated on market price. The mathematical formula to find out discount percentage is [Discount / MP] X 100.
Example: Arun bought a mobile phone for Rs. 10000 and sold it at Rs. 9500. Find the discount and discount percentage:
Discount = Market Price – Selling Price
Discount = 10000 – 9500 = 500
Discount Percentage = [Discount / MP] X 100
= [500 / 10000] x 100
= 5%
The discount offered by Arun is 5%.
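For readers who like to check such computations programmatically, here is a minimal C++ sketch of the two formulas; the variable names are my own.

#include <cstdio>
int main()
{
    double marketPrice  = 10000.0;   // Arun's example
    double sellingPrice = 9500.0;
    double discount     = marketPrice - sellingPrice;      // MP - SP
    double discountPct  = discount / marketPrice * 100.0;  // always computed on MP
    printf("Discount = Rs. %.0f, Discount%% = %.1f%%\n", discount, discountPct);
    return 0;
}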
These are the basics on discount and discount percentages. | {"url":"http://onlinemathhomeworkhelp.blogspot.com/2013/01/discount-and-discount-percentage.html","timestamp":"2014-04-17T03:48:25Z","content_type":null,"content_length":"89175","record_id":"<urn:uuid:fe312ac7-1366-4c0d-9f88-c2f648b23c51>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eliminate 't'
The term "hyperbolic" coems from the fact that if we define the Cartesian coordinates of a point to be $x=\cosh t$ and $y=\sinh t$, where $t$ is so-called parametric variable, then elimination of
$t$ leads to the equation of $x^2-y^2=1$, the equation of an hyperbola.
I know of the identity $\cosh^2 t - \sinh^2 t = 1$, which directly leads to the hyperbola equation, but this isn't what is expected here.
So, how do you "eliminate $t$"?
This surely isn't meant by "eliminate $t$"? Or is it?
I believe that you are supposed to use the defining equations for cosh(t) and sinh(t), the ones in terms of the expontials e^t and e^(-t).
Start with (cosh t)^2-(sinh t)^2, and substitute the exponential definitions for these expression. Multiply out, simplify, and then you will get 1. This says that these functions satisfy the
equation for the hyperbola.
I agree that your method works; however, in my opinion it has two disadvantages:
1. The approach is more difficult than it needs to be, and
2. You need to know the answer in advance. Extremely unlikely in an exam situation.
Posts #2 and #4 say exactly how to do it - quick and simple.
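(An editorial postscript, carrying the direct substitution through: with $x = \cosh t = \frac{e^t + e^{-t}}{2}$ and $y = \sinh t = \frac{e^t - e^{-t}}{2}$, we have $x + y = e^t$ and $x - y = e^{-t}$. Multiplying, $x^2 - y^2 = (x+y)(x-y) = e^t e^{-t} = 1$, and $t$ has been eliminated.)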
Rat Population
Date: 03/14/97 at 19:47:23
From: Adam Hilton
Subject: Estimating rat population
Two rats, one male and one female, live on an island and mate on
January 1. The number of young produced in every litter is 6, and 3
of those 6 are females. The original female gives birth to 6 young on
Jan. 1, and produces another litter of six 40 days later and every 40
days thereafter as long as she lives.
Each female born on the island will produce her first litter 120 days
after her birth, and then produce a new litter every 40 days
thereafter. The rats are on an island with no natural enemies and
plenty of food. Therefore, in this first year, there are only births
so no rats die.
What will be the total number of rats by the next January 1, including
the original pair?
My mom and I tried to plot out the generations, using a calendar,
labeling the generations alphabetically. We came up with 3,590. I
have to do a report showing how I did this, start to finish, with
graphs or anything else I used. The teacher doesn't really care if I
get the answer right, she wants to see the math I have used.
Adam Hilton
Date: 03/14/97 at 23:13:30
From: Doctor Steven
Subject: Re: Estimating rat population
This is a problem in recursion, which is a complicated way of saying
that the number of rats at a point in time depends on the number of
rats at a previous point in time.
We start with 8 rats, so we'll call the number of rats at our first
point in time N1. N1 = 8. (We have 8 because the first pair had a
litter on Jan 1st.)
Forty days later, we'll have another 6 rats. So N2 = 14. Forty days
after this we'll have another 6 rats, so N3 = 20. Another forty days
later, our first litter of rats will now be mature so they will be
contributing to the rat population. Now for each PAIR of mature rats
we add six, so for EACH rat we contribute only 3. This means that
N4 = N3 + 6*N(4-3) (the number of rats in the previous time period
plus 3 times the number of rats three time periods ago).
In general, for a number greater than 4 (denoted by R), we have:
NR = N(R-1) + 3*N(R-3).
Now the problem is to find how many time periods we go through
before the next Jan 1st. Well, there are 365 days in a year so we
divide this by 40 to get the number of time periods we go through.
So 365/40 = 9.125. We only want whole time periods so the last
increase in rats before Jan 1st next year is the ninth increase, or
the 10th point in time. So the problem is asking us to find N10:
N1 = 8
N2 = 14
N3 = 20
N4 = N3 + 3*N1 = 20 + 3*8 = 20 + 24 = 44
N5 = N4 + 3*N2 = 44 + 3*14 = 44 + 42 = 86
N10 = N9 + 3*N7 = ....
The result for N10 is what your teacher wants. The first three N's
(N1, N2, N3) are called the initial conditions, because they are not
found using the formula given for N's greater than 3 (N4, N5, ...).
Hope this helps.
-Doctor Steven, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/56901.html","timestamp":"2014-04-18T21:34:37Z","content_type":null,"content_length":"7921","record_id":"<urn:uuid:903b33f5-0a5d-4d8e-acf8-d4599da0f34c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
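(An editorial aside: the recurrence is easy to evaluate by machine. The C++ sketch below, added for reference rather than taken from the original answer, prints N1 through N10; under this model it gives N10 = 1808, so a hand count of 3,590 suggests a slip somewhere in the calendar plot.)

#include <cstdio>
#include <vector>
int main()
{
    // N[1], N[2], N[3] are the initial conditions; N[0] is unused padding.
    std::vector<long long> N;
    N.push_back(0); N.push_back(8); N.push_back(14); N.push_back(20);
    for (int r = 4; r <= 10; ++r)
        N.push_back(N[r-1] + 3*N[r-3]);  // NR = N(R-1) + 3*N(R-3)
    for (int r = 1; r <= 10; ++r)
        printf("N%d = %lld\n", r, N[r]);
    return 0;
}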
The n-Category Café
May 31, 2009
The Mathematics of Music at Chicago
Posted by John Baez
As a card-carrying Pythagorean, I’m fascinated by the mathematics of music… even though I’ve never studied it very deeply. So, my fascination was piqued when I learned a bit of ‘neo-Riemannian theory
’ from Tom Fiore, a topology postdoc who works on double categories at the University of Chicago.
Neo-Riemannian theory is not an updated version of Riemannian geometry… it goes back to the work of the musicologist Hugo Riemann. The basic idea is that it’s fun to consider things like the
24-element group generated by transpositions (music jargon for what mathematicians call translations in $\mathbb{Z}/12$) and inversion (music jargon for negation in $\mathbb{Z}/12$). And then it’s
fun to study operations on triads that commute with transposition and inversion. These operations are generated by three musically significant ones called P, L, and R. Even better, these operations
form a 24-element group in their own right! I explained why in week234 of This Week’s Finds. For more details try this:
Alissa S. Crans, Thomas M. Fiore and Ramon Satyendra, Musical actions of dihedral groups.
Yes, that's my student Alissa Crans, of Lie 2-algebra fame!
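For concreteness, here is a tiny C++ sketch of my own (not from the post or the paper): triads are encoded as a root pitch class mod 12 plus a quality, and P, L and R act as the involutions described above.

#include <cstdio>

// A triad is a root pitch class (0 = C, 1 = C#, ..., 11 = B) plus a quality.
struct Triad { int root; bool major; };

int mod12(int n) { return ((n % 12) + 12) % 12; }

// The three neo-Riemannian operations; each one is an involution.
Triad P(Triad t) { return Triad{ t.root, !t.major }; }                             // C major <-> C minor
Triad L(Triad t) { return Triad{ mod12(t.root + (t.major ? 4 : 8)), !t.major }; }  // C major <-> E minor
Triad R(Triad t) { return Triad{ mod12(t.root + (t.major ? 9 : 3)), !t.major }; }  // C major <-> A minor

int main()
{
    Triad c = {0, true};  // C major
    Triad a = R(c);       // relative: A minor (root 9)
    Triad b = R(a);       // applying R again returns C major
    printf("R(C major) = root %d %s; R twice = root %d %s\n",
           a.root, a.major ? "major" : "minor",
           b.root, b.major ? "major" : "minor");
    return 0;
}

Composites of P, L and R then generate the 24-element group acting on the 24 major and minor triads.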
Posted at 8:58 PM UTC |
Followups (18)
May 28, 2009
Quantum Gravity and Quantum Geometry in Corfu
Posted by John Baez
This September there will be a physics ‘summer school’ covering loop quantum gravity, spin networks, renormalization and higher gauge theory:
I look forward to seeing my quantum gravity friends Abhay Ashtekar, John Barrett and Carlo Rovelli again — it’s been a while. It’s sad how changing one’s research focus can mean you don’t see friends
you used to meet automatically at conferences.
I’m also eager to meet Vincent Rivasseau, who is a real expert on renormalization and constructive quantum field theory! His book From Perturbative to Constructive Renormalization is very impressive.
I had a brief and unsuccessful fling with constructive quantum field theory as a grad student, so it’ll be nice (but a bit scary) to meet someone who’s made real progress in this tough subject.
Posted at 4:22 PM UTC |
Followups (5)
Metric Coinduction
Posted by David Corfield
Dexter Kozen and Nicholas Ruozzi have a paper Applications of Metric Coinduction which begins
Mathematical induction is firmly entrenched as a fundamental and ubiquitous proof principle for proving properties of inductively defined objects. Mathematics and computer science abound with
such objects, and mathematical induction is certainly one of the most important tools, if not the most important, at our disposal.
Perhaps less well entrenched is the notion of coinduction. Despite recent interest, coinduction is still not fully established in our collective mathematical consciousness. A contributing factor
is that coinduction is often presented in a relatively restricted form. Coinduction is often considered synonymous with bisimulation and is used to establish equality or other relations on
infinite data objects such as streams or recursive types.
In reality, coinduction is far more general. For example, it has recently been observed that coinductive reasoning can be used to avoid complicated $\epsilon-\delta$ arguments involving the
limiting behavior of a stochastic process, replacing them with simpler algebraic arguments that establish a coinduction hypothesis as an invariant of the process, then automatically deriving the
property in the limit by application of a coinduction principle. The notion of bisimulation is a special case of this: establishing that a certain relation is a bisimulation is tantamount to
showing that a certain coinduction hypothesis is an invariant of some process.
Posted at 9:34 AM UTC |
Followups (9)
May 26, 2009
Alm on Quantization as a Kan Extension
Posted by Urs Schreiber
Recently I was contacted by Johan Alm, a beginning PhD student at Stockholm University, Sweden, with Prof. Merkulov.
He wrote that he had thought about formalizing and proving aspects of the idea that appeared as The $n$-Café Quantum Conjecture about the nature of [[path integral quantization]].
After a bit of discussion of his work, we thought it would be nice to post some of his notes here:
Johan Alm, Quantization as a Kan extension (lab)
$n$Café-regulars may be pleased to meet some old friends in there, such as the [[Leinster measure]] starring in its role as a canonical path integral measure.
Posted at 7:21 PM UTC |
Followups (45)
May 24, 2009
Elsevier Journal Prices
Posted by John Baez
Do you have data about Elsevier’s journal prices compared to other journals? If so, let me know! Before we launch the revolution, we need to get our facts straight.
My friend the physicist Ted Jacobson wants such data. With the help of a librarian, he has compared the prices of Elsevier’s physics journals to other physics journals subscribed to by his
Posted at 9:47 PM UTC |
Followups (10)
May 22, 2009
Charles Wells’ Blog
Posted by John Baez
Charles Wells is perhaps most famous for this book on topoi, monads and the category-theoretic formulation of universal algebra using things like ‘algebraic theories’ and ‘sketches’:
It’s free online! Snag a copy and learn some cool stuff. But I’ll warn you — it’s a fairly demanding tome.
Luckily, Charles Wells now has a blog! And I’d like to draw your attention to two entries: one on sketches, and one on the evil influence of the widespread attitude that ‘the philosophy of math is
the philosophy of logic’.
Posted at 5:43 PM UTC |
Followups (12)
May 19, 2009
Where is the Philosophy of Physics?
Posted by David Corfield
As the subtitle of this blog says, we run ‘A group blog on math, physics and philosophy’. To what extent, though, do we cover all the interfaces of this triad? Well, we do some philosophy of
mathematics here, and we certainly do some mathematical physics. But the question I’ve been wondering about recently is whether we should be doing more philosophy of physics.
If we followed the position that physics is the search for more and more adequate mathematical structures to describe the world, perhaps we needn’t take the philosophy of physics to be anything more
than a philosophy of mathematics along with an account of how the structures which are most promising for physics are chosen. But this view of physics would be controversial.
Posted at 2:01 PM UTC |
Followups (45)
TFT at Northwestern
Posted by Urs Schreiber
Quite unfortunately I couldn’t make it to this event that started yesterday:
Topological Field Theories at Northwestern University
Workshop: May 18-22, 2009
Conference: May 25-29, 2009
(website, Titles and abstracts)
An impressive concentration of extended TFT expertise.
But with a little luck $n$Café regulars who are there will provide the regrettable rest of us with reports about the highlights and other lights.
In fact, Alex Hoffnung already sent me typed notes that he had taken in talks! That’s really nice of him. I am starting to collect this and other material at
Posted at 1:57 PM UTC |
Followups (16)
May 18, 2009
A Prehistory of n-Categorical Physics
Posted by John Baez
I’m valiantly struggling to finish this paper:
Perhaps blogging about it will help…
Posted at 8:58 PM UTC |
Followups (68)
Higher Structures in Göttingen III
Posted by John Baez
Göttingen was famous as a center of mathematics during the days of Gauss, Riemann, Dirichlet, Klein, Minkowski, Hilbert, Weyl and Courant. One of the founders of category theory, Saunders Mac Lane,
studied there! He wrote:
In 1931, after graduating from Yale and spending a vaguely disappointing year of graduate study at Chicago, I was searching for a really first-class mathematics department which would also
include mathematical logic. I found both in Göttingen.
It’s worth reading Mac Lane’s story of how the Nazis eviscerated this noble institution.
But now, thanks to the Courant Research Centre on Higher-Order Structures, Göttingen is gaining fame as a center of research on higher structures (like $n$-categories and $n$-stacks) and their
applications to geometry, topology and physics! They’re having another workshop soon:
Posted at 5:41 PM UTC |
Followups (2)
Journal Club – Geometric Infinity-Function Theory – Week 4
Posted by Urs Schreiber
In our journal club on [[geometric $\infty$-function theory]] this week Chris Brav talks about chapter 4 of Integral Transforms:
Tensor products and integral transforms.
This is about tensoring and pull-pushing $(\infty,1)$-categories of quasi-coherent sheaves on perfect stacks.
Luckily, Chris has added his nice discussion right into the wiki entry, so that we could already work a bit on things like further links, etc. together. Please see section 4 here.
Discussion on previous weeks can be found here:
week 1: Alex Hoffnung on Introduction
week 2: myself on Preliminaries
week 3: Bruce Bartlett on Perfect stacks
Posted at 7:08 AM UTC |
Followups (6)
May 15, 2009
The Relevance of Predicativity
Posted by David Corfield
If I get around to writing a second book in philosophy of mathematics, one thing I’ll probably need to retract is the ill-advised claim made in the first book that the notion of predicativity is
irrelevant to mainstream mathematics.
Here’s a passage which goes directly against such a thought, from Nik Weaver’s Is set theory indispensable?
Posted at 4:53 PM UTC |
Followups (12)
May 11, 2009
Journal Club – Geometric Infinity-Function Theory – Week 3
Posted by Urs Schreiber
This week in our Journal Club on [[geometric $\infty$-function theory]] Bruce Bartlett talks about section 3 of “Integral Transforms”: perfect stacks.
So far we had
Week 1: Alex Hoffnung on Introduction
Week 2: myself on Preliminaries
See here for our further schedule. We are still looking for volunteers who’d like to chat about section 5 and 6.
Posted at 11:04 PM UTC |
Followups (31)
May 9, 2009
Smooth Structures in Ottawa II
Posted by John Baez
guest post by Alex Hoffnung
Hi everyone,
I am going to even further neglect my duties to the journal club and take a moment to report on the Fields Workshop on Smooth Structures in Logic, Category Theory and Physics which took place this
past weekend at the University of Ottawa. The organizers put together a great series of talks giving an overview of the past and current trends and applications in smooth structures. I should right
away try to put the idea of smooth structures in some context. Further, I should warn you that I may do this with some amount of bias.
Posted at 7:57 PM UTC |
Followups (50)
May 8, 2009
In Search of Terminal Coalgebras
Posted by David Corfield
Tom Leinster has put up the slides for his joint talk – Terminal coalgebras via modules – with Apostolos Matzaris at PSSL 88.
It’s all about establishing the existence of, and constructing, terminal coalgebras in certain situations. I realise, though, looking through the slides that I never fully got on top of the flatness
idea, and nLab is a little reluctant to help at the moment (except for flat module).
So perhaps someone could help me understand the scope of the result, maybe via an example. Say I take the polynomial endofunctor
$\Phi(X) = 1 + X + X^2.$
Given that terminal coalgebras can be said to have cardinality $i$, in which categories will I find such a thing?
Posted at 10:07 AM UTC |
Followups (20)
May 7, 2009
Odd Currency Puzzle
Posted by John Baez
Sorry to be posting so much light, frothy stuff lately — but since it’s an odd day, I can’t resist another puzzle.
What’s the oddest currency ever used in America?
Of course this is a subjective question, so I’d be interested to hear your opinion…
Posted at 7:34 PM UTC |
Followups (49)
May 6, 2009
nLab - More General Discussion
Posted by David Corfield
With the previous thread on nLab reaching 343 comments, it’s probably time for a new one.
Let me begin discussions by asking whether it is settled that distributor be the term preferred over profunctor. I ask since it would be good to have an entry on the 2-category of small categories,
profunctors and natural transformations. Should it be $Dist$ or $Prof$?
Posted at 9:06 AM UTC |
Followups (94)
May 5, 2009
Posted by David Corfield
If we were to have a page at nLab on things to be categorified should it be titled categorifAcienda, categorifIcienda or something else?
My suggestions are based on the gerundives formed from verbs such as agenda and Miranda. Concerning verbs more closely resembling ‘categorify’ we have
• Satisfacio (satisfy) - satisfaciendus
• Efficio (bring to pass) - efficiendus
Unfortunately, categorify is a hybrid word, with Greek stem and Latin suffix. I suppose categorize was out of the question.
Posted at 4:26 PM UTC |
Followups (14)
Journal Club – Geometric Infinity-Function Theory – Week 2
Posted by Urs Schreiber
May 4, 2009
The Foibles of Science Publishing
Posted by John Baez
The latest news about Elsevier journals and Scientific American.
Posted at 1:17 AM UTC |
Followups (15)
Rocket Physics
Picture of Saturn V launch for the Apollo 15 mission.
Rocket physics is basically the application of Newton's laws to a system with variable mass. A rocket has variable mass because its mass decreases over time, as a result of its fuel (propellant) burning off.
A rocket obtains thrust by the principle of action and reaction (Newton's third law). As the rocket propellant ignites, it experiences a very large acceleration and exits the back of the rocket (as
exhaust) at a very high velocity. This backwards acceleration of the exhaust exerts a "push" force on the rocket in the opposite direction, causing the rocket to accelerate forward. This is the
essential principle behind the physics of rockets, and how rockets work.
The equations of motion of a rocket will be derived next.
Rocket Physics — Equations Of Motion
To find the equations of motion, apply the principle of impulse and momentum to the "system", consisting of rocket and exhaust. In this analysis of the rocket physics we will use Calculus to set up
the governing equations. For simplicity, we will assume the rocket is moving in a vacuum, with no gravity, and no air resistance (drag).
To properly analyze the rocket physics, consider the figure below, which shows a schematic of a rocket moving in the vertical direction. The two stages, (1) and (2), show the "state" of the system at time $t$ and time $t + dt$, where $dt$ is a very small (infinitesimal) time step. The system (consisting of rocket and exhaust) is shown as inside the dashed line.
$m$ is the mass of the rocket (including propellant), at stage (1)
$m_e$ is the total mass of the rocket exhaust (that has already exited the rocket), at stage (1)
$v$ is the velocity of the rocket, at stage (1)
$P_e$ is the linear momentum of the rocket exhaust (that has already exited the rocket), at stage (1). This remains constant between (1) and (2)
$dm_e$ is the mass of rocket propellant that has exited the rocket (in the form of exhaust), between (1) and (2)
$dv$ is the change in velocity of the rocket, between (1) and (2)
$v_e$ is the velocity of the exhaust exiting the rocket, at stage (2)
Note that all velocities are measured with respect to ground (an inertial reference frame).
The sign convention in the vertical direction is as follows: "up" is positive and "down" is negative.
Between (1) and (2), the change in linear momentum in the vertical direction of all the particles in the system is due to the sum of the external forces in the vertical direction acting on all the particles in the system.
We can express this mathematically using Calculus and the principle of impulse and momentum:

$$\sum F \, dt \;=\; \big[(m - dm_e)(v + dv) + dm_e\,v_e + P_e\big] - \big[m v + P_e\big]$$

where $\sum F$ is the sum of the external forces in the vertical direction acting on all the particles in the system (consisting of rocket and exhaust).
Expand the above expression. In the limit as $dt \to 0$ we may neglect the "second-order" term $dm_e\,dv$. Divide by $dt$ and simplify. We get

$$\sum F \;=\; m\frac{dv}{dt} - \frac{dm_e}{dt}(v - v_e) \qquad (1)$$
Since the rocket is moving in a vacuum, with no gravity, and no air resistance (drag), then $\sum F = 0$ since no external forces are acting on the system. As a result, the above equation becomes

$$m\frac{dv}{dt} \;=\; \frac{dm_e}{dt}(v - v_e) \qquad (2)$$
The left side of this equation must represent the thrust acting on the rocket, since $dv/dt$ is the acceleration of the rocket, and $\sum F = ma$ (Newton's second law).
Therefore, the thrust $T$ acting on the rocket is equal to

$$T \;=\; \frac{dm_e}{dt}(v - v_e) \qquad (3)$$
The term $(v - v_e)$ is the velocity of the exhaust gases relative to the rocket. This is approximately constant in rockets. The term $dm_e/dt$ is the burn rate of rocket propellant.
As the rocket loses mass due to the burning of propellant, its acceleration increases (for a given thrust $T$). The maximum acceleration is therefore just before all the propellant burns off.
From equation (2),

$$m\,\frac{dv}{dt} \;=\; \frac{dm_e}{dt}(v - v_e)$$

which becomes

$$m\,dv \;=\; dm_e\,(v - v_e)$$

The mass of the ejected rocket exhaust equals the negative of the mass change of the rocket. Thus, $dm_e = -dm$, and

$$m\,dv \;=\; -dm\,(v - v_e)$$

Again, the term $(v - v_e)$ is the velocity of the exhaust gases relative to the rocket, which is approximately constant. For simplicity set $u \equiv v - v_e$, so that

$$dv \;=\; -u\,\frac{dm}{m}$$

Integrate the above equation using Calculus. We get

$$v \;=\; v_0 + u\ln\!\left(\frac{m_0}{m}\right) \qquad (4)$$
This is a very useful equation coming out of the rocket physics analysis, shown above. The variables are defined as follows: $v_0$ is the initial rocket velocity and $m_0$ is the initial rocket mass. The terms $v$ and $m$ are the velocity of the rocket and its mass at any point in time thereafter (respectively). Note that the change in velocity ($\Delta v$) is always the same no matter what the initial velocity $v_0$ is. This is a very useful result coming out of the rocket physics analysis. The fact that $\Delta v$ is constant is useful for those instances where powered gravity assist is used to increase the speed of a rocket. By increasing the speed of the rocket at the point when its speed reaches a maximum (during the periapsis stage), the final kinetic energy of the rocket is maximized. This in turn maximizes the final velocity of the rocket. To visualize how this works, imagine dropping a ball onto a floor. We wish to increase the velocity of the ball by a constant amount $\Delta v$ at some point during its fall, such that it rebounds off the floor with the maximum possible velocity. It turns out that the point at which to increase the velocity is just before the ball strikes the floor. This maximizes the total energy of the ball (gravitational plus kinetic), which enables it to rebound off the floor with the maximum velocity (and the maximum kinetic energy). This maximum ball velocity is analogous to the maximum possible velocity reached by the rocket at the completion of the gravity assist maneuver. This is known as the Oberth Effect. It is a very useful principle related to rocket physics.
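As a quick numerical illustration of equation (4), here is a minimal Python sketch. All of the mass and exhaust-velocity figures below are made-up example values, not data for any particular rocket:

import math

def delta_v(u, m0, m):
    """Ideal rocket equation: change in velocity for relative exhaust
    speed u as the rocket's mass drops from m0 to m."""
    return u * math.log(m0 / m)

u = 3000.0    # relative exhaust velocity, m/s (example value)
m0 = 10000.0  # initial mass, kg (example value)
m = 4000.0    # final mass after the burn, kg (example value)

# The result is the same regardless of the rocket's initial velocity.
print(delta_v(u, m0, m))  # ~2749 m/s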
In the following discussion on rocket physics we will look at staging and how it can also be used to obtain greater rocket velocity.
Rocket Physics — Staging
Often times in a space mission, a single rocket cannot carry enough propellant (along with the required tankage, structure, valves, engines, and so on) to achieve the necessary mass ratio to reach the desired final orbital velocity ($\Delta v$). This presents a unique challenge in rocket physics. The only way to overcome this difficulty is by "shedding" unnecessary mass once a certain amount of propellant is burned off. This is accomplished by using different rocket stages and ejecting spent fuel tanks plus associated rocket engines used in those stages, once those stages are completed.
The figure below illustrates how staging works.
The picture below shows the separation stage of the Saturn V rocket for the Apollo 11 mission.
The picture below shows the separation stage of the twin rocket boosters of the Space Shuttle.
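To make the benefit of staging concrete, here is a minimal Python sketch comparing a single-stage and a two-stage vehicle using equation (4). The mass breakdown is hypothetical and chosen only so that both vehicles burn the same total propellant with the same engine:

import math

u = 3000.0  # relative exhaust velocity, m/s (same engine assumed throughout)

# Single stage: 100 t total, 10 t structure + payload, 90 t propellant.
dv_single = u * math.log(100.0 / 10.0)

# Two stages with the same overall mass split, but the first stage's
# empty tankage (5 t) is dropped before the second stage burns.
# Stage 1: 100 t -> 55 t (burns 45 t), then drop 5 t of structure.
# Stage 2: 50 t -> 5 t (burns 45 t).
dv_two = u * math.log(100.0 / 55.0) + u * math.log(50.0 / 5.0)

print(dv_single)  # ~6908 m/s
print(dv_two)     # ~8701 m/s -- shedding dead mass buys extra delta-v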
In the following discussion on rocket physics we will look at rocket efficiency and how it relates to modern (non-rocket powered) aircraft.
Rocket Physics — Efficiency
Rockets can accelerate even when the exhaust relative velocity is moving slower than the rocket (meaning $v_e$ is in the same direction as the rocket velocity). This differs from propeller engines or air breathing jet engines, which have a limiting speed equal to the speed at which the engine can move the air
(while in a stationary position). So when the relative exhaust air velocity is equal to the engine/plane velocity, there is zero thrust. This essentially means that the air is passing through the
engine without accelerating, therefore no push force (thrust) is possible. Thus, rocket physics is fundamentally different from the physics in propeller engines and jet engines.
Rockets can convert most of the chemical energy of the propellant into mechanical energy (as much as 70%). This is the energy that is converted into motion, of both the rocket and the propellant/
exhaust. The rest of the chemical energy of the propellant is lost as waste heat. Rockets are designed to lose as little waste heat as possible. This is accomplished by having the exhaust leave the
rocket nozzle at as low a temperature as possible. This maximizes the Carnot efficiency, which maximizes the mechanical energy derived from the propellant (and minimizes the thermal energy lost as
waste heat). Carnot efficiency applies to rockets because rocket engines are a type of heat engine, converting some of the initial heat energy of the propellant into mechanical work (and losing the
remainder as waste heat). Thus, rocket physics is related to heat engine physics. For more information see Wikipedia's article on heat engines.
In the following discussion on rocket physics we will derive the equation of motion for rocket flight in the presence of air resistance (drag) and gravity, such as for rockets flying near the earth's surface.
Rocket Physics — Flight Near Earth's Surface
The rocket physics analysis in this section is similar to the previous one, but we are now including the effect of air resistance (drag) and gravity, which is a necessary inclusion for flight near
the earth's surface.
For example, let's consider a rocket moving straight upward against an atmospheric drag force $F_D$ and against gravity $g$ (equal to 9.8 m/s$^2$ on earth). The figure below illustrates this.
$G$ is the center of mass of the rocket at the instant shown. The weight of the rocket acts through this point.
From equations (1) and (3), and using the fact that acceleration $a = dv/dt$, we can write the following general equation:

$$\sum F \;=\; ma - T$$

The sum of the external forces acting on the rocket is the gravity force plus the drag force. Thus, from the above equation,

$$-mg - F_D \;=\; ma - T$$

As a result,

$$ma \;=\; T - mg - F_D$$
This is the general equation of motion resulting from the rocket physics analysis, accounting for the presence of air resistance (drag) and gravity. Looking closely at this equation, you can see that it is an application of $\sum F = ma$ (Newton's second law). Note that the mass $m$ of the rocket changes with time (due to propellant burning off), and $T$ is the thrust given by equation (3). An expression for the drag force $F_D$ can be found on the page on drag force.
Due to the complexity of the drag term $F_D$, the above equation must be solved numerically to determine the motion of the rocket as a function of time.
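For instance, a simple forward-Euler time-stepping scheme along the following lines can be used to integrate the equation of motion. All parameter values here are illustrative placeholders, and the air density is held constant for simplicity:

# Forward-Euler integration of m dv/dt = T - mg - F_D for vertical flight.
g = 9.8            # m/s^2
u = 2500.0         # relative exhaust velocity, m/s (placeholder)
burn_rate = 50.0   # dm_e/dt, kg/s (placeholder)
m = 12000.0        # initial rocket mass, kg (placeholder)
m_dry = 4000.0     # mass with all propellant burned, kg (placeholder)
Cd, rho, A = 0.5, 1.2, 1.0  # drag coefficient, air density (kg/m^3), area (m^2)

v, y, t, dt = 0.0, 0.0, 0.0, 0.01
while m > m_dry:
    T = u * burn_rate                      # thrust, equation (3)
    F_D = 0.5 * Cd * rho * A * v * abs(v)  # drag always opposes the motion
    a = (T - m * g - F_D) / m
    v += a * dt
    y += v * dt
    m -= burn_rate * dt
    t += dt

print(t, v, y)  # burn time, velocity and altitude at burnout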
Another important analysis in the study of rocket physics is designing a rocket that experiences minimal atmospheric drag. At high velocities, air resistance is significant. So for purposes of energy
efficiency, it is necessary to minimize the atmospheric drag experienced by the rocket, since energy used to overcome drag is energy that is wasted. To minimize drag, rockets are made as aerodynamic
as possible, such as with a pointed nose to better cut through the air, as well as using stabilizing fins at the rear of the rocket (near the exhaust), to help maintain steady orientation during flight.
In the following discussion on rocket physics we will take a closer look at thrust.
Rocket Physics — A Closer Look At Thrust
The thrust given in equation (3) is valid for an optimal nozzle expansion. This assumes that the exhaust gas flows in an ideal manner through the rocket nozzle. But this is not necessarily true in
real-life operation. Therefore, in the following rocket physics analysis we will develop a thrust equation for non-optimal flow of the exhaust gas through the rocket nozzle.
To set up the analysis consider the basic schematic of a rocket engine, shown below.
$p$ is the internal pressure inside the rocket engine (which may vary with location)
$p_a$ is the ambient pressure outside the rocket engine (assumed constant)
$p_e$ is the pressure at the exit plane of the rocket engine nozzle (this is taken as an average pressure along this plane)
$A_e$ is the cross-sectional area of the opening at the nozzle exit plane
The arrows along the top and sides represent the pressure acting on the wall of the rocket engine (inside and outside). The arrows along the bottom represent the pressure acting on the exhaust gas,
at the exit plane.
Gravity and air resistance are ignored in this analysis (their effect can be included separately, as shown in the previous section).
Next, isolate the propellant (plus exhaust) inside the rocket engine. It is useful to do this because it allows us to fully account for the contact force between rocket engine wall and propellant
(plus exhaust). This contact force can then be related to the thrust experienced by the rocket, as will be shown. The schematic below shows the isolated propellant (plus exhaust). The dashed blue
line (along the top and sides) represents the contact interface between the inside wall of the rocket engine and the propellant (plus exhaust). The dashed black line (along the bottom) represents the
exit plane of the exhaust gas, upon which the pressure $p_e$ acts.
$F_w$ is the resultant downward force exerted on the propellant (plus exhaust) due to contact with the inside wall of the rocket engine (represented by the dashed blue line). This force is calculated by:
(1) Multiplying the local pressure $p$ at a point on the inside wall by a differential area on the wall, (2) Using Calculus, integrating over the entire inside wall surface to find the resultant force, and (3) Determining the vertical component of this force ($F_w$). The details of this calculation are not shown here.
(Note that, due to geometric symmetry of the rocket engine, the resultant force acts in the vertical direction, and there is no sideways component).
Now, sum all the forces acting on the propellant (plus exhaust) and then apply Newton's second law:

$$p_e A_e - F_w \;=\; m_p a - u\,\frac{dm_e}{dt} \qquad (5)$$

where:
$m_p$ is the mass of the propellant/exhaust inside the rocket engine
$a$ is the acceleration of the rocket engine
$u$ is the velocity of the exhaust gases relative to the rocket, which is approximately constant
$dm_e/dt$ is the burn rate of the propellant
The right side of the above equation is derived using the same method that was used for deriving equation (1). This is not shown here.
(Note that we are defining "up" as positive and "down" as negative).
Next, isolate the rocket engine, as shown in the schematic below.
$F_R$ is the force exerted on the rocket engine by the rocket body.
Now, sum all the forces acting on the rocket engine and then apply Newton's second law:

$$F_R + F_w - p_a A_e \;=\; m_E a$$

where $m_E$ is the mass of the rocket engine. The term $-p_a A_e$ is the force exerted on the rocket engine due to the ambient pressure acting on the outside of the engine.
Now, by Newton's third law the reaction force $-F_R$ acting on the rocket body is pointing up (positive). Therefore, by Newton's second law we can write

$$-F_R \;=\; m_B a$$

where $m_B$ is the mass of the rocket body (which excludes the mass of the rocket engine and the mass of propellant/exhaust inside the rocket engine).
Combine the above two equations and we get

$$F_w - p_a A_e \;=\; (m_B + m_E)\,a \qquad (6)$$
Combine equations (5) and (6) and we get

$$u\,\frac{dm_e}{dt} + (p_e - p_a)A_e \;=\; (m_B + m_E + m_p)\,a$$

where the term $(m_B + m_E + m_p)$ is the total mass of the rocket at any point in time. Therefore the left side of the equation must be the thrust acting on the rocket (since $F = ma$, by Newton's second law).
Thus, the thrust $T$ is

$$T \;=\; u\,\frac{dm_e}{dt} + (p_e - p_a)A_e \qquad (7)$$

This is the most general equation for thrust coming out of the rocket physics analysis, shown above. The first term on the right is the momentum thrust term, and the last term on the right is the pressure thrust term due to the difference between the nozzle exit pressure and the ambient pressure. In deriving equation (3) we assumed that $p_e = p_a$, which means that the pressure thrust term is zero. This is true if there is optimal nozzle expansion and therefore maximum thrust in the rocket nozzle. However, the pressure thrust term is generally small relative to the momentum thrust term.
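To get a feel for the relative sizes of the two terms in equation (7), here is a rough Python comparison. The figures are order-of-magnitude placeholders, not data for any real engine:

# Rough comparison of the two thrust terms in equation (7).
u = 3000.0         # relative exhaust velocity, m/s (placeholder)
burn_rate = 100.0  # dm_e/dt, kg/s (placeholder)
p_e = 70000.0      # nozzle exit pressure, Pa (placeholder)
p_a = 101325.0     # sea-level ambient pressure, Pa
A_e = 0.5          # nozzle exit area, m^2 (placeholder)

momentum_thrust = u * burn_rate      # 300,000 N
pressure_thrust = (p_e - p_a) * A_e  # about -15,700 N

print(momentum_thrust, pressure_thrust)
# The pressure term is a small correction (and can be negative when
# the nozzle is over-expanded, i.e. p_e < p_a).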
A subtle point regarding this derivation is that $P_e$ (the momentum of the rocket exhaust, described in the first section) remains constant. This is because the pressure force pushing on the exhaust at the rocket nozzle exit (due to the pressure $p_e$) exactly balances the pressure force pushing on the remainder of the exhaust due to the ambient pressure $p_a$. Again, we are ignoring gravity and air resistance (drag) in the derivation of equations (3) and (7). To include their effect we simply add their contribution to the thrust force, as shown in the
previous section on rocket physics.
The rocket physics analysis in this section is basically a force and momentum analysis. But to do a complete thrust analysis we would have to look at the thermal and fluid dynamics of the expansion
process, as the exhaust gas travels through the rocket nozzle. This analysis (not discussed here) enables one to optimize the engine design plus nozzle geometry such that optimal nozzle expansion is
achieved during operation (or as close to it as possible).
The flow of the exhaust gas through the nozzle falls under the category of compressible supersonic flow, and its treatment is somewhat complicated. For more information on this see Wikipedia's page on rocket engines, and the summary of the rocket thrust equations provided by NASA.
In the following discussion on rocket physics we will look at the energy consumption of a rocket moving through the air at constant velocity.
Rocket Physics — Energy Consumption For Rocket Moving At Constant Velocity
An interesting question related to rocket physics is, how much energy is used to power a rocket during its flight? One way to answer that is to consider the energy use of a rocket moving at constant
velocity, such as through the air. Now, in order for the rocket to move at constant velocity the sum of the forces acting on it must equal zero. For purposes of simplicity let's assume the rocket is
traveling horizontally against the force of air resistance (drag), and where gravity has no component in the direction of motion. The figure below illustrates this schematically.
The thrust force $T$ acting on the rocket is equal to the air drag $F_D$, so that

$$T \;=\; F_D$$
Let's say the rocket is moving at a constant horizontal velocity $v$, and it travels a horizontal distance $d$. We wish to find the amount of mechanical energy it takes to move the rocket this distance. Note that this is not the same as the total amount of chemical energy in the spent propellant, but rather
the amount of energy that was converted into motion. This amount of energy is always less than the total chemical energy in the propellant, due to naturally occurring losses, such as waste heat
generated by the rocket engine.
Thus, we must look at the energy used to push the rocket (in the forward direction) and add it to the energy it takes to push the exhaust (in the backward direction). To do this we can apply the
principle of work and energy.
For the exhaust gas:

$$W_e \;=\; \frac{1}{2}\,m_e u^2 \qquad (8)$$

where:
$m_e$ is the total mass of the exhaust ejected from the rocket over the flight distance $d$
$u$ is the velocity of the exhaust gases relative to the rocket, which is approximately constant
$W_e$ is the work required to change the kinetic energy of the rocket propellant/exhaust (ejected from the rocket) over the flight distance $d$
For the rocket:

$$W_r \;=\; T\,d \qquad (9)$$

where:
$T$ is the thrust acting on the rocket (this is constant if we assume the burn rate is constant)
$W_r$ is the work required to push the rocket a distance $d$
From equation (3) (constant burn rate), and assuming optimal nozzle expansion, we have

$$T \;=\; u\,\frac{dm_e}{dt}$$
Now, the time $t$ it takes for the rocket to travel a horizontal distance $d$ is $t = d/v$. The total mass of exhaust is then $m_e = (dm_e/dt)\,t$. Substitute this into equation (8), then combine equations (8) and (9) to find the total mechanical energy used to propel the rocket over a distance $d$. Therefore,

$$E \;=\; W_e + W_r \;=\; \frac{1}{2}\,\frac{dm_e}{dt}\,\frac{d}{v}\,u^2 + u\,\frac{dm_e}{dt}\,d$$

Now, $T = F_D$, which means we can substitute $T$ with the drag force in the above equation. The drag force is given by

$$F_D \;=\; \frac{1}{2}\,C_D\,\rho\,A\,v^2$$

where:
$C_D$ is the drag coefficient, which can vary along with the speed of the rocket. But typical values range from 0.4 to 1.0
$\rho$ is the density of the air
$A$ is the projected cross-sectional area of the rocket perpendicular to the flow direction (that is, perpendicular to $v$)
$v$ is the speed of the rocket relative to the air
Substitute the above equation into the previous equation (for $E$) and we get

$$E \;=\; \frac{1}{2}\,C_D\,\rho\,A\left(v^2 + \frac{u v}{2}\right)d$$
As you can see, the higher the velocity $v$, the greater the energy required to move the rocket over a distance $d$, even though the time it takes is less. Furthermore, rockets typically have a large relative exhaust velocity $u$, which makes the energy expenditure large, as evident in the above equation. (The relative exhaust velocity can be in the neighborhood of a few kilometers per second.)
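Plugging some illustrative numbers into the energy formula above (none of these values describe a real vehicle) shows how quickly the cost grows with speed, and how much the exhaust-velocity term contributes:

# Energy to cover distance d at constant speed v, from the formula above.
Cd, rho, A = 0.5, 1.2, 1.0  # drag coefficient, air density, frontal area (placeholders)
u = 3000.0                  # relative exhaust velocity, m/s (placeholder)
d = 10000.0                 # distance, m

for v in (100.0, 300.0, 1000.0):
    E = 0.5 * Cd * rho * A * (v**2 + u * v / 2.0) * d
    print(v, E)
# The u*v/2 term dominates at these speeds: the high exhaust velocity,
# not just drag, is what makes rocket cruise so energy-hungry.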
This tells us that rockets are inefficient for earth-bound travel, due to the effects of air resistance (drag) and the high relative exhaust velocity $u$. It is only their high speed that makes them attractive for earth-bound travel, because they can "get there" sooner, which is particularly important for military and weapons applications.
For rockets that are launched into space (such as the Space Shuttle) the density of the air decreases as the altitude of the rocket increases. This decrease, combined with the increase in rocket
velocity, means that the drag force will reach a maximum at some altitude (typically several kilometers above the surface of the earth). This maximum drag force must be withstood by the rocket body
and as such is an important part of rocket physics analysis and design.
In the following discussion on rocket physics we will look at the energy consumption of a rocket moving through space.
Rocket Physics — Energy Consumption For Rocket Moving Through Space
A very useful piece of information in the study of rocket physics is how much energy a rocket uses for a given increase in speed ($\Delta v$), while traveling in space. This analysis is conveniently simplified somewhat, since gravity force and air drag are non-existent. Once again, we are assuming optimal nozzle expansion.
We will need to apply the principle of work and energy to the system (consisting of rocket and propellant/exhaust), to determine the required energy.
The initial kinetic energy of the system is:

$$KE_1 \;=\; \frac{1}{2}\,m_0 v_0^2$$

where:
$m_0$ is the initial rocket (plus propellant) mass
$v_0$ is the initial rocket (plus propellant) velocity
The final kinetic energy of the rocket is:

$$KE_{rocket} \;=\; \frac{1}{2}\,m_f v_f^2$$

where:
$m_f$ is the final rocket mass
$v_f$ is the final rocket velocity
The exhaust gases are assumed to continue traveling at the same velocity as they did upon exiting the rocket. Therefore, the final kinetic energy of the exhaust gases is:

$$KE_{exhaust} \;=\; \int \frac{1}{2}\,v_e^2\,dm_e$$

where:
$dm_e$ is the infinitesimal mass of rocket propellant that has exited the rocket (in the form of exhaust), over a very small time duration $dt$
$v_e$ is the velocity of the exhaust exiting the rocket, at stage (2), at a given time $t$
The last term on the right represents an integration, in which you have to sum over all the exhaust particles for the whole burn time.
Therefore, the final kinetic energy of the rocket (plus exhaust) is:

$$KE_2 \;=\; \frac{1}{2}\,m_f v_f^2 + \int \frac{1}{2}\,v_e^2\,dm_e$$
Now, apply the principle of work and energy to all the particles in the system, consisting of rocket and propellant/exhaust:

$$\Delta E \;=\; KE_2 - KE_1$$

Substituting the expressions for $KE_1$ and $KE_2$ into the above equation, we get

$$\Delta E \;=\; \frac{1}{2}\,m_f v_f^2 + \int \frac{1}{2}\,v_e^2\,dm_e - \frac{1}{2}\,m_0 v_0^2$$

where $\Delta E$ is the mechanical energy used by the rocket between initial and final velocity (in other words, for a given $\Delta v$). This amount of energy is less than the total chemical energy in the propellant, due to naturally occurring losses, such as waste heat generated by the rocket engine.
$\Delta E$ must be solved for in the above equation.
From equation (4), $v = v_0 + u\ln(m_0/m)$, where $u$ is the velocity of the exhaust gases relative to the rocket (constant), and the exhaust exits at $v_e = v - u$.
Note that all velocities are measured with respect to an inertial reference frame.
After a lot of algebra and messy integration we find that

$$\Delta E \;=\; \frac{1}{2}\,(m_0 - m_f)\,u^2$$

This answer is very nice and compact, and it does not depend on the initial velocity $v_0$. This is perhaps a surprising result coming out of this rocket physics analysis.
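As a side note, one way to see why $v_0$ drops out, without grinding through the integration, is a momentum argument. In the rocket's instantaneous rest frame, during a time $dt$ the exhaust mass $dm_e$ leaves with speed $u$, while the rocket's own kinetic energy change is second order (proportional to $dv^2$), so

$$dE \;=\; \frac{1}{2}\,u^2\,dm_e$$

Since no external forces act, the total momentum of rocket plus exhaust is conserved, and a kinetic-energy difference is then the same in every inertial frame (the cross term $V \cdot \Delta P$ in the frame transformation vanishes). Integrating over the whole burn reproduces

$$\Delta E \;=\; \frac{1}{2}\,(m_0 - m_f)\,u^2$$

with no trace of $v_0$.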
If we want to find the mechanical power $P$ generated by the rocket, differentiate the above expression with respect to time. This gives us

$$P \;=\; \frac{1}{2}\,u^2\,\frac{dm_e}{dt}$$

where $dm_e/dt$ is the burn rate of rocket propellant.
Note that, in the above energy calculations, the rocket does not have to be flying in a straight line. The required energy $\Delta E$ is the same regardless of the path taken by the rocket between initial and final velocity. However, it is assumed that any angular rotation of the rocket stays constant (and can therefore be excluded
from the energy equation), or it is small enough to be negligible. Thus, the energy bookkeeping in this analysis of the rocket physics only consists of translational rocket velocity.
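As a final numerical sketch, the two results above can be evaluated directly in Python (example numbers only, not data for any real rocket):

# Energy and power for a burn in space, per the two results above.
u = 3000.0                # relative exhaust velocity, m/s (placeholder)
m0, mf = 10000.0, 4000.0  # initial and final mass, kg (placeholders)
burn_rate = 50.0          # dm_e/dt, kg/s (placeholder)

delta_E = 0.5 * (m0 - mf) * u**2  # total mechanical energy, J
power = 0.5 * burn_rate * u**2    # mechanical power while burning, W

print(delta_E)  # 2.7e10 J
print(power)    # 2.25e8 W -- constant while the burn rate is constant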
This concludes the discussion on rocket physics.
[Numpy-discussion] Proposed Roadmap Overview
Sturla Molden sturla@molden...
Mon Feb 20 09:29:46 CST 2012
On 20.02.2012 08:35, Paul Anton Letnes wrote:
> In the language wars, I have one question. Why is Fortran not being considered? Fortran already implements many of the features that we want in NumPy:
Yes ... but it does not make Fortran a systems programming language.
Making NumPy is different from using it.
> - slicing and similar operations, at least some of the fancy indexing kind
> - element-wise array operations and function calls
> - array bounds-checking and other debugging aid (with debugging flags)
That is nice for numerical computing, but not really needed to make NumPy.
> - arrays that mentally map very well onto numpy arrays. To me, this spells +1 to ease of contribution, over some abstract C/C++ template
Mentally perhaps, but not binary. NumPy needs uniformly strided memory
on the binary level. Fortran just gives this at the mental level. E.g.
there is nothing that dictates a Fortran pointer has to be a view, the
compiler is free to employ copy-in copy-out. In Fortran, a function call
can invalidate a pointer. One would therefore have to store the array
in an array of integer*1, and use the intrinsic function transfer() to
parse the contents into NumPy dtypes.
> - in newer standards it has some nontrivial mathematical functions: gamma, bessel, etc. that numpy lacks right now
That belongs to SciPy.
> - compilers that are good at optimizing for floating-point performance, because that's what Fortran is all about
Insanely good, but not when we start to do the (binary, not mentally)
strided access that NumPy needs. (Not that C compilers would be any better.)
> - not Fortran as such, but BLAS and LAPACK are easily accessed by Fortran
> - possibly other numerical libraries that can be helpful
> - Fortran has, in its newer standards, thought of C interoperability. We could still keep bits of the code in C (or even C++?) if we'd like to, or perhaps f2py/Cython could do the wrapping.
Not f2py, as it depends on NumPy.
> - some programmers know Fortran better than C++. Fortran is at least used by many science guys, like me.
That is a valid argument. Fortran is also much easier to read and debug.
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060842.html","timestamp":"2014-04-16T08:21:50Z","content_type":null,"content_length":"4843","record_id":"<urn:uuid:c832ecf7-f1fe-440b-aea2-db760d0c4705>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Collision and Heat of Block
1. The problem statement, all variables and given/known data
A bullet of mass m and speed v0 is shot at a block of mass M that is made entirely of strawberry jam. The bullet enters and comes out the other side at vf. The initial temperature of the block is T0. The initial
temperature of the bullet is much larger than T0, so 90% of the heat in the collision goes into the jam, 10% to the bullet. The specific heat of the jam is C.
The block travels to the right on a frictionless surface and hits a patch of sandpaper of length L, and coefficient of kinetic friction, mu-kinetic. Only half of the heat generated from friction with
sandpaper flows to jam. Ignore the heat coming out of jam and ignore any jelly coming out from collision with bullet.
2. Relevant equations
What is the approximate temperature of the jam after it has passed over the sandpaper?
3. The attempt at a solution
So, I think it's an elastic collision, since they don't stick, so linear momentum and energy are conserved:
m*v0 = m*vf + M*v2 (v2 being an unknown)
(1/2)m*v0^2 = (1/2)m*vf^2 + (1/2)M*v2^2 (+ heat energy?)
The knowns for heat are the temperature of the block, T0, and its specific heat, C.
I'm not sure, but I think I might have missed some energy in the above energy equation due to heat. The problem is I don't know what equation to use. I think it's from heat transfer (Q = mc*delta-T)
* .9, but I'm not sure about this. What side would it go on? The right?
I know the friction does work on the block when it is on there, and since only half flows into the block, the energy is (1/2)*mu-kinetic*MgL.
So overall, I think the temperature of the block is affected by two things: the heat transfer from the collision and the work done by friction when it passes over the sandpaper. These both affect its
internal energy, delta-U. Do I use delta-U here?
My problem is I'm not sure how this all factors in. Could someone verify if I am right and/or give me a hint in the right direction?
I think what I would do is:
Create a dummy variable for the new temperature from the first heat transfer (the collision), so: Q = MC*(T-dummy - T0)*.90 and then add this equation on the right side of energy and solve. Then, I
would do W(friction) = (1/2)mu-kinetic*MgL = MC(Tfinal - T-dummy). (work done by friction being negative, since non-conservative force).
But am I right?
Why are Schur multipliers of finite simple groups so small?
up vote 25 down vote favorite
Given a finite simple group $G$, we can consider the quasisimple extensions $\tilde G$ of $G$, that is to say central extensions which remain perfect. Some basic group cohomology (based on the
standard trick of averaging a cocycle to try to make it into a coboundary) shows that up to isomorphism, there are only finitely many such quasisimple extensions, and they are all quotients of a
maximal quasisimple extension, which is known as the universal cover of $G$, and is an extension of $G$ by a finite abelian group known as the Schur multiplier $H^2(G,{\bf C}^\times)$ of $G$ (or
maybe it would be slightly more accurate to say that it is the Pontryagian dual of the Schur multiplier, although up to isomorphism the two groups coincide).
On going through the list of finite simple groups it is striking to me how small the Schur multipliers are for all of them; with the exception of the projective special linear groups $A_{n-1}(q)=
PSL_n({\bf F}_q)$ and the projective special unitary groups ${}^2 A_{n-1}(q^2) = PSU_n({\bf F}_q)$, all other finite simple groups have Schur multiplier of order no larger than 12, and even the
projective special linear and special unitary groups of rank $n-1$ do not have Schur multiplier of size larger than $n$ (other than a finite number of small exceptional cases, but even there the
largest Schur multiplier size is 48). In particular, in all cases the Schur multiplier is much smaller than the order of the group itself (indeed it is always of order $O(\sqrt{\frac{\log|G|}{\log\
log|G|}})$). For comparison, the standard proof of the finiteness of the Schur multiplier (based on showing that every $C^\times$-valued cocycle on $G$ is cohomologous to $|G|^{th}$ roots of unity)
only gives the terrible upper bound of $|G|^{|G|}$ for the order of the multiplier.
In the case of finite simple groups of Lie type, one can think of the Schur multiplier as analogous to the notion of a fundamental group of a simple Lie group, which is similarly small (being the
quotient of the weight lattice by the root lattice, it is no larger than $4$ in all cases except for the projective special linear group $PSL_n$, where it is of order $n$ at most). But this doesn't
explain why the Schur multipliers for the alternating and sporadic groups are also so small. Intuitively, this is asserting that it is very difficult to make a non-trivial central extension of a
finite simple group. Is there any known explanation (either heuristic, rigorous, or semi-rigorous) that helps explain why Schur multipliers of finite simple groups are small? For instance, are there
results limiting the size of various group cohomology objects that would support (or at least be very consistent with) the smallness of Schur multipliers?
Ideally I would like an explanation that does not presuppose the classification of finite simple groups.
gr.group-theory schur-multipliers finite-groups
I don't know if some intution may be derived from the following (I'm still struggling to try to understand the Schur multiplier), but there's the following: in "The second homology group of a
2 group; relations among commutators" (Proc. Amer. Math. Soc. 3, (1952). 588–595) C. Miller shows that the second homology/Schur multiplier of $G$ can be interpreted as the group of all relations
among formal commutators of elements of $G$, modulo those relations that hold "universally" (i.e., in the free group). I would expect few "nice" relations among commutators in simple groups beyond
obvious ones. – Arturo Magidin May 17 '13 at 18:31
There may be some insight gained from the transfer map from the homology of the Sylow subgroups. Maybe there's some general structure theory for Sylow subgroups of simple groups that shows that
they have small Schur multiplier at each prime. – Ian Agol May 17 '13 at 19:02
@Arturo: It may be helpful to put the result of Claire Miller in the context of the nonabelian tensor product of groups, see the bibliography at pages.bangor.ac.uk/~masoao/nonabtens.html – Ronnie
Brown May 17 '13 at 20:33
7 I remember asking Michael Aschbacher a similar question (I think it was actually why are the outer automorphism groups of all finite simple groups solvable) many years ago, and he opined that
questions like that were pointless. Many common properties of the finite simple groups are just consequences of the classification. – Derek Holt May 17 '13 at 20:52
@Terry Tao: Yes indeed; to give an explicit and unusual example with simple groups, the alternating group $A_{6}$ has an Abelian Sylow $3$-subgroup, but has a perfect triple cover. Less exotic are
1 th example ${\rm PSL}(2,q)$ where $q \equiv \pm 3$ (mod 8) and $q>3.$ These have Abelian Sylow $2$-subgroups of order $4$, yet each has a perfect double cover ${\rm SL}(2,q).$ I think that Agol's
suggested explanation is slightly off the mark: the Schur multipliers of Sylow $p$-subgroups of simple groups can get big, so I don't think you can explain the phenomenon locally (ie one prime at
a time). – Geoff Robinson May 24 '13 at 13:56
show 7 more comments
2 Answers
active oldest votes
The Schur multiplier $H^2(G;{\mathbb C}^\times) \cong H^3(G;{\mathbb Z})$ of a finite group is a product of its $p$-primary parts
$$H^3(G;{\mathbb Z}) = \oplus_{ p | |G|} H^3(G;{\mathbb Z}_{(p)})$$
as is seen using the transfer. The $p$-primary part $H^3(G;{\mathbb Z}_{(p)})$ depends only of the $p$-local structure in $G$ i.e., the Sylow $p$-subgroup $S$ and information about
how the subgroups of $S$ become conjugate or "fused" in $G$. (This data is also called the $p$-fusion system of $G$.)
More precisely, the Cartan-Eilenberg stable elements formula says that
$$H^3(G;{\mathbb Z}_{(p)}) = \{ x \in H^3(S;{\mathbb Z}_{(p)})^{N_G(S)/C_G(S)} |res^S_V(x) \in H^3(V;{\mathbb Z}_{(p)})^{N_G(V)/C_G(V)}, V < S\}$$
One in fact only needs to check restriction to certain V above. E.g., if S is abelian the formula can be simplified to $H^3(G;{\mathbb Z}_{(p)}) = H^3(S;{\mathbb Z}_{(p)})^{N_G(S)/C_G
(S)}$ by an old theorem of Swan. (The superscript means taking invariants.) See e.g. section 10 of my paper linked HERE for some references.
up vote 14 down Note that the fact that one only need primes p where G has non-cyclic Sylow $p$-subgroup follows from this formula, since $H^3(C_n;{\mathbb Z}_{(p)}) = 0$.
vote accepted
However, as Geoff Robinson remarks, the group $H^3(S;{\mathbb Z}_{(p)})$ can itself get fairly large as the $p$-rank of $S$ grows. However, $p$-fusion tends to save the day. The
heuristics is:
Simple groups have, by virtue of simplicity, complicated $p$-fusion, which by the above formula tends to make $H^3(G;{\mathbb Z}_{(p)})$ small.
i.e., it becomes harder and harder to become invariant (or "stable") in the stable elements formula the more $p$-fusion there is. E.g., consider $M_{22} < M_{23}$ of index 23: $M_{22}
$ has Schur multiplier of order 12 (one of the large ones!). However, the additional 2- and 3-fusion in $M_{23}$ makes its Schur multiplier trivial. Likewise $A_6$ has Schur
multiplier of order 6, as Geoff alluded to, but the extra 3-fusion in $S_6$ cuts it down to order 2.
OK, as Geoff and others remarked, it is probably going to be hard to get sharp estimates without the classification of finite simple groups. But $p$-fusion may give an idea why its
not so crazy to expect that they are "fairly small" compared to what one would expect from just looking at $|G|$...
add comment
I would be very surprised if you receive a "conceptual" answer to this problem- though I would be delighted to be proved wrong. Regarding your last comment, there have been examples
recently where computational evidence has indicated that human intuition about the size of cohomology groups was probably faulty, being based on limited evidence.
up vote 7 Regarding the comment about the bad general bound for the size of the Schur multiplier of a finite group, it can get quite big for $p$-groups, as you no doubt know. If my memory is
down vote correct, an elementary Abelian $p$-group of order $p^{n}$ has Schur multiplier of order $p^{n(n-1)/2}$, as is well-known.
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory schur-multipliers finite-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/130988/why-are-schur-multipliers-of-finite-simple-groups-so-small?sort=oldest","timestamp":"2014-04-20T16:30:59Z","content_type":null,"content_length":"66949","record_id":"<urn:uuid:7fc2a5ba-cfa5-417d-b083-e7d10dc7df87>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Properties
Robin Greenlee
Boise State University
Learner Description: This page is designed for teachers who teach 5th grade and work with differnt math properties.
What are the different Math Properties?
The three that I am going to focus on are: Commutative Property, Associative Property, and Distributive Propery. These properties apply to addition and multiplication. I have provided a defintion of
each property below:
• Commutative Property: Applies to addition and subtraction. When using the commutative property you can change order of addends for addition or the order of factors for multiplication and the
answer will be the same.
Example: 2+1=1+2 or 3x5=5x3
• Associative Property: Applies to addition and subtraction. When using the associative property you can group addends or factors in any way and you will still get the same answer.
Example: (1+2)+3=1+(2+3) or (3x2)x4=3x(2x4)
• Distributive Property: Applies to Multiplication. Multiplying a sum by a number is the same as multiplying each number in the sum by the number and adding the products.
Example: 5 x(3+4)=(5x3)+(5x4)
Each of these properties helps us solve different math problems. They are a way to allow us an easier way to solve math problems that can be overwhelming. The diagram below shows the different
math properties and provides links to videos if you click on the rectangle for each property. | {"url":"http://edtech2.boisestate.edu/greenleer/502/conceptmap.html","timestamp":"2014-04-16T15:59:23Z","content_type":null,"content_length":"3224","record_id":"<urn:uuid:ada4013a-02c5-42ef-a060-94c6bc7953a3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Date: 03/03/99 at 23:26:54
From: Maureen
Subject: Angular and Linear Speed
Speed of a Bicycle: The radii of the sprocket assemblies and the wheel
of the bicycle are 4 inches, 2 inches, and 13 inches, respectively. If
the cyclist is pedaling at the rate of one revolution per second, find
the speed of the bicycle in a) feet per second and b) miles per hour.
(The back wheel has a radius of 13 inches. The chain is connected to
the center part of the back tire, with a radius of 2 inches, and is
also connected to the pedal, with a radius of 4 inches.)
I have the formula that speed = r/t. I was also told that Velocity =
Wt. I am not sure where omega comes into the problem, and do not know
if I am supposed take the average of the three circles or what.
Whatever helpful hints you can give me will sure help.
Thank you!
Date: 03/04/99 at 16:16:03
From: Doctor Rick
Subject: Re: Angular and Linear Speed
What you need to do is to work step by step, following the "power
train" of the bike.
The front sprocket wheel (turned directly by the pedals) has a radius
of 4 inches. I would not put in the actual number, but just call the
radius r1.
The pedals are turning 1 revolution per second. This means they turn
through an angle of 2pi radians per second; this is the angular
velocity W (omega).
The linear velocity of the chain (at radius r1) is W * r1. (Your
formula was incorrect; linear velocity is angular velocity times
radius, not time.)
Each link in the chain is moving at the same linear velocity (though
the direction changes). You can use this fact to find the linear
velocity v2 of the rim of the rear sprocket wheel (radius 2 inches, I
would call it r2). Then use v = W * r to find the angular velocity W2
of the sprocket.
The wheel turns with the rear sprocket, so its angular velocity is the
same, W2. Knowing the radius of the wheel (13 inches), you can find
the linear velocity of the outer edge of the wheel (relative to the
hub). And this is equal to the speed of the bike.
You will have to do some conversions to get the speed in feet per
second and miles per hour, because up to this point you have been
working in inches and seconds. Keep the units with your numbers when
you put everything together, and you will be able to see how to do the
That is an outline of the method I would use to solve this problem. I
hope it helps you.
- Doctor Rick, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/56316.html","timestamp":"2014-04-17T19:32:13Z","content_type":null,"content_length":"7256","record_id":"<urn:uuid:1e90f3b4-e339-4792-92c7-5727ebc3cd16>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |