content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
[FOM] Are proofs in mathematics based on sufficient evidence?
Charles Silver silver_1 at mindspring.com
Wed Jul 21 11:12:32 EDT 2010
Tangentially, Richard L. Epstein has a critical thinking book,
workbook, teacher's edition, CD (I think), in which, after
establishing examples of deductively valid arguments in English, then
presents a myriad of examples of arguments that are not deductively
valid (like those we encounter daily), but are evaluated in terms of
their strengths. Some may be (from my weak memory) extremely strong:
very, very strong; very strong; strong; not very strong; etc., etc.
One can consider the arguments at the top of the strength list to be
"inductively valid," or close to it--whatever that means. Since these
are everyday arguments, the non-deductively valid ones that are super-
strong are very valuable outside of the realm of pure logic. He
provides answers to exercises, which he invites the reader to disagree
with. I regard some of the examples given on this thread to appear
among his close-to-deductively-valid arguments. (This is not an
ordinary "critical thinking" book. For one thing, Dick is a recursion
theorist and has also written a book on computability.)
On Jul 16, 2010, at 10:06 PM, Vaughan Pratt wrote:
> On 7/8/2010 4:18 AM, Arnold Neumaier wrote:
>> I wonder why you put mathematical proof and logical proof into the
>> same
>> category, as opposed to legal or other kinds of proofs.
>> There are worlds between these two notions of proof, in spite of the
>> common ground these notions have.
> (A belated response to Arnold's early response to my original
> question.)
> The Wikipedia article Proof (truth) actually does distinguish these,
> as
> it has done since I first wrote it. Part of the confusion arose
> when a
> zealous editor deleted all the material on both mathematical and
> logical
> proof on the ground that only informal proof took evidence for
> premises.
> This editor did not see enough difference between the two to treat
> them differently in that regard.
> While there are plenty of nuanced distinctions, I see two somewhat
> independent binary distinctions in the concept "sufficient evidence
> for
> truth." (Not everyone will see the same ones, and I may change my own
> mind about these later on.)
> 1. The evidence may be drawn either from experience or hypothesis.
> 2. Sufficiency may be either soft or hard.
> 1. Experiential evidence comes from nature, namely the real or
> sensorially apprehended world, augmented with inferences from that
> evidence about nature. This includes the actual state of a computer
> gate, register, or memory.
> Hypothetical evidence comes either directly from what-if
> counterfactuals
> or axioms or indirectly as consequences of direct hypotheticals
> (reasoning). This includes the activity of program verification.
> One might call these respectively fact (real truth) and fiction
> (imagined truth).
> There is a phenomenon whereby fiction appears as fact: just as a
> pot-boiler that can't be put down conjures up images hard to
> distinguish
> from facts gleaned from newspapers, so can mathematical axioms seem
> real
> to the mathematician accustomed to intensively visualizing abstract
> universes.
> 2. The soft-hard dichotomy in sufficiency is to me the same as the
> informal-formal dichotomy (I could be talked out of this but first
> read
> the next paragraph). Hard is when there are precise criteria that
> evidence must meet to constitute a complete proof, soft is
> everything else.
> The term "precise" offers a loop-hole here. Precision can only be
> measured up to some standard of equality, isomorphism, equivalence, or
> whatever. Each such standard may have a mathematically or
> scientifically rigorous definition, but there may be more than one,
> and
> they may induce a partial order on the standards. We see this in
> proof
> theory, with Girard's notion of proof net as an abstraction of
> sequential proof ("bureaucracy" to use Girard's term), and with the
> even
> more abstract notion of proof contemplated in Dosen and Petric in
> their
> 2004 book Proof-Theoretical Coherence, where a proof is simply a
> morphism interpreted as a proof in a category with suitable structure
> supporting that interpretation.
> These two distinctions combine in the following four ways, with the
> associated applications.
> Experience/soft Scientific investigation, arguments in court, work,
> bars, home, etc.
> Hypothesis/soft Mathematical reasoning, counterfactual reasoning
> Experience/hard Formal deduction applied to the real world, whether
> it be Aristotle's syllogisms as popularized by Lewis Carroll, Boolean
> logic applied to database search, etc.
> Hypothesis/hard Formal deduction applied to mathematics (the core
> focus of FOM perhaps?), but also to counterfactual reasoning about
> real-world situations.
> One distinction this analysis does not make is between counterfactuals
> about real-world situations and mathematical theories. There may well
> be such a distinction in most people's minds; in that respect I may be
> out of step with everyone else. To me every mathematical theory
> *could*
> be about some real-world situation suitably abstracted. One cannot
> reason about counterfactuals with every detail filled in, how would
> you?
> Unlike experiential evidence, counterfactual evidence can't be poked
> around in because there is no real world backing it up. It is
> therefore
> necessarily abstract. One might quibble as to whether that
> abstraction
> is like mathematical abstraction, but I have great difficulty in
> drawing
> that line and therefore no basis for joining such a quibble.
> Vaughan Pratt
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2010-July/014935.html","timestamp":"2014-04-19T04:20:12Z","content_type":null,"content_length":"9827","record_id":"<urn:uuid:33606287-81de-4592-a9af-75692c0dcd82>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sales, Cost of Goods Sold and Gross Profit
Sales, Cost of Goods Sold
and Gross Profit
Previous lesson: The FIFO Method & Weighted Average Cost
Next lesson: Perpetual and Periodic Inventory
We mentioned previously how a trading business differs from a service business: whereas a service business provides a service, such as accounting, medical or repair work, a trading business trades in
inventory (it buys goods at a low price and sells it at a higher price).
A trading business will also differ from a service business in terms of its income and expenses – i.e. the way a profit is made: whereas a service business renders services, a trading business makes
The income statement for a trading business will thus look slightly different to that of a service business:
The first section of the income statement for a trading business describes the core activities of a trading business: i.e. the buying and selling:
Sales: Sales are the full income for the year for selling goods.
Cost of goods sold: This refers to the cost of all the goods that we sold this year. It is also known as cost of sales. Cost of goods sold is an expense charged against sales to work out a gross
profit (see definition below). So, for example, we may have sold 100 units this year at $4 each, and these 100 units that we sold cost us $3 each originally. So our sales would be $400 and our cost
of the goods we sold (cost of sales) would amount to $300. This would result in a gross profit of $100 (sales minus cost of sales). Cost of goods sold is not the same as purchases, as you will see
from our examples below.
Gross profit: An initial profit on the product we are selling, before deducting general business expenses.
Okay, let’s do an example where we can work out the sales, cost of sales and the gross profit for a business. Here's our example of Ms. Sheppard's business again:
Cindy Sheppard runs a sweet shop. She enters into the following transactions during July:
July 1 Purchases 1,200 sweets at $1 each.
July 13 Purchases 500 sweets at $1.20 each.
July 14 Sells 700 sweets at $2 each.
How many sweets does she have at the end of the month?
1,200 + 500 – 700 = 1,000 sweets
Okay, let's calculate the value of her closing stock using each of the FIFO, LIFO and weighted average cost methods:
1. The First-In-First-Out Method (FIFO)
As you can see, even though the purchases amounted to $1,800, the cost of goods sold (or cost of sales) amounted to $700.
2. The Last-In-First-Out Method (LIFO)
In this case, even though our purchases amounted to $1,800, our cost of goods sold (or cost of sales) amounted to $800. This is calculated as follows: (500 X $1.20) + (200 X $1.00) = $800.
3. The Weighted Average Cost Method
Again our purchases are $1,800, but this time our cost of sales comes to $741.
So as you can clearly see, purchases and cost of goods sold, although related, are not the same thing.
Now, let’s look at a summary of the figures we have calculated from these three methods:
We can work out some very useful formulas using these figures…
The closing inventories can always be calculated as follows:
If we switch around the equation to make cost of goods sold the subject, we have a formula for working this out:
Try these two formulas using the above table and you will see that they work every time.
For example, with the FIFO figures, we can see that we had 0 inventories to start with, plus we purchased $1,800 worth of goods. Of these $1,800, we sold $700, so we were left with $1,100 closing
inventories. Using the same figures, we can see that we purchased $1,800 worth of goods and were left with $1,100, so we must have sold $700 worth of goods (the cost of goods that we sold).
The last formula above (with the cost of goods sold as the subject) is actually well known - it is called the cost of goods sold formula or the cost of sales formula.
Almost every accounting student I have encountered has had to memorize this formula because they simply didn't understand what it means and how it works in practice. The explanations above should
make it easier for you to understand and work with this key formula.
Once we have calculated our gross profit from the sales and cost of goods sold, we add other income to this and deduct general business expenses from this, to arrive at our net profit.
Previous lesson: The FIFO Method & Weighted Average Cost
Next lesson: Perpetual and Periodic Inventory
Read Other Questions Relating to This Lesson
(along with their answers)
Click below to see questions and solutions on this same topic from other visitors to this page...
Gross Profit Quick Question
If net revenue equals $50,000, cost of sales $20,000 and operating expenses $10,000, then what does the gross profit come to?
Cost of Sales Formula?
What is the cost of sales formula?
Cost Price, Sales Price, Mark-Up
Q: How do you find the cost price if the sales are $216,000 and the mark-up is 50%? A: "Mark-up" literally means the amount you "mark up" the cost …
Cost of Goods Sold and
Interest Expense
(Please note that in the following question R = Rands, which is the South African currency) a) Sales totaled R15,000,000 b) Cash R370,000 c) Marketable …
Multi-Step Income Statement
Q: I have racked my brain on this and I cannot come up with an answer. If there were a business that did not have a cost of goods figure, would they still …
Accounting Entry for Giving Away a Free Sample
Q: What entry will be passed in the general journal for goods taken as a free sample? A: The first thing to work out here is whether free samples …
Cost Accounting - Fixed vs Variable?
Q: Fixed costs are really variable, the more you produce the less they become. Do you agree? Explain? A: Thanks for your questions Ujala. I agree …
General Journal Closing for Sales
When doing a general journal closing I am supposed to start with sales. Do I subtract sales returns and allowances and discounts in order to close revenue …
Calculate Gross Sales Question
(Rs = Rupees = Indian currency) Opening stock Rs.30000, Closing stock Rs.40000, Purchases Rs.560000, Returns outward Rs.15000, Returns inward Rs.20000, …
Gross Profit Question
Q: How do you calculate the gross profit when you are given gross profit percentage?
Income Balancing Amount Question
Q: You are given the following figures: - Beginning capital of 200,000 - Ending capital of 190,000 - Additional investment 30,000 and - Personal withdrawal …
Value of Closing Stock Question
Q: Opening stock 80,000, purchases 160,000, sales 200,000. Gross profit 33%. Goods destroyed by fire 30,000. Received claim of 20,000. What is the …
Why Show the Highest Cost of Goods Sold Amount?
Q: Why would someone want to show the highest cost of goods sold amount? A: For tax purposes. An intelligent businessman wants to claim the greatest …
Purchases, Cost of sales,
Control accounts
Balances at 31 January 2009: Debtors control account.............................$32,400 Creditors control account...........................$25,200 …
Return from Sales, Cost of Goods Sold and Gross Profit to Inventory
Return to Home Page
New! Comments
Have your say about what you just read! Leave me a comment in the box below. | {"url":"http://www.accounting-basics-for-students.com/cost-of-goods-sold.html","timestamp":"2014-04-19T17:21:46Z","content_type":null,"content_length":"28875","record_id":"<urn:uuid:69f5405c-3829-4318-af7f-1b1d2525b753>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Molecular Vision: Tröße, Mol Vis 2009; 15:1332-1350. Figure 3
Figure 3. Correspondence analysis plot. The principal components 1 and 2, which explain the highest amounts of variance in the data set, are shown on the x- and y-axis of the plot, respectively. The
samples are colored according to the dietary groups. The low-His samples (LLL) are blue, and the medium-His samples (MMM) are dark red. The dark red and blue lines are plotted from the point of
origin through the respective group medians, which are marked by an equally colored dot. The total variance retained in the plot is 16.349%, the x-axis component variance is 10.623%, and the y-axis
component variance is 5.726%. | {"url":"http://www.molvis.org/molvis/v15/a141/trosse-fig3.html","timestamp":"2014-04-17T16:34:32Z","content_type":null,"content_length":"2488","record_id":"<urn:uuid:02b9cee7-8bb9-4c26-bbf0-3452e4f96032>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
2.5.3. Bit-Vectors
Common Lisp the Language, 2nd Edition
Next: Hash Tables Up: Arrays Previous: Strings
A bit-vector can be written as the sequence of bits contained in the string, preceded by #*; any delimiter character, such as whitespace, will terminate the bit-vector syntax. For example:
#*10110 ;A five-bit bit-vector; bit 0 is a 1
#* ;An empty bit-vector
The bits notated following the #*, taken from left to right, occupy locations within the bit-vector with increasing indices. The leftmost notated bit is bit-vector element number 0, the next one is
element number 1, and so on.
The function prin1 will print any bit-vector (not just a simple one) using this syntax, but the function read will always construct a simple bit-vector when it reads this syntax. | {"url":"http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/html/cltl/clm/node32.html","timestamp":"2014-04-18T10:48:21Z","content_type":null,"content_length":"2289","record_id":"<urn:uuid:a17d5dec-acb5-4de9-abe0-dece2c94e6f6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Being able to add quickly and accurately is one of the most important math skills children need to develop. They start by adding single- and double-digit numbers in kindergarten and first grade and
move to adding numbers as part of the properties of operations and order of operations in fourth, fifth, and sixth grade. The stronger a children’s addition skills are, the more they can focus on
complex math skills.
Math Game Time’s free games and worksheets were designed to help children practice and build confidence in their addition skills in fun and creative ways. When you combine these games and worksheets
with our free instructional addition videos, children will be adding like pros! | {"url":"http://www.mathgametime.com/subject/addition","timestamp":"2014-04-16T10:53:23Z","content_type":null,"content_length":"66232","record_id":"<urn:uuid:50fd8641-564c-42cc-bb0a-0b39d5f10ef2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
proof by induction
May 23rd 2012, 07:53 AM #1
Super Member
Sep 2008
proof by induction
prove by mathematical induction, that for n
$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$
assume that the summation formula is true for n=k
$\sum_{i=1}^n i^{2} = \frac{1}{6} k (k+1) (2k+1)$
so must be true for n= k+1 ?
so do I put k+1 into the formula, and try and get it match the original? really stuck from this part,
Last edited by Tweety; May 23rd 2012 at 08:00 AM.
Re: proof by induction
prove by mathematical induction, that for n
$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$
assume that the summation formula is true for n=k
$\sum_{i=1}^n i^{2} = \frac{1}{6} k (k+1) (2k+1)$
so must be true for n= k+1 ?
so do I put k+1 into the formula, and try and get it match the original? really stuck from this part,
Note $\sum_{i=1}^{n+1} i^{2} =\sum_{i=1}^n i^{2} +(n+1)^2$
May 23rd 2012, 08:06 AM #2 | {"url":"http://mathhelpforum.com/pre-calculus/199138-proof-induction.html","timestamp":"2014-04-20T12:50:25Z","content_type":null,"content_length":"36849","record_id":"<urn:uuid:0bcde850-39d8-413d-b683-72af65d40aae>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
height of a variety
The height of a variety should reflect how close the variety is to being ordinary and other arithmetic properties.
Let $X$ be a smooth proper $n$dimensional variety over an algebraically closed field $k$ of characteristic $p$. Then one can define the Artin-Mazur formal group $\Phi$. Since it is a one-dimensional
formal group, it is completely determined up to isomorphism by its height?. This is the height of the variety . The height could be infinite if $\Phi\simeq \widehat{\mathbb{G}_a}$, otherwise $\Phi$
is a p-divisible group.
For an elliptic curve, the height is either $1$ in which case it is ordinary, or $2$ in which case it is supersingular.
A Calabi-Yau variety of any dimension is ordinary if and only if it has height $1$. For K3 surfaces the height is less than or equal to $10$ or infinite, but for all higher dimensional Calabi-Yau
varieties the height has no known bound. Infinite height Calabi-Yau varieties are known as supersingular.
The height of an abelian variety depends on its $p$-rank, but must be $1$, $2$, or infinite.
Relation to Witt Cohomology
Let $\mathcal{W}$ be the sheaf of Witt vectors on a variety $X$ satisfying the conditions above. If $X$ has finite height, then the Dieudonne module of the Artin-Mazur formal group is isomorphic to
$H^n(X, \mathcal{W})$. By standard Dieudonne theory, $D(\Phi)$ is a free of rank $ht(X)$ module over $W$, so $ht(X)=\dim_K H^n(X, \mathcal{W})\otimes K$ where $K$ is the fraction field of $W$.
One consequence of the above is that $X$ is supersingular (of infinite height) if $H^n(X, \mathcal{W})$ is not a finite-type $W$-module. It is possible also that it is a torsion module in which case
$H^n(X, \mathcal{W})\otimes K=0$ and again we can conclude that $X$ is of infinite height (since if $X$ were of finite height it would be a free module).
Relation to Crystalline Cohomology
Suppose that $X$ is a variety with the above hypotheses, then the torsion free part of the crystalline cohomology $H^n_{crys}(X/W)$ is a Cartier module under the action of Frobenius. We can consider
the part with slopes less than $1$, i.e. $H^n_{crys}(X/W)\otimes_W K_{[0,1)}$. The dimension of this is the height of $X$. | {"url":"http://www.ncatlab.org/nlab/show/height+of+a+variety","timestamp":"2014-04-16T19:01:50Z","content_type":null,"content_length":"24207","record_id":"<urn:uuid:3be8d1e7-6466-4ef6-83dc-43831aad01f6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
Product of normal PDFs
The product of two normal PDFs is proportional to a normal PDF. This is well known in Bayesian statistics because a normal likelihood times a normal prior gives a normal posterior. But because
Bayesian applications don’t usually need to know the proportionality constant, it’s a little hard to find. I needed to calculate this constant, so I’m recording the result here for my future
reference and for anyone else who might find it useful.
Denote the normal PDF by
Then the product of two normal PDFs is given by the equation
Note that the product of two normal random variables is not normal, but the product of their PDFs is proportional to the PDF of another normal.
I think it’s particularly elegant how the proportionality constant is expressed as a “normal”.
As is almost always the case, this all becomes unambiguously nicer if you work with variances instead of standard deviations. Better still, with reciprocal variances. If your means are m,n and your
reciprocal variances are t,u then the new mean is (tm+un)/(t+u) — the weighted average of the means, weighted by the reciprocal variances — and the new reciprocal variance is t+u.
(It’s even better formally, but a bit too mysterious statistically, to work with the reciprocal variance and the mean times the reciprocal variance. Then these just add. That’s because a normal PDF
is exp(polynomial(x)) and these are basically just the coefficients of x^2 and x.)
For multivariate normals, if A and B are the inverses of the covariance matrices and m,n the means — so that the PDFs are exp(-1/2 (x-m)^T A (x-m)) and similarly for B,n — then this generalizes
nicely: the mean is (A+B)^-1 (Am+Bn) and the inverse covariance is A+B.
g: You probably know this, but to add some jargon for others: the reciprocal variance (AKA the precision) and the mean/variance are the “natural parameters” of the Gaussian when written as a member
of the exponential family.
Multivariate generalizations of the results in this post can be found, for example, in these cribs:
Gaussian identities only: http://cs.nyu.edu/~roweis/notes/gaussid.pdf
Matrix Cookbook (much larger, contains a section on Gaussians): http://www2.imm.dtu.dk/pubdb/p.php?3274
Working with the inverse variance often arises in statistical estimation theory. The inverse variance is the Fisher Information of the true value. This works just like a quantity of information
should. Given two normally distributed estimates of a parameter, we can find the combined Information by simply adding the Information from each of the individual estimates. The new mean is the
information-weighted average of the individual means.
It is very straightforward and intuitive to think about normals in those terms.
can you describe how to obtain the mean and variance (or concentration) of circular convolution of two normal pdfS.
Also how to obtain the mean and variance (or concentration) of product of two raleigh pdfs.
I’m not sure what you mean by “circular” convolution, but the convolution of two PDFs is the PDF of the sum of the independent random variables. If X and Y are independent normals, E(X+Y) = E(X) + E
(Y) and Var(X+Y) = Var(X) + Var(Y). I haven’t looked at the product of Rayleigh random variables.
[...] Product of normal PDFs (johndcook.com) [...]
Tagged with: Bayesian, Probability and Statistics
Posted in Math, Statistics | {"url":"http://www.johndcook.com/blog/2012/10/29/product-of-normal-pdfs/","timestamp":"2014-04-18T10:48:32Z","content_type":null,"content_length":"35961","record_id":"<urn:uuid:e9ed3fc3-27d0-42ee-9213-043054ebd554>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
QPolygon Class Reference
Member Function Documentation
QPolygon::QPolygon ()
Constructs a polygon with no points.
See also QVector::isEmpty().
QPolygon::QPolygon ( int size )
Constructs a polygon of the given size. Creates an empty polygon if size == 0.
See also QVector::isEmpty().
QPolygon::QPolygon ( const QPolygon & polygon )
Constructs a copy of the given polygon.
See also setPoints().
QPolygon::QPolygon ( const QVector<QPoint> & points )
Constructs a polygon containing the specified points.
See also setPoints().
QPolygon::QPolygon ( const QRect & rectangle, bool closed = false )
Constructs a polygon from the given rectangle. If closed is false, the polygon just contains the four points of the rectangle ordered clockwise, otherwise the polygon's fifth point is set to
Note that the bottom-right corner of the rectangle is located at (rectangle.x() + rectangle.width(), rectangle.y() + rectangle.height()).
See also setPoints().
QPolygon::~QPolygon ()
Destroys the polygon.
QRect QPolygon::boundingRect () const
Returns the bounding rectangle of the polygon, or QRect(0, 0, 0, 0) if the polygon is empty.
See also QVector::isEmpty().
bool QPolygon::containsPoint ( const QPoint & point, Qt::FillRule fillRule ) const
Returns true if the given point is inside the polygon according to the specified fillRule; otherwise returns false.
This function was introduced in Qt 4.3.
QPolygon QPolygon::intersected ( const QPolygon & r ) const
Returns a polygon which is the intersection of this polygon and r.
Set operations on polygons will treat the polygons as areas. Non-closed polygons will be treated as implicitly closed.
This function was introduced in Qt 4.3.
void QPolygon::point ( int index, int * x, int * y ) const
Extracts the coordinates of the point at the given index to *x and *y (if they are valid pointers).
See also setPoint().
QPoint QPolygon::point ( int index ) const
This is an overloaded function.
Returns the point at the given index.
void QPolygon::putPoints ( int index, int nPoints, int firstx, int firsty, ... )
Copies nPoints points from the variable argument list into this polygon from the given index.
The points are given as a sequence of integers, starting with firstx then firsty, and so on. The polygon is resized if index+nPoints exceeds its current size.
The example code creates a polygon with three points (4,5), (6,7) and (8,9), by expanding the polygon from 1 to 3 points:
QPolygon polygon(1);
polygon[0] = QPoint(4, 5);
polygon.putPoints(1, 2, 6,7, 8,9);
The following code has the same result, but here the putPoints() function overwrites rather than extends:
QPolygon polygon(3);
polygon.putPoints(0, 3, 4,5, 0,0, 8,9);
polygon.putPoints(1, 1, 6,7);
See also setPoints().
void QPolygon::putPoints ( int index, int nPoints, const QPolygon & fromPolygon, int fromIndex = 0 )
This is an overloaded function.
Copies nPoints points from the given fromIndex ( 0 by default) in fromPolygon into this polygon, starting at the specified index. For example:
QPolygon polygon1;
polygon1.putPoints(0, 3, 1,2, 0,0, 5,6);
// polygon1 is now the three-point polygon(1,2, 0,0, 5,6);
QPolygon polygon2;
polygon2.putPoints(0, 3, 4,4, 5,5, 6,6);
// polygon2 is now (4,4, 5,5, 6,6);
polygon1.putPoints(2, 3, polygon2);
// polygon1 is now the five-point polygon(1,2, 0,0, 4,4, 5,5, 6,6);
void QPolygon::setPoint ( int index, int x, int y )
Sets the point at the given index to the point specified by (x, y).
See also point(), putPoints(), and setPoints().
void QPolygon::setPoint ( int index, const QPoint & point )
This is an overloaded function.
Sets the point at the given index to the given point.
void QPolygon::setPoints ( int nPoints, const int * points )
Resizes the polygon to nPoints and populates it with the given points.
The example code creates a polygon with two points (10, 20) and (30, 40):
static const int points[] = { 10, 20, 30, 40 };
QPolygon polygon;
polygon.setPoints(2, points);
See also setPoint() and putPoints().
void QPolygon::setPoints ( int nPoints, int firstx, int firsty, ... )
This is an overloaded function.
Resizes the polygon to nPoints and populates it with the points specified by the variable argument list. The points are given as a sequence of integers, starting with firstx then firsty, and so on.
The example code creates a polygon with two points (10, 20) and (30, 40):
QPolygon polygon;
polygon.setPoints(2, 10, 20, 30, 40);
QPolygon QPolygon::subtracted ( const QPolygon & r ) const
Returns a polygon which is r subtracted from this polygon.
Set operations on polygons will treat the polygons as areas. Non-closed polygons will be treated as implicitly closed.
This function was introduced in Qt 4.3.
void QPolygon::translate ( int dx, int dy )
Translates all points in the polygon by (dx, dy).
See also translated().
void QPolygon::translate ( const QPoint & offset )
This is an overloaded function.
Translates all points in the polygon by the given offset.
See also translated().
QPolygon QPolygon::translated ( int dx, int dy ) const
Returns a copy of the polygon that is translated by (dx, dy).
This function was introduced in Qt 4.6.
See also translate().
QPolygon QPolygon::translated ( const QPoint & offset ) const
This is an overloaded function.
Returns a copy of the polygon that is translated by the given offset.
This function was introduced in Qt 4.6.
See also translate().
QPolygon QPolygon::united ( const QPolygon & r ) const
Returns a polygon which is the union of this polygon and r.
Set operations on polygons, will treat the polygons as areas, and implicitly close the polygon.
This function was introduced in Qt 4.3.
See also intersected() and subtracted().
QPolygon::operator QVariant () const
Returns the polygon as a QVariant | {"url":"http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qpolygon.html","timestamp":"2014-04-19T01:57:51Z","content_type":null,"content_length":"29482","record_id":"<urn:uuid:8a27213c-9537-40f4-92bc-166944215116>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase Difference of Sampled Waves
Date: 05/27/2002 at 09:21:31
From: Sean Goddard
Subject: Extracting Phase information
Hi Dr. Math,
OK, I'm no kid, and my maths is way too rusty to be of use to
anyone (me included).
My problem is that I have two SIN waves which I have sampled into
my PC, and I need to find the LAG/LEAD of the second with respect
to the first (reference).
What I'm trying to do is return the result in the form of a
phasor, so I obviously need to be looking in using an I and Q
I'm trying to give a number in the range of -90 to +90 for the
phase shift, with an accuracy of +/- 0.1 degrees.
I'm REALLY struggling, and have derived something that sort of
works, but not very well.
Please help!
Date: 05/29/2002 at 14:01:15
From: Doctor Douglas
Subject: Re: Extracting Phase information
Hi, Sean,
It's a little difficult to answer this fully without knowing more
about your system. However, if we know that the two sine waves
are of the same frequency (they'd better be, otherwise their
phase difference will be changing with time), then you can do
the following:
1. Find the period T (in say, sampling intervals) of
both waves. You can use zero-crossings to do this.
2. Determine a zero-crossing time Z of the first wave.
3. Starting from that time Z, step forward until the next
event of the SECOND wave crossing zero with the same
polarity (i.e. both crossings are positive-going or
both are negative-going).
Call this time Y. Note that T, Z, and Y are all
measured in number of sampling intervals and that
Y >= Z.
4. The number
P = [360] * [(Y-Z)/T] (degrees)
will be the phase measurement that you require, where
360 is the number of degrees in a full period, and the
numbers Y, Z, and T are derived from your measurements.
If the number P is greater than 180, then you can
subtract 360 from it to get a number in the range
-180 to +180. I think you want to consider this
range rather than just the range from -90 to +90.
To get the accuracy that you require, the numbers Y, Z, and
T will need to be of sufficient accuracy. Roughly speaking,
you'll want T to be at least 3600 sampling intervals (360/0.1).
More is better (longer period sine waves or faster sampling).
I don't think that there's any need to make things more
complicated with a in-phase and quadrature (if that's what you
meant by I-Q) analysis.
- Doctor Douglas, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/60699.html","timestamp":"2014-04-17T16:06:31Z","content_type":null,"content_length":"7429","record_id":"<urn:uuid:f7928430-a29f-4865-bf5f-166e673a9305>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interactive lecture on diminishing marginal product: tennis ball production
In this interactive lecture, the engagement trigger is a demonstration in which students "produce" tennis balls with fixed capital and increasing labor, thus generating a production function.
Students calculate the marginal product of each worker and discover that marginal product falls as the number of workers rises.
Learning Goals
Marginal product of labor, diminishing marginal product.
Context for Use
This activity is intended for a Principles of Microeconomics course. It is appropriate for any class size; however, instructors of varying class sizes may want to choose different methods for
assessment. This is an interactive lecture segment, with the demonstration serving as an engagement trigger. The demonstration itself can be completed in as little as ten minutes, although the
accompanying lecture and class discussion will likely fill the rest of a 50-minute class period. The instructor will need to bring the materials (tennis balls and buckets).
Students should already be familiar with the concepts of marginal thinking and cost-benefit analysis. The description here focuses on diminishing marginal product only but the activity could easily
be extended for a more in-depth discussion of costs by adding input prices and using the data generated by the demonstration to create all the various costs curves (average, fixed, variable, etc.).
Description and Teaching Materials
A more extensive variation of this activity, including worksheets that could be used to calculate all the costs, can be found at
. Some discussion of the activity can also be found at
Set up two buckets, several feet apart, with several tennis balls in one of the buckets. Students have a set amount of time (I use ten seconds) to move tennis balls from one bucket to another; each
ball that makes it into the empty bucket counts as one unit produced. Students must carry the balls from one bucket to the other; balls cannot be thrown and any ball that bounces out of the bucket
will not count. Begin with one student and then add additional 'workers' in each round, recording the total production for each round. Total production should rise for the first few rounds but
eventually, diminishing marginal product should set in and total output will level off and even begin to fall, which is a good point to stop. Once you have the production function data, define
marginal product and calculate marginal product for the first two or three workers in the demonstration. Then have students calculate the marginal product of each of the rest of the workers in the
demonstration, and discuss why marginal product begins to fall as there are more workers.
Teaching Notes and Tips
The production 'technology' can vary, depending on your students and room environment. I tell the students that each student can only carry one ball at a time and they must individually place the
ball in the other bucket but an alternative is to let them form a line and pass the balls down (diminishing marginal product sets in partly because they invariably start dropping them). The important
thing is that you have to be very clear with them that they cannot change the technology as they add more workers; as long as that's clear, you should start getting diminishing marginal product
pretty quickly. You can vary the amount of time for each round, or how far apart the buckets are, and that will affect the total length of the demonstration (and total production numbers) but should
not impact when diminishing returns set in.
This works well for large classes; even though only a small subset of the students are physically participating, the other students get involved by cheering on their classmates. If you have enough
space, you can even set up two firms on opposite sides of the room; they will invariably try to compete with each other and while this has nothing to do with the ultimate objective (demonstrating
diminishing returns), it can ensure that students really are trying to maximize their output, and the rest of the class will typically root for the firm on their side of the classroom.
Because I teach a very large class, I assess the students with clicker questions. After running the demonstration and calculating marginal product for the first two workers, I ask the students to
calculate the marginal product of each of the other workers. Before we go through the answers together, I have them use their clickers to indicate the marginal product of, say, the fourth worker.
References and Resources | {"url":"http://serc.carleton.edu/econ/interactive/examples/43019.html","timestamp":"2014-04-17T16:20:39Z","content_type":null,"content_length":"26706","record_id":"<urn:uuid:cd59356c-1977-44d9-a0f2-57d72dc5b76a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Almaden Trigonometry Tutor
Find a New Almaden Trigonometry Tutor
...I am extremely patient with them, and I will explain it in as many times and different ways I need to until the idea is clear. I find tutoring one of most rewarding experiences one can do. I
look forward to helping students grow and mature academically.The foundation for all other math courses,...
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I like to talk through examples and discuss the problems, to ensure there is a true understanding of the concepts. I'm currently in school to gain my credentialing in teaching in order to teach
Mathematics for grades 6-12, and have been tutoring for over 10 years. I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial
9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I have a special skill for making math notation and concepts more easily understood, helping retention and test-taking skills considerably. I am a patient, engaging tutor with an easy-going
personal style. I am a semi-retired business professional with a great love for and interest in mathematics.
22 Subjects: including trigonometry, calculus, geometry, statistics
...I have ten years of practical, hands-on computer programming experience through my work as a scientist. Python is my primary programming language. I have also programmed in Pascal and C.
17 Subjects: including trigonometry, chemistry, writing, geometry
...What makes me good at tutoring? Knowing math, knowing my students, being good at drawing people out, and being good at adjusting how I teach so that it suits the unique individual I am working
with. To learn, students must feel comfortable, interested, and challenged.
22 Subjects: including trigonometry, English, reading, geometry | {"url":"http://www.purplemath.com/New_Almaden_Trigonometry_tutors.php","timestamp":"2014-04-18T13:59:42Z","content_type":null,"content_length":"24173","record_id":"<urn:uuid:8983819d-b7c8-4454-867e-1243d35c053a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
The x-intercept of 2x + 3y = 15 is (___, 0). A. 2 B. 3 C. 7.5 D. 5
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c2816fe4b066f22e104da0","timestamp":"2014-04-20T16:28:13Z","content_type":null,"content_length":"39306","record_id":"<urn:uuid:cebc1991-55d8-4c9d-abac-7a0c13aa26da>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00584-ip-10-147-4-33.ec2.internal.warc.gz"} |
Items tagged with scale
Hello, with "logplot", if I use "scale" or change the minimum vertical range to some smaller values, the plot mechanism will automatically print "0., 0.5*0.0001, 0.0001, 0.5*0.001, 0.001, .... " at
the y-axis numeric mark.
However, I want the 10^(-n) numeric format and alwasy keep the format.
How could I always get the 10^(-n) numeric format for y-axis ?
Thanks a lot! | {"url":"http://www.mapleprimes.com/tags/scale","timestamp":"2014-04-19T07:10:21Z","content_type":null,"content_length":"90889","record_id":"<urn:uuid:6405245c-e9f2-406a-94da-e87582e7db72>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nuclear protein purification
Hi all,
During nuclear protein purification I came across the step where I need to add 5M NaCl slowly with shaking until I adjust the molarity to 2M NaCl, how many mls do I need to add?
Honestly I tried to tackle the problem in different ways but I couldnt, any help would be really appreciated.
Thanks all.
AHH, Gravity! Such a cold cruel mistress.
it must be 2/5 of final volume, thus 2/3 of your original volume. E.g. you have 1 liter in the beginning, you add 0.66 l of 5M NaCl -> the final volume is 1.66 l and cV = cV, i.e. 0.66l . 5M = 1.66l
. xM, how much is x?
Cis or trans? That's what matters.
Thank you for the elaboration, I know its been almost a month for me to reply I am sorry for that but I finally tried the experiment and the volume I am adding to is 5 ml. But I am not sure how we
reached the 2/3 ratio of the original volume.
because 2/3 = .66 and if we multiply (.66)(5ml) I will get 3.3ml to add which is correct but still I dont understand why.
AHH, Gravity! Such a cold cruel mistress.
cV = cV right? So, you'll have:
2M . V1 = 5M . V2
2M/5M = V2/V1 -> ratio of original (5M stock) and final (your solution) volumes is 2 : 5. In other words, the 5M solution will make 2 parts of of 5 parts of 2M solution. Thus the solution to which
you're adding the NaCl will make 3 parts of it, thus 2/3.
Cis or trans? That's what matters.
Now I totally understand it! Thank you very much.
AHH, Gravity! Such a cold cruel mistress. | {"url":"http://www.biology-online.org/biology-forum/post-150247.html","timestamp":"2014-04-21T02:20:21Z","content_type":null,"content_length":"27048","record_id":"<urn:uuid:1cb8478c-365f-4fb5-86b5-b468ffc542eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help Online
College is a very difficult environment for students to adjust right out of high school. There are many reasons why students routinely struggle with college homework and need someone to help them
one-on-one. The biggest problem with homework is that students are required to do tons of it in a short amount of time. This can lead to frustration among students who are struggling with math,
physics, chemistry and English assignments while at the same time trying to balance out there personal life and student life. The initial problem and issues faced by students prompted us to look into
the matter and create a website where students can get college homework help at a reasonable price. The real problem is not with students but with the educational system itself. With class sizes
getting bigger and bigger and professors using automated tools such as online homework suites such as blackboard there are increasingly being distance from the student.
Do My Homework For Me!
Students are unable to secure time to meet with the Prof. to get them
do my homework
. Many students was working to a college do not have the time to sit down and come in doing office hours to understand a problem with the Prof. Professors and teachers are more interested in
furthering their own academic research and for securing grants for the universities. The whole thing has become a sick joke on the student itself. Therefore, students routinely turn to online
platform such as is my homework to get the help that they need to be able to get good grades in their final exam.
The other problem with college homework is that it is very difficult for student to be able to visualize mathematical and geckos problems and therefore they are unable to really get help when they
need to. This creates a problem for the student who is required to turn in an assignment and get automatically graded by the Prof. While we understand that students are in college to learn and be
able to utilize the many resources which are out there in the formal library, online resources and fellow students, it is essential that professors and teachers look at the system as a whole and
figure out if it is really helping the student are hurting them.
The college homework help that students routinely require and so desperately need comes in the form off tutoring centers. Yet these are staffed by graduate students and other students who are not
quite proficient at solving a problem let alone explaining it to a student. These are usually swamped and do not have enough materials or tutors to be able to help the many students who go to these
establishments. Therefore, students are increasingly turning to online homework help sites to be able to get the homework help they deserve and so desperately need. If you are one of the many
University students who is struggling with math and need someone to explain to you step-by-step how math problem solved then you need only posted on site and some scholar will be able to help you.
The best part is that you have to pay only when the work is done right and according to your expectations. | {"url":"http://www.paycollegehomeworkhelp.com/","timestamp":"2014-04-17T10:36:33Z","content_type":null,"content_length":"8542","record_id":"<urn:uuid:4cba0292-91ef-43e0-b11e-efba29d49260>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asymptotic Safety
Once upon a time, I wrote a blog post about the proposal by Reuter and various collaborators that quantum gravity in four dimension is controlled by a UV fixed point. Below some cutoff (at the Planck
scale, if not below it), gravity is described by an effective theory. This effective description breaks down at the cutoff scale, and the theory is ill-defined unless one of two things happens
1. New degrees of freedom enter (as in String Theory).
2. The UV physics is controlled by a fixed point, so the apparently infinite number of coupling are actually not independent, but rather lie on an (IR-repulsive) trajectory emanating from the fixed
point set.
The latter is known to occur for pure gravity in $2+\epsilon$ dimensions. The hope is that the same holds true in 4 dimensions. The technique used to study this is the so-called Exact Renormalization
Group, truncated to some finite-dimensional space of couplings. Reuter’s original work truncated to a 2-dimensional subspace, consisting of the cosmological constant and Einstein-Hilbert term. The
existence of a fixed point in that (brutal) truncation was not terribly convincing.
Since I wrote my post, there have been many followup papers, by various authors (see the recent review by Percacci), purporting to adduce further evidence, by including various other couplings, and
checking to see whether the fixed point persists. For instance, this paper considers adding a polynomial (up to 6th order) in the curvature scalar. Reuter considered adding the square of the Weyl
The trouble with all of these papers is that they really don’t address the issue in a meaningful way.
The terms considered vanish on-shell (in flat space) and in conventional perturbation theory, any divergence in these terms can be absorbed by a field redefinition. The ${(\text{Weyl})}^2$ term is a
slight exception. But it can be rewritten as the Gauss-Bonnet density plus terms which vanish on-shell. The Gauss-Bonnet density, being a topological invariant, receives no corrections.
The first term which can receive a nontrivial renormalization in pure gravity, and hence which would actually serve as an acute test of whether the fixed point really exists, is cubic in the Riemann
tensor. Goroff and Sagnotti did the perturbative computation to show that it, in fact, received a log-divergent correction at 2-loops. This is the first divergence in pure gravity; the 1-loop
divergences can be absorbed by field redefinitions, by the argument of the previous paragraph.
So the first nontrivial test of the asymptotic safety proposal will come when someone computes the ERGE for $S = \int d^4 x \sqrt{-g} \left(M^4 c_1 + \tfrac{M^2}{c_2} R + \tfrac{c_3}{M^2} \tensor{R}
Now, I’ve thought about doing this computation myself. But
1. Goroff and Sagnotti’s computation was hard. And using the ERGE approach can’t make it any easier.
2. It’s a pretty foregone conclusion what the result will be: there is no fixed point for any finite value of $c_3$.
So maybe I should throw this out there for the readers of this blog. Anyone want to attain fame and fortune by performing the first nontrivial test of the gravitational asymptotic safety hypothesis?
Posted by distler at January 30, 2008 10:43 AM
Re: Asymptotic Safety
Yes, I want to do this. Even if it is a foregone conclusion, it needs to be checked.
It may take me some time though, as I will have to go over the papers first and review renormalization group flow.
I am a high energy theory graduate student looking for a real problem to work on.
Posted by: Metal on January 30, 2008 8:19 PM | Permalink | Reply to this
Re: Asymptotic Safety
Anybody contemplating this calculation might actually not want to do it in the Goroff and Sagnotti brute force way but rather using the covariant methods (heat-kernel etc) of
Two loop quantum gravity.
A.E.M. van de Ven (Hamburg U. and SUNY, Stony Brook) . DESY-91-115, ITP-SB-91-52, Oct 1991. 51pp.
Published in Nucl.Phys.B378:309-366,1992.
Posted by: Robert on January 31, 2008 6:06 AM | Permalink | Reply to this
Re: Asymptotic Safety
Dear Jacques,
I have some questions about the Reuter program which I cannot resist asking you.
First of all, from reading your blog, here is what I think I understand. Please correct me if I am wrong.
From your older post AND the discussions in the comment section, it seems to me that Reuter et al’s renormalization group is *almost* non-perturbative. By *almost* I mean that the formulation seems
non-perturbative enough, except that one needs to specify initial conditions for the flow in the UV which can (implicitly) involve the choice of a perturbative cutoff. Especially because (as you
pointed out), their IR cutoff function might NOT be compatible with the use of a lattice regularization in the UV. But if one assumes the best-case scenario for the Reuter program (which is what I
think you are doing in your latest post), am I wrong in thinking that there might exist *some* non-perturbative regularization scheme which might be compatible with their choice of the IR cutoff
My point is that if one does not believe in the non-perturbative validity of the RG evolution equation, what is the sense in talking about including the Riemann cubed coupling? Even if one ends up
finding that there is a non-trivial fixed point, would it be anything more than a curiosity?
One more question: Apologies in advance if I am being naive, but even before we start with this exact renormalization group business, how can a theory with black holes at high energies[1] look like a
CFT? Is there any motivation to believe that there are no black holes in strongly coupled, asymptotically flat, gravity? (Asymptotically flat, because I believe in the existence of black holes in AdS
because of Hawking-Page vs. Confinement-Deconfinement in AdS/CFT.).
[1] That one can produce black holes in high energy scattering is something I have heard many times, especially from Willy, and it has always seemed reasonable to me.
Hope everybody on 9th floor is doing fine,
Posted by: Chethan Krishnan on January 31, 2008 9:25 AM | Permalink | Reply to this
Re: Asymptotic Safety
But if one assumes the best-case scenario for the Reuter program (which is what I think you are doing in your latest post), am I wrong in thinking that there might exist some non-perturbative
regularization scheme which might be compatible with their choice of the IR cutoff function?
That, indeed, is a very dubious point.
To get the initial condition for the RG flow, one needs to assume a compatibility between the IR cutoff (which they make explicit) and the UV cutoff (which they don’t specify). It’s clear that one
can impose a perturbative UV cutoff, compatible with their IR cutoff. But if you do that, then the realm of validity of the “Exact” RGE is to sum up all orders in perturbation theory; it’s not
nonperturbative. Conversely, it’s hard to imagine a nonperturbative UV cutoff which would be compatible.
I am willing to suspend disbelief on this point, and see what the calculation yields.
My point is that if one does not believe in the non-perturbative validity of the RG evolution equation, what is the sense in talking about including the Riemann cubed coupling? Even if one ends
up finding that there is a non-trivial fixed point, would it be anything more than a curiosity?
Of course, there are an infinite number of coupling which receive nontrivial additive renormalizations in pure gravity. It is very very dubious that these additive renormalization all vanish
simultaneously at some point in coupling-constant space.
My point is that Reuter and company have assiduously avoided including any of these coupling in their ansatz. So they really haven’t performed a nontrivial test of the hypothesis. Including this $\
text{Riemann}^3$ coupling would be the first nontrivial test.
how can a theory with black holes at high energies[1] look like a CFT? Is there any motivation to believe that there are no black holes in strongly coupled, asymptotically flat, gravity?
I don’t think that’s even a question we can ask here. Recall that the fixed point is purportedly at some positive value of the cosmological constant. I assume that we are supposed to be in the part
of the phase diagram where the flos stays in the regime of positive cosmological constant. (There’s the troubling matter that $\Lambda$ seems to flow to $\infty$ as one goes to the infrared, but
whatever …)
In any case, the existence of a “quantum” conformal symmetry in quantum gravity is compatible with there being a nontrivial dimensionful scale in the theory, so I don’t see a-priori why it’s
incompatible with blackholes.
The bigger headache is that, since we’re definitely in de Sitter space, it’s not clear what the observables are supposed to be, and hence what any of this means.
Posted by: Jacques Distler on January 31, 2008 1:30 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Dear Jacques,
About my Q#1: Thanks for answering precisely my question.
About my Q #2: Oops, I mis-spoke. Yes, I should have said asymptotically de Sitter, not asymptotically flat. What I meant was essentially only that it is *not* asymptotically anti de Sitter.
The reason I brought up black holes was because if we are confident that they dominate high energies, you might expect an area-like entropy as opposed to a volume-like entropy which is what one would
expect if the theory was *still* a QFT even at high energies. I have seen variations of arguments of this sort in mnay places I think, one specific place I just looked up is hep-th/9812237.
Posted by: Chethan on January 31, 2008 2:28 PM | Permalink | Reply to this
Re: Asymptotic Safety
The reason I brought up black holes was because if we are confident that they dominate high energies, you might expect an area-like entropy as opposed to a volume-like entropy which is what one
would expect
There are funny statements in Reuter et al about a reduction in the effective dimensionality of spacetime as one approaches the fixed point, which may accord with what you are saying.
But this is all handwaving. I’d like a clean gauge-invariant statement, which is something hard-to-come-by in de Sitter spacetimes.
What’s even less believable is the other side of the critical RG trajectory, where the running cosmological constant flips sign as you flow to the IR.
I’m pretty sure I know what quantum gravity is in anti-de Sitter spacetimes. If pure gravity exists in 3+1 AdS (I have reasons to believe it does not), then I’m hard-pressed to imagine how it could
be described by this asymptotic safety hypothesis.
(I presume, for this reason, that those who believe in asymptotic safety don’t believe in AdS/CFT.)
Posted by: Jacques Distler on January 31, 2008 6:51 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
One would think they could perform a proof of concept by using a more well known nonrenormalizable field theory that has a known nontrivial fixed point (say derived from a lattice analysis), and then
applying the same procedure (truncation + Polchinksi’s ERG) and analyzing the strong coupling regime.
2+epsilon gravity doesn’t suffice since it seems to me the fixed point shows up at weak coupling.
Off the top of my head, I can’t think of a good candidate, but perhaps some exist in condensed matter or somesuch.
Posted by: Haelfix on February 2, 2008 2:01 AM | Permalink | Reply to this
Re: Asymptotic Safety
The Exact RG has been very successful at finding nonperturbative fixed points, particularly in scalar field theory. There is a nice review by Bagnuls and Bervillier, hep-th/0002034, which includes a
discussion of how to find the Wilson-Fisher fixed point in three dimensions.
The most successful truncation scheme for doing such studies (in scalar field theory) is the derivative expansion, whereby the momentum dependence of the vertices is truncated, but an infinite number
of interactions are retained. This is in contrast to an expansion in powers of the field, where the full momentum dependence of some finite number of vertices is kept. Field expansions are, in
general, not reliable and can lead to the `discovery’ of spurious fixed points. This is not surprising as the approximation only makes sense if the field is not fluctuating very much, and one might
reasonably expect the opposite to be true in the nonperturbative regime of interest.
The derivative expansion, on the other hand, is generally qualitatively reliable and quantitatively reasonable, even at lowest order [the Local Potential Approximation (LPA)]. From a computational
point of view, retaining an infinite number of interactions is a less brutal way of treating the non-linear ERG equation than the alternative.
Regarding the approach of Reuter et al., where the truncation is severe, one must be very wary of the possibility that the claimed fixed point is an artefact of the approximation scheme. That said,
it’s existence does seem to have a certain stability, as discussed by the authors (though the issue of gauge invariance is perhaps down-played), and is surely intriguing.
Posted by: Oliver on February 2, 2008 11:58 AM | Permalink | Reply to this
Re: Asymptotic Safety
Adding terms like a polynomial in the curvature scalar is not a robust test of anything, since such terms can be eliminated by a field redefinition.
The first robust test will be to add a term that is
• not a topological invariant,
• cannot be eliminated by a local field redefintion,
• receives a nontrivial renormalization in perturbation theory.
So far, all of the papers seem to have carefully avoided including such terms.
If the fixed point were to persist in the presence of such terms, that would, indeed, be intriguing.
Posted by: Jacques Distler on February 2, 2008 1:41 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Dear Jacques
thank you for your thoughts and comments.
> So far, all of the papers seem to have carefully avoided including such terms.
Well, people do what they can. It seems reasonable to start from the simplest truncation
and then progressively add more complicated terms.
Be assured that nobody has purposefully avoided the Goroff Sagnotti term.
Our reason for not having done it is the same as yours - technical complexity.
An easier calculation that would achieve the same goal would be to consider gravity with
curvature squared terms coupled to matter.
As discussed in ‘t Hooft and Veltman’s classic paper, in this theory the (one loop)
curvature squared divergences cannot be eliminated by field redefinitions.
But let me try to understand this business of field redefinitions better.
I thought that the argument for looking only at the Riemann cube (or better Weyl cube)
term holds only in a perturbative setting, where you take the Hilbert action
and treat everything else as an infinitesimal perturbation.
The terms that can be eliminated by field redefinitions are those that vanish on shell,
and the equations that are used in this argument are Einstein’s equations in vacuum.
So, perturbatively, everything that contains the Ricci tensor can be eliminated up to
terms of higher order.
If you want to do a nonperturbative calculation and treat the other terms on the same
footing as the Hilbert term, you would have to do this test using the full field equations.
It gets very complicated very quickly.
Are you saying that all the terms that can be eliminated using the perturbative argument
can also be eliminated in the nonperturbative sense?
I am aware of the transformation that will rid us of the R-squared and Ricci-squared terms
(though at a cost - see below) but what about the Ricci-squared-Weyl and Ricci-Weyl-squared?
> Adding terms like a polynomial in the curvature scalar is not a robust test of anything,
> since such terms can be eliminated by a field redefinition.
This is indeed another example where there is an explicitly known finite (as opposed to
infinitesimal) field redefinition that can be used to “eliminate” some terms, in this
case all higher powers of R.
But in doing this you generate infinitely many new terms in a scalar potential and you have
not actually reduced the number of couplings.
You have just converted one theory into an equivalent one.
And since there is no a priori argument that in this equivalent formulation there must be a
nontrivial fixed point, finding one in an f(R) theory is definitely not an empty statement.
Some of the other issues that were raised here are discussed in a faq page I have set up at
Best regards
Posted by: Roberto Percacci on February 4, 2008 12:28 PM | Permalink | Reply to this
Re: Asymptotic Safety
An easier calculation that would achieve the same goal would be to consider gravity with curvature squared terms coupled to matter. As discussed in ‘t Hooft and Veltman’s classic paper, in this
theory the (one loop) curvature squared divergences cannot be eliminated by field redefinitions.
Yes, that would also serve as a good test, and would probably be easier than including the Goroff-Sagnotti term in pure gravity.
The terms that can be eliminated by field redefinitions are those that vanish on shell, and the equations that are used in this argument are Einstein’s equations in vacuum. So, perturbatively,
everything that contains the Ricci tensor can be eliminated up to terms of higher order.
If you want to do a nonperturbative calculation and treat the other terms on the same footing as the Hilbert term, you would have to do this test using the full field equations.
It gets very complicated very quickly.
The equations of motion for $f(R)$ gravity are more complicated. But they still admit Einstein spaces ($R_{\muu}\propto g_{\muu}$) as solutions. And, indeed, you expand about an appropriate Einstein
space in your ‘nonperturbative’ treatment.
By a field redefinition, you can alter the coefficients of $(R-c)^n$, just as you could in the ‘perturbative’ case.
Perhaps I’m missing something …
But in doing this you generate infinitely many new terms in a scalar potential and you have not actually reduced the number of couplings. You have just converted one theory into an equivalent
Could you elaborate (or give a reference)? I’m not sure what “scalar potential” we’re talking about, here.
Posted by: Jacques Distler on February 4, 2008 3:36 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
> The equations of motion for f(R) gravity are more complicated. But they still admit Einstein spaces as
>solutions. And, indeed, you expand about an appropriate Einstein space in your ‘nonperturbative’ treatment.
When you say that something vanishes on shell it means that it vanishes
for every solution of the field equations, not just some solution.
The use of a particular background to compute some beta function is
a mathematical trick to extract specific terms from a functional trace.
The resulting beta functions would come out the same for any background.
Anyway, the issue here is not how we compute the beta function,
but whether, or how, one can eliminate certain terms from the action.
> Could you elaborate (or give a reference)? I’m not sure what “scalar potential” we’re talking about, here.
I am referring to the fact that a theory with Lagrangian f(R) is equivalent
via field redefinitions to a metric-scalar theory where the action contains
the Hilbert term, a canonical scalar kinetic term and a scalar potential.
See for example equations 3-6 of astro-ph/0307338.
For the R-squared and Ricci-squared terms see hep-th/9601082.
Posted by: Roberto Percacci on February 5, 2008 10:21 AM | Permalink | Reply to this
Re: Asymptotic Safety
This may be a bit tangential, but I’m extremely wary of the “equivalence” between f(R) theories and tensor-scalar theories. Sure, they can be thought of as equivalent at the level of equations of
motion, but generically you’re talking about two different boundary value problems (or, equivalently, initial value problems).
If I consider a gravity theory with Lagrangian $R + R^2$, there simply isn’t a well defined BV problem where only the metric is held fixed at the boundary. But in the scalar-tensor theory one appears
to have a BV problem where a metric (and not its normal derivative) is fixed at the boundary, and some condition is also put on the scalar.
Do these two theories really end up being equivalent, and not just at the level of the equations of motion? I guess I should look at the paper claiming equivalence at the level of the path integral.
Posted by: Robert McNees on February 5, 2008 12:31 PM | Permalink | Reply to this
Re: Asymptotic Safety
When you say that something vanishes on shell it means that it vanishes for every solution of the field equations, not just some solution.
Of course.
I’m certainly not claiming that $f(R)$-gravity is equivalent to Einstein-Hilbert.
What I am saying (which I believe is correct) is that, when you compute the ERG functional $\beta$-function for $f(R)$-gravity, you expand in fluctuations about an Einstein space, and the computation
is effectively the same as the computation in Einstein-Hilbert, with a shifted value for the effective $\Lambda$ and $G_N$.
So, if you had found a zero of the $\beta$-function before, you expect it to persist in $f(R)$-gravity.
Posted by: Jacques Distler on February 5, 2008 1:51 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
I wrote:
… the computation is effectively the same as the computation in Einstein-Hilbert, with a shifted value for the effective $\Lambda$ and $G_N$.
So, if you had found a zero of the $\beta$-function before, you expect it to persist in $f(R)$-gravity.
I should amend that. $R^2$ is an exception. That requires a separate calculation. If, however, you find a fixed point for $S = \int d^4 x \sqrt{-g}\left[g_0 M^4 + g_1 M^2 R + g_2 R^2\right]$ you are
guaranteed to find a fixed point in the theory where you add additional polynomial in $R^n$, $n\geq 3$.
Posted by: Jacques Distler on February 7, 2008 2:00 AM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Jacques, I am not sure that I understand what you are saying. We agree that f(R) gravity cannot be reduced to Einstein-Hilbert: a generic change in the higher couplings cannot be compensated by a
field redefinition.
But then you propose to exploit the special properties of the background on which we do our calculation. Fair enough. We do our calculations on spheres. On a sphere the variation of the action of f
(R) gravity is proportional to (2f(R)-Rf’(R)) times the trace of the variation of the metric. If we want to remain within the chosen class of backgrounds the only possible variations of the metric
are global rescalings, such that the trace of the variation of the metric is a constant. By writing f as a Taylor series one can easily see that using such a variation of the metric you can
compensate a change in any given coupling, except for the coefficient of R squared.
I assume that you must have been reasoning along similar lines. But here I lose you. Such global rescalings are only a one parameter group: you must change all the couplings together in a specific
way. As a result, even on a sphere, you can use them to fix the value of only one coupling, not infinitely many. So maybe you had something else in mind?
Posted by: Roberto Percacci on February 7, 2008 12:29 PM | Permalink | Reply to this
Re: Asymptotic Safety
But here I lose you. Such global rescalings are only a one parameter group: you must change all the couplings together in a specific way. As a result, even on a sphere, you can use them to fix
the value of only one coupling, not infinitely many. So maybe you had something else in mind?
I am making a rather mundane observation.
The crucial ingredient in the ERGE involves expanding the effective action to quadratic order in fluctuations about your chosen background. Since, for an Einstein space, $R=\text{const}$, this
inverse propagator, for $S=$ an $n^{\text{th}}$ order polynomial in $R$, has the same functional form as the one constructed for $S$ a quadratic polynomial in $R$ (but with shifted coefficients).
Posted by: Jacques Distler on February 8, 2008 1:09 AM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
True but not enough to draw your conclusion. In the calculation of the beta functions you need to keep track of the functional dependence of the inverse propagator on R, which is treated as an
external parameter. There is no way that you can obtain with an R-squared truncation the same R-dependence that you have with higher truncations, irrespective of any shift in the couplings.
Consider a similar issue in the more familiar context of a scalar field theory with a generic even potential V(ψ^2). The inverse propagator has the form
If you truncate the potential at order ψ^4, in d=3 the ERGE will give you (an approximation to) the Wilson Fisher fixed point. This is rather crude, and, as Oliver was saying, a lot of work has gone
into better truncations.
Translated into this context, your “mundane observation” is: for constant scalar background, the form of the inverse propagator for any potential can be reproduced by that of a quartic potential, for
some suitable couplings. This is true but not very useful. What matters is the functional dependence on the scalar. A higher polynomial potential will give rise to higher powers of ψ in the inverse
propagator and such information is essential in extracting the beta functions of the higher couplings. You certainly cannot conclude that “if there is a fixed point for the quartic potential there
must be a fixed point for any polynomial potential”.
Posted by: Roberto Percacci on February 8, 2008 12:05 PM | Permalink | Reply to this
Re: Asymptotic Safety
A higher polynomial potential will give rise to higher powers of $\psi$ in the inverse propagator and such information is essential in extracting the beta functions of the higher couplings.
Of course you need to compute the $\beta$-functions for the higher couplings. But in $d=3$, we know what the result will be: they’re all irrelevant.
(For those unfamiliar with this story, $d=3$ scalar field theory (on which we impose a $\mathbb{Z}_2:\phi\to-\phi$ symmetry) has a Gaussian fixed point, at which $\phi^2$ and $\phi^4$ are relevant,
and $\phi^6$ is marginally irrelevant. If you perturb away^1 from the Gaussian fixed point, you can flow to another fixed point, the Wilson-Fisher fixed point, which has only a single relevant
You certainly cannot conclude that “if there is a fixed point for the quartic potential there must be a fixed point for any polynomial potential”.
But you can. Adding $\phi^6$ or $\phi^8$, etc, does not alter the basic structure of the RG flow (but see below).
And much was known about the Wilson-Fischer fixed point before the ERGE techniques came along.
This is rather crude, and, as Oliver was saying, a lot of work has gone into better truncations.
Now, it’s true that conventional perturbative techniques do better at computing the critical exponents at the Wilson-Fischer fixed point if you include also a $\phi^6$ term. The reason (at least, as
I have understood it) is that $\phi^6$ is marginally irrelevant, and so flows to zero rather slowly.
But the ERGE calculations do just fine in the truncation where you omit all of the higher-order polynomials. Most of the improvements (or so is my impression) come from things like optimizing the
cutoff function used, rather than from adding more terms to the effective action.
^1 To reach the W-F fixed point, you want to deform in the direction of $m^2\lt0$ and $\lambda_{\phi^4}\gt 0$. And, to complicate matters, you need to tune a parameter to suppress the direction
that’s irrelevant at Wilson-Fisher.
Posted by: Jacques Distler on February 9, 2008 10:42 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Roberto is absolutely correct when he says that
You certainly cannot conclude that “if there is a fixed point for the quartic potential there must be a fixed point for any polynomial potential”.
It does, just so happen, that a truncation based on a field expansion correctly identifies the Wilson-Fisher fixed point but this is, in some sense, a fluke. The series for the critical exponents are
not convergent, and a spurious fixed point can be found.
I also think that the irrelevance of the 6pt, 8pt vertices etc. can be a bit of a red-herring. This statement is true in the vicinity of the Gaussian fixed point and it allows one to draw a very
powerful conclusion: namely, that all scale dependence of the action along a Renormalized Trajectory (RT) emanating from the Gaussian fixed point is carried by the two point coupling, m, the four
point coupling, λ, and the anomalous dimension, γ (I’m assuming that all quantities have been rescaled to dimensionless, using the effective scale). However, this does not mean that higher point
couplings are not generated along the flow, nor that these couplings won’t become important at some scale. All that one can say is that (remarkably) all of these couplings can be written in the form
Even in the vicinity of the Gaussian fixed point, the determination of the exact scale dependence of the Wilsonian effective action is a nonperturbative problem, amounting to computing the `perfect
action’. Whilst a perturbative calculation will give an excellent approximation, in this regime, there is generally no reason to trust it, away from here.
Adding φ^6 or φ^8, etc, does not alter the basic structure of the RG flow (but see below).
If one were really able to compute the action exactly, in the Wilsonian framework, it wouldn’t make sense to talk about adding a coupling to the action. The action along an RT is determined, in
principle, by the flow equation, given
1. The choice of fixed point;
2. The integration constants associated with the relevant / marginally relevant directions.
As it is, adding couplings to an action, in this context, implicitly implies a truncation and such a procedure could well have a profound effect on the (approximation to the) flow.
Posted by: Oliver Rosten on February 10, 2008 3:51 PM | Permalink | Reply to this
Re: Asymptotic Safety
Is there any analytic statement about the bound on the error as you move away from your fixed point whilst adding different couplings? I mean,I would think the divergence structure would manifest
itself pretty clearly.
Posted by: Haelfix on February 10, 2008 4:50 PM | Permalink | Reply to this
Re: Asymptotic Safety
The series for the critical exponents are not convergent …
Perturbation theory is never convergent.
Last I checked, the most accurate determination of the critical exponents at the Wilson-Fisher fixed point came from doing a conventional multiloop computation of the anomalous dimensions (ie.,
working about the Gaussian fixed point), and then using Padé approximants to extrapolate the results.
and a spurious fixed point can be found.
This whole discussion has been about the fact that truncations of the ERGE can lead to the appearance of spurious fixed points.
I have argued that the fixed point found in the ERGE analysis of pure gravity is probably spurious, and suggested the operator(s) which should be added to the truncation to test this.
I did explain why I think that adding $R^n$ couplings to the truncation does not provide an acute test of the persistence of the fixed point.
Roberto was the one who suggested the similarity to adding $\phi^{2n}$ terms to the truncation of the ERGE in $d=3$ scalar field theory.
In the latter case, there are lots of ways to argue that adding $\phi^{2n}$ terms to the truncation will not affect the W-F fixed point.
You seem to want to argue that none of these arguments are valid because all kinds ‘o crazy stuff can happen along RG trajectories. I would respond that, if you believe the behaviour to be
sufficiently wild, then none of the usual families of truncations (the field expansion, the derivative expansion) can be expected to be valid. And no amount of “evidence” amassed in the context of
one of those families of truncations can be persuasive.
Posted by: Jacques Distler on February 11, 2008 12:17 AM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Even if you did use the Goroff and Sagnatti coupling and found a fixed point, couldn’t you just argue the same thing and say it could just be a spurious point merely an artifact of the approximation
scheme? It is after all just the first of an infinite amount of bad terms (never mind the large N additivity divergences)
So back to the premise, under what circumstances (divergence structure) of a field theory can the truncation +ERGE be expected to provide an adequate flow sufficiently accurate enough so thta people
can trust the fixed point? Ultimately it should be a statement about the error bound I would think
Posted by: Haelfix on February 11, 2008 3:55 AM | Permalink | Reply to this
Re: Asymptotic Safety
Haelfix: I am inclined to believe that if a fixed point exists there should be a way of understanding it. But I think the question you raise is premature. Things may well get stuck at the next test,
so for the time being we should just keep humbly collecting evidence.
Concerning analytic bounds on errors, I am not aware of any such bounds that could be applied here, but maybe other people have more to say on this.
Jacques: Please, do not push my Wilson-Fisher analogy beyond its narrow limits. My statement was referring only to the inverse propagator and the beta functions that come from it via the ERGE. In the
scalar case in d=3 there may be other arguments that allow you to say that once you have a fixed point for a quartic potential there should also be one for a higher polynomial, but such arguments do
not exist in the case of gravity. So there is no shortcut to doing the actual calculation of the beta functions and looking for their zeroes. In this sense, I repeat, testing polynomials in R is
Whether you consider the result of such calculation a “robust” or “acute” test is a matter of taste and semantics. On february 2nd you gave a precise definition of what you mean by a “robust” test. I
am happy to agree with you that our calculation of polynomials in R was not a “robust” test in your sense, because it does not meet your third criterion.
Ultimately I think we agree on everything and our difference lies just in how much weight we are willing to give to perturbative evidence. You seem to think that the ERGE approach will stumble in the
same place where perturbation theory did. I do not say that this is unreasonable. After all most of the solid understanding we have of the world comes from perturbation theory. On the other hand,
from whatever little experience I have of playing with the ERGE, I see no reason to expect that the Weyl cube term should play any special role. I may be wrong of course. Hopefully time will tell.
Since I think that I made my points clear and that there cannot be much gain in repeating myself, I am probably not going to make other entries here, at least not until new facts emerge. For the time
being let me say that I have found this blog an instructive experience.
Posted by: Roberto Percacci on February 11, 2008 5:43 PM | Permalink | Reply to this
Re: Asymptotic Safety
Since I think that I made my points clear and that there cannot be much gain in repeating myself, I am probably not going to make other entries here, at least not until new facts emerge. For the
time being let me say that I have found this blog an instructive experience.
Well, I’d like to thank you for stopping by, and taking the time to comment. I found this discussion very helpful. And I’m sure many of my readers did, as well.
Posted by: Jacques Distler on February 11, 2008 5:56 PM | Permalink | PGP Sig | Reply to this
Re: Asymptotic Safety
Yes, thank you for clearing things up, I’ve learned quite a bit and I think I understand you’re position better as a result.
Posted by: Haelfix on February 11, 2008 6:04 PM | Permalink | Reply to this
Re: Asymptotic Safety
I am surprised that no one came here to ask for fame and fortune after this article was posted on arxiv:
Posted by: Daniel de França MTd2 on March 9, 2009 2:47 PM | Permalink | Reply to this
Field redefinitions
Let us take an action $S[\psi]$ with a coupling $c$. If we vary the coupling constant $c$, then the variation of the action is $\delta c \frac {\partial S[\psi]}{\partial c}$. If this variation can
be undone by a field redefinition $\psi \rightarrow \psi + \delta \psi$, the we must have
(1)$\delta c \frac {\partial S[\psi]}{\partial c} = \int d^4 x \frac {\delta S[\psi]}{\delta \psi(x)} \delta \psi(x).$
Now, in the case of gravitation,
(2)$S = \frac 1 {16 \pi G} \int d^4 x \sqrt{- g} \left(f(R) - 2 \Lambda\right).$
Take $f(R) = R + c R^2$, for example. In order to be able to undo the constant $c$ above, we should, if I’m not mistaken, find a transformation for the metric, $g_{\mu u} \rightarrow g_{\mu u} + \
delta \alpha_{\mu u}$ such that
(3)$\int d^4 x \sqrt{- g} \left[- \delta c R^2 + (1 + 2 c R) R_{\mu u} \delta \alpha^{\mu u} - \frac 1 2 (R + g R^2 - \Lambda) g_{\mu u} \delta \alpha^{\mu u}\right] = 0.$
Now, I haven’t tried very hard, but I couldn’t find a solution for $\delta \alpha^{\mu u}$. I have’t looked at the $Weyl^2$ ter, yet.
Posted by: Sidious Lord on February 5, 2008 2:53 PM | Permalink | Reply to this
Re: Field redefinitions
> Now, I haven’t tried very hard, but I couldn’t find a solution for δα μν.
Before you try harder: in the variation of R squared you are forgetting a term 2cR(trace δ Ricci). (It’s not a total derivative.)
For someone who is looking for a fixed point, the fewer couplings one has to consider the better. So I would be delighted if somebody told me that c can be eliminated in this way but I don’t think
it’s possible.
Posted by: Roberto Percacci on February 6, 2008 4:02 AM | Permalink | Reply to this
Re: Field redefinitions
Before you try harder: in the variation of R squared you are forgetting a term 2cR(trace δ Ricci). (It’s not a total derivative.)
I agree. In the Einstein-Hilbert action variation there also is a term $g^{\mu u} \delta R_{\mu u}$, but this is a divergence and doesn’t contribute to the equations of motion.
However, in this case it appears multiplying a curvature scalar, and the result is not a divergence anymore. Thank you for this correction.
So the correct equation (correcting another typo where I wrote $g$ instead of $c$) is
(1)$\int d^4 x \sqrt{-g} \left[- \delta c R^2 + (1 + 2 c R) R_{\mu u} \delta \alpha^{\mu u} + 2 c R g^{\mu u} \delta R_{\mu u} - \frac 1 2 (R + c R^2 - \Lambda) g_{\mu u} \delta \alpha^{\mu u}\right]
= 0.$
Now, the variation $\delta R_{\mu u}$ will involve the Riemann tensor, so things will start to look a bit messy.
Posted by: Lord Sidious on February 6, 2008 12:29 PM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/~distler/blog/archives/001585.html","timestamp":"2014-04-20T10:47:28Z","content_type":null,"content_length":"95984","record_id":"<urn:uuid:4752364d-5f8d-4c6f-8efd-bbb120bac92b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Excellent and Experienced UCLA Mathematics Tutor
My name is Alex, I am a 2012 graduate with a B.S. in Mathematics from UCLA.
I have many years of tutoring experience working with students in private, online and home settings. Additionally, I have been trained professionally for math tutoring at campus tutoring center.
I am patient and responsive to students' needs, make mathematics easy, enjoyable and comprehensible for students at all levels, ranging from elementary level math to calculus and beyond.
I am available at night during weekdays and all day on weekends, on short notice as well as scheduled appointments.
Zhiwei's subjects | {"url":"http://www.wyzant.com/Tutors/CA/Los-Angeles/8256550/?g=3MG","timestamp":"2014-04-23T23:14:00Z","content_type":null,"content_length":"89240","record_id":"<urn:uuid:4c5713fe-4f0a-42e9-a8c1-185325aa2032>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
rectangular prisms
hi cooljackiec,
(i) The tetrahedron. I think the little one has sides that are half the large one. So that should fix the ratio of volumes.
LATER EDIT: I've changed my mind about this one. See post 10 on
http://www.mathisfunforum.com/viewtopic … 60#p277760
(ii) If you call the sides a, b and c then
ab = 56 ........................(p)
bc = 110 ........................(q)
ca = 385 .........................(r)
If you do equation (p) divide by (q) you'll eliminate b
Then make a the subject and substitute into (r) to solve for c.
Then you can substitute back to get a and b.
(iii) Again call the sides a, b and c.
From the information given
This time you cannot solve for a, b and c because there are only two equations. But you don't need to, because we only want to get
There's a neat way to get this. Start with
If you substitute in the values you know you'll be left with the value of a^2 + b^2 + c^2
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=277720","timestamp":"2014-04-18T08:24:05Z","content_type":null,"content_length":"20047","record_id":"<urn:uuid:9a274804-6053-4c8a-bc8c-83025a70a42b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Gregorio Prealgebra Tutor
...Besides topics such as Matrices and Logarithm, graphing of polynomials and functions, solving for real and complex zeroes call for a tremendous amount of analysis. I can help the student master
these skills with careful examination of the clues and signs. I have taught Geometry to students with different learning styles and needs.
5 Subjects: including prealgebra, geometry, Chinese, algebra 2
...Students should not learn grammar and syntax rules just to learn rules; they should learn them in order to become aware of the functions and relations of words within sentences. A knowledge of
grammar makes writing much easier, as this foundation allows students to write with confidence. I took French every year in grade school and high school, receiving honors in French upon
49 Subjects: including prealgebra, reading, elementary (k-6th), public speaking
...I am familiar with different chords and rhythms, and I have played in worship bands at my church and at Christian clubs in college. I write and record my own music as well. I have become
skilled in sight singing as part of my music education at SJSU, and it was arguably one of my strongest areas.
17 Subjects: including prealgebra, reading, English, geometry
...I've participated in the Stanford Jazz Workshop on piano for the past 15 years, and I've studied under Randy Masters at the Community School of Music and Art in Mountain View and under Frank
Sumares at San Jose State University. I play all types of music. I also do piano arrangements.
37 Subjects: including prealgebra, reading, English, physics
I come from a long line of educators. I was taught to love to learn and to love to teach. I am currently teaching various subjects to 8th grade students in East Palo Alto and have been teaching in
various classroom settings for about 4 years.
13 Subjects: including prealgebra, reading, English, writing | {"url":"http://www.purplemath.com/san_gregorio_prealgebra_tutors.php","timestamp":"2014-04-18T18:37:50Z","content_type":null,"content_length":"24195","record_id":"<urn:uuid:555dae5b-e940-422b-8ff5-aa3a7e8e514a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with Statics and Strengths of Materials
grandnat_6: I have almost always used Ssu = shear ultimate strength = 0.60*Stu, and Ssy = shear yield strength = 0.577*Sty, where Stu = tensile ultimate strength, and Sty = tensile yield strength.
Most text books claim the above. I currently do not know why Machinery's Handbook instead says Stu = 0.75*Stu.
The above values are shear strength, and tensile strength, the point where the material shears or ruptures. These material strength values do not include a safety factor. They are material strength
values, not allowable stress.
I am still currently leaning toward a yield factor of safety of FSy = 3 for your arm beams, and perhaps FSy = 5 for the pins. The factor of safety is the same in tension and compression. The
allowable tensile (or compressive) stress is Sta = Sty/FSy. The allowable shear stress is Ssa = Ssu/FSu = 0.60*Stu/FSu, where FSu = ultimate factor of safety. Because your current FSy values are so
high, you can just use FSu = FSy, for now.
We do not yet know the tensile yield strength (Sty) of your A513 steel tubes, because you did not state an SAE steel grade designation yet. A513 covers a lot of SAE steel grade designations (SAE
1008, 1010, 1020, 4130, 4140, just to name a few). You (your supplier) must state the SAE steel grade designation, before we can look up the strength of your A513. If no SAE grade designation for
A513 tubes is stated, then we would be forced to assume SAE 1008.
And, your supplier must state whether the A513 steel tube thermal condition is as-welded (not annealed), normalized, DOM, or DOM stress-relieved, before we can look up the strength. If no thermal
condition for A513 tubes is stated, then we would be required to assume normalized, or perhaps as-welded, depending on the SAE grade designation. However, A500, grade B, on the other hand,
specifically defines a strength. | {"url":"http://www.physicsforums.com/showthread.php?p=3807107","timestamp":"2014-04-24T11:36:25Z","content_type":null,"content_length":"80963","record_id":"<urn:uuid:5d00ce30-5f36-4ea5-a625-362c9a3c343e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Nieghbourhood functions
Scott Ransom ransom at cfa.harvard.edu
Sun Apr 8 20:46:45 CDT 2001
Hi Robert,
Isn't this just a convolution? For your first order case you are
convolving the array with a kernel that looks like:
and for second order:
If this is what you are looking for, there is a convolve function built
in to Numeric but it is for rank-1 arrays only. You can use 2D FFTs for
a 2D case -- although the efficiency won't be the greatest unless you
can use the FFT of the kernel array and/or the data array over and over
again (since in general a convolution by FFTs takes 3 FFTs -- 1 for the
data, 1 for the kernel, and 1 inverse one after you have multiplied the
first two together). It would work well for higher order cases...(but
beware of the wrapping that goes on accross the boundaries!)
For lower order cases or when you can't re-use the FFTs, you'll probably
want a brute force technique -- which I'll leave for someone else...
Robert.Denham at dnr.qld.gov.au wrote:
> I am looking for efficient ways to code neighbourhood functions. For
> example a neighbourhod add for an element in an array will simply be the sum
> of the neighbours:
> 1 0 2
> 3 x 3 , then x becomes 7 (first order neighbour), 11 (2nd order) etc.
> 1 1 0
> I would be interested in efficient ways of doing this for a whole array,
> something like a_nsum = neighbour_sum(a, order=1), where each element in
> a_nsum is the sum of the corresponding element in a.
Scott M. Ransom Address: Harvard-Smithsonian CfA
Phone: (617) 495-4142 60 Garden St. MS 10
email: ransom at cfa.harvard.edu Cambridge, MA 02138
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2001-April/000570.html","timestamp":"2014-04-19T09:30:53Z","content_type":null,"content_length":"4376","record_id":"<urn:uuid:d77d8aae-c6a4-402f-a095-bfaf4bbe189f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Numpy complex types, packing and C99
David Cournapeau cournape@gmail....
Thu Jul 2 06:56:11 CDT 2009
On Thu, Jul 2, 2009 at 9:02 AM, David Cournapeau<cournape@gmail.com> wrote:
> True, but we can deal with this once we have tests: we can force to
> use our own, fixed implementations on broken platforms. The glibc
> complex functions are indeed not great, I have noticed quite a few
> problems for special value handling (e.g. cabs(inf + I * NAN) is nan,
> but should be inf, etc....).
Ok, here we are. There are two branches.
- the first one, complex_umath_tests, is a branch with thorough
covering of special values. I use the C99 standard for reference:
The main file is
Around 10 tests fail on Linux ATM.
- the second branch is the interesting one:
It implements clog, cexp, creal, cimag, carg, cabs, ccos, csin and
csqrt. The list is dictated by my needs for windows 64, but the list
can grow arbitrarily, of course. Most the implementations are taken
from msun (the math library of FreeBSD). Unfortunately, they don't
implement that many functions compared to the glibc.
If I merge the two branches and use the npymath complex functions in
umath, there is no unit test failures anymore :)
I think I will merge the complex_umath_tests branch soon
(platform-specific failures on build bot will be interesting), unless
someone sees a problem with it.
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-July/043715.html","timestamp":"2014-04-16T23:04:20Z","content_type":null,"content_length":"4607","record_id":"<urn:uuid:17c5503d-6d6b-460a-befd-ba626b90bb27>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
This question popped up in another thread and was answered there exactly. Let's see what geogebra can do.
Agnishom wrote:
AB and CD are two parallel chords in the same circle measuring 5 cm and 11 cm respectively.
If, the distance between AB and CD is 6 cm
Then find out the radius of the circle
1) Enter in the input box points (1,1),(12,1),(4,7),(9,7).
2) Use the rigid polygon tool and click points A,B,D and C to form a rigid polygon.
3) Go into the algebra pane and show points C and D.
You will notice that from the entered points you have two parallel line segments of length 11 and 5 and that they are 6 units away from each other.
4) Use the circle through three point tool and click A,C and D and a circle will appear, circumscribing the polygon.
5) Look in the algebra pane for the equation of that circle you will see
(x - 6.5)² + (y - 2)² = 31.25
6) Check that points A,B,C and D lie on the circle by plugging in.
7)Move the polygon and circle by dragging A or B. Since this is a rigid polygon it will remain invariant under the motion. Check the RHS of the equation of the circle. It will remain at 31.25
regardless of how we move or rotate the object.
Geogebra conjectures that
We are done! | {"url":"http://www.mathisfunforum.com/post.php?tid=19093","timestamp":"2014-04-21T09:55:11Z","content_type":null,"content_length":"16264","record_id":"<urn:uuid:4639391d-4afd-4159-a245-14d15304ec7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Herndon, VA Precalculus Tutor
Find a Herndon, VA Precalculus Tutor
...I graduated from Paine College with a Bachelor of Science degree in Mathematics. I taught Math on the high school level but I have also tutored students in elementary school and college. I
have taught and tutored for ten years.
19 Subjects: including precalculus, calculus, algebra 2, ASVAB
I have been teaching mathematics for the last 10 years. I believe a good math teacher should make the subject so interesting that students start liking it. I have seen students start talking
about the difficulty of the subject right from elementary school, and unfortunately that impression gets stronger when they reach middle or high school.
9 Subjects: including precalculus, calculus, trigonometry, discrete math
Dear Prospective Student,I hope you are ready to learn, gain confidence and increase your problem solving skills, all while raising your grades! I offer tutoring sessions for all high school math
subjects—from pre-algebra to AP calculus. I have been tutoring various levels of math for 6 years now.
22 Subjects: including precalculus, calculus, geometry, GRE
...Phonics can best be learned in an enriched, one-on-one setting full of engaging activities with someone (myself) who allows personalized plans of action, sets goals, and effectively uses
research-based strategies. Having completed a degree-based program that specializes in Pre-K education, I hav...
64 Subjects: including precalculus, reading, chemistry, English
...I have tutored High School and College level Calculus. Calculus is one of the most powerful tools in math and as such is used through out the engineering field. I understand the concepts well
and can explain them in a manner in which they make sense . I am very familiar with chemistry.
23 Subjects: including precalculus, chemistry, Spanish, calculus
Related Herndon, VA Tutors
Herndon, VA Accounting Tutors
Herndon, VA ACT Tutors
Herndon, VA Algebra Tutors
Herndon, VA Algebra 2 Tutors
Herndon, VA Calculus Tutors
Herndon, VA Geometry Tutors
Herndon, VA Math Tutors
Herndon, VA Prealgebra Tutors
Herndon, VA Precalculus Tutors
Herndon, VA SAT Tutors
Herndon, VA SAT Math Tutors
Herndon, VA Science Tutors
Herndon, VA Statistics Tutors
Herndon, VA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Alexandria, VA precalculus Tutors
Annandale, VA precalculus Tutors
Arlington, VA precalculus Tutors
Bethesda, MD precalculus Tutors
Centreville, VA precalculus Tutors
Chantilly precalculus Tutors
Fairfax, VA precalculus Tutors
Falls Church precalculus Tutors
Gaithersburg precalculus Tutors
Great Falls, VA precalculus Tutors
Reston precalculus Tutors
Rockville, MD precalculus Tutors
Springfield, VA precalculus Tutors
Sterling, VA precalculus Tutors
Vienna, VA precalculus Tutors | {"url":"http://www.purplemath.com/Herndon_VA_Precalculus_tutors.php","timestamp":"2014-04-17T21:26:41Z","content_type":null,"content_length":"24199","record_id":"<urn:uuid:d1827096-8dcd-4fd8-aa92-cb94d92582ca>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: September 2008 [00249]
[Date Index] [Thread Index] [Author Index]
Re: How I can fit data with a parametric equation?
• To: mathgroup at smc.vnet.net
• Subject: [mg91908] Re: How I can fit data with a parametric equation?
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Fri, 12 Sep 2008 05:29:02 -0400 (EDT)
On 9/11/08 at 6:14 AM, dinodeblasio at gmail.com wrote:
>Hello and thanks for your collaboration, I read a little bit and I
>wrote the following code: "when I try to do the FindFit command, the
>parameters have to be positive, so i was searching for the optimal
>values of the parameter that fit the data. How I can modify my code
>in order to find the best parameters fitting the data?I tried to do
>the Norm between the value of the data and the value of the equation
>but i cant do more. Thanks.
>Remove["Global`*"] data = {{1, 1}, {28, 0.719188377}, {54,
>0.35746493}, {81, 0.182114228}, {117,
>0.166082164}, {260, 0.132765531}};
>express = (1 - k*x)*(1 - k*x/q)*(1 - p*k*x/q) "this is the
>equation with which i want to fit the data"
When you want to place constraints on the parameters, using
NMinimize is probably going to work for you better than FindFit.
First create a function that computes the summed square error
In[18]:= ss[x_, y_, k_, p_, q_] :=
Total[((1 - k*x)*(1 - k*x/q)*(1 - p*k*x/q) - y)^2]
Here I've separated the x,y components to simplify the code
In[19]:= {xx, yy} = Transpose[data];
Now, NMinimize can be used to find the desired parameters
In[20]:= NMinimize[{ss[xx, yy, k, p, q], k > 0 && p > 0 && q >
0}, {k,
p, q}]
Out[20]= {0.0178264,{k->0.00204306,p->1.,q->0.348625}}
In[21]:= Show[
ListPlot[data, PlotRange -> All, Frame -> True, Axes -> None,
PlotMarkers -> Automatic], Plot[express /. Last[%20], {x, 0, 300}]]
Shows the estimates give a reasonable fit to the data. | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Sep/msg00249.html","timestamp":"2014-04-17T04:19:30Z","content_type":null,"content_length":"26608","record_id":"<urn:uuid:bd899dfc-bfa2-41b6-9778-0cd07b0e3568>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possible Answer
How to run a chi-square 2 way test in SPSS In this example, we want to test the claim that there is an association between the restrictions on - read more
Chi-Square Test for Association using SPSS Introduction. The chi-square test for independence, ... In our enhanced linear regression guide, we show you how to correctly enter data in SPSS to run a
chi-square test for independence. Alternately, we have a generic, ... - read more
Share your answer: how to run chi squared in spss?
how to run chi squared in spss resources | {"url":"http://www.askives.com/how-to-run-chi-squared-in-spss.html","timestamp":"2014-04-20T01:51:29Z","content_type":null,"content_length":"34901","record_id":"<urn:uuid:c8eb1f3c-f0fb-49b6-ac3b-435d03026e8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buffalo Grove Algebra 2 Tutor
Find a Buffalo Grove Algebra 2 Tutor
...My students have improved their grades by 2-3 letter grades within 3-4 sessions on average. I have an excellent track record with references to support it with referrals from Principals and
former student's parents. Seeing is believing.
15 Subjects: including algebra 2, chemistry, geometry, Spanish
...I've helped students push past the 30 mark, or just bring up one part of their score to push up their overall score. In the past 5 years, I've written proprietary guides on ACT strategy for
local companies. These guides have been used to improve scores all over the midwest.
24 Subjects: including algebra 2, calculus, physics, GRE
...I had passed that exam in 1985, 1991, 1997 and 2004, always in the ninetieth percentile. I took the MCAT in 1977, and based on that result and my GPA was accepted into 3 medical schools.
Although much specialized science as advanced since 1977 (such as immunology) I believe that my continuing good scores on board exams indicated that I was keeping up.
17 Subjects: including algebra 2, chemistry, statistics, reading
...Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are
common in discrete math. Other topics in which I am well versed are formulation of proofs, which is a ma...
22 Subjects: including algebra 2, calculus, geometry, statistics
...I am currently attending DePaul University to pursue my master's degree in applied statistics. I have tutored students of varying levels and ages for more than six years. While I specialize in
high school and college level mathematics, I have had success tutoring elementary and middle school students as well.
19 Subjects: including algebra 2, calculus, statistics, algebra 1 | {"url":"http://www.purplemath.com/Buffalo_Grove_Algebra_2_tutors.php","timestamp":"2014-04-18T23:28:50Z","content_type":null,"content_length":"24271","record_id":"<urn:uuid:44fd7c8a-890b-4b36-961e-d0862c7234d9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Port Washington, NY Math Tutor
Find a Port Washington, NY Math Tutor
...I believe that everybody learns at his or her own pace and with his or her own style and that one's instructor needs to be cognizant of this. Although I have taken several upper level
engineering courses, I am focusing on tutoring math through calculus II, including SAT Math, and high school lev...
16 Subjects: including prealgebra, precalculus, reading, trigonometry
...If you need an exceptional,committed, and dedicated tutor please contact me and my rate is always negotiable. Thank you! I worked with Champion Learning and Liberty Learning Center for 8 years
where I had the opportunity to work with students from K-6th very frequently.
37 Subjects: including calculus, linear algebra, algebra 1, algebra 2
...In each psychology, math, or accounting course I have taken thus far in my college career I have received a 95 or above (nothing less than an A). I am also part of two honors societies at my
school which are based on students having a GPA of 3.5 or above. I am looking forward to sharing my inter...
11 Subjects: including calculus, geometry, physics, precalculus
...The purpose of math is to engage in logical thinking. Throughout student teaching and tutoring I have always placed emphasis on the theory behind a formula (if you know the theory, there is no
wording on an exam that can disguise or complicate a question), and it is very pleasing to hear when a ...
5 Subjects: including algebra 1, algebra 2, geometry, precalculus
...I'm able to work with a variety of different mediums, including: charcoal, colored pencil, pen, clay, glass (specifically stained glass using the Copper Foil or Tiffany method), metal, and
printmaking tools for producing woodblock, intaglio, aquatint, or drypoint prints. I'm happy to help one co...
21 Subjects: including algebra 2, SAT math, geometry, algebra 1 | {"url":"http://www.purplemath.com/Port_Washington_NY_Math_tutors.php","timestamp":"2014-04-19T17:42:41Z","content_type":null,"content_length":"24292","record_id":"<urn:uuid:a07c0874-9976-4f05-ba9d-f3a67e12a8f8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
bin/157177: primes(1) prints non-prime for numbers > 2^32
Robert Lorentz robert.lorentz at gmail.com
Tue Jun 14 04:00:24 UTC 2011
The following reply was made to PR bin/157177; it has been noted by GNATS.
From: Robert Lorentz <robert.lorentz at gmail.com>
To: bug-followup at FreeBSD.org,
kcwu at kcwu.csie.org
Subject: Re: bin/157177: primes(1) prints non-prime for numbers > 2^32
Date: Mon, 13 Jun 2011 23:29:33 -0400
I regression tested this on FreeBSD 8.2-RELEASE against FreeBSD =
9.0-CURRENT r221981, using primegen-0.97 on amd64 platform and came up =
with some interesting results.
First off, from primes(1) man page, syntax is:
primes [ low [high] ]
So your bc line effectively says:
primes 4294967296=20
Where low =3D 4294967296 and high is not explicitly stated. According =
to the man page high defaults to 1000000000 where the last prime =
possible is 999999937. Since 1,000,000,000 < 4,294,967,296, the correct =
output of "primes 4294967296" should be nothing.
In FreeBSD 8.2-RELEASE, primes correctly does output nothing.
However on FreeBSD 9.0-CURRENT, primes incorrectly prints primes =
starting at 4,294,967,296 and seems to go forever (not sure where it =
will stop). This is contrary to what the manual page says and is a =
Your original problem I did regression test and confirm to be working in =
8.2-RELEASE and broken in 9.0-CURRENT. I isolated one of your examples =
primes 4295360520 4295360522 | xargs -n 1 factor
On FreeBSD 9.0-CURRENT: 4295360521: 65539 65539
On FreeBSD 8.2-RELEASE: No output=20
On FreeBSD 9.0-CURRENT I debugged the source in =
/usr/ports/math/primegen/work/primegen-0.97 a bit and realized that if I =
ran the compiled version in =
/usr/ports/math/primegen/work/primegen-0.97/primes I got the correct =
expected results. However, if I run the installed version in =
/usr/games/primes, I get the incorrect results. The binaries in those =
two places aren't the same (verified using md5). =20
This appears to be an issue with the port building, probably building in =
32 bit. If the inputs to primes are interpreted as 32bit then a "low" =
of (2^32 + 1) is interpreted as 1, therefore being less than 1000000000, =
therefore the code would continue to generate primes, and if this is the =
case then I wouldn't be surprised that the prime generation code also =
would misbehave.
More information about the freebsd-bugs mailing list | {"url":"http://lists.freebsd.org/pipermail/freebsd-bugs/2011-June/044543.html","timestamp":"2014-04-20T16:21:19Z","content_type":null,"content_length":"5367","record_id":"<urn:uuid:89e232ba-7dea-46d4-ba2e-677c34a59272>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tao Han
University of Pittsburgh
420B Allen Hall
Pittsburgh, PA 15260 USA
Office: (412) 624-2763
Fax: (412) 624-9163
My research field is in elementary particle physics theory, focusing on high-energy collider physics and in connection to astro-particle physics and cosmology. I formulate theoretical models of
elementay particles and their interactions, and develop the strategies to test the theory by experiments and observations. This research direction, bridging the abstract theory and experimental
observation, is the field of Phenomenology.
The fundamental questions I have been contemplating on from theory, and seeking for an answer in experiments, include
• The origin of mass for elementary particles
• Fundamental forces and their unification
• Symmetries and their breakdown: Gauge-, Super-symmetry, CP Violation etc.
• Nature of particle Dark Matter
• Property of Space-time
The experiments at the Large Hadron Collider (LHC) are collecting a staggering amount of data at the unprecedented energy and luminosity frontier. The discovery of neutrino mass and oscillation has
stimulated the theoretical development for further understanding of many fundamental aspects in particle physics. Experiments and observation in cosmology and astroparticle physics have deeply
reached the regime to probe dark matter hypothesis associated with the weak scale new physics. Phenomenology is thus in a golden era and I am thrilled to be part of the team at the dawn of the major
discoveries associated with the above questions at the shortest distances of about 10^-10 nm.
Research Highlights
Representative Presentations
Recent Publications:
• Baryon number violation at the LHC: the top option. Zhe Dong, Gauthier Durieux, Jean-Marc Gerard, Tao Han, Fabio Maltoni. e-Print: arXiv:1107.3805 [hep-ph]
• Phenomenology of a lepton triplet. Antonio Delgado, Camilo Garcia Cely, Tao Han, Zhihui Wang, Phys.Rev. D84 (2011) 073007, e-Print: arXiv:1105.5417 [hep-ph]
• Nearly Degenerate Gauginos and Dark Matter at the LHC. Gian F. Giudice, Tao Han, Kai Wang, Lian-Tao Wang, Phys.Rev. D81 (2010) 115011, e-Print: arXiv:1004.4902 [hep-ph]
• New Physics Signals in Longitudinal Gauge Boson Scattering at the LHC. Tao Han, David Krohn, Lian-Tao Wang, Wenhan Zhu, JHEP 1003 (2010) 082, e-Print: arXiv:0911.3656 [hep-ph] | {"url":"http://physicsandastronomy.pitt.edu/content/tao-han","timestamp":"2014-04-20T13:29:45Z","content_type":null,"content_length":"26803","record_id":"<urn:uuid:9b4ba32f-ddd8-4c21-a179-999b7ecdd841>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] nonuniform scatter operations
Geoffrey Irving irving@naml...
Sun Sep 28 15:15:30 CDT 2008
On Sat, Sep 27, 2008 at 10:01 PM, Nathan Bell <wnbell@gmail.com> wrote:
> On Sun, Sep 28, 2008 at 12:34 AM, Geoffrey Irving <irving@naml.us> wrote:
>> Is there an efficient way to implement a nonuniform gather operation
>> in numpy? Specifically, I want to do something like
>> n,m = 100,1000
>> X = random.uniform(size=n)
>> K = random.randint(n, size=m)
>> Y = random.uniform(size=m)
>> for k,y in zip(K,Y):
>> X[k] += y
>> but I want it to be fast. The naive attempt "X[K] += Y" does not
>> work, since the slice assumes the indices don't repeat.
> I don't know of numpy solution, but in scipy you could use a sparse
> matrix to perform the operation. I think the following does what you
> want.
> from scipy.sparse import coo_matrix
> X += coo_matrix( (Y, (K,zeros(m,dtype=int)), shape=(n,1)).sum(axis=1)
> This reduces to a simple C++ loop, so speed should be good:
> http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/coo.h#L139
Thanks. That works great. A slightly cleaner version is
X += coo_matrix((Y, (K, zeros_like(K)))).sum(axis=1)
The next question is: is there a similar way that generalizes to the
case where X is n by 3 and Y is m by 3 (besides the obvious loop over
range(3), that is)?
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-September/037720.html","timestamp":"2014-04-18T13:16:51Z","content_type":null,"content_length":"4495","record_id":"<urn:uuid:205149c6-fa7f-4df4-a4b0-2cfe9a1777d9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Topic-models] samplers for LDA likelihood
Wray Buntine wray.buntine at nicta.com.au
Mon Mar 29 21:01:10 EDT 2010
In response to David Mimno's questions:
> is there a general way to define an optimal mean field
> importance sampler for a new topic model? What would the mean-field
> importance sampler for CTM look like, for example?
Well, one needs to develop the mean field approximation.
Other than using Ghahramani and Beal's elegant formulation
("Propagation Algorithms for Variational {B}ayesian Learning", 2000),
which makes mean-field simple, I cannot really say much.
We've adopted the "left-to-right sequential sampler" for some
other topic models, but not the mean-field importance
sampler. I expect the mean-field importance sampler could
have more general uses.
> Also, what is the relationship between this mean field approximation and
> the standard variational approximation used in training models? The update
> in Eq. 4 doesn't quite match the standard variational update, for example,
> which seems like it should also minimize KL divergence with the
> intractable "real" model. More specifically, what are the implications of
> not including the variational Dirichlet, and just using variational
> multinomials over the words (if I'm understanding that correctly)?
Well, the standard variational method for LDA looks at the
distribution of the document proportions. This instead looks at
the distribution of word topics. The standard variational method
has a nice solution, whereas the one of Eq. 4 is an approximation.
But, in our case, for importance sampling of the word topics,
Eq 4 is sampling what we need. The standard LDA variational approach
doesn't help here, it is sampling the wrong variables.
Not sure I've helped here ;-)
Wray Buntine
Principal Researcher
Statistical Machine Learning
NICTA | Locked Bag 8001 | Canberra ACT 2601
T +61 2 6267 6323 | F +61 2 6267 6230
www.nicta.com.au | wray.buntine at nicta.com.au
>From imagination to impact.
More information about the Topic-models mailing list | {"url":"https://lists.cs.princeton.edu/pipermail/topic-models/2010-March/000760.html","timestamp":"2014-04-17T15:26:47Z","content_type":null,"content_length":"4847","record_id":"<urn:uuid:1f1c2f09-4768-4e65-8a4f-6f71dbd54c06>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question on independence
up vote 0 down vote favorite
For each natural number $n \geq 2$, define the set $A_n$ to be the set of points $p/n$ with $0 < p < n, \gcd(p,n) = 1$. Now define a sequence of independent random variables $X_1, X_2, \cdots$, where
$X_n \in [0, f(n)]$ for some non-negative function $f$ satisfying $0 \leq f \leq 1/2$ and $f > 0$ infinitely often. Now consider the random sets $B_n$ which are unions of the form $\bigcup_{a \in
A_n} [a - X_n/n, a+ X_n/n]$. Now, in this setting, does the independence of the sequence of random variables $X_n$ imply any notion of independence or 'almost' independence for the new random sets
defined? Here the probability being considered is the Lebesgue measure of the unit interval $[0,1]$, though if another natural measure $\mu \ll m$ would be fine as well.
Edit: Another attempt at rescuing this question.
Edit: I have redefined my original question; which is now listed below. I figured out what I am trying to ask more specifically.
Suppose that I am given a sequence of events $A_1, A_2, \cdots \subset [0,1]$, where $A_n = \displaystyle \bigcup_{j=1}^{k_n} [u_j, v_j]$, a finite union of closed intervals. and we don't know
whether they are independent. Suppose we give a sequence of random variables $X_1, X_2, \cdots$, where for each $n$ we have $X_n$ takes on values in $[0, f(n)]$ with respect to some probability
distribution, for some non-negative function $f$ with $0 \leq f(n) \leq 1/2$ for all $n$ and $f(n) > 0$ infinitely often. For my purposes, it suffices to assume that $f(n)$ is small; say $f(n) = o(1)
$. Now suppose that $X_1, X_2, \cdots$ is independent. Now consider the (random) sets $B_n = \displaystyle \bigcup_{j=1}^{k_n} [u_j - X_n, v_j + X_n]$. What can we say (if anything) about the
independence of the sequence of events $B_1, B_2, \cdots$?
Independent with respect to which measure on $[0,1]$? – Did May 29 '11 at 20:55
add comment
1 Answer
active oldest votes
Let $f=0$. Then we do not know whether the $A_n=B_n$ are independent.
up vote 1 down vote
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability or ask your own question. | {"url":"http://mathoverflow.net/questions/66334/a-question-on-independence","timestamp":"2014-04-21T10:31:24Z","content_type":null,"content_length":"51120","record_id":"<urn:uuid:d623b513-87d5-4631-a106-80d38aae778f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cicero, IL Geometry Tutor
Find a Cicero, IL Geometry Tutor
...I have 4 years of teaching experience: 2 years as a middle school math teacher and 2 years as a high school math teacher. During this time I created my own curriculum every year, wrote my own
instructional lessons, and designed practice and homework for every lesson. These are skills I bring to all my tutoring engagements.
17 Subjects: including geometry, calculus, physics, GRE
I am a certified teacher with 16 years of teaching experience and nearly seven years of tutoring experience, specializing in preparing students for the ACT and SAT. I have a solid history of
results with students typically making gains of 5-8 points on their ACT composite, and increases of between ...
20 Subjects: including geometry, reading, English, writing
...Also, it has review of natural numbers, arithmetic operations, integers, fractions, decimals and negative numbers, factorization of natural numbers, problems involving ratios, proportions,
percents, measurement, intro to standard Cartesian coordinate plane, and basic geometry problems. I have tu...
11 Subjects: including geometry, calculus, algebra 2, trigonometry
...I have been in the Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred
Heart. So if you are really struggling with chemistry or math or just want to improve your grades I'm the ...
20 Subjects: including geometry, chemistry, physics, GRE
...While working with attorneys, I was constantly complimented on my ability to teach without intimidation. Several of the attorneys requested that I tutor their parents in using their computers.
I enjoyed working with seniors as much as I had enjoyed working with younger students.
14 Subjects: including geometry, algebra 1, algebra 2, GED | {"url":"http://www.purplemath.com/cicero_il_geometry_tutors.php","timestamp":"2014-04-17T04:54:39Z","content_type":null,"content_length":"23989","record_id":"<urn:uuid:b34f232e-0022-4d26-94dd-5d3115806569>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume 23,
Volume 23, Issue 12, 01 December 1955
View Description Hide Description
The depolarization factors of the totally symmetric vibrations of CCl[3]H, CCl[3]D, and CCl[3]Br are computed using previously derived expressions for the spur and anisotropy of the
polarizabilitytensor. The associated normal coordinate treatment uses the Wilson FG‐matrix technique with a general quadratic potential function. Calculated and observed depolarization factors
agree well for stretching but not for the bending modes.
View Description Hide Description
The dielectric dispersion of rochelle salt was measured at 3000∼20 000 Mc/sec using a new method. In this method ε′ and ε″ is obtained by measuring the half‐power point and the shift of resonant
frequency of a rectangular TE [01] mode cavity.
These results show that the relaxation frequency of rochelle salt exists in the decimeter wave region.
View Description Hide Description
Studies of anisotropic infrared absorption demand quite thin sections of single crystals, having relatively large cross section parallel with one of the few spectroscopically significant planes.
This article describes the principles of design, the construction, and the successful operation with benzene of an apparatus for producing such specimens of single crystals. The method involves
gradual introduction of vapor into an enclosed space, supporting a gradient of temperature and having the shape desired for the specimen, until self‐nucleation occurs in a small vicinity of the
colder pole of temperature. Processes of attrition among the nuclei ensue automatically and instantaneously, resulting in the survival of only those that become attached to the confining wall
with a most favored orientation, an orientation that can be selected to some degree by factors of design and operation. Thereafter, vapor issuing into the enclosure accretes upon the few
surviving nuclei until these seeds coalesce and develop to fill the entire volume available. The method promises to be especially useful for producing and investigating crystals that exist only
at low temperatures.
View Description Hide Description
Infrared absorption spectra were observed with radiation polarized linearly along each of the two extinction axes contained in a thin cross section from a single crystal of benzene grown
especially for this purpose. The spectra indicate consistently that the cross section was the 010 plane, that they record separately the absorptions polarized along the a‐axis and the c‐axis,
respectively. The anisotropies of the induced absorptions exhibited by five fundamental molecular vibrations (ones that appear only in spectra for condensed phases) are all quite similar and may
even be identical. It thus appears that the induced motions, super‐imposed upon the several intrinsic ones by the perturbing intermolecular interactions, have a property of similarity independent
of the differing natures of the intrinsic motions that are being perturbed.
View Description Hide Description
Proton and fluorine magnetic resonance of trifluoroacetic acid in water reflects the influence of electrolyticdissociation below a mole fraction 0.5 and of hydration above 0.5. The dissociation
constant is estimated to be 1.8.
View Description Hide Description
An approximate Hamiltonian for a nonrigid internal rotor has been derived. The potential energy has been expanded in a Taylor's series in the displacement coordinates and in a Fourier series in
the angle of internal rotation Θ. The Hamiltonian was transformed by a contact transformation, and a second‐order Hamiltonian in which vibrations and rotations have been separated has been
obtained. The Hamiltonian consists of terms which constitute the usual rigid internal rotational problem, of centrifugal distortion terms involving both over‐all and internal angular momentum,
and of terms that arise because of the repulsive nature of the barrier. These repulsive terms enter as a single term, 2JF[v] (m|1 — cos3Θ|m), in the expression for the rotational transitions of
symmetric rotors, where J is the total angular momentum quantum number and m is the pseudo‐quantum number for internal rotation. The repulsive constant, F[v] , is given by the relationwhere B[xx]
^(i)+B[yy] ^(i) is the derivative of the rigid rotor rotational constant with respect to the ith symmetry coordinate, and a[i] ^(1) is one‐half the displacement of the equilibrium position of the
ith internal coordinate in going from Θ=0 to Θ=π/3.
The dependence of the barrier height upon the vibrational motion has also been studied.
View Description Hide Description
The theory of the interactions of hindered internal rotation with over‐all rotations is extended to include nonrigid asymmetric rotors. The interdependence of hindered internal rotation and
vibrations and their effect upon the vibrational energy levels is considered. The theory is developed analogously to the theory developed for symmetric rotors in a previous paper. An approximate
over‐all internal rotational wave function of the form 1/(2π)^½ exp[i(P [ z ] I [ z2]/I [ z ]—τ)]φ[ r ] is discussed, where φ[ r ] is the wave function that diagonalizes the rigid rotor
Hamiltonian and τ is an integer.
While the general energy relation involves too many parameters to compute or to fit empirically, certain special cases are discussed. In particular for the 0[00]→1[01] transition of the CH[3]NO
[2] type of molecule one has , where (1[01]|〈H〉[ R ]|1[01]) is the rigid rotor energy for the 1[01] level, Θ is the angle of internal rotation, II is the internal angular momentum,m indicates a
given internal rotational level, and the F's and G's are constants which can be evaluated from a knowledge of the vibrational characteristics of the molecule and of the forces that constitute the
hindering barrier to internal rotation.
View Description Hide Description
The physical principles involved in conventional absolute intensity measurements are reviewed. Experimental difficulties rule out the use of extrapolation techniques for some spectral
transitions. For this reason it is of interest to re‐examine the possibility of using total absorptionmeasurements, in conjunction with the curves of growth, for making intensity estimates.
Extrapolation methods yield results which are independent of spectralline shape. Use of the curves of growth, on the other hand, implies the assumption that the line contour can be described by
combined Doppler and Lorentz broadening.
The curves of growth permit a unique correlation between total absorption and f‐value either for spectral lines with pure Doppler broadening or for pure collision broadening. Furthermore, a
simple experimental procedure can be devised for estimating both the absolute intensity and the spectral line profile on the basis of single‐path and multiple‐path absorptionmeasurements. The
suggested procedure involves absorptionmeasurements for optical densities (path lengths) under conditions in which the integrated fractional absorption is a relatively sensitive function of
spectralline shape. Representative calculations referring to utilization of the proposed method have been carried out for spectral lines belonging to the ^2Σ→^2II transitions, (0,0)‐band, of OH,
and also for lines belonging to the fundamental vibration‐rotation spectrum of CO.
View Description Hide Description
Diffusion coefficients of the series of even‐numbered fatty acids from C[6] to C[18], of three dialkyl phosphoric acids, and of 2‐ethylhexoic and benzoic acids have been measured in dilute
solutions in n‐decane at 30°C. Gravity mixed diaphragm cells have been employed in making the measurements. The Stokes‐Einstein hydrodynamic relationship adjusted by an empirical coefficient
represents the magnitude and variation of diffusion coefficients in the series of fatty acids if corrections are made for solute association and nonsphericity. The unmodified Stokes‐Einstein
equation predicts diffusion coefficients lower than those measured by approximately a factor of two.
View Description Hide Description
Thermal diffusion measurements have been made on a series of solutions of polystyrene as follows: (1) Five molecular weights (10 000 to 336 000) in toluene; (2) 136 000 molecular weight in
o‐xylene, styrene, ethyl benzene, dioxane, and pyridine; (3) Styrene dimer in toluene and styrene; measurements were also made on some binary monomeric mixtures.
The thermodynamic propertyX∂[μ]/∂X describes adequately the concentration dependence of the thermal diffusion ratio α. It appears that that portion of the motion of the polystyrene molecule in
dilute and somewhat concentrated solutions which is segmental involves 10—13 chain atoms in the moving segment. The results seem consistent with Kauzmann and Eyring's picture for motion of long
chain molecules.
View Description Hide Description
The heat capacity of erbium has been measured over the range 15 to 320°K and the thermodynamic functions have been calculated. Three maxima have been observed which occur at 19.9°K, 53.5°K, and
84°K. The two at the lower temperatures show a dependence on the thermal history of the sample, and this dependence was investigated. A correlation of the various contributions to the entropy at
room temperature has been made and extended to the other rare earth metals.
View Description Hide Description
It is shown that the differences in the thermodynamic properties of isotopic molecules subject to small quantum effects (u ^2/24 law) depend on the difference in the reciprocal masses of the
atoms in the molecule and are, therefore, independent of all masses except of those atoms isotopically substituted. This theorem provides a rigorous proof of the rule of the geometric mean for
gaseous molecules. It is shown that the partition function ratio for a pair of double‐labeled molecules, e.g., N^15D[3]/N^15H[3] is equal to the ratio for the single‐labeled pair N^14D[3]/N^14H
[3]. The application of the u ^2/24 law to isotopic isomer equilibria is pointed out.
View Description Hide Description
Total collision cross sections have been measured for krypton atoms with energies between 700 and 2100 ev, scattered in room temperature krypton, to obtain potential energy information for the
interaction of two krypton atoms. The potential function may be represented byfor r between 2.42 A and 3.14 A.
The present potential appears to be consistent with potentials, valid at larger separation distances, which have been derived from measurements of gaseous compressibility, transport, and crystal
properties, within the limits of uncertainty of these larger distances potentials.
View Description Hide Description
The formal theory of the first paper of this series is extended here to include the binding of several different species of ions or molecules on a protein,aggregation in a protein solution, and
solutions containing more than one type of protein molecule.
View Description Hide Description
Usual theories of statistical mechanics of nonuniform state stand on the assumption of approximately stationary random processes. Generally speaking, this assumption is not reasonable when the
state concerned deviates considerably from thermal equilibrium. For a system in such a state, it is difficult not only to define but also to observe a complete set of gross variables. In this
case the deviations of the actual future values of the gross variables from their expected future values cannot be neglected.
A new method of coarse‐graining is proposed as available to such a state. As a concrete example of system we take a gas. We get a coarse‐grained function 〈f〉[ D ] by averaging the fine‐grained
probability function of one molecule over a series of observations made on the system on different occasions where at each observation the system is initially in a common macroscopic state as
determined by our incomplete set of initial conditions. It is shown that 〈f〉[ D ] satisfies the Boltzmann‐Maxwell equation. The equation is valid even when the state concerned deviates
considerably from thermal equilibrium and the random processes are not stationary. In this situation, however, the interpretation of the equation is different from the usual one; i.e., the
equation predicts not the result of an individual observation of the gas concerned but the result obtained by averaging over a series of observations made repeatedly on different occasions on the
same gas under a common incomplete set of macroscopic initial conditions where, with the lapse of time from the start, the result of each observation deviates from the others even in the
macroscopic sense. For the time being, the theory is given only in the sense of classical mechanics.
View Description Hide Description
Attention is directed to the very large departures of the structures of the zinc family (IIb) metals from those produced by the close packing of spheres. This seems to indicate that cohesion in
these metals differs considerably from the normal nondirectional metallic bonding. It is proposed that there is a system of covalent bonds in the basal plane resulting from bonding orbitals which
are hybrids of one of the d and two of the p atomic orbitals. These absorb some of the s electrons and leave only 1.33 electrons/atom in the s band. Support for the proposed bonding scheme is
provided by showing that with it one can account for many otherwise puzzling properties of the IIb metals and their alloys. The properties which are discussed in terms of the proposed scheme are
(1) the abnormally large axial ratios in zinc and cadmium, (2) the effect of temperature on the axial ratios of these metals and the alloys of magnesium and cadmium, (3) the effect of alloying on
the axial ratio, (4) the dependence of the axial ratio on the degree of order in MgCd[3], (5) the great abundance of Schottky defects in certain magnesium‐cadmium alloys, (6) the asymmetry of the
heats of alloying of magnesium and cadmium, (7) the structure of solid mercury, (8) electrical conductivities of the solids, (9) the positive Hall coefficient for zinc and cadmium, (10) the poor
solvent power of these metals as solids, and (11) certain unusual features of the electrical conductivities of the liquid metals and the dilute amalgams.
View Description Hide Description
The photolysis of acetic anhydride has been studied using a hot mercury arc at temperatures from 60° to 160°C by analysis of the gaseous products. CO and C[2]H[6] are produced at equal rates
which are one‐half the rate of production of CO[2]. At 60°C, CO production is delayed initially but in time reaches the same rate as C[2]H[6]. Using acetone as actinometer, the quantum yield of
CO[2] is two, of CO and C[2]H[6], unity. The decomposition of acetic anhydride can be photosensitized by acetone. Mass‐spectral analysis of the liquid residue at 160°C showed the absence of
acetone, biacetyl, methyl acetate, acetylacetone, and acetonyl acetone. A fragment of mass 57, increasing in intensity with duration of reaction was present in amounts sufficient to account for a
material balance. Some steps in the mechanism are discussed.
View Description Hide Description
The electrokinetic effects streaming potential, streaming current, and electro‐osmotic pressure were studied by applying and measuring sinusoidal variations of hydrodynamic pressure and
electrical voltage. Phenomenological relations between the effects were investigated, and an improved experimental method for measuring the electrokinetic coefficients, hence the zeta potential,
was used. Saxen's law was verified within 6% at frequencies of 20, 100, and 200 cycles per second. The systems studied were restricted to glass‐water and glass‐salt solutions. The advantages and
disadvantages of using sinusoidally varying quantities for electrokineticmeasurements are discussed.
View Description Hide Description
A variational approach to the calculation of the radial distribution function is presented. The approximations consist in the neglect of fourth‐order correlation in the entropy and the use of a
constant third‐order correlation chosen to satisfy the third‐order normalization condition. The average interaction energy, containing only pair terms, does not involve correlations higher than
second order. Finally, one obtains approximate expression for the excess Helmholtz free energy as a function of the radial distribution function (r.d.f.) and macroscopic parameters. This free
energy, when minimized with respect to the r.d.f. at constant temperature and density, yields an integral equation for the r.d.f. This theory has a simpler structure and yields thermodynamic
functions in a more direct way than earlier theories. The theory has not yet been adequately tested; however, the author speculates that it will give good results for short‐range forces but poor
results for long‐range forces.
View Description Hide Description
The nonlinearity of the differential equations governing space‐charge buildup or decay cause the principle of superposition to be inapplicable. Thus, charging and discharging curves should differ
and depend strongly on applied voltage when space‐charge formation is appreciable. Two approximate theories of space‐charge formation and decay are compared, and it is found that one of their
main differences is that one has been partly linearized while the other has not. Finally, it is mentioned that of the several theories of the ac response of materials with space charge, only one
has not been linearized; hence, it is the only one applicable for applied voltages above about kT/e. | {"url":"http://scitation.aip.org/content/aip/journal/jcp/23/12","timestamp":"2014-04-16T16:23:42Z","content_type":null,"content_length":"174419","record_id":"<urn:uuid:f6856691-2271-4eb1-9d28-919b668738da>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lorton, VA Algebra Tutor
Find a Lorton, VA Algebra Tutor
Hello, I am currently teaching at a high school. I teach in general education and special education classrooms. I have had great success in helping my students maximize their math learning and
success. My SOL pass rate is always near the top in FCPS. Students enjoy working with me. I make math fun!
7 Subjects: including algebra 1, algebra 2, geometry, special needs
...I work with a lot higher level math most of the time, but explaining this sort of math is like the alphabet to me and I can make it that easy for my students. I can assist any student with the
Virginia SOL (Standards of Learning) tests in Algebra, Geometry and/or Chemistry and have extensive experience doing so. Thanks very much for your consideration.
28 Subjects: including algebra 2, algebra 1, chemistry, calculus
...Additionally, for three years during school, I was a teaching assistant for an honors biology course. I love helping students learn these technical subjects, and I relish the opportunity to
practice my teaching skills.A good start is crucial when it comes to mathematics, and it's difficult to se...
14 Subjects: including algebra 2, algebra 1, biology, Java
I am a retired chemist with 30 years of practical, chemical and pharmaceutical industrial experience. I have enjoyed tutoring high school level chemistry and algebra for the last 3 years. I use a
process oriented approach to teaching math and science principles.
5 Subjects: including algebra 1, algebra 2, chemistry, prealgebra
...Continuity as a Property of Functions. II. Derivatives A.
21 Subjects: including algebra 1, algebra 2, calculus, statistics
Related Lorton, VA Tutors
Lorton, VA Accounting Tutors
Lorton, VA ACT Tutors
Lorton, VA Algebra Tutors
Lorton, VA Algebra 2 Tutors
Lorton, VA Calculus Tutors
Lorton, VA Geometry Tutors
Lorton, VA Math Tutors
Lorton, VA Prealgebra Tutors
Lorton, VA Precalculus Tutors
Lorton, VA SAT Tutors
Lorton, VA SAT Math Tutors
Lorton, VA Science Tutors
Lorton, VA Statistics Tutors
Lorton, VA Trigonometry Tutors | {"url":"http://www.purplemath.com/Lorton_VA_Algebra_tutors.php","timestamp":"2014-04-16T13:38:33Z","content_type":null,"content_length":"23720","record_id":"<urn:uuid:a5084237-fb06-4cfc-b9e2-1bdb2af5ce92>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Virginia
3.M.S.3.1 Through communication, representation, reasoning and proof, problem solving, and making connections within and beyond the field of mathematics, students will demonstrate understanding of
numbers, ways of representing numbers, and relationships among numbers and number systems, demonstrate meanings of operations and how they relate to one another, and compute fluently and make
reasonable estimates.
3.M.S.3.2 Through communication, representation, reasoning and proof, problem solving, and making connections within and beyond the field of mathematics, students will demonstrate understanding of
patterns, relations and functions, represent and analyze mathematical situations and structures using algebraic symbols, use mathematical models to represent and understand quantitative
relationships, and analyze change in various contexts.
3.M.S.3.3 Through communication, representation, reasoning and proof, problem solving, and making connections within and beyond the field of mathematics, students will analyze characteristics and
properties of two- and three-dimensional geometric shapes and develop mathematical arguments about geometric relationships, specify locations and describe spatial relationships using coordinate
geometry and other representational systems, apply transformations and use symmetry to analyze mathematical situations, and solve problems using visualization, spatial reasoning, and geometric
3.M.S.3.4 Through communication, representation, reasoning and proof, problem solving, and making connections within and beyond the field of mathematics, students will demonstrate understanding of
measurable attributes of objects and the units, systems, and processes of measurement, and apply appropriate techniques, tools and formulas to determine measurements.
3.M.S.3.5 Through communication, representation, reasoning and proof, problem solving, and making connections within and beyond the field of mathematics, students will formulate questions that can be
addressed with data and collect, organize, and display relevant data to answer them, select and use appropriate statistical methods to analyze data, develop and evaluate inferences and predictions
that are based on models, and apply and demonstrate an understanding of basic concepts of probability. | {"url":"http://www.ixl.com/standards/west-virginia/math/grade-3?documentId=2001000141&subsetId=-1","timestamp":"2014-04-17T06:41:15Z","content_type":null,"content_length":"79398","record_id":"<urn:uuid:eb9e854b-0665-4c5d-8803-b111e1720646>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stockbridge, GA ACT Tutor
Find a Stockbridge, GA ACT Tutor
I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including ACT Math, geometry, statistics, probability
...I also ran cross country in high school and participated or led several service organizations in college and high school, so I can easily relate to a wide variety of interests and backgrounds.
I bring a professional, optimistic, and energetic attitude to every session and I think that everyone c...
17 Subjects: including ACT Math, chemistry, writing, physics
...As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. I can tutor basic Mendelian genetics, Complex patterns of inheritance, Molecular biology/
genetics, and eukaryotic and prokaryotic genetics. I have also tutored genetics to undergraduate students.
15 Subjects: including ACT Math, chemistry, geometry, biology
I have been homeschooling children in all subjects including math for the last 8 years, and have taught math working one-on-one with other children with various learning styles. I have a Bachelor
of Science degree in Mechanical Engineering and a MBA, and professional experience with Fortune 500 com...
15 Subjects: including ACT Math, algebra 1, GRE, GED
...I do still study the topics to keep the information fresh in my head. Took the actual test when I considered joining the military. Made a 92 or 93 on the actual test.
29 Subjects: including ACT Math, chemistry, reading, physics
Related Stockbridge, GA Tutors
Stockbridge, GA Accounting Tutors
Stockbridge, GA ACT Tutors
Stockbridge, GA Algebra Tutors
Stockbridge, GA Algebra 2 Tutors
Stockbridge, GA Calculus Tutors
Stockbridge, GA Geometry Tutors
Stockbridge, GA Math Tutors
Stockbridge, GA Prealgebra Tutors
Stockbridge, GA Precalculus Tutors
Stockbridge, GA SAT Tutors
Stockbridge, GA SAT Math Tutors
Stockbridge, GA Science Tutors
Stockbridge, GA Statistics Tutors
Stockbridge, GA Trigonometry Tutors
Nearby Cities With ACT Tutor
Chamblee, GA ACT Tutors
Conley ACT Tutors
Covington, GA ACT Tutors
Ellenwood ACT Tutors
Fayetteville, GA ACT Tutors
Forest Park, GA ACT Tutors
Hampton, GA ACT Tutors
Hapeville, GA ACT Tutors
Jonesboro, GA ACT Tutors
Lake City, GA ACT Tutors
Lovejoy, GA ACT Tutors
Mcdonough ACT Tutors
Morrow, GA ACT Tutors
Rex, GA ACT Tutors
Tyrone, GA ACT Tutors | {"url":"http://www.purplemath.com/Stockbridge_GA_ACT_tutors.php","timestamp":"2014-04-17T01:38:08Z","content_type":null,"content_length":"23836","record_id":"<urn:uuid:706bb230-1d77-45da-96bd-32658b8221f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
, in Time, is sometimes taken for an extremely small part of duration; but, more properly, it is only an instant or termination or limit in time, like a point in geometry. Maclaurin's Fluxions, vol.
1, pa. 245.
, in the new Doctrine of Infinites, denote the indefinitely small parts of quantity; or they are the same with what are otherwise called infinitesimals, and differences, or increments and decrements;
being the momentary increments-or decrements of quantity considered as in a continual flux.
Moments are the generative principles of magnitude: they have no determined magnitude of their own; but are only inceptive of magnitude.
Hence, as it is the same thing, if, instead of these Moments, the velocities of their increases and decreases be made use of, or the finite quantities that are proportional to such velocities; the
method of proceeding which considers the motions, changes, or fluxions of quantities, is denominated, by Sir Isaac Newton, the Method of Fluxions.
Leibnitz, and most foreigners, considering these infinitely small parts, or insinitesimals, as the differences of two quantities; and thence endeavouring to find the differences of quantities, i. e.
some Moments, or quantities indefinitely small, which taken an infinite number of times shall equal given quantities; call these Mo-| ments, Differences; and the method of procedure, the Differential
, or Momentum, in Mechanics, is the same thing with Impetus, or the quantity of motion in a moving body.
In comparing the motions of bodies, the ratio of their Momenta is always compounded of the quantity of matter and the celerity of the moving body: so that the momentum of any such body, may be
considered as the rectangle or product of the quantity of matter and the velocity of the motion. As, if b denote any body, or the quantity or mass of matter, and v the velocity of its motion; then bv
will express, or be proportional to, its Momentum m. Also if B be another body, and V its velocity; then its Momentum M, is as BV. So that, in general, , i. e. the Momenta are as the products of the
mass and velocity. Hence, if the Momenta M and m be equal, then shall the two products BV and bv be equal also; and consequently , or the bodies will be to each other in the inverse or reciprocal
ratio of their velocities; that is, either body is so much the greater as its velocity is less. And this force of Momentum is of a different kind from, and incomparably greater than, any mere dead
weight, or pressure, whatever.
The Momentum also of any moving body, may be considered as the aggregate or sum of all the Momenta of the parts of that body; and therefore when the magnitudes and number of particles are the same,
and also moved with the same celerity, then will the Momenta of the wholes be the same also.
MONADES. Digits. | {"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/m/moment.html","timestamp":"2014-04-18T11:05:00Z","content_type":null,"content_length":"8063","record_id":"<urn:uuid:3812d43a-a442-437c-9764-28b960b22918>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about $\Delta(n)_{U}$ notaion in illusie's cotangent compelexe et deformations
up vote 1 down vote favorite
In illusie's book cotangent complexe et deformations, 38page, the notation $\Delta(n)_{U}$ appears, and I cannot find the direct explanation or hint about meaning of this notation in this book.
I think $\Delta(n)$ means simply the simplicial set [0,n] in this book, but $\Delta(n)_{U}$ in 38page is different, I think It is an object of topos Simpl(T) but cannot understand what it exactly
Another question is also the question about notation in the same book.
In 56page of this book, the notation $\mathbb{Z}^{(\Delta(n))}$ appears, and I can't understand this notation and I can't find appropriate explanation in the book. In this case, the meaning of $\
Delta(n)$ is not seams to be the only simplicial set [0,n]. I think it should be understanded by a sort of functor, but I can't find what it exactly means too.
If you knows about these notations in the book, or another resource explained about this part in english, it must be a big help to me. Thanks
cotangent-complex ag.algebraic-geometry
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged cotangent-complex ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/126254/question-about-deltan-u-notaion-in-illusies-cotangent-compelexe-et-defor","timestamp":"2014-04-17T07:27:43Z","content_type":null,"content_length":"46350","record_id":"<urn:uuid:99e6d3d0-42a4-41b1-81e7-8581a399fa1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possible Answer
How to run a chi-square 2 way test in SPSS In this example, we want to test the claim that there is an association between the restrictions on - read more
Chi-Square Test for Association using SPSS Introduction. The chi-square test for independence, ... In our enhanced linear regression guide, we show you how to correctly enter data in SPSS to run a
chi-square test for independence. Alternately, we have a generic, ... - read more
Share your answer: how to run chi squared in spss?
how to run chi squared in spss resources | {"url":"http://www.askives.com/how-to-run-chi-squared-in-spss.html","timestamp":"2014-04-20T01:51:29Z","content_type":null,"content_length":"34901","record_id":"<urn:uuid:c8eb1f3c-f0fb-49b6-ac3b-435d03026e8f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
(16x2/3)/(4x1/4) simplify using property of exponents.
• 7 months ago
• 7 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/521c39d9e4b06211a67d7acf","timestamp":"2014-04-18T10:49:21Z","content_type":null,"content_length":"39673","record_id":"<urn:uuid:96d87e47-94a4-491e-ae72-10a10d2c604f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Petco Park Fences Are Coming Along Nicely…
Wow… that’s a significant move*. It’s funny, you hear the numbers on the fences at Petco being moved in, but really… none of that means crap until you see it with your own crazy eyes… then it
becomes real. It’s science.
I did some quick math**, and based on Chase Headley’s 2012 numbers… if projected out, he’s going to hit 158 home runs at Petco in 2013. NL MVP. You heard it here first.
UPDATE: Here’s another photo b/c I immediately got one billion questions about the fence placement when I put up this post.
Photo and tweet via Tom Garfinkel: “The black fence is the construction fence… The concrete fence represents where the new wall will be..”
[via, h/t Mickey]
* The black wall is the construction fence, not where the real fence will be (that would make it a Little League field)… I believe the new fence is near all the trucks gathered by the dirt pile. (I
** I did no math.
4 Responses to Petco Park Fences Are Coming Along Nicely…
1. ** I did the math.
You’re not that far off. According to my calculations Headley will hit 161.2933333 (3s repeating, of course) home runs. How much long till Spring?
□ thanks. fixed.
2. Padres need moveable fences. They are the only ones who cant hit it out. Bottom of every inning, move em in 30′. Call it a home field advantage.
□ hahaha.
This entry was posted in Baseball, MLB and tagged 2013 padres, chase headley, Padres, Petco Park, petco park fences are moving. Bookmark the permalink. | {"url":"http://www.lobshots.com/2013/01/08/petco-park-fences-are-coming-along-nicely/","timestamp":"2014-04-17T10:34:13Z","content_type":null,"content_length":"21079","record_id":"<urn:uuid:597d443d-9314-464a-94ad-12be4282f2b0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 19
"... symmetric λµ-calculus ..."
"... Abstract. We give an analysis of various classical axioms and characterize a notion of minimal classical logic that enforces Peirce’s law without enforcing Ex Falso Quodlibet. We show that a
“natural ” implementation of this logic is Parigot’s classical natural deduction. We then move on to the comp ..."
Cited by 9 (5 self)
Add to MetaCart
Abstract. We give an analysis of various classical axioms and characterize a notion of minimal classical logic that enforces Peirce’s law without enforcing Ex Falso Quodlibet. We show that a “natural
” implementation of this logic is Parigot’s classical natural deduction. We then move on to the computational side and emphasize that Parigot’s λµ corresponds to minimal classical logic. A
continuation constant must be added to λµ to get full classical logic. The extended calculus is isomorphic to a syntactical restriction of Felleisen’s theory of control that offers a more expressive
reduction semantics. This isomorphic calculus is in correspondence with a refined version of Prawitz’s natural deduction.
- Computational Logic and Applications CLA’05. Discrete Mathematics and Theoretical Computer Science proc , 2006
"... LAMA- Équipe de logique, Université de Savoie, F-73376 Le Bourget du Lac, France In this paper, we introduce the λµ ∧ ∨- call-by-value calculus and we give a proof of the Church-Rosser property
of this system. This proof is an adaptation of that of Andou (2003) which uses an extended parallel reduct ..."
Cited by 3 (0 self)
Add to MetaCart
LAMA- Équipe de logique, Université de Savoie, F-73376 Le Bourget du Lac, France In this paper, we introduce the λµ ∧ ∨- call-by-value calculus and we give a proof of the Church-Rosser property of
this system. This proof is an adaptation of that of Andou (2003) which uses an extended parallel reduction method and complete development. Keywords: Call-by-value, Church-Rosser, Propositional
classical logic, Parallel reduction, Complete development 1
- in the π-calculus. IFIP-TCS’12, LNCS 7604 , 2012
"... We study the Λµ-calculus, extended with explicit substitution, and define a compositional output-based translation into a variant of the π-calculus with pairing. We show that this translation
preserves single-step explicit head reduction with respect to contextual equivalence. We use this result to ..."
Cited by 1 (1 self)
Add to MetaCart
We study the Λµ-calculus, extended with explicit substitution, and define a compositional output-based translation into a variant of the π-calculus with pairing. We show that this translation
preserves single-step explicit head reduction with respect to contextual equivalence. We use this result to show operational soundness for head reduction, adequacy, and operational completeness.
Using a notion of implicative type-context assignment for the π-calculus, we also show that assignable types are preserved by the translation. We finish by showing that termination is preserved.
- in Computational Type Theory Diploma thesis, Institut für Informatik, Universität Potsdam , 2009
"... Abstract. We present a hybrid proof calculus λµPRL that combines the propositional fragment of computational type theory with classical reasoning rules from the λµ-calculi. The calculus supports
the top-down development of proofs as well as the extraction of proof terms in a functional programming l ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. We present a hybrid proof calculus λµPRL that combines the propositional fragment of computational type theory with classical reasoning rules from the λµ-calculi. The calculus supports the
top-down development of proofs as well as the extraction of proof terms in a functional programming language extended by a nonconstructive binding operator. It enables a user to employ a mix of
constructive and classical reasoning techniques and to extract algorithms from proofs of specification theorems that are fully executable if classical arguments occur only in proof parts related to
the validation of the algorithm. We prove the calculus sound and complete for classical propositional logic, introduce the concept of µ-safe terms to identify proof terms corresponding to
constructive proofs and show that the restriction of λµPRL to µ-safe proof terms is sound and complete for intuitionistic propositional logic. We also show that an extension of λµPRL to arithmetical
and first-order expressions is isomorphic to Murthy’s calculus P ROGK.
"... We prove the strong normalization of full classical natural deduction (i.e. with conjunction, disjunction and permutative conversions) by using a translation into the simply typed λµ-calculus.
We also extend Mendler’s result on recursive equations to this system. 1 ..."
Add to MetaCart
We prove the strong normalization of full classical natural deduction (i.e. with conjunction, disjunction and permutative conversions) by using a translation into the simply typed λµ-calculus. We
also extend Mendler’s result on recursive equations to this system. 1
, 905
"... A completeness result for the simply typed λµ-calculus ..."
, 905
"... Confluency property of the call-by-value λµ ∧ ∨-calculus ..."
, 905
"... Arithmetical proofs of strong normalization results for the symmetric λµ-calculus ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=718716","timestamp":"2014-04-24T13:46:34Z","content_type":null,"content_length":"30304","record_id":"<urn:uuid:a1dc9334-e421-4ecb-995b-8c20284f191c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding Bayesian Inference
Understanding Bayesian approaches to estimating probabilities are important. Often people don't get the full import of it and/or fail to see the consequences of not estimating it the right way. Most
books that discuss it have confusing terminology to explain something that is fairly simple. Estimating Bayesian probabilities for events are relatively easier to understand, while those involving
hypotheses aren't so even though the math involved is the exact same! In this blog, I'll try to explain the concept, but more importantly show the formula that any student can use so that they don't
have "think through" the Bayesian logic all the time. However for some problems, it might be better to enumerate out the cases. To develop an intuition for this, you need to keep an eye out for new
information that could be coming in and altering our belief in an existing hypothesis.
Here is the structure of the problem you will almost always run into. For simplicity let us assume you have a two hypotheses H0 and H1. There are probabilities associated with them P(H0) and P(H1).
There comes along a piece of evidence E and you want to update your probability measures. The formula you could use is as simple as
..and thats it! This works, always. You just need to be able to map your problem to this framework.
To demonstrate this here is an example worked out.
Q. There are two boxes. Box A has 2 white coins and 1 black coin. Box B has 1 white coin and 2 black coins. If a user picks a box at random, what is the probability that box A was chosen? If the user
reveals the coin to be a white one, what is the probability that box A was chosen?
A. The first half is simple. No extra information is revealed. The probability is simply 50%. The second half gets a little more interesting. As there are only two boxes to be chosen, knowing the
probability of one implies you know the other.
So let's cast it in the framework indicated above. The hypothesis H0 is that box A was chosen. P(H0) is the "prior", that is the probability that box A was chosen prior to any new knowledge or
evidence. This we know is 50% which is the same as P(H1).
The next piece is to understand P(E | H0). This is the probability that you would see the "evidence"
that the hypothesis H0 is true. In this case its relatively easy to estimate this as 2/3. This also means that P(E | H1) is 1/3. Plugging all these into the equation above gives,
The intuition behind this is also easy to follow. As the person revealed a white ball it is more likely it came from box A than B. To further reinforce that intuition, think what would you have
concluded if box A had a 100 white coins.
The above is a relatively simple exercise in Bayesian inference. While most real world problems can be mapped to this frame work the difficulty comes in
• Realizing there is a Bayesian "trap" hidden somewhere
• If there is, casting it to the above framework
Here is another example of a scenario where Bayesian thinking comes into play often, that of tests for diseases.
Q. Assume there is a disease D, that has a test T. Overall 2% of the population get the disease. If a person actually has the disease, the test is right 90% of the time. If the person does not have
the disease, the test could still show as positive 20% of the time. If the test shows positive for a person, what is the probability that the person has the disease.
A. Here, the hypotheses are "No Disease" and "Disease", named H0 and H1 respectively. With no prior evidence we know that P(H0) is 98% and P(H1) = 100% - 98% = 2%. Now, there is new evidence (E) that
the test is showing up as a positive. So let us see how each of the parts would fit in.
We want to estimate P(H1|E), we know P(H1) & P(H0). Additionally we know P(E|H1) = 90% and thus P(E | H0) = 10%. Simply plug them all in again.
Notice, that even though the test is right 90% of the time the person actually has just a 15.51% chance of having it given the test proved positive. The intuition here is that it is a rare disease
and it would take a very accurate test to confirm it.
All is fine in such scenarios where the numbers are nicely given to us. The Bayesian angle is becomes elusive when it is not put forth cleanly. The next example demonstrates that.
Q. A man has two children. One of them is a boy, what is the probability that the other is a girl?
A. You might be tempted to say 50%. You be wrong! Here is why. Your evidence (E) here is "one of the children is a boy". The hypothesis you want the probability for is H0 = "Other child is a girl".
This makes H1 = "Other child is a boy". The values for P(H0) = P(H1) = 1/2. We want to estimate P(H0 | E). To estimate this, notice P(E | H1) = 1/4, as there is exactly one way this is possible. Now
we are all set, simply plug it into the formula (again!)
$$P(H_{0}|E) = \frac{\frac{1}{2}\times \frac{1}{2}}{\frac{1}{2}\times \frac{1}{2} + \frac{1}{4}\times \frac{1}{2}} = \frac{2}{3}$$
This is one example where it is likely easier to visualize the problem. In the diagram below, the left hand side shows the situation without any information, and the right hand side shows the
information provided and how it ends up encapsulating the relevant cases. It is easier to see why the probability is 66% from this figure.
No discussion on Bayesian inference is complete without a mention of the
Monty Hall
problem. The problem statement is quite simple, you are shown 3 doors behind one of which there is a treasure. You are allowed to pick one door, but not open it. Once you pick a door, a door which
does not contain the treasure is opened. You are allowed to stick to your choice or switch. What should you do? The simplest explanation is as follows: If you switch the probability that you will win
is 1/2, else it is 1/3, so you must always switch.
Clearly not all fit the "formula" framework. Some of these problems can solved more easily by using the conventional counting method. Perhaps the most startling of Bayesian puzzles to hit the web is
Tuesday Birthday
problem. It is a very subtle variant of the boy/girl problem mentioned above, but with a startling result. The problem is "A man has two children. One of them is a boy born on a Tuesday. What is the
probability that the other child is a boy?". The link I mention above (and other sources on the web) describe the solution and I'll try to describe it in my own words here.
If the first child is a boy born on a Tuesday, then the second child can be either Boy/Girl and could be born on any of the 7 days. This yields 14 cases (7 x 2). If the second child is a boy born on
a Tuesday, then, just as the previous argument, the first child can be a Boy/Girl born on any of the 7 days yielding 7 x 2 = 14 cases. However, both sets have a case of a Boy-Boy. The total 14 + 14 =
28 double counts this case. So in reality we have 28 - 1 = 27 cases. Next of these 27 cases, we need to know how many have two boys in them. We can apply the same logic. If the first child is a boy
born on a Tuesday, the second boy child can be born on any of the 7 days giving 7 cases. Same logic applies if the second child is a boy born on a Tuesday, but like before we need to subtract one
because the case of boy-boy is counted twice. This gives a total of 13 cases where there are two boys. So the required probability is 13/27.
If you are creative you can extend this to make your own tricky problems. What happens if you change day of week to month of year? If you follow the train of thought above, you will arrive at 23/47,
which is slightly greater than 13/27.
If you are looking to buy some books in probability here are some of the best books to learn the art of Probability here are some great books to own
Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics) This book is a great compilation that covers quite a bit of puzzles. What I like about these puzzles are that
they are all tractable and don't require too much advanced mathematics to solve.
Introduction to Algorithms This is a book on algorithms, some of them are probabilistic. But the book is a must have for students, job candidates even full time engineers & data scientists
Introduction to Probability Theory
An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition
The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else!)
Introduction to Probability, 2nd Edition
The Mathematics of Poker Good read. Overall Poker/Blackjack type card games are a good way to get introduced to probability theory
Let There Be Range!: Crushing SSNL/MSNL No-Limit Hold'em Games Easily the most expensive book out there. So if the item above piques your interest and you want to go pro, go for it.
Quantum Poker Well written and easy to read mathematics. For the Poker beginner.
Bundle of Algorithms in Java, Third Edition, Parts 1-5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition) (Pts. 1-5) An excellent resource (students/engineers/
entrepreneurs) if you are looking for some code that you can take and implement directly on the job.
Understanding Probability: Chance Rules in Everyday Life A bit pricy when compared to the first one, but I like the look and feel of the text used. It is simple to read and understand which is vital
especially if you are trying to get into the subject
Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems) This one is a must have if you want to learn machine learning. The
book is beautifully written and ideal for the engineer/student who doesn't want to get too much into the details of a machine learned approach but wants a working knowledge of it. There are some
great examples and test data in the text book too.
Discovering Statistics Using R This is a good book if you are new to statistics & probability while simultaneously getting started with a programming language. The book supports R and is written in a
casual humorous way making it an easy read. Great for beginners. Some of the data on the companion website could be missing.
4 comments:
1. On second child problem - (1/4) / (1/4 + 1/8) = 2/3, not 1/3.
Same is obvious from the picture - 'second child is girl' happens on two boxes out of three.
2. Typo, corrected. Thanks for pointing out
3. Another typo... In the disease/test question, P(E | H0) should be 20% instead of 10% (of course, you can change the false positive precentage from 20% to 10% in the question definition).
4. In the second child problem, the answer would be 1/3 IF the initial condition was "The FIRST child is a boy" or indeed "The SECOND child is a boy".
In the disease example I find that the way to visualise the problem is to take an example with a sample: say, of 1000 people, 20 will have the disease, of which the test will find 90% ie 18, and
980 will not have the disease, of which the test will falsely identify 20%, ie 196. So 18 of the (18 + 196 =) 214 who test positive actually have the disease = 1/12 = 8.33% | {"url":"http://bayesianthink.blogspot.com/2012/08/understanding-bayesian-inference.html","timestamp":"2014-04-17T09:33:55Z","content_type":null,"content_length":"95100","record_id":"<urn:uuid:b96098a4-9c0a-4b78-af4e-21c3920186cc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamental Counting Principal - Concept
A system of linear equations is two or more equations that contain the same variables. A solutions to a system of equations are the point where the lines intersect. There are four methods to solving
systems of linear equations: graphing, substitution, elimination and matrices. Solving systems of equations first shows up in Algebra I, but more complex applications occur in Algebra II.
So often times we're given situations where we're trying to figure out the number of possible outcomes for a set of information, okay? And one example that I want to start with is just a going to a
sandwich shop and dealing with the lunch that we're receiving. So if you buy a sandwich you can get a choice of a soda or tea and a side of fries, chips, slaw or salad. and we're trying to figure out
the number of different combinations that you can get for this particular problem.
So one of the easiest ways to start with this kind of problem is to just make a what's called a tree diagram, okay? And it's really like what it sounds like it just sort of connecting branches of a
tree. So what we end up with is we start with our sandwich, we start at one point. From there we can either get our soda or our tea. Okay? So that takes us through one decision that we have to make.
Then once we end up at that point we then can choose our other side. So if we got our soda, we could go with our fries, side chips or slaw or our salad. Okay? We could also do those same different
combination with our tea. We could get those fries, chips, slaw or salad and then all we have to do is once we go through all those things is count the number of endpoints that we have. So with our
soda there were 4 different options we could get. With our tea there were 4 different options that we could get for a total of 8 different things, okay.
So in general the tree diagram is a good way to sort of start organizing your data but once you start dealing with a lot more ingredients. So say we had you know different toppings on our sandwich or
many different deserts or who knows what, you're going to start getting a lot of branches and it's going to become a little bit overwhelming. It will always get you through it but it's not always
going to become the most efficient thing, okay?
So there's also what we can do is called the fundamental counting principle and in general I'm not a huge fan of definitions but I've done this one up here just so you can actually see how it works,
okay? So what it says if there are m ways for one event to occur and n ways for another, then there are m times n ways for both to occur. So really all you have to do is take the possible numbers of
one outcome times the possible numbers of the other and that's going to be your answer. Going back to our sandwich. There were 2 drinks, there were 4 solids. So the number of possible events is just
2 times 4 which is 8, okay? So we're often times able to do a tree diagram but more often than not and it's going to be significantly easier we can just use the fundamental counting principle,
multiply our individual components together to get our answer.
combination tree diagram fundamental counting principal | {"url":"https://www.brightstorm.com/math/precalculus/topics-in-discrete-math/fundamental-counting-principal/","timestamp":"2014-04-17T21:24:04Z","content_type":null,"content_length":"64386","record_id":"<urn:uuid:58c05aa7-08e2-403c-92cd-21253e30142b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 41
, 1998
"... this paper is organized as follows. In Section 2 we introduce the concept of regeneration and adaptation at regeneration, and provide theoretical support. In Section 3, the splitting techniques
required for adaptation are reviewed. Section 4 contains four illustrations of adaptive MCMC. Some of the ..."
Cited by 73 (4 self)
Add to MetaCart
this paper is organized as follows. In Section 2 we introduce the concept of regeneration and adaptation at regeneration, and provide theoretical support. In Section 3, the splitting techniques
required for adaptation are reviewed. Section 4 contains four illustrations of adaptive MCMC. Some of the proofs from Sections 2 and 3 are placed in the Appendix. 2 Regeneration: A Framework for
- J. Theoretical Prob , 1997
"... Consider a particle moving on the surface of the unit sphere in R 3 and heading towards a specific destination with a constant average speed, but subject to random deviations. The motion is
modeled as a diffusion with drift restricted to the surface of the sphere. Expressions are set down for variou ..."
Cited by 21 (11 self)
Add to MetaCart
Consider a particle moving on the surface of the unit sphere in R 3 and heading towards a specific destination with a constant average speed, but subject to random deviations. The motion is modeled
as a diffusion with drift restricted to the surface of the sphere. Expressions are set down for various characteristics of the process including expected travel time to a cap, the limiting
distribution, the likelihood ratio and some estimates for parameters appearing in the model. KEY WORDS: Drift; great circle path; likelihood ratio; pole-seeking; skew product; spherical Brownian
motion; stochastic differential equation; travel time. 1.
, 2008
"... The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework
for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consi ..."
Cited by 13 (5 self)
Add to MetaCart
The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for
constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems
which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently
developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its
applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second
example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae. 1. Introduction. A
- J. Stat. Phys
"... We study the problem of parameter estimation for time-series possessing two, widely separated, characteristic time scales. The aim is to understand situations where it is desirable to fit a
homogenized singlescale model to such multiscale data. We demonstrate, numerically and analytically, that if t ..."
Cited by 12 (6 self)
Add to MetaCart
We study the problem of parameter estimation for time-series possessing two, widely separated, characteristic time scales. The aim is to understand situations where it is desirable to fit a
homogenized singlescale model to such multiscale data. We demonstrate, numerically and analytically, that if the data is sampled too finely then the parameter fit will fail, in that the correct
parameters in the homogenized model are not identified. We also show, numerically and analytically, that if the data is subsampled at an appropriate rate then it is possible to estimate the
coefficients of the homogenized model correctly.
"... Elephant seals migrate over vast areas of the eastern North Pacific Ocean between rookeries in Southern California and distant northern foraging areas. Several models of particle movement were
evaluated and a model for great circle motion found to give reasonable results for the movement of an adult ..."
Cited by 10 (8 self)
Add to MetaCart
Elephant seals migrate over vast areas of the eastern North Pacific Ocean between rookeries in Southern California and distant northern foraging areas. Several models of particle movement were
evaluated and a model for great circle motion found to give reasonable results for the movement of an adult female. This model takes specific account of the fact that the movement is on the surface
of a sphere and that the animal is apparently heading toward a particular destination. The parameters of the motion were estimated. Such a great circle path of migration may imply that these seals
have the ability to assess their position with respect to some global or celestial cues, allowing them to continually adjust their ____________ *The work of DRB supported by the Office of Naval
Research Grant N0001494 -1-0042 and the National Science Foundation Grant DMS-9625774. Elephant seal dive data were collected in previous studies with partial support of a contract to BSS from the
Space and Missile C...
- ANN. STAT , 2007
"... We apply the techniques of stochastic integration with respect to the frac-tional Brownian motion and the theory of regularity and supremum estimation for stochastic processes to study the
maximum likelihood estimator (MLE) for the drift parameter of stochastic processes satisfying stochastic equati ..."
Cited by 10 (5 self)
Add to MetaCart
We apply the techniques of stochastic integration with respect to the frac-tional Brownian motion and the theory of regularity and supremum estimation for stochastic processes to study the maximum
likelihood estimator (MLE) for the drift parameter of stochastic processes satisfying stochastic equations driven by fractional Brownian motion with any level of Holder-regularity (any Hurst
parameter). We prove existence and strong consistency of the MLE for linear and nonlinear equations. We also prove that a version of the MLE using only discrete observations is still a strongly
consistent estimator.
- Extremes , 2003
"... Modeling of extreme values in the presence of heterogeneity is still a relatively unexplored area. We consider losses pertaining to several related categories. For each category, we view
exceedances over a given threshold as generated by a Poisson process whose intensity is regulated by a specific l ..."
Cited by 8 (1 self)
Add to MetaCart
Modeling of extreme values in the presence of heterogeneity is still a relatively unexplored area. We consider losses pertaining to several related categories. For each category, we view exceedances
over a given threshold as generated by a Poisson process whose intensity is regulated by a specific location, shape and scale parameter. Using a Bayesian approach, we develop a hierarchical mixture
prior, with an unknown number of components, for each of the above parameters. Computations are performed using Reversible Jump MCMC. Our model accounts for possible grouping effects and takes
advantage of the similarity across categories, both for estimation and prediction purposes. Some guidance on the specification of the prior distribution is provided, together with an assessment of
inferential robustness. The method is illustrated throughout using a data set on large claims against a well-known insurance company over a 15-year period.
"... Introduction Cell lineage data consists of observations on quantitative characteristics of the descendants of some initial cell. In the past (e.g. Powell, 1955, 1956, 1958; Powell and Errington,
1963) cell lineage data was collected by direct observation and more recently has been collected via time ..."
Cited by 5 (0 self)
Add to MetaCart
Introduction Cell lineage data consists of observations on quantitative characteristics of the descendants of some initial cell. In the past (e.g. Powell, 1955, 1956, 1958; Powell and Errington,
1963) cell lineage data was collected by direct observation and more recently has been collected via time lapse photography (e.g. Staudte et al 1984). The data is collected largely in order to
estimate the correlations between mother and daughter cells and between sister cells. Particular interest is in whether the observed correlations between related cells are due to similarities in the
environments in which the cells develop, inherited effects, or a combination of environmental and inherited effects. The bifurcating autoregressive model (BAR(1)) for trees of cell lineage data was
originally proposed by Cowan (1984) and extended in Cowan & Staudte (1986), Staudte (1992), Huggins & Staudte (1994), Huggins (1996). The BAR(1) model is an adaption of the AR(1) model to tree stru
, 1995
"... Bayesian inference for the superposition of nonhomogeneous Poisson processes is studied. A Markov chain Monte Carlo method with data augmentation is developed to compute the features of the
posterior distribution. For each observed failure epoch, a latent variable is introduced that indicates which ..."
Cited by 4 (4 self)
Add to MetaCart
Bayesian inference for the superposition of nonhomogeneous Poisson processes is studied. A Markov chain Monte Carlo method with data augmentation is developed to compute the features of the posterior
distribution. For each observed failure epoch, a latent variable is introduced that indicates which component of the superposition model gives rise to the failure. This data augmentation approach
facilitates specification of the transitional kernel in the Markov chain. Moreover, new Bayesian tests are developed for the full superposition model against simpler submodels. Model determination by
a predictive likelihood approach is studied. A numerical example based on a real data set is given. Key words and phrases: Additive intensity function, Data augmentation, Gibbs sampling, Metropolis
algorithm, Model selection, Predictive reliability function. AMS 1991 subject classifications: Primary 62F15, secondary 62M20. Abbreviated Title: Superposed Poisson Processes 1 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2291765","timestamp":"2014-04-20T07:11:23Z","content_type":null,"content_length":"37056","record_id":"<urn:uuid:128c6647-09c5-42ab-9250-a99ac77ca551>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
7.5 Other Clutter
Statements that contain runtime type conversions suffer a little performance penalty each time the statement is executed. If the statement is located in a portion of the program where there is a lot
of activity, the total penalty can be significant.
People have their reasons for writing applications with mixed typing. Often it is a matter of saving memory space, memory bandwidth, or time. In the past, for instance, double-precision calculations
took twice as long as their single-precision counterparts, so if some of the calculations could be arranged to take place in single precision, there could be a performance win.1 But any time saved by
performing part of the calculations in single precision and part in double precision has to be measured against the additional overhead caused by the runtime type conversions. In the following code,
the addition of A(I) to B(I) is mixed type:
INTEGER NUMEL, I
PARAMETER (NUMEL = 1000)
REAL*8 A(NUMEL)
REAL*4 B(NUMEL)
DO I=1,NUMEL
A(I) = A(I) + B(I)
In each iteration, B(I) has to be promoted to double precision before the addition can occur. You don’t see the promotion in the source code, but it’s there, and it takes time.
C programmers beware: in Kernighan and Ritchie (K&R) C, all floating-point calculations in C programs take place in double precision — even if all the variables involved are declared as float. It is
possible for you to write a whole K+R application in one precision, yet suffer the penalty of many type conversions.
Another data type–related mistake is to use character operations in IF tests. On many systems, character operations have poorer performance than integer operations since they may be done via
procedure calls. Also, the optimizers may not look at code using character variables as a good candidate for optimization. For example, the following code:
DO I=1,10000
IF ( CHVAR(I) .EQ. ’Y’ ) THEN
A(I) = A(I) + B(I)*C
might be better written using an integer variable to indicate whether or not a computation should be performed:
DO I=1,10000
IF ( IFLAG(I) .EQ. 1 ) THEN
A(I) = A(I) + B(I)*C
Another way to write the code, assuming the IFLAG variable was 0 or 1, would be as follows:
DO I=1,10000
A(I) = A(I) + B(I)*C*IFLAG(I)
The last approach might actually perform slower on some computer systems than the approach using the IF and the integer variable. | {"url":"http://cnx.org/content/m33724/1.1/","timestamp":"2014-04-17T04:09:45Z","content_type":null,"content_length":"48846","record_id":"<urn:uuid:aeb7b026-b2e0-4248-b7fc-a270b6dc94e4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
(TC0 3) Suppose The Transfer Function Of An FIR ... | Chegg.com
Signal processing
Anonymous Anonymous
need homework help
Image text transcribed for accessibility: (TC0 3) Suppose the transfer function of an FIR filter is given by the equation below. (As written, this filter is not causal.) What characteristic does this
filter have? H(z) = 0.2z2 + 0. 1z + l + 0.1z-1 + 0.5z-2 Linear phase response Non-linear phase response Flat frequency response Constant Group delay (TCO 3) Suppose you have a DSP system given by the
following difference equation. What is its magnitude frequency response |H(W)|at a frequency of 500 Hz if the sampling frequency of the following system is 2000 Hz? y[n] = x[n] - 0.5z[n - l] 0.5 0.56
1.12 1.0
Electrical Engineering
Answers (1)
• need homework help
Rating:5 stars 5 stars 1
Anonymous answered 54 minutes later | {"url":"http://www.chegg.com/homework-help/questions-and-answers/tc0-3-suppose-transfer-function-fir-filter-given-equation--written-filter-causal-character-q3962975","timestamp":"2014-04-21T10:51:25Z","content_type":null,"content_length":"21066","record_id":"<urn:uuid:884a5552-0c88-4fd3-85ca-8b1d1d058587>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
228 inches equals how many yards
You asked:
228 inches equals how many yards
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/228_inches_equals_how_many_yards","timestamp":"2014-04-17T15:48:14Z","content_type":null,"content_length":"53634","record_id":"<urn:uuid:bb805f44-6105-498f-965f-e7e380287d54>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
A line which touches a circle or ellipse at just one point. Below, the blue line is a tangent to the circle c. Note the radius to the point of tangency is always perpendicular to the tangent line.
Tangent to a circle.
2. Trigonometry
One of the trigonometry functions. In a right triangle, the tangent of an angle is the opposite side over the adjacent side.
For more on this see Trigonometry tangent function.
This page explores the derivatives of trigonometric functions in calculus. Interactive calculus applet.
Interactive demonstration of the graph of the tangent function in trigonometry
This page shows how to draw the two possible tangents to a given circle through an external point with compass and straightedge or ruler. This construction assumes you are already familiar with
constructing the perpendicular bisector of a line segment.
Constructing the tangent to a circle at a given point on the circle with compass and straightedge or ruler. It works by using the fact that a tangent to a circle is perpendicular to the radius at the
point of contact. It first creates a radius of the circle, then constructs the perpendicular bisector of the radius at the given point.
Introduction to the 6 trigonometry functions - sine, cosine, tangent, secant, cosecant, cotangent
A pictorial index to the parts of a circle. A diagram with links to full definitions.
A memory aid for remembering the definitions of sin, cos, and tan
Definition and properties of a tangent
Definition and properties of the tangent line to an ellipse
Definition of the tangent function as applied to right triangles in trigonometry
Definition of the arctan function in trigonometry. The inverse of the tangent function. The angle whose tangent is a given number.
Definition of the arctan function in trigonometry. The inverse of the tangent function. The angle whose tangent is a given number.
(C) 2009 Copyright Math Open Reference. All rights reserved | {"url":"http://www.mathopenref.com/tangent.html","timestamp":"2014-04-18T20:43:01Z","content_type":null,"content_length":"10788","record_id":"<urn:uuid:2df23a9b-36d1-4944-8146-ab2f2335fc06>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hash Functions
From CryptoDox, The Online Encyclopedia on Cryptography and Information Security
Hash functions are functions for summarizing or probabilistically identifying data. Such a summary is known as a hash value or simply a hash, and the process of computing such a value is known as
A hash function takes an input string (message) of arbitrary size and reduces it to a short string. A typical cryptographic hash function takes any input string and makes a 256 bit string from it.
The hash value of an input string is analogous to the fingerprint of a person. It is often also called a "message digest." Hash functions are used for digital signatures such as RSA and DSA, but
also for the construction of MACs (message authentication codes), the protection of passwords, and for the derivation of independent secret keys from a single master key. Cryptographic hash
functions are an essential building block for applications that require data integrity, such as detectors of computer viruses, Internet security (for example PGP or IPSEC), and the security of
electronic commerce and banking.
A cryptographic hash function must be a one-way function, which means that finding an input corresponding to a given output string is difficult: Even an opponent who spends a significant amount of
money, say $10 million, will have a negligible success probability.
There are various algorithms available for generating message-digests, or hashes. Some of these are listed below. An example of the output generated by hash functions is also shown in the figure.
Current recommended hash functions for cryptographic applications:
Historically important hash functions:
Books of Interest
External Links
• Hash Functions | {"url":"http://cryptodox.com/Hash_Functions","timestamp":"2014-04-19T12:55:39Z","content_type":null,"content_length":"23166","record_id":"<urn:uuid:b2d6d3f2-813e-4f8f-8a84-3e29ecbb4579>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question that arises in trying to make mathematically precise a well known informal statement about analytic functions
up vote 0 down vote favorite
It is often stated that a single-valued analytic function f(z) is uniquely and completely determined if (1) it is analytic at all points of a convergent sequence of points in the complex plane and at
their limit point and (2) one is given the points of the sequence and the values of f(z) at each of these points.
Let z(1),z(2),...,z(n)... be a convergent sequence of complex numbers which are strictly decreasing in absolute value as n increases, and whose limit point is zero. Let f(z) be analytic at all the
points of this sequence and at their limit point. Supposing that for each positive integer i one is given z(i) and f(z(i)). Does there then always exist a unique power series P(z) centered at zero
such that (3) the radius of convergence R of P(z) is positive (or infinite) and (4) if k is any positive integer for which the absolute value of z(k) is less than R, P(z(k))=f(z(k))?
If such a unique power series exists, how do we obtain its coefficients from the data we are given? One can set up an equation for these unknown coefficients involving two infinite column vectors and
an infinite Vandermonde matrix. The rows of the matrix are all of the form 1,z(j),z(j)^2,z(j)^3...where j is a positive integer. But I do not know what conditions are needed to insure that such
matrices have a unique inverse.
I took the liberty of fixing your paragraph breaks – Yemon Choi Nov 9 '12 at 20:27
Obviously, you need, at a minimum, that the sequence $f(z_i)$ converges, otherwise, it's hopeless. Building on Steve's answer below, the sequence of pairs $\bigl(z_i,f(z_i)\bigr)$ will have to
3 have the property that the higher differences converge as well. Even that won't be enough to get everything you want because, obviously, you can throw away any finite number of the 'data points' $
\bigl(z_i,f(z_i)\bigr)$ and it won't affect the limits, so if a solution exists for the remaining data and has large enough $R$, then you'll get a contradiction if the missing data don't match the
function $f$. – Robert Bryant Nov 9 '12 at 21:10
add comment
4 Answers
active oldest votes
If I am not misunderstanding your question, you read off the power series pretty much directly from the given data. You know f(0). You also know f'(0) by using the definition of the
up vote 2 derivative. The higher derivatives can all be determined by using higher order difference equations http://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences. Since the
down vote function is analytic the taylor series you get this way does converge on some disk...
Steve, I believe you have the right approach to this problem. However one must still find a definition of the nth derivative of f(z) at z=0 as the limit of a formula involving finite
differences which requires no information other than the data we have. Suppose z(i) is a real number for each positive integral i. Many of the approximations to the nth derivative of
f(z) at z=0 require that we know f(z) at values of z between z(k) and z(k+1)-for some integers k-and we do not have this information. – Garabed Gulbenkian Nov 11 '12 at 19:07
Here is the recipe for f''(0) which you should be able to generalize. Take 3 points within a distance of $\delta$ from 0. Find the unique quadratic passing through all 3 points. Half
the lead term of this quadratic is an approx of the second derivative. Form a sequence of such approximations as $/delta$ goes to 0. The limit of this sequence is the second
derivative. – Steve Nov 11 '12 at 20:23
It is sort of sad that such basic observations are not usually given to our calculus students, or complex analysis students for that matter. It was many years after learning these
subjects, and feeling somewhat dissatisfied with my state of knowledge, that I finally figured these things out for myself. This plus analyticity is, in my opinion, the real reason
for the identity theorem. – Steve Nov 11 '12 at 20:25
Thanks, Steve. That was exactly what I was looking for. So, to approximate the nth derivative we can use the n+1 points z(k),z(k+1) ...z(k+n) together with the values of f(z) at these
points and let k approach infinity. To obtain the leading coefficient of the nth degree interpolating polynomial we have just to solve a system of n+1 linear equations whose matrix is
a (finite) Vandermonde matrix -which always has a unique inverse. So it seems that these matrices are useful after all. – Garabed Gulbenkian Nov 13 '12 at 15:25
add comment
For "most" choices of the values $f(z_i)$, there will be no holomorphic function with the given values. The reason if there is such a function then it is determined by any infinite
subsequence of the given data. In other words, suppose that some values $f(z_i)$ are the values of a holomorphic $f$, and suppose that you try to change some of these values, while keeping
up vote infinitely many of them (including the value at the limit point) the same. Then the only function that could fit these new data would be the same $f$, since it's determined by the infinitely
2 down many unchanged values. So the new data, if they differed at all from the old, would not be the values of a holomorphic function.
add comment
To say Robert Bryant's explanation in another way: assuming (as in the OP) that the $f(z_i)$ are already the values of a function $f$ analytic in a nbd $U$ of $0$, there is a unique power
series $P$ such that $P(z(k))=f(z(k))$ for large enough $k$, and not for any positive integer $k$ for which $|z(k)|$ is less than the radius of convergence of $P$, that is $z(k)\in U\cap B
up vote (0,R)$. The reason is that the latter set is not connected, and we can't apply the principle of isolated zeros therein, not even if we known that $U$ is connected. Also note that changing
0 down finitely many $f(z_i)$ always produces the values of an analytic function on a nbd $U'$ of $0$, but of course $U\cap U'$ will be disconnected (the simplest way is to take as $U'$ a disjoint
vote union of small nbd's of $0$ and of finitely many $z(i)$'s).
add comment
As mentioned, if you know that your points come from sampling a holomorphic function then you can use limits of finite differences to compute all of the derivatives at the limit point. Or
do it step by step, compute $f(0)=\lim_{z\rightarrow0}f(z)$, then replace all points $(z,y)$ with $(z,(y-f(0))/z)$ and continue.
up vote 0 However, without assuming you're starting with a holomorphic function there is very little you can say using limits. For example you can take $f(z)=\exp(-z^2)$ and take $x_i$ converging to
down vote 0 from the right, and you'd get the power series expansion at $0$: $f(z)=0+0z+0z^2+\cdots$. Even worse, you could add a tiny random error to each sample so as to not effect any of the
limits, but still mess up the function.
add comment
Not the answer you're looking for? Browse other questions tagged cv.complex-variables or ask your own question. | {"url":"http://mathoverflow.net/questions/111934/a-question-that-arises-in-trying-to-make-mathematically-precise-a-well-known-inf/111936","timestamp":"2014-04-19T09:52:46Z","content_type":null,"content_length":"72886","record_id":"<urn:uuid:5670d0d9-2318-4ba0-8800-49c9d2a2aef7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
An effective way to supercharge your investments.
The probability calculator utilizes the Black Scholes formula (Nobel Prize winners in Economics). The formula indicates (in terms of percentage) the probability of any market moving from its present
price, "Stock Price" to another price, "Strike Price" within a defined period of time.
When an option is ATM ("at-the-money"), the chances are 50% that a market will go up, and consequently, a 50% chance that it will go down. So, any OTM ("out-of-the-money") option must have less than
a 50% chance of being reached. The farther away the option is OTM, the lower the probability of it being reached within a certain time period. Conversly, any ITM ("in-the-money") option must have
greater than a 50% chance of being reached. The further the option is ITM, the higher the probability of it being reached within a certain time period. | {"url":"http://www.optionsrez.com/probabilitycalculator.htm","timestamp":"2014-04-21T09:56:29Z","content_type":null,"content_length":"42778","record_id":"<urn:uuid:ba0acab5-6d45-42aa-9a0b-010983e5b862>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] slices vs. range() over a certain axis
Robert Kern robert.kern@gmail....
Sat Nov 15 03:32:14 CST 2008
On Fri, Nov 14, 2008 at 22:40, David Warde-Farley <dwf@cs.toronto.edu> wrote:
> I'm trying to clarify my understanding of how slicing works and how it
> differs from specifying a sequence of indices. My question is best
> illustrated by an example:
> In [278]: x = zeros((5,50))
> In [279]: y = random_integers(5,size=50)-1
> The behaviour that I want is produced by:
> In [280]: x[y,range(50)] = 1
> Why doesn't
> In [281]: x[y,0:50] = 1
> produce the same effect? Is there a way to do what I am attempting in
> [280] with slicing?
The reasoning is a bit clearer with __getitem__ rather than
__setitem__. When the subscript is only a set of slices, then the
resulting array is a view. Slices specify a subarray with homogeneous
strides like any other array.
Fancy indices don't. The result must, in general, be a copy because we
will be pulling items scattered across memory in no necessary order.
Fancy indexing needs to have separate semantics. Specifically,
broadcast the index arrays then create a new array of the broadcasted
shape with elements found by iterating over the broadcasted index
Now the question is, what do we do when we combine the two into one
subscript. Instead of reinterpreting the slices as lists and shoving
them into the fancy indexing semantics, we leave them as slices. The
procedure for fancy indexing changes slightly. We do the same
broadcasting *just* for the actual fancy indices. However, the "item"
that gets placed into each position in the output array is no longer a
scalar, but rather the result of the remaining slices. This gives us
more capabilities than interpreting a slice as the equivalent range()
since you can always just use the range().
Clear as mud?
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038662.html","timestamp":"2014-04-18T00:28:20Z","content_type":null,"content_length":"5040","record_id":"<urn:uuid:d10e061e-f6f8-4338-9634-b58d83e0c96f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Orthogonal problem
If A and B are both orthogonal nxn matrices. Can anyone show That AB is orthogonal... Thanks
If A and B are both orthogonal nxn matrices. Can anyone show That AB is orthogonal...iv been stuck on this question for a while now Thanks
dopi An $n\times n$$A$ is orthogonal iff $AA^T=I$ Now suppose $A$ and $B$ be orthogonal, and let $C=AB$. Now $C^T=B^TA^T$, so: $<br /> CC^T=ABB^TA^T=AIA^T=AA^T=I<br />$ Hence $C$ is orthogonal. RonL
CaptainBlack Say if you add A and B (A+B)...would that be Orthognal?
From the definition, we have for A and B: $<br /> \begin{array}{l}<br /> AA^{ - 1} = A^{ - 1} A = I \\ <br /> BB^{ - 1} = B^{ - 1} B = I \\ <br /> \end{array}<br />$ $<br /> \left( {AB} \right)\left(
{AB} \right)^{ - 1} = ABB^{ - 1} A^{ - 1} = AIA^{ - 1} = AA^{ - 1} = I<br />$
Don't post the same question in two different fora. I have merged these two because they both have responses. RonL
TD! This is true of any non-singlar square matrices $A$ and $B$! RonL | {"url":"http://mathhelpforum.com/advanced-algebra/2631-orthogonal-problem.html","timestamp":"2014-04-18T01:22:08Z","content_type":null,"content_length":"55067","record_id":"<urn:uuid:7d84eb82-d567-453e-8471-0679bca7581f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reply to comment
March 2010
Genomics is one of the fastest moving areas of science and Gavin Harper, a mathematician and statistician, has put himself right at its centre. He works for
Oxford Nanopore Technologies
, a spin-off company from Oxford University's chemistry department, which is developing new technology for analysing single molecules using a combination of biology and electronics. The main
application for this new generation of technology is sequencing DNA. With 75 employees from 18 different countries and all sorts of scientific backgrounds, Gavin's work environment is nothing like
the solitary paper-and-pencil affair traditionally associated with mathematics. In fact, he is something of a communications hub in the company.
Gavin first became interested in maths towards the end of his school career. "I enjoyed a wide range of subjects, but as school maths became a bit more advanced, we stopped doing routine
solve-these-equations type tasks and things started getting more interesting," he says. "Solving puzzles was something I enjoyed, so maths seemed like an interesting thing to do at university." Gavin
went on to do a degree in maths and statistics at the University of Edinburgh. Towards the end of the degree he developed an interest in research and decided to do a nine month diploma in
mathematical statistics at the University of Cambridge, followed by a DPhil (equivalent to a PhD) in the statistics department at Oxford University.
Spotting patterns
At Oxford Gavin began his journey into biochemistry, working on a problem that at first glance doesn't appear very mathematical at all: discovering new medical drugs. Once researchers have identified
the chemical processes that can cause a disease or fight its symptoms, the hunt is on to find the chemical compounds that interfere with or mimic these processes. Typically, this involves screening a
vast number of candidate compounds, and it's this vastness that requires mathematical and statistical detective work. "For example, you may have a million molecules that need to be screened to see if
they are active or not, for instance whether they bind to a target receptor," says Gavin. "Your screening experiment will give you a read-out telling you something about molecule activity, but the
data is very noisy — it contains a lot of possibly irrelevant and imprecise information about the molecules."
It's a bit like listening to a radio with bad reception: there'll be lots of crackling, but somewhere in there is a voice telling a meaningful story. In the case of screening chemical compounds, the
story is not just which of them are active and which aren't, but also whether those that are active share common characteristics that might give you more insight into the process you're looking at.
Pattern recognition in large data sets
Rather than screening molecules, let's assume you're screening people using a lifestyle questionnaire with yes/no questions such as "do you smoke?" and "are you over-weight?". For each person you get
a string of 0s and 1s, where 1 means "yes" and 0 means "no". A few years later you check which of your respondents went on to develop cancer and try to spot a pattern in their answers which links the
cancer to their lifestyle. For example, for four people and four questions you might get the grid:
1100 cancer
1110 cancer
1010 no cancer
0101 no cancer
It's easy to spot that people with cancer all have a 1 in the first two positions, while the people without cancer don't. With a larger data set, for example 1000 people being asked 100 questions
each, you will not be able to spot a connection that easily, so you need an efficient method for comparing all combinations of digits in the sequence over all 1000 respondents.
To filter out meaningful information, you need to spot patterns within your data, just as your ear spots the familiar patterns of a human voice. This can be difficult even when your data set is
small, especially if you don't know what you're looking for — just think of those "what's the next number in the sequence" exam questions. With large data sets, things get very hard very rapidly. You
need a systematic method — an algorithm — for sifting through your data, classifying what you see, and spotting links. It's a mathematical task that reaches into computer science and even artificial
intelligence. Mathematicians have developed a range of standard algorithms, but there's no one-size-fits-all solution. Depending on your particular problem, you need to invent something new, or adapt
existing algorithms.
Maths isn't just important in interpreting the output of a screening process, but also in choosing its input. Before you start screening you'll already have some information on what kinds of
molecules might turn out to be active. There are practical limitations to how many molecules you can screen, so you want to pick your molecules carefully to get a good chance of finding activity
without putting all your eggs in one basket. "For example, if I think that molecule A is likely to be active, but am not sure, then picking a thousand molecules similar to molecule A would be very
bad," Gavin explains. "Either the experiment tells me that, yes, it's active and there are 999 other things like it, which I already knew, or I find out that they're all inactive." So you need to
balance your chance of spotting activity against what's known about the similarities between different molecules. This kind of problem — having to choose the best combination from a bewildering range
of possibilities according to some constraints — turns up all over the place, from telephone routing to airline pricing, so mathematicians have developed a whole theory around it — it's called
combinatorial optimisation.
Gavin's DPhil, partly sponsored by the pharmaceutical giant Glaxo Wellcome (now GlaxoSmithKline), was about improving the methods for analysing data from screening chemical compounds. His research
was an interesting mix of theory and practice, getting his head around the mathematical theory behind pattern recognition, combinatorics and experimental design, and at the same time making sure that
his methods were actually usable in a real lab, for example by producing efficient computer programs to analyse the experiment's output.
Gavin ended up unearthing a pattern recognition algorithm that had been developed in the 1970s, but had never been used on chemical data, and adapting it to suit his purpose. His pioneering work
eventually earned him a job as the only mathematician within the computational chemistry department at GlaxoSmithKline, where he stayed for nine years.
Reading DNA
After GlaxoSmithKline, Gavin moved to his present job at Oxford Nanopore. The company is working on new technology whose main application will be to sequence DNA, the nucleic acid which makes up our
genetic code.
DNA is made up of two strands, which curl around each other in the familiar shape of the double helix, and are linked to each other by what looks like the rungs of a ladder. You can think of each of
the two strands as a string made up of four basic building blocks, called nucleotides, or bases. There are four nucleotides in DNA, adenine, guanine, thymine and cytosine, denoted by the letters A,
G, T and C. The order in which these bases occur is the DNA sequence. It defines the production of proteins and therefore contains the instructions for an organism's development and functioning.
There are two strands in the double helix, but the order of bases on one defines the order of bases on the other. Each nucleotide links up with its opposite number on the other strand, with adenine
only bonding with thymine and guanine only bonding with cytosine. So your portion of DNA is really defined by just one sequence of A, G, T and C. Figuring out this sequence is what people mean when
they talk about sequencing DNA.
Current technologies for sequencing DNA involve differently coloured fluorescent labels, which attach themselves to the different bases, each colour signalling a particular nucleotide. The hardware
and reagents in this process are still complex and expensive, however prices are falling towards the cost of a full human genome being in the $10,000 to $20,000 range.
A DNA strand passing through a nanopore in a silicone chip. An enzyme, shown in green, processes the DNA strand, cleaving single bases and firing them through the nanopore. Each base then blocks an
electrical signal. Image: Oxford Nanopore Technologies.
So the race is on to develop cheaper and faster sequencing technology. Oxford Nanopore are developing a technology which involves guiding DNA through nanopores — incredibly small holes 10,000 times
smaller than a human hair — in a silicone chip. An enzyme feeds single bases from the end of a DNA strand through the nanopore. As each one passes through it blocks a current passing through the
nanopore, creating a characteristic signature for each base. In this way the bases can be identified without expensive optical equipment, promising a lower cost system that could be scaled up using
electronics. Eventually the technology platform Oxford Nanopore is developing may also be used to identify proteins in the body, both for diagnostic purposes and to discover new ones. (Watch a movie
on the Oxford Nanopore website to find out more about the process.)
Mounting complexity
Maths and signal processing
Suppose you've got a scrambled audio signal. The signal might be represented by a wave form like the one shown below — how do you clean it up?
In the 19th century the mathematician Joseph Fourier realised that a periodic wave form like the one above can be broken up into simpler components which are described by the familiar sine and cosine
functions. A mathematical tool called the Fourier transform does this for you. For example, you can use it to find out that the wave form above is given by the function f(x)=sin(x)+cos(x-1)-sin(x/2).
You can use this information to filter meaningful information out of your signal. Fourier analysis is a staple tool in signal processing and can be used to find patterns in all kinds of information,
from acoustic to visual to electrical. See below to find out more.
A faster method also means that a lot more data needs to be analysed. Each silicone chip contains hundreds of nanopores, and for each of them an electrical signal is measured and needs to be
interpreted. But the signals can be noisy — how do you know which bit of the signal corresponds to which DNA base, and whether there are other parts of the DNA strand producing electrical signals
that confuse the reading? Again it's a problem of pattern recognition.
To solve this particular problem, Gavin went back to an area of maths he had studied at university, and which he never thought he'd ever use again. It's called signal processing, and it provides the
tools for breaking up a signal — any type of signal, from electrical to sound — into simpler components. "We are able to break the signal up into smaller chunks, to see which chunk corresponds to
which base, or if there is something else in the system that's being read by the nanopore." Gavin's adaptation of these tools will not only form part of the final product the company are developing,
but it's also essential in the actual development process. By assessing the clarity of the signal produced in an experiment Gavin can come up with suggestions for how the process that produced it —
DNA bases passing through nanopores — might be improved to produce a clearer signal.
Generally, Gavin's ability to tackle huge data sets means that he's right there in the lab at the heart of the experimental development process. Hundreds of different experiments are run at the
company labs every day, constantly evaluating potential improvements to the technology. At every experiment Gavin is there looking at the data, figuring out how it's different from other experiments
that have been run, and works with the scientific team to improve the experiment.
In terms of the actual maths that's being used, it's a case of looking into the mathematical tool box Gavin equipped himself with at university and picking the right tool for the job. "You wind up
using combinatorics, probability, even graph theory — an awful lot of the things you learnt as an undergraduate," he says. "It's funny how you do something as an undergraduate thinking that you'll
never ever use it again, and then you suddenly turn up for work one day, scratch your head, remind yourself how it works, and then you crack on with using it."
DNA and graph theory: an example
In DNA sequencing the DNA is broken up into chunks, which are sequenced separately and then have to be re-assembled in the correct order. How can graph theory help? Here is an (over-simplified)
Suppose you have five chunks of the DNA sequence, each consisting of 3 letters, which you have to piece together to get the whole sequence. Each of the five chunks occurs in the entire sequence
exactly once, and chunks overlap: in the whole sequence the last letter of one chunk is the first of another. For example, the two chunks AGT and TTG combine to AGTTG.
Arrange your chunks in a graph, with an arrow going from one chunk to another if the last letter of the first is the first letter of the second. Your problem is now equivalent to finding a path
around your graph which visits every vertex exactly once. Such a path is called a Hamiltonian path. There are mathematical algorithms for finding Hamiltonian paths in graphs.
The figure below shows a graph consisting of five chunks. The red arrows denote a Hamiltonian path. The resulting sequence is ACTGTTTAGCT.
Communication is key
While Gavin's work involves all sorts of different areas of maths, the research that goes on at Oxford Nanopore stretches across different areas of science. There are electronic and mechanical
engineers, biologists, chemists, physicists, and computer scientists. Communication is not always easy, but it's essential. There's always a hive of activity around Gavin's desk, with people from all
the different groups getting an interpretation of something that's relevant to their different disciplines.
Gavin can act as a communication hub because he has a finger in every pie. He's got intimate knowledge of the experiments and the science and technology behind them, and his algorithms put him on the
path to computer science. "As we're encoding our algorithms into tidy computer code for the final product, I will be talking to the guys on the computing side," he says, "I discuss the science of
what's happening to them, because they're not in there with the other scientists every day. At the same time I talk to the scientists, doing exactly the opposite: discussing the limitations of the
analysis and interpretation of the output. So I play the scientist to the guys doing the programming, and I play the programming guy to the people doing the science."
Communication comes naturally to Gavin, but his maths background is also a great help. "My mathematical training really helps in communicating the science to the computing group, because you need to
boil a complicated experiment down to [its essential parts]. The precision of thought [that comes from mathematical training] really helps to abstract what the key elements of the experiments and the
data are." Then there is also the challenge of communicating maths and statistics to the scientists. "For example, how do you present very large data sets to an audience? This is about data
visualisation [— being able to present a picture, or a graph]. So I'm currently trying to develop some standard visual representation of the experiments, even though the experiments are changing, and
trying to get the scientists to understand this standard representation, so that I can use it again and again."
Perhaps the most daunting aspect of Gavin's job, if you're looking in from the outside, is having to learn about completely new bits of science on the job — after all, he was never formally trained
in biology or chemistry. "You're always going to move into new scientific areas within this kind of job. So there's a challenge in deciding how much you need to learn and how to extract the relevant
information without getting overloaded. Personal interactions and social skills really are a key part of my job."
Traditionally, there have been few overlaps between maths and biology, and even fewer between maths and medicine. But with the advent of genetics comes not only the challenge of dealing with vast
amounts of data, but also a new approach to biomedicine. We're no longer thinking in terms of one organ, one cell, or one molecule, but in terms of complex systems of interacting agents that make up
an organism. Where there's complexity, you need maths, so a mathematician playing a pivotal role in a genetics company is by no means unusual. As the population scientist Joel E. Cohen put it,
"Mathematics is biology's next microscope, only better."
Further reading
You can find out more about the mathematics of signal processing in the following Plus articles and podcasts:
And you can find out more about the role of maths and statistics in the biomedical sciences in our ongoing project, Do you know what's good for you?
About the author
Gavin Harper was interviewed by Marianne Freiberger, Co-Editor of Plus, in March 2010.
More from Maths Careers
You can find out more about careers with mathematics on the Maths Careers website, which is run by the Institute of Mathematics and its Applications. In particular you might want to look at these
career profiles on the Maths Careers site: | {"url":"http://plus.maths.org/content/comment/reply/2451","timestamp":"2014-04-17T06:42:34Z","content_type":null,"content_length":"43911","record_id":"<urn:uuid:0cd2cbd7-27a2-4d54-a841-62581d646a8a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Limiting processes - maths online Gallery
The tool
Numerical computation of sequences
helps to analyse numerical sequences and illustrates the connection between the definition of a sequence in terms of an expression and its numerical properties. It gives you a "feeling" for the
possible behaviour of sequences and invites to do some experiments. | {"url":"http://www.univie.ac.at/future.media/moe/galerie/grenz/grenz.html","timestamp":"2014-04-16T13:26:18Z","content_type":null,"content_length":"4954","record_id":"<urn:uuid:439bde51-9276-4766-87cc-958572b63c70>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematician announces that he's proved the ABC conjecture
(Phys.org)—In all of history there are very few names that stand out in the field of mathematics, at least among those not in the field: Euclid, Newton, Pythagoras, etc. This is likely due to several
reasons, chief among them is that math is so seldom used by most people and the fact that its use in other sciences causes the underlying concepts to become overshadowed. That might change if what
Shinichi Mochizuki of Kyoto University is claiming is true; that he has written a proof of the ABC conjecture. To mathematicians it's akin to the Grand Unified Theory of physics, a proof that would
tie together most of the fundamental ideas in the field into one neat, fully explainable bundle.
The ABC conjecture is at its core, an association between whole numbers and is formed on the basis of the simple mathematical equation a+b=c and involves what are known as square free numbers;
numbers that can't be divided by a number squared. Square free numbers are described using sqp(n) where n is the biggest such number that can be calculated by multiplying factors of n which are prime
numbers. The whole idea was first proposed by two mathematicians working separately back in 1985.
While the concept of the ABC conjecture is not all that complex in and of itself, providing proof of it has proven to be impossible, until now, maybe. The proof Mochizuki came up with is 500 pages
long and involves concepts that very few people understand, thus, it will likely take years of serious work by many mathematicians to prove that the proof is correct.
Anyone that has sat through higher level math classes that call for creating proofs can attest to the monumental effort that must have gone into creating such a proof, though virtually all
mathematicians would agree that if the proof is indeed correct it will have been more than worth the effort. In fact, many suggest it would mark one of the most profound achievements in mathematics
history, not only because of the proof itself but because of what it would mean to the science as a whole. In proving this one conjecture, many other proofs involving many other theorems would
naturally follow. It would be as if Mochizuki had conceived and written proofs for hundreds of other important theorems all at once, including the famous Fermat's Last Theorem.
More information:
Mochizuki, S. Inter-universal teichmuller theory, 4 parts:
5 / 5 (1) Sep 12, 2012
In case anybody didn't get the hint, this is a Big Deal if he really has done it.
3 / 5 (2) Sep 12, 2012
so what that means?
1 / 5 (1) Sep 12, 2012
Bekanntheitsgrad - the positive side of information flow - with the speed of wildfire without the destruction.
We all look forward to understanding the implications (in the near future) if true.
not rated yet Sep 12, 2012
Bekanntheitsgrad - the positive side of information flow - with the speed of wildfire without the destruction.
We all look forward to understanding the implications (in the near future) if true.
Say again but dumb it down for me to understand :P ?
5 / 5 (1) Sep 12, 2012
I note that indeed the Fermat theorem would follow, not in general but for numbers large enough.
@ Guilherme22:
Apparently this has recently been identified as a crucial theorem that joins a lot of work on so called Diophantine analysis together.
That analysis concerns itself with integer numbers, so have applications to discrete problems* and to computer analysis. It is also much harder and less general than analysis over real numbers.
* Such as how many balls fit into a box (discrete problem), instead of how large volume of water (continuous problem).
not rated yet Sep 12, 2012
Well, I've watched this Hodge conjecture video a bunch of times; it's a good place to start.
A teichmuller space is a universal covering of a riemann surface; so, you know that's pretty important.
At the end of the Hodge Conjecture, the audience asks how does all this relate to number theory? At which point, Tate gasps and says, "oh no, I think we need some refreshments outside!"(paraphrased
5 / 5 (1) Sep 12, 2012
I really wish I could understand this...
4.2 / 5 (5) Sep 12, 2012
There is no royal road to mind bending mathematics.
I actually started to read the latest proof (3d in series), and here are the fruits of my abject ignorance:
Mochizuki is attempting to do on the philosophical level is to make all the info contained in complex mathematical objects available to help with proofs outside of their hierarchical sand boxes. In
other words, to "repackage" them in such a way as to make them portable to some degree.
The notation that he uses can look familiar to those of us who struggled through linear algebra back in the day, but is replete with exotic spaces full of epileptic curves "over" vast number fields
along with various tensors and operators controlling and morphing through mind-numbing matrix and group comingleings, all to free one stubborn "species". So freed, it may then move "vertically" or
"horizontally" through "log-theta-lattice" to some other useful destination.
After that, ABC appears to be relatively easy.
5 / 5 (5) Sep 12, 2012
Elliptic not epileptic curves you moron! Oh, that was me...
not rated yet Sep 13, 2012
Maybe I was hasty. But I note on another blog that Mochizuki relies heavily on Teichmüller theory. [ http://www.nature...-1.11378 ] Which may have a connection to the Hodge conjecture ("Hodge
Using p-adic fields, which is comparable to use analysis on reals, isn't the same as going outside of discrete problems.
And nowhere do I see the claim that this is anything but discrete analysis. Are you claiming this? (Video is > 1 h.)
not rated yet Sep 13, 2012
On the contrary, I see claims that it is specifically number theory. [ http://www.lifesl...ers.html ]
1 / 5 (1) Sep 13, 2012
Same boat.
Sheer unerschöpflich (inexhaustible)is the literature to aid and support an understanding of Perleman's research.
Not so here.
not rated yet Sep 17, 2012
"it will likely take years of serious work by many mathematicians to prove that the proof is correct."
Work will not prove the proof is correct. Work will only increase confidence in the proof (for the optimistic). I say the longer the proof the greater the chance of an error. I not interested until
there is an amazing twist that produces a short proof.
Lex Talonis
1 / 5 (3) Sep 17, 2012
I worked all of this out ages ago.
Pretty easy - between playing pac man, eating pizza, and a few beers.
Only took half an hour too.
Mostly because I couldn't find the pencil sharpener - but no mind.
5 / 5 (1) Oct 26, 2012
I have a small difficulty with the opening paragraph. It states that most people seldom use mathematics. It is used by almost everyone, though mostly at a low level. Go work in a cabinet shop or look
at a DOW jones graph-mathematics is there. The other statement seems to say that mathematics obscures the underlying concepts. I think that it illuminates the underlying concepts. | {"url":"http://phys.org/news/2012-09-mathematician-abc-conjecture.html","timestamp":"2014-04-20T11:48:23Z","content_type":null,"content_length":"86163","record_id":"<urn:uuid:1264e403-e43e-4fee-b8aa-ba70747d731e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Lost on Tuesday, March 6, 2012 at 2:22am.
Lost is an understatement..please help me understand this.
Orbits and Distance Johannes Kepler (1571–1630)discovered a relationship between a planet’s distance
D from the sun and the time T it takes to orbit the sun. This formula is , where T is in
Earth years and corresponds to the distance
between Earth and the sun, or 93,000,000 miles.
(a) Neptune is 30 times farther from the sun than
Earth . Estimate the number of years
required for Neptune to orbit the sun.
(b) Write this formula with rational exponents
• ALG/TRIG - Damon, Tuesday, March 6, 2012 at 4:19am
T = k D^(3/2)
let x = 93,000,000 miles
for earth
1 year = k x^(3/2)
k = 1/x^(3/2)
for Neptune
T = [1/x^3/2) (30x)^(3/2)
T = 30^(3/2) = 164 years
Related Questions
English 1/ or literature - What is a example of a understatement in the cask of ...
ALG@ NEED HELP PLEASE - Explain how to factor the following trinomials forms: x2...
Gifted+Talented English-(Figurative Speech Poetry) - Our assignment is to write ...
Alg2 - i am realy lost as to how to do this alg 2 problem: find the two real-...
Statistics - Given the following information, determine the 68.3 percent, 95.5 ...
Math - Satelite orbits vary in their distance from the earth. Orbits can be ...
trig - Complete the general solution to y = arcsin -. y=____±2k IM lost please ...
trig - rules of logarithms: can someone please explain how to do these problems...
alg/trig - 9/x + 9/x-2= 12
english - From the adventures of Huckleberry Finn: "Good gracious! anybody hurt... | {"url":"http://www.jiskha.com/display.cgi?id=1331018557","timestamp":"2014-04-19T14:38:45Z","content_type":null,"content_length":"8751","record_id":"<urn:uuid:6458f69c-6030-48f3-8c01-4577779107cf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Urgent Help for homework - sum and difference
03-28-2011 #1
Registered User
Join Date
Mar 2011
Urgent Help for homework - sum and difference
I am not good at programming, my teacher gave me to write a program and i have to finish it in 2 days. Here is the program which i have to write
Write a program to determine the sum Z=X+Y and the difference Z=X-Y, where X, Y are n-digit integer decimal numbers and n<=100.
When I asked how to do that he explained me and he helped me little. Here is what he wrote. Ofcourse they are only clues about the way.
cost nmax=100
int X[nmax]
for(int i=0; i<nmax; i++)
x[i]= rand()%10;
z[i] = X[i]+ Y[i] + c;
Can anyone help me about that program?
I suggest you try writing a program with what you know.
And, then compile it, link it, and run it.
When, you get the program written post it here and ask for more help.
Note: I do not expect your program to do much; but, your current stuff has typos and would not even compile.
Note: My instructor would want "nmax" to be all upper case "NMAX".
A few good test cases are (for 4 digit numbers 0-3 index in int arrays)
0,0,0,0 + 0,0,0,1
9,9,9,9 + 0,0,0,1
1,0,0,0 - 0,0,0,1
Hint: You will need to add the ones column first just like you learned addition as a child.
Edit: I am a C programmer learning C++; so if you are supposed to use classes, I have little I can do to help.
The line below implies classes/OOP.
Tim S.
Last edited by stahta01; 03-28-2011 at 09:34 AM.
Interesting exercise!
This looks reasonable... the intention then is that you store each decimal digit in an int in an array. Fair enough. I don't think using rand()%10 is a good way to test initially -- I think the
suggested test cases in stahta01's post are a far better idea - start with something small and manageable that you know the answer to!
The task description says integers, so this must include support for negative numbers too, right? I'm not sure of the best way to do that. Maybe have a 101st element that stores the sign.
So you should also try some tests like:
1,0,0,0 - 9,9,9,9
1,0,0,0 + (-9,9,9,9) // should be same
-1,0,0,0 - (-1,0,0,0)
The Z=X+Y syntax does suggest that you need to put this all in a class and implement operator+. If you're not all that comfortable with operator overloading, implement in C style functions first
to get the bit-fiddling right. Unless you do something very bizarre, you should just be able to drop your implementation into an overloaded operator+ later.
A design decision that needs to be made is does element 0 hold the most significant or least significant digit. I would go with the least significant digit will be in array element zero.
Tim S.
03-28-2011 #2
Registered User
Join Date
May 2009
03-28-2011 #3
Registered User
Join Date
Mar 2010
03-28-2011 #4
Registered User
Join Date
May 2009 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/136292-urgent-help-homework-sum-difference.html","timestamp":"2014-04-16T13:54:44Z","content_type":null,"content_length":"50410","record_id":"<urn:uuid:61c61647-8023-4788-9290-bb652d8d1e52>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
DATA MINING
Desktop Survival Guide
by Graham Williams
The basis of naïve Bayes is Bayes theorem. Essentially we want to classify a new record based on probabilities estimated from the training data. That is, we want to determine the probability of a
hypothesis (or specifically class membership) given the training data (i.e., ) Bayes theorem gives us the clue to calculating this value:
Ignoring the detail we can work with to identify a hypothesis that best matches our data (since is constant for a particular ). Both of these quantities we can estimate from the training data . is
simply the proportion of the database consistent with hypothesis (i.e., the proportion of entities with class -- is regarded as the hypothesis).
Calculation of still poses some challenges and this is where the naïve part of naïve Bayes comes in. The naïve assumption is that the variables (each record is describe by variables ) are
conditionally independent given the class (i.e., the hypothesis that the classification is , which could be the class soft in our example data) of the record. That is, given that a patient has a soft
contact lens, to use our example, the probability of the patient being all of young, myope, non-astigmatic and with a normal-tear-rate is the same as the product of the individual probabilities. The
naïve assumption, more concretely, is that being astigmatic or not, for example, does not affect the relationship between being young given the use of soft lenses. Mathematically we write this as:
Empirically determining the values of the joint probability on the left of this equation is a problem. To estimate it from the data we need to have available in the data every possible combination of
values and examples of their classification so we can then use these frequencies to estimate the probabilities. This is usually not feasible. However, the collection of probabilities on the right
poses little difficulty. We can easily obtain the estimates of these probabilities by counting their occurrence in the database.
For example, we can count the number of patients with Age being young and belonging to class soft (perhaps there are only two) and divide this by the number of patients overall belonging to class
soft (perhaps there are five). The resulting estimate of the probability (i.e., ) is .
The Naïve Bayes algorithm is then quite simple. From the training data we estimate the probability of each class, , by the proportions exhibited in the database. Similarly the probabilities of each
variable's value, given a particular class (), is simply the proportion of those training entities with that class having the particular variable value.
Now to place a new record () into a class we simply assign it to the class with the highest probability. That is, we choose the which maximises:
Copyright © 2004-2010 Togaware Pty Ltd Support further development through the purchase of the PDF version of the book.
The PDF version is a formatted comprehensive draft book (with over 800 pages).
Brought to you by Togaware. This page generated: Sunday, 22 August 2010 | {"url":"http://www.togaware.com/datamining/survivor/Algorithm3.html","timestamp":"2014-04-17T03:50:30Z","content_type":null,"content_length":"61284","record_id":"<urn:uuid:540f2568-c010-4c98-9ed7-5c7bd64e7e27>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
solve the square root of 6x+1 = x-1
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50fca631e4b010aceb3360d6","timestamp":"2014-04-16T08:12:26Z","content_type":null,"content_length":"59606","record_id":"<urn:uuid:91989b00-8dd0-4fab-a8b0-f3c207db3b38>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Algebra Primer: Part 2- Linear Independence
Linear Algebra Primer: Part 2- Linear Independence and Matroids
This tutorial will introduce the concept Linear Independence. It will also briefly introduce Matroid Theory, which branches into the realms of graph theory and abstract algebra as well.
In Graph Theory, Abstract Algebra, and Linear Algebra, there is a concept known as
. The idea behind independence is if a structure is acyclic. In Graph Theory, that is pretty easy to see. Any forest or tree is independent, as they are acyclic. Another way to look at independence
from a Graph Theory perspective is that there exists a unique path between two vertices. If there is a cycle, there are at least two paths between two vertices. Consider the trivial cycle
with vertices labeled A, B, and C. From vertex A-C, there are two paths: A-C and A-B-C. There are two paths from A-B and B-C as well. Consider the examples below of independent and dependent graphs.
It is clear that the first graph is independent, as it has no cycles. The second graph is dependent, as it contains a cycle as a subgraph. The last graph is a circuit itself, so it is clearly
Let's now discuss Linear Independence, which is what pertains to Linear Algebra. A set of vectors (set S) is defined to be linearly independent when no vector in the set can be formed from a linear
combination of the other vectors. Another way to describe linear independence is in terms of span. So if S is linearly independent, then all the vectors in span(S) (the span of S is the set of all
vectors formed from linear combinations of the vectors in S) are formed from a unique linear combination of the vectors in S. A set of vectors that is not linearly independent is called linearly
This concept of linear independence is a little abstract, so let's decompose it. Consider the following vector sets:
• S = {(1, 2, 3), (4, 5, 6)}: Here, S is linearly independent. There is no way to form (4, 5, 6) from multiples of (1, 2, 3).
• S = {(1, 2, 3), (2, 4, 6), (3, 4, 5)}: Here S is linearly dependent. It is clear that (2, 4, 6) = 2(1, 2, 3) + 0(3, 4, 5), so a vector in S is formed from a linear combination of the other two
Let's back up and revisit the graph theory intuition. A structure is graphically independent if there exists a unique path for all pairs of vertices. Regarding linear independence, if there is a
unique linear combination for each vector in span(S), that could be thought of as a unique path. Similarly, if S is linearly dependent, then the linear combination to form an arbitrary vector
in S could be substituted in for
to create two linear combinations. Thus, there are two paths to the same end result.
It is easy for small sets to determine independence. It gets trickier to eyeball and construct linear combinations for larger vector sets, especially when the Vector Space is bigger than a 2-3
dimensions. Let's talk about some heuristics to use:
• The Multiples Test: If there are two vectors in the set, a and b, such that ka = b, for some constant k, then the set of vectors is linearly dependent.
• Determinant Test: If the Vector Space is of dimension n and |S| = n, then the determinant test can be used. Consider a matrix M whose column vectors are the vectors in S. The set S is linearly
independent if and only if det(M) != 0.
• Dimension Test: If the Vector Space is of dimension n and |S| > n, then S is linearly dependent. Let's explore the intuition behind this a little more. If there are n dimensions, then there are n
independent coordinate axes. So a subset of independent axes (or vectors) is independent. However, any extra axes produce a dependence. It isn't necessary to have two y-axes when only one will
• Linear Combinations Test: When all else fails, this is a good test to fall back on. Row reduction can help expedite the process. Consider a matrix M whose column vectors are the vectors in S. If
|S| = n and there are n distinct vectors that will satisfy the equation Mx = 0, where x and 0 are vectors, then S is linearly independent. Otherwise, S is linearly dependent.
Introduction to Matroid Theory
Matroids are structures that encapsulate this concept of independence found in graph theory, abstract algebra, and linear algebra. A Matroid is constructed from a ground set G. From this ground set,
a second set I is constructed that contains all the independent subsets of the ground set. Thus, if the ground set is independent, then the I = P(G), where P(G) is the power set of G.
Matroids allow for isomorphisms between Linear Algebra and Graph Theory. When dealing with Graphs, Matroids are constructed from the edge sets. Thus, the intuition developed above regarding graph
theory and independence is more than just intuition. Matroids are a tool to answer graph questions using linear algebra, and linear algebra questions using graph theory. Some of these applications
include path finding, matchings, scheduling, spanning trees, and planarity.
Matroids have three fundamental properties or axioms. This first property states that no subset of a circuit is a circuit. Think about it this way. If a set of vectors is linearly independent, that
means that no vector in the set can be formed from a linear combination of the other vectors. So removing a vector from the set won't change this fact. From a graph theory perspective, a circuit is
formed by adding edges, not removing them. Thus, this first property makes sense.
The second property states that the null set is independent, which follows from the first property. The null set has no elements; thus, no circuits.
The final property states that all maximally independent subsets of the ground set all have the same cardinality. Consider the graph
. Clearly, removing one edge from the graph leaves a spanning tree, which is independent. Any arbitrary edge can be removed for the same result- a maximally independent subgraph. The same argument
can be made for a set of Vectors.
I hope that this tutorial has been helpful in introducing the concept of Linear Independence. The introduction of Matroids is but the beginning of where Linear Algebra overlaps with Graph Theory.
There will be future tutorials on Algebraic Graph Theory, which utilizes Linear Algebra to analyze graphs. | {"url":"http://www.dreamincode.net/forums/topic/321825-linear-algebra-primer-part-2-linear-independence-and-matroids/page__pid__1855704__st__0","timestamp":"2014-04-17T08:17:01Z","content_type":null,"content_length":"77335","record_id":"<urn:uuid:433eb2b8-ff75-414c-8e4a-a823f0a40d48>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors in Cook County, IL
Chicago, IL 60640
Awesome Math and Test Prep Tutor
...I've taught math and test prep on two continents. I have two years classroom experience at the high school level, so I know exactly what high school teachers expect. I taught
1 and 2 in a classroom setting at Indiana University, and I worked as a teaching...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Cook_County_IL_algebra_tutors.aspx","timestamp":"2014-04-18T09:30:24Z","content_type":null,"content_length":"62904","record_id":"<urn:uuid:a67cd4aa-900a-4755-b6cd-841a1b736e9a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
volume of a Lie groupoid
volume of a Lie groupoid
One expects a notion of volume measure on Lie groupoids (also known as differentiable stacks), which generalizes the notion of groupoid cardinality of finite groupoids in that it reduces the volume
of the object space by the degree to which automorphisms encode weak quotients.
A solution to this was proposed by Alan Weinstein, and re-interpreted in terms of 2-vector bundles by Richard Hepworth.
One would expect some relation of this to the Lagrangian BV formalism, which is also a formalism for integration over $L_\infty$-algebroids.
• Alan Weinstein, The volume of a differentiable stack (arXiv)
• Richard Hepworth, 2-Vector Bundles and the Volume of a Differentiable Stack (pdf)
Revised on September 10, 2011 11:50:26 by
David Corfield | {"url":"http://www.ncatlab.org/nlab/show/volume+of+a+Lie+groupoid","timestamp":"2014-04-18T13:08:30Z","content_type":null,"content_length":"12833","record_id":"<urn:uuid:837423f2-8bbc-4ab2-a870-ae829a18f4b0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Trillion Triangles and a Few Multicore Processors
A Trillion Triangles and a Few Multicore Processors
October 02, 2009
Solving an old mathematic problem seems a lot like picking at a scab. But then I'm not a mathematician. Still, it's hard not to be impressed when mathematicians solve a problem that's stumped other
mathematicians for hundreds of years.
Solving an old mathematic problem seems a lot like picking at a scab. But then I'm not a mathematician. Still, it's hard not to be impressed when mathematicians solve a problem that's stumped other
mathematicians for hundreds of years.
The latest example is the Congruent Number Problem, first posed by the Persian mathematician al-Karaji (c.953 - c.1029). The problem involves determining which whole numbers can be the area of a
right-angled triangle whose sides are whole numbers or fractions. The area of such a triangle is called a "congruent number." For example, the 3-4-5 right triangle that students see in geometry has
area 1/2 x 3 x 4=6, so 6 is a congruent number. The smallest congruent number is 5, which is the area of the right triangle with sides 3/2, 20/3, and 41/6. The first few congruent numbers are 5, 6,
7, 13, 14, 15, 20, and 21.
Actually, the international team of mathematicians didn't really solve the problem -- they only resolved it for the first 1 trillion cases. As it turns out, the biggest challenge was that these
numbers were so large that they couldn't fit in the computer's main memory (right, it's a hardware problem), so the researchers resorted to accessing hard drives (what do you think *that* did to
network performance?).
To ensure accuracy, the mathematicians split into two teams. Team "1" -- Bill Hart (Warwick University) and Gonzalo Tornaria (Universidad de la Republica) -- used "Selmer," a DUNK Teraserve R2850
with four 2.4 Ghz AMD quad-core CPUs, 128-GB RAM, a 1.5-TB hard drive, and an NVIDIA nForce Pro 3600 chipset. Team "2" -- Mark Watkins (University of Sydney), David Harvey (NYU), and Robert Bradshaw
(University of Washington) -- used "Sage," a Sun Fire X4450 Server built around 4x6-core 2.66-GHz Intel Xeon CPUs, 128-GB RAM, and 2.7 TB hard drive. As for the software, the teams based their
calculations on the freely available C library FLINT (short for "Fast Library for Number Theory").
"The difficult part was developing a fast general library of computer code for doing these kinds of calculations," says Bill Hart. "Once we had that, it didn't take long to write the specialized
program needed for this particular computation." (For a detailed description of how they approached the problem, see Congruent Number Theta Coefficients to 10^12.)
Many congruent numbers were known prior to the new calculation. For example, every number in the sequence 5, 13, 21, 29, 37, ..., is a congruent number. But other similar looking sequences, like 3,
11, 19, 27, 35, ...., are more mysterious and each number has to be checked individually. The calculation found 3,148,379,694 of these congruent numbers up to a trillion. I'm impressed. | {"url":"http://www.drdobbs.com/parallel/a-trillion-triangles-and-a-few-multicor/228800488","timestamp":"2014-04-16T11:08:24Z","content_type":null,"content_length":"94017","record_id":"<urn:uuid:684b465d-c317-482d-8897-35b2783b7d59>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
moving and rotating vertices
Hi. I've been writing a simple animation program which works by moving different groups of an obj object. Currently I have it putting each group in a display list and then moving it the required
amount with glRotate and glTranslate. This has the problem that as parts move, they leave huge gaps.
To get round this problem, I decided to instead load the vertices into an array and when it's time to draw, change the vertex positions and then draw the faces in immediate mode.
I would like to know what the best method for moving these vertices is. Is there a nice, simple way to do it, or am I going to have to learn all that matric stuff and manually implement all the
functions? If it is the latter, does anyone have any pointers on how to get started?
I'm not sure what's happening in your code - you might post some of it for us. In any case, transformations can be tricky. You have to do them in the right order and with the appropriate inversions
or negations to get the results you expect. It can be kind of counterintuitive.
You should be able to do what you're trying to do through OpenGL without getting into the matrix math yourself - you probably just have something backwards or in the wrong order. If you're serious
about it, though, you'll want to get up to speed on that stuff. A fairly good book is '3D Math Primer for graphics and game development', which you should be able to find fairly easily.
If you still haven't solved your problem, post some code and I'm sure it'll get sorted out.
No, I'm certain it's not to do with the order of statements. I want to manually set the position of these vertices. At current, the code uses display lists for each group, and as such looks hideous
when moving pieces of it.
The code to do the drawing of this model will probably look something like:
int i, j, k;
Vertex3D *frameVertices;
Vertex3D *frameNormals;
frameVertices = malloc(sizeof(Vertex3D) * nVertices);
frameNormals = malloc(sizeof(Vertex3D) * nVertices);
for (i = 0; i < nVertices; i++) {
frameVertices[i] = vertices[i];
frameNormals[i] = normals[i];
vertexUsed[i] = NO;
for (i = 0; i < nGroups; i++) {
for (j = 0; j < groups[i].nFaces; j++) {
vertex[0] = groups[i].vertex[0];
vertex[1] = groups[i].vertex[1];
vertex[2] = groups[i].vertex[2];
vertex[3] = groups[i].vertex[3];
for (k = 0; k < 4; k++) {
if (!vertexUsedgroups[[i].face[j].vertex[k]]) {
vertexUsed[groups[i]. face[j].vertex[k]] = YES;
glBegin(GL_QUADS); {
for (k = 0; k < 4; k++) {
} glEnd();
That's just off the top of my head, so it might not work quite right, but that's basically what I plan to do at drawing time. I can't see any way that this would be possible using glRotate and such.
You will almost certainly want to change the doX/doY/doZ groups to a single translation function and an angle/axis (quaternion) rotation function. Also, why do you have 2 sets of translate and rotate
calls with identical parameters? (edit: Sorry, I didn't notice one set was for the normals.)
You could use GL feedback mode to do the transformations too.
The translate() and rotate(0 functions don't exist yet. That's the problem. GL feedback mode looks exactly like what I'm looking for. Thanks.
No wait, it isn't. After doing a little testing, I've discovered that it in fact will only tell you of the coordinates in the window (why didn't I just read the whole manual to begin with). From the
looks of it I'm just going to have to bite the bullet and learn how to do all the matrix transforms and such by hand. Does anyone know of a good website with tutorials on this nature?
I'm using feedback mode to render animated models myself, so, yes, it's possible
You do get full 3D coordinates out of feedback mode, but they are in window space. You just have to transform them back to world space by setting up an orthogonal projection and applying simple
transformations to fix any remaining differences (e.g. the Z value will probably need changing).
Posts: 1,232
Joined: 2002.10
Feedback doesn't return a point if it gets clipped by the projection, which makes it useless for some operations.
You can get the same results for vertex position by using gluProject, without clipping. Doesn't help if you want the lit color, etc.
You're not going to see anything you draw in feedback mode, so you can make any changes you want to the viewport and projection matrix to prevent anything being clipped.
Possibly Related Threads...
Thread: Author Replies: Views: Last Post
Origin problems - rotating shapes frances_farmer 1 2,382 Feb 10, 2007 08:08 AM
Last Post: unknown
Rotating a point in 3D Joseph Duchesne 1 2,441 Dec 19, 2006 12:43 PM
Last Post: unknown
Rescaling vertices? WhatMeWorry 3 2,234 May 19, 2006 12:55 PM
Last Post: WhatMeWorry
rotating objects ferum 11 3,748 Dec 19, 2005 02:28 PM
Last Post: ferum
rotating... MACnus 4 3,556 Mar 25, 2005 07:46 PM
Last Post: phydeaux | {"url":"http://idevgames.com/forums/thread-6908.html","timestamp":"2014-04-16T22:27:52Z","content_type":null,"content_length":"45955","record_id":"<urn:uuid:831dd15c-6e33-40dc-a2c0-5b0d8ed5b6b3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fermat's Variational Principle for Anisotropic Inhomogeneous Media
Fermat's variational principle states that the signal propagates from point S to R along a curve which renders Fermat's functional {\mathcal{I}}(
) stationary. Fermat's functional {\mathcal{I}}(
) depends on curves
which connect points S and R, and represents the travel times from S to R along
. In seismology, it is mostly expressed by the integral {\mathcal{I}}(
) = \int\limits_S^Rℒ(
, taken along curve
, where ℒ(
) is the relevant Lagrangian,
are coordinates,
is a parameter used to specify the position of points along
, and
= d
. If Lagrangian ℒ(
) is a homogeneous function of the first degree in
, Fermat's principle is valid for arbitrary monotonic parameter
. We than speak of the first-degree Lagrangian ℒ
). It is shown that the conventional Legendre transform cannot be applied to the first-degree Lagrangian ℒ
) to derive the relevant Hamiltonian ℋ
), and Hamiltonian ray equations. The reason is that the Hessian determinant of the transform vanishes identically for first-degree Lagrangians ℒ
). The Lagrangians must be modified so that the Hessian determinant is different from zero. A modification to overcome this difficulty is proposed in this article, and is based on second-degree
Lagrangians ℒ
. Parameter
along the curves is taken to correspond to travel time
, and the second-degree Lagrangian ℒ
,{\dot x}
) is then introduced by the relation ℒ
,{\dot x}
) = ½[ℒ
,{\dot x}
, with {\dot x}
= d
. The second-degree Lagrangian ℒ
,{\dot x}
) yields the same Euler/Lagrange equations for rays as the first-degree Lagrangian ℒ
,{\dot x}
). The relevant Hessian determinant, however, does not vanish identically. Consequently, the Legendre transform can then be used to compute Hamiltonian ℋ
) from Lagrangian ℒ
,{\dot x}
), and vice versa, and the Hamiltonian canonical equations can be derived from the Euler-Lagrange equations. Both ℒ
,{\dot x}
) and ℋ
) can be expressed in terms of the wave propagation metric tensor
,{\dot x}
), which depends not only on position
, but also on the direction of vector {\dot x}
. It is defined in a Finsler space, in which the distance is measured by the travel time. It is shown that the standard form of the Hamiltonian, derived from the elastodynamic equation and
representing the eikonal equation, which has been broadly used in the seismic ray method, corresponds to the second-degree Lagrangian ℒ
,{\dot x}
), not to the first-degree Lagrangian ℒ
,{\dot x}
). It is also shown that relations ℒ | {"url":"http://www.ingentaconnect.com/content/klu/sgeg/2002/00000046/00000003/00450806","timestamp":"2014-04-17T02:23:13Z","content_type":null,"content_length":"42632","record_id":"<urn:uuid:9907e294-93dd-42f8-9eba-5c571bd889dc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
The pythagorean theorem in everyday life
Grade 9
from Josh (student)
What are some ways that we use the pythagorean theorem in jobs, or even in everyday life?
Hi Josh,
People building fences will use the 3-4-5 triangle to make right angles: Suppose that you have laid the posts on one side
P P P P P P
Where should you lay the next post to continue the fence at a right angle? It is tough to produce a 90 degree angle out in the open with nothing to guide you. One trick is to take a long enough rope
and mark off two points A and B on its length such that the lengths of the three segments are proportional to 3, 4, 5:
Have a friend hold the A point on the last fence post, another friend hold both ends together on the line of fence posts already completed, and hold the B point in the direction where you want to
continue the fence. Now tighten everything and drive a stake at B.
Since the 3-4-5 triangle is right, you will want to place the new fence line in the direction of the A-B line.
Go to Math Central | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.00/josh2.html","timestamp":"2014-04-16T21:52:00Z","content_type":null,"content_length":"2388","record_id":"<urn:uuid:62a5c2b1-8ca5-45b9-8809-911515e71d36>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
IARCS Problem Archive
Problem 1: Equal Gifts, (R Shreevatsa, CMI)
It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends
are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is
up to their father to divide up these books between them.
He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in
such a manner that the total value of the books for Lavanya is as close as possible to total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a
value higher than 300 Rupees.
Suppose there are 4 pairs of books whose cost in Rupees are:
(3,5), (7,11), (8,8), (2,9)
By giving the books worth 3,7,8 and 2 to Lavanya and the rest to Nikhil, the net difference in value would be 5+11+8+9-3-7-8-2 = 13. However, by giving books worth 3,7,8 and 9 to Lavanya and the rest
to Nikhil, their father can ensure that the difference in values is just 1. You can verify that you cannot do better than this.
You task is to help their father decide how to divide the books.
Input format
The first line of the input contains a single integer N indicating the number of pairs of books that need to be divided between Lavanya and Nikhil. The next N lines, lines 2,3,…,N+1, each contain two
integers indicating the costs of one pair of books.
Output format
A single integer indicating the smallest possible difference between the total value of books assigned to Lavanya and Nikhil.
Test Data:
You may assume that N ≤ 150 and that the cost of every book is in the range 1…300.
Here is the sample input and output corresponding to the example discussed above.
Sample Input
Sample Output
CPU Timelimit: 3 seconds
Memory limit: 64M
Grading style: ioi | {"url":"http://opc.iarcs.org.in/index.php/problems/EQGIFTS","timestamp":"2014-04-20T10:54:45Z","content_type":null,"content_length":"4884","record_id":"<urn:uuid:a9c39251-2a99-4533-8298-3479835af1c4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Preparing for Foundations of Algebra
Print Email Sadlier-Oxford Home » Preparing for Foundations of Algebra: Foundations of Algebra
Preparing for Foundations of Algebra
Add and Subtract Whole Numbers and Decimals
Adding Whole Numbers and Decimals Student Lesson Student Worksheet
Subtracting Whole Numbers and Decimals Student Lesson Student Worksheet
Multiply Decimals
• Student Worksheet 1
Multiply Whole Numbers and Decimals Student Lesson • Student Worksheet 2
Divide Decimals
Dividing by a Whole Number Student Lesson Student Worksheet
Dividing by a Decimal Student Lesson Student Worksheet
Add and Subtract Fractions
Adding Fractions and Mixed Numbers Student Lesson Student Worksheet
Subtracting Fractions, Mixed Numbers Student Lesson Student Worksheet
Multiply and Divide Fractions
Multiplying Fractions and Mixed Numbers Student Lesson Student Worksheet
Dividing Fractions and Mixed Numbers Student Lesson Student Worksheet
Addition and Subtraction Equations Student Lesson Student Worksheet
Multiplication and Division Equations Student Lesson Student Worksheet
• Student Worksheet 1
Equations: More than One Operation Student Lesson • Student Worksheet 2
Equations with Grouping Symbols Student Lesson Student Worksheet
Solutions to Inequalities Student Lesson Student Worksheet
Rational Numbers
Adding Rational Numbers Student Lesson Student Worksheet
Subtracting Rational Numbers Student Lesson Student Worksheet
Multiplying Rational Numbers Student Lesson Student Worksheet
Dividing Rational Numbers Student Lesson Student Worksheet
Equations: Addition and Subtraction Student Lesson Student Worksheet
Equations: Multiplication and Division Student Lesson Student Worksheet
Fractions and Decimals to Percents Student Lesson Student Worksheet
Probability of Single Events Student Lesson Student Worksheet
Compound Events: Independent/Dependent Student Lesson Student Worksheet
Add and Subtract Whole Numbers and Decimals
Adding Whole Numbers and Decimals Student Lesson Student Worksheet
Subtracting Whole Numbers and Decimals Student Lesson Student Worksheet
Multiply Decimals
• Student Worksheet 1
Multiply Whole Numbers and Decimals Student Lesson • Student Worksheet 2
Divide Decimals
Dividing by a Whole Number Student Lesson Student Worksheet
Dividing by a Decimal Student Lesson Student Worksheet
Add and Subtract Fractions
Adding Fractions and Mixed Numbers Student Lesson Student Worksheet
Subtracting Fractions, Mixed Numbers Student Lesson Student Worksheet
Multiply and Divide Fractions
Multiplying Fractions and Mixed Numbers Student Lesson Student Worksheet
Dividing Fractions and Mixed Numbers Student Lesson Student Worksheet
Addition and Subtraction Equations Student Lesson Student Worksheet
Multiplication and Division Equations Student Lesson Student Worksheet
• Student Worksheet 1
Equations: More than One Operation Student Lesson • Student Worksheet 2
Equations with Grouping Symbols Student Lesson Student Worksheet
Solutions to Inequalities Student Lesson Student Worksheet
Rational Numbers
Adding Rational Numbers Student Lesson Student Worksheet
Subtracting Rational Numbers Student Lesson Student Worksheet
Multiplying Rational Numbers Student Lesson Student Worksheet
Dividing Rational Numbers Student Lesson Student Worksheet
Equations: Addition and Subtraction Student Lesson Student Worksheet
Equations: Multiplication and Division Student Lesson Student Worksheet
Fractions and Decimals to Percents Student Lesson Student Worksheet
Probability of Single Events Student Lesson Student Worksheet
Compound Events: Independent/Dependent Student Lesson Student Worksheet | {"url":"http://www.sadlier-oxford.com/math/mc_preparingforalgebra.cfm?grade=8&sp=student","timestamp":"2014-04-19T09:27:02Z","content_type":null,"content_length":"34364","record_id":"<urn:uuid:e0f789ea-44a2-44fc-b60f-af0348c1b46e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noun: statistical regression
1. (statistics) the relation between selected values of x and observed values of y (from which the most probable value of y can be predicted for any value of x)
- regression, simple regression, regression toward the mean
Derived forms: statistical regressions
Type of: statistical method, statistical procedure
Part of: regression analysis
Encyclopedia: Statistical regression | {"url":"http://www.wordwebonline.com/en/STATISTICALREGRESSION","timestamp":"2014-04-19T15:13:59Z","content_type":null,"content_length":"7924","record_id":"<urn:uuid:685fa7fe-4b3a-4d48-82be-ff49d60233cc>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real Analysis/Rational Numbers
From Wikibooks, open books for an open world
←Ordered Sets Real Analysis Axioms of The Real Numbers→
Ordered Fields
Before we can build up the notion of an ordered field we first need some basic concepts from algebra.
Groups play an important role in mathematics. They describe the most basic structures in algebra. The subject of group theory studies the nature and structure of general groups. In this book we will
mostly be concerned with groups you are already familiar with, so this section is more to set some standard terminology. We begin by defining a binary operation which is often thought of as either
multiplication or addition, depending on the context. To avoid confusion we will denote this operation by * while discussion general groups, but in specific cases we will generally use there + or ·
Definition: A binary operation on a set S is a function from S×S→S
Definition: A group is a set G together with a binary operation on G that satisfies the following axioms.
• G is closed under the binary operation. That is, for all x, y in G, x*y is in G.
• The binary operation is associative. That is, for all x, y, and z in G, x*(y*z)=(x*y)*z.
• There exists an identity element, which we denote by e, that satisfies e*x=x*e=x for all x in G.
• For all x in G there exists an inverse element, which we denote by x^-1, so that x*x^-1=x^-1*x=e.
• The integers $\mathbb Z$ together with the binary operation of addition are a group.
• The rational numbers $\mathbb Q$ with the binary operation of addition are a group.
• The non-zero rational numbers $\mathbb{Q}\setminus\{0\}$ with the binary operation of multiplication are a group.
• The set $\mathbb{Z}$ together with the binary operation of multiplication is not a group.
• The set {-1,1} with the binary operation given by multiplication is a group.
• The set {e,o}, with a binary relation given by: e+e=e; e+o=o; o+e=o; and o+o=e; is a group. If one thinks of e as a shorthand for even, and o as a short hand for odd, these are the familiar rules
from childhood "An even number plus an even number is again an even number", etc.
It is often useful to talk about when two groups are the basically the same. It may happen that two groups have a different underlying set, and have a different binary operation, but behave exactly
the same algebraically. When this happens the two groups are called isomorphic.
Definition The groups (G,*) and (H,⊗) are said to be isomorphic if there is a bijective function φ:G→H that satisfies the following two properties:
• φ(e[G])=e[H], where e[G] is the identity element in G and e[H] is the identity element in H;
• φ(x*y)=φ(x)⊗φ(y) for all x and y in G.
A Field[edit]
The set of integers $\mathbb{Z}$ and the operation of addition $+\$ form a group, multiplication $\times$ lacks inverses. If we allow multiplication and addition to operate on $\mathbb{Z}$ we can
define a set where every element except zero has a multiplicative inverse. This is the set of rational numbers.
Rational Numbers[edit]
The next standard extension adds the possibility of quotients or division, and gives us the rational numbers (or just rationals) $\mathbb Q$, which includes the multiplicative inverses of $\mathbb{Z}
\setminus\{0\}$ of the form $\frac{1}{z}$ fractions such as $\frac{1}{2}$, as well as products of the two sets of the form $\frac{z_1}{z_2}$ such as $\frac{64}{7}, \frac{17}{16\times 10^5}$. The
rationals allow us to use arbitrary precision, and they suffice for measurement.
The rational numbers can be constructed from the integers as equivalence classes of order pairs (a,b) of integers such that (a,b) and (c,d) are equivalent when ad=bc using the definition of
multiplication of integers. These ordered pairs are, of course, commonly written $\tfrac{a}{b}$. One can define addition as (a,b)+(c,d)=(ad+bc,bd) and multiplication as (ac,bd) all using the
definition of addition and multiplication of integers.
See Also[edit] | {"url":"http://en.wikibooks.org/wiki/Real_analysis/Rational_Numbers","timestamp":"2014-04-25T05:02:02Z","content_type":null,"content_length":"32467","record_id":"<urn:uuid:97f7b0d5-c324-4328-955d-f29852d52883>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Porter, TX Prealgebra Tutor
Find a Porter, TX Prealgebra Tutor
...I travel to France once a year to visit my family. 4. I taught middle school and high school students. I am qualified to tutor prealgebra because I have a Bachelor of Science in Mathematics.
3 Subjects: including prealgebra, French, elementary math
I have been tutoring for seven years and teaching High School Mathematics for four years. My first year teaching, my classrooms TAKS scores increased by 40%. This last year I had a 97% pass rate
on the Geometry EOC and my students still contact me for math help while in college. I know I can help...
8 Subjects: including prealgebra, physics, geometry, biology
...What works for one doesn't necessarily work for others, and I have the willingness to differentiate the instructional strategies. Give me a try! Quieres aprender Ingles?
39 Subjects: including prealgebra, reading, English, chemistry
...Hi, my name is Tonya. I have been teaching Math for 10 years. I have a Bachelor's degree in Mathematics.
2 Subjects: including prealgebra, elementary math
...I am a research scientist and Yale educated with a post graduate degree. I have tutored in the past. I am experienced with statistics, math, psychology, biology, reading, writing, photography,
computer programs (SPSS, Excel, Powerpoint), and more.I have studied pharmacology in graduate school and spent over 10 years in industry in pharmacology.
85 Subjects: including prealgebra, reading, English, Spanish
Related Porter, TX Tutors
Porter, TX Accounting Tutors
Porter, TX ACT Tutors
Porter, TX Algebra Tutors
Porter, TX Algebra 2 Tutors
Porter, TX Calculus Tutors
Porter, TX Geometry Tutors
Porter, TX Math Tutors
Porter, TX Prealgebra Tutors
Porter, TX Precalculus Tutors
Porter, TX SAT Tutors
Porter, TX SAT Math Tutors
Porter, TX Science Tutors
Porter, TX Statistics Tutors
Porter, TX Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Dogwood Acres, TX prealgebra Tutors
Huffman prealgebra Tutors
Hufsmith prealgebra Tutors
New Caney prealgebra Tutors
North Cleveland, TX prealgebra Tutors
North Houston prealgebra Tutors
Oak Ridge North, TX prealgebra Tutors
Patton Village, TX prealgebra Tutors
Patton Vlg, TX prealgebra Tutors
Plum Grove, TX prealgebra Tutors
Roman Forest, TX prealgebra Tutors
Sorters, TX prealgebra Tutors
Timberlane Acres, TX prealgebra Tutors
Woodbranch, TX prealgebra Tutors
Woody Acres, TX prealgebra Tutors | {"url":"http://www.purplemath.com/porter_tx_prealgebra_tutors.php","timestamp":"2014-04-19T17:36:22Z","content_type":null,"content_length":"23670","record_id":"<urn:uuid:570f26f7-79ca-4ac2-aadd-6b7eeb6e3c5e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Get a Clue' Brain Teaser
Get a Clue
Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.
Puzzle ID: #18568
Category: Probability
Submitted By: Poker
Corrected By: Winner4600
In Clue (called Cluedo in some countries, including its origin, England), you attempt to solve a murder mystery. There are six possible suspects, six possible weapons that the murderer could have
used, and nine possible locations for the murder to have occurred.
If you guess a random suspect, a random weapon, and a random room, what is the probability of getting at least one right?
Show Answer
What Next?
(user deleted) i guess
Aug 16, 2004
xpitxbullx This teaser is incorrect. Probablility is added. 1/6 + 1/6 + 1/9. The odds are 25/54 of getting at least one correct. You are correct in the number of possible right and wrong
Aug 16, 2004 combinations but some winning combinations are easier to achieve than other winning or losing combinations. Many people dont understand this concept but you can concure this with
any math professor.
Poker "some winning combinations are easier to achieve than other winning or losing combinations."??? What do you mean by this? Every combination is equally likely to have occured.
Aug 19, 2004 Besides, by merely adding 1/6, 1/6, and 1/9, you are counting the combinations that get two right twice and the one that gets all three right three times. In order to do it by
adding, you would have to add all the odds of getting one right, then subtract the odds of getting two right (to reduce each to counting it once), then add the odds of getting all
three right (since subtracting each two once subtracted the three three times, which is one time too many - it got removed completely!), making 1/6+1/6+1/9-1/36-1/54-1/54+1/324.
When you do the math with a calculator, it comes out as 31/81. Don't believe me? Try it - then multiply the answer by 81 to get 31.
xpitxbullx Not true. I am an authority in probability and event calculations. Your formula is incorrect. It is 1/6 + 1/6 + 1/9. How do I know this? Its how I make a living. Since I'm not
Aug 19, 2004 living in a cardboard box and my family is not hungry, I must be doing something right.
xpitxbullx By the way, you can calculate this problem as if you had two 6-sided dice and one 9-sided die.
Aug 19, 2004
xpitxbullx Since you probably wont take my word for it, please consult someone else of mathamatical authority if you want to be enlightened. Perhaps your local COLLEGE math professor. (Even
Aug 19, 2004 though I learned this in 6th grade first). If I seem grumpy, Im tired. No offense intended.
xpitxbullx Sorry I wont shut up but, if you are going to use multiplication to calculate probability, you need to use the 'multiplicative rule'. You can just use the rule you made up. http://
Aug 19, 2004 www.netnam.vn/unescocourse/statistics/46.htm
xpitxbullx ARGGHH! Im tired. I meant you CANNOT use the rule you just made up.
Aug 19, 2004
Kepeli I have to agree with xpitxbullx. I took an intro course in probability and statistics at Michagan State. There are proven formulas used to find these answers.
Aug 19, 2004
xpitxbullx I want to apologize for being so bull-headed. Poker is correct and I was not. Great teaser because it made me test my logic and concede to the proper math.
Aug 21, 2004
Jimbo Poker is certainly correct. A more mathematical way of expressing the answer would be:
Aug 21, 2004 P(not event) = 1 - P(event)
The probability of not getting them all wrong is the complement of getting at lest one right. (If you don't get them all wrong then you must have had at least one correct).
P = 1 - (5/6*5/6*8/9)
P = 1 - 200/324 = 124/324 = 31/81
Nice Puzzle Poker!
Good luck with the Vacancies coulumns Pit Bull!
mosoh Bravo, I'm not too bright and I got it (although it took me quite some time
Aug 21, 2004
kloneo jumbo is right. There is only one combination where not at least one of the variables is right. And this combination is: The all wrong.
Oct 04, 2004 1-p thing learned that in 6e grade.
AndyTover I calculated the probabilty of getting the first one wrong (5/6), multiplied by the chance of getting the second one wrong (5/6) and then did the same again for the last one (8/9).
Oct 23, 2004 The answer was 61.73 - this is the chance of getting all three wrong; therefore the chance of getting at least one right is 38.27%.
CGauss6180 Just to point out a little something that every1 seems to have overlooked. Sadly, contrary to what the guy who says statistics are how he makes his living seems to think, 1/6 + 1/6
Oct 24, 2004 + 1/9 is not 25/54. It turns out that 3/18 + 3/18 + 2/18 is 8/18 or 24/54. At least you were close to the wrong answer though dude.
Oct 27, 2004 "I am an authority in probability and event calculations. Your formula is incorrect. It is 1/6 + 1/6 + 1/9. "
Addition with probabilities is NEVER correct...
given 3 die rolls what is probability of rolling a 1?
1/6+1/6+1/6?? no
given 6 die rolls what is probabilty of rolling a 1?
1/6+1/6+1/6+1/6+1/6+1/6?? no
given 10,000 die rolls what is probabilty of rolling a 1?
1/6+1/6+1/6+.....+1/6?? no
if you are given 100 billion trillion number of dice rolls.. the probability of rolling a 1 (at least once) is STILL LESS THEN 100%
pi202 Hey slow_turtle: you say "addition with probability is NEVER correct". I say that's a load of rubbish! What's the probability of throwing a 1 or a 2 with a normal die? Well, it's
Dec 16, 2004 prob(throwing a 1) + prob(throwing a 2) = 1/6 + 1/6 =2/6 =2/3
Never say never!
Poker You can add with probability. You just have to know what to add. If you want to add the possibilities, go ahead. There are 1x6x9 possibilities that get the suspect right - that's
Dec 19, 2004 54. Of the ones that don't get the suspect right, there are still 5x1x9 possibilities that get the weapon right - that's 45. Of the ones that don't get the suspect or weapon right,
there are still 5x5x1 possibilities that get the room right - that's 25. Add those together to get 124. So the probability is 124/324 or, reduced, 31/81.
kuru im really rusty with math... so im probably wrong... but i was thinking... would you only use multiplication when you are figuring the probably of the combinations. sure there are a
Dec 29, 2004 bunch of probabilty is large for combinations but we arent supposed to be thinking of that just the probability of getting 1 answer right. i also added mine like the others ...1/6+1
/6+1/9.......2/6+1/9...........6/18+2/18.......8/18......or 4/9ths to be easier.... any one get anything similar???? HAHAHAH ok this teaser is going to make me go back to school
kuru ok after thinking about it more i think im wrong.. HAHha multiplacation seems to make more sense now
Dec 29, 2004
Jessica270 a duhhhhhhhhhhhhhhh
Mar 12, 2005
brianz I agree with Poker.
May 02, 2005 But it's fun looking at the comments. I'd say you'd learn more than by reading a book.
wordsrcool I just have to say it. Your all wrong, the question's wrong. The chances of someone randamly guessing are 0. In clue you always start with some cards. Therefore you will always
Jun 15, 2005 start with some idea of what not to guess, raising your chances of getting at least one right. I'm still struggleing with the math, but I know my clue game.
tiny_dancer I just got done reading a brain teaser that was really long so I decided i was not going to try to salve it
Jul 20, 2005
Riddlerman HAHAHA!!! My answer was:
Jul 31, 2005 1/8515157028618240000
katiebug I am no good at math, so my answer wasn't right, but great job anyways!!!
Dec 17, 2005
mr_brainiac Get the answer right? I didn't even read the question right!!
Jan 03, 2006
shadow-x thought it was 3/9s but it was a guess ...
Mar 03, 2006
brainjuice i got 1/324
Mar 27, 2006
why the answer is not the same as 1-5/6*5/6*8/9.
i got 1/324 from 1/6*1/6*1/9. can anybody explain it for me?
dishu Brainjuice your starting point is right but your calculation is wrong. 1-5/6*5/6*8/9 is not 1/324. You have to follow the PEDMAS rules here. You have to first perform the
Jun 29, 2006 multiplication 5/6*5/6*8/9 which will give you 200/324. When you subtract this from 1 you will get 124/324 which reduces to 31/81 on dividing both numerator and denominator by 4
dishu Brainjuice your starting point is right but your calculation is wrong. 1-5/6*5/6*8/9 is not 1/324. You have to follow the PEDMAS rules here. You have to first perform the
Jun 29, 2006 multiplication 5/6*5/6*8/9 which will give you 200/324. When you subtract this from 1 you will get 124/324 which reduces to 31/81 on dividing both numerator and denominator by 4
IluvDepp_Bloom That was really hard, considering i am SUPER-BAD at math!
May 13, 2007
masquerademe235 Hmm... but if you're only trying to get one right, then take a whack at either the murderer or the weapon, and you'll have a 1/6 chance. I don't know. I won't argue about it.
Sep 26, 2007 | {"url":"http://www.braingle.com/brainteasers/teaser.php?id=18568&op=0&comm=1","timestamp":"2014-04-18T03:17:49Z","content_type":null,"content_length":"49972","record_id":"<urn:uuid:923c9341-66d7-45f9-ada9-a2e5726eb13d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
December 1st 2009, 01:31 PM #1
Oct 2009
A ball is thrown upward with an initial velocity of 64ft/sec from an initial height of 80ft. The acceleration due to gravity is -32ftsec^2
a. Write the velocity and position functions v(t) and y(t).
b. When does the ball hit the ground?
c. What is the maximum height obtained by the ball?
Need some help dont know where to start???
Read v(t) as "velocity as a function of time." In plain english, velocity as a function of time is 1.) the initial velocity, upward, which we'll call +64 ft/s; and 2.) the velocity that results
from constant (downward) acceleration, which we'll call (-32 ft/s^2)*t.
I don't know how rigid your instructor is about units, but it helps here to write the simple form of the formula without them:
$v(t) = 64 - 32t$
Now, the next bit actually involves your subject line. Can you figure out y(t) if I tell you $v(t) = dy/dt$ and $dy = v(t) dt$?
December 1st 2009, 01:45 PM #2
Junior Member
Nov 2009 | {"url":"http://mathhelpforum.com/calculus/117861-antiderivatives.html","timestamp":"2014-04-19T15:35:34Z","content_type":null,"content_length":"32218","record_id":"<urn:uuid:a0b9eea0-4a9e-41e0-96b6-1db4de49a47c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Poisson's ratio
From Wikipedia, the free encyclopedia
Poisson's ratio, named after Siméon Poisson, is the negative ratio of transverse to axial strain. When a material is compressed in one direction, it usually tends to expand in the other two
directions perpendicular to the direction of compression. This phenomenon is called the Poisson effect. Poisson's ratio $u$ (nu) is a measure of this effect. The Poisson ratio is the fraction (or
percent) of expansion divided by the fraction (or percent) of compression, for small values of these changes.
Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. This is a common observation when a rubber
band is stretched, when it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion, and will have the same value as above. In certain rare
cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched) which will yield a negative value of the Poisson ratio.
The Poisson's ratio of a stable, isotropic, linear elastic material cannot be less than −1.0 nor greater than 0.5 due to the requirement that Young's modulus, the shear modulus and bulk modulus have
positive values.^1 Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible material deformed elastically at small strains would have a Poisson's ratio of
exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation (Seismic Performance of
Steel-Encased Concrete Piles by RJT Park) (which occurs largely at constant volume.) Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0: showing very little lateral
expansion when compressed. Some materials, mostly polymer foams, have a negative Poisson's ratio; if these auxetic materials are stretched in one direction, they become thicker in perpendicular
direction.Some anisotropic materials have one or more Poisson ratios above 0.5 in some directions.
Assuming that the material is stretched or compressed along the axial direction (the x axis in the below diagram):
$u = -\frac{d\varepsilon_\mathrm{trans}}{d\varepsilon_\mathrm{axial}} = -\frac{d\varepsilon_\mathrm{y}}{d\varepsilon_\mathrm{x}}= -\frac{d\varepsilon_\mathrm{z}}{d\varepsilon_\mathrm{x}}$
$u$ is the resulting Poisson's ratio,
$\varepsilon_\mathrm{trans}$ is transverse strain (negative for axial tension (stretching), positive for axial compression)
$\varepsilon_\mathrm{axial}$ is axial strain (positive for axial tension, negative for axial compression).
Length change
For a cube stretched in the x-direction (see figure 1) with a length increase of $\Delta L$ in the x direction, and a length decrease of $\Delta L'$ in the y and z directions, the infinitesimal
diagonal strains are given by
$d\varepsilon_x=\frac{dx}{x}\qquad d\varepsilon_y=\frac{dy}{y}\qquad d\varepsilon_z=\frac{dz}{z}.$
Integrating these expressions and using the definition of Poisson's ratio gives
$-u \int\limits_L^{L+\Delta L}\frac{dx}{x}=\int\limits_L^{L-\Delta L'}\frac{dy}{y}=\int\limits_L^{L-\Delta L'}\frac{dz}{z}.$
Solving and exponentiating, the relationship between $\Delta L$ and $\Delta L'$ is then
$\left(1+\frac{\Delta L}{L}\right)^{-u} = 1-\frac{\Delta L'}{L}.$
For very small values of $\Delta L$ and $\Delta L'$, the first-order approximation yields:
$u \approx \frac{\Delta L'}{\Delta L}.$
Volumetric change
The relative change of volume ΔV/V of a cube due to the stretch of the material can now be calculated. Using $V=L^3$ and $V+\Delta V=(L+\Delta L)(L-\Delta L')^2$:
$\frac {\Delta V} {V} = \left(1+\frac{\Delta L}{L} \right)\left(1-\frac{\Delta L'}{L} \right)^2-1$
Using the above derived relationship between $\Delta L$ and $\Delta L'$:
$\frac {\Delta V} {V} = \left(1+\frac{\Delta L}{L} \right)^{1-2u}-1$
and for very small values of $\Delta L$ and $\Delta L'$, the first-order approximation yields:
$\frac {\Delta V} {V} \approx (1-2u)\frac{\Delta L}{L}$
For isotropic materials we can use Lamé’s relation^2
$u \approx \frac{1}{2} - \frac{E}{6K}$
where $K$ is bulk modulus.
Width change
If a rod with diameter (or width, or thickness) d and length L is subject to tension so that its length will change by ΔL then its diameter d will change by:
$\Delta d = - d \cdot u {{\Delta L} \over L}$
The above formula is true only in the case of small deformations; if deformations are large then the following (more precise) formula can be used:
$\Delta d = -d \cdot \left( 1 - {\left( 1 + {{\Delta L} \over L} \right)}^{-u} \right)$
$d$ is original diameter
$\Delta d$ is rod diameter change
$u$ is Poisson's ratio
$L$ is original length, before stretch
$\Delta L$ is the change of length.
The value is negative because it decreases with increase of length
Isotropic materials
For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the
other axis in three dimensions. Thus it is possible to generalize Hooke's Law (for compressive forces) into three dimensions:
$\varepsilon_x = \frac {1}{E} \left [ \sigma_x - u \left ( \sigma_y + \sigma_z \right ) \right ]$
$\varepsilon_y = \frac {1}{E} \left [ \sigma_y - u \left ( \sigma_x + \sigma_z \right ) \right ]$
$\varepsilon_z = \frac {1}{E} \left [ \sigma_z - u \left ( \sigma_x + \sigma_y \right ) \right ]$
$\varepsilon_i = \frac {1}{E} \left [ \sigma_i(1+u) - u \left ( \sigma_x + \sigma_y+\sigma_z \right ) \right ]$
$\varepsilon_x$, $\varepsilon_y$ and $\varepsilon_z$ are strain in the direction of $x$, $y$ and $z$ axis
$\sigma_x$ , $\sigma_y$ and $\sigma_z$ are stress in the direction of $x$, $y$ and $z$ axis
$E$ is Young's modulus (the same in all directions: $x$, $y$ and $z$ for isotropic materials)
$u$ is Poisson's ratio (the same in all directions: $x$, $y$ and $z$ for isotropic materials)
These equations will hold in the general case which includes shear forces as well as compressive forces, and the full generalization of Hooke's law is given by:
$\varepsilon_{ij} = \frac {1}{E} \left [ \sigma_{ij}(1+u) - u \delta_{ij}\sigma_{kk} \right ]$
where $\delta_{ij}$ is the Kronecker delta and
$\sigma_{kk} = \sigma_x + \sigma_y+\sigma_z = \sigma_{11} + \sigma_{22}+\sigma_{33}\,$
Orthotropic materials
For orthotropic materials such as wood, Hooke's law can be expressed in matrix form as^3^4
$\begin{bmatrix} \epsilon_{{\rm xx}} \\ \epsilon_{\rm yy} \\ \epsilon_{\rm zz} \\ 2\epsilon_{\rm yz} \\ 2\epsilon_{\rm zx} \\ 2\epsilon_{\rm xy} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{E_{\rm
x}} & - \tfrac{u_{\rm yx}}{E_{\rm y}} & - \tfrac{u_{\rm zx}}{E_{\rm z}} & 0 & 0 & 0 \\ -\tfrac{u_{\rm xy}}{E_{\rm x}} & \tfrac{1}{E_{\rm y}} & - \tfrac{u_{\rm zy}}{E_{\rm z}} & 0 & 0 & 0 \\ -\
tfrac{u_{\rm xz}}{E_{\rm x}} & - \tfrac{u_{\rm yz}}{E_{\rm y}} & \tfrac{1}{E_{\rm z}} & 0 & 0 & 0 \\ 0 & 0 & 0 & \tfrac{1}{G_{\rm yz}} & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm zx}} & 0 \\ 0 &
0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm xy}} \\ \end{bmatrix} \begin{bmatrix} \sigma_{\rm xx} \\ \sigma_{\rm yy} \\ \sigma_{\rm zz} \\ \sigma_{\rm yz} \\ \sigma_{\rm zx} \\ \sigma_{\rm xy} \end{bmatrix}
${E}_{\rm i}\,$ is the Young's modulus along axis $i$
$G_{\rm ij}\,$ is the shear modulus in direction $j$ on the plane whose normal is in direction $i$
$u_{\rm ij}\,$ is the Poisson's ratio that corresponds to a contraction in direction $j$ when an extension is applied in direction $i$.
The Poisson's ratio of an orthotropic material is different in each direction (x, y and z). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the
equation are independent. There are only nine independent material properties; three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be
obtained from the relations
$\frac{u_{\rm yx}}{E_{\rm y}} = \frac{u_{\rm xy}}{E_{\rm x}}~, \qquad \frac{u_{\rm zx}}{E_{\rm z}} = \frac{u_{\rm xz}}{E_{\rm x}}~, \qquad \frac{u_{\rm yz}}{E_{\rm y}} = \frac{u_{\rm zy}}{E_{\rm
From the above relations we can see that if $E_{\rm x} > E_{\rm y}$ then $u_{\rm xy} > u_{\rm yx}$. The larger Poisson's ratio (in this case $u_{\rm xy}$) is called the major Poisson's ratio while
the smaller one (in this case $u_{\rm yx}$) is called the minor Poisson's ratio. We can find similar relations between the other Poisson's ratios.
Transversely isotropic materials
Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is $y-z$, then Hooke's law takes the form^5
$\begin{bmatrix} \epsilon_{{\rm xx}} \\ \epsilon_{\rm yy} \\ \epsilon_{\rm zz} \\ 2\epsilon_{\rm yz} \\ 2\epsilon_{\rm zx} \\ 2\epsilon_{\rm xy} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{E_{\rm
x}} & - \tfrac{u_{\rm yx}}{E_{\rm y}} & - \tfrac{u_{\rm yx}}{E_{\rm y}} & 0 & 0 & 0 \\ -\tfrac{u_{\rm xy}}{E_{\rm x}} & \tfrac{1}{E_{\rm y}} & - \tfrac{u_{\rm zy}}{E_{\rm y}} & 0 & 0 & 0 \\ -\
tfrac{u_{\rm xy}}{E_{\rm x}} & - \tfrac{u_{\rm yz}}{E_{\rm y}} & \tfrac{1}{E_{\rm z}} & 0 & 0 & 0 \\ 0 & 0 & 0 & \tfrac{1}{G_{\rm yz}} & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm zx}} & 0 \\ 0 &
0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm xy}} \\ \end{bmatrix} \begin{bmatrix} \sigma_{\rm xx} \\ \sigma_{\rm yy} \\ \sigma_{\rm zz} \\ \sigma_{\rm yz} \\ \sigma_{\rm zx} \\ \sigma_{\rm xy} \end{bmatrix}
where we have used the plane of isotropy $y-z$ to reduce the number of constants, i.e., $E_y = E_z,~ u_{xy} = u_{xz},~ u_{yx} = u_{zx}$.
The symmetry of the stress and strain tensors implies that
$\cfrac{u_{\rm xy}}{E_{\rm x}} = \cfrac{u_{\rm yx}}{E_{\rm y}} ~,~~ u_{\rm yz} = u_{\rm zy} ~.$
This leaves us with six independent constants $E_{\rm x}, E_{\rm y}, G_{\rm xy}, G_{\rm yz}, u_{\rm xy}, u_{\rm yz}$. However, transverse isotropy gives rise to a further constraint between $G_{\rm
yz}$ and $E_{\rm y}, u_{\rm yz}$ which is
$G_{\rm yz} = \cfrac{E_{\rm y}}{2(1+u_{\rm yz})} ~.$
Therefore, there are five independent elastic material properties two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of $u_{\rm xy}$ and $u_{\rm yx}$ is the major
Poisson's ratio. The other major and minor Poisson's ratios are equal.
Poisson's ratio values for different materials
│ Material │ Plane of symmetry │ $u_{\rm xy}$ │ $u_{\rm yx}$ │ $u_{\rm yz}$ │ $u_{\rm zy}$ │ $u_{\rm zx}$ │ $u_{\rm xz}$ │
│ Nomex honeycomb core │ $x-y$, $x$=ribbon direction │ 0.49 │ 0.69 │ 0.01 │ 2.75 │ 3.88 │ 0.01 │
│ glass fiber-epoxy resin │ $x-y$ │ 0.29 │ 0.32 │ 0.06 │ 0.06 │ 0.32 │ │
Negative Poisson's ratio materials
Some materials known as auxetic materials display a negative Poisson’s ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive
(i.e. it would increase the cross sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds. In order for these bonds to stretch in the longitudinal
direction, the hinges must ‘open’ in the transverse direction, effectively exhibiting a positive strain.^7 This can also be done in a structured way and lead to new aspects in material design as for
Mechanical Metamaterials.
Applications of Poisson's effect
One area in which Poisson's effect has a considerable influence is in pressurized pipe flow. When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the
pipe, resulting in a radial stress within the pipe material. Due to Poisson's effect, this radial stress will cause the pipe to slightly increase in diameter and decrease in length. The decrease in
length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise
prone to failure.^citation needed
Another area of application for Poisson's effect is in the realm of structural geology. Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale,
excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a
direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints
and dormant stresses in the rock.^8
The use of cork as a stopper for wine bottles is due to cork having a Poisson ratio of practically zero, so that, as the cork is inserted into the bottle, the upper part which is not yet inserted
does not expand as the lower part is compressed. The force needed to insert a cork into a bottle arises only from the compression of the cork and the friction between the cork and the bottle. If the
stopper were made of rubber, for example, (with a Poisson ratio of about 1/2), there would be a relatively large additional force required to overcome the expansion of the upper part of the rubber
See also
External links
Conversion formulas
Homogeneous isotropic linear elastic materials have their elastic properties uniquely determined by any two moduli among these, thus given any two, any other of the elastic moduli can be calculated
according to these formulas.
$(K,\,E)$ $(K,\,\lambda)$ $(K,\,G)$ $(K,\, u)$ $(E,\,G)$ $(E,\,u)$ $(\lambda,\,G)$ $(\lambda,\,u)$ $(G,\,u)$ $(G,\,M)$
$K=\,$ $K$ $K$ $K$ $K$ $\tfrac{EG}{3 $\tfrac{E}{3(1-2u)} $\lambda+ \tfrac{2G}{3}$ $\tfrac{\lambda $\tfrac{2G(1+u)} $M - \tfrac{4G}
(3G-E)}$ $ (1+u)}{3u}$ {3(1-2u)}$ {3}$
$E=\,$ $E$ $\tfrac{9K(K-\lambda)} $\tfrac{9KG} $3K(1-2u)\,$ $E$ $E$ $\tfrac{G(3\lambda + 2G)} $\tfrac{\lambda(1+u) $2G(1+u)\,$ $\tfrac{G
{3K-\lambda}$ {3K+G}$ {\lambda + G}$ (1-2u)}{u}$ (3M-4G)}{M-G}$
$\lambda $\tfrac{3K $\lambda$ $K-\tfrac{2G} $\tfrac{3Ku}{1+u} $\tfrac{G $\tfrac{Eu}{(1+u) $\lambda$ $\lambda$ $\tfrac{2 G u} $M - 2G\,$
=\,$ (3K-E)}{9K-E}$ {3}$ $ (E-2G)}{3G-E}$ (1-2u)}$ {1-2u}$
$G=\,$ $\tfrac{3KE} $\tfrac{3(K-\lambda)} $G$ $\tfrac{3K(1-2u)} $G$ $\tfrac{E}{2(1+u)}$ $G$ $\tfrac{\lambda $G$ $G$
{9K-E}$ {2}$ {2(1+u)}$ (1-2u)}{2u}$
$u=\,$ $\tfrac{3K-E} $\tfrac{\lambda}{3K-\ $\tfrac{3K-2G} $u$ $\tfrac{E}{2G} $u$ $\tfrac{\lambda}{2(\ $u$ $u$ $\tfrac{M - 2G}
{6K}$ lambda}$ {2(3K+G)}$ -1$ lambda + G)}$ {2M - 2G}$
$M=\,$ $\tfrac{3K $3K-2\lambda\,$ $K+\tfrac{4G} $\tfrac{3K(1-u)} $\tfrac{G $\tfrac{E(1-u)} $\lambda+2G\,$ $\tfrac{\lambda $\tfrac{2G(1-u)} $M$
(3K+E)}{9K-E}$ {3}$ {1+u}$ (4G-E)}{3G-E}$ {(1+u)(1-2u)}$ (1-u)}{u}$ {1-2u}$ | {"url":"http://www.bioscience.ws/encyclopedia/index.php?title=Poisson's_ratio","timestamp":"2014-04-16T07:32:05Z","content_type":null,"content_length":"85280","record_id":"<urn:uuid:3853f98a-e387-45c5-aa29-71bbc5e4ceea>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can someone guide me through inequality questions? i) 15-2y-y^2<0 ii) 20x-4x^2-25>0
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/512d7f8ae4b098bb5fbc33f0","timestamp":"2014-04-21T08:01:38Z","content_type":null,"content_length":"104281","record_id":"<urn:uuid:6efde9b4-feb9-46f6-9ae0-3cef4598e281>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sage Mathematical Software System - Sage
sage: a = RR(sqrt(2)); a
sage: b = sqrt(RealField(100)(2)); b
sage: (a-b).parent()
Real Field with 53 bits of precision
sage: b.parent()
Real Field with 100 bits of precision
sage: c = RealField(100)(a); c
sage: b-c
There are different types of numbers available. There are three major groups for floating point arithmetic:
• Python: float, complex, decimal
• Sage specific: RDF, CDF, RQDF, CC, RR, RIF, CIF
• included Systems: pari, maxima
Most important are the Sage specific types. They use specific libraries and have more functionality. Important are RR and CC, because their behavior is independent of the underlying system and
architecture using the
library. RDF and CDF use the GSL library and are very fast and compatible with the Sage framework.
The example on the left shows how to construct different types of floating point numbers and interactions between them.
sage: M = IntegerModRing(7)
sage: M(2) + M(8)
sage: M.list()
[0, 1, 2, 3, 4, 5, 6]
sage: A.<a,b,c> = AbelianGroup([2,2,3]); A
Multiplicative Abelian Group isomorphic to C2 x C2 x C3
sage: A.order()
sage: A.list()
[1, c, c^2, b, b*c, b*c^2, a, a*c, a*c^2,
a*b, a*b*c, a*b*c^2]
sage: c^5*b*a^4*c
Sage is built on an object oriented programming language. It uses this feature to describe categories of mathematical objects. A good example are algebraic objects like groups, rings and fields.
On the left side you can see some examples on how to construct and use them. The first one picks two integers out of the ring of integers modulo 7. The list() method lists all elements of that ring.
Similarly, the second example constructs an abelian group and assigns its generators to the letters a, b and c.
sage: X = species.SingletonSpecies()
sage: Y = species.BinaryTreeSpecies()
sage: L = CombinatorialSpecies()
sage: L.define(X+X*Y*Y+Y*L)
sage: L.generating_series().coefficients(10)
[0, 1, 1, 3, 8, 23, 70, 222, 726, 2431]
sage: L.structures([1,2,3]).count()
The demonstration on the left hand side shows how Sage is able to work with combinatorial objects from the theory of
Combinatorial Species
\section{SageTex Examples}
This is a small calculation:
The sum of $1+2+\sqrt{3} = \sage{1+2+sqrt(3)}$.
Here you can see a $sin()$-Function:
\sageplot{plot(sin(x), x, 0, 2*pi)}
It is possible to call Sage commands from inside a LaTeX document. The SageTex package provides special LaTeX commands that translate Sage code into a Python file. Then this file is evaluated by Sage
and the results of each calculation are written back into the LaTeX file. This even works for graphics, source-code and saved objects.
The example on the left shows how you can embed a formula like $$1+2+\sqrt{3}$$ and a plot in a LaTeX document.
This package is part of the Sage distribution in the
sub-directory together with the documentation. You can also obtain it online:
CTAN: SageTex Package | {"url":"http://www.sagemath.org/tour-research.html","timestamp":"2014-04-21T04:40:22Z","content_type":null,"content_length":"21281","record_id":"<urn:uuid:d4c87fea-c711-489d-91cf-589d4336633e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fourth Mississippi State Conference on Differential Equations and Computational Simulations,
Electron. J. Diff. Eqns., Conf. 03, 1999, pp. 119-125.
Determination of the source/sink term in a heat equation
Ping Wang & Kewang Zheng Abstract:
In this work, we consider the problem of determining an unknown parameter in a heat equation with ill-posed nature. Applying Tikhonov regularization, we obtain a stable approximation to the unknown
parameter from over-specified data. We also present numerical computations that verify the accuracy of our approximation.
Published July 10, 2000.
Math Subject Classifications: 35K05, 35R25.
Key Words: Heat equation, inverse problem, regularization.
Show me the PDF file (104K), TEX file, and other files for this article.
Ping Wang
Department of Mathematics, Pennsylvania State University
Schuylkill Haven, PA 17972, USA
e-mail: pxw10@psu.edu
Kewang Zheng
Department of Mathematics
Hebei University of Science and Technology, China
Return to the EJDE web page | {"url":"http://www.emis.de/journals/EJDE/conf-proc/03/w1/abstr.html","timestamp":"2014-04-19T00:03:51Z","content_type":null,"content_length":"1600","record_id":"<urn:uuid:08723566-9f5d-4c2e-b514-31674ca16822>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
RERTR Publications:
Foreign Research Reactor Spent Nuclear Fuel
The textbook development of calculating the thermal decay heat of reactor fuel is based upon integrating empirical emission rates of the beta and gamma radiation from fission products. These results
are, however, generally useful only for fission product decay times of the order of a few days. For longer decay times, this heat load estimate is very conservative.
Other analytical expressions have been developed to fit experimental decay heat data for longer decay times. For purposes of determining the heat load of spent fuel which can have cooling times of
the order of several hundred days of even years, these latter expressions should be used to calculate the heat load of spent reactor fuel. These analytical expressions also agree very well with
ORIGEN decay heat calculations.
Integrated Beta And Gamma Emission Rates
The heat load from decaying fission products in a fuel assembly is proportional to empirical emission rates of beta and gamma radiation. The rates^4 per U-235 fission, and as a function of decay time
These energy rates are roughly equal for 0.4 MeV mean energy beta particles and 0.7 MeV mean energy gamma-rays.
For a fuel assembly irradiated continuously for P), the heat (H) load power per assembly,
This expression for the heat load is the integral^5 of the above energy rates over the irradiation time, assuming 200 MeV per U-235 fission, and for the fuel assembly power in watts. For a low
duty-factor fuel assembly irradiation, the power and irradiation time are replaced by an average power and an elapsed time. With H) load power per assembly is
A convenient estimate for the average power
where G is the mass of U-235 burned in the fuel assembly in grams, and the constant is g^235U burned per Watt-day.
A similar heat load expression to Eq. -1, given by Etherington (Ref. 6) and attributed to Way and Wigner (Ref. 7 and 8), is
with all times in seconds. With all times in days, this heat load expression is
(Note, the Etherington reference to Way and Wigner appears to be incorrect. Reference 7 is Vol. 73 (not Vol. 70)^6 of Phys. Rev.; Ref. 8 may be the intended reference. However neither Ref. 7 or 8
appears to have the formula attributed to Way and Wigner. Reference 8 however, lists the same
Fuel assembly decay heat loads calculated with these expressions are expected to be conservative, and within a factor of two or less of measured heat loads. This same conservative heat load estimate
also has been found to be true for heat load calculations made with the ORIGEN code^9. The thermal heat load of a fuel assembly is independent of the fuel assembly type.
The constants used in the above equations are based upon empirical data and therefore, are not necessarily exact; it is not uncommon to find several percent variation in a recommended value. The
constants considered here, and their range, are:
1. the beta plus gamma fission product energy rate per fission;
2.7 - 3.2 ·10^-6 MeV/s-f,
1. the total energy release per fission; 190 - 200 MeV/f, and
2. the mass of ^235U burned per megawatt-day; 1.2 - 1.3 g^235U/MWd.
Depending upon the specific values of the constants that are chosen, the calculated heat load can vary by several percent. In any case, the thermal decay heat is expected to be over predicted.
Decay Heat Curves
An analytical expression given by El-Wakil (Ref. 10), which correlate with the decay heat curves of Ref. 11, estimate heat loads about one-half the heat loads calculated above. This heat load
expression is
where all symbols, etc. have the same meaning as above and the times are in days.
The ratio of Eq. -2 to Eq. -1 is
For decay times (
Experimental Decay Heat Data
Another analytical expression given by Untermyer and Weills (Ref. 12), has been used to fit experimental decay heat data. This heat load expression is
where the irradiation (
A plot of the ratio (
These data clearly show the relative decay heat estimated by the decay heat expressions for a typical irradiation time. The ORIGEN ratio is in good agreement with both Eqs. -2 and -3. | {"url":"http://www.rertr.anl.gov/FRRSNF/TM26REV1/TDHEAT.html","timestamp":"2014-04-16T07:59:57Z","content_type":null,"content_length":"33210","record_id":"<urn:uuid:9182b064-643d-444b-a4b2-e76fe615f2cc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 221b:
Graduate Quantum Mechanics II
Reading References
This has both source material for lectures and pointers to some more in depth discussion than needed for this class.
Books for the class include:
• Merzbacher, Quantum Mechanics, 3rd edition
• recommended: Sakurai, Advanced Quantum Mechanics
• recommended: Ballentine, Quantum Mechanics
• recommended: Baym, Lectures on Quantum Mechanics (might find here or call Westview Press (1-800-386-5656)). There is a 20% discount if you order from the press, so I recommend that route!
• recommended: Mandl and Shaw, Quantum Field Theory
• also see:
□ Sakurai, Modern Quantum Mechanics
□ Hatfield, Quantum Field Theory of Point Particles and Strings
□ Fradkin, Field Theories of Condensed Matter Systems
□ Fetter and Walecka, Quantum Theory of Many Particle Systems
□ Wilczek, Fractional Statistics and Anyon Superconductivity
□ Jackson, Classical Electrodynamics
□ 221B homepage from 2002
□ Itzykson and Zuber, Quantum Field Theory
□ Online Lecture notes, for Klein Gordon and Dirac equations, by D. Gingrich
□ Ramond, Field Theory, A modern Primer
□ A. Zee, Quantum Field Theory in a Nutshell
In order covered in lecture (topic in parentheses)
• Identical Particles
□ Merzbacher 10.6 (beginning, harmonic oscillator review)
□ Merzbacher 21.1-21.3 (algebra of operators)
□ Ballentine Chap 17 (symmetry of wavefunction)
(Aside: spin and statistics: hand-wavy proof and The rigorous book)
Reif 9.4-9.7 (statistics)
Normalization of fermionic wavefunction in terms of one particle states.
Merzbacher 18.8, Sakurai Modern QM 6.3,6.4 (He atom)
Baym chap 18, Ballentine pp.477-478 (scattering cross sections, rotational lines)
Sakurai Modern QM 6.5 (Young Tableaux)
Young Diagrams, Particle Data Group (at LBNL)
Merzbacher 21.4-6 (2nd quantization)
Baym eqn. 19-31 ff. (momentum basis), 19-56 ff. KE operator, correlation functions (1 and 2 particle)
Hatfield, eqn 2.56 ff (locality discussion--find the bug here if you want!)
□ Applications
☆ Baym, 19-94 ff. (Pert. of many body ground state/corrln function)
☆ Merzbacher 22.1,22.2 (Angular Momentum)
☆ Merzbacher 2nd edition! 21.3 (spin waves)
☆ Merzbacher 22.5 (Quantum Stat Mech)
☆ Baym eqn 20-18 ff. (Hartree and Thomas Fermi approximations)
☆ Merzbacher 22.4, Ballentine 18.2 (Hartree Fock)
☆ Ballentine 18.3 (Criticisms of Hartree Fock)
☆ Chapter 3 of this thesis for more on Bose-Einstein Condensates
☆ Fetter and Walecka, sec 41 (Independent pair model, more detail than we'll do, they do many examples)
○ Notes on the independent pair approximation.
☆ Fradkin 2.1-2.4, 3.1 (Hubbard Model, strong and weak coupling)
○ How to get from eqn. 2.3.12 to 2.3.13 in Fradkin's book
☆ Baym Ch. 8 (pairing interaction in superconductivity), see also (instead?) these lectures
☆ Ballentine 18.5 (BCS vacuum and Bogoliubov transformations)
☆ Merzbacher,2nd ed p. 546-548 (difference between normal and BCS ground state energies in detail)
☆ See also for more discussion, J. Moore's lecture notes:
☆ General comments, props of states, and a summary of points about BCS .
☆ (Aside, for further examples: Birrell and Davies, Quantum fields in curved space, secs 3.2, 3.3, Bogoliubov transformations in general relativity)
☆ Wilczek, p4 and p 17-20 (anyons)
□ Photons and the electromagnetic field
☆ Jackson 11.9 (Field equations and gauge potentials)
☆ Also see Murayama's notes from 221b in the past for much of this.
☆ Mandl and Shaw 1.1 and 1.2 (Quantizing EM field)
☆ Sakurai, 2.1-2.2 (Quantizing EM field)
○ For more information on gauge fixing and A[0] see for example, Field Theory: A modern primer, by Ramond, section 7.1.
☆ Sakurai, 2.3 (Classical vs. Quantum properties)
☆ Sakurai, 2.4 (Interactions with matter)
☆ Mandl and Shaw pages 13,14 and 19 (interactions with matter)
☆ Mandl and Shaw 1.4.4 or Sakurai p 51 (Thomson scattering)
☆ Merzbacher 23-4 (interaction with a current)
☆ Murayama's notes (same as above), pages 5-8.
☆ Itzykson & Zuber p. 138-141 or Ballentine p. 535-539 (Vacuum energy and Casimir Effect)
□ Relativistic Quantum Mechanics and Quantum Field Theory
☆ Ramond 1.2, 1.3 (Lorentz and Poincare group)
☆ Baym p. 499-504 (Klein-Gordon Field)
○ Also see for the KG equation these Notes by D. Gingrich
☆ Sakurai 3-1 (probability and KG field)
☆ Mandl and Shaw 3-2 (Complex scalar field quantization)
☆ Mandl and Shaw p. 73 (spin-statistics comments)
☆ Sakurai 3-2 (Derivation of Dirac equation)
☆ Merzbacher 24-2 (Dirac equation, our gamma matrix conventions!), also see Mandl and Shaw 4-2 and these notes on Dirac equation by D. Gingrich
☆ Mandl and Shaw pp.63-67, 334-339 and/or Sakurai 83-84 and 91-94 (Basis functions for Dirac equation, Lorentz transformations, helicity)
☆ Mandl and Shaw 4-3 (Quantizing Dirac equation)
☆ See also the books by Halzen and Martin and the book by Griffiths on particle theory for other ways of introducing this material.
☆ Mandl and Shaw 4-5 (Gauge invariance)
☆ Mandl and Shaw 8.1, 8.2 and 11.5 (rates, cross sections and spin sums) (Sakurai also does this but because of his funny metric, things look a bit different along the way)
☆ Mandl and Shaw 138-139 (rates in terms of matrix elements)
☆ Sakurai, beginning of 4-2 (brief description of S-matrix theory)
☆ Mandl and Shaw 6-2,6-3 (also brief description of S-matrix theory)
☆ Zee, Chapter 3 (renormalization), Ramond, Chapter 4 (for a scalar field theory): both a lot of useful detail, Zee is more about the ideas, Ramond more about how to do it. | {"url":"http://astro.berkeley.edu/~jcohn/221b/reading.html","timestamp":"2014-04-20T13:20:40Z","content_type":null,"content_length":"11028","record_id":"<urn:uuid:ca5e4042-d213-4fb2-b458-3be435cd726d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
[51] Mountain climbing, ladder moving, and the ring width of a polygon (with J. E. Goodman and Chee K. Yap), Amer. Math. Monthly 96 (1989), 494-510.
[52] On the lower envelope of bivariate functions and its applications (with H. Edelsbrunner, J. T. Schwartz and M. Sharir), Proc. 28th FOCS Symposium 1987, 27-38.
[53] Some universal graphs (with P. Komjáth and A. H. Mekler), Israel Journal of Mathematics 64 (1988), 158-168.
[54] On arrangements of Jordan arcs with three intersections per pair (with H. Edelsbrunner, L. Guibas, J. Hershberger, R. Pollack, R. Seidel, M. Sharir and J. Snoeyink), Proc. 4th ACM Symposium on
Computational Geometry, 1988, 258-265.
See also Discrete and Comp. Geom. 4 (1989), 523-539.
[55] Arrangements of curves in the plane - Topology, Combinatorics and Algorithms (with H. Edelsbrunner, L. Guibas, R. Pollack, R. Seidel and M. Sharir). '88 ICALP Proc. 214-229.
Also in: Theoretical Computer Science 92 (1992), 319-336.
[56] A problem of Leo Moser about repeated distances on the sphere (with P. Erdős and D. Hickerson), Amer. Math. Monthly 96 (1989), 569-575.
[57] Small sets supporting Fáry embeddings of planar graphs (with H. de Fraysseix and R. Pollack), 20th STOC Symposium 1988, 426- 433.
Almost identical with: How to draw a planar graph on a grid, Combinatorica 10 (1990), 41-51.
[58] The upper envelope of piecewise linear functions and the boundary of a region enclosed by convex plates: Combinatorial analysis (with M. Sharir), Discrete and Computational Geom. 4. (1989),
[59] Delicate symmetry, Computers and Mathematics with Applications 17 (1989), 117-124.
[60] Isomorphic subgraphs in a graph (with P. Erdős and L. Pyber), in: Combinatorics, Coll. Math. Soc. J. Bolyai 52 (1988), 553-556.
[61] On the average volume of subsets in Euclidean d-space (with N. Sauer), European J. Combinatorics 12 (1991), 417-421.
[62] Problem on convex polygons, James Cook Math. Notes 5 (1988), 5116-5117.
[63] Cell decomposition of polygons by bending (with J. E. Goodman), Israel J. Math. 64 (1988), 129-138.
[64] On vertical visibility in arrangements of segments and the queue size in the Bentley-Ottman line sweeping algorithm (with M. Sharir), SIAM J. Computing 20 (1991), 460-470.
[65] Embedding a planar triangulation with vertices at specified points (with P. Gritzmann, B. Mohar, and R. Pollack), Amer. Math. Monthly 98 (1991), 165-166.
[66] On the game of Misery, Studia Sci. Math. Hung. 27 (1992), 353-358.
[67] On the maximal number of certain subgraphs in Kr-free graphs (with E. Győri and M. Simonovits), Graphs and Combinatorics 7 (1991), 31-37.
[68] Large discs in convex unions (with C. A. Rogers), American Math. Monthly 95 (1988), 765-767.
[69] Variations on the theme of repeated distances (with P. Erdős), Combinatorica 10 (1990), 261-269.
[70] On the structure of a system of stars in graphs (Russian, with Armenian summary), Mat. Voprosy Kibernet. Vychisl. Tekhn. 15 (1988), 165-173 (MR 90j:05079).
[71] An upper bound on the number of planar k-sets (with W. Steiger and E. Szemerédi), Proc. 30th FOCS Symposium, l989, 72-81.
Also in: Discrete Comput. Geom. 7 (1992), 109-123.
[72] Towards a new geometry, Magyar Tudomány 1990/8, 895-903 (in Hungarian).
[73] Weaving patterns of lines and line segments in space (with R. Pollack and E. Welzl), Proc. SIGAL Internat. Symposium on Algorithms, Tokyo Lecture Notes in Comp. Sci. 450 (1990), 439-446,
Springer. Also in: Algorithmica 9 (1993), 561-571.
[74] Graph distance and Euclidean distance on the grid (with R. Pollack and J. Spencer), Topics in Graph Theory and Combinatorics, The Ringel-Festzeitschrift (R. Bodendiek, R. Henn, eds),
Physica-Verlag, Heidelberg, 1990, 553-559.
[75] Some new bounds for epsilon-nets (with G. Woeginger), Proc. 6th ACM Symposium on Comput. Geom., 1990, 10-15.
See also: Almost tight bounds on epsilon-nets (with J. Komlós and G. Woeginger), Discr. Comput. Geom. 7 (1992), 163-173. | {"url":"http://www.renyi.hu/~pach/publications/51_75.html","timestamp":"2014-04-19T02:21:08Z","content_type":null,"content_length":"7170","record_id":"<urn:uuid:f2d1d554-e5f2-48be-98c7-384cc0a17152>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Doppler Measurement of Gas Dynamics -- Analysis of Beam Integrated Velocity Field Spectra from Planets and Astrophysical Sources
Previous abstract Next abstract
Session 70 - Planetary Systems Near & Far.
Display session, Friday, January 09
Exhibit Hall,
[70.04] Doppler Measurement of Gas Dynamics -- Analysis of Beam Integrated Velocity Field Spectra from Planets and Astrophysical Sources
T. Hewagama, J. Goldstein (CCSSE), D. Buhl, F. Espenak, T. Kostiuk (GSFC), K. Fast, T. Livengood (GSFC and UMD)
Infrared Heterodyne Spectroscopy observations of ro-vibrational emission line spectra of CO_2 in Mars and Venus and of C_2H_6 in Titan have been used to study molecular abundances and atmospheric
dynamics (to 2 ms^-1). These investigations were limited by the lack of a beam-integrated radiative transfer algorithm. Typically, a mean viewing angle was assumed in the analysis whereas the
spectrometer FOV contains a range in velocities and viewing angles weighted by the spectrometer response. A beam-integrated spectrum is therefore quantitatively different from individual spectra at
arbitrary viewing angles. For example, the 1'' FOV of the NASA/IRTF is comparable to the size of Titan's disc and the observed beam includes dynamical contributions from all regions of the disc
intercepted by the beam. These differences affect molecular abundance and wind field retrievals.
We have developed analysis software which models beam-integrated observations. Our model is characterized by an effective beam response, molecular abundance, and a wind field. The model is
numerically implemented by binning the beam into a grid of elements, (2) for each element, calculating the mean viewing angle and the corresponding emergent spectrum using a radiative transfer
algorithm, and (3) constructing a beam-integrated spectrum by convolving the beam element spectra with the beam model. The Doppler shift due to the wind field is modeled by applying an appropriate
frequency shift in each element. Using experimental errors as weights, a \chi^2 statistic characterizes the differences between the modeled and observed spectra, and a non-linear search algorithm is
used to investigate the parameter space. Results from the application of the software to dynamical studies of the atmospheres in Jupiter and Titan will be discussed. These analysis techniques are
general and have potential application to other astrophysical sources.
The author(s) of this abstract have provided an email address for comments about the abstract: tilak@cuzco.gsfc.nasa.gov | {"url":"http://aas.org/archives/BAAS/v29n5/aas191/abs/S070004.html","timestamp":"2014-04-21T07:27:47Z","content_type":null,"content_length":"3616","record_id":"<urn:uuid:881b7ffc-a21a-414b-bce0-23b9d848bc72>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Belmont, MA Geometry Tutor
Find a Belmont, MA Geometry Tutor
...I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these
subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams.
36 Subjects: including geometry, chemistry, English, reading
...I enjoy helping others understand the logic and rules that govern our writing, interpretation, and speech. I have almost six months' experience tutoring in English half-time, including
grammar. I have a masters degree in math, but have not lost sight of the difficulties encountered in elementary math.
29 Subjects: including geometry, reading, English, literature
I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is
a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including geometry, calculus, statistics, algebra 1
...Another summer, I spent 9 weeks in Novosibirsk, Russia where I spoke Russian intensely. I took and passed the Praxis 1 and Praxis 2 (Elementary Education) during the teacher certification
process in the State of New Hampshire. I also received a Recognition of Excellence award for scoring in the top 15% of Praxis 1 test takers.
42 Subjects: including geometry, English, reading, German
...The lessons we teach ourselves are the ones we remember best. Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the
student can solve in succession. Ultimately this will allow the student to start from a place of confiden...
12 Subjects: including geometry, chemistry, physics, calculus
Related Belmont, MA Tutors
Belmont, MA Accounting Tutors
Belmont, MA ACT Tutors
Belmont, MA Algebra Tutors
Belmont, MA Algebra 2 Tutors
Belmont, MA Calculus Tutors
Belmont, MA Geometry Tutors
Belmont, MA Math Tutors
Belmont, MA Prealgebra Tutors
Belmont, MA Precalculus Tutors
Belmont, MA SAT Tutors
Belmont, MA SAT Math Tutors
Belmont, MA Science Tutors
Belmont, MA Statistics Tutors
Belmont, MA Trigonometry Tutors
Nearby Cities With geometry Tutor
Allston geometry Tutors
Arlington Heights, MA geometry Tutors
Arlington, MA geometry Tutors
Brighton, MA geometry Tutors
Lexington, MA geometry Tutors
Medford, MA geometry Tutors
Melrose, MA geometry Tutors
Newton Center geometry Tutors
Newton Centre, MA geometry Tutors
Newtonville, MA geometry Tutors
Waltham, MA geometry Tutors
Watertown, MA geometry Tutors
Waverley geometry Tutors
West Medford geometry Tutors
Winchester, MA geometry Tutors | {"url":"http://www.purplemath.com/belmont_ma_geometry_tutors.php","timestamp":"2014-04-16T07:39:30Z","content_type":null,"content_length":"24221","record_id":"<urn:uuid:2c36989f-2bb4-457c-b790-fd13a6b6067d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
FQXi Community
Cheating the Causal Game
A new quantum framework that blurs cause-and-effect at a fundamental level could improve information processing and lead to a theory of quantum gravity.
October 16, 2012
Caslav Brukner
Fabio Costa, Ognyan Oreshkov and Caslav Brukner
University of Vienna, Austria
Floppy the dog is a hero—according to my young daughter’s early reading book, that is. He smelt smoke, barked to alert the family and saved the house from burning down. Prior to this, the story goes,
Dad had accidentally left the stove on, then put a wooden tray on it. Stories like this make sense, to all age groups, because we can piece together the correct order in which the events must have
occurred, even when they are presented out of sequence.
From an early age we take the cause and effect of events happening in time for granted; it’s how we think. Without cause and effect, where would science be? We could not attempt to predict the
outcome of experiments to test ideas about the world, or try to formulate such theories of what will happen. Even the math that describes the atomic world—quantum theory—assumes that events take
place in time in an ordered and connected fashion. Which makes it all the more strange that some physicists are trying to ditch this neat time-ordering.
This is by no means an obvious strategy to employ, notes
Caslav Brukner
, at the University of Vienna, Austria, one of the physicists behind the idea. "It’s simply new physics," he says. "We are asking whether space, time and causal order are truly fundamental
ingredients of nature." The team hopes that by taking an approach that doesn’t rely on causal structure, it might provide a clue about where causal order comes from. Is it a necessary property of
nature or can it be derived from more primitive concepts?
Uncertainty is inherent in quantum theory. It’s well established that the physical aspects of quantum experiments, such as a particle’s position or momentum, are not well defined before they are
measured. But postulating that the ordering of events is also somewhat fuzzy takes this conception of uncertainty to a bold new level. Now Brukner and his colleagues, Ognyan Oreshkov and Fabio Costa,
also at the University of Vienna, have calculated that time-ordering can become muddled in some situations. Even weirder, this is helpful rather than harmful, and if harnessed could potentially
improve quantum information processing protocols, and help researchers trying to devise a theory of quantum gravity.
Brukner illustrates his approach with what he calls a
causal game
. Imagine that you and I are each given a different number, either 0 or 1. The aim of the game is for each of us to guess the other’s number correctly. Now if we define the causal relations between
us—say, I am before you, and I can cheat and send my number to you before you guess—then it means that you can guess perfectly, whereas I can guess correctly only half of the time. This means that
together we have a 75 per cent chance of winning the game if we have well-defined causal relations between us.
But Brukner’s new findings make things a bit more complicated. "We have shown that there are certain quantum resources that would allow us to go beyond this 75 per cent if the causal relations
between us are not well-defined," says Brukner. In other words, if you don’t define your time-ordering, you can win the game more often. Their work is published in
Nature Communications
A new framework for quantum mechanics which does not assume a pre-existing
global time. It demonstrates the possibility for two agents to perform a
communication task in which it is impossible to tell with certainty who
influences whom.
Credit: University of Vienna
That sounds great in theory, but in practice, would a situation in which the causal relationship between us is muddied ever actually arise? A key idea to achieve such a situation involves the fact
that quantum mechanics allows objects to exist in
, so that they can be in two or more contradictory states simultaneously; for instance, an electron could be in two different places at the same time. Brukner imagines that non-causal processes are
most likely to be found where you have some sort of superposition of spacetime. "If you have a well-defined spacetime then you know that everything is going to be well-defined, with well-defined
causal relations, so it can’t be that," he says. He sees realising these non-causal processes in the lab as one of the research field’s key challenges—it won’t be easy mixing up spacetime itself in
an experiment. "It’s one of the weakest points of the whole thing, we are not sure how you can perform these quantum resources," says Brukner. The problem is that, when you start explaining how to do
an experiment using well-defined language (appropriate for the well-defined spacetime in which we live), then you have well-defined causal relations between events. "We are working on this now, it’s
a new land," he says.
Brukner may not need to worry so much, however. Other independent researchers are more optimistic about creating the required conditions to replicate the team’s theoretical quantum game in the lab,
in a fairly down-to-earth scenario. All that is needed is to create a situation in which two players—you and I—can send information to each other through a wire and, crucially, the causal relation of
who signals to whom must be ill-defined. Physicist
Matt Leifer
of the Perimeter Institute (PI), in Waterloo, Ontario, who was not involved in developing the paper, says that we can imagine such a wire that is controlled by a quantum system that is in a
superposition of two states. That means that in this set-up, the signal can either go from me to you or from you to me, but we don’t know which will occur.
It’s simply new physics.
We are asking whether
space, time and causal
order are truly fundamental.
- Caslav Brukner
"With suitable interactions between the quantum system and the wiring then we should be able to generate some of the correlations described by Caslav," says Leifer. Although challenging, Leifer says
it may be possible to set up some sort of an experiment along these lines in quantum computing. "I expect it to be difficult but not beyond the realms of possibility," he says.
Brukner’s team is not the first to investigate these issues. Perhaps fittingly for a theory of indefinite causality, it’s difficult to pin down exactly where and when these notions began. One of the
first researchers to venture forth into what Brukner described as the "new land"—devising theories using indefinite causal relations—was
Lucien Hardy
, who is also based at PI (see
Then, one afternoon over coffee at the University of Pavia, Italy, a group of quantum theorists were discussing Hardy’s ideas and wondered whether casual relations could be superposed. Just as
Brukner now notes that such fuzziness can increase the chances of a win in a quantum game, they realised that in a similar manner, this uncertainty could have useful applications in quantum
Giulio Chiribella
, who was in that first discussion with
Benoit Valiron
FQXI member Giacomo Mauro D’Ariano
Paolo Perinotti
says: "From that afternoon discussion, we suggested a way to superpose the ordering of operations in a computation, and we argued that this effect could lead to more efficient protocols for
information processing."
Quantum computers, in theory, exploit superposition to perform powerful operations on data—but the idea of indefinite causal structure brings the phenomenon of superposition into a new realm, the
realm of the ordering of computational operations. "Caslav’s causal game has been a key result supporting this intuition," says Chiribella, from the Center for Quantum Information, Tsinghua
University, Beijing. "This is because it has provided the first concrete example of an advantage coming from indefinite causal ordering."
He describes Brukner as "one of the leading researchers in the new school of quantum foundations," and always looks forward to meeting up with him over conference dinners to chat about quantum
physics and the new directions the community is exploring. "No matter which subject we pick, these chats are always fun and inspirational," says Chiribella. "I very much like his pragmatic attitude,
which combines foundational ideas with applications in quantum information, always keeping an eye on the possible experimental implementations."
Chribella has made some important contributions to the new research field: in testing the properties of two black boxes. The game is to work out the contents of the box based on how it changes
numbers that are input into the box. For instance, if you input 1, 2 and 3 into the box and get out 3, 5 and 7, you could calculate that the box multiplies by 2 and adds one. Chiribella discovered
that the process of working out the properties of the boxes is more efficient if, instead of examining one box first then the second, you take advantage of a superposition of the two possible
"These works clearly demonstrate that harnessing the new quantum effects that arise in the absence of a definite causal structure can offer advantages and help us to save computational resources and
energy costs," says Chiribella. "I expect many new examples of this kind to appear in the next few years."
Like many theoretial physicists, Brukner has his eyes on the bigger prize. The team hopes that their new description of cause-and-effect in the quantum world will help researchers developing a theory
of quantum gravity, a grand project for theorists world-over. A successful theory of quantum gravity would merge quantum theory with Einstein’s theory of general relativity to describe every
interaction in the universe that we know about, from the subatomic scale to the cosmological. One of the biggest obstacles has been that general relativity and quantum mechanics treat time very
differently. In the former theory, time is another dimension alongside space and can bend and stretch, speed up and slow down, in different circumstances. Quantum theories, however, usually assume
that time is set apart from space and ticks at a set rate. Theories of indefinite causality tackle this mismatch head-on, by questioning what time is at a fundamental level.
Guilio Chiribella
Teaching at a student summer camp in Tsinghua University, Beijing, China.
Hardy is very excited by Brukner and Chiribella’s work, and hopes that over the next five years, "we will see a proto-version of quantum gravity based on this way of thinking." Hardy has already
developed a mathematical framework for physical theories—including quantum theory—that allows for indefinite causality by avoiding the notion of systems evolving in time. "The causaloid framework I
developed might accommodate a theory of quantum gravity," he says.
All the researchers hope their efforts will help them to formulate a more general quantum theory, in which our familiar causal structure—of dogs barking and families waking—is not assumed, but
emerges in the right conditions. "I find this new direction promising because it challenges one of the key paradigms of quantum and classical mechanics: the paradigm of a state evolving in time,"
says Chiribella. "We are now pushing quantum theory to the extreme limits of what can be conceived by our imagination."
Please read the important Introduction that governs your participation in this community. Inappropriate language will not be tolerated and posts containing such language will be deleted. Otherwise,
this is a free speech Forum and all are welcome!
• Please enter the text of your post, then click the "Submit New Post" button below. You may also optionally add file attachments below before submitting your edits.
• HTML tags are not permitted in posts, and will automatically be stripped out. Links to other web sites are permitted. For instructions on how to add links, please read the link help page.
• You may use superscript (10^100) and subscript (A[2]) using [sup]...[/sup] and [sub]...[/sub] tags.
• You may use bold (important) and italics (emphasize) using [b]...[/b] and [i]...[/i] tags.
• You may also include LateX equations into your post.
LaTeX equations may be displayed in FQXi Forum posts by including them within [equation]...[/equation] tags. You may type your equation directly into your post, or use the LaTeX Equation Preview
feature below to see how your equation will render (this is recommended).
For more help on LaTeX, please see the
LaTeX Project Home Page
preview equation clear equation insert equation into post at cursor
Your name: (optional) Important: In order to combat spam, please select the letter in this menu between 'R' and 'T':
[hide] You may optionally attach up to two documents to your post. To add an attachment, use the following feature to browse your computer and select the file to attach. The maximum file size for
attachments is 1MB.
Once you're done adding file attachments, click the "Submit New Post" button to add your post.
VLADIMIR F. TAMARI wrote on March 24, 2013
oops sorry - here is the correct link to Eric Reiter's unquantum.net
VLADIMIR F. TAMARI wrote on March 24, 2013
My reaction to this new research about reordering causality went something like this: "Oh No! Here is another group of talented imaginative young physicists who have found another aspect of
probability to play with - are we in for another half century of multiverse-type thinking, this time around featuring micro-scrambled-time universes?" Doubtless clever mathematics can twist 'reality'
around and make the outcome appear to follow experimental results. This may even lead to interesting...
JOHN MERRYMAN wrote on November 20, 2012
Thanks Zeeya. Should be interesting.
read all article comments | {"url":"http://www.fqxi.org/community/articles/display/173","timestamp":"2014-04-21T13:14:07Z","content_type":null,"content_length":"40156","record_id":"<urn:uuid:275bc5b9-0e03-4155-a0fd-c9f9e8f8d37a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monte Carlo Methods for Multiple Target Tracking and Parameter Estimation
Daniel Duckworth
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2012-68
May 9, 2012
Multiple Target Tracking (MTT) is the problem of identifying and estimating the state of an unknown, time-varying number of targets. A successful algorithm will identify how many unique targets have
existed, at what times they were active, and what sequence of states they followed when active. This work presents two novel algorithms for MTT, Particle Markov Chain Monte Carlo Data Association
(PMCMCDA) and Particle Filter Data Association (PFDA). These algorithms consider MTT in a Bayesian Framework and seek to approximate the posterior distribution over track states, data associations,
and model parameters by combining Markov Chain Monte Carlo and Particle Filtering to perform approximate inference. Both algorithms are evaluated experimentally on two pedagogical examples, and
proofs of convergence in the limit of infinite samples are given.
Advisor: Stuart J. Russell
BibTeX citation:
Author = {Duckworth, Daniel},
Title = {Monte Carlo Methods for Multiple Target Tracking and Parameter Estimation},
School = {EECS Department, University of California, Berkeley},
Year = {2012},
Month = {May},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-68.html},
Number = {UCB/EECS-2012-68},
Abstract = {Multiple Target Tracking (MTT) is the problem of identifying and estimating the state of an unknown, time-varying number of targets. A successful algorithm will identify how many unique targets have existed, at what times they were active, and what sequence of states they followed when active.
This work presents two novel algorithms for MTT, Particle Markov Chain Monte Carlo Data Association (PMCMCDA) and Particle Filter Data Association (PFDA). These algorithms consider MTT in a Bayesian Framework and seek to approximate the posterior distribution over track states, data associations, and model parameters by combining Markov Chain Monte Carlo and Particle Filtering to perform approximate inference. Both algorithms are evaluated experimentally on two pedagogical examples, and proofs of convergence in the limit of infinite samples are given.}
EndNote citation:
%0 Thesis
%A Duckworth, Daniel
%T Monte Carlo Methods for Multiple Target Tracking and Parameter Estimation
%I EECS Department, University of California, Berkeley
%D 2012
%8 May 9
%@ UCB/EECS-2012-68
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-68.html
%F Duckworth:EECS-2012-68 | {"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-68.html","timestamp":"2014-04-19T02:27:21Z","content_type":null,"content_length":"6429","record_id":"<urn:uuid:a718e89c-7bfb-41c7-ae06-a58d422b1547>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by Yuli
Total # Posts: 20
how do you find time when all you're given are two different speeds the height and the angle?
A toy rocket is traveling to the right at 15.0 m/s when it undergoes a constant acceleration 3.00 m/s^2 to the left how long does it take before the rocket stops moving to the right? what distance
does the rocket travel before it stops moving to the right? what distance does i...
A gaseous mixture in a 25.00 L container is made of 16.0 g N2 and 14.0 g Ar and has a total pressure of 1.00 atm. 1.Calculate the partial pressure of the N2 in the mixture. the ans is 0.620 atm
2.Calculate the temperature of the gas mixture. can someone explain to me how to do...
cold water mass of calimeter+ water: 68.66g mass of calimeter: 19.19g mass of water: 49.47g final temperature of water: 39.1C initial temperature of water: 25.5C delta(T) of water: 13.6C hot water
mass of calimeter+ water: 79.55g mass of calimeter: 20.61g mass of water: 58.94g...
A 62.7-kg person jumps from rest off a 2.98-m-high tower straight down into the water. Neglect air resistance. She comes to rest 1.09 m under the surface of the water. Determine the magnitude of the
average force that the water exerts on the diver. This force is nonconservativ...
A 65.0-kg skier coasts up a snow-covered hill that makes an angle of 27.4° with the horizontal. The initial speed of the skier is 6.37 m/s. After coasting 2.07 m up the slope, the skier has a speed
of 4.06 m/s. Calculate the work done by the kinetic frictional force that a...
A cable lifts a 1220-kg elevator at a constant velocity for a distance of 39.7 m. What is the work done by the tension in the cable? can someone help me with this the answer is supposed to be in
joules i tried doing it by 1220(9.81)(39.7) = 475137J but that is not the answer c...
A total of 2.00mol of a compound is allowed to react with water in a foam coffee cup and the reaction produces 185g of solution. The reaction caused the temperature of the solution to rise from 21.0
to 24.7C . What is the enthalpy of this reaction? Assume that no heat is lost ...
A piston has an external pressure of 9.00 atm. How much work has been done if the cylinder goes from a volume of 0.130 liters to 0.500 liters? can someone explain to me how am i supposed to do this
please? Thanks :D
In the following experiment, a coffee-cup calorimeter containing 100mL of H2 is used. The initial temperature of the calorimeter is 23.0 C . If 3.40g CaCl2 of is added to the calorimeter, what will
be the final temperature of the solution in the calorimeter? The heat of soluti...
how do i change 2866.0 J to kJ/mol? can someone please help me?
I did all my questions but there is this last one i dont get. Calculate the number of moles of zinc that must have reacted( assume that Zn is limiting and the yield of H2 gas is 100%) how do you do
this? could you help me with this last one please?
how would you calculate the temperature of H2 in kelvin? and the moles that H2 produced? do you know a formula for that? could you please explain it to me thanks for all your help so far i've
understood everything thanks to you.
A student generates H2(g) over water using the reaction between zinc and hydrochloric acid Zn(s)+ 2HCl(aq)---> ZnCl2(aq)+ H2(g) Data: Mass of vial and zinc 15.5082 mass of vial 15.3972 mass of zinc
0.111 final burette reading 45.30mL barometric pressure: Patm 100.6kPa tempe...
A student generates H2(g) over water using the reaction between zinc and hydrochloric acid Zn(s)+ 2HCl(aq)---> ZnCl2(aq)+ H2(g) Data: Mass of vial and zinc 15.5082 mass of vial 15.3972 mass of zinc
0.111 final burette reading 45.30mL barometric pressure: Patm 100.6kPa tempe...
i know and i found a formula to do so. But i was wondering if you could tell me the formula to find the pressure exerted by H2 gas in kPa and also the one to find the kelvin temperature.
A student generates H2(g) over water using the reaction between zinc and hydrochloric acid Zn(s)+ 2HCl(aq)---> ZnCl2(aq)+ H2(g) Data: Mass of vial and zinc 15.5082 mass of vial 15.3972 mass of zinc
0.111 final burette reading 45.30mL barometric pressure: Patm 100.6kPa temp...
A student generates H2(g) over water using the reaction between zinc and hydrochloric acid Zn(s)+ 2HCl(aq)---> ZnCl2(aq)+ H2(g) Data: Mass of vial and zinc 15.5082 mass of vial 15.3972 mass of zinc
0.111 final burette reading 45.30mL barometric pressure: Patm 100.6kPa tempe...
given tanθ= -2√10/3 and π/2<θ< π find: a.sin2θ b.tan θ/2 could some1 help me with these 2 they're the only ones i couldn't get
given sinθ=-5/13 and π<θ<3π/2 find sin2θ cos( θ-4π/3) sin(θ/2) can some1 please help me with this? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Yuli","timestamp":"2014-04-18T23:39:12Z","content_type":null,"content_length":"11828","record_id":"<urn:uuid:8ad440bb-0e29-477e-aadc-f404582bd291>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Apache Junction Algebra Tutor
...I will be conducting individual and group classes starting May 2014 at my home located in Mesa, AZ in E. Guadalupe and S. Hawes.
13 Subjects: including algebra 2, algebra 1, reading, English
...I am now an anatomy professor at my Alma mater. My expertise is condensing the information and helping my students understand what it all means and why it is important. I took physiology in
undergraduate school and then again in medical school.
14 Subjects: including algebra 1, algebra 2, chemistry, physics
...Any age, any level, children are always a joy to tutor. I have participated in structured programs like America Reads as well as being an Instructional Assistant for Mesa Public Schools. It is
no secret that teaching these children is a lifelong passion of mine.
33 Subjects: including algebra 2, algebra 1, English, reading
...I have taught and tutored everything from basic mathematics up through Calculus, Differential Equations and Mathematical Structures. Just a little about my work and research. While my PhD is in
Mathematics, my research area has been Mathematics Education.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am also able to adapt different ways of presenting material and making it engaging to the you, as well as adapt to each unique student. I have an unparalleled amount of patience and also make
it fun to learn math... even for those who claim to "hate" math! The process of learning and figuring it out for yourself (with my guidance) is more rewarding than having it explained to you.
17 Subjects: including algebra 1, algebra 2, reading, calculus | {"url":"http://www.purplemath.com/apache_junction_algebra_tutors.php","timestamp":"2014-04-20T04:21:23Z","content_type":null,"content_length":"23850","record_id":"<urn:uuid:9f8ab22a-eb9e-4efb-8ea2-37cb02d71f11>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sample Size and Precision in NIH Peer Review
The Working Group on Peer Review of the Advisory Committee to the Director of NIH has recommended that at least 4 reviewers should be used to assess each grant application. A sample size analysis of
the number of reviewers needed to evaluate grant applications reveals that a substantially larger number of evaluators are required to provide the level of precision that is currently mandated. NIH
should adjust their peer review system to account for the number of reviewers needed to provide adequate precision in their evaluations.
Citation: Kaplan D, Lacetera N, Kaplan C (2008) Sample Size and Precision in NIH Peer Review. PLoS ONE 3(7): e2761. doi:10.1371/journal.pone.0002761
Editor: Tom Tregenza, University of Exeter, United Kingdom
Received: May 7, 2008; Accepted: June 27, 2008; Published: July 23, 2008
Copyright: © 2008 Kaplan et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This study was supported entirely by personal discretionary funds.
Competing interests: The authors have declared that no competing interests exist.
On February 21, 2008 the recommendations of the Working Group on Peer Review of the Advisory Committee to the Director of the National Institutes of Health (NIH) were posted on the internet [1]. This
committee made several suggestions including shortening of the application size, giving applicants unambiguous feedback about resubmission, using short pre-buttals to correct factual errors in
review, and eliminating the special status of amended applications. A further recommendation of the group was to “engage more persons to review each application – “optimally 4 or more” [2].
Thus, the Advisory Committee has left the actual number of reviewers to evaluate each grant application ambiguous. No guidelines were provided to determine the number of reviewers that would be
needed. Consequently, we have conducted a statistical analysis to provide guidance in arriving at appropriate numbers. Our analysis shows an inherent statistical inconsistency in the NIH peer review
recommendations concerning the number of reviewers. We also demonstrate how crucial this number is and how it influences the precision of the eventual score.
For each grant proposal reviewers from the relevant scientific community are asked to report their evaluations within a pre-defined scale. The average grade obtained through this process is
considered a valid estimate of the “true” value of the proposal.
The survey sample size is a crucial parameter in determining whether we can rely on these mean estimates. Elementary sampling techniques give us the minimum number of respondents that are needed for
the evaluation procedure to deliver reliable estimates:(1)
In expression (1), n is the minimum required sample size or number of evaluators. Z[α/2] is the upper percentile of the standard normal distribution. For a 95% confidence interval and an alpha (type
I error, i.e. the probability of rejecting the null hypothesis when it is true) of .05, Z[α/2] is equal to 1.96. The parameter σ represents the underlying standard deviation. Finally, L indicates the
desired half-width of the interval between two consecutive evaluations or the precision of the evaluation.
There are two important implications of this equation. First, the inverse correlation between n and L indicates that more reviewers are needed to obtain a more fine-grained or precise evaluation.
Moreover, this relation is exponential so that greater precision comes with an increasingly greater number of reviewers.
Second, typically the standard deviation σ of a population is not observed and needs to be estimated. Since the data necessary to estimate σ for the review of biomedical research proposals have not
been collected in a statistically robust sampling system, we have relied on a model system of peer review with short movie proposals reviewed on a scale from 1 to 5 by undergraduate students
[Lacetera, Kaplan, Kaplan, submitted]. We used short movie proposals in order to increase the potential sample size since all undergraduate students could be considered expert enough to grade the
proposals. In this study 10 proposals were scored by an average of 48 reviewers. The average standard deviation was approximately 1.0 with a standard deviation considerably less than 0.1. Therefore,
we estimate σ to be equal to 1. Obviously, a more accurate estimate of the standard deviation can eventually be obtained for each form of application requested by NIH, although it should be clear
that a large number of independent evaluators is required to make any estimate of σ reliable.
Using equation (1), we can assess the effect of having 4 reviewers for each proposal. With four reviewers and a standard deviation of 1, the review would be expected to distinguish applications at
the level of the unit interval:(2)
Thus, four reviewers would be able to distinguish among whole integer scores.
Yet, in the evaluation of grant proposals NIH currently uses a 41-grade scale with a range of scores from 1.0 to 5.0 [3]. Moreover, these scores are averaged to yield a score with 3 significant
figures instead of 2 [3]. It is this number, inappropriately expanded to 3 significant figures by averaging, that is used by NIH in their scoring decisions. Although NIH does not explain the
rationale for the conversion of their scores to 3 significant figures, with 80,000 applications per year it seems likely that the NIH peer review system needs that level of precision to facilitate
their making choices close to the funding line. As a consequence to the use of scores with 3 significant figures, differences as small as 0.01 are used in making funding decisions. Nevertheless, in
order to obtain reliable scores with a precision level of 0.01, an unrealistically large number of reviewers would be needed:(3)
Expression (3) implies that, in order for a mean score of 3.56 to be taken as reliable and therefore as identifying a better, more promising proposal than one receiving a rating of 3.57, the
evaluation of almost 40 thousand referees would need to be obtained.
In Figure 1 the exponential relationship between the number of reviewers and the precision of the ratings that would provide reliable estimates of the mean is shown. On the x-axis, smaller numbers
indicate higher precision. Even for a precision level of 0.1, as many as 384 reviewers would be required.
Figure 1. The relationship between the precision of the evaluation system (how fine-grained it is established to be) and the minimum required number of evaluators needed for reliable estimates.
The disconnect between the needed precision in order to allocate funds in a fair way and the number of reviewers required for this level of precision demonstrates a major inconsistency that underlies
NIH peer review. With only four reviewers used for the evaluation of applications, an allocation system that requires a precision level in the range of 0.01 to 0.1 is not statistically meaningful and
consequently not reliable. Moreover, the 4 reviewers NIH proposes are not independent which degrades the precision that could be obtained otherwise.
Consequently, NIH faces a major challenge. On the one hand, a fine-grained evaluation is mandated by their review process. On the other hand, for such criterion to be consistent and meaningful, an
unrealistically high number of evaluators, independent of each other, need to be involved for each and every proposal.
Further insights can be derived from the analysis of expression (1). The value of σ is a measure of the underlying variability in the ratings. The minimum number of reviewers for any given degree of
ratings precision decreases with decreasing standard deviations. The standard deviation across ratings is also an indicator of the degree of agreement among different reviewers. If the standard
deviation is small, for instance equal to 0.01 instead of our previous working estimate of 1.0, there is essentially consensus among the referees. If σ = 0.01, then the following relation holds:(4)
Therefore, 4 independent evaluators can provide statistical legitimacy only under the circumstance of all evaluators giving essentially the same evaluation. For proposals that are expected to be more
controversial, as potentially transformative ideas have been proposed to be^5, a small number of evaluators would lead to unreliable mean estimates.
Our estimate of σ is not based on an analysis of biomedical research experts judging research projects close to their area of specialty. Scoring standard deviations for large numbers of experts
obtained in a statistically acceptable sampling system have not been collected. Instead, as described above, we have used a model system that has allowed us to readily collect opinion data about
proposals with undefined potential. Although we believe our estimate is reasonable, it is informative to visualize how the sample size estimate varies with different values of standard deviation for
a level of precision of 0.1 (Figure 2). It is evident that small sample sizes are able to provide levels of precision only when the standard deviation is exceptionally small. We used a level of
precision of 0.1 because the NIH peer review system mandates scoring at this precision level. For greater levels of precision, as suggested by the conversion from 2 significant figures to 3, the
increase in sample size is steeper with increasing standard deviation.
Figure 2. The relationship between the standard deviation of the scores and the minimum required number of evaluators needed for a precision of 0.1, which is the level of precision currently obtained
in the NIH peer review system.
The importance of scoring accuracy ultimately relates to the rank ordering of proposals. In our model system there were 5 movie proposals with mean scores ranging from 3.46 to 3.64. We have analyzed
how the rank ordering of these 5 proposals varied as reviewers were randomly included in the analysis from 1 to 40 reviewers (Figure 3). What is most striking in these graphs is the extreme
variability in the rank ordering with low numbers of reviewers. For instance, the upper-left and lower-left panels of Figure 3 show proposals that had relatively good rankings with less than 10
reviewers but that ended with relatively poor rankings with over 30 reviewers. Conversely, the lower-right panel shows a proposal that began with poor rankings but settled at with the best ranking
after 25 reviewers. Even the addition of 1 reviewer can markedly change the rank ordering of the proposals and consequently the funding decision. This effect is especially apparent when there are few
reviewers. The number of reviewers has profound implications in terms of the actual funding decisions that are eventually made.
Figure 3. Five individual movie proposals were evaluated by 40 reviewers and the rank ordering of the proposals was assessed as reviewers were randomly included in the analysis.
The 5 proposals were closely spaced with mean scores of 3.46 to 3.64. Proposals that had the same score were given an averaged rank; the figures changed little by assigning proposals with the same
score the highest ranking.
It is clear from our analysis that NIH needs to adjust their peer review system to account for low precision evaluations. Additionally, it would be valuable to determine the standard deviations of
scores given by independent reviewers. This information could be used to obtain more appropriate estimates of σ and consequently would be invaluable in designing and implementing a statistically
rational system of social choice for NIH.
Our data demonstrate that funding decisions will vary widely with the number of reviewers in considering proposals that are closely scored. Making choices between applications that vary by less than
1 will require larger numbers of reviewers than NIH has been contemplating. Recognition of the statistical inconsistencies of NIH peer review will allow for the implementation of new policies that
take into consideration the accepted relationship between the number of reviewers, the precision of scoring needed, and the standard deviation of the scores given.
The Working Group also recommended shortening the length of the application although no specific suggestions were included^2. Obviously, the length of the application impacts the number of reviewers
that could possibly be used for scoring. More reviewers can be used for shorter applications.
It is commonly accepted that NIH will not fund clinical trials that do not include a cogent sample size determination. It is ironic that NIH insists on this analysis for clinical studies but has not
recognized its value in evaluating its own system of peer review. We posit that this analysis should be considered in the revisions of NIH scientific review.
The NIH peer review structure has not been based in rigorous applications of statistical principles involving sampling [4]. It is this deficiency that explains the statistical weakness and
inconsistency of NIH peer review. Although NIH has made an excellent effort to remedy some of the most egregious problems inherent to their peer review system, the Working Group has neither fully
realized nor addressed the statistical problems that have beset the NIH peer review system.
Author Contributions
Conceived and designed the experiments: DK NL CK. Performed the experiments: DK NL CK. Analyzed the data: DK NL CK. Wrote the paper: DK NL CK.
Missing footnote or reference?
Posted by criedl
I'm very confused...
Posted by pamedeo
On Application Length
Posted by mtaffe | {"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0002761","timestamp":"2014-04-16T18:59:18Z","content_type":null,"content_length":"63487","record_id":"<urn:uuid:889d3568-0439-4b4d-892d-468608684c12>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] An issue with EDSLs in the ``finally tagless''
[Haskell-cafe] An issue with EDSLs in the ``finally tagless'' tradition
Brad Larsen brad.larsen at gmail.com
Wed Sep 23 21:59:49 EDT 2009
I seem to have run into an instance of the expression problem [1], or
something very similar, when experimenting with ``finally tagless''
EDSLs, and don't see a good way to work around it.
I have been experimenting with embedded DSLs, using the techniques
described in a couple recent papers [2,3]. The idea is this:
implement an embedded DSL using type classes, rather than ADTs or
GADTs. This allows one to define analyses, queries, and manipulations
of EDSL programs independently, as class instances. Furthermore, by
using type classes rather than data types, there is no interpretive
overhead in the analyses, queries, and manipulations on the EDSL
programs. Finally, using type classes permits greater modularity, as
an EDSL can be defined as the combination of several simpler EDSLs
Suppose we have a type class for simple integer arithmetic expressions:
> class IntArithExpr exp where
> integer :: Integer -> exp Integer
> add :: exp Integer -> exp Integer -> exp Integer
We can write an evaluator for these expressions like this:
> newtype E a = E { eval :: a }
> instance IntArithExpr E where
> integer = E
> add e1 e2 = E (eval e1 + eval e2)
> -- eval $ add (integer 20) (integer 22) <==> 42
The trouble comes in when defining a more general arithmetic
expression type class. Suppose we want polymorphic arithmetic
> class PolyArithExpr exp where
> constant :: a -> exp a
> addP :: exp a -> exp a -> exp a
We then try to define an evaluator:
> -- instance PolyArithExpr E where
> -- constant = E
> -- addP e1 e2 = E (eval e1 + eval e2) -- bzzt!
The instance definition for `addP' is not type correct:
Could not deduce (Num a) from the context ()
arising from a use of `+' at /home/blarsen/mail.lhs:42:20-36
One way to remedy this is to change the class definition of
PolyArithExpr so that `addP' has a Num constraint on `a':
> class PolyArithExprFixed exp where
> pae_constant :: a -> exp a
> pae_add :: Num a => exp a -> exp a -> exp a
which allows us to define an evaluator:
> instance PolyArithExprFixed E where
> pae_constant = E
> pae_add e1 e2 = E (eval e1 + eval e2)
I find this ``fix'' lacking, however: to define a new interpretation
of the EDSL, we may be forced to change the DSL definition. This is
non-modular, and seems like an instance of the expression
problem. (There may be a multiparameter type class solution for this.)
How can one define the polymorphic arithmetic EDSL without cluttering
up the class definitions with interpretation-specific constraints, and
still write the desired interpretations?
Bradford Larsen
[1] Philip Wadler. The Expression Problem. November 12, 1998.
[2] Jacques Carette, Oleg Kiselyov, and Chung-chieh Shan. Finally
Tagless, Partially Evaluated: Tagless Staged Interpreters for
Simpler Typed Languages. APLAS 2007.
[3] Robert Atkey, Sam Lindley, and Jeremy Yallop. Unembedding
Domain-Specific Languages. ICFP Haskell Symposium 2009.
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-September/066670.html","timestamp":"2014-04-16T07:27:24Z","content_type":null,"content_length":"6204","record_id":"<urn:uuid:607ce97f-788e-493c-90c7-da4285f9e3d6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: strength reduction of constant multiplication ?
preston@dawn.cs.rice.edu (Preston Briggs)
Fri, 9 Oct 1992 03:10:09 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: preston@dawn.cs.rice.edu (Preston Briggs)
Organization: Rice University, Houston
Date: Fri, 9 Oct 1992 03:10:09 GMT
References: 92-10-036
Keywords: arithmetic, theory
Youfeng Wu <wu@sequent.com> writes:
>[discussion of converting integer multiply by constant into simpler
I believe the problem of finding the optimal sequence of adds and left
shifts is NP-complete if we attempt to reuse intermediate results. Adding
subtract and right shift doesn't simplify the problem, though it can lead
to shorter solutions.
For example, multiplication by 22 without reusing intermediates seems to
take 5 instructions (of a certain limited form)
x4 = x1 << 2
x5 = x4 + x1
x10 = x5 << 1
x11 = x10 + x1
x22 = x11 << 1
If we allow the reuse of intermediates results (which will cost additional
registers), we can find a shorter solution:
x2 = x1 << 1
x3 = x2 + x1
x24 = x3 << 3
x22 = x24 - x2
I've only shown simple instructions. Naturally, shorter solutions using
special shift and add instructions are possible.
In practice, it probably suffices to use any heuristic approach that works
well for small integers, especially if supplemented by an exception table.
The idea of the exception table is to record (in a hash table, for
instance) the cases for which your heuristic is non-optimal and their
optimal solution (derived once at great expense by the compiler writer
using some sort of branch-and-bound exponential searcher).
The following paper describes the approach used in the PL.8 compiler:
title="Multiplication by Integer Constants",
author="Robert Bernstein",
journal="Software -- Practice and Experience",
A second reference describes a similar approach used by HP
Integer Multiplication and Division on the HP Precision
Magenheimer, Peters, Pettis, and Zuras
Proceedings of ASPLOS II
The mathematically inclined will resort to Knuth, Volume 2.
Preston Briggs
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/92-10-037","timestamp":"2014-04-19T22:08:32Z","content_type":null,"content_length":"6218","record_id":"<urn:uuid:b85f6698-37a7-414d-ad04-072c4b7fe97e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time Value of Money
What It Measures
Time Value of Money (TVM) is one of the most important concepts in the financial world. If a business is paid $1 million for something today, that money is worth more than if the same $1 million was
paid at some point in the future. The reason money given today is worth more is straightforward: If I have money today, I have the potential to earn interest on the capital.
TVM values how much more a given sum of money is worth now (or at a specific future date) compared to in the future (or, in the case of a future payment, a date that is even further in the future).
TVM calculations take into account likely interest gains, discounted cash flow, and potential risk, to create a value figure for a specific amount of money or investment opportunity.
There are several calculations commonly used to express the time value of money, but the most important are present value and future value.
Why It Is Important
If company A has the opportunity to realize $10,000 from an asset today, or two years in the future, TVM allows the company to calculate exactly how much more that $10,000 is worth if it’s received
today, as opposed to in the future. It is important to know how to calculate the time value of money because it means you can distinguish between the value of investment opportunities that offer
returns at different times.
How It Works in Practice
If a business has the option of receiving a $1 million investment today or a guaranteed payment of the same amount in two years’ time, you can use TVM calculations to show the relative value of the
two sums of money.
Option A, take the money now: The business might accept the $1,000,000 investment immediately and put the capital into an account paying a 4.5% annual return. In this account, the $1,000,000 would
earn $92,025 interest over two years (annually compounded), making the future value of the investment $1,092,025. This can be expressed using the following formula:
Future value = 1,000,000 × (1 + 0.045)^2
which might be expressed as:
Future value = Original sum × (1 + Interest rate per period)^No. of periods
Obviously, the present value of the $1 million if it is received today would be $1 million. But if the money isn’t received for another two years, we can still calculate its present and future
The present value of a future $1 million investment is based on how much you would need to receive today to receive $1 million in two years’ time. This is done by discounting the $1,000,000 by the
interest rate for the period. Assuming an annual interest rate of 4.5%, we can calculate the present value using the following formula:
Present value = Future value ÷ (1 + Interest rate per period)^No. of periods
Using this formula, we can see that the present value of a future payment of $1 million in two years’ time is:
1,000,000 ÷ (1 + 0.045)^2 = $915,730
In other words, the investment in two years time is the equivalent of receiving $915,730 today, and investing it at 4.5% for two years.
Tricks of the Trade
• There are five key components in TVM calculations. These are: present value, future value, the number of periods, the interest rate, and a payment principal sum. Providing you know four of these
values, you can rearrange the TVM formulae to calculate the fifth.
• When calculating TVM, you may sometimes need to supplement the calculation to discount future payments to take account of risk as well as time value. Discount rates can be adjusted to take
account of risks like the other party not paying you back (default risk) or the fact that the item you intended to purchase has become more expensive, reducing the buying power of the money. In
this case, the company lending the principal sum might insist on a higher interest rate to compensate for the risk.
• If a future payment is not certain, you can use the capital asset pricing model to calculate the risk involved. | {"url":"http://www.qfinance.com/cash-flow-management-calculations/time-value-of-money?action=recommended&id=-38534&recommendedArticleType=librios","timestamp":"2014-04-20T05:43:33Z","content_type":null,"content_length":"50727","record_id":"<urn:uuid:fcc3f334-423a-4d71-a46d-99df838ddafd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
This module provides a simple, naive implementation of nondeterministic finite automata (NFA).
The transition function consists of a Map, but there are also accessor function which help you query the automaton without worrying about how it's implemented.
1. The states are a list of lists, not just a simple flat list as you might expect. This allows you to optionally group your states into "columns" which is something we use in the GenI polarity
automaton optimisation.
2. We model the empty an empty transition as the transition on Nothing. All other transitions are Just something.
data NFA st ab Source
Note: you can define the final state either by setting isFinalSt to Just f where f is some function or by putting them in finalStList
finalSt will use this if defined
can be ignored if isFinalSt is defined
there can be more than one transition between any two states and a transition could be the empty symbol
if you don't care about grouping states into columns you can just dump everything in one big list
:: (Ord ab, Ord st)
=> NFA st ab
-> st from state
-> Maybe ab transition
-> st to state
-> NFA st ab
lookupTrans :: (Ord ab, Ord st) => NFA st ab -> st -> Maybe ab -> [st]Source
lookupTrans aut st1 ab returns the states that st1 transitions to via a.
automatonPaths :: (Ord st, Ord ab) => NFA st ab -> [[ab]]Source
Returns all possible paths through an automaton from the start state to any dead-end.
Each path is represented as a list of labels.
We assume that the automaton does not have any loops in it.
automatonPathSets :: (Ord st, Ord ab) => NFA st ab -> [[[ab]]]Source
The set of all bundled paths. A bundled path is a sequence of states through the automaton from the start state to any dead end. Any two neighbouring states can have more than one possible transition
between them, so the bundles can multiply out to a lot of different possible paths.
The output is a list of lists of lists:
• Each item in the outer list is a bundled path through the automaton, i.e. without distinguishing between the possible transitions from any two neighbouring states
• Each item in the middle list is represents the set of transitions between two given neighbouring states
• Each item in the inner list represents a transition between two given states | {"url":"http://hackage.haskell.org/package/GenI-0.22/docs/NLP-GenI-Automaton.html","timestamp":"2014-04-23T08:43:06Z","content_type":null,"content_length":"12142","record_id":"<urn:uuid:cf3f560c-9815-494a-9890-46847ed792f3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sparse arrays (hash tables)
Sparse arrays are quite simple, and are among the most heavily used of data structures in Mathematica (often without giving that name to them). One uses them to store values associated to "indices"
attached to a given "head." But this head need not be an atom, and, unlike Mathematica part specifications, the indices need not be positive integers. When left-hand-sides of such indices are free of
patterns, Mathematica will compute what is called a "hash value" for them; think of these as positive integers that can be used as an index into an actual table (more on this below).
Below we use the symbol a as a head for a sparse array defined for three particular indices.
One might (and frequently will) choose to view such definitions as something other than a sparse array. For example, one often defines Mathematica functions in this way. But most often function
definitions will rely on Mathematica patterns and this disqualifies them from being sparse arrays in an important aspect.. Specifically, it is only families of associated values free of patterns that
may be hashed. This is both because pattern matching relies heavily on technology that is intrinsically non-hashable and because rule ordering neecessary for pattern matching precludes look-up based
on hashed values.
What is the benefit of these sparse arrays? It all boils down to efficiency. The Mathematica kernel will compute what is called a "hash value" for each left-hand-side, and use that to place it into
an internal data structure with associated right-hand-value. While multiple left-hand-sides might have the same hash value and hence be placed in the same bin, the Mathematica kernel checks that bins
do not grow too large, rehashing to a larger space when need be. The upshot is that if we assume the computation of a hash value is constant time, then:
(i) Adding new elements to such a table is, on average, constant time.
(ii) Element look-up is, on average, constant time.
Hence we may use such arrays to store values and retrieve them quickly. Below we show simple implementations of a union that does not sort its elements. One uses Mathematica lists and the other uses
a sparse array.
First we check that these give the same results.
Now we check speed and algorithmic complexity.
It is quite clear that unionNoSort1 has quadratic complexity whereas unionNoSort2 has only linear complexity.
There are two further observations to be made about this implementation of unsorted union. First, the semantics are not entirely the same as in Union because that latter will test equivalence using
SameQ whereas unionNoSort2 relies on hash look-up. Second, in addition to hashing we make rudimentary use of a sort of "stack" by nesting the result we construct. We will cover this in more detail in
the next section.
Application: sparse sets
One might use sparse arrays to represent sets with cardinality relatively small by comparison to the universe of allowable elements. We show a simple implementation of such sets, including functions
to find unions, intersections and complements of pairs of such sets.
Here we show some simple examples.
We will use Mathematica built-in logical operations on the set elements in order to check our set functions for correctness.
It is easy to demonstrate that the average complexity for adding or removing elements is O(1) (that is, constant time), as expected. | {"url":"http://library.wolfram.com/conferences/devconf99/lichtblau/Links/index_lnk_5.html","timestamp":"2014-04-21T12:13:16Z","content_type":null,"content_length":"30964","record_id":"<urn:uuid:6c806843-b561-44da-8c33-cba16f97b89d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Related Work
Related Work
The 1990's witnessed a broad development of software for discrete optimization. Almost without exception, these new software packages were based on the techniques of branch, cut, and price. The
packages fell into two main categories--those based on general-purpose algorithms for solving mixed-integer linear programs (MILPs) (without the use of special structure) and those facilitating the
use of special structure by interfacing with user-supplied, problem-specific subroutines. We will call packages in this second category frameworks. There have also been numerous special-purpose codes
developed for use in particular problem settings.
Of the two categories, MILP solvers are the most common. Among the dozens of offerings in this category are MINTO [27], MIPO [3], bc-opt [8], and SIP [26]. Generic frameworks, on the other hand, are
far less numerous. The three frameworks we have already mentioned (SYMPHONY, ABACUS, and COIN/BCP) are the most full-featured packages available. Several others, such as MINTO, originated as MILP
solvers but have the capability of utilizing problem-specific subroutines. CONCORDE [2,1], a package for solving the Traveling Salesman Problem (TSP), also deserves mention as the most sophisticated
special-purpose code developed to date.
Other related software includes several frameworks for implementing parallel branch and bound. Frameworks for general parallel branch and bound include PUBB [33], BoB [4], PPBB-Lib [35], and PICO [10
]. PARINO [23] and FATCOP [6] are parallel MILP solvers.
Ted Ralphs | {"url":"http://www.coin-or.org/SYMPHONY/man-5.2.3/node6.html","timestamp":"2014-04-19T19:35:37Z","content_type":null,"content_length":"5030","record_id":"<urn:uuid:ec4667c0-ca63-466e-aae6-345ed9accd5d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test taking fun= weekend fun
So yesterday I took 3 online IQ tests for fun, but to be honest I feel like the online ones are total crap because there are such large discrepancies when it comes to the score. I took 3 not only
because I’m kind of getting my test addiction again, but also so I can average the scores to find something that is closer to the “true” value.
Test 1:
43 questions: 119
Test 2:
30 questions: 135 (with this you can already see the huge discrepancy)
Test 3:
57 questions: 124
When you average the scores you end up with 126, but I thought that the tests with more questions should be more accurate than ones with less, so I took the number of questions into consideration
when trying to calculate the “true” value. So I added up all of the questions:
Then I divided the number of questions by the total number to find what the part of the total it is:
43/130= 0.33077
30/130= 0.23076
57/130= 0.43864
Then I multiplied these values to the IQ score I got from the corresponding test, and added all of the products up. (This is essentially the same method that scientists use to find the atomic mass of
an element.)
0.33077*119+0.23076*135+0.43864*124= 125
Then, for fun, and to utilize some things I’ve learned in Stats, I found the z-score. mean=100 standard deviation=15
(126-100)/15= 1.667
Then I used normalcdf(-1*10^99,1.667,0,1) to find where I am in relation to the rest of society and found that I’m in the (around, slightly over) 95th percentile.
Just for the hell of it I found the percent error of the average of the scores, assuming that the value 125 is the “true” value.
(126-125)*100/125= 0.8%
So surprisingly the average was pretty damn accurate. So much so that you could essentially consider it the true value. To be honest I was pretty disappointed with the result because I wanted to be
more than 2 z scores away from the mean, so I could have an unusually high IQ :(. I mean I only needed to have 6 more IQ points to fall into that category. Hahahaha but all in all, this was a pretty
fun way to spend my weekend. I think I spent around an hour and 7 mins taking tests and making calculations. Fun stuff…
these girls from theatre just left their bags near me I think they want me to look after them I feel so much responsibility for these bags what if they never come back and i have to to raise
these bags on my own don’t know if I can support these bags im only 16 why is this happening to me
they came back its ok
(via tyler-the-exterminator) | {"url":"http://comrademathyou.tumblr.com/","timestamp":"2014-04-16T05:02:00Z","content_type":null,"content_length":"39379","record_id":"<urn:uuid:09872c0f-5874-4631-afcf-b6d4ca0ff999>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
dlib C++ Library - mlp_ex.cpp
// The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
This is an example illustrating the use of the multilayer perceptron
from the dlib C++ Library.
This example creates a simple set of data to train on and shows
you how to train a mlp object on that data.
The data used in this example will be 2 dimensional data and will
come from a distribution where points with a distance less than 10
from the origin are labeled 1 and all other points are labeled
as 0.
#include <iostream>
#include <dlib/mlp.h>
using namespace std;
using namespace dlib;
int main()
// The mlp takes column vectors as input and gives column vectors as output. The dlib::matrix
// object is used to represent the column vectors. So the first thing we do here is declare
// a convenient typedef for the matrix object we will be using.
// This typedef declares a matrix with 2 rows and 1 column. It will be the
// object that contains each of our 2 dimensional samples. (Note that if you wanted
// more than 2 features in this vector you can simply change the 2 to something else)
typedef matrix<double, 2, 1> sample_type;
// make an instance of a sample matrix so we can use it below
sample_type sample;
// Create a multi-layer perceptron network. This network has 2 nodes on the input layer
// (which means it takes column vectors of length 2 as input) and 5 nodes in the first
// hidden layer. Note that the other 4 variables in the mlp's constructor are left at
// their default values.
mlp::kernel_1a_c net(2,5);
// Now let's put some data into our sample and train on it. We do this
// by looping over 41*41 points and labeling them according to their
// distance from the origin.
for (int i = 0; i < 1000; ++i)
for (int r = -20; r <= 20; ++r)
for (int c = -20; c <= 20; ++c)
sample(0) = r;
sample(1) = c;
// if this point is less than 10 from the origin
if (sqrt((double)r*r + c*c) <= 10)
// Now we have trained our mlp. Let's see how well it did.
// Note that if you run this program multiple times you will get different results. This
// is because the mlp network is randomly initialized.
// each of these statements prints out the output of the network given a particular sample.
sample(0) = 3.123;
sample(1) = 4;
cout << "This sample should be close to 1 and it is classified as a " << net(sample) << endl;
sample(0) = 13.123;
sample(1) = 9.3545;
cout << "This sample should be close to 0 and it is classified as a " << net(sample) << endl;
sample(0) = 13.123;
sample(1) = 0;
cout << "This sample should be close to 0 and it is classified as a " << net(sample) << endl; | {"url":"http://dlib.net/mlp_ex.cpp.html","timestamp":"2014-04-19T04:39:22Z","content_type":null,"content_length":"9332","record_id":"<urn:uuid:97d2e3f8-3c40-40d7-95b8-f92ed569bd39>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |