Here's the question you clicked on:
Solve x − 5 = 7, where the x − 5 has a square root sign over it; that is, √(x − 5) = 7.
You can eliminate a square root by squaring both sides. Remember: what you do to one side must be done to the other.
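The helper's advice can be checked in a couple of lines (this snippet is my illustration, not part of the thread):

```python
import math

# sqrt(x - 5) = 7  ->  square both sides  ->  x - 5 = 49  ->  x = 54
x = 7 ** 2 + 5
print(x, math.sqrt(x - 5))   # 54 7.0 -- plugging back in recovers 7
```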
More integration: evaluation
June 16th 2008, 08:22 PM
Just put a yes or no as to my answers =P (c is a constant; sorry about the format and the square roots).
1) Integrate: (x^4 − 2x^3 + 5x + 1)/x^4 dx
I got: x − 5/(2x^3) − 3/(x^3) − 2 + c
2) Integrate: sin x / √(cos x) dx
I got: −0.5√(cos x) + c
3) Integrate: e^x √(e^x + 4) dx
I got: 0.5√(e^x + 4) + c
4) Integrate: x e^(x^2+1) dx
I got: 0.5 e^(x^2+1) + c
5) Integrate: 1/(√x (√x + 1)^2) dx
Sorry, this one is a bit hard to read: it is 1 divided by √x times (√x plus 1) squared.
Anyway, I got: −2/(√x + 1) + c
LAST ONE =P
6) Integrate: x^2/(1 + x^6) dx, done through two substitutions:
let u = x^3, then let u = √3 tan θ.
In the end I got: √3 · arctan(x^3/√3) + c
(arctan is tan^−1, for anyone maths-illiterate reading this.)
I hope to see 6 yeses in your answers, and thanks in advance =P
June 16th 2008, 08:43 PM
note that $\frac {x^4 - 2x^3 + 5x + 1}{x^4} = \frac {x^4}{x^4} - \frac {2x^3}{x^4} + \frac {5x}{x^4} + \frac 1{x^4} = 1 - \frac 2x + 5x^{-3} + x^{-4}$
2) integrate: sinx/(squareroot of cosx) dx
I got: -0.5(squareroot of cosx) +c
Incorrect: your constant is off. Check again (you can differentiate your answer to see that it is wrong).
3) integrate (e^x)squareroot(e^x+4) dx
I got: 0.5(squareroot e^x+4) +c
Incorrect again: are you sure you didn't mean $\int \frac {e^x}{\sqrt{e^x + 4}}~dx$? Your answer would still be wrong, but you would have been closer.
4) integrate xe^(x^2+1) dx
I got: 0.5e^(x^2+1) +c
Correct (Clapping)
5) integrate 1/((squareroot x)((squareroot x) +1)^2) dx
Sorry about this 1 a bit hard to read.
It is 1 divided by (rootx)(rootx plus 1)squared
anyways..I got: -2/(squareroot x +1) +c
Correct (Clapping)
LAST ONE =P
Okie dokie :p
6) integrate (x^2)/(1+x^6) done through two substitutions
let u=x^3 , then let u=root3 tan theta
In the end i got: (squareroot 3)*arctan((x^3)/(squareroot 3)) +c
arctan is tan^-1 for anyone maths illiterate reading this.
I hope to see 6 yeses in your answers and thanks in advance =P
Just a substitution of u = x^3 will do. No second substitution is needed, and a trig sub is definitely not needed.
June 16th 2008, 08:57 PM
I am feeling good tonight, I will do all of these for didactic purposes.
1) $\int\frac{x^4-2x^3+5x+1}{x^4}\,dx=\int\bigg[1-\frac{2}{x}+\frac{5}{x^3}+\frac{1}{x^4}\bigg]\,dx=x-2\ln(x)-\frac{5}{2x^2}-\frac{1}{3x^3}+C$
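A quick numerical sanity check, my addition rather than part of the original reply: differentiate the antiderivative F(x) = x − 2 ln(x) − 5/(2x^2) − 1/(3x^3) with a central difference and compare it with the integrand.

```python
import math

# F(x) should satisfy F'(x) = (x^4 - 2x^3 + 5x + 1) / x^4.
def F(x):
    return x - 2 * math.log(x) - 5 / (2 * x ** 2) - 1 / (3 * x ** 3)

def f(x):
    return (x ** 4 - 2 * x ** 3 + 5 * x + 1) / x ** 4

x, h = 1.3, 1e-6
print(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-7)   # True
```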
2) $\int\frac{\sin(x)}{\sqrt{\cos(x)}}dx=-\int\frac{-\sin(x)}{\sqrt{\cos(x)}}dx$
Now let $\vartheta=\cos(x)\Rightarrow d\vartheta=-\sin(x)\,dx$
So we have
$-\int\frac{d\vartheta}{\sqrt{\vartheta}}=-2\sqrt{\vartheta}+C=-2\sqrt{\cos(x)}+C$
3) $\int{e^x\sqrt{e^x+4}dx}$
Let $\varphi=e^x\Rightarrow d\varphi=e^x\,dx$
Giving us
$\int\sqrt{\varphi+4}\,d\varphi=\frac{2}{3}(\varphi+4)^{\frac{3}{2}}+C$
Backsubstituting, we get
$\frac{2}{3}(e^x+4)^{\frac{3}{2}}+C$
4) $\int{xe^{x^2+1}dx}$
Let $\psi=x^2+1\Rightarrow d\psi=2x\,dx$
So we have
$\frac{1}{2}\int e^{\psi}\,d\psi=\frac{1}{2}e^{\psi}+C$
Backsubstituting, we get
$\frac{1}{2}e^{x^2+1}+C$
5) $\int\frac{dx}{\sqrt{x}(\sqrt{x}+1)^2}$
Let $\xi=\sqrt{x}\Rightarrow x=\xi^2\Rightarrow dx=2\xi\,d\xi$
So we have
$2\int\frac{\xi\,d\xi}{\xi(1+\xi)^2}=2\int\frac{d\xi}{(1+\xi)^2}=\frac{-2}{1+\xi}+C$
Backsubstituting, we get
$\frac{-2}{1+\sqrt{x}}+C$
6) $\int\frac{x^2}{1+x^6}dx$
Let $\zeta=x^3\Rightarrow d\zeta=3x^2\,dx$
So now we have
$\frac{1}{3}\int\frac{d\zeta}{1+\zeta^2}=\frac{1}{3}\arctan(\zeta)+C$
Now backsubstituting, we get
$\frac{1}{3}\arctan(x^3)+C$
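Since only the single substitution u = x^3 is needed for problem 6, the closed form is easy to double-check numerically. This snippet is my addition; the trapezoid rule and tolerance are arbitrary choices.

```python
import math

# Compare F(1) - F(0) for F(x) = (1/3) arctan(x^3) with a trapezoid-rule
# estimate of the integral of x^2 / (1 + x^6) over [0, 1].
f = lambda x: x ** 2 / (1 + x ** 6)
n = 100_000
trap = (f(0) / 2 + sum(f(k / n) for k in range(1, n)) + f(1) / 2) / n
exact = math.atan(1.0) / 3   # = pi/12, since arctan(0) = 0
print(abs(trap - exact) < 1e-8)   # True
```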
June 16th 2008, 09:41 PM
Ok, thanks, I think I get all that, funny symbols and all =P. I just differentiated instead of integrating at some points... doh... and made a few minor errors that totally screwed me over... hmmm
Well, actually, when the mood takes me I say a i u e o ka ki ku ke ko...
Btw, could you please check out my post just before this one... more integration gone wrong, I guess...
Jap... though I hardly speak it since I stopped learning 2 years ago.
nihongo o wakarimasu ka ("Do you understand Japanese?")
So that's a big yes to you checking out my other post =P... all simple stuff, I hope.
Maximize problem
March 9th 2008, 05:52 PM #1
You have 40 m of fencing with which to enclose a rectangular space for a garden. Find the largest area that can be enclosed with this much fencing and the dimensions of the corresponding garden.
Let the dimension for the space be $l$ by $w$, where $l$ denotes length and $w$ denotes width.
From given we have $2l+2w=40$, also the area can be expressed as $A=l\times w$. Our goal is to maximize the area $A$.
Using the given perimeter constraint, we can write $l=20-w$. Then let's rewrite the area as: $A=(20-w)w=-w^2+20w$
Now the area is expressed as a function of $w$, and it is a quadratic function whose graph is a parabola opening down; hence it has a maximum. More specifically, completing the square gives
$A=-(w-10)^2+100,$
which implies that when $w=10$ we obtain the maximum area $A=100$. Thus the dimensions should be $l=w=10$.
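The vertex computation can be written out directly (illustrative snippet, not part of the original post):

```python
# A(w) = -w^2 + 20w; a parabola a*w^2 + b*w + c peaks at w = -b / (2a).
a, b = -1.0, 20.0
w = -b / (2 * a)         # 10.0
A = a * w * w + b * w    # 100.0
print(w, A)              # 10.0 100.0
```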
How to get the covariance of x and |x|----cov(x,|x|)?
How do you get the covariance of X and |X|, i.e., cov(X,|X|), when X is continuous or discrete? Thanks!
Hello. cov(X,|X|) = E[X·|X|] − E[X]E[|X|]... But whether it can be simplified further depends on what your exercise really is. Please state the context...
Thanks for your quick reply. The context is that E(X) is given, but E(|X|) is unknown. When X, as a random variable, can lie in both the positive interval [0, ∞) and the negative interval (−∞, 0), what is cov(X,|X|)? My problem is that X can jump between the positive and negative cases, so can we simply consider the covariance in the cases x > 0 and x < 0 separately? If not, what is the result? Thanks!
Hello. You can't just split into the cases X > 0 and X < 0; you have to condition on the sign. Okay, let's set $sgn(X)=-1$ if $X<0$ and $sgn(X)=1$ if $X\geq 0$. In other words $sgn(X)=\bold{1}_{X\geq 0}-\bold{1}_{X<0}$ (it's a random variable following a simple discrete distribution; let's say p is the probability that it equals 1).

Then $cov(|X|,X)=E[sgn(X)X^2]-E[sgn(X)X]E[X]$.

Now let's consider a conditional covariance (sorry, I love using conditional expectations :D):

$cov(|X|,X \mid sgn(X))=E[sgn(X)X^2 \mid sgn(X)] - E[sgn(X)X \mid sgn(X)]E[X \mid sgn(X)]$

Since sgn(X) is indeed sgn(X)-measurable, we can pull it out of the expectations:

$cov(|X|,X \mid sgn(X))= sgn(X) E[X^2 \mid sgn(X)]-sgn(X) (E[X \mid sgn(X)])^2 = sgn(X) Var(X \mid sgn(X))$

So $cov(|X|,X)=E[sgn(X) Var[X \mid sgn(X)]]$, which is an expectation with respect to sgn(X)'s distribution (meaning that sgn(X) is the sole remaining random variable in the expectation). So this gives $pVar(X)-(1-p)Var(X)=(2p-1)Var(X)$, where p is, as said before, $P(sgn(X)=1)=P(X\geq 0)$.

That's the furthest one can go with the provided information. And it doesn't matter whether X is discrete or continuous.
Thank you so much for your very smart and creative answer, but I still have one more question: is $Var[X \mid sgn(X)=-1]=Var(X)$? And likewise $Var[X \mid sgn(X)=1]=Var(X)$?
Yes, it's equal. So we can say that $Var[X|sgn(X)]=Var[X]$, which lets us simplify: $E[sgn(X) Var[X|sgn(X)]]=E[sgn(X) Var[X]]=Var[X]\,E[sgn(X)]$, since Var[X] is a constant! If you understand my method, I have some doubts about your thread belonging to the pre-university subforum :D
Actually, the equation $Var[X|sgn(X)]=Var[X]$ does not hold: when I use integration to solve cov(X,|X|) directly, the two final results are not consistent... please check it again to avoid future mistakes...
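The last poster's objection is easy to confirm numerically. Here is a two-point counterexample of my own construction, not from the thread: with X = −1 or +3, each with probability 1/2, cov(X,|X|) and (2p − 1)Var(X) disagree, so Var[X | sgn(X)] = Var[X] cannot hold in general.

```python
outcomes, probs = [-1.0, 3.0], [0.5, 0.5]

def E(g):
    """Expectation of g(X) for this two-point distribution."""
    return sum(pr * g(x) for x, pr in zip(outcomes, probs))

var = E(lambda x: x * x) - E(lambda x: x) ** 2            # 4.0
cov = E(lambda x: x * abs(x)) - E(lambda x: x) * E(abs)   # E[X|X|] - E[X]E[|X|] = 2.0
p = sum(pr for x, pr in zip(outcomes, probs) if x >= 0)   # P(X >= 0) = 0.5
print(cov, (2 * p - 1) * var)   # 2.0 0.0 -- the claimed identity fails
```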
shape : scalar > 0
The shape of the gamma distribution.
scale : scalar > 0, optional
The scale of the gamma distribution. Default is equal to 1.
size : shape_tuple, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn.
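For a quick illustration of the shape/scale parameterization, this snippet uses the Python standard library's `random.gammavariate` rather than NumPy itself; a gamma(shape, scale) distribution has mean shape × scale.

```python
import random

random.seed(42)
shape, scale = 2.0, 3.0
draws = [random.gammavariate(shape, scale) for _ in range(200_000)]
mean = sum(draws) / len(draws)
print(abs(mean - shape * scale) < 0.1)   # True: the sample mean is near 6
```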
Cell Communication: Understanding How Information Is Stored And Used In Cells (The Library of Cells)
Author(s): Michael Friedman, Brett Friedman
ISBN-13: 9781404203198
Format: Library Binding Pages: 48 Pub. Date: 2004-12 Publisher: Rosen Pub Group
List Price: $29.25
Book Description
High school students learn how cells respond to their changing environment by cell signaling and how genetic information in each cell's DNA directs these processes. Includes detailed illustrations
and sidebars.
More About Using This Site and Buying Books Online:
Be Sure to Compare Book Prices Before You Buy
This site was created for shoppers to compare book prices and find cheap books and cheap college textbooks. Discount book retailers and discount bookstores put many discounted books and textbooks on sale every day; you just need to search for them. Our site links to major bookstores for book details and coupons, but don't jump straight into any bookstore site to buy: always click the "Compare Price" button to compare prices first. You may be surprised how much you can save by doing a book price comparison.
Buy Books from a Foreign Country
Our goal is to find the cheapest books and college textbooks for you, both new and used, from a large number of bookstores worldwide. Currently our book search engines fetch book prices from the US, UK, Canada, Australia, New Zealand, the Netherlands, Ireland, Germany, France, and Japan; more bookstores from other countries will be added soon. Before buying from a foreign bookstore, be sure to check the shipping options: it is not unusual for shipping to take 2-3 weeks and to cost several times a domestic shipping charge.
Buy Used Books and Used Textbooks
Buying used books and used textbooks is becoming more and more popular among college students as a way to save. Second-hand books come in different conditions, so be sure to check the condition in the seller's description. Many book marketplaces also list books from small bookstores and individual sellers; check the store reviews for the seller's reputation when available. If you are in a hurry to get a book or textbook for your class, you are better off buying a new copy for prompt shipping.
Please see the Help page for questions regarding ISBN / ISBN-10, ISBN-13, EAN / EAN-13, and Amazon ASIN.
Prove 1-(1/n)<an<1
Suppose the numbers a_0, a_1, a_2, ..., a_n satisfy the following conditions: a_0 = 1/2 and a_{k+1} = a_k + (1/n)a_k^2 for k = 0, 1, ..., n-1. Prove that 1 - (1/n) < a_n < 1.
Calculate the value of $a_1$, check that $1-\frac{1}{n}<a_1<1$, and use the method of induction to prove the required result.
By induction, I have: let n = 1; then 0 < a_n < 1 (true). Assume it is true for n = k, i.e., 1 - (1/k) < a_k < 1. For n = k + 1: 1 - (1/k) + (1/n)a_k^2 < a_k + (1/n)a_k^2 < 1 + (1/n)a_k^2, so 1 - (1/k) + (1/(k+1))a_k^2 < a_{k+1} < 1 + (1/(k+1))a_k^2. I can't finish the case n = k + 1.
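Before hunting for the induction step, it can help to check the claim numerically (my snippet; note the recurrence runs over k = 0, ..., n-1 so that a_n is defined from a_0):

```python
def a_n(n):
    # a_0 = 1/2, a_{k+1} = a_k + a_k^2 / n, for k = 0, ..., n-1
    a = 0.5
    for _ in range(n):
        a += a * a / n
    return a

for n in (1, 2, 10, 100):
    print(n, 1 - 1 / n < a_n(n) < 1)   # True for each n
```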
Inside the inverted proofs class: Meeting the design challenges
March 11, 2013, 8:00 am
It’s been a while since I last wrote about the recently-completed inverted transition-to-proof course. In the last post, I wrote about some of the instructional design challenges inherent in that
course. Here I want to write about the design itself and how I tried to address those challenges.
To review, the challenges in designing this course include:
• An incredibly diverse set of instructional objectives, including mastery of a wide variety new mathematical content, improvement in student writing skills, and metacognitive objectives for
success in subsequent proof-based courses.
• The cultural shock encountered by many students when moving from a procedure-oriented approach to mathematics (Calculus) to a conceptual approach (proofs).
• The need for strong mathematical rigor, so as to prepare students well for 300-level proof-based courses, balanced with a concern for student morale and emotional well-being in the process.
• The need to satisfy the university’s writing requirements, particularly in the form of the Proof Portfolio.
These are a lot of plates to keep spinning. Not all of these challenges are solvable by instructional design, either. But a sound design for the course, a good structure underneath it all, always
makes other things easier to deal with.
First of all, it’s not a given that this course has to be done in an inverted format. But most of my colleagues who have taught the course before (including myself in Fall 2011) used a
“nearly-inverted” structure in the course. Prior to a class meeting, students would be assigned readings from the textbook (written by my colleague Ted Sundstrom, who has been sort of the architect
of this course for a long time) and asked to work 1–2 “Preview Activities” from the book in advance of the course. Then, course meetings would usually be organized mostly around active student work.
What prevents this setup from being the canonical “flipped” classroom is that usually, students would still have take-home homework after class is over, and there was usually some amount of lecturing
going on during the class meetings. This is, at least, how I taught the class the first time back in Fall 2011.
The first design decision I made when looking at this class for the second time was to make it fully inverted. I’ve already written about why I decided on a fully-inverted structure. In addition to
the more philosophical reasons given in that article, inverting this class accomplished several practical objectives. First, by having students make first contact with the material before class, it
made the class meetings more focused on the material that made the least amount of sense — which in turn helped me to not be as rushed through the whole body of content to cover. Second, by focusing
class time on sense-making activities done in groups, we were able to work on building those peer-to-peer social networks that help students learn from each other and reduce the amount of stress in
the class. These hit at least two of the design challenges I mentioned earlier — handling the diversity of material and dealing with cultural shock and personal stress in transitioning to doing proofs.
Third, and most practical of all, the work we did in class took the place of take-home homework — we called it “Classwork” instead of “Homework” to emphasize the point — and by doing most of it in
class, it freed up time outside of class for students to focus on preparing for class really well and, importantly, to focus on their Proof Portfolios. In fact I made the point many times to students
that what I wanted them to focus on outside of class was reading the book and watching the videos, and working on the Portfolio — and that’s it. In the previous version of the class, students had
reading and homework and the portfolio to manage outside of class. By inverting, we were replacing homework with a greater emphasis on rigorous preparation — and it paid off handsomely in the form of
better-prepared students and better portfolio writing.
This is an aspect of the inverted classroom I don’t think we stress enough: It helps students manage their time and tasks better. Instead of homework whose difficulty is often hard to gauge, let them
do something simple — watching videos and doing basic exercises. Students are better able to manage this kind of cognitive load. How many of us have complained that students don’t have good time
management skills? And yet we set up course structures that force students to deal with homework loads and time management issues that would be tough even for a seasoned academician to handle. This
isn’t a skill anybody is born with — they have to be taught. By us.
Anyway, back to the course. So I designed the course around the idea that before class, students would have first contact with the material, then come to class and work in groups on challenging
problems that required them to make sense of that material. And they’d be working on the Portfolio outside of class. To keep students honest, I reserved 5 minutes at the beginning of each class for a
three-question multiple choice quiz over the reading. The first thing we’d do in class was take the quiz using clickers, so students would know their results right away, and I’d have data to work
with in case we needed to discuss something before class started. We always went over the quiz right after taking it and folded in any recurring questions from the Guided Practice. This Q&A time was
the closest thing to direct instruction we saw in the class, and it was effective — because it was targeted at specifically the one or two items that needed to be discussed. Having the Q&A time also
reassured students that they weren’t being expected to learn all the material on their own — that questions were OK to ask.
I also felt it was important to make sure students weren’t skimming through the class on the strength of their group-mates, so I put in two hour-long exams and a final in the class over lower-level
ideas (definitions, basic mechanics like writing down contrapositives, etc.). There would also be some proofs on the exams that were adapted, sometimes verbatim, from the Classwork.
The grade breakdown for the course I finally settled on was:
• Preparation, 5% as measured by Guided Practice exercises. I will have a lot to say about Guided Practice in the next post — it’s what made the whole course work, IMO.
• Quizzes, 5%
• Classwork, 20%
• Proof Portfolio, 30%. This is actually a bit lower than what the university mandates — the writing requirement says that one-third of the semester grade should come from writing assignments that
involve drafts and revisions — but we also had writing assignments elsewhere.
• Midterm Exams, 2 at 10% each
• Final Exam, 20%.
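As a sanity check on the breakdown, the weights sum to 100%, and a semester grade is just the weighted average of category scores (the scores below are invented for illustration):

```python
weights = {"prep": 0.05, "quizzes": 0.05, "classwork": 0.20,
           "portfolio": 0.30, "midterms": 0.20, "final": 0.20}
scores = {"prep": 95, "quizzes": 88, "classwork": 91,
          "portfolio": 85, "midterms": 80, "final": 84}

assert abs(sum(weights.values()) - 1.0) < 1e-9   # the categories cover 100%
grade = sum(weights[k] * scores[k] for k in weights)
print(grade)   # about 85.65
```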
There are two items for which I owe you an explanation. One is the details about Guided Practice, which is how I got students to do the work prior to class. That’s in the next post. The other is the
details about Classwork. This is considerably messier, since the original way I had Classwork set up to work didn’t work out well, and we had to experiment with different configurations before we
finally found something we were all happy with. That’s coming later too.
In the meanwhile — any questions or comments on this?
Image: http://www.flickr.com/photos/ecstaticist/
This entry was posted in Flipped classroom, Inverted classroom, Math, Problem Solving, Teaching, Transition to proof and tagged Flipped classroom, Instructional design, Inverted classroom, mathematics, Transition-to-proof.
A Review Of The Rahn-Sturdivan Approach
We have now established that this article got the wrong answer because it relied too much on metallurgical speculation and didn't use available data to check itself. Here we review the major
alternative approach, the steps taken by Rahn and Sturdivan in their two papers of 2004, and show how they differ from the metallurgical approach.
(1) Note that the five fragments fell into two well-defined groups (by their antimony concentrations), both in the FBI's runs in 1964 and Dr. Guinn's runs in 1977 (which used different pieces of the fragments).
(2) Use the General Linear Model on run 4 of the FBI's data to determine that the chance of these groups arising randomly from five bullets or planted samples fell between one in ten thousand and one
in a hundred thousand. (In other words, the groups are not random.)
(3) Use Guinn's data from the 14 test bullets to establish that the antimony (from the four production runs grouped together) is distributed log-normally.
(4) Note that the antimony from the individual production runs shows indistinguishable distributions—the four lots were the same. (Among other things, this shows that the 14 bullets were effectively
random samples of the four production runs.)
(5) Use the combined log-normal distribution to calculate the probability that any one of the fragments in the groups might have arisen by chance (from a third bullet or a planted fragment), and get
2% to 3%. (In other words, that didn't happen, either.)
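To make step (5) concrete, here is the shape of such a calculation, using the log-normal CDF to find the chance that an unrelated sample lands inside a group's narrow antimony range. The parameters and interval below are invented for illustration, not Guinn's actual fit.

```python
import math

mu, sigma = math.log(600.0), 0.5   # hypothetical ppm-scale parameters

def lognorm_cdf(x):
    """CDF of a log-normal with log-mean mu and log-sd sigma."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# Probability a fresh sample falls in a hypothetical group range [560, 680] ppm:
p_in_group = lognorm_cdf(680.0) - lognorm_cdf(560.0)
print(0.0 < p_in_group < 1.0)   # True; with these parameters p is roughly 0.15
```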
(6) Use the resulting genuineness of the groups to derive other important conclusions (no conspiracy involving a second gunman that hit either of the men, the fragments not planted, Oswald's rifle
fired that day, location of rear had wound rendered irrelevant, location of back wound rendered irrelevant, best shooting scenario provided, and the NAA serving as Rosetta Stone for the assassination
by tying together the core physical evidence).
Note how this approach maximizes the use of available data and minimizes assumptions and theoretical constructs. It is as grounded as humanly possible. Given its strong results and implications, it
is no wonder that so many JFK conspiracists have come out of the woodwork to attack it. Yet so far, no one has managed to touch it, in spite of all the heavy rhetoric to the contrary.
topological modular form
On manifolds with rational string structure the Witten genus takes values in modular forms. On manifolds with actual string structure this refines further to a ring of "topological modular forms".
This ring is at the same time the ring of homotopy groups of an E-∞ ring spectrum, called tmf.
Relation to modular forms
Write $\overline{\mathcal{M}}$ for the Deligne-Mumford compactification of the moduli stack of elliptic curves regarded as a derived scheme, such that tmf is defined as the global sections of the
derived structure sheaf
$tmf = \mathcal{O}(\overline{\mathcal{M}}) \,.$
Write $\omega$ for the standard line bundle on $\overline{\mathcal{M}}$ such that the sections of $\omega^{\otimes k}$ are the ordinary modular forms of weight $k$ (as discussed there).
Then there is the descent spectral sequence
$H^s(\overline{\mathcal{M}}, \omega^{\otimes t}) \Rightarrow tmf_{2t-s}$
and since the ordinary modular forms embed on the left as
$H^0(\overline{\mathcal{M}}, \omega^{\otimes t}) \hookrightarrow H^s(\overline{\mathcal{M}}, \omega^{\otimes t})$
this induces an edge morphism
$tmf_{2 \bullet} \longrightarrow MF_\bullet$
from topological modular forms to ordinary modular forms.
The kernel and cokernel of this map are 2-torsion and 3-torsion and hence “away from 6” this map is an isomorphism.
See also the references at tmf.
An introductory exposition is in
A collection of resources is in
The original identification of topological modular forms as the coefficient ring of the tmf E-∞ ring and the refinement of the Witten genus to a morphism of E-∞ rings, hence to the string orientation
of tmf is due to
• Michael Hopkins, Topological modular forms, the Witten Genus, and the theorem of the cube, Proceedings of the International Congress of Mathematics, Zürich 1994 (pdf)
• Michael Hopkins, Algebraic topology and modular forms, Proceedings of the ICM, Beijing 2002, vol. 1, 283–309 (arXiv:math/0212397)
• Matthew Ando, Michael Hopkins, Charles Rezk, Multiplicative orientations of KO-theory and the spectrum of topological modular forms, 2010 (pdf)
see also remark 1.4 of
• Paul Goerss, Topological modular forms (after Hopkins, Miller and Lurie) (pdf).
and for more on the sigma-orientation see | {"url":"http://ncatlab.org/nlab/show/topological+modular+form","timestamp":"2014-04-16T19:01:22Z","content_type":null,"content_length":"34783","record_id":"<urn:uuid:4a781fef-259c-444c-b79d-bb9f2df437da>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tim Curtin’s incompetence with basic statistics
Tim Curtin’s incompetence with basic statistics is the stuff of legend. Curtin has now demonstrated incompetence at a fairly new journal called The Scientific World Journal. Consider his very first
“result” (emphasis mine):
I first regress the global mean temperature (GMT) anomalies against the global annual values of the main climate variable evaluated by the IPCC Hegerl et al. [17] and Forster et al. [28] based on
Myhre et al. [29], namely, the total radiative forcing of all the noncondensing greenhouse gases [RF]
Annual(Tmean) = a + b[RF] + u(x)
The results appear to confirm the findings of Hegerl et al. [17] with a fairly high R^2
and an excellent t-statistic (>2.0) and P-value (<0.01) but do not pass the Durbin-Watson test (>2.0) for spurious correlation (i.e., serial autocorrelation), see Table 1. **This result validates
the null hypothesis** of no statistically significant influence of radiative forcing by noncondensing GHGs on global mean temperatures.
Any first year stats student or competent peer reviewer should be able to tell you that a statistical test cannot prove the null hypothesis. But it's far worse than that, as Tamino explains:
The DW statistic for his first regression is d = 1.749. For his sample size with one regressor, the critical values at 95% confidence are dL = 1.363 and dU = 1.496. Since d is greater than dU, we
do not reject the null hypothesis of uncorrelated errors.
This test gives no evidence of autocorrelation for the residuals. But Tim Curtin concluded that it does. He further concluded that such a result means no statistically significant influence of
greenhouse gas climate forcing (other than water vapor) on global temperature. Even if his DW test result were correct (which it isn’t), that just doesn’t follow. …
In other words, the regression which Curtin said fails the DW test actually passes, while the regression which he said passes, actually fails.
And — the presence of autocorrelation doesn’t invalidate regression anyway.
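For anyone who wants to see how the test actually works, here is a minimal sketch in Python on synthetic data (not Curtin's series; the critical values are the tabulated pair Tamino quotes):

```python
# Durbin-Watson by hand on a toy regression. Synthetic data only --
# this is NOT Curtin's dataset, just an illustration of how the
# statistic is computed and read against tabulated critical values.
import numpy as np

rng = np.random.default_rng(0)
n = 130
forcing = np.linspace(0.0, 2.5, n)              # stand-in "radiative forcing"
temp = 0.4 * forcing + rng.normal(0.0, 0.1, n)  # stand-in "temperature"

# Ordinary least squares fit, then the DW statistic on the residuals:
#   d = sum((e_t - e_{t-1})^2) / sum(e_t^2)
slope, intercept = np.polyfit(forcing, temp, 1)
resid = temp - (intercept + slope * forcing)
d = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Approximate 5% critical values for one regressor (the pair quoted
# above; exact values depend on the sample size).
dL, dU = 1.363, 1.496
if d > dU:
    verdict = "fail to reject H0 of uncorrelated errors"
elif d < dL:
    verdict = "reject H0: evidence of positive autocorrelation"
else:
    verdict = "inconclusive"
print(f"d = {d:.3f}: {verdict}")
```

With uncorrelated noise, d comes out near 2, comfortably above dU. And "fail to reject" is the strongest statement the test ever makes; it never "validates" anything.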
I have to wonder what kind of “peer-reviewed” scientific journal would publish this. Who were the referees for this paper?
And do check out Curtin’s responses in comments where he insists that he didn’t get it wrong. Curtin’s understanding of statistics is so poor that he can’t recognize his own mistakes.
1. #1 Rattus Norvegicus May 31, 2012
I checked ISI recently on TSWJ and it wasn't listed. I would also add that the publisher looks pretty damn sketchy to me. But paying $1K to rebut TC? You've got to be kidding.
2. #2 Bernard J. May 31, 2012
But paying $1K to rebut TC? You’ve got to be kidding.
My thoughts exactly.
However, if they offer to print a rebuttal gratis then I’d consider it. TSWJ would be cutting their own throats though, because a refutation of Curtin would necessarily draw attention to the
extremely poor reviewing (cough, cough) that permitted the ‘paper’ to be published in the first place.
3. #3 JamesA May 31, 2012
Rattus: I know they look sketchy, but they are in there, just listed under the pretty stupid name "TheScientificWorldJOURNAL", with the abbreviation "THESCIENTIFICWORLDJO". You can find it if you search for the ISSN 1537-744X. Interestingly, the publisher's details are different in the journal citation reports to the master journal list, but they could have moved at some point.
4. #4 Hank Roberts May 31, 2012
> ISSN 1537-744X
Oh, that’s a lovely search result. The ScienceBloggers could have a field day dedicated to this particular journal.
Research Article
TheScientificWorldJOURNAL (2011) 11, 1667–1678 ISSN 1537-744X; doi:10.1100/2011/462736
Seasonal Variation of the Effect of Extremely Diluted Agitated Gibberellic Acid (10e-30) on Wheat Stalk Growth: A Multiresearcher Study
Peter Christian Endler, Wolfgang Matzer, Christian Reich, Thomas Reischl, Anna Maria Hartmann, Karin Thieves, Andrea Pfleger, Jürgen Hoföcker, Harald Lothaller, and Waltraud Scherer-Pongratz
Division Complementary Health Sciences, Interuniversity College for Health and Development Graz, Castle of Seggau, 8042 Graz, Austria
Received 21 March 2011; Revised 9 August 2011; Accepted 9 August 2011
Academic Editor: Joav Merrick
The influence of a homeopathic high dilution of gibberellic acid on wheat growth was studied at different seasons of the year. Seedlings were allowed to develop under standardized conditions for
7 days; plants were harvested and stalk lengths were measured. The data obtained confirm previous findings, that ultrahigh diluted potentized gibberellic acid affects stalk growth. Furthermore,
the outcome of the study suggests that experiments utilizing the bioassay presented should best be performed in autumn season. In winter and spring, respectively, no reliable effects were found.
KEYWORDS: homeopathy, ultra high dilution, bio-assay, gibberellic acid, wheat
Correspondence should be addressed to Christian Reich, college@inter-uni.net
Copyright © 2011 Peter Christian Endler et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited. Published by TheScientificWorldJOURNAL; http://www.tswj.com/
5. #5 Tim Curtin
May 31, 2012
Hank, I am at a loss, what’s wrong with this article?
BTW, I suggest that TWSJ would accept comments on my paper without demanding their $1000 fee for initial submissions. Hank, James & co, who have never so far as I know ever published any paper
anywhere, do try!
6. #6 chek May 31, 2012
Homeopathically grown wheat, Tim Curtin’s non-GHG GHG’s – perhaps the title of ‘journal’ confers altogether too much gravitas.
How about the quackerazzi?
7. #7 Robert Murphy May 31, 2012
“Hank, I am at a loss, what’s wrong with this article?”
It’s homeopathic nonsense. The Scientific World Journal has a propensity for publishing such twaddle:
8. #8 P. Lewis May 31, 2012
You need to ask? Why am I not surprised?
The “active” ingredient has been diluted 10^30 times! At around 10^24 or 10^26 times dilution with pure water (though how a homeopath can guarantee their water source is uncontaminated I know not
— and care less) there won’t be a single molecule of the active ingredient present.
I hear the sound of quacking!
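The arithmetic behind that point fits in a couple of lines. A sketch (generously assuming a full mole of active ingredient in the starting stock):

```python
# Expected molecules of the original substance surviving a homeopathic
# "10^-30" dilution, assuming (generously) one full mole in the stock.
AVOGADRO = 6.022e23            # molecules per mole
start_molecules = 1.0 * AVOGADRO
dilution_factor = 1e30

expected_remaining = start_molecules / dilution_factor
print(expected_remaining)      # ~6e-7 of a molecule

# So the odds that even ONE molecule of the "active" ingredient makes
# it into the final preparation are worse than a million to one.
```

Meanwhile the CO2 that supposedly has no effect is present at hundreds of parts per million: more than twenty orders of magnitude more concentrated.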
9. #9 Eli Rabett
May 31, 2012
FWIW, do a G&T
a. crowd source the rebuttal
b. put it up on arXiv. The net will find it.
10. #10 Lionel A May 31, 2012
Robert @ 2:31pm
It’s homeopathic nonsense.
Indeed. Now I wonder if absorbing too much 'gibberellic acid' in one's 'crispies' helps in the production of these impressive papers. The name of that hormone could not be more inspired.
11. #11 Bernard J. May 31, 2012
I’ve just had a thought…
Curtin, if we homeopathically dilute CO2 in, say, rainwater, then surely its amazing benefits would be magnified, a la the gibberellin paper that you seem to find so plausible.
Less is more. If CO2 promotes plant growth, we should be diluting it in the atmosphere, rather than concentrating it. That’s the inference from the homeopathy that you are apparently happy to
Of course, using this logic we could collect just twenty dollars in tax from the whole of the nation’s annual productivity, and with this single redback we could build all of the infrastructure
and fund all of the human services that the country needs. And get change. Homeopathy meets economics. Econopathy.
In many quarters that’d be the same old same old, really…
And I like Eli’s idea. Let Arvix smack the arse of scientific Stupid.
12. #12 JamesA May 31, 2012
TC: “Hank, James & co, who have never so far as I know ever published any paper anywhere, do try!”
For your info, I’ve contributed to 69 papers since 2003 (8 as first author), notching up an average of 47 cites per paper and an h-index of 29. I don’t care to count how many I’ve reviewed over
the years but I wouldn’t consider the fact I have two to do this week to be unusual. I’ve never submitted anything to any journal with an impact factor as low as 1.5 and I’m not about to start on
your account.
What does concern me more is that Thomson ISI are listing a journal that lets in such nonsense (Scopus as well, for that matter). Between TC’s paper and that one on homeopathy, I am seriously
thinking about bringing that to their attention.
13. #13 ianam May 31, 2012
Hank, I am at a loss, what’s wrong with this article?
Tim doesn’t know how much 10e-30 is.
Tim, you’re a moron, a crank, an ignoramus, and a buffoon. Congratulations on finding someone inept enough to publish your droolings.
14. #14 Hank Roberts May 31, 2012
Tim, that paper is about dilutions in excess of parts per billion.
Parts per billion.
15. #15 DarylD
William Lamb's Town Down Under
May 31, 2012
A great find Hank, a journal that has a trending tendency to publish horse hockey bunkum, instead of real science!
For even organic seeds for cultivation, are allowed the option to have organic based seed coatings to enhance growth and germination etc.
For TC’s education , debunking “Homeopathy” James Randi.
Youtube link : http://www.youtube.com/watch?v=BWE1tH93G9U&feature=related
Epic face palm indeed : http://www.youtube.com/watch?v=RA06Z5e1ZFc
16. #16 bill June 1, 2012
I am at a loss
Never were truer words spoken.
Seriously, you don't know what the problem with Homeopathy is? And concentrations of hundreds of parts per million (CO2) have no effect, while concentrations of single parts per billion (or less!), on the other hand…?
Here’s my favourite primer on the subject.
PS. David Bellamy is a patron of the British Homeopathic Association.
17. #17 Hank Roberts June 1, 2012
> that one on homeopathy
Oh, way way more than one.
More than dozens, in fact.
With extensively detailed statistical treatment in the papers.
You might ought to look into this stuff a bit more.
Whatever it is, there’s a vast amount of it being published.
18. #18 Hank Roberts
June 1, 2012
This seems typical of the publisher/journal
Full text: http://downloads.tswj.com/2010/378193.pdf
Volume 10 (2010), Pages 2330-2347, doi:10.1100/tsw.2010.224
Review Article
A Review of Three Simple Plant Models and Corresponding Statistical Tools for Basic Research in Homeopathy
Lucietta Betti, Grazia Trebbi, Michela Zurla,¹ Daniele Nani,² Maurizio Peruzzi,³ and Maurizio Brizzi⁴
1 Department of Agroenvironmental Sciences and Technologies, University of Bologna, Italy
2 Italian Society of Anthroposophical Medicine, Milan, Italy
3 Association for Sensitive Crystallization, Sondrio, Italy
4 Department of Statistical Sciences, University of Bologna, Italy
Received 23 July 2010; Revised 4 November 2010; Accepted 5 November 2010
Academic Editor: Joav Merrick. Copyright © 2010 Lucietta Betti et al.
ABSTRACT In this paper, we review three simple plant models … to study the effects of homeopathic treatments. We will also describe the set of statistical tools applied in the different models. …
… the most significant results were achieved with the 45th decimal potency, both for As2O3 (As 45x) and water (W 45x) ….
The statistical analysis was performed by using parametric and nonparametric tests, and Poisson distribution had an essential role when dealing with germination experiments.
Finally, we will describe some results related to the changes in variability, which seems to be one of the targets of homeopathic treatment effect.
[full details of their statistical methods are in the PDF.]
There ya go. That's the level TC needs to match to get published by TSWJ.
19. #19 bill June 2, 2012
I think we can assume the Homeopaths are making use of these ‘soft-approach’ journals to gain ‘publishing’ credibility.
Now, why does that sound familiar?
20. #20 Lionel A June 2, 2012
And another economist shoots his mouth off and gets it all wrong:
Yes, climate change is a problem and yes, we do have to do something: but in Britain, we’ve done it already
21. #21 DarylD
William Lamb's Town Down Under
June 3, 2012
Say, Lionel A. , an interesting ‘gish gallop’ of complete nonsense “Tim Worstall” writes, just like Tim Curtin, like peas in a pod.
Such gems or howlers as “the vilenesses that are Greenpeace, FoE and the rest of the forward-to-the-Middle-Ages crowd over there” and the list just continues from the first paragraph until the
Epic face palm.
22. #22 adelady
city of wine and roses
June 3, 2012
Yup! Sometimes I’m half tempted to feel a bit guilty about the piling on when deniers are making fools of themselves.
And then Tim Worstall turns up and proves, yet again, that they really are as blinkered, silly and nasty as they always were.
23. #23 bill June 3, 2012
Amazingly, over on the hysteria-about-Flipper side, Nasa’s James Hansen manages to get the point: a tax on fossil fuels as they come out of the ground, rebated to households.
What an egregious, pontifical wally. Classic ‘I looked into this for 2 days last week, and, based on my rigid political outlook, here’s the reality of the situation.’ He’s trying to place himself
between Delingpole and ‘the hippies’ but, let’s face it, he’s Delingpole.
Well, he is from the Adam Smith Institute. Like Marx, I doubt that he really deserves many of his followers…
24. #24 bill June 3, 2012
PS I’ll also point out that it’s the ‘Flipper side’ that’s going to favour the very tax plan he’s supposedly supporting, whereas the Delingpoles will be the ones ‘howling at the Moon’ because
this is precisely the kind of ‘violating the sacred principles of the economy (i.e. pure selfishness)’ measure they’ve beaten themselves into an onanistic, apocalyptic frenzy over. This coward
simply hasn’t the guts to call out the real idiots here…
25. #25 Lionel A June 3, 2012
Worstofall this ignoramus, as Bill points out, writes, ‘Nasa’s James Hansen‘ not having appreciated that NASA is an acronym and not a name and so he should have written, ‘NASA’s Jim Hansen‘, what
a tool.
I’ll bet that shortly some pedant will try to point out that Nasa’s OK in this instance.
Then Worstofall writes this bunkum:
The science tells us that there is uncertainty; uncertainty is an economic problem to be solved through economic methods.
Strewth, 'uncertainty is an economic problem'. FFS, this man is a tool of the first magnitude. But then over the years the Telegraph has seen plenty of those as columnists.
Do many economists go to a brain transplant centre that swaps out for monkey brains I wonder. No, that would leave them with more intelligence than shown by Worstofall, Axolotl I would go for –
an organism that never grew up.
26. #26 bill June 3, 2012
Economics – the political struggle overseen by accountants. And one of the few ‘sciences’ where being consistently wrong will most-likely make very little difference to your career, and may even
enhance it if you’re wrong in the right (Right) way. Alan Greenspan, for instance.
Hang on, though; are we seeing a bit of a trend here? Peter Sinclair describes Arthur Laffer pulling the same routine, complete with the tired canards.
But they’re apparently accepting that the reality of the situation is that Denial is an untenable position, and so they’re suddenly throwing out the solutions with minimal economic impact.
The very solutions they were telling us up to only last week would cause civilization to grind to a halt, much wailing and gnashing of teeth, and, all-in-all, oceans of tears before bedtime.
You know, all you Deniers reading this – that’s a Carbon Tax. Because that’s actually the most rational and cost-effective strategy – and you’ve just bleated and complained about it, and put off
its implementation for years, and you’re still making hysterical pledges of blood to wind it back and pronouncing your ridiculous Jeremiads.
And we’ve been telling you for ages it was actually your policy.
But he sees a fundamentally backward system in the United States that imposes taxes on things people want more of: income and jobs. At the same time, the U.S. allows something we want less of
— carbon dioxide pollution — to be emitted without penalty… Congress should offset a simple carbon tax with a reduction in income or payroll taxes.
Well, whaddyaknow – the Australian Government’s very strategy!
Make the Polluters Pay, people! If they want to pay less they can pollute less – that’s the point.
If you ignore the hare-brained disinformation, and, particularly in Worstall’s case, the insults (this wally’s condescending to Jim Hansen!?!) this is actually encouraging. And don’t forget, the
reactionary shibboleths are a way of saying ‘look, I’m one of you, it’s safe to listen to me.’
Heartland’s tanked. BEST and ‘we never said it wasn’t warming’. The excommunication of the SkyDragons. The hard-core nutters are losing, folks.
27. #27 Hank Roberts June 4, 2012
> uncertainty is an economic problem to be
> solved through economic methods.
For sale: fresh-water ice, in form of large ice shelf, variable thickness*, approximate area the size of France, available in convenient coastal West Antarctic location. Suitable for hauling to
any desert country. You provide tow vehicles.
*decreasing, price will rise as amount of ice decreases. When there’s almost none left, it will be immensely valuable. Trust us, we’re economists.
28. #28 Tim Curtin
June 4, 2012
Hi fans
Apologies for my silence here over the last few days, Roland Gaross is more compelling than you lot, not least because of your belief that when Rafa serves at 200 kmh and there’s nobody who
actually lays a racket on it, nevertheless the ball comes back to him at 300 kmh.
For that is the message of the infamous Kiehl-Trenberth (K-T) cartoon (1997) that is the total intellectual underpinning of both TAR and AR4.
My papers unjustly lampooned here provide the econometric support for Claes Johnson’s compelling physics showing that CO2-based AGW is garbage.
I deeply regret not knowing his work until very recently (but do now thanks to one of you-lot citing Slaying The Sky Dragon).
What CJ does is to show that K-T’s “back radiation” at 324 W/sq.m despite only 168 of incoming solar radiation of 342 W/sq.mt. reaching the surface is what causes AGW. What does the back
radiation? K-T do not say, it can hardly be the sun, given the laws of thermodynamics (am I right that the sun is a little bit hotter than the earth?). Please tell, perhaps it’s the ballboys.
What Claes has shown is that there is NO basis in physics for the K-T and IPCC claim that back radiation is what causes global warming, and my econometrics provides massive empirical support to
Grant Foster of Closed Mind in rightly pointing out that I had just once in dozens of regressions wrongly interpreted the Durbin-Watson statistic (albeit not without authority from none other than Zwiers of AR4 WG4 Chap. 9).
Characteristically, Foster did not notice that my Table 1 in my TSWJ paper (2012) reported a regression of Temperature against increases in ALL so-called greenhouse gases. I was wrong to do that,
because the rates of increase in CH4 and CFC etc all have different growth profiles from that of atmospheric CO2, and that is why the D-W test for auto-regression was inapplicable, unbeknown to
both Lambert and Foster. I admit my errors, but they will never admit it. The rest of my TWSJ paper reported only the results of regressions using the first-differencing method recommended by
Nobel-winner Granger (cited in my paper) and many others.
In his second attack on me, Foster, having suppressed my above comments, turned his attention to my ACE2011 (peer reviewed) paper, available also from my website (www.timcurtin.com).
Again he disallowed my comment that in fact my Table 1 in my ACE2011 paper regressed only temperature change against the rising atmospheric concentration of CO2, with a Durbin-Watson test that
unequivocally revealed autocorrelation using Foster’s own Tables. Foster’s impeccable accuracy meant that he did not notice the difference between correlations where the independent variable was
ALL GHGs, which in fact have different time series and stationarity, and atmospheric CO2 on its own.
Interestingly, none of Foster, Lambert and their commentators has challenged any of my differenced results in either of my ACE2011 and my TSWJ2012 papers. Those results provide strong support for
Claes Johnson’s contention that
“Global climate can be described as a thermodynamic system with gravitation subject to radiative forcing by blackbody radiation… Understanding climate thus requires understanding blackbody
radiation. "…'backradiation' is unphysical because it is unstable and serves no role, and thus should be removed from climate science. Since climate alarmism feeds on a "greenhouse effect"
based on “backradiation”, removing backradiation removes the main energy source of climate alarmism.
Hank: Thanks for your comments, the only intelligent ones here! But I note that at Real Climate you seem to be unaware that solar-induced evaporation is the main determinant of sea-level changes.
I can cite the evidence, email me at tcurtin bigblue.net.au,
29. #29 Lotharsson June 4, 2012
Characteristically, Foster did not notice … that is why the D-W test for auto-regression was inapplicable, unbeknown to both Lambert and Foster.
Shorter TC:
You guys are wrong! You didn’t point out that I wasn’t checking my rear view mirrors with sufficient rigour due to your unseemly haste to point out that I hadn’t removed the handbrake!
30. #30 Lotharsson June 4, 2012
I admit my errors, …
You refuse to admit high-school level errors if they impact your ideological claims – those errors must be clung to at all costs. Do you still think CO2 will make seawater potable, amongst
several other doozies?
31. #31 Lotharsson June 4, 2012
I deeply regret not knowing his work until very recently (but do now thanks to one of you-lot citing Slaying The Sky Dragon).
How amusement! You’re unsurprisingly signing up for that bunch of arrant error. You should really go the whole hog – why don’t you spend some time with Girma and Nasif Nahle and the Time Cube guy
(and heck, why not throw in the Xenu clan) to create the unholy grand unified theory of why anything-but-climate-science explains climate observations? Nobel Prizes await, I assure you!
What does the back radiation? K-T do not say, it can hardly be the sun,…
The very diagram you disparage pretty much shows you what does the back radiation, and clearly shows that it is not the sun. Heck, if you operated at a sufficient level of intellect you could
look beyond the pictures and actually read the damn words (say, the text describing the diagram in the IPCC report that you claim to be demolishing, if the actual paper is beyond you).
But at least you are providing a masterclass in clown-trolling so you haven’t entirely wasted your typing effort!
Understanding climate thus requires understanding blackbody radiation….
Well Doh-de-doh-de-doh-doh-frackin’-doh!
What on earth do you think you encounter in the first chapter or two of a basic climate science textbook?
Clearly neither you nor your quotee have ever actually comprehended one (but in your case that was self-evident already), or these clueless statements wouldn’t appear in public. (And
understanding climate takes a lot more than simply understanding black body radiation or doing simple regressions, which apparently neither you nor your quotee have figured out yet.)
But then this, which you endorse, is even more priceless:
“..`backradiation’’ is unphysical because it is unstable and serves no role, …
It’s not even wrong due to reliance on some particularly stupid category errors (never mind what appear to be self-contradictions, unbeknownst to the author and to you). And this is endorsed by
the serial promoter of fallacies who touts actually unphysical 5th (and now 6th) order polynomials when it suits him!
But hey, willingness to engage in that kind of post-modern scientificking is what’s needed if you’re going for a unified Curtin-Girma-Nahle-TimeCube theory.
(You couldn’t make this stuff up – but TC can!)
32. #32 Robert Murphy June 4, 2012
Tim Curtin says:
“I admit my errors…”
OK, which statement of yours below is the erroneous one?
1) “That is why the CO2 and H2O are not GHGs”
2) “The second key aspect of water vapor is that it is a potent greenhouse gas”
And while you’re at it, when you claimed that the IPCC “expunged” the fact water vapor is a key GHG, but you were shown that they actually did consider water vapor not only a potent GHG but the
GHG responsible for 60% of the GHE, where did you admit your error? Nowhere.
When you later said Many thanks for your link to Spencer who like me shows how “because greenhouse gases allow the atmosphere to cool to outer space, adding more GHGs can’t cause warming.” and it
was shown that you had dishonestly edited his statement to make it look completely opposite of what he actually said (he was arguing that adding GHG’s will warm the surface), where did you admit
your error and your deliberate distortion? Nowhere.
Admitting your errors would require a level of intellectual honesty you are constitutionally incapable of. If you were still working in academia, your actions would be grounds for dismissal.
33. #33 P. Lewis June 4, 2012
Funny, isn’t it, that independent groups’ pyrgeometer observations match closely with modelling results of downwelling longwave radiation over many, many years in many, many publications?
Well no, it isn’t really! Only the terminally idiotic could persist in thinking all those observations and modelling results are wrong/some cog in a mega-conspiracy.
34. #34 Eli Rabett
June 4, 2012
Ah, but you have to understand that Claes believes in intelligent photons carrying thermometers.
It is arrant nonsense.
35. #35 ianam June 4, 2012
Apologies for my silence here over the last few days, Roland Gaross is more compelling than you lot, not least because of your belief that when Rafa serves at 200 kmh and there’s nobody who
actually lays a racket on it, nevertheless the ball comes back to him at 300 kmh.
For that is the message of the infamous Kiehl-Trenberth (K-T) cartoon (1997) that is the total intellectual underpinning of both TAR and AR4.
What does the back radiation? K-T do not say
I guess the words “Greenhouse Gases” just above the back radiation arrow are whited out on your copy.
I admit my errors, but they will never admit it
That’s like a stopped clock admitting that it was wrong a couple of times but blaming the rest of its errors on everyone else.
36. #36 ianam June 4, 2012
"…'backradiation' is unphysical because it is unstable and serves no role, and thus should be removed from climate science"
Like how tennis balls coming back into the server’s court because of faulting into the net are “unphysical” and should be removed from the descriptions of tennis games, alleviating the need for
bad tennis players to be alarmed.
37. #37 bill June 4, 2012
‘Hi fans’. Some people sure do love the attention, don’t they?
38. #38 Lotharsson June 5, 2012
"…'backradiation' is unphysical because ~~it is unstable and serves no role~~ I cannot accept its implications…
Fixed it.
39. #39 ianam June 5, 2012
I cannot accept its implications…
It “helps” that he is genuinely too stupid to understand simple concepts, like that the magnitude of energy flows within a system is not limited by the magnitude of the energy flowing into the
system. It’s tragically funny that Curtin writes “What does the back radiation? It can hardly be the sun …, perhaps it’s the ballboys” … that he is so stupid and inept that he can’t even model
his own simple analogy. During a tennis “game”, a tennis player may serve five balls, yet a ball can hit the player’s racket far more than five times during that game … what does that? Where oh
where are all those extra balls coming from?
40. #40 Bernard J. June 5, 2012
Thanks indeed Lotharsson (June 4, 2:56 pm).
I now understand from whence Curtin’s lunacy originates – he is a Climate Cubist.
This explains his and Spencer's preoccupation with third order polynomials…
I’m sure that they have a cuby-house in a tree where they meet.
From this revelation we can infer that the IPCC is in fact the Intergovernmental Panel against Climate Cubism.
It demonstrates how three* Climate Cubists can hold to notions that are mutually orthogonal, but are still each correct.
The possibilities for explanation-by-cubism are manifold…
[*It also permits the coining of a collective noun - a joke of climate cubists.]
41. #41 adelady
city of wine and roses
June 5, 2012
ianam. I think you might have ‘hit’ on a useful analogy there – for some audiences. Worth keeping in the back of the mind for suitable occasions.
42. #42 Lotharsson June 5, 2012
I like it:
During ~~a tennis "game"~~ a suitable time period in a climate system, a ~~tennis player~~ planetary surface may ~~serve five balls~~ absorb on average five Watts of shortwave radiation, yet ~~a ball can hit the player's racket far more than five times~~ emit far more than five Watts of longwave radiation during that ~~game~~ period … what does that?
And one might go on:
Ever noticed that the ~~better the tennis player you're facing~~ higher the concentration of greenhouse gases in the atmosphere, the more likely it is that ~~a ball you serve~~ a longwave photon the surface emits comes back to ~~your side of the court~~ the surface?
If only TC would begin to apply basic accounting principles that one presumes were even taught to economists back in the day (including not ignoring certain inflows or outflows that are deemed inconvenient), then he might begin to eliminate some of the more laughable errors he makes.
43. #43 ianam June 5, 2012
I think you might have ‘hit’ on a useful analogy there
The funny/ironic/tragic thing is that it’s Curtin’s analogy. The first time I glanced at his comment, “perhaps it’s the ballboys” whizzed right past me and out through the window.
44. #44 Bernard J. June 5, 2012
This* is why climate cubist Tim Curtin struggles with his tennis metaphors, and it also explains why he has no effective grasp of statistics.
[*Given the very existence of Curtin's illogical approach to science, it comes as no surprise that there is in the world someone who just as bizarrely thought to patent the above idea.]
45. #45 ianam June 5, 2012
the better the tennis player you’re facing, the more likely it is that a ball you serve comes back to your side of the court
This is even true for a robot that is able to intercept tennis balls and send them in random directions. There’s an atmospheric analogy, but it seems to be beyond TC’s grasp.
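That robot can even be simulated. A minimal Monte Carlo sketch (hypothetical interception probabilities, a single intercepting layer): a dumb layer that knocks intercepted shots in a random direction still sends a share back to the source, and the share grows with the interception rate.

```python
# Toy Monte Carlo of the "robot returner": the surface fires shots
# upward; a dumb layer intercepts each with probability p and knocks
# it in a random direction (up or down, 50/50). No intelligence and
# no extra energy source is needed for some to come back.
import random

random.seed(1)

def fraction_returned(p_intercept, shots=100_000):
    """Fraction of upward shots that end up back at the source."""
    returned = 0
    for _ in range(shots):
        intercepted = random.random() < p_intercept
        if intercepted and random.random() < 0.5:
            returned += 1          # knocked straight back down
    return returned / shots

for p in (0.2, 0.5, 0.8):
    print(p, round(fraction_returned(p), 3))   # roughly p/2 each time
```

The atmospheric analogue: raise the layer's interception probability (more greenhouse gas) and more of what the surface emits comes back to it.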
46. #46 ianam June 5, 2012
Roland Gaross is more compelling than you lot, not least because of your belief that when Rafa serves at 200 kmh and there’s nobody who actually lays a racket on it, nevertheless the ball
comes back to him at 300 kmh
It’s strange how TC gets so close … he mentions two tennis players, yet somehow one of them becomes “nobody”, just as “Greenhouse Gases” is mysteriously missing from his copy of the K-T diagram.
This odd behavior suggests that there may be an invisible force acting on TC …
47. #47 Chris O'Neill June 5, 2012
This explains his and Spencer’s preoccupation with third order polynomials.
48. #48 Tim Curtin June 5, 2012
My dear fans, your comments have become ever more bizarre.
The Kiehl-Trenberth cartoon (1997 and TAR + AR4) shows incoming solar radiation of 341 W/sq.m of which apparently only 166 are absorbed by the surface. Yet the surface magically is able to
radiate out no less than 396 W/sq.m, more than double it received. Sounds like Gillard-Swan budgeting to me.
It gets better, of the 396 radiated out by the surface from the only 166 W/sq.m absorbed by the surface, as much as 323 W/sq.m is radiated back to the surface, perhaps by clouds (not known for
anything much other than albedo) or by the ballboys and girls at the French Open.
My son sent me today Daniel Kahneman’s Thinking, Fast and Slow; he won a Nobel in 2002. He explains the psychology of why you lot including Foster and Lambert are incapable of understanding
Re polynomials, it is undeniable that in climate time series the best fits are always achieved by higher polynomials, and they are usually also the best stat significant.
My experience so far is that polynomials anticipate ENSO switches from El Nino to La Nino better than any other statistical procedure. Watch this space – and prove me wrong!
49. #49 bill June 5, 2012
Ah, Tim C dwells in a Happy Land of Magical Ponies…
50. #50 Lotharsson June 5, 2012
Since TC is so determinedly clueless, I’ve embedded a clue:
Yet the surface magically is able to radiate out no less than 396 W/sq.m, more than double it received directly from the sun, and entirely consistent with what it received from all sources.
(This won’t help – as shown by the rest of his comment, which determinedly misinterprets the situation despite pointed comments correcting his misconceptions.)
Don’t let him anywhere near your wallet – Magical Pony Accounting procedures are a massive risk to your personal wealth.
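To spell out the accounting for anyone following along, here is a back-of-the-envelope check using the rounded surface-budget figures from the Trenberth, Fasullo and Kiehl (2009) diagram (a sketch only; the published numbers carry uncertainty ranges):

```python
# Surface energy budget, rounded global-mean figures from the
# Trenberth, Fasullo & Kiehl (2009) diagram (all in W/m^2).
absorbed_solar     = 161.0  # shortwave absorbed by the surface
back_radiation     = 333.0  # downward longwave from greenhouse gases and clouds
surface_radiation  = 396.0  # upward longwave emitted by the surface
thermals           = 17.0   # sensible heat carried up by convection
evapotranspiration = 80.0   # latent heat carried up by evaporation

energy_in  = absorbed_solar + back_radiation        # everything the surface receives
energy_out = surface_radiation + thermals + evapotranspiration

# The surface sheds almost exactly what it receives; the small residual
# is the net absorbed flux (~1 W/m^2 in these rounded numbers).
print(energy_in, energy_out, round(energy_in - energy_out, 1))  # 494.0 493.0 1.0
```

Booked properly, the 396 W/sq.m the surface emits is fully paid for by sunlight plus back radiation. No Magical Pony Accounting required.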
…it is undeniable that in climate time series the best fits are always achieved by higher polynomials…
Well, doh! The same applies to practically any time series because you’re giving yourself more degrees of freedom to approximate the underlying series (noise and all). This is well understood
basic mathematics – and you’re wimping out at 5th and 6th order – go much much higher and you’ll generally be able to find even better fits.
Doesn’t imply they are physically meaningful or even provide any insight into the physical system though, no matter how much you protest.
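The point is easy to demonstrate in a few lines of code: fit polynomials of rising order to noise around a linear trend and watch the fit “improve” (an illustrative sketch; the data here are synthetic, not any climate series):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 40)
y = 2.0 * x + rng.normal(scale=1.0, size=x.size)  # noisy linear "trend"

def r_squared(degree):
    # Fit a polynomial of the given degree and report its R^2.
    fitted = np.polyval(np.polyfit(x, y, degree), x)
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = [r_squared(d) for d in range(1, 10)]
# Each extra coefficient is an extra free parameter, so R^2 can only
# creep upward with the order of the polynomial, on ANY data set.
print([round(v, 3) for v in r2])
```

The “improvement” is an artefact of extra free parameters, not evidence of physical meaning.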
My experience so far is that polynomials anticipate ENSO switches from El Nino to La Nino better than any other statistical procedure.
That may or may not be so – but it has practically nothing to do with AGW or warming trends.
51. #51 Bernard J. June 5, 2012
Re polynomials, it is undeniable that in climate time series the best fits are always achieved by higher polynomials, and they are usually also the best stat significant.
Oh, for Pete’s sake Curtin, are you really this stupid?!
The reason that higher order polynomials fit better is that anything can be fitted when using additional parameters.
And you need to understand the difference between statistical significance and physical significance – and when to use or not use statistical significance.
My experience so far is that polynomials anticipate ENSO switches from El Nino to La Nino better than any other statistical procedure.
Pick your polynomial of most-preferred order, and predict the temporal limits of the next five El Niño and La Niña events.
Come on, your experience – self-vaunted as it is – should be able to slap a prediction down on the table without skipping a heart-beat.
Climate cubism, indeed…
52. #52 Bernard J. June 5, 2012
Sorry for doubling up on your comments Lotharsson. For some strange reason I was repeatedly told to stop posting so soon after previously posting, even though I hadn’t posted for hours. It took
about two hours just to get past that post-guard…
53. #53 Lotharsson June 5, 2012
Sorry for doubling up on your comments Lotharsson.
No worries. Pile on as you see fit…
54. #54 Lionel A June 5, 2012
TC 04062012 1:22 pm ATTLGT writes.
What CJ does is to show that K-T’s “back radiation” at 324 W/sq.m despite only 168 of incoming solar radiation of 342 W/sq.mt. reaching the surface is what causes AGW.
Seriously Curtin, so you cannot understand a ‘cartoon’. Little wonder so many concepts escape your stunning intelligence.
What is wrong with your attribution WRT that 168? Go on look carefully, maybe you will need to ‘phone a friend. Or you could just wake up Bart and ask him.
Clearly you do have problems with comprehension.
For those wondering why Curtin is so coy about where to find ‘the cartoon’ in AR4: IPCC Fourth Assessment Report: Climate Change 2007: Contribution of Working Group I [The Physical Science Basis] to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, 2007, in Chapter 1 on page 96.
See here for the 11 Chapters and other sections of that part of the AR4
55. #55 DarylD
William Lamb's Town Down Under
June 5, 2012
“Everyone lies: it’s just a question of how, when and why. From the relationship saving “yes, you do look thin in those pants” to the improbable “your table will be ready in 5 minutes”,
manipulating the truth is part of the human condition. Accept it now.”
Take Tim: every time he puts pen to paper, he is telling a gish gallop of lies, in order to hide his complete lack of understanding of the real world of science and mathematics, from first principles.
He corrupted one artificial branch of mathematics from its true purpose in life and then completely ignored the reality of the properties of matter, to create his artificial world of pure nonsense
that defies reality.
Sadly, Tim C., in his deluded state of mind, writes complete “BS” about absolutely nothing. He thinks of himself as a major player in the world of denialati ersatz skeptics. Back in the real world, he is and will always remain one very minor, mediocre player. For he is one that lacks a complete understanding, from first principles, of that which he talks about.
If Tim wishes to overcome his fear of the inconvenient truth and his complete ignorance of the complex study of mathematics, logic, the properties of matter, all the laws of physics and global warming, he should drop his gish gallop back into low, low first gear and go back to basic first principles, in both science and mathematics, as is taught in Year 7 in all High Schools or Secondary Colleges across Australia.
56. #56 ianam June 5, 2012
your comments have become ever more bizarre
They might seem less bizarre if you were to actually read them for comprehension.
The Kiehl-Trenberth cartoon (1997 and TAR + AR4) shows incoming solar radiation of 341 W/sq.m of which apparently only 166 are absorbed by the surface. Yet the surface magically is able to
radiate out no less than 396 W/sq.m
Magic like Rafa’s racket hitting more balls than handed to him by the ballboys. Where do all those other balls come from?
more than double it received
You left off “from the sun”.
as much as 323 W/sq.m is radiated back to the surface, perhaps by clouds
Or, you know, “Greenhouse Gases”, like it says just above that back radiation arrow.
or by the ballboys and girls at the French Open.
Yeah, those are the only possibilities … one tennis player and a bunch of ballboys and girls; no other source of balls coming into one’s court.
57. #57 Lotharsson June 6, 2012
Magic like Rafa’s racket hitting more balls than handed to him by the ballboys. Where do all those other balls come from?
That’s an even better formulation.
58. #58 bill June 6, 2012
Top tip – if you get the ‘you’re posting too often – slow down’ message, clear your History (Shift+Ctrl+Del in Firefox) and do a forced/shift refresh of the page.
59. #59 Lotharsson June 6, 2012
I’ve also seen “You’re posting too often” on the first comment I’ve made in a day. I think they’re probably triggering on IP addresses and not handling situations where posters are behind proxies
very well. Or something.
60. #60 Lionel A June 6, 2012
Some more help for poor confused Tim C
There is a more recent paper containing an updated version of the schematic (add that one to your vocabulary, Tim C; continued use of ‘cartoon’ will demean you) diagram under discussion, which can be accessed here: EARTH’S GLOBAL ENERGY BUDGET.
Note in particular this statement
For an equilibrium climate, OLR necessarily balances the incoming ASR,
and consider where blackbody temperatures fit in there.
Now Tim C should be able to catch up.
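For the record, that balance condition is where the familiar blackbody emission temperature comes from; a short sketch (standard textbook values assumed for the solar constant and albedo):

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected straight back

# Absorbed solar radiation, averaged over the whole sphere (the factor
# 4 is the ratio of sphere surface area to disc cross-section).
asr = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0

# Equilibrium: outgoing longwave radiation (OLR) must equal ASR, and
# the blackbody temperature emitting exactly that flux is:
t_eff = (asr / SIGMA) ** 0.25
print(round(asr, 1), round(t_eff, 1))   # ~238 W/m^2 and ~255 K
```

That ~255 K is the emission temperature of the planet as seen from space; the warmer surface underneath is the greenhouse effect at work.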
61. #61 MikeH June 7, 2012
Tim Curtin and the Skydragon Slayers – crank magnetism at work.
1. Cranks overestimate their own knowledge and ability, and underestimate that of acknowledged experts.
2. Cranks insist that their alleged discoveries are urgently important.
3. Cranks rarely if ever acknowledge any error, no matter how trivial.
4. Cranks love to talk about their own beliefs, often in inappropriate social situations, but they tend to be bad listeners, and often appear to be uninterested in anyone else’s experience or opinions.
62. #62 Michael Brown June 7, 2012
Tim Curtin’s article appears in “The Scientific World Journal” (tswj.com) whose website is registered to “Hindawi Publishing Corporation” which is on Beall’s watchlist of predatory open-access
Hindawi spams thousands with invitations to write articles for their journals. I still have records of receiving such invitations on 3/2/2012, 15/12/2011, 28/9/2011, 5/9/2011, 8/7/2011, 19/4/2011, 4/4/2011, 10/2/2011 and 11/1/2011 (and many other invitations probably ended up in the trash).
The entry on Hindawi from Beall’s list follows:
Based in Cairo, Egypt, this publisher is now on its own after its collaboration with the publisher Sage ended in 2011. This publisher has way too many journals than can be properly handled by one
publisher, I think, yet supporters like ITHAKA boast that the prevailing low wages in Egypt, as well as the country’s large college-educated, underemployed workforce, allow the company to hire
sufficient staff to get the job done. Still, this publisher continues to release new fleet startups of journals, each group having titles with phrases in common: Advances in … (31 titles) and
Case Reports in … (32 titles). It appears that Hindawi wants to strategically dominate the open-access market by having the largest open-access journal portfolio.
63. #63 P. Lewis June 7, 2012
Were Trenberth and various co-workers’ values for surface upward/downward longwave energy fluxes reasonable values in their various publications?
64. #64 P. Lewis June 7, 2012
Oops! No bl**dy preview!
Let’s try again:
65. #65 Tim Curtin June 7, 2012
Lionel: Thanks for the link to the still absurd 2009 update to the KT1997 cartoon by Trenberth, Fasullo, and Kiehl (TFK).
It remains a cartoon because while the LHS does balance incoming solar of 341 W/sq.m with surface absorption of 161 + 23 reflected by the surface, 79 reflected by clouds and atmosphere, plus 78
absorbed by the Atmosphere, the RHS produces 396 W/sq.m. surface radiation of which 356 goes nowhere, and 353 is “back (sic) radiation” “absorbed by surface”. What a load of bs!
How can 2.63 W/sq.m. from GHG (AR4, WG1, page 141) produce more “radiative forcing” than the TFK 341 from their solar 161 W/sq.m. actually reaching the surface?
TFK remind me of the lawyer I occasionally played golf with in PNG who manifestly had wandered all over the course while playing the 5th hole and proudly claimed he had parred it!
Claes Johnson has exposed the back radiation in the RHS as garbage, without refutation so far, and it is such patent rubbish that Ray Pierrehumbert could not bring himself even to mention the TFK
“back radiation” in his “Infrared radiation and planetary temperature” (Physics Today 2011).
When the climate science is settled, how come Pierrehumbert (associate of Hansen & Schmidt) ignores Trenberth et al. ?
Could it be because he knows KT et al believe in pixies?
But he is no better himself, with his new belief that RF from CO2 (et al. at max 2.63 W/sq.m. as of 2005) raises T enough to produce more atmospheric water vapour than evaporation induced by the
amount of energy per second averaged over the entire planet per sq. metre per second which is 342 W/sq.metre (AR4, WG1, p.96)?
P. Lewis:
Another believer in pixies! Whence those “downward fluxes”?
66. #66 P. Lewis June 7, 2012
P. Lewis:
Another believer in pixies! Whence those “downward fluxes”?
Pyrgeometers, you absolute fcuking retard!
67. #67 P. Lewis June 7, 2012
Sorry folks. That was what I thought, not what I should have written.
What I meant to write was:
Pyrgeometers, Tim(ewaster) Curtin. Pyrgeometers!
C’mon ScienceBlogs, sort out the “kill” option.
68. #68 Lotharsson June 7, 2012
Curtin, you really are determined to prove that your intellectual abilities are sorely lacking with regard to basic climate science, aren’t you? All of your bluster merely reconfirms your own incompetence.
Claes Johnson has exposed the back radiation in the RHS as garbage, without refutation so far,…
Claes Johnson’s claims were pre-refuted by evidence he (and you) pretend doesn’t exist by dubbing it “unphysical” on ludicrous grounds – when we actually measure the frackin’ back radiation with
frackin’ scientific instruments, as has been pointed out to you already. You’re the one who asserts pixies here by dismissing the counter-evidence out of hand!
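For the curious, a pyrgeometer retrieval boils down to a one-line formula: the thermopile reads the net flux, and the instrument’s own blackbody emission is added back. A simplified sketch (dome-correction terms omitted; the instrument values below are made up but typical):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def downward_longwave(thermopile_uV, sensitivity_uV_per_Wm2, body_temp_K):
    # Simplified pyrgeometer equation: net flux seen by the thermopile
    # plus the instrument body's own blackbody emission.
    net = thermopile_uV / sensitivity_uV_per_Wm2
    return net + SIGMA * body_temp_K ** 4

# A negative thermopile signal on an instrument at 288 K still implies
# a large downward longwave flux, because the body emission dominates.
print(round(downward_longwave(-600.0, 10.0, 288.0), 1))  # ~330 W/m^2
```

Which is why measured back radiation of a few hundred W/sq.m is entirely unremarkable to anyone who has ever operated one of these instruments.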
Ray Pierrehumbert could not bring himself even to mention the TFK “back radiation” in his “Infrared radiation and planetary temperature” (Physics Today 2011).
Er, dude, what is this I see before me when I load the PDF? What are all those downward directed squiggly arrows in Figure 1, especially the squiggly arrow from layer 1 towards the ground? Would
this be … radiation, emitted from greenhouse gas molecules, and directed towards the ground? And if so, would it not be accurate to call it “back radiation” as the term is commonly defined, even
if Pierrehumbert doesn’t feel the need to explicitly use the term?
I mean, did you not read the reference you cited and hope (against all previous evidence) that no-one else here would bother? Or did you simply fail to understand the paper? Or are you trying to
increase the amount of clownery in your clown-trolling?
If you were operating with any basic intellectual competence in the field, you would presumably stop and ask yourself why the satellite observations cited in that paper and elsewhere are entirely
consistent with KTR, if Curtin & Johnson are correct that KTR is massively erroneous – especially since the radiative transfer equations used for the theoretical predictions from fairly
fundamental principles generate the phenomenon of “back radiation”, as illustrated by Figure 1. How the heck do you think massively wrong equations came up with the right answers? Even worse –
how the heck do you think GHG molecules direct ALL of their emitted radiation away from the planetary surface? Do they have some sort of gravitational field sensitive orientation mechanism
coupled to a cute little system of lenses, or are they “intelligently emitting”, or what? There are so many gaping holes in your claims that it is staggering that you can’t come up with even one
before you hit “Submit” – especially after all the free coaching you get! At what point will you begin to consider that you may not actually be competent in this particular field, and refrain
from accusations until you actually achieve competence?
(Oh, and you might want to revisit your paragraph at the end mentioning water vapour. The claim appears to be so incoherent it “isn’t even wrong”. Start by noting that TOA radiative power doesn’t
“induce evaporation”…)
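The isotropy point bears repeating: a molecule that emits in a random direction sends the photon toward the ground about half the time. A few lines of simulation (isotropic emission assumed, nothing fancier):

```python
import random

random.seed(1)
trials = 100_000
# For an isotropically emitting molecule, the cosine of the angle to
# the local vertical is uniform on [-1, 1]; negative means "downward".
downward = sum(1 for _ in range(trials) if random.uniform(-1.0, 1.0) < 0.0)
print(downward / trials)  # very close to 0.5
```

No gravitational-field-sensing lens system required; roughly half of all GHG emission heads back toward the surface simply because the molecules have no idea which way is up.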
69. #69 Lotharsson June 7, 2012
Er, lack of preview bites again – replace KTR with TFK in my last comment!
70. #70 P. Lewis June 7, 2012
What a wonderful picture: heliotropic (aided by negative geotropism) GHG molecules.
71. #71 Lionel A June 7, 2012
So Curtin, you still cannot understand what you derogatorily term a ‘cartoon’:
‘…the RHS produces 396 W/sq.m. surface radiation of which 356 goes nowhere…’
‘356 goes nowhere’! Have another closer look and THINK how the OSR reacts with that stuff indicated and that other stuff nearby and how that may be a factor in the origin of that Back Radiation.
You do need leading by the hand don’t you.
If I can get what is going on here, when I am neither a statistician (although I have had some exposure to that discipline) nor a climate scientist (very nearly, but I chose another path after many years as an aviation engineer, where concepts such as the gas laws and much else were par for the job), then you seem to be the Barney Rubble here.
BTW there is one small balancing error when one does the sums but only a minor one of the magnitude that the likes of Steve McIntyre would trumpet around the world. The error is probably due to
backlash in the system through getting bored by being slagged off as a ‘cartoon’.
This is getting real tedious too.
72. #72 MikeH June 8, 2012
Claes Johnson, the physics crank in a discussion with blogger Science of Doom.
73. #73 Bernard J. June 8, 2012
Those who can, account. Those who can’t, economise.
74. #74 Hank Roberts June 8, 2012
> heliotropic (aided by negative geotropism) GHG molecules.
IR goes out randomly no matter what the molecule’s orientation — but, if we could make each self-aligning molecule incorporate a small laser, the kind that occur naturally in planetary
atmospheres anyhow, and still something that would remain near the top of the atmosphere, that’d be an effective Maxwell’s Demon for cooling the planet:
But tack it onto a self-orienting molecule so it only fires when pointed at the coldest part of the sky.
Yeah, that’d work.
75. #75 Chris O'Neill June 8, 2012
There’s a good supply of cranks who are pathologically incapable of understanding Trenberth’s diagram. Doug Cotton and John Nicol come to mind. The latter is particularly stunning considering he
supposedly has a PhD in, and lectured, Physics.
76. #76 Lotharsson June 8, 2012
Claes Johnson, the physics crank in a discussion with blogger Science of Doom.
Confused he is, and evidence, he outright ignores. No wonder TC latched on to his claims.
77. #77 Tim Curtin June 8, 2012
Chris et al: when will you ever grasp the 2nd Law of thermodynamics, with your pixieland acceptance of the Trenberth-Pierrehumbert denial of it? For your information, Claes Johnson is one of the
world’s top applied mathematicians by any measure, and he applies his maths to the physics of climate change. Anyone who denies the 2nd Law is an ass, and that includes Grant Closed Mind, with
his inane response to Marvell here:
Marvell said (like me): “If one regresses one non-stationary variable on another, one gets a spurious regression (the standard errors are much too small). In that case, one must difference the
variables [as I did], unless the two variables are cointegrated. The latter means that the residual in the regression is stationary – i.e., that the two variables tend not to move too far apart
over the long term. In the example in the post, the two variables would probably be cointegrated, although one would have to do a cointegration test to determine whether that is the case. Neither
correlations nor cointegration establish the causal direction, and one cannot assume that temperature changes do not cause changes in greenhouse gas levels. On all these points, see Kaufmann,
Kauppi, and Stock, Emissions, Concentrations & Temperature: a Time Series Analysis. Climate Change (2006) 77: 249-278.”
Here is Closed Mind’s fatuous response “Neither temperature nor CO2 is a stochastic process, and the evidence I see is pretty strong that the stochastic component is stationary.”
But temperature is basically stochastic.
CM goes on: “But you (and your reference) are among those who think they can get to the heart of the matter by completely ignoring the most relevant scientific discipline. It’s called ‘physics’”,
a subject of which CM knows nothing because he too denies the Second Law.
He claims to be a statistician, but where are the stats showing that heat can indeed transfer from a hot body to a hotter, pace Flanders and Swann?
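For what it is worth, the spurious-regression phenomenon Marvell describes is genuine and easy to reproduce: regress one random walk on a completely independent one and the fit looks impressive, until you difference. A sketch (the sample sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(x, y):
    # R^2 of a simple regression of y on x.
    r = np.corrcoef(x, y)[0, 1]
    return r * r

n, sims = 200, 500
levels_r2, diff_r2 = [], []
for _ in range(sims):
    # Two INDEPENDENT random walks: non-stationary, no causal link at all.
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    levels_r2.append(r_squared(x, y))                  # regression in levels
    diff_r2.append(r_squared(np.diff(x), np.diff(y)))  # first differences

# Levels regressions of unrelated random walks routinely "fit";
# the differenced (stationary) series do not.
print(round(np.mean(levels_r2), 3), round(np.mean(diff_r2), 3))
```

Of course, none of this licenses the leap from “levels regressions can be spurious” to “CO2 and temperature are unrelated”; that is what cointegration tests and, above all, physics are for.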
78. #78 Lotharsson June 8, 2012
For your information, Claes Johnson is one of the world’s top applied mathematicians by any measure, and he applies his maths to the physics of climate change.
Unfortunately, like you, evidence strongly suggests that he’s one of the world’s bottom climate scientists. It’s no good applying maths or statistics to a massive lack of the prerequisite
scientific understanding and/or outright denial of evidence that refutes your thesis.
79. #79 adelady
city of wine and roses
June 8, 2012
“where are the stats showing that heat can indeed transfer from a hot body to a hotter”
Oh, my giddy aunt.
Try Roger Pielke Snr http://www.drroyspencer.com/2010/07/yes-virginia-cooler-objects-can-make-warmer-objects-even-warmer-still/ .
For those free of preemptive baggage which might deny them use of ordinary reading skills, there are also good scientific explanations at Skeptical Science, Science of Doom, RealClimate and a
dozen others. I know I’m wasting electrons on TC here, but casual visitors might like some guidance.
80. #80 Richard Simons June 8, 2012
When I wear a parka in -30C weather, it keeps me warmer even though the parka is cooler than I am. TC – if you can explain why this does not violate the 2nd Law of Thermodynamics, then you should
have an inkling of why global warming does not violate the 2nd Law.
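Richard’s parka carries over directly to the textbook one-layer model: put a single IR-opaque layer over a surface and the surface warms, even though heat flows from hot to cold at every step. A toy sketch (the real atmosphere is of course not one opaque slab):

```python
SIGMA = 5.670e-8                          # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed = 1361.0 * (1.0 - 0.30) / 4.0    # ~238 W/m^2 of absorbed sunlight

# No atmosphere: the surface radiates straight to space.
t_bare = (absorbed / SIGMA) ** 0.25       # ~255 K

# One IR-opaque layer: in equilibrium the layer radiates `absorbed`
# both up (to space) and down (to the surface), so the surface must
# radiate 2 x absorbed to balance its own books.
t_surface = (2.0 * absorbed / SIGMA) ** 0.25   # ~303 K

# Heat still flows surface (warm) -> layer (cooler) -> space (cold);
# the Second Law is untouched, yet the surface ends up warmer.
print(round(t_bare), round(t_surface))
```

The cooler layer never transfers net heat to the warmer surface; it merely slows the rate at which the surface loses heat, exactly like the parka.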
81. #81 Lionel A June 8, 2012
And so to Windmills again.
TC, Have you tried homoeopathy.
Print out a page from this Deltoid thread. Dissolve it in 10 litres of water, stir and shake well. Take one tenth of the resultant pap and shake well in 10 litres of water. Now dilute again in another 10 litres of water and shake. Repeat another 10^28 times.
Put a drop of the resultant on your tongue and run around the room clockwise (no, anti-clockwise, seeing as you are south of the equator) ten times, trying not to swallow. Now spit out that drop into the nearest eye. You should feel a bit better after a short rest.
82. #82 Robert Murphy June 8, 2012
Mr. Curtin, atmospheric physicists don’t deny the 2nd Law of Thermodynamics when they accept the fact that greenhouse gases warm the surface of a planetary body. Unlike you, however, they actually understand it. Hell, even this asshat (who posted on this very thread) at one time accepted the validity of the greenhouse effect:
“The second key aspect of water vapor is that it is a potent greenhouse gas” (Tim Curtin)
Of course you ruined it right afterward by saying this:
“…it is the non-GHG gases, namely Oxygen and Nitrogen (+ derivatives other than CO2 and H2O), which have what climate scientists (all of whom are flat earthers) believe is the alleged heat
trapping effect of atmospheric H2O and CO2. “(Tim Curtin)
When are you going to explain your two contradictory statements? They can’t both be true.
83. #83 Lotharsson June 8, 2012
When are you going to explain your two contradictory statements?
Come now, let us be reasonable!
TC is engaged in quantum relativistic logic (QRL), which is far more advanced than your regular logics! Under QRL it is perfectly acceptable, nay, it is entirely routine for both proposition A
and proposition not-A to be true. In fact, under QRL one’s inferences are fixed and one’s axioms are variables that collapse into existence from the superposition of all possible axioms at the
very moment at which they are needed to rebut a challenge to an inference. Furthermore, axiom collapse produces virtual axioms that wink into and out of existence in a mysterious and
non-deterministic fashion unbound by the constrictions of linear time-like dimensions or a requirement to conserve consistency- or coherency-like metrics; their definition and applicability are
related by a Heisenberg-like principle, and their observed profile depends significantly on the speed at which the related scientific concepts zoom above the claimant’s head – all of which goes
some way to explaining why A and not-A are not in fact contradictory when viewed from the correct reference frame, even though you might claim to observe that they are from yours.
Heck, quantum logic is so advanced there might only be three people in the world who understand it – and I’m trying to think who the other two might be.
84. #84 Hank Roberts June 9, 2012
> … Pielke Snr http://www.drroyspencer …
But we know who you meant.
I suggested TW read Spencer
a while back in the thread.
He tried, but didn’t succeed.
85. #85 Hank Roberts June 9, 2012
> TW
86. #86 MikeH June 9, 2012
Since TC is in awe of Claes Johnson perhaps he could explain the following from his guru.
The Sun emitting light generates electromagnetic waves covering space of which the Earth occupies a part and thus is in electromagnetic contact with the Sun. In this contact the Earth is a receiver and not emitter, and in particular does not emit any photon particles reaching the Sun.
Radiation requires both an emitter/source and absorber/receiver
This is just woo – an infantile misunderstanding of the second law combined with a crank attempt to justify the misunderstanding with a mythical electromagnetic contact between receiver and
emitter. How does that work with the background radiation from the big bang? Talk about “intelligent” photons – these can see into the future as well.
The debate over the Slayer’s crankdom is well and truly over with even the hardcore deniers running a mile from it. Trust TC to be late to the party.
87. #87 adelady
city of hanging head in shame
June 9, 2012
Thx Hank.
I noticed that – eventually – and figured referring back/apologising /explaining wouldn’t save me from well-deserved ridicule. I doubt preview could have saved me so I can’t even blame the
technology, just my own failure to delete the correct portions of a stream of names.
88. #88 Lotharsson June 9, 2012
Not only that, but:
…and in particular [earth] does not emit any photon particles reaching the Sun…
is patently and stupidly incorrect. Apparently he thinks the photons emitted by earth in the general direction of the sun are … what? Overawed by the mighty counter flux, so they stop, have a
little think about their future prospects and decide to turn around and go the other way?!
The guy is making up fairy tales that wouldn’t pass muster with a middle of the pack high school physics student, whilst TC applauds from the sidelines.
89. #89 Tim Curtin June 9, 2012
Dear fans, here is what Eddington said in 1915: “if your theory is found to be against the second law of thermodynamics, I can give you no hope; there is nothing for it but to collapse in deepest
humiliation” (h/t Claes Johnson).
None of you let alone Trenberth and Pierrehumbert has ever understood that Law. And it is that Law which explains why my LSR analysis of changes in temperature vis a vis changes in [CO2] yields
no statistically significant relationship between those variables.
Unlike Joelle Gergis’ Kahnt do stats., all my data are in the public domain. Get off your b*** and prove me wrong.
Kind regards
90. #90 Robert Murphy June 9, 2012
But Tim, the GHE does not go against the 2nd Law of Thermodynamics, so your point is moot. Even an idiot like this guy agrees:
“The second key aspect of water vapor is that it is a potent greenhouse gas” (Tim Curtin)
91. #91 Lionel A June 9, 2012
On entropy and the second law of thermodynamics, is it not interesting that the quote TC has just produced above is so beloved of those who are conceptually challenged, as in this example from a Creationist and probable acquaintance of TC.
What is there about the model of global energy flows, as described by Trenberth and many others that urges you to invoke the Eddington quote?
Nobody is eschewing the second law. Consider:
The entropy of an ISOLATED system increases in the course of a spontaneous change
ΔStot > 0, where Stot is the total entropy of all parts of the isolated system.
Irreversible processes (like cooling to the temperature of the surroundings and the free expansion of gases) are spontaneous processes and hence are accompanied by an increase in entropy. We
can express this by saying that irreversible processes generate entropy. On the other hand, reversible processes are finely balanced changes, with the system in equilibrium with its
surroundings at every stage. Each infinitesimal step along the path is reversible, and occurs without dispersing energy chaotically and hence without increasing entropy: reversible processes
do not generate entropy. At most, reversible processes transfer entropy from one part of an isolated system to another.
Atkins, Physical Chemistry, Fourth Edition, p.84.
As for
‘…back radiation in the RHS as garbage, without refutation so far, and it is such patent rubbish that Ray Pierrehumbert could not bring himself even to mention the TFK “back radiation” in his
“Infrared radiation and planetary temperature” (Physics Today 2011).’
when Pierrehumbert certainly mentions back radiation in his ‘Principles of Planetary Climate’. Of course what probably confused you was that you were looking under ‘B’ in the index when you should
have been looking under ‘I’ for Infra-red (Back Radiation).
It does seem also, with your references to CO2 band saturation and comments WRT water vapour, that you have fallen for Knut Ångström’s much-debunked (and no longer cited as valid) 1900 paper.
You should look out pages 53-55 of ‘The Warming Papers’ edited and commented by David Archer and Ray Pierrehumbert as well as <a href="http://www.realclimate.org/index.php/archives/2007/06/
92. #92 Lionel A June 9, 2012
…as well as this article.
Not sure what happened to that URL, text got truncated causing failure somehow. Odd. Has happened before.
93. #93 Bernard J. June 9, 2012
And it is that Law which explains why my LSR analysis of changes in temperature vis a vis changes in [CO2] yields no statistically significant relationship between those variables.
Parsimony – and the small matter of several hundred years of scientific endeavour by tens of thousands of professionals – indicate that the simplest (d.f. = 0) alternative, that “no statistically
significant relationship” was found because you misapply statistical analysis, is actually the case.
And when are you going to learn that regressions are not the sole extent of statistical treatment? Sheesh, you could have saved my first year undergrads a lot of grind by advising Dytham to ditch
his pages and pages of dichotomous key with the simple sentence “do a regression”. That would shorten chapter three to a single line that could then be moved to the contents, and allow the
expunging of a dozen or so superfluous chapters so that the entire book becomes a nice, thin, single chapter.
What particularly flummoxes me though is that you haven’t even batted an eyelid at Claes Johnson’s:
…the Earth is a receiver and not emitter, and in particular does not emit any photon particles reaching the Sun.
Besides the fact that the phrase “photon particle” is an oxymoron, Johnson is essentially stating that the entire (non-stellar?) universe is black with respect to the perspective of the sun. This has some major implications for the nature of whatever windy solar mechanism mediates this remarkable blacking-out, and it also suggests that there is a spherical, incoming-ER horizon surrounding the sun.
Imagine it… the closer an object moves to the sun, the dimmer the night side of the object becomes, until the incoming-photonic horizon is crossed, and the only night illumination remaining is
that resulting from refraction and/or reflection of out-going solar radiation.
I think that someone might have been overlooked for a Nobel prize…
94. #94 Lionel A June 9, 2012
…the Earth is a receiver and not emitter, and in particular does not emit any photon particles reaching the Sun.
Yes, I was going to have some fun with that. I wonder how all those astronauts managed to see and photograph the Earth from space? That is but one question of many forming.
95. #95 DarylD
William Lamb's Town Down Under
June 9, 2012
Alas, the clueless “Slaying the Sky Dragon” supporter Tim Curtin is now busy spreading his gish gallop of nonsense over at “The Conversation”.
Link : http://theconversation.edu.au/can-australian-farmers-take-on-the-challenge-of-climate-change-6957
It seems he is not faring very well in the popularity stakes over there either.
I do believe this Monty Python sketch “Hy-Brazil is Sinking” sums up Tim’s total denial of reality in the face of overwhelming evidence nicely: http://www.youtube.com/watch?v=d8IBnfkcrsM
96. #96 Lotharsson June 9, 2012
None of you let alone Trenberth and Pierrehumbert has ever understood that Law.
When you can’t answer reasoned critiques of your claims, you simply assert that those making the critiques “don’t understand” what they’re talking about.
Funny how you can never seem to accurately explain how they got it wrong…whereas your critics patiently (and impatiently) explain in quite some detail…
97. #97 bill June 10, 2012
While much gets made of the initials DK in this context, have you considered something more along the lines of AB?
Those who suffer from it are “…blind”, but affirm, often quite adamantly and in the face of clear evidence of their blindness, that they are capable of seeing. Failing to accept being blind
gets dismissed by the sufferer through confabulation.
98. #98 Bernard J. June 10, 2012
More on:
…the Earth is a receiver and not emitter, and in particular does not emit any photon particles reaching the Sun.
Another implication is that if the sun were surrounded by Earths at the current astronomical radius, à la Ringworld, but in three dimensions rather than two and minus the scrith, then to all
intents and purposes it would be completely enclosed, but it would never heat beyond its current temperature.
Now there is some nifty thermodynamics…
99. #99 Eli Rabett
June 10, 2012
Tim and Claes believe in some serious action at a distance.
About all you can really say is they are ignorant
100. #100 cohenite June 10, 2012
Actually Tim Curtin’s conclusion about the Null Hypothesis is correct if Trenberth is any guide:
“Given that global warming is “unequivocal”, to quote the 2007 IPCC report, the null hypothesis should now be reversed, thereby placing the burden of proof on showing that there is no human
influence [on the climate].”
Stuck on Trigonometric Integral
1. June 27th 2008, 11:31 AM #1
2. June 27th 2008, 11:39 AM #2
Just a hint:
Now what? Or did we actually get anywhere?
3. June 27th 2008, 11:43 AM #3
This is as far as I have gotten on my own. I tried breaking up the tan squared, and that just seemed to make a bigger mess.
4. June 27th 2008, 11:49 AM #4
Use the general reduction formula:
$\int sec^{n}(x)dx=\frac{sec^{n-2}(x)tan(x)}{n-1}+\frac{n-2}{n-1}\int sec^{n-2}(x)dx$
If you must reinvent the wheel, try using parts. That is how the above formula is derived.
$u=sec(x), \;\ dv=sec^{2}(x)dx, \;\ v=\int sec^{2}(x)dx=tan(x), \;\ du=sec(x)tan(x)dx$
$\int sec^{3}(x)dx = sec(x)tan(x)-\int sec(x)tan^{2}(x)dx$
$=sec(x)tan(x)-\int sec(x)(sec^{2}(x)-1)dx$
$=sec(x)tan(x)-\int sec^{3}(x)dx+\int sec(x)dx$
Now, add $\int sec^{3}(x)dx$ to both sides and finish.
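Adding $\int sec^{3}(x)dx$ to both sides and using $\int sec(x)dx=\ln|sec(x)+tan(x)|+C$ gives the standard result $\int sec^{3}(x)dx=\frac{1}{2}\left(sec(x)tan(x)+\ln|sec(x)+tan(x)|\right)+C$. As a quick numerical sanity check (my addition, not part of the original thread), a few lines of Python compare this antiderivative against a direct trapezoid-rule integral of sec³:

```python
import math

def sec(x):
    return 1.0 / math.cos(x)

def F(x):
    # F(x) = (sec(x) tan(x) + ln|sec(x) + tan(x)|) / 2, the closed form above
    return 0.5 * (sec(x) * math.tan(x) + math.log(abs(sec(x) + math.tan(x))))

# Composite trapezoid rule for the integral of sec^3 over [0, 0.5]
a, b, n = 0.0, 0.5, 100_000
h = (b - a) / n
total = 0.5 * (sec(a) ** 3 + sec(b) ** 3)
total += sum(sec(a + i * h) ** 3 for i in range(1, n))
total *= h

exact = F(b) - F(a)
print(abs(total - exact) < 1e-8)  # True
```

The two values agree to well within the trapezoid rule's discretisation error, which confirms the antiderivative.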
5. June 27th 2008, 01:04 PM #5
The following section presents a brief overview of the approach taken by Chang and Han in determining the gain and phase margin of a system with adjustable parameters.
Consider the block diagram of a basic closed-loop control system shown in Figure 1.
Figure 1: Basic Block Diagram of a Closed-Loop System
The open loop transfer function for this system is defined as:
Substituting into equation (1) yields:
Equation (2) can then be rewritten as:
Expressing equation (3) in terms of its magnitude and phase yields the following equation:
Combining equations (2) and (4) gives:
Dividing both sides of equation (5) by results in:
Let's define:
where A is the gain margin of the system at , and is the phase margin of the system when .
Then equation (6) can be rewritten as:
or as
The term is commonly referred to as the gain-phase margin tester. The gain-phase margin tester can be used to determine the gain and phase margin of a system with adjustable parameters by
incorporating it as an additional block in the closed loop system, such as the one shown in Figure 2.
Figure 2: Basic Block Diagram of a Closed-Loop System with Gain-Phase Margin Tester
The gain-phase margin tester can also be expressed as:
where X and Y are:
Substituting X and Y into equation (8) yields:
which can be expressed in terms of the real and imaginary parts, namely:
Assuming that X and Y are parameters, we obtain the following expressions:
where , , , , , and are functions of .
Solving equations (14) and (15) simultaneously yields:
If the system has adjustable parameters then equation (8) can be written as:
Assuming that and are linear functions then
where , , , , , and are functions of .
Solving equations (20) and (21) simultaneously yields:
If we let and , and set ... equal to some constants, then as varies from 0 to a locus that contains the stability boundary of the system without the gain-phase margin tester can be plotted in the vs.
plane. This locus can be considered the Nyquist plot of the system passing through the point .
If A is assumed equal to a constant value and , the locus formed in the vs. plane is a boundary of constant gain margin. Similarly, if is assumed to be constant and , then the locus formed is a
boundary of constant phase margin. The values of the phase crossover frequency and the gain crossover frequency can be determined by measuring the values of on the gain margin boundary and the
constant phase margin boundary.
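The worksheet's equations did not survive extraction, but the procedure is easy to exercise numerically. Here is an illustrative sketch (my own example, not taken from the Maplesoft worksheet): for the classic open loop G(s) = 1/(s(s+1)(s+2)), bisection on the unwrapped phase locates the phase-crossover frequency, and the gain margin is A = 1/|G(jw_pc)|:

```python
import math

# Illustrative example, not from the worksheet: for the open loop
#   G(s) = 1 / (s (s + 1) (s + 2)),
# find the phase-crossover frequency w_pc where arg G(jw) = -180 degrees,
# then the gain margin A = 1 / |G(j w_pc)|.

def mag(w):
    # |G(jw)| = 1 / (w * sqrt(w^2 + 1) * sqrt(w^2 + 4))
    return 1.0 / (w * math.hypot(w, 1.0) * math.hypot(w, 2.0))

def phase(w):
    # unwrapped arg G(jw) = -(pi/2 + atan(w) + atan(w/2))
    return -(math.pi / 2 + math.atan(w) + math.atan(w / 2))

# phase(w) decreases monotonically in w, so bisect for phase(w) = -pi
lo, hi = 0.1, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if phase(mid) > -math.pi:
        lo = mid
    else:
        hi = mid
w_pc = 0.5 * (lo + hi)
gain_margin = 1.0 / mag(w_pc)

print(round(w_pc, 4), round(gain_margin, 4))  # 1.4142 6.0
```

For this particular G, the crossover can also be found by hand (atan w + atan(w/2) = pi/2 gives w_pc = sqrt(2) and a gain margin of exactly 6), which the numerical result matches.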
Weekly Challenge 34: Googol
Copyright © University of Cambridge. All rights reserved.
The numbers in the problem are too large for computers to deal with in a straightforward fashion (or, rather, were in 2011), so we need to use pure mathematics to help us. There are at least two
possible positive ways forwards:
First, you might notice that the inequality (from the working below, of the form $n^4-6n^2> 10^{100}+1$) is a quadratic in the variable $n^2$. You could solve the corresponding equality and use this to work out the minimum value of $n$ by rounding up the answer to the next largest integer.
Alternatively, you could notice that $10^{100}=(10^{25})^4$. So, it is quite clear that $n=10^{25}$ is too small.
What about $10^{25}+1$? We can substitute this value and use the binomial theorem to show that $$ \begin{eqnarray} (10^{25}+1)^4-6(10^{25}+1)^2&=&\left(10^{100} +4\times 10^{75} + 6\times 10^{50} +4\
times 10^{25} + 1\right)\cr &&\quad\quad- 6\left(10^{50}+2\times 10^{25}+1\right) \cr
&=& 10^{100}+4\times 10^{75}-8\times 10^{25}-5
\end{eqnarray} $$ It might seem 'obvious' that this value is greater than $10^{100}+1$ but it is a good idea to get into the habit of writing down precisely why it is obvious - the problem here is
that we are suggesting the that sum of three different terms is greater than $1$. A simple way forward is to write out an inequality where we simplify one of the terms at each stage until we are left
with just two terms as follows:
$$ \begin{eqnarray} 4\times 10^{75}-8\times 10^{25}-5&> & 10^{75} - 8\times 10^{25}-5\cr
&> & 10^{75}-10^{26}-5\cr
&> & 10^{75}-2\times 10^{26}\cr
&> & 10^{75}-10^{27}\cr &> &1 \end{eqnarray} $$
(Note: you might not see the 'point' of these inequality manipulations: they are useful because it is clear and easy to verify each individual step. This turns something which might just be
controversial into something that is not at all controversial.)
Now for writing out the number $N$ on the left hand side of the inequality. As the number is so large a computer or a spreadsheet will not easily help us. Keeping the '+1' part separate for as long
as possible gives us (where $X=10^{25}$)
$$ \begin{eqnarray} N &=& (X+1)^4-6(X+1)^2 \cr &=& X^4+4X^3+6X^2+4X+1 - 6(X^2+2X+1)\cr &=& X^4+4X^3-8X-5 \end{eqnarray} $$ The structure of this number is now clear to us and includes the problematic
negative terms, which we can deal with one at a time.
Firstly, $8\times 10^{25}$ is represented as $8$ followed by $25$ zeros. Removing this from the part $4\times 10^{75}$ leaves a number of the form
$$ 100\dots 0039\dots 9200\dots 00 $$
Removing the $5$ from this gives a number of the form
$$ 100\dots 0039\dots 9199\dots 995 $$
The trick is now to get the $1$s and the $3$ in the correct place. If we consider the number as a string of digits, counted from the right, then the first $1$ will be in the 101st place, the $3$ in the 76th place and the second $1$ in the 26th place.
Phew! Who would have thought that place value could be so tricky? If you are planning on entering a career in finance or science then part of your computer programming will be to ensure that large
numbers that you enter into your code are accurate. A lot could rest on this accuracy, so patient and careful detail are the key skills required.
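Today the arithmetic is easy to confirm directly, since Python's integers are arbitrary precision. This quick check (my addition, not part of the published solution) verifies both the minimality argument and the digit placements:

```python
# Exact big-integer check of the argument above (my addition, not part of
# the published solution): 10**25 fails, 10**25 + 1 works, and the digits
# of N sit exactly where the text places them.
googol = 10 ** 100

assert (10 ** 25) ** 4 - 6 * (10 ** 25) ** 2 <= googol + 1  # n = 10^25 too small
n = 10 ** 25 + 1
N = n ** 4 - 6 * n ** 2
assert N > googol + 1

digits = str(N)
place = lambda k: digits[len(digits) - k]  # k-th digit counted from the right
print(len(digits))                         # 101
print(place(101), place(76), place(26), place(1))  # 1 3 1 5
```

The printed digits match the text: a 1 in the 101st place, the 3 in the 76th place, a 1 in the 26th place, and a final 5.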
Douglasville Trigonometry Tutor
Find a Douglasville Trigonometry Tutor
...She is really my one weapon in my arsenal I couldn't do without. I wouldn't have passed calculus without her! I plan on using her for Calculus II and Calculus III as well and am not nearly as
anxiety ridden about it as I was before I met her." - Calculus I Student If the above sounds like somebody you want to learn from, just let her know!
22 Subjects: including trigonometry, reading, physics, calculus
I am a biochemist degree holder with extensive experience as a high school science teacher, math and science tutor. I have worked with students from all backgrounds, from low-performing to
high-achieving, tapping into their strong suits and tailoring my lessons along those strengths so that they (t...
14 Subjects: including trigonometry, chemistry, physics, algebra 1
I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including trigonometry, geometry, statistics, probability
I am a former Math Teacher / Basketball Coach available for tutoring. I taught precalculus at the college level. I also taught general math, algebra I and geometry in high school.
14 Subjects: including trigonometry, calculus, geometry, algebra 1
...I spent a year as a music performance major playing the flute at University of Tennessee Knoxville. During the next few years I would take breaks from my permanent jobs to teach high school
band camps during the summer. I taught both flute and color guard and the experience really cemented for me the satisfaction to be gained in helping others achieve success.
9 Subjects: including trigonometry, calculus, physics, geometry
Anti-aliased polygon filling algorithm
I have always found this an extremely interesting computer science problem, and have written various polygon scanline conversion routines in my life. In January 2002 while at my parents waiting for
the work year to start again, I decided to write a new one, this time in Java. (The source code is available on request to anyone who’s interested.)
These days, no doubt, many polygon fill routines are available open source, and the descriptions of how one should work are also easily available on the web. Not so when I was a child and wanted to
write my first one.
I wanted to create a program similar to Corel Draw which could have various different effects (graduations, fractals) as the fill for a polygon. The system-supplied polygon routines were
insufficient, and thus I had to write my own. The system routines could only fill solid colour, and had no hooks to allow one to use their rasterization algorithm yet using ones own plotting system.
The essence of a polygon plotting routine is to consider the polygon one horizontal scanline at a time, from the lowest Y in any of the points, to the highest Y in any of the points. An ordered list
is kept of which edges of the polygon are intersecting the current scanline. Moving up a scanline involves incrementing the position of this intersection by the gradient of the edge. If a node is
encountered, one edge is removed from the list and the neighbouring edge (i.e. the other edge which shares that node) is added in its place.
It sounds deceptively simple and I think that’s why I keep coming back to it. However there are a number of tricky special cases to consider.
• What about horizontal edges, or multiple horizontal edges next to each other, where you encounter a bunch of nodes all on the same scanline? Solution: write special case code.
• If you put in checks for horizontal edges, then what about nearly-horizontal edges where the entirety of the edge appears on the same scanline, but where the Ys are not actually the same?
Solution: pre-compute all X-Y coordinates to the screen resolution before applying the algorithm.
• What about rounding problems when tracing edges? Solution: use integer arithmetic similar to Bresenham’s algorithm.
• What about clipping? If you want to plot a polygon which is 100k pixels high (e.g. on a high zoom) you only want to trace the scanlines which are visible. But if the user scrolls down and exposes
10 extra pixels, the newly-plotted part must join the previously-plotted part exactly. Solution: With the integer node coordinates, and integer arithmetic for the edge-scanline intersection, this
can be done.
• What about anti-aliasing? (My) solution: Run the algorithm at 4x X and Y resolution, and for each scanline of the algorithm, build up an internal array (one entry for each horizontal screen
pixel), how many of the potential 16 subpixels should be plotted. After 4 subpixel scanlines, plot the actual pixel scanline and clear the internal array.
• Converting a pixel with 4×4 subpixels to a colour value is not as easy as it sounds. There are 16 subpixels meaning there are 17 different values (from 0 subpixels filled, to all 16 subpixels
filled, inclusive). Yet your graphics device needs a value from 0 to 255 inclusive. And you want to use integer maths there. Solution: Multiply by 15.
• Pixel rounding: If you plot the rectangle (0,0), (3,0), (3,3), (0,3) then you want to have pixel (0,0) 100% filled but pixel (3,3) 0% filled. In that way adjacent rectangles will abut and not
overlap. Solution: whole-number node coordinate values represent positions between pixels, not pixels themselves. Solution part 2: Carefully keeping track of when integers represent pixels and
when they represent between-pixel coordinates.
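For concreteness, here is a compact Python sketch of the approach (my reconstruction, not the author's Java: it recomputes edge intersections per sub-scanline with floating point rather than the incremental integer stepping described above, and it maps the 0–16 coverage count to an alpha with coverage*255//16 instead of the post's multiply-by-15):

```python
SS = 4  # subpixel resolution per axis -> 4x4 = 16 subpixels per pixel

def fill_polygon(points, width, height):
    """Even-odd scanline fill of a closed polygon, anti-aliased by
    supersampling. points: [(x, y), ...] in pixel units; whole-number
    coordinates lie *between* pixels, as in the post. Returns rows of
    alpha values 0..255."""
    image = [[0] * width for _ in range(height)]
    edges = list(zip(points, points[1:] + points[:1]))
    for py in range(height):
        coverage = [0] * width  # covered subpixels per pixel, 0..16
        for sub in range(SS):
            sy = py + (sub + 0.5) / SS  # y of this sub-scanline
            xs = []
            for (x0, y0), (x1, y1) in edges:
                if (y0 <= sy) != (y1 <= sy):  # edge crosses the sub-scanline
                    t = (sy - y0) / (y1 - y0)  # horizontal edges never cross
                    xs.append(x0 + t * (x1 - x0))
            xs.sort()
            for xa, xb in zip(xs[0::2], xs[1::2]):  # even-odd interior spans
                for sx in range(round(xa * SS), round(xb * SS)):
                    if 0 <= sx // SS < width:
                        coverage[sx // SS] += 1
        for px in range(width):
            image[py][px] = coverage[px] * 255 // 16  # 16 subpixels -> 255
    return image

img = fill_polygon([(0, 0), (4, 0), (4, 4), (0, 4)], 6, 6)
print(img[1][1], img[1][4])  # 255 0 -- interior solid, right edge abuts cleanly
```

A rectangle from (0,0) to (4,4) fills pixel columns 0–3 completely and leaves column 4 empty, matching the pixel-rounding rule in the bullets above, so adjacent rectangles abut without overlapping.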
Here is some output from the program, drawing a random quadrilateral and a random star, with semi-transparency.
Here is a demonstration of an extremely thin shape showing the benefits of the anti-aliasing:
And here is the program displaying text using a vector font.
That font uses a simple text-based format which is easy to parse.
Strangely I got the font when my father bought me a program costing £4.99 when I was about 10. I loved that program as it was able to do large lettering using vector graphics, and render them and
print them out on the dot-matrix printer. It was not fast and the fonts did not have Bezier curves so it was not perfect, but it was a lot better than the character-based word processing I was able
to do otherwise. It opened up a whole new world to me about what vector graphics were capable of.
The program came on a single floppy, for my 8-bit Z80-based Amstrad PCW. I rescued the 4 fonts that came with it (as they were the only vector fonts I had access to at that time) and moved them to my
Archimedes, and now they live on in my Subversion repository, and are accessible from my Java IDE.
Flo Says: April 12th, 2008 at 11:41 am
It’s cool to see someone is enjoying implementing low-level graphics algorithms over and over again like myself! Kudos for the Amstrad fonts!!!
NIE Zhiwen Says: December 11th, 2008 at 10:12 am
I have been researching the anti-aliasing polygon algorithm for a long time;
may I please ask you for a copy of the source code?
Thanks very much!
niezhiwen#sina.com (with # to @)
Chitra Says: March 23rd, 2009 at 8:09 am
I am developing an application in Qt 4.4.3 in which I need to fill images like ellipses and some irregular shapes.
Would you please send me the code (C code) of the scan-line algorithm for filling irregular shapes and for making images semi-transparent…
I would be very thankful if you could really provide me this help.
Hare Krsna!!!!!!!
lzhongyue Says: April 14th, 2009 at 7:05 am
I have been researching the anti-aliasing polygon algorithm for a long time;
may I please ask you for a copy of the source code?
Thanks very much!
renpengfei Says: June 3rd, 2009 at 12:35 pm
a good article
JasF Says: July 23rd, 2009 at 5:48 pm
I’m develop application for symbian platform. I need the functionality of this algorithm. Please give me a copy :)
Thanks and respect!
Mimis Says: September 17th, 2009 at 1:17 pm
I’m searching that algorithm for a long time.
I think that your demo images are very good!
May i please ask you for a copy of the source code?
Thank you very much!
Peter Says: September 25th, 2009 at 9:17 pm
Can you please send me the polygon source code too. dc_dweller#hotmail.com.
Pablo Reda Says: December 22nd, 2009 at 4:38 pm
Can you please send me a copy of the algorithm?
astha Says: January 17th, 2010 at 8:05 pm
Can you please send me a copy of the source code to the email id mentioned above ?
Stu Says: January 18th, 2012 at 3:05 pm
Would it be ok to put the source up on GitHub or similar?
frstyle Says: February 17th, 2012 at 3:53 pm
Good article!!
I am studying the AntiAlaising Polygon.
may I please ask you for a copy of the source code
Atanu Dey On India's Development
Shunyata, Nirvana, and Zero
November 30, 2005
There is a persistent misconception in the English-speaking world that I have to every so often set right. It is this: because the numerals we use are called “Arabic,” the number system was invented
by Arabs and by association, is Islamic in origin. This is as silly and illogical as claiming that potatoes originated in France since in the US we call them French fries.
Actually, two of the greatest inventions in mathematics arose in India: the positional number system and the number “zero.” Where else could zero have originated but in the land which has the
concepts of Shunyata (emptiness, nothingness) and of Nirvana (complete, utter, and absolute extinction) embedded deep into its philosophy?
Of course, I claim no special insight into mathematics just because I was born in the same land as the ones who hundreds of years ago conceived of shunyata, nirvana, and zero. But I cannot deny
myself a bit of pride that one of my ancestors’ mind created the bridge across which pre-numerate humanity walked to become numerate. Yet it fills me with profound sorrow to think that so many of the
present-day descendents of those brilliant minds are innumerate. They were giants and we are really puny. C’est la vie and all that.
Let me conclude this one with Alfred, Lord Tennyson’s words from Ulysses
Though much is taken, much abides;
And though we are not now that strength
Which in old days moved earth and heaven
That which we are, we are:
One equal temper of heroic hearts
Made weak by time and fate but strong in will:
To strive, to seek, to find, and not to yield.
A few selected references on the number system and zero.
1. The Concise Encyclopedia of Mathematics. W. Gellert, et al. Ed. 1975 Van Nostrand Reinhold:
Position Systems: Our present-day position system goes back to the Hindu from which it came to us by way of the Near East (Arabic Digits)…The introduction of zero is one of the greatest
achievements of the Hindus (around 800 A.D.)
2. Beyond Numeracy. John Allen Paulos 1991 Alfred A Knopf:
…we note that about 2000 years ago Chinese invented a written positional numeration system based on the powers of 10. About 500 years later the people of southern India independently made the
same discovery but soon thereafter went further and invented zero, a symbol that forever transformed the art of representing and manipulating numbers… The Chinese borrowed the notion of zero from
the Indian, as did the Arabs, who eventually communicated the whole system to Western Europe. The invention can fairly be said to be one of the most important technical discoveries of mankind
ranking with the invention of the wheel, fire, and agriculture.
3. Mathematics for the Million. Lancelot Hogben. W.W. Norton and Co:
… In the whole of history of mathematics there has been no more revolutionary step than the one which the Hindus made when they invented the sign ‘0’… It makes alive the contents of the elements of mathematics.
4. The Columbia Encyclopedia 1993 Columbia Univ Press:
The introduction of zero was the most significant achievement in the development of the number system, in which calculation with large numbers was feasible. Without it, modern astronomy, physics, and chemistry would have been unthinkable. The lack of such a symbol was one of the serious drawbacks of Greek mathematics. Its existence in the West is probably due to the Arabs, who, having obtained it from the Hindus, passed it on to European mathematicians in the latter part of the Middle Ages.
Isn’t it a shame then that most of our past scientific and mathematical creed add up to nought when it comes to solving today’s issues!
John Keay, in his book India: A History (page 8) quotes D.R Bhandarkar, the first archaeologist to survey Mohenjo-daro, as saying
According to local tradition, these are the ruins of a town only two hundred years old …This seems not incorrect, because the bricks here found are of the modern type, and there is a total lack
of carved terra-cottas amidst the whole ruins.
Keay makes it a point to respond to this quote in his book,
Wrong in every detail, this statement must rank amongst archaeology’s greatest gaffes … In assuming the bricks to be ‘of a modern type’, Bhandarkar was unwittingly paying the Harappan brickmakers
a generous compliment.
But like I have posted earlier, it is extremely unlikely that the ASI, or any other body, will commit enough resources to make such facts more popular. Hell, why don’t we have a ‘Museum of Zero’? Surely the story of how Indians invented the zero is as interesting as any other?
Or for that matter, who will research the story behind this ancient wall in Mahabalipuram that I wrote about on my blog? Consider the ignominy this wall suffers when you compare it to Hadrian’s Wall!
I meant to say page 8 in my previous comment. But Atanu, your blog software automatically replaces with an emoticon! Perhaps we should have a preview page so we can see how our comments look? As
usual I have made an ‘a href’ gaffe by not closing the tag properly!
I have often wondered why Indians are drawn so strongly to the IT industry. When I am in a facetious mood, I sometimes think that this is because everyone in India aspires to a job in an
airconditioned office. More seriously however, the ancestral remnants of this early achievement in mathematics also must play some role in the interest in computing.
I have often wondered why Indians are drawn so strongly to the IT industry.
The relative salary is high.
…More seriously however, the ancestral remnants of this early achievement in mathematics also must play some role in the interest in computing.
I’m not sure at all The bulk of IT jobs in india dont even require highschool level math. Its another thing that employers dont feel comfortable with graduates of other degrees. Their are sw
engineering jobs which require a lot more knowledge of mathematics but i dont see them done in india. Take a company like mathworks its located in Boston MA thats where bulk of its developers and
architects are. It has other offices in europe for some other stuff. But i doubt it that it has anything else but a sales and support team in india.
(last time i checked it did not even have that in india).
The Americans,Germans,Swiss can not say that they invented the indian numerals, but looking at the papers published in mathematics and the number of new algorithms being developed they are not coming
in such large numbers from india as they are coming out of US, and europe.
Most indians are interested in computing b/c it can help them earn a living. Its the same globaly.
Parmentier made potatoes, brought from America, popular in France at the time of Louis XIV; regarding French fries, you can actually give credit to the Belgian menu moules-frites-bière (a mussel-based national dish).
According to my Arab friends, the numerals used now are in fact referred to as an Indian system in their countries. In the western nations, every high school maths teacher knows that zero is the fundamental contribution of India. It is well known through popular science books that the positional numeration system is another big achievement. We may feel proud of it as Arabs do about algebra.
That’s the past; we need more Eulers, Fouriers, Fermats, etc.
As far as I can remember, we were told that it originated from India, but the Indian people expressed zero as a big dot – to mark the absence of a number. When Leonardo Pisano (a.k.a. Fibonacci) translated the Arabic writing into Latin he created the dot with the hole, the “0”.
“We may feel proud of it as Arabs do about algebra.”
Like ’0′, the Arabs did not invent algebra either. Al Khwarizmi was a reputed Persian mathematician and astronomer. While many Arabs view his contributions as fundamental, many other western
scientific historians credit him only with compiling the work of previous mathematicians in his “Al-Mukhtasar fi Hisab Al-Jabr Wa’l-Muqabala” from which the term Algebra was derived. There is no
doubt that translation of his book to Latin in the 12th century AD allowed Europe to become familiar with earlier work on the topic. Hence both he and his text became famous.
The Wikipedia entry on Algebra does not credit him with the invention either.
“Al-Khwarizmi also wrote a treatise on Hindu-Arabic numerals. The Arabic text is lost but a Latin translation, Algoritmi de numero Indorum in English Al-Khwarizmi on the Hindu Art of Reckoning gave
rise to the word algorithm deriving from his name in the title”. (From http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Al-Khwarizmi.html)
Your reference to “Beyond Numeracy” by Paulos reminded me of another quick read by the same author: Innumeracy. Although you might not agree with everything therein, it does make for an interesting —
and oftentimes funny — read.
The concept that amazes me about zero is the concept of abstraction. This was a major leap in human thinking. In normal way of life we don’t have zero kg jalebi or zero meter of land.
For more read “Against the Gods”
Yet it fills me with profound sorrow to think that so many of the present-day descendents of those brilliant minds are innumerate. They were giants and we are really puny.
I’m not sure I agree with that. Most people have always been innumerate; a good education is a luxury that’s always been available only to a small percent of the population. If anything, the
proportion of people without basic mathematical skills has probably decreased over the centuries.
Our education system is a mess, but it’s not accurate to say that it was ever much better.
Atanu’s response: I don’t see where I claimed that our education system is not a mess, or that it used to be much better at an earlier time. So your last statement is a bit of a non sequitur.
Besides, I do agree that a good education is a luxury and like all luxuries, only available to a select but growing few.
There is something wrong with a nation where it needs to romanticize the past to feel good. Even a brilliant person like Amartya Sen off late has been running too many precious brain cycles on how
wonderful India was.
It does not matter if Harappa/Mohanjedaro had wonderful sewage system, if our rural folks are still going to behind bushes with tumbler for toilets. What is the use if we could design such advanced
space projects if we could not design a cheap shit pot for our poor ? Let us talk about the current problems. Let us talk about a grander future. And then let us plan a bridge from here to there.
Sorry, I sounded skeptic but I am only alerting to the fact that “I love Atanu’s writings and I want him to write on real problems/solutions”.
Atanu’s response: I agree that it is not helpful to romanticize a mythical golden past. Those who indulge in doing so are laboring under a delusion for psychological reasons. If you read what I
wrote, I did not doing that. My point was that some minds in the distant past in the land now called India made an astonishing invention in arithmetic. The extrapolation of that fact into some
imagined golden age is not my intention. Any such misreading could be due to the reader’s prejudices.
Thanks for the words of appreciation for my writings. I appreciate them. I also try to highlight real problems and suggest solutions as much as I am able to.
At the end of feeling proud of the achievements of our ancients must lie an introspection of why we – the present – fail to build on their work.
The creation of the number 0 and the place-value system is no mean feat! As an example, consider the ease with which two numbers are added in the positional system as against, say, the Roman system.
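For instance (an editor's illustrative sketch, not part of the original comment), adding two numbers written in Roman numerals has no column-by-column procedure; one effectively converts them to positional values first:

```python
# Illustrative sketch: Roman-numeral "addition" must pass through the
# positional values, whereas place-value addition works digit by digit.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    """Convert a Roman numeral (with subtractive notation) to an integer."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A smaller symbol before a larger one (e.g. the I in IV) subtracts.
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(roman_to_int('XLVII') + roman_to_int('XIX'))  # 47 + 19 = 66
```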
It is an error to imagine that this faculty of invention must have resulted in an interest in computing. On those grounds, it is the Chinese who must be most adept at computing, for their ancients invented the ABACUS. Of course, the Chinese are better than us at computing, but not only because their ancients invented the ABACUS.
Atanu’s response: There is a distinction between scientific discoveries and technological discoveries, though the latter depend on the former. Scientific discoveries are not sufficient for inducing a macro change in society; the benefits of technological innovations can diffuse through society.
Unfortunately, we are still unable to see the reality of ourselves until the white man tells us. And we immediately believe him without a thought!
Just a comment that attempts to give an insight into the beauty of the invention, and takes a little space to lament!
What is the difference between force and pressure? - Homework Help - eNotes.com
What is the difference between force and pressure?
Force is the product of mass and acceleration. It is measured in kg·m/s² (newtons). Pressure is the normal force exerted per unit area. The relation between the force F acting normal to a surface of area A and the pressure P is given by:
P = F/A, in newtons per square metre, or pascals. Equivalently,
P*A = F.
The way I would explain it is that the two are closely related but that pressure depends on how much of an area the force is being exerted upon.
To find force, you have to know the mass of the object and its acceleration. Once you know that, you have the force because Force = mass times acceleration.
To convert force into pressure, you have to know how much of an area the force is acting upon. Stated mathematically, P = F/A where P is the pressure, F is the force, and A is the area.
So the identical amount of force exerts a lot of pressure on a small area or a little pressure on a large area. This is why you wouldn't want to sit on one nail but you could lie on a bed of nails.
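That reasoning can be put into a tiny sketch; the numbers below are illustrative values chosen by the editor, not figures from the original answers:

```python
def pressure(force_n, area_m2):
    """P = F / A, in pascals."""
    return force_n / area_m2

# Hypothetical numbers: the same 600 N (roughly a person's weight) applied
# to a single 1 mm^2 nail tip versus a bed of 400 such nails.
tip_area = 1e-6                         # 1 mm^2 expressed in m^2
p_single = pressure(600.0, tip_area)
p_bed = pressure(600.0, 400 * tip_area)
print(p_single, p_bed)  # same force, pressure differs by a factor of 400
```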
Force is defined as the agent which produces or tends to produce motion, or destroys or tends to destroy motion. The magnitude of a force is proportional to the rate of producing or destroying motion, which in common language is described by the mass of the object and its acceleration.
The force may be applied or exerted at one concentrated point, or it could be spread out over a finite area. When the force is spread out over a given area, the force applied per unit of area is called pressure. Thus the relationship between force and pressure can be represented mathematically as:
Force = Pressure x Area
Pressure is force per unit area: with pressure, you take the contact area into account.
Beam Supported To Uniformly Varying Load Help for - Transtutors
Simply supported beam subjected to uniformly varying load
Let a simply supported beam AB of length ‘L’ be subjected to a load varying uniformly from zero at support A to w per unit length at support B, as shown in figure. Then, at any section (X – X’) at a distance x from A, the intensity of load = wx/L.
Since dF[x]/dx = –w[x] = –wx/L,
F[x] = –∫ w[x] dx = –∫ (wx/L) dx = –wx^2/2L + C[1]
At x = 0, F = R[A] ⇒ C[1] = R[A]
⇒ F[x] = –wx^2/2L + R[A] …(1)
Further, dM[x]/dx = F[x]
⇒ M[x] = ∫ F[x] dx = –wx^3/6L + R[A]x + C[2]
At x = 0, M = 0 ⇒ C[2] = 0
⇒ M[x] = –wx^3/6L + R[A]x …(2)
Now, taking moments about B: R[A] = (1/2 wL)(1/3 L)/L = wL/6
R[B] = wL/2 – wL/6 = wL/3
so M[x] = –wx^3/6L + (wL/6)x
Shear force at x = 0:
F[A] = 0 + wL/6 = wL/6
Shear force at x = L:
F[B] = –wL^2/2L + wL/6 = –wL/3
The shear force is zero where 0 = wL/6 – wx^2/2L
⇒ x = L/√3
Bending moment at x = 0: M[A] = 0
Bending moment at x = L: M[B] = 0
The bending moment is maximum where the shear force is zero, i.e. at x = L/√3:
M[max] = (wL/6)(L/√3) – (w/6L)(L/√3)^3 = (wL^2/6√3)(1 – 1/3)
M[max] = wL^2/9√3
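As a quick numerical sanity check of these closed-form results (with illustrative values w = 12 and L = 3 chosen here, not taken from the original page):

```python
import math

# Check numerically that the shear force vanishes at x = L/sqrt(3) and that
# the bending moment there equals w*L^2 / (9*sqrt(3)).
w, L = 12.0, 3.0
R_A = w * L / 6.0

def shear(x):
    return R_A - w * x**2 / (2.0 * L)      # F(x) = R_A - w x^2 / 2L

def moment(x):
    return R_A * x - w * x**3 / (6.0 * L)  # M(x) = R_A x - w x^3 / 6L

x_zero = L / math.sqrt(3.0)
M_max = w * L**2 / (9.0 * math.sqrt(3.0))

print(abs(shear(x_zero)) < 1e-9)           # True: shear is zero there
print(abs(moment(x_zero) - M_max) < 1e-9)  # True: peak moment matches
```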
Email Based Homework Assignment Help in Simply Supported Beam Subjected To Uniformly Varying Load
Transtutors is the best place to get answers to all your doubts regarding simply supported beams subjected to uniformly varying load, with examples. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need, timely and cost effective. Our tutors are available round the clock to help you out in any way with mechanical engineering.
Live Online Tutor Help in Simply Supported Beam Subjected To Uniformly Varying Load
Transtutors has a vast panel of experienced mechanical engineering tutors who specialize in simply supported beams subjected to uniformly varying load and can explain the different concepts to you effectively. You can also interact directly with our mechanical engineering tutors in a one-to-one session and get answers to all your problems at your school, college or university level. Our tutors will make sure that you achieve the highest grades on your mechanical engineering assignments. We will make sure that you get the best help possible for exams such as the AP, AS, A level, GCSE, IGCSE, IB, Round Square etc.
a triangle and a rectangle are drawn on a page so that they overlap as shown. inside these figures are placed the digits from 0 to 9. if there are six digits in the triangle and eight digits in the
rectangle, how many digits are in the part that belongs to both figures
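A hedged sketch of the standard counting argument (assuming, as the wording implies, that all ten digits 0-9 are placed inside at least one of the two figures): by inclusion-exclusion, |T ∩ R| = |T| + |R| − |T ∪ R|.

```python
# Inclusion-exclusion, assuming the union of the two figures contains all
# ten digits 0-9 (the puzzle's phrasing implies every digit is placed).
in_triangle = 6
in_rectangle = 8
total_digits = 10  # |triangle ∪ rectangle|

overlap = in_triangle + in_rectangle - total_digits
print(overlap)  # 4 digits lie in both figures
```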
Re: st: Estimating a model allowing for AR(1) in residuals with weights
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: Estimating a model allowing for AR(1) in residuals with weights in panel
From David Jacobs <jacobs.184@sociology.osu.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Estimating a model allowing for AR(1) in residuals with weights in panel
Date Wed, 12 Apr 2006 14:16:38 -0400
Check out the cites to the articles by Beck and Katz in the manual entry on xtpcse. They claim that xtgls shouldn't be used because its estimates of the standard errors are far too optimistic. And I've certainly found comparatively enormous t-values when I've used xtgls, so I suspect B & K are right.
By the way, xtpcse and xtgls assume that you have at least ten to twelve time periods. I don't remember if there are similar strictures on xtregar.
Dave Jacobs
At 02:00 PM 4/12/2006, you wrote:
I have also seen xtgls. Indeed, what are the differences between xtpcse and xtgls? Conceptually, are there any differences in the adjustment method compared with xtregar?
Tak Wai Chau
David Jacobs wrote:
Xtregar fe does allow for aweights (and probably fweights) if you use two of the options for estimating the AR1 term.
Check the help file or the manual.
Probably, however, you will find that xtpcse will better suit your needs if you have a large enough T for this estimator. Note that Beck and Katz do NOT recommend the PSAR1 option in xtpcse
that estimates different ar1 corrections for each case (or state in your study).
Dave Jacobs
At 09:50 AM 4/12/2006, you wrote:
Hi, Statalist users,
I have a question about estimating a model allowing for AR(1) in residuals with weights.
I have a dataset with state-year level data. The model is like this:
y_it= a + b*policy_it + c_i + d_t + u_it
where i stands for states and t stands for year. policy is a policy implemented at different times in different states. c_i are state dummies (all states except one), and d_t are year dummies (all years except one); thus it is a difference-in-differences model. I also want to do this regression with state population size as weights.
If u_it is serially correlated for each state, and I would like to allow for AR(1) for this u_it over time for each state to obtain parameter estimates, what should I do in Stata?
I have thought of xtregar, fe, but it does not allow weights.
BTW, I think the convention is that we have the autoregressive parameter the same across all states. I wonder if it is identified if I allow different autoregressive parameters in
different states.
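An editor's aside, in Python rather than Stata: the AR(1) disturbance structure in question, u_t = rho*u_{t-1} + e_t with a single rho shared across panels, can be simulated and rho recovered from the lag-1 autocorrelation of the series (a rough sketch of the error model, not an answer about Stata syntax):

```python
import random

# Simulate AR(1) disturbances u_t = rho*u_{t-1} + e_t and recover rho
# from the lag-1 autocorrelation (illustrative values only).
random.seed(42)
rho, T = 0.6, 20000
u, prev = [], 0.0
for _ in range(T):
    prev = rho * prev + random.gauss(0.0, 1.0)
    u.append(prev)

mean = sum(u) / T
num = sum((u[t] - mean) * (u[t - 1] - mean) for t in range(1, T))
den = sum((x - mean) ** 2 for x in u)
rho_hat = num / den
print(round(rho_hat, 2))  # close to the true rho of 0.6
```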
Thank you very much in advance!
Tak Wai Chau
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
vector space
Given a field F, a vector space over F is a set V with two operations defined on it: vector addition, which assigns to u,v∈V an element u+v∈V, and scalar multiplication, which assigns to c∈F and v∈V an element c*v∈V.
The operations are required to satisfy the following axioms. For vectors u,v,w∈V and scalars b,c∈F:
• Addition is commutative: u+v = v+u and associative: u+(v+w) = (u+v)+w.
• There exists a zero vector 0∈V such that for all v∈V, 0+v = v and 0*v = 0 (in the latter, the first 0 is the zero scalar of F).
• Multiplication is associative: b*(c*v) = (b*c)*v.
• Multiplication is distributive across addition: c*(u+v) = c*u+c*v, and (b+c)*v = b*v+c*v.
• Multiplication preserves the field's unit: 1*v = v.
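As an illustrative aside (not part of the original node), these axioms can be spot-checked numerically for V = R^3 over F = R, with vectors represented as tuples:

```python
# Spot-check the axioms for V = R^3 over F = R on a few sample vectors.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u, v, w = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5), (0.0, 6.0, -2.0)
b, c = 2.0, -3.0
zero = (0.0, 0.0, 0.0)

assert add(u, v) == add(v, u)                                # commutative
assert add(u, add(v, w)) == add(add(u, v), w)                # associative
assert add(zero, v) == v and scale(0.0, v) == zero           # zero vector
assert scale(b, scale(c, v)) == scale(b * c, v)              # assoc. scaling
assert scale(c, add(u, v)) == add(scale(c, u), scale(c, v))  # distributive
assert add(scale(b, v), scale(c, v)) == scale(b + c, v)
assert scale(1.0, v) == v                                    # unit law
print("all axioms hold on these sample vectors")
```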
• If I is any set (called an index set) and F is a field, the set of functions I→F is a vector space over F:
□ R^n is a vector space over the real numbers R (take I={1,2,...,n}).
□ F^n is a vector space over F.
□ The set of all functions X→R is a vector space over R, for any set X.
The set of functions from R to any vector space V over R is a vector space over R. (Of course this remains true if you replace "R" with any other field "F").
The set of all continuous / differentiable / k times differentiable / analytic / almost any "nice" property functions X→R^n, where X⊆R^m is some open set is a vector space over R.
If F is a field and K⊆F is a subfield, then F is a vector space over K:
• C (the complex plane) is a vector space of the real numbers R.
• The finite field GF[p^k] is a vector space over GF[p].
• R is a vector space over the rational numbers Q.
Many thanks to jrn and to scibtag for stubbornly insisting I have an error in the definitions -- the axioms (1*v=v) and (0+v=v) were missing! | {"url":"http://everything2.com/title/vector+space","timestamp":"2014-04-18T13:47:06Z","content_type":null,"content_length":"22307","record_id":"<urn:uuid:ec090f9c-a0e6-4685-8f41-e8b98de880d0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] scipy.sparse.linalg.eigen error code?
Pauli Virtanen pav@iki...
Wed Oct 13 13:37:09 CDT 2010
Wed, 13 Oct 2010 15:55:42 +0200, Nico Schlömer wrote:
> I'm using scipy.sparse.linalg.eigen to compute the lowest magnitude
> eigenvalue of a matrix, and I noticed that sometimes the code would
> return 0+0j where I really didn't expect it. Turns out that tweaking the
> number of iterations results in some more meaningful value here,
> suggesting that the underlying Arnoldi (ARPACK?) iteration failed with
> the maxiter value previously given. As far as I can see, there's no way
> to tell that the iteration actually failed.
> Is that correct?
Yep. It seems to try to raise a warning, though,
    elif self.info == -1:
        warnings.warn("Maximum number of iterations taken: %s" % self.maxiter)
But this is bad practice. You'll only see the warning *once*, and there
is indeed no way to catch it.
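(An editorial aside: with the standard-library warnings module, a caller can at least promote warnings to exceptions for the duration of a call, which makes such a condition detectable; the sketch below uses a plain warning standing in for the scipy one.)

```python
import warnings

def flaky():
    # Stand-in for a routine that only warns on failure, like the
    # "Maximum number of iterations taken" case quoted above.
    warnings.warn("Maximum number of iterations taken: 10")
    return 0

failed = False
try:
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # every warning becomes an exception
        flaky()
except UserWarning as exc:
    failed = True
    print("caught:", exc)

print(failed)  # True: the failure no longer passes silently
```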
The Arpack interface seems to need some fixing, too. Exceeding the number
of iterations is really an error condition. The interface also apparently
doesn't check that ncv is sensible, so in some cases (small matrices) it
can give "Error -3".
As a work-around, you can probably use
def eigen_fixed(A, k=6, M=None, sigma=None, which='LM', v0=None,
                ncv=None, maxiter=None, tol=0, return_eigenvectors=True):
    import numpy as np
    from scipy.sparse.linalg.interface import aslinearoperator
    from scipy.sparse.linalg.eigen.arpack.arpack import \
        _UnsymmetricArpackParams

    A = aslinearoperator(A)
    if A.shape[0] != A.shape[1]:
        raise ValueError('expected square matrix (shape=%s)' % (A.shape,))
    n = A.shape[0]
    matvec = lambda x: A.matvec(x)
    params = _UnsymmetricArpackParams(n, k, A.dtype.char, matvec, sigma,
                                      ncv, v0, maxiter, which, tol)
    if M is not None:
        raise NotImplementedError("generalized eigenproblem not supported yet")
    while not params.converged:
        params.iterate()  # drive the ARPACK reverse-communication loop
        if params.info == -1:
            raise RuntimeError("Did not converge")
    return params.extract(return_eigenvectors)
Pauli Virtanen
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2010-October/026955.html","timestamp":"2014-04-19T07:05:55Z","content_type":null,"content_length":"4637","record_id":"<urn:uuid:d343a748-399d-40cc-9acf-edc389eda59f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
amount to deposit
Re: amount to deposit
Hi zetafunc;
Isn't that a lot of money earned off of 4.5% interest?
I think the problem is that you have her earning 4.5% per month. She is only earning 1 / 12 of that per month.
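To put bobbym's point in code, with hypothetical figures (the thread's actual problem numbers are not quoted here): a nominal annual rate compounded monthly earns rate/12 each month, which is very different from earning the full rate every month:

```python
def future_value(principal, annual_rate, months):
    # Monthly compounding at a nominal annual rate: each month earns rate/12.
    return principal * (1.0 + annual_rate / 12.0) ** months

right = future_value(1000.0, 0.045, 12)  # 4.5% per YEAR, compounded monthly
wrong = 1000.0 * (1.0 + 0.045) ** 12     # mistakenly 4.5% per MONTH
print(round(right, 2), round(wrong, 2))  # the second figure is far larger
```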
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19460","timestamp":"2014-04-20T13:45:56Z","content_type":null,"content_length":"18929","record_id":"<urn:uuid:23c92f0e-18d1-4447-aa01-c97d36e544eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sea Cliff Algebra Tutor
Find a Sea Cliff Algebra Tutor
...I just spent the last two years living and working in France as an ESL (English as a Second Language) teacher in four different elementary schools just near Paris. It was there that I
discovered my passion for teaching while acquiring real-life experience week after week for two years. I worked with children from 5-11 years old.
27 Subjects: including algebra 2, algebra 1, reading, English
...Well, I am sure it should be even easier to communicate with adults. My time is very flexible, but I do prefer to meet my student at least 5 hours a week in order to get better tutoring results. I am looking forward to hearing from you soon. Thank you. I am a native Chinese speaker and speak Mandarin fluently.
6 Subjects: including algebra 1, calculus, Japanese, Chinese
...John's College in Annapolis, MD. After working for years in advertising, I decided to go back to school. I recently graduated with a Ph.D. in physics from the College of William & Mary.
10 Subjects: including algebra 2, algebra 1, writing, physics
...I have taken organic chemistry and tutored it for about 3 years when I was at Lehman College. The MCAT also included a section that prepared me well in this subject. I took pharmacology over a 2-year period in pharmacy school and received excellent grades (A) in all my pharmacology classes.
37 Subjects: including algebra 1, algebra 2, chemistry, English
...In addition, I have successfully tutored high school students preparing for the Algebra 2/Trig New York State Regents exam, a number of whom scored in the 90's. As a retired college
Mathematics professor, I am knowledgeable in all of the topics normally seen in this course. I have taught Calculus at the college level and have tutored the subject to both college and high
school students.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
Nearby Cities With algebra Tutor
Baxter Estates, NY algebra Tutors
Bayville, NY algebra Tutors
Brookville, NY algebra Tutors
Glen Cove, NY algebra Tutors
Glen Head algebra Tutors
Great Neck Estates, NY algebra Tutors
Great Neck Plaza, NY algebra Tutors
Kensington, NY algebra Tutors
Lattingtown, NY algebra Tutors
Locust Valley algebra Tutors
Matinecock, NY algebra Tutors
Old Brookville, NY algebra Tutors
Roslyn Harbor, NY algebra Tutors
Roslyn, NY algebra Tutors
Thomaston, NY algebra Tutors | {"url":"http://www.purplemath.com/sea_cliff_algebra_tutors.php","timestamp":"2014-04-19T10:08:17Z","content_type":null,"content_length":"24030","record_id":"<urn:uuid:e447e3fe-a1fd-42be-80f5-743df57c79be>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Items where Division is "Faculty of Science > Institute of Mathematical Sciences" and Year is 2011
Up a level
Group by: Creators | Item Type | No Grouping
Jump to: E | Y
Number of items: 2.
Ewedafe, SU; Shariffudin, RH (2011) Parallel Implementation of 2-D Telegraphic Equation on MPI/PVM
Cluster. International Journal of Parallel Programming, 39 (2). pp. 202-231. ISSN 0885-7458
Yacob, NA; Ishak, AR; Pop, I (2011) Melting heat transfer in boundary layer stagnation-point flow
towards a stretching/shrinking sheet in a micropolar fluid. Computers & Fluids, 47 (1). pp. 16-21.
ISSN 0045-7930
Convert cm^2 to m^2 - Conversion of Measurement Units
›› Convert square centimetre to square metre
›› More information from the unit converter
How many cm^2 in 1 m^2? The answer is 10000.
We assume you are converting between square centimetre and square metre.
You can view more details on each measurement unit:
cm^2 or m^2
The SI derived unit for area is the square meter.
1 cm^2 is equal to 0.0001 square meter.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between square centimeters and square meters.
Type in your own numbers in the form to convert the units!
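The factor quoted above amounts to a single division by 10,000; as a minimal code sketch:

```python
def cm2_to_m2(area_cm2):
    # 1 cm^2 = 0.0001 m^2, so divide by 10,000.
    return area_cm2 / 10_000.0

def m2_to_cm2(area_m2):
    return area_m2 * 10_000.0

print(cm2_to_m2(25_000))  # 2.5: a 25,000 cm^2 area is 2.5 m^2
print(m2_to_cm2(1.0))     # 10000.0
```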
›› Definition: Square meter
A square metre (US spelling: square meter) is by definition the area enclosed by a square with sides each 1 metre long. It is the SI unit of area. It is abbreviated m².
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Large Scale - Large Numbers - Large Efforts: Historical
3.2.2 Horizons
Schwarzschild (1916) derived a special solution to the original Einstein equations. The Schwarzschild metric has the line element
ds² = (1 − 2m/r) dt² − (1 − 2m/r)⁻¹ dr² − r² (dθ² + sin²θ dφ²),
with m = GM/c².
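As a numerical aside (not in the original page): the Schwarzschild metric factor vanishes at r = 2m, i.e. at the Schwarzschild radius r_s = 2GM/c²; evaluated for the Sun with rounded constants:

```python
# Schwarzschild radius r_s = 2GM/c^2 for the Sun (constants rounded).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2.0 * G * M_sun / c**2
print(round(r_s))  # roughly 2950 m, i.e. about 3 km
```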
Eddington (1924) showed that the introduction of the cosmological constant Λ led to an extended factor (1 − 2m/r − Λr²/3) in the line element and thus to a second horizon, the `world horizon'.
Different usage of the term horizon made standardization desirable. It was provided by Rindler (1956).
``We shall define a horizon as a frontier between things observable and things unobservable. (The vague term thing is here used deliberately). There are then two quite different horizon concepts
in cosmology which satisfy our definition and to which cosmologists have at different times devoted their attention. The first, which I shall call an event-horizon, is exemplified by the de
Sitter model-universe. It may be defined as follows: An event-horizon, for a given fundamental observer A, is a (hyper-) surface in space-time which divides all events into two non-empty classes:
those that have been, are or will be observable by A, and those that are forever outside A's Possible Powers of observation . . . .
The other type of horizon, which I shall call a particle-horizon ^(8), is exemplified by the Einstein-de Sitter model-universe. It may be defined as follows: A particle-horizon, for any given
fundamental observer A and cosmic instant t[0] is a surface in the instantaneous 3-space t = t[0] to, which divides all fundamental particles into two non-empty classes: Those that have already
been observable by A at time t[0] and those that have not.''
Fig. 27 gives an illustration of horizons.
Figure 27. Light-paths in a model-universe (similar to a Friedmann-Lemaitre model) possessing both a particle horizon and an event-horizon (Rindler 1956):
``The origin-observer is denoted by A. B is an observer on a typical particle which becomes visible to A at creation-time t[1] (when A and B enter each other's creation-light-cones) and which passes
beyond A's event-horizon at time t[2], so that events at B after t[2] are outside A's possible powers of observation. C is the critical particle which becomes visible to A only at t = ∞.''
^8 ``It will be understood that whenever we speak of particles in this context we always mean fundamental particles, i.e. the representations of the nebulae in the world-model.'' Back. | {"url":"http://ned.ipac.caltech.edu/level5/Seitter/Seitter3_2_2.html","timestamp":"2014-04-20T13:20:20Z","content_type":null,"content_length":"5558","record_id":"<urn:uuid:5f7961b9-c340-4dfb-bbf0-6e212a7cca16>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Terrain elevation extraction from digital SPOT satellite imagery
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
"... Modelling the push broom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is ..."
Cited by 140 (6 self)
Modelling the push broom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is difficult to analyze. In this paper, a simplified model of a push broom sensor (the linear push broom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting push broom model. Methods are given for solving the major standard photogrammetric problems for the linear push broom sensor. Simple non-iterative solutions are given for the following problems: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, the linear push broom model leads to theoretical insights that will be approximately valid for the full model as well. The epipolar geometry of linear push broom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the essential matrix of perspective cameras is shown to exist for linear push broom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear push broom cameras. Keywords: push broom sensor, satellite image, essential matrix, photogrammetry, camera model. The research described in this paper has been supported by DARPA Contract #MDA972-91-C-0053. Real push broom sensors are commonly used in satellite cameras, notably the SPOT satellite, for the generatio...
- in Proc. Second Asian Conference on Computer Vision, 1995
"... Several space-borne cameras use pushbroom scanning to acquire imagery. Traditionally, modeling and analyzing these sensors has been computationally intensive due to the motion of the orbiting satellite with respect to the rotating earth, and the non-linearity of the mathematical model involving orbit ..."
Cited by 4 (3 self)
Several space-borne cameras use pushbroom scanning to acquire imagery. Traditionally, modeling and analyzing these sensors has been computationally intensive due to the motion of the orbiting satellite with respect to the rotating earth, and the non-linearity of the mathematical model involving orbital dynamics. A new technique for mapping a 3-D point to its corresponding image point that leads to fast convergence is described. Besides computational efficiency, experimental results also confirm the accuracy of the model in mapping 3-D points to their corresponding 2-D points.
Short question relating to the proof of the Atiyah-Singer Index Theorem for families
up vote 3 down vote favorite
My question relates to the proof of the Atiyah-Singer Index Theorem for families of elliptic operators, as presented in "The Index of Elliptic Operators: IV", M. F. Atiyah and I. M. Singer.
Let $A$ be a compact Hausdorff space and $q:A\times \mathbb{C}^n\rightarrow A$ be the projection, then we obtain the induced Thom isomorphism $q_!:K_{\text{cpt}}(A\times \mathbb{C}^n)\rightarrow K(A)
$. The map $q_!$ and the analytic index $ind:K_{\text{cpt}}(A\times \mathbb{C}^n)\rightarrow K(A)$ coincide. According to Atiyah this follows from the case $Y$ is a point, and the fact that the
analytical index is a homomorphism of $K(Y)$-modules. My question is why are these two properties enough to show the two maps coincide?
add comment
1 Answer
active oldest votes
The answer should probably go something like this:
Both $q_!$ and $ind$ are $K(A)$-module maps. Since $q_!$ is an isomorphism, to check that these are the same maps, it suffices to check they are the same on a generator; if $u_A\in
K_{Cpt}(A\times C^n)$ is the unique element such that $q_!(u_A)=1$, then we need to show that $ind(u_A)=1$ as well.
up vote 5 down
vote accepted The class $1\in K(A)$ is in the image of the tautological map $\mathbb{Z}=K(point)\to K(A)$, induced by $f:A\to point$. Both $q_!$ and $ind$ are natural with respect to $f$, so to
prove that $ind(u_A)=1$, it is enough to prove that $ind(u_{point})=1$, since $ind(u_A)=ind(f^*(u_{point}))= f^*(ind(u_{point}))$.
Thanks for your help! Tristan – Tristan Jan 19 '11 at 19:02
1 Tristan, if you think this answers your question, you should accept Charles' answer. – Adam Hughes Jan 19 '11 at 19:28
add comment
Not the answer you're looking for? Browse other questions tagged kt.k-theory-homology or ask your own question. | {"url":"http://mathoverflow.net/questions/52538/short-question-relating-to-the-proof-of-the-atiyah-singer-index-theorem-for-fami?sort=votes","timestamp":"2014-04-17T21:50:25Z","content_type":null,"content_length":"52990","record_id":"<urn:uuid:5c848104-26f0-4842-9fc3-2893f8f8f5cf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calcudoku, Killer Sudoku, & Sudoku
The last 20 solved puzzles ( = by subscriber): → total
Cell size for ×: small medium large
Tomorrow's puzzles will appear in hour(s) and minute(s).Send your ideas and suggestions to
This type of puzzle is known as Calcudoku, Newdoku, Rekendoku, MathDoku, Kashikoku-Naru, KenKen, Kendoku, Sumdoku, Calkuro, K-Doku, Keen, NekNek, CanCan, Square Wisdom, Emono, Minuplu, LatinCalc,
Yukendo, ArithmeGrid, Hitoshii, Inky, SquareLogic, TomTom, and if you know of any other names, let me know :-). Of these names, "KenKen" and "KenDoku" are trademarks of Nextoy LLC. Note that this is
a hobby site, and is not affiliated with Nextoy nor their brands. | {"url":"http://www.calcudoku.org/en/2012-04-14/5/1","timestamp":"2014-04-17T18:23:28Z","content_type":null,"content_length":"29878","record_id":"<urn:uuid:8857b200-7bd1-442f-8c75-ff3d3da22f92>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
please check my answer :) Find the remainder using remainder theorem
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50250473e4b0d38989b72b61","timestamp":"2014-04-17T16:19:23Z","content_type":null,"content_length":"53743","record_id":"<urn:uuid:8f6af343-1b9b-4990-a54b-94f46263c97d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear regression library for use in Node.js applications. Based on Lyric javascript library provided by Flurry (http://github.com/flurry)
npm install lyric-node
Want to see pretty graphs? Log in now!
4 downloads in the last week
9 downloads in the last month
Linear Regression library for use in Node.js applications to model and predict data series. Based on Lyric javascript library provided by Flurry (http://github.com/flurry)
Lyric can help you analyze any set of x,y series data by building a model that can be used to:
1. Create trendlines on charts
2. Predict future values based on an existing set of data
Typical applications would include charting libraries and machine learning applications. You can learn more about Linear Regression and its applications on Wikipedia: http://en.wikipedia.org/wiki/
Lyric depends on the great Javascript Matrix library Sylvester by James Coglan available here: https://github.com/jcoglan/sylvester and turned into an npm module by Chris Umbell & Rob Ellis: https://
npm install lyric-node
First, make sure your data is represented in the form of a 2xN Array comprised of elements with an 'x' and 'y' value. The x value should be the explanatory and the y the dependent variables. var
Lyric = require('lyric-node');
var input = new Array();
input['x'] = new Array(); input['y'] = new Array();
input['x'][0] = 1; input['y'][0] = 0.5;
input['x'][1] = 2; input['y'][1] = 1.6;
input['x'][2] = 3; input['y'][2] = 4.5;
input['x'][3] = 4; input['y'][3] = 7.6;
input['x'][4] = 5; input['y'][4] = 10.1;
Then you need to have Lyric build the model for you from your data: var model = Lyric.buildModel(input);
Now that you have your model, you will likely want to apply it to a set of inputs. The newInput should be a 1xN array containing only the explanatory variable values you would like to calculate the
dependent values. This will result in a new 2xN array which will include the resulting series. var data = Lyric.applyModel(model, estimationInput);
The following is a complete example which, given some values for the explanatory values 1 through 5, estimates the values of 6 through 8: var Lyric = require('lyric-node');
var input = new Array();
input['x'] = new Array(); input['y'] = new Array();
input['x'][0] = 1; input['y'][0] = 0.5;
input['x'][1] = 2; input['y'][1] = 1.6;
input['x'][2] = 3; input['y'][2] = 4.5;
input['x'][3] = 4; input['y'][3] = 7.6;
input['x'][4] = 5; input['y'][4] = 10.1;
var estimationInput = new Array();
estimationInput['x'] = new Array();
estimationInput['x'][0] = 6;
estimationInput['x'][1] = 7;
estimationInput['x'][2] = 8;
var estimateData = Lyric.applyModel(estimationInput, Lyric.buildModel(data));
// estimateData = [
// {"x":6,"y":13.919999999999881},
// {"x":7,"y":17.93999999999984},
// {"x":8,"y":22.388571428571225}]
By default Lyric will attempt to use a 2nd degree polynomial to model the data. If you would like to use a higher order polynomial for the model, just pass in the degree you would like to use in the
buildModel() and applyModel() functions. For example, to model using a 4-th degree polynomial you would modify the above example as follows: var estimateData = Lyric.applyModel(estimationInput,
Lyric.buildModel(data, 4), 4);
Estimation Error
As with any model, it is important to know how accurate your model is on known data. Typically you would have a set of known values that you use to build the model (the training set) and a set of
known values you use to test (the test set). There is a convenience function provided to help you determine the Mean Squared Error (MSE) which is the sum of the squares of the differences between the
known values and the estimated values from the model. You call it the same way that you call applyModel()
var error = Lyric.computeError(input, Lyric.buildModel(input));
// error is a float value representing the MSE
Acceptable MSE will vary by application so it is up to you to determine whether the value is acceptable.
If you want to reduce the MSE you have two options:
• Increase the size of the training set.
• Change the polynomial degree used to fit the data.
For timeseries data using regular intervals, it is typically more efficient to use the ordinality as the explanatory value than the timestamp. For example, given the following data series: var input
= new Array(); input['x'] = new Array(); input['y'] = new Array(); input['x'][0] = '2012-03-01'; input['y'][0] = 0.5; input['x'][1] = '2012-03-02'; input['y'][1] = 1.6;
input['x'][2] = '2012-03-03'; input['y'][2] = 4.5; input['x'][3] = '2012-03-04'; input['y'][3] = 7.6; input['x'][4] = '2012-03-05'; input['y'][4] = 10.1;
You can turn the dates in the input[0] series into timestamps for use in modelling, but since each data point represents a single day the easier and simpler calculation is to ignore the particular
days and use the ordinality. Lyric provides a convenience function for manipulating this kind of data called ordinalize() which is used as shown below: var ordinalInput = Lyric.ordinalize(input);
The resulting ordinalInput will be equivalent to having created the following input: var input = new Array(); input['label'] = new Array(); input['x'] = new Array(); input['y'] = new Array(); input
['label'][0] = '2012-03-01'; input['x'][0] = 1; input['y'][0] = 0.5; input['label'][1] = '2012-03-01'; input['x'][1] = 2; input['y'][1] = 1.6;
input['label'][2] = '2012-03-01'; input['x'][2] = 3; input['y'][2] = 4.5; input['label'][3] = '2012-03-01'; input['x'][3] = 4; input['y'][3] = 7.6; input['label'][4] = '2012-03-01'; input['x'][4] =
5; input['y'][4] = 10.1;
Lyric can then use the ordinal x values to more efficiently compute the regression. Note that if you do use this, you need to ordinalize both the input provided to build the model AND the input the
model is applied.
Lyric uses the Normal Equation (closed form) to build the linear model. You can read more about the Normal Equation here: http://mathworld.wolfram.com/NormalEquation.html
This does introduce the limitation that Lyric will not work on input data that produces a non-invertible matrix when multiplied by its transpose.
A full breakdown on Lyric is available here: http://tech.flurry.com/lyric-linear-regression-in-pure-javascript
Copyright 2012 Flurry, Inc. (http://flurry.com) - modifications 2013 Sean Byrnes
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing permissions and limitations under the License. | {"url":"https://www.npmjs.org/package/lyric-node","timestamp":"2014-04-17T02:38:20Z","content_type":null,"content_length":"15322","record_id":"<urn:uuid:2a33822e-f081-4fe9-93c0-5a6a17715a8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sharp EL-5813
Datasheet legend
Ab/c: Fractions calculation
AC: Alternating current
BaseN: Number base calculations
Card: Magnetic card storage
Cmem: Continuous memory Years of production: 1979-1981 Display type: Numeric display
Cond: Conditional execution New price: Display color: Black
Const: Scientific constants Display technology: Liquid crystal display
Cplx: Complex number arithmetic Size: 4½"×2½"×½" Display size: 8+2 digits
DC: Direct current Weight: 3 oz
Eqlib: Equation library Entry method: Algebraic with precedence
Exp: Exponential/logarithmic functions Batteries: 2×"LR44" button cell Advanced functions: Trig Exp Hyp Sdev Cmem
Fin: Financial functions External power: Memory functions: +
Grph: Graphing capability I/O:
Hyp: Hyperbolic functions Programming model: Keystroke entry
Ind: Indirect addressing Precision: 11 digits Program functions:
Intg: Numerical integration Memories: 7 numbers Program display:
Jump: Unconditional jump (GOTO) Program memory: 30 program steps Program editing:
Lbl: Program labels Chipset: Forensic result: 9.0031035348
LCD: Liquid Crystal Display
LED: Light-Emitting Diode
Li-ion: Lithium-ion rechargeable battery
Lreg: Linear regression (2-variable statistics)
mA: Milliamperes of current
Mtrx: Matrix support
NiCd: Nickel-Cadmium rechargeable battery
NiMH: Nickel-metal-hydrite rechargeable battery
Prnt: Printer
RTC: Real-time clock
Sdev: Standard deviation (1-variable statistics)
Solv: Equation solver
Subr: Subroutine call capability
Symb: Symbolic computing
Tape: Magnetic tape storage
Trig: Trigonometric functions
Units: Unit conversions
VAC: Volts AC
VDC: Volts DC
calculator was an unexpected find. I wasn't even aware of this model number until one day, I had a chance to buy three of these machines. Even though only one worked out of the box, and of the other
two, one was repairable (with some broken traces on its circuit board) but the other was a hopeless case of heavy corrosion, I was happy with the result: here I had in my hands a previously unknown
programmable machine in good working condition.
I even know these machines' approximate age: one of the three came with a warranty sticker that dates back to 1981. Older than I'd have thought!
These machines have a 30-step program memory with no way to edit/review programs. An annoying "feature" is that when you hit the LRN key, the display is reset to 0 and pending operations are
cancelled. This makes it difficult to program algorithms that operate on the displayed value, if said algorithms result in an error for an argument of zero (this would interfere with the entering of
the program). Since most trivial implementations of the Gamma function or Stirling's formula fall into this category, my favorite programming example needed some adjustments before I was able to fit
it into the machine's limited program memory.
A further difficulty is caused by the fact that the EL-5813 stores programs in completely unmerged form; every use of the 2ndF key takes up an extra step in program memory. Overall, however, this is
not at all an unpleasant machine; its metallic case, very small size and weight, and pleasant keyboard make it an excellent shirt-pocket engineering calculator.
Stirling's formula, I said? Yes; even though the EL-5813 has several memory registers, a polynomial implementation of my programming favorite, the Gamma function, is just too much for its 30 program
steps. Stirling's formula, however, does fit, even in its improved form. Note how this implementation actually increments the argument by one, which is how the problems with a zero argument can be
avoided. To use the program, simply enter the argument and hit the COMP button. | {"url":"http://www.rskey.org/CMS/index.php/7?manufacturer=Sharp&model=EL-5813","timestamp":"2014-04-18T13:08:35Z","content_type":null,"content_length":"27143","record_id":"<urn:uuid:45bebbce-9146-4056-b78d-1dc9571505f7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
November 11th 2009, 10:04 PM #1
Oct 2008
How many conjugates does the permutation (123) have in the group S<sub>3</sub> of all permutations on 3 letters? Give brief reasons for your answers.
Answers were provided for this question, but after going carefully through it, I still don't know what's going on at one point in the answer, which may prove crucial to my understanding of the
problem (where I've put question marks). Below is the solution provided:
"We can use the stabilizer-orbit relationship to see that the size of the conjugacy class of (123) is equal to the index of the stabilizer. But the stabilizer is a subgroup of S3 which contains
at least <(123)> [????? why]. If it were to contain more then it must be all of S3 (by Lagrange's theorem) and so contain, for example, (12). But (12)(123)(12)-1 = (132) and so the centralizer is
exactly <(123)> [????]. Since it has order 3, it also has index 2 and so (123) has 2 conjugates."
Thank you.
How many conjugates does the permutation (123) have in the group S<sub>3</sub> of all permutations on 3 letters? Give brief reasons for your answers.
Answers were provided for this question, but after going carefully through it, I still don't know what's going on at one point in the answer, which may prove crucial to my understanding of the
problem (where I've put question marks). Below is the solution provided:
"We can use the stabilizer-orbit relationship to see that the size of the conjugacy class of (123) is equal to the index of the stabilizer. But the stabilizer is a subgroup of S3 which contains
at least <(123)> [????? why]
Because for ANY $x\in G$, for ANY group $G$, we have that $x^{-1}xx=x\Longrightarrow x$ centralizes itself, which simply means that any element in any group commutes with itself and, in fact,
with any of its powers
. If it were to contain more then it must be all of S3 (by Lagrange's theorem) and so contain, for example, (12). But (12)(123)(12)-1 = (132) and so the centralizer is exactly <(123)> [????]
As seen above, (123) centralizes (123) and thus the group generated by (123), i.e. <(123>, centralizes (123)
. Since it has order 3, it also has index 2 and so (123) has 2 conjugates."
Thank you.
How many conjugates does the permutation (123) have in the group S<sub>3</sub> of all permutations on 3 letters? Give brief reasons for your answers.
Answers were provided for this question, but after going carefully through it, I still don't know what's going on at one point in the answer, which may prove crucial to my understanding of the
problem (where I've put question marks). Below is the solution provided:
"We can use the stabilizer-orbit relationship to see that the size of the conjugacy class of (123) is equal to the index of the stabilizer. But the stabilizer is a subgroup of S3 which contains
at least <(123)> [????? why]. If it were to contain more then it must be all of S3 (by Lagrange's theorem) and so contain, for example, (12). But (12)(123)(12)-1 = (132) and so the centralizer is
exactly <(123)> [????]. Since it has order 3, it also has index 2 and so (123) has 2 conjugates."
Thank you.
This can actually be generalised in quite a neat way:
Two permutations in $S_n$ are conjugate if and only if they have the same cycle structure.
To do this, you simply need to prove the following result, which is also very nice (and useful):
Let $i_1, i_2, \ldots i_k \in \{1, 2, \ldots, n\}$. Then for $\sigma \in S_n$ we have $\sigma^{-1} (i_1 \text{ } i_2 \ldots i_k) \sigma=(i_1 \sigma \text{ } i_2 \sigma \ldots i_k \sigma)$.
I see now, thanks so much. I didn't take much notice of "any element in any group commutes with itself and, in fact, with any of its powers", especially about the powers.
November 12th 2009, 12:27 AM #2
Oct 2009
November 12th 2009, 12:44 AM #3
November 12th 2009, 01:35 AM #4
Oct 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/114024-conjugates.html","timestamp":"2014-04-18T06:47:06Z","content_type":null,"content_length":"42320","record_id":"<urn:uuid:55af50f6-52b9-4176-baf4-b92fdc6661d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Littleton, CO 80130
Just a chemistry loving chemical engineer
...I was the youngest general tutor at Colorado School of Mines (General tutors cover all of freshman and some sophomore courses including: Calculus I, Calculus II, Calculus III, Differential
Equations, Linear
, Chemistry I and II, Biology I and II, and Calculus...
Offering 7 subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Centennial_CO_Algebra_tutors.aspx","timestamp":"2014-04-19T02:58:44Z","content_type":null,"content_length":"60570","record_id":"<urn:uuid:4d8ec872-6350-4265-9bff-178e06d76d3c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
FALL TERM 2004
Prof. Joseph M. Francos
Department of Electrical and Computer Engineering
Ben-Gurion University
Friday, September 24, 2004
4:00-5:00 PM
1005 EECS
Parametric Estimation of Multi-Dimensional Homeomorphic Transformations: Solving a Group-Theory Problem as a Linear Problem
Abstract -
We consider the general framework of planar object registration and recognition based on a set of known templates. Whereas the set of templates is known, the tremendous set of possible
transformations that may relate the template and the observed signature, makes any detection and recognition problem ill-defined unless this variability is taken into account. Given an observation on
one of the known objects, subject to an unknown transformation of it, our goal is to estimate the deformation that transforms some pre-chosen representation of this object (template) into the current
observation. The direct approach for estimating the transformation is to apply each of the deformations in the homeomorphism group to the template in search for the deformed template that matches the
observation. We propose a method that employs a set of non-linear operators to replace this high dimensional problem with an equivalent linear problem, expressed in terms of the unknown parameters of
the transformation model. The proposed solution is unique and is applicable to any homeomorphic transformation regardless of its magnitude. In the special case where the transformation is affine the
solution is shown to be exact. The effectiveness of the proposed solution will be demonstrated using various examples.
Prof. Francos' homepage
return to Current CSPL Seminars | {"url":"http://www.eecs.umich.edu/systems/francosFAL04.html","timestamp":"2014-04-21T15:10:39Z","content_type":null,"content_length":"2313","record_id":"<urn:uuid:3575ee37-92a1-4d24-96a0-22689f918621>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
A math question
you took a spelling test with 50 questions. you got 7 wrong. what is the % you got wrong.
by on Feb. 12, 2013 at 8:42 PM
Add your quick reply below:
You must be a member to reply to this post.
Replies (1-8):
on Feb. 12, 2013 at 9:00 PM
assuming the test is equal to 100 points every question would be 2 points. so 7 Times 2= 14 , 100-14= 86 if I did the math right. I can divide, multiply, do fractions and percentages but simple
adding and subtracting isis really hard.sadly Hannah's the same way.
Your math to that point is correct. However, I need the answer to what % is wrong. I say if you got an 86, that's 86% correct so that meansyou got 14% wrong. But when asked someone else they say the
answer is 7% but can't explain why that's the answer. Unless because 7 is half of 14 and the test has 50 questions which is half of 100, but yes, the test is worth 100 points and each quest is worth
2 points.
Quoting lady-J-Rock:
assuming the test is equal to 100 points every question would be 2 points. so 7 Times 2= 14 , 100-14= 86 if I did the math right. I can divide, multiply, do fractions and percentages but simple
adding and subtracting isis really hard.sadly Hannah's the same way.
14% of the questions were incorrect.
Thank you
Quoting Kmary:
14% of the questions were incorrect.
Thank you everyone for your answers! I'm the one who isn't great at math and I knew the answer immediately, I think others were over thinking it.
Add your quick reply below:
You must be a member to reply to this post. | {"url":"http://www.cafemom.com/group/107955/forums/read/18066325/A_math_question","timestamp":"2014-04-16T04:33:00Z","content_type":null,"content_length":"67818","record_id":"<urn:uuid:95a4039a-8c42-453d-96c8-da79ae96d5d4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amari, S. (K. Hiraoka and S. Amari) -- Strategy Under the Unknown Stochastic Environment: The Nonparametric Lob-Pass Problem - 1998
Amari, S. (S. Amari) -- Mathematical Theory of Neural Learning - 1991
Amari, S. (S. Amari) -- The EM algorithm and Information geometry in neural network learning - January 1995
Amari, S. (H. Ito, S. Amari and K. Kobayashi) -- Identifiability of Hidden Markov Information Sources and their Minimum Degrees of Freedom - March 1992
Amari, S. (S. Amari and S. Wu) -- Improving support vector machine classifiers by modifying kernel functions - 1999
Amari, S. (S. Amari, N. Fujita and S. Shinomoto) -- Four Types of Learning Curves - 1992 | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cltbibZz-e--00-1----0-10-0---0---0direct-10-TX%2CCR%2CBO%2CSO--4--Bshouty%2C%2C%2C-----0-1l--11-en-50---20-home-%5BBshouty%5D%3ATX+--01-3-21-00-0--4--0--0-0-11-10-0utfZz-8-00&a=d&c=cltbib-e&cl=CL2.1.49","timestamp":"2014-04-21T02:35:32Z","content_type":null,"content_length":"20059","record_id":"<urn:uuid:35f288f7-7745-4209-b2e4-57f3fa356b49>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
LOGSPACE and PTIME characterized by programming languages
Results 1 - 10 of 12
, 2000
"... We demonstrate that the class of first order functional programs over lists which terminate by multiset path ordering and admit a polynomial quasi-interpretation, is exactly the class of functions computable in polynomial time. The interest of this result lies (i) on the simplicity of the conditions on ..."
Cited by 25 (10 self)
Add to MetaCart
We demonstrate that the class of first order functional programs over lists which terminate by multiset path ordering and admit a polynomial quasi-interpretation, is exactly the class of functions computable in polynomial time. The interest of this result lies (i) on the simplicity of the conditions on programs to certify their complexity, (ii) on the fact that an important class of natural programs is captured, (iii) and on potential applications on program optimizations. 1 Introduction This paper is part of a general investigation on the implicit complexity of a specification. To illustrate what we mean, we write below the recursive rules that compute the longest common subsequences of two words. More precisely, given two strings u = u_1 … u_m and v = v_1 … v_n of {0,1}*, a common subsequence of length k is defined by two sequences of indices i_1 < … < i_k and j_1 < … < j_k satisfying u_{i_q} = v_{j_q}. lcs(ε, y) → 0 lcs(x, ε) → 0 lcs(i(x), i(y)) → lcs(x, y) + 1 lcs(i(...
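The lcs rules quoted above translate directly into a recursive program. A minimal Python sketch of my own (the `lru_cache` memoisation is an addition to keep the naive recursion polynomial-time, which is roughly the evaluation discipline under which the quasi-interpretation bound holds; the `max` case for differing head symbols is an assumption filling in the rule truncated in the abstract):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lcs(u: str, v: str) -> int:
    """Length of the longest common subsequence of two words,
    following the recursive rules quoted in the abstract."""
    if u == "" or v == "":            # lcs(eps, y) -> 0  and  lcs(x, eps) -> 0
        return 0
    if u[0] == v[0]:                  # lcs(i(x), i(y)) -> lcs(x, y) + 1
        return lcs(u[1:], v[1:]) + 1
    # differing head symbols: drop one symbol on either side, keep the best
    # (this case corresponds to the rule truncated in the abstract)
    return max(lcs(u[1:], v), lcs(u, v[1:]))
```

Without the cache the program still terminates (the recursion is decreasing under a multiset path ordering) but takes exponential time; the quasi-interpretation certifies the polynomial bound for call-by-value evaluation with sharing of repeated calls.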
, 2001
"... Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the first program class solve fewer problems than the second? The answer is
no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In pa ..."
Cited by 24 (1 self)
Add to MetaCart
Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the first program class solve fewer problems than the second? The answer is no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In particular, higher-order values may be first-order simulated by use of the list constructor ‘cons’ to build function closures. This paper uses complexity theory to prove some expressivity results about small programming languages that are less than Turing complete. Complexity classes of decision problems are used to characterize the expressive power of functional programming language features. An example: second-order programs are more powerful than first-order, since a function f of type [Bool]→Bool is computable by a cons-free first-order functional program if and only if f is in PTIME, whereas f is computable by a cons-free second-order program if and only if f is in EXPTIME. Exact characterizations are given for those problems of type [Bool]→Bool solvable by programs with several combinations of operations on data: presence or absence of
constructors; the order of data values: 0, 1, or higher; and program control structures: general recursion, tail recursion, primitive recursion.
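To make the "cons-free" restriction concrete, here is a hypothetical first-order program (my own Python sketch, not from the paper) that decides a property of its input using only the destructors head and tail, never the constructor cons — `xs[1:]` plays the role of tail; in Python it copies, but no list constructor appears in the program logic:

```python
def odd_parity(xs):
    """True iff the boolean list xs contains an odd number of True bits.
    Cons-free: the input is only inspected (emptiness test, head, tail);
    no new list cell is ever built, so by the characterization above this
    first-order program can only decide a PTIME predicate."""
    if not xs:                           # base case: empty list
        return False
    return xs[0] != odd_parity(xs[1:])   # xor the head with the tail's parity
```

The interesting direction of the theorem is the converse: every PTIME-decidable predicate on boolean lists is decidable by some first-order cons-free program, despite the apparent lack of work space.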
"... We present a method for certifying that the values computed by an imperative program will be bounded by polynomials in the program’s inputs. To this end, we introduce mwp-matrices and define a semantic relation ⊨ C: M where C is a program and M is an mwp-matrix. It follows straightforwardly from o ..."
Cited by 4 (3 self)
Add to MetaCart
We present a method for certifying that the values computed by an imperative program will be bounded by polynomials in the program’s inputs. To this end, we introduce mwp-matrices and define a semantic relation ⊨ C: M where C is a program and M is an mwp-matrix. It follows straightforwardly from our definitions that there exists M such that ⊨ C:M holds iff every value computed by C is bounded by a polynomial in the inputs. Furthermore, we provide a syntactical proof calculus and define the relation ⊢ C:M to hold iff there exists a derivation in the calculus where C:M is the bottom line. We prove that ⊢ C:M implies ⊨ C:M. By means of exhaustive proof search, an algorithm can decide if there exists M such that the relation ⊢ C:M holds, and thus, our results yield a computational method. Categories and Subject Descriptors: D.2.4 [Software engineering]: Software/Program Verification; F.2.0 [Analysis of algorithms and problem complexity]: General; F.3.1 [Logics
and meanings of programs]: Specifying and Verifying and Reasoning about Programs
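The semantic property certified here — every computed value polynomially bounded in the inputs — separates loops like the first one below from loops like the second. These are illustrative Python stand-ins of my own for imperative commands, not the mwp calculus itself:

```python
def additive_loop(n: int) -> int:
    """Value growth is additive per iteration: x <= n * n on exit,
    a polynomial bound of the kind an mwp-matrix derivation certifies."""
    x = 0
    for _ in range(n):
        x = x + n
    return x

def multiplicative_loop(n: int) -> int:
    """x is squared on each iteration: x = 2**(2**n) on exit.  No polynomial
    in n bounds this, so by soundness of the calculus no derivation for
    this loop body can exist."""
    x = 2
    for _ in range(n):
        x = x * x
    return x
```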
"... This paper provides a recursion-theoretic characterization of the functions computable in logarithmic space, without explicit bounds in the recursion schemes. It can be seen as a variation of
the Clote and Takeuti characterization of logspace functions [7], which results from the implementation of a ..."
Cited by 2 (0 self)
Add to MetaCart
This paper provides a recursion-theoretic characterization of the functions computable in logarithmic space, without explicit bounds in the recursion schemes. It can be seen as a variation of the
Clote and Takeuti characterization of logspace functions [7], which results from the implementation of an intrinsic growth-control within an input-sorted context.
- In ICC ’02 [16
"... We give two simple uniform characterizations of the complexity classes L NL P NP by register machine programs. Both characterizations are intrinsic because they do not refer to any time and
space bounds. The rst characterization, which permits programs to peek into their computation history, capt ..."
Cited by 1 (0 self)
Add to MetaCart
We give two simple uniform characterizations of the complexity classes L, NL, P, NP by register machine programs. Both characterizations are intrinsic because they do not refer to any time and space bounds. The first characterization, which permits programs to peek into their computation history, captures also Pspace and it is completely syntactical in that the restrictions on programs are decided from their syntax. The programs computing predicates of larger classes are permitted to access larger parts of their history. The second characterization restricts the access of programs to parts of their input which are then projected away by existential quantification.
, 2006
"... Quasi-interpretations are a technique to guarantee complexity bounds on first-order functional programs: with termination orderings they give in particular a sufficient condition for a program
to be executable in polynomial time ([14]), called here the P-criterion. We study properties of the program ..."
Cited by 1 (0 self)
Add to MetaCart
Quasi-interpretations are a technique to guarantee complexity bounds on first-order functional programs: with termination orderings they give in particular a sufficient condition for a program to be
executable in polynomial time ([14]), called here the P-criterion. We study properties of the programs satisfying the P-criterion, in order to better understand its intensional expressive power.
Given a program on binary lists, its blind abstraction is the non-deterministic program obtained by replacing lists by their lengths (natural numbers). A program is blindly polynomial if its blind
abstraction terminates in polynomial time. We show that all programs satisfying a variant of the P-criterion are in fact blindly polynomial. Then we give two extensions of the P-criterion: one by
relaxing the termination ordering condition, and the other one (the bounded value property) giving a necessary and sufficient condition for a program to be polynomial time executable, with
memoisation.
"... In this paper, we show that the Bellantoni and Cook characterization of polynomial time computable functions in term of safe recursive functions can be transfered to the model of computation
over an arbitrary structure developped by L. Blum, M. Shub and S. Smale. ..."
Add to MetaCart
In this paper, we show that the Bellantoni and Cook characterization of polynomial time computable functions in terms of safe recursive functions can be transferred to the model of computation over an arbitrary structure developed by L. Blum, M. Shub and S. Smale.
"... k / peo pl e/N DJ.h tml Abstract. A programming approac h to computabilit y and complexit y theory yields pro ofs of cen tral results that are sometimes more natural than the classical ones; and
some new results as w ell. ..."
Add to MetaCart
k/people/NDJ.html Abstract. A programming approach to computability and complexity theory yields proofs of central results that are sometimes more natural than the classical ones; and some new results as well.
"... f(0, y) = g(y) f(x + 1, y) = h(x, y, f(j1(x), y),..., f(jk(x), y) where g, h, j1,..., jk are primitive recursive and ji(x) ≤ x for i ∈ {1,..., k} , are themselves primitive recursive. A similar
remark holds for recursion with parameter substituhal-00642731, ..."
Add to MetaCart
f(0, y) = g(y), f(x + 1, y) = h(x, y, f(j1(x), y), ..., f(jk(x), y)), where g, h, j1, ..., jk are primitive recursive and ji(x) ≤ x for i ∈ {1, ..., k}, are themselves primitive recursive. A similar remark holds for recursion with parameter substitution.
"... Weak affine light typing (WALT) assigns light affine linear formulae as types to a subset ofλ-terms of System F. WALT is poly-time sound: if aλ-term M has type in WALT, M can be evaluated with a
polynomial cost in the dimension of the derivation that gives it a type. The evaluation proceeds under an ..."
Add to MetaCart
Weak affine light typing (WALT) assigns light affine linear formulae as types to a subset of λ-terms of System F. WALT is poly-time sound: if a λ-term M has type in WALT, M can be evaluated with a polynomial cost in the dimension of the derivation that gives it a type. The evaluation proceeds under any strategy of a rewriting relation which is a mix of both call-by-name and call-by-value β-reductions. WALT weakens, namely generalizes, the notion of “stratification of deductions”, common to some Light Systems — those logical systems, derived from Linear logic, to characterize the set of Polynomial functions —. A weaker stratification allows to define a compositional embedding of Safe recursion on notation (SRN) into WALT. It turns out that the expressivity of WALT is strictly stronger than the one of the known Light Systems. The embedding passes through the representation of a subsystem of SRN. It is obtained by restricting the composition scheme of SRN to one that can only use its safe variables linearly. On one side, this suggests that SRN, in fact, can be redefined in terms of more primitive constructs. On the other, the embedding of SRN into WALT enjoys the two following remarkable aspects. Every datatype, required by the embedding, is represented from scratch, showing the strong structural proof-theoretical roots of WALT. Moreover, the embedding highlights a stratification structure of the normal and safe arguments, normally hidden inside
horrible integral
July 16th 2012, 02:23 PM #1
Jan 2011
horrible integral
Hi, I am making a model and need to take the integral of a rather disgusting function. I was wondering if anyone had an idea of how to solve it, if it's possible at all!
The function is 1 / tan(90 + Vt + at^2).
I need to integrate over t; a and V are constants, and the limits are 0 to T.
Any ideas? And sorry if I put this in the wrong section, I wasn't sure where it belonged.
Re: horrible integral
Wolfram says no standard antiderivative exists ...
integral of 1/tan(at^2 + bt + c) - Wolfram|Alpha
... if it's a definite integral, its value can be estimated numerically.
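Since no closed form exists, a numerical estimate is straightforward. The sketch below uses composite Simpson's rule in pure Python; the constants V = 1, a = 0.5, T = 0.5 and the use of radians are all assumptions for illustration (the post specifies none of them). Note the integrand has poles wherever tan(90 + Vt + at^2) = 0, so the integration interval must stay clear of them — the values chosen here do.

```python
import math

def integrand(t, V=1.0, a=0.5):
    """1/tan(90 + V*t + a*t^2). Radians and the constants V, a are
    illustrative assumptions, not values from the original post."""
    return 1.0 / math.tan(90.0 + V * t + a * t * t)

def simpson(f, lo, hi, n=1000):
    """Composite Simpson's rule with n subintervals (n forced even)."""
    if n % 2:
        n += 1
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3.0

# T = 0.5 keeps the argument 90 + V*t + a*t^2 inside (28.5*pi, 29*pi),
# away from multiples of pi where tan vanishes and the integrand blows up.
value = simpson(integrand, 0.0, 0.5)
print(value)
```

Doubling n and checking that the result barely moves is a cheap way to confirm convergence before trusting the number.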
July 16th 2012, 02:42 PM #2 | {"url":"http://mathhelpforum.com/advanced-math-topics/201050-horrible-integrel.html","timestamp":"2014-04-18T12:38:46Z","content_type":null,"content_length":"32594","record_id":"<urn:uuid:b0186b6a-9ca4-4372-bfb9-00a5ea6a9047>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Google just paid a dividend of $8.41. Dividends and earnings are expected to grow 12% per year for the next two years before declining to the industry average of 3.1%. If the required return on
Google stock is 12% and the risk free rate is 3.3%, what is a fair price for Google stock based on these calculations? (Do NOT include the currency symbol in your answer, and round your answer to 2
decimal places. So if your answer is $34.567, write 34.57.) | {"url":"http://www.chegg.com/homework-help/questions-and-answers/google-paid-dividend-841-dividends-earnings-expected-grow-12-per-year-next-two-years-decli-q3416980","timestamp":"2014-04-17T07:53:22Z","content_type":null,"content_length":"21125","record_id":"<urn:uuid:d2a978d8-9285-484c-91d0-17a450e4fa5b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
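One standard reading of this problem is a two-stage dividend discount model: grow the dividend at 12% for two years, value all later dividends with the Gordon growth formula at the 3.1% terminal rate, and discount everything at the 12% required return. The sketch below follows that reading; under it the 3.3% risk-free rate appears to be a distractor, so treat the result as one defensible answer rather than the official one.

```python
# Two-stage dividend discount model (Gordon growth for the terminal value).
# The 3.3% risk-free rate is not used in this reading of the problem.
d0, g_high, g_term, r = 8.41, 0.12, 0.031, 0.12

d1 = d0 * (1 + g_high)           # year-1 dividend
d2 = d1 * (1 + g_high)           # year-2 dividend
d3 = d2 * (1 + g_term)           # first dividend at the terminal rate

terminal = d3 / (r - g_term)     # value at end of year 2 of all later dividends
price = d1 / (1 + r) + (d2 + terminal) / (1 + r) ** 2
print(round(price, 2))
```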
Operating on Selections - GNU Emacs Calc Manual
11.1.4 Operating on Selections
Once a selection is made, all Calc commands that manipulate items on the stack will operate on the selected portions of the items instead. (Note that several stack elements may have selections at
once, though there can be only one selection at a time in any given stack element.)
The j e (calc-enable-selections) command disables the effect that selections have on Calc commands. The current selections still exist, but Calc commands operate on whole stack elements anyway. This
mode can be identified by the fact that the ‘*’ markers on the line numbers are gone, even though selections are visible. To reactivate the selections, press j e again.
To extract a sub-formula as a new formula, simply select the sub-formula and press <RET>. This normally duplicates the top stack element; here it duplicates only the selected portion of that element.
To replace a sub-formula with something different, you can enter the new value onto the stack and press <TAB>. This normally exchanges the top two stack elements; here it swaps the value you entered
into the selected portion of the formula, returning the old selected portion to the top of the stack.
    3:    ...                 ...                   ___
        (a + b) . . .       17 x y . . .    17 x y + V c
    2*  ...............  2* .............  2: -------------
          . . . .             . . . .            2 x + 1
    1:  17 x y           1:  (a + b)       1:  (a + b)
In this example we select a sub-formula of our original example, enter a new formula, <TAB> it into place, then deselect to see the complete, edited formula.
If you want to swap whole formulas around even though they contain selections, just use j e before and after.
The j ' (calc-enter-selection) command is another way to replace a selected sub-formula. This command does an algebraic entry just like the regular ' key. When you press <RET>, the formula you type
replaces the original selection. You can use the ‘$’ symbol in the formula to refer to the original selection. If there is no selection in the formula under the cursor, the cursor is used to make a
temporary selection for the purposes of the command. Thus, to change a term of a formula, all you have to do is move the Emacs cursor to that term and press j '.
The j ` (calc-edit-selection) command is a similar analogue of the ` (calc-edit) command. It edits the selected sub-formula in a separate buffer. If there is no selection, it edits the sub-formula
indicated by the cursor.
To delete a sub-formula, press <DEL>. This generally replaces the sub-formula with the constant zero, but in a few suitable contexts it uses the constant one instead. The <DEL> key automatically
deselects and re-simplifies the entire formula afterwards. Thus:
      17 x y + # #       17 x y         17 # y         17 y
  1*  -------------  1:  -------    1*  -------    1:  -------
        2 x + 1          2 x + 1        2 x + 1        2 x + 1
In this example, we first delete the ‘sqrt(c)’ term; Calc accomplishes this by replacing ‘sqrt(c)’ with zero and resimplifying. We then delete the x in the numerator; since this is part of a product,
Calc replaces it with ‘1’ and resimplifies.
If you select an element of a vector and press <DEL>, that element is deleted from the vector. If you delete one side of an equation or inequality, only the opposite side remains.
The j <DEL> (calc-del-selection) command is like <DEL> but with the auto-selecting behavior of j ' and j `. It deletes the selected portion of the formula indicated by the cursor, or, in the absence
of a selection, it deletes the sub-formula indicated by the cursor position.
(There is also an auto-selecting j <RET> (calc-copy-selection) command.)
Normal arithmetic operations also apply to sub-formulas. Here we select the denominator, press 5 - to subtract five from the denominator, press n to negate the denominator, then press Q to take the
square root.
      .. .           .. .           .. .            .. .
  1*  .......    1*  .......    1*  .......    1*  ..........
      2 x + 1        2 x - 4        4 - 2 x        _________
                                                  V 4 - 2 x
Certain types of operations on selections are not allowed. For example, for an arithmetic function like - no more than one of the arguments may be a selected sub-formula. (As the above example shows,
the result of the subtraction is spliced back into the argument which had the selection; if there were more than one selection involved, this would not be well-defined.) If you try to subtract two
selections, the command will abort with an error message.
Operations on sub-formulas sometimes leave the formula as a whole in an “un-natural” state. Consider negating the ‘2 x’ term of our sample formula by selecting it and pressing n (calc-change-sign).
      .. .                .. .
  1*  ..........      1*  ...........
      .........           ..........
      . . . 2 x           . . . -2 x
Unselecting the sub-formula reveals that the minus sign, which would normally have canceled out with the subtraction automatically, has not been able to do so because the subtraction was not part of
the selected portion. Pressing = (calc-evaluate) or doing any other mathematical operation on the whole formula will cause it to be simplified.
        17 y               17 y
  1:  -----------    1:  ----------
      __________         _________
     V 4 - -2 x         V 4 + 2 x
Zero: The Biography of a Dangerous Idea
In case you haven't heard, this has been a bumper year for the number zero. All the world waited in anticipation to see the effect when computer calendars rolled over from "99" to "00", and legions
of computer programmers spent their last night of the millennium (though we mathematicians know that really won't happen until next year) at the office or on call, waiting to be rushed in at the
slightest sign that something had gone awry. Last year zero was even the subject of a "natural history" by Robert Kaplan (reviewed here). Now, five months later, Charles Seife has presented us with
Zero: The Biography of a Dangerous Idea.
Of course, Seife's book is not a typical biography. There are no tell-all interviews with the number one or any of zero's other neighbors on the number line. In fact, the idea behind Seife's book is
nearly identical to the idea behind Kaplan's book, and there is much overlap between the two books--right down to the stark white covers that adorn both of them. Both are books about mathematics
aimed at amateurs, and their common goal is to tell the historical tale of mathematics through the eyes of the number zero.
Seife's book begins--of course--at Chapter Zero, with a story of how only recently a divide by zero error in its control software brought the guided missile cruiser USS Yorktown grinding to a halt.
As Seife relates, "Though it was armored against weapons, nobody had thought to defend the Yorktown from zero. It was a grave mistake." Maybe it's not the pulse-pounding drama of a Tom Clancy novel,
but it's enough foreshadowing to launch Seife on an essay which begins with notches on a 30,000-year-old wolf bone and ends with the role of zero in black holes and the big bang.
Chapters 1-5 cover much of the same material that Kaplan covers in his first eleven chapters--and that's to be expected. Historically, zero began as a necessity of place-value number systems. But
before readers can grasp the value of a place-value number system like the Babylonians invented (and the necessity of place-holders in such a system), they must first see the other number systems
that were employed and how the Babylonian system was superior. This is the subject of Chapter One. The next question is how the Greeks, in spite of being the master of almost every other branch of
mathematics, missed the importance of zero. This leads Seife to a discussion of Pythagorean number philosophy, Zeno's paradoxes, and Aristotelian cosmology. This and a lengthy discussion of the
history of the calendar we use today--and how it is fraught with error due to the problems bought on by the historical absence of zero--are the subject of Chapter two.
Chapters Three, Four, and Five cover the emergence of zero from the east and all of the problems and advances it entailed. Chapter Three begins with a discussion of the Indian adoption of the
base-ten place-value number system and ends with Fibonacci's use of the Hindu-Arabic system in Liber Abacci. Chapter Four plays on the duality between zero and infinity--something Seife will do for
the remainder of the book. It begins with Brunelleschi's introduction of perspective in 1425, in which the point at infinity is characterized as the "zero in the center of the painting [which]
contains an infinity of space". He moves on to discuss Descartes' introduction of zero into geometry with his coordinate system, and the chapter ends with a discussion of Pascal's wager, where the
algebraic properties of zero and infinity are used to compute the expected value of religious belief and atheism.
Chapter Five contains Seife's discussion of the origins of calculus--again with careful attention to the role of zero. It begins with the convergence of series. He explains that Suiseth was able to
sum the series of terms 1/2, 2/4, 3/8, ... , n/2^n (but he doesn't indicate how this was done). Next, he tackles Oresme's proof that the harmonic series diverges. The essential ideas behind integral
calculus are then presented in a discussion of Kepler's Volume-Measurement of Barrels, but the most complicated example we're shown is the approximation of the area of a triangle by eight
rectangles. Newton's method for finding tangents is demonstrated by working through the example of finding the tangent to y = x^2 + x + 1, and the controversies between Newton and Leibniz, L'Hospital
and Bernoulli, and Berkeley and the entire English mathematical community are all presented briefly. The chapter finishes with a brief introduction to limits, nicely presented by showing how to solve
Zeno's Achilles paradox (presented in a previous chapter) with a geometric series. (This is particularly refreshing to see. It's amazing how many philosophy students think that nobody has ever been
able to answer Zeno. My only regret is that Seife didn't take the time to show his readers how easy it is to sum a general geometric series.) This entire discussion of calculus occupies only 25 pages.
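The reviewer's regret is easy to address, since the general geometric series really does sum in two lines. (This is standard material supplied here for the curious reader, not something quoted from Seife or the review.)

```latex
S = a + ar + ar^2 + \cdots
\;\Longrightarrow\;
S - rS = a
\;\Longrightarrow\;
S = \frac{a}{1-r}, \qquad |r| < 1.
```

With a = 1 and r = 1/2 this gives S = 2: the infinitely many ever-shorter stages of the Achilles race add up to a finite distance.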
But the time periods covered in Chapters 3-5 were also times of tremendous philosophical and cultural change in the West. While Hindu philosophy had embraced the void, Aristotle--and consequently the
church--had rejected it because "Nature abhors a vacuum". Seife does an excellent job of relating how the dual ideas of emptiness and infinity were shaping the cultural changes taking place in the
Renaissance. In fact, the bulk of Chapter Four, entitled The Infinite God of Nothingness, centers around the struggles between the church and Renaissance scientists over the nature of the universe.
Copernicus' heliocentric model of the solar system had banished Aristotle's (finite) universe and the centrality of the church. As Seife relates:
Nicholas of Cusa and Nicolaus Copernicus cracked open the nutshell universe of Aristotle and Ptolemy. No longer was the earth comfortably ensconced in the center of the universe; there was no
shell containing the cosmos. The universe went on into infinity, dotted with innumerable worlds, each inhabited by mysterious creatures. But how could Rome claim to be the seat of the one true
Church if its authority could not extend to other solar systems? Were there other popes on other planets?
Descartes' attempt to rebuild a rational belief in God on the understanding of the infinite is also presented well by Seife: "Since we have a concept of an infinite perfect being in our minds, . . .
this infinite and perfect being--God--must exist". But Descartes too was tripped up eventually because he couldn't bring himself to accept "infinity's twin", the void. Pascal's experiments with
atmospheric pressure, also described by Seife, would lead to that.
The theme of the duality between infinity and zero is continued in Chapter Six. The chapter begins with projective geometry, but the heart of the Chapter is the complex number system and how it leads
to the Riemann sphere, where the antipodal nature of zero and infinity are presented as the ultimate intuitive expression for the duality between zero and infinity. The chapter finishes with a
discussion of Cantor's cardinal numbers, and includes both a proof of the uncountability of the real numbers and an excellent intuitive justification for why the rational numbers are a set of measure zero.
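The intuitive justification alluded to can be stated in one line. (Again, this is the standard argument added for the reader, not a quotation from the book.) Enumerate the rationals as q_1, q_2, q_3, ... and cover q_n with an interval of length ε/2^n; the total length of the cover is then

```latex
\sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} = \varepsilon ,
```

which can be made as small as we please, so the rationals have measure zero even though they are dense in the line.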
Chapters Seven and Eight follow the history of modern chemistry and physics by tracing the role of zero and infinity in the formulation of various modern scientific laws. The first stop in Chapter
Seven (entitled Absolute Zeros) is Charles' law and absolute zero. Then, by way of a description of thermodynamics and statistical mechanics, the discussion shifts to the historical question of what
constituted light. From here, Seife describes the Raleigh-James law and the problems that lead to the formulation of quantum mechanics. By the end of the chapter, Seife has managed to fit in
Heisenberg's uncertainty principle, general relativity, black holes, wormholes, and their application to interstellar space travel. In Chapter Eight, Seife moves on to describe the more recent topics
of string theory, the big bang, and the question of how the universe will ultimately end.
Like Kaplan, Seife doesn't present anything that experts and avid amateurs in the history of mathematics haven't heard before (though his descriptions of modern advances in physics may be
enlightening for some). In fact, broad brush strokes such as "[t]he Egyptians, who had invented geometry, thought little about mathematics" are bound to ruffle a few feathers. But newcomers to
mathematics will be enlightened and excited by Seife's enlivening account. His prose is clear and uncluttered and his vignettes in the history of mathematics and science are informative,
entertaining, and show the novice the cultural, philosophical, and scientific significance of mathematical ideas. In fact, though it's impossible to do much justice to anything when covering all of
mathematical history in 215 pages, if you've been looking for a book that will give your teenage protégée an introduction to the history of mathematics (and the western world!) and can be consumed in
a weekend, this is probably it. Be warned, though: Seife assumes less mathematical maturity--and far less cultural maturity--than Kaplan does in his story of zero, and even something as elementary as
the modern definition of the derivative is relegated to an appendix. (Other important results, such as a proof that Winston Churchill is a carrot and instructions for constructing a time machine out
of wormholes, are also given in the appendices.)
Of course, the real question is whether either Seife or Kaplan has written the definitive story of mathematical zero. In the end, Seife follows the story of zero into physics while Kaplan follows it
into philosophy. But neither one seriously addresses what mathematicians of the twentieth century have done with zero. The fact is that mathematicians have looked around and found zeros everywhere.
For instance, the simple axiomatic requirement which says that, for any a, we have "a + 0 = a" is one of the most powerful and prevalent ideas in mathematics today. This basic algebraic statement
about the nature of zero certainly lacks the grandeur of galactic wormholes and the emotional gravity of a poem by Sylvia Plath, but the fact that mathematicians--true to the example of Euclid
thousands of years ago--are still sifting through the ideas of mathematics and trying to discover what is most essential about them certainly deserves more than a passing mention.
Andrew Leahy is Assistant Professor of Mathematics at Knox College.
The turbulent dynamo
Magnetic fields in astrophysics are generated by the inductive action of turbulence in the conducting fluid medium. This turbulence is usually generated by buoyancy forces and strongly influenced by
Coriolis effects, and in consequence 'lacks reflexional symmetry'; in particular, the mean helicity is nonzero, i.e. there is a correlation between velocity and vorticity fields. This property in
general leads to an 'alpha-effect' in the fluid, whereby magnetic field grows on length-scales large compared with the dominant energy-containing scale of the turbulence. At the same time, the
turbulent diffusivity controls the growth of the field. The primary problem of mean-field dynamo theory is to obtain reliable expressions for alpha and for the turbulent diffusivity in terms of the
statistical properties and the magnetic Reynolds number of the turbulence. The first lecture will be concerned with this problem.
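For orientation, the standard mean-field result that frames this problem (the first-order smoothing approximation for isotropic, homogeneous turbulence; a textbook sketch, not part of the abstract itself) is

```latex
\frac{\partial \langle \mathbf{B} \rangle}{\partial t}
  = \nabla \times \bigl( \alpha \, \langle \mathbf{B} \rangle \bigr)
  + (\eta + \beta)\, \nabla^{2} \langle \mathbf{B} \rangle ,
\qquad
\alpha \simeq -\tfrac{1}{3}\tau \,\langle \mathbf{u}\cdot(\nabla\times\mathbf{u})\rangle ,
\quad
\beta \simeq \tfrac{1}{3}\tau \,\langle \mathbf{u}^{2}\rangle ,
```

where τ is a correlation time of the turbulence and η the molecular magnetic diffusivity. The minus sign in front of the mean helicity ⟨u·(∇×u)⟩ is exactly the helicity–alpha link described above.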
The second lecture will focus on dynamic back-reaction effects: as the magnetic field grows by turbulent dynamo action, the Lorentz force ultimately modifies the turbulence tending to reduce both
alpha and turbulent diffusivity, until some kind of equilibrium is established, this equilibrium depending on the mechanism by which energy is supplied to the turbulence. Some aspects of this
problem, which is the subject of much current debate, will be considered. | {"url":"http://www.newton.ac.uk/programmes/MSI/Abstract1/moffatt.html","timestamp":"2014-04-19T22:50:52Z","content_type":null,"content_length":"3761","record_id":"<urn:uuid:b6bdf093-ff17-4c4d-9ee1-8db2d82a8f1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fairlawn, NJ Trigonometry Tutor
Find a Fairlawn, NJ Trigonometry Tutor
I have taught Mathematics in courses that include basic skills (arithmetic and algebra), probability and statistics, and the full calculus sequence. My passion for mathematics and teaching has
allowed me to develop a highly intuitive and flexible approach to instruction, which has typically garnere...
7 Subjects: including trigonometry, calculus, geometry, algebra 1
...In my spare time, I enjoy hiking, traveling, learning languages, producing/recording music, and cooking. I speak English and Russian fluently, and have basic/intermediate Spanish. I have been
tutoring math and physics for six years, and would be happy to provide references for past or current students on demand.
10 Subjects: including trigonometry, physics, calculus, geometry
...Based on teaching in the classroom and one-on-one tutoring sessions, I think that anyone can learn math and science. Many people approach math and science with some fear and an idea that it's
hard work. Math and science are about thinking logically, which everyone can do.
10 Subjects: including trigonometry, physics, writing, algebra 2
...Before I began teaching in the public schools, I was required to write a critical analysis of an essay. I was asked to do so again when I took The National Teachers Exam. I exceeded the
requirements on both occasions.
41 Subjects: including trigonometry, English, reading, chemistry
...Additionally, I have experience tutoring students with special needs in classroom settings for the Newark Public School System and I have served as a tutor for the Newark Public Library, in
the city’s Central Business District. As a Mechanical Engineering student, I have very strong Math and Science skills. I have a genuine interest in these subject areas and I enjoy teaching them.
25 Subjects: including trigonometry, English, reading, Spanish
Dunsbach Ferry, NY
Troy, NY 12180
Math tutor! Algebra, Calculus, Prob & Stats, and Oxford Commas!
I'm a graduate of Wheaton College (IL) with a B.S. in Mathematics, although I started as an English major. I love the field, and I enjoy helping people to motivate the various ideas and techniques in Math. I'm familiar with high school algebra as well as linear algebra,...
Offering 10+ subjects including algebra 1, algebra 2 and calculus | {"url":"http://www.wyzant.com/Dunsbach_Ferry_NY_Math_tutors.aspx","timestamp":"2014-04-18T00:58:20Z","content_type":null,"content_length":"55735","record_id":"<urn:uuid:e85fccc1-3d21-4eea-a1d6-88074435cb4d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. Hamiltonian and calculation of the relative free energy
B. Equations of motion
C. Optimization of parameters
D. Generalization to multiple states
E. Estimation of errors
A. Two-state perturbations
1. Dipole inversion
2. van der Waals interaction perturbation
3. Charge inversion
4. Water to methanol conversion
B. Multiple-state perturbation
1. Dipole inversion
2. (Dis-)appearing water molecules
A. Two-state perturbations
1. Dipole inversion
2. van der Waals interaction perturbation
3. Charge inversion
4. Water to methanol conversion
5. Automatic parameter optimization
B. Multiple-state perturbation
1. Dipole inversions
2. (Dis-)appearing water molecules | {"url":"http://scitation.aip.org/content/aip/journal/jcp/128/17/10.1063/1.2913050","timestamp":"2014-04-16T17:13:39Z","content_type":null,"content_length":"110464","record_id":"<urn:uuid:4e502eb2-7a0c-40c6-8e4c-49574bff7ea8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kids.Net.Au - Encyclopedia > Bertrand's postulate
Bertrand's postulate states that if n is a positive integer greater than 3, then there always exists at least one prime number p between n and 2n-2; in a slightly weaker but more elegant form, for every n > 1 there is always at least one prime p such that n < p < 2n.
This statement was first conjectured in 1845 by Joseph Bertrand (1822-1900). His conjecture was completely proved by Pafnuty Lvovich Chebyshev (1821-1894) in 1850, and so the postulate is also called Chebyshev's theorem. In his proof Chebyshev used Chebyshev's inequality. Bertrand himself verified his statement for all numbers in the interval [2, 3 × 10^6].
Srinivasa Aaiyangar Ramanujan (1887-1920) gave a simpler proof and Paul Erdös (1913-1996) in 1932 published a very simple proof where he used the function θ(x), defined as:
<math> \theta(x) \equiv \sum_{p=2}^{x} \ln (p) </math>
where p ≤ x runs over primes, and the binomial coefficients.
Bertrand's postulate was proposed for applications to permutation groups. James Joseph Sylvester (1814-1897) generalized it with the statement: the product of k consecutive integers greater than k is
divisible by a prime greater than k.
A similar and still unsolved conjecture, Legendre's conjecture, asks for a prime p such that n^2 < p < (n+1)^2.
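The elegant form of the postulate is easy to spot-check by machine. The following sketch is an empirical check, not a proof, and the search range is an arbitrary choice:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def bertrand_holds(n):
    """True if some prime p satisfies n < p < 2n."""
    return any(is_prime(p) for p in range(n + 1, 2 * n))

# Empirical spot-check of the elegant form for 1 < n < 1000; prints True.
print(all(bertrand_holds(n) for n in range(2, 1000)))
```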
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/be/Bertrand's_postulate","timestamp":"2014-04-18T00:38:00Z","content_type":null,"content_length":"16067","record_id":"<urn:uuid:58e1712d-22db-49a9-bca7-6be761eb34c0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perry observes the opposite, parallel walls of a room. In how many lines do the planes containing the walls intersect?
SailNet Community - Creating a 12-Volt Spreadsheet
Tom Wood 06-29-1999 08:00 PM
Creating a 12-Volt Spreadsheet
<HTML><!-- eWebEditPro 1.8.0.2 --><P>Generating, storing, and using 12-volt energy on board must be viewed as a total system. Attempts to set up the best complement of batteries, inverters,
alternators, and chargers without a comprehensive 12-volt spreadsheet are usually disappointing. And since every sailor uses electricity differently, a 12-volt spreadsheet needs to be developed.</P>
<P>Begin your spreadsheet with the manufacturer's spec sheet for each of the electrical items on the boat. A good retailer's catalog may sometimes be substituted. The spreadsheet can be developed on
a computer or done the old-fashioned way with paper, pencil, and calculator.</P><P>We've found that two spreadsheets are necessary, and we do them side-by-side. One is for underway when the running
lights, nav instruments, and radios get a lot of use. The other is headed "At Anchor" when the entertainment appliances, anchor, and reading lights demand more power.</P><P>When setting up your
spreadsheet, don't forget to incorporate items that you plan to add in the future. Heavy electrical users like 12-volt refrigeration, SSB radio, or radar will upset the whole plan if not included.</P>
<P>If you have little experience living with your systems, err toward the generous side when estimating the time each electrical appliance will be used. It's better to plan using the water maker
three hours per day and then later discovering it's actually used only two hours.</P><P>Set up the column with these headings:</P><P><B>Appliance, Watts, Amps, Hours at Anchor, Hours Underway, Total
Hours at Anchor, and Total Hours Underway</B></P><P>Sit at the boat's 12-volt panel and fill out the Appliance column. Don't forget the hidden items such as pumps, propane solenoid, bilge blower, or
loads not directly labeled. Items such as interior lights can be grouped together under one heading.</P><P>Have no fear of the Watts and Amps columns. Electrical parts manufacturers quote the usage
of appliances either way and they are both measures of power consumption. If the electrical draw is quoted in amps, you can leave the watts column blank.</P><P>Specification sheets showing 12-volt
draw in watts will require a simple calculation to convert it to amps. For your spreadsheet, the formula is: <B>watts divided by volts = amps</B>. In round numbers, a 12-volt battery is fully charged
at 13 volts and completely discharged at 12 volts. A battery being charged by a generating source will usually read more than 14 volts. To convert watts to amps, we normally use 12.75. Thus, our
formula becomes watts / 12.75 = amps.</P><P>Example: a halogen reading light is quoted by the manufacturer as 20 watts. That's: 20 / 12.75 = 1.57 amps.</P><P>If an inverter is running 120-volt loads,
use the same formula. Thus using the microwave for popcorn would look like this: 1,200 watts / 12.75 volts = 94.12 amps of 12-volt power.</P><P>Next, estimate the number of hours each appliance is
used during a 24-hour day. Do this twice; once in the At Anchor column, and again in the Underway column. Use tenths of hours (six minutes = one-tenth hour) to keep the math simple.</P><P>Finally,
multiply the amps by the number of hours used, round to two decimal places and fill in the results in the Totals columns. Again, you must do this twice, once for Total At Anchor and once for the
Total Underway columns. These results are the amount of 12-volt power used by each appliance in 24 hours.</P><P>Now your spreadsheet should look like this:</P><TABLE cellSpacing=0 cellPadding=2 width
=466 align=center border=1><TBODY><TR><TD align=middle bgColor=silver colSpan=7><FONT face=Arial size=2><B>12-Volt Spreadsheet</B></FONT></TD></TR><TR><TD></TD><TD></TD><TD></TD><TD><FONT face=Arial
size=2><B>Hours Used</B></FONT></TD><TD align=middle><FONT face=Arial size=2><B>Total</B></FONT></TD><TD><FONT face=Arial size=2><B>Hours Used</B></FONT></TD><TD align=middle><FONT face=Arial size=2>
<B>Total</B></FONT></TD></TR><TR><TD><FONT face=Arial size=2><B>Appliance/Electronics</B></FONT></TD><TD align=middle><FONT face=Arial size=2><B>Watts</B></FONT></TD><TD><FONT face=Arial size=2><B>
Amps</B></FONT></TD><TD align=middle><FONT face=Arial size=2><B>at Anchor</B></FONT></TD><TD><FONT face=Arial size=2><B>at Anchor</B></FONT></TD><TD align=middle><FONT face=Arial size=2><B>Underway</
B></FONT></TD><TD><FONT face=Arial size=2><B>Underway</B></FONT></TD></TR><TR><TD>Running Lights</TD><TD> </TD><TD align=right>3.00</TD><TD align=right>0.00</TD><TD align=right>0.00</TD><TD
align=right>10.00</TD><TD align=right>30.00</TD></TR><TR><TD>Reading Lights</TD><TD align=right>20.00</TD><TD align=right>1.57</TD><TD align=right>3.45</TD><TD align=right>5.41</TD><TD align=right>
0.00</TD><TD align=right>0.00</TD></TR><TR><TD>Anchor Light</TD><TD align=right>10.00</TD><TD align=right>0.78</TD><TD align=right>9.50</TD><TD align=right>7.45</TD><TD align=right>0.00</TD><TD align
=right>0.00</TD></TR><TR><TD>Navigation Instruments</TD><TD> </TD><TD align=right>0.50</TD><TD align=right>0.00</TD><TD align=right>0.00</TD><TD align=right>24.00</TD><TD align=right>12.00</TD>
</TR><TR><TD>Refrigerator</TD><TD> </TD><TD align=right>6.00</TD><TD align=right>8.50</TD><TD align=right>51.00</TD><TD align=right>8.00</TD><TD align=right>48.00</TD></TR><TR><TD>LPG Solenoid</
TD><TD align=right>16.00</TD><TD align=right>1.25</TD><TD align=right>1.40</TD><TD align=right>1.76</TD><TD align=right>0.50</TD><TD align=right>0.63</TD></TR><TR><TD>Microwave Popcorn</TD><TD align=
right>1200.00</TD><TD align=right>94.12</TD><TD align=right>0.10</TD><TD align=right>9.41</TD><TD align=right>0.00</TD><TD align=right>0.00</TD></TR><TR><TD colSpan=4><B>Totals</B></TD><TD align=
right>75.03</TD><TD> </TD><TD align=right>90.63</TD></TR><TR><TD colSpan=7>Denominator for Watts Conversion = 12.75</TD></TR></TBODY></TABLE><P></P>At the bottom of the sheet, add the two Totals
columns. Most sailors find they use much more juice in the Underway column than in the At Anchor. Boats with power-hungry electronics, such as radar, SSB radio and electronic charting often use two
or three times as much 12-volt power when sailing. <P></P><P>Keep your spreadsheet in your maintenance log. Add new appliances to the spreadsheet when you install them and review it occasionally to
see if your estimates on usage hours change.</P><P>Your spreadsheet is the basis to properly sizing all other components of your 12-volt system. Many experts advise having two to four times the
maximum amount of amperage used daily in each battery bank. For the sample spreadsheet, where 90 amps were used in 24 hours of sailing, this would result in banks of 180-to-360-amp hours. Perhaps
270-amp-hour banks would be a good compromise. These batteries would be one-third drawn down underway, but only one-fourth depleted in 24 hours at anchor.</P><P>Our next two articles will deal with
proper sizing of charging systems based on your spreadsheet. </P></HTML> | {"url":"http://www.sailnet.com/forums/gear-maintenance-articles/19585-creating-12-volt-spreadsheet-print.html","timestamp":"2014-04-16T20:39:52Z","content_type":null,"content_length":"15512","record_id":"<urn:uuid:e8eb1bbc-c8ad-4ca2-b58e-7a0db2a49e09>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
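For readers who prefer to script the arithmetic, here is a minimal Python sketch of the spreadsheet using the article's example figures (the appliance list is illustrative, and amps are carried unrounded rather than rounded per row):

```python
# Minimal 12-volt load spreadsheet in code, using the article's example
# appliances. Watts are converted to amps with the article's 12.75-volt
# denominator; totals are daily amp-hours for each column.

VOLTS = 12.75

def watts_to_amps(watts):
    return watts / VOLTS

# (appliance, amps, hours at anchor, hours underway)
loads = [
    ("Running lights",  3.00,                0.00, 10.0),
    ("Reading lights",  watts_to_amps(20),   3.45,  0.0),
    ("Anchor light",    watts_to_amps(10),   9.50,  0.0),
    ("Nav instruments", 0.50,                0.00, 24.0),
    ("Refrigerator",    6.00,                8.50,  8.0),
    ("LPG solenoid",    watts_to_amps(16),   1.40,  0.5),
    ("Microwave",       watts_to_amps(1200), 0.10,  0.0),
]

anchor_total = sum(amps * anchor for _, amps, anchor, _ in loads)
underway_total = sum(amps * underway for _, amps, _, underway in loads)

print(f"At anchor: {anchor_total:.2f} amp-hours/day")   # 75.03, matching the table
print(f"Underway:  {underway_total:.2f} amp-hours/day") # 90.63, matching the table
```

With these figures the two-to-four-times sizing rule then points at a battery bank of roughly 180 to 360 amp-hours, as in the article.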
Combining effects: sum and tensor
Results 1 - 10 of 26
, 2007
"... Lawvere theories and monads have been the two main category theoretic formulations of universal algebra, Lawvere theories arising in 1963 and the connection with monads being established a few
years later. Monads, although mathematically the less direct and less malleable formulation, rapidly gained ..."
Cited by 12 (0 self)
Lawvere theories and monads have been the two main category theoretic formulations of universal algebra, Lawvere theories arising in 1963 and the connection with monads being established a few years
later. Monads, although mathematically the less direct and less malleable formulation, rapidly gained precedence. A generation later, the definition of monad began to appear extensively in
theoretical computer science in order to model computational effects, without reference to universal algebra. But since then, the relevance of universal algebra to computational effects has been
recognised, leading to renewed prominence of the notion of Lawvere theory, now in a computational setting. This development has formed a major part of Gordon Plotkin’s mature work, and we study its
history here, in particular asking why Lawvere theories were eclipsed by monads in the 1960’s, and how the renewed interest in them in a computer science setting might develop in future.
"... We develop a model of concurrent imperative programming with threads. We focus on a small imperative language with cooperative threads which execute without interruption until they terminate or
explicitly yield control. We define and study a trace-based denotational semantics for this language; this ..."
Cited by 10 (2 self)
We develop a model of concurrent imperative programming with threads. We focus on a small imperative language with cooperative threads which execute without interruption until they terminate or
explicitly yield control. We define and study a trace-based denotational semantics for this language; this semantics is fully abstract but mathematically elementary. We also give an equational theory
for the computational effects that underlie the language, including thread spawning. We then analyze threads in terms of the free algebra monad for this theory. 1
- In ESOP ’09: Proceedings of the 18th European Symposium on Programming Languages and Systems , 2009
"... Abstract. During the last two decades, monads have become an indispensable tool for structuring functional programs with computational effects. In this setting, the mathematical notion of a
monad is extended with operations that allow programmers to manipulate these effects. When several effects are ..."
Cited by 8 (2 self)
Abstract. During the last two decades, monads have become an indispensable tool for structuring functional programs with computational effects. In this setting, the mathematical notion of a monad is
extended with operations that allow programmers to manipulate these effects. When several effects are involved, monad transformers can be used to build up the required monad one effect at a time.
Although this seems to be modularity nirvana, there is a catch: in addition to the construction of a monad, the effect-manipulating operations need to be lifted to the resulting monad. The
traditional approach for lifting operations is nonmodular and ad-hoc. We solve this problem with a principled technique for lifting operations that makes monad transformers truly modular. 1
- Inf. & Comp , 2006
"... We give a coalgebraic formulation of timed processes and their operational semantics. We model time by a monoid called a “time domain”, and we model processes by “timed transition systems”,
which amount to partial monoid actions of the time domain or, equivalently, coalgebras for an “evolution comon ..."
Cited by 8 (1 self)
We give a coalgebraic formulation of timed processes and their operational semantics. We model time by a monoid called a “time domain”, and we model processes by “timed transition systems”, which
amount to partial monoid actions of the time domain or, equivalently, coalgebras for an “evolution comonad ” generated by the time domain. All our examples of time domains satisfy a partial closure
property, yielding a distributive law of a monad for total monoid actions over the evolution comonad, and hence a distributive law of the evolution comonad over a dual comonad for total monoid
actions. We show that the induced coalgebras are exactly timed transition systems with delay operators. We then integrate our coalgebraic formulation of time qua timed transition systems into Turi
and Plotkin’s formulation of structural operational semantics in terms of distributive laws. We combine timing with action via the more general study of the combination of two arbitrary sorts of
behaviour whose operational semantics may interact. We give a modular account of the operational semantics for a combination induced by that of each of its components. Our study necessitates the
investigation of products of comonads. In particular, we characterise when a monad lifts to the category of coalgebras for a product comonad, providing constructions with which one can readily
calculate. Key words: time domains, timed transition systems, evolution comonads, delay operators, structural operational semantics, modularity, distributive laws 1
, 2008
"... Certain principles are fundamental to operational semantics, regardless of the languages or idioms involved. Such principles include rule-based definitions and proof techniques for congruence
results. We formulate these principles in the general context of categorical logic. From this general formul ..."
Cited by 7 (6 self)
Certain principles are fundamental to operational semantics, regardless of the languages or idioms involved. Such principles include rule-based definitions and proof techniques for congruence
results. We formulate these principles in the general context of categorical logic. From this general formulation we recover precise results for particular language idioms by interpreting the logic
in particular categories. For instance, results for first-order calculi, such as CCS, arise from considering the general results in the category of sets. Results for languages involving substitution
and name generation, such as the π-calculus, arise from considering the general results in categories of sheaves and group actions. As an extended example, we develop a tyft/tyxt-like rule format for
open bisimulation in the π-calculus.
"... We provide a syntactic analysis of contextual preorder and equivalence for a polymorphic programming language with effects. Our approach applies uniformly to arbitrary algebraic effects, and
thus incorporates, as instances: errors, input/output, global state, nondeterminism, probabilistic choice, an ..."
Cited by 7 (0 self)
We provide a syntactic analysis of contextual preorder and equivalence for a polymorphic programming language with effects. Our approach applies uniformly to arbitrary algebraic effects, and thus
incorporates, as instances: errors, input/output, global state, nondeterminism, probabilistic choice, and combinations thereof. Our approach is to extend Plotkin and Power’s structural operational
semantics for algebraic effects (FoSSaCS 2001) with a primitive “basic preorder ” on ground type computation trees. The basic preorder is used to derive notions of contextual preorder and equivalence
on program terms. Under mild assumptions on this relation, we prove fundamental properties of contextual preorder (hence equivalence) including extensionality properties, a characterisation via
applicative contexts, and machinery for reasoning about polymorphism using relational parametricity. 1.
, 2006
"... In this paper, we look at two categorical accounts of computational effects (strong monad as a model of the monadic metalanguage, adjunction as a model of call-by-push-value with stacks), and we
adapt them to incorporate global exceptions. In each case, we extend the calculus with a construct, due t ..."
Cited by 6 (1 self)
In this paper, we look at two categorical accounts of computational effects (strong monad as a model of the monadic metalanguage, adjunction as a model of call-by-push-value with stacks), and we
adapt them to incorporate global exceptions. In each case, we extend the calculus with a construct, due to Benton and Kennedy, that fuses exception handling with sequencing. This immediately gives us
an equational theory, simply by adapting the equations for sequencing. We study the categorical semantics of the two equational theories. In the case of the monadic metalanguage, we see that a monad
supporting exceptions is a coalgebra for a certain comonad. We further show, using Beck’s theorem, that, on a category with equalizers, the monad constructor for exceptions gives all such monads. In
the case of call-by-push-value (CBPV) with stacks, we generalize the notion of CBPV adjunction so that a stack awaiting a value can deal both with a value being returned, and with an exception being
raised. We see how to obtain a model of exceptions from a CBPV adjunction, and vice versa by restricting to those stacks that are homomorphic with respect to exception raising.
- In POPL , 2012
"... We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a
development of Moggi’s monadic theory of computational effects that emphasises the operations causing the e ..."
Cited by 6 (1 self)
We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a
development of Moggi’s monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects
can be identified with operation symbols. We develop an annotated version of Levy’s Call-by-Push-Value language with a kind of computations for every effect set; it can be thought of as a sequential,
annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as
structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give
modularly-checkable sufficient conditions for their validity).
, 2009
"... Most often, in a categorical semantics for a programming language, the substitution of terms is expressed by composition and finite products. However this does not deal with the order of
evaluation of arguments, which may have major consequences when there are side-effects. In this paper Cartesian e ..."
Cited by 5 (5 self)
Most often, in a categorical semantics for a programming language, the substitution of terms is expressed by composition and finite products. However this does not deal with the order of evaluation
of arguments, which may have major consequences when there are side-effects. In this paper Cartesian effect categories are introduced for solving this issue, and they are compared with strong monads,
Freyd-categories and Haskell’s Arrows. It is proved that a Cartesian effect category is a Freyd-category where the premonoidal structure is provided by a kind of binary product, called the sequential
product. The universal property of the sequential product provides Cartesian effect categories with a powerful tool for constructions and proofs. To our knowledge, both effect categories and
sequential products are new notions. Keywords. Categorical logic, computational effects, monads, Freyd-categories, premonoidal categories, Arrows, sequential product, effect categories, Cartesian
effect categories.
"... Abstract. Every algebraic theory gives rise to a monad, and monads allow a meta-language which is a basic programming language with side-effects. Equations in the algebraic theory give rise to
equations between programs in the meta-language. An interesting question is this: to what extent can we put ..."
Cited by 4 (1 self)
Abstract. Every algebraic theory gives rise to a monad, and monads allow a meta-language which is a basic programming language with side-effects. Equations in the algebraic theory give rise to
equations between programs in the meta-language. An interesting question is this: to what extent can we put equational reasoning for programs into the algebraic theory for the monad? In this paper I
focus on local state, where programs can allocate, update and read the store. Plotkin and Power (FoSSaCS’02) have proposed an algebraic theory of local state, and they conjectured that the theory is
complete, in the sense that every consistent equation is already derivable. The central contribution of this paper is to confirm this conjecture. To establish the completeness theorem, it is
necessary to reformulate the informal theory of Plotkin and Power as an enriched algebraic theory in the sense of Kelly and Power (JPAA, 89:163–179). The new presentation can be read as 14 program
assertions about three effects. The completeness theorem for local state is dependent on certain conditions on the type of storable values. When the set of storable values is finite, there is a
subtle additional axiom regarding quotient types. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.62.7988","timestamp":"2014-04-20T17:33:38Z","content_type":null,"content_length":"39291","record_id":"<urn:uuid:cadec614-5cb5-40e1-be35-c51588eb0d82>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Specification Searches
Results 1 - 10 of 75
- Journal of the American Statistical Association , 1997
"... We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation
of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem in ..."
Cited by 184 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of
uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making
inferences about quantities of
, 1995
"... this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational
advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the ..."
Cited by 108 (0 self)
this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and
examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
, 2005
"... Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing
ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distr ..."
Cited by 71 (28 self)
Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing
ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of
any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the
forecasts and reflect the models’ relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and this can be used
as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating
from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability, and the second to the within-forecast
variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of
ensembles to exhibit a spread-error correlation but yet
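The weighted-average construction this abstract describes can be sketched in a few lines of Python (the forecasts, weights, and spread are invented for illustration; this is not code from the paper):

```python
# Sketch of the BMA predictive PDF described above: a weighted average of
# normal PDFs centered on bias-corrected member forecasts. The forecasts,
# weights, and spread are invented for illustration.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

forecasts = [14.2, 15.1, 13.8]  # bias-corrected member forecasts (e.g. temperature)
weights = [0.5, 0.3, 0.2]       # posterior model probabilities (sum to 1)
sigma = 1.0                     # within-forecast spread

def bma_pdf(x):
    return sum(w * normal_pdf(x, f, sigma) for w, f in zip(weights, forecasts))

# The predictive variance splits into between-forecast and within-forecast parts,
# mirroring the decomposition mentioned in the abstract:
mean = sum(w * f for w, f in zip(weights, forecasts))
between = sum(w * (f - mean) ** 2 for w, f in zip(weights, forecasts))
predictive_var = between + sigma ** 2
```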
- American Economic Review , 2004
"... Both textbook economics and common sense teach us that the value of household wealth should be related to consumer spending. Early academic work by Franco Modigliani (1971) suggested that a
dollar increase in wealth (holding fixed labor income) leads to an increase in consumer spending of about five ..."
Cited by 61 (4 self)
Both textbook economics and common sense teach us that the value of household wealth should be related to consumer spending. Early academic work by Franco Modigliani (1971) suggested that a dollar
increase in wealth (holding fixed labor income) leads to an increase in consumer spending of about five cents. Since then, the so-called "wealth effect" on consumption has increasingly crept into
both mainstream and policy discussions of the macroeconomy. 1 Today, it is commonly presumed that significant movements in wealth will be associated with movements in consumer spending, either
contemporaneously or subsequently. Quantitative estimates of roughly the magnitude reported by Modigliani are routinely cited in
- Handbook of Economic Forecasting , 2006
"... Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple
combinations that ignore correlations between forecast errors often dominate more refined combination sch ..."
Cited by 50 (3 self)
Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple
combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this chapter we
analyze theoretically the factors that determine the advantages from combining forecasts (for example, the degree of correlation between forecast errors and the relative size of the individual
models’ forecast error variances). Although the reasons for the success of simple combination schemes are poorly understood, we discuss several possibilities related to model misspecification,
instability (non-stationarities) and estimation error in situations where the number of models is large relative to the available sample size. We discuss the role of combinations under asymmetric loss and
consider combinations of point, interval and probability forecasts. Key words: Forecast combinations; pooling and trimming; shrinkage methods; model misspecification, diversification gains
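The textbook two-forecast case illustrates how error correlation and relative error variances determine the gains from combining (standard material, not code from this chapter; the numbers are illustrative):

```python
# Textbook two-forecast combination: the weight on forecast 1 that minimizes
# the combined error variance, given error standard deviations s1, s2 and
# error correlation rho.

def optimal_weight(s1, s2, rho):
    return (s2 ** 2 - rho * s1 * s2) / (s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)

def combined_var(w, s1, s2, rho):
    # Var(w*e1 + (1-w)*e2) for error terms e1, e2
    return w ** 2 * s1 ** 2 + (1 - w) ** 2 * s2 ** 2 + 2 * w * (1 - w) * rho * s1 * s2

s1, s2, rho = 1.0, 1.5, 0.3  # illustrative numbers
w = optimal_weight(s1, s2, rho)
# The optimal combination is never worse than the better individual forecast;
# here it beats both the best model (variance 1.0) and the equal-weight average.
print(w, combined_var(w, s1, s2, rho), combined_var(0.5, s1, s2, rho))
```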
, 1993
"... We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads
to the underestimation of uncertainty when making inferences about quantities of interest. The complete B ..."
Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to
the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making
inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called
"Occam's Window" which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model
averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is
no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We
refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small
number of models including the null model, thus largely resolving the paradox.
- JOURNAL OF ECONOMETRICS 95 (2000) 391-413 , 2000
"... This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the
problem in the statistics and econometrics literature. ..."
Cited by 46 (0 self)
This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the problem in
the statistics and econometrics literature.
- Monthly Weather Review , 2007
"... and useful comments, and for providing data. They are also grateful to Patrick Tewson for implementing the UW Ensemble BMA website. This research was supported by the DoD Multidisciplinary
University Research Initiative (MURI) program administered by the Office of Naval Research under Grant N00014-0 ..."
Cited by 32 (20 self)
and useful comments, and for providing data. They are also grateful to Patrick Tewson for implementing the UW Ensemble BMA website. This research was supported by the DoD Multidisciplinary University
Research Initiative (MURI) program administered by the Office of Naval Research under Grant N00014-01-10745. Bayesian model averaging (BMA) is a statistical way of postprocessing forecast ensembles
to create predictive probability density functions (PDFs) for weather quantities. It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts,
where the weights are posterior probabilities of the models generating the forecasts and reflect the forecasts’ relative contributions to predictive skill over a training period. It was developed
initially for quantities whose PDFs can be approximated by normal distributions, such as temperature and sea-level pressure. BMA does not apply in its original form to precipitation, because the
predictive PDF of precipitation is nonnormal in two major ways: it has a positive probability of being equal to zero, and it is skewed. Here we extend BMA to probabilistic quantitative precipitation
forecasting. The predictive PDF corresponding to
- Brookings Papers on Economic Activity
Cited by 31 (5 self)
It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical
translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on
his a priori, that would make a difference to the outcome. 1 This paper describes some approaches to macroeconomic policy evaluation in the presence of uncertainty about the structure of the economic
environment under study. The perspective we discuss is designed to facilitate policy evaluation for several forms of uncertainty. For example, our approach may be used when an analyst is unsure about
the appropriate economic theory that should be assumed to apply, or about the particular functional forms that translate a general theory into a form amenable to statistical analysis. As such, the
methods we describe are, we believe, particularly useful in a range of macroeconomic contexts where fundamental disagreements exist as to the determinants of the problem under study. In addition,
this approach recognizes that even if economists agree on the
- Journal of Finance, 1998
Cited by 26 (3 self)
Economics is primarily a non-experimental science. Typically, we cannot generate new data sets on which to test hypotheses independently of the data that may have led to a particular theory. The
common practice of using the same data set to formulate and test hypotheses introduces data-snooping biases that, if not accounted for, invalidate the assumptions underlying classical statistical
inference. A striking example of a datadriven discovery is the presence of calendar effects in stock returns. There appears to be very substantial evidence of systematic abnormal stock returns
related to the day of the week, the week of the month, the month of the year, the turn of the month, holidays, and so forth. However, this evidence has largely been considered without accounting for
the intensive search preceding it. In this paper we use 100 years of daily data and a new bootstrap procedure that allows us to explicitly measure the distortions in statistical inference induced by
data-snooping. We find that although nominal P-values of individual calendar rules are extremely significant, once evaluated in the context of the full universe from which such rules were drawn,
calendar effects no longer remain significant. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1326489","timestamp":"2014-04-18T08:31:12Z","content_type":null,"content_length":"40134","record_id":"<urn:uuid:0e478aee-df0d-4f94-9ab2-82a14a46b637>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenVMS RTL General Purpose (OTS$) Manual
Order Number: AA--PV6HD--TK
April 2001
This manual documents the general-purpose routines contained in the OTS$ facility of the OpenVMS Run-Time Library.
Revision/Update Information: This manual supersedes the OpenVMS RTL General Purpose (OTS$) Manual, OpenVMS Alpha Version 7.1 and OpenVMS VAX Version 7.1.
Software Version: OpenVMS Alpha Version 7.3 OpenVMS VAX Version 7.3
Compaq Computer Corporation
Houston, Texas
© 2001 Compaq Computer Corporation
Compaq, VAX, VMS, and the Compaq logo Registered in U.S. Patent and Trademark Office.
OpenVMS is a trademark of Compaq Information Technologies Group, L.P. in the United States and other countries.
All other product names mentioned herein may be trademarks of their respective companies.
Confidential computer software. Valid license from Compaq required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Compaq shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided "as is" without warranty of any kind and is subject to change
without notice. The warranties for Compaq products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an
additional warranty.
The Compaq OpenVMS documentation set is available on CD-ROM.
This manual provides users of the OpenVMS operating system with detailed usage and reference information on general-purpose routines supplied in the OTS$ facility of the Run-Time Library.
Intended Audience
This manual is intended for system and application programmers who write programs that call OTS$ Run-Time Library routines.
Document Structure
This manual is organized into two parts as follows:
• Part 1 contains a brief overview of the OTS$ routines in Chapter 1.
• Part 2, the OTS$ Reference Section, provides detailed reference information on each routine contained in the OTS$ facility of the Run-Time Library. This information is presented using the
documentation format described in OpenVMS Programming Concepts Manual. Routine descriptions appear in alphabetical order by routine name.
Related Documents
The Run-Time Library routines are documented in a series of reference manuals. A description of how the Run-Time Library routines are accessed and of OpenVMS features and functionality available
through calls to the OTS$ Run-Time Library appears in the OpenVMS Programming Concepts Manual. Descriptions of other RTL facilities and their corresponding routines and usages are discussed in the
following books:
• Compaq Portable Mathematics Library
• OpenVMS VAX RTL Mathematics (MTH$) Manual
• OpenVMS RTL DECtalk (DTK$) Manual^1
• OpenVMS RTL Library (LIB$) Manual
• OpenVMS RTL Parallel Processing (PPL$) Manual^1
• OpenVMS RTL Screen Management (SMG$) Manual
• OpenVMS RTL String Manipulation (STR$) Manual
The Guide to the POSIX Threads Library contains guidelines and reference information for Compaq POSIX Threads^2, the Compaq Multithreading Run-Time Library.
Application programmers using any programming language can refer to the Guide to Creating OpenVMS Modular Procedures for writing modular and reentrant code.
High-level language programmers will find additional information on calling Run-Time Library routines in their language reference manual. Additional information may also be found in the language
user's guide provided with your OpenVMS language software.
For a complete list and description of the manuals in the OpenVMS documentation set, see the OpenVMS Version 7.3 New Features and Documentation Overview.
For additional information about Compaq OpenVMS products and services, access the Compaq website at the following location:
^1 This manual has been archived but is available on the OpenVMS Documentation CD-ROM.
^2 Compaq POSIX Threads was formerly called DECthreads.
Reader's Comments
Compaq welcomes your comments on this manual. Please send comments to either of the following addresses:
Internet: openvmsdoc@compaq.com

Mail: Compaq Computer Corporation, OSSG Documentation Group, ZKO3-4/U08, 110 Spit Brook Rd., Nashua, NH 03062-2698
How To Order Additional Documentation
Use the following World Wide Web address to order additional documentation:
If you need help deciding which documentation best meets your needs, call 800-282-6672.
The following conventions are used in this manual:
Ctrl/x
    A sequence such as Ctrl/x indicates that you must hold down the key labeled Ctrl while you press another key or a pointing device button.

PF1 x
    A sequence such as PF1 x indicates that you must first press and release the key labeled PF1 and then press and release another key or a pointing device button.

[Return]
    In examples, a key name enclosed in a box indicates that you press a key on the keyboard. (In text, a key name is not enclosed in a box.) In the HTML version of this document, this convention appears as brackets, rather than a box.

...
    A horizontal ellipsis in examples indicates one of the following possibilities:
    • Additional optional arguments in a statement have been omitted.
    • The preceding item or items can be repeated one or more times.
    • Additional parameters, values, or other information can be entered.

.
.
.
    A vertical ellipsis indicates the omission of items from a code example or command format; the items are omitted because they are not important to the topic being discussed.

( )
    In command format descriptions, parentheses indicate that you must enclose choices in parentheses if you specify more than one.

[ ]
    In command format descriptions, brackets indicate optional choices. You can choose one or more items or no items. Do not type the brackets on the command line. However, you must include the brackets in the syntax for OpenVMS directory specifications and for a substring specification in an assignment statement.

{ }
    In command format descriptions, braces indicate a required choice of options; you must choose one of the options listed. Do not type the braces on the command line.

bold text
    This typeface represents the introduction of a new term. It also represents the name of an argument, an attribute, or a reason.

italic text
    Italic text indicates important information, complete titles of manuals, or variables. Variables include information that varies in system output (Internal error number), in command lines (/PRODUCER=name), and in command parameters in text (where dd represents the predefined code for the device type).

UPPERCASE TEXT
    Uppercase text indicates a command, the name of a routine, the name of a file, or the abbreviation for a system privilege.

Monospace text
    Monospace type indicates code examples and interactive screen displays. In the C programming language, monospace type in text identifies the following elements: keywords, the names of independently compiled external functions and files, syntax summaries, and references to variables or identifiers introduced in an example.

-
    A hyphen at the end of a command format description, command line, or code line indicates that the command or statement continues on the following line.

numbers
    All numbers in text are assumed to be decimal unless otherwise noted. Nondecimal radixes---binary, octal, or hexadecimal---are explicitly indicated.
Part 1
OTS$ Overview
This part of the OpenVMS RTL General Purpose (OTS$) Manual contains a general overview of the routines provided by the OpenVMS RTL General Purpose (OTS$) Facility, and lists them by function.
Chapter 1
Run-Time Library General Purpose (OTS$) Facility
This chapter describes the OpenVMS Run-Time Library General Purpose (OTS$) facility. See the OTS$ Reference Section for a detailed description of each routine within the OTS$ facility.
Most of the OTS$ routines were originally designed to support language compilers. Because they perform general-purpose functions, the routines were moved into the language-independent facility, OTS$.
1.1 Overview
The Run-Time Library General Purpose (OTS$) facility provides routines to perform general-purpose functions. These functions include data type conversions as part of a compiler's generated code, and
some mathematical functions.
The OTS$ facility contains routines to perform the following main tasks:
Some restrictions apply if you link certain OTS$ routines on an Alpha system. See Section 1.2 for more information about these restrictions.
Table 1-1 OTS$ Conversion Routines
┃Routine Name│ Function ┃
┃OTS$CNVOUT │Convert a D-floating, G-floating, or H-floating value to a character string. ┃
┃OTS$CVT_L_TB│Convert an unsigned integer to binary text. ┃
┃OTS$CVT_L_TI│Convert a signed integer to signed integer text. ┃
┃OTS$CVT_L_TL│Convert an integer to logical text. ┃
┃OTS$CVT_L_TO│Convert an unsigned integer to octal text. ┃
┃OTS$CVT_L_TU│Convert an unsigned integer to decimal text. ┃
┃OTS$CVT_L_TZ│Convert an integer to hexadecimal text. ┃
┃OTS$CVT_TB_L│Convert binary text to an unsigned integer value. ┃
┃OTS$CVT_TI_L│Convert signed integer text to an integer value. ┃
┃OTS$CVT_TL_L│Convert logical text to an integer value. ┃
┃OTS$CVT_TO_L│Convert octal text to an unsigned integer value. ┃
┃OTS$CVT_TU_L│Convert unsigned decimal text to an integer value. ┃
┃OTS$CVT_T_ x│Convert numeric text to a D-, F-, G-, or H-floating value. ┃
┃OTS$CVT_TZ_L│Convert hexadecimal text to an unsigned integer value. ┃
For more information on Run-Time Library conversion routines, see the CVT$ reference section in the OpenVMS RTL Library (LIB$) Manual.
Table 1-2 OTS$ Division Routines
┃ Routine Name │ Function ┃
┃OTS$DIVC x │Perform complex division. ┃
┃OTS$DIV_PK_LONG │Perform packed decimal division with a long divisor. ┃
┃OTS$DIV_PK_SHORT│Perform packed decimal division with a short divisor. ┃
Table 1-5 OTS$ Exponentiation Routines
┃Routine Name │ Function ┃
┃OTS$POWC xC x│Raise a complex base to a complex floating-point exponent. ┃
┃OTS$POWC xJ │Raise a complex base to a signed longword exponent. ┃
┃OTS$POWDD │Raise a D-floating base to a D-floating exponent. ┃
┃OTS$POWDR │Raise a D-floating base to an F-floating exponent. ┃
┃OTS$POWDJ │Raise a D-floating base to a longword integer exponent. ┃
┃OTS$POWGG │Raise a G-floating base to a G-floating or longword integer exponent. ┃
┃OTS$POWGJ │Raise a G-floating base to a longword integer exponent. ┃
┃+OTS$POWHH_R3│Raise an H-floating base to an H-floating exponent. ┃
┃+OTS$POWHJ_R3│Raise an H-floating base to a longword integer exponent. ┃
┃OTS$POWII │Raise a word integer base to a word integer exponent. ┃
┃OTS$POWJJ │Raise a longword integer base to a longword integer exponent. ┃
┃OTS$POWLULU │Raise an unsigned longword integer base to an unsigned longword integer exponent. ┃
┃OTS$POW xLU │Raise a floating-point base to an unsigned longword integer exponent. ┃
┃OTS$POWRD │Raise an F-floating base to a D-floating exponent. ┃
┃OTS$POWRJ │Raise an F-floating base to a longword integer exponent. ┃
┃OTS$POWRR │Raise an F-floating base to an F-floating exponent. ┃
+VAX specific.
Table 1-6 OTS$ Copy Source String Routines
┃ Routine Name │ Function ┃
┃OTS$SCOPY_DXDX│Copy a source string passed by descriptor to a destination string. ┃
┃OTS$SCOPY_R_DX│Copy a source string passed by reference to a destination string. ┃
1.2 Linking OTS$ Routines on an Alpha System
On Alpha systems, if you use the OTS$ entry points for certain mathematics routines, you must link against the DPML$SHR.EXE library. Alternately, you can use the equivalent math$ entry point for the
routine and link against either STARLET.OBJ or the DPML$SHR.EXE library. Math$ entry points are available only on Alpha systems.
Table 1-8 lists the affected OTS$ entry points with their equivalent math$ entry points. Refer to the Compaq Portable Mathematics Library for information about the math$ entry points.
Table 1-8 OTS$ and Equivalent Math$
Entry Points
┃OTS$ Entry Point │Math$ Entry Point ┃
┃OTS$DIVC │math$cdiv_f ┃
┃OTS$DIVCG_R3 │math$cdiv_g ┃
┃OTS$MULCG_R3 │math$cmul_g ┃
┃OTS$POWCC │math$cpow_f ┃
┃OTS$POWCGCG_R3 │math$cpow_g ┃
┃OTS$POWCJ │math$cpow_fq ┃
┃OTS$POWGG │math$pow_gg ┃
┃OTS$POWGJ │math$pow_gq ┃
┃OTS$POWGLU │math$pow_gq ┃
┃OTS$POWII │math$pow_qq ┃
┃OTS$POWJJ │math$pow_qq ┃
┃OTS$POWLULU │math$pow_qq ┃
┃OTS$POWRJ │math$pow_fq ┃
┃OTS$POWRLU │math$pow_fq ┃
┃OTS$POWRR │math$pow_ff ┃
1.2.1 64-Bit Addressing Support (Alpha Only)
On Alpha systems, the General Purpose (OTS$) routines provide 64-bit virtual addressing capabilities as follows:
• All OTS$ RTL routines accept 64-bit addresses for arguments passed by reference.
• All OTS$ RTL routines also accept either 32-bit or 64-bit descriptors for arguments passed by descriptor.
The OTS$ routines declared in ots$routines.h do not include prototypes for 64-bit data. You must provide your own generic prototypes for any OTS$ functions you use.
See the OpenVMS Programming Concepts Manual for more information about 64-bit virtual addressing capabilities. | {"url":"http://h71000.www7.hp.com/doc/73final/5933/5933pro.html","timestamp":"2014-04-20T08:17:52Z","content_type":null,"content_length":"31398","record_id":"<urn:uuid:e770f47f-6ec9-4f4f-a018-6d1859cd8bc7>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deconstructing Larry Wasserman
Larry Wasserman (“Normal Deviate”) has announced he will stop blogging (for now at least). That means we’re losing one of the wisest blog-voices on issues relevant to statistical foundations (among
many other areas in statistics). Whether this lures him back or reaffirms his decision to stay away, I thought I’d reblog my (2012) “deconstruction” of him (in relation to a paper linked below)[i]
Deconstructing Larry Wasserman [i] by D. Mayo
The temptation is strong, but I shall refrain from using the whole post to deconstruct Al Franken’s 2003 quip about media bias (from Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at
the Right), with which Larry Wasserman begins his paper “Low Assumptions, High Dimensions” (2011) in his contribution to Rationality, Markets and Morals (RMM) Special Topic: Statistical Science and
Philosophy of Science:
Wasserman: There is a joke about media bias from the comedian Al Franken:
‘To make the argument that the media has a left- or right-wing, or a liberal or a conservative bias, is like asking if the problem with Al-Qaeda is: do they use too much oil in their hummus?’
According to Wasserman, “a similar comment could be applied to the usual debates in the foundations of statistical inference.”
Although it’s not altogether clear what Wasserman means by his analogy with comedian (now senator) Franken, it’s clear enough what Franken meant if we follow up the quip with the next sentence in his
text (which Wasserman omits): “The problem with al Qaeda is that they’re trying to kill us!” (p. 1). The rest of Franken’s opening chapter is not about al Qaeda but about bias in media.
Conservatives, he says, decry what they claim is a liberal bias in mainstream media. Franken rejects their claim.
The mainstream media does not have a liberal bias. And for all their other biases . . . , the mainstream media . . . at least try to be fair. …There is, however, a right-wing media. . . . They
are biased. And they have an agenda…The members of the right-wing media are not interested in conveying the truth… . They are an indispensable component of the right-wing machine that has taken
over our country… . We have to be vigilant. And we have to be more than vigilant. We have to fight back… . Let’s call them what they are: liars. Lying, lying, liars. (Franken, pp. 3-4)
When I read this in 2004 (when Bush was in office), I couldn’t have agreed more. How things change*. Now, of course, any argument that swerves from the politically correct is by definition unsound,
irrelevant, and/or biased.[ii]
But what does this have to do with Bayesian-frequentist foundations? What is Wasserman, deep down, really trying to tell us by way of this analogy (if only subliminally)? Such are my ponderings—and
thus this deconstruction. (I will invite your “U-Phils” at the end[a].) I will allude to passages from my contribution to RMM (2011) http://www.rmm-journal.de/htdocs/st01.html (in red).
A. What Is the Foundational Issue?
Wasserman: To me, the most pressing foundational question is: how do we reconcile the two most powerful needs in modern statistics: the need to make methods assumption free and the need to make
methods work in high dimensions… . The Bayes-Frequentist debate is not irrelevant but it is not as central as it once was. (p. 201)
One may wonder why he calls this a foundational issue, as opposed to, say, a technical one. I will assume he means what he says and attempt to extract his meaning by looking through a foundational lens.
Let us examine the urgency of reconciling the need to make methods assumption-free and that of making them work in complex high dimensions. The problem of assumptions of course arises when they are
made about unknowns that can introduce threats of error and/or misuse of methods.
Wasserman: These days, statisticians often deal with complex, high dimensional datasets. Researchers in statistics and machine learning have responded by creating many new methods … . However,
many of these new methods depend on strong assumptions. The challenge of bringing low assumption inference to high dimensional settings requires new ways to think about the foundations of
statistics. (p. 201)
It is not clear if Wasserman thinks these new methods run into trouble as a result of unwarranted assumptions. This is a substantive issue about Wasserman’s applications that foundational discussions
are unlikely to answer. Still, he sees the issue as one of foundations, so I shall take him at his word.
The last decade or more has also given rise to many new problem areas that call for novel methods (e.g., machine learning). Do they call for new foundations? Or, can existing foundations be
relevant here too? (See Larry Wasserman’s contribution.) A lack of clarity on the foundations of existing methods tends to leave these new domains in foundational limbo. (Mayo 2011, 92)
I may seem to be at odds with Wasserman’s call to move on past frequentist-Bayesian debates:
Debates over the philosophical foundations of statistics have a long and fascinating history; the decline of a lively exchange between philosophers of science and statisticians is relatively
recent. Is there something special about 2011 (and beyond) that calls for renewed engagement in these fields? I say yes. (Mayo, p. 80)
Perhaps this may be Wasserman’s meaning: new types of problems and methods call for a more pragmatic perspective on learning from data. One cannot begin at the point at which different
interpretations of probability (Bayesian or frequentist) enter; so frequentist-Bayesian debates are not as central to current practice.
I would never claim there is any obstacle to practice in not having a clear statistical philosophy. But that is different from maintaining both that practice calls for recognition of underlying
foundational issues, while also denying Bayesian-frequentist issues are especially important to them. The fact is, key underlying issues come to the surface and are illuminated within
frequentist-Bayesian contrasts, as are issues surrounding objective/subjective, deduction/induction, and truth/idealizations, deliberately discussed on this blog. It may be insisted we are beyond
them, but they invariably lurk in the background, they are the elephants in the room.
We deliberately used ‘statistical science’ in our forum title because it may be understood broadly to include the full gamut of statistical methods, from experimental design, generation,
analysis, and modeling of data to using statistical inference to answer scientific questions. (Even more broadly, we might include a variety of formal but nonprobabilistic methods in computer
science and engineering, as well as machine learning.) (Mayo, p. 85)
B. Models Are Always Wrong
Wasserman: One then looks for adequate models rather than true models… . [A] distribution P is an adequate approximation for x1,…, xn, if typical data sets of size n, generated under P ‘look
like’ x1,…, xn. (p. 203)
The recognition that “the model is always wrong”–in the sense of being an idealization– was clear to the founders of “classical” statistics* (see relevant remarks from Cox, Fisher, and Neyman
elsewhere on this blog). Although this recognition discredits the idea that inference is all about assigning degrees of belief or confirmation to hypotheses and models, it supports the use of
probability in standard error statistics—or so I argue. One can learn true things from idealized models.
Wasserman: A more extreme example of using weak assumptions is to abandon probability completely… . Why are scholars in foundations ignoring this? (pp. 203-4)
By and large, the idea that data were literally “generated from a distribution is usually a fiction” (p. 203) is also not news to error statisticians; in a sense, observations are always
deterministic. Viewing the sample as if it were generated probabilistically may simply be to cope with incomplete information, and the incorrect inferences that can result. Probability is introduced
as attached to methods (which, in this example, would be for a type of prediction or classification tool).
The machine learners say that there is little need to understand what actually produced the numbers. Fine, then methods are apt that enable an increasingly successful error-rate reduction. Under
error statistics’ big umbrella, machine learning appears to fall under the subset of the philosophy of “inductive behavior,” the goals of which involve controlling/improving performance and setting
bounds for error rates, and trading off precision and accuracy where appropriate to the particular case. This is in contrast to the subset that is the main focus of my work: that which uses error
rates to assess and control how severely claims have passed tests. The latter are contexts of scientific inference. In the prediction-classification example, however, the error-rate guarantees are
just the ticket. (I would not rule out inferences about the case at hand.) Yet in the domains of both inductive behavior and scientific inference, the error statistician regards models as
approximations and idealizations, or, as Neyman saw them, “picturesque” ways of talking about actual experiments.
Wasserman has proved many intriguing results about the problems of and prospects for low-assumption methods. Whether methods that invoke assumptions could do better, perhaps along side these
(checking or making allowances later), is not something on which I can speculate. As complex as the classification prediction problems are, they enjoy an outcome that’s normally absent: we get to
find out if we’ve been successful. Background knowledge enters in qualitative ways, not obviously as prior probability distributions in parameters.
C. Is It Bayesian?
Wasserman: In principle, low assumption Bayesian inference is possible. We simply put a prior π on the set of all distributions P. The rest follows from Bayes theorem. But this is clearly
unsatisfactory. The resulting priors have no guarantees, except the solipsistic guarantee that the answer is consistent with the assumed prior. (p. 206) [iii]
One big reason some may turn aside from frequentist-Bayesian contrasts is that today even most Bayesians grant the importance of good performance characteristics (though their meaning may differ
distinctly). The traditional idea that statistical learning is well-captured by Bayes theorem is rarely upheld (we have seen exceptions, most recently Lindley, also Kadane) [iv].
Today’s debates clearly differ from the Bayesian-frequentist debates of old. In fact, some of those same discussants of statistical philosophy, who only a decade ago were arguing for the
‘irreconcilability’ of frequentist p-values and (Bayesian) measures of evidence, are now calling for ways to ‘unify’ or ‘reconcile’ frequentist and Bayesian accounts… .(Mayo p. 82)
In some cases the nonsubjective posteriors may have good error-statistical properties of the proper frequentist sort, at least in the asymptotic long run. But then another concern arises: If the
default Bayesian has merely given us technical tricks to achieve frequentist goals, as some suspect, then why consider them Bayesian (Cox 2006)? Wasserman (2008, 464) puts it bluntly: If the
Bayes’ estimator has good frequency-error probabilities, then we might as well use the frequentist method. If it has bad frequency behavior then we shouldn’t use it. (The situation is even more
problematic for those of us who insist on a relevant severity warrant.) (Mayo, p. 90)
Wasserman: [In other cases] the answers are usually checked against held out data. This is quite sensible but then this is Bayesian in form not substance. (p. 206)
In this context, insofar as I understand it, the goal is to be able to assess how well the rule can predict “test sets” and indicate an estimate of prediction error. The substance is of an
error-statistical kind: through various strategies (e.g., cross validation) we may learn approximately how well a predictive model will perform in cases other than those already used to fit the
model. It connects with a general set of strategies for preventing too-easy fits and avoiding (pejorative) double-counting, “over fitting,” and nongeneralizable results.
Deconstructing Wasserman
So where does this leave us in deconstructing Wasserman’s call for new-fangled foundations?
Franken deconstructed: Let us imagine Franken as representing a frequentist error statistician[v]. He begins by noting that while Bayesians may detect a frequentist bias (in certain circles), he
detects no such thing. Besides, such a quibble would be akin to worrying about Al-Qaeda using too much oil in their hummus!
Frequentists, he says, are at least trying to meet a fundamental scientific requirement for controlling error, and are open to any number of ways of accomplishing this. But Bayesians—at least
dyed-in-the-wool (or staunch subjective or “philosophical”) Bayesians—have an agenda, Franken is saying, by analogy. They charge frequentists with legitimating a hodgepodge of “incoherent” and
“inadmissible” methods; they say that frequentists care only for low error rates in the long run, have no way of incorporating background information, invariably misinterpret their own methods, and
top it all off with a litany of howlers (that the Bayesian easily avoids). If the discourse on frequentist foundations seems biased, our frequentist Franken continues, it is only to correct the many
blatant misinterpretations of its methods.
Now Wasserman comes in and utters the scientific equivalent of “Let’s move on.” (as with the Clinton scandal, which gave rise to MoveOn.org, i.e., “Get over it.”) The Bayesian requirements and
philosophy do not underwrite the substance of the most promising new complex methods. So if our focus is to justify, interpret, and extend these new contexts, we are allowed to leave the old
(frequentist-Bayesian) scandals behind. But, as Wasserman seems further to imply, finding oneself in an essentially frequentist, error-statistical world is not enough either, especially when it comes
to the kinds of complex classification and prediction problems of machine learning, data mining, and the like. At any rate, new foundational concerns must loom large….
So let me inject myself into the interpretive mix I’ve created.
I concur with the deconstructed Franken and Wasserman. Taking seriously Wasserman's intimation that there is not only a technical-statistical problem here (which only statisticians can solve), but
also a foundational problem, he seeks a ground for applications where probabilistic bounds, however crude, do not directly describe a data-generating mechanism, but assess/reduce/balance procedural
error rates.
The “long-run” relative frequencies have probabilistic implications for bounding the next test set. The old accusation that good error-statistical properties are irrelevant to the case at hand goes
by the wayside. Anyone who takes a broad view of error-statistical methods would have no problem finding a home for the variety of methods of creative control and assessment of approximate sampling
distributions and error rates. This falls more clearly under what may be called a "behavioristic" context than one of scientific inference (though the latter is not precluded). It would require
breaking out of traditional notions of frequentist statistics and in so doing simultaneously scotch the oft-repeated howlers.[vi]
Ironically many seem prepared to allow that Bayesianism still gets it right for epistemology, even as statistical practice calls for methods more closely aligned with frequentist principles. What
I would like the reader to consider is that what is right for epistemology is also what is right for statistical learning in practice. That is, statistical inference in practice deserves its own
epistemology. (Mayo, p. 100)
Constructing such a framework, would be one payoff of genuinely transcending the frequentist-Bayesian debates, rather than rendering them taboo, or closed.
Cox, D. R. 2006, Principles of Statistical Inference, Cambridge: Cambridge University Press.
Gelman, A. and C. Shalizi. (Article first published online: 24 FEB 2012). "Philosophy and the Practice of Bayesian Statistics (with discussion)". British Journal of Mathematical and Statistical Psychology.
Mayo, D. (2011), “Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)?” RMM Vol. 2, 2011, 79–102
Wasserman, L. 2006. "Frequentist Bayes is Objective". Bayesian Analysis 1(3): 451-456. URL http://ba.stat.cmu.edu/journal/2006/vol01/issue03/wasserman.pdf.
Wasserman, L. 2008. "Comment on article by Gelman". Bayesian Analysis 3(3): 463-465.
*7/29 I modified this assertion, and will explicate the different senses in which Neyman and Pearson viewed the relationship between approximate models and correct/incorrect claims about the world
later on.
[a] This had been an open “U-Phil”. If you send me a new analysis, I’m willing to post it.
[i] See an earlier post for the way we are using “deconstructing” here.
[ii] Says Franken: “And what shocked me most…was the silence from those conservatives who complain about the ugliness of political discourse in this country.” (19) Oh pleeeze (to use Franken’s
[iii] For some examples of methods applicable to large numbers of variables in econometrics under the error statistical umbrella, see the two contributions to the special topic by Aris Spanos, and
David Hendry. It would be interesting to hear of relationships.
[v] Never mind that, intuitively, I think, it would fit more closely to see him wearing a Bayesian hat. Please weigh in on this.
6 thoughts on “Deconstructing Larry Wasserman”
It’s interesting to draw out the metaphor between Bayes-Frequentist and left-wing/right-wing bias, taking seriously Wasserman’s remark (despite its being intended as just a funny quip): “a
similar comment could be applied to the usual debates in the foundations of statistical inference.” Fleshing out the analogy with Franken’s remarks would see the left wing as going with the
frequentist, whereas a more apt analogy might be Bayesians ~ left-wing (recalled from discussion with Wasserman). Doubtless it’s unhelpful to make such political analogies, but since this is a blog,
it’s fair game, and I’d be curious as to readers’ thoughts.
My opinions have shifted a bit.
My reference to Franken’s joke suggested that the usual philosophical
debates about the foundations of statistics were un-important, much
like the debate about media bias. I was wrong on both counts.
First, I now think Franken was wrong. CNN and network news have a
strong liberal bias, especially on economic issues. FOX has an
obvious right-wing and anti-atheist bias. (At least FOX has some
libertarians on the payroll.) And this does matter. Because people
believe what they see on TV and what they read in the NY Times. Paul
Krugman's socialist bullshit parading as economics has brainwashed
millions of Americans. So media bias is much more than who makes
better hummus.
Similarly, the Bayes-Frequentist debate still matters. And people —
including many statisticians — are still confused about the
distinction. I thought the basic Bayes-Frequentist debate was behind
us. A year and a half of blogging (as well as reading other blogs)
convinced me I was wrong here too. And this still does matter.
My emphasis on high-dimensional models is germane, however. In our
world of high-dimensional, complex models I can’t see how anyone can
interpret the output of a Bayesian analysis in any meaningful way.
I wish people were clearer about what Bayes is/is not and what
frequentist inference is/is not. Bayes is the analysis of subjective
beliefs but provides no frequency guarantees. Frequentist inference
is about making procedures that have frequency guarantees but makes no
pretense of representing anyone’s beliefs. In the high dimensional
world, you have to choose: objective frequency guarantees or
subjective beliefs. Choose whichever you prefer, but you can’t have
both. I don’t care which one people pick; I just wish they would be
clear about what they are giving up when they make their choice.
In your blog, Deborah, you mentioned these papers
by Houman Owhadi, Clint Scovel and Tim Sullivan.
And then there is this paper
by Gordon Belot.
These challenges to Bayesian inference remain unanswered in my
opinion. In fact, I think Freedman's Theorem (1965, Annals, p. 454)
still remains inadequately answered.
Of course, one can embrace objective Bayesian inference. If this
means “Bayesian procedures with good frequentist properties” then I
am all for it. But this is just frequentist inference in Bayesian
clothing. If it means something else, I’d like to know what.
Normal Deviate: I’m so glad to have your update (I’ve placed it as a blogpost: http://errorstatistics.com/2013/12/28/wasserman-on-wasserman-update-december-28-2013/). I agree with everything
you say, although I’d extend the point to any dimensional models. I’m commenting more on the current post, and hope others will bring forward their comments, be they on politics or statistics.
Say it ain’t so, Joe (uh, Larry)! I, for one, will definitely miss Wasserman’s blog, as I tend to align fairly closely with his views.
I don’t know about the whole left/right analogy, as my own views in this regard have shifted lately (regrettably, I think that Bayesians may in fact be closer to the current “left”, but I
wouldn’t align frequentists with the current “right”… maybe more with true conservatives of old). Anyway, I have always thought that Bayesians would have to be more likely to believe in a god, or
fate, or whatever.
Mark: Note, I’ve placed Wasserman’s update as a new blogpost.
Yes, the political left-right doesn’t really match the perspectives here, but I get your point about “true conservatives” (though I surprise myself). It’s an interesting thought that one
might align Bayesians with “believers”, as this is often coupled with (philosophical) skepticism, positivism, operationism, constructive empiricism, and such. Maybe the cultural
anthropologists have categories that fit (as they usually claim to).
“… they have an agenda…We have to be vigilant. And we have to be more than vigilant. We have to fight back… .”(Franken)
To pursue the analogy of Franken as frequentist, are you saying frequentists have to be vigilant… have to fight back?
I welcome constructive comments for 14-21 days
Categories: Philosophy of Statistics, Statistics, U-Phil Tags: Al Franken, Larry Wasserman, model assumptions, statistical foundations 6 Comments | {"url":"http://errorstatistics.com/2013/12/27/deconstructing-larry-wasserman/","timestamp":"2014-04-18T23:15:51Z","content_type":null,"content_length":"94474","record_id":"<urn:uuid:2a8c61ee-161f-4dd0-b88e-cdcf3f99d946>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
a couple of word problems
December 13th 2006, 07:45 PM #1
A circular coffee table has a diameter of 5 ft. What will it cost to have the
top refinished if the company charges $3 per square foot for the refinishing?
The distance from Philadelphia to Sea Isle City is 100 mi. A car was driven this distance using tires with a radius of 14 in. How many revolutions of each tire occurred on the trip?
Can you please show me how you came up with the answer to these story problems?
The area of the table top is $\pi r^2$, and here $r=2.5 \mbox{ ft}$ (half the diameter), so the area is $\pi \times 2.5^2 \approx 19.6 \mbox{ sq ft}$. So the cost is $\$3 \times 19.6 = \$58.80$
The distance from Philadelphia to Sea Isle City is 100 mi. A car was driven this distance using tires with a radius of 14 in. How many revolutions of each tire occurred on the trip?
1 mile is 5280 feet, so 100 miles is 528000 feet.
The circumference of the tires is $2\,\pi\,r$, so in this case the circumference is ~87.96 inches or ~7.33 feet. So in moving 7.33 feet the tire rotates once.
So in travelling 100 miles each tire will revolve 528000/7.33~72033 times.
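Both computations are easy to double-check (a quick sketch in Python; carrying full precision through the second problem gives ≈72029 revolutions — the 72033 above comes from first rounding the circumference to 7.33 ft):

```python
import math

# Problem 1: refinishing a circular table top, 5 ft diameter, $3 per sq ft
r = 5 / 2                              # radius in feet
cost = 3 * math.pi * r**2              # ~$58.90 (the post rounds the area to 19.6 first)

# Problem 2: revolutions of a 14-inch-radius tire over 100 miles
distance_in = 100 * 5280 * 12          # 100 miles expressed in inches
circumference_in = 2 * math.pi * 14    # ~87.96 inches per revolution
revolutions = distance_in / circumference_in

print(f"cost ≈ ${cost:.2f}, revolutions ≈ {revolutions:.0f}")
```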
December 13th 2006, 10:50 PM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/math-topics/8817-couple-word-problems.html","timestamp":"2014-04-18T01:20:26Z","content_type":null,"content_length":"35283","record_id":"<urn:uuid:05cc445d-a0ae-4992-b1b0-8c6d874290cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
chaotic system w/ initial condition
In short, for a (dissipative) dynamic system with a given fixed set of parameters there can in general be one or more attractors (with at least one of these being a chaotic attractor, a.k.a. strange
attractor, if the system is to be chaotic), each with an associated basin of attraction. If the system, in addition to the chaotic attractor(s), has a non-chaotic attractor (say, a fixed point), then there
obviously must be some initial conditions, namely those in the basin of attraction for this non-chaotic attractor, that will lead to a non-chaotic trajectory.
For the Duffing Oscillator (Duffing's Equation) I believe there are parameters for which the system has both chaotic and non-chaotic trajectories, and in those cases you will get a chaotic or
non-chaotic trajectory depending on the initial conditions. For instance, it looks like there should be both a chaotic and a non-chaotic attractor for k = 0.2 and B = 1.2 (lifted from Ueda's parameter
map for Duffing's Equation as it is shown in [1]).
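This is easy to probe numerically. The sketch below (an illustration; the step size, horizon, and starting points are arbitrary choices) integrates Ueda's form of Duffing's equation, x'' + k x' + x^3 = B cos(t), at k = 0.2, B = 1.2, from two nearby initial conditions. Classifying which starting points belong to which basin would amount to repeating this over a grid of initial conditions and inspecting the long-run behaviour of each:

```python
import math

def duffing_step(x, v, t, dt, k=0.2, B=1.2):
    """One RK4 step of Ueda's equation  x'' + k x' + x^3 = B cos(t)."""
    def acc(x, v, t):
        return B * math.cos(t) - k * v - x**3
    k1x, k1v = v, acc(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v, t + dt)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

def trajectory(x0, v0, t_end=200.0, dt=0.01):
    """Final (x, x') after integrating forward from (x0, v0)."""
    x, v, t = x0, v0, 0.0
    while t < t_end:
        x, v = duffing_step(x, v, t, dt)
        t += dt
    return x, v

a = trajectory(0.0, 0.0)
b = trajectory(0.001, 0.0)   # a nearby start may end up somewhere quite different
print(a, b)                  # both trajectories stay bounded
```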
[1] Nonlinear Dynamics and Chaos, Thompson and Stewart, Wiley, 2002. | {"url":"http://www.physicsforums.com/showthread.php?p=3857527","timestamp":"2014-04-17T10:05:23Z","content_type":null,"content_length":"31491","record_id":"<urn:uuid:3004d056-953f-43b9-8add-e674514493e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Function
Let f(x) = (3x^5 - 2x² + 3) / (2x² - 5). Find a power function end behavior model for f.
Do you know how to divide polynomials by long division?
Yes, but I'm confused by what the question is asking for. How do you find the power function end behavior model?
If you perform long division you get $\frac{3x^5-2x^2+3}{2x^2-5} = \frac{3x^3}{2} + \frac{15 x}{4} - 1 + \frac{75x-8}{4(2x^2-5)}$ As $x \rightarrow \pm \infty$ then the last term drops and the end
behaviour looks like the function $\frac{3x^5-2x^2+3}{2x^2-5} \approx \frac{3x^3}{2} + \frac{15 x}{4} - 1$
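The division above can be checked mechanically — a small sketch of polynomial long division over coefficient lists (highest power first):

```python
def polydiv(num, den):
    """Long division of polynomials given as descending coefficient lists;
    returns (quotient, remainder) as coefficient lists."""
    num = list(num)
    quotient = []
    while len(num) >= len(den):
        c = num[0] / den[0]          # next quotient coefficient
        quotient.append(c)
        for i in range(len(den)):    # subtract c * den, aligned at the top
            num[i] -= c * den[i]
        num.pop(0)                   # leading term is now zero; drop it
    return quotient, num

# (3x^5 - 2x^2 + 3) / (2x^2 - 5)
quot, rem = polydiv([3, 0, 0, -2, 0, 3], [2, 0, -5])
print(quot, rem)
# quotient [1.5, 0.0, 3.75, -1.0] = 3/2 x^3 + 15/4 x - 1,
# remainder [18.75, -2.0] = (75x - 8)/4, matching the division above
```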
Your approach is right; however, the problem calls for a power function. Actually $y=\frac{3}{2}x^3$ would be the answer. Another way to look at this is by looking at the leading terms of the
polynomials in the numerator and denominator (they each describe the end behavior of the two polynomials), which in this case are $3x^5$ and $2x^2$; by looking at their ratio, we get $y=\frac{3x^5}{2x^2}=\frac{3}{2}x^3$. | {"url":"http://mathhelpforum.com/calculus/68393-power-function-print.html","timestamp":"2014-04-18T03:11:26Z","content_type":null,"content_length":"12045","record_id":"<urn:uuid:1c6fa49f-1971-46a8-97d0-de2a57366d92>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
st: re: ivreg2: No validity tests if just-identified?
st: re: ivreg2: No validity tests if just-identified?
From Kit Baum <baum@bc.edu>
To statalist@hsphsun2.harvard.edu
Subject st: re: ivreg2: No validity tests if just-identified?
Date Wed, 15 Apr 2009 21:38:28 -0400
Jennifer said
I worked quite carefully through the various options in ivreg2 and ivregress for testing (a) instrument validity (orthogonality etc) and (b) instrument strength (correlatedness with endogenous
regressors). After doing so, I seem to be arriving at the conclusion that there is no way to test (a) if the model is just-identified, i.e. if I have the same number of excluded instruments as I have
endogenous regressors). For example, the Sargan overid statistic, the C-statistic, the LR IV redundancy test statistic, etc. all don't get produced unless the model is overidentified. Is that true?!
If yes, I would need to rely on persuasion using economic intuition to make my case that the instruments are valid, and there are no statistical tools to use?
Quite so. If you have only just enough instruments to identify the model, there are none available to test overidentifying restrictions, as there is no overid. The J statistic is by definition zero
for all exactly identified models. On the other hand, if you have found some variables which you believe are appropriate instruments, transformations of them (powers, lags, interactions) should also
be valid instruments, so it is not clear why you are stuck with exact ID.
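The reason there is nothing to test under exact identification is visible in the moment conditions themselves: with one endogenous regressor, one excluded instrument, and a constant, the IV estimator forces both sample moments to zero exactly, so the overidentification (J) statistic is identically zero. A simulation sketch (plain Python rather than Stata; the data-generating numbers are arbitrary):

```python
import random

random.seed(1)
n = 500
u = [random.gauss(0, 1) for _ in range(n)]                    # structural error
z = [random.gauss(0, 1) for _ in range(n)]                    # single instrument
x = [0.8*zi + 0.5*ui + random.gauss(0, 1)
     for zi, ui in zip(z, u)]                                 # endogenous regressor
y = [1.0 + 2.0*xi + ui for xi, ui in zip(x, u)]

mean = lambda s: sum(s) / len(s)
zbar, xbar, ybar = mean(z), mean(x), mean(y)

# just-identified IV slope and intercept (one instrument, one regressor)
b = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
     / sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x)))
a = ybar - b * xbar
e = [yi - a - b * xi for xi, yi in zip(x, y)]

# both moment conditions hold to machine precision by construction,
# so no overidentifying restriction is left to test: J = 0 identically
print(sum(e), sum(zi * ei for zi, ei in zip(z, e)))
```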
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-04/msg00626.html","timestamp":"2014-04-17T06:43:03Z","content_type":null,"content_length":"7257","record_id":"<urn:uuid:8ad2449f-f7ae-4fbb-888e-0178ebb3d6a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
A BJT Is Biased In The Circuit On The Left Below. ... | Chegg.com
A BJT is biased in the circuit on the left below. The I-V characteristics of the BJT (IC vs VCE) are shown on the right.
a) Draw the load line on the characteristics (assume IE ≈ IC.)
b) Verify that β is approximately 80 in the forward-active region from the characteristics.
c) Replace the left side of the bias circuit with its Thevenin equivalent and find VEQ and REQ.
d) Find IB, IC and VCE (use β = 80.)
e) Circle the Q-point on the characteristics.
f) Calculate the small-signal parameters gm, rπ and ro (assume VA = 100 V.)
g) Draw the small-signal (hybrid-π) model of the BJT.
h) The circuit is connected in an amplifier such as Figure 13.18(a). The source resistance RI = 100 Ω, the load R3 = 10 kΩ, and the other resistors are the same as the rest of this problem. Calculate
the mid-frequency gain Av = vo/vs, Rin and Rout (output resistance of amplifier, not ro.)
i) Assume fL depends on the RE bypass capacitor C3 only. Find C3 for fL ≤ 100 Hz.
j) Estimate fH using the method of open-circuit time constants, given Cπ = 10 pF and Cμ = 2 pF. Assume fH depends on the Miller multiplication term (i.e., the input side) only.
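Part (f) uses the standard hybrid-π formulas gm = IC/VT, rπ = β/gm, ro ≈ VA/IC. Since the bias figure and characteristics are not reproduced here, the sketch below plugs in an illustrative Q-point of IC = 1 mA (an assumption — substitute the value from part (d)) together with the stated β = 80 and VA = 100 V:

```python
beta = 80        # current gain, from part (b)
V_A  = 100.0     # Early voltage in volts (given)
V_T  = 0.025     # thermal voltage in volts, ~room temperature
I_C  = 1e-3      # ASSUMED collector current in amps; use your part (d) result

g_m  = I_C / V_T     # transconductance (siemens)
r_pi = beta / g_m    # base-emitter small-signal resistance (ohms)
r_o  = V_A / I_C     # output resistance, ignoring the V_CE term (ohms)

print(g_m, r_pi, r_o)   # roughly 0.04 S, 2 kΩ, 100 kΩ at this assumed bias
```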
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/bjt-biased-circuit-left--v-characteristics-bjt-ic-vs-vce-shown-right-draw-load-line-charac-q1061504","timestamp":"2014-04-16T20:09:53Z","content_type":null,"content_length":"23970","record_id":"<urn:uuid:0dafa74d-5c97-4be8-b747-809f5ef452b2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
RPI Mathematical Sciences Research: Dynamical Systems
Dynamical Systems
A dynamical system is a deterministic process in which a function's value changes over time according to a rule that is defined in terms of the function's current value.
Dynamical systems arise as mathematical models in various applications such as:
● Mechanics.
● Optics.
● Electric circuits.
● Solid-state physics.
● Fluid dynamics.
● Optimal control.
● Neural science.
Researchers often model real-life problems with a dynamical system and then apply the ideas and methods of the theory to explain and predict complex behavior.
Dynamical Systems at Rensselaer
Researchers at Rensselaer concentrate on the theory of dynamical systems and its applications in physics and engineering.
This research aims to discover and explain new and important phenomena found in experimental and numerical studies.
The mathematical methods used come from:
● Analysis.
● Topology.
● Differential geometry.
● Combinatorics.
Researchers may use computation as an experimental tool.
Current Projects
Researchers conduct theoretical study of:
● Chaotic dynamics.
● Hamiltonian systems.
● KAM theory and applications.
● Theoretical mechanics.
● Bifurcation theory.
Faculty Researchers
Gregor Kovacic
Chjan Lim
Yuri Lvov | {"url":"http://www.rpi.edu/dept/math/ms_research/dynamical.html","timestamp":"2014-04-21T07:05:44Z","content_type":null,"content_length":"10552","record_id":"<urn:uuid:c26afdd2-92e4-49f5-bcbd-ee9c65bd07eb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
Levers - Mechanical Aptitude Test
Mechanical Aptitude Test – Levers
A lever is defined as: “A lever is a beam attached to the ground using a hinge, or pivot, known as a fulcrum. The perfect lever doesn’t dissipate or store energy, meaning there isn’t any friction from
the hinge or bending from the beam. In this instance, the power into the lever equals the power out, and the ratio of output to input force is given by the ratio of the distances from the
fulcrum to the points of application of these forces. This is what’s called the law of the lever.”
The formula used to calculate levers is as follows: W x D1 = F x D2
W = Weight
D1 = Distance from fulcrum to weight
F= Force required
D2= Distance from fulcrum to force point
Here are some examples to get you started:
Example 1:
How much force must we exert in order to lift the weight?
a) 127.47
b) 147.27
c) 152.39
d) 176.56
Correct Answer – B
How to reach this answer:
In this example we see that the weight (W) is 135 lbs. We know that the distance between the weight and the fulcrum is 12 (D1). The final piece of information given is that the
distance between the fulcrum and the force point is 11 (D2).
Therefore, rearranging the formula for the required force gives the following.
F=(W x D1)/D2
Now, when you insert the values, you get this.
F=(135 x 12)/11
F= 147.27
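As a two-line check of the rearranged formula (Python, with this example's numbers):

```python
def lever_force(W, D1, D2):
    """Force at distance D2 from the fulcrum that balances weight W at D1."""
    return W * D1 / D2

F = lever_force(W=135, D1=12, D2=11)
print(round(F, 2))  # 147.27
```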
Find more lever examples here.
For access to great examples, have a look at our premium eBook and premium online aptitude tests. (Paid content) | {"url":"http://www.mechanicalaptitudetest.org/sample-questions/levers/","timestamp":"2014-04-18T10:42:26Z","content_type":null,"content_length":"24101","record_id":"<urn:uuid:f5bdf998-b55c-4920-815f-22bbf8cb200c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recent work on hypergeometric functions
Does anyone know of a monograph/survey on the modern history of (basic or elliptic) hypergeometric functions and their applications?
I haven't had much time to search the literature, and because it is summer it is hard to reach professors or specialists, which is why I am asking the question here. It is also likely that there are
obvious choices out there that I am unaware of because of my ignorance in the field. I would appreciate it a lot if along with the suggestions you could give a quick description of what the book/
article treats.
special-functions hypergeometric-functions ho.history-overview reference-request
3 Answers
Gjergji, there are remarkable articles by Richard Askey:
(1) "Ramanujan and hypergeometric and basic hypergeometric series" in
Russian Math. Surveys 45:1 (1990) 37--86; reprinted in Ramanujan: essays and surveys, Hist. Math. 22 Amer. Math. Soc., Providence, RI, 2001, pp. 277--324;
(2) "A look at the Bateman project" in The mathematical legacy of Wilhelm Magnus: groups, geometry and special functions (Brooklyn, NY, 1992), 29--43, Contemp. Math. 169, Amer.
Math. Soc., Providence, RI, 1994.
(I asked Dick exactly this question, maybe without accenting on "modern theory", some years ago.) The modern theory is mostly multiple hypergeometric functions related to root
systems; for a nice survey on the roots of these functions, the Selberg integral, see
(3) P. Forrester and S.O. Warnaar, "The importance of the Selberg integral", Bull. Amer. Math. Soc. (N.S.) 45:4 (2008) 489--534.
Elliptic functions are hypergeometric functions of the 21st century:
(4) V.P. Spiridonov, "Essays on the theory of elliptic hypergeometric functions", Russian Math. Surveys 63:3 (2008) 405--472.
I am familiar with Spiridonov's article you mention, but the rest is all new to me. Thank you! – Gjergji Zaimi Jul 17 '10 at 11:01
"Basic Hypergeometric Series" 2nd Edition, George Gasper and Mizan Rahman, ISBN: 0521833574 would be a good place to start. Chapters 9-11 of the second edition are new and deal
1. Linear and bilinear generating functions for basic orthogonal polynomials;
up vote 5 down vote 2. q-series in two or more variables;
3. Elliptic, modular, and theta hypergeometric series
That does sound like a great start, thanks! I've never looked at the second edition. – Gjergji Zaimi Jul 17 '10 at 9:09
The $q$-Bible definitely covers some history. Slater's book also does... – Wadim Zudilin Jul 17 '10 at 9:35
Hypergeometric Functions, My Love: Modular Interpretations of Configuration Spaces (Aspects of Mathematics) by Masaaki Yoshida
Discriminants, Resultants, and Multidimensional Determinants Israel M. Gelfand, Mikhail Kapranov, Andrei Zelevinsky
I heard a talk by Rivoal a decade back when he had just had his breakthrough concerning irrationality of zeta values. He quoted some very classical but not well-known results and got
asked where he learned them. He recommended this book:
Confluent Hypergeometric Functions L. J. Slater
{"url":"http://mathoverflow.net/questions/32268/recent-work-on-hypergeometric-functions/32271","timestamp":"2014-04-18T08:22:45Z","content_type":null,"content_length":"61137","record_id":"<urn:uuid:0dc1b05d-d051-416f-b1c9-9057a93caabd>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Calculate variances of subsamples
st: Calculate variances of subsamples
From Lars Knuth <knuth.lars@googlemail.com>
To statalist <statalist@hsphsun2.harvard.edu>
Subject st: Calculate variances of subsamples
Date Wed, 2 Jun 2010 20:21:49 +0200
Dear listers,
I have to say thanks to Martin; the recommendation of -rolling- was
great. Unfortunately, I now have a few problems with the following:
1. -rolling- works with the "clear" option, but without it does not
("rolling r(Var), window(60) clear: summarize exret" works)
2. I need the data to calculate and store the variances for more than
1000 stock price returns in the end, so can I somehow keep all the
data and then perform -rolling- in a loop?
3. Is there also a way to perform the return calculation in a loop?
I am attaching parts of the code I have so far. Any ideas would be of
great help to me.
Thanks in advance!
use "C:\...\3105.dta", clear
gen int time=_n
* Return calculation
gen double exret=ex[_n]/ex[_n-1]-1 if _n>1
gen double msciret=msci[_n]/msci[_n-1]-1 if _n>1
gen double msftret=msft[_n]/msft[_n-1]-1 if _n>1
gen double appret=app[_n]/app[_n-1]-1 if _n>1
gen double geret=ge[_n]/ge[_n-1]-1 if _n>1
gen double pgret=pg[_n]/pg[_n-1]-1 if _n>1
gen double jnjret=jnj[_n]/jnj[_n-1]-1 if _n>1
gen double bpret=bp[_n]/bp[_n-1]-1 if _n>1
tsset time
* Rolling
rolling r(Var), window(60): summarize exret
rolling r(Var), window(60): summarize msciret
rolling r(Var), window(60): summarize msftret
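A few suggestions, offered as an untested sketch: if I recall the syntax correctly, -rolling- requires either the clear option or a saving() option, which is why the bare call fails; saving() also answers question 2, since it writes each set of rolling variances to its own file while leaving the data in memory. Both the return calculation and the rolling step can then sit in one -foreach- loop (question 3):

```stata
* untested sketch: loop over the price series, build returns, and store
* each 60-observation rolling variance in its own dataset via saving()
foreach v in ex msci msft app ge pg jnj bp {
    gen double `v'ret = `v'/`v'[_n-1] - 1 if _n > 1
    rolling r(Var), window(60) saving(var_`v', replace): summarize `v'ret
}
```

The saved var_*.dta files can afterwards be appended or merged; for the full list of ~1000 stocks the same loop works over a stored local of variable names.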
{"url":"http://www.stata.com/statalist/archive/2010-06/msg00126.html","timestamp":"2014-04-18T13:16:34Z","content_type":null,"content_length":"8494","record_id":"<urn:uuid:59cd05bc-f72e-4aae-ac72-d60f8b006f0c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the covariant derivative of a Riemannian submanifold
Hi, I have a question about properties which are common to a manifold and its submanifolds. I start with the metric. Let $ M \subset N $, $ dim(M) = m $, $ dim(N) = m+1 $, and let $ g^N $ be the metric of N, so that $ (N,g^N) $ is a Riemannian manifold and M is a submanifold. Now, I'm looking at M and I'm trying to understand what $ g^M $ looks like. WLOG I assume that for every point $ p \in M $ there exists $ \phi $, a homeomorphism of a neighbourhood of p to $ U \subset R^{m+1} $, with $ p = \phi(U^1,...,U^m,U^{m+1} = 0) $. I call the reduced $ \phi $, $ \psi $. Now, I can see that $ \partial \psi / \partial u^j = \partial \phi / \partial u^j $ for $ 1 \leq j \leq m $ and that $ \partial \psi / \partial u^{m+1} = 0 $ (by definition), so I conclude that in U coordinates, $ g^N $ has the form $ \left(\begin{array}{cc}A_{m \times m}&*\\***&B_{1 \times 1}\end{array}\right) $. It must be this way, or the inner product will not be induced correctly from N to M. A is exactly $ g^M $. Now, I'm trying to check the Christoffel symbols (so I can know what the covariant derivative is). I use the formula $ \Gamma^k_{i j} = 1/2 * g^{k l} ( \partial g_{l j} / \partial u^i + ...)$. And here is my problem: the factors in the brackets are identical for M and N, but I can't say the same about $ g^{k l} $. If I could determine that the * from above is zero (?) then I could say that the inverse of $ g^N $ is $ \left(\begin{array}{cc}A^{-1}&0\\C&D\end{array}\right) $, but unfortunately, I don't know if I can choose coordinates so that this property holds. Can I somehow make it happen? Or is there another way to compute $ \Gamma^M $ from $ \Gamma^N $?
dg.differential-geometry riemannian-geometry
You don't need * to be all zero. You just need it to be zero along $M$. So just take an arbitrary local coordinate on $M$ to start. The metric defines along $M$ the normal direction. Choose a field
of normal vectors, extend the field arbitrarily in a thickened slab around $M$. Flow $M$ along the vector field. Then the flow parameter $t$ gives the "vertical coordinate". The Lie transport of the local
coordinates on $M$ gives the coordinate on the slab. And along $M$ the total metric $g^N$ is block diagonal. – Willie Wong Jul 26 '10 at 0:19
(Of course when you do this you have to be careful when you compute the curvature; beware of taking additional vertical derivatives!) – Willie Wong Jul 26 '10 at 0:22
OK, I understand this, but I'm still not sure how to demonstrate that $ g^N $ would be diagonal. Intuitively: suppose I have a "vertical vector" $ a \in M \subset N $; then in the local
coordinates you showed me, $ g^N * a $, as a matrix operating on a vector, should give me a vector which is in $ T_a N $ but not in $ T_a M $, and this shows that the matrix elements "*" would have to
be zero. But again, this is intuitive, from linear algebra. Can you please help me to understand this delicate point? Thanks for the time, Tamir – tamir Jul 26 '10 at 17:37
2 Answers
If I understand your notation correctly, then your question is a bit confused, because $g^N$ has to be a symmetric matrix, so that "$***$" = "$*$". The condition that $g^N$ is block diagonal
does not have to hold; it says that the tangent vector of the last coordinate, $\partial/\partial u^{m+1}$, is perpendicular to the surface $M$. On the other hand, there always exist local
coordinates with this property. If you take any local coordinates for $M$, you can evolve them for a short time with the normal surface flow. You can even get the condition $B = 1$ in a local coordinate system of this kind.
Also, there certainly is another way to get the covariant derivative on $M$ and its Christoffel symbol. Namely, if you apply the covariant derivative $\nabla^N$ to a tangent vector field $v$ on $M$ in some tangent direction $w$, you get a vector field $\nabla^N_w(v)$ on $M$ that does not have to be tangent. You should then just project this derivative $\nabla^N_w(v)$ orthogonally onto the tangent bundle $TM$. The orthogonal projection is a useful tensor field $P$ defined on the tangent bundle $TN$ restricted to $M$, and you can write an explicit expression for the covariant derivative $\nabla^M$, or the Christoffel symbol or even the curvature tensor, in terms of $\nabla^N$ and this tensor field $P$. Actually, I am not entirely sure that this method is algebraically all that different, but it is at least conceptually different.
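When the ambient space is flat, this projection recipe is completely concrete: $\nabla^N$ is the ordinary derivative of the embedding $X$, so $\Gamma^k_{ij} = g^{kl}\,(\partial_i\partial_j X \cdot \partial_l X)$. A numerical sketch for the unit sphere $S^2\subset\mathbb{R}^3$ in coordinates $(\theta,\varphi)$, checked against the classical values $\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta$ and $\Gamma^\varphi_{\theta\varphi} = \cot\theta$:

```python
import math

def sphere_christoffel(theta, phi):
    """Gamma^k_ij for the unit sphere via projection of the flat derivative."""
    s, c = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    # first derivatives of the embedding X = (s*cp, s*sp, c)
    e = [[c*cp, c*sp, -s],        # d_theta X
         [-s*sp, s*cp, 0.0]]      # d_phi X
    # second derivatives d_i d_j X
    dd = [[[-s*cp, -s*sp, -c], [-c*sp, c*cp, 0.0]],
          [[-c*sp, c*cp, 0.0], [-s*cp, -s*sp, 0.0]]]
    dot = lambda u, v: sum(a*b for a, b in zip(u, v))
    g = [[dot(e[i], e[j]) for j in range(2)] for i in range(2)]
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    ginv = [[g[1][1]/det, -g[0][1]/det], [-g[1][0]/det, g[0][0]/det]]
    # Gamma^k_ij = g^{kl} (dd_ij . e_l): tangential part of the flat derivative
    return [[[sum(ginv[k][l]*dot(dd[i][j], e[l]) for l in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

G = sphere_christoffel(1.0, 0.5)
print(G[0][1][1], -math.sin(1.0)*math.cos(1.0))   # Gamma^theta_{phi phi}
print(G[1][0][1], math.cos(1.0)/math.sin(1.0))    # Gamma^phi_{theta phi}
```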
Hi Greg, I'm still not sure about two points here, so let me see if I get it correctly. – tamir Jul 26 '10 at 18:10
In the first part, given a metric tensor, is it OK just to choose new local coordinates and preserve the tensor? I think that as long as I choose a smooth coordinate change, then maybe g
takes another form, but it still operates the same on members of $ TN $. It's like doing $ w (C^T*g*C) v = (Cw)^T * g * (Cv) $, right? (If it's not too much trouble: if that is so, since $ g
$ is symmetric and strictly positive, it can always be diagonalized, and therefore we have sort of "principal" directions where the arc length is just $ ds^2 = U*du^2 + V*dv^2 + W*dw^2 +
... $?) – tamir Jul 26 '10 at 18:10
About the second approach you showed me, I see that $ P $ is doing the "magic" there. But does $ P $ have an "analytical" definition of some sort that I can work with? My problem started
with the fact that when I tried to project $\nabla^N$ to $\nabla^M$ I took $\nabla^N_{\partial i}\partial j$, wrote it explicitly with $\partial k$, and "threw away" the term with $ k =
m+1 $. This is the projection as I understand it. But the Christoffel symbols in the other $\partial k, k \neq m+1$ stuck me, because then I didn't know how to "project" them. Can you give
me another word on that, please? – tamir Jul 26 '10 at 18:10
PS (no place left up there) thanks for the time and effort, Tamir. – tamir Jul 26 '10 at 18:11
@Tamir: what book are you learning from? It seems that you are missing/confounding some elementary concepts.
This really belongs as a clarification of my earlier comment, but it doesn't look like it will fit in the comment box. Hence now it lives as an answer.
To start with we set some notations.
$(N,g^N)$ is a Riemannian manifold. This means that $N$ is a smooth manifold (locally diffeomorphic to domains in $\mathbb{R}^{m+1}$) and $g^N|_p$ at some point $p\in N$ is a positive
definite (and hence non-degenerate) symmetric bilinear form on $T_pN$. So given an arbitrary vector $v\in T_pN$, $g^N(v)$ is a co-vector, or an element of $T^*_pN$. $T_pN$ and $T_p^*N$ are
not the same space; the metric however induces a canonical isomorphism between the two (since the metric can be understood as a non-degenerate map between the two vector spaces).
Now given $M$ a $m$-dimensional smooth submanifold of $N$. The identity map $\iota:M\to N$ is an embedding. The tangent space $T_pM$ for $p\in M$ naturally embeds into $T_{p}N$ by the tangent
bundle map $d\iota: TM\to TN$. (Note that, however, without the Riemannian structure there's no natural embedding of the dual space $T^*_pM$ into $T_p^*N$; but don't worry about that now,
since we won't need it.)
Now I give an explicit construction of a coordinate system in a neighborhood $U$ of $p$ such that the metric $g^N$ is block diagonal along points $q \in U\cap M$.
First fix a neighborhood $V\subset M$ of $p$, and an arbitrary coordinate system $u^1,\ldots,u^m$ on $V$, with $p$ at the origin. At every point $q\in V$ the vectors $\partial_i = \partial/\partial u^i$ span the tangent space $T_qM$. Now consider the images of the $\partial_i$ under the map $d\iota$, call them $e_i = d\iota\,\partial_i$. By elementary linear algebra, since $T_qM\subset
T_qN$ has co-dimension 1, there exists a unique vector $n_q$, up to scaling, such that $g(n_q,e_i) = 0$ for all $i$. (The reason I used $e_i$ instead of $\partial_i$ on $T_qN$ is because
without a full set of coordinates [we're still missing one] it doesn't make sense to speak of the "coordinate derivative", which is obtained by varying one coordinate value while holding the remainder fixed.)
In any case, so in $T_qN$ for any point $q\in M$, we now have a set of basis vectors $e_1,\ldots,e_m,n_q$. Normalize $n_q$ so that $g(n_q,n_q) = 1$ and $\eta(e_1,e_2,\ldots,e_m,n_q) > 0$
where $\eta$ is the volume form on $N$. (So now we fix the size and orientation of the field $n_q$.) Now we extend $n_q$ somehow: pick your favourite way. One possible way is to extend $n_q$
by the geodesic flow for some short period of time. Let's call $\gamma(t,q)$ the geodesic starting from the point $q$ with initial speed $n_q$ at the time $t$. By possibly shrinking the
neighborhood $V$, there exists some $\epsilon > 0$ such that $\gamma: (-\epsilon,\epsilon)\times V \to N$ is injective.
Now let the neighborhood $U = \gamma( (-\epsilon,\epsilon)\times V )$. Define the coordinate on $U$ thus: for any point $x\in U$, let $\tilde{u}^i(x) = u^i\circ \pi_2\circ\gamma^{-1}(x)$,
where $\pi_2: (-\epsilon,\epsilon)\times V \to V$ is projection onto the second component. In other words, at a point $x$, find the unique geodesic $\gamma(t,q)$ that passes through $x$, and
set the value $\tilde{u}^i(x) = u^i(q)$. To complete the coordinate system you let $\tilde{u}^{m+1}(x) = \pi_1\circ \gamma^{-1}(x)$, where $\pi_1$ is projection onto the first component, or
that if $\gamma(t,q)$ is the geodesic, set $\tilde{u}^{m+1}(x) = t$.
Now you simply check that by definition, the surface $\{t = 0\} \cap U = V$. And that along this surface $\partial/\partial \tilde{u}^{m+1} = n_q$, and $\partial/\partial \tilde{u}^i = e_i$
for the other $i$'s. So that for any $q\in V$ the metric $g^N$ is block diagonal. Furthermore, observe that $$ \partial_{m+1} g(\partial_{m+1},\partial_{m+1}) = 2 g(\partial_{m+1},\nabla_{\partial_{m+1}}\partial_{m+1}) = 0$$ by the geodesic equation, so you have that $g(\partial_{m+1},\partial_{m+1}) = 1$ on $U$. Also, use that $$ \partial_{m+1} g(\partial_i,\partial_{m+1}) = g(\nabla_{\partial_{m+1}}\partial_i,\partial_{m+1}) = g(\nabla_{\partial_i}\partial_{m+1},\partial_{m+1}) = \frac12 \partial_i g(\partial_{m+1},\partial_{m+1}) = 0$$ (the first equality uses the geodesic equation again; the second uses that the Levi-Civita connection is torsion free and the Lie bracket of coordinate vector fields vanishes). We see that $g(\partial_i,\partial_{m+1}) = 0$ on $U$. So in the whole neighborhood $U$, we have that $g^N$ is block diagonal, with the block $B$ exactly 1.
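Summarizing the construction (my wording — this is the standard semi-geodesic/Fermi form, not verbatim from the answer): in the coordinates $(\tilde{u}^1,\ldots,\tilde{u}^m,\tilde{u}^{m+1}=t)$ built above, the metric reads

```latex
g^N \;=\; g_{ij}(\tilde{u}, t)\, d\tilde{u}^i\, d\tilde{u}^j \;+\; dt^2 ,
```

with $M$ sitting inside $U$ as the slice $\{t = 0\}$.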
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry riemannian-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/33343/finding-covariant-derivative-of-a-riemanian-submanifold","timestamp":"2014-04-21T07:57:26Z","content_type":null,"content_length":"69067","record_id":"<urn:uuid:1408c03a-7022-4c73-9aad-503d88675756>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Blowing up a subvariety - what can happen to the singular locus?
Let $X$ be a variety defined over a number field $k$. If I blow-up along some arbitrary subvariety of $X$, what are the possible outcomes for the dimension of the singular locus of the variety? If
the subvariety lies outside the singular locus of $X$, then it stays the same, if it is carefully chosen, it might go down. Can it go up?
To be more specific, my variety is a high dimensional hypersurface, and the subvariety I am blowing up is a linear space of much smaller dimension than the singular locus. I don't know if this
changes the situation.
I have a feeling this question might be more suited to stackexchange, but it didn't spark much interest over there: http://math.stackexchange.com/questions/53676/blowing-up-a-subvariety-what-can-happen-to-the-singular-locus. Apologies for wasting time if so.
ag.algebraic-geometry geometry
1 Answer
Any birational map $\pi:X'\to X$ is the blow-up of some ideal sheaf on $X$, so in general one must expect singularities on $X'$, even if the ideal is reduced (as you assume).
As a concrete example, let $X=\mathbb{A}^n$ and blow up the complete intersection subvariety given by the ideal $I=(f,g)\subset k[x_1,\ldots,x_n]$. Then the blow-up of $X$ is the Proj of the Rees algebra $R[It]$, which is given by $k[x_1,\ldots,x_n,S,T]/(fS-gT)$. By choosing $f$ and $g$ appropriately one can produce varieties with singular locus of high dimension.
For your specific example, when $Y$ is a linear space of small dimension, I don't know if the above can happen, but there are certainly cases where the dimension of the singular locus will be unchanged after the blow-up (e.g. when $Y$ is a point on a singular surface).
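A concrete instance of the first paragraph (my example, hedged — worth re-checking): take $f = x_1^2$ and $g = x_2^2$ in $\mathbb{A}^n$, so the blow-up is the hypersurface

```latex
x_1^2\, S - x_2^2\, T = 0 \quad\subset\quad \mathbb{A}^n \times \mathbb{P}^1 .
```

In the chart $S = 1$ the Jacobian of $x_1^2 - x_2^2 T$ with respect to $(x_1, x_2, T)$ is $(2x_1,\, -2x_2 T,\, -x_2^2)$, which vanishes exactly on $\{x_1 = x_2 = 0\}$; so the singular locus has dimension $n-1$ inside the $n$-dimensional blow-up, which can be made as large as one likes by increasing $n$.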
Thanks, that's exactly what I was looking for. – samian86 Aug 25 '11 at 13:01
No problem. Actually, I think it might be possible to say more about your specific case. In that case the Rees algebra is given by $k[x_0,\ldots,x_n,y_1,\ldots,y_k]/(f,x_iy_j-x_jy_i,\ldots)$ and it seems doable to investigate the singularities in each chart $y_i=1$ by hand. Perhaps you can show that the dimension of the singular locus does indeed stay the same for some choices of $f$. – J.C. Ottem Aug 25 '11 at 15:07
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/73557/blowing-up-a-subvariety-what-can-happen-to-the-singular-locus","timestamp":"2014-04-20T06:24:19Z","content_type":null,"content_length":"53702","record_id":"<urn:uuid:43d473b3-fb50-480e-a4c8-d6037ee6aea0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kimberton Math Tutor
Find a Kimberton Math Tutor
...Although technically, you don't actually need to know much about these subjects to be successful on the test. Every clue you need to solve a problem is provided in the question, and some clever
sleuthing is often all you need to get the answer. The MCAT is by far the toughest standardized test you'll ever take.
34 Subjects: including precalculus, ACT Math, SAT math, English
Hey Everybody! I am an experienced tutor teaching subjects like physics, calculus, algebra, Spanish and even guitar. I was originally an engineer for a helicopter company for nearly 4 years and I
resigned to start a career in education.
16 Subjects: including precalculus, algebra 1, algebra 2, calculus
...His industrial career included technical presentations and workshops, throughout North America and Europe, to multinational companies, to NATO, and to trade delegations from China and Russia.
In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...My students and I will spend time organizing the subject into small memorable portions and will end sessions reviewing the material presented that day. My hobbies include photography and
printing and painting. My wife and I share our home with a former rescue dog; a Lhasa Apso named Lucky.
16 Subjects: including geometry, ASVAB, algebra 1, algebra 2
...Once, when a calculus student of mine said that about derivatives, I took them out to her car and explained how acceleration, velocity, and distance are all calculus terms we think about in the
car while she drove us around.Swimming is the premier water sport of the world. I was a certified life...
21 Subjects: including calculus, chemistry, swimming, world history | {"url":"http://www.purplemath.com/kimberton_math_tutors.php","timestamp":"2014-04-16T13:17:04Z","content_type":null,"content_length":"23732","record_id":"<urn:uuid:3b93329e-50ed-4b13-ac58-472e3a9c517b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fatou theorem
Critical points and Fatou theorem
The orbit starting at z_0 = 0 which converges to the attracting fixed point z_* = f(z_*) is shown to the left. For small ε

f_c(z_* + ε) = z_* + λε + O(ε^2),   |λ| < 1,

therefore f maps every disk with radius R into the next smaller one with radius |λ|R (really the "circles" are distorted a little by the O(ε^2) terms).

You see these small circles around the attracting fixed point below (here |λ| = 0.832). One of the two branches of the inverse function f^-1(z) maps the disks vice versa. We can extend the map analytically as long as f^-1(z) is a smooth nonsingular function with finite derivative [1].
Differentiating f^-1(f(z)) = z we get

f^-1(t)' |_(t = f(z)) = 1 / f'(z).

Therefore the map is singular if f'(z) = 0. Points z_c for which f'(z_c) = 0 are called critical points of a map f. E.g. the quadratic map f_c(z) = z^2 + c with derivative f_c'(z) = 2z has the only critical point z_c = 0, and its inverse function f_c^-1(z) = ±(z - c)^½ is singular at z = c. We can continue f_c^-1 up to the outside border of the yellow region. The border contains the point z = c and is mapped onto the figure-eight curve with the critical point z_c in the center. Therefore the iterations f_c^n(z_c) converge to z_* for large n (this orbit is called the critical orbit). This is the subject of the Fatou theorem.
Fatou theorem: every attracting cycle of a polynomial or rational function attracts at least one critical point.

Since quadratic maps have the only finite critical point z_c = 0, a quadratic map may have at most one finite attracting cycle! (There is one more critical point, at infinity, which attracts diverging orbits.) Thus, testing the critical point shows whether there is any finite attracting cycle.
[1] John W. Milnor "Dynamics in One Complex Variable" § 8.5
updated 11 Sep 2013 | {"url":"http://ibiblio.org/e-notes/MSet/inside.html","timestamp":"2014-04-19T00:06:50Z","content_type":null,"content_length":"4209","record_id":"<urn:uuid:a8b5d77a-54ae-44eb-b4b0-c3ff0142304f>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
HZmvntest - File Exchange - MATLAB Central
Henze and Zirkler (1990) introduce a multivariate version of a univariate normality test. There are many tests for assessing multivariate normality in the statistical literature (Mecklin and Mundfrom, 2003). Unfortunately, there is no known uniformly most powerful test, and it is recommended to perform several tests to assess normality. It has been found that the Henze and Zirkler test has good overall power against alternatives to normality.
The Henze-Zirkler test is based on a nonnegative functional distance that measures the distance between two functions: the characteristic function of the multivariate normal distribution and the empirical characteristic function.

The Henze-Zirkler statistic is approximately distributed as a lognormal. The lognormal distribution is used to compute the null hypothesis probability.
According to Henze and Wagner (1997), this test has the following desirable properties:
--affine invariance
--consistency against each fixed nonnormal alternative distribution
--asymptotic power against contiguous alternatives of order n^-1/2
--feasibility for any dimension and any sample size
If the data are multivariate normal, the test statistic HZ is approximately lognormally distributed. The procedure calculates the mean, variance and smoothness parameter; then the mean and variance are lognormalized and the P-value is estimated.
Also, for all the interested people, we provide the lognormal critical value.
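For illustration, the HZ statistic itself can be sketched as follows — a from-memory Python transcription of the usual closed form (the function name and details here are my own, with the covariance normalized by n as in the c = 1 default; verify against Henze and Zirkler (1990) before relying on it):

```python
import numpy as np

def hz_statistic(X):
    # Sketch of the Henze-Zirkler statistic: n times a weighted L2 distance
    # between the empirical characteristic function of the standardized data
    # and the characteristic function of the standard multivariate normal.
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)        # covariance normalized by n
    Sinv = np.linalg.inv(S)
    d = X - X.mean(axis=0)
    G = np.einsum('ik,kl,jl->ij', d, Sinv, d)     # cross products d_i' Sinv d_j
    Di = np.diag(G)                               # squared Mahalanobis distances to the mean
    Dij = Di[:, None] + Di[None, :] - 2 * G       # (x_i - x_j)' Sinv (x_i - x_j)
    b = ((n * (2 * p + 1) / 4) ** (1 / (p + 4))) / np.sqrt(2)  # smoothness parameter
    t1 = np.exp(-b**2 / 2 * Dij).mean()
    t2 = 2 * (1 + b**2) ** (-p / 2) * np.exp(-b**2 * Di / (2 * (1 + b**2))).mean()
    t3 = (1 + 2 * b**2) ** (-p / 2)
    return n * (t1 - t2 + t3)

rng = np.random.default_rng(0)
hz = hz_statistic(rng.standard_normal((50, 3)))
print(hz)  # nonnegative; small values are consistent with normality
```

The statistic is nonnegative by construction; large values indicate departure from multivariate normality.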
X - data matrix (size of matrix must be n-by-p; data=rows,
independent variable=columns)
c - covariance normalized by n (=1, default)) or n-1 (~=1)
alpha - significance level (default = 0.05)
- Henze-Zirkler's Multivariate Normality Test
Please login to add a comment or rating. | {"url":"http://www.mathworks.com/matlabcentral/fileexchange/17931-hzmvntest","timestamp":"2014-04-16T17:21:45Z","content_type":null,"content_length":"31538","record_id":"<urn:uuid:375a4e14-8b0f-4a63-848b-8f8800f26bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convex Optimization Methods for Graphs and Statistical Modeling
Chandrasekaran, Venkat (2011) Convex Optimization Methods for Graphs and Statistical Modeling. PhD thesis, Massachusetts Institute of Technology. http://resolver.caltech.edu/
PDF - Accepted Version
See Usage Policy.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20121008-130644748
An outstanding challenge in many problems throughout science and engineering is to succinctly characterize the relationships among a large number of interacting entities. Models based on graphs form one major thrust in this thesis, as graphs often provide a concise representation of the interactions among a large set of variables. A second major emphasis of this thesis is classes of structured models that satisfy certain algebraic constraints. The common theme underlying these approaches is the development of computational methods based on convex optimization, which are in turn useful in a broad array of problems in signal processing and machine learning. The specific contributions are as follows:

- We propose a convex optimization method for decomposing the sum of a sparse matrix and a low-rank matrix into the individual components. Based on new rank-sparsity uncertainty principles, we give conditions under which the convex program exactly recovers the underlying components.

- Building on the previous point, we describe a convex optimization approach to latent variable Gaussian graphical model selection. We provide theoretical guarantees of the statistical consistency of this convex program in the high-dimensional scaling regime in which the number of latent/observed variables grows with the number of samples of the observed variables. The algebraic varieties of sparse and low-rank matrices play a prominent role in this analysis.

- We present a general convex optimization formulation for linear inverse problems, in which we have limited measurements in the form of linear functionals of a signal or model of interest. When these underlying models have algebraic structure, the resulting convex programs can be solved exactly or approximately via semidefinite programming. We provide sharp estimates (based on computing certain Gaussian statistics related to the underlying model geometry) of the number of generic linear measurements required for exact and robust recovery in a variety of settings.

- We present convex graph invariants, which are invariants of a graph that are convex functions of the underlying adjacency matrix. Graph invariants characterize structural properties of a graph that do not depend on the labeling of the nodes; convex graph invariants constitute an important subclass, and they provide a systematic and unified computational framework based on convex optimization for solving a number of interesting graph problems.

We emphasize a unified view of the underlying convex geometry common to these different frameworks. We describe applications of these methods to problems in financial modeling and network analysis, and conclude with a discussion of directions for future research.
Item Type: Thesis (PhD)
Additional This research was supported in part by the following grants - MURI AFOSR grant FA9550-06-1-0324, MURI AFOSR grant FA9550-06-1-0303, NSF FRG 0757207, AFOSR grant FA9550-08-1-0180,
Information: and MURI ARO grant W911NF-06-1-0076.
Funders: ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┐
│ Funding Agency │ Grant Number │
│ Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) │ FA9550-06-1-0324 │
│ Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) │ FA9550-06-1-0303 │
│ NSF │ FRG 0757207 │
│ Air Force Office of Scientific Research (AFOSR) │ FA9550-08-1-0180 │
│ Army Research Office (ARO) Multidisciplinary University Research Initiative (MURI) │ W911NF-06-1-0076 │
Record Number: CaltechAUTHORS:20121008-130644748
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20121008-130644748
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 34757
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 08 Oct 2012 20:48
Last Modified: 27 Dec 2012 02:49
Repository Staff Only: item control page | {"url":"http://authors.library.caltech.edu/34757/","timestamp":"2014-04-20T04:10:09Z","content_type":null,"content_length":"42094","record_id":"<urn:uuid:023d76aa-dc38-42b4-a407-09ba4e6d7469>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
identity function
Given a set $S$, the identity function on $S$ is the function $id_S\colon S \to S$ that maps any element $x$ of $S$ to itself:
$id_S = (x \mapsto x) = \lambda x.\; x ;$
or equivalently,
$id_S(x) = x .$
The identity functions are the identity morphisms in the category Set of sets.
More generally, in any concrete category, the identity morphism of each object is given by the identity function on its underlying set.
Revised on July 20, 2012 01:37:51 by
Toby Bartels | {"url":"http://ncatlab.org/nlab/show/identity+function","timestamp":"2014-04-20T13:21:27Z","content_type":null,"content_length":"14718","record_id":"<urn:uuid:0c89aa67-0223-4b91-8744-ac236c8cf351>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the differences
Comparing files is something developers do every once in a while. For example, comparing configuration files to see what is different in the other environment, or comparing program files to see what has changed in the source code. Implementations of text comparison algorithms are therefore widespread and used in several fields. For instance, in blogs and content management systems one might need to know what was altered in an update of a text, or a programmer in a team would like to see what changed in the source code (SVN). Also a lot of (combined) search, spell checking, speech recognition and plagiarism detection software compares texts (strings) in a certain way. This article covers the Levenshtein distance algorithm and how to use it to indicate alterations to texts.
There are several ways to compare texts and find differences and similarity scores. For this article the similarity scores are not relevant, because these scores are just numbers. We are interested in what is added, deleted or substituted in the transformation from text A to text B. In other words, we would like to mark the minimal number of primitive operations needed to transform A to B. To do this we'll need the basics from a classic computer science problem: the longest common subsequence problem. The technique described hereunder, which is derived from this problem, is the Levenshtein
distance algorithm. The algorithm was developed by Vladimir Levenshtein to replace the Hamming distance. The result of Levenshtein’s algorithm is exactly the minimal number of operations, but you can
use the unpolished result of this algorithm to determine what parts of text were added, deleted or substituted. Originally this is done per character, but with a little tweak this can be changed to
a per word, line or paragraph level function. When you’ve read this article you will know how this works (and the demo, which is referred to at the end). This demo takes two texts as input and
outputs what was added, deleted and replaced.
For this article and the demo I used search results for some inspiration. A nice explanation already there is this description of the Levenshtein algorithm, as well as the Wiki page on it. For this
article, let’s use two sample lines: “The brown dog jumped away from the sprinkler” and “The dog ran towards the green sprinkler”. Now we want to know with words were added, deleted or replaced in
the transition from the first sentence to the second. To do this, let’s take a closer look on how the iterative process of the Levenshtein algorithm is executed.
1. The first step is to construct a matrix of n+1 by m+1, where n is the number of words in the first line and m the number of words in the second line.
2. Secondly, fill the first row and column with (from top left to the right or bottom) zero to n and m respectively.
3. Now for each n, evaluate each m. If the evaluated word of n matches m, the cost is 0, otherwise it's 1.
4. Fill out cell (n, m), where the value of (n, m) is the minimum of:
□ The value of the cell above + 1
□ The left neighbour cell value + 1
□ The above left cell value + cost
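The four steps above can be sketched in a few lines (a Python sketch written for this text; the demo linked at the end of the article is PHP):

```python
def build_matrix(words1, words2):
    # words1 label the columns and words2 the rows, as in the tables below.
    n, m = len(words1), len(words2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):                  # step 2: first row 0..n
        d[0][j] = j
    for i in range(m + 1):                  # step 2: first column 0..m
        d[i][0] = i
    for i in range(1, m + 1):               # steps 3 and 4
        for j in range(1, n + 1):
            cost = 0 if words1[j - 1] == words2[i - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # cell above + 1
                          d[i][j - 1] + 1,           # left neighbour + 1
                          d[i - 1][j - 1] + cost)    # above-left + cost
    return d

line1 = "The brown dog jumped away from the sprinkler".split()
line2 = "The dog ran towards the green sprinkler".split()
d = build_matrix(line1, line2)
print(d[-1][-1])  # 5: the word-level Levenshtein distance
```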
For the Levenshtein distance it stops right here, when the iteration is completed. The distance is the value in the lower right cell. Here we are not interested in the Levenshtein distance itself,
but in the matrix we’ve just constructed. The lowest cost route from bottom right to top left reveals information on what words have been added, deleted and/or substituted. To show how, we need to
costruct the matrix with the algorithm above.
             The   brown  dog   jumped  away  from  the   sprinkler
          0    1      2     3      4      5     6     7       8
The       1    *
dog       2   **
ran       3
towards   4
the       5
green     6
sprinkler 7
For the first cell (*), the words "The" versus "The" are equal, so the cost is 0. Now the minimum of the cell above + 1 (2), the cell to the left + 1 (2) and the above left + cost (0) is the latter, so the cell gets the value 0.
The cell underneath it (**) has cost 1 ("The" versus "dog") and gets a value equal to the minimum of the cell above + 1 (1), the cell to the left + 1 (3) and the above left + cost (2), which is 1.
Continue to fill this out and the table will look like this:
             The   brown  dog   jumped  away  from  the   sprinkler
          0    1      2     3      4      5     6     7       8
The       1    0      1     2      3      4     5     6       7
dog       2    1      1     1      2      3     4     5       6
ran       3    2      2     2      2      3     4     5       6
towards   4    3      3     3      3      3     4     5       6
the       5    4      4     4      4      4     4     4       5
green     6    5      5     5      5      5     5     5       5
sprinkler 7    6      6     6      6      6     6     6       5
Now we need to find the lowest cost path from the bottom right to the zero in the top left. To do this, simply jump to the cell with the lowest value adjacent to the current cell (to the left, above or diagonal). Jumping diagonally without an increase in value is only allowed if the words (column and row) are the same; a diagonal jump that does increase the value corresponds to a substitution. If two or more cells have the same (lowest) value, the priority in choosing a route is to try diagonal first, then either left or above. So, from the bottom right 5 we start the route to the diagonally adjacent 5 (because 'sprinkler' equals 'sprinkler'). From that 5 the next step would be the lower 4 above it, then the diagonally adjacent 4, etc… The route table will look like this:
             The   brown  dog   jumped  away  from  the   sprinkler
         [0]   1      2     3      4      5     6     7       8
The       1   [0]    [1]    2      3      4     5     6       7
dog       2    1      1    [1]    [2]     3     4     5       6
ran       3    2      2     2      2     [3]    4     5       6
towards   4    3      3     3      3      3    [4]    5       6
the       5    4      4     4      4      4     4    [4]      5
green     6    5      5     5      5      5     5    [5]      5
sprinkler 7    6      6     6      6      6     6     6      [5]

(Bracketed cells mark the lowest-cost route; in the original post these cells were highlighted.)
After the route is calculated, every step in it tells something about the operations needed (from top left to bottom right).
• Every diagonal step (not increasing the score) tells us nothing happened. E.g. the first step from 0 to 0 tells us “The” from the first line stays “The” in the second line.
• Every horizontal step means a word is deleted. E.g. the step from 0 to 1 tells us “brown” was deleted at this point.
• Every vertical step means a word is added. E.g. the step from 4 to 5 tells us “green” was added at this point.
• Every diagonal step having the score increased means a word is substituted (added and deleted) at this point. E.g. the step from 2 to 3 tells us "away" is substituted by "ran". This is not a legal operation when only additions and deletions of words are to be detected.
This way a text indicating the operations can be constructed:

The [-brown-] dog [-jumped-] {+ran+}[-away-] {+towards+}[-from-] the {+green+} sprinkler.

Deletions are marked [-like this-] and insertions {+like this+} (red and green respectively in the original post); an adjacent insertion/deletion pair indicates a substitution.
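Walking the route back and reading off the operations can also be sketched in code (a Python sketch; the tie-breaking order below is one fixed choice, so the exact pairing of substitutions may differ from the original demo, but the added and deleted words come out the same):

```python
def build_matrix(words1, words2):
    n, m = len(words1), len(words2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1): d[0][j] = j
    for i in range(m + 1): d[i][0] = i
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if words1[j - 1] == words2[i - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d

def backtrack(d, words1, words2):
    # Walk from the bottom-right cell back to the zero in the top left.
    # A diagonal step with equal words is a match; a diagonal step whose value
    # increased is a substitution; vertical/horizontal steps are insert/delete.
    ops, i, j = [], len(words2), len(words1)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and words1[j - 1] == words2[i - 1] and d[i][j] == d[i - 1][j - 1]:
            ops.append(("match", words1[j - 1])); i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("ins", words2[i - 1])); i -= 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            ops.append(("del", words1[j - 1])); j -= 1
        else:  # diagonal step with cost 1: substitution
            ops.append(("sub", words1[j - 1], words2[i - 1])); i -= 1; j -= 1
    ops.reverse()
    return ops

line1 = "The brown dog jumped away from the sprinkler".split()
line2 = "The dog ran towards the green sprinkler".split()
ops = backtrack(build_matrix(line1, line2), line1, line2)
deleted = [op[1] for op in ops if op[0] in ("del", "sub")]
inserted = [op[2] if op[0] == "sub" else op[1] for op in ops if op[0] in ("ins", "sub")]
print(deleted)   # ['brown', 'jumped', 'away', 'from']
print(inserted)  # ['ran', 'towards', 'green']
```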
Of course some optimizations can be performed. The above, for instance, does give a good indication of what happened to the text. Imagine a larger text than just these lines, and the relevance of changes marked this way will become more obvious. But because only the primitive operations are detected at the word level, word groups are not taken into account. In this example, for instance, the algorithm would be better if it marked "jumped away from" as replaced by "ran towards" instead of each separate word, as it does now:

The [-brown-] dog {+ran towards+}[-jumped away from-] the {+green+} sprinkler.

This operation is not that hard to implement: simply replace subsequent differing operations by substitute operations.
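That merge step can be sketched as follows (a Python sketch; the `ops` list below is one valid minimal operation sequence for the two example sentences, written out by hand):

```python
def merge_ops(ops):
    # Collapse each run of adjacent non-match operations into a single
    # substitution (or a single deletion/insertion if only one side occurs).
    merged, dels, inss = [], [], []
    def flush():
        if dels and inss:
            merged.append(("sub", " ".join(dels), " ".join(inss)))
        elif dels:
            merged.append(("del", " ".join(dels)))
        elif inss:
            merged.append(("ins", " ".join(inss)))
        dels.clear(); inss.clear()
    for op in ops:
        if op[0] == "match":
            flush()
            merged.append(op)
            continue
        if op[0] in ("del", "sub"):
            dels.append(op[1])
        if op[0] == "ins":
            inss.append(op[1])
        elif op[0] == "sub":
            inss.append(op[2])
    flush()
    return merged

ops = [("match", "The"), ("del", "brown"), ("match", "dog"),
       ("sub", "jumped", "ran"), ("sub", "away", "towards"), ("del", "from"),
       ("match", "the"), ("ins", "green"), ("match", "sprinkler")]
merged = merge_ops(ops)
print(merged[3])  # ('sub', 'jumped away from', 'ran towards')
```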
An implementation of this algorithm, with the optimization patch suggested here, is now available as an example. Source code (PHP) is available as well.
And about the featured picture on top: there are 12 differences to spot.
VN:F [1.3.1_645] | {"url":"http://www.edesign.nl/2009/04/12/find-the-differences/","timestamp":"2014-04-18T23:15:49Z","content_type":null,"content_length":"44124","record_id":"<urn:uuid:5b94004f-4d3c-46c8-9ef4-a74447ea1ceb>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Derry, NH Algebra Tutor
Find a Derry, NH Algebra Tutor
...For each grade level, the standard states that the student is expected to "Demonstrate command of the conventions of standard English grammar and usage when writing or speaking." Of course,
the specifics vary by grade level. I utilize the materials from the student's class to advance his/her gra...
44 Subjects: including algebra 1, algebra 2, chemistry, writing
...I offer tutoring based on each student's level of need to review or learn the skills, then work on applying them in practice test problems. Learning how questions are scored, and paying close
attention to the often tricky wording within the questions, can make a difference in test outcome. I also offer strategies to overcome anxiety and boost confidence regarding testing.
15 Subjects: including algebra 1, algebra 2, geometry, precalculus
I have been teaching math for many years in high school and colleges. I have a MBA (3.97 GPA out of 4.0) and an undergraduate degree in engineering. Presently, I am an adjunct math instructor.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...Throughout those 20 years, I implemented several networks and network management systems. I hold bachelor and master’s degrees in Computer Science. For ten years, I worked in the software
industry developing and leading the development of software products often written in C++. For almost ten ...
21 Subjects: including algebra 1, physics, geometry, Java
...I can help you. I have over 10 years of experience tutoring hundreds of middle and high school students in math with varying learning styles and needs. I explain the material in a way students
can understand.
15 Subjects: including algebra 2, calculus, geometry, algebra 1
Derry, NH Trigonometry Tutors | {"url":"http://www.purplemath.com/derry_nh_algebra_tutors.php","timestamp":"2014-04-21T13:08:29Z","content_type":null,"content_length":"23689","record_id":"<urn:uuid:34101a38-9f8e-424c-9d1f-bc551a07d2ba>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Andrej Zlato
Andrej Zlatoš
I am an Associate Professor in the Department of Mathematics of the University of Wisconsin. I work primarily on reaction-diffusion equations, fluid dynamics, spectral theory of Schrödinger
operators, and orthogonal polynomials. More details can be found in my CV.
E-mail: [my first name]@math.wisc.edu
Phone: (608) 262-2926
Mail: UW Mathematics, 480 Lincoln Dr, Madison, WI 53706 | {"url":"http://www.math.wisc.edu/~zlatos/","timestamp":"2014-04-19T14:35:06Z","content_type":null,"content_length":"6370","record_id":"<urn:uuid:92b90149-74e5-4183-83af-0573d802e938>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Beginner probability question
At the start of flu season, Dr. Anna Ahmeed examines 50 patients over two days. Of those 50 patients, 30 have a headache, 24 have a cold, and 12 have neither symptom. Some patients have both.
Draw a Venn diagram and determine the number of patients that have both symptoms.
How do I figure out how many have both?
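By inclusion–exclusion (a worked sketch added here; the forum's attached answer image, referenced as 8a below, is not preserved in this text):

```latex
|H \cup C| = 50 - 12 = 38, \qquad
|H \cap C| = |H| + |C| - |H \cup C| = 30 + 24 - 38 = 16 .
```

So 16 patients have both a headache and a cold.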
The answer is this (8a): | {"url":"http://mathhelpforum.com/statistics/177361-beginner-probability-question.html","timestamp":"2014-04-18T20:06:33Z","content_type":null,"content_length":"43331","record_id":"<urn:uuid:eb70e5fc-dca4-4c5c-92dd-30d1bf5ad23a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
The sales tax rate in a city is 8.6%. Find the tax charged on a purchase of $243, rounded to the nearest cent.
The tax charged on a purchase of $243 is $20.90. Here is the solution: $243 × 8.6% = $20.898 ≈ $20.90.
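The arithmetic can be checked mechanically (a quick Python check, rounding half-up to the nearest cent):

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("243")
rate = Decimal("0.086")   # 8.6% sales tax
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)  # 20.90
```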
There are no new answers. | {"url":"http://www.weegy.com/?ConversationId=62E5D6D5","timestamp":"2014-04-18T05:40:17Z","content_type":null,"content_length":"39696","record_id":"<urn:uuid:171e9443-2a0c-42fc-b80c-a5a3291b9e40>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
[plt-scheme] Typed scheme and curry
From: John Clements (clements at brinckerhoff.org)
Date: Sat Dec 20 18:14:27 EST 2008
On Dec 19, 2008, at 6:37 PM, Eli Barzilay wrote:
> On Dec 19, Carl Eastlund wrote:
>> The "curry" function in Typed Scheme isn't just about writing curried
>> functions, it's about converting uncurried functions to curried
>> functions. If you want to write a function in a curried style from
>> the get-go, you can do it like this: [...]
> [Note: this is a reply for Scott.]
> Note that the basic problem that (the untyped) `curry' solves is
> unrelated to the typed vs the untyped worlds -- it's the fact that
> Scheme has variable arity functions, and therefore `curry' is
> "guessing" how many levels of function calls are needed. (I'll
> describe what it's "type" would be below.) (BTW, there is a plan to
> change how `curry' works -- the result is still going to be
> problematic for typed-scheme.) In any case, when you say
Lest we miss an obvious corollary of this discussion: you *can* make a
plain-vanilla "curry3" function for a fixed number of arguments (in
this case, 3):
#lang typed-scheme
(: curry3 (All (a b c d)
               ((a b c -> d)
                ->
                (a -> (b -> (c -> d))))))
(define (curry3 f)
  (lambda (a)
    (lambda (b)
      (lambda (c)
        (f a b c)))))
;; let's try it out:
(: fun-of-four (Number Number Number -> Number))
(define (fun-of-four x y z)
  (* (+ x y) z))
((((curry3 fun-of-four) 3) 5) 6)
This isn't variable-arity, and it's not flexible the way that the
'curry' that Eli describes is.
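For readers who don't use Scheme, the same fixed-arity idea can be sketched in Python (my translation, not part of the thread; the annotation mirrors the typed-scheme type):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")
D = TypeVar("D")

def curry3(f: Callable[[A, B, C], D]) -> Callable[[A], Callable[[B], Callable[[C], D]]]:
    # (a b c -> d) becomes (a -> (b -> (c -> d)))
    return lambda a: lambda b: lambda c: f(a, b, c)

def fun_of_four(x: float, y: float, z: float) -> float:
    return (x + y) * z

print(curry3(fun_of_four)(3)(5)(6))  # → 48
```

As in the Scheme version, the arity is fixed at three, so there is no guessing about how many levels of application are needed.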
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2008-December/029280.html","timestamp":"2014-04-18T10:46:52Z","content_type":null,"content_length":"7273","record_id":"<urn:uuid:92206e3d-24e6-4534-a635-562e50143014>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inhomogeneous boundary conditions
5.6 Inhomogeneous boundary conditions
The method of separation of variables needs homogeneous boundary conditions. More precisely, the eigenfunctions must have homogeneous boundary conditions. (Even if in a set of functions each function
satisfies the given inhomogeneous boundary conditions, a combination of them will in general not do so.)
In the previous example, this problem could be circumvented by choosing instead of as the variable of the eigenfunctions. For the example in this section, however, this does not work.
5.6.1 The physical problem
The problem is to find the unsteady temperature distribution in a bar for any arbitrary position and time . The initial temperature distribution at time zero equals a given function . The heat flux
out of the left end equals a given function , and the temperature of the right end a given function . Heat is added to the bar from an external source at a rate described by a given function .
5.6.2 The mathematical problem
• Finite domain :
• Unknown temperature
• Constant , so a linear constant coefficient partial differential equation.
• Parabolic
• Inhomogeneous
• One initial condition
• One Neumann boundary condition
• One Dirichlet boundary condition
• All of , , , and are given functions.
5.6.3 Outline of the procedure
We would like to use separation of variables to write the solution in a form that looks roughly like:
Here the would be the eigenfunctions.
The cannot be eigenfunctions since the time axis is semi-infinite. Also, Sturm-Liouville problems require boundary conditions at both ends, not initial conditions.
Unfortunately, eigenfunctions must have homogeneous boundary conditions. So if was simply written as a sum of eigenfunctions, it could not satisfy inhomogeneous boundary conditions.
Fortunately, we can apply a trick to get around this problem. The trick is to write as the sum of a function that satisfies the inhomogeneous boundary conditions plus a remainder :
Since produces the inhomogeneous term in the boundary conditions, the remainder satisfies homogeneous boundary conditions. Therefore can be written as
using separation of variables. Add to get .
5.6.4 Step 0: Fix the boundary conditions
The first thing to do is find a function that satisfies the same boundary conditions as . In particular, must satisfy:
The function does not have to satisfy the either the partial differential equation or the initial condition. That allows you to take something simple for it. The choice is not unique, but you want to
select something simple.
A function that is linear in ,
is surely the simplest possible choice. In this example, it works fine too.
Plug this expression for into the boundary conditions for ,
That produces the requirements
The solution is and . So our is
Keep track of what we know, and what we do not know. Since we (supposedly) have been given functions and , function is from now on considered a known quantity, as given above.
You could use something more complicated than a linear function if you like to make things difficult for yourself. Go ahead and use if you really love to integrate error functions and Bessel
functions. It will work. I prefer a linear function myself, though. (For some problems, you may need a quadratic instead of a linear function.)
Under certain conditions, there may be a better choice than a low order polynomial in . If the problem has steady boundary conditions and a simple steady solution, go ahead and take to be that steady
solution. It will work great. However, in the example here the boundary conditions are not steady; we are assuming that and are arbitrary given functions of time.
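The notes' symbols were stripped in extraction, but the construction is easy to check numerically. Assume (my names, not the notes') that the flux condition prescribes the slope u_x = g(t) at x = 0, with any conduction constant folded into g, and that the temperature condition prescribes u = h(t) at x = ell; the linear choice is then uE(x, t) = h(t) + g(t)(x - ell):

```python
import math

# hypothetical boundary data (the notes' own symbols were lost in extraction)
g = lambda t: math.sin(t)      # prescribed slope u_x at x = 0 (flux condition)
h = lambda t: 1.0 + 0.5 * t    # prescribed temperature u at x = ell
ell = 2.0

def uE(x, t):
    # linear in x, so it satisfies both boundary conditions by construction
    return h(t) + g(t) * (x - ell)

t = 0.7
slope_at_0 = (uE(1e-6, t) - uE(0.0, t)) / 1e-6
print(abs(slope_at_0 - g(t)) < 1e-6)   # True: slope condition at x = 0
print(abs(uE(ell, t) - h(t)) < 1e-12)  # True: temperature condition at x = ell
```

Because uE is linear in x, both conditions hold identically, whatever g and h are.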
Next, having found , define a new unknown as the remainder when is subtracted from :
We now solve the problem by finding . When we have found , we simply add , already known, back in to get .
To do so, first, of course, we need the problem for to solve. We get it from the problem for by everywhere replacing by . Let's take the picture of the problem for in front of us and start converting:
First take the boundary conditions at and :
Replacing by :
But since by construction and ,
Note the big thing: while the boundary conditions for are similar to those for , they are homogeneous. We will get a Sturm-Liouville problem in the -direction for where we did not for . That is what
does for us.
We continue finding the rest of the problem for . We replace by into the partial differential equation ,
and take all terms to the right hand side:
where , or, written out
Hence is now a known function, just like .
The final part of the problem for that we have not converted yet is the initial condition. We replace by in ,
and take to the other side:
where is , or written out:
Again, is now a known function.
The problem for is now the same as the one for , except that the boundary conditions are homogeneous and functions and have changed into known functions and .
Using separation of variables, we can find the solution for in the form:
We already know how to do that! (Don't worry, we will go over the steps anyway.) Having found , we will simply add to find the asked temperature .
5.6.5 Step 1: Find the eigenfunctions
To find the eigenfunctions , substitute a trial solution into the homogeneous part of the partial differential equation, . Remember: ignore the inhomogeneous part when finding the eigenfunctions.
Putting into produces:
Separate variables:
As always, cannot depend on since the left hand side does not. Also, cannot depend on since the middle does not. So must be a constant.
We then get the following Sturm-Liouville problem for any eigenfunctions :
The last two equations are the boundary conditions on which we made homogeneous.
This is the exact same eigenvalue problem that we had in an earlier example, so I can just take the solution from there. The eigenfunctions are:
5.6.6 Step 2: Solve the problem
We expand in the problem for in a Fourier series:
In particular,
Since and are known functions, we can find their Fourier coefficients from orthogonality:
or with the eigenfunctions written out
The integrals in the bottom equal .
So the Fourier coefficients are now known constants, and the are now known functions of . Though in actual application, numerical integration may be needed to find them. During finals, I usually make
the functions , and simple enough that you can do the integrals analytically.
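The eigenfunctions and the value of the normalization integrals were formula images and did not survive extraction. For the homogeneous conditions here (zero slope at the flux end x = 0, zero value at the temperature end x = ell), the Sturm-Liouville problem has the odd half-cosines cos((2n-1) pi x / (2 ell)) as eigenfunctions, presumably what the stripped formulas showed, and "the integrals in the bottom" then equal ell/2. A numerical spot-check (ell = 2 is an arbitrary choice):

```python
import math

ell = 2.0  # arbitrary bar length for the check

def X(n, x):
    # eigenfunctions for X'(0) = 0, X(ell) = 0: odd half-cosines
    return math.cos((2 * n - 1) * math.pi * x / (2 * ell))

def integral(f, a, b, n=20000):
    # plain trapezoid rule
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

print(abs(X(3, 1e-7) - X(3, 0.0)) < 1e-12)  # flat at x = 0, so X'(0) = 0
print(abs(X(3, ell)) < 1e-12)               # X(ell) = 0
print(abs(integral(lambda x: X(1, x) * X(2, x), 0, ell)) < 1e-6)       # orthogonal
print(abs(integral(lambda x: X(2, x) ** 2, 0, ell) - ell / 2) < 1e-6)  # norm = ell/2
```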
Now write the partial differential equation using the Fourier series:
Looking in the previous section, the Sturm-Liouville equation was , so the partial differential equation simplifies to:
It will always simplify or you made a mistake.
For the sums to be equal for any , the coefficients of every individual eigenfunction must balance. So we get
We have obtained an ordinary differential equation for each . It is again constant coefficient, but inhomogeneous.
Solve the homogeneous equation first. The characteristic polynomial is
so the homogeneous solution is
For the inhomogeneous equation, undetermined coefficients is not a possibility since we do not know the actual form of the functions . So we use variation of parameters:
Plugging into the ordinary differential equation produces
We integrate this equation to find . I could write the solution using an indefinite integral:
But that has the problem that the integration constant is not explicitly shown. That makes it impossible to apply the initial condition. It is better to write the anti-derivative using an integral
with limits plus an explicit integration constant as:
You can check using the Leibniz rule for differentiation of integrals (or really, just the fundamental theorem of calculus,) that the derivative is exactly what it should be. (Also, the lower limit
does not really have to be zero; you could start the integration from 1, if it would be simpler. The important thing is that the upper limit is the independent variable .)
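With hypothetical names (the notes' symbols were stripped), each coefficient ODE has the form b' + a b = q(t) with b(0) = b0, and the definite-integral solution is b(t) = e^(-a t) ( b0 + integral from 0 to t of e^(a s) q(s) ds ). A numerical spot-check against the known constant-forcing solution:

```python
import math

def solve_b(t, a, q, b0, n=20000):
    # b(t) = e^{-a t} * ( b0 + int_0^t e^{a s} q(s) ds ), trapezoid rule
    h = t / n
    total = 0.5 * (q(0.0) + math.exp(a * t) * q(t))
    for i in range(1, n):
        s = i * h
        total += math.exp(a * s) * q(s)
    return math.exp(-a * t) * (b0 + h * total)

# known solution for constant forcing q = c: b(t) = b0 e^{-a t} + (c/a)(1 - e^{-a t})
a, c, b0, t = 1.5, 2.0, 0.3, 0.8
exact = b0 * math.exp(-a * t) + (c / a) * (1.0 - math.exp(-a * t))
approx = solve_b(t, a, lambda s: c, b0)
print(abs(approx - exact) < 1e-6)  # → True
```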
Putting the found solution for into
we get, cleaned up:
We still need to find the integration constant . To do so, write the initial condition using Fourier series:
This gives us initial conditions for the :
the latter from above, and hence
or writing out the eigenvalue:
We have in terms of known quantities, so we are done.
5.6.7 Summary of the solution
Collecting all the boxed formulae together, the solution is found by first computing the coefficients from:
Also compute the functions from:
Then the temperature is: | {"url":"http://www.eng.fsu.edu/~dommelen/pdes/style_a/svbex.html","timestamp":"2014-04-20T16:48:48Z","content_type":null,"content_length":"56667","record_id":"<urn:uuid:e53a1966-08e7-45dd-91d9-136f0a33163a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Famous Theorems of Mathematics
Mathematics deals with proofs. Whatever statement, remark, or result one uses in mathematics is considered meaningless until it is accompanied by a rigorous mathematical proof. This book is
intended to contain the proofs (or sketches of proofs) of many famous theorems in mathematics, in no particular order. It should be used both as a learning resource and as a general
reference; studying existing proofs is good practice for acquiring the skill of writing your own.
It is not, however, intended as a companion to any other wikibook or Wikipedia article, but it can complement them by providing links to the proofs of the theorems they contain.
One note here. There are usually many ways to solve a problem. Many times the proof used comes down to the primary definitions of terms involved. We will follow the definition given by the first
major contributor.
Table of contents
High school
• Fermat's theorem on sums of two squares
• Sum of the reciprocals of the primes diverges
• Bertrand's postulate
• Spectral Theorem
• e^πi+1=0
• Jordan Curve Theorem
• Prime number theorem
• Riemann hypothesis
Old table of contents
This section contains the table of content of the book as according to its original intentions. The material here should be either incorporated in the existing book or discarded.
Proofs and definitions will be arranged according to the fields of mathematics:
Further Reading
Manual of style
External links etc
Last modified on 13 January 2014, at 18:12 | {"url":"https://en.m.wikibooks.org/wiki/Famous_Theorems_of_Mathematics","timestamp":"2014-04-21T07:16:46Z","content_type":null,"content_length":"23779","record_id":"<urn:uuid:e340badb-a2c4-48ec-a38d-c32b5110ca7a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of a Spectrum
2 Answers
Accepted answer
The area under your curve should just be:
N = numel(x);
dt = 1/fs;
df = fs/N;
y = fft(x)*dt;
area_y = sum(abs(y))*df; % which is also equal to: sum(abs(fft(x)))/N
energy_y = sum(abs(y).^2)*df;
If you really want "energy", then energy_y should be equal to energy_x:
energy_x = sum(x.^2)*dt;
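That energy_y equals energy_x is just the discrete Parseval identity with this scaling (y = fft(x)*dt, and dt*df = 1/N). A quick check translating the MATLAB above into Python/NumPy (my sketch, not part of the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000.0
N = 4096
dt, df = 1.0 / fs, fs / N
x = rng.standard_normal(N)

y = np.fft.fft(x) * dt                       # scaled spectrum, as in the answer
energy_time = np.sum(x ** 2) * dt
energy_freq = np.sum(np.abs(y) ** 2) * df    # Parseval: should match
print(abs(energy_time - energy_freq) < 1e-9)  # → True
```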
freq range is between 10 to 800
What matlab command can give me the area of a spectrum. I have shock response spectrum but i need to find the area under the curve.
0 Comments
Yeah... in your case, you would just need:
area_y = sum(abs(y))/N;
The frequency increment ( df ) comes into play only if you scaled your FFT amplitudes ( y ) by the time increment ( dt ) - since dt*df = 1/ N. If you do not scale your FFT amplitudes inside your srs
function, then you should just divide your sum by N to get the area. If you want to find the area between a frequency range, you will have to do something a little different. See below:
N = 4096;
fs = 2000;
x = randn(1,N);
df = fs/N;
Nyq = fs/2;
y = fft(x);
f = ifftshift(-Nyq : df : Nyq-df);
If your y represents both the negative and positive frequency amplitudes:
area_y_10_800 = sum(abs(y( abs(f) >= 10 & abs(f) <= 800 )))/N;
or if your y represents only the positive frequencies
area_y_10_800 = 2*sum(abs(y( f >= 10 & f <= 800 )))/N;
However, you do not want to double the amount if you are including either your 0 frequency or Nyquist frequency amplitude in the frequency range.
Hi Lisa, If you have the Signal Processing Toolbox, you can use the avgpower() method of a spectrum object.
For example:
Fs = 1000;
t = 0:1/Fs:1-(1/Fs);
x = cos(2*pi*50*t)+sin(2*pi*100*t)+randn(size(t));
psdest = psd(spectrum.periodogram,x,'Fs',Fs,'NFFT',length(x));
avgpower(psdest,[25 75])
The final line above integrates under the PSD from 25 to 75 Hz.
Note you can get the fraction of the total power in the specified interval with:
avgpower(psdest,[25 75])/avgpower(psdest)
1 Comment
Thanks Wayne. But it is like this: I calculated the shock response spectrum Y and plotted between frequencies 10-500. I need the peaks and the area under the curve max(Y) gives me the peak and i
tried area(Y) and i get a shaded plot again but what i really want is the value of the area of Y. | {"url":"http://www.mathworks.com/matlabcentral/answers/45031","timestamp":"2014-04-16T08:27:44Z","content_type":null,"content_length":"32741","record_id":"<urn:uuid:88ef3449-6a92-4896-9ceb-f8c9c465a86c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
System of Tanks/DiffEQ
April 21st 2007, 07:16 PM
System of Tanks/DiffEQ
Solve the mixing problem. At t = 0, the volume of solution in both tanks is 60 L and tank 1 contains 60 g of chemical whereas tank 2 contains 200 g of chemical.
The system of differential equations I ended up getting is:
Is this right? How would I solve this?
April 21st 2007, 07:46 PM
I get the same system of differential equations. :)
Unless you want me to explain where those numbers come from, if you were not sure.
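The thread's equations were images and did not survive extraction. For a symmetric two-tank exchange (an assumed setup, not the thread's numbers: r litres per minute pumped each way between the two 60 L tanks), the system is x1' = (r/V)(x2 - x1), x2' = (r/V)(x1 - x2), and a quick numerical integration confirms the two invariants such a system must have, conserved total mass and exponentially decaying difference:

```python
import math

# assumed symmetric exchange: r litres/min pumped each way between 60 L tanks
r, V = 3.0, 60.0
k = r / V
x1, x2 = 60.0, 200.0      # initial grams of chemical (from the problem statement)
total0, diff0 = x1 + x2, x1 - x2

# integrate x1' = k(x2 - x1), x2' = k(x1 - x2) with small Euler steps
dt, T = 1e-4, 10.0
for _ in range(int(T / dt)):
    d = k * (x2 - x1)
    x1, x2 = x1 + d * dt, x2 - d * dt

# analytic facts: the total is conserved, the difference decays like e^{-2kt}
print(abs((x1 + x2) - total0) < 1e-6)                        # → True
print(abs((x1 - x2) - diff0 * math.exp(-2 * k * T)) < 1e-3)  # → True
```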
Math Forum Discussions - Re: Them there is some neat algebraic mechanics !
Date: Nov 27, 2012 12:44 AM
Author: Ray Koopman
Subject: Re: Them there is some neat algebraic mechanics !
On Nov 26, 5:06 am, djh <halitsk...@att.net> wrote:
> Thanks for that algebraic legerdemain -- it certainly clarifies why
> the u^2 in c on u,u^2 models the change in slope with change in u.
> Some questions.
> 1.
> You say "This is because a1 is only the slope of the function at
> x = 0"
> This makes partial sense to me inasmuch as a1 = a01+a10 and a01 is
> the slope in the "intercept" function A0 = a00+a01*x. But why are
> you free to disregard a10 here? Is it necessarily 0 when x is 0?
> If so, the reason why is eluding me.
slope = dy/dx = a1 + 2*a2*x
> 2.
> Putting aside for a moment the matter of whether we need a
> ?meaningful? definition for a1, may I operate with a2 the same way
> I?ve operated with usual slope coefficients to develop the 2-ways?
> That is, is it permissible to develop the 2-way a2 interactions
> involving coreXcomp and nonrandXrand (for each of the dicodon sets
> 1,2,3) using the same mechanics I?ve previously used for linear
> regression slopes of the usual type (first roll-up across length
> intervals within fold and then roll-up across folds)?
Yes. Think about a2 as if it were c1-b1, which "uses up" the u-level
factor. After summing over length intervals and folds, you have a 3-
way design: set X subset X method. (Note that "X" in this context
usually denotes a Cartesian product. Your use of X in "coreXcomp" and
"nonrandXrand" may be misunderstood by others. It would be better to
change the "X" to something else, say "/".)
For each set, you would have only 4 values, say A,B,C,D:
non ran
core A B
comp C D
The 2-way (core/comp X non/ran) interaction is A-B-C+D.
It corresponds to the 3-way (core/comp X non/ran X u-lev)
interaction when your were dichotomizing u.
> If so, I'd like to go ahead and do that, in order to see whether a2
> behaves as one might expect it to, given our current understanding
> of the probable relationship between c and u (namely that c varies
> directly with u).
> 3.
> Regarding a ?meaningful? defintion for a1, it would be useful to
> have one for two reasons:
> a) to see what happens with its coreXcomp and nonrandXrand 2-ways
> b) to see if this behavior of a1 helps to validate the use of
> residuals from c on (u,u^2) in the construction of predictors for
> logistic regressions involving structural alignability.
a1 is well defined. You need to define a measure of "average slope".
Think about what you want it to mean, what properties you want it to
have and not have. For instance, you might get the slope at each data
point and then use their literal average, a1 + 2*a2*mean_x. But what
if cells with different x-means give the same a1 and a2? Should their
"average slope" measures be the same or different? That's the kind of
question you have to ask yourself.
> 4.
> Does anything you've said change if we use your suggested u/(1+u)
> instead of u itself?
No. That's why I called the variables 'x' and 'y'.
Nothing I said was specific to your variables. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7928547","timestamp":"2014-04-19T17:43:00Z","content_type":null,"content_length":"4508","record_id":"<urn:uuid:070df2d5-93d6-4f94-8ca5-37babe581430>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
5x5 Magic Square - Easy and quick method!
Thursday, January 21, 2010
5x5 Magic Square - Easy and quick method!
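The construction itself is shown in the post's video; for reference, the standard Siamese method for odd n produces such squares (a sketch of a known construction, not necessarily the exact method demonstrated). With the entries 1-25, every row, column, and diagonal is forced to sum to the magic constant n(n^2+1)/2 = 65:

```python
def siamese(n):
    # odd-order magic square: start in the middle of the top row,
    # move up-right (wrapping); on collision, drop down one instead
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n
        if square[nr][nc]:
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

m = siamese(5)
magic = 5 * (5 * 5 + 1) // 2  # 65
rows = all(sum(row) == magic for row in m)
cols = all(sum(m[i][j] for i in range(5)) == magic for j in range(5))
diag = sum(m[i][i] for i in range(5)) == sum(m[i][4 - i] for i in range(5)) == magic
print(rows and cols and diag)  # → True
```

This also answers the question in comment 2: 65 is forced once the entries are 1-25; to get line sums of 75 you would change the entries, e.g. adding 2 to every cell (3-27) raises each line sum by 10.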
9 comments:
1. i am interested in ancient hindu teaching. how can i know the important vedic maths texts. Kindly provide me the details. your class are interesting and easy to understand. it is also easy to
teach to students. my email id is kannankunnathully@gmail.com will definitly be in touch.
2. How do you know that you will get 65 as answer? What if I need 75 as answer?
3. Hello Sir,
My name is dinesh. Am preparing for Bank examinations.I have one doubt on division of big no.s ie 2584623.5486/35.6 pls tell some sutras for doing such calculations.
5. hello sir can you please tell me how to find out power of any number like 101^100
i can find out square and cube but i dont know how the above one solve?
7. Its so nice i have also practiced alot with and awesome.......
this type of maths gives good logical and low time consuming.
8. Komal Singh
It is nice and low time consuming...but i am little bit confused in squaring the numbers...ur trick not applicable to all no. ...plz help me my id is komalsingh50@hotmail.com
9. sir i am a student of eight std. i saw your tricks in youtube it was so easy to and thanks for it | {"url":"http://vedicmaths-vsr.blogspot.com/2010/01/5x5-magic-square-easy-and-quick-method.html","timestamp":"2014-04-21T00:06:21Z","content_type":null,"content_length":"90405","record_id":"<urn:uuid:28efe158-febd-4391-9680-7e50811c84cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dynamic Asset Pricing Theory
Results 1 - 10 of 345
- Journal of Finance
"... This paper explores the structural differences and relative goodness-of-fits of affine term structure models ~ATSMs!. Within the family of ATSMs there is a tradeoff between flexibility in
modeling the conditional correlations and volatilities of the risk factors. This trade-off is formalized by our ..."
Cited by 336 (30 self)
This paper explores the structural differences and relative goodness-of-fits of affine term structure models (ATSMs). Within the family of ATSMs there is a tradeoff between flexibility in modeling the conditional correlations and volatilities of the risk factors. This trade-off is formalized by our classification of the N-factor affine family into N + 1 non-nested subfamilies of models. Specializing to three-factor ATSMs, our analysis suggests, based on theoretical considerations and empirical evidence, that some subfamilies of ATSMs are better suited than others to explaining historical interest rate behavior. IN SPECIFYING A DYNAMIC TERM STRUCTURE MODEL—one that describes the comovement over time of short- and long-term bond yields—researchers are inevitably confronted with trade-offs between the richness of econometric representations of the state variables and the computational burdens of pricing and estimation. It is perhaps not surprising then that virtually all of the empirical implementations of multifactor term structure models that use time series data on long- and short-term bond yields simultaneously have focused on special cases of "affine" term structure models (ATSMs). An ATSM accommodates time-varying means and volatilities of the state variables through affine specifications of the risk-neutral drift and volatility coefficients. At the same time, ATSMs yield essentially closed-form expressions for zero-coupon-bond prices (Duffie and Kan (1996)), which greatly facilitates pricing and econometric implementation. The focus on ATSMs extends back at least to the pathbreaking studies by Vasicek (1977) and Cox, Ingersoll, and Ross (1985), who presumed that the instantaneous short rate r(t) was an affine function of an N-dimensional state vector Y(t), r(t) = d0 + dy Y(t), and that Y(t) followed Gaussian and square-root diffusions, respectively. More recently, researchers have explored formulations of ATSMs that extend the one-factor Markov representation ...
, 2001
"... Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread
changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-co ..."
Cited by 224 (2 self)
Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes
have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common
factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread
changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity.
- Journal of Finance
"... This paper surveys the field of asset pricing. The emphasis is on the interplay between theory and empirical work and on the trade-off between risk and return. Modern research seeks to
understand the behavior of the stochastic discount factor (SDF) that prices all assets in the economy. The behavior ..."
Cited by 123 (3 self)
This paper surveys the field of asset pricing. The emphasis is on the interplay between theory and empirical work and on the trade-off between risk and return. Modern research seeks to understand the
behavior of the stochastic discount factor ~SDF! that prices all assets in the economy. The behavior of the term structure of real interest rates restricts the conditional mean of the SDF, whereas
patterns of risk premia restrict its conditional volatility and factor structure. Stylized facts about interest rates, aggregate stock prices, and cross-sectional patterns in stock returns have
stimulated new research on optimal portfolio choice, intertemporal equilibrium models, and behavioral finance. This paper surveys the field of asset pricing. The emphasis is on the interplay between
theory and empirical work. Theorists develop models with testable predictions; empirical researchers document “puzzles”—stylized facts that fail to fit established theories—and this stimulates the
development of new theories. Such a process is part of the normal development of any science. Asset pricing, like the rest of economics, faces the special challenge that data are generated naturally
rather than experimentally, and so researchers cannot control the quantity of data or the random shocks that affect the data. A particularly interesting characteristic of the asset pricing field is
that these random shocks are also the subject matter of the theory. As Campbell, Lo, and MacKinlay (1997, Chap. 1, p. 3) put it: What distinguishes financial economics is the central role that
uncertainty plays in both financial theory and its empirical implementation. The starting point for every financial model is the uncertainty facing investors, and the substance of every financial
model involves the impact of uncertainty on the behavior of investors and, ultimately, on markets. (*Department of Economics, Harvard University, Cambridge, Massachusetts.)
- Review of Derivatives Research , 1998
"... the problems and opportunities facing the financial services industry in its search for competitive excellence. The Center's research focuses on the issues related to managing risk at the firm
level as well as ways to improve productivity and performance. The Center fosters the development of a comm ..."
Cited by 120 (6 self)
the problems and opportunities facing the financial services industry in its search for competitive excellence. The Center's research focuses on the issues related to managing risk at the firm level
as well as ways to improve productivity and performance. The Center fosters the development of a community of faculty, visiting scholars and Ph.D. candidates whose research interests complement and
support the mission of the Center. The Center works closely with industry executives and practitioners to ensure that its research is informed by the operating realities and competitive demands
facing industry participants as they pursue competitive excellence. Copies of the working papers summarized here are available from the Center. If you would like to learn more about the Center or
become a member of our research community, please let us know of your interest.
- THE JOURNAL OF FINANCE , 2001
"... Motivated by recent financial crises in East Asia and the United States where the downfall of a small number of firms had an economy-wide impact, this paper generalizes existing reduced-form
models to include default intensities dependent on the default of a counterparty. In this model, firms have c ..."
Cited by 117 (6 self)
Motivated by recent financial crises in East Asia and the United States where the downfall of a small number of firms had an economy-wide impact, this paper generalizes existing reduced-form models
to include default intensities dependent on the default of a counterparty. In this model, firms have correlated defaults due not only to an exposure to common risk factors, but also to firm-specific
risks that are termed “counterparty risks.” Numerical examples illustrate the effect of counterparty risk on the pricing of defaultable bonds and credit derivatives such as default swaps.
, 1994
"... We consider the problem of pricing an American contingent claim whose payoff depends on several sources of uncertainty. Using classical assumptions from the Arbitrage Pricing Theory, the
theoretical price can be computed as the maximum over all possible early exercise strategies of the discounted ..."
Cited by 95 (0 self)
We consider the problem of pricing an American contingent claim whose payoff depends on several sources of uncertainty. Using classical assumptions from the Arbitrage Pricing Theory, the theoretical
price can be computed as the maximum over all possible early exercise strategies of the discounted expected cash flows under the modified risk-neutral information process. Several efficient numerical
techniques exist for pricing American securities depending on one or few (up to 3) risk sources. They are either lattice-based techniques or finite difference approximations of the Black-Scholes
diffusion equation. However, these methods cannot be used for high-dimensional problems, since their memory requirement is exponential in the
- Applied Mathematical Finance , 1995
"... We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely, but is assumed instead to lie between two
extreme values σmin and σmax. These bounds could be inferred from extreme values of the implied volatil ..."
Cited by 95 (3 self)
We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely, but is assumed instead to lie between two extreme
values σmin and σmax. These bounds could be inferred from extreme values of the implied volatilities of liquid options, or from high-low peaks in historical stock- or option-implied
volatilities. They can be viewed as defining a confidence interval for future volatility values. We show that the extremal non-arbitrageable prices for the derivative asset which arise as the
volatility paths vary in such a band can be described by a non-linear PDE, which we call the Black-Scholes-Barenblatt equation. In this equation, the "pricing" volatility is selected dynamically from
the two extreme values σmin, σmax, according to the convexity of the value function. A simple algorithm for solving the equation by finite-differencing or a trinomial tree is presented. We show
that this model captures ...
, 2002
"... As is well known, the classic Black-Scholes option pricing model assumes that returns follow Brownian motion. It is widely recognized that return processes differ from this benchmark in at least
three important ways. First, asset prices jump, leading to non-normal return innovations. Second, return ..."
Cited by 89 (12 self)
Add to MetaCart
As is well known, the classic Black-Scholes option pricing model assumes that returns follow Brownian motion. It is widely recognized that return processes differ from this benchmark in at least
three important ways. First, asset prices jump, leading to non-normal return innovations. Second, return volatilities vary stochastically over time. Third, returns and their volatilities are
correlated, often negatively for equities. We propose that time-changed Lévy processes be used to simultaneously address these three facets of the underlying asset return process. We show that our
framework encompasses almost all of the models proposed in the option pricing literature. Despite the generality of our approach, we show that it is straightforward to select and test a particular
option pricing model through the use of characteristic function technology.
- Wachter, Jessica A., 2003. Journal of Financial and Quantitative Analysis 37, 63–91.
"... This paper solves, in closed form, the optimal portfolio choice problem for an investor with utility over consumption under mean-reverting returns. Previous solutions either require
approximations, numerical methods, or the assumption that the investor does not consume over his lifetime. This paper ..."
Cited by 73 (6 self)
Add to MetaCart
This paper solves, in closed form, the optimal portfolio choice problem for an investor with utility over consumption under mean-reverting returns. Previous solutions either require approximations,
numerical methods, or the assumption that the investor does not consume over his lifetime. This paper breaks the impasse by assuming that markets are complete. The solution leads to a new
understanding of hedging demand and of the behavior of the approximate log-linear solution. The portfolio allocation takes the form of a weighted average and is shown to be analogous to duration for
coupon bonds. Through this analogy, the notion of investment horizon is extended to that of an investor who consumes at multiple points in time. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=15072","timestamp":"2014-04-17T20:13:35Z","content_type":null,"content_length":"39021","record_id":"<urn:uuid:bfb5581c-4360-47c9-b847-2f92b74fb3f3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
i got 56 as p 187 as b lateral as 672 total as 706 correct?
Number one.
i got 1046 for total
how? i keep getting that wrong..
surface area of a rectangular prism = Ph + 2B 56(12) + 2(187) = 1046
i did 56 x 12 x + 2 x 17
oh. one EIGHT seven. howd u get that?
but 187 is B lol
area of the base... 17 x 17?
1 is easy just add up the sides of the figure
you already figured out that B = 187
oh times 11. ok
yes, ma'am
ok. ill correct that.
what's the second figure?
triangular prism
yep, and what's the perimeter of the base?
umm, let me add,
btw in the first pic dont include the floor/ base of figure since you dont need to paint that :P
ok timo:)
the base of a triangular prism is a triangle, you should get 36 as the perimeter
and i think timo is talking about number 6? we're only on number 2
um,is it not 9+12+9+12?
no, because it's a triangle, not a square or rectangle. so add up 9, 12, and 15
oh. ok.. i get it:)
let me write that
b would be..umm..let me calculate..
um, 54?
yep :)
yay! im finally getting it! tnx to the awesome ari! i mean 296!
ok,let me write 54 down.
ok, eo,lateral, let me try.
no. the lateral area would be the rectangles
oh. but are the formu,as not the same or smthing?
no, they're not. that formula was for a rectangular prism. this is a triangular prism
because it too asks for the lateral area of the triang. prism
can u plz explain for the triang prism?
just find the areas for each of the rectangles and add them together
oh. um, let me try,
aah! um, area of a triangle is bh,yes?
you're not trying to find the area of the triangle. you're finding the areas of each of the rectangles
and area of a triangle = bh/2
ohh. so im actually finding the area of rectangles as those a re the laterals'/sides?
ohh. ok,once more..
um, was it more or less than my last answ er?
311, i gotta do some chemistry soon. no.
you have to include the 15?
ok,last try. 444.
what's the area of this rectangle?
yes, 22 x 12, no?
yes, which is...
yes. and what's the area of this rectangle?
umm, 350
why are you guessing? what's 22 x 15?
it's 330
and what's the area of this rectangle?
im using a calculator.
i'm using one too. it's 330
the first i said 264
i put 22*15 into web22.0calc
maybe you should use a different calculator
umm,let me see. then..
o yea! just used google's
so what's the area of that last rectangle?
um,the first one was 264. the second 330?
what's the area of this rectangle...
9 x 12?
9 x 22...
umm 198
yes. so what's 198 + 330 + 264?
um 762
ok. ill let you alone now. tnx ill search for someone elses hellp if u like
i was gonna finish helping you with this one. you still have to find total surface area
o yea
so 792 is our lateral area, all we have to do is add the areas of those two triangle. what do you get?
um... let me se..
is 12 tthe base?
the base is the triangle. you've already found the area of the base. it's 54. all you have to do is: 792 + 54 + 54 = ?
ok. um
does that make sense? what we did?
finally. tnx,ari:) ill stop botheringn u now
um,yes u believe so
mayi message u my answers?
is anyone else gonna help you?
i dont think so
:L maybe they will... @Hero @JakeV8
ok. ill try them on my own first
mmk. i'm off the chem. gl :)
off to*
hi jake its only 4 probs
okay 296:) tnx for ur help
no problem :) make sure you tell him you've done one and two on the second link
well, i guess i just did lol. nvm
okay. jake, ive done one and two on the second link:D
what are the P and B things? Help me help you...
and i need not do the bonus
ok. lol p isperimeter of base
b is area of base
And you're up to # 3 on the 2nd link, as sayeth @AriPotta ?
They show you all three leg-lengths on the picture... one is one the bottom, two are shown on the top. Just add 'em up...
i want to try and talk old fashionedly noe lol
jake pls can u work them out and explain asnu gi along? please?
its 4 probs
look at the diagram... see how it shows one of the bottom sides as 8? And the top shows the other two sides as 7 and 5 Perimeter = 8 + 7 + 5
ok. letnme add.
Adding is good. Reading the diagram and realizing that all you need to do for perimeter is add those 3 sides is even better. For # 3, P = 20 is correct.
yay! ok, now, b
B is the area of the base. The formula for area of a triangle is, as you surely know, 1/2 * base * height Again, the base side has length 8. And the height is shown as 4.3. (I'm just reading the
diagram... no magic) So find 1/2 * 8 * 4.3
What did you get?
Jame, Ive to go soon. But, if you leave explanations to thhe three remaining problems,mi will give you a medal.
Ok, great. Next is lateral surface area. Check that diagram again... the side surfaces are all rectangles, right? And area of a rectangle is length X width, or length X height (basically one side
X the other side). So, please tell me the the length and height, then multiply to get the area of each of the 3 lateral side surfaces: Lateral side 1: Length: Height: Length X Height = ? Lateral
side 2: Length: Height: Length X Height = ? Lateral side 3: Length: Height: Length X Height = ? This is really easy... it's just reading the length of each edge, and all the heights are the
same... then it's easy multiplying... you know how to do this.
oops s, jake*
Yes,? please. i am begging you. ill rereadmthe explanation so i get them.
itmis 11:00 pm where i amAnd ive to get up early tmrw
i would very much appreciate itnif u droppedoff thenanswersnhere. i really really need them:/
This is like a puzzle... you find the bottom area... and the bottom area is the same as the top area. Then, you find the side areas... then you add the bottom, top, and all 3 sides together. You
found the bottom area earlier... 17.2. So that's also the top area... Now you have to find the sum of all the side areas... All sides are rectangles with area = length X height So the first is :
length1 * height And the 2nd is : length2 * height And the 3rd is : length3 * height And the sum of all three areas is: length1 * height + length2 * height + length3 * height But since height is
the same, you can think of "distributive property" and rewrite the lateral side areas as: height * (length 1 + length2 + length3) And notice that adding those 3 lengths together is the same as
the perimeter, and you already found that to be 20. So the lateral side area is height * perimeter = height * 20
please please please leave the remaining answers to the third one and fourth on this image and the other two probs on the second image as well youll be rewarded.
So put in the value of the height (height = 9), and solve for lateral side area... that's the 3rd part. Then, for total surface area, add the lateral side area you just now found to the top area
+ the bottom area... remember, those were each 17.2 So the total surface area = (9 * height) + 17.2 + 17.2
um and the height would be 4.3,yes?
no that's the height of the base side. Height of the prism itself is shown as 9
um, ok 180
that's the 3rd part... lateral side area = 120
oops, 180, sorry!!!!!!!!!
i have to go:/ please:/ have mercy:/
3rd part: lateral side area = 180 4th part: total area = 180 + top + bottom = 180 + 17.2 + 17.2 = 180 + 34.4 = 214.4
please:-'and thanks but
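The formulas worked through in this thread (lateral area = perimeter × height, total area = lateral + 2 × base area) can be collected into a short sketch, using the numbers from the two prisms above. The function name is mine:

```python
def prism_surface_areas(side_lengths, base_area, height):
    """Right prism: perimeter P, lateral area P*h, total area P*h + 2B."""
    perimeter = sum(side_lengths)
    lateral = perimeter * height
    total = lateral + 2 * base_area
    return perimeter, lateral, total

# Triangular prism from problem 2: sides 9, 12, 15; base area 54; prism height 22.
p1, lat1, tot1 = prism_surface_areas([9, 12, 15], 54, 22)

# Triangular prism from problem 3: sides 8, 7, 5; triangle base 8, height 4.3;
# prism height 9. Base area = 1/2 * 8 * 4.3 = 17.2.
base_area = 0.5 * 8 * 4.3
p2, lat2, tot2 = prism_surface_areas([8, 7, 5], base_area, 9)
```

This reproduces the thread's answers: 36, 792, 900 for the first prism and 20, 180, 214.4 for the second.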
The Fourier binest algebra.
Power, Stephen C. and Katavolos, A. (1997) The Fourier binest algebra. Mathematical Proceedings of the Cambridge Philosophical Society, 122 (3). pp. 525-539. ISSN 0305-0041
The Fourier binest algebra is defined as the intersection of the Volterra nest algebra on L²(ℝ) with its conjugate by the Fourier transform. Despite the absence of nonzero finite rank operators this algebra is equal to the closure in the weak operator topology of the Hilbert–Schmidt bianalytic pseudo-differential operators. The (non-distributive) invariant subspace lattice is determined as an augmentation of the Volterra and analytic nests (the Fourier binest) by a continuum of nests associated with the unimodular functions exp(−isx²/2) for s > 0. This multinest is the reflexive closure of the Fourier binest and, as a topological space with the weak operator topology, it is shown to be homeomorphic to the unit disc. Using this identification the unitary automorphism group of the algebra is determined as the semi-direct product ℝ² ×_κ ℝ for the action κ_t(λ, μ) = (e^t λ, e^(−t) μ).
Item Type: Article
Journal or Mathematical Proceedings of the Cambridge Philosophical Society
Publication Title:
Additional http://journals.cambridge.org/action/displayJournal?jid=PSP The final, definitive version of this article has been published in the Journal, Mathematical Proceedings of the
Information: Cambridge Philosophical Society, 122 (3), pp 525-539 1997, © 1997 Cambridge University Press.
Subjects: Q Science > QA Mathematics
Departments: Faculty of Science and Technology > Mathematics and Statistics
ID Code: 19509
Deposited By: ep_ss_importer
Deposited On: 14 Nov 2008 12:15
Refereed?: Yes
Published?: Published
Last Modified: 09 Oct 2013 13:12
URI: http://eprints.lancs.ac.uk/id/eprint/19509
Pleasantville, NY SAT Math Tutor
Find a Pleasantville, NY SAT Math Tutor
...I find sciences fascinating, especially fields of genetics and oncology, because they are so actively developing. I specialize mostly in helping students excel in math and science courses, and
on high school standardized tests. I am familiar with many testing formats, including the SSAT, SAT, S...
23 Subjects: including SAT math, chemistry, geometry, biology
...Many are teenagers with attention issues, and hyper-activity, which prevents them from focusing for longer periods of time. I am working with several Autistic children, ADD Adults, dyslexic
adults and teenagers, and ADHD children. I have been working with Autistic children for several years now.
76 Subjects: including SAT math, English, Spanish, physics
...I have also worked with an SAT Prep organization that provides disadvantaged students with a full SAT prep course, and I have tutored a few private clients in SAT Prep. Resume available upon
request. I was Chess Champion of several chess tournaments from 2006-2012.
23 Subjects: including SAT math, reading, calculus, physics
...In cases where the student was completely at a loss, I would often go over the very basics very thoroughly and provide examples such that the student will have some hands-on experience instead
of just me lecturing. It is a personal belief that learning is like building a skyscraper, you can't ex...
20 Subjects: including SAT math, chemistry, geometry, biology
...I have a MS in Education and a MBA in Accounting and Finance. As a tutor, I take a personal approach when working with my clients. Using his or her individual learning style I ensure that our
learning sessions are not only educationally productive, but also enjoyable.
8 Subjects: including SAT math, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/pleasantville_ny_sat_math_tutors.php","timestamp":"2014-04-19T10:10:41Z","content_type":null,"content_length":"24257","record_id":"<urn:uuid:8adb7cd7-8dcd-42c0-9ab0-2adf6e929ccb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Moving linalg c code
josef.pktd@gmai...
Thu Apr 4 07:44:06 CDT 2013
On Thu, Apr 4, 2013 at 3:49 AM, Dave Hirschfeld
<dave.hirschfeld@gmail.com> wrote:
> Charles R Harris <charlesr.harris <at> gmail.com> writes:
>> Hi All,There is a PR that adds some blas and lapack functions to numpy. I'm
> thinking that if that PR is merged it would be good to move all of the blas
> and lapack functions, including the current ones in numpy/linalg into a single
> directory somewhere in numpy/core/src. So there are two questions here: should
> we be adding the new functions, and if so, should we consolidate all the blas
> and lapack C code into its own directory somewhere in numpy/core/src.Thoughts?
> Chuck
> The code in the aforementioned PR would be very useful to me in performance
> critical areas of my code. So much so in fact that I've actually rolled my own
> functions in cython to do what I need. I'd be happy if the functionality was
> available by default in numpy though.
What I see from a quick browse of the PR is that we can get most of linalg to
work on many matrices (2d arrays) at the same time.
(It's the first time we get generalized u-functions that are targeted to users,
if I see it correctly.)
I would also like to see these added, and we will have many cases where this
will be very useful in statistics and statsmodels.
> Regards,
> Dave
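For readers coming to this thread later: the "many matrices at the same time" behavior described above is available in modern NumPy, where most of np.linalg accepts stacks of shape (..., M, M). A quick sketch against the modern API (not necessarily the exact interface of the 2013 PR being discussed):

```python
import numpy as np

# Build a stack of 1000 symmetric positive definite 2x2 matrices, so every
# matrix in the stack is safely invertible (A @ A.T is PSD; adding I shifts
# all eigenvalues to at least 1).
rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 2, 2))
stack = a @ a.transpose(0, 2, 1) + np.eye(2)

# np.linalg.inv inverts every matrix in the (..., M, M) stack at once,
# with no Python-level loop over 2-d arrays.
invs = np.linalg.inv(stack)

# Each product stack[i] @ invs[i] should be (numerically) the 2x2 identity.
products = stack @ invs
```

This is exactly the kind of batched use case mentioned for statistics and statsmodels.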
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
The TIMSS 1995 Videotape Classroom Study
In 1995, the Third International Mathematics and Science Study (TIMSS) included a Videotape Classroom Study. This video study was an international videotape survey of eighth-grade mathematics
lessons in Germany, Japan, and the United States. Funded by the National Center for Education Statistics (NCES) and the National Science Foundation, the 1995 video study was the first attempt to
collect videotaped records of classroom instruction from nationally representative samples of teachers. The study was conducted in a total of 231 classrooms in Germany, Japan, and the United States
and used multimedia database technology to manage and analyze the videos.
The Videotape Classroom Study had four goals:
• To provide a rich source of information regarding what goes on inside eighth-grade mathematics classes in the three countries;
• To develop objective observational measures of classroom instruction to serve as quantitative indicators, at a national level, of teaching practices in the three countries;
• To compare actual mathematics teaching methods in the United States and the other countries with those recommended in current reform documents and with teachers’ perceptions of those
• To assess the feasibility of applying videotape methodology in future wider-scale national and international surveys of classroom instructional practices.
For the report on the methods and findings of the Videotape Classroom Study, click here.
Example lessons from the TIMSS 1995 Video Study were made available in the form of video vignettes of six eighth-grade lessons, two each from Germany, Japan, and the United States. These example
lessons were taught by teachers who volunteered to be videotaped for the project. The video vignettes were originally made available on a CD-ROM: Video Examples from the TIMSS Videotape Classroom
Study: Eighth Grade Mathematics in Germany, Japan, and the United States (NCES 98092). Now they are all available for viewing through the links below.
• German Lesson 2: Systems of Equations
German Lesson 2: Systems of Equations
After some brief warm-up problems, students and the teacher work collaboratively on solving a complex system of equations: (2y-5)/9=5(x-1)/6-5y and (3x+1)/12=(8/3)(y-2)+33x/2
Part 1: Presenting Warm-up Problems [Begin: 01:01]

The lesson begins with two minutes of quickly paced warm-up exercises. The teacher asks six questions, including "Eight to the third power?", "Twelve percent of one hundred twenty?", and "Five factorial?". Students answer orally, and the teacher confirms the response or asks if others agree.

Part 2: Reviewing Previous Material [Begin: 03:10]

After the warm-up activity, the teacher asks "What have we done lately?" After a student replies that they have studied "equations with two variables," the teacher encourages students to describe the solution methods they have learned. Students respond by identifying the methods of "equating," "substituting," and "adding." The teacher asks them to give examples of how such methods work. With some prodding, students generate a system of equations and illustrate the method while the teacher records their verbal descriptions on the chalkboard. They work on three examples of systems of equations during this review activity, which lasts about seven minutes.

Part 3: Posing and Working on the Problem [Begin: 10:00]

The teacher writes the following system of equations on the chalkboard: (2y-5)/9=5(x-1)/6-5y and (3x+1)/12=(8/3)(y-2)+33x/2. After giving students a minute to think about the problem, he asks for students to volunteer suggestions on how to proceed. Students take turns coming to the board to work on the problem, taking questions and advice from their peers and the teacher. After about ten minutes working together in this way, the teacher asks students to record the partial result in their notebooks and continue solving the problem. He gives them about five minutes to find the solution.

Part 4: Sharing the Result [Begin: 26:13]

The teacher asks students to describe the methods they used to finish the problem. One student suggests the method of addition and the teacher asks her to show her work on the chalkboard. She works at the board on completing the problem with help from the teacher and the other students. She occasionally asks questions of the teacher, and debates points with her peers. She finishes the problem in about six minutes.

Part 5: Summarizing the Objective and Assigning Seatwork [Begin: 33:34]

When the student completes the problem and returns to her seat, the teacher asks the students to summarize what they have learned about solving "complicated problems" like this. The teacher says that the main thing is to think about what method will be best to use for different types of systems. He then assigns a problem from the exercise book. For about seven minutes the students work independently. The teacher monitors their work and occasionally assists individual students until the bell rings ending the lesson.
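As an aside (not part of the study report), the system posed in the lesson can be checked with exact rational arithmetic. Clearing denominators is my own derivation: multiplying the first equation by 18 and the second by 12 gives the linear system below, solved here by Cramer's rule:

```python
from fractions import Fraction as F

# The classroom system, cleared of denominators:
#   (2y-5)/9 = 5(x-1)/6 - 5y        ->  15x - 94y = 5   (multiply both sides by 18)
#   (3x+1)/12 = (8/3)(y-2) + 33x/2  -> 195x + 32y = 65  (multiply both sides by 12)
a1, b1, c1 = F(15), F(-94), F(5)
a2, b2, c2 = F(195), F(32), F(65)

# Cramer's rule for a 2x2 system, with exact rationals throughout.
det = a1 * b2 - a2 * b1
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det
```

The exact solution is x = 1/3, y = 0, which satisfies both original equations.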
Physics Forums - View Single Post - Acceleration as a function of displacement
actually as it turned out:
a(t) = 6 theta
better put as
a(t) = 6 theta(t)
a(t) is just the second derivative of theta(t), so
theta''(t) = 6 theta(t)
tossing away the 6 mentally for a second, what function equals its own second derivative? e^t does! so let's play with that and see what we get
if theta(t) = e^t, then theta''(t) = e^t. Okay, but we need to bring that 6 back in. So, theta(t) = e^(sqrt6 t), theta'(t) = v(t) = sqrt6 e^(sqrt6 t), and v'(t) = a(t) = 6 e^(sqrt6 t).
To check it: a(t) = 6 e^(sqrt6 t). e^(sqrt6 t) = theta(t); substitute and we get back to the original a(t) = 6 theta.
yay :) using this I got the right answer for my problem!
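A quick numerical sanity check of that solution (my own sketch, using a central finite-difference approximation of the second derivative):

```python
import math

def theta(t):
    # Candidate solution theta(t) = e^(sqrt(6) t) from the post above.
    return math.exp(math.sqrt(6) * t)

def second_derivative(f, t, h=1e-4):
    # Central finite-difference approximation of f''(t).
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

# theta'' should match 6*theta at several sample points; collect the
# relative errors, which should all be tiny.
checks = [abs(second_derivative(theta, t) - 6 * theta(t)) / (6 * theta(t))
          for t in (0.0, 0.5, 1.0)]
```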
But I'm on another problem (last one, promise) with similar premises.
omega(t) = 5 (theta(t))^2 is what I'm given.
So theta'(t) = 5 (theta(t))^2
Toss away the 5, we can deal with it later...but what function equals its derivative when you square it?
Early NL Series: Intro and Run Estimation
My major interest in baseball research is theoretical sabermetrics. The “theoretical” label sounds a bit arrogant, but what I mean is that I am interested particularly in questions of what would
happen at extremes that do not occur with the usual seasonal Major League data that many people analyze (for instance, RC works fine for normal teams, and so does 10 runs = 1 win as a rule of thumb.
You don’t really need BsR or Pythagenpat for those types of situations--they can help sharpen your analysis, but you won’t go too far off track without them.) Thus my interest in run and win
estimation at the extremes, as well as evaluation of extreme batters (yes, I still have about five installments in the Rate Stat series to write, and yes, I will get around to it, but when, I don’t
know). Secondary to that is using sabermetrics to increase my understanding of the baseball world around me (example, how valuable is Chipper Jones? What are the odds that the Tigers win the World
Series? Who got the better of the Brewers/Rangers trade?). I don't do this a whole lot here because there are dozens and dozens of people who do that kind of stuff, and I wouldn't be able to add any
added insight. But a close third is using sabermetrics to evaluate the players and teams of the past. Particularly, I am interested in applying sabermetric analysis to the earliest days of what we
now call major league baseball.
A few years ago, and again recently, I turned my attention to the National Association, the first loose major league of openly professional players that operated from 1871-1875. However, this league,
as anyone who has attempted to statistically analyze it will know, was a mess. Teams played 40 games in a season; some dropped out after 10, some were horrifically bad, Boston dominated the league,
etc. All of these factors make it difficult to develop the kind of sabermetric tools (run estimators, win estimators, baselines) that we use in present day analysis. So I finally threw my hands up
and gave up (Dan Rosenheck came up with a BsR formula that worked better for the NA than anything I did, but there are limitations of the data that are hard to overcome). For now, it is probably best
to eyeball the stats of NA players and teams and use common sense, as opposed to attempting to apply rigorous analytical structures to them.
Anyway, when things start to settle down, you have the National League, founded in 1876. I should note at this point that while I am interested in nineteenth-century baseball, I am by no means an
expert on it, and so you should not be too surprised if I butcher the facts or make faulty assumptions, or call Cap Anson “Cap Anderson”. If you want a great historical presentation of old-time
baseball, the best place to go is David Nemec’s The Great Encyclopedia of Nineteenth Century Major League Baseball. I believe that a revised edition of this book has been published recently, but I
have the first edition. It is really a great book, similar in format to my favorite of the 20th century baseball encyclopedias, The Sports Encyclopedia: Baseball (or Neft/Cohen if you prefer). Like
that work, only basic statistics are presented (no OPS+ or Pitching Runs, etc.), but you get the complete roster of each team each year, games by position, etc. And just like Neft/Cohen, there is a
text summary of every season’s major stories, although Nemec writes these over the course of four or five pages, with pictures and trivial anecdotes, as opposed to the several paragraphs in the Neft/
Cohen book. I wholeheartedly recommend the Nemec encyclopedia to anybody interested in the 19th century game.
That digression aside, the 1876 National League is still a different world than what we have today. The season is 60 games long, one team goes 9-56, pitchers are throwing from a box 45 feet away from
the plate, it takes a zillion balls to draw a walk, overhand pitching is illegal, etc. But thankfully, you can make some sense of the statistics of this league, and while our tools don’t work as
well, due to the competitive imbalance, the lack of important data that we have for later seasons, the shorter sample sizes as a result of a shorter season, etc., they can work to a level of
precision that makes me comfortable to present their findings, with repeated caveats about how inaccurate they are compared to similar tools today. For the National Association, I could never reach
that level of confidence.
What I intend to do over the course of this series is to look at the National League each season from 1876-1881. I chose 1881 for a couple reasons, the first being that during those seven seasons the
NL had no other contenders to “major league” status (although many historians believe that other teams in other leagues would have been competitive with them--it's not like taking today’s Los Angeles
Dodgers against the Vero Beach Dodgers). Also, in Bill James’ Historical Data Group Runs Created formulas, 1876-1881 is covered under one period (although 1882 and 1883 are included as well). That
James found that he could put these seasons under one RC umbrella led me to believe that the same could be done for BsR and a LW method as well. I will begin by looking at the runs created
methodology here.
Run estimation is a little tricky as you go back in time. Unfortunately, there is no play-by-play database that we can use to determine empirical linear weights, and some important data is missing
(SB and CS particularly). The biggest missing piece of the offensive puzzle though is reached base on error, which for simplicity’s sake I will just refer to as errors from hereon. In the 1880 NL,
for instance, the fielding average was .901, and there were 8.67 fielding errors per game (for both teams). One hundred years later, the figures were .978 and 1.74. So you have something like five
times as many errors being made as you do in the modern game.
When looking at modern statistics, you can ignore the error from an offensive perspective pretty safely. It will undoubtedly improve the accuracy of your run estimator if you can include it, but only
very slightly, and the data is not widely available so we just ignore it, as we sometimes ignore sacrifice hits and hit batters and other minor events. But when there are as many errors as there were
in the 1870s, you can’t ignore that. If you use a modern formula like ERP, and find the necessary multiplier, you will automatically inflate the value of all of the other events, because there has to
be compensation somewhere for all of the runs being created as a result of errors.
So far as I know, there is only one published run estimator for this period. Bill James’ HDG-1 formula covers 1876-1883, and is figured as:
RC = (H + W)*(TB*1.2 + W*.26 + (AB-K)*.116)/(AB + W)
Bill decided to leave base runners as the modern estimate of H+W, and then try to account somewhat for errors by giving all balls in play extra advancement value. If you use the total offensive stats
of the period to find the implicit linear weights, this is what you get:
RC = .730S + 1.066D + 1.402T + 1.739HR + .434W - .1081(AB - H - K) - .1406K
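To make the comparison concrete, here is a small sketch that evaluates both James' HDG-1 formula and its linearization on the same stat line. The class name and the stat line itself are invented, chosen only to exercise the formulas:

```java
// Bill James' HDG-1 Runs Created (1876-1883) and the linearized version derived from it.
class Hdg1 {

    static double runsCreated(double s, double d, double t, double hr,
                              double w, double ab, double k) {
        double h = s + d + t + hr;
        double tb = s + 2 * d + 3 * t + 4 * hr;
        return (h + w) * (tb * 1.2 + w * 0.26 + (ab - k) * 0.116) / (ab + w);
    }

    static double linearRc(double s, double d, double t, double hr,
                           double w, double ab, double k) {
        double h = s + d + t + hr;
        return 0.730 * s + 1.066 * d + 1.402 * t + 1.739 * hr + 0.434 * w
             - 0.1081 * (ab - h - k) - 0.1406 * k;
    }

    public static void main(String[] args) {
        // Hypothetical team line: 70 1B, 20 2B, 5 3B, 5 HR, 30 BB, 400 AB, 40 K.
        double rc = runsCreated(70, 20, 5, 5, 30, 400, 40);
        double lin = linearRc(70, 20, 5, 5, 30, 400, 40);
        System.out.printf("HDG-1 RC = %.1f, linearized = %.1f%n", rc, lin);
    }
}
```

For a roughly league-typical line like this the two versions agree within a fraction of a run; the gap widens for extreme teams, which is exactly the regime the post cares about.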
As you can see, the value of each event is inflated against our modern expectation of what they should be. I should note here that, of course, we don’t expect the 1870s weights to be the same as or
even that similar to the modern weights. The coefficients do and should change as the game changes. That said, though, we have to be suspicious of a homer being valued at 1.74 runs and a triple at
1.40. The home run has a fairly constant value and it would take a very extreme context to lift its value so high. Scoring is high in this period (5.4 runs/game), but a lot of that logically has to
be due to the extra errors. Three and a half extra errors per team game is like adding another 3.5 hits--it's going to be a factor in increased scoring.
To test RMSE for run estimators, I figured the error per (AB - H). I did this because I did not want the ever changing schedule length to unduly effect the RMSE. Of course, this does introduce the
potential for problems because AB-H is much less of a good proxy for outs in this period than it is today, as I will discuss shortly. I then multiplied the per out figure by 2153 (the average number of
AB-H for a team in the 1876-1883 NL). In any case, doing this versus just taking the straight RMSE against actual runs scored did not make a big difference. Bill’s formula came in at 35.12 while the
linearization was 30.65.
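The normalization just described (error per AB-H, rescaled by the period-average 2,153 batting outs) can be sketched as follows; the class name and sample teams are placeholders, not real 1876-1883 data:

```java
// RMSE of a run estimator, normalized per (AB - H) and rescaled to the
// 1876-1883 average of 2153 batting outs per team, as described in the post.
class ScaledRmse {

    static double rmsePerOut(double[] actual, double[] predicted, double[] abMinusH) {
        double sumSq = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double errPerOut = (predicted[i] - actual[i]) / abMinusH[i];
            sumSq += errPerOut * errPerOut;
        }
        return Math.sqrt(sumSq / actual.length) * 2153.0;
    }

    public static void main(String[] args) {
        // Two made-up teams, just to exercise the function.
        double[] r   = {400, 450};
        double[] rc  = {410, 445};
        double[] out = {2000, 2100};
        System.out.printf("scaled RMSE = %.2f runs%n", rmsePerOut(r, rc, out));
    }
}
```

With a team whose AB-H happens to equal 2153, the scaled RMSE is just the raw error in runs, which is a quick way to sanity-check the function.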
Of course what I wanted to do was figure out a Base Runs formula that worked for this period, as BsR is the most flexible and theoretically sound run estimator out there. What I decided to do was use
Tango Tiger’s full modern formula and attempt to estimate some data that was missing and throw out other categories that would be much more difficult to estimate. I wound up estimating errors,
sacrifice hits, wild pitches, and passed balls but throwing out steals, CS, intentional walks, hit batters, etc. Some of those events were subject to constantly changing rules and strategy (stolen
bases and sacrifices were not initially a big part of the professional game) or didn’t even yet exist (Did teams issue intentional walks when it took 8 balls to give the batter first base? I am not a
historian, but I doubt it. Hit batters did not result in a free pass until 1887 in the NL). In the end, I came up with these estimates:
ERRORS: In modern baseball, approximately 65% of all errors result in a reached base on error for the offense. I (potentially dubiously) assumed that a similar percentage held in the 1870s, and used
70%. Then I simply figured x as 70% of the league fielding errors, per out in play (AB-H-K). x was allowed to be a different value for each season. Some may object to this as it hones in too much on
the individual year and I certainly can understand such a position. However, the error rates were fluctuating during this period. In 1876 the league FA was .866; in 1877 it was up to .884; then .893,
.892, .901, .905, .897, and .891. These differences are big enough to suggest that fundamental changes in the game may have been occurring from year-to-year.
James’ method had no such yearly correction, and if you force the BsR formula I will present later to use a constant x value of .134 (i.e. 13.4% of outs in play resulted in ROE), its RMSE will
actually be around a run and a half higher than that of the linearization of RC. I still think that there are plenty of good reasons to use the BsR formula instead, but in the interests of
intellectual honesty, I did not want to omit that fact.
It is entirely possible that a better estimate for errors could be found; there is no reason to assume that every batter is equally likely to reach on an error once they’ve made an out in play. In
fact, I am sure that some smart mind could come along and come up with better estimates than I have, and blow my formula right out of the water. I welcome further
inquiry into this by others and look forward to my formula being annihilated. So don't take any of this as a finished product or some kind of divine truth (not that you should with my other work, either).
SACRIFICES: The first league to record sacrifices, so far as I can tell, was the American Association in 1883 and 1884. In those leagues, there were .0323 and .0327 SH per single, walk, and estimated
ROE. So I assumed SH = .0325*(S + W + E) would be an acceptable estimate in the early NL. NOTE: Wow, did I screw the pooch on this one. The AA DID NOT track sacrifices in '83 and '84. I somehow
misread the HB column as SH. We do not have SH data until 1895 in the NL. So the discussion that follows is of questionable accuracy.
I did this some time ago without thinking it through completely; in early baseball, innovations were still coming quickly, and it is possible that in the seven year interval, the sacrifice frequency
changed wildly. George Wright recalled in 1915 (quoted in Bill James’ New Historical Baseball Abstract, pg. 10): “Batting was not done as scientifically in those days as now. The sacrifice hit was
unthought of and the catcher was not required to have as good a throwing arm because no one had discovered the value of the stolen base.”
On the other hand, 1883 is pretty close to the end of our period, so while the frequency may well have increased over time, the estimate should at least be pretty good near the end of the line. One
could also quibble with the choice of estimating sacrifices as a percentage of times on first base when, if sacrifices are not recorded, they are in actuality a subset of AB-H-K. Maybe an estimate
based both on times on first and outs in play would work best. Again, there are a lot of judgment calls that go into constructing the formula, and so there are lots of areas for improvement.
WP and PB: These were kept by the NL, and there were .0355 WP per H+W-HR+E and .0775 PB per the same. So, the estimates are WP = .0355*(H + W - HR + E) and PB = .0775*(H + W - HR + E).
Then I simply plugged these estimates into Tango’s BsR formula. D of course was home runs, while A = H + W - HR + E + .08SH and C = AB - H - E + .92SH. The encouraging thing about this exercise was
that the B factor only needed a multiplier of 1.087 (after including a penalty of .05 for outs) to predict the correct number of total runs scored. Ideally, if Base Runs was a perfect model of
scoring (obviously it is not), we could use the same formula with any dataset, given all of the data, and not have to fudge the B component. The fact that we only had to fudge by 1.087 (compared to
Bill James who to make his Basic RC work had to add walks into the B factor, take 120% of total bases, and add 11.6% of balls in play to B), could indicate that the BsR formula holds fairly well for
this time when we add important, more common events like SH, errors, WP, and PB. Of course, perhaps Bill could get similar results using a more technical RC formula + estimation. The bottom line is,
a fudge of only 1.087 will keep the linear weights fairly close to what we expect today. I don't know for sure that they should be, but I'd rather err on the side of our expectations as opposed to
a potentially quixotic quest to produce the lowest possible RMSE for a sample of sixty teams playing an average of 78 games each.
So the B formula is:
B = (.726S + 1.948D + 3.134T + 1.694HR + .052W + .799E + .727SH + 1.165WP + 1.174PB - .05(AB - H - E))*1.087
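Assembled, the estimator takes the standard Base Runs shape BsR = A*B/(B + C) + D, with A, C, and D as defined earlier in the post and B as just given. A sketch, with the SH/WP/PB estimates built in from the ratios stated above; the class name and team line are invented:

```java
// Base Runs for the 1876-1881 NL, using the A/B/C/D factors and the
// SH, WP, and PB estimates described in the post. Inputs are hypothetical.
class EarlyNlBsr {

    static double bsr(double s, double d, double t, double hr,
                      double w, double ab, double e) {
        double h = s + d + t + hr;
        double sh = 0.0325 * (s + w + e);
        double wp = 0.0355 * (h + w - hr + e);
        double pb = 0.0775 * (h + w - hr + e);

        double A = h + w - hr + e + 0.08 * sh;
        double B = (0.726 * s + 1.948 * d + 3.134 * t + 1.694 * hr + 0.052 * w
                  + 0.799 * e + 0.727 * sh + 1.165 * wp + 1.174 * pb
                  - 0.05 * (ab - h - e)) * 1.087;
        double C = ab - h - e + 0.92 * sh;
        double D = hr;
        return A * B / (B + C) + D;
    }

    public static void main(String[] args) {
        double base = bsr(650, 90, 35, 8, 40, 2850, 380);
        double more = bsr(680, 90, 35, 8, 40, 2850, 380); // 30 extra singles
        System.out.printf("BsR = %.1f; with 30 extra singles = %.1f%n", base, more);
    }
}
```

The 1.087 multiplier sits on the B factor exactly as in the formula above; removing it (or the .05 out penalty) changes the totals noticeably, which is easy to experiment with here.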
The RMSE of this formula by the standard above is 28.18. I got as low as 24.61 by increasing the outs weight to -.2, but I was not comfortable with the ramifications of this. As mentioned before, if
one does not allow each year to have a unique ROE per OIP ratio, the RMSE is a much worse 32.20. Again, I feel a different yearly factor is appropriate, but can certainly see if some feel this is
an unfair advantage for this estimator when comparing it to others. The error of approximately 30 runs is a far cry from the errors around 23 in modern baseball, plus the season was shorter and the
teams in this period averaged only 421 runs/season, so the raw number makes it seem smaller than it actually is. As I said before, you should always be aware of the inaccuracies when using any
sabermetric method, but those caveats are even more important to keep in mind here.
Another way to consider the error is as a percentage of the runs scored by the team. This is figured as ABS(R-RC)/R. For sake of comparison, basic ERP, when used on all teams 1961-2002 (except 1981
and 1994), has an average absolute error of 2.7%. The BsR formula here, applied to all NL teams 1876-1883, has an AAE of 5.4%, twice that value. So once again I will stress that the methods used here
are nowhere near as accurate as the similar methods used in our own time. Just for kicks, the largest error is a whopping 24.2% for the 1876 Cincinnati entry, which scored 238 runs but was projected
to score 296. The best estimate is for Buffalo in 1882; they actually scored 500 versus a prediction of 501.
Before I move on too far, I have a little example that will illustrate the enormous effect of errors in this time and place. In modern baseball, there are pretty much exactly 27 outs per game, and
approximately 25.2 of these are AB-H. We recognize, of course, that ROE in our own time are included in this batting out figure, and should not be, but any distortion is small and can basically be ignored.
Picking a random year, in the 1879 NL, we know that there were 27.09 outs/game since we have the innings pitched figure. How many batting outs were there per game? Well, if the modern rule of thumb
held, there should be just about 25.2. There were 28.01. So there are more batting outs per game than there are total outs in the game. With our error estimate subtracted (so that batting outs = AB - H - E), we estimate 24.60. Now this may well be too low, or just right, or what have you. Maybe it should have been that 50% of errors put a runner on first base instead of 70%. I don't know. What I do
know is that if you pretend errors do not exist, you are going to throw all of your measures for this time and place out of whack. Errors were too big of a factor in the game to just be ignored as we
can do today.
Let’s take a look at the linear values produced by the Base Runs formula, as applied to the entire period:
Runs = .551S + .843D + 1.126T + 1.404HR + .390W + .569E + .081SH + .280PB + .278WP - .145(AB - H - E)
This is why I felt much more comfortable with the BsR formula I chose, despite the fact that there were versions with better accuracy. These weights would not be completely off-base if we found them
for modern baseball. Whether or not they are the best weights for 1876-1883, we will have to wait for when brighter minds tackle the problem or when PBP data is available and we can empirically see
what they are. But to me, it is preferable to accept greater error in team seasonal data but keep our common sense knowledge of what events are worth rather than to chase greater accuracy but distort
the weights.
This is still not the formula that I am going to apply to players, though. For that, I will use the linear version for that particular season. Additionally, for players, SH, PB, and WP will be broken
back down into their components. What I mean is that we estimate that a SH is worth .081 runs, and we estimated that there are .0325 SH for every S, W, and E. .081*.0325 = .0026, and therefore, for
every single, walk, and error we’ll add an additional .0026 runs. So a single will be worth .551+.0026 = .554 runs. We’ll also distribute the PB and WP in a similar way.
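The redistribution can be sketched directly: singles, walks, and errors absorb the SH share (.0325 SH per S+W+E), and everything except the home run and the out absorbs the PB and WP shares (.0775 and .0355 per H+W-HR+E). Using the per-year 1876 weights listed later in the post, this reproduces the second 1876 row to rounding; the class name is mine:

```java
// Fold the estimated SH, PB, and WP values back into the individual event
// weights, as described in the post. Checked against the 1876 weight rows.
class WeightRedistribution {

    static final double SH_RATE = 0.0325;  // SH per (S + W + E)
    static final double WP_RATE = 0.0355;  // WP per (H + W - HR + E)
    static final double PB_RATE = 0.0775;  // PB per (H + W - HR + E)

    // 1876 weights for SH, PB, WP from the first 1876 row.
    static final double SH_W = 0.085, PB_W = 0.289, WP_W = 0.287;

    static double adjust(double base, boolean getsShShare, boolean getsPbWpShare) {
        double adj = base;
        if (getsShShare)   adj += SH_W * SH_RATE;
        if (getsPbWpShare) adj += PB_W * PB_RATE + WP_W * WP_RATE;
        return adj;
    }

    public static void main(String[] args) {
        System.out.printf("single: %.3f%n", adjust(0.552, true,  true));  // table: .588
        System.out.printf("double: %.3f%n", adjust(0.853, false, true));  // table: .886
        System.out.printf("walk:   %.3f%n", adjust(0.386, true,  true));  // table: .422
        System.out.printf("homer:  %.3f%n", adjust(1.417, false, false)); // table: 1.417
    }
}
```

This matches the published second-row weights to within about one unit in the third decimal; the small differences are rounding in the published figures.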
There are some drawbacks to doing it this way. If Ross Barnes hits 100 singles, his team may in fact lay down 3.25 more sacrifices. But it will be his teammates doing the sacrificing, not him. And we
would assume that good hitters would sacrifice less than poor hitters, and this method assumes they are all doing it equally.
On the other hand, though, we are just doing something similar in spirit to what a theoretical team approach does--crediting the change in the team’s stats as a direct result of the player to the
player. Besides, there’s really no other fair way to do it (we don’t want to get into estimating SH as a function of individual stats, and even if we did, we have no individual SH data for this
period to test against). Also, in the end, the extra weight added to each event will be fairly small, and I am much more comfortable doing it with the battery errors which should be fairly randomly
distributed with regards to which particular player is on base when they occur.
Then there is the matter of the error. Since the error is done solely as a function of AB-H-K, we could redistribute it, and come up with a different value for a non-K out and a K out, and write
errors out of the formula, and have a mathematically equivalent result. However, I am not going to do this because I believe that, as covered previously, errors are such an important part of this
game that we should recognize them, and maybe even include them in On Base Average (I have not in my presentation here, but I wouldn’t object if someone did) in order to remember that they are there.
I think that keeping errors in the formula gives a truer picture of the linear weight value of each event as well, as it allows us to remember that the error is worth a certain number of runs and
that outs, actual outs, have a particular negative value. Hiding this by lowering the value of an out seems to erase information to me.
I mentioned earlier that each year will have a different x to estimate errors in the formula x(AB-H-K). They are: 1876 = .1531, 1877 = .1407, 1878 = .1368, 1879 = .1345, 1880 = .1256, 1881 = .1184.
At this point, let me present the weights for the league as a whole in each year in 1876-1881, and then the ones with SH, PB, and WP stripped out and reapportioned across the other events. The first
set is presented as (S, D, T, HR, W, E, AB-H-E, SH, PB, WP). The second is presented as (S, D, T, HR, W, E, AB-H-E).
1876: .552, .853, 1.146, 1.417, .386, .570, -.147, .085, .289, .287
1876: .588, .886, 1.178, 1.417, .422, .606, -.147
1877: .563, .862, 1.152, 1.414, .398, .581, -.153, .079, .287, .285
1877: .598, .894, 1.184, 1.414, .433, .616, -.153
1878: .546, .846, 1.138, 1.417, .380, .564, -.144, .087, .289, .287
1878: .581, .879, 1.171, 1.417, .415, .599, -.144
1879: .543, .830, 1.108, 1.397, .385, .560, -.140, .082, .275, .273
1879: .577, .861, 1.139, 1.397, .419, .594, -.140
1880: .537, .825, 1.105, 1.400, .378, .554, -.137, .086, .277, .275
1880: .571, .856, 1.136, 1.400, .412, .588, -.137
1881: .560, .859, 1.149, 1.415, .395, .578, -.151, .080, .287, .285
1881: .595, .891, 1.182, 1.415, .430, .613, -.151
Next installment, I’ll talk a little bit about replacement level, the defensive spectrum, and park factors.
2 comments:
1. An excellent series of articles I must say. I noticed with BsR that it's pretty accurate dating back to World War II. It's interesting that this coincides with a .005 gain in Fielding Percentage
and a 10% increase in strikeouts after WWII.
2. Thanks. One of the things I intend to get done eventually is BsR formulas for 1883-1953 or so. More errors is certainly one of the things that can throw any run estimator off.
Comments are moderated, so there will be a lag between your post and it actually appearing. I reserve the right to reject any comment for any reason. | {"url":"http://walksaber.blogspot.com/2007/08/early-nl-series-intro-and-run.html","timestamp":"2014-04-16T08:26:03Z","content_type":null,"content_length":"117066","record_id":"<urn:uuid:9d693bb6-dd5f-4275-a6e4-6b01ffd0bba2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why does the Kleene Hierarchy not collapse?
Consider some logical statement (I am talking about natural numbers all the way). P(x, y, z) is a computable statement:
For all x: There exists an y: For all z: P(x,y,z) is true.
I suppose this would have a Kleene level of 3.
Now, instead of considering all x, y, and z, consider only the values from zero up to (but not including) a+q, a+v, and a+w:
For all x < a + q: There exists an y < a + v: For all z < a + w: P(x, y, z) is true.
Clearly, you could program a Turing machine to check whether this is true.
Now: Is the statement "No matter how big you make a, you can always find (q, v, w) (natural numbers) were said Turing machine will output 'True'" equivalent to the first statement?
Suppose it were. Because you can construct functions that map one integer to n integers, you could represent the tuple (q, v, w) as one natural number (call it p), and the Turing machine calculates
the (q, v, w) out of this natural number (call this Turing machine TM_P). Then you could rephrase the question:
For all a: There exists a p: TM_P(a, p) will output true.
Now, that would clearly imply a collapse of the Arithmetical hierarchy, as you only need some number of levels to make statements about Turing machines (I don't know the details here) and then two
additional levels to make the "For all a, there exists a p" claim.
Where does this reasoning fail? And does it fail, after all? If it does not, what am I missing, given that I have never read about this anywhere?
A reference for this is Rogers "Theory of Recursive functions and effective computability" or a similar introductory text, where these questions are treated in detail. – Andres Caicedo Dec 19 '10
at 18:14
1 Answer
You're right that the statement $\varphi(a,q,v,w)$ defined by $\forall x<a+q \,\, \exists y<a+v \,\, \forall z<a+w [P(x,y,z)]$ can be checked by a Turing machine. If I read you
correctly, you're wondering whether (1) $\forall x \exists y \forall z P(x,y,z)$ is generally equivalent to (2) $\forall a \exists q,v,w \varphi(a,q,v,w)$, because an affirmative
answer would conflict with the fact that the Kleene hierarchy doesn't collapse.
Happily, (1) and (2) aren't generally equivalent. Let $P(x,y,z)$ be the statement $x+z<y$ for instance. Then (2) is true: for any $a$, set $q=w=0$ and $v=a+1$, and the $y<a+v=2a+1$ that you need can always be witnessed by $2a$. But (1) is certainly false for this $P(x,y,z)$.
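The counterexample is finitistic enough to check by machine. A small brute-force sketch (the names are mine, not from the answer) verifying the chosen witnesses for statement (2) over a range of a:

```java
// Brute-force check of the counterexample P(x,y,z) := (x + z < y).
// phi(a,q,v,w) is the bounded statement from the question:
//   forall x < a+q, exists y < a+v, forall z < a+w : x + z < y.
class KleeneCounterexample {

    static boolean phi(int a, int q, int v, int w) {
        for (int x = 0; x < a + q; x++) {
            boolean witnessed = false;
            for (int y = 0; y < a + v && !witnessed; y++) {
                boolean allZ = true;
                for (int z = 0; z < a + w; z++) {
                    if (x + z >= y) { allZ = false; break; }
                }
                if (allZ) witnessed = true;
            }
            if (!witnessed) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Statement (2) holds: for every a, q = w = 0 and v = a + 1 work.
        for (int a = 0; a <= 25; a++) {
            if (!phi(a, 0, a + 1, 0)) throw new AssertionError("failed at a = " + a);
        }
        System.out.println("q = w = 0, v = a + 1 witnesses statement (2) for a = 0..25");
    }
}
```

Statement (1), by contrast, fails outright for this P: whatever y is proposed for a given x, taking z = y gives x + z >= y.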
Thanks for the answer! Can you prove that the hierarchy does not collapse? – nibbles Dec 19 '10 at 9:38
The undecidability of the halting problem separates $\Sigma^0_1$ from $\Delta^0_1$. The proof of that fact relativizes to any $n>1$. – Ed Dean Dec 19 '10 at 10:07
add comment
{"url":"http://mathoverflow.net/questions/49859/why-does-the-kleene-hierarchy-not-collapse?sort=newest","timestamp":"2014-04-17T13:03:45Z","content_type":null,"content_length":"53975","record_id":"<urn:uuid:e5f59f18-1083-4ff2-8e1a-ac6a6936c664>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Problem based on Newton's laws of Motion
Since the lower block is 10 kg and the upper block is just half its weight, the dimensions of the upper block will be half those of the lower block, so the length of the upper block is 5 m.
Now, since there is no friction, the lower block will have to move 10 m so that the upper block has completely fallen away.
now S = ut+1/2at^2
here u=0
so, 10 = (1/2)(5)t^2, which gives t^2 = 4 and t = 2 s | {"url":"http://www.physicsforums.com/showthread.php?t=132925","timestamp":"2014-04-19T17:39:50Z","content_type":null,"content_length":"28952","record_id":"<urn:uuid:8c5fe8fd-8abf-4d55-bb20-525d7ecdc388>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Classical vs Quantum Computation (Week 9)
Posted by John Baez
Here are the first of the Winter quarter notes on Classical versus Quantum Computation. This quarter we’ll start categorifying everything we did last time!
• Week 9 (Jan. 4) - Brief review of categories and computation: objects as data types, morphisms as equivalence classes of programs. Why we must categorify our work so far to see computation as a
process. The strategy of categorification. Categorifying the concept of ‘monoid’ to get the concept of ‘monoidal category’. Categorifying the concept of ‘category’ to get the concept of ‘(weak)
2-category’ (also known as ‘bicategory’).
Last week’s notes are here.
Posted at January 5, 2007 12:18 AM UTC | {"url":"http://golem.ph.utexas.edu/category/2007/01/classical_vs_quantum_computati_9.html","timestamp":"2014-04-18T01:27:32Z","content_type":null,"content_length":"11392","record_id":"<urn:uuid:0b27a051-8f17-4bdc-8fa2-fd9453de7fd8>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need help writing this program
June 2nd, 2009, 03:06 PM #1
Junior Member
Join Date
Jun 2009
Thanked 0 Times in 0 Posts
In general terms the algorithm for calculating a leap year is as follows...
A year will be a leap year if it is divisible by 4 but not by 100. If a year is divisible by 4 and by 100, it is not a leap year unless it is also divisible by 400.
Thus years such as 1996, 1992, 1988 and so on are leap years because they are divisible by 4 but not by 100. For century years, the 400 rule is important. Thus, century years 1900, 1800 and 1700
while all still divisible by 4 are also exactly divisible by 100. As they are not further divisible by 400, they are not leap years.
Write a program that asks the user how many years they want to check.
For that number of times the program accepts as data a year. The program will then determine if the year is a leap year or not. The program will keep doing this until the correct number of years
is checked.
Display the percentage of input years that are leap years, as well as the percentage that are not leap years. Confirm these add to 100.
Use a sentinel instead of asking how much data you have
using formatting as described in the text, make a table of your output. Real numbers should have 2 decimal places, and decimal points should line up
Can you put comments on each part of the code explaining what it does? Thank you if you can help me.
Hello kev2000, welcome to the forums.
Have you started any of this assignment yet? Please post what you have so far and we can guide you in the right direction.
Please use [highlight=Java] code [/highlight] tags when posting your code.
Forum Tip: Add to peoples reputation
Looking for a Java job? Visit - Java Programming Careers
I haven't started yet; I don't know where to start. It would be nice if you could help me understand.
Here is some code to help you get started.
It will read in the year submitted by the user in the console. Then it will work out if the year is divisible by 4. See if you can work out the rest...

import java.util.Scanner;

public class Kev2000 {

    /* JavaProgrammingForums.com */

    private static int year;
    private static boolean isLeapYear = false;

    public void divisibleBy4() {
        String myYear = Integer.toString(year);
        myYear = myYear.substring(2, 4); // note: assumes a 4-digit year
        int myYearInt = Integer.parseInt(myYear);

        for (int a = 0; a <= myYearInt; a++) {
            int calc = (4 * a);
            if (calc == myYearInt) {
                System.out.println(year + " is divisible by 4");
                // Next part of the calculations needed here
                isLeapYear = true; // tentative; the 100/400 rule still has to be applied
            }
        }
    }

    public static void main(String[] args) {
        Kev2000 k = new Kev2000();
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a year: ");
        year = sc.nextInt();
        k.divisibleBy4();
    }
}
(Quoting JavaPF's starter code from the post above.)
A quick suggestion: wouldn't it be simpler to use the mod function?

boolean divBy4 = myYearInt % 4 == 0;
A quick suggestion: wouldn't it be simpler to use the mod function?

boolean divBy4 = myYearInt % 4 == 0;
It would indeed be a lot simpler to use that function! Thank you Dalisra.
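For reference, the whole rule from the first post collapses into a single expression once the mod operator is used. A possible complete check (my sketch, not the assignment's official answer):

```java
// Complete leap-year test: divisible by 4, and not by 100 unless also by 400.
class LeapYear {

    static boolean isLeapYear(int year) {
        return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
    }

    public static void main(String[] args) {
        int[] sample = {1996, 1992, 1988, 1900, 1800, 1700, 2000};
        int leap = 0;
        for (int y : sample) {
            boolean isLeap = isLeapYear(y);
            System.out.println(y + (isLeap ? " is a leap year" : " is not a leap year"));
            if (isLeap) leap++;
        }
        // Percentages, formatted to 2 decimal places as the assignment asks.
        double pctLeap = 100.0 * leap / sample.length;
        System.out.printf("Leap: %.2f%%  Not leap: %.2f%%%n", pctLeap, 100.0 - pctLeap);
    }
}
```

On the sample years from the assignment, 1996, 1992, and 1988 come out as leap years, 1900, 1800, and 1700 do not, and 2000 does, matching the 400 rule.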
Discretize gamma distribution
March 1st 2009, 03:27 AM #1
I have a hard time discretizing the gamma distribution.
The travel time from A to B, denoted as X, follows a continuous gamma distribution. I want to discretize the CDF of X into λ intervals with equal cumulative probability, for example dividing the CDF into 20 intervals (0, 0.05, 0.1, ..., 0.95, 1). A mean value xi needs to be calculated in each interval, with P(X = xi) = ω, i = 1, 2, ..., λ. My problem is how to calculate xi for each interval.
Denote a set of intervals to be (iω-ω, iω), i=1,2,…, λ, where ω is the cumulative probability of an interval, 0<ω<1 and λω=1.
According to the Mean-Value Theorem, within each interval (iω−ω, iω) there is at least one point xi that satisfies:
f(xi) = ω / {v(iω) − v(iω−ω)}
where f(xi) is the PDF of X at the point xi, and v(iω−ω) and v(iω) are the inverse of the CDF at the boundary points of the interval.
(1) How do I invert the CDF at iω?
(2) How do I invert the PDF at f(xi)?
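For question (1): the gamma CDF generally has no closed-form inverse, so the quantiles v(iω) are computed numerically (statistics packages expose this as a quantile function). Below is a hedged Java sketch for the special case shape k = 2, scale θ = 1, an Erlang(2) distribution whose CDF F(x) = 1 − (1 + x)e^(−x) is available in closed form; the class name and parameter choices are assumptions for illustration, and a general shape would need an incomplete-gamma routine. Once the boundaries v(iω) are known, f(xi) = ω/{v(iω) − v(iω−ω)} gives the density value whose preimage (question 2) can be found by a similar one-dimensional root search on each monotone branch of the PDF.

```java
public class GammaDiscretize {
    // CDF of a Gamma distribution with shape k = 2, scale theta = 1
    // (an Erlang(2) distribution): F(x) = 1 - (1 + x) * exp(-x).
    static double cdf(double x) {
        return 1.0 - (1.0 + x) * Math.exp(-x);
    }

    // Invert F numerically by bisection: find x with F(x) = p, 0 < p < 1.
    static double inverseCdf(double p) {
        double lo = 0.0, hi = 1.0;
        while (cdf(hi) < p) hi *= 2.0;      // bracket the quantile
        for (int i = 0; i < 100; i++) {
            double mid = 0.5 * (lo + hi);
            if (cdf(mid) < p) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        int lambda = 20;                    // number of intervals
        double omega = 1.0 / lambda;        // probability mass per interval
        for (int i = 1; i <= lambda - 1; i++) {
            System.out.printf("v(%.2f) = %.4f%n", i * omega, inverseCdf(i * omega));
        }
    }
}
```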
The Resistor - How Voltage Drops
Resistors are among the basic passive elements of any electric circuit. Find out why resistors are so important, how we may describe and characterize them, and what materials they are made of!
A resistor is a passive element of an electric circuit that resists the flow of electrical current. If a certain potential difference \(U\) is applied, a current \(I\) emerges following Ohm's law,\
[I = \frac{U}{R}\ ,\]which can be seen as the definition of an ideal resistor. The unit of the resistance is \(\left[R\right]=1\Omega=1V/A\) named after Georg Simon Ohm (1789 - 1854).
In the following we will get to know resistors a little better. We will outline resistor applications, a somewhat rigorous electromagnetic description of resistors, how resistors are characterized, and what materials are used for their construction. We wish you an interesting read! However, if you think something is urgently missing, please let us know! Worksheets on electrical circuits can be found here.
Before we get into details on how we can understand a resistor, let us briefly look into its applications. There are many more than one thinks of at first:
Most common to us is the supply of a specific current or voltage for other electrical components and the division of voltage and currents in a needed proportion with serial and parallel resistors.
A so-called pull-up resistor is used to connect a signal line, e.g. of a logic circuit, to a much higher voltage. Such resistors have a very high resistance in the kΩ range and ensure that no output with a certain impedance can influence the potential of the signal line itself. Pull-down resistors are used to connect a signal line to ground, holding the line at approximately zero potential if no actual signal is present. These resistors are also highly ohmic to prevent leakage currents.
Interesting applications can be found if one considers the change of a resistor's resistance when some physical quantity changes. Then one can build sensors to measure, e.g., the temperature, luminance or pressure.
Last but not least, resistors are also used to transform electrical energy into heat, as in soldering irons or heaters. Light bulbs work likewise: a heated metal filament emits electromagnetic radiation according to Planck's law.
Resistance - Electromagnetic Description
Resistors can be understood from a phenomenological point of view introducing current and voltage, as it is done in schools. Here, we want to use our well-known field equations and Ohm's law to find the well-known expressions for the ohmic resistance, the specific resistance and the dissipated power.
What is Ohmic Resistance?
The starting point to describe resistors can be the electrostatic potential \(\phi\left(\mathbf{r}\right)\). Since the electric field is related to the potential by \(\mathbf{E}\left(\mathbf{r}\right)
=-\nabla\phi\left(\mathbf{r}\right)\), we know that a potential difference \(U\) can be calculated via\[U_{12} = \phi\left(\mathbf{r}_{1}\right)-\phi\left(\mathbf{r}_{2}\right)=-\int_{2}^{1}\
mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{r}\ .\]From \(\nabla\times\mathbf{E}\left(\mathbf{r}\right)=0\) we also know that this integral is independent of the actual path. Let us assume that we
are in a conducting material, where the current density \(\mathbf{j}\) is given due to the microscopic Ohm's law \(\mathbf{j}\left(\mathbf{r}\right)=\sigma\mathbf{E}\left(\mathbf{r}\right)\). Then
the total current is given by\[\begin{eqnarray*} I_{12}&=&\int_{A}\mathbf{j}\left(\mathbf{r}\right)\cdot d\mathbf{A}\\&=&\int_{A}\sigma\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{A}\ , \end
{eqnarray*}\]where the area \(A\) can be anywhere separating the points 1 and 2 but must contain all of \(\mathbf{j}\left(\mathbf{r}\right)\). For now, we just have that the resistance \(R\) using
Ohm's law is given by\[R_{12} = \frac{U_{12}}{I_{12}}=\frac{\int_{1}^{2}\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{r}}{\int_{A}\sigma\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{A}}\ .
\]The latter formula is the most general definition of ohmic resistance for a constant conductivity \(\sigma\). Let us specify the resistance for resistors with an electric field that is constant
along the resistor. This is for example in a very good approximation the case for resistors that have flat metallic terminations which have, by definition, a constant potential:\[\begin{eqnarray*} U_
{12}&=&\int_{1}^{2}\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{r}=l\, E\ \text{and}\\I_{12}&=&\int_{A}\sigma\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{A}=\sigma E\, A\ . \end{eqnarray*}\]
Then we find the relation of the resistance of the resistor to its conductivity:\[R_{12} = \frac{U_{12}}{I_{12}}=\frac{l}{\sigma A}\equiv\rho\frac{l}{A}\]with the specific resistance \(\rho\).
In the end, two characteristics determine the resistance of a resistor: the used material with a given specific resistance and the actual resistor geometry, here length and cross section.
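As a quick numerical illustration of R = ρ l / A — the wire dimensions below are assumptions for the example, with copper's specific resistance taken from the conductor list later in this article:

```java
public class WireResistance {
    public static void main(String[] args) {
        // Copper wire, 10 m long, 1 mm^2 cross section (example values).
        double rho = 1.6e-8;                  // specific resistance in Ohm*m
        double length = 10.0;                 // in m
        double area = 1.0e-6;                 // 1 mm^2 in m^2
        double resistance = rho * length / area;
        System.out.printf("R = %.4f Ohm%n", resistance);   // prints R = 0.1600 Ohm
    }
}
```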
Power Dissipation
At this point we want to briefly outline how we arrive at the power dissipated by a resistor, the famous \(P=R\, I^{2}\). Strictly, we would have to use Poynting's theorem and derive the loss term \(\mathbf{j}\left(\mathbf{r}\right)\cdot\mathbf{E}\left(\mathbf{r}\right)\) directly from there. Such an approach requires knowledge of the full Maxwell equations. Let us find the dissipation here in a more intuitive way.
Let us start with the infinitesimal work needed to move a charge q a certain distance in an electric field,\[dW = q\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{r}\ .\]Per time interval, we find for this work\[\frac{dW}{dt} = q\mathbf{E}\left(\mathbf{r}\right)\cdot\mathbf{v}\left(\mathbf{r}\right)\equiv P\]with the velocity \(\mathbf{v}\left(\mathbf{r}\right)=d\mathbf{r}/dt\) and
the electric power \(P=dW/dt\). If we go now to a continuous charge distribution, \(q\rightarrow\rho\left(\mathbf{r}\right)\) (not the specific resistance here!), and integrate over the resistor, we
find\[P = \int\rho\left(\mathbf{r}\right)\mathbf{E}\left(\mathbf{r}\right)\cdot\mathbf{v}\left(\mathbf{r}\right)dV=\int\mathbf{E}\left(\mathbf{r}\right)\cdot\mathbf{j}\left(\mathbf{r}\right)dV
\]with the current density \(\mathbf{j}\left(\mathbf{r}\right)\). Now, we can define the current as the surface integral over the current density, \(I=\int_{A}\mathbf{j}\left(\mathbf{r}\right)d\
mathbf{A}\). Furthermore, if the current density is constant over the cross section of the resistor, we can express the dissipated power as an integral over the electric field and recover a
well-known expression:\[P = I\int\mathbf{E}\left(\mathbf{r}\right)\cdot d\mathbf{r}\equiv I\, U\ .\]Ohm's law allows us now to reformulate the power in other familiar expressions:\[P = U
\, I=R\, I^{2}=\frac{1}{R}U^{2}\ .\]The dissipated power is given in units of Watts, \(\left[P\right]=1W=1VA=1J/s\).
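The equivalent forms P = U I = R I² = U²/R can be checked against one another numerically; a small consistency sketch with assumed values:

```java
public class PowerDissipation {
    public static void main(String[] args) {
        double R = 100.0;          // resistance in Ohm (assumed)
        double U = 12.0;           // applied voltage in V (assumed)
        double I = U / R;          // Ohm's law: 0.12 A

        double p1 = U * I;         // P = U * I
        double p2 = R * I * I;     // P = R * I^2
        double p3 = U * U / R;     // P = U^2 / R
        System.out.printf("P = %.3f W = %.3f W = %.3f W%n", p1, p2, p3);  // all 1.440 W
    }
}
```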
Now that we have learned how to boil down Maxwell's equations to get well-known expressions, let us discuss the common ways to characterize a resistor.
Resistor Characterization
In addition to material and geometry properties, there exists a plethora of different categories to distinguish resistors. For instance, there are resistors with a fixed value and ones with a variable value, or ones for which the material's temperature dependency can be parametrized. A temperature characterization is often done in terms of the temperature coefficient \(\alpha=\rho\left(T_{0}\right)^{-1}\, d\rho\left(T_{0}\right)/dT\), assuming a linear dependency such that \(\rho\left(T\right)=\rho\left(T_{0}\right)\left\{ 1+\alpha\left(T-T_{0}\right)\right\}\). Now let us come to the main characterization methods.
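A short sketch of the linear parametrization \(\rho\left(T\right)=\rho\left(T_{0}\right)\left\{ 1+\alpha\left(T-T_{0}\right)\right\}\); the copper values below are typical literature numbers assumed for the example, not taken from this article:

```java
public class TempCoefficient {
    // Linear model rho(T) = rho0 * (1 + alpha * (T - T0)).
    static double rho(double rho0, double alpha, double t0, double t) {
        return rho0 * (1.0 + alpha * (t - t0));
    }

    public static void main(String[] args) {
        double rho0 = 1.6e-8;     // Ohm*m at T0 (copper, assumed)
        double alpha = 3.9e-3;    // 1/K (typical value for copper)
        double t0 = 293.0;        // K (room temperature)
        System.out.printf("rho(373 K) = %.3e Ohm*m%n", rho(rho0, alpha, t0, 373.0));
    }
}
```

At 373 K the resistivity grows by roughly 30% — the reason metal-wire resistance thermometers work.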
By Material
Resistors are made of different materials with largely varying properties. Even though the boundaries are somewhat continuous, one may find four different types of resistor depending on their
specific resistance:
• Insulators are ideally non-conductive materials and have an extremely high specific resistance. The most prominent example is amber with \(\rho_{\text{amber}}\approx10^{22}\,\Omega\text{m}\). Roughly any material with \(\rho\geq10^{8}\,\Omega\text{m}\) is considered an insulator.
• Semiconductors are materials where electrons need a certain amount of energy to be able to flow. This energy depends on the temperature of the semiconductor, \(E_\mathrm{e}\propto k_B T\). Hence, such materials often cannot be described using a simple temperature-coefficient approach. Characteristic values for semiconductor resistances are in the order of \(\rho\approx1\dots10^{4}\,\Omega\text{m}\). Undoped silicon, for example, has \(\rho_{\text{Si}}\approx4\cdot10^{3}\,\Omega\text{m}\) at room temperature.
• Any material below \(\rho=10^{-3}\,\Omega\text{m}\) is a conductor. Copper, for instance, has a specific resistance of \(1.6\cdot10^{-8}\,\Omega\text{m}\). Here, it is of course better to speak of the conductivity \(\sigma=1/\rho\). Its unit is \(1S=1\Omega^{-1}\), named after Werner von Siemens (1816 - 1892).
• The last fascinating category are superconductors which do not have any resistance by definition, \(\rho\equiv0\) and we may not use Ohm's law to describe them.
In principle, any material category, except the superconductors, may be used for a resistor since not only the specific resistance but also geometrical parameters influence its overall resistance.
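The rough specific-resistance boundaries above condense into a small classifier; the thresholds are those quoted in the list, and labeling the in-between ranges "intermediate" is an assumption of this sketch:

```java
public class MaterialClass {
    // Thresholds taken from the categories above (in Ohm*m); the ranges
    // in between are labeled "intermediate" as a simplification.
    static String classify(double rho) {
        if (rho >= 1e8) return "insulator";
        if (rho >= 1.0 && rho <= 1e4) return "semiconductor";
        if (rho < 1e-3) return "conductor";
        return "intermediate";
    }

    public static void main(String[] args) {
        System.out.println("amber:   " + classify(1e22));
        System.out.println("silicon: " + classify(4e3));
        System.out.println("copper:  " + classify(1.6e-8));
    }
}
```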
By Geometry
Most resistors have a non-changing cross-section in common. This is useful since it implies that the whole resistor can be characterized only by its length and cross-section area, as we have seen. Resistors for through-hole mounting are often of cylindrical cross-section. A surface-mounted resistor, on the other hand, most likely has a squared one, see the figure on the right. The form is dictated by the cost efficiency of the resistor and final device fabrication processes/requirements.
For all forms, we can sub-divide resistors into those using films, a bulk material or windings. A film here refers to a thin layer, e.g. of carbon, wrapped around some insulator, whereas the bulk material has a much thicker cross-section. The winding is basically a way to reduce the space requirement of a bulk-material resistor such as a metallic wire, which is often used. The different forms just minimize or maximize the cross-section area or length of a resistor - a very effective way to control its properties by design!
Resistor Materials
For varying purposes, different materials can be found in resistors. Most frequently, carbon and metal/metal oxide films are used. The choice of the material hugely depends on the maximum power consumption and, of course, material cost. Let us outline the most common materials here.
For a power consumption lower than ~5 W, carbon is the material of choice. It is cheap and relatively easy to process in industrial fabrication - carbon resistors can be printed directly onto circuit boards, mounted on or surrounded by a ceramic isolation. We can find such kinds of resistors for example on any motherboard, e.g. surrounding more complex chips. Note that small capacitors look very similar. In most cases, however, we may distinguish the two by a label, like “C64” and “R92” for the 64th capacitor and 92nd resistor, as you can see again in the figure above.
For higher power consumptions, metal film or metal oxide resistors are commonly used, like the ones you use for home-made circuits. Aluminium oxide, Al₂O₃, is the most common material here for two reasons. It has an enormous specific resistance of \(\rho_{\text{Al}_{2}\text{O}_{3}}\approx10^{12}\,\Omega\text{m}\) and is contained in a lot of minerals which add up to 15% of the earth's crust!
High-temperature resistors are usually made of sophisticated composite materials containing ceramic and metal - so-called cermets.
Very interesting devices we already outlined are photoresistors: if we illuminate such a device, electrons are excited into the conduction band by the inner photoelectric effect. This lowers the resistance of photoresistors in a very deterministic way. Such devices can be used as very accurate light detectors. The most used materials for visible light are the semiconductors cadmium sulfide, CdS, and cadmium selenide, CdSe.
As for all industrial devices, producing a resistor is a science in itself involving physical, chemical and sometimes even biological processes.
If you liked this article, maybe you would like to read more, e.g. about the disadvantages of solar energy. Thank you for reading!
sites that show real applications of math
Excellent! Will re-blog it:)
free content on a variety of subjects; they also have an iOS app to access the content.
"How technology can help develop academic language for Common Core math success"
interactive games and exercises in a wide variety of topics, many free.
I was surprised. I learned things from this article. It can be projected in class to teach students also.
Sarah Stamos on 20 Jul 12
Conduct a college, university, or career search at Search By Degree. Find the college, university, or career information you need to accomplish your goals.
Subgroups of a given group
I'm a newbie to the site who has just started a PhD in group theory. I wonder if anyone could help me with a couple of (probably very basic) things.
1. I need to produce a Hasse diagram for the subgroups of a given group containing a given Sylow subgroup of the group. I can use the command Subgroups(G:OrderMultipleOf:=??) to obtain all subgroups of a group G which contain a Sylow subgroup; however, is there a command I can use so that, for a given group G and a Sylow subgroup S, I can produce all subgroups of G containing S?
2. As I'm very new to MAGMA, does anyone know of any good books, publications or websites that help someone use MAGMA for group-theoretical purposes?
Re: Subgroups of a given group
I think you may have the wrong MAGMA group - try this site magma.maths.usyd.edu.au
This one is about matrix algebra on general purpose graphics processors.
Re: Subgroups of a given group
Sorry to have bothered everyone! | {"url":"http://icl.cs.utk.edu/magma/forum/viewtopic.php?p=1146","timestamp":"2014-04-19T00:07:07Z","content_type":null,"content_length":"16140","record_id":"<urn:uuid:5defba4e-c9c6-48a6-b749-069c49b0472d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
The tool to generate Cartesian X/Y-plots
Platform : Windows All
Price : USD $29.9
File Size : 5.17 MB
Date Added : 10/30/2003
Rt-Plot is a tool to generate Cartesian X/Y-plots from scientific data. You can enter and calculate tabular data and view the changing graphs, including linear and non-linear regression, interpolation, differentiation and integration, while entering. Rt-Plot enables you to create plots fast and easily. The options can be changed interactively. A powerful reporting module generates ready-to-publish reports.