| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
Math Forum Discussions
- User Profile for: JohnAdre
UserID: 873609
Name: JohnAdre
Registered: 3/21/13
Location: Topeka, KS
Total Posts: 7
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=873609","timestamp":"2014-04-17T13:15:47Z","content_type":null,"content_length":"12807","record_id":"<urn:uuid:b119f13d-aff0-4032-b747-2d5e12e25452>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CRC32 doesn't work like on Wikipedia
Apr 29th, 2012, 04:07 PM #1
Thread Starter
Hyperactive Member
Join Date
Oct 2008
CRC32 doesn't work like on Wikipedia
I just tried it, and it doesn't work: it doesn't give the same results as actual software that implements CRC32.
In general, Wikipedia shows CRCs working like this. In this example they use the simple 4-bit divisor 1011.
11010011101100 000 <--- input left padded by 3 bits
1011 <--- divisor
01100011101100 000 <--- result
1011 <--- divisor ...
00000000000000 100 <---remainder (3 bits)
So this shows that the most significant bit is on the left, and the divisor is moved right to line up with the next "1" bit, where an XOR operation is performed. This process is repeated until all the data bits have been set to 0. The 3 padding bits appended on the right at the beginning then hold the remainder, which is the 3-bit CRC.
However, as simple as this looks, it doesn't actually work when scaled up to CRC32. For CRC32, I simply appended 32 zero bits on the right and used the polynomial shown on Wikipedia, which is:
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
This corresponds to the binary number 100000100110000010001110110110111.
I used this number as the divisor for the CRC operation, because (according to Wikipedia) that is the divisor for CRC32. My implementation proceeded exactly as Wikipedia's 4-bit divisor (CRC-3) example did, just with a 33-bit divisor (CRC32) instead. However, the results do not match those of a known working CRC32 implementation in 2 other pieces of software I tested (EasyHash and the HxD hex editor). The CRC32 implementations in both of those programs agree with each other, which suggests that my implementation (and Wikipedia's example) is flawed.
Below are the results of what I found, using the simplest possible input: 1 byte with all bits set (11111111), which is the extended-ASCII text character ÿ.
Here is a screenshot showing what my program outputs (it displays a log of all the intermediate steps of the CRC32 calculation, including the data and the position of the divisor at each step).
As you can see from the rightmost 32 bits (the CRC32) in the final stage of the calculation, the bits are 11111111000000000000000000000000.
Now the output of EasyHash for the same input (which I have verified with the HxD hex editor, which also implements CRC32 calculation) is B1F740B4.
This hexadecimal number corresponds to the binary number 10110001111101110100000010110100.
And of course 10110001111101110100000010110100 is in NO WAY equal to 11111111000000000000000000000000. In fact, it is impossible to convert one into the other using operations like bit inversion (NOT-ing the data) or even bit-order reversal. These two outputs look NOTHING alike.
What is wrong with the implementation here? I need some hints on where I (and apparently Wikipedia as well) am going wrong. It looked so simple from the Wikipedia article, but it really doesn't work that way. Any info on the correct way to calculate CRC32 would be REALLY helpful.
Re: CRC32 doesn't work like on Wikipedia
Have you looked at Calculating CRC32 With VB yet?
May 6th, 2012, 08:46 AM #2
Join Date
Feb 2006
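For reference: the CRC-32 variant that tools like HxD and zip utilities compute differs from Wikipedia's plain long-division example in three ways. The 32-bit register starts at 0xFFFFFFFF rather than 0; the message bits are processed in reflected (LSB-first) order, which is why the polynomial appears bit-reversed as 0xEDB88320; and the final remainder is XORed with 0xFFFFFFFF. A minimal Python sketch of that reflected algorithm (checked against zlib, which implements the same standard CRC-32):

```python
import zlib

def crc32_reflected(data: bytes) -> int:
    """Bit-at-a-time reflected CRC-32 (init 0xFFFFFFFF, final XOR 0xFFFFFFFF)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte                      # fold the next byte into the register
        for _ in range(8):
            if crc & 1:                  # LSB-first: divide from the low end
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

assert crc32_reflected(b"123456789") == 0xCBF43926   # standard CRC-32 check value
assert crc32_reflected(b"\xff") == zlib.crc32(b"\xff")
print(hex(crc32_reflected(b"\xff")))                 # 0xff000000
```

Note that for the single byte 0xFF the reflected algorithm happens to return 0xFF000000, so a mismatch against a hashing tool can also mean the tool hashed a different byte sequence than expected (for example, a multi-byte text encoding of the character ÿ).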
|
{"url":"http://www.vbforums.com/showthread.php?678480-CRC32-doenst-work-like-on-Wikipedia&p=4165766","timestamp":"2014-04-20T05:48:09Z","content_type":null,"content_length":"67114","record_id":"<urn:uuid:7bbbae35-17fb-44bb-ba0e-1a3bc9d08bba>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
BT Easy Math Reference Guide: Help with Math Homework
BT Easy Math Reference
Price: $17.00
Math Homework Help is Here:
The Ultimate Cheat Sheet for Any Math Assignment…Even Helps Those With Dyslexia, Dyscalculia & ADHD
• 16 pages packed with all the math your child or student needs to know, 1st – 8th grade!
• GREAT as a refresher for all parents
• Step-by-step how to solve math homework problems.
The BT Easy Math Reference Guide Includes:
• Addition
• Subtraction
• Multiplication
• Division
• Fractions
• Decimals
• Percents
• Word Problem Strategies
• Math Vocabulary and much, much, more!
• And, it fits in your binder!
Demolish the Roadblocks to Math Homework and Give Your Child the Tools to be Successful with Math
Turn your child into a competent and successful math student even if they have a learning disability, dyslexia, dyscalculia, or ADHD!
How many times has your child come home with a math homework assignment and begun working on it right away? If your child is anything like my children were, they come home and avoid it, put it off,
and then, when they finally get down to doing it, need your help.
That’s how my children and my students were. They were great at saying, “It made perfect sense when she (my teacher) explained it in class; I just can’t remember how to do it now that I’m home (or
here – at the learning center).”
And, how often are you able to explain the math assignment? Or do you find that you can't remember how to do that type of problem because you haven't had to deal with fractions, decimals, or percents
since you were in school?
The challenge then becomes one of trying your darnedest to figure out how to do the problem and then explain to your child how to do the problem. You look through their textbook (if they’ve brought
it home), and even then sometimes you become confused.
You know your child needs to not only do his math assignment but also understand it.
You know math can be a challenge and if your child doesn’t understand basic math, they will become lost. In fact, if children don’t have a mathematical foundation, the further they go, the more lost
they get.
Most children have gaps in their understanding of calculations with: fractions, decimals, percents, or word problems. Without these basics, once they’re in Algebra and beyond, they tend to feel
hopelessly lost.
I wanted to be sure my children and students would be able to understand the math concepts and be successful with their math assignments. I also wanted them to have something to refer to if I wasn’t
there to offer guidance to them.
So, I decided to create the same type of reference guide for their math homework assignments that I did to help them with their writing assignments (the Writer’s Easy Reference Guide). This time I
would address the roadblocks to math success. The math homework reference guide would be filled with all of the basics of elementary school math. The BT Easy Math Reference Guide would bridge the gap
between the theoretical and the practical. It can also help you become your child’s Math Guru and be so easy to use that your child will even be able to use it on their own.
Math Homework Help: Did you know that in math assignments that contain word problems there are clue words in every word problem?
And if you understand what the clue words are telling you to do, you will be able to solve any word problem. That is why I explain what the clue words are telling you and give you examples of
just about every type of word problem there is. That way your child can just match their math problem up with the example and see how to do it.
Imagine the day when your child can accurately:
• Add, subtract, multiply, divide
• Calculate fractions, decimals, percents
• Solve word problems
• Calculate area, perimeter, and ratio with ease.
• Recognize and do calculations with geometric shapes, lines, and angles.
I must admit, I got a bit carried away when I was developing the BT Easy Math Reference Guide, so it is 16 pages packed with all the math your child needs to know for 1st through 8th grade math! It’s
GREAT as a refresher for all parents trying to remember how to help their child solve their homework problem.
Homeschoolers Say…
“The BT Easy Math Reference Guide is a mother lode of math help. If you’re one of those “math-phobic” moms, it could be a huge help with teaching your kids elementary math and getting them ready with
everything they need to know before tackling algebra.
Starting with addition and subtraction, this 16-page guide shows step by step how to solve all elementary math calculations right up to fractions and decimals. Also covered are all those “other” math
topics, such as word problems, bar and circle graph, rounding and estimating, place value, geometry, measurements, money, and averaging.
You’ll learn the how, the why, and the memory tricks to help your student remember. Simple, clear examples, all in a sturdy, 3-hole-punched format you can slip into a binder. It’s a great supplement
to any math program, for both parent and student.”
Don’t spend another evening frustrated, trying to help your child with their math homework assignments! Become your child’s math hero; empower your child, and ENJOY your evenings!
BT Easy Math Reference Guide
Price: $17.00
As with all Bonnie Terry Learning Products, BT Easy Math Reference Guide comes with a 60 Day Money Back No Questions Asked Guarantee.
Related Products
Find Similar Products by Category
|
{"url":"http://www.bonnieterrylearning.com/products/bt-easy-math-reference-guide/","timestamp":"2014-04-21T09:45:59Z","content_type":null,"content_length":"26814","record_id":"<urn:uuid:da8f901a-7da3-424a-8fd8-4d377fe97296>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
physical equilibria
The vapor pressure of HN3 is 58 Torr at -22.75 C and 512 Torr at 25C. Find the standard enthalpy of vaporization, standard entropy of vaporization, the standard free energy of vaporization.
I know how to find the first two, but for some reason I can't find the third one... Is the equation for it ln P = -ΔG/(RT)? If so, what unit is P in, which P should I use, is R 8.3145, and what
temperature do I use?
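One common textbook route (a sketch, not a definitive answer; it assumes ideal-gas behavior and a standard state of P° = 1 atm = 760 Torr, so the measured P must be in Torr too, with T = 298.15 K and R = 8.3145 J/(mol·K)):

```python
import math

R = 8.3145  # J/(mol*K)

# The two vapor-pressure measurements from the problem:
T1, P1 = 273.15 - 22.75, 58.0   # K, Torr
T2, P2 = 273.15 + 25.0, 512.0   # K, Torr

# Clausius-Clapeyron: ln(P2/P1) = (dH/R) * (1/T1 - 1/T2)
dH = R * math.log(P2 / P1) / (1.0 / T1 - 1.0 / T2)   # J/mol

# Treating vaporization as an equilibrium with K = P / P_std,
# P_std = 1 atm = 760 Torr, gives dG = -R*T*ln(P/760) at 298.15 K.
# (Any pressure unit works as long as P and P_std use the same one.)
dG = -R * T2 * math.log(P2 / 760.0)                  # J/mol

# dG = dH - T*dS  =>  dS = (dH - dG) / T
dS = (dH - dG) / T2                                  # J/(mol*K)

print(f"dH_vap = {dH / 1000:.1f} kJ/mol")   # about 28.3
print(f"dG_vap = {dG / 1000:.2f} kJ/mol")   # about 0.98
print(f"dS_vap = {dS:.1f} J/(mol*K)")       # about 91.7
```

So the pressure ratio makes the unit cancel in the Clausius-Clapeyron step, and the free-energy step uses whichever single pressure (here 512 Torr at 298.15 K) corresponds to the temperature you evaluate ΔG° at.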
|
{"url":"http://www.physicsforums.com/showthread.php?t=70346","timestamp":"2014-04-16T19:02:27Z","content_type":null,"content_length":"21656","record_id":"<urn:uuid:755a1130-b7ff-4eed-8995-0e4e1e7f1e1e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interference of Sinusoidal Waveforms
Author: Konstantin Lukin
Project Java Webmaster: Glenn A. Richard
Mineral Physics Institute
SUNY Stony Brook
How to Use this Applet
For slower machines, you may want to try the fast version of this applet.
A three-wave version of the applet is also available.
A bidirectional version of the applet is also available.
The two green curves are parallel sinusoidal waveforms that have identical wavelengths, amplitudes, and phases when the applet initializes. The blue sinusoidal waveform at the bottom is the sum of
the two green parallel waveforms. You can change the phase of the green sinusoidal waveforms by dragging the circles at the left end of the waveform. You can change the wavelength and the amplitude
by dragging the other two circles. The sliders to the right of the waveforms offer alternative means of making similar changes.
To animate the waveforms, click on the animate checkbox, and to stop the animation, click again. In order to change the speed of the animation of the two green waveforms, you can use the sliders on
the lower right, but you must halt the animation in order to adjust the speed. Once your speeds are selected, start the animation again. The speeds are actually phase velocity. In other words, when
the two green waveforms animate at equal speeds, each one will advance by an equal number of wavelengths during a given amount of time. Therefore, if they are set to the same speed, but different
wavelengths, the waveform with the longer wavelength will advance faster than the other one.
Mathematical Concepts
• Sinusoidal waveforms
• Waveform amplitude
• Wavelength
• Waveform summation
• Phase relationships
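The waveform summation the applet displays can be reproduced numerically. A small sketch (not part of the applet's Java source) showing constructive and destructive interference between two identical sinusoids:

```python
import math

def wave(amplitude, wavelength, phase, x):
    """One sinusoidal waveform: y = A * sin(2*pi*x / wavelength + phase)."""
    return amplitude * math.sin(2 * math.pi * x / wavelength + phase)

def summed(amplitude, wavelength, phase_diff, x):
    """The blue curve: sum of two identical green waves offset by phase_diff."""
    return (wave(amplitude, wavelength, 0.0, x)
            + wave(amplitude, wavelength, phase_diff, x))

def peak(phase_diff, amplitude=1.0, wavelength=1.0, n=10000):
    """Peak magnitude of the summed wave, sampled over one wavelength."""
    return max(abs(summed(amplitude, wavelength, phase_diff, i * wavelength / n))
               for i in range(n))

print(round(peak(0.0), 3))       # in phase: amplitudes add, peak ~ 2.0
print(round(peak(math.pi), 3))   # half-wave offset: cancellation, peak ~ 0.0
```

This matches the identity sin(θ) + sin(θ + φ) = 2 cos(φ/2) sin(θ + φ/2): two equal-amplitude waves sum to a wave of amplitude 2A|cos(φ/2)|.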
Source Code
Zip File
Last modified on June 25, 2007
[More Applets: Project Java Home] . [Wave Interaction - fast version] . [Wave Interaction - bidirectional version]
|
{"url":"http://www.eserc.stonybrook.edu/ProjectJava/WaveInt/index.html","timestamp":"2014-04-20T18:51:48Z","content_type":null,"content_length":"3464","record_id":"<urn:uuid:6189d2b3-5b1b-4c88-b101-bf785e9ea62a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimation of multiple-regime regressions with least absolutes deviation
Bai, Jushan (1995): Estimation of multiple-regime regressions with least absolutes deviation. Published in: Journal of Statistical Planning and Inference, Vol. 74, No. 1 (October 1998): pp. 103-134.
This paper considers least absolute deviations estimation of a regression model with multiple change points occurring at unknown times. Some asymptotic results, including rates of convergence and
asymptotic distributions, for the estimated change points and the estimated regression coefficient are derived. Results are obtained without assuming that each regime spans a positive fraction of the
sample size. In addition, the number of change points is allowed to grow as the sample size increases. Estimation of the number of change points is also considered. A feasible computational algorithm
is developed. An application is also given, along with some Monte Carlo simulations.
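The least absolute deviations idea in the abstract can be illustrated with a toy one-break, mean-shift version (a sketch only; the paper's setting covers multiple breaks and regression covariates):

```python
from statistics import median

def lad_change_point(y):
    """Toy least-absolute-deviations break-point estimator.

    Simplest one-break mean-shift case: for each candidate break k,
    fit each regime by its median (the LAD estimate of a constant)
    and pick the k minimizing the total absolute deviation.
    """
    best_k, best_cost = None, float("inf")
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        m1, m2 = median(left), median(right)
        cost = (sum(abs(v - m1) for v in left)
                + sum(abs(v - m2) for v in right))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# A level shift from 1 to 5 after the 20th observation:
print(lad_change_point([1.0] * 20 + [5.0] * 20))  # 20
```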
Item Type: MPRA Paper
Original Title: Estimation of multiple-regime regressions with least absolutes deviation
Language: English
Keywords: Multiple change points, multiple-regime regressions, least absolute deviation, asymptotic distribution
Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C13 - Estimation: General
C - Mathematical and Quantitative Methods > C2 - Single Equation Models; Single Variables > C21 - Cross-Sectional Models; Spatial Models; Treatment Effect Models; Quantile Regressions
Item ID: 32916
Depositing User: Jushan Bai
Date Deposited: 20. Aug 2011 16:53
Last Modified: 13. Feb 2013 16:57
URI: http://mpra.ub.uni-muenchen.de/id/eprint/32916
|
{"url":"http://mpra.ub.uni-muenchen.de/32916/","timestamp":"2014-04-17T00:52:45Z","content_type":null,"content_length":"26107","record_id":"<urn:uuid:0f7088bc-d925-4821-b613-721cb8225229>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical Signal and Array Processing
New Research Directions, p. 1
Leon Cohen, Scale Frequency and Time and the Scale, p. 8
Qu Jin, K. M. Wong, Z. Q., Joint Time Delay and Doppler Stretch Estimation, p. 1821
66 other sections not shown
Bibliographic information
|
{"url":"http://books.google.com/books?id=TeFVAAAAMAAJ&q=assumed&dq=related:ISBN0780362942&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-18T03:39:51Z","content_type":null,"content_length":"103529","record_id":"<urn:uuid:6a5003a8-fa07-4555-a54a-fbe59892c6a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pikesville Trigonometry Tutors
...I have tutored remedial math (algebra) at a community college for over a year now. I find most students struggling with algebra need a lot of practice. Algebra is a most basic necessity to do
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...Upon retiring I began a second career as a substitute teacher in the Baltimore County school system. Much of my work over the last 5-6 years has been at Campfield Early Learning Center where
students are Pre-School, Pre-K and K so I'm very comfortable working with very young children. I looked through a Geometry book the other day.
11 Subjects: including trigonometry, geometry, ASVAB, algebra 1
...Very often students struggle with algebra 2 because they were not taught Algebra 1 properly. I will locate the areas where you or your child struggle, and fill the gaps of your learning, so
that algebra 2 will become much more intuitive and simple. Geometry is usually unlike any math that a student took before it.
12 Subjects: including trigonometry, physics, piano, geometry
...I teach courses in Mathematics and Computer Science. I am an online tutor at Franklin University. I also taught Algebra and Statistics at Columbus State Community College.
10 Subjects: including trigonometry, calculus, statistics, algebra 1
...I have a very flexible time schedule, however, I stick to a 24-hour notification policy, to either cancel, change or re-schedule an appointment.Mathematics is my minor specialization in
college. I have more than 40 units of credit in mathematics in college. I have taken and passed Algebra, tri...
19 Subjects: including trigonometry, chemistry, physics, GRE
|
{"url":"http://www.algebrahelp.com/Pikesville_trigonometry_tutors.jsp","timestamp":"2014-04-16T22:00:40Z","content_type":null,"content_length":"25081","record_id":"<urn:uuid:77029912-972b-4b8b-bba6-4b4a0b1b17e7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
|
{"url":"http://openstudy.com/users/therealamiee/answered","timestamp":"2014-04-19T17:10:44Z","content_type":null,"content_length":"104128","record_id":"<urn:uuid:3bcb2e12-5e4a-478c-a1b2-202a5f0a4264>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring Polynomials by ECSDM
Lesson Plan
Factoring Polynomials by ECSDM
Grade Levels
Commencement, 9th Grade
In this lesson, students will review multiplying two binomials together (FOIL) in the "Do-Now". Students will then learn how to factor a quadratic equation of the form ax^2 + bx + c when a is
equal to 1.
Through the interactive SMART Notebook lesson, the students will:
learn how to factor the quadratic equation ax^2 + bx + c, when a = 1, into 2 binomials
complete the accompanying worksheet for homework
Two 45-minute class periods
Materials/Web Resources
• Computer
• SMART Board
• LCD Projector
• "Factoring Polynomials" worksheet
• Internet Game - Rags to Riches
This procedure should be used in conjunction with the SMART Board file available below.
Day 1 -
1. Have students complete the DO NOW which consists of practicing the FOIL (first, outer, inner, last) method. This will help them to understand that factoring is foiling backwards.
2. Go over the DO NOW with the students. Reveal answers by removing rectangles.
3. Have students copy down notes on factoring polynomials.
4. Go over a few examples with the students. Use the box on the side to drag over needed numbers and operators. Follow the steps that are listed on the notes page.
5. Ask students to try a few problems in their notebooks. Have a few volunteers come up and complete the problems on the SMART Board.
6. Hand out homework worksheet ("Factoring Polynomials" worksheet below.) Have students begin problems in class and finish for homework.
Day 2 -
1. Go over homework with students. Have a few volunteers complete the problems on the board or SMART Board.
2. Go to the Rags to Riches game which can be found at quia.com.
3. Have students complete each problem on a separate sheet of paper that will be handed in. As a class, vote for the answer.
4. Continue until game has ended.
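The FOIL-in-reverse method the lesson teaches can be sketched programmatically, e.g. for spot-checking worksheet answers (a hypothetical helper, not part of the lesson materials):

```python
def factor_monic_quadratic(b, c):
    """Factor x^2 + b*x + c as (x + p)(x + q) over the integers (c != 0).

    Mirrors the method from the notes: find two integers whose
    product is c and whose sum is b.
    """
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return p, q
    return None  # no integer factorization exists

print(factor_monic_quadratic(5, 6))    # (2, 3): x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_monic_quadratic(-7, 12))  # (-4, -3): x^2 - 7x + 12 = (x - 4)(x - 3)
```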
1. Students will be assessed on participation in the SMART Notebook lesson.
2. Homework assignment will be checked. See attached sheet with answers.
Support Materials
The following support materials were used during this lesson:
SMART Board
This instructional content was intended for use with a SMART Board. The .xbk file below can only be opened with SMART Notebook software. To download this free software from the SMART Technologies
website, please click here.
Lucrezia Parisi, Enlarged City School District of Middletown
|
{"url":"http://www.nylearns.org/module/content/search/item/3982/viewdetail.ashx","timestamp":"2014-04-21T12:09:58Z","content_type":null,"content_length":"30474","record_id":"<urn:uuid:eb1f5ad4-d762-4ba6-9a57-b38579982c29>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Need Help on Wednesday, January 11, 2012 at 10:06pm.
I need help on this math problem.
The width of a certain painting is 5cm less than twice the length. The length of the diagonal distance across the painting is 107cm. Find the length and width. Round your answer to 2 decimal places.
diagonal distance: 107cm
length: L
width: 2L-5
Are these substitutions correct?
Should I use the Pythagorean theorem?
Can someone put me on the right track please so I can work out the problem?
please help and thank you
• Math 11 - Steve, Thursday, January 12, 2012 at 6:08am
L^2 + (2L-5)^2 = 107^2
L^2 + 4L^2 - 20L + 25 = 11449
L = 49.84
• Math 11 - HelpPlease, Thursday, January 12, 2012 at 9:38am
the next step is:
• Math 11 - Steve, Thursday, January 12, 2012 at 10:44am
Almost. Set the equation to zero to solve:
5L^2-20L-11424 = 0
then use the quadratic formula to get L.
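The steps above can be checked numerically (a quick sketch of the quadratic-formula step):

```python
import math

# Solve 5L^2 - 20L - 11424 = 0 for the length L, then recover the width
# and check the pair against the 107 cm diagonal.
a, b, c = 5, -20, -11424
disc = b * b - 4 * a * c
L = (-b + math.sqrt(disc)) / (2 * a)   # positive root (lengths are positive)
W = 2 * L - 5                          # width is 5 cm less than twice the length

print(round(L, 2), round(W, 2))        # 49.84 94.68  (cm)
print(round(math.hypot(L, W), 2))      # 107.0
```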
|
{"url":"http://www.jiskha.com/display.cgi?id=1326337612","timestamp":"2014-04-19T02:05:58Z","content_type":null,"content_length":"9186","record_id":"<urn:uuid:7b538dda-f053-4362-bde3-fccf03b51593>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Applications for the Management- Life- and Social Sciences 10th edition by Harshbarger | 9781133106234 | Chegg.com
Mathematical Applications for the Management, Life, and Social Sciences 10th edition
Details about this item
Mathematical Applications for the Management, Life, and Social Sciences: MATHEMATICAL APPLICATIONS FOR THE MANAGEMENT, LIFE, AND SOCIAL SCIENCES, 10th Edition, is intended for a two-semester applied
calculus or combined finite mathematics and applied calculus course. The book's concept-based approach, multiple presentation methods, and interesting and relevant applications keep students who
typically take the course--business, economics, life sciences, and social sciences majors--engaged in the material. This edition broadens the book's real-life context by adding a number of
environmental science and economic applications. The use of modeling has been expanded, with modeling problems now clearly labeled in the examples. Also included in the Tenth Edition is a brief
review of algebra to prepare students with different backgrounds for the material in later chapters.
Rent Mathematical Applications for the Management, Life, and Social Sciences 10th edition today, or search our site for other textbooks by Ronald J. Harshbarger. Every textbook comes with a 21-day "Any Reason" guarantee.
Published by CENGAGE Learning.
|
{"url":"http://www.chegg.com/textbooks/mathematical-applications-for-the-management-life-and-social-sciences-10th-edition-9781133106234-1133106234?ii=1&trackid=a925ded8&omre_ir=1&omre_sp=","timestamp":"2014-04-21T06:02:33Z","content_type":null,"content_length":"23486","record_id":"<urn:uuid:42dd3681-8b96-4645-b53e-abaeab0e58f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ardmore, PA Math Tutor
Find an Ardmore, PA Math Tutor
Latoya graduated from the University of Pittsburgh in December of 2007 with a Bachelor's degree in Psychology and Medicine and a minor in Chemistry. Currently, she is pursuing her Master's in
Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities.
13 Subjects: including algebra 2, trigonometry, psychology, biochemistry
...At the advanced level, I like pointing out grammatical constructions to students and asking them questions of syntax so that they can begin to understand how the language works. I served as a
math tutor during my first two years of college. Before that, I tutored students in high school math as well as elementary math.
10 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...I hold a PhD in Algorithms, Combinatorics and Optimization. I have 14 years' experience as a practicing actuary. I am a Fellow of the Society of Actuaries, having completed the actuarial exam
18 Subjects: including trigonometry, differential equations, algebra 1, algebra 2
...My years of experience as a published author will be invaluable to those students who wish to learn how to write an essay that will “stand out” from the “forest” of essays received. The
English language can present difficulties for many elementary school students. Most students are confused by the phonetics (sound) of a word as opposed to the correct spelling of the word.
61 Subjects: including algebra 1, algebra 2, organic chemistry, biology
...I love learning and teaching. I have a lot of patience and want to see everyone succeed. Everyone has the ability to learn, it's just a matter of finding the right way to reach them.
8 Subjects: including calculus, trigonometry, SAT math, precalculus
Related Ardmore, PA Tutors
Ardmore, PA Accounting Tutors
Ardmore, PA ACT Tutors
Ardmore, PA Algebra Tutors
Ardmore, PA Algebra 2 Tutors
Ardmore, PA Calculus Tutors
Ardmore, PA Geometry Tutors
Ardmore, PA Math Tutors
Ardmore, PA Prealgebra Tutors
Ardmore, PA Precalculus Tutors
Ardmore, PA SAT Tutors
Ardmore, PA SAT Math Tutors
Ardmore, PA Science Tutors
Ardmore, PA Statistics Tutors
Ardmore, PA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Ardmore_PA_Math_tutors.php","timestamp":"2014-04-16T22:33:12Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:429247bd-bc8b-4abb-b62a-03553770473b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dr Alvin Thaler | Sharp Capital Partners
Dr. Alvin Thaler
Al Thaler joined the National Science Foundation in 1971, when the Algebra and Topology Program, originally under the purview of Ralph M. Krause, was split into two programs. He was the first NSF
Program Director for Algebra. In his more than thirty years at NSF he saw, and was part of, the growth of the Division in terms of budget, personnel, and intellectual breadth. He presided over the
two renamings of the Algebra Program (first to Algebra and Number Theory, then, with the addition of Combinatorics, to the current program title).
A partial listing of his job titles and responsibilities includes: Program Director for Strategic Activities; Program Director for Computational Mathematics; Program Director for Algebra, Number
Theory, and Combinatorics; Program Manager, EXPRES (in CISE); Program Director for Foundations of Mathematics, and Program Director for Special Projects.
Thaler was an original member of the Grand Challenges Application Groups management team, and was NSF coordinator of the two initial joint NSF-DARPA programs (VIP, OPAAL).
As Program Director for Special Projects, in the 1980s, Thaler was principal Program Officer responsible for the formation and initial management of the Mathematical Sciences Research Institutes at
Berkeley and Minneapolis, as well as the concept and initial design of the Scientific Computing Research Equipment in the Mathematical Sciences program, and the Mathematical Sciences Postdoctoral
Research Fellowships program.
Dr. Thaler holds a Ph.D. in Mathematics (Arithmetic Algebraic Geometry) from The Johns Hopkins University, where his advisor was Bernard M. Dwork, and an A.B. from Columbia University.
Beginners Guide to ISE measurement. Chapter 11.
ACCURACY AND PRECISION OF SAMPLE MEASUREMENTS
a) General Discussion
The accuracy (how close the result is to the true value) and precision (= reproducibility; i.e. how close are a series of measurements on the same sample to each other) of ISE measurements can be
highly variable and are dependent on several factors. The concentration is calculated from the measured voltage, so any error in measurement will cause an error in the concentration; however, the relationship is not directly proportional but logarithmic, depending on the slope of the calibration line. For mono-valent ions with a calibration slope of around 55 millivolts per decade of
concentration, an error of 1 mV in measuring the electrode potential will cause approximately 4% error in the concentration, whereas for di-valent ions, with a slope of around 26, the error will be
more like 8% per mV. It must also be noted that the slope becomes less at the lower end of the concentration range, in the non-linear area, and hence the error per mV can be even greater at low
concentrations. Thus it is important to use a meter which is capable of measuring the millivolts accurately and precisely. With modern meter technology this is not normally the limiting factor,
although for the most precise work it can be beneficial to adopt multiple-sampling techniques (i.e. by using an integrating voltmeter or computer interface) in order to ensure the most reliable
voltage measurements.
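The figures quoted above follow from the logarithmic calibration line: since concentration varies as 10^(E/slope), a voltage error dE produces a relative concentration error of 10^(dE/S) - 1. A quick numerical sketch (illustrative, not part of the original guide; Python is used for these sketches throughout):

```python
def conc_error(mv_error, slope_mv_per_decade):
    """Relative concentration error caused by a given voltage error,
    for an ISE whose calibration slope is slope_mv_per_decade (mV/decade).
    Follows from concentration being proportional to 10**(E / slope)."""
    return 10 ** (mv_error / slope_mv_per_decade) - 1

# Mono-valent ion (slope ~55 mV/decade): 1 mV of error -> ~4% in concentration.
print(f"{conc_error(1, 55):.1%}")   # 4.3%
# Di-valent ion (slope ~26 mV/decade): roughly double the sensitivity.
print(f"{conc_error(1, 26):.1%}")   # 9.3%
```

The exact exponential gives slightly larger values than the rounded 4% and 8% figures in the text, and the error grows further at low concentrations where the calibration curve flattens (smaller effective slope).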
Apart from the accuracy and precision of the measuring device (meter or computer interface), the most important factors in achieving the most precise results is controlling the electrode drift and
hysteresis (or memory), and limiting the variability in the Liquid Junction Potential of the reference electrode, so that the measured voltage is reproducible. The amount of the drift and hysteresis
effects can vary significantly between different ions and different electrode types, with crystal membranes being generally more stable than PVC - techniques for controlling or minimising drift and
hysteresis are described elsewhere in this work (Chapter 9).
The most effective way of minimising the variation in LJP is by using Standard Addition or Sample Addition Techniques (see later - section e). Alternatively, but less effectively, by using reference
electrodes with nearly equitransferent filling solutions (in which both ions have the same mobility when diffusing through the ceramic tip), such as KNO3 or Li acetate - but this is not always
possible (depending on likely interference effects).
The accuracy of the results is affected by several other factors:
1) The presence of interfering ions.
2) Any difference in ionic strength between the sample and standard solutions.
3) Any difference in temperature between sample and standards - A re-calibration should be made if the sample temperature changes by more than 1 degree C from the calibration temperature.
4) Any variation in the electrode slope in different parts of the curve. Although the calibration graph may show a straight line over several decades of concentration with an average slope of say
54.5 ± 2 mV/dec. it is highly unlikely that this will be exactly the same across the whole of this range. If separate two-point calibrations are made between two more closely spaced points at
different concentration ranges then there may be a variation of several millivolts between the individual slopes. Thus, if samples are calculated using the overall slope then they will give results
which will differ in concentration from those calculated using the appropriate individual slope by 4% times the difference in mV between the two slopes.
Therefore, for the most accurate results, it is recommended that the electrode slope is determined using two standards which closely span the expected range of the samples. It must be noted, however,
that it is not beneficial to have standards too close together because the measured slope is dependent on the difference in voltages. So, for example, if the difference in mV is 50 then a 1 mV error
in measurement will only cause a 2% error in the slope but if the difference is only 10 mV then the same measurement error will result in a 10% error in the slope. Thus it is normally recommended
that calibration standards are about an order of magnitude different in concentration and should not be less than 20 mV difference in reading.
Nevertheless, whichever slope is used, the reproducibility of replicate measurements of the same sample should be the same.
By taking special precautions to overcome drift problems (such as frequent recalibration and ensuring that you wait for stable readings, or read after a regular time interval), and by adding special
ISABs to equalise activity effects and remove interfering ions, direct potentiometry can give very reasonable results (reproducibility of ± 2 or 3%, one standard deviation, and accurate within these
precision limits). Even without taking these precautions, it is possible to achieve satisfactory reproducibility and accuracy (± 10 to 15%) for many applications where the highest accuracy is not
necessary and ionic strength and interfering ions are not a problem.
b) Reproducibility Experiments using an Ammonium Electrode.
Some of the suggestions in the foregoing discussion, and the levels of accuracy and precision achievable with careful work, can be illustrated with the results of some experiments conducted by the
author. Reproducibility tests were carried out using an ELIT 8 mm diameter, solid-state ammonium electrode (PVC membrane) with a lithium acetate double junction reference electrode and pure
ammonium solutions (no ISAB). Standard solutions containing 1 ppm and 10 ppm NH[4]^+ were used for calibration and a 5 ppm solution was used as the test sample. Measurements were made after immersing
the electrodes in approximately 50 mls of solution in a 100 ml beaker, swirling the solution for 5 secs. then leaving to stand for 20 secs. Each millivolt measurement was the average of ten readings
taken at one second intervals. The electrodes were rinsed with a jet of de-ionised water, then soaked in a beaker of water for twenty seconds, then dabbed dry with a low-lint tissue between each
measurement. The solutions were measured in the sequence 1 ppm, 5 ppm, 10 ppm, and this pattern was repeated six times. The data were obtained using a meterless PC interface and specially written software.
For this experiment, the concentration results were calculated with an EXCEL spreadsheet using the Nernst equation in the standard form for a straight line: y = mx + c.
y is the measured voltage,
m is the electrode slope
(calculated from the two-point calibration data: (V1 - V2) / (log(ppm1) - log(ppm2))),
x is the logarithm of the concentration in the sample,
c, the intercept on the y axis, is E^o.
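A minimal sketch of the spreadsheet calculation just described, with made-up voltages chosen to resemble the experimental values reported below (these are illustrative numbers, not the author's raw data):

```python
from math import log10

def two_point_calibration(c1, v1, c2, v2):
    """Return (slope, intercept) of the line V = slope * log10(C) + intercept,
    fitted through two standards (concentrations in ppm, potentials in mV)."""
    slope = (v1 - v2) / (log10(c1) - log10(c2))
    intercept = v1 - slope * log10(c1)
    return slope, intercept

def concentration(v, slope, intercept):
    """Invert the Nernst line to get a sample concentration from a measured mV."""
    return 10 ** ((v - intercept) / slope)

# Illustrative readings: 346.2 mV at 1 ppm, 402.9 mV at 10 ppm.
m, e0 = two_point_calibration(1.0, 346.2, 10.0, 402.9)
print(round(m, 2), round(e0, 2))               # slope 56.7 mV/decade, E0 346.2 mV
print(round(concentration(385.8, m, e0), 2))   # a 385.8 mV sample reads ~4.99 ppm
```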
The experimental data were processed in several different ways:
1) Using only the first measurement of the two standards to define the slope and intercept, six measurements of the 5 ppm sample, taken over approximately half an hour, gave an average of 4.71 ± 0.14
ppm (±2.96% one standard deviation). However it was noticeable that successive measurements gave progressively lower values due to electrode drift after calibration (causing a difference of nearly 8%
between the highest and lowest results).
2) The drift effect was compensated for by recalculating each result using different values for the slope and intercept as defined by the standards measured immediately adjacent to each sample
measurement. This produced a significant improvement in the reproducibility and only a random variation in the results rather than a progressive drift downwards. This clearly demonstrates the
importance of measuring samples soon after calibration. The average concentration this time was 4.90 ± 0.06 ppm (±1.20%, 1 S.D.) Although remarkably precise and very close to the true value, the
accuracy of this average is not quite within the precision limits. As noted above, this can probably be explained by variation in the electrode slope and this suggestion is supported by examining the
individual slope values which can be calculated from the various measurements. The average value for six determinations of the slope between 1 and 5 ppm was 55.92 ± 0.92 whereas that between 5 and 10
ppm was 58.21 ± 0.78; i.e. there is a significant difference in slope between the two adjacent ranges.
3) A third method of calculating these results, using the slope defined by the first calibration for all samples but a different intercept value as given by each successive two-point calibration, was
less satisfactory and gave 4.87 ± 0.12 ppm (± 2.35%) which is only slightly better than the results using only a single calibration at the beginning. Thus these data would appear to suggest that the
effect of electrode drift is more significant in producing changes in the measured slope between different sample measurements rather than producing a change in the calculated value for the
intercept. This conclusion is also borne out by examining the individual calibration data. Whereas the average slope between 1 and 10 ppm was 56.74 ± 0.51 (± 0.90%) for six successive measurements
and these showed a gradual drift downwards (57.49, 57.04, 56.84, 56.64, 56.49, 55.98) the associated intercept calculations showed a more random distribution and gave a much more precise average
value of 346.18 ± 0.26 mV (± 0.07%).
c) Reproducibility of Chloride Measurements.
A second experiment using the same techniques as above, but with a chloride (crystal membrane) electrode and calibration standards of 25 and 250 ppm also yielded very impressive results. Eight
measurements of a 100 ppm test solution gave an average of 95.4 ± 0.6 ppm (± 0.63%) when two-point calibrations were made immediately prior to each sample measurement.
d) Conclusions from the Experimental Data.
These experimental results demonstrate that in order to obtain the best possible accuracy and precision, it is important to measure samples soon after calibration and to use standard solutions that
closely bracket the expected range of sample concentrations. Furthermore, for direct potentiometry measurements, it is best to make a full two-point recalibration every time, in order to obtain the
most precise value for the slope, rather than just making a single point recalibration and assuming that the slope is constant. This is not necessary for Standard and Sample Addition techniques
because of the possibility of recalculating the results for a known standard to "fine tune" the slope measurement in the middle of the concentration range expected for the samples.
These results show that it is possible to obtain accuracy and precision levels of better than ± 3% fairly easily, and better than ± 2% by making more frequent calibrations or by using Standard or
Sample Addition techniques (better than ± 1% for some crystal membrane electrodes). Thus it has been shown that, with careful use and a full consideration of all the limiting factors, ISE technology
can be compared favorably with other analytical techniques which require far more sophisticated and costly equipment.
Nevertheless, it must be stressed that these special considerations are only necessary to achieve the highest possible precision and accuracy. For many applications, a simple two-point calibration
followed by a batch of direct potentiometry measurements will probably be quite sufficient.
e) Standard Addition and Sample Addition Methods
These methods can potentially yield even more accurate and precise results than direct potentiometry because the calibration intercept and sample measurement stages are made essentially at the same
time and in the same solution (but the calibration slope still has to be measured separately before sample measurements). This means that Ionic Strength and temperature differences between sample and
standard are not significant - and the fact that the electrodes remain immersed throughout the measurements means that hysteresis, memory, and variations in the reference electrode liquid junction
potential are eliminated. These methods are particularly useful for samples with high ionic strength or a complex matrix. However, they are rather more time-consuming and require more analytical
chemistry expertise than direct potentiometry and are not as popular for many applications where the highest accuracy and precision is not necessary. See Standard & Sample Addition Methods for a full
description of the methods and experimental results for precision and accuracy.
Undoubtedly the Double Standard Addition method is potentially the most precise way of making ISE measurements - but it has not been widely adopted because of the increased complexity. This method
measures the calibration slope and intercept and the sample concentration at the same time, and sample results will be accurate within the precision limits as long as there are no significant ionic
interferences in the samples. An added refinement is to use a weighed dropping bottle for adding the standard instead of a pipette. This is more precise and particularly useful for small samples.
Click Here for more details of the Double Addition method and an MS "Excel" spreadsheet to calculate the results.
...Through the opportunity provided by WyzAnt, I would like to help students realize their own potential. My specialties are math and science. I am able to teach up to college level in all the
following subjects: math, physics, chemistry, and biology.
38 Subjects: including precalculus, chemistry, statistics, calculus
I have more than 20 years of experience teaching and tutoring Mathematics at the elementary, high school, and college levels. I've been tutoring Spanish for several years. I have a Bachelor's degree in Mathematics and a Master's in Mathematics Education.
8 Subjects: including precalculus, Spanish, geometry, trigonometry
...I received 5s on the AP Calculus and AP Literature exams, a composite 2130 on the SAT and a 31 on the ACT. I specialize in English and Math, and feel most confident about SAT/ACT/AP writing and
reading comprehensions. I have always believed that the best learning and study methods come from repeat exposure and extreme practice!
16 Subjects: including precalculus, reading, writing, calculus
...Even the students that breezed through Algebra 1 can stumble over Algebra 2 concepts just because there are so many things that have to come together in their brains before they can master it. I enjoy the challenge of "getting all their ducks in a row," so to speak, so that they can then pro...
11 Subjects: including precalculus, calculus, ASVAB, geometry
...Please feel free to contact me for more information about calculus tutoring. Chemistry is sometimes referred to as the fundamental or central science because it forms the foundation of biology
and our experience of the everyday world. My approach to teaching chemistry is to focus on those aspec...
22 Subjects: including precalculus, chemistry, calculus, French
Geneseo Mathematics Colloquium Schedule
Spring 2013
Wednesday, February 13, 2:30 - 3:20pm
Newton 203
Gillian Galle, University of New Hampshire
The Trouble with Trigonometry
Students that enroll in algebra-based physics courses for life science may be less prepared mathematically than their counterparts in the engineering, physical science, or mathematics majors. This
means it can be especially difficult for them to develop conceptual understandings of equations that possess both physical and mathematical interpretations within the same context. Based on such
students’ answers to a particular question on simple harmonic motion equations, this study undertook to systematically probe the following questions: What is the range of students’ initial knowledge
with respect to trigonometry? Is reviewing trigonometric concepts valuable and/or necessary? Can students see the trigonometric equations describing oscillations as conveying an idea, in addition to
being a tool to get "the answer?"
In this talk I focus on the efforts of my colleague and me to answer this last question through the design and timely implementation of a trigonometric intervention and motivational activity meant to
help these students reason through the underlying connections between trigonometry and modeling simple harmonic motion. In addition to discussing the research the intervention was based on, I will
address the development of our motivational activity, our finding that students can learn to see trigonometric equations describing oscillations as conveying an idea, and what implications this may
have for the way we address this topic in both high school and undergraduate physics courses.
Friday, February 15, 3:30 - 4:20pm
Newton 203
Valentina Postelnicu, Arizona State University
The Functional Thinking in Mathematics Education: A Cultural Perspective
One of the most important ideas that influenced the mathematics education of the last century is the idea of educating functional thinking, particularly a kinematic-functional thinking. Bringing
students up to functional thinking has proved to be a difficult task for mathematics educators. We examine the current state of mathematics education with respect to functional thinking by
considering different curricular approaches to functions in the United States and other parts of the world. We look closely at one problem and the way it may appear in different cultural settings. We
focus on issues related to the covariational approach to functions, the rise of digital technologies, and the need for symbolic representations.
Thursday, February 21, 4:00 - 4:50pm
Newton 203
Amanda Beeson, University of Rochester
Did Escher know what an elliptic curve is?
We will give a naïve introduction to elliptic curves. Then we will discuss whether the 20th century Dutch artist M.C. Escher knew what an elliptic curve is. Along the way, we will discover many
wonderful things about his piece called "Print Gallery". This talk will be enjoyable if you remember how to add and multiply, but some paper-folding skills never hurt. This talk is based on work of
H. Lenstra.
Monday, February 25, 4:00 - 4:50pm
Newton 203
Carlos Castillo-Garsow, Kansas State University
Chunky and smooth images of change
Students have well documented difficulties with graphs. In this talk, I discuss recent and current research that investigates connections between these difficulties and student difficulties in
forming images of change, the impact that these student difficulties have on more advanced mathematical reasoning at the secondary and undergraduate level, the damage that developing these
difficulties can do to the preparation of teachers, and the potential role of technology in developing solutions to these systemic and persistent problems.
Wednesday, February 27, 3:30 - 4:20pm
Newton 203
May Mei, University of California, Irvine
Attack of the Fractals!
You may not know it, but you're surrounded by fractals! They are all around you and even inside of you. In this talk, we will explore the prevalence of fractal structure in the natural world and in
mathematics. Then we will construct the standard Cantor set and show you how you can construct your own fractals.
Friday, March 1, 3:45 - 4:35pm
Newton 203
Emma Norbrothen, North Carolina State University
Number Systems Base p
Rational numbers can construct the real numbers by using the absolute value norm. Under different norms, rationals can construct different types of numbers. In particular, the p-norm evaluates how
much a prime, p, is a factor in a given rational. We will explore some consequences of the p-norm and what kind of numbers it creates from the rationals.
Friday, April 5 2:30 - 3:30pm
Newton 204
Sue McMillen, Buffalo State
President, Association of Mathematics Teachers of New York State (AMTNYS)
Fibonacci Fun
Explore interesting properties of the Fibonacci sequence. Look for patterns and make conjectures. Learn about connections between matrices and the Fibonacci sequence. Bring your calculator. If you
would like to know more about graduate studies at Buffalo State or about AMTNYS, please stay around after the talk to converse with Dr. McMillen.
Thursday, April 25 2:30 - 3:30pm
Newton 203
Arunima Ray, Rice University
SUNY Geneseo, Class of 2009
A friendly introduction to knots in three and four dimensions
If you've ever worn sneakers or a necktie, or ever been a boy scout, you know a lot about knots. Knot theory is also an exciting (and young) field of mathematics. We will start from scratch to define
and discuss some basic concepts about knots in three dimensions, such as how to quantify the 'knottedness' of a knot and how to tell if two knots which look different are secretly the same. We will
also see how a four dimensional equivalence relation reveals a simple and elegant algebraic structure within the set of knots.
This talk will be very visual with lots of pictures and will be accessible to students at all levels.
Integration by partial fractions is a technique we can use to integrate rational functions when the degree of the numerator is less than the degree of the denominator. Here's the big picture:
• We start out with an integral whose integrand is a rational function. The degree of the numerator must be less than the degree of the denominator.
• We do some sneaky stuff to rewrite the original rational function as a sum of partial fractions.
• We integrate the partial fractions, whose antiderivatives all involve the natural log.
Be Careful: When x occurs in a denominator with a coefficient other than 1, you have to use integration by substitution.
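The specific integrals shown on the original page were images and did not survive extraction. As an illustrative stand-in (not the page's own example), the three steps look like this for a simple integrand:

```latex
% Decompose: clear denominators and match coefficients.
\frac{1}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2}
\quad\Longrightarrow\quad 1 = A(x+2) + B(x-1)
% Setting x = 1 gives A = 1/3; setting x = -2 gives B = -1/3. Integrate each piece:
\int \frac{dx}{(x-1)(x+2)}
  = \frac{1}{3}\ln\lvert x-1\rvert - \frac{1}{3}\ln\lvert x+2\rvert + C
```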
[Practice exercises from the original page: finding a sum with unknown constants A and B, evaluating an integral without a calculator, and decomposing a series of rational functions into partial fractions. The specific expressions were images and are omitted.]
The Payback Period
By WEALTH Editors
When a new business opens its doors, its owners and investors want to know: How long is it going to take until I get my initial investment back and start pulling in a profit? This payback period
determines the value of a business opportunity by the amount of time it takes the business venture to recover the initial investment costs.
Here’s the mathematical formula for payback period analysis:
PP = Cost of Project or Investment / Annual Cash Inflows
Simple, right? How much went out compared to how much came in. Let’s look at an example. We want to figure out the payback period on an enterprise into which we plan to place a $200,000 investment.
The expected annual cash inflow, which is really the amount being returned on the investment, is $40,000 per year. When you divide $200,000 by $40,000, you learn that the payback period is five years
if the investment performs as anticipated.
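The division above is one line of arithmetic, but a short sketch (illustrative; the function and names are my own, not the article's) also covers the common case where yearly inflows are uneven:

```python
def payback_period(cost, inflows):
    """Years until cumulative inflows recover the initial cost.
    `inflows` is a list of per-year cash inflows; a fractional year is
    interpolated within the year in which recovery occurs."""
    cumulative = 0.0
    for year, inflow in enumerate(inflows, start=1):
        cumulative += inflow
        if cumulative >= cost:
            # Subtract the unused fraction of the recovery year.
            return year - (cumulative - cost) / inflow
    return None  # the investment is never recovered

# The article's example: $200,000 recovered at $40,000 per year -> 5 years.
print(payback_period(200_000, [40_000] * 6))  # 5.0
```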
Although the payback period is the easiest to compute and the simplest to understand, this method of analysis has weaknesses. The payback period analysis does not compute any benefits or profits that
may have been earned or acquired after the payback period. Payback period analysis also does not account for the time value of money, that is, what the money could have been earning had it been
invested elsewhere.
Internal rate of return
The internal rate of return (IRR), according to Investopedia, is the growth rate expected to be generated by a specific project. Generally speaking, the higher a project’s internal rate of return,
the more desirable it is to undertake the project. This ratio is also sometimes called the economic rate of return or ERR. The IRR is an analysis of how much sales volume a business must achieve in
order to begin making profit. This information is crucial in developing a pricing strategy. By knowing the number of sales required at a given price in order to break even, an entrepreneur can
identify whether the price structure being considered will allow the business to thrive.
When a business obtains bank financing to purchase equipment or gets a loan to provide operating capital, the money borrowed carries a cost, or interest, that must be paid in addition to the
principal amount. The income generated by the use of this capital compared to the cost of the capital generates the IRR. In most spreadsheet software applications such as Excel, the formula for IRR
is a built-in function and you simply choose the formula by using the formula wizard. You can also use online calculators to perform the calculations.
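For readers without a spreadsheet handy, here is a minimal sketch of what the built-in IRR function computes: the discount rate at which the net present value (NPV) of the cash-flow series is zero, found here by bisection. The cash-flow numbers are made up for illustration:

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, iters=100):
    """Internal rate of return: the rate where NPV crosses zero.
    Assumes a single sign change of NPV between lo and hi (bisection)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Made-up project: invest 1000 now, receive 500 at the end of each of 3 years.
print(round(irr([-1000, 500, 500, 500]), 3))  # ~0.234, i.e. about 23.4%
```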
Here’s an example of how IRR can help you make solid business decisions. Suppose you wish to purchase equipment that costs $100,000. You will have to pay 14 percent interest on the money borrowed to
pay for the equipment. Your best estimates show that by adding this equipment, you’ll increase your revenues by 22 percent. The IRR in this case is 8 percent, the difference between the cost of
capital and the revenue increase generated by the capital investment.
On the other hand, let’s say that you are considering the same equipment purchase at a cost of $100,000 and you’ll pay the same 14 percent on the money borrowed, but your best estimates of the
revenue generated by adding this equipment show that your revenue will increase only 3 percent, putting the IRR in negative territory. Then clearly the investment is not a good idea.
Break-even analysis
Another way to estimate profitability of a business venture and to determine some uncertain variables you undoubtedly will run into somewhere along your business path is called the break-even
analysis. Break-even analysis can help determine the income estimation when certain variables may still be unknown. The simple equations that represent a break-even analysis are below.
First, you need to determine the contribution margin. The contribution margin consists of the revenue minus any variable costs (costs that change from month to month such as electricity). The result
is the amount that is available to pay for fixed costs (costs that do not vary from month to month but are always a set amount such as the payment on a fixed-rate mortgage) and the remainder provides
the profit the business earns. Once you know the contribution margin, you can then subtract the fixed costs from the contribution margin to obtain the before-tax earnings from the business.
Second, you can now calculate the break-even point. In order to do so, you must divide the total fixed costs by the contribution margin that you defined using the formula above. Commonly, break-even
is expressed as a percent of revenue. It can also be expressed as a number of units required to achieve the break-even point.
Here’s what we mean: Current financial statements for Joe’s Bakery show that the fixed costs are $49,000 and variable costs per loaf of bread are $0.30. Let’s assume the sales revenue is $1.00 per
loaf of bread. We can easily see that after the $0.30 per loaf variable costs are covered, each loaf of bread contributes $0.70 toward covering the fixed costs of the business. By dividing fixed
costs by the contribution of $0.70 made by each loaf of bread sold ($49,000 / $0.70), we see that 70,000 loaves must be sold in order for Joe’s Bakery to break even.
If fewer than 70,000 loaves of bread are sold, the business will experience a loss.
In the case of Joe’s bakery, a 10,000-loaf increase in sales over the break-even point to 80,000 loaves will produce a $7,000 profit, and a 30,000 loaf increase to 100,000 loaves will produce a
$21,000 profit. On the other hand, a decline in sales of 10,000 loaves from break-even to 60,000 loaves will produce a loss of $7,000, and a 30,000 decrease from the 70,000 break-even point produces
a $21,000 loss.
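The bakery arithmetic above can be captured in a few lines (a sketch using the article's numbers; the function names are my own):

```python
def break_even_units(fixed_costs, price, variable_cost):
    """Units needed for the per-unit contribution margin to cover fixed costs."""
    margin = price - variable_cost  # contribution per unit sold
    return fixed_costs / margin

def profit(units, fixed_costs, price, variable_cost):
    """Before-tax earnings at a given sales volume."""
    return units * (price - variable_cost) - fixed_costs

# Joe's Bakery: $49,000 fixed costs, $1.00 per loaf, $0.30 variable per loaf.
print(round(break_even_units(49_000, 1.00, 0.30)))  # 70000 loaves to break even
print(round(profit(80_000, 49_000, 1.00, 0.30)))    # 7000 (profit above break-even)
print(round(profit(60_000, 49_000, 1.00, 0.30)))    # -7000 (loss below break-even)
```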
Being able to successfully manipulate these types of calculations can be the difference between a thriving business and a failed enterprise.
Caltech Physics League
Introduction and terms
Winter 2014
Meetings: Mondays, 6pm, E. Bridge 114.
(no meeting on Jan. 20th for MLK)
Feb. 3rd: Vacuum resistance, antenna fields, and energy weapons
assignment (due Feb. 10th): Raytheon's anti-demo weapon
Additional links:
* sensitivity of devices and persons to EM radiation:
* Boeing's EM pulse weapon:
Jan. 27th: Radars and stealth
assignment: longitudinal iron-ball mode
Jan. 9th, 13th: The Thing - a vintage eavesdropping device.
Assignment, due Jan 27th: How to operate a bug from afar
Fall 2013
Meetings: Mondays 6pm at E. Bridge 114.
Dec. 2nd: Roundup of the recent assignments
Nov. 24th: Warp drives
extra stuff:
Alcubierre's paper on warp drives
NASA's Sonny White's paper improving the warp drive
Report on NASA's efforts
Thorne-Morris wormholes (recommended!)
Nov. 17th: Fusion
Problem 7: Fusion of Mu-drogen
Nov. 10th: Nuclear-Fission Critical Mass
Problem 6: Critical mass for a diffusive core
Nov. 3rd: Space elevators
Problem 5: Mars elevator design.
Oct. 28th: Balloon assisted launches
Problem 4: Air drag on a rocket.
Oct. 21st: Ion Thrusters
Problem 3: What limits an ion thruster - charge buildup.
Oct. 14th: More on RTG's
Problem 2: Modeling the RTG unit of Cassini.
Oct. 7th: Thermoelectrics and the risks of powering space exploration.
Problem 1: Typical Seebeck coefficient, and power extraction.
Spring 2013
Last meeting: June 2nd, 5:45pm, bridge 114. Featuring: Clockwise-counterclockwise.
May 13th: space debris I - debris size
Question for next meeting: Could you think up a theory for the 1/L^2 rule of debris size distribution?
Problem 1: Magnetic field and Einstein challenge. (will be discussed on May 13th)
April 29th: magnetic fields of stars. (notes to come)
April 22nd: Black hole batteries (click for notes)
Black-hole batteries paper
April 8th, 15th: Slowing light (class notes)
and - more elaborate notes discussing QM dynamics
Slowing light paper
Winter 2013
Meeting times: Mondays, 5:45PM, E. Bridge 114.
Optical solitons notes
Photonic crystals notes
Challenge 2: Motorcycles: crash and jump
Challenge 1: Flats and jets
jet solution: notes from meeting discussion
Winter 2012
Organizational meeting : 10/1/2012 (Tuesday), 6PM, Bridge 114.
Weekly meeting : Tuesday, 6PM, Bridge 114.
Challenge 1: Earthquake!
Challenge 2: Altitude sickness
Challenge 2: Mathematica notebook
Challenge 3: Black Hole Battery
Fall 2011
Organizational meeting : 10/11/2011 (Tuesday), 6PM, Bridge 114.
Weekly meeting : Tuesday, 6PM, Bridge 114.
Challenge 1: From diffraction patterns
Challenge 1: Mathematica notebook
Challenge 2: Quasicrystals from an odd angle
Challenge 2: Mathematica notebook
Challenge 3: Dispersion of Graphene
Game Show!
Spring 2011
Organizational meeting : 1/6/2011(Thursday), 6PM, Bridge 114.
Tidal Speed note
Desert Chimney note
Tokamak note
Fall 2010
CPL meeting times: Thursday, 6pm - 7:30pm, Bridge 114.
meeting 1: Earth rotation induction generator notes
superconducting critical field example experiment
Challenge 1: Magnetic braking
Challenge 2: Speed of the tide
Here's the tidal energy note by Gil
Spring 2010
First challenge - Hanbury Brown-Twiss and Sirius
Noise correlations:
- Hanbury Brown-Twiss Sirius Nature paper
- Correlations in BEC's and Mott insulators:
Ketterle's webpage: http://cua.mit.edu/ketterle_group/
Ian Spielman's noise measurements: http://arxiv.org/abs/cond-mat/0606216
Winter 2010
Second challenge - Magnetic pinball
winners: John Forbes and Tony Baojia
First challenge - Atmosphere vs. Astronomy
due: extension made till monday Feb 15th.
winner: John Forbes
2/11/10 meeting: slowing light. (17m/s!) nature-paper
3/4/10 meeting: Aurora slide show
Fall 2009
Second challenge - Electric images
Preparation problem: The imperfect electric mirror
Challenge 2 Problems
winners: John Forbes and Tony Baojia
First challenge - Mars elevator under fire
Challenge 1 Problems
Preparation problem: Mars space elevator (not handed in or graded)
also see Notes on waves in ropes
taken from: http://www.myreviews101.com/tag/space-elevator-on-green-mars
Spring 2009 challenges:
Challenge 1: Space wars with leftovers
Winners: John Schulman and Eric Stansifer.
Challenge 2: Sonic booms
Winners: John Schulman and Tong (Tony) Boajia
Useful Links:
• Leaguemaster info:
Gil Refael
164 W. Bridge
M-C 149-33
tel.: (626)-395-4705
fax: (626)-683-9060
The Turing Machine
«There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy» says Hamlet to his friend Horatio. An elegant way to point to all the unsolvable, untreatable questions
that haunt our lives… One of the most wonderful demonstrations of all times ends up with the same sad conclusion: There are mathematical problems that are simply unsolvable.
In 1936, the British mathematician Alan Turing conceived the simplest and most elegant possible computer ever: A device, as he described it, "with an infinite memory capacity obtained in the form of
an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the
scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behaviour of the machine. However, the tape can be moved back and forth
through the machine, this being one of the elementary operations of the machine".
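As a rough illustration (not Turing's original formalism), a machine of this kind can be simulated in a few lines of Python: a tape of symbols, one scanned symbol, and a transition table that decides what to write and which way to move. The increment machine below is a made-up example:

```python
def run_turing_machine(transitions, tape, state="start", steps=1000):
    """Simulate a one-tape Turing machine.

    `transitions` maps (state, scanned_symbol) to
    (symbol_to_write, head_move, next_state); head_move is -1 or +1.
    Unwritten tape cells hold the blank symbol "_".
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        scanned = tape.get(head, "_")
        write, move, state = transitions[(state, scanned)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: walk right over a unary number and append one mark.
increment = {
    ("start", "1"): ("1", +1, "start"),   # skip existing marks
    ("start", "_"): ("1", +1, "halt"),    # write one more, then halt
}
print(run_turing_machine(increment, "111"))   # 1111
```

The `steps` cap is only there so the sketch always terminates; a real Turing machine has no such bound, which is exactly what makes halting undecidable.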
An abstract machine, conceived by the mind of a genius, to solve an unsolvable problem, the decision problem, that is: for each logical formula in a theory, is it possible to decide, in a finite
number of steps, whether the formula is valid in that theory? Well, Turing showed that it is not possible. The decision problem, or Entscheidungsproblem, was well known to mathematicians: Hilbert had posed it
formally in 1928, in the spirit of the famous list of unsolved problems he presented to the community of mathematicians in 1900, a list that set much of the agenda for twentieth-century mathematical research.
The decision problem asks whether there is a mechanical process — one that can be carried out in a finite number of steps — to decide whether a formula is valid or not, or whether a function is computable
or not. Turing started by asking himself: "What does a mechanical process mean?" and his answer was that a mechanical process is one that can be realized by a machine. Obvious, isn't it?
He then designed a Turing machine for each possible formula in a first-order logic system, or, for each possible recursive function within the natural numbers, given the logical equivalence proven by
Gödel between the two sets. And indeed, with his simple definition, we can write down a string of 0s and 1s on a tape to describe each function, then give the machine a list of simple instructions
(move left, move right, stop) so that it writes down the demonstration of the function and then stops.
Turing was able to design a Universal Turing Machine, that is, a machine that can take as input any possible string of symbols that describe a function and give as output its demonstration. But, if
you feed the Universal Turing Machine with a description of itself, it doesn't stop, and goes on infinitely generating 0s and 1s. That's it. The mother of all the computers, the soul of the Digital
Age, was designed to show that not everything can be reduced to a Turing machine. There are more things in heaven and earth.
A triangle or trigon is a two dimensional geometric object that has the specific qualities of having three straight sides that intersect at three vertices.
The sum of the internal angles that exist at the vertices always total the same number for every triangle—180 degrees, or π radians.
In Euclidean geometry, any three non-collinear points determine a unique triangle and a unique plane.
Types of triangles
By relative lengths of sides
Triangles can be classified according to the relative lengths of their sides:
• In an equilateral triangle, all sides are the same length. An equilateral triangle is also a regular polygon with all angles 60°.
• In an isosceles triangle, at least two sides are equal in length. An isosceles triangle also has two equal angles: the angles opposite the two equal sides.
• In a scalene triangle, all sides and internal angles are different from one another.
By internal angles
Triangles can also be classified according to their internal angles, measured here in degrees.
A triangle that does not contain a right angle is called an oblique triangle. One that does is a right triangle.
• There are two types of oblique triangles, those with all the internal angles smaller than 90°, and those with one angle larger than 90°.
The obtuse triangle contains the larger than 90° angle, known as an obtuse angle. The acute triangle is composed of three acute angles, the same as saying that all three of its angles are smaller
than 90°.
• A right triangle (or right-angled triangle) has one 90° internal angle (a right angle). The side opposite to the right angle is the hypotenuse; it is the longest side in the right triangle. Right
triangles conform to the Pythagorean theorem: the sum of the squares of the two legs is equal to the square of the hypotenuse; i.e., a^2 + b^2 = c^2, where a and b are the legs and c is the
hypotenuse. See also Special right triangles.
• An equiangular triangle is also an equilateral triangle.
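The two classifications above can be combined in a short sketch (the function name is illustrative, and exact equality assumes exact, e.g. integer, side lengths):

```python
def classify_triangle(a, b, c):
    """Classify a valid triangle by sides and by its largest angle.

    By the converse of the law of cosines, the angle opposite the
    longest side c is right when c^2 == a^2 + b^2 and obtuse when
    c^2 > a^2 + b^2.
    """
    a, b, c = sorted((a, b, c))
    sides = ("equilateral" if a == c
             else "isosceles" if a == b or b == c
             else "scalene")
    angles = ("right" if c * c == a * a + b * b
              else "obtuse" if c * c > a * a + b * b
              else "acute")
    return sides, angles

print(classify_triangle(3, 4, 5))   # ('scalene', 'right')
print(classify_triangle(2, 2, 3))   # ('isosceles', 'obtuse')
print(classify_triangle(1, 1, 1))   # ('equilateral', 'acute')
```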
Basic facts
Triangles are assumed to be two-dimensional plane figures, unless the context provides otherwise (see Non-planar triangles, below). In rigorous treatments, a triangle is therefore called a 2-simplex
(see also Polytope). Elementary facts about triangles were presented by Euclid in books 1–4 of his Elements, around 300 BCE.
The internal angles of a triangle in Euclidean space always add up to 180 degrees. This allows determination of the third angle of any triangle as soon as two angles are known. An exterior angle of a
triangle is an angle that is adjacent and supplementary to an internal angle. Any external angle of any triangle is equal to the sum of the two internal angles that it is not adjacent to; this is the
exterior angle theorem. The three external angles (one for each vertex) of any triangle add up to 360 degrees. (The n external angles of any n-sided convex polygon add up to 360 degrees.)
The sum of the lengths of any two sides of a triangle always exceeds the length of the third side, a principle known as the triangle inequality. In a special case, the sum is equal to the length of
the third side; but in this case the triangle has arguably degenerated to a line segment, or to a digon.
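A minimal sketch of the triangle inequality test, including the degenerate (collinear) case; the names are illustrative:

```python
def triangle_kind(a, b, c):
    """Apply the triangle inequality to three nonnegative lengths.

    Returns "triangle" when every side is shorter than the sum of the
    other two, "degenerate" when the longest side exactly equals the
    sum of the other two (the points are collinear), and "impossible"
    otherwise.
    """
    a, b, c = sorted((a, b, c))   # now c is the longest side
    if c < a + b:
        return "triangle"
    if c == a + b:
        return "degenerate"
    return "impossible"

print(triangle_kind(3, 4, 5))   # triangle
print(triangle_kind(1, 2, 3))   # degenerate
print(triangle_kind(1, 1, 3))   # impossible
```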
Two triangles are said to be similar if and only if each internal angle of one triangle is equal to an internal angle of the other. It is not required to specify that the equal angles be
corresponding angles, since any triangle is by definition similar to its own "mirror image". In this case, all sides of one triangle are in equal proportion to sides of the other triangle.
A few basic postulates and theorems about similar triangles:
• If two corresponding internal angles of two triangles are equal, the triangles are similar.
• If two corresponding sides of two triangles are in proportion, and their included angles are equal, then the triangles are similar. (The included angle for any two sides of a polygon is the
internal angle between those two sides.)
• If three corresponding sides of two triangles are in proportion, then the triangles are similar.
Two triangles that are congruent have exactly the same size and shape:^[1] all corresponding internal angles are equal in size, and all corresponding sides are equal in length. (This is a total of
six equalities, but three are often sufficient to prove congruence.)
Some sufficient conditions for a pair of triangles to be congruent (from basic postulates and theorems of Euclid):
• SAS Postulate: Two sides and the included angle in a triangle are equal to two sides and the included angle in the other triangle.
• ASA Postulate: Two internal angles and the included side in a triangle are equal to those in the other triangle. (The included side for a pair of angles is the side between them.)
• SSS Postulate: Each side of a triangle is equal in length to a side of the other triangle.
• AAS Theorem: Two angles and a corresponding non-included side in a triangle are equal to those in the other triangle.
• Hypotenuse-Leg (HL) Theorem: The hypotenuse and a leg in a right triangle are equal to those in the other right triangle.
• Hypotenuse-Angle Theorem: The hypotenuse and an acute angle in one right triangle are equal to those in the other right triangle.
An important case:
• Side-Side-Angle (or Angle-Side-Side) condition: If two sides and a corresponding non-included angle of a triangle are equal to those in another, then this is not sufficient to prove congruence;
but if the non-included angle is obtuse or a right angle, or the side opposite it is the longest side, or the triangles have corresponding right angles, then the triangles are congruent. The
Side-Side-Angle condition does not by itself guarantee that the triangles are congruent because one triangle could be obtuse-angled and the other acute-angled.
Using right triangles and the concept of similarity, the trigonometric functions sine and cosine can be defined. These are functions of an angle which are investigated in trigonometry.
A central theorem is the Pythagorean theorem, which states in any right triangle, the square of the length of the hypotenuse equals the sum of the squares of the lengths of the two other sides. If
the hypotenuse has length c, and the legs have lengths a and b, then the theorem states that
$a^2 + b^2 = c^2.\,\!$
The converse is true: if the lengths of the sides of a triangle satisfy the above equation, then the triangle is a right triangle.
Some other facts about right triangles:
• The two acute angles are complementary: $a + b + 90^{\circ} = 180^{\circ} \Rightarrow a + b = 90^{\circ} \Rightarrow a = 90^{\circ} - b$
• If the legs of a right triangle are equal, then the angles opposite the legs are equal, acute, and complementary; each is therefore 45 degrees. By the Pythagorean theorem, the length of the
hypotenuse is the length of a leg times √2.
•   In a right triangle with acute angles measuring 30 and 60 degrees, the hypotenuse is twice the length of the shorter leg, and the longer leg is √3 times the shorter leg.
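These side ratios for the special right triangles can be checked numerically (a quick sanity check, not part of the original text):

```python
import math

# 45-45-90: with legs of 1, the hypotenuse is sqrt(2).
hyp_isosceles = math.hypot(1, 1)

# 30-60-90: shorter leg s, longer leg s * sqrt(3), hypotenuse 2s.
s = 1.0
hyp_30_60 = math.hypot(s, s * math.sqrt(3))

print(round(hyp_isosceles, 6))   # 1.414214
print(round(hyp_30_60, 6))       # 2.0
```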
For all triangles, angles and sides are related by the law of cosines and law of sines.
Points, lines and circles associated with a triangle
There are hundreds of different constructions that find a special point associated with (and often inside) a triangle, satisfying some unique property: see the references section for a catalogue of
them. Often they are constructed by finding three lines associated in a symmetrical way with the three sides (or vertices) and then proving that the three lines meet in a single point: an important
tool for proving the existence of these is Ceva's theorem, which gives a criterion for determining when three such lines are concurrent. Similarly, lines associated with a triangle are often
constructed by proving that three symmetrically constructed points are collinear: here Menelaus' theorem gives a useful general criterion. In this section just a few of the most commonly-encountered
constructions are explained.
A perpendicular bisector of a triangle is a straight line passing through the midpoint of a side and being perpendicular to it, i.e. forming a right angle with it. The three perpendicular bisectors
meet in a single point, the triangle's circumcenter; this point is the center of the circumcircle, the circle passing through all three vertices. The diameter of this circle can be found from the law
of sines stated above.
Thales' theorem implies that if the circumcenter is located on one side of the triangle, then the opposite angle is a right one. More is true: if the circumcenter is located inside the triangle, then
the triangle is acute; if the circumcenter is located outside the triangle, then the triangle is obtuse.
An altitude of a triangle is a straight line through a vertex and perpendicular to (i.e. forming a right angle with) the opposite side. This opposite side is called the base of the altitude, and the
point where the altitude intersects the base (or its extension) is called the foot of the altitude. The length of the altitude is the distance between the base and the vertex. The three altitudes
intersect in a single point, called the orthocenter of the triangle. The orthocenter lies inside the triangle if and only if the triangle is acute. The three vertices together with the orthocenter
are said to form an orthocentric system.
An angle bisector of a triangle is a straight line through a vertex which cuts the corresponding angle in half. The three angle bisectors intersect in a single point, the incenter, the center of the
triangle's incircle. The incircle is the circle which lies inside the triangle and touches all three sides. There are three other important circles, the excircles; they lie outside the triangle and
touch one side as well as the extensions of the other two. The centers of the in- and excircles form an orthocentric system.
The intersection of the medians is the centroid.
A median of a triangle is a straight line through a vertex and the midpoint of the opposite side, and divides the triangle into two equal areas. The three medians intersect in a single point, the
triangle's centroid. The centroid of a stiff triangular object (cut out of a thin sheet of uniform density) is also its center of gravity: the object can be balanced on its centroid. The centroid
cuts every median in the ratio 2:1, i.e. the distance between a vertex and the centroid is twice the distance between the centroid and the midpoint of the opposite side.
Nine-point circle demonstrates a symmetry where six points lie on the edge of the triangle.
The midpoints of the three sides and the feet of the three altitudes all lie on a single circle, the triangle's nine-point circle. The remaining three points for which it is named are the midpoints
of the portion of altitude between the vertices and the orthocenter. The radius of the nine-point circle is half that of the circumcircle. It touches the incircle (at the Feuerbach point) and the
three excircles.
Euler's line is a straight line through the centroid (orange), orthocenter (blue), circumcenter (green) and center of the nine-point circle (red).
The centroid (yellow), orthocenter (blue), circumcenter (green) and barycenter of the nine-point circle (red point) all lie on a single line, known as Euler's line (red line). The center of the
nine-point circle lies at the midpoint between the orthocenter and the circumcenter, and the distance between the centroid and the circumcenter is half that between the centroid and the orthocenter.
The center of the incircle is not in general located on Euler's line.
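Euler's line can be verified numerically for a concrete triangle. The circumcenter formula below is the standard perpendicular-bisector determinant; the function names are illustrative:

```python
def circumcenter(A, B, C):
    """Center of the circle through A, B, C, from the standard
    perpendicular-bisector determinant formula."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def orthocenter(A, B, C):
    """Intersection of the altitudes from A and B (Cramer's rule)."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    p, q = cx - bx, cy - by    # altitude from A is perpendicular to BC
    r, s = cx - ax, cy - ay    # altitude from B is perpendicular to CA
    b1, b2 = p * ax + q * ay, r * bx + s * by
    det = p * s - q * r
    return (b1 * s - q * b2) / det, (p * b2 - b1 * r) / det

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)  # centroid
O = circumcenter(A, B, C)
H = orthocenter(A, B, C)

# Euler line: centroid, circumcenter and orthocenter are collinear.
cross = (H[0] - O[0]) * (G[1] - O[1]) - (H[1] - O[1]) * (G[0] - O[0])
print(O, H, abs(cross) < 1e-9)   # (2.0, 1.0) (1.0, 1.0) True
```

For this triangle one can also check that the distance from the centroid to the orthocenter is twice its distance to the circumcenter, as stated above.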
If one reflects a median at the angle bisector that passes through the same vertex, one obtains a symmedian. The three symmedians intersect in a single point, the symmedian point of the triangle.
Computing the area of a triangle
Calculating the area of a triangle is an elementary problem encountered often in many different situations. The best known and simplest formula is:
$A = \frac{1}{2}bh$
where $A$ is area, $b$ is the length of the base of the triangle, and $h$ is the height or altitude of the triangle. The term 'base' denotes any side, and 'height' denotes the length of a
perpendicular from the point opposite the side onto the side itself.
A triangle has half the area of a parallelogram with the same base and height:
$A= \frac{A_P}{2}$
where $A_P$ is the area of the parallelogram.
Although simple, this formula is only useful if the height can be readily found. For example, the surveyor of a triangular field measures the length of each side, and can find the area from his
results without having to construct a 'height'. Various methods may be used in practice, depending on what is known about the triangle. The following is a selection of frequently used formulae for
the area of a triangle.
For an isosceles triangle with two equal sides of length $a$ and base $b$, the height is $\sqrt{a^2 - b^2/4}$, giving:
$A= \sqrt{\frac{a^2b^2}{4}-\frac{b^4}{16}}$
Any $n$-sided polygon can be cut into $n - 2$ triangles, so the average area of those triangles is $\frac{A_n}{n-2}$; for a regular polygon with side length $s$, where $A_n= \frac{ns^2}{4\tan(\frac{180^\circ}{n})}$, this average is:
$\frac{A_n}{n - 2}= s^2\frac{n}{(4n-8)\tan(\frac{180^\circ}{n})}$
Using vectors
The area of a parallelogram can be calculated using vectors. Let vectors AB and AC point respectively from A to B and from A to C. The area of parallelogram ABDC is then $|{AB}\times{AC}|$, which is
the magnitude of the cross product of vectors AB and AC. $|{AB}\times{AC}|$ is equal to $|{h}\times{AC}|$, where h represents the altitude h as a vector.
The area of triangle ABC is half of this, or $S = \frac{1}{2}|{AB}\times{AC}|$.
The area of triangle ABC can also be expressed in terms of dot products as follows:
$\frac{1}{2} \sqrt{(\mathbf{AB} \cdot \mathbf{AB})(\mathbf{AC} \cdot \mathbf{AC}) -(\mathbf{AB} \cdot \mathbf{AC})^2} =\frac{1}{2} \sqrt{ |\mathbf{AB}|^2 |\mathbf{AC}|^2 -(\mathbf{AB} \cdot \
mathbf{AC})^2} \, .$
Using trigonometry
The height of a triangle can be found through an application of trigonometry. Using the labelling as in the image on the left, the altitude is h = a sin γ. Substituting this in the formula $S= \frac
{1}{2}bh$ derived above, the area of the triangle can be expressed as:
$S = \frac{1}{2}ab\sin \gamma = \frac{1}{2}bc\sin \alpha = \frac{1}{2}ca\sin \beta.$
Furthermore, since sin α = sin (π - α) = sin (β + γ), and similarly for the other two angles:
$S = \frac{1}{2}ab\sin (\alpha+\beta) = \frac{1}{2}bc\sin (\beta+\gamma) = \frac{1}{2}ca\sin (\gamma+\alpha).$
For a right triangle, $\gamma = 90^\circ$ and $\sin \gamma = 1$, so the formula reduces to:
$S = \frac{1}{2}ab.$
Using coordinates
If vertex A is located at the origin (0, 0) of a Cartesian coordinate system and the coordinates of the other two vertices are given by B = (x[B], y[B]) and C = (x[C], y[C]), then the area S can be
computed as ½ times the absolute value of the determinant
$S=\frac{1}{2}\left|\det\begin{pmatrix}x_B & x_C \\ y_B & y_C \end{pmatrix}\right| = \frac{1}{2}|x_B y_C - x_C y_B|.$
For three general vertices, the equation is:
$S=\frac{1}{2} \left| \det\begin{pmatrix}x_A & x_B & x_C \\ y_A & y_B & y_C \\ 1 & 1 & 1\end{pmatrix} \right| = \frac{1}{2} \big| x_A y_C - x_A y_B + x_B y_A - x_B y_C + x_C y_B - x_C y_A \big|$
$S= \frac{1}{2} \big| (x_C - x_A) (y_B - y_A) - (x_B - x_A) (y_C - y_A) \big|.$
In 3 dimensions, the area of a general triangle {A = (x[A], y[A], z[A]), B = (x[B], y[B], z[B]) and C = (x[C], y[C], z[C])} is the Pythagorean sum of the areas of the respective projections on the
three principal planes (i.e. x = 0, y = 0 and z = 0):
$S=\frac{1}{2} \sqrt{ \left( \det\begin{pmatrix} x_A & x_B & x_C \\ y_A & y_B & y_C \\ 1 & 1 & 1 \end{pmatrix} \right)^2 + \left( \det\begin{pmatrix} y_A & y_B & y_C \\ z_A & z_B & z_C \\ 1 & 1 &
1 \end{pmatrix} \right)^2 + \left( \det\begin{pmatrix} z_A & z_B & z_C \\ x_A & x_B & x_C \\ 1 & 1 & 1 \end{pmatrix} \right)^2 }.$
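The two-dimensional determinant formula can be sketched directly in code (illustrative function name):

```python
def triangle_area(A, B, C):
    """Area from vertex coordinates: half the absolute value of the
    2x2 determinant obtained after translating vertex A to the origin
    (the 'shoelace' computation)."""
    (xa, ya), (xb, yb), (xc, yc) = A, B, C
    return abs((xc - xa) * (yb - ya) - (xb - xa) * (yc - ya)) / 2

print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0
print(triangle_area((1, 1), (5, 1), (1, 4)))   # 6.0 (translation-invariant)
```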
Using Heron's formula
The shape of the triangle is determined by the lengths of the sides alone. Therefore the area S also can be derived from the lengths of the sides. By Heron's formula:
$S = \sqrt{s(s-a)(s-b)(s-c)}$
$S= \sqrt{\frac{P(P-2a)(P-2b)(P-2c)}{16}}$
where $s= \frac{1}{2}(a + b + c)$ is the semiperimeter, or half of the triangle's perimeter.
3 equivalent ways of writing Heron's formula are
$S = \sqrt{\frac{a^2b^2+c^2(a^2+b^2)}{8}-\frac{a^4+b^4+c^4}{16}}$
$S = \sqrt{\frac{(a^2+b^2+c^2)^2}{16}-\frac{a^4+b^4+c^4}{8}}= \sqrt{\frac{a^4+b^4+c^4}{16} + \frac{(a^2b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4)}{8}}$
$S = \sqrt{\frac{(a+b-c) (a-b+c) (b-a+c) (a+b+c)}{16}}.$
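Heron's formula translates directly into code (illustrative function name); it needs only the three side lengths, as the surveyor example above suggests:

```python
import math

def heron_area(a, b, c):
    """Heron's formula: S = sqrt(s(s-a)(s-b)(s-c)), with s the
    semiperimeter (a + b + c) / 2."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))   # 6.0, agreeing with (3 * 4) / 2 for this
                             # right triangle
print(heron_area(2, 2, 2))   # ~1.732, i.e. sqrt(3)
```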
Computing the sides and angles
In general, there are various accepted methods of calculating the length of a side or the size of an angle. Whilst certain methods may be suited to calculating values of a right-angled triangle,
others may be required in more complex situations.
Trigonometric ratios in right triangles
A right triangle always includes a 90° (π/2 radians) angle, here labeled C. Angles A and B may vary. Trigonometric functions specify the relationships among side lengths and interior angles of a
right triangle.
In right triangles, the trigonometric ratios of sine, cosine and tangent can be used to find unknown angles and the lengths of unknown sides. The sides of the triangle are known as follows:
• The hypotenuse is the side opposite the right angle, or defined as the longest side of a right-angled triangle, in this case h.
• The opposite side is the side opposite to the angle we are interested in, in this case a.
• The adjacent side is the side that is in contact with the angle we are interested in and the right angle, hence its name. In this case the adjacent side is b.
Sine, cosine and tangent
The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. In our case
$\sin A = \frac {\textrm{opposite}} {\textrm{hypotenuse}} = \frac {a} {h}\,.$
Note that this ratio does not depend on the particular right triangle chosen, as long as it contains the angle A, since all those triangles are similar.
The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse. In our case
$\cos A = \frac {\textrm{adjacent}} {\textrm{hypotenuse}} = \frac {b} {h}\,.$
The tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side. In our case
$\tan A = \frac {\textrm{opposite}} {\textrm{adjacent}} = \frac {a} {b}\,.$
The acronym "SOHCAHTOA" is a useful mnemonic for these ratios.
There are many useful mnemonics that go with SOHCAHTOA, like "Some Old Hippie Caught A High Tripping On Acid"
Another useful mnemonic is "Some People Have Curly Brown Hair Turned Permanently Brown"
Here: S- sin , C- cos , T- tan , P- Perpendicular ( corresponds to Opposite ) , B- Base ( corresponds to Adjacent ) , H- Hypotenuse.
E.g.: "Some People Have" gives sin A = P/H
Inverse functions
The inverse trigonometric functions can be used to calculate the internal angles for a right angled triangle with the length of any two sides.
Arcsin can be used to calculate an angle from the length of the opposite side and the length of the hypotenuse.
$\theta = \arcsin \left( \frac{\text{opposite}}{\text{hypotenuse}} \right)$
Arccos can be used to calculate an angle from the length of the adjacent side and the length of the hypontenuse.
$\theta = \arccos \left( \frac{\text{adjacent}}{\text{hypotenuse}} \right)$
Arctan can be used to calculate an angle from the length of the opposite side and the length of the adjacent side.
$\theta = \arctan \left( \frac{\text{opposite}}{\text{adjacent}} \right)$
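A small sketch using arctan to recover the acute angles of a right triangle from its two legs (illustrative names; angles returned in degrees):

```python
import math

def right_triangle_angles(opposite, adjacent):
    """Recover both acute angles of a right triangle from its legs:
    one via arctan, the other as its complement."""
    theta = math.degrees(math.atan(opposite / adjacent))
    return theta, 90 - theta

print(right_triangle_angles(1, 1))   # both angles are 45 degrees
print(right_triangle_angles(3, 4))   # roughly (36.87, 53.13)
```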
The sine and cosine rules
The law of sines, or sine rule^[2], states that the ratio of the length of side $a$ to the sine of its corresponding angle $\alpha$ is equal to the ratio of the length of side $b$ to the sine of its
corresponding angle $\beta$.
$\frac{a}{\sin \alpha} = \frac{b}{\sin \beta} = \frac{c}{\sin \gamma}$
The law of cosines, or cosine rule, connects the length of an unknown side of a triangle to the length of the other sides and the angle opposite to the unknown side. As per the law:
For a triangle with length of sides $a$, $b$, $c$ and angles of $\alpha$, $\beta$, $\gamma$ respectively, given two known lengths of a triangle $a$ and $b$, and the angle between the two known sides
$\gamma$ (or the angle opposite to the unknown side $c$), to calculate the third side $c$, the following formula can be used:
$c^2\ = a^2 + b^2 - 2ab\cos(\gamma) \implies b^2\ = a^2 + c^2 - 2ac\cos(\beta) \implies a^2\ = b^2 + c^2 - 2bc\cos(\alpha).$
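Solving for the unknown third side with the law of cosines can be sketched as follows (the angle is supplied in degrees for convenience):

```python
import math

def third_side(a, b, gamma_deg):
    """Law of cosines: c = sqrt(a^2 + b^2 - 2ab cos(gamma)), where
    gamma is the angle between the two known sides, in degrees."""
    g = math.radians(gamma_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(g))

print(third_side(3, 4, 90))   # ~5.0: with a right angle this is
                              # just the Pythagorean theorem
print(third_side(1, 1, 60))   # ~1.0: two unit sides meeting at 60
                              # degrees close up an equilateral triangle
```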
Solved for the angles, the law of cosines gives:
$\cos\alpha = \frac{b^2+c^2-a^2}{2bc},\qquad \cos\beta = \frac{a^2+c^2-b^2}{2ac},\qquad \cos\gamma = \frac{a^2+b^2-c^2}{2ab},$
and, since $(4S)^2 = (a^2+b^2+c^2)^2-2(a^4+b^4+c^4)$ by Heron's formula, the corresponding sines are
$\sin\alpha = \frac{\sqrt{(a^2+b^2+c^2)^2-2(a^4+b^4+c^4)}}{2bc},$
and similarly for $\beta$ (denominator $2ac$) and $\gamma$ (denominator $2ab$).
Trigonometric functions of half angles in a triangle
With $P = a+b+c$ the perimeter of the triangle:
$\cos\frac{\alpha}{2}=\sqrt{\frac{P^2}{4bc}-\frac{aP}{2bc}},\quad \cos\frac{\beta}{2}=\sqrt{\frac{P^2}{4ac}-\frac{bP}{2ac}},\quad \cos\frac{\gamma}{2}=\sqrt{\frac{P^2}{4ab}-\frac{cP}{2ab}}$
$\sin\frac{\alpha}{2}=\sqrt{\frac{(P-2c)(P-2b)}{4bc}},\quad \sin\frac{\beta}{2}=\sqrt{\frac{(P-2c)(P-2a)}{4ac}},\quad \sin\frac{\gamma}{2}=\sqrt{\frac{(P-2b)(P-2a)}{4ab}}$
$\tan\frac{\alpha}{2}= \sqrt{\frac{4(P-2c)(P-2b)}{4P^2-8Pa}},\qquad \csc\frac{\alpha}{2}= \sqrt{\frac{4bc}{(P-2c)(P-2b)}},\qquad \cot\frac{\alpha}{2}= \sqrt{\frac{4P^2-8Pa}{4(P-2c)(P-2b)}}$
Non-planar triangles
A non-planar triangle is a triangle which is not contained in a (flat) plane. Examples of non-planar triangles in noneuclidean geometries are spherical triangles in spherical geometry and hyperbolic
triangles in hyperbolic geometry.
While all regular, planar (two dimensional) triangles contain angles that add up to 180°, there are cases in which the angles of a triangle can be greater than or less than 180°. In curved figures, a
triangle on a negatively curved figure ("saddle") will have its angles add up to less than 180° while a triangle on a positively curved figure ("sphere") will have its angles add up to more than
180°. Thus, if one were to draw a giant triangle on the surface of the Earth, one would find that the sum of its angles were greater than 180°. In this circumstance, one can even make each of the
triangle in question's angles measure 90°, adding up to a total of 270°.
1. ↑ All pairs of congruent triangles are also similar; but not all pairs of similar triangles are congruent.
Ma2 Number
This is a collection of work. Click through the chapters to see the full collection or download the attached standards file.
Teacher's notes
• Given half of a number, finds 100%, 10%, 5%, 50% and 25%.
• Recognises the equivalence of
Next steps
Find other percentages of the given numbers, such as 60% of 80 (by adding his 50% and 10%); 20% of 160.
Long multiplication
Teacher's notes
• When modelled in class, he uses the grid method to multiply a two-digit number by a two-digit number.
• Understands place value when multiplying multiples of 10, for example 20 × 30 = 600.
Next steps
• Investigate other ways to partition 21 and 32, such as (20 and 1) × (20 and 12).
• Use grid method rather than repeated addition when solving problems.
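The grid method described above can be sketched for two-digit factors (the helper name is illustrative): each factor is partitioned into tens and units, every pair of parts is multiplied, and the cells are totalled.

```python
def grid_method(x, y):
    """Long multiplication by the grid method: partition each factor
    into tens and units, multiply every pair of parts, then total."""
    x_parts = [x - x % 10, x % 10]    # e.g. 21 -> [20, 1]
    y_parts = [y - y % 10, y % 10]    # e.g. 32 -> [30, 2]
    cells = [a * b for a in x_parts for b in y_parts]
    return cells, sum(cells)

cells, total = grid_method(21, 32)
print(cells)    # [600, 40, 30, 2]
print(total)    # 672
```

The first cell, 600, is the 20 × 30 multiple-of-10 product mentioned in the teacher's notes.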
Teacher's notes
When modelled in class, he uses a standard written method to subtract a two-digit number from a three-digit number, bridging the hundreds and the tens.
Next steps
• Use an efficient written method when solving problems.
• Check calculations by using an inverse method.
Word problems
Teacher's notes
• Chooses the correct operations to solve problems.
• Multiplies a multiple of 10 by a single digit mentally.
• Uses informal methods to subtract and divide.
Next steps
Solve a wider range of multi-step problems, including those involving measures.
What the teacher knows about Peter's attainment in Ma2
Peter reads numbers with up to five digits and understands the place value of decimals to two places. He multiplies and divides whole numbers by 10 or 100. He rounds two-digit and three-digit numbers
to the nearest 10 and three-digit numbers to the nearest 100, using this to make approximations when calculating. In the context of temperature, Peter reads and explains negative numbers. He
recognises approximate proportions of a whole and uses simple fractions and percentages to describe these, for example
Peter has a quick recall of multiplication facts up to 10 × 10 and uses his knowledge of inverses to derive the associated division facts. He extends this to larger numbers: for example, given 35 ×
76 = 2660, he knows 2660 ÷ 76 = 35. He knows whether to round up or down after division in the context of a problem, for example 'There are 53 pages in each chapter and I am on page 127. Which
chapter am I reading?' He adds and subtracts two-digit numbers mentally; calculates complements to 1000, for example 1000 - 240; doubles and halves any two-digit number; and uses his tables knowledge
and place value to multiply or divide multiples of 10 by a single digit, such as 30 × 7, 180 ÷ 3. Peter knows efficient methods to add and subtract three-digit numbers, including decimals, in the
context of money, although he will often revert to an informal method for subtraction when presented with a word problem: for example, he prefers to use a number line. He multiplies and divides
three-digit numbers (including decimals in the context of money) by a single digit, using informal methods, although he sometimes lacks security. He is beginning to use a grid method to multiply
two-digit numbers together. Peter interprets a calculator display of 4.5 as £4.50.
Peter solves one-step and two-step number problems choosing and using the appropriate operations and knows how to deal with remainders when they occur: for example, given the information that a story
starts on page 1 and is 630 pages long, and that Susie is on page 126, Peter works out how many more pages she needs to read to reach the middle of the book. When told that there are 10 equal
chapters in the book, he is able to work out which chapter Susie is currently reading. He carries out simple calculations involving negative numbers in context: for example, he knows 18° Celsius is
22° Celsius above -4° Celsius. He uses and interprets coordinates in the first quadrant.
Summarising Peter's attainment in Ma2
In Ma2 Peter is best described as working at the lower end of level 4. He has a good understanding of place value in whole numbers and decimals to two places, and he multiplies and divides whole
numbers by 10 or 100. In solving number problems, he uses a range of mental methods of computation with the four operations, including quick recall of multiplication and related division facts to 10
× 10. Although he knows efficient methods for three-digit addition and subtraction, including decimals in the context of money, he prefers to use informal methods for subtraction. He multiplies and
divides three-digit numbers by a single digit using informal methods. He recognises approximate proportions of a whole and uses simple fractions and percentages to describe these, for example 25%,
|
{"url":"http://webarchive.nationalarchives.gov.uk/20110809091832/http:/www.teachingandlearningresources.org.uk/node/22458","timestamp":"2014-04-21T13:01:17Z","content_type":null,"content_length":"65414","record_id":"<urn:uuid:256b390c-437d-48d8-ae27-1bbac9f8c15b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Montara Calculus Tutor
Find a Montara Calculus Tutor
...I am well regarded as an excellent instructor and am able to deal with students with a wide range of abilities in math, finance and economics. I worked a number of years as a data analyst and
computer programmer and am well versed in communicating with people who have a variety of mathematical a...
49 Subjects: including calculus, physics, geometry, statistics
...I have been a professional tutor since 2003, specializing in math (pre-algebra through AP calculus), AP statistics, and standardized test preparation. I am very effective in helping students to
not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is positive, supportive, and also fun.
14 Subjects: including calculus, statistics, geometry, algebra 2
When I retired from the United States Air Force I swore I would never get up early again. But I still wanted to do something to continue making the world a better place. So I turned to something I
had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING!
10 Subjects: including calculus, geometry, precalculus, algebra 1
...I have been tutoring for the past eleven years in Physics, Math, and Chemistry. I started tutoring when I was an undergrad in Electrical Engineering at UC Berkeley. At first, I started helping
my friends with their classes in math, physics, and chemistry.
11 Subjects: including calculus, chemistry, physics, geometry
...I am an actuary and work with probabilities on a daily basis. The topic may seem hard but in many ways it is just using your common sense in a new and different manner. I have a background as
an actuary and MBA training.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/montara_calculus_tutors.php","timestamp":"2014-04-19T02:38:38Z","content_type":null,"content_length":"23895","record_id":"<urn:uuid:d4235a1d-b455-4854-8e54-07997b17553a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bargaineering » Calculating Post-Tax 401(K) Contribution Cost - Bargaineering » Print
A reader recently sent in a question on how much it really costs you to contribute to your 401(k). I’ve always advocated that you contribute to your 401(k), regardless of whether your employer offers
a match, and I will continue to advocate doing so until something drastically changes in retirement planning. Now, the reader was actually in a discussion with someone else about how your
contribution to your 401(k) was cheaper than it’s actual dollar cost to you, at least initially, because of the fact that it’s pre-tax. So, let’s cover the basics and give our friend some ammunition
to go back to the debate stand.
First off, it’s cheaper initially because you don’t pay tax on the funds yet. So, if you’re in the 25% tax bracket, when you contribute $100 it’s really only $75 out of pocket for you. On that basis
alone, I think most would accept the argument that your 401(k) contribution isn’t as expensive as one may expect looking at nominal dollar amounts. However, let’s look at it from a different
perspective, from the cost of the tax being paid (either today on non-contributions or tomorrow on appreciated 401(k) assets).
However, let’s actually compare the cost today ($25) versus the cost of the taxation in the future, given a few assumptions. First, let’s assume your money earns a reasonable 8% and inflation is a
healthy 4%. Let’s also assume that your tax rate remains 25%, which is probably the most risky of the assumptions. Given the growth rate of 8%, your $100 in 40 years will be worth $2,172.45. If you
tax that at 25%, that’s a tax of $543.11! $543.11 is much more than $25 right? Well, that’s $543.11 in 2047 dollars, which is only $113.12 in 2007 dollars. But isn’t $113.12 over four times more than
$25? Yes, but that’s $113.12 you don’t pay today, you pay that in 40 years… the time value of money makes $113.12 in 40 years worth only $23.56 today (given the same 4% rate used for inflation). So
you get more money and you pay less tax in the future, not bad!
I bet you didn’t think calculating post-tax 401(k) contribution costs could be that involved huh?
(Someone please check my math, I don’t make a habit of calculating 401(k) numbers so I could’ve gotten something wrong)
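Taking up that invitation: a quick sketch checking the figures under the article's stated assumptions (8% growth, 4% inflation, 25% tax, 40 years). The second discount for time value of money follows the article's own argument.

```python
growth, inflation, tax_rate, years = 0.08, 0.04, 0.25, 40

future_value = 100 * (1 + growth) ** years          # 2007's $100, in 2047
future_tax = tax_rate * future_value                # tax paid in 2047
tax_in_2007_dollars = future_tax / (1 + inflation) ** years
# discounting once more for the time value of money, as the article does:
present_cost = tax_in_2007_dollars / (1 + inflation) ** years

print(round(future_value, 2))         # 2172.45
print(round(future_tax, 2))           # 543.11
print(round(tax_in_2007_dollars, 2))  # 113.12
print(round(present_cost, 2))         # 23.56
```

The arithmetic in the post checks out under its own assumptions.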
|
{"url":"http://www.bargaineering.com/articles/calculating-post-tax-401k-contribution-cost.html/print/","timestamp":"2014-04-17T15:30:06Z","content_type":null,"content_length":"6154","record_id":"<urn:uuid:3e07fcc3-2508-4d64-bec3-ddec400a69a9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Language Specific
Welcome to Project Eureka! Project Eureka is a collaboratively edited site for problem solvers/creators - regardless of problem or field. We are a bunch of math enthusiasts who decided to create a
website for submitting and solving problems. However, project Eureka is not limited to math problems; any problem, puzzle, or trivia question can be submitted to project Eureka.
Based on the original perl golf, Code Golf allows you to show off your code-fu by trying to solve coding problems using the least number of keystrokes. You're not just limited to Perl either - PHP,
Python and Ruby are all available too. Challenges are always open, and your entries are automatically scored so you can start playing right away!
+Ma's Reversing
2013-12-14 23:07:47 — "Are you a novice in programming?" by Łukasz Kuszner. Please consider Simple Programming Problems. Comments and suggestions are very welcome here.
InterviewStreet We built HackerRank to focus on you, the programmer. Here you will find a much larger set of challenges, including Artificial Intelligence and Machine Learning, in addition to classic
ACM-style problems. We have also migrated the existing Interviewstreet challenges over to HackerRank.
What is Project Euler? Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will
help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems. The motivation for starting Project Euler, and its
continuation, is to provide a platform for the inquiring mind to delve into unfamiliar areas and learn new concepts in a fun and recreational context. Who are the problems aimed at? The intended
audience include students for whom the basic curriculum is not feeding their hunger to learn, adults whose background was not primarily mathematics but had an interest in things mathematical, and
professionals who want to keep their problem solving and mathematics on the edge.
In 1988, Stephen K. Park and Keith W. Miller, reacting to the plethora of unsatisfactory random number generators then available, published a linear congruential random number generator that they
claimed should be the “minimum standard” for an acceptable random number generator. Twenty-five years later, the situation is only somewhat better, and embarrassingly bad random number generators are
still more common than they ought to be. Today’s exercise implements the original minimum standard random number generator in several different forms.
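The generator itself is only a few lines; here is a direct sketch in Python (the plain form, not the Schrage factorization Park and Miller recommend for overflow-free 32-bit arithmetic):

```python
M = 2**31 - 1   # 2147483647, a Mersenne prime
A = 16807       # 7**5, the "minimal standard" multiplier

def minstd(seed):
    """Park-Miller 'minimal standard' linear congruential generator."""
    while True:
        seed = (A * seed) % M
        yield seed

# Published self-check from the 1988 paper: starting from a seed of 1,
# the 10,000th value produced should be 1043618065.
gen = minstd(1)
for _ in range(10000):
    x = next(gen)
print(x)  # 1043618065
```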
|
{"url":"http://www.pearltrees.com/09mikesbrain/challenges/id5792872","timestamp":"2014-04-20T21:05:58Z","content_type":null,"content_length":"21608","record_id":"<urn:uuid:45fb6d77-9353-4952-99e1-0d4b7592279d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Type this with your eyes closed
Re: Type this with your eyes closed
gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme gimme
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Type this with your eyes closed
Type your name
Re: Type this with your eyes closed
Type your favorite book. The whole book.
Re: Type this with your eyes closed
Allan and Ted (my own book!)
Re: Type this with your eyes closed
I have read it. It is a little short but wow, what an ending!
Re: Type this with your eyes closed
hello bobbym
Re: Type this with your eyes closed
I read it right here in post #79.
Re: Type this with your eyes closed
Wheere do you live?
Re: Type this with your eyes closed
In Las Vegas, Nevada. A hot and cold desert.
The next is in the tropics.
Re: Type this with your eyes closed
New rule: If I misspell it, just type it, e.g. Wheere do you live, just answer "Wheere do you live".
What is your favrite movie?
Re: Type this with your eyes closed
Wheere do you live?
Re: Type this with your eyes closed
Not like that. Whatever I typed in bold, you type it too.
Le's give irt another try.
Re: Type this with your eyes closed
I get it now.
Le's give irt another try.
Re: Type this with your eyes closed
What time ius it:
Re: Type this with your eyes closed
What time ius it:
Re: Type this with your eyes closed
True or fakse?
Re: Type this with your eyes closed
True od fakse?
Re: Type this with your eyes closed
A misspelling of false.
What is your facorite food?
Re: Type this with your eyes closed
What is your facorite food?
Whoo is the worst speler?
Re: Type this with your eyes closed
Whoo is the worst speler?
Re: Type this with your eyes closed
Whoo is the worst speler?
Whan is it dinnertime?
Re: Type this with your eyes closed
Whan is ir dinnertime?
What is your name?
Re: Type this with your eyes closed
Mak sum mistakes.
Power Member
Re: Type this with your eyes closed
make sim mustakes
pretty good
friendship is tan 90°.
Re: Type this with your eyes closed
prdttu good
Incarceration is not fun.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=295659","timestamp":"2014-04-19T02:16:56Z","content_type":null,"content_length":"34898","record_id":"<urn:uuid:51eade4c-3560-43f2-84c9-f3e258899086>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HPS 0410 Einstein for Everyone
Back to main course page
What is a four dimensional space like?
John D. Norton
Department of History and Philosophy of Science
University of Pittsburgh
We have already seen that there is nothing terribly mysterious about adding one dimension to space to form a spacetime. Nonetheless it is hard to resist a lingering uneasiness about the idea of a
four dimensional spacetime. The problem is not the time part of a four dimensional spacetime; it is the four. One can readily imagine the three axes of a three dimensional space: up-down, across and
back to front. But where are we to put the fourth axis to make a four dimensional space?
My present purpose is to show you that there is nothing at all mysterious in the four dimensions of a spacetime. To do this, I will drop the time part completely. I will just consider a four
dimensional space; that is, a space just like our three dimensional space, but with one extra dimension. What would it be like?
With no effort whatever, I can visualize a three dimensional space--and you can too. What would it be like to live in a three dimensional cube? To be asked to visualize that is like being asked to
breathe or blink. It is effortless. There we sit in the cube with its six square walls and eight corners. Our mind's eye lets us hover about inside.
Can I visualize what it would be like to live in the four dimensional analog of a cube, a four dimensional cube or "tesseract"? I cannot visualize this with the same effortless immediacy. I doubt
that you can as well. But that is just about the only thing we cannot do. Otherwise we can determine all the properties of a tesseract and just what it would be like to live in one. There are many
techniques for doing this. I will show you one below. It involves progressing through the sequence of dimensions, extrapolating the natural inferences at each step up to the fourth dimension. Once
you have seen how this is done for the special case of a tesseract, you will have no trouble applying it to other cases.
The door to the fourth dimension is opening.
The one dimensional interval
The one dimensional analog of a cube is an interval. It is formed by taking a dimensionless point and dragging it through a distance. That distance could be 2 inches or 3 feet or anything. Let us
call the distance "L".
The interval has length L. It is bounded by 2 points as its faces--the two points at either end of the interval.
The two dimensional square
The two dimensional analog of a cube is a square. It is formed by dragging the one dimensional interval through a distance L in the second dimension.
The square has area L^2. It is bounded by faces on 4 sides. The faces are intervals of length L. We know there are four of them since its two dimensional axes must be capped on either end by faces.
So we have 2 dimensions x 2 faces each = 4 faces. The faces together form a perimeter of 4xL in length.
The three dimensional cube
To form a cube, we take the square and drag it a distance L in the third dimension.
The cube has volume L^3. It is bounded by faces on 6 sides. The faces are squares of area L^2. We know there are 6 of them since its three dimensional axes must be capped on either end by faces.
So we have 3 dimensions x 2 faces each = 6 faces. The faces together form a surface of 6xL^2 in area. Drawing a picture of a three dimensional cube on a two dimensional surface is equally easy. We
take two of its faces--two squares--and connect the corners.
There are several ways of doing the drawing that corresponds to looking at the cube from different angles. The figure shows two ways of doing it. The first gives an oblique view; the second looks
along one of the axes.
The four dimensional cube: the tesseract
So far I hope you have found our constructions entirely unchallenging. The next step into four dimensions can be done equally mechanically. We just systematically repeat every step above. The only
difference is that this time we cannot readily form a mental picture of what we are building. But we can know all its properties!
To form a tesseract, we take the cube and drag it a distance L in the fourth dimension. We cannot visualize exactly what that looks like, but it is something like this:
The tesseract has volume L^4. It is bounded by faces on 8 sides. The faces are cubes of volume L^3. We know there are 8 of them since its four dimensional axes must be capped on either end by
faces--two cubical faces per axis. Once again, we cannot visualize all four of these capped dimensions. We can at best visualize three directions perpendicular to each other. We then somehow add in
the fourth (in red):
So we have 4 dimensions x 2 faces each = 8 faces. The faces together form a "surface" (really a three dimensional volume) of 8xL^3 in volume. Drawing a picture of a four dimensional tesseract in a
three dimensional space is straightforward. We take two of its faces--two cubes--and connect the corners.
There are several ways of doing the drawing that corresponds to looking at the tesseract from different angles. The figure shows two ways of doing it. The first gives an oblique view; the second
looks along one of the axes.
So now we seem to know everything there is to know about the tesseract! We know its volume in four dimensional space, how it is put together out of eight cubes as surfaces and even what the volume of
its surface is (8xL^3).
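These counts can also be checked by brute force: take the 16 vertices {0, L}^4 of a tesseract and count its edges, square faces, and cubical cells directly. A small sketch (with L = 1):

```python
from itertools import combinations, product
from math import comb

n = 4
vertices = list(product((0, 1), repeat=n))     # the 16 corners

# an edge joins two vertices that differ in exactly one coordinate
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# a k-dimensional face fixes n-k coordinates (each at 0 or 1) and lets
# the remaining k vary: choose the free axes, then the fixed values
def k_faces(k):
    return comb(n, k) * 2 ** (n - k)

print(len(vertices), len(edges), k_faces(2), k_faces(3))  # 16 32 24 8
```

So a tesseract has 16 vertices, 32 edges, 24 square faces and, as derived above, 8 cubical cells.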
The "drawings" of the tesseract are hard to see clearly. That is because they are really supposed to be three dimensional models in a three dimensional space. So what we have above are two
dimensional drawings of three dimensional models of a four dimensional tesseract. No wonder it is getting messy!
The images below are stereo pairs. If you are familiar with how to view them, you will see that they give you a nice stereo view of the three dimensional model. If these are new to you, they take
practice to see. You need to relax your view until your left eye looks at the left image and the right eye looks at the right image.
But how can you learn to do this? I find it easiest to start if I sit far away from the screen and gaze out into the distance over the top of the screen. I see the two somewhat blurred images on the
edge of my field of vision. As long as I don't focus on them, they start to drift together. That is the motion you want. The more they drift together the better. I try to reinforce the drift as best
I can while carefully moving my view toward the images. The goal is to get the two images to merge. When they do, I keep staring at the merged images, the focus improves and the full three dimensional
stereo effect snaps in sharply. The effect is striking and worth a little effort.
This pair is easier to fuse:
and this one is a little harder:
Summary table
We can summarize the development of the properties of a tesseract as follows:
Dimension   Figure      Face       Volume   Number of faces   Volume of surface/perimeter
1           interval    point      L        1x2=2             two points
2           square      interval   L^2      2x2=4             4L
3           cube        square     L^3      3x2=6             6L^2
4           tesseract   cube       L^4      4x2=8             8L^3
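The pattern in the table continues into any dimension: an n-cube of side L has volume L^n, is bounded by 2n faces (each an (n-1)-cube), and its total boundary has (n-1)-volume 2n x L^(n-1). A one-line sketch reproducing the rows:

```python
def hypercube(n, L):
    """(volume, number of faces, boundary volume) of an n-cube of side L."""
    return L ** n, 2 * n, 2 * n * L ** (n - 1)

for n in (1, 2, 3, 4):
    print(n, hypercube(n, 2))
```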
A roomy challenge
If you were to live in a tesseract, you might choose to live in its three dimensional surface, much as a two dimensional person might choose live in the 6 square rooms that form the two dimensional
surface of a cube. So your house would be the eight cubes that form the surface of the tesseract. Imagine that there are doors where ever two of these cubes meet. If you are in one of these rooms,
how many doors would you see? What would the next room look like if you passed through one of the doors? How many doors must you pass through to get to the farthest room? How many paths lead to that
farthest room? Could you have any windows to outside the tesseract? What about windows to inside the tesseract?
Some of these questions are not easy. To answer them, go back to the easy case of a three dimensional cube with faces consisting of squares. Ask the analogous questions there and just extrapolate the
answers to the tesseract.
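One way to check your answers to the first couple of questions, once you have them: label each cubical room by the coordinate it fixes (axis, end), note that two rooms share a square "door" exactly when they fix different axes, and count. A sketch (the labeling scheme is ours):

```python
from itertools import product

# each of the 8 cubical rooms fixes one of the 4 axes at one of its two ends
rooms = list(product(range(4), (0, 1)))

def share_a_door(r, s):
    """Two boundary cubes of the tesseract meet in a square (a 'door')
    exactly when they fix different axes; two cubes fixing the same
    axis at opposite ends are disjoint."""
    return r[0] != s[0]

for room in rooms:
    doors = sum(share_a_door(room, other) for other in rooms if other != room)
    print(room, "has", doors, "doors")   # every room has 6

# the only room not reachable through a single door is the "opposite" one,
# fixing the same axis at the other end
for axis, end in rooms:
    assert not share_a_door((axis, end), (axis, 1 - end))
```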
A knotty challenge
Access to a fourth dimension makes many things possible that would otherwise be quite impossible. To see how this works, we'll use the strategy of thinking out a process in a three dimensional space.
Then we replicated it in a four dimensional space.
Consider a coin lying in a frame on a table top.
There is no way the coin can be removed from the frame within the confines of the two dimensional surface of the table. Now recall that we have access to a third dimension. The coin is easily
removed merely by lifting it into the third dimension, the height above the table.
We are then free to move the coin as we please in the higher layer and then lower back to the tabletop outside the frame.
The thing to notice about the lifting is that the motion does not move the coin at all in the two horizontal directions of the two dimensional space. So the motion never brings it near the frame and
there is no danger of collision with the frame.
Now repeat this analysis for its analog in one higher dimension, a marble trapped within a three dimensional box.
The marble can be removed in exactly the same way by "lifting" it, this time into the fourth dimension. As with the coin in the frame, the key thing to note is that in this lifting motion, the
marble's position in the three spatial directions of the box are unchanged. The marble never comes near the walls and there is no danger of colliding with them.
Once it is lifted into a new three dimensional space, it can be moved around freely in that space and lowered back into the original three dimensional space, but now outside the box.
Now finally consider two linked rings in some three dimensional space. Can we separate them using access to a fourth dimension?
It can be done by exactly the same process of lifting one of the rings into the fourth dimension. As before, note that the lifting does not move the ring in any of the three directions of the three dimensional space holding the initially linked rings. So the motion risks no collisions of the moved ring with the other. The lifting simply elevates the moved ring to a new three dimensional layer of the four dimensional space in which no part of the other ring is found. The moved ring can then be freely relocated in that new layer and, if we please, lowered back into the original three dimensional space in quite a different location.
Now comes the knotty challenge. We are familiar in our three dimensional space with tying knots in a rope. Some knots are just apparent tangles that can come apart pretty easily. Others are real and
can only be undone by threading the end of the rope through a loop. So take this to be a real knot: one that cannot be undone by any manipulation of the rope if we cannot get hold of the ends.
(Imagine, if you like, that they are each anchored to a wall and cannot be removed.)
The challenge is to convince yourself that there are no real knots in ropes in a four dimensional space. The principal aid you will need is the manipulation above of the linked rings. To get yourself
started, imagine how you would use a fourth dimension to untie some simple knot you can easily imagine.
Using colors to visualize the extra dimension
Does the general idea of "lifting" an object into the fourth dimension still seem elusive? If so, here's a technique for visualizing it that may just help. The trick is to imagine that differences in
position in the extra dimension of space can be represented by differences of colors.
Here's how it works when we start with a two dimensional space and lift into the third dimension. The objects in the original two dimensional space are black. As we lift through the third dimension,
they successively take on the colors blue, green and red.
Now let's apply this colored layer trick to the earlier example of lifting a coin out of a frame. The coin starts in the same two dimensional space as the frame. We lift it up into the third
dimension into a higher spatial layer that we have color-coded red. In this higher layer, the coin can move freely left/right and front/back without intersecting the frame. We move it to the right
until it passes over the frame. Then we lower it back down outside.
Now imagine that we cannot perceive the third dimension directly. Here's how we'd picture the coin's escape. It starts out inside the frame in the space of the frame. It is then lifted out of the
frame into the third dimension. At that moment, it is indicated by a ghostly red coin. Its spatial position in the left/right and front/back direction has not changed. All that has changed is its
height. It is now in the red height layer. If we move the coin left or right, or front and back, in this red layer, it no longer intersects the frame and can move right over it. We won't see it move
over the frame, however. As far as we are concerned it will just move through it.
The motion of the coin in this third dimensional escape passage is illustrated by the ghostly red coin.
This last analysis of the coin in the frame is the template for dealing with the real case of a marble trapped inside a three dimensional box. If the marble moves in any of the three familiar
dimensions (up/down, left/right and front/back), its motion intersects the walls of the box and it cannot escape. So we lift the marble into the fourth dimension, without changing its position in
the three familiar dimensions. In the figure, this is shown by the marble turning ghostly red. In the red space, the marble is free to move up/down, left/right and front/back, without intersecting
the box's walls. The marble then moves so that it passes over one of the walls. It is then lowered out of the red space back to the original three dimensional space, but now outside the box.
The same analysis applies to the linked rings. One ring is lifted out of the three dimensional space of the original set up. In this red space, the ring can move freely without intersecting the
other ring. We move it well away from the other ring and then drop it back into the original three dimensional space. It is now unlinked from the other ring.
What you should know
• The properties of squares, cubes and tesseracts.
• How to arrive at the properties of a tesseract and other four-dimensional figures by extrapolating the methods used to get the properties of a cube.
Copyright John D. Norton. February 2001; July 2006, February 2, 2008; February 6, 2012.
|
{"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/four_dimensions/","timestamp":"2014-04-20T14:34:13Z","content_type":null,"content_length":"25886","record_id":"<urn:uuid:f2a8e9eb-36e4-46b8-a97d-c5b836d02128>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how can we prove 2n choose n is always even?
I see you have graph paper.
You must be plotting something
Re: binomial
hi cooljackiec
Welcome to the forum.
I'm not at all clear what you are asking. Please would you say a bit more about this problem.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: binomial
I like this proof best: C(2n, n) = C(2n-1, n-1) + C(2n-1, n) = 2 C(2n-1, n), using Pascal's rule and the symmetry C(2n-1, n-1) = C(2n-1, n).
Re: binomial
so because i have a 2 * 2n-1 choose n, it is bound to be even?
Re: binomial
Yes, any integer that is multiplied by 2 is even.
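bobbym's "multiplied by 2" argument is easy to check numerically. A quick sketch (mine, not from the thread) using Python's math.comb:

```python
from math import comb

# Pascal's rule plus symmetry: C(2n, n) = C(2n-1, n-1) + C(2n-1, n)
# and C(2n-1, n-1) = C(2n-1, n), so C(2n, n) = 2 * C(2n-1, n).
for n in range(1, 20):
    central = comb(2 * n, n)
    assert central == 2 * comb(2 * n - 1, n)  # the identity
    assert central % 2 == 0                   # hence always even
print("C(2n, n) is even for n = 1..19")
```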
Re: binomial
Another problem:
Consider the polynomial
What are the coefficients of $ f(t-1) $? Enter your answer as an ordered list of four numbers. For example, if your answer were $ f(t-1) = t^3+3t^2-2t+7 $, you'd enter (1,3,-2,7). (This is not the
actual answer.)
Re: binomial
I am getting (1, 3, 3, 1).
Re: binomial
thank you. I have another question:
We have 8 pieces of strawberry candy and 7 pieces of pineapple candy. In how many ways can we distribute this candy to 4 kids?
In how many ways can we distribute 13 pieces of identical candy to 5 kids, if the two youngest kids are twins and insist on receiving an equal number of pieces?
4 students are running for club president in a club with 50 members. How many different vote counts are possible, if members may choose not to vote?
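The thread leaves these unanswered; for the first two, stars and bars gives 19800 and 308 respectively. A brute-force sketch (my own working, not from the thread) that confirms both counts:

```python
from math import comb
from itertools import product

# Q1: 8 strawberry + 7 pineapple candies to 4 kids; the two flavours
# distribute independently, so the count is C(8+3,3) * C(7+3,3).
q1_formula = comb(11, 3) * comb(10, 3)
q1_brute = (sum(1 for s in product(range(9), repeat=4) if sum(s) == 8)
            * sum(1 for p in product(range(8), repeat=4) if sum(p) == 7))
assert q1_formula == q1_brute == 19800

# Q2: 13 candies to 5 kids with the twins receiving equal shares.
# If each twin gets t, split the remaining 13 - 2t among the other 3.
q2_formula = sum(comb(13 - 2 * t + 2, 2) for t in range(7))
q2_brute = sum(1 for x in product(range(14), repeat=5)
               if sum(x) == 13 and x[0] == x[1])
assert q2_formula == q2_brute == 308
print(q1_formula, q2_formula)
```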
Re: binomial
I'm getting the same as bobbym. It sometimes helps to write the original function in terms of
a different variable than the one you are substituting in its place. For example
f(x) = x^3+6x^2+12x+8 = x^3 + 2*3x^2 + 4*3x + 8 = (x+2)^3
f(x) = (x+2)^3 so f(t-1) = ((t-1)+2)^3 = (t+1)^3 = t^3+3t^2+3t+1 = (1, 3, 3, 1) by replacing
x by t-1 in the f(x) = (x+2)^3.
And as another example find the quadruple for f(t-2).
f(x) = (x+2)^3 so f(t-2) = ((t-2)+2)^3 = t^3 = (1, 0, 0, 0).
These can also be looked at as a composition of functions: with f(x) = (x+2)^3 and g(x) = x-1,
(f o g)(x) = f(g(x)) = f(x-1) = ( (x-1)+2)^3 = (x+1)^3
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
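noelevans' factoring trick can be verified numerically; a small sketch (mine, not from the thread):

```python
# f(x) = x^3 + 6x^2 + 12x + 8 = (x + 2)^3, so
# f(t-1) = (t+1)^3 = t^3 + 3t^2 + 3t + 1 -> coefficients (1, 3, 3, 1)
# f(t-2) = t^3                           -> coefficients (1, 0, 0, 0)
def f(x):
    return x**3 + 6 * x**2 + 12 * x + 8

for t in range(-5, 6):
    assert f(t - 1) == t**3 + 3 * t**2 + 3 * t + 1
    assert f(t - 2) == t**3
print("coefficients of f(t-1):", (1, 3, 3, 1))
```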
Re: binomial
Hi cooljackiec;
4 students are running for club president in a club with 50 members. How many different vote counts are possible, if members may choose not to vote?
I am getting 316251.
Re: binomial
On the other hand, I am getting 5^50.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: binomial
5^50 = 88817841970012523233890533447265625
I bet you did not do a simulation.
Re: binomial
Each member has 5 choices. There are 50 members, or 46 if you exclude the ones who are running for president. 5 choices per person, 46 persons, 5^46 possible vote counts...
Re: binomial
Not exactly, If one guy gets 20 votes there are only 30 to spread to the others.
Remember you are only voting for one position not 4.
Take 50 x's and place 3 spacers in various positions.
Re: binomial
Actually, you must have 4 spacers...
Your answer is correct.
Re: binomial
3 spacers, because you are looking for solutions to x1 + x2 + x3 + x4 = 50.
The three spacers make 4 separate groups. Each group is the count for one variable.
xxxxxxxxxxxxx _ xxxxxxxxxxxxxxxxx _ xxxxxxxxxx _ xxxxxxxxxx
this corresponds to the solution 13 + 17 + 10 + 10 = 50
Re: binomial
There are 5 groups, votes for 1st, 2nd, 3rd and 4th candidate and the non-voters...
Re: binomial
The non-voters are not a candidate. They are represented by different values of r. For instance, when there is one non-voter the equation is x1 + x2 + x3 + x4 = 49.
Re: binomial
It is easier to look at them as a special category of voters: v1 + v2 + v3 + v4 + n = 50,
where n are the non-voters. There are then C(54, 4) = 316251 possibilities.
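bobbym's 316251 and the five-group formulation agree once abstentions are treated as a fifth group; stars and bars then gives C(54, 4). A quick check (my sketch, not from the thread):

```python
from math import comb
from itertools import product

# v1 + v2 + v3 + v4 + n = 50 with all terms >= 0 (n = abstentions),
# counted by stars and bars as C(50 + 4, 4).
assert comb(54, 4) == 316251

# Sanity-check the model on a tiny analogue by brute force:
# 5 members, 2 candidates plus abstentions -> C(5 + 2, 2) = 21.
tiny = sum(1 for v in product(range(6), repeat=3) if sum(v) == 5)
assert tiny == comb(7, 2) == 21
print(comb(54, 4))
```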
Re: binomial
That is very good, I did not see that. That would get the same answer and there is less calculation.
Re: binomial
There will be times when you will think you or I have found the perfect answer, I assure you these are delusions on your part. - Prof. Kingsfield
Re: binomial
I never said it was perfect... I only agreed that it is much easier to calculate...
Re: binomial
True, but there could be an even easier way...
Re: binomial
it is wrong.
I think that it would be 4^50.
Every member has 4 choices. 50 members. But I'm not sure...
Producing polar contour plots with matplotlib
In my field I often need to plot polar contour plots, and generally plotting tools don’t make this easy. In fact, I think I could rate every single graphing/plotting package in the world by the ease
of producing a polar contour plot – and most would fail entirely! Still, I have managed to find a fairly nice way of doing this using my plotting package of choice: matplotlib.
I must warn you first – a Google search for matplotlib polar contour or a similar search term will produce a lot of completely out-dated answers. The most commonly found answers are those such as
this StackOverflow question and this forum post. In fact, the first question was asked by me last year – and got an answer which is now completely out of date. Basically, all of these answers tell
you that you can’t do a polar contour plot directly in matplotlib, and you must convert your points from polar co-ordinates to cartesian co-ordinates first. This isn’t difficult, but is a pain to do,
and of course you then end up with cartesian axes which doesn’t look great. The great news is that you can now do polar contour plots directly with matplotlib!
So, how do you do them? Simple really, you just create some polar axes and plot a contour plot on them:
fig, ax = subplots(subplot_kw=dict(projection='polar'))
cax = ax.contourf(theta, r, values, nlevels)
This produces a filled contour plot, as it uses the contourf function; using the contour function would give simple contour lines. The first three parameters which must be given to this function are
all two-dimensional arrays containing: the radii, the angles (theta) and the actual values to contour. The final parameter is the number of contour levels to plot – you tend to want lower numbers for
line contours and higher numbers for filled contour plots (to get a smooth look).
I never quite understood these two-dimensional arrays, and why they were needed. I normally had my data in the form of three lists that were basically columns of a table, where each row of the table
defined a point and value. For example:
Radius Theta Value
10 0 0.7
10 90 0.45
10 180 0.9
10 270 0.23
20 0 0.5
20 90 0.13
20 180 0.52
20 270 0.98
Each of these rows defines a point – for example, the first row defines a point with a radius of 10, an angle of 0 degrees and a value of 0.7. I could never understand why the contour function didn't
just take these three lists and plot me a contour plot. In fact, I’ve written a function that will do just that, which I will describe below, but first let me explain how those values are converted
to two-dimensional arrays.
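As a concrete sketch of that conversion (mine, not from the original post — the variable names are illustrative), using the table above:

```python
import numpy as np

radii = [10, 20]
angles = [0, 90, 180, 270]
# The flat Value column, ordered as in the table: all four angles
# for radius 10, then all four angles for radius 20.
values = [0.7, 0.45, 0.9, 0.23, 0.5, 0.13, 0.52, 0.98]

theta2d, r2d = np.meshgrid(np.radians(angles), radii)
values2d = np.array(values).reshape(len(radii), len(angles))

# All three arrays now share the same 2 x 4 shape, and the top-left
# cell of each carries angle 0, radius 10, value 0.7 -- as in the table.
assert theta2d.shape == r2d.shape == values2d.shape == (2, 4)
assert r2d[0, 0] == 10 and values2d[0, 0] == 0.7
```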
First of all, let's think of the dimensions: we obviously have two dimensions here in our data, the radius and the angle. In fact, we could re-shape our values array so that it is two-dimensional fairly
easily. We can see from the table above that we are doing all the azimuth angles for a radius of 10 degrees, then the same azimuth angles for a radius of 20 degrees, etc. Thus, rather than our values
being stored in a one-dimensional list, we could put them in a two-dimensional table where the columns are the azimuth angles, and the rows are the radii:
      0     90    180   270
10    0.7   0.45  0.9   0.23
20    0.5   0.13  0.52  0.98
This is exactly the sort of two dimensional array that we need to give to the contourf function. That's not too hard to understand – but why on earth do the radii and angle arrays have to be
two-dimensional too? Well, basically we just need two arrays like the one above, but with the relevant radii and angles in the cells, rather than the values. So, for the angles, we'd have:

0   90   180   270
0   90   180   270

And for the radii we'd have:

10   10   10   10
20   20   20   20
Then, when we take all three arrays together, each cell will define the three bits of information we need. So, the top left cell gives us an angle of 0, a radius of 10 and a value of 0.7. Luckily,
you don’t have to make these arrays by hand – a handy NumPy function called meshgrid will do it for you:
>>> radii = np.arange(0, 60, 10)
>>> print radii
[ 0 10 20 30 40 50]
>>> angles = np.arange(0, 360, 90)
>>> print angles
[ 0 90 180 270]
>>> np.meshgrid(angles, radii)
(array([[ 0, 90, 180, 270],
[ 0, 90, 180, 270],
[ 0, 90, 180, 270],
[ 0, 90, 180, 270],
[ 0, 90, 180, 270],
[ 0, 90, 180, 270]]),
array([[ 0, 0, 0, 0],
[10, 10, 10, 10],
[20, 20, 20, 20],
[30, 30, 30, 30],
[40, 40, 40, 40],
[50, 50, 50, 50]]))
One thing to remember is that the plotting function requires the angle (theta) in radians, not degrees, so if your data is in degrees (as it often is) then you’ll need to convert it to radians using
the NumPy radians function.
After doing all of this you can get your data into the contour plotting function correctly, and you can get some polar axes for it to be plotted on. However, if you do this, you’ll find that your
axes look something like this:
You can see that zero degrees isn’t at the top, it’s at the ‘East’ or ’3 o’clock’ position, and the angles go round the wrong way! Apparently that’s how these things are often done in maths – but in
my field particularly people want to have a polar plot like a compass, with zero at the top!
If you try and find how to do this, you’ll find a StackOverflow answer with a brilliant subclass of PolarAxes which does this for you. It’s brilliant that matplotlib allows you do this sort of
customisation, but if you look below the accepted answer you’ll find a link to the matplotlib documentation for a function called set_theta_zero_location. This function very nicely takes a compass
direction (“N” or “E” or “NE” etc) for where zero should be, and puts it there! Similarly, the function set_theta_direction sets the direction in which the angles will increase. All you need to do to
use these is call them from the axes object:

ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
The example above will set up the plot for a ‘normal’ compass-style plot with zero degrees at the north, and the angles increasing clockwise. If you find that these lines of code give an error you
need to update your matplotlib version – these methods were only added in the latest version (v1.1.0).
So, now we’ve covered everything that I’ve gradually learnt about doing this, we can put it all together in a function. I use the function below whenever I want to plot a polar contour plot, and it
works fine for me. It is documented through the docstring shown in the code below.
I can’t guarantee the code will work for you, but hopefully this post has been helpful and you’ll now be able to go away and create polar contour plots in matplotlib.
import numpy as np
from matplotlib.pyplot import *

def plot_polar_contour(values, azimuths, zeniths):
    """Plot a polar contour plot, with 0 degrees at the North.

    * `values` -- A list (or other iterable - eg. a NumPy array) of the values to plot on the
      contour plot (the `z` values)
    * `azimuths` -- A list of azimuths (in degrees)
    * `zeniths` -- A list of zeniths (that is, radii)

    The shapes of these lists are important, and are designed for a particular
    use case (but should be more generally useful). The values list should be `len(azimuths) * len(zeniths)`
    long with data for the first azimuth for all the zeniths, then the second azimuth for all the zeniths etc.

    This is designed to work nicely with data that is produced using a loop as follows:

    values = []
    for azimuth in azimuths:
        for zenith in zeniths:
            # Do something and get a result
            values.append(result)

    After that code the azimuths, zeniths and values lists will be ready to be passed into this function.
    """
    theta = np.radians(azimuths)
    zeniths = np.array(zeniths)
    values = np.array(values)
    values = values.reshape(len(azimuths), len(zeniths))

    r, theta = np.meshgrid(zeniths, np.radians(azimuths))
    fig, ax = subplots(subplot_kw=dict(projection='polar'))
    ax.set_theta_zero_location("N")
    ax.set_theta_direction(-1)
    cax = ax.contourf(theta, r, values, 30)
    cb = fig.colorbar(cax)
    cb.set_label("Pixel reflectance")

    return fig, ax, cax
Categorised as: Programming, Python, Remote Sensing
Thanks, this is a good write up. I’ve been struggling with plotting spectral wave data in a polar plot in matplotlib.
I found your post after seeing your question on stackoverflow, might be worth linking them together.
Thanks a lot for posting this. I can’t use it yet because my work computer won’t allow me to download matplotlib v1.1.0 but I will hopefully soon.
To James Morrison: I’m working with wave spectral data too. If you haven’t heard of it, the WAFO toolbox could be good for you – http://www.maths.lth.se/matstat/wafo/. It has a python version too.
I tried this out but it’s still quite a bit slower than what my previous method was: for each point along a ray, add an x,y,intensity to a scatter plot. This is still roughly 4x faster than a polar
contour as above, but the output ends up looking pretty similar.
Thanks! This was super useful.
Annals of Mathematics
Introduction 18
The divisibility theorem for conical polynomials 24
The main existence theorem 31
2 other sections not shown
What is the answer to this question?
http://postimg.org/image/5qsfldn0f/
The equation you provided:
`T=2pi\sqrt(L/9.8) `
is the equation for the period of a simple pendulum.
Since L is the only variable, then T is a function of L.
1. `T(L) = 2pi\sqrt(L/9.8)`
2. The domain is `{LinR|Lgt=0}` or `[0,+oo)`
3. The range is `{TinR|Tgt=0}` or `[0,+oo)`
4. This is a sideways-parabola graph (a square root graph).
T = 8.98 s at L = 20 m
5. Let T = 10 s, Solve for L
The above equation is rearranged: `L = 9.8(T/(2pi))^2`
Now we can plug in T=10
`L=24.824 m` long to the nearest 0.001 m
6. The mass has no effect on the pendulum according to this equation.
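The two numerical results above are easy to reproduce in Python (a sketch of my own, assuming g = 9.8 m/s² as in the question):

```python
import math

g = 9.8  # m/s^2, the value used in the question

def period(L):
    """Period T = 2*pi*sqrt(L/g) of a simple pendulum of length L (m)."""
    return 2 * math.pi * math.sqrt(L / g)

def length(T):
    """Length from the rearranged formula L = g * (T / (2*pi))**2."""
    return g * (T / (2 * math.pi)) ** 2

print(round(period(20), 2))   # 8.98 s at L = 20 m
print(round(length(10), 3))   # 24.824 m for T = 10 s
```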
st: RE: statistical test and sensitivity analysis for matched pairs with
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
st: RE: statistical test and sensitivity analysis for matched pairs with censoring
From "Shoryoku Hino" <shoryok@ninus.ocn.ne.jp>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: statistical test and sensitivity analysis for matched pairs with censoring
Date Tue, 28 Dec 2010 10:29:40 +0900
Dear Joseph,
Thank you for your reply and kind advice. I should have made my question clearer.
All I want to know is the test for survival data in case of paired sample.
For example, if it were not survival data, we would use signed rank test for
continuous variable in paired sample instead of unpaired t-test or rank sum
test. I would use McNemar test for proportion instead of chi square test.
Rosenbaum's sensitivity test could be applicable in these cases.
In my case, survival data, I think there might be a better, more powerful
test for paired sample than log-rank test or Wilcoxon test.
I would like you to give me any suggestion.
-----Original Message-----
From: Joseph Coveney [mailto:jcoveney@bigplanet.com]
Sent: Sunday, December 26, 2010 8:08 PM
To: statalist@hsphsun2.harvard.edu
Cc: Shoryoku Hino
Subject: Re: statistical test and sensitivity analysis for matched pairs
with censoring
Shoryoku Hino wrote:
I am using Stata11 and working on the observational data.
I produced the one-to-one matched pairs using PSMATCH2 to compare the
treatment A with B in terms of maintenance effect.
The primary outcome is the time to recurrence of the disease.
There are two questions I would like to ask you.
1) What is the appropriate statistical test in this situation? RF.Woolson
seems to recommend censored-data version of signed rank test. But I have no
idea how we can do it on Stata.
2) How to perform sensitivity analysis to determine the magnitude of hidden
bias in this case?
I wonder if you could help me to know the way to operate Stata for these
1) Help files for Stata's official survival time commands can be seen by
help st
at the command line. Stata has a selection of tests for censored survival
time data. The help file for these tests can be seen by typing
help sts_test
at the command line. I don't know whether the particular one you mention is
among the ones that Stata offers.
Stata also has modeling commands for this type of data. You can click on
the hyperlinks in the viewer window that displays after typing "help st" to
see more about these modeling commands.
There are also user-written commands for survival time or censored data.
You can see more about these by typing
findit survival
findit censored
at the command line, and then clicking on the hyperlinks that appear in the
viewer window that pops up.
2) I don't understand your question and so can't offer a specific answer to
it, but in general you can use Monte Carlo simulation in order to explore
the properties of a method. To see Stata's help file for Monte Carlo
simulation, type
help simulate
at the command line.
Joseph Coveney
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Mathcad - system of ODEs with Minimize function
April 4th 2009, 01:23 PM #1
I've got a problem which I have to solve in Mathcad software. Here's the description:
There is a system of ODEs and in these equations 2 constants are hidden. My task is to find the values of those constants in such a way that the solution of the ODEs will match experimental results.
I know how to solve differential equations via rkfixed command and others, but I have no idea how to connect this solution with the optimizing mathcad functions (Minimize). At this point I want
to emphasize that this problem cannot be solved with Given Find procedure because there is no chance to calculate exact values which will match perfectly those from experiment.
Sorry for my English – if there is something that you don't understand, please ask and I'll try to explain it better.
That is a difficult task, not knowing the equations or anything concerning your efforts.
The equations are not so important in this case. They are very complicated and they use many predefined earlier functions. I'll give you a simple example of my problem:
No subject
Thu Nov 16 16:52:29 CST 2006
rather than sin(a) will be enough to make most people stay away from
numerical python. I say this from experience: most of what Python
does well, guile also did well. After the IDAE BoF at the 1996 ADASS
meeting, we considered whether guile would be a better platform than
Python. It had a lot of development force behind it, had all the
needed features, etc. We asked our audience whether they would use a
lisp-based language. There was laughter. The CS people here know
that lisp is a "beautiful" language. But nobody uses it, because the
syntax is so different from the normal flow of human thought. It is
written to make writing an interpreter for it easy, not to be easy to
learn and use. I've tried it about 8 times and have given up.
Apparently others agree, as guile has been ripped out of many
applications, such as Gimp, or at least augmented as an extension
language by perl or python, which now get most of the new code.
Normal people don't want to warp their brains in order to code.
Consider the following:
          2        1/2
   -b +- (b  - 4ac)
x = -----------------
          2a
x = (-b + [1.,-1] * sqrt(b*b-4*a*c)) / (2*a)
(let x (/
        (- b)
        (sqrt (-
                (* b b)
                (* 4 a c))))
     (* 2 a)))
You can verify that you have coded the IDL correctly at a glance. The
lisp takes longer, even if you're a good lisp programmer. Now
consider the following common astronomical equation:
         sin(dec) - sin(alt) sin(lat)
cos(a) = ----------------------------
              cos(alt) cos(lat)
a = acos((sin(dec) - sin(alt) * sin(lat)) / (cos(alt) * cos(lat)))
a = ((dec.Sin - alt.Sin * lat.Sin) / (alt.Cos * lat.Cos)).Acos
readable, but we start to see the problem with the moved .Acos. Now
try this:
        2
   sin x + cos(tan(x + sin(x)))
a = e
a = exp((sin(x))^2 + cos(tan(x + sin(x))))
a = (x.Sin**2 + (x + x.Sin).tan.cos).Exp
Half of it you read from left to right. The other half from right to
left. Again, the IDL is much easier to write and to read, given that
we started from traditional math notation. In the proposal version,
it's easy to overlook that this is an exponential.
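For comparison, here is the same expression in Python's functional notation (my own sketch, stdlib only — not part of the original email):

```python
import math

def a(x):
    # a = e**(sin(x)**2 + cos(tan(x + sin(x)))), read left to right
    # in the traditional functional style the email argues for.
    return math.exp(math.sin(x) ** 2 + math.cos(math.tan(x + math.sin(x))))

# At x = 0: sin(0) = 0 and cos(tan(0)) = 1, so a(0) = e.
assert math.isclose(a(0.0), math.e)
print(a(1.0))
```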
So, I don't object to making functions into methods, but if there's
even a hint of deprecating the traditional functional notation, that
will relegate us to oblivion. If you don't believe it still, take the
last equation to a few non-CS types and ask them whether they would
consider using a language that required coding math in the proposed
manner versus in the standard manner. Then consider how much time it
would take to port your existing code to this new syntax, and verify
that you didn't misplace a paren or sign along the way.
A statement that traditional functional notation is guarranteed always
to be part of Numeric should be in the PEP. Even calling it syntactic
sugar is dangerous. It is the fundamental thing, and the methods are
sugar for the CS types out there.
More information about the Numpy-discussion mailing list
Example Configurations for Log Rotation
Examples of how to configure log rotation by log size, time, or both follow.
Rotating the Log Based on Log Size
This section shows how to configure log rotation according to log size only. This configuration rotates the log when it reaches 10 Mbytes, irrespective of the time since the log was last rotated.

$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-policy:size \
  log-rotation-size:10M
Rotating the Log Based on Time
The examples in this section show how to configure log rotation according to the time since the last rotation, irrespective of log size.
• This configuration rotates the log at 3:00 today and then every 8 hours, irrespective of the size of the log file.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:8h \
  log-rotation-policy:periodic log-rotation-start-time:0300
• This configuration rotates the log at 3:00, 13:00 and 23:00 every day, irrespective of the size of the log file. Because the log-rotation-start-time parameter takes precedence over the
log-rotation-frequency parameter, the log is rotated at 23:00 and then 4 hours later. The log is not rotated at 23:00 and then 10 hours later.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:10h \
  log-rotation-policy:periodic log-rotation-start-time:0300
• This configuration rotates the log at noon on Monday, and then at the same time every week, irrespective of the size of the log file.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:1w \
  log-rotation-policy:periodic log-rotation-start-day:2 log-rotation-start-time:1200
• This configuration rotates the log at noon on Monday, and then every 3 days, irrespective of the size of the log file.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:3d \
  log-rotation-policy:periodic log-rotation-start-day:2 log-rotation-start-time:1200
The log is rotated on the following days: Monday, Thursday, Sunday, Wednesday, and so on. Notice that the log-rotation-start-day parameter applies to the first week only. The log is not rotated
on the Monday of the second week.
• This configuration rotates the log at noon on the 22^nd day of the month, and then at the same time every month, irrespective of log size.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:1m \
  log-rotation-policy:periodic log-rotation-start-day:22 \
  log-rotation-start-time:1200
If the log-rotation-start-day is set to 31 and the month has only 30 days, the log is rotated on the first day of the following month. If the log-rotation-start-day is set to 31 and the month has
only 28 days (February), the log is rotated on the 3^rd.
Rotating the Log Based on Time and Log Size
This example shows how to configure a log rotation for a specified interval if the file size is big enough.
This configuration rotates the log at 3:00, 11:00, and 19:00 every day, if the size of the log file exceeds 1 Mbyte. If the size of the log file does not exceed 1 Mbyte, the log file is not rotated.
$ dpconf set-access-log-prop -h host1 -p 1389 log-rotation-frequency:8h \
  log-rotation-policy:periodic log-min-size:1M log-rotation-start-time:0300
|
{"url":"http://docs.oracle.com/cd/E19261-01/820-2763/gbvqq/index.html","timestamp":"2014-04-19T04:13:53Z","content_type":null,"content_length":"7624","record_id":"<urn:uuid:fba99af5-5faf-4da5-92e9-acfc64d743d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some results on the error terms in certain exponential sums involving the divisor function
Abstract (Summary)
(Uncorrected OCR) Abstract of thesis entitled
submitted by
Chi-Yan WONG
for the Degree of Master of Philosophy
at The University of Hong Kong
in August 2002
In 1985, Jutila investigated a generalization of the Dirichlet divisor problem. He defined, for a and b coprime integers with a ≥ 1,

Δ(x; b/a) = Σ'_{n≤x} d(n) e(bn/a) − (x/a)(log(x/a²) + 2γ − 1) − E(0; b/a),

where Σ' indicates that the last term in the sum is to be halved if x is an integer, d(n) is the divisor function, and E(0; b/a) is the value at s = 0 of the analytic continuation of the Dirichlet series Σ_{n≥1} d(n) e(bn/a) n^{−s} (Re(s) > 1).
This thesis is devoted to investigation of certain properties of the error term 6 (x; ~).
The third power moment of the function Re(e^{−iθ} Δ(x; b/a)) was studied. By following the main idea of Tsang (1992) and an estimation of Lau (1999), the following theorem was proved:
Theorem 1. For 1 ≤ a ≤ X and any real θ,
where c₂ is the constant

c₂ = Σ_h Σ_{α,β} μ(h) h^{−3/2} (αβ(α+β))^{−3/2} d(α²h) d(β²h) d((α+β)²h) · cos(2π b̄hα²/a + θ) cos(2π b̄hβ²/a + θ) cos(2π b̄h(α+β)²/a + θ).

Here b̄ is the positive integer (mod a) such that b·b̄ ≡ 1 (mod a).
Theorem 1 also implies that
The higher power moments of |Δ(x; b/a)| were investigated. The following theorem was proved:
Theorem 2. For a ≤ X^{1/2},

∫^X |Δ(x; b/a)|^λ dx

This gives a non-trivial bound for the integral ∫^X |Δ(x; b/a)|^λ dx for λ < 8 under certain restrictions on a. The case λ = 3 of Theorem 2 also suggests that the above consequence of Theorem 1 is not the best possible, in the sense that the restriction on a is less strict. The proof of Theorem 2 employs an estimate on the number of large values of |Δ(x; b/a)|.
The sizes of the gaps between sign changes of the function Re(e^{−iθ} Δ(x; b/a)) were studied. It was shown that such gaps can be as large as order aX^{1/4} log^{−5} X. A mean square estimate of Δ(x + U; b/a) − Δ(x; b/a) was proved and was then used in the proof of the above result.
Bibliographical Information:
School:The University of Hong Kong
School Location:China - Hong Kong SAR
Source Type:Master's Thesis
Keywords:exponential sums divisor theory
Date of Publication:01/01/2003
|
{"url":"http://www.openthesis.org/documents/Some-results-error-terms-in-509042.html","timestamp":"2014-04-16T11:03:21Z","content_type":null,"content_length":"10107","record_id":"<urn:uuid:0e186938-6665-4cd5-96a7-1a01be0421ef>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does there exists a necessary condition for Lp multiplier?
Let $1 \leq p \leq 2$. A measurable function $m(\xi)$ is called an $L^p(R^n)$ ($L^p$ for convenience) multiplier if $$\|m(D)\varphi\|_{L^p} \leq C\|\varphi\|_{L^p} , \varphi \in L^p $$ for some constant $C$, where $$ m(D)\varphi(x) = F^{-1}(m(\xi)F\varphi(\xi)) $$ and $F, F^{-1}$ denote the Fourier transform and its inverse in the tempered distribution sense. The picture of $L^1$ multipliers and $L^2$ multipliers is clear. In fact, $m$ is an $L^2$ multiplier iff $m \in L^\infty$, and $m$ is an $L^1$ multiplier iff $m$ is the Fourier transform of a finite Borel measure. For general $p \in (1,2)$, we can apply the Mikhlin–Hörmander theorem to prove $m$ to be an $L^p$ multiplier. We know $e^{-i|\xi|^2}$ is not an $L^p$ multiplier if $p\neq 2$, but the proof is not trivial and relies on the form of the function. So, does there exist any necessary condition for $L^p$ multipliers in general?
harmonic-analysis fa.functional-analysis ap.analysis-of-pdes
2 Answers
In the case of radial multipliers, Heo, Nazarov, Seeger (Acta Math. 206 (2011), 55–92) have recently given a complete characterization of $L^p$ multipliers when the dimension is large compared to $p$. More specifically, $d > (2+p)/(2-p)$, $1 < p < 2$.
Thank you for the useful reference. – Wang Ming May 7 '12 at 4:06
Beals has given necessary conditions for certain subalgebras defined by symbol-like estimates to be bounded:
R. Beals, $L^{p}$ and Hölder estimates for pseudodifferential operators: necessary conditions. Proc. Sympos. Pure Math., XXXV (1979), 153-157.
http://www.ams.org/mathscinet-getitem?mr=545303
This does not give you a necessary condition for a single operator, though. (Of course, there is a trivial necessary condition: any $L^p$-multiplier is in particular an $L^2$-multiplier, so its symbol has to be bounded.)
On a noncompact space, is it trivial that L^p Fourier multipliers are in L^\infty? I know this follows from a more general result in non-abelian harmonic analysis, due to Herz, but is the abelian case trivial as you say? – Yemon Choi May 6 '12 at 20:59
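For the abelian Euclidean case, the parenthetical claim (an $L^p$ multiplier has bounded symbol) follows from a standard duality-plus-interpolation argument; a sketch:

```latex
% Claim: if T_m is bounded on L^p(R^n), 1 < p < \infty, then m \in L^\infty.
% Step 1 (duality): the adjoint of T_m on L^p is T_{\bar m} on L^{p'},
% and by conjugation symmetry \|T_{\bar m}\|_{q\to q} = \|T_m\|_{q\to q}, so
\|T_m\|_{L^{p'}\to L^{p'}} = \|T_m\|_{L^{p}\to L^{p}} = C.
% Step 2 (Riesz--Thorin interpolation between the exponents p and p'):
\|T_m\|_{L^{2}\to L^{2}} \le C.
% Step 3 (Plancherel): on L^2 the operator norm equals the sup norm of the symbol,
\|m\|_{L^\infty} = \|T_m\|_{L^{2}\to L^{2}} \le C.
```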
|
{"url":"http://mathoverflow.net/questions/96132/does-there-exists-a-necessary-condition-for-lp-multiplier","timestamp":"2014-04-16T22:50:33Z","content_type":null,"content_length":"54867","record_id":"<urn:uuid:9e85f0bf-ef0a-4e8a-b700-eb10ad2a74ae>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
given (1+t)y' +y= cos(t) , y(0)=1 , I tried solving it using P(x) and q(x). So far I have d/dt(yt) = integral (cost/(1+t))*t dt...I'm stuck with the right side.
• one year ago
think you're making it too hard...
\[(t+1)y' + y = ( (t+1)*y)'\]
make sense?
sorry i'm just trying it out
k :)
am i suppose to end up with something like this?\[y dy=cost/(t+1) dt\] and then integrate both sides?
hmm you're mixing methods:) you have:
\[( (t+1)*y)' = \cos(t)\]
\[\int\limits_{ }^{ }( (t+1)*y)' = \int\limits_{ }^{ }\cos(t)\]
\[(t+1)*y= \sin(t) +C\] whoops
ok i'll try this out..thanks
I'm too stupid to solve in this way... \[(1+t)y' +y= cos(t)\]\[y' +\frac{1}{1+t}y= \frac{1}{1+t}cos(t)\] Nice :) \[\alpha = e^{\int\frac{1}{1+t}dt}=e^{\ln|1+t|} = 1+t\]\[y=\frac{1}{\alpha}\int \
alpha \times \frac{1}{1+t}cos(t) dt = \frac{1}{1+t}\int (1+t) \times \frac{1}{1+t}cos(t) dt \]\[= \frac{1}{1+t}\int cost dt =...\] Why can't I make my life easier :'(
oh wow i made a mistake when taking the integral of \[\int\limits1/(1+t) \] no wonder it wouldn't come out nicely..thanks
thanks to both for your help!!
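The closed-form answer worked out above, y(t) = (sin(t) + 1)/(1 + t) with C = 1 from y(0) = 1, can be sanity-checked numerically; this sketch just verifies the ODE (1+t)y' + y = cos(t) by finite differences:

```python
import math

def y(t):
    # Candidate solution from the thread: (t + 1) * y = sin(t) + C, with C = 1
    return (math.sin(t) + 1.0) / (1.0 + t)

def residual(t, h=1e-5):
    # Central-difference derivative, then plug into (1+t)y' + y - cos(t)
    yp = (y(t + h) - y(t - h)) / (2.0 * h)
    return (1.0 + t) * yp + y(t) - math.cos(t)

assert abs(y(0.0) - 1.0) < 1e-12                                  # initial condition
assert max(abs(residual(t)) for t in [0.1, 0.5, 1.0, 2.0, 5.0]) < 1e-8
```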
|
{"url":"http://openstudy.com/updates/50a1aee8e4b05517d536e459","timestamp":"2014-04-18T23:34:51Z","content_type":null,"content_length":"61202","record_id":"<urn:uuid:15925fe4-a6f5-4dac-9876-b78f43d12f72>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pitcher Similarity Scores 2.0
As I commented in an article about seven months ago, people like comparisons in baseball. We like to be able to compare an up-and-coming player to an established veteran, and -- if possible --
somehow quantify that comparison.
This was the idea behind Pitcher Similarity Scores. The idea was to compare pitchers based on the pitches they throw, not on their results, and in comparing, we got a score bounded between [0,1] to
describe the similarity. I won't rehash the whole formula for the scores -- you can read about the details here -- but the general idea was to look at the velocity, break, arm slot angle and pitch
location to compare pitches.
The scores generated many comments and suggestions for improvement, including some suggestions that were originally considered and left out for various reasons. Two comments in particular struck me
as important to be the next step in improving these similarity scores. In addition to explaining these improvements, I will include the most similar pitchers of 2013, as well as the most similar
individual pitches of 2013.
Improving Similarity Scores: Lefties and Righties
The first of the two comments was that some found it interesting that left-handed pitchers were being compared to right-handed pitchers. A high similarity score between these lefties and righties
implied that the two pitchers were mirror images of each other.
In calculating these similarity scores, the thought process was that of a batter who is facing a pitcher for the first time. He has never seen this pitcher's arsenal, but is told he has a fastball
like Pitcher X, and a changeup like Pitcher Y. He then can reach back to his experiences against Pitchers X and Y and have a general idea of what to expect. Obviously, if Pitcher X is not the same
handedness as the new pitcher, it makes no practical sense for the batter to compare the two. So, for this (and future) incarnation of these similarity scores, right-handed and left-handed pitchers
will not be as highly comparable to each other.
Improving Similarity Scores: Pitch Sequencing
The most common suggestion to improve the similarity scores was to include pitch sequencing somehow in the process. Originally they were included in the formula, but there were some slight
complications in its inclusion, so they were removed.
The main difficulty in pitch sequencing is dealing with incomplete sequences within an at bat. For example, are the following two sequences the same: FA-CU-FA and FA-CU? What if the pitcher intended to throw a fastball as his third pitch in the second sequence? Are they the same in that case?
In order to compare the sequences, we have to ask, "What's the shortest sequence we could possibly see?" That would be a one-pitch at bat. However, if you say that the first pitch is preceded by "nothing" you can even call a one-pitch at bat a sequence of length 2. So, since we can have at minimum a sequence of length 2, that's what we'll look at: all sequences of length 2 within an at bat.
So, for example, say we have an at bat with the following sequence of pitches: FA-CU-FA-CH. In this case, there are 4 sequences of length 2: O-FA, FA-CU, CU-FA, FA-CH, where the entry "O" corresponds to the "nothing" that precedes the first pitch.
Now we need to get these sequences into an appropriate form so that we can work them into the similarity score. To begin with, we'll put these sequences in a contingency table. Let's look at an example below where the pitcher only throws fastballs and changeups. In this table, the rows are the first pitch in the two-pitch sequence, while the columns are the second pitch of the
at an example below where the pitcher only throws fastballs and changeups. In this table, the rows are the first pitch in the two pitch sequence, while the columns are the second pitch of the
FA CH
O 70 30
FA 35 70
CH 40 15
Before we continue, we need to remember two things: first, that the pitchers throw different numbers of total pitches, and second, that they throw different numbers of each of the individual pitches. To take this into account, we need to adjust for the expected number of sequences seen based on the number of individual pitches. In order to do this, we'll assume independence, so that the expected number of sequences E[i,j] is
E[i,j]=(∑[i] O[i,j])(∑[j] O[i,j])/(∑[i,j] O[i,j])
This is where O[i,j] is the observed table that we saw above. The expected table for that table would be
FA CH
O 55.8 44.2
FA 58.6 46.4
CH 30.7 24.3
From here, we'll look at a scaled form of the residuals from this expected table. This table is denoted R[i,j] and is calculated
R[i,j]=(O[i,j]-E[i,j])/(∑[i,j] |O[i,j]-E[i,j]|)
For the above tables, we get a scaled residual table R of
FA CH
O 0.15 -0.15
FA -0.25 0.25
CH 0.1 -0.1
Finally, to compare the two pitchers, we'll take the two scaled residual matrices R^1 and R^2, subtract them from each other, sum up the absolute differences, and divide by two. Or, in math notation
D = ∑[i,j] |R^1[i,j]-R^2[i,j]|/2
This quantity D is bounded between [0,1], which makes it easy to combine with the other components of the similarity scores. However, we need to re-weight the various components of the similarity scores to do this. After the inclusion of sequencing, the weights are
Component Weight
Horizontal Break 0.2
Vertical Break 0.2
Velocity 0.2
Pitch Sequence 0.2
Angle 0.1
Pitch Location 0.1
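The scaled-residual and distance formulas above can be sketched in a few lines of code. This is a minimal illustration of the math (not the article's actual implementation), using the FA/CH example table from above:

```python
def scaled_residuals(O):
    """R[i][j] = (O[i][j] - E[i][j]) / sum|O - E|, with E from row/column totals."""
    total = sum(sum(row) for row in O)
    row_t = [sum(row) for row in O]
    col_t = [sum(col) for col in zip(*O)]
    E = [[row_t[i] * col_t[j] / total for j in range(len(O[0]))]
         for i in range(len(O))]
    denom = sum(abs(O[i][j] - E[i][j])
                for i in range(len(O)) for j in range(len(O[0])))
    return [[(O[i][j] - E[i][j]) / denom for j in range(len(O[0]))]
            for i in range(len(O))]

def sequence_distance(R1, R2):
    """D = sum |R1 - R2| / 2, bounded in [0, 1]."""
    return sum(abs(a - b) for r1, r2 in zip(R1, R2)
               for a, b in zip(r1, r2)) / 2.0

# The example above: rows O/FA/CH (previous pitch), columns FA/CH (next pitch)
O = [[70, 30], [35, 70], [40, 15]]
R = scaled_residuals(O)
print(round(R[0][0], 2), round(R[1][1], 2))  # 0.15 0.25, matching the table
print(sequence_distance(R, R))               # 0.0 for identical pitchers
```

For two different pitchers you would build one table per pitcher over the same pitch alphabet and feed both residual matrices to `sequence_distance`.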
The Most Similar Pitches of 2013
So, now that we have a new version of the similarity scores, we can recalculate them for the pitchers in 2013. For 2012, only the overall similarity scores of pitchers based on their entire arsenals
were included. Here, however, the most similar pitchers for each individual pitch will also be given. This is the main merit of the similarity scores. When combining across pitches, comparisons can
be a bit muddier. However, this muddiness is removed when looking at one pitch at a time.
From these individual pitch comparisons, you can "create" a Franken-pitcher profile by looking at his most similar comparisons. For example, Mets phenom Matt Harvey has a four-seam fastball most similar to Stephen Strasburg, a curveball similar to Grant Balfour (although not strongly similar), and a slider similar to LaTroy Hawkins (again, only somewhat similar). Below, you can download the entire similarity scores matrix for each of the seven most common pitches, but we will list the most similar for each pitch below.
Full Matrix of Four-seam Fastball Comparisons
Full Matrix of Two-seam Fastball Comparisons
Full Matrix of Cut Fastball Comparisons
Full Matrix of Changeup Comparisons
Full Matrix of Curveball Comparisons
Full Matrix of Sinker Comparisons
Full Matrix of Slider Comparisons
Most Similar Pitchers of 2013
Of course, in addition to the Franken-pitcher approach, we can look at a pitcher's arsenal as a whole. This is explained in the original article on similarity scores, and the method is no different
than before. So, without further ado, 2013's most similar pitchers are -- envelope please -- Ervin Santana and Juan Nicasio.
Now, just because two pitchers use similar pitches does not imply that they'll have the same results. There are of course still many aspects of pitching that require explanation beyond similar arsenals before we can get at the heart of why one pitcher is successful and another falls flat.
Full Matrix of Pitcher Similarity Scores
. . .
PITCHF/x data courtesy of Baseball Heat Maps.
Stephen Loftus is a featured writer at Beyond The Box Score. You can follow him on Twitter at @stephen__loftus.
More from Beyond the Box Score:
|
{"url":"http://www.beyondtheboxscore.com/2013/11/25/5133702/pitcher-similarity-scores-ervin-santana-sabermetrics","timestamp":"2014-04-21T07:11:36Z","content_type":null,"content_length":"94744","record_id":"<urn:uuid:690f5962-f592-4fe7-83eb-f4990bb66aa7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: phi
Not exactly. I'd prove it using your idea. 1/p of the first p^n numbers are not relatively prime to p^n and the rest are, so phi(p^n)=p^n-(1/p)*p^n=p^n-p^(n-1).
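That counting argument checks out numerically; a brute-force sketch:

```python
from math import gcd

def phi(m):
    # Count integers in 1..m that are relatively prime to m
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

for p, n in [(2, 5), (3, 3), (5, 2), (7, 2)]:
    assert phi(p**n) == p**n - p**(n - 1)
print("phi(p^n) = p^n - p^(n-1) verified")
```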
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=20072","timestamp":"2014-04-21T05:48:00Z","content_type":null,"content_length":"12936","record_id":"<urn:uuid:740af4bf-f519-4673-9e90-2e4734b3d290>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ceiling Bounce Model
Ceiling Bounce Model
written by Wolfgang Christian
supported by the National Science Foundation
The EJS Ceiling Bounce Model shows a ball launched by a spring-gun in a building with a very high ceiling, and a graph of the ball's position or velocity as a function of time. Students are asked to set the ball's initial velocity so that it barely touches the ceiling. This simple model is designed to teach both physics and EJS modeling.
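The velocity the students are asked to find follows from elementary kinematics: a ball launched upward at speed v0 rises to height v0^2/(2g), so it just touches a ceiling at height h when v0 = sqrt(2gh). A quick check (g and h here are illustrative values, not taken from the model):

```python
import math

g = 9.8      # m/s^2, illustrative
h = 10.0     # ceiling height above the launch point, illustrative

v0 = math.sqrt(2 * g * h)    # launch speed that just reaches the ceiling
peak = v0**2 / (2 * g)       # maximum height, from energy conservation
print(round(v0, 2), round(peak, 2))   # 14.0 10.0
```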
The Ceiling Bounce model was created using the Easy Java Simulations (EJS) version 4.1 modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the
ejs_mecvh_newtion_CeilingBounce.jar file will run the program if Java is installed. EJS is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate
computer models.
Please note that this resource requires at least version 1.5 of Java.
Ceiling Bounce Model source code
The source code zip archive contains an XML representation of the Ceiling Bounce Model. Unzip this archive in your EJS workspace to compile and run this…
download 10kb .zip
Last Modified: December 16, 2008
Subjects:
- Classical Mechanics: General; Motion in One Dimension (Gravitational Acceleration, Velocity); Newton's Second Law (Force, Acceleration)
Levels: Lower Undergraduate, High School
Resource Types: Instructional Material (Interactive Simulation)
Intended Users: Learners, Educators
Formats: application/java
Access Rights:
Free access
This material is released under a GNU General Public License Version 3 license. Additional information is available.
Rights Holder:
Wolfgang Christian
Record Cloner:
Metadata instance created December 16, 2008 by Wolfgang Christian
Record Updated:
September 19, 2013 by Andreu Glasmann
Last Update
when Cataloged:
December 16, 2008
Other Collections:
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4F. Motion
• 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.
• 9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
9. The Mathematical World
9B. Symbolic Relationships
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily,
increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in steps,
or do something different from any of these.
11. Common Themes
11B. Models
• 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast,
too complex, or too dangerous to study.
• 9-12: 11B/H3. The usefulness of a model can be tested by comparing its predictions to actual observations in the real world. But a close match does not necessarily mean that other models would
not work equally well or better.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
High School — Algebra (9-12)
Creating Equations^? (9-12)
• A-CED.2 Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
• A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.
High School — Functions (9-12)
Interpreting Functions (9-12)
• F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.^?
Building Functions (9-12)
• F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs.
Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Linear, Quadratic, and Exponential Models^? (9-12)
• F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
ComPADRE is beta testing Citation Styles!
<a href="http://www.compadre.org/OSP/items/detail.cfm?ID=8385">Christian, Wolfgang. "Ceiling Bounce Model." Version 1.0.</a>
W. Christian, Computer Program CEILING BOUNCE MODEL, Version 1.0 (2008), WWW Document, (http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926).
W. Christian, Computer Program CEILING BOUNCE MODEL, Version 1.0 (2008), <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926>.
Christian, W. (2008). Ceiling Bounce Model (Version 1.0) [Computer software]. Retrieved April 16, 2014, from http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926
Christian, Wolfgang. "Ceiling Bounce Model." Version 1.0. http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926 (accessed 16 April 2014).
Christian, Wolfgang. Ceiling Bounce Model. Vers. 1.0. Computer software. 2008. Java 1.5. 16 Apr. 2014 <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926>.
@misc{ Author = "Wolfgang Christian", Title = {Ceiling Bounce Model}, Month = {December}, Year = {2008} }
%A Wolfgang Christian
%T Ceiling Bounce Model
%D December 16, 2008
%U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926
%O 1.0
%O application/java
%0 Computer Program
%A Christian, Wolfgang
%D December 16, 2008
%T Ceiling Bounce Model
%7 1.0
%8 December 16, 2008
%U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=8385&DocID=926
Note: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
Ceiling Bounce Model:
Covers the Same Topic As Physlet Physics: Chapter 2: One-Dimensional Kinematics: Exploration 2.6
The Ceiling Bounce Model covers the same material as Physlet Exploration 2.6, while allowing students to learn how to use EJS.
relation by Andreu Glasmann
See details...
Know of another related resource? Login to relate this resource to it.
Related Materials
Similar Materials
|
{"url":"http://www.compadre.org/osp/items/detail.cfm?ID=8385","timestamp":"2014-04-16T13:26:19Z","content_type":null,"content_length":"45680","record_id":"<urn:uuid:7550fd7c-37ee-4594-a1d2-f4fa8a205ab2>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Re: Question: Normal form
G Barmpalias georgeb at amsta.leeds.ac.uk
Fri Aug 17 10:14:14 EDT 2001
Thank you,
I found it in Smullyan: 'theory of formal systems' page 89.
>Date: Thu, 16 Aug 2001 20:16:36 +0100 (BST)
>From: G Barmpalias <georgeb at amsta.leeds.ac.uk>
>Subject: Question: Normal form
>To: fom at math.psu.edu
>Mime-Version: 1.0
>Content-MD5: RkHggs4Wim3i+cjZN5pHFg==
> A question:
> An improvement of the Normal Form Theorem for partial recursive functions
>has been obtained, such that the (universal) predicate T_n and the function U
>belong to the smallest Grzegorczyk class E^0 (for every partial recursive
>function f, there is an index e such that f(x) = U(\mu~y[T_n(e,x,y)]) ).
> Does any member of the list know a specific reference for this result?
> PS Odifreddi mentions the result in his book Classical Recursion Theory,
>Vol. II, page 306.
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2001-August/005010.html","timestamp":"2014-04-20T16:37:10Z","content_type":null,"content_length":"3440","record_id":"<urn:uuid:0339e2e5-f0b0-429e-b7a2-28db7aeb65da>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alhambra, CA Calculus Tutor
Find an Alhambra, CA Calculus Tutor
...With the right tools and encouragement from myself, teachers and parents, students are able to achieve great things.I earned my MBA from UCLA Anderson School of Management in June 2009. The
MBA covers the required materials for Business, Accounting, Economics, Marketing, and other business related classes. I earned my MBA from UCLA Anderson School of Management in June 2009.
30 Subjects: including calculus, chemistry, physics, geometry
...Grammar skills are crucial to daily modern life. How one uses, or misuses, language can determine success or failure in interactions with others. For better or worse, we are judged on how we
use language.
39 Subjects: including calculus, chemistry, reading, English
I have been successfully helping students with math and Mandarin for over 10 years. I graduated from Beijing University majoring in International Trading, then finished another degree majoring in Economics with a minor in Math, and was then accepted to UCLA's MBA program. I have experience teaching and tutoring students from age 2 to adults in the subjects of Math and Mandarin Chinese.
7 Subjects: including calculus, Chinese, algebra 1, algebra 2
...I have taken many students with GPAs less than 2.0/4.0 and taken them to 3.5+/4.0 (honor roll). I'm very proud of one student who was consistently failing and getting D's during sophomore year
and earned a full academic scholarship to Syracuse University! Students that show these types of marke...
42 Subjects: including calculus, chemistry, reading, physics
...I will also help you with constructions using compass and straightedge. I am well qualified to tutor prealgebra. I have taught Precalculus many times.
16 Subjects: including calculus, French, geometry, piano
Related Alhambra, CA Tutors
Alhambra, CA Accounting Tutors
Alhambra, CA ACT Tutors
Alhambra, CA Algebra Tutors
Alhambra, CA Algebra 2 Tutors
Alhambra, CA Calculus Tutors
Alhambra, CA Geometry Tutors
Alhambra, CA Math Tutors
Alhambra, CA Prealgebra Tutors
Alhambra, CA Precalculus Tutors
Alhambra, CA SAT Tutors
Alhambra, CA SAT Math Tutors
Alhambra, CA Science Tutors
Alhambra, CA Statistics Tutors
Alhambra, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Alhambra_CA_Calculus_tutors.php","timestamp":"2014-04-18T16:20:13Z","content_type":null,"content_length":"24025","record_id":"<urn:uuid:1a5e6c99-adf8-42b9-bf3d-22008e42f7db>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Polar Coordinate Integration
August 14th 2006, 05:36 PM #1
Aug 2006
Polar Coordinate Integration
Having problems setting integration limits due to symmetry.
I have two questions:
One asks to find the volume of the solid formed by the interior of the circle
r = cos(theta) capped by the plane z = x.
Because x = rcos(theta) and r = cos(theta), we have z = (cos(theta))^2
= (1+ cos2(theta))/2, which is the function we integrate.
The volume we wish to find is the right hand side of the lemniscate which is enclosed inside the circle.
For limits for the dr expression we can integrate from b = 1 to a = 0.
For the d(theta) limits I get confused. I was told to integrate the smallest interval possible and multiply by any symmetry factors. With that said, I can take beta = pi/2 and alpha = 0 and
multiply by two due to symmetry about the x-axis. However, I know this is not the smallest interval as there must be a ray that caps the lemniscate function. I know how to find this ray,
essentially a value for theta, by setting functios r equal to each other and finding that theta. However, I have an expression in terms of r and in terms of z.
My confusion in this next question is much the same as the above:
The question asks to find the volume of the solid based on the interior of the cardioid r = 1 + cos(theta), capped by the cone z = 2 - r.
We have z = 2 - (1 + cos(theta)) = 1 - cos(theta). Essentially another cardioid with the same size but with the cusp pointing in the opposite direction. The graph I get is basically an infinity
sign along the y-axis, that is with rays pi/2 and 3pi/2, and with vertices (1,0) and (-1,0).
Again I get confused by symmetry due to the negative and positive contributions of the polar graph. For the d(theta) limits I believe I can integrate from alpha = 0 to beta = pi/2 and multiply by
four due to symmetry. I get confused for the dr limits as the solid formed by the two cardioids expands over all four quadrants. Do I integrate from a = 0 to b = 2 and multiply by two due to
symmetry? Two is the intersection point of the
1 + cos(theta) cardioid on the x-axis. I really get confused with this question as I imagine you can integrate from 0 to pi/2, pi/2 to pi, pi to 3pi/2 and 3pi/2 to 2pi breaking up the four
quadrants and adding up the volumes of the four regions of the solid.
I would love any hints and any generalizations about symmetry considerations with polar coordinates.
Thank you.
Having problems setting integration limits due to symmetry.
I have two questions:
One asks to find the volume of the solid formed by the interior of the circle
r = cos(theta) capped by the plane z = x.
The region of integration is demonstrated below.
To find the surface area you need to find,
$\int_D \int \sqrt{1+f_x^2+f_y^2} dA$
Given the surface,
$z=x$ we find that,
$f_x=1$ and $f_y=0$,
$\int_D \int \sqrt{2} dA=\sqrt{2}\int_D\int dA$.
$\int_D \int dA$ is the area of the region. Which is,
My confusion in this next question is much the same as the above:
The question asks to find the volume of the solid based on the interior of the cardioid r = 1 + cos(theta), capped by the cone z = 2 - r.
We have z = 2 - (1 + cos(theta)) = 1 - cos(theta). Essentially another cardioid with the same size but with the cusp pointing in the opposite direction. The graph I get is basically an infinity
sign along the y-axis, that is with rays pi/2 and 3pi/2, and with vertices (1,0) and (-1,0).
The region of integration is demonstrated below.
To simplify the problem divide it into to parts. Calculate the right region and then add the left region.
The cone is not $z=2-(x^2+y^2)$ (that is a paraboloid) but $z=2-\sqrt{x^2+y^2}$. Thus, you need to find,
$\int_{A_1} \int \left(2-\sqrt{x^2+y^2}\right) dA +\int_{A_2} \int \left(2-\sqrt{x^2+y^2}\right) dA$; after the substitution $\sqrt{x^2+y^2}=r$ you end up with (and remember to multiply by $r$ for the Jacobian),
$\int_{A_1} \int (2-r)r \, dr \, d\theta +\int_{A_2}\int (2-r)r \, dr \, d\theta$
Now you need to set the limits. Which are,
$\int_{3\pi/2}^{\pi/2} \int_0^{1-\cos \theta} (2-r)r dr d\theta +\int_{\pi/2}^{3\pi/2} \int_0^{1+\cos \theta} (2-r)r dr d\theta$
Polar coordinate Integration
Perfect Hacker,
Thank you for your explanations the question with the cardioids is now clear.
However, I do not understand your reply to my first question. I don't see what the surface area has to do with the solid region inside the circle
r = cos(theta) and the plane z = x.
The polar coordinate integration requires dA = rdrd(theta), which is not used in your explanation.
The graph I obtain has the right hand of the lemniscate inside of the circle.
I used z = x = rcos(theta)
Therefore, z = cos(theta)cos(theta) = cos^2(theta) = (1 + cos(2 theta))/2. This is the function we integrate, right?
So the limits can be from r = 0 to r = cos(theta), right?!
I am not sure about the other limits though; would they be from 0 to pi/2, multiplied by two because of symmetry?
Thanks again.
Polar Coordinate Integration
Perfect Hacker,
I finished doing the question with the two cardioids. Unless I have made some mistakes, working out the integrals for 2r - r^2 was very laborious. I got an answer of -28/9 for the volume of the
solid formed by the two cardioids. Does a negative volume make any sense?
Thank you again.
Forgive me. In the first question I assumed you were speaking of surface area, not volume. I will respond back later.
No, it cannot be negative unless the surface goes below the xy-plane. If you visualize the surface $z=2-\sqrt{x^2+y^2}$, it is above. It is very possible you made a mistake in this problem because
it is a very long computation. It is also possible that I made a mistake in the first integral by writing the angle bounds incorrectly; I shall check that. I believe it might be simpler to express
this as 4 integrals. Divide that region into 4 parts.
I believe the integral should have been,
$<br /> \int_{-\pi/2}^{\pi/2} \int_0^{1-\cos \theta} (2-r)r dr d\theta +\int_{\pi/2}^{3\pi/2} \int_0^{1+\cos \theta} (2-r)r dr d\theta<br />$
Carrying out the inner integration in the first integral, we get,
$\int_{-\pi/2}^{\pi/2} (1-\cos \theta)^2-\frac{1}{3}(1-\cos \theta)^3 d\theta \approx .5388$ (I used software for this part).
On the second integral,
$\int_{\pi/2}^{3\pi/2}(1+\cos \theta)^2-\frac{1}{3}(1+\cos \theta)^3 d\theta\approx .5388$
Your answer is the sum of these two.
I believe your suggestion to use symmetry was a good idea.
The cone, $f(x,y)=2-\sqrt{x^2+y^2}$, is symmetric because $f(-x,-y)=f(x,y)$.
Thus, you could have calculated the right part and then multiplied by two, namely,
$2\int_{\pi/2}^{3\pi/2}(1+\cos \theta)^2-\frac{1}{3}(1+\cos \theta)^3 d\theta$
Last edited by ThePerfectHacker; August 15th 2006 at 08:57 AM.
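The numbers above can be checked numerically. The sketch below (my own, in Python; not from the thread) does the inner r-integration in closed form and the theta-integration with Simpson's rule:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(R):
    # Exact inner integral of (2 - r) * r dr from r = 0 to r = R:
    #   R^2 - R^3 / 3
    return R ** 2 - R ** 3 / 3

# theta in (-pi/2, pi/2): the bound is r = 1 - cos(theta)
part1 = simpson(lambda t: inner(1 - math.cos(t)), -math.pi / 2, math.pi / 2)
# theta in (pi/2, 3*pi/2): the bound is r = 1 + cos(theta)
part2 = simpson(lambda t: inner(1 + math.cos(t)), math.pi / 2, 3 * math.pi / 2)

print(round(part1, 4), round(part2, 4))   # ~ 0.5388 each
print(round(part1 + part2, 4))            # ~ 1.0777 total
```

Each piece comes out to about 0.5388, matching the values quoted above, and the total agrees with the exact value 4*pi/3 - 4 + 8/9 ≈ 1.0777.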
Polar coordinate integration
Perfect Hacker,
Thank you again for the question with the cardioids. The limits you proposed the first time were incorrect. With the new limits you suggested, I got the same volume as you did, by either adding up
the left and the right side, or by computing either one and multiplying by two due to symmetry.
Like I said before, quite a laborious question with all the trigonometric substitutions.
I am confident that I did the other question correctly, the one you misunderstood as surface area instead of volume.
For the d(theta) limits I integrated from 0 to pi/2 and multiplied by two due to symmetry, and for the dr limits, I integrated from 0 to cos(theta). The function I integrated was (1 + cos(2 theta))/2.
Please correct me if I am wrong.
Thanks again.
Seems right. Just let me tell you a story about symmetry: you should not rely on it so much, because it is based on the symmetry of the region (like here) AND the symmetry of the solid above the
region. Once you have both, then you can proceed with symmetry.
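One independent way to check this first volume (a sketch of mine, not from the thread): the disk r = cos(theta) is centered at (1/2, 0) with area pi/4, so by the centroid argument the volume under z = x equals (1/2)(pi/4) = pi/8. The iterated integral reproduces this, provided the integrand is kept as z * r = (r cos(theta)) * r inside the inner integral:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(theta):
    # Inner integral of z * r = (r * cos(theta)) * r dr from 0 to cos(theta):
    #   cos(theta) * cos(theta)^3 / 3 = cos(theta)^4 / 3
    return math.cos(theta) ** 4 / 3

volume = 2 * simpson(inner, 0, math.pi / 2)   # times 2 by symmetry
print(round(volume, 5))         # ~ 0.3927
print(round(math.pi / 8, 5))    # exact value pi/8, for comparison
```

If z is first evaluated on the boundary r = cos(theta) before integrating, the integrand changes and this check fails, so it is a useful guard against that slip.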
|
{"url":"http://mathhelpforum.com/calculus/4915-polar-coordinate-integration.html","timestamp":"2014-04-19T22:19:20Z","content_type":null,"content_length":"67547","record_id":"<urn:uuid:9a4564df-7933-4f9d-b5e2-f5f5efa2363b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 2 Tutors
New Castle, DE 19720
Experienced Math Tutor Available in New Castle!
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-Euclidean geometry. I taught Algebra 2 with a national tutoring chain...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/Newark_DE_algebra_2_tutors.aspx","timestamp":"2014-04-24T04:33:37Z","content_type":null,"content_length":"61798","record_id":"<urn:uuid:2711f6b8-70cf-4764-84df-4ef33915faec>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
March 4, 2011 Alfredo Hubard, CIMS
Title: Space crossing numbers
If a graph with many edges is drawn in the plane, then some of the edges have to intersect. On the other hand (by dimensional considerations), any graph can be drawn in three-dimensional space without two edges crossing. Back in dimension two, mathematicians have wondered about the best way to draw a graph in the plane. There are many answers to this question depending on the context; one of particular interest is the crossing number: what is the smallest number of crossings of any drawing of a given graph? There is a rich theory of crossing numbers, with interesting relations to the engineering of circuits and to geometric measure theory. Our motivating question was: what is the analogue of the crossing number for embeddings in three dimensions? Our answer (almost) recovers the most famous result about (two-dimensional) crossing numbers.
|
{"url":"http://www.cims.nyu.edu/seminars/gsps/current_talks/Hubard_spring11.html","timestamp":"2014-04-16T10:43:33Z","content_type":null,"content_length":"2645","record_id":"<urn:uuid:2ce3c455-be91-4ca7-aa7f-b629c2c92ae7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Create a unique example of dividing a polynomial by a monomial and provide the simplified form. Explain, in complete sentences, the two ways used to simplify this expression and how you would check
your quotient for accuracy.
Hmm ok here's an example, hopefully it will help. \(\large f(x)=x^2+2x\) <--- Polynomial, right? It has multiple "nom..ials" or whatever. \(\large g(x)=x\) <--- Monomial! Dividing a Polynomial by
a Monomial,\[\large \frac{f(x)}{g(x)}\qquad =\qquad \frac{x^2+2x}{x}\]
Two ways to simplify this? Hmm I guess one method would be to use Polynomial Long Division. Another method would be to simply split the problem into fractions like this, \[\large \frac{x^2+2x}{x}
\qquad = \qquad \frac{x^2}{x}+\frac{2x}{x}\]And then simplify them individually. I dunno if that's the two methods that your book would describe :O But whatev.
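The same idea can be mirrored with coefficient lists. The sketch below (my own, with hypothetical helper names) divides x^2 + 2x by x using the split-into-fractions idea term by term, then multiplies back to check the quotient for accuracy, as the question asks:

```python
# Represent a polynomial by its coefficient list, lowest degree first:
#   x^2 + 2x  ->  [0, 2, 1]   (0 + 2x + 1x^2)

def divide_by_monomial(poly, coeff, power):
    """Divide each term of `poly` by the monomial coeff * x**power.

    Assumes exact division: every nonzero term must have degree >= power,
    as in the (x^2 + 2x) / x example."""
    quotient = [0.0] * (len(poly) - power)
    for i, c in enumerate(poly):
        if i < power:
            if c != 0:
                raise ValueError("term %r*x^%d is not divisible" % (c, i))
        else:
            quotient[i - power] = c / coeff
    return quotient

def multiply_by_monomial(poly, coeff, power):
    """Inverse operation, used to check the quotient for accuracy."""
    return [0.0] * power + [c * coeff for c in poly]

f = [0, 2, 1]                      # x^2 + 2x
q = divide_by_monomial(f, 1, 1)    # divide by x
print(q)                           # [2.0, 1.0], i.e. x + 2

# Accuracy check: quotient times divisor should give back the original.
print(multiply_by_monomial(q, 1, 1))   # [0.0, 2.0, 1.0]
```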
|
{"url":"http://openstudy.com/updates/5133edeee4b0034bc1d7e6b7","timestamp":"2014-04-17T06:45:21Z","content_type":null,"content_length":"30664","record_id":"<urn:uuid:c5822c4b-99ef-400d-9da7-9f3ae959e89e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AWT and the quest for hidden dimensions
When Neil Armstrong made his first descent to the Moon's surface during the Apollo 11 mission in 1969, he spoke his famous line: "That's one small step for (a) man, one giant leap for mankind." Aether Wave Theory can become such a dual step in mankind's awareness, too: in the many areas of physics already covered by formal theories, its contribution may remain infinitesimal, whereas in more fundamental areas it suggests virtually a revolution in thinking. Such a revolution may be the understanding of the role and scope of hidden dimensions and of Lorentz symmetry concepts.
As we have demonstrated already with the model of a water surface, the problem of hidden dimensions is tightly connected to the violation of Lorentz symmetry, up to the level that every violation of Lorentz symmetry can be considered a direct manifestation of hidden dimensions. Because Lorentz symmetry for light spreading in vacuum is violated in the presence of every dispersion or refraction phenomenon, it would mean that hidden dimensions are very common in Nature. For example, the hydrogen bonds or repulsive forces between atoms are manifestations of short-distance forces operating in a very high number of dimensions.
How the heck is all this possible? Quite easily: due to symmetry, in AWT energy spreads through every environment under conservation of the integral number of dimensions in only two dual ways: via transversal and longitudinal waves. The forces mediated by these waves follow an inverse power law due to the shielding LeSage/Casimir/Feynman mechanism represented by supergravity, where the power is always lower by one than the number of environment dimensions. In this way, for 3D space the only forces fulfilling the inverse 2nd power law (ISL) should be the Coulomb force mediated by photons and the gravity force mediated by gravitons or gravity waves of infinite scope, i.e. of zero rest mass. Every other force is simply a manifestation of interaction in hidden dimensions and of Lorentz symmetry violation. Note that the Casimir force mechanism mediated by gravity waves corresponds to the Fatio-LeSage mechanism for gravitons and to the transaction absorber theory for virtual photons promoted by Feynman and Wheeler for QED in the 1950s. Easy and trivial, isn't it?
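For what it's worth, the quoted dimension-to-exponent rule matches the standard Gauss-law counting argument, which is ordinary textbook physics rather than anything specific to AWT: in $d$ spatial dimensions the flux of a point source spreads over a $(d-1)$-sphere whose surface area grows as $r^{d-1}$, so

$$\oint_{S^{d-1}}\vec F\cdot d\vec A=\mathrm{const}\quad\Longrightarrow\quad F(r)\propto\frac{1}{r^{\,d-1}},$$

which reduces to the familiar inverse-square law for $d=3$.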
But mainstream physics still persists in a deep inconsistency of thinking. While the world is full of short-distance interactions (from the weak nuclear force and the dual Casimir force spreading in five dimensions, to the strong nuclear force mediated by gluons and the so-called gravitomagnetism ("fifth force") mediated by gravitophotons in four dimensions, and many refraction and dispersive phenomena inside atom orbitals), mainstream physicists still consider these nonlinear forces "special forces", "nonlinearities" and/or "2nd order effects" rather than manifestations of hidden dimensions and ISL violation, despite the fact that with inconsistent thinking they can never reach a consistent conclusion.
As a result, these scientists are spending a lot of money from our taxes on various more or less sagacious searches for "hidden dimensions". Because they know that a violation of Lorentz symmetry would manifest as a violation of the inverse square law and as a non-zero rest mass of photons, one line of experimental evidence is based on thorough tests of ISL for gravity and Coulomb forces (which are unsuccessful so far, especially because theorists are ignoring the Casimir force to this extent). The deep inconsistency in mainstream science thinking manifests in the fact that some of these tests are even interpreted as tests of string theory as well, albeit string theory is based on Lorentz symmetry from its very beginning. Next time we will discuss some techniques by which we can interpret and/or visualize interactions in hidden dimensions by AWT.
24 comments:
El Cid said...
Hi Zephir,
“As we have demonstrated already by model of water surface, the problem of hidden dimensions is tightly connected to violation of Lorentz symmetry up to level, every violation of Lorentz symmetry
can be considered as a direct manifestation of hidden dimensions“
I think Loop Quantum Gravity (LQG) also predicts the Lorentz violation; for example, see this paper, where it is said:
“This in turn yields the effective speed of light (97) which involves two types of corrections. One of them is just that of Gambini and Pullin [9] including the helicity of the photon, whereas
the other depends on the scale L“
But LQG is a nonperturbative quantization (canonical quantization) of any vacuum which is a solution of the Einstein Field Equations (EFE). Nevertheless, we can formulate LQG in any number of dimensions (like the EFE). The problem is that the Lorentz violation can be produced in the framework of LQG for any number of dimensions, and this violation is produced in a geometry of 3 + 1 dimensions as well. Namely, the hidden dimensions are not necessary. So, do you think there is a flaw in LQG?
El Cid said...
Don't forget that the ADD model (by Arkani-Hamed and others) has serious flaws, because this model predicts a neutrino mass that is too big, and also predicts lepton number violation for some particle decay processes. But the idea that the gravity force could change at distances accessible to the LHC, due to the additional dimensions, is so exciting that I don't want to rule out this model yet.
Zephir said...
/*..do you think there is a flaw in LQG..*/
LQG is a later and conceptually more advanced theory than string theory, as it uses general relativity in combination with quantum mechanics instead of special relativity. So it should be slightly more general than string theory, too.
Nevertheless, there still persists a conceptual problem: whether a supposedly rigorous theory which uses relativity can derive the negation of relativity and the violation of the constant-speed-of-light postulate. In my opinion it's impossible to derive the "c ≠ const." result from the "c = const." postulate in a strictly rigorous way. After all, it's known already that quantum gravity leads to the same landscape of fuzzy solutions as string theory, just to a somewhat larger extent. It's an unavoidable consequence of the above paradox, in my opinion.
El Cid said...
“Nevertheless, here still persist a conceptual problem, whether supposedly rigorous theory, which is using a relativity can derive the negation of relativity and violation of constant speed of
light postulate.“
Unfortunately, I am too limited and I can't explain to you how this fact is possible. But we must be very careful here, because when you quantize a field, amazing properties arise; quantum mechanics is very different from classical mechanics, namely the quantum fields are very different from the classical fields. For example, when you quantize the electromagnetic field, which is a continuum wave in the Maxwell theory, you obtain the light quanta, namely the photons. These photons are no longer described by a pure wave model (like in the Maxwell theory); rather, they present a dual behaviour. For example, when the photons interact with matter, their behaviour is closer to that of particles.
I don't think the Lorentz symmetry violation is a problem for LQG, because it could be a consequence of the canonical quantization of the metric (Ashtekar variables). Just the opposite: I think it's rather a scientific prediction that is not made by many models in String Theory (ST) (ST predicts everything and nothing at the same time ;-)). This prediction made by LQG could be tested (or falsified) in the near future (by the Fermi telescope).
The true problem could be that nobody knows how to recover General Relativity from the low energy regime of LQG. Another related problem is the Barbero-Immirzi parameter. This parameter (a real number) must be chosen for technical reasons. But the theories that we recover from the low energy regime could depend on this parameter.
Like you, I prefer LQG instead of ST. ST is not the only game in the city.
Zephir said...
/*.. I can't explain you how this fact is possible.. */
It's quite simple, my dear Watson. LQG, like many other formal theories, consists of many postulates: some of them are mutually inconsistent more, some others less. Every formal model is based on a certain subset of the postulates, as no equation contains them all at the same level. So we can derive equations which are more relevant with respect to Lorentz symmetry violation than others, while neglecting the relativity postulates which prohibit Lorentz symmetry violation.
The problem of LQG is similar to string theory, as it doesn't use its postulates consistently, and I'm not even sure whether some finite set of LQG postulates exists. It is a problem, because without an exact definition of the postulate set it is impossible to distinguish which derivation corresponds to the particular theory and which does not. For me this theory is as vague from this perspective as string theory. Personally I consider these theories mutually dual by their very nature.
Zephir said...
/*..Like you, I prefer LQG instead of ST..*/
In brief, string theory is a theory of particles, while LQG is a theory of vacuum, i.e. a field theory. AWT is completely invariant to these theories, as it describes both by a more general mechanism. So I can see both the common and the dual points of all these theories. While LQG uses general relativity instead of special relativity, such a theory should be more general than string theory, which is special-relativity based; but this is the whole difference.
Each theory brings interesting concepts into physics, and it can describe a particular situation more effectively than other theories, regardless of how general they are.
El Cid said...
I think we can have a theory (a quantum field theory) whose lagrangian is symmetric with respect to some symmetry group, while a vacuum of this theory violates this symmetry. And this theory could remain consistent. One thing is the symmetry of the fundamental equations, and another thing is the symmetry of their solutions. Maybe the Lorentz symmetry is spontaneously broken in LQG. So, LQG could be a consistent theory although it violated the Lorentz symmetry. Anyway, after reading a little about this question, it has been interesting to discover that maybe LQG doesn't violate the Lorentz symmetry after all. Maybe we're talking about something that does not exist yet. For example, someone called Marcus, who seems to know a lot about this matter (much more than me), wrote:
“As it happens, the main form of LQG that has been researched in the past couple of years is Lorentz invariant.“
You also wrote:
“string theory is theory of particles“
I think string theory is a theory of all the things that you want: a theory of particles, a theory of gravity, a theory of dogs and cats, a theory of colours, a theory of strings, a theory of books, a theory of Lubos, a theory of computers... in short, a TOE which is useful for nothing. Or maybe I am wrong; string theory could be useful for developing useful tools in math, and thereby it will be useful to physics in the future.
HAHA... El Cid = Zeph
El Cid said...
what you are reading is what there is, nothing more ...
El Cid said...
HAHA... Anonymous = Robot
Zephir said...
/*..I think string theory is a theory of all things that you want..*/ No doubt, string theory is overhyped. For me it's just one of the quantum field theories; without Brian Greene, no layman would know about it today.
Zephir said...
/* Anonymous = Robot */ a kind of robots, I like...
El Cid said...
“No doubt, string theory is overhyped“
My view about this issue is that Quantum Gravity is a nonrenormalizable theory because there is a nontrivial UV fixed point. Namely, the divergences arise because we aren't working with the exact theory. We must understand and explore the laws of physics that are already known very well before speculating with strings, branes, additional dimensions, other worlds and similar bullshit... I'm afraid that all the string theorists want to be the New Einsteins, my friend ...
El Cid said...
What I want to say is that we must study semiclassical gravity carefully before making speculative models whose predictions cannot even be measured in the near future. In the Wikipedia entry it's said:
“The most important applications of semiclassical gravity are to understand the Hawking radiation of black holes and the generation of random gaussian-distributed perturbations in the theory of cosmic inflation, which is thought to occur at the very beginnings of the big bang.“
These issues are much more reasonable than the strings, branes, multiverse, additional dimensions, flux, landscape ...
And should we see the string theorists as the true geniuses? Maybe they think the rest of the people are fools, don't they? No Sir, no: we ought to be doing science, not science fiction.
Sorry Zephir, but these things upset me.
Zephir said...
Don't worry, string theory has its best years behind it already.
"String theory is like a 50-year old woman trying to camouflage her flaws by wearing way too much lipstick." Robert B. Laughlin, Nobel Laureate
"String theorists .. just celebrated the 20th anniversary of superstring theory. So when one person spends 20 years, it's a waste, but when thousands waste 20 years in modern day, they celebrate with champagne. I find that curious." Sheldon Glashow, Nobel Laureate
"String theorists don't make predictions, they make excuses." Feynman, Nobel Laureate
But not all achievements of string theory should be forgotten from now on. If nothing else, the flaws of string theory enable us to better understand other, more advanced models. We can just try to make this understanding a bit cheaper and less scholastic in the future.
Concerning your preferences, I can agree: we should study observable artifacts first; only then can we speculate about the abstract ones. And we shouldn't judge all string theorists by a few (or even a single one) asocial freaks who cannot understand that the situation has changed. Currently we have a number of alternative models for most of the paradigms of string theory.
El Cid said...
I didn't know that such prominent physicists were so critical of string theory. I'd like to add:
Polchinski (a known string theorist) said:
“I keep hoping that maybe before getting at the underlying equation of string theory we have to solve the problem of high-temperature superconductivity!“
And Philip Anderson of Princeton University, who shared the 1977 Nobel Prize for Physics for his work on electronic structure in magnetic and disordered systems, replied to these statements:
“The last thing we need is string theorists. Anything out there is hype. Superconductivity is an experimental science, and most string theorists have no idea how to understand an experiment because they have never looked at one!“
It is clear. BTW, I'm enjoying reading your blog a lot.
Bollocks! Don't give me that crap.
Zephir said...
Do you believe Philip Anderson, Bob Laughlin, Sheldon Glashow and Feynman are all just spreading crap?
Zephir said...
People don't realize that a plain 3D space should be completely transparent, nonrefracting and empty. At the moment when we observe some inhomogeneities or even particles in it, such a space becomes effectively high-dimensional.
This approach can explain the controversy of the grb090510 photon and reconcile string theory / LQG theory with Lorentz symmetry violation.
Zephir said...
Synopsis: Because 4D space-time is always flat from a 4D perspective and Lorentz symmetry is maintained for it, hidden dimensions manifest as a curvature of flat space-time or as density fluctuations of 3D space, and Lorentz symmetry is always broken there. Therefore the fact that string theory considers hidden dimensions and extra dimensions at the same moment renders it an inconsistent theory, leading to a huge landscape of possible solutions.
The simplest example of extra dimensions is the CMB background and/or gravitational lensing, but hidden dimensions are all around us. In AWT we can imagine a hidden dimension as a nested density fluctuation. Electrons are revolving around atoms independently of the motion of the whole atom, so they're moving in hidden dimensions in the same way as nucleons inside atomic nuclei. Hidden dimensions manifest by violation of the inverse square law; therefore effects like the Casimir force or dark matter (the Pioneer anomaly) are evidence of hidden dimensions as well. Complex interactions between organisms in the life environment and/or people inside human society (love) are interactions in a highly dimensional space-time when observed from a 4D perspective.
Zephir said...
Scientists have found that electrons confined to the surface of a 3-sphere (i.e. to the surface of a 4-D ball) behave in a remarkably similar way to electrons confined to real 3D space.
Zephir said...
For example, everyone knows how strange massive bodies appear: they're composed of "isolated" particles which are moving together in a collective way. Now, try to imagine how the projection of a hyperdimensional object into 3D space would appear. Try to project the shadow of a compact grid onto a solid line: the shadow will become composed of 1D "particles" which are moving collectively. The fact that physicists are using a number of mutually inconsistent theories is a consequence of the situation where they're using a low-dimensional approach for the description of a hyperdimensional reality.
Zephir said...
If the wave function is real, then according to the mathematics, Schrödinger's wave function encodes everything there is to know about a single particle in three dimensions. But things get more complicated very quickly. The wave function for two particles exists in an abstract six-dimensional space; for three particles it exists in nine dimensions, and so on. (1, 2, 3)
Zephir said...
A simple-minded question: do we live in four-dimensional spacetime?
|
{"url":"http://aetherwavetheory.blogspot.cz/2009/04/quest-for-hidden-dimensions.html","timestamp":"2014-04-17T18:41:21Z","content_type":null,"content_length":"129090","record_id":"<urn:uuid:8ee508e1-a088-45e4-91df-a4a381ec0392>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Farmers Branch, TX Geometry Tutor
Find a Farmers Branch, TX Geometry Tutor
I have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy and physics. This also involved
teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring.
15 Subjects: including geometry, chemistry, physics, calculus
...I graduated high school with high honors. In college I completed 40 hours of mathematics to obtain a mathematics degree. I also completed 18 hours of graduate mathematics courses in pursuit of a Master's degree in Mathematics Education.
17 Subjects: including geometry, calculus, statistics, GRE
...I have presented at numerous universities including Yale and Johns Hopkins and at scientific meetings across the country for the last 12 years. I acquired a PhD in genetics at Texas A&M University in 2003; throughout my graduate career I taught 3 semesters of genetics laboratory and tutored undergradu...
...Thank You!I am certified and approved in Elementary Math and Algebra. I am certified and approved in Reading, Writing and English. I am certified and approved for GED prep.
41 Subjects: including geometry, English, reading, writing
...Additionally I am certified as a composite science teacher (8-12), teaching chemistry, physics and biology. Currently I am tutoring many of the college courses that are prerequisites for medical college. I am currently a student with a major in Biology, a minor in health care, and a certificate in teaching composite science for high school.
Related Farmers Branch, TX Tutors
Farmers Branch, TX Accounting Tutors
Farmers Branch, TX ACT Tutors
Farmers Branch, TX Algebra Tutors
Farmers Branch, TX Algebra 2 Tutors
Farmers Branch, TX Calculus Tutors
Farmers Branch, TX Geometry Tutors
Farmers Branch, TX Math Tutors
Farmers Branch, TX Prealgebra Tutors
Farmers Branch, TX Precalculus Tutors
Farmers Branch, TX SAT Tutors
Farmers Branch, TX SAT Math Tutors
Farmers Branch, TX Science Tutors
Farmers Branch, TX Statistics Tutors
Farmers Branch, TX Trigonometry Tutors
Nearby Cities With geometry Tutor
Addison, TX geometry Tutors
Balch Springs, TX geometry Tutors
Bedford, TX geometry Tutors
Carrollton, TX geometry Tutors
Coppell geometry Tutors
Euless geometry Tutors
Flower Mound geometry Tutors
Grapevine, TX geometry Tutors
Highland Park, TX geometry Tutors
Hurst, TX geometry Tutors
Irving, TX geometry Tutors
Parker, TX geometry Tutors
Richardson geometry Tutors
The Colony geometry Tutors
University Park, TX geometry Tutors
|
{"url":"http://www.purplemath.com/Farmers_Branch_TX_geometry_tutors.php","timestamp":"2014-04-20T06:50:14Z","content_type":null,"content_length":"24212","record_id":"<urn:uuid:01280f5b-bc53-4dc1-aabe-6ab0ff91991a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Queen's University Site Art - Articles - Part 2
Understanding outdoor sculpture on campus
Part 2 of a 6 part series: the donut on the pole
Five Sculptures on Topological Themes, aka “the donut on the pole.” Photo by Emily MacLaurin-King.
By Catherine Hale, Contributor
Queen's University - The Journal
Friday September 12, 2003 - Issue 6, Volume 131
As promised, in our ongoing quest to comprehend the various outdoor sculptures scattered about campus, this week we tackle “the donut on the pole.”
Located southeast of Jeffery Hall, “the donut on the pole” is one in a set of sculptures entitled Five Sculptures on Topological Themes, created by Alan Dickson in 1972. Dickson worked as a professor in the department of Fine Art at Queen’s University from 1970 to 1997.
The sculptures are made of terrazzo, Portland cement, marble chips and epoxy. Terrazzo refers to a material usually used in flooring that combines mortar with stone or
marble chips and is polished when dry.
But what do the sculptures mean? Once you understand what a topological theme is, the set of sculptures makes a little more sense. Topology is the study of those properties
of geometric forms that do not change under certain circumstances such as bending or stretching. There are two important rules to remember: you cannot break the original
form, and you cannot join parts of the original form together. Take our “donut on the pole” for example. Even if you managed to stretch it out, it would still have a hole in
its centre, and only one continuous surface. You can’t squish it flat to make more than one surface because that would mean you had joined two originally separate parts.
Think of the donut as a balloon. The different areas of the skin of the balloon should never connect.
Central to Five Sculptures on Topological Themes is the idea of a single continuous surface. The flat circle and square structures, both on pediments, are examples of Mobius
strips. The Mobius strip is a physical structure that is, surprisingly, both three-dimensional and one-sided. If you missed making Mobius strips in school, it’s simple: take
a strip of paper and flip one end over and then attach it to the other end. The resulting form has only one side.
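The one-sidedness of the strip can even be checked numerically. Below is a small sketch (my own illustration, not from the article) using the standard Mobius parametrization: the vector pointing across the strip is followed along the centre line, and after one full trip around it comes back pointing the opposite way, so the two "sides" are really one.

```python
import math

def strip_direction(u):
    # Direction across the strip at angle u along the centre line,
    # from the standard Mobius parametrization (half-twist: u/2).
    return (math.cos(u / 2) * math.cos(u),
            math.cos(u / 2) * math.sin(u),
            math.sin(u / 2))

start = strip_direction(0.0)               # points one way across the strip
after_loop = strip_direction(2 * math.pi)  # same centre-line point, one lap later
# after_loop is the exact negative of start: the strip has only one side
```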
Why are the triangle and pentagon sculptures included? Try and collapse the Mobius strip you just made. When collapsed, the Mobius strip makes shapes identical to the
triangle and pentagon, but despite the fact that the Mobius has been folded, it still retains the property of having only one surface. Remember our second rule: parts of the
form that were originally separate cannot be joined. What we have is not a pentagon or a triangle, but rather, a Mobius strip folded into the shape of a pentagon and a
triangle. The artist has even cleverly included the “fold lines” in his sculptures to make this idea apparent.
So just who might decide to install a set of public sculptures dealing with topological themes and Mobius strips? You guessed it, the mathematics and statistics department
commissioned Five Sculptures on Topological Themes in connection with the building of Jeffery Hall in the early 1970s. At the time, a government program was in place
stipulating that one per cent of construction budgets for public buildings be spent on art. A committee was formed to select works, and artists were asked to respond to the
site with mathematically relevant forms. Alan Dickson took up the challenge, and as a result, we can now contemplate topological concepts on a daily basis.
|
{"url":"http://www.queensu.ca/camplan/siteart/part2.html","timestamp":"2014-04-17T16:30:50Z","content_type":null,"content_length":"6373","record_id":"<urn:uuid:798bcef6-1c06-458f-aa8f-662602edad76>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NVCC COLLEGE-WIDE COURSE CONTENT SUMMARY
EGR 120 - INTRODUCTION TO ENGINEERING (2 CR.)
Introduces the engineering profession, professional concepts, ethics, and responsibility. Reviews hand calculators, number systems, and unit conversions. Introduces the personal computer, operating
systems and processing, engineering problem solving and graphic techniques. Lecture 2 hours per week.
The purpose is to provide the incoming freshman engineering transfer student with probably his/her first exposure to the world of engineering. Here the student will obtain a first impression of what engineering is all about, as well as gaining some skills in basic engineering procedure and calculation, using the hand calculator and a personal computer. A major thrust of the course is to emphasize that the engineer is a team worker who needs strong skills in problem solving and communications.
Competence in algebra and trigonometry as well as English composition skills. Co-requisites for this course are MTH 173 - 'Calculus with Analytic Geometry I' and ENG 111 - 'College Composition I'.
Upon completion of this course, the student should be:
A. familiar with the engineering world of work through readings in the text, lectures, and writing a term paper
B. acquainted with the engineering education process and particularly its realization at NVCC
C. proficient in fundamental engineering calculations using hand calculators and the personal computer, together with the discipline of engineering problem solving
A. Profession of Engineering
1. its history
2. ethics
3. responsibilities
B. Library Research Paper on an engineering career field, engineering problem, or other topics related to the course
C. Use of the Hand Microelectronic Calculator for Computations
D. Use of the Personal Computer in Engineering
1. introduction to the computer and DOS commands
2. introduction to computer algorithm by flowcharting
3. BASIC programming language to solve engineering related problems
E. Graphing Engineering Data to derive empirical equations in two variables:
1. using straight line curve fitting on linear, log-log and semi-log grids
OR using least-squares methods of curve fitting
F. Engineering Accuracy and Significant figures
G. Engineering Problem Solving Methodology
H. Dimensions and Unit Systems: SI and AES
I. Introduction to Two Dimensional Mechanics (statics) OR Fundamental Electric Circuit Theory
EXTRA TOPICS (optional)
A. The Engineering Design Process
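As an illustration of the least-squares line fitting mentioned in item E of the outline, here is a minimal sketch. It is not part of the course materials (the course uses BASIC); Python is used here for brevity, and the data points are invented for the example.

```python
def least_squares(xs, ys):
    # Fit y = m*x + b by ordinary least squares using the closed-form
    # normal-equation solution for a single independent variable.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Points that lie exactly on y = 2x + 1, so the fit recovers m=2, b=1
m, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```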
Revised 01/90
|
{"url":"http://www.nvcc.edu/academic/coursecont/summaries/EGR120.htm","timestamp":"2014-04-18T08:17:01Z","content_type":null,"content_length":"6962","record_id":"<urn:uuid:26ec6f5e-aea2-4cd9-94ec-cefefa4b4c8f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"One more than half a number (x) equals (y)." One page with four short input/output tables.
"7 times a number (x) equals (y)." One page with four short input/output tables.
"5 more than a number (x) less 2 equals (y)." One page with four short input/output tables.
"2 less than 5 times a number (x) equals (y)." One page with four short input/output tables.
"A number (x) divided by 3 equals (y)." One page with four short input/output tables.
"7 less than a number (x) equals (y)." One page with four short input/output tables.
"10 more than half a number (x) equals (y)." One page with four short input/output tables.
• Blank charts for practicing functions. Write the rule in the blank, write numbers for practice in the top row, and write the results of the equations in the bottom row.
[member created with abctools] Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
"4 added to a number (x) equals (y)." One page with four short input/output tables. Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
A poster explaining the rules of division of negative numbers.
"6 less than 5 times a number (x) equals (y)." One page with four short input/output tables.
[member created with abctools] Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
[member-created with abctools] Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
Use Euler's formula to determine the number of edges of a simply connected polyhedron such as a soccer ball. Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
x + 3 = 6, x = ?. Answers up to 10; 5 pages; 15 problems per page. Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
Convert eight word problems to linear equations and solve them. Detailed answers provided. Common Core Math: 5.OA.1
Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
Rules for solving linear equations and then 24 questions to practice the skill.
"3 more than 3 times a number (x) equals (y)." One page with four short input/output tables. Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
Converting word problems into equations and solving the equations. Common Core Math: 5.OA.1
Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
• [member-created with abctools] Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
• x + 3 = 13, x = ?. Answers up to 10; 5 pages; 15 problems per page. Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
• [member created with abctools] Common Core_Math_6.EE.A.1, 6.EE.A.2, 6.EE.A.3,
Colorful posters present the rules for determining perimeter and area for regular polygons including triangles, and quadrilaterals including squares, rectangles, and parallelograms. Common Core:
Geometry 6.G.A1, 5.G.B3, 4.MD.3
"6 – 5(4^2 – 2 • 4) + (3 • 4)". Apply the rules of operation order to these eight algebraic equations. Common Core Math: 5.OA.1, Common Core_Math_6.EE.A.1
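One entry in the listing above applies Euler's formula (V − E + F = 2) to a soccer ball. As a quick worked check of that exercise (the face counts below are the standard values for a truncated icosahedron, not taken from the worksheet):

```python
pent, hexa = 12, 20          # a soccer ball has 12 pentagonal and 20 hexagonal faces
F = pent + hexa              # 32 faces
E = (pent * 5 + hexa * 6) // 2   # each edge is shared by two faces -> 90 edges
V = (pent * 5 + hexa * 6) // 3   # each vertex is shared by three faces -> 60 vertices

# Euler's formula for a simply connected polyhedron
assert V - E + F == 2
```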
|
{"url":"http://www.abcteach.com/directory/middle-school-junior-high-math-algebra-3310-8-1","timestamp":"2014-04-16T16:30:14Z","content_type":null,"content_length":"105693","record_id":"<urn:uuid:a985e31a-d73e-4ce6-bf3a-e9d6e78be50a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [Discuss-gnuradio] Frequency resolution of CORDIC algorithm
From: Trond Danielsen
Subject: Re: [Discuss-gnuradio] Frequency resolution of CORDIC algorithm
Date: Thu, 28 Jun 2007 18:30:22 +0200
2007/6/28, Brian Padalino <address@hidden>:
On 6/28/07, Trond Danielsen <address@hidden> wrote:
> Hi,
> after having read several papers on the subject, I am still not able
> to find the answer I am looking for. I wonder how to calculate the
> frequency resolution of the CORDIC algorithm. In an earlier post to
> this mailing list it was stated that the resolution is approximately
> 0.01 Hz. Could anyone point me to where I can find a derivation of this
> result?
This paper is really good for understanding the CORDIC:
It is specifically written to look at FPGA implementations, which is nice.
As I understand it, the USRP uses the CORDIC as described in section
3.1 of that paper. A phase accumulator is used to spin the angle
around, and the modulated sin/cos or xi is the output on xo and yo
after 12 iterations of the algorithm. The value of zo should be zero,
and any error leftover should be represented on that output.
The resolution should really be how slowly you can spin the zi
component while maintaining accuracy out of the CORDIC. It may be
that with 12 iterations and 16-bit inputs 0.01 Hz is possible, whereas
more iterations or larger inputs might get better resolution, but I
suspect you're really past the point of diminishing returns at that
point.

Is that helpful?
Thank you a lot for your reply. I have already read the mentioned
paper, and found it useful.
Trond Danielsen
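For what it's worth, the ~0.01 Hz figure quoted earlier is consistent with plain phase-accumulator arithmetic: the tuning resolution of an NCO is f_clk / 2^N for an N-bit accumulator. The clock rate and accumulator width below are assumptions on my part (64 MHz and 32 bits, commonly cited for the USRP), not values confirmed in this thread.

```python
f_clk = 64e6      # assumed NCO clock rate in Hz (USRP figure; an assumption)
acc_bits = 32     # assumed phase-accumulator width in bits

# Smallest tuning step: one LSB of phase increment per clock cycle
resolution_hz = f_clk / 2 ** acc_bits
print(resolution_hz)   # roughly 0.0149 Hz, i.e. on the order of 0.01 Hz
```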
|
{"url":"http://lists.gnu.org/archive/html/discuss-gnuradio/2007-06/msg00303.html","timestamp":"2014-04-17T10:42:12Z","content_type":null,"content_length":"7503","record_id":"<urn:uuid:6469f0b4-52d6-4566-afec-dcffdc9ff1dc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advanced Functions help...Describe how....
April 5th 2013, 11:15 AM #1
Feb 2013
Advanced Functions help...Describe how....
If anyone could help I'd really appreciate it.
Describe how y= f(x) is transformed by each of the following:
a) y= -f(x+2) - 5
b) y= f(-x) + 5
Re: Advanced Functions help...Describe how....
Any good algebra textbook should have a table summarizing function transformations. Here's a table that I got from this website:
I'll help you get started. If you have $y=-f(x+2)-5$ , identify the things that are going on. For instance, there's a negative sign in front of the f(x). According to the table, that's a
reflection of the function about the x-axis. Continue that methodology for +2 inside the function notation, and the -5 outside of the function notation.
Last edited by semouey161; April 5th 2013 at 11:54 AM.
Re: Advanced Functions help...Describe how....
on top of what semouey161 said, when given questions where you have to APPLY transformations to functions, it's important to do them in a certain order (stretches/compressions first, then
reflection (which is basically a special type of stretch), and then translations).
For the questions that you posted above, it helps to remember that some of the "modifiers" on f(x) must be interpreted in an in unintuitive way. By that I mean the +2 in part A might make you
think +2 to the right since we tend to associate positive with right, but actually +2 means to move 2 to the left. -5 means to move 5 down, but to keep yourself from mixing up which direction of
translations should be intuitive/unintuitive, add +5 to both sides so:
y+5 = -f(x+2), then you can use the "unintuitive trick" to determine the direction of the transformations (notice how the vertical translation +5 goes with the y, and the vertical axis is the
y-axis. similarly the horizontal translation +2 goes with the x, and the x-axis is the horizontal axis).
Don't worry about the minus sign messing this up.
y+5 = -f(x+2)
-y-5 = f(x+2)
-y = f(x+2) +5
y = -f(x+2) - 5
Rearranging the variables and multiplying both sides by -1 doesn't change the equation or the transformations it describes.
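A quick numeric sanity check of the transformations discussed above, using an arbitrary sample f (my own example, not from the thread):

```python
def f(x):
    return x ** 2            # any sample function works for the check

def g(x):
    return -f(x + 2) - 5     # part a): reflect in the x-axis, left 2, down 5

# Take the point (1, 1) on f. Shift left 2: x becomes -1.
# Reflect about the x-axis: y = 1 becomes -1. Shift down 5: y = -6.
assert g(-1) == -6

def h(x):
    return f(-x) + 5         # part b): reflect in the y-axis, up 5

# The point (1, 1) on f maps to (-1, 1), then up 5 to (-1, 6).
assert h(-1) == 6
```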
(Post #2 posted April 5th 2013, 11:50 AM, by a member from Los Angeles joined Mar 2013. Post #3 posted April 5th 2013, 01:40 PM, by a Junior Member joined Apr 2013.)
|
{"url":"http://mathhelpforum.com/algebra/216742-advanced-functions-help-describe-how.html","timestamp":"2014-04-17T04:29:22Z","content_type":null,"content_length":"35997","record_id":"<urn:uuid:20c4edb3-d624-4c16-9c4a-c82df5efe02a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Market Garden
All Forums >> [New Releases from Matrix Games] >> Advanced Tactics Series >> Mods and Scenarios >> Market Garden
10/26/2011 8:45:38 AM
Grymme: Well, my Vietnam project is almost finished, so I have started a little on my new project. If everything goes to plan it will be a huge Market Garden scenario.

Posts: 1821
Joined: 12/16/2007
Status: offline

The main challenge for this scenario will be scale. It will probably be the closest to a tactical scenario made so far for ATG (at least as far as I am aware). A hex will be 600m across. Rounds will be 3 hours long (5 hours for night rounds) and units will be on a company/troop level.

Since units will be on a company level, this means that 1 infantry SFT will be something like 2-5 men; as for artillery and vehicles, 1 SFT = 2 vehicles/artillery tubes.
The allied divisional units involved were 1st Abn, 82nd Abn, 101st Abn, Gds Arm, 43rd and 50th. Also plenty of brigades. Given the scale this would mean that, for example, the British 1st Airborne Division (the Red Devils) would have something like 90 units with 2000 infantry SFTs, 12 artillery SFTs, 34 antitank guns, and a lot of jeeps and other vehicles. So, a lot of units for either side.

Since 1 hex = 600 meters, fighting from 1 hex to another covers a combat distance of 1200 meters. This means that some mortars, for example, might have to be given artillery capacity. It also means that even a paltry light 75mm pack howitzer will have an artillery range of 10 hexes. But I think that most units will not need to have artillery values, since effective combat range in WWII was rarely beyond 1200 meters even though some of the weapons' technical specifications may say so.

So far I have made the bulk of the map. The map is 200 hexes high and 60 hexes wide at its widest. So plenty of map to go around. See screenshot. Graphics are not the final ones. I am satisfied with most graphics, but candidates for change are urban, suburban, and mixed terrain. Roads will be divided into Primary roads, Secondary roads, Railroads and Trails.
Attachment (1)
< Message edited by Grymme -- 10/26/2011 9:59:05 AM >
My Advanced Tactics Mod page
30+ scenarios, maps and mods for AT and AT:G
Post #: 1
10/26/2011 8:56:27 AM
Grymme: This scenario will not be so heavy with events. But there will be some, mainly to simulate battlefield conditions etc.

Possible events will simulate:
- Night rounds and night fighting.
- Unit fatigue from fighting.
- Supply drop zones and supply for airborne troops
- Artillery ammunition supplies
- Rules for unlimbering and limbering artillery
- Airlandings
- Repair and destruction of bridges through the major rivers.
- Crossing rivers in boats.
- Historical leaders
So far I have been working a little on the structure of the day event. The campaign scenario will last 10 days. Each day will be divided into 7 rounds: 06:00, 09:00, 12:00, 15:00, 18:00, 21:00 (night round) and 01:00 (night round). Weather, which will only affect airlandings, will be determined twice each day and will be separate for airplanes using the Northern or Southern air approach route.
Airlanding rules will factor in the following:
- Weather
- If the northern or southern approach route is taken
- What time of day the airlanding is attempted
- Units can scatter off the designated dropzone
- Units can take losses depending on the above factors
- Flak in the area
- What kind of terrain the unit lands in (impassable terrain, urban and river are not the preferred terrain to land in)

Possibly other factors as well. I have only the skeleton of the event written so far, but I have an idea for the rest.
Here is the skeleton for the airlanding event.
0) CHECK: TempVar0 == 5
1) SETVAR: TempVar11 = 0
2) LOOPER: TempVar12 FROM 0 TO 4
3) SETVAR: TempVar10 = CheckRandomPercent
4) SETVAR: TempVar10 + Gameslot_S R weather(#7)
5) CHECK: TempVar10 > 50
6) CHECK: TempVar10 =< 70
7) SETVAR: TempVar2 - 1
8) SETVAR: TempVar3 - 1
9) CHECK: CheckRandomPercent < 51
10) SETVAR: TempVar2 + 2
11) END CHECK
12) CHECK: CheckRandomPercent < 51
13) SETVAR: TempVar3 + 2
14) END CHECK
15) END CHECK
16) END CHECK
17) CHECK: TempVar10 > 71
18) CHECK: TempVar10 =< 80
19) SETVAR: TempVar2 - 2
20) SETVAR: TempVar3 - 2
21) CHECK: CheckRandomPercent < 51
22) SETVAR: TempVar3 + 4
23) END CHECK
24) CHECK: CheckRandomPercent < 51
25) SETVAR: TempVar2 + 4
26) END CHECK
27) END CHECK
28) END CHECK
29) CHECK: TempVar11 == 0
30) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
31) END CHECK
32) CHECK: TempVar11 == 2
33) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
34) END CHECK
35) CHECK: TempVar11 == 1
36) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
37) END CHECK
38) CHECK: TempVar11 == 3
39) EXECUTE: ExecAddUnit(6, TempVar2, TempVar3, 0)
40) END CHECK
41) SETVAR: TempVar11 + 1
42) END LOOPER
43) EXECUTE: ExecGiveActionCard(2, 0)
44) END CHECK
Screenshot shows the Arnhem area.
Attachment (1)
< Message edited by Grymme -- 10/26/2011 9:18:40 AM >
Post #: 2
10/26/2011 9:14:03 AM
Grymme: I have started working a little on the units. So far I have made 5 or so different predefined units, all for the Red Devils.

Equipment will be historical whenever possible, and I have also decided to have separate NATO counters for units belonging to Parachute Brigades, Airlanding Brigades and Rifle Brigades. This will mean a lot of extra work and might not have a huge impact on the game. But it sure looks purdy.

The unit shown is the A Company of the 1st Parachute Battalion of the 1st Parachute Brigade of the 1st Airborne Division. Below is a 75mm howitzer troop from the 1st Airlanding Light Regiment.
Attachment (1)
< Message edited by Grymme -- 10/26/2011 9:34:46 AM >
Post #: 3
10/26/2011 9:50:48 AM
As for scenarios etc, I was thinking something like this:
- 3 "smaller" scenarios, one for each of the 3 airborne divisions
- 1 "smaller" scenario for the breakout of the XXX Corps
- 1 grand campaign scenario covering the whole thing

For a total of 5 scenarios. Of these, 1-2 could be directly downloadable.

Smaller is really a misnomer for those scenarios, since each of them would probably involve something like 100 units on each side and each small scenario would last something like 15 rounds.

Here is a screenshot to show the number of units compared to the scale of the map. The screenshot shows the units of 1 Parachute Brigade defending the landing zone west of Arnhem. The weird-looking NATO counters are airlanded troops with 6pnd Antitank Guns.
Attachment (1)
My Advanced Tactics Mod page
30+ scenarios, maps and mods for AT and AT:G
Post #: 4
10/26/2011 5:11:32 PM
Grymme: Have created some more units, so I set out to recreate the entire 1st Airborne Division and place it, just to check out how it works and how strong it is.

It's 76 units with the following number of SFTs:
1593 Riflemen
132 Sten Submachineguns
218 Piat Antitank Guns
388 Bren Machineguns
117 2inch Mortars
72 3inch Mortars
78 Engineers
184 Staff
56 Gun Crews
44 6pnd Antitank Guns
12 75mm Pack Howitzers
96 Jeeps
Here is a screenshot showing the whole thing:
- Black units are divisional units including
Major Gough's Motorized 1st Reconnaissance Squadron
1st Light Artillery Regiment
Engineer Companies
Glider pilots
A lot of separate companies etc
- Yellow units are the 1st Airlanding Brigade
- Blue units are the 1st Parachute Brigade
- Green units are the 4th Parachute Brigade
Attachment (1)
< Message edited by Grymme -- 10/26/2011 5:38:59 PM >
Post #: 5
10/26/2011 8:16:32 PM
Jeffrey H.: Wow! Awesome!

Posts: 2779
Joined: 4/13/2007
From: San Diego, Ca.
Status: offline

I can tell you that for me at least the NATO-style counters built into the scenario, along with the specific historical units and timetables, make for a very immersive experience.

I find the scattering aspects particularly interesting.

"Games lubricate the body and the mind" Ben Franklin.
Post #: 6
10/27/2011 6:15:28 PM
GrumpyMel: Grymme,

I don't know where you find the time/energy.

Posts: 786
Joined: 12/28/2007
Status: offline

Looks amazing though. It'll be interesting to see how you deal with the scale issues... when you get that low you're really starting to deal with tactical combat... although I'm guessing you will abstract those details, as AT doesn't really deal with things like LOS or some other tactical considerations.
Post #: 7
10/27/2011 8:03:09 PM
Jeffrey H.: Which does bring up an interesting sidebar discussion: what would you consider to be the smallest possible map scale for a WWII-style ATG scenario?

(in reply to GrumpyMel)
Post #: 8
10/28/2011 9:54:45 AM
Grymme: I don't have a life. No, seriously. I think it's fun, it's a hobby. Otherwise I wouldn't be doing it. My goal is to do a scenario on all wars, theaters, boardgames etc for the game. Obviously a long-time project.

As for the scale: I do think the scale for this scenario touches around the lowest scale possible if you want to make a realistic scenario. Mainly because of LOS issues, and because it would be really boring with a scenario where every unit had an artillery range. I do not think LOS will be an issue in this scenario though, since very few weapons other than artillery had an effective range of more than 1200 meters (2 hexes next to each other). And those weapons that do have an effective range beyond that are indirect-fire weapons, so they do not need line of sight. I could theoretically see certain AFVs and antitank guns engaging at 1200 meters and beyond, but I think that is such a rare occurrence that it could be pretty much ignored.

Let's say the scale were 150m or 300m. Then you would have a problem with LOS, because machineguns and infantry guns and tanks would all need artillery capacity even though they are direct-fire weapons.
Post #: 9
10/30/2011 5:15:43 PM
Jay Doubleyou: Finally, Operation Market Garden!
Glad you are filling this gap in the WW2 scenarios.
It was a very important WW2 operation and I think an interesting one to play with ATG.
Posts: 245
Joined: 8/12/2004
Status: offline
Post #: 10
10/30/2011 8:30:13 PM
Grymme: Jay, thanks. Well, it will be fun to make a mod on a somewhat popular topic for a change. I really, really liked the Vietnam mod, but I was realistic enough to realize that it wouldn't be the most popular scenario that people would want (not to speak of making mods of strange wars in Central America). Interestingly though, that particular mod seems to have had some devoted people. I have had several people email me who bought the computer game just to buy the Vietnam mod.

Now I was getting off topic. The reason I came to a Market Garden scenario is that the same guys who made the Vietnam boardgame (Victory Games) also made a Market Garden boardgame (Hells Highway). Since I really like Victory Games boardgames and would like to make mods of all their boardgames, I started looking into it. In the end I felt their scale was wrong for the subject. If you are fighting Market Garden you really want to feel the fighting from house to house in Arnhem. So in the end I chose to use the scale and look to the mechanics in the SPI monster boardgame Highway to the Reich instead.

Anyway, this is a long-winded way to say that the boardgame inspiring this mod will be SPI/DG Highway to the Reich. It will probably not follow the boardgame as closely as Vietnam did. But I am quite sure I can achieve a cool mod anyway.
Anyway, I am still working on the event to drop the paratroopers.
The skeleton of the Airdrop event has grown a little.
First here are the rules written as from the briefing
Air landing rules
Airlanding is attempted by playing the appropriate action card and choosing the hex where you want the unit to land.
Each time you try and land a unit there is a chance of each subunit scattering and landing in another hex than the one intended.
Scatter is determined by a random 100 dice roll. To the value is added the weather value of the route chosen. Weather changes twice each day and is different if you choose the northern approach route or the southern approach route.
A roll of
50 or less means that the unit does not scatter
51-70 unit scatters +/- 1 hex in either direction
71-80 there is a 50 % chance unit scatters in +/-2 hexes in either direction
After scatter is resolved there is a chance that the unit takes losses during the landing.
The chance and size of losses depends on what type of landscape the unit lands in.
Terrain type...............................Percentage of loss in unit
Mixed terrain..............................0-10%
Light forest................................0-20%
Heavy forest..............................10-30%
River..........................................Unit automatically lost
Impassable..................................Unit automatically lost
The number of units landing on a single hex also affects losses. It is unwise to land more units in the exact same hex, since losses are rolled for all units in a hex each time a unit lands there.

After scatter and losses are determined, units will suffer an initial readiness penalty. Artillery units will always suffer a 100% readiness penalty. For other units the readiness penalty is calculated as follows: (0.5*50 + 0.25*25 + 0.25*25 + 0.25*25).
Landing units will only have 25AP left.
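Paraphrased outside the event-script language, the scatter bands above work roughly like this. This is only an illustrative Python sketch of the stated rules; the event script quoted below is the authoritative version (rolls above 80 fall through without extra scatter there, which the sketch mirrors).

```python
import random

def scatter_hex(x, y, weather_mod):
    # Landing hex for a drop aimed at (x, y). weather_mod is the weather
    # value of the chosen approach route, added to a 1-100 roll.
    roll = random.randint(1, 100) + weather_mod
    if 51 <= roll <= 70:                 # scatter +/- 1 hex on each axis
        x += random.choice((-1, 1))
        y += random.choice((-1, 1))
    elif 71 <= roll <= 80:               # scatter +/- 2 hexes on each axis
        x += random.choice((-2, 2))
        y += random.choice((-2, 2))
    return x, y                          # roll <= 50 (or > 80): on target
```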
Here are the current rules for air landing written in code form
0) SETVAR: TempVar11 = 0
1) ' Decide how many units drop
2) CHECK: TempVar0 == 5
3) SETVAR: TempVar17 = 5
4) END CHECK
5) CHECK: TempVar0 == 0
6) SETVAR: TempVar17 = 5
7) END CHECK
8) CHECK: TempVar0 == 1
9) SETVAR: TempVar17 = 6
10) END CHECK
11) CHECK: TempVar0 == 2
12) SETVAR: TempVar17 = 3
13) END CHECK
14) CHECK: TempVar0 == 3
15) SETVAR: TempVar17 = 6
16) END CHECK
17) CHECK: TempVar0 == 4
18) SETVAR: TempVar17 = 4
19) END CHECK
20) CHECK: TempVar0 == 6
21) SETVAR: TempVar17 = 4
22) END CHECK
23) LOOPER: TempVar12 FROM 0 TO TempVar17
24) SETVAR: TempVar10 = CheckRandomPercent
25) ' adjust for flight route chosen
26) CHECK: Gameslot_Use N Route(#10) == 1
27) SETVAR: TempVar10 + Gameslot_N R Weather(#8)
28) END CHECK
29) CHECK: Gameslot_Use S Route(#11) == 1
30) SETVAR: TempVar10 + Gameslot_S R weather(#7)
31) END CHECK
32) CHECK: TempVar10 > 50
33) CHECK: TempVar10 =< 70
34) SETVAR: TempVar2 - 1
35) SETVAR: TempVar3 - 1
36) CHECK: CheckRandomPercent < 51
37) SETVAR: TempVar2 + 2
38) END CHECK
39) CHECK: CheckRandomPercent < 51
40) SETVAR: TempVar3 + 2
41) END CHECK
42) END CHECK
43) END CHECK
44) CHECK: TempVar10 > 71
45) CHECK: TempVar10 =< 80
46) SETVAR: TempVar2 - 2
47) SETVAR: TempVar3 - 2
48) CHECK: CheckRandomPercent < 51
49) SETVAR: TempVar3 + 4
50) END CHECK
51) CHECK: CheckRandomPercent < 51
52) SETVAR: TempVar2 + 4
53) END CHECK
54) END CHECK
55) END CHECK
56) ' Losses because of landscape
57) SETVAR: TempVar16 = 5
58) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 0
59) SETVAR: TempVar16 = 20
60) END CHECK
61) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 2
62) SETVAR: TempVar16 = 10
63) END CHECK
64) CHECK: CheckLandscapeType(TempVar2, TempVar3) => 0
65) CHECK: CheckLandscapeType(TempVar2, TempVar3) =< 9
66) SETVAR: TempVar15 = CheckRandomPercent
67) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 3
68) SETVAR: TempVar15 + 50
69) END CHECK
70) CHECK: CheckLandscapeType(TempVar2, TempVar3) => 6
71) CHECK: CheckLandscapeType(TempVar2, TempVar3) =< 7
72) SETVAR: TempVar15 + 100
73) END CHECK
74) END CHECK
75) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 9
76) SETVAR: TempVar15 + 100
77) END CHECK
78) END CHECK
79) END CHECK
80) SETVAR: TempVar15 / TempVar16
81) ' adding units
82) CHECK: TempVar0 == 5
83) CHECK: TempVar11 == 0
84) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
85) END CHECK
86) CHECK: TempVar11 == 2
87) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
88) END CHECK
89) CHECK: TempVar11 == 1
90) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
91) END CHECK
92) CHECK: TempVar11 == 3
93) EXECUTE: ExecAddUnit(6, TempVar2, TempVar3, 0)
94) END CHECK
95) END CHECK
96) CHECK: TempVar0 == 4
97) CHECK: TempVar11 == 0
98) EXECUTE: ExecAddUnit(9, TempVar2, TempVar3, 0)
99) END CHECK
100) CHECK: TempVar11 == 1
101) EXECUTE: ExecAddUnit(9, TempVar2, TempVar3, 0)
102) END CHECK
103) CHECK: TempVar11 == 2
104) EXECUTE: ExecAddUnit(9, TempVar2, TempVar3, 0)
105) END CHECK
106) CHECK: TempVar11 == 3
107) EXECUTE: ExecAddUnit(12, TempVar2, TempVar3, 0)
108) END CHECK
109) CHECK: TempVar11 == 4
110) EXECUTE: ExecAddUnit(18, TempVar2, TempVar3, 0)
111) END CHECK
112) END CHECK
113) CHECK: TempVar0 == 0
114) CHECK: TempVar11 == 0
115) EXECUTE: ExecAddUnit(15, TempVar2, TempVar3, 0)
116) END CHECK
117) CHECK: TempVar11 > 0
118) CHECK: TempVar11 =< 3
119) EXECUTE: ExecAddUnit(17, TempVar2, TempVar3, 0)
120) END CHECK
121) END CHECK
122) CHECK: TempVar11 == 4
123) EXECUTE: ExecAddUnit(5, TempVar2, TempVar3, 0)
124) END CHECK
125) CHECK: TempVar11 == 5
126) EXECUTE: ExecAddUnit(11, TempVar2, TempVar3, 0)
127) END CHECK
128) END CHECK
129) CHECK: TempVar0 == 1
130) CHECK: TempVar11 > 0
131) CHECK: TempVar11 =< 6
132) EXECUTE: ExecAddUnit(8, TempVar2, TempVar3, 0)
133) END CHECK
134) END CHECK
135) CHECK: TempVar11 == 6
136) EXECUTE: ExecAddUnit(18, TempVar2, TempVar3, 0)
137) END CHECK
138) END CHECK
139) CHECK: TempVar0 == 2
140) CHECK: TempVar11 > 0
141) EXECUTE: ExecAddUnit(16, TempVar2, TempVar3, 0)
142) END CHECK
143) END CHECK
144) CHECK: TempVar0 == 3
145) CHECK: TempVar11 == 0
146) EXECUTE: ExecAddUnit(15, TempVar2, TempVar3, 0)
147) END CHECK
148) CHECK: TempVar11 > 0
149) CHECK: TempVar11 < 4
150) EXECUTE: ExecAddUnit(10, TempVar2, TempVar3, 0)
151) END CHECK
152) END CHECK
153) CHECK: TempVar11 == 4
154) EXECUTE: ExecAddUnit(18, TempVar2, TempVar3, 0)
155) END CHECK
156) CHECK: TempVar11 == 5
157) EXECUTE: ExecAddUnit(7, TempVar2, TempVar3, 0)
158) END CHECK
159) END CHECK
160) CHECK: TempVar0 == 6
161) CHECK: TempVar11 == 0
162) EXECUTE: ExecAddUnit(15, TempVar2, TempVar3, 0)
163) END CHECK
164) CHECK: TempVar11 > 0
165) CHECK: TempVar11 < 4
166) EXECUTE: ExecAddUnit(10, TempVar2, TempVar3, 0)
167) END CHECK
168) END CHECK
169) CHECK: TempVar11 == 4
170) EXECUTE: ExecAddUnit(18, TempVar2, TempVar3, 0)
171) END CHECK
172) END CHECK
173) EXECUTE: ExecRemoveTroops(TempVar2, TempVar3, 0, TempVar15)
174) EXECUTE: ExecRemoveTroops(TempVar2, TempVar3, 1, TempVar15)
175) EXECUTE: ExecRemoveTroops(TempVar2, TempVar3, 2, TempVar15)
176) EXECUTE: ExecRemoveTroops(TempVar2, TempVar3, 3, TempVar15)
177) ' Readiness & AP of landed units
178) SETVAR: TempVar15 = CheckTotalUnits
179) CHECK: CheckRandomPercent < 50
180) EXECUTE: ExecUnitRdnModify(TempVar15, -50, -1, -1)
181) END CHECK
182) CHECK: CheckRandomPercent < 50
183) EXECUTE: ExecUnitRdnModify(TempVar15, -25, -1, -1)
184) END CHECK
185) CHECK: CheckRandomPercent < 50
186) EXECUTE: ExecUnitRdnModify(TempVar15, -25, -1, -1)
187) END CHECK
188) CHECK: CheckRandomPercent < 50
189) EXECUTE: ExecUnitRdnModify(TempVar15, -25, -1, -1)
190) END CHECK
191) CHECK: CheckUnitSFType(TempVar15, 2, -1) > 0
192) EXECUTE: ExecUnitRdnModify(TempVar15, -75, -1, -1)
193) END CHECK
194) EXECUTE: ExecUnitApModify(TempVar15, -75, -1, -1)
195) ' remove losses and destroy units in impassable terrain
196) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 1
197) SETVAR: TempVar14 = CheckTotalUnits
198) EXECUTE: ExecMessage(0, 2, -1, -1)
199) EXECUTE: ExecRemoveunit(TempVar14)
200) END CHECK
201) CHECK: CheckLandscapeType(TempVar2, TempVar3) == 50
202) SETVAR: TempVar14 = CheckTotalUnits
203) EXECUTE: ExecMessage(0, 2, -1, -1)
204) EXECUTE: ExecRemoveunit(TempVar14)
205) END CHECK
206) SETVAR: TempVar11 + 1
207) END LOOPER
208) END CHECK
Finally, the screenshot shows units during a night round.
Attachment (1)
My Advanced Tactics Mod page
30+ scenarios, maps and mods for AT and AT:G
(in reply to Jay Doubleyou)
Post #: 11
11/11/2011 8:24:36 AM
Grymme (Posts: 1821, Joined: 12/16/2007):
Unfortunately I am having some issues with this mod, mainly finding some of the OOB information I want.
I have all the info I need for the Airborne units. What I need is some placement and reinforcement information for the German and Allied ground forces present.
I am modelling the scenario after the boardgame Highway to the Reich and using reinforcement tables etc. from that boardgame. Unfortunately I am missing the above organizational charts. I have asked for them on boardgamegeek, but so far no result.
In any case, if anyone has the Highway to the Reich boardgame and would be willing to share those organization tables, feel free to send me a PM.
I haven't completely stopped working on the mod. I am doing some small bits and bobs on it and working on fixing some old projects until I know whether this project can be done or not.
Btw, the last months I have received a couple of the largest donations so far. I won't publish any names, but I am very thankful to people who give something back (sometimes even with no strings attached) for my work.
Screenshot shows a German medium artillery battery. 13 hexes artillery range.
Attachment (1)
Post #: 12
11/11/2011 8:06:25 PM
Jeffrey H. (Posts: 2779, Joined: 4/13/2007, From: San Diego, Ca.):
quote: ORIGINAL: Grymme: "Unfortunatly i am having some issues with this mod. Mainly to find some of the OOB information i want. [...]"
I have a friend who has the game, however he's living in a different city up North, probably about 6 hours driving time.
Have you tried Consimworld?
Another idea: what about inbound aircraft losses due to AAA?
"Games lubricate the body and the mind" Ben Franklin.
Post #: 13
11/12/2011 12:42:53 PM
Grymme:
About airlanding losses from AA, I am planning to implement that somehow. I just haven't figured it out yet.
Jeffrey: maybe your friend could take pictures or scan in the charts and email them to you, and you could send them to me. It would be a great help.
Thanks in advance,
Grymme
Post #: 14
11/12/2011 6:55:24 PM
Shadehawk (Posts: 23, Joined: 3/31/2003, From: Bodø, Norway):
Hi Grymme,
Thanks for all the good work! I've sent you a PM. Hope you find it useful.
Edit: on closer inspection I guess it doesn't contain what you need, sorry!
Post #: 15
11/14/2011 4:22:48 PM
Grymme:
Shadehawk: thanks, but yeah, those aren't the charts I need. Hopefully I am getting some help on this though.
Post #: 16
11/16/2011 8:09:49 PM
Grymme:
Finally some good news on the development of this mod: I found the missing organizational charts. Thanks to all the people who have tried to help me.
Anyway, I now have all six organizational charts, so now on to making and placing all these nice units. It's a lot of work, but much easier with the charts. I actually found that the designer of the boardgame had posted them publicly, so there shouldn't be any problem reposting them here.
To illustrate the number of units to be placed, here is a screenshot of the organizational chart for the 30th Corps. That's approximately 450 different units in one of the six organizational charts of the game. A lot of work... maybe...
Attachment (1)
Post #: 17
11/18/2011 7:28:26 PM
Jeffrey H.:
That's good news, I was striking out left and right.
Post #: 18
11/18/2011 7:30:44 PM
Jeffrey H.:
Neat chart BTW, that's modern graphics for you. Much better than the old SPI stuff for sure.
Post #: 19
11/18/2011 8:00:48 PM
Grymme:
Jeffrey: thanks for trying to help anyway. It's the thought that counts.
Now that I have most of what I want, I am starting to place the hundreds of units and code the reinforcement events. I remember advising LoJ a while ago that his mods might have a tad too many units. Well, this will most likely be my biggest foray into that tendency. At least I hope that I will never do a mod with more units than this one.
To give some perspective: this is what 50% of the Guards Armoured Division plus the 231st Infantry Brigade looks like when you do a mod at company/troop level. 88 units.
Anyway, work is going swimmingly now. But if anyone by chance should have the answer to either of these question marks, feel free to answer.
1) Did Major Gough's 1st Air Reconnaissance Squadron (Red Devils) have any tankettes or other AFVs, or were they just equipped with upgunned Jeeps? So far I have seen one reference to British tankettes fighting in Arnhem, but I would like to know more.
2) As far as I can tell the Guards Armoured Division was mainly equipped with Sherman Mk IV Fireflies and M5 Stuart light tanks, but I believe that the Welsh Guards Armoured Battalion (yellow in the organizational chart) was equipped with some other kind of tank. Anyone know which one?
Attachment (1)
Post #: 20
11/24/2011 4:15:41 PM
Shadehawk:
For 1, I found the following information here: http://www.combatreform.org/groundvehiclephotos.htm
MYSTERY: the British tank museum's Tetrarch light tank has a 3" gun for high explosive fire support as shown above. So don't even try to whine about these light tanks being inadequate to kill heavier German tanks; they were needed primarily to BLAST GERMAN INFANTRY OUT OF BUILDINGS, BUNKERS AND DUG-IN POSITIONS. This was known AT THE TIME, as proven by the Tetrarch with the 3 inch gun. Yet we only know of the 6th Airborne Division having an Armored Reconnaissance unit with light tanks, not the 1st British Airborne which landed near Arnhem. The photo above of a Hamilcar at a landing zone west of Arnhem adds to the mystery, as SOMETHING has rolled off its front nose ramp. We also know 412 Hamilcar heavy gliders were built, so the question arises WHY DIDN'T THE BRITISH GLIDER-LAND A FORCE OF 100-400 LIGHT TANKS AND BREN GUN CARRIERS WEST OF ARNHEM AND PUNCH THEIR WAY THROUGH TO REINFORCE LTC FROST'S MEN? Why didn't they fit large guns to Bren gun carriers and glider-land them?
Keith Flint, in his startling book Airborne Armour, solves the mystery.
1. The slacker British industry didn't have any urgency to make Hamilcar heavy gliders, so there were fewer than 50 at the time of Arnhem.
2. The British military, smart enough to see the need for glider-delivered light tanks, was not bright enough to realize that just because German tanks CAME TO THEM when on the DEFENSE at Normandy, to be blasted by their static 6-pounder and 17-pounder anti-tank guns after being towed by trucks, did not mean they should not take any light tanks that could fire-from-the-move to Arnhem when on the OFFENSE. This fatal error cost them the battle and extended the war by 1 year.
3. 18 Bren gun tracked tankettes were glider-landed 8 miles west of Arnhem bridge, but instead of mounting 75mm or other guns they were used to CARRY SUPPLIES for the infantry. According to General Gavin in his 1947 book, Airborne Warfare, a mere two armored cars held up the British infantry from reaching LTC Frost's men already at the bridge, by blasting Freddy Gough's 1st Recce Squadron's unarmored jeeps towing 20mm Polsten anti-aircraft guns that were not ready to fire.
4. 1st Airborne Recce Squadron commander Major Freddie Gough had asked for the 6th Airborne Recce Squadron's unused Tetrarch light tanks for his coup de main mission but was ignored, and thus failed because he was in unarmored jeeps.
I don't know if it's accurate, but it's what I could find on the subject.
Post #: 21
11/25/2011 7:03:21 AM
Jeffrey H.:
Regarding your item 2, I found this wiki page:
There is a photo there of a Sherman crossing the bridge at Nijmegen. From the hull shape and gun, I'd say it's an early Sherman, possibly even a 75 mm gunned version.
Post #: 22
11/25/2011 7:06:44 AM
Jeffrey H.:
And this:
Post #: 23
11/25/2011 7:15:08 AM
Jeffrey H.:
In particular this. Note the following excerpt:
"...While they battled on in those theatres the 1st and 2nd joined the Guards Armoured Division, with the 1st Battalion being infantry and the 2nd armoured..."
Post #: 24
11/25/2011 7:24:01 AM
Jeffrey H.:
Not sure if you found this:
Post #: 25
11/30/2011 7:48:23 AM
Grymme:
Shadehawk & Jeffrey: thanks guys. Appreciated.
Work is going steady, although slow. Had a setback last week when I managed to save over a couple of days' work. I am slowly placing units and reinforcements and working on some events when I have the time over, which is not often.
This mod will have some changes to the system of readiness. Basically there will be day turns and night turns. During day turns units do not recover readiness, so a unit that loses readiness because of combat or lack of supplies will have to wait for a night turn to recover it. Also, units that move might possibly lose a little readiness (haven't decided this yet).
During night rounds units do recover readiness. Units can move as normal, but units that do move lose more readiness than they would have recovered, because they are using time that should be spent resting to move instead. Also, during night turns there is a -50% modifier on offensive action.
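The day/night readiness rules can be sketched as a tiny turn-update function. The numeric values below are placeholders for illustration; the mod's actual recovery and movement costs are set in the Advanced Tactics editor, not here:

```python
def update_readiness(readiness, is_night, moved, recovery=20, move_cost=30):
    """One turn of the day/night readiness rules described above.

    recovery and move_cost are illustrative placeholders, not the
    mod's real editor settings.
    """
    if is_night:
        readiness += recovery        # units rest and recover at night...
        if moved:
            readiness -= move_cost   # ...but moving costs more than is recovered
    # Day turns: no recovery; combat and supply losses are applied elsewhere
    return max(0, min(100, readiness))
```

Note that a unit which moves at night ends up with a net readiness loss, matching the rule that moving uses time that should have been spent resting.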
One reason that work is going pretty slowly is that I constantly have to check what equipment is in a certain type of unit and then create the SFTs for that unit. So far I have created the following SFTs:
Light Rifle
Bren Machinegun (MG)
MG 34/42 (MG)
MP 38/40 (SMG)
Sten Gun (SMG)
Piat (AT)
RP43 (AT)
2inch mortar
3inch mortar
81mm Mortar
Gun Crew
6pnd Antitank Gun
17pnd Antitank Gun
75mm Pak 97/38
Bren Carrier (994)
SdKfz 250/1
Daimler Dingo Scout Car
Humber Scout Car
Staghound T17 Armored Car
Sdkfz 233
Sdkfz 234/1
75mm Pack Howitzer
Sexton SP 25pnd Gun
M7 Priest 105mm How
17cm Kanone 18 (283)
105mm LeHF
25pnd gun (230)
Grille SiG 15cm (365)
Sherman M4 Firefly
Sherman M4
M5 Stuart
Jagdpanzer IV
PzKpfw IIIJ
PzKpfw IV
PzKpfw V Panther
88L56 FlaK18
2cm Flak38
Quad 2cm Flak38
I have now placed all the starting units of the XXX Corps.
A lot of units, as can be seen in this screenshot. The white units to the right are the Dutch Princess Irene Brigade. Missing from the picture are 2/3rds of the 15th/19th Hussars (light green in the lower left side of the map), but other than that this is the entire British starting force.
Attachment (1)
Post #: 26
11/30/2011 8:25:53 AM
lion_of_judah (Posts: 1237, Joined: 1/8/2007):
Grymme, how did you get it to look like night-time in one of the earlier screenshots? Looks rather cool. I'm normally tired of WW2 scenarios, but this has piqued my interest...
Post #: 27
12/11/2011 3:30:45 PM
Grymme:
Lion: it's actually quite easy. There is an exec in the editor called Execweathercolour that changes the entire colour of the map.
Still working on this mod, although it is slow going. Adding some 5-10 units every day, which isn't that much considering there are maybe some 1,000 units to add.
Post #: 28
12/12/2011 1:04:37 AM
Jeffrey H.:
Well, maybe I can help? If it's just sort of grunt-labor type stuff, metaphorically speaking, maybe I can take some of the load off you?
Post #: 29
12/12/2011 7:13:13 AM
Grymme:
Thanks for the offer. It is a lot of grunt-labour stuff, but to be honest I think it still wouldn't be more efficient to send save files back and forth.
Still, I am making some progress. As I think I stated before, I was thinking of making separate smaller scenarios for each of the divisional landings. As for the Arnhem landing, I am finished with all of the 1st Airborne Division units, all of the Polish Brigade units, and all at-start German units, and have only 71 German reinforcements left to place. That isn't actually as much as it sounds, since a lot of the placements are of several units of the same type in the same place. If you account for that, it's 31 individual placements left. Since I make around 5 placements each day, this should be finished in a week or so.
I have also finished all of the 30th Corps units at start and all of the German units at start opposite 30th Corps, but I haven't started reinforcements at all. I still have all of the central sector units at start, reinforcements, and all US airborne units left to place.
A cool sidenote: this scenario will feature a German tank company equipped with the old French heavy tank Char B. It's not too often you find scenarios using this.
Attachment (1)
Post #: 30
Mathematics Tutors
Whitestone, NY 11357
Highly Experienced High School Math Teacher and Tutor
I am currently a high school math teacher in a large comprehensive Queens High School. I have been teaching high school
for 11 years. Courses I have taught include Integrated Algebra , the new Common Core Algebra 1, Geometry, Algebra 2 and Trigonometry,...
Offering 10+ subjects including algebra 1, algebra 2 and geometry
South Houston Science Tutor
...I was also Chairman of the Science Department. I have also tutored students for the 8th grade STAAR test. I taught science for 3 years at HISD and for 17 years at Saint Agnes Academy.
4 Subjects: including astronomy, physical science, chemistry, geology
...Second, we will dig to the root of the misunderstanding or challenging material in order to locate the source of confusion. This is often a simple misunderstanding stemming from the basic
concepts. While it might sound tedious and superfluous, many of the more complex topics are easily understood from the basics.
22 Subjects: including chemical engineering, physical science, physics, geometry
I will make learning biology an exciting, fun, challenging, and meaningful adventure. I like all fields of science; however, I really enjoy the area of biology. It is through the field of biology
that we are able to explore, research and understand living systems.
2 Subjects: including biology, GED
...TSI Exam Students: Yes I do tutor for the Math portion of the TSI exam. As the exam is new and there are no published resources that do a good job of teaching and explaining the material
covered, I have developed a structured curriculum that is guaranteed to improve your score and increase your ...
29 Subjects: including biostatistics, physics, physical science, physiology
I possess exceptional teaching skills. I was a graduate teaching assistant for Genetics, Statistics, Molecular biology,and Tissue culture. I tutored undergraduates genetics in University of
Houston for 2 years with great success.
10 Subjects: including genetics, ACT Math, anatomy, biostatistics
EECS 307
EECS 307 Communications Systems
CATALOG DESCRIPTION: Analysis of analog and digital communications systems, including modulation, transmission, and demodulation of AM, FM, and TV systems. Design issues, channel distortion and loss,
bandwidth limitations, additive noise.
REQUIRED TEXT: R. E. Ziemer and W. H. Tranter, Principles of Communications, John Wiley & Sons, Inc., 5th edition (2002)
REFERENCE TEXT: None
COURSE DIRECTOR: M. Honig
COURSE GOALS: To teach the principles underlying modulation and demodulation of analog signals along with associated system design issues. The latter includes power and bandwidth constraints and
performance in the presence of additive noise.
PREREQUISITES BY COURSES: EECS 222 and EECS 302
1: Fourier transforms and linear systems
2: Probability and random variables
Week 1. Components of a communications system, benefits of modulation, review of Fourier series and Fourier transform.
( READINGS : Z&T, Chapter 1, 2.1, 2.2, 2.4, 2.5)
Week 2. Properties of the Fourier transform, Fourier transform of a periodic signal, linear systems, impulse response, time-invariant systems.
( READINGS : Z&T, 2.5 (cont.), 2.7 (excluding 2.7.13-14))
Week 3. Cross-correlation and autocorrelation of deterministic signals, power spectral density, Hilbert transform.
( READINGS : Z&T, 2.6, 2.9.1-2)
Week 4. Analytic signals, characterization of bandpass signals, double-sideband and amplitude modulation.
( READINGS : Z&T, 2.9.3-5, 3.1 up to ``Single-Sideband Modulation'')
Week 5. Power efficiency of AM, Single- and Vestigial-sideband modulation, mixers.
( READINGS : Z&T, 3.1 from SSB subsection up to ``Frequency Translation and Mixing'')
Week 6. Phase and frequency modulation, spectral analysis, FM bandwidth, demodulation of FM.
( READINGS : Z&T, 3.2)
Week 7. Superheterodyne receiver, multiplexing, probability review.
( READINGS : Z&T, Sec. 3.1.5, 3.7, Chapter 4)
Week 8. Probability review (cont.): densities, random variables, and statistical averages; random processes, first- and second-order statistics, stationarity and ergodicity.
( READINGS : Z&T, Chapter 4, 5.1, 5.2)
Week 9. Auto- and cross-correlation, power spectral density of random signals, effect of filtering.
( READINGS : Z&T, 5.3, 5.4)
Week 10. Narrowband noise, Signal-to-Noise Ratio analysis of DSB and coherent AM.
( READINGS : Z&T, 5.5.1-2, 6.1)
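The amplitude-modulation material of Weeks 4-5 (sideband structure and power efficiency) can be checked numerically. This sketch is not part of the course; the sample rate, tone frequencies, and modulation index are illustrative choices:

```python
import numpy as np

fs = 10_000                      # sample rate, Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)    # 0.1 s window -> 10 Hz FFT resolution
fc, fm, mu = 1_000, 50, 0.5      # carrier freq, message tone, modulation index

message = np.cos(2 * np.pi * fm * t)
am = (1 + mu * message) * np.cos(2 * np.pi * fc * t)

# One-sided amplitude spectrum: carrier at fc, sidebands at fc +/- fm
spectrum = np.abs(np.fft.rfft(am)) / len(am)
freqs = np.fft.rfftfreq(len(am), 1 / fs)
peaks = sorted(freqs[spectrum > 0.05])     # [950.0, 1000.0, 1050.0]

# Power efficiency of AM with tone modulation: (mu^2/2) / (1 + mu^2/2)
efficiency = mu**2 / (2 + mu**2)           # 1/9 for mu = 0.5
```

With mu = 0.5 only about 11% of the transmitted power sits in the information-bearing sidebands, which is the bandwidth/power trade-off Week 5 quantifies.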
Homework 1: Problems on using properties of the Fourier transform to evaluate transforms of specific signals.
Homework 2: Problems on characterizing linear, time-invariant filters and input-output relations.
Homework 3: Problems on computing the Hilbert transform, autocorrelation, and power spectral density.
Homework 4: Problems on characterizing bandpass signals and determining properties (e.g., modulation index and transmitted power) of double-sideband and amplitude-modulated signals.
Homework 5: Problems on Amplitude and Single-Sideband modulation and demodulation (e.g., computing power efficiency and determining spectral properties).
Homework 6: Problems on phase and frequency modulation and demodulation (e.g., computing the spectrum for tone modulation and determining bandwidth).
Homework 7: Problems on superheterodyne receivers (e.g., filter specification and determining tuning range), and on random variables.
Homework 8: Problems on statistical averages, second-order statistics, and ergodicity.
Homework 9: Problems on computing power spectral densities, effect of filtering, and characterizing narrowband noise.
COMPUTER PROJECTS: none
1. The students use Hypersignal software on a PC to view signals in the time and frequency domains. The software simulates an oscilloscope and spectrum analyzer. The effects of filtering and modulation are observed.
2. The students build an amplitude modulator based on the Motorola MC1496 modulator chip, along with a noncoherent demodulator. The outputs of the modulator and demodulator are viewed in the time and
frequency domains.
3. The students build a frequency modulator and demodulator based on the Exar XR-2207 voltage-controlled oscillator and Exar XR-221 phase-locked loop demodulator. The output of the modulator is
viewed in the time and frequency domains, and performance of the demodulator is observed.
• Homework: 15%
• Labs: 15%
• Midterms (2): 30%
• Final: 40%
COURSE OBJECTIVES: When a student completes this course, s/he should be able to:
1. Evaluate and interpret Fourier transforms of signals by using properties of the Fourier transform.
2. Evaluate the output of a linear, time-invariant system given an input and the impulse response or transfer function.
3. Evaluate the autocorrelation and energy or power spectral density of a deterministic signal.
4. Evaluate the Hilbert transform of elementary signals.
5. Characterize a bandpass signal in terms of in-phase and quadrature components, envelope, and phase.
6. Characterize double-sideband and amplitude modulated waveforms in the time and frequency domains.
7. Characterize double-sideband, amplitude, and single-sideband modulation in terms of bandwidth and power efficiency.
8. Describe phase and frequency modulated signals in the time domain, and tone modulated signals in the frequency domain.
9. Estimate the bandwidth of a phase or frequency modulated waveform.
10. Determine filter specifications and tuning range for a superheterodyne receiver.
11. Determine whether or not a random process is wide-sense stationary and ergodic.
12. Compute the power spectral density of a random process.
13. Compute the autocorrelation and power spectral density of a filtered random process.
14. Specify narrowband noise in terms of low-pass random noise.
15. Compute pre- and post-detection Signal-to-Noise Ratios for linear modulation systems.
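Objective 13 (the PSD of a filtered random process) can be verified numerically: white noise pushed through a filter has its spectrum shaped by |H(f)|^2. The sketch below uses a circular moving-average filter so the relation holds exactly bin by bin; the filter length and sample count are arbitrary choices, not course material:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal(n)                 # white noise: flat spectrum

# Circular 8-tap moving-average filter, applied in the time domain
taps = 8
y = sum(np.roll(x, k) for k in range(taps)) / taps

# Periodograms (crude PSD estimates) and the filter's frequency response
Sx = np.abs(np.fft.rfft(x)) ** 2 / n
Sy = np.abs(np.fft.rfft(y)) ** 2 / n
h = np.zeros(n)
h[:taps] = 1 / taps
H = np.fft.rfft(h)

# For circular filtering, S_y(f) = |H(f)|^2 * S_x(f) holds exactly
```

Averaging over a real (linear) convolution introduces only small edge effects, so the same relation holds approximately there too.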
ABET CONTENT CATEGORY: 100% Engineering (Design component).
Summary: The two barriers ruin problem via a Wiener-Hopf decomposition approach
Florin Avram, Martijn R. Pistorius, Miguel Usabel
Consider an insurance company whose capital U evolves as a risk process with phase-type inter-arrivals and claims. In this note we study the probability and severity of ruin before the capital U reaches an upper barrier K > 0. The main tools we use are Asmussen and Kella's embedding [5, 6] and Wiener-Hopf factorization of generator matrices.
Keywords: Two-sided exit problem, phase-type distributions, semi-Markov embedding, Wiener-Hopf factorization of matrices, renewal process.
AMS Subject Classification: 60G40, 90A09.
1 Introduction
Consider an insurance company, whose capital is modeled by a positive drift added to a pure jump process with negative jumps. The drift, say p, models the premium income stream and the jumps stand for the claims the company receives. One is interested in the time and severity of ruin. The transform analytic approach to this problem, going back to Cramer [12] and Sparre Andersen [1], consists in formulating integro-differential equations or renewal equations for the functions of interest and solving them via a double Laplace-Stieltjes transform
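The two-barrier exit probability studied here has a simple closed form in the most elementary diffusion analogue: Brownian motion with drift, rather than the phase-type risk process of the paper. A sketch of that classical formula, offered only as a toy illustration of the problem statement, not as the paper's method:

```python
import math

def ruin_before_upper(u, K, mu, sigma):
    """P(capital hits 0 before the upper barrier K) for Brownian motion
    with drift mu > 0 and volatility sigma, started at u in (0, K).

    Classical two-sided exit formula; a toy analogue of the paper's
    phase-type setting, not its actual model."""
    theta = 2.0 * mu / sigma**2
    num = math.exp(-theta * u) - math.exp(-theta * K)
    den = 1.0 - math.exp(-theta * K)
    return num / den

# Starting halfway up with positive drift, ruin before K is unlikely
p = ruin_before_upper(u=5.0, K=10.0, mu=1.0, sigma=2.0)
```

The formula reduces to 1 as u approaches 0 and to 0 as u approaches K, and is decreasing in u, as the problem requires.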
Mathematicians set world record in packing puzzle
Princeton researchers have beaten the present world record for packing the most tetrahedra into a volume. Research into these so-called packing problems have produced deep mathematical ideas and led
to practical applications as well. Credit: Princeton University/Torquato Lab
(PhysOrg.com) -- Finding the best way to pack the greatest quantity of a specifically shaped object into a confined space may sound simple, yet it consistently has led to deep mathematical concepts
and practical applications, such as improved computer security codes.
When mathematicians solved a famed sphere-packing problem in 2005, one that first had been posed by renowned mathematician and astronomer Johannes Kepler in 1611, it made worldwide headlines.
Now, two Princeton University researchers have made a major advance in addressing a twist in the packing problem, jamming more tetrahedra -- solid figures with four triangular faces -- and other
polyhedral solid objects than ever before into a space. The work could result in better ways to store data on compact discs as well as a better understanding of matter itself.
In the cover story of the Aug. 13 issue of Nature, Salvatore Torquato, a professor in the Department of Chemistry and the Princeton Institute for the Science and Technology of Materials, and Yang
Jiao, a graduate student in the Department of Mechanical and Aerospace Engineering, report that they have bested the world record, set last year by Elizabeth Chen, a graduate student at the
University of Michigan.
Using computer simulations, Torquato and Jiao were able to fill a volume to 78.2 percent of capacity with tetrahedra. Chen, before them, had filled 77.8 percent of the space. The previous world
record was set in 2006 by Torquato and John Conway, a Princeton professor of mathematics. They succeeded in filling the space to 72 percent of capacity.
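A packing fraction like the 78.2 percent quoted here is simply (number of solids) times (volume of one solid) divided by (container volume), where a regular tetrahedron of edge a has volume a^3 / (6 * sqrt(2)). A quick sketch; the 664-tetrahedra-in-a-volume-100-box numbers are made up to land near the record density, and are not from the Nature paper:

```python
import math

def tetrahedron_volume(edge):
    # Volume of a regular tetrahedron: edge^3 / (6 * sqrt(2))
    return edge**3 / (6.0 * math.sqrt(2))

def packing_fraction(n_solids, edge, container_volume):
    return n_solids * tetrahedron_volume(edge) / container_volume

# Illustrative only: 664 unit-edge tetrahedra in a volume-100 box come
# out close to the 78.2% record density reported in the article.
phi = packing_fraction(664, 1.0, 100.0)
```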
Beyond making a new world record, Torquato and Jiao have devised an approach that involves placing pairs of tetrahedra face-to-face, forming a "kissing" pattern that, viewed from the outside of the
container, looks strangely jumbled and irregular.
"We wanted to know this: What's the densest way to pack space?" said Torquato, who is also a senior faculty fellow at the Princeton Center for Theoretical Science. "It's a notoriously difficult
problem to solve, and it involves complex objects that, at the time, we simply did not know how to handle."
Henry Cohn, a mathematician with Microsoft Research New England in Cambridge, Mass., said, "What's exciting about Torquato and Jiao's paper is that they give compelling evidence for what happens in
more complicated cases than just spheres." The Princeton researchers, he said, employ solid figures as a "wonderful test case for understanding the effects of corners and edges on the packing problem."
Studying shapes and how they fit together is not just an academic exercise. The world is filled with such solids, whether they are spherical oranges or polyhedral grains of sand, and it often matters
how they are organized. Real-life specks of matter resembling these solids arise at ultra-low temperatures when materials, especially complex molecular compounds, pass through various chemical
phases. How atoms clump can determine their most fundamental properties.
"From a scientific perspective, to know about the packing problem is to know something about the low-temperature phases of matter itself," said Torquato, whose interests are interdisciplinary,
spanning physics, applied and computational mathematics, chemistry, chemical engineering, materials science, and mechanical and aerospace engineering.
Mathematicians define the five shapes composing the Platonic solids as being convex polyhedra that are regular. Their beauty and symmetry have fascinated minds, including that of the Greek
philosopher, Plato, for thousands of years. (Image: Courtesy of the Torquato Laboratory)
And the whole topic of the efficient packing of solids is a key part of the mathematics that lies behind the error-detecting and error-correcting codes that are widely used to store information on
compact discs and to compress information for efficient transmission around the world.
Beyond solving the practical aspects of the packing problem, the work contributes insight to a field that has fascinated mathematicians and thinkers for thousands of years. The Greek philosopher
Plato theorized that the classical elements -- earth, wind, fire and water -- were constructed from polyhedra. Models of them have been found among carved stone balls created by the late Neolithic
people of Scotland.
The tetrahedron, which is part of the family of geometric objects known as the Platonic solids, must be packed in the face-to-face fashion for maximum effect. But, for significant mathematical
reasons, all other members of the Platonic solids, the researchers found, must be packed as lattices to cram in the largest quantity, much the way a grocer stacks oranges in staggered rows, with
successive layers nestled in the dimples formed by lower levels. Lattices have great regularity because they are composed of single units that repeat themselves in exactly the same way.
Mathematicians define the five shapes composing the Platonic solids as being convex polyhedra that are regular. For non-mathematicians, this simply means that these solids have many flat faces, which
are plane figures, such as triangles, squares or pentagons. Being regular figures, all angles and faces' sides are equal. The group includes the tetrahedron (with four faces), the cube (six faces),
the octahedron (eight faces), the dodecahedron (12 faces) and the icosahedron (20 faces).
There's a good reason why tetrahedra must be packed differently from other Platonic solids, according to the authors. Tetrahedra lack a quality known as central symmetry. To possess this quality, an
object must have a center that will bisect any line drawn to connect any two points on separate planes on its surface. The researchers also found this trait absent in 12 out of 13 of an even more
complex family of shapes known as the Archimedean solids.
The conclusions of the Princeton scientists are not at all obvious, and it took the development of a complex computer program and theoretical analysis to achieve their groundbreaking results.
Previous computer simulations had taken virtual piles of polyhedra and stuffed them in a virtual box and allowed them to "grow."
The algorithm designed by Torquato and Jiao, called "an adaptive shrinking cell optimization technique," did it the other way. It placed virtual polyhedra of a fixed size in a "box" and caused the
box to shrink and change shape.
There are tremendous advantages to controlling the size of the box instead of blowing up polyhedra, Torquato said. "When you 'grow' the particles, it's easy for them to get stuck, so you have to
wiggle them around to improve the density," he said. "Such programs get bogged down easily; there are all kinds of subtleties. It's much easier and productive, we found, thinking about it in the
opposite way."
Cohn, of Microsoft, called the results remarkable. It took four centuries, he noted, for mathematician Tom Hales to prove Kepler's conjecture that the best way to pack spheres is to stack them like
cannonballs in a war memorial. Now, the Princeton researchers, he said, have thrown out a new challenge to the math world. "Their results could be considered a 21st Century analogue of Kepler's
conjecture about spheres," Cohn said. "And, as with that conjecture, I'm sure their work will inspire many future advances."
Many researchers have pointed to various assemblies of densely packed objects and described them as optimal. The difference with this work, Torquato said, is that the algorithm and analysis developed
by the Princeton team most probably shows, in the case of the centrally symmetric Platonic and Archimedean solids, "the best packings, period."
Their simulation results are also supported by theoretical arguments that the densest packings of these objects are likely to be their best lattice arrangements. "This is now a strong conjecture that
people can try to prove," Torquato said.
Source: Princeton University
5 / 5 (1) Aug 13, 2009
I wonder if the phrase "think outside of the box" was mentioned?
Electric Field of a Dielectric Sphere
1. The problem statement, all variables and given/known data
A uniform charge q is distributed along a sphere of radius R.
a) What is the Electric Potential in the center of the sphere?
2. Relevant equations
V(r_1) - V(r_0) = -[tex]\int \vec{E} \cdot d\vec{l}[/tex]
3. The attempt at a solution
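Not a full solution, but here is a numerical sanity check for whatever closed form you derive. It assumes the charge q sits uniformly on the *surface* of the sphere (so E = 0 inside by the shell theorem, E = q/(4πε₀r²) outside); the charge and radius values are arbitrary illustrative choices:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
q, R = 1e-9, 0.05         # illustrative charge (C) and radius (m)

def E(r):
    """Field of a uniformly charged spherical shell (shell theorem)."""
    if r < R:
        return 0.0                                # no field inside the shell
    return q / (4 * math.pi * EPS0 * r ** 2)

# V(0) - V(infinity) = -integral of E·dl from infinity to 0, i.e. the
# integral of E(r) dr from 0 to infinity. Crude midpoint Riemann sum,
# truncated at a radius where the 1/r^2 tail is negligible.
r_max, n = 50.0, 500_000
dr = r_max / n
V_center = sum(E((i + 0.5) * dr) * dr for i in range(n))

V_exact = q / (4 * math.pi * EPS0 * R)            # kq/R at the center
print(V_center, V_exact)
```

The two numbers should agree to a few parts in a thousand, confirming that for a shell the potential at the center equals the potential at the surface.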
On 1/16/07, Matthias Schabel <boost_at_[hidden] > wrote:
> ...
> My humble knowledge may have confused me but I thought that the same
> > temperature by Centigrade and by Kelvin will always differ by about
> > 273 (i.e. the conversion is linear). Which is not right in case of
> > Farenheit.
> What I mean by linear vs. affine is that, while the scale factor for
> converting
> temperature differences in Kelvin to/from centigrade is one, there is
> a nonzero
> translation of the origin (MathWorld has a good description of linear
> and affine
> transformations):
> http://mathworld.wolfram.com/LinearTransformation.html
> http://mathworld.wolfram.com/AffineTransformation.html
> The point that Ben was making in his post is that, while absolute
> temperature
> conversion between Kelvin and centigrade requires an offset,
> conversion of
> temperature differences does not. As far as I can tell, to integrate
> this directly
> into the library would require strong coupling between a special
> value_type
> flagging whether a quantity was an absolute temperature or a
> difference, which
> I think is undesirable. By defining special value_types
> class absolute_temperature;
> class temperature_difference;
> and defining the operators between these so that you can add or subtract
> temperature_differences to/from absolute_temperatures to get another
> absolute_temperature, add or subtract two temperature_differences to get
> another temperature difference, and subtract two
> absolute_temperatures to
> get a temperature_difference you should be able to get the correct
> behavior.
> Maybe I'll try to put together a quick example of this...
I do think it would be great to distinguish between affine and vector spaces
in a library like this, but in practical terms it seems like there are three
levels of commitment that users might have to dimensional analysis:
1. Type of dimension (length versus temperature).
2. Units of measure (meters versus feet).
3. Absolute versus relative quantities (K = °C + 273.15 versus ΔK = Δ°C).
It seems to me that these are separate problems, each building upon the one
before it, and that an ideal library would let the user work at any of these
levels. I think 1 and 2 can be handled with appropriate dimensional types
and casts among them; 3 can be handled by separate types for absolute and
relative quantities. I think these could be implemented independently in
that order.
I'm thinking usage like this:
// First: type of dimension
quantity<double, distance> x1 = 42.0 * distance;
// Second: units of measure
quantity<double, meters> x2 = 20.0 * meters;
// Casting to explicitly add units of measure to unitless quantities.
x2 = quantity_cast<quantity<double, meters> >(x1); // The user claims x1
is in meters.
// Allow casting a reference or pointer, too (important for interfacing
with numerical solvers).
quantity<double, meters>& xref = quantity_cast<quantity<double, meters>&>(x1);
quantity<double, meters>* xptr = quantity_cast<quantity<double, meters>*>(&x1);
// Third, since most people won't want to get into this, let quantity be what
// most would expect (i.e., a linear space) but add absolute_ and
// relative_ to be clear.
absolute_quantity<double, temperature> t1 = 1000.0 * temperature; //
Unspecified temperature type.
absolute_quantity<double, temperature> t2 = 1010.0 * temperature;
relative_quantity<double, temperature> tdiff = t2 - t1; // tdiff is now 10
temperature units.
quantity<double, kelvin> t3 = 1020.0 * kelvin; // General kelvin quantity.
relative_quantity<double, kelvin> t3rel =
quantity_cast<relative_quantity<double, kelvin> >(t3); // Explicitly
say it's relative.
// So now t3rel is 1020.0 K
absolute_quantity<double, celsius> t3C =
quantity_cast<absolute_quantity<double, celsius> >(t3); // Explicitly
say it's absolute.
// Now t3C is 746.85°C.
t3C /= 2.0; // Specialize absolute temperatures to have scalar multiplication.
// Now t3C is 236.85°C = (1020.0 K / 2)
absolute_quantity<double, seconds> t = 10.0 * seconds;
// t *= 2.0; // This shouldn't compile because absolute time doesn't have scalar multiplication.
Temperature is particularly odd in that absolute temperature is (almost)
a linear space even though temperature differences are in a different linear
space. That is,
0°C × 2 = 273.15K × 2 = 546.3K = 273.15°C.
(I say "almost" because there's no negative absolute temperature, so it
can't really be a linear space.)
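For what it's worth, here is a quick Python mock-up of the absolute/relative split sketched in this thread (class and method names are mine, not Boost's; everything is stored internally in kelvin):

```python
class TempDiff:
    """A temperature difference: lives in a vector space, so scaling is fine."""
    def __init__(self, kelvin):
        self.k = kelvin
    def __add__(self, other):
        return TempDiff(self.k + other.k)
    def __mul__(self, s):
        return TempDiff(self.k * s)

class AbsoluteTemp:
    """An absolute temperature: an affine point, not a vector."""
    def __init__(self, kelvin):
        self.k = kelvin
    @classmethod
    def from_celsius(cls, c):
        return cls(c + 273.15)            # affine conversion: offset, not scale
    def to_celsius(self):
        return self.k - 273.15
    def __add__(self, diff):
        return AbsoluteTemp(self.k + diff.k)      # point + vector = point
    def __sub__(self, other):
        if isinstance(other, AbsoluteTemp):
            return TempDiff(self.k - other.k)     # point - point = vector
        return AbsoluteTemp(self.k - other.k)     # point - vector = point

t1 = AbsoluteTemp.from_celsius(0.0)    # 0 °C is stored as 273.15 K
t2 = AbsoluteTemp.from_celsius(10.0)
d = t2 - t1                            # a *difference* of 10 K, not 283.15 K
print(d.k)
```

Note that `AbsoluteTemp` deliberately has no `__mul__`: scaling an affine point is the operation the type system is supposed to forbid, which is exactly the distinction the thread is debating.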
Where can I find the code under discussion?
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Math puzzles
These puzzles do not require any mathematical knowledge, just logical reasoning. Check how smart you are. If you cannot solve them, take it easy. Almost all the puzzles were told to us by a computer/math genius, Vlad Mitlin. Visit us again: we intend to place new puzzles and the solutions here.
Please email us your comments and new puzzles: cherk@math.utah.edu. Enjoy!
1. Coins
There are 12 coins. One of them is false; it weighs differently. It is not known whether the false coin is heavier or lighter than the right coins. How can you find the false coin in three weighings on a balance scale?
2. Bridge crossing
This problem was recently published in MAA Online: Crossing a Rickety Bridge at Night by Flashlight.
A group of four people has to cross a bridge. It is dark, and they have to light the path with a flashlight. No more than two people can cross the bridge simultaneously, and the group has only
one flashlight. It takes different time for the people in the group to cross the bridge:
Annie crosses the bridge in 1 minute,
Bob crosses the bridge in 2 minutes,
Volodia Mitlin crosses the bridge in 5 minutes,
Dorothy crosses the bridge in 10 minutes.
How can the group cross the bridge in 17 minutes?
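To convince yourself that 17 minutes is really the best possible, a short Dijkstra search over all legal crossing strategies (people identified by their crossing times) settles it:

```python
import heapq

TIMES = (1, 2, 5, 10)   # Annie, Bob, Volodia, Dorothy (minutes to cross)

def min_crossing_time():
    # A state is (people still on the start bank, flashlight on start bank?).
    start = (TIMES, True)
    dist = {start: 0}
    pq = [(0, TIMES, True)]
    while pq:
        d, side, light = heapq.heappop(pq)
        if not side:
            return d                      # everyone (and the light) is across
        if d > dist.get((side, light), float("inf")):
            continue
        if light:
            # one or two people cross; the slower one sets the pace
            groups = [(a,) for a in side] + [
                (a, b) for i, a in enumerate(side) for b in side[i + 1:]]
        else:
            # someone walks the flashlight back
            groups = [(a,) for a in TIMES if a not in side]
        for grp in groups:
            if light:
                new_side = tuple(t for t in side if t not in grp)
            else:
                new_side = tuple(sorted(side + grp))
            state = (new_side, not light)
            nd = d + max(grp)
            if nd < dist.get(state, float("inf")):
                dist[state] = nd
                heapq.heappush(pq, (nd, new_side, not light))

print(min_crossing_time())   # 17
```

The search confirms that no strategy beats 17 minutes; the trick is sending the two slowest across together so their times overlap.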
3. Apples delivery
The distance between the towns A and B is 1000 miles. There are 3000 apples in A, and the apples have to be delivered to B. The available car can take 1000 apples at most. The car driver has
developed an addiction to apples: when he has apples aboard he eats 1 apple with each mile made. Figure out the strategy that yields the largest amount of apples to be delivered to B.
Generalize the strategy for an arbitrary amount of apples.
More problems from Vlad Mitlin
There is a group of people at a party. Show that you can introduce some of them to each other so that after the introduction, no more than two people in the group would have the same number of
friends (initial configuration doesn't work because they all initially have 0 friends).
Show that for any natural n, at least one of two numbers, n or n+1, can be represented in the following form: k + S(k) for a certain k, where S(k) is the sum of all digits in k. For instance, 21 = 15
+ (5+1)
Zen problem
A Buddhist monk got an errand from his teacher: to meditate for exactly 45 minutes. He has no watch; instead he is given two incense sticks, and he is told that each of those sticks would completely burn in 1 hour. The sticks are not identical, and they burn at varying yet unknown rates (they are hand-made). So he has these two incense sticks and some matches: can he arrange for exactly 45 minutes of meditation?
Lucky tickets
In Russia you get into a bus, take a ticket, and sometimes say: Wow, a lucky number! Bus tickets are numbered by 6-digit numbers, and a lucky ticket has the sum of its 3 first digits equal to the sum of its 3 last digits. When we were in high school (guys from math school No. 7 might remember that) we had to write a code that prints out all the lucky tickets' numbers; at least I did, to show my loyalty to the programmers' clan. Now, if you add up all the lucky tickets' numbers you will find out that 13 (the most unlucky number) is a divisor of the result. Can you prove it (without writing a program)?
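And for the impatient, the very brute-force program the puzzle winks at:

```python
def is_lucky(n):
    """A ticket number is lucky if its first three digits sum to its last three."""
    d = f"{n:06d}"
    return sum(map(int, d[:3])) == sum(map(int, d[3:]))

total = sum(n for n in range(1_000_000) if is_lucky(n))
print(total % 13)   # 0, as claimed
```

(Hint toward the pencil-and-paper proof: lucky tickets pair up as n with 999999 - n, and 999999 is divisible by 13.)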
Prove that for any natural N, 1000^N - 1 cannot be a divisor of 1978^N - 1
There are 6 points in a rectangle with the sides, 3 and 4. Prove that the distance between at least two of these points is smaller than the square root of 5.
A chess king is placed on a 8x8 chessboard. It has to make 64 moves, visiting each (of 64) squares only once, and to return there where it started. The path has to be with no intersections (a path
looking like '8' is no good). For a loop, one can count the total number of horizontal + vertical (i.e. excluding diagonal) moves; let's call this number M. 1. Give an example of at least one such
loop. 2. Give an example of a loop with the largest possible M. 3. Give an example of a loop with M=28. 4. Prove that 28 is the smallest possible M.
Consider a rectangular M x N checker board and some checkers on it. What is the minimum number of checkers you should put on the board for any straight line parallel to any one of two sides of the
board would cross some (at least one) checker?
Four numbers
Show that for positive a, b, c, and d, such that abcd=1, a^2 + b^2 + c^2 + d^2 + ab + ac + ad + bc + bd + cd is not smaller than 10.
Three planets in a galaxy and a market crash
A galaxy consists of three planets, each of them moving along a straight line with its own constant speed. If the centers of all three planets happen to lie on a straight line (some kind of eclipse)
the inhabitants of each planet go nuts (they cannot see their two neighbor planets all at once), start talking about the end of the world, and the stock market crashes. Show that there will be no
more than two such market crashes on each of these planets.
Fermat, computers, and a smart boy
A computer scientist claims that he has somehow verified that the Fermat equation holds for the following 3 numbers:
He announces these 3 numbers and calls for a press conference where he is going to present the value of N (to show that
x^N + y^N = z^N
and that the guy from Princeton was wrong). As the press conference starts, a 10-years old boy raises his hand and says that the respectable scientist has made a mistake and the Fermat theorem cannot
hold for those 3 numbers. The scientist checks his computer calculations and finds a bug.
How did the boy figure out that the scientist was wrong?
Toy Fermat
Does the equation, x^2 + y^3 = z^4 have solutions in prime numbers? Find at least one if yes, give a nonexistence proof otherwise.
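A bounded computer search finds no prime solutions; of course a finite search is only evidence, not the nonexistence proof the puzzle asks for:

```python
import math

def is_prime(n):
    """Simple trial-division primality test, fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

solutions = []
for z in filter(is_prime, range(2, 200)):
    z4 = z ** 4
    y = 2
    while y ** 3 < z4:                    # need x^2 = z^4 - y^3 > 0
        if is_prime(y):
            x2 = z4 - y ** 3
            x = math.isqrt(x2)
            if x * x == x2 and is_prime(x):
                solutions.append((x, y, z))
        y += 1

print(solutions)   # [] — no prime solutions with z < 200
```

(The search suggests attacking the factorization (z^2 - x)(z^2 + x) = y^3 for the actual proof.)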
A sequence
A sequence of natural numbers is determined by the following formula, a[n+1] = a[n] + f(n), where f(n) is the product of digits in a[n]. Is there an a[1] such that the above sequence is unbounded?
Intellectual power of a dragon pack
Dragons have to meet for a brainstorm in a convention center. The delegates have to be selected to provide the maximum efficiency of the brainstorm session. A dragon has any amount of heads, and for
any N, any amount of N-headed dragons is available if needed. The problem is that the size of the convention center is limited so no more than 1000 heads can fit into the assembly hall. The
intellectual power of a dragon pack is the product of head numbers of dragons in the pack. What should an optimum pack look like (the total number of dragons, the head number distribution)?
The vampire slayer
On the surface of the planet lives a vampire that can move with speed not greater than u. A vampire-slayer spaceship approaches the planet with speed v. As soon as the spaceship sees the vampire it shoots a silver bullet and the vampire is dead. Prove that if v/u > 10, the vampire slayer can accomplish his mission, even if the vampire is trying to hide.
(2D) A projector illuminates a quadrant of the plane. 4 projectors are set at 4 arbitrary points of the plane. Show that they can be turned so that the whole plane is illuminated. (3D) Show that the whole space can be illuminated with 8 projectors, each illuminating an octant, no matter where they are located.
A vigilance campaign in Salt Lake City.
Salt Lake City looks like a rectangle crossed by M streets going from North to South and N streets going from East to West. The city is frequently visited by tourists who are supposed to ride around in buses. The Utah governor wants to watch all moves of the buses. He plans to put policemen at some intersections to watch all the buses moving on the streets visible from those intersections. What is the minimum number of policemen needed for the bus watch?
Blondes (the puzzle from Oldaque P. de Freitas)
Two blondes are sitting in a street cafe, talking about the children. One says that she has three daughters. The product of their ages equals 36 and the sum of the ages coincides with the number of
the house across the street. The second blonde replies that this information is not enough to figure out the age of each child. The first agrees and adds that the oldest daughter has the beautiful
blue eyes. Then the second solves the puzzle. You might solve it too!
This one comes from Grzegorz Dzierzanowski
There are 12 toothpicks. Find polygons of extremal area, using all toothpicks. While building these polygons follow the rules: you cannot break the toothpicks; the length of each edge is 1, 2, 3, ... toothpicks; the edges of the polygon cannot cross each other.
Velocity Space Distribution Function
I need some guidance or where or how to learn the mathematics for velocity-phase space integrals that appear in Maxwellian distributions.
I'm an Engineer in Electronics and Communications, taking an advanced course on Kinetic Theory, and I'm having these mathematical problems as I don't have much background in Physics.
I have already taken courses in vector calculus, differential equations and integration methods (basic)
Please Advise.
Thanks a lot.
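As a concrete warm-up for those velocity-space integrals, here is the classic one done numerically in plain Python: the mean speed of a Maxwellian, which should come out to sqrt(8kT/πm). The mass and temperature are arbitrary illustrative values (roughly an argon atom at room temperature):

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
m  = 6.64e-26            # particle mass, kg (roughly argon)
T  = 300.0               # temperature, K

a = m / (2 * kB * T)
norm = 4 * math.pi * (a / math.pi) ** 1.5   # normalizes the speed distribution

def f(v):
    """Maxwell speed distribution: probability density over speed v."""
    return norm * v * v * math.exp(-a * v * v)

# <v> = integral of v * f(v) dv over [0, inf); midpoint rule, truncated
# where the Gaussian tail is utterly negligible.
v_max, n = 5000.0, 100_000
dv = v_max / n
mean_v = sum(((i + 0.5) * dv) * f((i + 0.5) * dv) * dv for i in range(n))

mean_v_exact = math.sqrt(8 * kB * T / (math.pi * m))
print(mean_v, mean_v_exact)    # both come out near 400 m/s here
```

The same pattern (multiply the Maxwellian by the quantity you want, integrate over velocity space) gives the mean kinetic energy, pressure, and so on; once the angular part is done analytically, everything reduces to one-dimensional integrals like this.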
data : array-like, shape (n,m)
The n data points of dimension m to be indexed. This array is not copied unless this is necessary to produce a contiguous array of doubles, and so modifying this data will result in bogus results.
leafsize : positive integer
The number of points at which the algorithm switches over to brute-force.
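To make the `leafsize` idea concrete, here is a toy pure-Python k-d tree that switches to brute force once a node holds at most `leafsize` points (the real `cKDTree` is a far more sophisticated C implementation; this sketch only illustrates the concept):

```python
import math
import random

def build(points, depth=0, leafsize=16):
    """Recursively split on alternating axes; a node holding at most
    `leafsize` points becomes a leaf, where search is brute force."""
    if len(points) <= leafsize:
        return ("leaf", points)
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return ("node", axis, points[mid][axis],
            build(points[:mid], depth + 1, leafsize),
            build(points[mid:], depth + 1, leafsize))

def nearest(tree, q, best=None):
    if tree[0] == "leaf":
        for p in tree[1]:                        # brute force inside the leaf
            dist = math.dist(p, q)
            if best is None or dist < best[0]:
                best = (dist, p)
        return best
    _, axis, split, lo, hi = tree
    near, far = (lo, hi) if q[axis] < split else (hi, lo)
    best = nearest(near, q, best)
    if abs(q[axis] - split) < best[0]:           # ball crosses the split plane
        best = nearest(far, q, best)
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
tree = build(pts, leafsize=16)
query = (0.5, 0.5)
d, p = nearest(tree, query)
brute = min(pts, key=lambda pt: math.dist(pt, query))
print(p == brute)
```

A larger `leafsize` means fewer tree levels and more brute-force work per leaf; the best trade-off depends on the data and is why the parameter is exposed at all.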
Conjunction Fallacy - Less Wrong
Comments (33)
Sort By: Old
Ray Kurzweil is pretty impressive, although I would be much less confident in his predictions from now to 2029 than from 1999-2009.
Thanks for this gem!
I got the 1982 University of British Columbia ordering right easily, though that might be because I'm already aware of the phenomenon being studied.
It would be much harder for me as a subject to deal properly with the Second International Congress on Forecasting experiment. Even if I'm aware that adding or removing detail can lead my estimate of
probability to change in an illogical way, my ability to correct for this is limited. For one thing, it is probably hard to correctly estimate the probability that I would have assigned to a more (or
less) detailed scenario. So I may just have the one probability estimate available to me to work with. If I tell myself, "I would have assigned a lower probability to a less detailed scenario", that
by itself does not tell me how much lower, so it doesn't really help me to decide whether and how much I should adjust my probability estimate to correct for this. Furthermore, even if I were somehow
able to accurately estimate the probabilities I would have assigned to scenarios with varying levels of detail, that still would not tell me what probability I should assign. If my high-detail
assigned probability is illogically higher than the low-detail assigned probability, that doesn't tell me whether it is the low-detail probability that is off, or the high-detail probability that is off.
So as someone trying to correct for the "conjunction fallacy" in a situation like that of the Congress in Forecasting experiment, I'm still pretty helpless.
Although Robin's critiques of "gotcha" bias are noted, I experienced this as a triumph of learned heuristic over predisposed bias. My gut instinct was to rank accountant+jazz player as more probable
than jazz player, and then I thought about the conjunction rule of probability theory.
"The ranking E > C was also displayed by 83% of 32 grad students in the decision science program of Stanford Business School, all of whom had taken advanced courses in probability and statistics."
This is shocking, particularly if they had more than 30 seconds to make a decision.
I think this might possibly be explained if they looked at it in reverse. Not "how likely is it that somebody with this description would be A-F", but "how likely is it that somebody who's A-F would
fit this description".
When I answered it I started out by guessing how many doctors there were relative to accountants -- I thought fewer -- and how many architects there were relative to doctors -- much fewer. If there
just aren't many architects out there than it would take a whole lot of selection for somebody to be more likely to be one.
But if you look at it the other way around then the number of architects is irrelevant. If you ask how likely is it an architect would fit that description, you don't care how many architects there
So it might seem unlikely that a jazz hobbyist would be unimaginative and lifeless. But more likely if he's also an accountant.
I think this is a key point - given a list of choices, people compare each one to the original statement and say "how well does this fit?" I certainly started that way before an instinct about
multiple conditions kicked in. Given that, its not that people are incorrectly finding the chance that A-F are true given the description, but that they are correctly finding the chance that the
description is true, given one of A-F.
I think the other circumstances might display tweaked version of the same forces, also. For example, answering the suspension of relations question not as P(X^Y) vs P(Y), but perceiving it as P(Y),
given X.
But if the question "What is P(X), given Y?" is stated clearly, and then the reader interprets it as "What is P(Y), given X", then that's still an error on their part in the form of poor reading
Which still highlights a possible flaw in the experiment.
Imagine a group of 100,000 people, all of whom fit Bill's description (except for the name, perhaps). If you take the subset of all these persons who play jazz, and the subset of all these persons
who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
Nitpicking: Concluding that this is a strict inclusion implicitly assumes that there is at least one jazz player who is not an accountant in the original set. Otherwise, the two subsets may still be
equal (and thus, equal in size).
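The containment argument above can be made mechanical; even a crude simulation with made-up base rates can never produce a violation:

```python
import random

random.seed(1)
n = 100_000
jazz       = [random.random() < 0.10 for _ in range(n)]  # arbitrary base rates
accountant = [random.random() < 0.30 for _ in range(n)]

n_jazz = sum(jazz)
n_both = sum(j and a for j, a in zip(jazz, accountant))

# "jazz AND accountant" is a subset of "jazz", so this inequality holds
# for every possible assignment of traits, not just this random one.
print(n_both <= n_jazz)   # True
```

No matter what base rates you plug in, or how the traits are correlated, the conjunction can never outnumber its conjunct.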
The interesting thing to me is the thought process here, as I also knew what was being tested and corrected myself. But the intuitive algorithm for answering the question is to translate "which of
these statements is more probable" with "which of these stories is more plausible." And adding detail adds plausibility to the story; this is why you can have a compelling novel in which the main
character does some incomprehensible thing at the end, which makes perfect sense in the sequence of the story.
The only way I can see to consistently avoid this error is to map the problem into the domain of probability theory, where I know how to compute an answer and map it back to the story.
While I personally answered both experiments correctly, I see the failure of those whom we assume should be able to do so as a lack of being able to adapt learned knowledge for practical use. I have
training in both statistics and philosophy, but I believe that any logical person would be capable of making these judgments correctly, sans statistics and logic classes. Is there any real reason to
believe that someone who has studied statistics would be more likely to answer these questions correctly? Or is the ability simply linked to a general intelligence and that participation in an
advanced statistics and probability curriculum is a poor indicator of that intelligence?
I know a jazz musician who is not an accountant.
Going to the reason why. If I simply ask, which is more probable, that a random person I pick out of a phone book is an accountant or that same person is an accountant and is also a jazz musician.
Then I suspect more grad students would get the answer correct.
That personality traits are given to the random selection clutters up the "test". We can understand the possibility that Bill is an accountant. So we look for that trait and accept the secondary
trait of jazz. But jazz by itself - Never. We read answer E as if to say "If Bill is an accountant, he might play jazz" and this which we can accept for Bill much greater than Bill actually playing
jazz. It would also be more probably with typical prejudice.
So an interesting question here is (if I'm correct) why do our prejudices want to make answer E as an accountant who might play jazz rather than the wording actually used. I think it makes more
intuitive sense to an typical reader. Can we imagine Bill as an accountant who might play jazz - absolutely. Can we imagine Bill as an account who does play jazz - not as easliy: Lets substitute what
it is, with what I want it to read so it makes sense and makes me feel comfortable about solving this riddle.
QED A>E>C
If one is presented two questions, - Bill plays jazz - Bill is an accountant and plays jazz, is there an implied "Bill is not an accountant", created by our flawed minds, in the first question? This
could explain the rankings.
There was an implied "Bill is not an accountant" in the way I read it initially, and I failed to notice my confusion until it was too late.
So in answer to your question, that has now happened at least once.
I, too, was worried about this at first, but you'll find that http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ contains a thorough examination of the research on the
conjunction fallacy, much of which involves eliminating the possibility of this error in numerous ways.
Reasoning with frequencies vs. reasoning with probabilities
Though it's frustrating that we humans seem so poorly designed for explicit probabilistic reasoning, we can often dramatically improve our performance on these sorts of tasks with a quick fix: just
translate probabilities into frequencies.
Recently, Gigerenzer (1994) hypothesized that humans reason better when information is phrased in terms of relative frequencies, instead of probabilities, because we were only exposed to frequency
data in the ancestral environment (e.g., 'I've found good hunting here in the past on 6 out of 10 visits'). He rewrote the conjunction fallacy task so that it didn't mention probabilities, and with
this alternate phrasing, only 13% of subjects committed the conjunction fallacy. That's a pretty dramatic improvement!
For the above experiment, the rewrite would be:
-----------
Bill is 34 years old. He is intelligent, but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities.
There are 200 people who fit the description above. How many of them are: A: Accountants ... E: Accountants who play jazz for a hobby.
------------
Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright and P. Ayton, eds., Subjective Probability.
New York: John Wiley.
The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" which J thomas suggested as a misinterpretation that could
cause the conjunction fallacy.
Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who *don't* play jazz" or C is "jazz players who are not accountants".
I think similarly, in the case of the poland invasion diplomatic relations cutoff, what people are intuitively calculating in the compound statement is the conditional probability, IOW, turning the
"and" statement into an "if" statement. If the soviets invaded Poland, the probability of a cutoff might be high, certainly higher than the current probability given no new information.
But of course that was not the question. A big part of our problem is sometimes translation of english statements into probability statements. If we do that intuitively or cavalierly, these fallacies
become very easy to fall into.
josh wrote: Sebastian,
I know a jazz musician who is not an accountant.
Josh, note that it is not sufficient for one such person to exist; that person also has to be in the set of 100,000 people Eliezer postulated to allow one to conclude the strict inclusion between the
two subsets mentioned.
Sebastian, I thought of including that as a disclaimer, and decided against it because I figured y'all were capable of working that part out by yourselves. Unless both figures are equal to 0, I think
it is rather improbable that in a set of 10 jazz players, they are all accountants.
The probability of P(A&B) should always be strictly less than P(B), since just as infinity is not an integer, 1.0 is not a probability, and this includes the probability P(A|B). However you may drop
the remainder if it is below the precision of your arithmetic.
When initially presenting the question, he doesn't mention a sample of 100,000 people. I assumed we were using the sample of all people. My gooch.
1.0 is a probability. According to the axioms of probability theory, for any A, P(A or (not A))=1. (Unless you're an intuitionist/constructivist who rejects the principle that not(not A) implies A,
but that's beyond the scope of this discussion.)
My question about the die-rolling experiment is: how would raising the $25 reward to, say $250, affect the probability of an undergraduate commiting the conjunction fallacy?
(By the way, Bill commits fallacies for a hobby, and he plays the tuba in a circus, but not jazz)
It seems to me like a fine way to avoid this fallacy is to always, as a habit, disassemble statements into atomic statements and evaluate the probability of those. Even if you don't use numbers and
any of the rules of probability, just the act of disassembling a statement should make the fallacy obvious and hence easier to avoid.
I think this fallacy could have severe consequences for criminal detectives. They spend a lot of time trying to understand the criminals, and create possible scenarios. It's not good if a detective
finds a scenario more plausible the more detailed he imagines it.
The case of the die rolled 20 times and trying to determine which sequence is more likely is not one covered in most basic statistics courses. Yes, you can apply the rules of statistics and get the right answer, but knowing the rules and being able to apply them are different things. Otherwise we could give people Euclid's postulates one day and expect them to know all of geometry. I see a lot of people astonished by people's answers, but how many of you could correctly determine the exact probability of each of the sequences appearing?
Maybe I am wrong but I think to get the probability of an arbitrary sequence appearing you have to construct a Markov model of the sequence. And then I think it is a bunch of matrix multiplication
that determines the ultimate probability. Basically you have to take a 6 by 6 matrix and take it to the 20th power. Obviously this is not required, but I think when people can't calculate the
probabilities they tend to use intuition, which is not very good when it comes to probability theory.
What's an accountant?
I think most people would say that there's a high probability Bill is an accountant and a low probability that he plays jazz. If Bill is an accountant that does not play jazz, then E is "half right"
whereas C is completely wrong. They may be judging the statements on "how true they are" rather than "how probable they are", which seems an easy mistake to make.
Re: Dice game
Two reasons why someone would choose sequence 2 over sequence 1, one of them rational:
1) Initially I skimmed the sequences for Gs, assumed a non-fixed type font, and thought all the sequences were equal length. On a slightly longer inspection, this was obviously wrong.
2) The directions state: " you will win $25 if the sequence you chose appears on successive rolls of the die." A person could take this to mean that they will successively win $25 for each roll which is a member of a complete version of the sequence. It seems likely the 2nd sequence would be favored in this scenario. The winners would probably complain though, so this likely would have been corrected.
This was a really nice article, especially the end.
So, I tried each of these tests before I saw the answers, and I got them all correct- but I think the only reason that I got them correct is because I saw the statements together. With the exception
of the dice-rolling, If you had asked me to rate the probabilities of different events occurring with sufficient time in between for the events to become decoupled in my mind, I suspect the absolute
probabilities I would have given would be in a different order from how I ordered them when I had access to all of them at once. Having the coupled events listed independently forced me to think of each
event separately and then combine them rather than trying to just guess the probability of both of a joint event.
But I'm not sure if that's the same problem- it might be more related to how inconsistent people really are when they try to make predictions.
"Logical Conjunction Junction"
Logical conjunction junction, what's your function?
To lower probability,
By adding complexity.
Logical conjunction junction, how's that function?
I've got hundreds of thousands of words,
They each hide me within them.
Logical conjunction junction, what's their function?
To make things seem plausible,
Even though they're really unlikely.
Logical conjunction junction, watch that function!
I've got "god", "magic", and "emergence",
They'll get you pretty far.
[spoken] "God". That's a being with complexity at least that of the being postulating it, but one who is consistently different from that in logically impossible ways and also has several literature
genres' worth of back story,
"Magic". That's sort of the opposite, where instead of having an explanation that makes no sense, you have no explanation and just pretend that you do,
And then there's "emergence", E-mergence, where you collapse levels everywhere except in one area that seems "emergent" by comparison, because only there do you see higher levels are perched on lower
levels that are different from them.
"God", "magic", and "emergence",
Get you pretty far.
[sung] Logical conjunction junction, what's your function?
Hooking up two concepts so they hold each other back!
Logical conjunction junction, watch that function!
Not just when you see an "and",
Also even within short words.
Logical conjunction junction, watch that function!
Some words combine many ideas,
Ideas that can't all be true at once!
Logical conjunction junction, watch that function!
The YouTube link is broken. Did you intend to link to a YouTube video of the original video from Schoolhouse Rock?
I suspect respondents are answering different questions from the ones asked. And where the question does not include probability values for the options the respondents are making up their own. It
does not account for respondents arbitrarily ordering what they perceive as equal probabilities. And finally, they may be changing the component probabilities so that they are using different
probability values throughout when viewing the options.
The respondents are actually reading the probabilities as independent, and assigning probabilities such as this: A: P(Accountant) = 0.1, C: P(Jazz) = 0.01, E: P(Accountant^Jazz) = P(Accountant) x P(Jazz) = 0.001, and you would expect the correct ranking.
But if they are perceiving E as conditional then P(Accountant|Jazz) = P(Accountant^Jazz)/P(Jazz) = .001/.01 = 0.1, and leaving the equal ranking of A, E ordered as A, E they end up with A >= E > C.
And, it's also possible they are using an intuitive conditional probability and coarsely and approximately ranking without calculation.
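Using the illustrative numbers above (0.1 and 0.01 are the commenter's assumed values, not measured base rates), a short sketch makes the two readings concrete: the joint probability ranks strictly below both conjuncts, while the conditional misreading ties with P(A) and invites the A >= E > C ordering.

```python
# Illustrative numbers from the comment above (assumptions, not data).
p_accountant = 0.10   # P(Accountant)
p_jazz = 0.01         # P(Jazz)

# Independent (joint) reading: P(A and C) = P(A) * P(C) -> ranks below both.
p_joint = p_accountant * p_jazz

# Conditional misreading: P(A | C) = P(A and C) / P(C) -> ties with P(A).
p_conditional = p_joint / p_jazz

print(p_joint, p_conditional)
```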
They may also be doing the intuitive equivalent of the following, by reading the questions in order:
A: Yeah, sounds about right for Bill. Let's say 0.1 C: Nah, no way does Bill play Jazz. Let's say zero! E: Well, I really don't think he plays jazz, and I really thought he'd be an accountant. But I
guess he could be both. In this case I'm going for 0.05 accountant, but 0.02 Jazz. 0.05 x 0.02 = 0.001
So, A > E > C
In this last case the fact that he could both be an Accountant and play Jazz (E) is more plausible than he would play Jazz and not be an accountant (reading C as not being an accountant). Of course C
does not rule out him also being an accountant, but that's not what appears to be the intuitive implication of C. It's as if the respondent is thinking, why would they include E if C already includes
the possibility of being an accountant? And though the options are expressed as a set the respondent is not connecting them and so adapting the independent probabilities in each option. As I said,
this might be quite intuitive so that the respondents do not perform the calculations and so do not see the mistake. That the question says "not mutually exclusive or exhaustive" may not register.
The diplomatic response might be explained as follows. Without any good reason, respondents to (1) think suspension unlikely. Because they are not asked (2), they rate (1) independently of anything else, whether that be invasion of Poland, assassination of the US President, or anything else not mentioned in (1). Since they are not given any reason for suspension, they think it very unlikely. So, your point that "there is no possibility that the first group interpreted (1) to mean 'suspension but no invasion'" does not hold. They can interpret it to mean 'suspension but nothing else'.
But in (2) the respondents are given a good reason to think that if invasion is likely then suspension will follow hot on its heels. Also, some respondents might be answering a question such as "If
invasion then suspension?", even though that is not what they are being asked.
So I think there are explanations as to why respondents don't get it that go beyond simply not knowing or remembering the conjunction condition, let alone knowing it as a 'fallacy' to avoid.
Is probability a cognitive version of an optical illusion? Two lines may not look the same length, but when you measure them they are. When two probability statements appear one way they may actually
turn out to be another way if you perform the calculation. The difference in both cases is relying on intuition rather than measurement or calculation. Looked at it from this point of view
probability 'illusions' are no more embarrassing than optical ones, which we still fall for even when we know the falsity of what we perceive.
A : A complete suspension of diplomatic relations between the USA and Russia, sometime in 2023.
B : A Russian invasion of Poland, sometime in 2023.
C : Chicago Bulls winning NBA competition, sometime in 2023.
D <=> A & B
E <=> A & C.
In order to estimate the likelihood of an event, the mind looks in the available memory for information. The more easily available a piece of information, the more it is taken into account.
A and B hold information that is relevant to each other. A and B are correlated, and the occurrence of one of them strengthens the probability of the other happening. The mind, while trying to evaluate the likelihood of each of the components of D, takes one as relevant information about the other, hence leading it to overestimate p(A) and p(B).
p(D) = p(A&B) = p(A).p(B | A) = p(B).p(A | B)
Then the mind gets it wrong when it makes the above equation equal to either p(A | B) or p(B | A) or oddly equal to their sum, as has been mentioned in previous comments.
The intuitive mind has trouble understanding probability multiplication. It rather functions in an addition (for correlated events) and subtraction (for independent events) mode when evaluating likelihood. p(E) for example is likely to be seen as p(C) - p(A). C is a more likely event (even more if you live in Chicago) than A. Let's say 5% for C and, to be generous, 1% for A.
The mind would rather do p(E) = p(C) - p(A) = 4%, which ends up making p(A&C) > p(A),
rather than the correct p(A) . p(C |A) = p(C) . p(A | C) = p(A) . p(C) = 0.05 % assuming A and C completely independent.
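With the hypothetical 5% and 1% figures above, the two calculations can be checked directly: the subtraction heuristic produces a value larger than p(A), which is exactly the conjunction fallacy, while the product for independent events cannot exceed either conjunct.

```python
# Hypothetical numbers from the comment above: P(C) = 5%, P(A) = 1%.
p_c = 0.05
p_a = 0.01

# The (incorrect) intuitive "subtraction" estimate described above:
p_e_intuitive = p_c - p_a            # 0.04, which exceeds P(A) -- a conjunction fallacy

# The correct product for independent events:
p_e_correct = p_a * p_c              # 0.0005, i.e. 0.05%

assert p_e_intuitive > p_a           # intuition violates P(A&C) <= P(A)
assert p_e_correct <= min(p_a, p_c)  # the product never can
print(p_e_intuitive, p_e_correct)
```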
The great speed at which the intuitive mind makes decisions and assigns likelihoods to propositions seems to come at the expense of accuracy, due to oversimplification, poor calculation ability, sensitivity to current emotional state leading to volatility of priority order, sensitivity to the chronology of information acquisition, etc.
Nevertheless, the intuitive mind shared with other species has proved to be a formidable machine, fine-tuned for survival by ages of natural selection. It is capable of dealing with huge amounts of sensory and abstract information gathered over time, sorting it dynamically, and making survival decisions in a second or two.
15. Instabilities in Spheroidal Systems
Astronomy 626: Spring 1997
Galaxies should be stable equilibrium solutions of the CBE. Spheroidal, pressure-supported systems with highly anisotropic velocity distributions are often unstable, relaxing to less flattened and
less anisotropic configurations in a few dynamical times. Such instabilities may explain the absence of elliptical galaxies flatter than E6 or E7.
Not all equilibria are stable. One example is a pencil balanced on end; this is an equilibrium position, but the slightest perturbation will cause it to fall over. The balanced pencil is unstable. A
second example is a `public-address' system, consisting of a microphone, an amplifier, and a loud-speaker; a signal going around this loop can grow in amplitude with each cycle, producing
ear-splitting feedback. The PA system is overstable.
Instabilities also occur in stellar systems. Examples of potentially unstable stellar systems include:
• Homogeneous systems (Jeans instability);
• Disk systems (ring & bar instabilities);
• Spherical systems (Henon, radial-orbit, & tangential-orbit instabilities);
• Axisymmetric systems (radial-orbit, ring, off-center, & bending instabilities).
The Jeans instability plays a key role in structure formation in cosmology. Instabilities in rotating disk systems play important roles in constraining and determining the structure of disk galaxies.
This lecture will focus on instabilities in pressure-supported spherical and axisymmetric systems. These instabilities may provide important constraints on the structure of elliptical galaxies.
15.1 Methods of Analysis
Linear Stability Analysis
Suppose that f_0(r,v) is a time-independent solution of the CBE, and that Phi_0(r) is the self-consistent potential generated by this solution. To determine if this solution is stable, consider
perturbed solutions of the form
(1) f(r,v,t) = f_0(r,v) + epsilon f_1(r,v,t)
where epsilon is a small parameter and f_1(r,v,t) is an arbitrary function. Substitute this perturbed distribution function into the time-dependent CBE, and keep only terms of first order in epsilon;
the result is the linearized CBE,
(2) d f_1/d t + v . d f_1/d r - grad Phi_0 . d f_1/d v - grad Phi_1 . d f_0/d v = 0 .
Here the first three terms describe the evolution of the perturbed distribution function in the unperturbed field Phi_0, while the last term describes the effect of the perturbation in the potential,
Phi_1(r,t), on the unperturbed distribution. Here Phi_1 is given by Poisson's equation,
(3) div grad Phi_1 = 4 pi G integral dv f_1(r,v,t) .
Eqs. 2 and 3 together describe the evolution of the perturbed system under the assumption that the perturbations are small.
To discover if an instability exists -- or to prove that the systems is stable -- one must consider a complete set of perturbations f_1(r,v,t). The choice of the set of basis functions depends on the
geometry of the system. For homogeneous systems, a Fourier expansion is appropriate; this approach is useful in deriving the Jeans instability.
Having devised a complete set of perturbations, the trick is to find linear combinations which grow. A linear combination can be represented as a vector, and time evolution as a matrix acting on this
vector. Then combinations which grow are eigenvectors of this matrix. For example, if the growing solutions depend on time like
(4) f_1 ~ e^(- i omega t) ,
with omega = i gamma where gamma is a real number, then the perturbation grows exponentially and the system is unstable; if omega = omega_R + i gamma, where omega_R is also real, then the
perturbation oscillates as it grows and the system is overstable.
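As a quick numerical sanity check of Eq. 4 (the values of omega_R and gamma below are illustrative, not tied to any particular system), the modulus of e^(-i omega t) isolates the growth factor e^(gamma t); the oscillatory part contributes nothing to the amplitude.

```python
import cmath
import math

omega_R, gamma = 2.0, 0.5          # illustrative values, not from the text
omega = complex(omega_R, gamma)

def amplitude(t):
    # f_1 ~ exp(-i * omega * t); its modulus isolates the growth factor.
    return abs(cmath.exp(-1j * omega * t))

for t in (0.0, 1.0, 2.0):
    # |exp(-i (omega_R + i gamma) t)| = exp(gamma * t): oscillation drops out.
    assert math.isclose(amplitude(t), math.exp(gamma * t), rel_tol=1e-12)
print("growth rate check passed")
```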
Numerical Simulations
Due to problems in chosing good basis sets, the linear analysis just described is a fairly daunting task. An alternate approach is to feed an N-body realization of the equilibrium system into an
N-body code, and run the system to see if it remains in equilibrium. This approach may rely on particle noise to `seed' growing perturbations, although one also has the option of introducing
perturbations `by hand'. It's worth emphasizing that such numerical simulations only reveal fairly gross and violent instabilities; slowly-growing perturbations may be lost in the fluctuations due to
discreteness and consequently can't be detected.
Nonetheless, N-body simulations have played an important role in the investigation of dynamical instabilities in stellar systems. Among other things, they have led to the
• discovery of new instabilities,
• treatment of intractable cases, and
• study of non-linear evolution.
15.2 Spherical Systems
Isotropic Systems: f = f(E)
Antonov (1960, 1962) derived a necessary and sufficient condition for the stability of spherical, isotropic systems. This criterion, expressed in terms of a complex variational principle, can
distinguish both stable and unstable systems. Some simple consequences of this criterion are that
• any system with df/dE < 0 is stable to non-spherical perturbations, and
• any system with df/dE < 0 and d^3 rho/d Phi^3 <= 0 is stable to all perturbations.
These secondary criteria are sufficient but not necessary conditions for stability; systems which violate them may still be stable. It's unknown if any spherical, isotropic systems are actually
unstable. The best candidate is the polytrope with index n = 1/2, in which all stars have the same binding energy; this system exhibits radial oscillations which seem larger than expected given the
number of bodies used in the simulations (Henon 1973).
Anisotropic Systems: f = f(E,J)
Instability in systems composed of stars on exactly radial orbits was predicted by Antonov (1973), and confirmed numerically by Polyachenko (1981), who showed that such systems rapidly evolve into
elongated, bar-shaped configurations.
Henon (1973) systematically investigated the dynamical stability of a class of models known as generalized polytropes, which have the distribution function
(5) f(E,J) = K (E_1 - E)^(n-3/2) J^(2m) if E <= E_1 , and 0 otherwise .
These models have finite radius since f -> 0 at some energy E = E_1 < 0; the parameters K and E_1 together fix the total mass and radius of the system. The parameters n >= 1/2 and m >= -1
govern the structure of the system. The energy distribution is controlled by n, while m determines the velocity anisotropy; in particular, the ratio of radial to tangential velocity dispersion is
(6) sigma_r^2 / sigma_t^2 = 1 / (1 + m) ,
so m = -1, 0, infinity correspond to radial, isotropic, and tangential systems, respectively.
Using a spherically-symmetric N-body code, Henon (1973) found a dynamical instability in generalized polytropes `when n is low and the velocity distribution is radially elongated'. This instability
takes the form of radial oscillations which grow in amplitude. As the system pulsates, bodies are scattered in binding energy; eventually the distribution function changes enough to shut off the instability.
At P. Hut's suggestion, I repeated Henon's experiments with an N-body code which did not enforce spherical symmetry and thus allowed non-spherical perturbations to grow as well (Barnes 1985). Being
unaware of Antonov and Polyachenko's discovery of a non-spherical instability in systems dominated by radial orbits, I was initially suspicious of numerical bugs when my simulations of generalized
polytropes with preferentially radial orbits (m < 0) evolved from spheres into triaxial, bar-like configurations. Only after much testing with different N-body codes could I defend the claim that
this instability was real.
I also found that generalized polytropes with preferentially tangential orbits (m > 0), which are unusual in having hollow centers, are unstable in a different way. These systems exhibit
non-spherical oscillations which grow in amplitude; bodies scattered into orbits of low angular momentum eventually fill in the hollow center of the model and shut down the instability. Only systems
with d rho/dr > 0 are subject to this instability, so it's probably not relevant for realistic galaxies. The limiting case in which m -> infinity and the system consists of a thin spherical shell
of bodies on circular orbits was subject to a linear stability analysis by J. Goodman (Barnes, Goodman, & Hut 1986). In this limit, all stars have the same orbital period, so any initial
non-spherical perturbation will be recreated at the antipodal point half an orbit period later; gravity attracts more bodies to over-dense perturbations, which consequently grow.
To map the range of these instabilities, we ran a 10 by 14 grid of models in the (m,n) plane and measured changes in mass profile and ellipticity (Barnes, Goodman, & Hut 1986). These showed that
Henon's spherically-symmetric instability was confined largely to systems with m + n < 1/2. On the other hand, the radial-orbit and tangential-orbit instabilities were completely insensitive to the
value of n and even the sign of df/dE.
15.3 The Radial-Orbit Instability
The astrophysical relevance of the radial-orbit instability was emphasized when it was found in several more-or-less realistic models of spherical galaxies. Merritt & Aguilar (1985) showed that
anisotropic Jaffe (1983) models with distribution functions of the form f = f(E + J^2 / 2r_a^2) are unstable if the anisotropy radius r_a < 0.3 a, where a is the scale radius. And Merritt (1987)
showed that an anisotropic model of M87, which reproduced the observed velocity profiles without invoking a central black hole (Newton & Binney 1984), would evolve into an elongated configuration.
A crude picture of the radial-orbit instability is to imagine exactly radial orbits as rigid rods which are constrained to pass through the center but can freely pivot about that point; a spherical
system of such rods can move to a state of lower potential energy if the rods clump together about some axis determined by a slight initial overdensity. This picture relates the radial-orbit
instability to the Jeans instability for mass points constrained to stay on the surface of a sphere.
A better picture of this instability was put forward by Palmer & Papaloizou (1987). In a general spherical potential, a highly elongated orbit will precess at a slow and constant rate. If a weak
bar-like potential is added, the orbit will gain angular momentum as it comes into alignment with the bar, and lose angular momentum as it continues to precess past the bar. If the orbit's initial
angular momentum is low enough, it will be trapped by the bar and confined to a box orbit, adding its mass to the mass already comprising the bar; hence the bar will grow.
Palmer & Papaloizou (1987) also showed that all models with f(E,J) diverging like a power law in J as J -> 0 are unstable. This implies that no rigorous stability criterion can be formulated in
terms of the ratio of radial to total kinetic energy; systems with f(E,J) diverging as J -> 0 exist with arbitrarily small amounts of radial anisotropy, and all are formally unstable. Palmer &
Papaloizou's criterion is sufficient but not necessary for instability; as they and others showed, radially anisotropic systems with finite f(E,0) can also be unstable (e.g. Dejonghe & Merritt
15.4 Axisymmetric Systems
N-body methods are probably the only viable way to assess the stability of three-dimensional systems. The enormous range of possible axisymmetric models makes any definitive survey of instabilities
in non-spherical systems a daunting task. Here I will simply mention some results obtained for axisymmetric systems.
The construction of axisymmetric equilibrium systems for stability testing is tricky since most orbits in axisymmetric potentials possess a third, non-classical integral of unknown form. One possible
set of axisymmetric models which can be derived analytically are the `shell-orbit' models described by Bishop (1987). These are built out of tube orbits of zero radial thickness; they are thus
somewhat unusual and may be more prone to instabilities than models using orbits of finite radial thickness. Another way to construct axisymmetric models is Schwarzschild's (1979) linear programming
technique. This approach permits model builders to use the full range of possible orbits. Levison & Richstone (1985) have used this method to construct E6 models with flat rotation curves and a
wide range of kinematic properties.
An analog of the radial-orbit instability is seen in axisymmetric models constructed using orbits with low angular momentum J_z (Palmer, Papaloizou, & Allen 1990, Levison, Duncan, & Smith
1990). Oblate and prolate systems composed of such orbits evolve into triaxial structures; this suggests that the natural endpoint of the radial-orbit instability is a triaxial configuration.
Oblate Systems
Oblate Bishop models as flat as E6 or flatter are subject to an axisymmetric instability, with particles on adjacent orbits clumping together into thin, curving cylinders (Merritt & Stiavelli
1990). A similar axisymmetric instability has long been known in disk systems composed of stars on nearly circular orbits; such a disk will break up into a set of nested rings (Toomre 1964). It seems
likely that this instability occurs because of the small radial velocity dispersion of these `shell-orbit' models.
Bishop models even as round as E1 are subject to an `m = 1' instability which displaces the density maximum from the system's center of mass (Merritt & Stiavelli
peculiar to Bishop models; it's also seen in Levison & Richstone models with low radial velocity dispersions (Levison, Duncan, & Smith 1990).
Prolate Systems
Prolate Bishop models more elongated than E6 are subject to bending instabilities, temporarily becoming either banana or S-shaped before relaxing to less elongated configurations (Merritt &
Hernquist 1991). This instability is probably related to the ``firehose'' instability in a thin, infinite sheet of stars (Toomre 1966). It seems that this instability is not peculiar to `shell-orbit'
models but is likely to occur in any prolate system more elongated than about E7 (Merritt & Sellwood 1994).
• Antonov, V.A. 1960, Astr. Zh. 37, 918 (Soviet Astron. 4, 859).
• Antonov, V.A. 1962, Vestnik Leningrad Univ. 19, 96.
• Antonov, V.A. 1973, in Dinamika Galaktik i Zvezdnykh Skoplenii (Alma-Ata: Nauka).
• Barnes, J. 1985, in Dynamics of Star Clusters, eds. J. Goodman & P. Hut, p. 297.
• Barnes, J., Goodman, J., & Hut, P. 1986, Ap. J. 300, 112.
• Bishop, J. 1987, Ap. J. 322, 618.
• Dejonghe, H. & Merritt, D. 1988, Ap. J. 328, 93.
• Henon, M. 1973, Astr. Ap. 24, 229.
• Jaffe, W. 1983, MNRAS 202, 995.
• Levison, H., Duncan, M.J., & Smith, B.F. 1990, Ap. J. 363, 66.
• Levison, H. & Richstone, D. 1985, Ap. J. 295, 349.
• Merritt, D. 1987, Ap. J. 319, 55.
• Merritt, D. & Aguilar, L.A. 1985, MNRAS 217, 787.
• Merritt, D. & Hernquist, L. 1991, Ap. J. 376, 439.
• Merritt, D. & Sellwood, J.A. 1994, Ap. J. 425, 551.
• Merritt, D. & Stiavelli, M. 1990, Ap. J. 358, 399.
• Newton, A.J. & Binney, J. 1984, MNRAS 210, 711.
• Palmer, P.L. & Papaloizou, J. 1987, MNRAS 224, 1043.
• Palmer, P.L., Papaloizou, J., & Allen, A.J. 1990, MNRAS 243, 282.
• Polyachenko, V.L. 1981, Soviet Astr. Lett. 7, 79.
• Schwarzschild, M. 1979, Ap. J. 232, 236.
• Toomre, A. 1964, Ap. J. 139, 1217.
• Toomre, A. 1966, in Notes on the 1966 Summer Study Program in Geophysical Fluid Dynamics at the Woods Hole Oceanographic Institution, p. 111.
Due date: 3/20/97
These problems explore the picture of radial-orbit instability due to Palmer & Papaloizou (1987). The first two ask you to study an orbit in a Hernquist potential with a weak bar; the third seeks
to show that a collection of such orbits will yield a bar-shaped mass distribution.
15. In Cartesian coordinates (X,Y), the potential of a Hernquist (1990) model with mass M and scale radius a is
(7) Phi(X,Y) = - G M / ( sqrt(X^2 + Y^2) + a ) .
Derive the equations of motion for a test particle moving in this potential, and implement them in the accel routine of the program leapint that you got in Lecture 6. Setting G = 1, M = 1 and a = 1/2, plot the orbit of a test particle starting at position (X,Y) = (1,0) with velocity (v_X,v_Y) = (0,v_0), where v_0 is a small value compared to the circular velocity at radius 1. Verify that the
resulting orbit is a rosette, and adjust v_0 so that the test particle reaches apocenter about 20 to 50 times while precessing once around the origin. Show the resulting orbit, along with your value
for v_0. (Hint: set the `number of points', now more properly called the `number of dimensions', n = 2; use two elements of the vector x for the coordinates (X,Y), and two elements of the vector v
for the velocities (v_X,v_Y); set mstep = 32768, nout = 32, and dt = 1.0/256.0 to run for a sufficient time while outputting a reasonable amount of data).
16. Starting with the system in problem 15, add a weak bar-like potential aligned with the X axis,
(8) Phi(X,Y) = - G M / ( sqrt(X^2 + Y^2) + a ) + (G m / a) Y^2 ,
where m << M parametrizes the strength of the bar. Re-run the initial conditions for the test particle orbit from problem 15 in this new potential, and adjust the parameter m so that the orbit
can no longer precess completely around the origin but rather remains trapped within some angle of the X axis. Plot the resulting orbit.
17. In problem 16 you probably noticed that the orbit precesses fastest when it's aligned with the bar; consequently the orbit contributes less to the density along the bar than it does elsewhere.
But the bar as a whole is built from an ensemble of orbits which oscillate through different angles with respect to the X axis, and the sum of all these orbits gives a density which is highest along
the bar. Illustrate this by considering a simpler system: an ensemble of simple harmonic oscillators, with oscillation amplitudes x_max uniformly distributed between 0 and 1. Assuming the oscillators
have random phases, what is the density distribution along the x axis?
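A Monte Carlo sketch of problem 17 (a numerical check, not the requested analytic derivation): draw amplitudes uniformly in (0, 1), give each oscillator a random phase, and histogram the resulting positions.

```python
import math
import random

random.seed(42)

# Each oscillator has x = x_max * cos(phase); sampling a random phase is
# equivalent to sampling a random time for a simple harmonic oscillator.
samples = []
for _ in range(200_000):
    x_max = random.random()                      # amplitudes uniform in (0, 1)
    phase = random.uniform(0.0, 2.0 * math.pi)   # random phase
    samples.append(x_max * math.cos(phase))

# Crude density estimate in ten bins of |x|.
bins = [0.0] * 10
for x in samples:
    bins[min(int(abs(x) * 10), 9)] += 1
total = sum(bins)
density = [b / total for b in bins]
print(density)
```

The bins nearest x = 0 come out most heavily populated, since every oscillator contributes there while only the largest amplitudes reach x near 1.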
Joshua E. Barnes (barnes@galileo.ifa.hawaii.edu)
Last modified: March 14, 1997
Kirt's Cogitations
These original Kirt's Cogitations™ may be reproduced (no more than 5, please) provided proper credit is given to me, Kirt Blattenberger.
Cog·i·ta·tion [koj-i-tey'-shun] – noun: Concerted thought or
reflection; meditation; contemplation.
Kirt [kert] – proper noun: RF Cafe webmaster.
Rules of Thumb
Rules of thumb are a great tool to have available as long as you have confidence in the general accuracy of the rule. Depending on which source you consult, the term “rule of thumb” has many possible origins, but most refer to some part of the thumb (probably one belonging to some king) being used to approximate length, like the distance from the tip of the thumb to the first joint being about an inch. From there, just about any sort of mnemonic for approximating a quantity has been called a rule of thumb.
Many common rules of thumb exist, like the “Rule of 72,” whereby the period for doubling the original amount under exponential growth at a constant rate is obtained by dividing 72 by the percent growth rate. For example, if a population grows 10% every year, then it doubles in 72/10 = 7.2 years. The "real" number in this case is (log 2)/(log 1.1) = 7.273, but it is close to
7.2 (a mere 1% error), so at least for this example, the rule of thumb holds. Let us assume it holds for any case since it has persisted for a long time.
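The comparison is easy to check for a few rates (a throwaway Python sketch, not from the article):

```python
import math

def doubling_exact(rate_percent):
    """Periods needed to double at a constant growth rate, e.g. 10 -> 7.273."""
    return math.log(2.0) / math.log(1.0 + rate_percent / 100.0)

def doubling_rule_of_72(rate_percent):
    """The rule-of-thumb estimate: 72 divided by the percent growth rate."""
    return 72.0 / rate_percent

for r in (2.0, 5.0, 10.0):
    exact, approx = doubling_exact(r), doubling_rule_of_72(r)
    print(f"{r:>4}%: exact {exact:6.3f}, rule of 72 {approx:6.3f}")
```

For low single-digit rates the rule stays within a few percent of the exact answer.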
Another common rule of thumb is the Tailor’s Rule of Thumb (possibly where the rule of "thumb" originated). Tailors used to measure the circumference of a client’s thumb to approximate the
circumference of the wrist (2x), the neck (4x), and the waist (8x). For myself, the multiplication factors are 2.4x, 5.7x, and 11x, respectively (2.8" thumb, 6.8" wrist, 16" neck, 32" waist). Hmmm, I
would hate to wear that suit, because according to the rule of thumb, my shirt sleeves would be only 5.6" and the neck would be 11.2," the pants waistline would be a mere 22.4." Either my thumb is
too thin or the rest of me is way too fat. Maybe I measured my thumb incorrectly.
Electromagnetic energy travels one foot in free space in about one nanosecond (actually 1.01670336 ns), and in one nanosecond it travels about one foot (actually 0.98357106 ft). Yet another useful rule of thumb.
OK, so what’s the point? Here’s the point. Recently, the subject surfaced again regarding what value to use for the relationship between the 3rd-order intercept point (IP3) and the 1 dB compression
point (P1dB). Most people will say it is 10 to 12 dB. Many software packages allow the user to enter a fixed level for the P1dB to be below the IP3 when the actual P1dB value is not known. For
instance, if a fixed level of 12 dB below IP3 is used and the IP3 for the device is +30 dBm, then the P1dB would be
+18 dBm. I was tempted to simply propagate that rule of thumb, but decided to actually test it empirically.
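In code, the fixed-offset estimate amounts to a one-liner (a sketch; the 12 dB constant is the software convention described above, and the sample statistics come from the article's measurement):

```python
RULE_OFFSET_DB = 12.0       # common fixed IP3 - P1dB offset used by software
MEASURED_MEAN_DB = 11.7     # mean offset from the article's 53-device sample
MEASURED_SIGMA_DB = 2.9     # standard deviation of that sample

def p1db_from_ip3(ip3_dbm, offset_db=RULE_OFFSET_DB):
    """Estimate the 1 dB compression point from the 3rd-order intercept."""
    return ip3_dbm - offset_db

# The worked example: IP3 = +30 dBm with a 12 dB offset gives +18 dBm.
est = p1db_from_ip3(30.0)

# One-sigma band implied by the measured statistics: (8.8, 14.6) dB below IP3.
band = (MEASURED_MEAN_DB - MEASURED_SIGMA_DB,
        MEASURED_MEAN_DB + MEASURED_SIGMA_DB)
```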
In order to check the theory, IP3 and P1dB values from 53 randomly chosen amplifiers and mixers were entered into an Excel spreadsheet. The components represent a cross-section of silicon and GaAs; FETs, BJTs, and diodes; connectorized and surface mount devices. A mean average and standard deviation was calculated for the sample, and everything was plotted on a graph.
As it turns out, the mean is 11.7 dB with a standard deviation of 2.9 dB, so about 68% of the sample has P1dB values that fall between 8.8 dB and 14.6 dB below the IP3 values. What that means is that the long-lived rule of thumb is a pretty good one. A more useful exercise might be to separate the samples into silicon and GaAs to obtain unique (or maybe not) means and standard deviations for each.
An interesting sidebar is that where available, the IP2 values were also noted. As can be seen in the chart, the relationship between IP2 and P1dB is not nearly as consistent.
Of equal motivation for the investigation was the desire to confirm or discredit the use of the noise figure and IP3 type of cascade formula for use in cascading component P1dB values. As discussed
elsewhere, the equation for tracking a component from its linear operating region into its nonlinear region is highly dependent on the entire circuit structure, and one model is not sufficient to
cover all instances. Indeed, the more sophisticated (pronounced “very expensive”) system simulators provide the ability to describe a polynomial equation that fits the curve of the measured device.
Carrying the calculation through many stages is calculation intensive. Some simulators exploit the rule of thumb of IP3 versus P1dB tracking and simply apply the IP3 cascade equation to P1dB. As with
other shortcuts, as long as the user is aware of the approximation and can live with it, it’s a beautiful thing.
The RF Cascade Workbook series of spreadsheets has assiduously avoided attempting a P1dB cascade calculation for the reason noted above. Instead, a saturated power (Psat) value was provided and the
program simply flagged a condition where the linear power gains would cause a stage output power that was greater than the entered Psat value. Future versions of RF Cascade Workbook will incorporate
the P1dB cascade and use the rule of thumb method for calculations.
While on the subject of rules of thumb, it would be very useful to have you go to the RF Cafe Forum and add any that you know, whether they apply to engineering, science, woodworking or anything
else. If enough good rules of thumb are posted, I will create a dedicated page for them, and give credit to you for each if desired. Please click
HERE to go to the forum now. Thanks for your help!
Random Numbers
06-09-2006 #1
Eager young mind
Join Date
Jun 2006
hi all,
I am required to generate an array having 100 random numbers, where there is absolutely no repetition and the case where the ith element of the array is not the number " i " itself is not allowed
I found some ideas where we could generate pseudo random numbers using the idea:
x(n+1) = (x(n)*P1 + P2) (mod N) .
where x(n) is the nth element,
P1 and P2 are constants
N is also a constant.. the value of x0 must be chosen appropriately..
This does not guarantee absolute randomness..
Can some one help me with this please...
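One relevant property of that recurrence (an aside, not from the thread): if P1, P2, and N satisfy the Hull–Dobell conditions, the generator has full period N, so within one period every value 0..N-1 appears exactly once — no repetitions by construction. A sketch in Python, with constants I chose to satisfy the conditions for N = 100:

```python
def lcg_cycle(a=21, c=17, m=100, seed=0):
    """x(n+1) = (a*x(n) + c) mod m.  With gcd(c, m) = 1, (a - 1)
    divisible by every prime factor of m, and by 4 when 4 | m
    (the Hull-Dobell conditions), the period is exactly m."""
    x, out = seed, []
    for _ in range(m):
        out.append(x)
        x = (a * x + c) % m
    return out

seq = lcg_cycle()
# seq is a permutation of 0..99 -- but note this does NOT by itself
# guarantee seq[i] != i, which the original question also requires.
```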
I would just use something simple like rand() to generate the random numbers. Then just check if the number is already in the array and that it's not the same as the array index you're storing it
in. If one of those checks fails just grab another random number and try again.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void)
{
  int nums[100];
  int i, j;
  int again;
  srand(time(0)); // Seed pseudo random number generator
  for(i = 0;i < 100;++i)
  {
    do
    {
      again = 0;
      nums[i] = rand();
      if(nums[i] == i)            /* value must not equal its own index */
        again = 1;
      for(j = 0;j < i;++j)
        if(nums[j] == nums[i])    /* no repeats allowed */
          again = 1;
    } while(again);
  }
  for(i = 0;i < 100;++i)
    printf("nums[%d] = %d\n", i, nums[i]);
  return 0;
}
Last edited by itsme86; 06-09-2006 at 11:30 AM.
If you understand what you're doing, you're not learning anything.
Since rand() returns a number from 0 to RAND_MAX (which is <= INT_MAX, and is on two of my compilers, 2^16-1 and 2^32-1), it might take a long time to get the value you're looking for. You should
limit the value (but not with modulus): http://eternallyconfuzzled.com/articles/rand.html.
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
I didn't see in his post that he was looking for specific values. Since the only qualification is that none of the numbers can match (or equal the array index) then I think the largest range
possible would be best.
rand() does not guarantee absolute randomness either. It uses a simple linear congruential algorithm. Go use some radioactive kitten litter for some real random values.
You could use a primitive root modulo a prime to loop through a set of distinct values. Repetitions are mathematically impossible. Or set the array values using indexes from a randomly generated
hash table.
#include <stdio.h>
void J(char*a){int f,i=0,c='1';for(;a[i]!='0';++i)if(i==81){
/3*3+f/3*9+f%3]==c||a[i%9+f*9]==c||a[i-i%9+f]==c)goto e;a[i]=c;J(a);a[i]
='0';e:;}}int main(int c,char**v){int t=0;if(c>1){for(;v[1][
t];++t);if(t==81){J(v[1]);return 0;}}puts("sudoku [0-9]{81}");return 1;}
Linux also provides /dev/random and /dev/urandom. It uses random events such as keyboard strokes and harddrive activity to generate entropy data which you can use for
more-than-pseudo-random-numbers if you really need that (and your OS supports it).
Last edited by itsme86; 06-12-2006 at 10:25 AM.
> I am required to generate an array having 100 random numbers, where there is absolutely
> no repetition and the case where the ith element of the array is not the number " i " itself is
> not allowed ..
This hardly describes any definition of random.
This seems to fit your rules, but is decidedly non-random
for ( i = 0 ; i < 100 ; i++ ) arr[i] = i + 1;
No repetition, and no arr[i] contains i
Perhaps you need to say why you need an array which matches these properties you list.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
This hardly describes any definition of random.
To be fair, the requirements on no repetition and "ith element of the array is not the number " i " itself" can be seen as additional requirements on top on the 'randomness'.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Apparently, the problem was not very clear.
I wish to state the problem again:
"I want an array of size 100 filled with random numbers.
The range of numbers is between 0 and 99.
On top of this, there should not be any repeated numbers and
the number in the ith position of the array must not be the number "i" itself." I am sorry for the trouble, if any.
No genuine requirement as such, this problem came up when a bunch of us were discussing random numbers in general
Fill the array from 0 to 99.
Shuffle it as long as there are any elements whose value matches its index.
Hope is the first step on the road to disappointment.
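Quzah's fill-and-shuffle idea, sketched in Python for brevity (the same logic carries over to C with rand() and a Fisher–Yates swap loop):

```python
import random

def random_derangement(n, seed=1):
    """Fill 0..n-1, shuffle, and retry while any element sits at its own
    index.  The fraction of permutations that are derangements tends to
    1/e, so the expected number of tries is about 2.7 regardless of n --
    fast in practice, though not bounded in the worst case."""
    rng = random.Random(seed)
    while True:
        a = list(range(n))
        rng.shuffle(a)
        if all(a[i] != i for i in range(n)):
            return a

arr = random_derangement(100)
```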
Shuffle it as long as there are any elements whose value matches its index.
What if the process never completes? It might happen. In any case, that is a helluva inefficient operation, in theory.
Well, after shuffling check for elements whose value matches the index. Shuffle those elements, or simply swap them around.
What if the process never completes? It might happen. In any case, that is a helluva inefficient operation, in theory.
Maybe the way you write it makes it inefficient. It depends on how you shuffle. Then again, I don't shuffle like a moron.
Besides, the OP said nothing of efficiency.
Last edited by quzah; 06-13-2006 at 04:11 AM. Reason: Add comment regarding efficiency.
Hope is the first step on the road to disappointment.
in theory
Anyway, random_shuffle is a quick fix for the OP's problem, and should work in most* practical situations.
* "most" == "inconcievably huge proportion of".
I should stop thinking like a math geek
Anyway, random_shuffle is a quick fix for the OP's problem,
I agree, though there's still the check for elements whose value matches the index. I dont think std::random_shuffle has any guarantees on that.
and should work in most* practical situations.
* "most" == "inconcievably huge proportion of".
Unfortunately, this is the C forum, so a #include <algorithm> followed by the use of std::random_shuffle may not be practical.
st: RE: three mean and sd plots on the same graph?
st: RE: three mean and sd plots on the same graph?
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: three mean and sd plots on the same graph?
Date Wed, 7 Nov 2007 21:01:36 -0000
Now my best advice is to use -stripplot- from SSC. Its default is to show strips of data points, but you can jitter or stack and add boxes or bars.
. sysuse auto
(1978 Automobile Data)
. stripplot trunk turn mpg, bar
. stripplot trunk turn mpg, bar vertical
You can use a -by()- option as well. I'd recommend showing the data,
but you can blank it out with -ms(none)-.
By default the bars are offset from the data which may answer your
last question.
Joseph Wagner
I created a box plot graph of 3 X variables over the same Y variable
but was then asked to produce a similar graph this time using mean (not
median) and +/- 95% CI. I don't think Stata can do this but thanks to
a post by Nick three years ago, I was able to (sort of) create these:
egen mean = mean(cont), by(cat)
egen sd = sd(cont), by(cat)
gen upper = mean + sd
gen lower = mean - sd
scatter mean cat || rcap upper lower cat
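(An aside, not in the original exchange: the snippet above plots mean ± one standard deviation, whereas Joseph asked for mean ± 95% CI. Under a normal approximation, the CI half-width is not sd but 1.96·sd/sqrt(n):)

```python
import math

def ci95_half_width(sd, n):
    """Half-width of a normal-approximation 95% confidence interval for a
    mean; contrast with the +/- sd band in the Stata snippet above."""
    return 1.96 * sd / math.sqrt(n)
```

So `upper`/`lower` would use something like 1.96*sd/sqrt(n), with n the per-group count, instead of sd.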
. . . is the example Nick gave but how would (or could I?) do this for
three different X variables (in Nick's example I suppose it would be 3
different 'cont' variables) on the same graph? I can do
this for two but not 3 which brings me to my next problem:
The box plots I created earlier were each side by side for the same
value of X (rather than on top of one another) but the two -rcap- graphs
are on top of each other making it impossible to differentiate
the two lines (three would be even worse). Is there a way to separate
these lines or do I need to graph these data in another way altogether?
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Homework Help
In a measurement of precipitation (rainfall), a raingauge with an orifice (collector) area of 30.50 in2 collects 1.30 litres of water over a period of 26 minutes and 45 seconds.[10]
1) The depth (amount) of rain that fell (in mm).
2) The intensity (mm h-1) at which the rain fell.
3) The volume (m3) of water that would have fallen on an area of 7.00 acres.
4)The discharge in m3 s-1 that would occur if all the water ran off the area in part c. in 3.00 hours.
• math - MathMate, Monday, October 26, 2009 at 10:18pm
What did you get from your calculations?
• math - Pleasehelp!, Monday, October 26, 2009 at 11:34pm
hi, um the problem is that i didnt know how to start. i was just thinking.. V which is 1.3L/ the area would give me Depth but im in sure and i dont get conversions properly
• math - MathMate, Monday, October 26, 2009 at 11:51pm
The amount of rainfall that fell is equal to the depth of water collected in a rectangular water-tight box over the period of time.
A rain gauge is an instrument by which even minute amounts of rainfall can be measured accurately by using an oversized funnel and collecting the water in a container that can measure the quantity accurately. By dividing the amount of water collected by the area of the funnel (collecting area), we can obtain the amount of rainfall in mm or any other unit.
So the depth required is the quantities of water divided by the area, both in consistent units.
Let's work in centimetres.
1.3 litres = 1300 cm³
We know that 1 inch = 2.54 cm, so
1 in² = 2.54² cm²
30.50 in²
= 30.50*2.54^2 cm²
= 196.645 cm²
Amount of rainfall
= volume of water / area of collector area
= 1300 cm³ / 196.645 cm²
= 6.61 cm
= 66.1 mm
This means that to do these questions, you will need to understand the definition of each term, and make necessary conversions of units to arrive at the correct answer.
Give a try for the remaining questions, start by
1. reading up and understanding the definitions of the terms, then
2. decide on the units you should be using,
3. do the conversions, then
4. do the calculations.
Start with Question 2 and post the steps that you have done if you have difficulties.
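(The conversion chain above can be verified mechanically — a sketch, not part of the original thread; it also previews the intensity step discussed below:)

```python
CM_PER_INCH = 2.54

volume_cm3 = 1.30 * 1000.0           # 1.30 litres
area_cm2 = 30.50 * CM_PER_INCH ** 2  # about 196.77 cm^2 (the thread's 196.645
                                     # is a small slip; the depth is unchanged
                                     # to 3 significant figures)
depth_mm = volume_cm3 / area_cm2 * 10.0   # ~66.1 mm

hours = 26.0 / 60.0 + 45.0 / 3600.0       # 26 min 45 s ~ 0.44583 h
intensity_mm_per_h = depth_mm / hours     # ~148 mm/h
```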
• math - ScienceConfusion, Tuesday, October 27, 2009 at 2:08am
Umm why do you have to convert it to CM? So this is not right:
30.50 inches squared= 774.6999mm squared
1.3 litres= 1 300 000 mm cubed
Depth= 1 300 000/774.699
Depth = 1678.069147 mm
Im so confused with this stuff! Im in management so science is definitely not my area of expertise. Please help!
• math - ScienceConfusion, Tuesday, October 27, 2009 at 3:28am
would you use the depth in part a) to calculate part b)?
• math - MathMate, Tuesday, October 27, 2009 at 8:43am
I started with cm so that the numbers remain reasonable and can be "visualized" in size. You could have very well used mm all along, then there would be no conversion in the end.
For part 2, yes, exactly, you would first read up your teacher's notes to find out what is the definition of intensity. I believe it is the amount of rainfall (in mm) that fell in one hour, that
is why the units are mm hr-1.
Then you use the amount calculated in part 1, in mm that fell in 26 minutes and 45 seconds, to calculate what would have fallen in 1 hour, if the intensity did not change. Can you proceed now
with #2?
• math - ScienceConfusion, Tuesday, October 27, 2009 at 9:57am
I understand now! Just to clarify my way of calculating depth isn't right? And also when I calculate part b), I have to convert 24 minutes and 45 secs into hours right? When I do that, do I
convert it as 24.45 or 24mins seperately 45 secs seperately....sorry just confused!
• math - MathMate, Tuesday, October 27, 2009 at 1:26pm
Since there are 60 seconds in a minute, so
26 minutes 45 seconds is converted to
26+45/60 minutes, or 26.75 minutes.
In the same way, since there are 60 minutes in an hour, so 26.75 minutes converts to
26.75/60=0.44583 hour.
Now for the intensity of rainfall, which is measured by mm/hour, we divide the amount fallen in 26 minutes 45 seconds by the duration (in hours), so the intensity would be:
______ / 0.44583 = ______ mm/hour.
• math - ScienceConfusion, Tuesday, October 27, 2009 at 3:55pm
Thank you! I was wondering if the person who originally started this forum had any input ("pleasehelp").
• math - MathMate, Tuesday, October 27, 2009 at 6:44pm
Sorry, I assumed you're the original poster. Sometimes they change names along the way.
I hope you got something out of it!
Do you have a similar problem or was it by curiosity that you joined in?
• math - ScienceConfusion, Tuesday, October 27, 2009 at 6:50pm
Similar question! I think we are both in the same course!
So for the second part i got 148.26 mm/hr
how do i write that as mm h-1?
• math - MathMate, Tuesday, October 27, 2009 at 7:22pm
148.26 mm-hr^-1 or
148.26 mm/hr will both be correct.
I would go with the former if this is the unit the question suggested.
On the other hand, the volume collected was 1.30 litres, meaning that it is accurate to at most 3 significant figures. So I would give the answer as 148 mm-hr^-1.
Are you OK with the rest of the questions?
• math - ScienceConfusion, Tuesday, October 27, 2009 at 8:25pm
I really feel bad asking but im honestly not...its taking me a while to figure it out...do you mind helping me with the rest of the questions :$...sorrryy....this is definitely not my area of
expertise:(....thank you so much i really appreciate it:)
• math - ScienceConfusion, Tuesday, October 27, 2009 at 8:31pm
If its a problem please let me know...thank you:)
• math - MathMate, Tuesday, October 27, 2009 at 9:08pm
Not a problem at all!
The volume of water would be calculated like a big flat rectangular slab of area 1 acre (4840 yd²) with a height of 66.1 mm.
You need conversions:
4840 yd²
= 4840 (3 ft)²
= 4840 (3*.3048m)²
= 4840 *(3*.3048)² m²
Now multiply by the height of 66.1 mm = 0.0661 m. to get
4840 *(3*.3048)² m² * 0.0661 m
4840 *(3*.3048)²* 0.0661 m³
= _______ m³ (= V)
4. Discharge
Discharge, D is measured in m³/s, so
D = V m³ / 3 hr
= V m³ / (3*3600 s)
= V/(3*3600) m³/s
5. Today at closing, according to the Bank of Canada, 1 US$ = 1.0661 C$
and Google gives 1 US gallon = 3.78541178 litres
US$4.00 gallon-1
= 4*1.0661C$ / 3.78541178 litres
= 4*1.0661 / 3.7854 C$/litre
I will leave it to you for the numerical calculations.
Post if you need a check. However, DO make sure you understand every step that I have done. It is more important to know how to do it than to get the right answer.
• math - ScienceConfusion, Tuesday, October 27, 2009 at 11:08pm
You are truely a savior mathmate, but i honestly dont understand part 3)...where are you getting all these numbers from?
• math - MathMate, Tuesday, October 27, 2009 at 11:17pm
I'm sorry that I did not explain in detail:
These are standard conversions
1 acre = 4840 sq. yds = 43560 sq.ft
1 inch = 2.54 cm (exactly)
so 1 foot = 12*2.54 cm = 30.48 cm = 0.3048 m.
and 1 sq. ft = 0.3048² m² = 0.09290304 m²
Hope that clears up a little.
• math - ScienceConfusion, Tuesday, October 27, 2009 at 11:33pm
Ahhhhhhhhhhhhhhhhhhhhh okkk :S but the question is asking for 7 acres
• math - MathMate, Tuesday, October 27, 2009 at 11:39pm
I apologize for the oversight, so the volume will have to be multiplied by 7.
It will affect #4 also, although it said "...all the water ran off the area in part c. in 3.00 hours."
I suppose that part c was converted to #3 when it was posted.
Also, I can continue tomorrow. I need some sleep before I don't know what I'm writing!
• math - ScienceConfusion, Wednesday, October 28, 2009 at 12:06am
Ok thank you...if you can explain it to me more tomorrow i will greatly appreciate it! good night!!
• math - ScienceConfusion, Wednesday, October 28, 2009 at 12:10am
In a measurement of precipitation (rainfall), a raingauge with an orifice (collector) area of 30.50 in2 collects 1.30 litres of water over a period of 26 minutes and 45 seconds.[10]
3) The volume (m3) of water that would have fallen on an area of 7.00 acres.
4)The discharge in m3 s-1 that would occur if all the water ran off the area in part c. in 3.00 hours.
Thank you for your help thus far. Would be able to explain it to me in steps tomorrow? GOOD NIGHT MATH MATE!!
• math - ScienceConfusion, Wednesday, October 28, 2009 at 12:58am
Thank you good night
• math - Question, Wednesday, October 28, 2009 at 9:16pm
Would the fact
1 (american) gallon = 3.785L
1 (canadian) gallon = 4.55L
come into affect for # 5?
why does current not flow through a broken circuit
• 2 months ago
because there is no pathway for the electrons to move through...of course the air between a broken wire can act as a pathway if the voltage(push) applied is large enough
Pressure and Temperature -- Quick Concept Check
1. The problem statement, all variables and given/known data
A fixed amount of ideal gas is held in a rigid container that expands negligibly when heated. At 20°C the gas pressure is p. If we add enough heat to increase the temperature from 20°C to 40°C, the
pressure will be less than 2p.
2. Relevant equations
3. The attempt at a solution
Initially I thought the solution was simple. Solving for p_2, we have (p_1*T_2)/T_1. Plugging in 40 for T_2, and 20 for T_1 gives us 2p. Because volume, number of moles, and R are all constant, I
thought it just came down to the relation between pressure and temperature, but it turns out the pressure is less than 2p, which I do not understand. Just looking for some clarification....
Thank you!
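(A hint at the resolution — my note, not part of the thread: the ideal-gas relation p/T = const requires absolute temperature, so the Celsius readings must be converted to kelvins before taking the ratio.)

```python
T1_K = 20.0 + 273.15    # 293.15 K
T2_K = 40.0 + 273.15    # 313.15 K
ratio = T2_K / T1_K     # p2 = p1 * T2/T1 at fixed V and n
# ratio is about 1.07, comfortably less than 2 -- doubling the Celsius
# reading is far from doubling the absolute temperature.
```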
Silver Spring, MD Algebra 1 Tutor
Find a Silver Spring, MD Algebra 1 Tutor
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including algebra 1, calculus, physics, GRE
...Performing well on exams like the SAT, ACT and GRE is partially about the mathematics and partially about the strategies needed to do well. See my blog about some basic strategies. One of the
biggest differences between the ACT and the SAT is that the ACT also has some trigonometry included in the test, which usually students do not see until their junior year.
24 Subjects: including algebra 1, reading, calculus, geometry
...I am very comfortable working on conceptual issues, as well as personal challenges in learning math. I received a Bachelors of Music in Clarinet Performance from the University of the Pacific
in 2011. My musical studies included extensive study in music theory.
11 Subjects: including algebra 1, writing, algebra 2, public speaking
...My techniques are clear and effective, and I am confident that I would make an impact on your child. I taught myself Spanish the summer after freshman year of college. I then tested into the
most advanced Spanish class at Virginia Tech the fall of my sophomore year.
27 Subjects: including algebra 1, Spanish, physics, reading
...Because every student is different, I adapt tutoring methods to any learning style to achieve the best outcome. Most of the students I tutored have mastered algebra so well that they are now
taking Calculus classes.I have tutored Algebra I for two years at Montgomery College's Math Learning Center. I have tutored Algebra II for two years at Montgomery College's Math Learning Center.
3 Subjects: including algebra 1, algebra 2, prealgebra
MathGroup Archive: January 2010 [00767]
Re: How to calculate covariant derivative by Mathematica?
• To: mathgroup at smc.vnet.net
• Subject: [mg106850] Re: How to calculate covariant derivative by Mathematica?
• From: Simon <simonjtyler at gmail.com>
• Date: Sun, 24 Jan 2010 05:47:18 -0500 (EST)
• References: <hjeq9u$fr1$1@smc.vnet.net>
Hi Shen,
It depends on the context in which you're working, as a covariant
derivatives can _look_ quite different.
But maybe what you basically need is an operator of the type
In[1]:= DD[t_]:=(D[#,t]+Con[#,t])&
so that
In[2]:= DD[x]@f[x]
Out[3]= Con[f[x],x]+(f^\[Prime])[x]
Then you need to make your connection, Con act properly. For example,
it should return 0 when acting on scalars, and if you're acting on
explicit Tensors and don't distinguish between contravariant and
covariant, then maybe something like this would work:
In[5]:= Con[expr_?ArrayQ,t_]:=Module[{dim=Dimensions[expr],rep,perms},
rep=Array[Subscript[r, ##][t]&,{dim[[1]],dim[[1]]}];
we can test that this works properly on a (square) matrix:
In[6]:= rep=Array[Subscript[r, ##][t]&,{2,2}]; m=Array[Subscript[z, ##]&,{2,2}];
In[7]:= Con[m,t]==rep.m+m.rep\[Transpose]//Expand
Out[7]= True
The above can be extended to vector derivatives and associated
Symbolic covariant derivatives are a bit more tricky...
There are some packages out there... a google search for "mathematica
covariant derivative" brings up a few.
The Wolfram pages to look at are
Finally, if you want to do index / field theory style calculations,
then maybe you could try Cadabra.
Hope some of that helps,
On Jan 23, 8:33 pm, Shen <zshen2... at yahoo.com> wrote:
> I need to calculate covariant derivative by Mathematica. I noticed
> that there is no such a function in Mathematica. Can we define such a
> funcation? I don't know how to do it. Who can tell me how to define
> and calculate covariant derivative with Mathematica?
[Numpy-discussion] python-numpy debian package and f2py
Ondrej Certik ondrej@certik...
Sun Dec 2 18:52:57 CST 2007
I am a comaintainer of the python-scipy package in Debian and now it
seems to be in quite a good shape. However, the python-numpy package
is quite a mess, so as it usually goes in open source, I got fed up and
I tried to clean it. But I noticed that f2py was moved from an external
package into numpy; however, the versions mismatch:
The newest (deprecated) python-f2py package in Debian has a version
2.45.241+1926, so I assume this was the version of f2py before it was merged
with numpy. However, the f2py in numpy says when executing:
Version: 2_3816
numpy Version: 1.0.3
so I assume the version of f2py in numpy is 2_3816? So has the
versioning scheme of f2py changed? Another question - since both numpy
and f2py
is now built from the same source, doesn't f2py simply have the same
version as numpy, i.e. 1.0.3? Note: I know there is a newer numpy
release, but that's
not the point now.
I am asking because we probably will have to remove the old
python-f2py package and build a new one from the sources of numpy,
etc., and it will
take some time until this happens (ftpmasters need to remove the old
package from the archive, then the new binary package needs to go to
NEW queue for approval etc.), so I would like to make sure I
understand the versioning and the future plans with numpy and f2py,
before starting
the transition in Debian.
Actually, does it even make sense to create a python-f2py package? It
seems so (to me); it's a separate program. But since you decided to
merge it
with numpy, what are your thoughts about it?
Thanks a lot,
Excel User-Defined Functions
User-Defined Functions
User Defined Functions can provide great power and convenience and appear very simple to write. But there are some problem areas that may need special attention in your UDF coding:
• The UDF code must be in a General Module, not a Sheet Module.
• Action ignored: UDF "does nothing"
• Not recalculated when needed or always recalculating.
• Unexpectedly returns #Value or other error.
• Calculates more than once in a recalculation, the Function Wizard or when entered.
• Slow to calculate.
UDF action being ignored.
Excel will not allow a UDF written in VBA to alter anything except the value of the cell in which it is entered.
You cannot make a VBA UDF which directly:
• Alters the value or formula or properties of another cell.
• Alters the formatting of the cell in which it is entered.
• Alters the environment of Excel. This includes the cursor.
• Uses FIND, SpecialCells, CurrentRegion, CurrentArray, GOTO, SELECT, PRECEDENTS etc : although you can use Range.End.
• Note you can use FIND in Excel 2002/2003.
UDF not recalculating or always recalculating or calculating in an unexpected sequence:
Dependency Sequence Problems
Excel depends on analysis of the input arguments of a Function to determine when a Function needs to be evaluated by a recalculation. What this means is that the function will only be flagged as
needing evaluation when one or more of the input arguments change (unless the function is volatile).
If your UDF gets input values from any cells that are not in its argument list then it may not be recalculated, and give the wrong answer. Mostly you can bypass this problem by doing a Full
Calculation (Ctrl-Alt-F9), rather than a recalculation (F9), or by making your UDF volatile, but I strongly recommend that you put all the input cells in the argument list.
During a recalculation if Excel does evaluate the UDF it determines which cell references are actually being used inside the function to affect the function result, and if those cells have not yet
been finally calculated it will reschedule the Function for later calculation. This is required to make the UDF be finally calculated in the correct dependency sequence.
Function mySum1(SheetName as string) as Variant
End Function
This function will not automatically recalculate if Data!A2 changes: it will only automatically recalculate when Sheetname changes.
Note that putting all the references in the argument list does not control the initial sequence which Excel uses to calculate the UDF, and it may well be calculated more than once per recalculation
See UDFs evaluated more than once per workbook calculation.
False Dependencies
If you put a false dependency in the argument list (a reference which is not actually used inside the function) Excel will execute the function when the dependency changes, but not neccessarily in
dependency sequence.
To ensure that the code in a Function is executed in dependency sequence you need to use ISEMPTY to check the input argument(s). See Detecting Uncalculated Cells below.
User-Defined Volatile Functions
Making UDFs Volatile is NOT a good substitute for including ALL its inputs in the argument list.

If you develop a User Defined Function (UDF) you may need to make it Volatile (Application.Volatile(True)) to make it recalculate BUT:

• Application.Volatile makes your function ALWAYS recalculate each time Excel calculates, which can slow down calculation.
• Application.Volatile does not directly affect Calculation Sequence.
• If your UDF refers to cells that are not included in the function's argument list and (if the function is calculated) those cells have not yet been calculated, then Excel will reschedule the UDF to be recalculated again later. Note that uncalculated cells do not stop your UDF from calculating, they just make it calculate more than once. This will mostly give you the correct answer.
To make your function volatile put Application.Volatile(True) in your function before any other executable statements.
Function mySum2(SheetName as string) as Variant
end function
MySum will now give you the answer you expect.
I recommend avoiding the use of Application.Volatile if at all possible: put ALL your references in the argument list
Function mySum3(theFirstCell as range,theSecondCell as range) as Variant
End Function
Modifying UDFs and Sheet.calculate
Whenever you modify the code in a UDF Excel flags all instances of the UDF in all open workbooks as being uncalculated. If you then do a sheet.calculate or shift/F9 Excel will not only calculate the
currently selected worksheet(s), but will also evaluate all the instances of the UDF that are not on the currently selected worksheets.
Unexpectedly returning #Value, the wrong answer, or being recalculated more than once.
UDFs may be evaluated more than once per workbook calculation
Writing efficient and robust UDFs is not always simple. Various conditions may cause Excel to evaluate your UDF more than once during a recalculation. This means that your UDF should:
• Explicitly initialize all variables used in the UDF.
• Error-handle input from uncalculated cells (check for ISEMPTY, unexpected zeros, blanks, missing properties etc)
• Avoid doing intensive calculations until all input cells have been fully calculated
• Cache input values and output values to avoid time-intensive calculations when the input values have not changed.
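The caching point above is language-neutral. As a minimal illustrative sketch (shown in Python rather than VBA, with invented names), a UDF-like function can remember its last inputs and result and skip the expensive work when nothing has changed:

```python
# Minimal input/output caching sketch: recompute only when inputs change.
_last = {"inputs": None, "result": None}
calls = {"n": 0}  # counts how often the expensive step really runs

def expensive_sum_of_squares(values):
    calls["n"] += 1
    return sum(v * v for v in values)

def cached_udf(values):
    key = tuple(values)
    if _last["inputs"] == key:   # same inputs as last call: reuse result
        return _last["result"]
    result = expensive_sum_of_squares(values)
    _last["inputs"], _last["result"] = key, result
    return result

cached_udf([1, 2, 3])   # computes: calls["n"] == 1
cached_udf([1, 2, 3])   # cached:   calls["n"] still 1
cached_udf([1, 2, 4])   # recomputes: calls["n"] == 2
```

In a real VBA UDF the cache would typically live in module-level variables, and the key would be the input cell values rather than a Python tuple.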
Untrapped Errors
You should make sure that you have an On Error handler in your UDF to handle both real errors and errors that are caused by out of sequence execution. Un-trapped errors can halt the calculation
process before it is complete.
Function mySum4(theFirstCell as range,theSecondCell as range) as Variant
On error goto FuncFail:
Exit Function
End Function
Using UDFs in Conditional Formats
UDFs referenced in conditional formats get executed each time the screen is refreshed (even if Application.screenupdating=false), and breakpoints in these UDFs will be ignored. Make sure you have an
On Error handler in all UDFs referenced in conditional formats, since unhandled errors during a UDF execution caused by a screen refresh may cause VBA to silently stop running without any error
message being issued.
Because UDFs referenced by conditional formatting will be executed more frequently than you would expect it is not a good idea to reference slow-running UDFs from conditional formats.
Unhandled UDF Errors, and debugging your UDF's
If you don't have an On Error handler in your UDF you may need to be aware of the differences in the way that different Excel versions react.
• Excel 97 (SR2 and previous) will interrupt calculation and return #Value for UDF's if:
□ Calculation is called from VBA using Sheet.Calculate, Application.Calculate, or SendKeys "%^{F9}", True and the UDF contains an unhandled error
• if Application.Interactive=False Excel will hang, and you will need to shut it down.
• Range.Calculate with Excel 97 will automatically go into debug mode highlighting the error.
If you start the calculation process using Shift-F9 or F9 this problem does not occur: all cells are calculated and debug mode is not entered.
For the Microsoft view on the Excel 97 error handling problem see MSKB 244466
Excel 2000
• Range.Calculate goes into debug mode when the UDF has an unhandled error.
• Sheet.Calculate, Application.Calculate and Application.CalculateFull: does not usually interrupt calculation or enter Debug mode.
Excel 2002/2003
• Range.calculate, Sheet.Calculate, Application.Calculate and Application.CalculateFull: does not usually interrupt calculation or enter Debug mode.
Detecting Uncalculated Cells, Controlling calculation sequence using false dependencies.
Using the Visual Basic ISEMPTY function on a UDF Range argument will return TRUE if either the input cell is not yet calculated or it contains nothing.
To distinguish between uncalculated cells and cells without any data check that the length of the formula is greater than 0 (suggested by David Cuin) or use Cell.HasFormula:
=ISEMPTY(Cell.Value) AND Len(Cell.formula)>0 or =ISEMPTY(Cell.Value) and Cell.HasFormula
These expressions will only return true if the cell is both uncalculated and contains a formula.
To check whether any of the cells in a range or function input parameter are uncalculated use the following function:
Public Function IsCalced(theParameter As Variant) As Boolean
' Charles Williams 9/Jan/2009
' Return False if the parameter refers to as-yet uncalculated cells
Dim vHasFormula As Variant
IsCalced = True
On Error GoTo Fail
If TypeOf theParameter Is Excel.Range Then
vHasFormula = theParameter.HasFormula
' HasFormula can be True, False or Null:
' Null if the range contains a mix of Formulas and data
If IsNull(vHasFormula) Then vHasFormula = True
If vHasFormula Then
' CountA returns 0 if any of the cells are not yet calculated
If Application.WorksheetFunction.CountA(theParameter) = 0 Then IsCalced = False
End If
ElseIf VarType(theParameter) = vbEmpty Then
' a calculated parameter is Empty if it references uncalculated cells
IsCalced = False
End If
Exit Function
Fail:
IsCalced = False
End Function
I recommend that ALL UDFs should include both an On Error handler and where possible check for empty cells. UDFs written using the C API can check for xlCoerce returning xlretUncalced.
In this example the Function will not return a recalculated answer until both the input arguments have been recalculated and are not empty.
Function mySum5(theFirstCell as Variant,theSecondCell as variant) as Variant
On error goto FuncFail:
If not IsCalced(theFirstCell) or not IsCalced(theSecondCell) then
Exit Function
Exit Function
End Function
If you have a function that you want to calculate in dependency sequence for an argument that does not affect the result of the function (false dependency), you also need to check the input arguments
with ISEMPTY.
In this example the second argument is a false dependency because it does not affect the result of the function.
Function mySumTimes2(theFirstCell as range,theSecondCell as range) as Variant
On error goto FuncFail:
If IsEmpty(theFirstCell) or IsEmpty(theSecondCell) then
Exit Function
Exit Function
End Function
If you are using Excel 97 make sure you have installed the SR1 and SR2 updates.
There are several problems with UDFs in Excel97 which are fixed by these two service releases, which are available from:
The Microsoft Office download centre.
UDF error with multi-area range argument.
Howard Kaikow http://www.standards.com has discovered a bug in Excel's processing of UDFs using non-contiguous multi-area ranges as input arguments. See AreasBugBypass2.zip for a download containing
two workbooks that illustrate the problem and a way of bypassing it.
The problem occurs when:
• A UDF has a multi-area range as one of its input arguments.
• And the multi-area range refers to the worksheet that contains the formula with the UDF.
• And a different sheet is the activesheet when the UDF is calculated.
In these circumstances Excel/VBA incorrectly treats the multi-area range as referring to the active sheet. This means that the UDF may give incorrect answers without warning.
I recommend that you do not attempt to program a UDF to handle multi-area ranges as arguments: use multiple (optional if there are a varying number) range arguments instead.
If required you can bypass the problem as demonstrated in in the download, or by ensuring that there are no instances of the UDF where the multi-area range refers to the sheet with the UDF formula.
The bug exists in Excel 97, Excel 2000, Excel 2002 and partially in Excel 2003. It has been fixed in Excel 2007.
Referencing cell formatting properties
If your UDF references cell properties other than .value or .formula (ie .Bold) there are some occasions when these properties may be undefined when your UDF is evaluated. One such occasion is
renaming a worksheet in automatic mode. If this happens you may need to explicitly recalculate your function.
UDF Performance
For optimum performance UDFs should be coded in C and use the C API.
Usually Excel's built-in functions are faster than the equivalent function written in VBA, unless the VBA function uses a better algorithm.
Make sure you use an IsEmpty routine at the top of your UDF to check for uncalculated cells so that you can avoid unneccessary calculations of your UDF. IsEmpty returns True if the variable being
checked has not been initialised or has been set to empty. When used on UDF input range arguments IsEmpty returns True if the range either contains nothing or has not yet been calculated. Combining
an ISEMPTY test with a test that the formula length is >0 will detect only uncalculated cells but not cells that are empty because they contain nothing.
Also since executing UDF's is usually slower than other Excel calculations, try to put the reference to your UDF in a place in your formulae where it will be calculated as late as possible (towards
the end of the formulae and inside the brackets).
Consider whether it makes sense to abandon calculation not only if ALL the input arguments are uncalculated/empty but also if ANY of them are uncalculated/empty.
Automatic and Function key Calculation slower than VBA calculation
UDFs calculate significantly slower when the calculation is started automatically by Excel or by pressing F9 than when the calculation is started by a VBA calculation statement like Application.Calculate.
The slowdown is significantly larger if the VBE is open and not minimised.
The slowdown is an overhead for each UDF that is recalculated, so its roughly proportional to the number of UDFs.
These timings are for 16000 very simple UDFs, using Excel 2002 on an AMD 1200MHZ with Windows XP:
│Autocalc with VBE open and maximised │ 91 seconds │
│Autocalc with VBE open and minimised │ 38 seconds │
│Autocalc with VBE closed │ 2 seconds │
│Application.Calculatefull with VBE open and maximised │0.302 seconds│
│Application.Calculatefull with VBE closed │0.293 seconds│
So if you are using a lot of UDFs it really pays to be in manual calculation mode and have a calculate button that uses VBA to initiate an Excel calculation (Application.Calculate or Application.CalculateFull).
Automation Addin UDFs
UDFs in Automation addins created using VB6 do not use the VBE, so do not suffer from the VBE overhead as above, but otherwise give very similar performance to VBA. In theory compiled VB6 with array
bounds checking disabled should be faster for intensive arithmetic calculations on arrays, but I have not yet managed to detect any speed improvement.
Array Functions
UDF's can be written as multicell array formulae that can be entered using Ctrl-Shift-Enter. The results of the array calculation are returned to the cells by assigning an array to the function.
Note that Excel behaves unexpectedly when a multi-cell UDF is entered or modified and depends on volatile formulae: the UDF is evaluated once for each cell it occupies. This does not happen when the
UDF is recalculated, only when it is entered or changed.
Transferring information from Excel Ranges to the UDF.
You should also try to minimise the performance overhead of transferring information from Excel to VBA and back to Excel:
If you are going to process each cell in the input range(s) then they should usually be read into a variant variable containing an array, which is subsequently read from for the calculations. This
avoids reading information from Excel cell by cell, which is slow, and ensures that you only read each cell once. Note that this example assumes that each range is a single-area contiguous range.
It is faster (15-20%) to use the Range.Value2 property rather than the (default) Range.Value property. The Range.Value property attempts to convert cells formatted as Dates to a variant containing a
VBA date type, and cells formatted as currency to a variant containing a VBA Currency type. Range.Value2 attempts to convert date and Currency formatted cells into Variants containing Doubles.
Function mySum6(theFirstRange As Range, theSecondRange As Range) As Variant
Dim dblMySum As Double
Dim varRange1 As Variant
Dim varRange2 As Variant
Dim j As Long
Dim k As Long
Dim blEmptyCells As Boolean
On Error GoTo FuncFail:
' initialise output
dblMySum = 0#
' check for non-empty cells
If not IsCalced(theFirstRange) or not IsCalced(theSecondRange) then exit Function
' get ranges into variant variables holding array
varRange1 = theFirstRange.Value2
varRange2 = theSecondRange.Value2
blEmptyCells = True
For j = 1 To UBound(varRange1, 1)
If Not blEmptyCells Then Exit For
For k = 1 To UBound(varRange1, 2)
If Not IsEmpty(varRange1(j, k)) Then
blEmptyCells = False
Exit For
End If
Next k
Next j
If blEmptyCells Then
For j = 1 To UBound(varRange2, 1)
If Not blEmptyCells Then Exit For
For k = 1 To UBound(varRange2, 2)
If Not IsEmpty(varRange2(j, k)) Then
blEmptyCells = False
Exit For
End If
Next k
Next j
End If
' exit function if ALL input cells are empty
If blEmptyCells Then
Exit Function
Else
' add cells to double (error traps text cells)
For j = 1 To UBound(varRange1, 1)
For k = 1 To UBound(varRange1, 2)
dblMySum = dblMySum + varRange1(j, k)
Next k
Next j
For j = 1 To UBound(varRange2, 1)
For k = 1 To UBound(varRange2, 2)
dblMySum = dblMySum + varRange2(j, k)
Next k
Next j
End If
' assign value to function
mySum6 = dblMySum
Exit Function
FuncFail:
mySum6 = CVErr(xlErrValue)
End Function
Using Excel functions inside your UDF.
If you are going to process the input ranges using Excel functions called from VBA, then keep the input ranges as Range object variables, so that the function does not have to transfer all the cell
information to VBA : you are then just manipulating the object variable pointers.
Note that functions which reference ranges such as VLOOKUP and INDEX can return Empty without raising an error if the range being referenced contains uncalculated cells.
Since assigning Empty to a Long results in zero you can get unexpected results if you do not check for this condition.
Note that using Application.WorksheetFunction.Match is generally about 20% faster than using Application.Match (similar results expected for other functions). The other difference is that
Application.WorksheetFunction raises an error if, for instance, no match is found, but Application.Match returns an error value without raising an error.
This example extends the isempty check to disjoint ranges containing multiple areas, and will only check cells that are within the used range (efficient handling of ranges specified as entire columns
or rows).
Note that because there is a bug in the way Excel handles ranges containing multiple areas I do not recommend that UDF's are programmed to handle multiple areas.
Function mySum7(theFirstRange As Range, theSecondRange As Range) As Variant
Dim blEmptyCells As Boolean
Dim rngUsedCellsInRange As Range
Dim oCell As Range
Dim j As Long
On Error GoTo FuncFail:
' check for non-empty cell
blEmptyCells = True
Set rngUsedCellsInRange = Intersect(theFirstRange, theFirstRange.Parent.UsedRange)
If Not rngUsedCellsInRange Is Nothing Then
For Each oCell In rngUsedCellsInRange
If Not IsEmpty(oCell) Then
blEmptyCells = False
Exit For
End If
Next oCell
End If
If blEmptyCells Then
Set rngUsedCellsInRange = Intersect(theSecondRange, theSecondRange.Parent.UsedRange)
If Not rngUsedCellsInRange Is Nothing Then
For Each oCell In rngUsedCellsInRange
If Not IsEmpty(oCell) Then
blEmptyCells = False
Exit For
End If
Next oCell
End If
End If
Set oCell = Nothing
Set rngUsedCellsInRange = Nothing
' exit function if all input cells are empty
If blEmptyCells Then
Exit Function
Else
mySum7 = Application.Sum(theFirstRange, theSecondRange)
End If
Exit Function
FuncFail:
Set oCell = Nothing
Set rngUsedCellsInRange = Nothing
mySum7 = CVErr(xlErrValue)
End Function
UDFs with a variable number of input arguments.
This example shows how to handle a variable number of arguments.
Function mySum8(ParamArray varArgs() As Variant) As Variant
Dim varArg As Variant
Dim rngCell As Range
Dim dblMySum As Double
Dim blEmptyCells As Boolean
Dim j As Long
Dim k As Long
On Error GoTo FuncFail:
' initialise output
dblMySum = 0#
' check for non-empty cell
blEmptyCells = True
For Each varArg In varArgs
If Not blEmptyCells Then Exit For
If Not IsMissing(varArg) Then
' if not a range skip the check
If TypeName(varArg) = "Range" Then
For Each rngCell In varArg
If Not blEmptyCells Then Exit For
If Not IsEmpty(rngCell) Then
blEmptyCells = False
Exit For
End If
Next rngCell
If Not IsEmpty(varArg) Then blEmptyCells = False
End If
End If
Next varArg
' exit function if all input cells are empty
If blEmptyCells Then
Exit Function
Else
' add cells to double (error traps text cells)
For Each varArg In varArgs
If Not IsMissing(varArg) Then
If TypeName(varArg) = "Range" Then
For Each rngCell In varArg
dblMySum = dblMySum + rngCell
Next rngCell
dblMySum = dblMySum + varArg
End If
End If
Next varArg
End If
' assign value to function
mySum8 = dblMySum
Exit Function
FuncFail:
mySum8 = CVErr(xlErrValue)
End Function
Function Wizard
Your end-users can use the Function Wizard to enter your UDFs (usually from the User Defined Category) into a worksheet.
If your UDF takes a long time to execute you will soon discover that the Function wizard executes your UDF several times.
There are two methods you can use to avoid this:
• Declare the function as Private: the UDF will still work but will not appear in the list of functions shown by the function wizard (but you can still use the Function Wizard to modify an existing formula).
• Add code to your UDF to determine if it has been called from the function Wizard.
Axel König has suggested adding the following code:
If (Not Application.CommandBars("Standard").Controls(1).Enabled) Then Exit Function
This code depends on the fact that when using the function wizard most icons in the toolbars are disabled. I have tested Axel's solution in Excel 97, Excel 2000, Excel 2002 and Excel 2003, and it
works well.
A solution is also possible by using the Windows API to check if the Function Wizard window is showing and has the same process ID as the current Excel process.
Binomial Distribution and Probability
Date: 06/07/97 at 11:06:33
From: Bianca Clarke
Subject: Binomial distribution
I just don't understand binomial distribution. Can someone explain it
very simply?
Many thanks,
Bianca Clarke
Date: 06/07/97 at 15:53:26
From: Doctor Anthony
Subject: Re: Binomial distribution
The binomial distribution applies when you have repeated experiments
with constant probabilities of success and failure at each trial. A
good example is throwing a die, say five times, and finding the
probability of rolling exactly three sixes.
If we use the letters S and F to represent success and failure in any
trial, then one possible sequence giving the desired result would be:

   S S S F F

The probability of this particular sequence is:

   (1/6)^3 * (5/6)^2

Another sequence would be FFSSS and this too has probability:

   (1/6)^3 * (5/6)^2
In fact there are many other sequences that produce 3 sixes and 2 non-
sixes, and to get the total probability of 3 sixes and 2 non-sixes, we
must multiply the probability of any given sequence, (1/6)^3*(5/6)^2,
by the number of sequences. This in effect, is the number of
different arrangements you can make with the 5 letters S and F, 3
being alike of one kind and two being alike of a second kind.
In case you are unfamiliar with this problem, we shall show that the
number of different arrangements is:
  5!
------
3! 2!
We have 5 letters, and to start with assume they are all different
These letters could then be arranged in 5! ways. However three are
alike of one kind and we could swap these three amongst themselves in
3! ways without giving rise to a new arrangement. Similarly two
others are alike of a second kind, and these could be swapped between
themselves without giving rise to a new arrangement. Our answer of 5!
is therefore too large by factors 3! and 2!. Hence the actual number
of different arrangements of three S's and two F's is given by:
  5!
------ = 10
3! 2!
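This count is easy to verify by brute force; here is a quick Python enumeration of the distinct orderings of three S's and two F's (my own check, not part of the original answer):

```python
from itertools import permutations
from math import factorial

# All distinct orderings of S S S F F (duplicates collapse in the set)
distinct = set(permutations("SSSFF"))
print(len(distinct))                                  # 10
print(factorial(5) // (factorial(3) * factorial(2)))  # 10
```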
This is also the expression for the binomial coefficient 5C3. These
coefficients you will find are given on most scientific calculators.
So the probability of three successes = 10*(1/6)^3*(5/6)^2
The general expression for binomial probabilities are given by the
terms of the binomial expansion of (p+q)^n
Here, n = number of trials
p = probability of success at each trial (constant)
q = probability of failure at each trial (p+q = 1)
The probability of r successes is P(r) = nCr*p^r*q^(n-r)
In the example we did above, n = 5, p = 1/6, q = 5/6, and r = 3:
P(3) = 5C3*(1/6)^3*(5/6)^2
= 10*(1/6)^3*(5/6)^2 = 0.03215
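As a quick numeric check of the worked example (a sketch of mine, not part of the original reply), the general formula P(r) = nCr*p^r*q^(n-r) gives the same answer:

```python
from math import comb

n, r = 5, 3          # 5 rolls, 3 sixes
p, q = 1/6, 5/6      # probability of success / failure on each roll
prob = comb(n, r) * p**r * q**(n - r)
print(round(prob, 5))  # 0.03215
```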
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Physics Major (B.S.) - Undergraduate - 2013 University Catalog
You are viewing the 2013 University Catalog. Please see the newest version of the University Catalog for the most current version of this program's requirements.
A minimum of 120 semester hours of coursework is required for the baccalaureate degree with a minimum 2.0 overall GPA, and a minimum 2.0 major GPA. However, more than 120 semester hours may be
required depending upon the major field of study. In addition to the major requirement outlined below, all university students must fulfill the set of
General Education requirements applicable to their degree.
Complete 65 semester hours including the following 3 requirement(s):
1. PHYSICS REQUIRED COURSES
Complete the following for 24 semester hours:
PHYS 191 University Physics I (3 hours lecture, 2 hours lab) 4
PHYS 192 University Physics II (3 hours lecture, 2 hours lab) 4
PHYS 210 Mechanics (3 hours lecture, 2 hours lab) 4
PHYS 240 Electricity and Magnetism (3 hours lecture, 2 hours lab) 4
PHYS 350 Optics (3 hours lecture, 2 hours lab) 4
PHYS 460 Modern Physics (3 hours lecture, 2 hours lab) 4
2. PHYSICS ELECTIVE COURSES
Complete at least 14 semester hours from the following:
EAES 105 Physical Geology (3 hours lecture, 2 hours lab) 4
PHYS 242 Circuit Theory (2 hours lecture, 2 hours lab) 3
PHYS 245 Electronics and Digital Circuits (2 hours lecture, 2 hours lab) 3
PHYS 247 Microprocessors and Their Applications (2 hours lecture, 2 hours lab) 3
PHYS 280 Astronomy (3 hours lecture, 2 hours lab) 4
PHYS 310 Advanced Mechanics (3 hours lecture) 3
PHYS 320 Thermodynamics (3 hours lecture) 3
PHYS 340 Advanced Electricity and Magnetism (3 hours lecture) 3
PHYS 377 Mathematical Physics (3 hours lecture) 3
PHYS 430 Computer Simulations of Physical Systems (3 hours lecture) 3
PHYS 462 Nuclear Physics (3 hours lecture, 2 hours lab) 4
PHYS 464 Quantum Mechanics (3 hours lecture) 3
PHYS 468 Fluid Mechanics (3 hours lecture) 3
PHYS 470 Solid State Physics (3 hours lecture) 3
PHYS 490 Literature Research in Physics (2 hours lecture) 2
PHYS 495 Laboratory Research in Physics 1-4
Complete the following for 27 semester hours:
CHEM 120 General Chemistry I (3 hours lecture, 3 hours lab) 4
CHEM 121 General Chemistry II (3 hours lecture, 3 hours lab) 4
CMPT 183 Foundations of Computer Science I (2 hours lecture, 2 hours lab) 3
MATH 122 Calculus I (4 hours lecture) 4
MATH 221 Calculus II (4 hours lecture) 4
MATH 222 Calculus III (4 hours lecture) 4
MATH 420 Ordinary Differential Equations (4 hours lecture) 4
Course Descriptions:
CHEM120: General Chemistry I (3 hours lecture, 3 hours lab)
Introductory lecture and laboratory course for science majors, prerequisite for all advanced chemistry courses. Introduction to atomic and molecular structure, bonding, stoichiometry, states of
matter, solutions, and selected topics in descriptive inorganic chemistry. Laboratory stresses techniques and data treatment and their use in examining chemical systems. 4 sh.
Prerequisites: Satisfactory score on the Mathematics readiness test OR a grade of C- or better in MATH 100 or MATH 111 or MATH 112 or MATH 116 or MATH 122 or MATH 221 or MATH 222. Satisfactory score
on the Chemistry readiness test OR a grade of C- or better in CHEM 105 or CHEM 106 or CHEM 113.
CHEM121: General Chemistry II (3 hours lecture, 3 hours lab)
Introductory lecture and laboratory course for science majors, prerequisite for all advanced chemistry courses. Introduction to thermochemistry, kinetics; general acid base, precipitation, redox
equilibria, electrochemistry and selected topics in descriptive inorganic chemistry. Laboratory stresses techniques and data treatment and their use in examining chemical systems. 4 sh.
Prerequisites: CHEM 120 with a grade of C- or better.
CMPT183: Foundations of Computer Science I (2 hours lecture, 2 hours lab)
Basic theory of digital computers. Syntax and semantics of a programming language. Algorithms: logic, design, testing and documentation. 3 sh.
Prerequisites: MATH 100, MATH 112, MATH 114, MATH 116, MATH 122 or MATH 221.
EAES105: Physical Geology (3 hours lecture, 2 hours lab)
Materials of the earth; landforms and structures; the processes and agents responsible for their formation and modification. Modern tectonic concepts. Topographic and geologic maps. Required field
trips. Not open to students who have had Principles of Geology. Meets Gen Ed 2002 - Natural/Physical Science Laboratory. Previous course GEOS 112 effective through Spring 2012. 4 sh.
MATH122: Calculus I (4 hours lecture)
Limits, continuity; derivative and differentiation; applications of the derivative, maxima, minima, and extreme considerations; antiderivatives; Riemann integral. 4 sh.
Prerequisites: MATH 111 or MATH 112 or placement through the Montclair State University Placement Test (MSUPT) or a satisfactory score on department's Calculus Readiness Test. (Students who did not
satisfy the course prerequisite at MSU and students who received a grade of D-, D, or D+ in the prerequisite course taken at MSU are required to demonstrate competency on the department's Calculus
Readiness Test.)
MATH221: Calculus II (4 hours lecture)
Riemann integral applications, transcendental functions, techniques of integration, improper integrals, L'Hospital's rule, infinite series. 4 sh.
Prerequisites: MATH 122.
MATH222: Calculus III (4 hours lecture)
Vector algebra; partial differentiation, and extreme considerations; polar, cylindrical, and spherical coordinates, multiple integration; introduction to line integrals. 4 sh.
Prerequisites: MATH 221.
MATH420: Ordinary Differential Equations (4 hours lecture)
A course in the theory and applications of ordinary differential equations which emphasizes qualitative aspects of the subject. Topics include analytic and numerical solution techniques for linear
and nonlinear systems, graphical analysis, existence-uniqueness theory, bifurcation analysis, and advanced topics. Prerequisite: MATH 335. 4 sh.
Prerequisites: MATH 335.
PHYS191: University Physics I (3 hours lecture, 2 hours lab)
This one-semester calculus-based course including laboratory is a study of the principles of physics and some applications to society's problems. Topics covered include mechanics, thermodynamics,
fluids, and harmonic motion. 4 sh.
Prerequisites: MATH 122 is prerequisite or co-requisite.
PHYS192: University Physics II (3 hours lecture, 2 hours lab)
Calculus-based course. Study of some principles of physics and some applications to society's problems. Topics include: wave motion, sound and noise pollution, optics, electricity, lasers, nuclear
theory, radiation, nuclear reactors, waste disposal. 4 sh.
Prerequisites: MATH 221 is prerequisite or corequisite.
PHYS210: Mechanics (3 hours lecture, 2 hours lab)
Classical mechanics: Kinematics, Newton's laws, impulse and momentum, statics, work and energy, oscillations, general motion, central force motion, non-inertial frames, system of particles, methods
of handling data. 4 sh.
Prerequisites: PHYS 191.
PHYS240: Electricity and Magnetism (3 hours lecture, 2 hours lab)
Basic principles of electromagnetism: Coulomb's law and general techniques in electrostatics, currents and their associated magnetic field, electromagnetic induction and magnetic properties of
materials. Foundations of Maxwell's equations (without detailed solutions). Laboratory experiments. 4 sh.
Prerequisites: PHYS 192. MATH 222 is a prerequisite or corequisite.
PHYS242: Circuit Theory (2 hours lecture, 2 hours lab)
Introduces basic methods in circuit analysis and design. Topics include linear electric circuits and their response, circuit theorems, filters, Fourier analysis of different inputs and outputs, and
transmission lines. 3 sh.
Prerequisites: PHYS 192 or PHYS 194 and MATH 221.
PHYS245: Electronics and Digital Circuits (2 hours lecture, 2 hours lab)
An introduction to the principles of amplifiers, waveform generators, and digital circuits, with emphasis on the use of commonly available integrated circuit packages. 3 sh.
Prerequisites: PHYS 192 or 194.
PHYS247: Microprocessors and Their Applications (2 hours lecture, 2 hours lab)
One semester course providing an introduction to the principles, operations and applications of microprocessors including experiment control and data manipulation. 3 sh.
Prerequisites: PHYS 192 or 194.
PHYS280: Astronomy (3 hours lecture, 2 hours lab)
Application of physical laws to the earth as a planet; nature of the other planets; orbital motion and space flight; origin of the solar system; the birth, life and death of a star galactic
structure; and cosmology. Meets the University Writing Requirement for majors in Physics. 4 sh.
Prerequisites: PHYS 191, 192 or PHYS 193, 194.
PHYS310: Advanced Mechanics (3 hours lecture)
Classical mechanics; transformations, oscillators, generalized motion; Lagrange's equations; Hamilton's equation; small oscillations; wave propagation. (Offered alternate years.) Meets the University
Writing Requirement for majors in Physics. 3 sh.
Prerequisites: MATH 222, and 420, and PHYS 210.
PHYS320: Thermodynamics (3 hours lecture)
Thermodynamic systems; laws of thermodynamics; entropy; kinetic theory; transport processes; statistical thermodynamics. (Offered alternate years.) 3 sh.
Prerequisites: MATH 222 and PHYS 210.
PHYS340: Advanced Electricity and Magnetism (3 hours lecture)
Dielectric materials; image calculations; Laplace's equation; magnetic materials and flux; A.C. networks; nonsinusoidal AC; transients and pulses; electromagnetic radiation. (Offered alternate
years.) 3 sh.
Prerequisites: MATH 420.
PHYS350: Optics (3 hours lecture, 2 hours lab)
Propagation of light, optical components, instruments and photometry. Interference, diffraction and polarization with elements of spectroscopy. (Offered alternate years.) Meets the University Writing
Requirement for majors in Physics. 4 sh.
Prerequisites: PHYS 240.
PHYS377: Mathematical Physics (3 hours lecture)
Vector analysis, complex variables, ordinary and partial differential equations, matrices. (Not offered every year.) 3 sh.
Prerequisites: 2 years of physics and MATH 222.
PHYS430: Computer Simulations of Physical Systems (3 hours lecture)
This course applies computer techniques and numerical analysis to model physical systems. Simulations and calculations will be done of falling bodies, gravitational orbits, scattering, oscillations,
electrical circuits, molecular dynamics, Monte Carlo techniques, chaos, and quantum systems. 3 sh.
Prerequisites: MATH 221, PHYS 191, PHYS 192, and CMPT 183.
PHYS460: Modern Physics (3 hours lecture, 2 hours lab)
Special relativity, kinetic theory of matter; quantization of electricity, light and energy; nuclear atom; elementary quantum mechanics and topics on solid state. (Offered alternate years.) 4 sh.
Prerequisites: PHYS 210, 240.
PHYS462: Nuclear Physics (3 hours lecture, 2 hours lab)
Nuclear radiation; radioactive decay; detectors; nuclear spectroscopy and reactions; theories and models; fission, fusion, reactors; and application of radioisotopes. (Offered alternate years.) Meets
the University Writing Requirement for majors in Physics. 4 sh.
Prerequisites: PHYS 210, 240.
PHYS464: Quantum Mechanics (3 hours lecture)
Schroedinger's wave equation, its application and interpretation; Pauli exclusion principle and spectra. (Offered alternate years.) 3 sh.
Prerequisites: PHYS 460.
PHYS468: Fluid Mechanics (3 hours lecture)
Mechanics of continuous media, liquids and gases; stress, viscosity, Navier-Stokes and Euler Equations, exact solutions, potential flow, circulation and vorticity, dimensional analysis and asymptotic
models, boundary layers, stability theory and applications to industrial and environmental problems. Cross listed with MATH 468. 3 sh.
Prerequisites: PHYS 210 or MATH 222.
PHYS470: Solid State Physics (3 hours lecture)
Properties of solid state matter are developed from the quantum mechanics of atoms and molecules. (Not offered every year.) 3 sh.
Prerequisites: PHYS 460.
PHYS490: Literature Research in Physics (2 hours lecture)
Student considers topics in physics and gains facility in literature research techniques: topics in pure physics or related to physics education. Students intending to enroll in laboratory research
in physics should use PHYS 490 to provide the literature research related to his/her laboratory problem. (Not offered every year.) 2 sh.
Prerequisites: At least 16 credit hours of physics beyond PHYS 192.
PHYS495: Laboratory Research in Physics
Solution of a laboratory problem research in pure physics or in physics education. Written report required. (Not offered every year.) 1 - 4 sh.
Prerequisites: At least 16 credit hours of physics beyond PHYS 192.
Cg Cgk ??
I have heard about Cg/Cgk which are Instrument Capability Indexes.
The idea for Cg is the same as for Cp; the "conceptual" formula for Cp and Cg is:
Cg=(allowable variation)/(actual variation).
The differences are what we take for allowable and actual variation. For Cp, the actual variation is the process variation, and the allowed variation is the specification range.
For Cg, the actual variation is the INSTRUMENT ALONE variation (6 x Sinst). No between-parts, within-part, between-operators or along-time variation is included. As said in a previous post, about 50 measurements are made consecutively on a master or "best available" part, always on the same point of the part, by the same person, and in a controlled environment (usually the metrology laboratory).
The allowable variation is taken as 1/5 of the process variation (6 x Sproc) or 1/5 of the product specification range, depending on whether you want the instrument to control the process or to check conformance of the parts. Then:
Cg=(0.2 x 6 x Sproc)/(6 x Sinst)=Sproc/5Sinst, or
Cg=(0.2 x Tol))/(6 x Sinst)=Tol/30Sinst
If you think that the 30 sigma in the denominator here is ridiculously excessive, you are just wrong. Imagine an instrument that has a Cg of 1.33; that means that Sinst is 0.025 x Tol. Now, no measurement system that uses this instrument will have a sigma due to r&R lower than this, because you have to add all the other variations to the instrument alone. Then, at best, S(r&R)=0.025 x Tol, and the value of r&R=5.15 x S(r&R)=0.13 x Tol. Then, the r&R using this instrument will never be better than 13% of the tolerance (in fact, it will always be worse). Not such a crazy number!
Cgk, as Cpk, takes into account the position (which would be the bias): Cgk=(0.5 of the allowable variation – |bias|) / (0.5 of the actual variation)
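Following the formulas above, a minimal numeric sketch (the function and variable names are my own; the 0.2 allowable-variation factor and the 5.15-sigma r&R spread follow the post's conventions):

```python
def cg(tol, s_inst):
    # Cg = (0.2 * Tol) / (6 * Sinst) = Tol / (30 * Sinst)
    return (0.2 * tol) / (6.0 * s_inst)

def cgk(tol, s_inst, bias):
    # Cgk = (half the allowable variation - |bias|) / (half the actual variation)
    #     = (0.1 * Tol - |bias|) / (3 * Sinst)
    return (0.1 * tol - abs(bias)) / (3.0 * s_inst)

# The post's example: an instrument with Cg = 1.33 has Sinst = 0.025 * Tol,
# so its r&R spread can never beat 5.15 * 0.025 = 13% of the tolerance.
tol = 1.0
s_inst = 0.025 * tol
print(round(cg(tol, s_inst), 2))             # 1.33
print(round(100 * 5.15 * s_inst / tol, 1))   # 12.9 (% of tolerance)
```

With zero bias, Cgk reduces to Cg; any bias only lowers it.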
The idea of Cg/Cgk is to have this information in advance, to use it for the decision about whether the instrument would be "selectable" for a given measurement. If you decide to include the instrument in a measurement system that you will include in the control plan, only then do you make the r&R with the full system, the real process parts and the full measurement variation. But you don't want to develop a measuring system, design and build a measurement device, and write the measuring instructions only to find that the instrument itself could never have been capable of such a measurement.
Efficient computation of the least fraction with square denominator greater than the square root of 2.
The least rational number greater than $\sqrt{2}$ that can be written as a ratio of integers $x/y$ with $y\le10^{100}$ can be found in a moment using a little Python program. Can anyone write a
program that finds, in hours rather than centuries, the least rational greater than $\sqrt{2}$ of the form $x/y^2$ with $y^2\le 10^{100}$?
More generally, my question is whether the following computation is known to be feasible or not feasible:
Given $N$, find the least rational greater than $\sqrt{2}$ of the form $x/y^2$, with $x$ and $y$ integers and $y^2\le N$. For definiteness, let's say that the output should be the required rational
written in lowest form.
By a feasible computation I mean one that can be done in $O((\log N)^k)$ bit operations for some constant $k$.
Of course the square root of 2 is not essential here. Any irrational would do, as long as comparisons with rationals are feasible. I don't know of any such irrational for which I can answer the
question I've posed.
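For concreteness, the problem statement can be checked by brute force for small $N$ (a sketch of my own in Python; it takes roughly $\sqrt N$ steps, i.e. exponential in $\log N$, so it illustrates the question rather than answering it):

```python
from fractions import Fraction
from math import isqrt

def least_above_sqrt2(N):
    """Least rational x/y^2 > sqrt(2) with y^2 <= N, in lowest terms."""
    best = None
    y = 1
    while y * y <= N:
        # Smallest integer x with x/y^2 > sqrt(2), i.e. x^2 > 2*y^4.
        # (x^2 = 2*y^4 is impossible since sqrt(2) is irrational.)
        x = isqrt(2 * y**4) + 1
        cand = Fraction(x, y * y)
        if best is None or cand < best:
            best = cand
        y += 1
    return best

print(least_above_sqrt2(100))  # 17/12, from 51/36 at y = 6
```

For $N=10^{100}$ this loop would take on the order of $10^{50}$ iterations, which is exactly the "centuries rather than hours" regime the question asks to avoid.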
nt.number-theory computational-number-theo computational-complexity diophantine-approximation
You would mean "in lowest terms", I suppose. – Charles Matthews May 17 '11 at 11:55
@Charles: Thanks. I've edited. – SJR May 17 '11 at 12:07
see also mathoverflow.net/questions/22868/… – Junkie May 18 '11 at 4:02
In essence, you're trying to find $x,y$ with $x^2-2y^4$ small. For any fixed integer $c$, $x^2-2y^4=c$ has only finitely many solutions, but it may not be easy to find them. This makes me think
it's hard to solve your problem (although it's very far from a convincing argument). Maybe it suggests at least that the literature on diophantine equations like $x^2-2y^4=c$ is a good place to
start. – Gerry Myerson May 18 '11 at 6:09
@Gerry: It may be that I'm really trying to make $x^2-2y^4$ small, but this puzzles me, because I don't have any idea whatsoever how lopsided is asymmetric (i.e. from above) approximation of $\sqrt
{2}$ by fractions with square denominator. Maybe the numbers $x/y^2$ I'm looking for are often not very good approximations, which would make singling them out via values of $x^2-2y^4$ rather
delicate. – SJR May 18 '11 at 9:27
show 2 more comments
2 Answers
The following "algorithm" does not necessarily give the best solution, but yields fairly "good" solutions.

Start with $n$ "bad" rational approximations $x_1/y_1,\dots,x_n/y_n$ of $2^{1/4}$ (obtained e.g. by considering a few convergents of $2^{1/4}$) such that $y_1 \cdots y_n < N$, and consider a linear combination $\sum_{i=1}^n a_i(x_i/y_i)^2=x/(y_1\cdots y_n)^2$ with $a_i\in \mathbb Z$, $\sum_{i=1}^n a_i=1$, which is slightly larger than $\sqrt 2$. One way to get coefficients $a_1,\dots,a_n\in \mathbb Z$ with $\sum a_i(x_i/y_i)^2$ close to $\sqrt{2}$ is by using the LLL algorithm: consider the $(n+1)$-dimensional sublattice $\Lambda$ of $\mathbb R^{n+2}$ spanned by $f_1=(1,0,0,\dots,0,A(x_1/y_1)^2)$, $f_2=(0,1,0,\dots,0,A(x_2/y_2)^2)$, $\dots$, $f_n=(0,0,\dots,1,0,A(x_n/y_n)^2)$, $f_{n+1}=(0,\dots,0,1,A\sqrt 2)$, where $A$ is some huge real number (one can also work with an integral lattice by rounding off the last coordinate to the nearest integer for a fixed large real number $A$). A short vector of the form $(a_1,\dots,a_n,1)$ in $\Lambda$ yields a good rational approximation $\sum_{i=1}^n a_i(x_i/y_i)^2$ of $\sqrt 2$. About half of the time, such an approximation should have the correct sign. A few LLL runs for various large constant values of $A$ (which should be larger than $\max (y_i^2)$; perhaps $A\sim \sqrt{N}$ is interesting) and various finite sets $x_1,\dots,x_n$ (with $n$ also varying) should give interesting solutions.
Thanks Roland. I am interested in efficient ways to get "good" approximations. Do you know anything about the exponents $e$ such that the method you describe gives approximations $x/y^2$
with $|x/y^2-\sqrt{2}|<y^e$? – SJR May 17 '11 at 13:36
@Roland: By the way, I am not sure that the rational $x/y^2$ to be computed in my question is always "good" approximation to $\sqrt{2}$. It might even be (for all I know) that for all but
finitely many $N$ the best approximations $x/y^2$ with $y^2\le N$ are LESS than $\sqrt{2}$. – SJR May 17 '11 at 14:07
You should try the algorithms in Elkies' paper (from 2000) "Rational points near curves ..." http://arxiv.org/abs/math/0005139 . His idea is to cover the curve with a bunch of small rectangles, and use lattice basis reduction within each such region. He proves a result which either says that there are a small number of solutions or all the solutions lie on a line.
@Victor: Elkies paper looks interesting. I suppose you have in mind finding small values of $x^2−2y^4$, and Elkies ideas certainly seem relevant to this. On the hand, it seems that his
approach, no matter how practical, is nonfeasible. What I'm really hoping for is a proof that the type of problem I stated is nonfeasible, or at least ways to reduce the general problem
to other problems whose complexity has been studied. – SJR Jun 8 '11 at 13:55
The volume of a sphere of radius r increases at 12pi cm^3/minute by inflation. What is the rate of growth of its surface area?
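A sketch of a worked solution (my own addition; no answer was posted in the thread). With $V=\frac{4}{3}\pi r^3$ and $S=4\pi r^2$, differentiating both with respect to time and using the chain rule gives:

```latex
\frac{dV}{dt} = 4\pi r^{2}\,\frac{dr}{dt} = 12\pi
\;\Longrightarrow\;
\frac{dr}{dt} = \frac{3}{r^{2}},
\qquad
\frac{dS}{dt} = 8\pi r\,\frac{dr}{dt}
  = 8\pi r\cdot\frac{3}{r^{2}}
  = \frac{24\pi}{r}\ \text{cm}^{2}/\text{min}.
```

So the surface area grows at $24\pi/r$ cm²/min, i.e. more slowly as the sphere inflates.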
Quantum magnetism with polar molecules
The so-called $t$-$J$ model describes strongly correlated fermions on a lattice, and in particular, the system’s most interesting low-energy spin and charge excitations. In the context of
high-temperature superconductivity, the model has been instrumental in the attempt to describe the evolution of the insulating state of the undoped parent into the superconducting state of the doped
material. Although trivial to write down on a piece of paper, the model contains complex physics and is intractable; many approximations and numerical schemes have been devised for its study.
Now, in two papers appearing in Physical Review Letters and Physical Review A, Alexey Gorshkov at the California Institute of Technology, Pasadena, and collaborators propose using suitable rotational
states of ultracold polar molecules in an optical lattice in order to simulate a highly tunable generalization of the $t$-$J$ model in the lab. The researchers’ proposal is based on currently
available experimental techniques. They also show that detailed control—both in sign and magnitude—of all the interaction parameters is possible. As a first step they have used a numerical approach
to construct the phase diagram of the simplest experimentally realizable case. Apart from stimulating interesting experimental studies, their proposal has the potential to facilitate the study of
complex condensed-matter phenomena in tightly controlled experimental settings. – Alex Klironomos
RE: st: Bivariate probit model
RE: st: Bivariate probit model
From Kristian Jakobsen <pisfin@hotmail.com>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Bivariate probit model
Date Sat, 5 Sep 2009 13:07:04 +0000
Hi again,
Thank you so much for the quick help, although I think I have been slightly misunderstood. I am not looking for the sole probability of being poor in period 2 (y2=1), but the probability of being poor in period 2 conditional on the poverty status in period 1. The easy solution would of course be to run two estimations, e.g.:
probit y2 hh if y1=poor
probit y2 hh if y1=non-poor
But I am thinking that there would be a correlation between those two, meaning that the poverty status in period 1 (y1) influences the probability of being poor in period 2 (y2), and this would be confirmed with rho different from 0 in a bivariate probit model. Therefore, I am trying to get a table with the following two columns:
Probability of being poor in period 2 (for poor hhs in period 1) | Probability of being poor in period 2 (for non-poor hhs in period 1)
But how to write that in Stata? As I mentioned previously, I have written:
biprobit y1 y2 hh
but that does not seem right?
Thank you so much for any help,
99 Clojure Problems (31-34)
I wanted to complete problems 27 and 28, but it just wasn’t working out. I’ll come back to them later, but for now, here’s numbers 31 through 34.
Since I’m working through Project Euler as a sort of algorithmic kata to stretch my legs in new languages, these prime based problems were a snap.
; A utility method that helps keep things readable.
(defn evenly-divisible [n d] (zero? (mod n d)))
; P31 (**) Determine whether a given integer number is prime.
(defn prime? [n]
  (loop [c 2]
    (if (> (* c c) n)
      true
      (if (evenly-divisible n c)
        false
        (recur (inc c))))))
; P32 (**) Determine the greatest common divisor of two positive integer numbers.
(comment "Use Euclid's algorithm.")
; my first attempt...
(defn gcd_a [n k]
  (loop [a (if (< n k) n k)
         b (if (< n k) k n)
         c 2
         o 1]
    (cond
      (< a c) o
      (and (evenly-divisible a c) (evenly-divisible b c)) (recur (/ a c) (/ b c) c (* o c))
      :else (recur a b (inc c) o))))
(comment "using euclid's algorithm, which I thought I knew, but I was apparently misremembering")
(defn gcd [m n]
  (if (zero? n)
    m
    (recur n (mod m n))))
; P33 (*) Determine whether two positive integer numbers are coprime.
(comment "Two numbers are coprime if their greatest common divisor equals 1.")
(defn coprime? [n k] (= 1 (gcd n k)))
; P34 (**) Calculate Euler's totient function phi(m).
(comment "Euler's so-called totient function phi(m) is defined as the number of positive integers r (1 <= r <= m) that are coprime to m.")
(defn totient [n]
  (count
    (filter
      (fn [e] (coprime? e n))
      (range 1 n))))
Not only was Project Euler helpful, the fact that these solutions so nicely build upon each other helped speed these through. I think I made the right choice skipping 27-28. The forward momentum has
rekindled my interest in learning more about clojure.
4 thoughts on “99 Clojure Problems (31-34)”
1. in your last one partial would be a handy function to know. it takes a function and some of its args and returns a new function that needs less args than the original.
(defn totient [n]
  (count
    (filter
      (partial coprime? n)
      (range 1 n))))
`(partial coprime? n)` returns a function that takes one argument, which is perfect for filter.
also, i had to look it up and was surprised that clojure has both (mod) and (rem) functions. i’ve always used rem.
2. shoot i thought you used markdown in your comments. feel free to edit that with some pre tags or what-have-you.
3. I didn’t know it worked, but apparently surrounding your code with
tags works just like my posts. (Use square brackets)
4. Thanks for posting these. If you are working through Project Euler as well, allow me to mention my solutions for the first 25 at http://clojure.roboloco.net. I’m also working through my first
steps of this kind of Clojure kata, and if you have better solutions I would love to be corrected.
Partition function related to number of microstates
I have a question about the partition function.
It is defined as ## Z = \sum_{i} e^{-\beta \epsilon_{i}} ## where ##\epsilon_i## denotes the energy of the small system in microstate ##i## (the amount of energy transferred from the large system to the small system). By using the formula for the Shannon entropy ##S = - k \sum_i P_i \log P_i## (with ##k## an arbitrary constant, or ##k_B## in this case), I end up with the following: $$ S = - k \sum_i P_i \log P_i = (k \sum_i P_i \beta \epsilon_i) + (k \sum_i P_i \log Z) = \frac{U}{T} + k \log Z $$
This simplifies to ##Z = e^{-\beta F}## by using the Helmholtz free energy defined as ##F = U - T S##. But Boltzmann's formula for entropy states ##S = k \log \Omega##, where ##\Omega## denotes the
number of possible microstate for a given macrostate. So we will get $$ \Omega = e^{S/k} = e^{\beta (U - F)} = Z e^{\beta U} $$
So the partition function is related to the number of microstates, but multiplied by a factor ##e^{\beta U}##. And this bring me to my question: why is it multiplied by that factor? Maybe the answer
is quite simple, but I can't seem to think of anything.
Boltzmann's formula ##S = k_B \ln \Omega## is applicable
to the case of a microcanonical ensemble - a system in which every microstate is equally likely. Note that setting ##P_i = 1/\Omega## in ##S = -k_B \sum_{i=1}^\Omega P_i \ln P_i## gives Boltzmann's
The partition function ##Z = \sum_i \exp(-\beta \epsilon_i)## corresponds to a canonical ensemble. The microstates in a canonical ensemble are not equally likely, so Boltzmann's formula ##S = k_B \ln \Omega## does not apply. (However, the more general formula, ##S = -k_B \sum_{i=1}^\Omega P_i \ln P_i##, does still apply).
You can thus not equate ##\Omega## to ##Ze^{\beta U}##, as the two formulas you used for entropy are not simultaneously true.
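A quick consistency check (my own note, not from the thread): the factor ##e^{\beta U}## is exactly what makes the two formulas agree in the microcanonical limit, where every accessible microstate has the same energy ##\epsilon_i = U##:

```latex
Z = \sum_{i=1}^{\Omega} e^{-\beta \epsilon_i}
  = \Omega\, e^{-\beta U}
\quad\Longrightarrow\quad
\Omega = Z\, e^{\beta U}.
```

In that limit the multiplication by ##e^{\beta U}## simply undoes the common Boltzmann weight shared by all ##\Omega## states.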
An airplane flying with a velocity...
by spacecadette
Tags: airplane, flying, velocity
P: 24 [b]1. The problem statement, all variables and given/known data
An airplane is flying with a velocity of 85.0 m/s at an angle of 19.0 degrees above the horizontal. When the plane is a distance 106 m directly above a dog
that is standing on level ground, a suitcase drops out of the luggage compartment.
[b]2. Relevant equations
How far from the dog will the suitcase land? You can ignore air resistance.
[b]3. The attempt at a solution
I attempted to use Y-Yinitial = Voy(t) - 1/2(g)(t^2)
I solved for t and i got t = 4.651 s.
That answer was incorrect.
I then tried using Vx = (85m/s)(cos19) = 80.37
Vy = (85m/s)(sin19) = 27.67
I then tried adding them using V = sqrt{(80.1^2) - (27.7^2)} = 75.16 m/s
I then did R = Vox(t) but the answer I got wasn't correct.
HW Helper Welcome to PF.
P: 5,346 The problem is that the vertical velocity continues to carry it upward before it goes down.
What if you reduced the problem to throwing a ball straight up at 27 m/s and you are standing on a 105m cliff? Find that time and multiply by the horizontal velocity.
P: 24 I'm still not sure how to solve this =(
HW Helper An airplane flying with a velocity...
P: 5,346
You have the right method.
You got the wrong answer from the quadratic.
P: 24 Ohh! I didn't realize my mistake. I used 0 for Voy instead of 85m/s which is the initial velocity.
I solved the quadratic using the positive value and I got 1.168s. I then multiplied that by 85m/s to find the range and I got 99.28m. Does that sound correct?
HW Helper
P: 5,346 No.
Your initial vertical velocity is your Voy
Besides your quadratic should yield t to hit the ground. It can't get to the ground in 1 sec from 100 m.
HW Helper
P: 5,346 Write out the equation for your quadratic. Let's see where you are going wrong.
P: 24 -85 (+ or -) sqrt{(-85^2) - 4(4.9)(-106) all divided by 2(4.9)
I think that when I added the -85 to the problem I ended up with 1.168
Now when I subtracted the -85 to the problem I got 18.51s. Which seems more realistic.
HW Helper
P: 5,346 But 85 is the velocity of the plane and it is not going straight up.
Your vertical component of velocity is 85*Sin19° = 27.67
That should yield 0 = 105 + 27.67*t -4.9*t^2
That quadratic yields 8.246 s
I use this on-line calculator btw:
P: 24 Ohhh I understand now. Thank you so much for your help!
Ohhh I understand now. Thank you so much for your help!
P: 24 How far from the dog will the suitcase land?
I'm still having issues finding the distance.
HW Helper
P: 5,346 That's the easy part now.
You have time to hit the ground.
You have the horizontal velocity projected along the ground ...
Speed * time = distance.
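Putting the whole thread together, a short script of my own (it uses the 106 m from the problem statement; the post above used 105 m, which gives 8.246 s instead):

```python
import math

v, angle_deg, h, g = 85.0, 19.0, 106.0, 9.8

vx = v * math.cos(math.radians(angle_deg))  # horizontal component, ~80.37 m/s
vy = v * math.sin(math.radians(angle_deg))  # vertical component, ~27.67 m/s upward

# Height above ground: y(t) = h + vy*t - (g/2)*t^2.
# Setting y(t) = 0 and taking the positive root of the quadratic:
t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g

# Horizontal distance from the dog when the suitcase lands.
x = vx * t

print(t, x)  # t ≈ 8.26 s, x ≈ 664 m
```

Note the suitcase keeps the plane's velocity at release, which is why the vertical component first carries it upward before it falls.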
Base Conversions
Number Base Conversions
(Last Mod: 27 November 2010 21:37:42 )
There are times when we need to convert the representation of a number in one base to a different base. It's important to understand that we are not changing the value of the number represented, only
the representation itself. There are many conversion methods available and some lend themselves to certain situations better than others. For instance, some are better suited to a manual approach
while others are better suited when implemented as a computer algorithm. Some are very streamlined if converting from base ten while others are very streamlined if converting to base ten. The more
methods you are familiar with, the more likely you will choose an efficient method in a particular situation. Likewise, the more understanding you have of why each method works, the more likely you
will be able to apply it without making the kinds of mistakes that are common when you are using a set of steps that you have merely memorized.
There are two obvious ways to accomplish base conversion that follow directly from the concept of what a positional number system is.
The first is to go through digit by digit and multiply it by the appropriate weight for that digit's position and sum up the result. If the arithmetic is performed in the target number base, then the
final result will be in that base. Here the arithmetic is performed in the base of the number system being converted to and hence this is the most common way to convert from an arbitrary base to
decimal because we are comfortable performing the math in base-10. It is not a common way to convert from decimal to another base for the simple reason that few people are comfortable performing
arithmetic in any base other than base-10.
635[7] = 6 x 7^2 + 3 x 7^1 + 5 x 7^0 = 6 x 49 + 3 x 7 + 5 x 1 = 320[10]
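This weighted-sum method can be sketched in a few lines of Python (the language and function name here are illustrative choices, not part of the original presentation):

```python
def to_decimal_weighted(digits, base):
    """Multiply each digit by its positional weight and sum the results.
    `digits` is a list of integer digit values, most significant first."""
    n = len(digits)
    return sum(d * base ** (n - 1 - i) for i, d in enumerate(digits))

print(to_decimal_weighted([6, 3, 5], 7))  # 6*49 + 3*7 + 5*1 = 320
```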
The second way to convert is to build up the number in the new base following exactly the steps introduced earlier - namely to figure out the weight of each position and figure out how many groupings
of that weight can be removed from the number and proceed digit by digit until the units digit is reached (or further if something other than an integer is being converted). Here the arithmetic is
performed in the base of the number system being converted from and hence this is the most common way to convert from decimal to an arbitrary base because, as above, we are comfortable performing the
math in base-10. Similarly, it is not a common way to convert from another base to decimal.
• 7^0 = 1
• 7^1 = 7
• 7^2 = 49
• 7^3 = 343
Since 49 <= 320 < 343, the number in base-7 has three digits.
Dividing 320 by 49 yields 6.53 and so there are six complete groupings of 49 in 320. The first digit is therefore '6'.
Subtracting off 6 x 49 = 294 leaves a remainder of 26 to be represented by the remaining two digits.
Dividing 26 by 7 yields 3.71 and so there are three complete groupings of 7 in 26. The second digit is therefore '3'.
Subtracting off 3 x 7 = 21 leaves a remainder of 5 which is the remaining digit.
So the result is 635[7].
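The grouping procedure just walked through can also be sketched in Python (again, the function name is an illustrative choice):

```python
def from_decimal_by_groupings(value, base):
    """Find the largest power of the base not exceeding the value, then
    peel off complete groupings, digit by digit, down to the units place."""
    if value == 0:
        return "0"
    power = 1
    while power * base <= value:            # weight of the most significant digit
        power *= base
    digits = []
    while power >= 1:
        digits.append(str(value // power))  # complete groupings at this weight
        value %= power                      # remainder for the remaining digits
        power //= base
    return "".join(digits)

print(from_decimal_by_groupings(320, 7))  # 635
```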
As indicated, the two methods described above are "brute force" methods and they always work - but they are not the only two methods. The fact that they are the most common is not a tribute to their
efficiency, but merely to the fact that they are almost always the only two methods that are taught. As the saying goes, when the only tool you have is a hammer, everything looks suspiciously like a
nail. So let's add a couple of tools to our toolkit - there are alternate methods that, with just a little bit of practice, enable much quicker conversions. These conversions, in the form presented,
are limited to converting integers. But converting fixed-point and floating-point can be done by applying a bit of finesse, as we shall see later.
The first of these is repeated multiplication by the number base being converted from with the arithmetic being performed in the number base being converted to - hence this method lends itself to
conversions to decimal from another base. The basis for this method becomes apparent by noting the following:
635[7] = 6 x 7^2 + 3 x 7^1 + 5 x 7^0 = 6 x 7 x 7 + 3 x 7 + 5 = 7( 7(6) + 3 ) + 5 = 320[10]
To better see the systematic approach embodied by this method, consider the algorithm for using it:
To convert a number from an arbitrary base to decimal:
1. SET: result = 0
2. SET: present_digit = left-most digit
3. WHILE: present_digit exists
1. SET: result = result * old_base
2. SET: result = result + present_digit
3. SET: present_digit = the next digit to the right (if any)
If you walk through the above algorithm you will notice an inefficiency that is embedded in it. The first time through the loop you start with "result" equal to zero and you first multiply this by
the number base - seven in our case - and then add the leftmost digit. Why not simply initialize "result" to the leftmost digit in the first place? You can and, when doing the conversion manually,
you almost certainly will. But if you were encoding this algorithm into a computer program your algorithm has to be explicit about every step and the shortcut we would naturally use manually requires
quite a bit more thought to explain it properly in a step-by-step algorithm. We could do it, and perhaps pick up some performance in the process, but it is generally better to keep your algorithms as
simple as possible even if it is at the expense of having some steps whose effect is wasted from time to time.
Solution: Let's assume that it has been awhile since we've done a conversion and all we remember is that we walk across the number digit by digit and that we multiply our partial result by the
number base each time we add one of the digits. But we don't remember if we go from right to left or from left to right. Also, we aren't sure when we start and stop multiplying by the number
base. If our knowledge of how to use this method was based strictly on memorizing a bunch of steps, we would be stuck. But since our knowledge is based instead on understanding why the method
works, we can quickly figure it out. For instance, we know that you can't multiply the result by the base after adding in the unit's digit because the unit's digit represents single quantities.
Therefore we have just determined that we go from left to right and that we multiply by the base after adding in each digit except for the last one.
Result: 6347[8] = 3303[10]
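The algorithm above (essentially Horner's scheme) translates almost line for line into Python; the function name is mine:

```python
def to_decimal_horner(digits, base):
    """Repeated multiplication: walk the digits left to right, multiplying
    the running result by the base before adding in each new digit."""
    result = 0
    for d in digits:
        result = result * base + d
    return result

print(to_decimal_horner([6, 3, 5], 7))     # 320
print(to_decimal_horner([6, 3, 4, 7], 8))  # 3303
```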
The second alternate method is basically the inverse of the method above and uses repeated division by the number base being converted to with the arithmetic performed in the number base being
converted from - hence this method lends itself to conversions from decimal to another base.
Using the example above once more, to convert 320[10] to base-7, we divide by the new number base, 7, and get
320 / 7 = 45 r 5 (45 with a remainder of 5), hence 5 is the last (i.e., rightmost) digit. Proceeding with just the whole part of the result, we have:
45 / 7 = 6 r 3, so 3 is the second digit from the right. Again proceeding with just the whole part of the result, we have:
6 / 7 = 0 r 6, and so 6 is the third digit from the right and since the whole part is now zero, we are finished.
A common way of writing this when converting by hand is:
• 7 | 320
• 7 | 45 r 5
• 7 | 6 r 3
• 7 | 0 r 6
The converted number is simply the remainders starting with the last one first, or 635[7] in this example.
As with the previous method, this one also lends itself to a compact algorithm:
To convert a number from decimal to an arbitrary base:
1. SET: quotient = the original value being converted
2. SET: result (a string of characters) to be blank (no digits)
3. WHILE: quotient > 0
1. SET: remainder = quotient % new_base (modulo division — computed before the quotient is updated)
2. SET: quotient = quotient / new_base (integer portion i.e., integer division)
3. TASK: Tack on "remainder" to front of "result" as the new MSD (Most Significant Digit or leftmost digit).
Use the above algorithm to make the conversion of 320 from base-10 to base-7.
Step 1: quotient = 320
Step 2: result = (blank)
Step 3: quotient > 0, so perform loop
Step 3.1: 320/7 = 45 r 5 so quotient = 45
Step 3.2: and remainder = 5
Step 3.3: result = 5
Step 3: quotient > 0, so perform loop
Step 3.1: 45/7 = 6 r 3 so quotient = 6
Step 3.2: and remainder = 3
Step 3.3: result = 35
Step 3: quotient > 0, so perform loop
Step 3.1: 6/7 = 0 r 6 so quotient = 0
Step 3.2: and remainder = 6
Step 3.3: result = 635
Step 3: Since (quotient = 0), we are finished.
Check the calculation of the exercise in the previous section by converting 3303 to base-8.
Solution: Again let's assume that it has been awhile since we've done this type of conversion and we want to make sure we do it right. Here the issue is whether our string of remainders are the
digits in the result from left-to-right or right-to-left. If we ask what happens if we change the number by one, then we know that the result should only change by one. Now we ask which of the
remainders is affected by a change of one and the answer is the first one. So the first remainder is the right most digit.
3303 / 8 = 412 r 7
412 / 8 = 51 r 4
51 / 8 = 6 r 3
6 / 8 = 0 r 6
Result: 3303[10] = 6347[8]
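The repeated-division method maps directly onto Python's `divmod`, which returns the quotient and remainder in one step (the function name is mine):

```python
def from_decimal_by_division(value, base):
    """Repeated division: each remainder is the next digit, taken right to
    left (the first remainder computed is the units digit)."""
    if value == 0:
        return "0"
    result = ""
    while value > 0:
        value, remainder = divmod(value, base)
        result = str(remainder) + result   # tack onto the front as the new MSD
    return result

print(from_decimal_by_division(320, 7))   # 635
print(from_decimal_by_division(3303, 8))  # 6347
```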
The method introduced above for converting decimal integers to another base using repeated division is, with practice, considerably quicker than using the brute force method and it is particularly
well suited to purely manual use (pen and paper with no calculator). But it has two drawbacks if you are trying to speed up the process by using a calculator to perform the arithmetic. First, the
division results are, on most calculators, presented only as a single number with a decimal fraction. So we must convert this decimal fraction to an integer numerator either mentally or by
multiplying just the fractional part by the new number base. But this generally requires having to lose the whole number portion of the division result which is needed to continue the conversion. The
result is that we have to store and retrieve intermediate results, either mentally, on paper, or in the calculator's memory, and the process is somewhat cumbersome, dependent on both the individual and the capabilities of the calculator in use.
But there is another way to perform this task that uses repeated multiplication, albeit in a different fashion than is used to convert a value from another number base to decimal. To set the stage
for how this method works, let's first look at the three-digit decimal number 123 and see how we could isolate its individual digits.
If we divide by the number base, ten in this case, enough times to get a result that is less than one, we would need to do so three times and end up with:
0.123 (after dividing by ten three times)
If we multiply by ten, we get
(0.123) x 10 = 1.23
The whole number part is the first digit. If we then remove the whole number part and multiply by ten again, we get:
(1.23 - 1) x 10 = 2.3
Now the whole number part is the second digit. If we repeat this process again, we get:
(2.3 - 2) x 10 = 3
Now the whole number part is the third and final digit.
If, instead of dividing by ten three times and then multiplying by ten three times we were to use a different base then the whole number parts that would have been revealed after each multiplication
would have been the successive digits of the number as represented in the base used.
An informal proof that this is true is quite straightforward. Consider that we have a number x and we want to recover the digits of that number in an arbitrary base, b. Based on our definition of a
positional numbering system, this string of digits (sticking with a three digit example) would be:
x = A x b^2 + B x b^1 + C x b^0
By dividing by b three times (two is actually sufficient, but three is more systematic) we have:
x / b^3 = A / b^1 + B / b^2 + C / b^3
If we now multiply by b, we get:
A + (B / b^1 + C / b^2)
Keeping in mind that A, B, and C are all integers strictly less than b, we see that the above value, represented in any number base, will have A as the whole number part and the term in
parentheses as the fractional part. If we now subtract A (after recording it elsewhere as the first digit of our base-b representation) and multiply by b again we are left with
B + (C / b^1)
Now B is the whole number portion and we repeat the same procedure as above, leaving us with:
C
From a practical standpoint, the benefit that this method has is that intermediate results don't need to be stored away and then recalled in order to determine the value of the current digit. The
current digit is simply the whole number portion of the intermediate result (displayed in base ten, or more correctly, in the base of the original representation). This quantity is readily
identifiable and can be recorded elsewhere as part of the new representation and then can be easily subtracted from the present result. Hence this method is extremely streamlined, especially for
calculator-assisted conversions. It does have a drawback for hand-conversions - you must divide by the new number base N times and then multiply by the number base N times, hence it has about twice
the number of multiplication/division operations as the repeated-division method above.
First, divide by eight enough times to yield a result that is less than one:
3303 / 8 = 412.875
412.875 / 8 = 51.609375
51.609375 / 8 = 6.451171875
6.451171875 / 8 = 0.806396484375
Since we had to divide by eight four times, we will need to multiply by eight four times and will end up with a four digit number in base-8. At each stage, we subtract off the whole number portion
before proceeding:
0.806396484375 * 8 = 6.451171875
0.451171875 * 8 = 3.609375
0.609375 * 8 = 4.875
0.875 * 8 = 7
Result: 3303[10] = 6347[8] (which agrees with our earlier example using repeated division)
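This divide-then-multiply technique can be sketched as follows (the function name is mine). One caution that applies to any calculator or floating-point implementation: the method is only reliable when the intermediate results are exact, as they are here because every division by 8 is exact in binary floating point; for other bases, rounding can corrupt the low digits.

```python
def from_decimal_divide_multiply(value, base):
    """Divide by the base until the result is less than one, then multiply
    back up: each whole-number part revealed is the next digit, left to right."""
    n = 0
    x = float(value)
    while x >= 1:
        x /= base
        n += 1
    digits = []
    for _ in range(n):
        x *= base
        d = int(x)            # the whole-number part is the current digit
        digits.append(str(d))
        x -= d                # record it, subtract it off, and continue
    return "".join(digits)

print(from_decimal_divide_multiply(3303, 8))  # 6347
```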
Conversions between hexadecimal and binary are extremely simple - which is the primary reason that hexadecimal (and less so octal) is so widely used in fields related to digital technology. Each
hexadecimal digit maps directly to four binary bits and each four binary bits map directly to one hex digit. This property can frequently be exploited to make conversions quicker and less prone to
mistakes. For instance, when converting from decimal to binary (or vice-versa) it is generally easier to convert to hex first.
This same property applies to base-8 (known as "octal") as well, except here each octal digit maps directly to three binary digits.
Example: Convert 6347[8] (the result from a prior example) to hexadecimal
Solution: Normally when converting between two non-decimal bases, we would convert to decimal as an intermediate step so that our arithmetic can be performed in decimal. But here we can use
binary as our intermediate step and avoid arithmetic altogether!
6347[8] = 110 011 100 111
Regrouping into groupings of four:
110 011 100 111 = 1100 1110 0111
Converting each group to hex:
1100 1110 0111 = 0xCE7
Final Result: 6347[8] = 0xCE7
Check: Converting this to decimal yields 3303 which agrees with the original example.
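The regrouping trick can be checked mechanically: map each octal digit to three bits, pad the bit string on the left to a multiple of four, and read off hex digits four bits at a time. A brief sketch (variable names are mine):

```python
# Convert octal -> binary -> hex purely by regrouping bits, no arithmetic.
octal = "6347"
bits = "".join(format(int(d, 8), "03b") for d in octal)   # 3 bits per octal digit
bits = "0" * ((-len(bits)) % 4) + bits                     # left-pad to a multiple of 4
hex_digits = "".join(format(int(bits[i:i+4], 2), "X")
                     for i in range(0, len(bits), 4))      # 4 bits per hex digit
print(bits, hex_digits)  # 110011100111 CE7
```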
Most people are reasonably fast and accurate when multiplying and/or dividing by single digit values. This means that they can convert between base-10 and base-8 more readily than they can between
base-10 and base-16. This ability can be exploited by converting base-10 to base-8 to base-2 to base-16 or the other way around. In many instances, it is quicker and more accurate to perform these
three number base conversions than to perform the single direct one involving a multi-digit number base (multi-digit in base ten).
Although the above methods were developed for converting integers, we can use them to convert non-integer values by simply scaling the number to be converted appropriately before converting it, then
scaling it back to its original value. We can even avoid this overhead if we use the method of repeated multiplication taking care to keep track of the radix point, which is the method we will
develop first. Regardless of what method we use, we must be aware that we need to convert the proper number of digits beyond the radix point to preserve the level of accuracy contained in the
original value.
For now, let's focus on the two repeated multiplication techniques. Converting from base-10 to another base is the simplest because we must do nothing extra beyond keeping track of when we reach the
radix point. Other than that we can continue converting digits as long as we desire. When converting to base-10, we must hold off on dividing by the other number base until we have the value
converted over to base-10 (since we want to perform the division in base-10 and not the original number base). So while we can continue converting digits beyond the radix point for as long as we
choose, each one involves an extra multiplication by the original number base that we must eventually divide back out.
We must divide the original value by the new number base twice to get a value less than one:
13.14 / 5 = 2.628
2.628 / 5 = 0.5256
The first two multiplications we perform will therefore produce the whole number portion of the final result:
0.5256 * 5 = 2.628 (result = "2")
0.628 * 5 = 3.14 (result = "23")
Any subsequent multiplications will reveal successive digits to the right of the radix point:
(result = "23.")
0.14 * 5 = 0.7 (result = "23.0")
0.7 * 5 = 3.5 (result = "23.03")
0.5 * 5 = 2.5 (result = "23.032")
0.5 * 5 = 2.5 (result = "23.0322")
We can continue this as far as we would like and, when we reach the point at which we wish to stop it is easy to see if the last digit should be rounded up or not. In this case, it should. So, to
four radix places, we have:
Result: 13.14[10] = 23.0323[5]
To convert back, we apply repeated multiplication to all of the digits, keeping track of the four extra multiplications caused by the radix places:
13 <-- Whole number portion (2 x 5 + 3)
13 x 5 + 0 = 65 (extra multiplication)
65 x 5 + 3 = 328 (extra multiplication)
328 x 5 + 2 = 1642 (extra multiplication)
1642 x 5 + 3 = 8213 (extra multiplication)
Since we multiplied by 5 an extra four times, we must now divide by 5 four times:
8213 / 5^4 = 13.1408
Result: 23.0323[5] = 13.1408[10]
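The forward direction (decimal to another base, whole part plus a fixed number of radix places) can be sketched as one routine; the function name is mine, and as before, floating-point rounding means the trailing digits are only trustworthy when the arithmetic stays exact or nearly so:

```python
def fraction_from_decimal(value, base, places):
    """Convert a non-negative decimal value to the given base, keeping
    `places` digits after the radix point (truncated, not rounded)."""
    whole = int(value)
    frac = value - whole
    digits = []
    while whole > 0:                      # whole part by repeated division
        whole, r = divmod(whole, base)
        digits.insert(0, str(r))
    result = ("".join(digits) or "0") + "."
    for _ in range(places):               # fractional part by repeated multiplication
        frac *= base
        d = int(frac)
        result += str(d)
        frac -= d
    return result

print(fraction_from_decimal(13.14, 5, 4))  # 23.0322 before rounding the last digit
```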
Note that, in either technique, we could have separated out the original whole number and fractional parts and converted them separately before combining them back into a final result. Doing so would
be particularly attractive if doing all the computations by hand.
Also note that we did not recover the exact value that we started with in the prior example. This should not be surprising since we had to round the result in the prior example and, since we rounded
up, we could have anticipated that our result in the second example would end up a little larger than our original starting value. In general, the issue of rounding and how many digits should be
converted is one that we must deal with, in some way, when we convert a non-integral value from one base to another.
Just as we could determine how many digits a certain integer requires when represented in a different number base, so too can we easily determine how many digits we need to the right of the radix point to achieve a desired accuracy. Consider our original value in the prior examples, namely 13.14. One way to look at this number is to assume that it is accurate to within one-half of its last digit, since only values greater than or equal to 13.135 and less than 13.145 would have been expressed as 13.14. If we convert this value to another number base and then convert it back, we
want to be assured that we at least get a value that falls somewhere within this range. We can do this as long as the least significant digit in the new base has a resolution smaller than this total
range of 0.01.
1 x b[2]^-n < 1 x b[1]^-m
b[1] is the base we are converting from
b[2] is the base we are converting to
m is the number of digits to the right of the radix point of the number in the old base
n is the number of digits to the right of the radix point of the number in the new base.
Solving for n yields:
n = m log(b[1])/log(b[2])
So to preserve the level of accuracy represented by the decimal value of 13.14, we need
n = 2 log(10)/log(5) = 2.86
We need to always round this value up to the next higher integer, so we need three digits beyond the radix point in our base-5 number.
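The formula and the round-up rule fit in one helper (the function name is mine):

```python
import math

def radix_places_needed(m, old_base, new_base):
    """Digits needed after the radix point in new_base to preserve the
    accuracy of m such digits in old_base: n = m*log(b1)/log(b2), rounded up."""
    return math.ceil(m * math.log(old_base) / math.log(new_base))

print(radix_places_needed(2, 10, 5))  # 2.86 rounds up to 3
```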
A slightly different approach that some people might find more intuitive and, at least under some circumstances, easier to use is to scale the number being converted both before and after the
conversion in such a way that the number that is actually converted is an integer. This permits us to use any of the techniques already developed for the conversion of integers without any
modification at all.
The mathematical basis for this method is as follows:
x = (b^n * x) / b^n
Which simply recognizes that if we both multiply and divide one value by another that we have not changed the value. If x is a real value with a fractional component that we want to preserve in our
conversion, then we only have to multiply by a sufficiently large value of b^n to ensure that the integer portion of it contains all of the significant information. For instance, if we want to convert
3.14 to some other base, then we can multiply it by 10^2 (since we want to preserve two base-10 radix places) and convert the integer 314 to the other base. We must then divide by one hundred and
this might be difficult since it must be done in the other base - a base that we are almost certainly not comfortable performing arithmetic in.
The key to making this conversion technique convenient is to notice that it doesn't matter what the exact value is that we multiply by (since we will be dividing by that same value later) as long as
it is large enough. In the above example of 3.14, this means that 10^2 is merely the smallest value we can use. In particular, notice that it doesn't matter which value of b we use - the base we are
converting from or the base we are converting to. But, whichever one we choose, we must multiply/divide by that number in both bases - once before the conversion and once after. Doing so in base-10
is straightforward regardless of which value of b we choose, but doing so in the other base is only convenient if the scaling factor is a power of that base. The reason for this is that multiplying and/or
dividing a value by an integer power of the number base in which it is represented is accomplished by the simple act of shifting the radix point the appropriate number of places to the left or right.
Hence, to make things simple, we will always use the non-base-10 value of b.
We want to preserve two decimal digits (to the right of the decimal point) so our scaling factor must be larger than 10^2. We can either use trial-and-error to find a value of n such that 6^n > 10^2 (and, most often, this can be done very quickly) or we can be explicit and solve for n:
n = 2 log(10)/log(6) = 2.57 (need n = 3)
The value we will convert will then be:
k = 6^3 * 3.14 = 678 (rounding to the nearest integer)
Converting this to base-6 by Repeated Multiplication:
678 / 6 = 113
113 / 6 = 18.83333
18.83333 / 6 = 3.1388889 (partial result: 3)
Taking the manual shortcut where we only divide until the whole number portion is easy to convert, we now work back:
0.1388889 * 6 = 0.83333333 (partial result: 30)
0.83333333 * 6 = 5 (partial result: 305)
0 * 6 = 0 (partial result: 3050)
Now all we must do is divide by 6^3 which is accomplished merely by moving the radix point three places to the left.
Result: 3.14[10] = 3.050[6]
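The whole scaling method (pick n, scale to an integer, convert, shift the radix point back) can be sketched end to end; the function name is mine:

```python
import math

def scale_convert(value, new_base, dec_places):
    """Scaling method: multiply by new_base**n, with n large enough that
    new_base**n >= 10**dec_places, convert the resulting integer, then
    shift the radix point n places to the left in the converted string."""
    n = math.ceil(dec_places * math.log(10) / math.log(new_base))
    k = round(value * new_base ** n)       # the integer actually converted
    digits = ""
    while k > 0:                           # repeated division, as before
        k, r = divmod(k, new_base)
        digits = str(r) + digits
    digits = digits.rjust(n + 1, "0")      # guarantee a whole-number digit
    return digits[:-n] + "." + digits[-n:]

print(scale_convert(3.14, 6, 2))  # 3.050
```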
In this example we don't need to spend any time determining what our scaling factor will be - we want to multiply by at least 6^2 in order to preserve the two radix places (since the third
happens to be zero) and so that is the value we will use.
The value we will convert will therefore be:
k = 6^2 * 3.05[6] = 305[6]
Converting this to base-10 by Repeated Multiplication:
((3 * 6) + 0)*6 + 5 = 113
Now all we must do is divide back out the factor of 6^2:
113 / 6^2 = 113 / 36 = 3.139
Result: 3.050[6] = 3.139[10]
We are looking for an integer M such that
π = M / 16^4 => M = 16^4 x π = 205,887
Converting this to hex using any method of our choosing we get
M = 0x3243F
Result: π = M / 16^4 = 0x 3.243F
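Python's built-in hex formatting makes this example a one-liner check (the variable names follow the example above):

```python
import math

M = round(16 ** 4 * math.pi)   # 65536 * pi = 205887.4... -> 205887
print(M, format(M, "X"))       # 205887 3243F
print(M / 16 ** 4)             # approximately 3.14154, i.e. 0x 3.243F
```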
A similar approach can be used for working with numbers represented in an exponential format by first scaling the number to convert the exponent and then scaling the mantissa as above.
Solution: charge on one electron = 1.602 x 10^-19 C, so it takes 6.242 x 10^18 electrons to get 1 C of charge.
Our final value, in hex, is going to be of the form
value = A.BCD x 16^EXP
Where A is a single non-zero hex digit in order for the value to be normalized. This means that:
1.000 x 16^EXP <= (value = A.BCD x 16^EXP) < 1.000 x 16^(EXP+1)
Using the right-hand inequality, we can find the value of the exponent:
value < 1.000 x 16^(EXP+1)
EXP + 1 > ln(value) / ln(16) = ln(6.242 x 10^18) / ln(16) = 15.6
EXP = 15 = 0xF
We now use this to determine the mantissa that needs to be converted:
value = mantissa x 16^EXP
mantissa = value / 16^EXP = 6.242 x 10^18 / 16^15 = 5.4141
If we want to preserve this value to at least 0.1%, and since our method of conversion is to multiply by the new number base enough times to make this an integer, then we need to multiply by B^n
such that:
1 < 5.4141 x 0.1% x B^n
Solving for n we get
n > ln[ 1 / (5.414 x 0.1%) ] / ln(16) = 1.9
n = 2
So the integer that we convert from decimal to hex is
M = 5.414 x 16^2 = 1386 = 0x56A
Dividing this by 16^2 and combining with the exponent yields a final result of:
0x 5.6A e F
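The normalize-then-convert steps of this example can be gathered into one sketch. The function name and the "0x A.BC e E" output format are my own notation for the example's result, not a standard:

```python
import math

def to_hex_exponential(value, mantissa_places=2):
    """Normalize value as mantissa * 16**exp with 1 <= mantissa < 16, then
    keep mantissa_places hex digits of the mantissa (rounded)."""
    exp = math.floor(math.log(value) / math.log(16))   # largest power not exceeding value
    mantissa = value / 16 ** exp
    m = round(mantissa * 16 ** mantissa_places)        # scale mantissa to an integer
    h = format(m, "X")
    return "0x {}.{} e {}".format(h[0], h[1:], format(exp, "X"))

print(to_hex_exponential(6.242e18))  # 0x 5.6A e F
```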
As with the integer representations, moving between hex and binary is straightforward even when working with floating point values, although you must take care to make the necessary adjustments if
you desire to keep the value normalized. We must recognize, however, that we can't just simply convert the exponent from hex to binary because what the relationship really means is that:
b.bbb x 2^e1 = h.hhh x 16^e = h.hhh x 2^(4 x e)
So the exponent in the binary representation is four times the exponent in the hex representation. But this is easily accomplished by converting the hex representation's exponent to binary and
shifting the result to the left two places.
Example: Convert the result from the previous example to a normalized binary exponential representation.
Solution: We can map from hex to binary directly, although the result may not be normalized.
0x 5.6A e F = 0101 . 0110 1010 e 11 1100 b
To normalize this, we must divide the mantissa by four (2^2) and add two to the exponent.
0x 5.6A e F = 1.0101 1010 10 e 11 1110 b
Converting from binary to hexadecimal is just as straightforward. First you multiply the mantissa by the power of two needed to make the exponent evenly divisible by four (reducing the exponent accordingly) - meaning that the exponent's two least significant bits are zero. Then translate the result directly to hexadecimal.
Example: Convert the binary value 1.1001 0010 0001 1111 1011 01 e 1 to decimal.
Solution: First, we'll convert it to hex by multiplying the mantissa by two:
11.0010 0100 0011 1111 0110 1 e 0
Now we'll convert directly to hex:
value = 0x 3.243F68 e 0
To convert this to binary, we'll first make it an integer by multiplying by 16^6:
M = 16^6 x value = 0x 324 3F68
Converting this to decimal by the method of repeated multiplication yields
M = 52,707,176
To get the final value, we then divide this by 16^6:
value = 3.1415925
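The last two steps of the example are easy to verify directly (Python's `int` with an explicit base does the hex-to-decimal conversion for us):

```python
M = int("3243F68", 16)   # the scaled mantissa as an integer
print(M)                 # 52707176
print(M / 16 ** 6)       # 3.1415925..., recovering the original value
```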
A lot of tools have been developed in this module, but like any set of tools, they provide significant flexibility and efficiency if a bit of effort is invested to gain competence with them. Even more
importantly, the techniques developed are based on a fundamental understanding of the properties of numbers and how we represent them. An understanding of these concepts is critical to many
engineering disciplines including computer hardware design, software engineering, embedded system design, encryption, information security, network design, and digital signal processing just to name
a few.
The author would like to acknowledge the following individual(s) for their contributions to this module:
• Loren Blaney from the 6502 Users Group for proofreading the entire module and pointing out countless typos, grammatical errors, and making numerous suggestions that led to a better module.
FOM: ReplyToAnand
Josef Mattes mattes at math.ucdavis.edu
Fri Oct 31 16:48:17 EST 1997
>I hate to be repetitive, but sometimes I find telling quotes that are much
>stronger than anything I am saying - from people you would least expect. I
>would like your reaction to the Morris Kline quote from Mathematics from
>Ancient to Modern Times, Chapter 51, pp. 1182:
>"By far the most profound activity of twentieth-century mathematics has
>been the research on the foundations."
There are other interesting passages by Morris Kline. E.g. in
"Mathematics, the loss of certainty", p.331, he quotes Goedel as saying in
"the role of the alleged "foundations" is rather comparable to the
function discharged, in physical theory, by explanatory hypotheses. . . .
The so-called logical or set-theoretical foundations for number theory or
of any other well established mathematical theory is explanatory, rather
than foundational, exactly as in physics where the actual function of
axioms is to explain the phenomena described by the theorems of this
system rather than to provide a genuine foundation for such theorems."
Do you agree with this statement?
Josef Mattes
More information about the FOM mailing list
Egyptian mathematics refers to the style and methods of mathematics performed in Ancient Egypt.
Egyptian multiplication and division employed the method of doubling and halving (respectively) a known number to approach the solution. The method of false position may not have been used for division and algebra problems. Scribes may have only used Old Kingdom binary numbers, and Middle Kingdom unit fractions, written within RMP 2/n table answers. Scribes like Ahmes solved complex mathematical problems, 84 of which are outlined in the Rhind Mathematical Papyrus (RMP), one of which included arithmetic progressions.
The traditional Old Kingdom scholars report that Egyptians confined themselves to applications of practical arithmetic with problems additively addressing how a number of loaves can be divided
equally between a number of men. Problems in the Moscow and Rhind Mathematical Papyri expressed instructional views. Three views cover abstract definitions of number, and higher forms of arithmetic.
Abstract definitions are found in the Akhmim Wooden Tablet, the Egyptian Mathematical Leather Roll and the Rhind Mathematical Papyrus. Abstract arithmetic was used to scale hekat, and other weights
and measures units. The hekat included Eye of Horus quotients and Egyptian fraction remainders, scaled to ro, 1/320 of a hekat, or other sub-units. Five hekat two-part statements are defined in the
Akhmim Wooden Tablet, and applied 30 times in the Rhind Mathematical Papyrus, and many additional times in other Middle Kingdom texts, such as the Ebers Papyrus, a medical text.
By 2700 BC, Egyptians introduced the earliest fully developed base 10 numeration system. Though it was not a positional system, it allowed the use of large numbers and also fractions in the form of unit fractions, Eye of Horus fractions, or binary fractions.
By 2700 BC, Egyptian construction techniques included precision surveying, marking north by the sun's location at noon. Clear records began to appear by 2000 BC citing approximations for π and square
roots. Exact statements of number, written arithmetic tables, algebra problems, and practical applications with weights and measures also began to appear around 2000 BC, with several problems solved
by abstract arithmetic methods.
For example, the Akhmim Wooden Tablet (AWT) lists five divisions of a unit of volume called a hekat, beginning with one hekat unity valued as 64/64. The hekat unity was divided by 3, 7, 10, 11 and
13, with all answers being exact. The first half of the answers cite a binary quotient, i.e. one hekat (64/64), divided by 3, found a quotient 21 with a remainder of 1. The scribe wrote 21 as (16 + 4
+ 1), such that a binary series was obtained by (16 + 4 + 1)/64 = 1/4 + 1/16 + 1/64. The second half of the answer scaled the remainder one (1) to 1/320th (ro) units, or 1/192 = (5/3)*(1/320) = (1 + 2/3) ro.
The scribe combined the quotient and remainder into one statement. The 1/3rd of a hekat answer was written as: 1/4 1/16 1/64 1 2/3 ro. Scribal addition and multiplication signs are not seen; note that the scribal series was written from right to left. The scribe proved all of his results by multiplying each answer by its initial divisor, finding the initial hekat unity value of (64/64) all five times. The AWT scribe wrote out this exact partitioning method in more detail, a method that was shortened by Ahmes and other Middle Kingdom scribes. Ahmes' steps did not include the proof aspect, for example. However, Ahmes' partitioning steps did follow the AWT's two-part structure, using it 29 times in Rhind Mathematical Papyrus #81.
Hana Vymazalova published in 2002 a fresh copy of the AWT that showed that all five AWT divisions had been exact, by first parsing the proof steps, returning all five division answers to 64/64.
Vymazalova thereby updated Daressy's 1906 incomplete discussion of the subject that had only found 1/3, 1/7 and 1/10 to be exact.
Beyond the fact that (64/64)/n = Q/64 - (5R/n)*ro, with Q = quotient and R = remainder, fairly states the 2,000 BCE scribal form of hekat division, two additional facts reveal early scribal thinking.
One fact reveals that whenever the divisor n was between 1/64 and 64 a limit of 64 had been reached. RMP 80 details this two-part limit. Second, to go beyond the divisor n = 64 limit, hin, ro and
other sub-units of the hekat were developed. Gillings summarizes the RMP data with 29 examples in an appendix, thereby contrasting the two-part statements with the equivalent one-part hin statements. The medical texts, with their 2,000 examples, also used the extended one-part formats: 10/n hin for 1/10th of a hekat, and 320/n ro for 1/320th of a hekat, for prescription ingredients.
Ahmes was able to go beyond the 64 divisor limit and its two-part remainder arithmetic in other ways, one being to increase the size of the numerator. The two-part hekat partitioning method was
described in problem 35 as 100 hekat divided by n = 70. Ahmes wrote 100*(64/64)/70 = (6400/64)/70 = 91/64 + 30/(70*64). The quotient was written as (64 + 16 + 8 + 2 + 1)/64 = (1 + 1/4 + 1/8 + 1/32 + 1/64). Ahmes then wrote the remainder part as (150/70)*(1/320) = (2 + 1/7) ro. Finally, the combined 1 1/4 1/8 1/32 1/64 2 1/7 ro answer was written down right to left, using no addition or multiplication signs, following the notation rules set down in the 350-year-older Akhmim Wooden Tablet.
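These two-part statements can be checked with modern exact rational arithmetic. In the sketch below, the function name and the use of Python's Fraction type are modern conveniences, not scribal notation; it reproduces the AWT division by 3, Ahmes' RMP 35 division of 100 hekat by 70, and the scribal proof step of multiplying back by the divisor.

```python
from fractions import Fraction

def hekat_divide(n, hekats=1):
    """Split hekats/n into a binary quotient Q/64 (of a hekat) and a
    remainder expressed in ro, where 1 hekat = 320 ro (so 1/64 hekat = 5 ro)."""
    total = 64 * hekats              # work in 1/64-hekat units
    q, r = divmod(total, n)          # Q/64 of a hekat plus R/(64n) of a hekat
    return Fraction(q, 64), Fraction(5 * r, n)   # (quotient, remainder in ro)

# AWT: (64/64)/3 = 1/4 + 1/16 + 1/64 hekat and (1 + 2/3) ro
q, r = hekat_divide(3)
assert q == Fraction(21, 64) and r == 1 + Fraction(2, 3)

# RMP 35: 100 hekat / 70 = 91/64 hekat and (2 + 1/7) ro
q, r = hekat_divide(70, hekats=100)
assert q == Fraction(91, 64) and r == 2 + Fraction(1, 7)

# the proof step: quotient plus remainder, multiplied back by 70, is 100 hekat
assert (q + r * Fraction(1, 320)) * 70 == 100
```

The same routine reproduces the remaining AWT divisors (7, 10, 11, 13) exactly, since the remainder scaling (5R/n) ro loses nothing.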
Our understanding of ancient Egyptian mathematics has been impeded by the reported paucity of available sources. The most famous such source is the Rhind Mathematical Papyrus, a text that can be read
by comparing many of its elements against other texts, i.e., the Egyptian Mathematical Leather Roll and the Akhmim Wooden Tablet. The Rhind papyrus dates from the Second Intermediate Period (circa
1650 BC), but its author, Ahmes, identifies it as a copy of a now lost Middle Kingdom papyrus. The Rhind papyrus contains a table of 101 Egyptian fraction expansions for numbers of the form 2/n, and
84 word problems, the answers to which were expressed in Egyptian fraction notation.
The RMP also includes formulas and methods for addition, subtraction, multiplication and division of sums of unit fractions. The RMP contains evidence of other mathematical knowledge, including
composite and prime numbers; arithmetic, geometric and harmonic means; and simplistic understandings of both the Sieve of Eratosthenes and perfect number theory. It also shows how to solve first
order linear equations as well as summing arithmetic and geometric series.
Henry Rhind's estate donated the Rhind papyrus to the British Museum in 1863. Also included in the donation was the Egyptian Mathematical Leather Roll, dating from the Middle Kingdom era. Like the
Rhind papyrus, the Egyptian Mathematical Leather Roll contains a table of Egyptian fraction expansions.
The Berlin papyrus, written around 1300 BC, shows that ancient Egyptians had solved two second-order, one unknown, equations that some have called Diophantine equations. The Berlin method for solving
$x^2 + y^2 = 100$ has not been confirmed in a second hieratic text, though it has been confirmed by a second Berlin Papyrus problem.
Sources other than the ones mentioned above include the Moscow Mathematical Papyrus, the Reisner Papyrus, and several other texts including medical prescriptions found in the Ebers Papyrus.
Two number systems were used in ancient Egypt. One, written in hieroglyphs, was a decimal-based tally system with separate symbols for 10, 100, 1000, etc., much as Roman numerals were later written, along with unit fractions. The second, written in a new ciphered one-number-to-one-symbol system, was a digital system not similar to the hieroglyphic system. The hieroglyphic number system existed from at least the Early Dynastic Period. The hieratic system differed from the hieroglyphic system beyond a use of simplifying ligatures for rapid writing, and began around 2150 BC. Hieratic numerals
used one symbol for each number, replacing the tallies that had been used to denote multiples of a unit. For example, three symbols had been used to write three, thirty, three hundred, and so on, in a system that was superseded by the hieratic method. Later hieroglyphic numeration was modified and adopted by the Romans for official uses, and Egyptian fractions in everyday situations.
The Rhind Mathematical Papyrus was written in hieratic. It contains examples of how the Egyptians did their mathematical calculations. Fractions were denoted by placing a line over the letter n
associated with the number being written, as 1/n. This method of writing numbers came to dominate the Ancient Near East, with Greeks 1,500 years later using two of their alphabets, Ionian and Doric,
to cipher all of their numerals, alpha = 1, beta = 2 and so forth. Concerning fractions, Greeks wrote 1/n as n', so Greek numeration and problem-solving adopted or modified Egyptian numeration,
arithmetic and other aspects of Egyptian math.
Example from the Rhind Papyrus (the original hieroglyphic sign table illustrating fraction notation did not survive text extraction).
Multiplication was done by repeated doubling of the number to be multiplied (the multiplicand), and choosing which of the doublings to add together (essentially a form of binary arithmetic), a method that links to the Old Kingdom. The multiplicand was written next to the figure 1; the multiplicand was then added to itself, and the result written next to the number 2. The process was continued until the doublings gave a number greater than half of the multiplier. Then the doubled numbers (1, 2, etc.) would be repeatedly subtracted from the multiplier to select which of the results of the existing calculations should be added together to create the answer.
As a short cut for larger numbers, the multiplicand can also be immediately multiplied by 10, 100, etc.
For example, Problem 69 on the Rhind Papyrus (RMP) provides the following illustration, as if Hieroglyphic symbols were used (rather than the RMP's actual hieratic script).
To multiply 80 × 14
Egyptian calculation (modern figures; the original table also showed the hieroglyphic numerals):

Result Multiplier
80 1
800 10 /
160 2
320 4 /
1120 14 (total)

The check mark (/) denotes the intermediate results that are added together to produce the final answer.
Hieratic and Middle Kingdom math followed this form of hieroglyphic multiplication.
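The doubling-and-selection procedure is, in effect, binary expansion of the multiplier. A minimal sketch (modern names; the x10 shortcut used in the scribal table for 80 x 14 is not reproduced, so the selected doublings differ from the scribe's 800 + 320, though the product agrees):

```python
def egyptian_multiply(multiplicand, multiplier):
    """Multiply by repeated doubling, then select the doublings whose
    multipliers (1, 2, 4, ...) sum to the desired multiplier."""
    table = []                           # (power-of-two multiplier, doubled result)
    k, value = 1, multiplicand
    while k <= multiplier:
        table.append((k, value))
        k, value = 2 * k, 2 * value
    total, remaining = 0, multiplier
    for k, value in reversed(table):     # greedy subtraction of the doublings
        if k <= remaining:
            remaining -= k
            total += value
    return total

assert egyptian_multiply(80, 14) == 1120     # RMP problem 69
assert egyptian_multiply(7, 13) == 91        # 13 = 8 + 4 + 1, so 56 + 28 + 7
```

For 80 x 14 the greedy selection picks the doublings for 8, 4 and 2 (640 + 320 + 160 = 1120), where the scribe reached the same total via the x10 shortcut.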
Subtraction, as defined in the Egyptian Mathematical Leather Roll (EMLR), an 1800 BC document, included four additive or identity methods, followed by one non-additive, abstract method that was used five to fifteen times for the 26 EMLR series listed, of the form:
1/pq = (1/A)* (A/pq)
with A = 3, 4, 5, 7, 25, citing A = (p + 1) 10 times.
1/8 was decomposed twice: first with A = (2 + 1) = 3, the A = (p + 1) case used 24 times in the RMP (here p = 2, q = 4), and again with A = 25, following
A = 3: 1/8 = (1/3)*(3/8) = 1/3*(1/4 + 1/8) = 1/12 + 1/24
A = 25: 1/8 = 1/25*(25/8) = 1/5*(25/40)= 1/5 *(24/40 + 1/40)
= 1/5*(3/5 + 1/40) = 1/5*(1/5 + 2/5 + 1/40)
= 1/5 *(1/5 + 1/3 + 1/15 + 1/40)
= 1/25 + 1/15 + 1/75 + 1/200
with the out-of-order 1/25 + 1/15 sequence marking the scribal method of partition.
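Both decompositions of 1/8 can be verified exactly; the helper below is a modern convenience, not anything in the EMLR itself.

```python
from fractions import Fraction as F

def unit_sum(*denominators):
    """Sum of the unit fractions 1/d for each listed denominator."""
    return sum(F(1, d) for d in denominators)

# A = 3, the A = (p + 1) case with p = 2, q = 4: 1/8 = 1/12 + 1/24
assert F(1, 3) * unit_sum(4, 8) == F(1, 8) == unit_sum(12, 24)

# A = 25: 1/8 = 1/25 + 1/15 + 1/75 + 1/200 (note the out-of-order 1/25 + 1/15)
assert F(1, 25) * F(25, 8) == F(1, 8) == unit_sum(25, 15, 75, 200)
```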
Confirmation of the EMLR (1/A)* (A/pq), with A = (p + 1) rule is found 24 times in the RMP 2/nth table, using the form
2/pq = (2/A)* (A/pq), with A = (p + 1)
For example, 2/27, with p = 3, q = 9:

2/27 = (2/(3 + 1)) * ((3 + 1)/27) = 1/2 * (1/9 + 1/27) = 1/18 + 1/54
Another subtraction method is seen in the RMP 2/nth table as first suggested by F. Hultsch in 1895, and confirmed by E.M. Bruins in 1944, or
2/p - 1/A = (2A - p)/Ap
2/p = 1/A + (2A -p)/Ap
where the divisors of A, from the first partition, were used to additively find (2A - p), thereby exactly solving (2A -p)/Ap.
For example,
2/19 - 1/12 = (24 - 19)/(12*19)
with the divisors of 12 = 6, 4, 3, 2, 1 being inspected to find (24 - 19) = 5 taken only from the divisors of 12. Optimally (3 + 2) was selected, by Ahmes and other scribes, over (4 + 1) such that,
2/19 = 1/12 + (3 + 2)/(12*19) = 1/12 + 1/76 + 1/114
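The divisor-subset step can be sketched as a small search. The search itself is hypothetical (the scribes chose subsets by inspection); the tie-breaking rule below, preferring the split with the smallest last denominator, is one way to reproduce the scribal preference for (3 + 2) over (4 + 1).

```python
from fractions import Fraction
from itertools import combinations

def hultsch_bruins(p, A):
    """Expand 2/p as 1/A + (2A - p)/(A*p), splitting (2A - p) into distinct
    divisors of A; among valid splits, keep the one with the smallest
    largest denominator. Returns the list of denominators."""
    divisors = [d for d in range(1, A + 1) if A % d == 0]
    target = 2 * A - p
    best = None
    for r in range(1, len(divisors) + 1):
        for subset in combinations(divisors, r):
            if sum(subset) == target:
                denoms = [A] + sorted(A * p // d for d in subset)
                if best is None or denoms[-1] < best[-1]:
                    best = denoms
    return best

expansion = hultsch_bruins(19, 12)
assert expansion == [12, 76, 114]            # 2/19 = 1/12 + 1/76 + 1/114
assert sum(Fraction(1, n) for n in expansion) == Fraction(2, 19)
```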
Rational numbers could also be expressed, but only as sums of unit fractions, i.e. sums of reciprocals of positive integers, supplemented by 2/3 and 3/4. The hieroglyph indicating a fraction looked like a mouth, which meant "part", and fractions were written with this fractional solidus, i.e. the numerator 1 above and the positive denominator below. Special symbols were used for 1/2 and for the two non-unit fractions 2/3 (used often) and 3/4 (used less often).
Problem 25 on the Rhind Papyrus may have used the method of false position to solve the problem "a quantity and its half added together become 16; what is the quantity?" (i.e., in modern algebraic
notation, what is x if x+½x=16).
Assume 2
1 2 /
½ 1 /
Total 1½ 3
As many times as 3 must be multiplied to give 16, so many times must 2 be multiplied to give the answer.
1 3 /
4 12 /
2/3 2
1/3 1 /
Total 5 1/3 16
1 5 1/3 (1 + 4 + 1/3)
2 10 2/3
The answer is 10 2/3.
Check -
1 10 2/3
½ 5 1/3
Total 1½ 16
A more likely and direct approach to solve this class of problem is given by: x + (1/2)x = 16, using these steps
1. (3/2)x = 16, 2. x = 32/3, 3. x = 10 2/3.
Problem 31 sets the problem "a quantity, its 1/3, its 1/2 and its 1/7, added together, become 33; what is the quantity?" In modern algebraic notation, "what is x if x + 1/3 x + 1/2 x + 1/7 x = 33?" The answer is 14 1/4 1/56 1/97 1/194 1/388 1/679 1/776, or 14 and 28/97. To obtain the answer as Ahmes wrote it, 28/97 had to be broken up into 2/97 and 26/97, and the two vulgar fraction conversions solved separately by the Hultsch-Bruins method (without using false position, as other algebra problems may have been solved).
The remainder arithmetic solution, the historical method that is most likely, for x + (1/3)x + (1/2)x + (1/7)x = 33 looks like this:
1. 97/42 x = 33, 2. x = 1386/97, and 3. x = 14 + 28/97.
with, 2/97 - 1/56 = (112 - 97)/(56*97) = (8 + 7)/(56*97) = 1/679 + 1/776,
and 26/97 - 1/4 = (104 - 97)/(4*97) = (4 + 2 + 1)/(4*97) = 1/97 + 1/194 + 1/388,
so that
2/97 = 1/56 + 1/679 + 1/776,
26/97 = 1/4 + 1/97 + 1/194 + 1/388
such that, writing out x = 14 + 28/97 in an ordered unit fraction series
4. x = 14 1/4 1/56 1/97 1/194 1/388 1/679 1/776, as written by Ahmes.
The ancient Egyptians knew that they could approximate the area of a circle as follows:
Area of Circle ≈ [(Diameter) x 8/9 ]^2.
Problem 50 of the Ahmes papyrus uses these methods to calculate the area of a circle, according to a rule that the area is equal to the square of 8/9 of the circle's diameter. This assumes that π is
4×(8/9)² (or 3.160493...), with an error of slightly over 0.63 percent. This value was slightly less accurate than the calculations of the Babylonians (25/8 = 3.125, within 0.53 percent), but was not
otherwise surpassed until Archimedes' approximation of 211875/67441 = 3.14163, which had an error of just over 1 in 10,000. Interestingly, Ahmes knew of the modern 22/7 as an approximation for pi,
and used it to split a hekat: hekat x 22/7 x 7/22 = hekat; however, Ahmes continued to use the traditional 256/81 value for pi for computing his hekat volume found in a cylinder.
Problem 48 involved using a square with side 9 units. This square was cut into a 3x3 grid. The diagonal of the corner squares were used to make an irregular octagon with an area of 63 units. This
gave a second value for π of 3.111...
The two problems together indicate a range of values for Pi between 3.11 and 3.16.
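Both implied values of pi are easy to reproduce; a quick check (floating point, so the comparisons are approximate):

```python
from math import pi

def egyptian_circle_area(diameter):
    """RMP 50 rule: area = (8/9 of the diameter) squared."""
    return (diameter * 8 / 9) ** 2

# diameter 9 gives area 8**2 = 64, the scribal answer
assert egyptian_circle_area(9) == 64.0

implied_pi = 4 * (8 / 9) ** 2       # 256/81 = 3.1604938...
octagon_pi = 4 * 63 / 81            # problem 48: octagon of area 63 in a 9x9 square
assert 3.11 < octagon_pi < 3.12 < 3.14 < implied_pi < 3.17
assert abs(implied_pi - pi) / pi < 0.0064    # well under one percent error
```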
Problem 14 in the Moscow Mathematical Papyrus gives the only ancient example finding the volume of a frustum of a pyramid, describing the correct formula:
$V = \frac{1}{3} h(x_1^2 + x_1 x_2 + x_2^2).$
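The formula can be sanity-checked against the papyrus data (height 6, base side 4, top side 2, volume 56) and against the difference of two similar pyramids; the function names here are modern:

```python
from fractions import Fraction as F

def frustum_volume(h, x1, x2):
    """Moscow papyrus problem 14: V = (h/3) * (x1^2 + x1*x2 + x2^2)."""
    return F(h, 3) * (x1 * x1 + x1 * x2 + x2 * x2)

# the papyrus data: height 6, base side 4, top side 2, volume 56
assert frustum_volume(6, 4, 2) == 56

# cross-check: a frustum is a full pyramid minus the similar pyramid cut
# off its top; similar triangles give the full height H = h*x1/(x1 - x2)
def pyramid_volume(h, base):
    return F(1, 3) * h * base * base

H = F(6 * 4, 4 - 2)                                   # = 12
assert pyramid_volume(H, 4) - pyramid_volume(H - 6, 2) == frustum_volume(6, 4, 2)
```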
Hellenistic mathematics in Egypt
Further information: Egyptian mathematicians
Islamic mathematics in Egypt
Further information: Egyptian mathematicians
Further reading
• Boyer, Carl B. 1968. History of Mathematics. John Wiley. Reprint Princeton U. Press (1985).
• Chace, Arnold Buffum. 1927–1929. The Rhind Mathematical Papyrus: Free Translation and Commentary with Selected Photographs, Translations, Transliterations and Literal Translations. 2 vols.
Classics in Mathematics Education 8. Oberlin: Mathematical Association of America. (Reprinted Reston: National Council of Teachers of Mathematics, 1979). ISBN 0-87353-133-7
• Clagett, Marshall. 1999. Ancient Egyptian Science: A Source Book. Volume 3: Ancient Egyptian Mathematics. Memoirs of the American Philosophical Society 232. Philadelphia: American Philosophical
Society. ISBN 0-87169-232-5
• Couchoud, Sylvia. 1993. Mathématiques égyptiennes: Recherches sur les connaissances mathématiques de l'Égypte pharaonique. Paris: Éditions Le Léopard d'Or
• Daressy, G. 1901. "Ostraca," Cairo, Musée des Antiquités Égyptiennes, Catalogue Général, Ostraca hiératiques, nos. 25001-25385.
• Gillings, Richard J. 1972. Mathematics in the Time of the Pharaohs. MIT Press. (Dover reprints available).
• Neugebauer, Otto. 1962. Exact Sciences in Antiquity Harper & Row. Dover Reprint (1969).
• Peet, Thomas Eric. 1923. The Rhind Mathematical Papyrus, British Museum 10057 and 10058. London: The University Press of Liverpool limited and Hodder & Stoughton limited
• Robins, R. Gay. 1995. "Mathematics, Astronomy, and Calendars in Pharaonic Egypt". In Civilizations of the Ancient Near East, edited by Jack M. Sasson, John R. Baines, Gary Beckman, and Karen S.
Rubinson. Vol. 3 of 4 vols. New York: Charles Schribner's Sons. (Reprinted Peabody: Hendrickson Publishers, 2000). 1799–1813
• Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4
• Sarton, George. 1927. Introduction to the History of Science, Vol 1. Willians & Williams.
• Strudwick, Nigel G., and Ronald J. Leprohon. 2005. Texts from the Pyramid Age. Brill Academic Publishers. ISBN 9004130489.
• Struve, Vasilij Vasil'evič, and Boris Aleksandrovič Turaev. 1930. Mathematischer Papyrus des Staatlichen Museums der Schönen Künste in Moskau. Quellen und Studien zur Geschichte der Mathematik;
Abteilung A: Quellen 1. Berlin: J. Springer
• Van der Waerden, B.L. 1961. ''Science Awakening". Oxford University Press.
• Vymazalova, Hana. 2002. Wooden Tablets from Cairo...., Archiv Orientalni, Vol 1, pages 27-42.
• Wirsching, Armin. 2006. Die Pyramiden von Giza - Mathematik in Stein gebaut. Books on Demand. ISBN 978-3-8334-5492-9.
An alternating projection that does not converge in norm.
(English) Zbl 1070.46013
Let $C_1$ and $C_2$ be two intersecting closed convex sets in a Hilbert space, and let $P_{C_1}$ and $P_{C_2}$ denote the corresponding projection operators. In 1933, von Neumann proved that the iterates produced by the sequence of alternating projections, defined by $y_{n+1} = P_{C_1}(P_{C_2}(y_n))$, converge in norm to $P_{C_1 \cap C_2}(y_0)$ whenever $C_1$ and $C_2$ are closed subspaces.
L. M. Bregman [Sov. Math., Dokl. 6, 688–692 (1965; Zbl 0142.16804)] showed that the iterates converge weakly to a point in $C_1 \cap C_2$ for any pair of closed convex sets. In the paper under review, the author shows that alternating projections do not always converge in norm, by constructing an explicit counterexample.
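The subspace case, which does converge in norm, is easy to reproduce numerically. The sketch below is not from the paper: it alternates projections between two lines through the origin in the plane, so $C_1 \cap C_2 = \{0\}$ and $P_{C_1 \cap C_2}(y_0) = 0$, and the iterates contract by $\cos^2\theta$ per cycle.

```python
from math import cos, sin, sqrt, pi

def project_onto_line(v, u):
    """Orthogonal projection of v onto the line spanned by the unit vector u."""
    dot = v[0] * u[0] + v[1] * u[1]
    return (dot * u[0], dot * u[1])

u1 = (1.0, 0.0)                      # C1 = the x-axis
theta = pi / 6
u2 = (cos(theta), sin(theta))        # C2 = a line at 30 degrees to C1

y = (3.0, 4.0)                       # y_0
for _ in range(200):                 # y_{n+1} = P_{C1}(P_{C2}(y_n))
    y = project_onto_line(project_onto_line(y, u2), u1)

# the iterates shrink by cos(theta)**2 per cycle toward the intersection {0}
assert sqrt(y[0] ** 2 + y[1] ** 2) < 1e-12
```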
46C05 Hilbert and pre-Hilbert spaces: geometry and topology
41A65 Abstract approximation theory
1980 Etale Cohomology
Princeton Mathematical Series 33, Princeton University Press, 323+xiii pages, ISBN 0-691-08238-3
An exposition of étale cohomology assuming only a knowledge of basic scheme theory.
In print. List price 125 USD.
An online bookstore Review
Sales as of June 30, 2012: 4006. Papers citing the book since 1997: 693.
Notes for a revised and expanded version:

In the 1970s, derived categories were still quite new, and known to only a few algebraic geometers, and so I avoided using them. In some places this worked out quite well, for example, contrary to statements in the literature they are not really needed for the Lefschetz trace formula with coefficients in Z/mZ, but in others it led to complications. Anyone who doubts the need for derived categories should try studying the Kunneth formula (VI, 8) without them. In the new version, I shall use them. I also regret treating Lefschetz pencils only in the case of fiber dimension 1. Apart from using derived categories and including Lefschetz pencils with arbitrary fiber dimension, I plan to keep the book much as before, but with the statements of the main theorems updated to take account of later work. Whether the new version will ever be completed, only time will tell.

1. Etale Morphisms (11.10.12)
2. Sheaf Theory (NA)
3. Cohomology (NA)
4. The Brauer Group (NA)
5. The Cohomology of Curves and Surfaces (NA)
6. The Fundamental Theorems (NA)
A. Limits (NA)
B. Spectral Sequences (NA)
C. (NA)
D. Derived Categories (26.08.13)
1982 Hodge Cycles, Motives, and Shimura Varieties (with Pierre Deligne, Arthur Ogus, Kuang-yen Shih)
Lecture Notes in Math. 900, Springer-Verlag, 1982, 414 pages, ISBN 3-540-11174-3 and 0-387-11174-3
Usually out of print. List price 99.00 USD (paperback)
Available online at
for 29.95 USD per section.
1986 Arithmetic Duality Theorems
Academic Press, 421+x pages, ISBN 0-12-498040-6. Out of print.
Proves the duality theorems in Galois, étale, and flat cohomology that have come to play an increasingly important role in number theory and arithmetic geometry,
2006 Second corrected TeXed edition (paperback).
Booksurge Publishing, 339+viii pages, ISBN 1-4196-4274-X
Available from bookstores worldwide. List price 24 USD.
An online bookstore
The posted version (click 2006) agrees with published version except for the copyright page (for more information, see
1990 Vol. I Vol. II Automorphic Forms, Shimura Varieties, and L-functions, (editor with L. Clozel)
Proceedings of a Conference held at the University of Michigan, Ann Arbor, July 6--16, 1988.
Posted with the permission of Elsevier.
How I scanned these
(since people keep asking).
Comments on Copyright and Fair Use Law.
2006 Elliptic Curves
Booksurge Publishing, 246 pages, ISBN 1-4196-5257-5 (ISBN is for the softcover version).
This book uses the beautiful theory of elliptic curves to introduce the reader to some of the deeper aspects of number theory.
Softcover version available from bookstores worldwide. List price 17 USD;
an online bookstore.
Library of Congress Number (LCCN): 2006909782 (full data in process).
Some corrections
Following is the blurb for Elliptic Curves that was on Amazon, and would still be, but for the incompetence of the people at BookSurge/CreateSpace/Amazon.
This book uses the beautiful theory of elliptic curves to introduce the reader to some of the deeper aspects of number theory. It assumes only a knowledge of the basic algebra, complex analysis, and
topology usually taught in advanced undergraduate or first-year graduate courses.
Indeed, the book is affordable (in fact, the most affordable of all references on the subject), but also a high quality work and a complete introduction to the rich theory of the arithmetic of
elliptic curves, with numerous examples and exercises for the reader, many interesting remarks and an updated bibliography.
Mathematical Reviews, Álvaro Lozano-Robledo
J. S. Milne's lecture notes on elliptic curves are already well-known. The book under review is a rewritten version of just these famous lecture notes from 1996, which appear here as a compact and inexpensive paperback that is now available worldwide.
Zentralblatt MATH, Werner Kleinert
Electrical Engineering Archive | July 13, 2009 | Chegg.com
Hello everyone, I'm from Bermuda and I'm doing an electrical engineering course, and having trouble with some homework questions. Here's one of them; any help would greatly be appreciated as I do not have a clue where to start. I think I've reached the brain freeze level lol :) thanks again
Dealing with a single phase transformer
A 1000 kVA single phase, double wound transformer has the following specifications:

Primary winding resistance 0.4Ω
Secondary winding resistance 0.005Ω
Primary voltage 6600V
Secondary voltage 400V
Iron loss 3kW

Assuming unity power factor, determine the full load currents in the primary and secondary windings.

At full load, determine the power losses in the primary and secondary resistances.

If the input power to the transformer at full load is 1000kW, determine the transformer's efficiency at full load.
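Not an official solution, but one way to set the numbers up (assuming "full load" means the rated 1000 kVA at unity power factor, and neglecting the magnetising current):

```python
S = 1_000_000             # rated apparent power, VA (1000 kVA)
V1, V2 = 6600.0, 400.0    # winding voltages, V
R1, R2 = 0.4, 0.005       # winding resistances, ohms
P_iron = 3_000.0          # iron loss, W
P_in = 1_000_000.0        # stated full-load input power, W (unity power factor)

I1 = S / V1               # full-load primary current, about 151.5 A
I2 = S / V2               # full-load secondary current, 2500 A
P_cu1 = I1 ** 2 * R1      # primary copper loss, about 9.18 kW
P_cu2 = I2 ** 2 * R2      # secondary copper loss, 31.25 kW
efficiency = (P_in - (P_cu1 + P_cu2 + P_iron)) / P_in
```

With these assumptions the full-load efficiency comes out near 95.7 percent; treat the numbers as a cross-check, not the course's official answer key.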
Travel Diary of Makoto Sakurai
In this week, I was staying near the Kashiwa campus of the University of Tokyo, which is a little far (> 1 hour) from my regular office at the Komaba campus. The purpose of my visit was to look for
the recent developments of my old work on topological string theory (
open / closed Gromov-Witten
) and topological field theory (generalization of
Donaldson type invariants of gauge theories
). Although my recent research is more on the mathematics side (because I am now a postdoc of mathematics), I hope there will be a good communication with my old friends.
Well, we were afraid of two things; one is that of new influenza (type A; H1N1) panic from pandemic. Thanks to the efforts of the organizers and staffs, the panic could be avoided, (surprisingly,
they prepared medical kits (such as disinfection liquid and surgical masks) as well as a thermography camera). The other is the possible miscommunication between proper mathematicians and sincere
physicists. The misunderstandings due to the culture of either physics or mathematics were also overcame by the active questions from the participants from far abroad.
OK, let me summarize some of the academic topics of the conference. The 1st day talks were delivered by Murayama (opening address), Szendroi (refinement of virtual Poincare polynomials by the virtual
localization (fixed point) theorem for
DT (Donaldson-Thomas)
sheaf invariants), Jim Bryan (orbifold & crepant resolution conjecture), Toda (strong rationality conjecture on DT with some examples), and Krefl (work with Walcher, orientifold by "O-plane" by
involution on the worldsheet).
The 2nd day was mostly 2-hour talks by Nakajima, Neitzke, and Verlinde (because we were awaiting the banquet in the evening). Nakajima gave a renewed viewpoint on old and new results on the $t$-structure of Beilinson-Bernstein-Deligne (the famous French paper on perverse sheaves) and on $Z$: the central charge of the heart $\mathcal{C}$ (Harder-Narasimhan).
Neitzke was on the 1) review on the Seiberg-Witten data and Kontsevich-Soibelman wall-crossing formula, 2) construction on hyperkahler manifolds (of the moduli space of above-mentioned supersymmetric
gauge theory; naive first approximation and quantum corrections by instantons)
, and 3) examples of moduli space of rank 2 ramified stable Higgs bundles over curve (generalization of Hitchin system). Verlinde was more on vertex algebras (of Borcherds) for $\mathcal{N} = 4$
dyons (electric & magnetic charges) by conformal field theory embedded to K3 x T^2. At the double point ($a = 0$ of a * y^2 + b * y + c = 0), they call a "wall" for Weyl reflection, and Weyl
On the 3rd day, everyone became tired. Soibelman spoke on joint work with Maxim Kontsevich (and works in progress), addressing the tools for the construction of motivic integration (motivic functions of Denef-Loeser, their stack version by Joyce, and ind-constructible families). He also mentioned some Hall algebras, which was reminiscent of the 5-hour talk by Kontsevich at IPMU.
Mikhalkin was on the tropical geometry over non-field. I am not sure whether its "manifold" is a variety or a scheme of locally ringed space. Cheng was a contributed talk on the Borcherds-Kac-Moody
algebra for orbifolds and moduli for a degenerate metric inner form, and the hyperbolic geometry of the Poincare sphere with an infinite set-sum of a generically (except for the Leech lattice) finite number of chambers. Some recent works on wall-crossing dyons were with Lotte Hollands. Nagao was on a big conjecture on the analogy for "open" non-commutative Donaldson-Thomas invariants by elaborating the
work of Szendroi.
The 4th day was Maulik, Denef, Ohkawa, and Yamazaki. Maulik was on the generalization of the work of Bryan-Leung to higher genus curves on polarized K3 surfaces. Denef was on the "local
Calabi-Yau" from moduli space of Seiberg-Witten (\mathcal{N} = 2) gauge theory and its prepotential. Ohkawa was a computation of difference of Betti number under the change of "theta-stability" by
flips (wall-crossing). Yamazaki was on the collaboration with Ooguri, on the path-algebra of quiver diagrams for (co-) amoebas.
The 5th day was by Fukaya, Hanany, Dimofte, and Konishi. Fukaya was not on Donaldson-Thomas, but on the open Gromov-Witten invariants / Floer homology of Lagrangian submanifold (A-brane with the
Maslov (topological) index condition) of symplectic manifold $X = (X, \omega)$. He considered the convergence condition of quantum cohomology utilizing the Novikov ring and Cho-Oh, and the famous
preprint by Fukaya-Oh-Ohta-Ono. The first hour was on results in the toric (Fano) cases, and the latter hour on the conjecture for Calabi-Yau cases. It was something like the bubbles of
Donaldson theory for the Uhlenbeck compactification of Chern-Simons theory. Hanany was on meson / baryon counting of chiral operators, by illuminating algebraic surfaces / orbifolds with at most 2
Kahler moduli, with the help of quiver diagrams, subset of crystals, and toric diagrams. Dimofte was on the refined / motivic wall-crossing, where it was not certain to consider the rigidness or
existence of motives (dilogarithm after Beilinson-Deligne). Yukiko Konishi was on the decreasing / increasing filtration structure of mixed Hodge structure for Kaehler variety with singularities
("local Calabi-Yau", namely the total space of canonical bundle of smooth nef surface, e.g. $\mathbb{P}^2$), and its application to the symplectic "open" manifolds for the Yukawa coupling.
Some of the participants will stay in Tokyo for a few more days, and I believe that all appreciated this occasion of meeting.
|
{"url":"http://makotosakurai.blogspot.com/2009_05_01_archive.html","timestamp":"2014-04-20T13:18:27Z","content_type":null,"content_length":"49596","record_id":"<urn:uuid:9db9aed6-c9fc-4145-b4f9-7b1a451c40e0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
finding the unpaid balance
Re: finding the unpaid balance
numberpayments = 109
interest = .18 / 12
balance = 5035.56
Plug in and you will get p = 27749.22727276801
Now you have the original mortgage.
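The post does not show the formula it plugged into, so the quoted figure cannot be re-derived here; the standard amortisation identities such a calculation usually rests on are sketched below with hypothetical figures (the names and the 100,000 principal are mine, not the forum's):

```python
def unpaid_balance(principal, monthly_rate, payment, k):
    """Balance after k payments: grow the principal at the monthly rate,
    then subtract the accumulated (future) value of the payments made."""
    growth = (1 + monthly_rate) ** k
    return principal * growth - payment * (growth - 1) / monthly_rate

def level_payment(principal, monthly_rate, n):
    """Level payment that amortises the principal to zero in n payments."""
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -n)

i = 0.18 / 12                  # 18% nominal annual rate, compounded monthly
p = 100_000.0                  # hypothetical principal, not the thread's
m = level_payment(p, i, 360)
assert abs(unpaid_balance(p, i, m, 0) - p) < 1e-9       # starts at the principal
assert abs(unpaid_balance(p, i, m, 360)) < 1e-6 * p     # fully paid off at month 360
```

Given any three of principal, rate, payment, and remaining balance, the same identity can be rearranged to solve for the fourth, which is presumably what the reply above did.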
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Hopedale, MA Math Tutor
Find a Hopedale, MA Math Tutor
...While a NASA employee on the Apollo Project, I made extensive use of algebra and calculus in the development of orbital rendezvous techniques. Later on I participated in the design of the
space shuttle, and the development of the first GPS operating software. Thus I have an informed perspective regarding both teaching and application of these disciplines.
7 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...I have a MS in Applied Management, and have worked in various marketing, sales and channels development roles for over ten years at IBM. I have also taught Business classes at Anna Maria
College, Nichols College, Beckers College, and Assumption College. I do contract marketing research,marketing writing, web search optimization, and various other channel development work for
various firms.
88 Subjects: including algebra 1, reading, physics, grammar
Hi! My name is Dan, and I love helping students to improve in Math and Science. I attended U.C.
27 Subjects: including logic, grammar, ACT Math, GED
...I feel inordinate joy at showing students the ins and outs of writing--helping them not only with the how-to's of writing but also giving them CONFIDENCE as writers themselves; I am also very
passionate about opening work of literary art to young minds and helping them see beyond the printed word...
19 Subjects: including prealgebra, reading, English, grammar
Hi, my name is Jim D. I have tutored five students so far, each in algebra or geometry. I always follow the curriculum set forth by their teachers in class.
3 Subjects: including prealgebra, algebra 1, geometry
p-groups as rational points of unipotent groups
Is it true that every finite p-group can be realized as the group of rational points over $\mathbb{F}_p$ of some connected unipotent algebraic group defined over $\mathbb{F}_p$? For abelian p-groups,
the answer is yes via Witt vectors, but is it true in general?
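For the abelian case cited in the question, the Witt-vector construction can be made explicit (a standard fact, included here for illustration): the group scheme $W_n$ of length-$n$ truncated Witt vectors is connected, unipotent, and defined over $\mathbb{F}_p$, with

```latex
W_n(\mathbb{F}_p) \cong \mathbb{Z}/p^n\mathbb{Z},
\qquad
\Bigl(\textstyle\prod_i W_{n_i}\Bigr)(\mathbb{F}_p) \cong \prod_i \mathbb{Z}/p^{n_i}\mathbb{Z},
```

so every finite abelian $p$-group, being a product of cyclic $p$-groups, arises this way.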
p-groups algebraic-groups
By using a filtration of your $p$-group with all successive quotients $\mathbb F_p$, you can reduce the problem to showing that $\operatorname{Ext}^1_{\text{alg}}(\operatorname{GL}_1, G) \to \operatorname{Ext}^1_{\mathbb Z}(\mathbb F_p, G(\mathbb F_p))$ is surjective for all connected, unipotent $\mathbb F_p$-groups $G$. A quick Google search turned up “Extensions of algebraic groups”
by Kumar and Neeb (#48 at math.unc.edu/Faculty/kumar), but the Abelian group by which you're extending there is the subobject, not the quotient. – L Spice Jul 3 '11 at 14:28
L. Spice, do you want to replace $GL_1$ with $\mathbb{G}_a$? – S. Carnahan♦ Jul 3 '11 at 14:42
@S. Carnahan, yes, thanks. – L Spice Jul 3 '11 at 14:51
Sorry, after a bit more thought, it occurs to me that, since $p$-groups have centres, it's OK to have the Abelian group as sub-object. Then Theorem 1.8(c) of the paper by Kumar and Neeb shows that
$\operatorname{Ext}^1_{\text{alg}}(G, \mathbb G_a) \cong H^2(\mathfrak g, \mathfrak{gl}_1)^{\mathfrak g}$; but I still can't see my way through to showing that the necessary map is surjective. – L Spice Jul 3 '11 at 15:33
|
{"url":"http://mathoverflow.net/questions/69397/p-groups-as-rational-points-of-unipotent-groups","timestamp":"2014-04-17T07:22:06Z","content_type":null,"content_length":"50405","record_id":"<urn:uuid:787756ca-dc20-4877-9030-ccbb21031aae>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition
Results 1 - 10 of 223
, 1992
"... Constraint-based reasoning is a paradigm for formulating knowledge as a set of constraints without specifying the method by which these constraints are to be satisfied. A variety of techniques
have been developed for finding partial or complete solutions for different kinds of constraint expression ..."
Cited by 948 (42 self)
Constraint-based reasoning is a paradigm for formulating knowledge as a set of constraints without specifying the method by which these constraints are to be satisfied. A variety of techniques have
been developed for finding partial or complete solutions for different kinds of constraint expressions. These have been successfully applied to diverse tasks such as design, diagnosis, truth
maintenance, scheduling, spatiotemporal reasoning, logic programming and user interface. Constraint networks are graphical representations used to guide strategies for solving constraint satisfaction
problems (CSPs).
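The CSP formulation sketched in this abstract can be illustrated with a minimal chronological-backtracking solver (not code from any cited paper; the variables, domains, and constraints below are invented for the example):

```python
def backtrack(domains, constraints, assignment=None):
    """Depth-first search for one assignment satisfying all binary constraints.

    domains: dict var -> list of candidate values
    constraints: dict (var1, var2) -> predicate(val1, val2)
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        # Check only constraints whose endpoints are both determined.
        ok = all(pred(assignment[a] if a != var else value,
                      assignment[b] if b != var else value)
                 for (a, b), pred in constraints.items()
                 if (a == var or b == var)
                 and (a in assignment or a == var)
                 and (b in assignment or b == var))
        if ok:
            assignment[var] = value
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None

# Tiny graph-colouring instance: a path of three regions, two colours.
doms = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
neq = lambda x, y: x != y
path = {("A", "B"): neq, ("B", "C"): neq}
print(backtrack(doms, path))  # {'A': 0, 'B': 1, 'C': 0}
```

With a triangle of `neq` constraints instead of a path, the same call exhausts every branch and returns `None`, which is the search behaviour the surveyed techniques try to speed up.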
, 1992
"... . A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to
solve these problems completely. We may seek to partially solve the problem, in particular by satisfying ..."
Cited by 427 (23 self)
. A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to
solve these problems completely. We may seek to partially solve the problem, in particular by satisfying a maximal number of constraints. Standard backtracking and local consistency techniques for
solving constraint satisfaction problems can be adapted to cope with, and take advantage of, the differences between partial and complete constraint satisfaction. Extensive experimentation on maximal
satisfaction problems illuminates the relative and absolute effectiveness of these methods. A general model of partial constraint satisfaction is proposed. 1 Introduction Constraint satisfaction
involves finding values for problem variables subject to constraints on acceptable combinations of values. Constraint satisfaction has wide application in artificial intelligence, in areas ranging
from temporal r...
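The "maximal number of constraints" relaxation described above can be sketched by brute force (exhaustive enumeration, so only for toy instances; all names below are invented):

```python
from itertools import product

def max_csp(domains, constraints):
    """Return (assignment, count) maximizing the number of satisfied constraints."""
    variables = list(domains)
    best, best_count = None, -1
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        count = sum(pred(assignment[a], assignment[b])
                    for (a, b), pred in constraints.items())
        if count > best_count:
            best, best_count = assignment, count
    return best, best_count

# Over-constrained instance: a 3-cycle of "not equal" constraints with two
# colours is unsatisfiable, but at most 2 of the 3 constraints can hold.
neq = lambda x, y: x != y
doms = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
tri = {("A", "B"): neq, ("B", "C"): neq, ("A", "C"): neq}
print(max_csp(doms, tri)[1])  # 2
```

Real partial-constraint-satisfaction algorithms replace this enumeration with branch-and-bound over the same objective.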
- AI MAGAZINE , 1992
"... A large variety of problems in Artificial Intelligence and other areas of computer science can be viewed as a special case of the constraint satisfaction problem. Some examples are machine
vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, planning genetic ..."
Cited by 372 (0 self)
A large variety of problems in Artificial Intelligence and other areas of computer science can be viewed as a special case of the constraint satisfaction problem. Some examples are machine vision,
belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, planning genetic experiments, and the satisfiability problem. A number of different approaches have been
developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a
combination of these two techniques. This paper presents a brief overview of many of these approaches in a tutorial fashion.
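One of the constraint-propagation techniques such overviews cover is arc consistency; here is a small AC-3-style sketch (illustrative only, not taken from the cited tutorial; the example constraint is invented):

```python
from collections import deque

def ac3(domains, constraints):
    """Prune domains in place to arc consistency.

    constraints maps (x, y) -> pred(vx, vy); both directions are derived.
    Returns False if some domain is wiped out (no solution possible).
    """
    arcs = {}
    for (x, y), pred in constraints.items():
        arcs[(x, y)] = pred
        arcs[(y, x)] = lambda vy, vx, p=pred: p(vx, vy)  # reversed direction
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        pred = arcs[(x, y)]
        # Remove values of x with no support in the domain of y.
        pruned = [vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])]
        if pruned:
            domains[x] = [v for v in domains[x] if v not in pruned]
            for (a, b) in arcs:        # revisit arcs pointing at x
                if b == x and a != y:
                    queue.append((a, b))
    return all(domains[v] for v in domains)

doms = {"X": [1, 2, 3], "Y": [1, 2, 3]}
ok = ac3(doms, {("X", "Y"): lambda x, y: x < y})
print(ok, doms)  # True {'X': [1, 2], 'Y': [2, 3]}
```

The `X < Y` constraint eliminates `3` from `X` and `1` from `Y` without any search, which is exactly the "simplify the original problem" role the paragraph above describes.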
- Computational Intelligence , 1993
"... problem (csp), namely, naive backtracking (BT), backjumping (BJ), conflict-directed backjumping ..."
"... Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for
propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination ..."
Cited by 278 (62 self)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for
propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for
combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These
include: belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the
induced width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear
space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms
are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and
deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.
, 1995
"... ... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard
random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable ..."
Cited by 161 (0 self)
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random
400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^(n/18.7)). In addition to justifying these
claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search
algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial
Intelligence or Operations Research.
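The split-and-propagate style of SAT search described here (variable splitting plus unit propagation) can be sketched as a toy DPLL procedure — nothing like POSIT's engineering, just the core idea; the formula at the end is invented:

```python
def dpll(clauses, assignment=None):
    """Clauses are frozensets of int literals (negative = negated variable)."""
    if assignment is None:
        assignment = {}
    clauses = list(clauses)
    # Unit propagation: repeatedly assign forced literals.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue              # clause satisfied, drop it
            reduced = c - {-lit}      # remove the falsified literal
            if not reduced:
                return None           # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment
    # Split on some remaining variable and recurse on both branches.
    var = abs(next(iter(clauses[0])))
    for lit in (var, -var):
        result = dpll(clauses + [frozenset([lit])], dict(assignment))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
f = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]
print(dpll(f))  # a satisfying (possibly partial) assignment
```

Local search and the other optimization approaches surveyed in the paper attack the same problem without this systematic tree structure.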
- Annals of Mathematics and Artificial Intelligence , 1994
"... In this paper we study the properties of the class of head-cycle-free extended disjunctive logic programs (HEDLPs), which includes, as a special case, all nondisjunctive extended logic programs.
We show that any propositional HEDLP can be mapped in polynomial time into a propositional theory such th ..."
Cited by 149 (2 self)
In this paper we study the properties of the class of head-cycle-free extended disjunctive logic programs (HEDLPs), which includes, as a special case, all nondisjunctive extended logic programs. We
show that any propositional HEDLP can be mapped in polynomial time into a propositional theory such that each model of the latter corresponds to an answer set, as defined by stable model semantics,
of the former. Using this mapping, we show that many queries over HEDLPs can be determined by solving propositional satisfiability problems. Our mapping has several important implications: It
establishes the NP-completeness of this class of disjunctive logic programs; it allows existing algorithms and tractable subsets for the satisfiability problem to be used in logic programming; it
facilitates evaluation of the expressive power of disjunctive logic programs; and it leads to the discovery of useful similarities between stable model semantics and Clark's predicate completion. 1
Introduction ...
- Artificial Intelligence , 1992
"... Representing and reasoning about incomplete and indefinite qualitative temporal information is an essential part of many artificial intelligence tasks. An interval-based framework and a
point-based framework have been proposed for representing such temporal information. In this paper, we address ..."
Cited by 137 (6 self)
Representing and reasoning about incomplete and indefinite qualitative temporal information is an essential part of many artificial intelligence tasks. An interval-based framework and a point-based
framework have been proposed for representing such temporal information. In this paper, we address two fundamental reasoning tasks that arise in applications of these frameworks: Given possibly
indefinite and incomplete knowledge of the relationships between some intervals or points, (i) find a scenario that is consistent with the information provided, and (ii) find the feasible relations
between all pairs of intervals or points. For the point-based framework and a restricted version of the intervalbased framework, we give computationally efficient procedures for finding a consistent
scenario and for finding the feasible relations. Our algorithms are marked improvements over the previously known algorithms. In particular, we develop an O(n 2 ) time algorithm for finding one co...
- DIMACS Series in Discrete Mathematics and Theoretical Computer Science , 1996
"... . The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided
design, computeraided manufacturing, machine vision, database, robotics, integrated circuit design, compute ..."
Cited by 127 (3 self)
. The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided
design, computeraided manufacturing, machine vision, database, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a
discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a
general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution,
local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical
applications of the sat...
- Artificial Intelligence , 1997
"... In recent years, many new backtracking algorithms for solving constraint satisfaction problems have been proposed. The algorithms are usually evaluated by empirical testing. This method,
however, has its limitations. Our paper adopts a different, purely theoretical approach, which is based on charact ..."
Cited by 115 (3 self)
In recent years, many new backtracking algorithms for solving constraint satisfaction problems have been proposed. The algorithms are usually evaluated by empirical testing. This method, however, has
its limitations. Our paper adopts a different, purely theoretical approach, which is based on characterizations of the sets of search tree nodes visited by the backtracking algorithms. A notion of
inconsistency between instantiations and variables is introduced, and is shown to be a useful tool for characterizing such well-known concepts as backtrack, backjump, and domain annihilation. The
characterizations enable us to: (a) prove the correctness of the algorithms, and (b) partially order the algorithms according to two standard performance measures: the number of nodes visited, and
the number of consistency checks performed. Among other results, we prove the correctness of Backjumping and Conflict-Directed Backjumping, and show that Forward Checking never visits more nodes than
Backjumping. Our approach leads us also to propose a modification to two hybrid backtracking algorithms, Backmarking with Backjumping (BMJ) and Backmarking with Conflict-Directed Backjumping (BM-CBJ),
so that they always perform fewer consistency checks than the original algorithms.
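Forward Checking, one of the algorithms compared above, can be sketched as backtracking that filters the domains of future variables after each assignment and retreats on a domain wipe-out (a toy illustration, not the paper's formal model; the instance is invented):

```python
def forward_check(domains, constraints, assignment=None):
    """domains: dict var -> list of values (copied at each level);
    constraints: dict (x, y) -> pred(vx, vy), filtered in both directions."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        # Prune the domains of unassigned neighbours against this value.
        pruned = {v: list(vals) for v, vals in domains.items()}
        pruned[var] = [value]
        wipeout = False
        for (x, y), pred in constraints.items():
            if x == var and y not in assignment:
                pruned[y] = [vy for vy in pruned[y] if pred(value, vy)]
                wipeout = wipeout or not pruned[y]
            elif y == var and x not in assignment:
                pruned[x] = [vx for vx in pruned[x] if pred(vx, value)]
                wipeout = wipeout or not pruned[x]
        if not wipeout:
            assignment[var] = value
            result = forward_check(pruned, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

neq = lambda x, y: x != y
doms = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
tri = {("A", "B"): neq, ("B", "C"): neq, ("A", "C"): neq}
print(forward_check(doms, tri))  # None: a 3-cycle is not 2-colourable
```

Because assigned variables have already pruned the current variable's domain, no explicit back-check against past assignments is needed — the detail that lets Forward Checking visit no more nodes than Backjumping, as the abstract states.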
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=60408","timestamp":"2014-04-24T20:17:22Z","content_type":null,"content_length":"38784","record_id":"<urn:uuid:c5015dd6-fb72-4e74-ad18-29445392cbe6>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ars Mathematica
All analysts are uptight nerds and all algebraists are dirty hippies. Discuss.
14 thoughts on “Open Thread”
1. Curiously enough, I would characterize my Algebra prof as an uptight nerd and my Real Analysis prof as a dirty hippy (well, actually he’s cleaned up a bit, but he has photos on his webpage in
which he is quite clearly a dirty hippy and he still wears the tie-dye shirts from time to time).
2. Yeah, seems fair enough to me. I’m an analyst, and calling me an uptight nerd would not be totally inaccurate.
What are set theorists?
3. So. What are people working on at the moment?
I’m currently dabbling in an eclectic mix of infinitary combinatorics and operator algebras. Not areas with much intersection at the moment (barring the occasional exception like Akemann and
Weaver’s counterexample to Naimark’s problem), but I’m working on some things which might lead to a bit more. Probably not, but even if they don’t I’ll have learned a whole bunch of interesting
4. My undergrad real analysis professor (whose own research was then on multi-dimensional real spaces, R^{\alpha} where \alpha is any positive real number, not necessarily an integer) used sometimes
to begin lectures with 5 minutes of silent meditation, in order for us to get in the right mood for theorem-proving. So, I’m not sure about this generalization. However, a famous category
theorist once told me that he became a category theorist (in the 1970s) because most of the analysts he met at the time were very very macho, and only the category theorists left their egos at
home. I know at least two other successful PhDs in analysis who switched careers afterwards (out of Pure Math entirely) because they could not stand the macho culture.
5. Apropos of nothing:
When I was a student I took a course in optimisation. In order to make it more entertaining the lecturer regularly took a short break at the lecture midpoint in order to tell us about how to deal
with bears. Keep food outside your tent ‘n’ all that. His educational method was successful. 20 years later I can still remember much of his advice, and now that I no longer live in the UK
there’s a chance that some of his advice might even be useful. Unfortunately I don’t remember anything about the simplex method.
6. Uptight twits is what they are…convergence this and convergence that.
We Algebraists know better than to rush all about some infinite dimensional Hilbert space. We know how to stop and smell the patchouli.
For David:
I am working on Clifford Algebras over Markov Chains, and I am ashamed…deeply ashamed. Probability, and PDE’s; two things I swore I would never touch. Well, one out of two ain’t bad. Still, not
too far from computing cohomology of complex Grassmannians….at least that is what I tell myself…
7. Hmm. Ok, I know what a Clifford algebra is, and I know what a Markov chain is, but I confess I haven’t a clue what a Clifford algebra over a Markov chain might be. What are they? (And where do
they crop up?)
8. Take your Markov chain with N states, and impose your favorite distance squared metric on the states of the chain. Use this to embed in everyone’s favorite manifold – R^N. Use your defined inner
product and duality to impose an exterior product. Now hopefully you are only some short steps away from defining your favorite algebraic invariants on the chain (perhaps by the route of
embedding in the appropriate Grassmannian for example).
This would be useful for collaborative filtering or automatic classification for a start. Probably more applicable as a theoretical tool.
9. I started reading this on Clifford algebras and Markov chains: http://www.siue.edu/~sstaple/index_files/clfgrph1114.pdf
One thing that strikes me is that it uses the Berezin integral. This is well known to physicists as it’s used to compute things about fermions in Quantum Field Theory. In fact, it looks a bit
like it gives a nice interpretation of those integrals in fermionic QFT in terms of self-avoiding walks. You can already interpret QFT in terms of random walks with an alternative probability
theory – ie. complex probability amplitudes. Seems like there is a connection.
Are you able to say anything more about this michael?
10. Thanks for directing me to that! The paper was new to me.
vis a vis “interpretation of those integrals in fermionic QFT in terms of self-avoiding walks.”, either you are reading more into the paper than is there, or I am reading less. The gist of the
paper is augmenting regular matrix multiplication techniques for the adjacency matrix for a graph, or the transition matrix for a Markov chain (that one would use to compute nifty things like
the probability that starting in state i the chain is in state j after k steps [for Markov chains] or the number of Hamiltonian cycles in a graph [for an adjacency matrix]) by insisting on a Clifford
product on the elements instead of just the normal field multiplication. Alternatively, one could think of it as a “Clifford action” analogous to a group action piggybacking on the matrix
multiplication.
It is actually a very elegant idea that gives a very slick derivation of some of these enumerables. A nice club to have in the bag so to speak.
I am not a physicist, nor do I play one on tv, so I have no exposure to the fermionic QFT stuff to enable me to make an informed judgment, but I would be willing to bet that your intuition is
correct. In fact, it seems so plausible that it is hard to believe that no one has done it. There is a paper in that idea for someone – extant or otherwise.
11. I may be reading more into the paper than is there – but that’s probably a good thing!
Quick summary of QFT: you typically spend your day having to integrate ‘amplitudes’ (which are a complex valued analogue of probabilities) over the space of paths from A to B. This is an infinite
dimensional space but the catch is that there isn’t a suitable measure defined on it. So you typically approach it through dicing up the paths into ‘walks’ with a finite number of steps and the
space of paths with n steps is finite dimensional. You then look at the limit as the number of steps goes to infinity. Turns out that in practice the integrand is often of the form exp(-x.Ax) (or
approximately so) for some matrix A and so the answer we want is proportional to 1/(sqrt det A). The theory is very similar to the theory of Brownian motion and for very simple QFTs it is
basically identical to Brownian motion apart from a factor of i(=sqrt(-1)) that appears.
That’s for bosonic particles. When we come to fermions, physicists do something very weird and which can seem very unmotivated. Suddenly they decide that all of the variables in the integration
are anticommuting. The integral is replaced with the Berezin integral and that 1/(sqrt det A) becomes det A modulo some factors. Miraculously the theory seems to work. So my hunch is that we can
interpret this as something like the integral over all self-avoiding walks, or something related. This is also motivated by the fact that identical fermions are particles that don’t like to sit
in the same state as each other so in a sense it doesn’t seem too weird that their paths might be self-avoiding.
Hmmm…the analogy is too clear. I’d almost put money on the Markov chain Clifford algebra stuff having been derived from statistical field theory in the first place.
12. Considering that the very first reference of the paper is “Fermionic stochastic calculus in Dirac-Fock space”, I am inclined to agree with you.
It may be though that there is a circle of ideas going on here that has not yet been put in a coherent unified state. It would be interesting to work on, but since it is outside my immediate
knowledge, I would have to learn too much stuff to do anything quickly enough. That is what Ars Mathematica is for. It is like a math research RSS feed!
|
{"url":"http://www.arsmathematica.net/2006/01/28/open-thread/","timestamp":"2014-04-19T14:28:36Z","content_type":null,"content_length":"26384","record_id":"<urn:uuid:45e6579f-cf5d-4581-863d-9210198a11b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|