1. Current research interests in Applications of Mathematics: Mathematical finance, with a particular focus on strategic analysis of options and accounting theory; Corporate Disclosure; Bargaining Theory.
Click below for pdf files of some of my recent papers, concerning the fundamental value of a firm, and on Bargaining Theory:
1 An Alternative to the Feltham-Ohlson Valuation Framework: Using q-Theoretic Income to Predict Firm Value (with M Gietzmann) Accompanying diagrams
Journal version: Accounting & Business Research, 2004 (34 No. 4) 349-377
2 Statics and asymptotics of a price control limit: an optimal timing inventory problem (with R O Davies)
3 ‘Equity smirks’ and embedded options: the shape of a firm’s value function
Journal version in: Accounting & Business Research, 2004 (34 No. 4) 301-331
4 Endogenous Irreversibility With Finite Horizon Investment When Resale Is Possible (with M Gietzmann)
5 Dividend Policy Irrelevance: Ohlson's Uniqueness Principle in several variables
6 Returns to costly pre-bargaining claims : Taking a principled stand
Journal version in: Journal of Economic Studies, Volume 33 Issue 2 (2005).
7 Value Creation with Dye's Disclosure Option: Optimal Risk-Shielding with an Upper Tailed Disclosure Strategy (with M.B. Gietzmann)
Journal version in: Review of Quantitative Finance and Accounting. See Online First™
8 Using Voluntary Disclosure Intensity to Infer the Precision of Management's Vision (with M.B. Gietzmann)
My Seminar Slides
9 Dividend Policy Irrelevancy and the Construct of Earnings
(with J. A. Ohlson and Zhan Gao)
Journal of Business Finance & Accounting, Online First.
10 Multi-firm voluntary disclosures for correlated operations (with M.B. Gietzmann)
Annals of Finance, Online First.
11 Why managers with low forecast precision select high disclosure intensity: an equilibrium analysis (with M.B. Gietzmann)
Review of Quantitative Finance and Accounting, Online First.
2. Current research interests in Pure Mathematics: Automatic Continuity and the foundations of Regular Variation. General Topology (dimension theory).
Click below for pdf files of recent papers on Regular Variation with Nick Bingham. The most recent seminar overview of these (dated 1st Nov. 2007) is provided here by Nick; it covers papers 1-11 and does not reflect later revisions to the papers.
Seminar Slides by N.H. Bingham
My Seminar Slides
1 Generic subadditive functions (with N H Bingham)
Proc. Amer. Math. Soc. 136 (2008), 4257-4266.
2 Infinite combinatorics and foundations of regular variation (with N H Bingham)
Journal of Mathematical Analysis and Applications, 360 (2009), 518-529.
3 Very Slowly Varying Functions -- II (with N H Bingham)
Colloquium Mathematicum 116 (2009), 105-117.
4 Beyond Lebesgue and Baire: generic regular variation (with N H Bingham)
Colloquium Mathematicum, 116 (2009), 119-138.
5 New automatic properties: subadditivity, convexity, uniformity (with N H Bingham)
Aequationes Mathematicae, 78 (2009) 257-270.
6 Infinite combinatorics in function spaces (with N H Bingham)
Publ. Inst. Math. Beograd, 86 (100) (2009), 55-73.
7 The index theorem of topological regular variation and its applications (with N H Bingham)
Journal of Mathematical Analysis and Applications, 358 (2009), 238-248.
8 Regular variation without limits (with N H Bingham)
Journal of Mathematical Analysis and Applications, 370 (2010), 322-338.
9 Regular variation, topological dynamics, and the Uniform Boundedness Theorem
Topology Proceedings, 36 (2010), 305-336.
10 Automatic continuity by analytic thinning (with N H Bingham)
Proc. Amer. Math. Soc. 138 (2010), 907-919.
11 Topological regular variation: I slow-variation (with N H Bingham)
Topology and its Applications, 157 (2010), 1999-2013.
12 Topological regular variation: II the fundamental theorems (with N H Bingham)
Topology and its Applications, 157 (2010), 2014-2023.
13 Topological regular variation: III regular variation (with N H Bingham)
Topology and its Applications, 157 (2010), 2024-2037.
14 Kingman, category and combinatorics (with N H Bingham)
J.F.C. Kingman Festschrift (ed. N.H. Bingham and C.M. Goldie), LMS Lecture Notes Series 378, 2010.
15 Normed versus topological groups: dichotomy and duality (with N H Bingham)
Dissertationes Math. 472 (2010), 138pp.
16 Beyond Lebesgue and Baire II: Bitopology and measure-category duality (with N H Bingham)
Colloquium Math. 121 (2010), 225-238.
17 Dichotomy and infinite combinatorics: the theorems of Steinhaus and Ostrowski (with N H Bingham)
Math. Proc. Camb. Phil. Soc., 150 (2011), 1-22.
18 Homotopy and the Kestelman-Borwein-Ditor Theorem (with N H Bingham)
Canadian Math. Bull. 54.1 (2011), 12-20.
19 Analytically heavy topologies: Analytic Cantor and Analytic Baire theorems
Topology and its Applications, 158 (2011), 253-275.
20 Group action and shift-compactness (with Harry I. Miller)
Journal of Mathematical Analysis and Applications, 392 (2012), 23-39.
21 Analytic Baire spaces
Fundamenta Mathematicae, 217 (2012), 189-210.
22 Almost completeness and the Effros Theorem in normed groups
Topology Proceedings, 41 (2013), 99-110.
23 Shift-compactness in almost analytic submetrizable Baire groups and spaces (invited survey article)
Topology Proceedings, 41 (2013), 123-151.
24 Beyond Lebesgue and Baire III: Steinhaus' Theorem and its descendants
Topology and its Applications, 160 (2013), 1144-1154.
25 The Semi-Polish Theorem: One-sided vs joint continuity in groups
Topology and its Applications, 160 (2013), 1155-1163.
26 The Steinhaus theorem and regular variation : De Bruijn and after (with N H Bingham)
Indagationes Math., Online First.
27 Uniformity and self-neglecting functions (with N H Bingham)
28 Uniformity and self-neglecting functions: II. Beurling regular variation and the class Gamma (with N H Bingham)
29 Beurling regular equivariation, Bloom dichotomy and the Gołąb-Schinzel functional equation
For further information including past research interests, please see my entry in LSE Experts
odd question
Karl Pech KarlPech at users.sf.net
Wed Jul 14 21:01:01 CEST 2004
I'm currently working on the following exercise:
You are given the following function:
def f2(i, j, k):
return ((i | j) & k) | (i & j)
Find a useful utilization for this function.
So far I haven't been able to figure out what exactly a
"useful utilization" would be. Can anybody help me?
[P.S. I have even written a small program, which should
show me what this formula does with numbers, but I couldn't
find anything "interesting" or regular in the output file.
This is the source of the program:
def testit(i, j, k):
    return ((i | j) & k) | (i & j)

seen = []
results = [[] for _ in range(10)]  # testit() never exceeds 9 for single-digit inputs

fout = open("out.txt", "w")
for x in range(10):
    for y in range(10):
        for z in range(10):
            a = [x, y, z]
            if a not in seen:  # the original tested `if a in q`, which never fired, so nothing was recorded
                seen.append(a)
                results[testit(x, y, z)].append(a)

for value in range(len(results)):
    for triple in results[value]:
        fout.write(str(triple).strip("[]") + " : " + str(value) + "\n")
fout.close()
I guess I just have to construct some kind of situation where this
function could be useful, but I don't have any ideas. :(]
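One observation, offered tentatively since the thread itself records no answer: bit by bit, f2 computes the majority of its three arguments; each output bit is 1 exactly when at least two of i, j, and k have that bit set. That is the carry-out function of a full adder, which is one plausible "useful utilization":

```python
def f2(i, j, k):
    return ((i | j) & k) | (i & j)

def majority(i, j, k):
    # A bit is set in the result iff at least two of the inputs have it set.
    return (i & j) | (i & k) | (j & k)

# The two expressions agree on every 4-bit triple, since ((i|j)&k) == (i&k)|(j&k).
assert all(f2(i, j, k) == majority(i, j, k)
           for i in range(16) for j in range(16) for k in range(16))
```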
More information about the Python-list mailing list
Math Forum Discussions
User Profile: skaza_@_ustl.edu
UserID: 294563
Name: Sibel Kazak
Registered: 5/14/06
Location: Turkey
Total Posts: 6
Show all user messages
How to Create a Calculated Field
To create a new calculated field, select Analysis > Create Calculated Field, or select Create Calculated Field from one of the Data window title menus.
The Calculated Field dialog box opens.
To define the calculation do the following:
1. Specify a name for the new field.
2. Create a formula that defines the new field. Refer to Writing Formulas in Tableau for more information about how to define a formula.
3. When finished, click OK.
The new calculated field displays in either the Dimensions area or the Measures area of the Data window depending on the data type returned by the calculation. Calculations that return a string or
date are dimensions, while calculations that return a number are measures. In the latter case, you can convert the measure to a dimension if you want to treat the calculated values as discrete rather
than continuous.
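For step 2, a calculated-field formula can be a simple arithmetic expression or use built-in functions. An illustrative sketch (the field names here are invented for illustration and do not come from this help page):

```
IF [Profit] > 0 THEN "Gain" ELSE "Loss" END
```

Because this formula returns a string, the resulting field would appear under Dimensions, whereas a numeric formula such as [Profit] / [Sales] would appear under Measures, as described above.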
Help with this DE problem
February 3rd 2010, 06:20 AM #16
Sep 2009
$\dfrac{dy}{dx}(3x-y^2) = y$
$(3x-y^2)dy = ydx$
now this is in the form of what my professor calls "exact", except we still need an integrating factor. Integrating factors are quite difficult for me to find, and honestly I cannot get this one to work out, so I'm sorry, but I don't think I can help.
There is no one method of finding an integrating factor, but there are special cases. Two things:
1) If $\frac{1}{M}\left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right) = f(y)$, then the integrating factor is $e^{\int f(y)\,dy}$.
2) If $\frac{1}{N}\left(\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}\right) = f(x)$, then the integrating factor is $e^{\int f(x)\,dx}$.
In your case 1) applies and $1/y^4$ is the integrating factor.
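To finish the example (my own completion, offered tentatively; it is not part of the original thread): rewrite the equation as $y\,dx + (y^2 - 3x)\,dy = 0$, so $M = y$ and $N = y^2 - 3x$. Then

```latex
\frac{1}{M}\left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right)
  = \frac{-3 - 1}{y} = -\frac{4}{y},
\qquad\text{so}\qquad
\mu(y) = e^{\int -4/y\,dy} = \frac{1}{y^{4}}.
```

Multiplying through by $1/y^4$ gives the exact equation $\frac{1}{y^{3}}\,dx + \left(\frac{1}{y^{2}} - \frac{3x}{y^{4}}\right)dy = 0$. Integrating $\partial F/\partial x = y^{-3}$ gives $F = x/y^{3} + g(y)$; matching $\partial F/\partial y = -3x/y^{4} + g'(y) = 1/y^{2} - 3x/y^{4}$ forces $g'(y) = 1/y^{2}$, so $g(y) = -1/y$ and the implicit general solution is $\dfrac{x}{y^{3}} - \dfrac{1}{y} = C$.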
Finding Prime Numbers.
I'm trying to write a program that lets the user enter a number; I will then take that number and find all the factors of that number. Then I will display back to the user all of the factors that are perfect squares. (When I say "prime numbers" in the title, I really mean any factor whose square root is a whole number, like 4, 9, and 36, because 2*2 = 4, 3*3 = 9, and so on; those are all factors of 36, including 36 itself, which is 6*6.) I have about 95% of the code, but I can't figure out how I'm going to count and report those square factors. The problem I'm having is that while the algorithm checks whether a factor qualifies, it actually stores the result incorrectly, giving me 0. I want to display back to the user all such factors of the integer they entered. For example, if they enter 36, I want to display the message: your square factors are 4, 9, 36.
Check out my code
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

// ALGORITHM TO FIND ALL THE FACTORS OF A #, THEN REPORT THE PERFECT-SQUARE FACTORS
int main()
{
    int num_1;
    int counter = 0;

    cout << "Enter Number to be factored:";
    cin >> num_1;

    cout << "-----FACTORS OF YOUR #-------" << endl;  // print the header once, not inside the loop
    for (int i = 2; i <= num_1; i++)
    {
        if (num_1 % i == 0)        // i divides num_1 evenly, so it is a factor
        {
            cout << setw(5) << i;
            counter++;             // INCREASE COUNTER EVERY TIME A FACTOR IS PRINTED
            if (counter == 4)      // IF COUNTER EQUALS 4, OUTPUT END LINE AND RESET
            {
                cout << endl;
                counter = 0;
            }
        }
    }
    cout << endl;

    cout << "-----PERFECT-SQUARE FACTORS OF YOUR #-----" << endl;
    for (int i = 2; i <= num_1; i++)
    {
        if (num_1 % i == 0)
        {
            int root = static_cast<int>(sqrt(static_cast<double>(i)) + 0.5);
            if (root * root == i)  // i "sqrts back" to a whole number, e.g. 4, 9, 36
                cout << setw(5) << i;
        }
    }
    cout << endl;
    return 0;
}
Postulates and Theorems
A postulate is a statement that is assumed true without proof. A theorem is a true statement that can be proven. Listed below are six postulates and the theorems that can be proven from these postulates.
• Postulate 1: A line contains at least two points.
• Postulate 2: A plane contains at least three noncollinear points.
• Postulate 3: Through any two points, there is exactly one line.
• Postulate 4: Through any three noncollinear points, there is exactly one plane.
• Postulate 5: If two points lie in a plane, then the line joining them lies in that plane.
• Postulate 6: If two planes intersect, then their intersection is a line.
• Theorem 1: If two lines intersect, then they intersect in exactly one point.
• Theorem 2: If a point lies outside a line, then exactly one plane contains both the line and the point.
• Theorem 3: If two lines intersect, then exactly one plane contains both lines.
Example 1: State the postulate or theorem you would use to justify the statement made about each figure.
Figure 1 Illustrations of Postulates 1–6 and Theorems 1–3.
Through any three noncollinear points, there is exactly one plane (Postulate 4).
Through any two points, there is exactly one line (Postulate 3).
If two points lie in a plane, then the line joining them lies in that plane (Postulate 5).
If two planes intersect, then their intersection is a line (Postulate 6).
A line contains at least two points (Postulate 1).
If two lines intersect, then exactly one plane contains both lines (Theorem 3).
If a point lies outside a line, then exactly one plane contains both the line and the point (Theorem 2).
If two lines intersect, then they intersect in exactly one point (Theorem 1).
Re: st: Why is Mata much slower than MATLAB at matrix inversion?
Re: st: Why is Mata much slower than MATLAB at matrix inversion?
From Patrick Roland <patrick.rolande@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Why is Mata much slower than MATLAB at matrix inversion?
Date Sat, 21 Jul 2012 13:28:33 -0700
As a brief followup, and since StataCorp have responded, I tested
matrix multiplication too. Stata 12.0 takes fully 100 times longer to
multiply two 2000x2000 matrices than MATLAB R2011b.
This code took ~21 seconds in Mata:
j = rnormal(2000,2000,0,1)
g =j*j'
This code took ~0.21 seconds in MATLAB:
j = randn(2000,2000);
g = j*j';
Happy to stand corrected if I've made a mistake.
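For readers without Stata or MATLAB, the same benchmark shape is easy to reproduce; the sketch below is my own NumPy translation, not part of the thread, and absolute timings will depend on the machine and the BLAS library NumPy links against:

```python
import time
import numpy as np

n = 2000
j = np.random.standard_normal((n, n))  # analogue of rnormal(2000,2000,0,1) / randn(2000,2000)

t0 = time.perf_counter()
g = j @ j.T  # the same operation as Mata's j*j' and MATLAB's j*j'
elapsed = time.perf_counter() - t0

print("%dx%d multiply: %.3f seconds" % (n, n, elapsed))
```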
On Fri, Jul 20, 2012 at 6:19 PM, Richard Herron
<richard.c.herron@gmail.com> wrote:
> Snap. Yes, your m from -runiform()- will certainly be invertible.
> Richard Herron
> On Fri, Jul 20, 2012 at 7:14 PM, Patrick Roland
> <patrick.rolande@gmail.com> wrote:
>> To be clear, my point was that all Mata matrix inverse functions are
>> slower than MATLAB. It does seem though that this is not true for
>> small matrices (e.g. 100x100), but the difference is easily an order
>> of magnitude when it comes to larger matrices (2000x2000).
>> The fact that I compared cholinv() and a general inverse function
>> should be to Mata's favor, since cholinv should presumably be faster
>> if it exploits the special structure of the matrix.
>> X'X is positive definite if X is invertible (as in my example),
>> because for any nonzero vector a, a'X'Xa = (Xa)'(Xa) > 0 since Xa is nonzero.
>> On Fri, Jul 20, 2012 at 2:48 PM, David M. Drukker <ddrukker@stata.com> wrote:
>>> Patrick Roland <patrick.rolande@gmail.com> posted that the Mata function
>>> -cholinv()- is slower than a Matlab function for large matrices.
>>> Others have discussed some issues with Patrick's example. Despite these
>>> issues, we took Patrick's post seriously, looked at the code, and found
>>> something that could be sped up.
>>> We will release a faster version of -cholinv()- in an upcoming executable
>>> update.
>>> Note that any speed difference related to -cholinv()- is only noticeable for
>>> large matrices. For small matrices, such as variance-covariance matrices
>>> for models with 100 or fewer parameters, the difference is much harder to
>>> find. For example, the computation takes about .001 seconds on my machine.
>>> Best,
>>> David
>>> ddrukker@stata.com
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
first and second derivative of sin(x) e^(2x)
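A worked answer (my own; the page itself records no reply), by the product rule:

```latex
f(x) = e^{2x}\sin x,\qquad
f'(x) = 2e^{2x}\sin x + e^{2x}\cos x = e^{2x}\,(2\sin x + \cos x),
```
```latex
f''(x) = 2e^{2x}(2\sin x + \cos x) + e^{2x}(2\cos x - \sin x)
       = e^{2x}\,(3\sin x + 4\cos x).
```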
Difference Between GCF and LCM
The Greatest Common Factor (or the GCF) is the greatest whole number shared as a factor by two integers. What makes this number a factor is that it divides each integer evenly; that is, when the two numbers are broken down into their factors, the largest integer shared between them is their greatest common factor.
On the other hand, the Lowest Common Multiple (or LCM) is the smallest integer that can be divided evenly by both numbers. Basically, in the two numbers' respective lists of multiples, the lowest number that the two numbers share is their lowest common multiple.
As regards the GCF, the greatest common factor need not be a prime number; it is simply the largest whole number that divides both integers (in the example below it happens to be the prime 5, but the GCF of 12 and 18, for instance, is the composite number 6). For example, the numbers 10 and 15 are broken down as such:
10: 1, 2, 5, 10
15: 1, 3, 5, 15
When we take both sets of factors into consideration it is plain to see that the greatest integer shared by both numbers is 5: it appears in both factor lists and divides both 10 and 15 evenly.
As regards the LCM, the number is the smallest positive integer that both numbers divide evenly, so it appears in both numbers' lists of multiples. For example, when creating a list of the multiples of 6 and 9:
6: 6, 12, 18, 24, 30…
9: 9, 18, 27, 36, 45…
As we can see, the lowest integer shared by both 6 and 9 is 18 –it is divisible by 1, 6, 9, and itself.
The biggest difference between the GCF and the LCM is that one is based upon what can divide evenly into two numbers (GCF), while the other depends on what number shared between two integers can be divided by the two integers (LCM). One must also consider that if two numbers share no common factor other than 1, they are said to be relatively prime. That's exactly what the GCF and LCM find: how two whole numbers relate to each other.
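The two relationships can also be computed directly rather than by listing factors and multiples. A small sketch using Euclid's algorithm (a standard method, not described in the article itself):

```python
def gcf(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # For positive integers, gcf(a, b) * lcm(a, b) == a * b.
    return a * b // gcf(a, b)

print(gcf(10, 15))  # -> 5, the article's GCF example
print(lcm(6, 9))    # -> 18, the article's LCM example
print(gcf(12, 18))  # -> 6, showing the GCF can be composite
```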
1. The GCF is based upon what integer divides evenly into two numbers; the LCM is based upon what integer two numbers share in a list of multiples.
2. The GCF divides both numbers evenly; the LCM is evenly divisible by both numbers.
Why Investing in the Stock Market for Less Than 5 Years is Risky
This post provides justification for the adage that you should not put money into the stock market that you will need in less than 5 years. It is the 5-year version of earlier posts that discussed
the distribution of 10 and 20-year stock market returns (see links below).
Some years back, a friend asked me to recommend a good stock investment for her daughter's college fund. Since withdrawals were to start in about three years, my recommendation was not to put the
college fund in the stock market at all! Here's why.
What will a $20,000 Stock Market Investment be Worth in 5 Years?
In previous posts, we have looked at the distribution of historical outcomes for typical (but hypothetical) investors investing in the stock market for
10 years
and for
20 Years
. For those holding periods, the investors virtually always made money -- though sometimes barely so. In approximately 100 sample 10-year periods since 1900, we saw only one instance where the
investor's ending portfolio was worth less than his initial investment -- and no instances for 20-year periods.
The above graph (click to expand) is the 5-year version of the earlier charts. It shows the historical results of
investing $20,000 in the stock market for 5 years. The horizontal axis shows values of the portfolio at the end of the 5 years assuming dividends were reinvested (and, no taxes). The lines and bars
reflect the frequency of various outcomes for about 100 5-year periods beginning year-end 1899.
(Note: to calculate ending portfolio values for an initial investment of $1,000, divide the ending portfolio values by 20. To calculate the results for n thousand dollars, multiply the results for
$1,000 by n. For example, for a $50,000 investment, multiply the results for $1,000 by 50.)
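The arithmetic behind any single outcome on the chart is just compound growth with dividends reinvested. A minimal sketch (the yearly returns below are invented for illustration; they are not the DJIA's actual history):

```python
def ending_value(initial, yearly_returns):
    """Compound an initial investment through a sequence of total yearly returns."""
    value = initial
    for r in yearly_returns:
        value *= 1 + r  # dividends reinvested; no taxes, loads, or fees
    return value

# Five hypothetical total-return years: +10%, -5%, +20%, +3%, +7%
print(round(ending_value(20000, [0.10, -0.05, 0.20, 0.03, 0.07]), 2))  # -> 27640.67
```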
Short-Term Investments in the Stock Market are Risky
Long-term stock market returns have averaged about 10%/year (see
Average Stock Market Returns since 19xx
). However, yearly returns vary dramatically (see
Stock Market Yearly Returns
). Over time, the long-term average prevails. However, the shorter the investment period, the more the investor is at the mercy of the yearly variation, and the less he takes advantage of the
longer-term trend. As a result, short-term investments are significantly more risky than 10 and 20-year investments.
Let's look at two ways of measuring that risk:
• What's the worst case?
• How often will you lose money?
(Note: To see results for investments with more predictable rates of return, see
What Will My Bond/ CD be Worth in 5 Years?
Worst-Case Stock Market Returns for Short-Term Investments
The blue bars show the number of years, indicated on the
vertical axis, where the ending portfolio was less than or equal to the amount shown on the horizontal axis. For example, there were two years where the ending portfolio was less than $10,000, and 22
years where the ending portfolio was less than $25,000.
In our earlier analysis, the worst 10-year result was a loss of a little over 10%. The worst 20-year result was a gain
of over 50%. The worst 5-year result is close to a
60% loss
-- a $20,000 initial investment resulted in an ending portfolio of a little over $8,000. By this measure, four-year results were even worse. The worst 4-year result was an ending portfolio of a bit
over $5,000 --
a loss of more than 70%!
Results like those could be a major problem for a high school student's college fund....
Frequency of Stock Market Losses for Short-Term Investments
Life is full of risks. Many readers might find the risk of even a 70% loss tolerable if the chances of realizing that loss were sufficiently remote. For example, most people think that the risk of
their plane crashing is remote enough for them to continue to fly.
The green line shows the cumulative probability, measured against the
vertical axis, that the investor's ending portfolio value will be less than the value on the horizontal axis. For example, investors lost more than half of their $20,000 investment (i.e., ended with
less than $10,000) about 2% of the time -- that's the meaning of the "2%" above "$10,000" on the horizontal axis. Similarly, the green line shows that our hypothetical investors' ending portfolio was
worth less than the $20,000 they initially invested about 7% of the time.
How Often Will I Lose Money Investing in the Stock Market for One Year?
The risk of losing money tends to increase as the number of years decreases. Whereas hypothetical investors lost money 7% of the time over five years, investors investing for four years lost money
12% of the time; one-year investors lost money
27% of the time
(see graph above). Note, however, that the worst-case loss was "only" a little less than 50% -- less than the two, three, four and five-year worst cases.
Implications of Stock Market Variability on Retirement Planning & Buying a House
These results have implications not only for planning for college. They are relevant to planning for any known, "fixed" obligation. For example, if you need money for a down payment on a home in the
next few years, having that money invested in the stock market is clearly risky.
Similarly, you might want to protect retirement savings needed for near-term expenses from short-term variations in the stock market. As a retiree, my personal rule of thumb is to keep cash needed
for the next five years protected from stock market variations. However, putting that money in the stock market can make sense if you are comfortable with the risks -- especially if even worst-case
results have little impact upon your lifestyle.
Note: As always, these results are for
investors investing in the DJIA (Dow Jones Industrial Average); results do not include the
impact of loads/commissions, transaction fees, taxes or other expenses
. Dividends are reinvested annually. Investments in any broad stock market average, such as the S&P 500, would show similar theoretical results.
Related Posts
What Will My Bond/CD be Worth in 5 Years?
: An easy way to estimate the results of investments with more predictable returns.
What Would $1 Invested in the Stock Market in 19xx be Worth Now?
Actually calculate the results for any previous 5-year period (or between any other 2 years).
Yearly Stock Market Returns since 1929
shows the variation in year-to-year returns (in percentages).
Range 1-10-Year Stock Market Returns in Dollars
: Graph of the best and worst results for holding periods ranging from 1-10 years in dollars.
Rolling 5 Year Stock Market Returns
: A different look at these 5-year returns.
What Will a $10,000 Investment be Worth in 10 Years?
: similar to this post, but for 10 years.
What Will a $100,000 Investment be Worth in 20 Years?
: similar to this post, but for 20 years.
The Best & Worst Stock Market Returns
for holding periods from 1-100 Years.
Don't Plan Retirement Assuming Average Stock Market Returns
looks at the risks of ignoring the impact of the variability of stock market returns -- especially on retirement planning.
For lists of other popular posts and an index of stock market posts, by subject area, see the sidebar to the left or the blog header at the top of the page.
Copyright © 2011. Last modified: 11/3/2011
2 comments:
1. Risky is a lot of challenging and we know it.But sometimes managing risk can lead you to a great investment and business.
1. True. However, to manage the risk well, you must first be aware of it, and understand the nature and extent of the risk. Thus, this post.
Celebrating 3.14159265
March 26th, 2009 · No Comments · Community, Education
By Dave Quick
Special to Signal Tribune
On March 20, Cabrillo High School celebrated its second annual “Pi Day” with festivities honoring one of mathematics’ most intriguing and storied numbers¬– pi. “If we can have a pep rally for sports,
why not a pep rally for math?” said Fred Olmedo, head of Cabrillo’s math department.
Pi is the ratio of the distance across a circle (diameter) to the distance around the circle (circumference). The search for pi has gone on for thousands of years and engaged many different cultures
from Babylonia to India to China to the Incas and Aztecs of the Americas. The ratio pi, which starts 3.14159…, goes on endlessly without repeating itself. Supercomputers have carried out pi well past
one billion digits.
At Cabrillo, students received a “Pi Day Passport” and during their one-hour math session they could visit up to 14 event stations, each with a pi-themed activity. At the “pi bee” event, aspiring
mathematicians could compete to see who can recite the most digits of pi from memory. During the 11am session, the tension was palpable as junior Kris Grant recited 99 digits and then drew a blank,
only one digit short of tying that session’s best of 100 digits. Grant regrouped, started over again and worked his way back to the 99th digit. He paused, and then correctly recited “seven” to tie at
100 digits. He next recited “nine!” and he and his classmates jumped for joy as he captured the win with 101 pi digits from memory.
Other Pi Day events at Cabrillo included the opportunity to find by computer the first occurrence of your numeric birthday among the first 200 million digits of pi, graphic renderings of the pi
symbol, screening of “Pi Movie,” a number of athletic events involving round items and of course, pizza and apple “pi.”
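The birthday-search station is easy to sketch in code. The snippet below is a hypothetical illustration (the event's actual software isn't described): it generates digits of pi with Machin's formula in integer arithmetic, then searches the resulting string.

```python
def pi_digits(n):
    """First n digits of pi after the decimal point, using Machin's
    formula pi/4 = 4*arctan(1/5) - arctan(1/239) in integer arithmetic."""
    scale = 10 ** (n + 10)                 # 10 guard digits

    def arctan_inv(x):
        # arctan(1/x) * scale via the alternating series
        total, power, k, sign = 0, scale // x, 1, 1
        while power:
            total += sign * (power // k)
            power //= x * x
            k += 2
            sign = -sign
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled // 10 ** 10)[1:]  # drop the leading "3"

def find_birthday(pattern, digits=2000):
    """Offset of the pattern's first occurrence after the decimal point (-1 if absent)."""
    return pi_digits(digits).find(pattern)

print(find_birthday("926"))  # 4 -- pi begins 3.1415926...
```

Searching 200 million digits the same way just means precomputing a much longer digit string once and reusing it.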
“Pi Day” has become an annual celebration at Cabrillo High School. In addition to the faculty of the math department, the website www.piacrossamerica.org, a free tutorial for teachers, helps support
Cabrillo High School’s “pi day.”
if f:A->B with A closed & bounded, then f(A) is closed & bounded.
November 17th 2009, 05:57 AM #1
I have written a valid proof that f(A) is bounded. I don't know how to show that f(A) is also closed.
I am wondering if I can use the Completeness Theorem to argue that there is a supremum and infimum. Then I would need to show that those two numbers reside in f(A) for f(A) to be closed and not open. I
don't know how to do that, but am I at least on the right track?
Nevermind, I found out that I am on the right track, and that I just need to use the Maximum-Minimum Theorem.
And, of course, you must assume f is continuous...
I assume since you are discussing the completeness theorem that you are referring to $A,B\subset\mathbb{R}$. How exactly did you prove boundedness without proving closure? Did you use the fact
that $A$ is compact?
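For the closure half, a sketch of the standard sequential argument (over $\mathbb{R}$, with $f$ continuous, as noted above) runs:

```latex
Let $y_n \in f(A)$ with $y_n \to y$. Pick $x_n \in A$ with $f(x_n) = y_n$.
Since $A$ is closed and bounded, Bolzano--Weierstrass gives a convergent
subsequence $x_{n_k} \to x$, and closedness of $A$ puts $x \in A$.
By continuity, $y_{n_k} = f(x_{n_k}) \to f(x)$, so $y = f(x) \in f(A)$.
Hence $f(A)$ contains all its limit points and is closed.
```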
Need help with C++ integration problem
11-17-2011 #1
I have an assignment where I have to use the trapezoidal and Simpson's rules to estimate the integral of a function. Everything works as far as I can tell, except for the final results. If
someone could help me out, it would be much appreciated.
//Declare global variables
double A, B, x=0, n1, n2;
char ch;
//Function to evaluate both of the functions
double evalf(double x){
if(ch == 'a')
return(sin((2*x)/5) - x + 1);
return(cos((2*x)/5) + (3*x) + 1);
//Function to input the number of intervals wanted for the Trapezoidal rule,
//and see if their accuracy is acceptable.
double trap_intervals(){
double accuracy=0, error=1, h, n;
while(error > accuracy){
printf("\nPlease enter the number of subintervals you would like: ");
scanf("%lf", &n);
printf("Please enter the accuracy you would like: ");
scanf("%lf", &accuracy);
h = (B-A)/n;
error = ((B-A)*(h*h)*sin(2*B/5))/75;
if(error < 0)
error = -error;
if(error > accuracy)
printf("\nSorry, you do not have enough subintervals for the level of accuracy.");
printf("\nYour error is %lf.\n", error);
//Function to estimate the integral of the function using the Trapezoidal Rule
double trapezoid(){
double X, Y, p, h, x, sum;
h = (B-A)/n1;
X = evalf(A);
Y = evalf(B);
for(x=(A+h); x<B; x+=h){
p = evalf(x);
return(h*((1/2)*X + sum + (1/2)*Y));
//Function to input the number of intervals wanted for Simpson's rule, and see
//if their accuracy is acceptable.
double simp_intervals(){
double accuracy = 0, error = 1, h, n;
while(error > accuracy){
printf("\nPlease enter the number of subintervals you would like: ");
scanf("%lf", &n);
printf("Please enter the accuracy you would like: ");
scanf("%lf", &accuracy);
h = (B-A)/n;
error = (-4*(B-A)*(h*h*h*h)*sin(2*B/5))/28125;
error= -error;
if(error > accuracy)
printf("\nSorry, you do not have enough subintervals for the level of accuracy.");
printf("\nYour error is %lf.\n", error);
//Function to estimate the integral of the function using Simpson's Rule
double simpson(){
double X, Y, x, p, q, h, sum1, sum2;
int i;
h = (B-A)/n2;
X = evalf(A);
Y = evalf(B);
for(x=A+h; x<B; x+=h){
i = 1;
if(i%2 == 0){
p = 2*evalf(x);
q = 4*evalf(x);
return((h/3)*(X + sum1 + sum2 + Y));
//Main function starts here.
int main(){
double temp;
//Decide which function you wish to integrate
printf("If you want to integrate the function f(x) = sin(2x/5) - x + 1, press 'a'.");
printf("\nIf you want to integrate the function f(x) = cos(2x/5) + 3x + 1, press 'b'.\n");
scanf("%c", &ch);
//Ensure the user only picks a or b
while(ch != 'a' && ch != 'b'){
printf("\nSorry, invalid input. Please try again:\n");
scanf("%c", &ch);
//Tell the user which function will be integrated
if(ch == 'a')
printf("\nThe function we are integrating is f(x) = sin(2x/5) - x + 1.\n");
printf("\nThe function we are integrating is f(x) = cos(2x/5) + 3x + 1.\n");
//Choose the intervals, swap A and B if A is greater than B
printf("Please enter the values of the interval [A,B].\nA: ");
scanf("%lf", &A);
printf("\nB: ");
scanf("%lf", &B);
printf("\nA is larger than B. Swapping values now...");
B = temp;
A = B;
temp = A;
temp = 0;
//Pick intervals for Trapezoidal Rule
printf("\nWe must now enter the number of intervals for the trapezoidal rule.");
n1 = trap_intervals();
//Pick intervals for Simpson's Rule
printf("\nWe must now enter the number of intervals for Simpsons rule.");
n2 = simp_intervals();
//Estimate the value of the integration using the Trapezoidal Rule
temp = trapezoid();
printf("\nThe value of integration using the trapezoidal rule is %lf.", temp);
temp = 0;
//Estimate the value of the integration using Simpson's Rule
temp = simpson();
printf("\nThe value of integration using Simpson's rule is %lf.", temp);
//End program
printf("\n\nPress any key to finish...");
Basically the problem is in the trapezoid and simpson functions, sorry for pasting the whole code.
What C++? All I see is C!
First, learn to write code without unnecessary Global Variables.
Second, learn to swap two value correctly.
Third, learn to post C code in the C sub-forum.
printf("\nA is larger than B. Swapping values now...");
B = temp;
A = B;
temp = A;
temp = 0;
Tim S.
The origins of mass & the feebleness of gravity by Frank Wilczek
@jshadlen quarks & gluons ∃ in a virtual-particle soup. The energy quanta where heisenberg locality−strongcharge balance = particle masses.
— isomorphismes (@isomorphisms)
November 19, 2012
• dark matter & dark energy
• "Even though protons, neutrons, and electrons comprise only 3% of the universe’s mass as a whole, I hope you’ll agree that it’s a particularly significant part of the mass." lol
• "Just because you can say words and they make sense grammatically doesn’t mean they make sense conceptually. What does it mean to talk about ‘the origin of mass’?”
• "Origin of mass" is meaningless in Newtonian mechanics. It was a primitive, primary, irreducible concept.
• Conservation is the zeroth law of classical mechanics.
• F=MA relates the dynamical concept of force to a kinematic quantity and a conversion factor (mass).
• rewriting equations and they “say” something different
• the US Army field guide for radio engineers describes “Ohm’s three laws”: V=IR, I=V/R, and a third one which I’ll leave it as an exercise for you to deduce”
• m=E/c²
• Einstein’s original paper Does the inertia of a body depend on its energy content? uses this ^ form
• You could go back and think through Einstein’s problem (knowing the solution) in terms of free variables. In order to unite systems of equations with uncommon terms, you need a conversion factor
converting a ∈ Sys_1 to b ∈ Sys_2.
• Min 13:30 “the body and soul of QCD”
• Protons and neutrons are built up from quarks that are moving around in circles, continuously being deflected by small amounts. (chaotic initial value problem)
• supercomputer development spurred forward by desire to do QCD computations
• Min 25:30 “The error bounds were quite optimistic, but the pattern was correct”
• A model with two parameters that runs for years on a teraflop machine.
• Min 27:20 The origin of mass is this (N≡nucleon in the diagram): QCD predicts that energetic-but-massless quarks & gluons should find stable equilibria around .9 GeV:
Or said alternately, the origin of mass is the balance of quark/gluon dynamics. (and we may have to revise a bit if whatever succeeds QCD makes a different suggestion…but it shouldn’t be too
• OK, that was QCD Lite. But the assumptions / simplifications / idealisations make only 5% difference so we’ll still explain 90% of the reason where mass comes from.
• Computer ∋ 10^27 neutrons & protons
• The supercomputer can calculate masses, but not decays or scattering. Fragile.
• Minute 36. quantum Yang-Mills theory, Fourier transform, and an analogy from { a stormcloud discharging electrical charge into its surroundings } to { a "single quark" alone in empty space would
generate a shower of quark-antiquark virtual pairs in order to keep a balanced strong charge }
• Minute 37. but just like in QM, it “costs” (∃ a symplectic, conserved quantity that must be traded off against its complement) to localise a particle (against Heisenberg uncertainty of momentum).
And here’s where the Fourier transform comes in. FT embeds a frequency=time/space=locality tradeoff at a given energy (“GDP" in economic theory). The “probability waves" or whatever—spread-out
waveparticlequarkthings—couldn’t be exactly on top of each other, they’ll settle in some middle range of the Fourier tradeoff.
• "quasi-stable compromises"
• This is similar to how the hydrogen atom gets stable in quantum mechanics. Coulomb field would like to pull the electron on top of the proton, but the quantum keeps them apart.
• Quantum mechanics uses the mathematics of musical notes (vibrating harmonics).
• Quantum chromodynamics uses the mathematics of chords, specifically triads since 3 colour forces act on each other at once.
• Particles are nothing more than stable tradeoffs that can be made between localisation costs (per energy) from QM and colour forces.
• (Aside to quote Wikipedia: “Mathematically, QCD is a non-Abelian gauge theory based on a local (gauge) symmetry group called SU(3).”)
• Minute 40. Because the compromises can’t be evened out exactly due to quanta, there’s some leftover energy. It’s the same for a particular kind of quark-gluon interaction (again, because of the
quanta). The .9 GeV overshoot | disbalance | asymmetry in some particular quark-gluon attempts to balance creates the neutrons and protons. And that’s the origin of mass.
Minute 42. Feebleness of gravity.
• (first of all, gravity is weak—notice that a paperclip sticks to a magnet rather than falling to the floor)
• (muscular forces are the result of a lot of ATP conversions and such. That just happens to be even weaker—but if you think of how far removed those biochemical electropulses and cell fibres are
from the fundamental foundation, maybe that’s not so surprising.)
• Gravity is 40 orders of magnitude weaker than the electrical force. Not forty times, forty orders of magnitude.
• Planck’s vision; necessary conversion; a theory of the universe with only numbers.
• The Planck distance, even for nuclear physicists, is about 20 orders of magnitude too small.
• The clunkiness of Planck’s constants mocks dimensional analysis. “If you measure natural objects in natural units, you should get something of the order of unity”.
• "If you agree that the proton is a natural object and the Planck scale is a natural unit, you’d be off by 18 orders of magnitude".
• Suppose gravity is a primitive. Then the question becomes: “Why is the proton so light?” Which now we can answer. (see above)
• Simple physics (local interactions, basic = atomic = fundamental = primitive behaviours) should occur at Planck scales. (More complex behaviours then should “emerge” out of this reduction.)
• So that should be, in terms of energy & momentum, 10^18 proton masses, where the fundamental interactions happen.
• The value of the quark-gluon interaction at the Planck scale. “Smart” dimensional analysis says the quantum level that makes protons from the gluon-quark interactions then gets us to ½, “which I
hope you’ll agree is a lot closer to unity than 10^−18”.
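That dimensional-analysis claim is easy to ballpark. The sketch below uses approximate CODATA-style constants (my values, not taken from the talk) to form the Planck mass and compare it to the proton:

```python
import math

# Approximate physical constants (SI units)
hbar = 1.054571817e-34      # J*s
c = 2.99792458e8            # m/s
G = 6.67430e-11             # m^3 kg^-1 s^-2
m_proton = 1.67262192e-27   # kg

m_planck = math.sqrt(hbar * c / G)   # ~2.18e-8 kg
ratio = m_planck / m_proton          # ~1.3e19

print(f"Planck mass / proton mass ~ 10^{math.log10(ratio):.0f}")
```

So a "natural" (Planck-scale) particle would outweigh the proton by roughly the 18-19 orders of magnitude the talk keeps returning to.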
• Minute 57. “A lot of what we know about the deep structure of the Standard Model is summarised on this slide”
• weak force causes beta decay
• standard model not so great on neutrino masses
• SO(10)’s spinor representation has all the standard model’s symmetries as subgroups
• Minute 67. Trips my regression-analysis circuits. Slopes & intercepts. Affine!
• Supersymmetry would have changed the clouds and made everything line up real nicely. (The talk was in 2004 and this week, in 2012, the BBC reported that SuSy was kneecapped by the latest LHC results.)
• "If low-energy supersymmetry turns out to be false, I’ll be very disappointed and we’ll have to think of something else."
(Source: mit.tv)
Circuit Theory/TF Examples/Example34/io
From Wikibooks, open books for an open world
Starting Point
Starting point of i[o] looks good. Take integral to visit V[c] initial condition. Then get expression for V[total], then take integral of that to visit I[L] initial condition. Lots of integrals.
Transfer Function
$H(s)= \frac{i_o}{i_s} = \frac{\frac{1}{R_2 + \frac{1}{sC}}}{\frac{1}{sL}+\frac{1}{R_1}+ \frac{1}{R_2 + \frac{1}{sC}}}$
L :=1;
R1 := 1/2;
C := 1/2;
R2 := 1.5;
simplify((1/(R2 + 1/(s*C))/(1/(s*L) + 1/R1 + 1/(R2 + 1/(s*C))))
$\frac{i_o}{i_s} = \frac{2s^2}{8s^2 + 11s + 4}$
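The MuPAD simplification can be spot-checked without a CAS by evaluating both forms of H(s) at a few arbitrary points (plain Python, complex arithmetic; a sanity check, not part of the original derivation):

```python
# Component values from the text
L, R1, C, R2 = 1.0, 0.5, 0.5, 1.5

def H_raw(s):
    # Current divider: the R2-C branch against the inductor and R1
    y_branch = 1.0 / (R2 + 1.0 / (s * C))
    return y_branch / (1.0 / (s * L) + 1.0 / R1 + y_branch)

def H_simplified(s):
    return 2 * s**2 / (8 * s**2 + 11 * s + 4)

for s in (1.0, 2.5, 1j, 3 + 4j):
    assert abs(H_raw(s) - H_simplified(s)) < 1e-12
print("simplified H(s) matches the impedance expression")
```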
Homogeneous Solution
Set the denominator of the transfer function to 0 and solve for s:
solve(8*s^2 + 11*s + 4)
$s_{1,2} = \frac{-11 \pm \sqrt{7}i}{16}$
So the solution is going to have the form:
$i_{o_h} = e^{-\frac{11t}{16}}\left(A\cos \frac{7t}{16} + B\sin \frac{7t}{16}\right)$
Particular Solution
After a very long time the inductor shorts, all the current flows through it so:
$i_{o_p} = 0$
Initial Conditions
Adding the particular and homogeneous solutions, get:
$i_o(t) = i_{o_p} + i_{o_h} + C = e^{-\frac{11t}{16}}(A\cos \frac{7t}{16} + B\sin \frac{7t}{16}) + C$
Doing the final condition again, get:
$i_o(\infty) = 0 = C \Rightarrow C = 0$
Let's try for V[c] first. From the terminal relation for a capacitor:
$V_c(t) = \frac{1}{C}\int i_o dt$
io := exp(-11*t/16)*(A*cos(7*t/16) + B*sin(7*t/16))
VC := 2* int(io,t)
We know that initially V[c] = 1.5 so at t=0 can find equation for A and B:
t :=0;
solve(1.5 = VC)
At this point mupad goes numeric and get this equation:
$A = - 0.6363636364B - 0.7244318182$
Need another equation. Can find V[t] by adding V[r] and V[c]. Then from V[t] can find expression for the current through the inductor and visit it's initial condition. Need to start over in MuPad
because t=0 has ruined the current session. So repeating the setup of V[C]:
io := exp(-11*t/16)*(A*cos(7*t/16) + B*sin(7*t/16))
VC := 2*int(io,t)
The integration constant is going to be zero because after a long time V[C] is zero (the inductor shorts).
VT := VC + io*1.5
From the terminal equation for an inductor:
$I_L(t) = \frac{1}{L}\int V_T dt$
IL := int(VT,t)
Mupad goes numeric.
At this point have to figure out the integration constant. After a long time, the inductor's current is going to be 1 because it shorts the current source. Looking at IL in the mupad window can see
that every term is multiplied by e^-0.6875t which is going to zero as t goes to ∞. This means the integration constant is 1.
So add 1 to IL, then set t=0 and I[L] = 0.5 and again solve for A and B:
t :=0;
solve (IL + 1 = 0.5)
Get this equation:
$A = 6.273453094B + 1.802644711$
Now need to solve the two equations and two unknowns:
solve([A = - 0.6363636364*B - 0.7244318182,A = 6.273453094*B + 1.802644711],[A,B])
$A = -0.4916992187, B = -0.3657226563$
So now have time domain expression for i[o] step response:
$i_o(t)\mu(t) = e^{-\frac{11t}{16}}(-0.492\cos \frac{7t}{16} - 0.366\sin \frac{7t}{16})$
Impulse Response
The impulse response is the derivative of the step response:
i_u := exp(-11*t/16)*(-0.4916992187 * cos(7*t/16) - 0.3657226563*sin(7*t/16))
i_s := diff(i_u,t)
Convolution Integral
The first step is to substitute into i_s for t:
i_sub := subs(i_s, t = y-x):
Now form the convolution integral:
f := i_sub*(1 + 3*cos(2*x)):
io := int(f,x = 0..y)
Replacing y with t:
i_o :=subs(io, y=t)
There is going to be an integration constant. This value can not be determined because the driving function oscillates. The initial conditions of the inductor and capacitor have already been visited.
More information (like a specific value at a future time) is needed in order to compute an integration constant.
Thus the final answer is:
$i_o = 0.335sin(2t) - 0.0177cos(2t) - 0.474e^{-0.6875t}cos(0.438t) - 0.648e^{-0.6875t}sin(0.438t) + 0.492$
The answer indicates that i[o] = 0 when t=0. The exponential terms die after 5/0.6875 = 7.27 seconds leaving a sinusoidal function at more than a 90° phase shift from the current source with a DC
level of about 0.5 amps.
Welcome to MAA-Wisconsin Spring 2010 Meeting!
Face Off, the Mathematics Game Show
What is it? Face Off is a mathematics quiz show with questions from the broad realm of mathematics. And we mean broad! Teams of 2-4 students representing their schools compete to answer these
questions. Each team gets a sign with the face of a mathematician (For example, your team could play as Descartes, Gauss, Hilbert, Noether, or Newton.) A team “buzzes in” to answer a question and
earns points if its answer is correct. Teams can use a calculator, paper, and pencil. For more information, visit the Face Off website: http://www.uwosh.edu/faculty_staff/szydliks/faceoff.htm
When is it? Friday, April 16, 5:30-6:30 pm., as part of the MAA-Wisconsin Section meeting
Sample Questions:
The Off Limits category contained the following questions:
20 pts. What is lim[ x→ π/2 ]( sin x) / x ?
40 pts. What is lim[ x→ 2] (x - 3) / (x - 2) ?
60 pts. What is lim[ x→ 0] |x| / x ?
80 pts. What is lim[ x→ 1] ( 2^x - 2 ) / ( x - 1 ) ?
The Take a Number category contained the following questions:
20 pts. How many pips are on a standard die?
40 pts. What prime number is both the sum of two primes and the difference of two primes?
60 pts. What two-digit number has a cube root equal to the square root of the sum of its digits?
80 pts. What is the smallest non-palindromic number whose square is a palindrome?
How do we enter? Please contact one of the Face Off organizers if you would like to enter a team. Any student who has taken or is enrolled in Calculus I is eligible to join a Face Off team
representing their school. If a school doesn’t have enough interested students, contact the organizers anyway – we can combine interested students to form hybrid teams. Space will be limited, so form
a team soon and let us know of your interest!
Face Off Organizers:
Dr. Ken Price (pricek@uwosh.edu, (920)424-1057),
Dr. Steve Szydlik (szydliks@uwosh.edu, (920)424-7346)
Homework Help
Posted by alex on Thursday, March 21, 2013 at 6:42am.
Condition | Pressure | Volume | Temperature | Comment
initial   | 1.0 atm  | 22.4 L | 273 K       | 1 mole of N2 gas fills the tank.
final     | 3.0 atm  | 22.4 L | 273 K       | 2 moles of O2 gas have been added to the 1 mole of N2 gas in the tank.
Key Questions
7. What was the initial pressure caused by the nitrogen in the tank?
8. What amount of pressure do you think the nitrogen contributed to the final total pressure?
Explain your reasoning.
9. If the sum of the pressure due to the nitrogen and the pressure due to the oxygen must equal
the total pressure, what is the oxygen pressure in the model? Explain your reasoning.
10. What fraction of the pressure is due to oxygen?
11. What is the oxygen mole fraction in the tank?
12. What is the relationship between the oxygen pressure ratio in Key Question 10 and the
oxygen mole fraction in Key Question 11? Express this relationship in the form of an equation.
• chemistry - DrBob222, Thursday, March 21, 2013 at 1:53pm
7. What was the initial pressure caused by the nitrogen in the tank? 1 atm
8. What amount of pressure do you think the nitrogen contributed to the final total pressure? 1 atm. Dalton's Law of partial pressure
Explain your reasoning.
9. If the sum of the pressure due to the nitrogen and the pressure due to the oxygen must equal
the total pressure, what is the oxygen pressure in the model? Explain your reasoning. Ptotal = pN2 + pO2; therefore,
pO2 = 2 atm
10. What fraction of the pressure is due to oxygen? 2/3
11. What is the oxygen mole fraction in the tank? 2/3
12. What is the relationship between the oxygen pressure ratio in Key Question 10 and the oxygen mole fraction in Key Question 11? Express this relationship in the form of an equation.
I will leave 12 to you.
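DrBob222's numbers follow directly from Dalton's law; a quick check (assuming ideal gases at fixed V and T, so pressure scales with moles):

```python
n_N2, n_O2 = 1.0, 2.0            # moles of each gas in the tank
P_total = 3.0                    # atm, the final total pressure

x_O2 = n_O2 / (n_N2 + n_O2)      # mole fraction of O2
p_O2 = x_O2 * P_total            # partial pressure of O2 (Dalton's law)
p_N2 = P_total - p_O2            # what's left is nitrogen's share

print(x_O2, p_O2, p_N2)          # 0.666..., 2.0, 1.0
```

Note that the oxygen pressure fraction (2/3) and the oxygen mole fraction (2/3) come out equal, which is the relationship Key Question 12 is after.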
HCSSiM 2012
I was a senior staff member at the HCSSiM program in 2012, where I taught a workshop for the first half of the program, so 17 lectures, with my awesome junior staff Elizabeth Campolongo, Maxwell
Levit, and Josh Vekhter.
I also gave a prime time theorem talk on the game of Nim.
I also wrote a bit about the tradition of Yellow Pig Day and the songs we sing every July 17th.
I also wrote a post about what is a proof and why mathematicians are trained to admit they’re wrong.
Here are the notes from the workshop:
1. May 1, 2013 at 11:12 pm |
Hi Cathy! Are you going to teach the 2013 camp?
2. June 30, 2013 at 6:35 pm |
I just dropped my daughter off at HCSSiM 2013, with high hopes for all she’ll learn, and how much fun she’ll have! She has never found a group of kids that she felt were “like her”, and I hope
she will there. I wanted to thank you, as I learned of the program through your blog. (I heard of your blog at SPARK at MIT.) Sorry you won’t be there this year.
V/CV OR VC/V PATTERN
Number of results: 2,655
Sort each spelling word by determining if it follows the V/CV pattern or the VC/V pattern. Write each spelling word in the correct column. Does the word tyrant go in the V/CV pattern-long or the VC/V
pattern-short? V/CV PATTERN MEANS LONG AND VC/V PATTERN MEANS SHORT
Monday, November 7, 2011 at 6:09pm by JASON
V/CV OR VC/V PATTERN
Don't understand the concept of the v/cv or vc/v pattern. Help please
Tuesday, November 29, 2011 at 6:55pm by casi
V/CV OR VC/V PATTERN
Can you tell me what that means (the pattern of v/cv and vc/v)?
Tuesday, November 29, 2011 at 6:55pm by vanesa
V/CV OR VC/V PATTERN
is tyrant a v/cv pattern or a vc/v pattern
Tuesday, November 29, 2011 at 6:55pm by james
V/CV OR VC/V PATTERN
Answer for: equal ? V/VC or VC/C
Tuesday, November 29, 2011 at 6:55pm by Art
V/CV OR VC/V PATTERN
What do V/VC and VC/V mean?
Tuesday, November 29, 2011 at 6:55pm by Writeacher
pattern - vc/v tyrant - v/cv equal - v/cv You'll find the rest of these in a dictionary such as Dictionary.com.
Sunday, February 3, 2013 at 11:36am by Ms. Sue
what is vc/v pattern and v/cv pattern?
Tuesday, January 5, 2010 at 8:08pm by dahee
V/CV OR VC/V PATTERN
what pattern is "student"
Tuesday, November 29, 2011 at 6:55pm by Valen
V/CV OR VC/V PATTERN
V/CV means that when the / is the syllable split, the vowel in front (V) is a long vowel (the vowel says its name) like in equal and profile. VC/V means that the front vowel does not say its name
like in linen and punish.
Tuesday, November 29, 2011 at 6:55pm by Persons random
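Given a syllable split (which has to come from a dictionary), the pattern label itself is mechanical. A small sketch using the examples from this thread:

```python
VOWELS = set("aeiouy")   # y acts as a vowel in words like ty/rant

def syllable_pattern(word, split):
    """Label a two-syllable word 'V/CV' or 'VC/V' from its split point.

    V/CV: the first syllable ends in its vowel (long vowel, as in e/qual).
    VC/V: a consonant closes the first syllable (short vowel, as in lin/en).
    """
    first = word[:split]
    return "V/CV" if first[-1] in VOWELS else "VC/V"

print(syllable_pattern("equal", 1))   # V/CV  (e/qual)
print(syllable_pattern("tyrant", 2))  # V/CV  (ty/rant)
print(syllable_pattern("linen", 3))   # VC/V  (lin/en)
print(syllable_pattern("punish", 3))  # VC/V  (pun/ish)
```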
V/CV or VC/V
What pattern is it?
Monday, December 3, 2012 at 7:24pm by Tyrant
what is the difference between a v/cv or vc/v pattern
Tuesday, January 24, 2012 at 8:49pm by Anonymous
What the heck is the V/CV pattern and VC/V pattern?!?!?. Please tell me!!!! -Thanks, Danielle
Monday, January 11, 2010 at 5:38pm by Danielle
What does the vc/v and v/cv pattern mean? Tell me, pretty please, with a cherry on top
Thursday, November 29, 2012 at 10:04pm by vanesa
V/CV OR VC/V PATTERN
Vowel / consonant-vowel and vowel-consonant / vowel
Tuesday, November 29, 2011 at 6:55pm by Demarcus Gissendanner
determining which of the following words fall under what pattern. being able to i.d patterns on words like tyrant, equal, humor, recent, profile etc. The assignment ask to list if they are v/cv
patterns or vc/v patterns
Thursday, November 8, 2007 at 9:04pm by Crystal
Sort each spelling word by determining if it follows the v/vc or the vc/v pattern
Sunday, February 3, 2013 at 11:32am by Jayden
what does v/cv and vc/v mean
Wednesday, January 25, 2012 at 7:38am by tbone
Tuesday, March 19, 2013 at 5:19pm by JAH
identifying vowel and consonant patterns. v/cv vc/v
Thursday, November 8, 2007 at 9:04pm by Crystal
Help me, I don't understand the V/CV and VC/V patterns. Please and thanks...
Monday, January 11, 2010 at 5:38pm by Brooklyn
I can't understand what the v/cv and vc/v patterns mean. Can you give me an example?
Tuesday, November 27, 2012 at 9:18pm by alba
The slanted lines show a division between syllables. http://www.studytemple.com/forum/languages-cards/158816-ms-harding-penguin-chick-syllable-patterns-v-cv-vc-v.html
Tuesday, January 24, 2012 at 8:49pm by Ms. Sue
Sort each spelling word by determining if it follows the v/vc or the vc/v pattern tyrant, equal, humor, recent, profile, linen, closet, student, smokey, legal, comet, shiver, minus, loser, punish,
cavern, local, decent, vacant, panic.
Sunday, February 3, 2013 at 11:36am by Jayden
You did not give me the speed of sound in air and in concrete call air va and concrete vc d = va t d = vc (t-1.7) so va t = vc t - 1.7 vc t (vc-va) = 1.7 vc solve for t then d = va t
Sunday, March 11, 2012 at 6:34pm by Damon
a. Vb + Vc = 6. Vb - Vc = 6/2-1 = 2. Add the two Eqs: Eq1: Vb + Vc = 6. Eq2: Vb - Vc = 2. 2Vb= 8. Vb = 4 Mi/h = Velocity of Bob's boat. Joe: Vb = 4 + 1 = 5 Mi/h. b. In Eq1, substitute 4 for vb: 4 +
Vc = 6. Vc = 6 - 4 = 2 Mi/h = Velocity of the current.
Sunday, April 15, 2012 at 3:14am by Henry
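The elimination step Henry uses here (add the downstream and upstream equations, then back-substitute) generalizes to any boat-and-current pair. A small sketch of the pattern (my own, not part of the original answers):

```python
def boat_and_current(downstream, upstream):
    """Solve Vb + Vc = downstream and Vb - Vc = upstream by adding the equations."""
    Vb = (downstream + upstream) / 2  # boat speed in still water
    Vc = (downstream - upstream) / 2  # current speed
    return Vb, Vc

print(boat_and_current(6, 2))   # (4.0, 2.0): Bob's boat 4 mi/h, current 2 mi/h
print(boat_and_current(22, 8))  # (15.0, 7.0): matches the later answer in this thread
```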
Vb = (Dc/Dw)*Vc = (100/1000)*Vc = 3 m^3 0.1Vc = 3 Vc = 30 m^3. = Vol. of cylinder.
Wednesday, March 19, 2014 at 11:17am by Henry
Answers for spelling words whether if it's V/CV or VC/V?
Thursday, January 23, 2014 at 11:36pm by Art
The slash (/) shows where to divide a word into syllables. v/cv mu / sic vc/v clos / et (Broken Link Removed)
Tuesday, January 5, 2010 at 8:08pm by Ms. Sue
List all the subsets of S. You may use "C" for chocolate, "V" for vanilla, and "M" for mint. 1. {} 2. C 3. V 4. M 5. CV 6. CM 7. VC 8. VM 9. MC 10.MV 11. CVM Is this correct.
Thursday, April 15, 2010 at 6:38pm by bb
V/CV pattern?
Monday, November 7, 2011 at 6:09pm by JASON
(Vb+Vc)1.5 = 10.5 Eq1: Vb + Vc = 7 (Vb-Vc)3.5 = 10.5 Eq2: Vb - Vc = 3 Add Eq1 and Eq2: Vb + Vc = 7 Vb - Vc = 3 2Vb = 10 Vb = 5 mi/h 5 + Vc = 7 Vc = 2 mi/h.
Wednesday, April 10, 2013 at 3:48pm by Henry
let Vc be the current speed and Vb be the boat speed in still water. Here is what you know: Vb + Vc = 10 (downstream speed) Vb - Vc = 6 (upstream speed) Adding the two equations gives you 2 Vb = 16
Solve for Vb and then Vc
Wednesday, January 21, 2009 at 10:58pm by drwls
The order of elements in a set does not matter, so VM and MV, MC and CM, and CV and VC are the same set. So list those only once to get 8 subsets.
Thursday, April 15, 2010 at 6:38pm by Reiny
The momentum of the bug before impact would have to be 1/44 of the car's momentum, and in the opposite direction. Mc*Vc - Mb*Vb = (Mb+Mc)*(43/44)Vc Vb = (Mc/Mb)*Vc - {(Mb+Mc)/Mb](43/44)Vc} = 1.114*10
^5*Vc - 1.089*10^5*Vc = 0.025*10^5 Vc = 1.1*10^5 mph Vc = car's initial speed ...
Saturday, October 16, 2010 at 6:08pm by drwls
Vb + Vc = 22 Vb - Vc = 8 Add the Eqs: 2Vb = 30 Vb = 15 mph. Replace Vb in Eq1 with 15: 15 + Vc = 22 Vc = 7 mph = Velocity of the current.
Wednesday, June 19, 2013 at 8:49pm by Henry
Eq1: Vc - Vw = 3 Eq2: Vc + Vw = 8 Add the Eqs: 2Vc = 11 Vc = 5.5 mi/h = Velocity of canoe in stil water. In Eq2, substitute 5.5 for Vc: 5.5 + Vw = 8 Vw = 8-5.5 = 2.5 mi/h. = Velocity of the water.
Saturday, May 26, 2012 at 11:23pm by Henry
vA = 3 x vB vC = 2.2 x vB vC : vA = (2.2vB):(3vB) = 11:15 vC is 0.733 times as potent as vA
Monday, March 10, 2014 at 10:43pm by herp_derp
Va+Vb+Vc=54 The voltage of the first battery is twice the voltage of the second and 1/3 the voltage of the third battery. This statement is not definitive. does it mean Va=2Vb AND Va=1/3 Vc or does
it mean Va=2Va+1/3 Vc OR does it mean Va=2(Va+1/3 Vc). The problem cannot be ...
Friday, July 6, 2012 at 12:21pm by bobpursley
Call the final velocities V8 and Vc Call the initial velcoity of the 8 ball Vo. Momentum conservation tells you this: M*Vc cos 33 + M*V8 cos 22 = M*Vo M*Vc sin 33 = M*V8 sin 22 M is the mass of each
ball, and can be cancelled out You are left with two equations in two unknowns...
Tuesday, January 25, 2011 at 5:39pm by drwls
Let V=vanilla, C=chocolate One scoop (2): V, C Two scoops (4): VC, VV, CV, CC Three scoops (8): VVV,VVC,VCV,VCC,CVV,CVC,CCV,CCC Four scoops (16):... etc. Get the idea?
Tuesday, September 7, 2010 at 8:31pm by MathMate
d1 = d2, (24-Vc)1.0333 = (24+Vc)0.5666, 24.8 - 1.033Vc = 13.6 + 0.5666Vc, -1.0333Vc -0.5666Vc = 13.6 -24.8, -1.6Vc = -11.2, Vc = 7mi/h. d1 = Distance upstream. d2 = Distance downstream. Vc = Velocity
of the current.
Tuesday, September 20, 2011 at 4:06pm by Henry
Vb + Vc = 12. Vb - Vc = 9. Add the Eqs: 2Vb = 21. Vb = 10.5 MPH. Vc = 1.5 mph.
Sunday, March 25, 2012 at 2:19pm by Henry
Help, I can't find the pattern rule for this sequence(eg. -6,-3,2,9,18 and the pattern rule is n squared-7) The pattern: 6,28,64,114,178. What's the pattern rule? The pattern: 96,84,64,36. What's the
pattern rule? The pattern: 0,24,216,960. What's the pattern rule?
Wednesday, November 2, 2011 at 6:32pm by Abigail
Fr = ((Vs+Vr)/(Vs-Vc))*Fc = 392 Hz. ((770+0)/(770-Vc))*349 = 392 ((770)/(770-Vc))*349 = 392 268,730/(770-Vc) = 392 301,840 - 392Vc = 268,730 -392Vc = -33,110 Vc = 84.5 mi/h = Velocity of the car. Vs
= Speed of sound in air. Vr = Velocity of the receiver(hearer) of the sound.
Monday, February 17, 2014 at 7:31pm by Henry
math algebra
(15+Vc)t = 20 Eq1: 15t + Vc*t = 20 (15-Vc)t = 10 Eq2: 15t - Vct = 10 Add Eq1 and Eq2: 15t + Vct = 20 15t _ Vct = 10 30t = 30 t = 1 h. 15*1 + Vc*1 = 20 Vc = 20-15 = 5 mi/h. = Velocity of current.
Tuesday, April 23, 2013 at 7:57pm by Henry
From Vc = sqrt(µ/r) where Vc = the velocity of an orbiting body, µ = the gravitational constant of the earth and r the radius of the circular orbit,with µ = GM, G = the universal gravitational
constant and M = the mass of the central body, the earth in this instant, r = µ/Vc^2...
Thursday, December 1, 2011 at 10:19pm by tchrwill
Which pattern is most effective to deliver a persuasive message when the audience may resist doing as you ask and you expect emotion to be more important than logic in the decision? The
problem-solving pattern The sales pattern The direct request pattern The threat pattern
Wednesday, August 31, 2011 at 8:05pm by Anonymous
Vc = Q/C Vr = i R V = Vc+Vr if in series here i = I cos wt Q = integral idt = (I/w) sin wt Vc = (I/wC) sin wt Vr = IR cos wt so V = I [ (1/wC) sin wt + R cos wt ] but given V = 110 sin(wt+P) where P
is some phase angle Trig identity V = 110 [ sin wt cos P + cos wt sin P] so ...
Tuesday, March 6, 2012 at 8:54pm by Damon
Physics ideal gases
well, all you can really define is changes dU = n Cv dT but if you say U = 0 at T = 0, I suppose that is ok for a MONATOMIC ideal gas Cv = (3/2) R BUT for diatomic (O2, N2 etc) Cv = (5/2)R So you
equation is ok only for changes of temperature (the U depends only on T) for a ...
Friday, January 2, 2009 at 6:12pm by Damon
When orbiting at 6.3km/s: From Vc = sqrt(µ/r) where Vc = the velocity of an orbiting body, µ = the gravitational constant of the earth and r the radius of the circular orbit,with µ = GM, G = the
universal gravitational constant and M = the mass of the central body, the earth ...
Saturday, December 10, 2011 at 6:00pm by tchrwill
When orbiting at 5.8km/s: From Vc = sqrt(µ/r) where Vc = the velocity of an orbiting body, µ = the gravitational constant of the earth and r the radius of the circular orbit,with µ = GM, G = the
universal gravitational constant and M = the mass of the central body, the earth ...
Sunday, December 11, 2011 at 9:45am by tchrwill
Is this f'(x) or partial derivative?
If VC(y)=((y/240)^2)(w) then VC is not dependent on w, so if you differentiate as a product, the term containing dw/dy drops out (because dw/dy=0), giving you the correct answer. If w is a function
of y, namely w=w(y), then the term dw/dy should be kept: d(VC)/dy = 2yw/240^2...
Wednesday, October 14, 2009 at 3:42pm by MathMate
I = E/R = 12/0.05 = 240 Amps = Steady-state current. 0.95I = 0.95 * 240 = 228A. Vr + Vc = 12 Volts 228*0.05 + Vc = 12 Vc = 12-11.4 = 0.6 Volts = Voltage across coil when i = 0.95I steady-state a. Vc
= 12/e^(t/T) = 0.6 e^(t/T) = 12/0.6 = 20 t/T = Ln20 = 3.0 T = L/R = 0.09/0.05...
Tuesday, April 9, 2013 at 2:24pm by Henry
Statistics(Please help)
t-test for independent means: n=8 t=-1.33 df= 8 C.V. = plus or minus 2.306 My question is how do you get 2.306 for the cv??? Why wouldnt the cv be 2.31?
Tuesday, August 2, 2011 at 3:40pm by Hannah
the reaction between CV+ and -OH, is CV+ lewis acid and OH lewis base
Tuesday, May 11, 2010 at 1:17am by bme1
the reaction between CV+ and -OH, is CV+ lewis acid and OH lewis base
Wednesday, May 12, 2010 at 12:24am by bme1
Heat Capacity at constant volume: CV = (partial q / partial T) with V constant since volume is constant, work is zero and dq = dU, so CV = (partial U / partial T) with V constant Delta U = CV * Delta
T Does this final relationship hold during a reaction where volume is NOT ...
Wednesday, March 4, 2009 at 10:25pm by jake
Chemistry (WebWorks)
i. HOMO of CV+ from Table 1 ii. LUMO of CV+ from Table 1 iii. HOMO of -OH from Table 1 iv. LUMO of -OH from Table 1 v. HOMO of CVOH from Table 1 vi. LUMO of CVOH from Table 1 vii. in phase overlap of
orbitals from -OH and the center carbon of CV+ viii. out of phase overlap of ...
Monday, July 19, 2010 at 3:17pm by Anonymous
Which two orbitals from CV+ and –OH will overlap creating the C-O bond in CVOH? Which of the following accurately shows the orbital interaction that leads to bond formation between CV+ and –OH?
Sunday, November 7, 2010 at 11:30pm by jake
final temp t Cv = specific heat capacity of fluid heat in to lower = 478 * Cv * (t-34) heat out of upper = 618 *Cv*(73-t) heat into cold = heat out of hot 478 (t-34) = 618 (73-t) solve for t
Sunday, April 11, 2010 at 4:35pm by Damon
Dots are arranged to form pattern as show below: pattern 1 they are 2 pattern 2 they are 5 pattern 3 (a) how many dots are in the 4th, 5th, 11th, 200th patterns?
Friday, February 10, 2012 at 4:50am by Vuyo
yes u answered it yesterday bt i got Vc-Va and Vc-Vb wrong so i wanted to clear my doubt tht Vb-Va should be zero?
Wednesday, April 15, 2009 at 3:54am by sweety
The absorbance values At & A0 recorded in the experiment 'Rate Law Determination of the Crystal Violet Reaction' are used in the place of the concentration values of which species? A. –OH B. CVOH C.
CV+ D. Na+ E. H2O I'm pretty sure this is CV+ right?
Friday, May 16, 2008 at 8:33pm by Amy
conservation of momentum initial mometum= final momentum (80+30)Vc=30*.8-80*V but Vc=0, the canoe was not moving initially. Solve for V
Thursday, January 19, 2012 at 3:34pm by bobpursley
A. 0.5Mc*Vc^2 = 0.5Mb*Vb^2. 0.5*0.045*Vc^2 = 0.5*0.142*Vb^2 0.0225(Vc)^2 = 0.071(Vb)^2 Vc^2 = 3.156Vb^2. Vc = 1.77Vb Mc*Vc/Mb*Vb Replace Vc with 1.77Vb: Mc*1.77Vb/Mb*Vb 0.045*1.77Vb/(0.142*Vb
0.07965Vb/0.142Vb = 0.561 = Momentum of Cardinal/Momentum of baseball. B. m1*g = 650 ...
Wednesday, November 28, 2012 at 7:45am by Henry
Statistics(Please help)
If by C.V. you mean Coefficient of Variation, then CV shows the variation as a percentage of the mean and can be calculated as follows: CV% = (SD/mean)100
Tuesday, August 2, 2011 at 3:40pm by MathGuru
The potential energy difference EB - EA equals the charge (-e) times the voltage difference. Thus 3.66*10^-19 J = (-1.60*10^-19 C)(VB - VA) VB - VA = 2.3 Volts B and C are along what is called an
equipotential line, not a field line. Field lines are everywhere perpendicular to...
Tuesday, April 14, 2009 at 7:49am by drwls
Given that the instantaneous voltage across a capacitor is VC = 40(1 - e^(-t/CR)) Volts when being charged by a 40 volt DC supply, find the time, t, for VC to reach 18 volts after the charging process has
started, if C = 100 µF and R = 20 kΩ
Friday, March 22, 2013 at 9:13am by Rachael
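A quick numeric check of this one (my own sketch, not part of the original answers): solving 18 = 40(1 - e^(-t/RC)) for t gives t = RC ln(40/22), about 1.2 s with the values given.

```python
import math

# Capacitor charging: Vc(t) = 40 * (1 - exp(-t / (R*C)))
# Find t when Vc = 18 V, with C = 100 uF and R = 20 kOhm.
R = 20e3          # ohms
C = 100e-6        # farads
V_supply = 40.0   # volts
Vc = 18.0         # volts

tau = R * C                              # time constant: 2 s
t = -tau * math.log(1 - Vc / V_supply)   # invert the charging equation

print(f"t = {t:.3f} s")  # ~1.196 s
```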
tan A = 50m/79m = 0.63291 A = 32.3o East of North = 57.7o CCW. b. Vs = 1.9m/sin57.7 = 2.25 m.[57.7o] = Velocity of swimmer. Vc = 2.25m/s[57.7o] - 1.9[90o] X = 2.25*cos57.7 - 1.9*cos90=1.20 m/s. Y=
2.25*sin57.7 - 1.9*sin90 = 0 m/s. b. Vc = 1.20 + i0 = 1.20 m/s =Velocity of the ...
Thursday, August 1, 2013 at 7:20pm by Henry
Thursday, February 2, 2012 at 7:48pm by cv
What is a definition of decreasing pattern, increasing pattern and repeating pattern?
Monday, March 1, 2010 at 8:38pm by Strawberryhat
Vc = sqrt(µ/r) where µ = 3.9863x10^14 and r = 6378km. At 38,268km, Vc = sqrt[3.9863x10^14/6x6,378,000] = The period derives from T = 2(Pi)sqrt[r^3/µ]
Thursday, March 10, 2011 at 6:48pm by tchrwill
Since 8000 - 1630 = 6370, the radius of the earth in km, I believe I am on safe ground assuming that you mean "740km" above the earth in your first question. The velocity required to maintain a
circular orbit derives from Vc = sqrt(µ/r) where Vc = the velocity in m/s, µ = the ...
Tuesday, November 8, 2011 at 7:12pm by tchrwill
What pattern do you see? Once you figure out the pattern, then apply it, and find the 8th number. You have 4 to start with. Let us know what you think.
Thursday, April 9, 2009 at 6:26pm by Writeacher
The sphere with a diameter of r (=(d/2) has a volume of Vs=4πr³/3. The volume of a cone with a radius of r and height h is Vc=(1/3)πr²h. If Vs>Vc, the ice cream will not fit in the cone.
Monday, March 15, 2010 at 8:37pm by MathMate
From a to b Wg= Fx cos 90 = 0 Wn= Fx cos 90 = 0 Wnet=Efk - Eik. Which is = to 1/2mVf^2 -1/2 mVi^2. Fxcos(0)= Efk. Cos of 0 =1. Vi =0 so use Efk Fx=1/2mVf^2 2Fx/m = Vf^2 Vf^2= square root of2Fx/m Vf=
9.77m/s X = distance, m= mass, F= force, V= velocity Efk = final kinetic ...
Sunday, March 23, 2014 at 7:39pm by Martha
Physics help please!
a. Yes, the voltage across the capacitor and inductor are greater than the supply voltage at resonance when the Q of the circuit is greater than 1. Vl = Vc = Q*E. E is the applied voltage. Vl and Vc
are 180 deg. out of phase. b. Yes, Kirchoff's voltage law does apply. True ...
Saturday, March 3, 2012 at 9:26am by Henry
Physics II
When an electron moves from A to B along an electric field line, the electric field does 3.94 x 10^-19 J of work on it. What are the electric potential differences (a) Vb - Va, (b) Vc - Va, and (c)
Vc - Vb?
Sunday, February 13, 2011 at 6:02pm by Scott
Vc = 81000m/3600s = 22.5 m/s. = Velocity of car. F = ((V+Vr)/(V+Vc)))*Fh = 958Hz. ((345+22.5)/(343-22.5))*Fh = 958 367.5Fh/322.5 = 958 1.140Fh = 958 Fh = 841 Hz. = Freq. of horn.
Monday, July 22, 2013 at 8:31pm by Henry
Online MBR Info
When liquid permeates through the membrane, particles (or solutes) are rejected by the membrane and form a layer with high particle/solute concentration near the membrane surface. Due to the high particle
concentration in the layer, particles tend to diffuse back to the bulk as long as they are not fixed in the gel/cake layer. The concentration profile settles at the equilibrium between convective
particle transport to the membrane surface and diffusive particle back transport to the bulk solution, as illustrated in Fig. 1. This is the so-called concentration polarization phenomenon, which fundamentally
limits membrane performance (Porter, 1972).
The flux, J, can be calculated by balancing the convective particle transport toward the membrane against the diffusive particle back transport, as shown in equation (1).
J · C = -D[eff] · (dC/dx)                 (1)
J = water flux at steady state (m/s)
C = particle concentration (mg/L)
x = distance from membrane surface (m)
D[eff] = effective diffusion coefficient (m^2/s)
The minus sign in the above equation reflects the negative concentration gradient along the x-axis. The effective diffusion coefficient, D[eff], conceptually includes the effects of
thermodynamic diffusion, shear-induced diffusion, and all other hydrodynamic forces that move particles away from the membrane surface. The particle back-transport velocity is discussed in detail here.
Equation (1) can be integrated using the boundary conditions at steady state, i.e. (x = 0, C = C[G]) and (x = d, C = C[B]), where d is the boundary layer thickness (m) and C[G] and C[B] are the particle
concentrations in the gel layer and in the bulk, respectively.
J[SS] = (D[eff]/d) · ln(C[G]/C[B])                 (2)
According to this equation, the steady-state flux, J[SS], increases when the boundary layer thickness, d, decreases and the effective diffusion coefficient, D[eff], increases. A thinner boundary layer and
higher D[eff] can generally be achieved in a given condition by increasing the cross-flow velocity at the membrane surface (Bian, 2000).
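Equation (2), J[SS] = (D[eff]/d) · ln(C[G]/C[B]), is easy to evaluate numerically. The sketch below plugs in illustrative values (the numbers are hypothetical, chosen only to show the trend that a thinner boundary layer raises the steady-state flux):

```python
import math

def steady_state_flux(D_eff, d, C_G, C_B):
    """Film-model steady-state flux: J_SS = (D_eff / d) * ln(C_G / C_B)."""
    return (D_eff / d) * math.log(C_G / C_B)

# Hypothetical values, for illustration only
D_eff = 1e-9   # effective diffusion coefficient (m^2/s)
C_G = 50.0     # particle concentration in the gel layer (g/L)
C_B = 5.0      # particle concentration in the bulk (g/L)

for d in (100e-6, 50e-6):   # boundary layer thickness (m)
    J = steady_state_flux(D_eff, d, C_G, C_B)
    print(f"d = {d*1e6:.0f} um -> J_SS = {J:.2e} m/s")

# Halving d doubles J_SS, as equation (2) predicts.
```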
Fig. 1. Concentration polarization model
© 2011 by S. Yoon. All rights reserved.
GATE Exam Questions
16. The Prandtl mixing length for turbulent flow through pipes is
A. independent of shear stress
B. a universal constant
C. zero at the pipe wall
D. independent of radial distance from pipe axis
17. A 40° slope is excavated to a depth of 10 m in a deep layer of saturated clay of unit weight 20 kN/m^3; the relevant shear strength parameters are c[u] = 7 kN/m^2 and
φ[u] = 0. The rock ledge is at a great depth. The Taylor's stability coefficient for φ[u] = 0 and 40° slope angle is 0.18. The factor of safety of the slope is :
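For question 17, assuming the usual definition of Taylor's stability number, N[s] = c[u]/(F·γ·H), the factor of safety falls straight out (a sketch, not an official answer key):

```python
# Taylor's stability number: Ns = cu / (F * gamma * H)  =>  F = cu / (Ns * gamma * H)
cu = 7.0      # undrained cohesion (kN/m^2)
gamma = 20.0  # unit weight (kN/m^3)
H = 10.0      # excavation depth (m)
Ns = 0.18     # Taylor's stability coefficient for phi_u = 0, 40 deg slope

F = cu / (Ns * gamma * H)
print(f"F = {F:.3f}")  # ~0.194
```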
18. Lysimeter and Tensiometer are used to measure, respectively, one of the following groups of quantities:
A. Capillary potential and permeability
B. Evapotranspiration and capillary potential
C. Velocity in channels and vapour pressure
D. Velocity in pipes and pressure head
19. The ruling minimum radius of horizontal curve of a national highway in plain terrain for a ruling design speed of 100 km/hour with e = 0.07 and f = 0.15 is close to
A. 250 m
B. 360 m
C. 36 m
D. 300 m
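For question 19, the standard minimum-radius relation R = V^2 / (127(e + f)), with V in km/h and R in metres, points to option B (my own check, not part of the original key):

```python
V = 100.0   # ruling design speed (km/h)
e = 0.07    # superelevation rate
f = 0.15    # lateral friction factor

R = V**2 / (127 * (e + f))   # ruling minimum radius in metres
print(f"R = {R:.1f} m")      # ~357.9 m, i.e. close to 360 m
```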
20. Principle involved in the relationship between submerged unit weight and saturated weight of a soil is based on
A. Equilibrium of floating bodies
B. Archimedes' principle
C. Stokes' law
D. Darcy's law
How many cu. ft. ?
05-23-2008 #1
How many cu. ft. ?
I have 49"W x 23"D x 21"H of space I want to use in my Tahoe. How many cu. ft. does that equal out to? I haven't decided on subs yet.
Thanks for any help.
Re: How many cu. ft. ?
I just used the RE calculator and it came to 9.5 cu. ft.
Is the RE calculator precise?
Re: How many cu. ft. ?
I got 11.52 cu. ft. assuming it's 3/4" MDF.
Refs. Sold to: Superduper,ncpalafox,lmsfootball35,Schicksal,Nuked sodapop,G0NGSHOW,TacoSauce,scooter924,eclipsepb25, RetroAudioinc,smd4life,Stateprop486,Got_Bass89,
Anniku989,Ryan1126,altoncustomtech-- Traded with: SteveyHustle--Bought from: MOJOBASS, Xtremebass5,Path2spl,Violentaudio,5.3bowtie, mike75356,matrxx dude,
Kovemaster559,acex3a,Sundownz,Grinder1989, revrider1,andrew2944r,hornedfrog1985,tez4life .
Re: How many cu. ft. ?
To find volume, multiply L x W x H then divide by 1728; that gives you gross volume.
Don't forget to account for the thickness of the wood you're using,
and also sub and, if applicable, port displacement.
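For what it's worth, the numbers in this thread are easy to check. Assuming a single layer of 3/4" MDF all around (so 1.5" comes off each dimension), the math reproduces the 11.52 cu. ft. figure quoted above (my own sketch):

```python
# External dimensions of the available space, in inches
W, D, H = 49.0, 23.0, 21.0
wall = 0.75  # 3/4" MDF

gross = W * D * H / 1728                                    # 1728 cubic inches per cubic foot
net = (W - 2*wall) * (D - 2*wall) * (H - 2*wall) / 1728     # volume inside the walls

# Sub and port displacement would reduce the net figure further.
print(f"gross = {gross:.2f} cu ft, net = {net:.2f} cu ft")  # gross ~13.70, net ~11.52
```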
FEEDBACK IN LINKS BELOW TOO BIG FOR SIG
sold: goose, krisfnbz, dubb designs, 2000gator(x2), oxsign, aesoj, viveet(X2),
mike437,flossinon24s,mxracer591,riffyb, bsean63(x2),nycking23,bryce-man,fastimes(x2),
boltpride, r-sosa,fusion_freak,siacarman,kcdc30
prodgod, toasted1,newageking,nmbr1spt,projectnutz(P),DeViOu SoNe,
traded with: davet, teddyunited, rockmo, spikee, rdaudioman1256, SPY
Re: How many cu. ft. ?
Just plug the numbers in and it wil tell you just about everything you need to know.
Re: How many cu. ft. ?
Thats 10.27 gross using 3/4" mdf. port, subs and bracing will make it smaller
Refs: ToxicTuan, LivinLife, BigWill, Bushtree,Chaarlieee,80INCHES, SupaC,Rtillaree,Dalucifer, Dumple91, CRXBMPN, Shallowfu, Calikid, 90accordman, Zeuslicious, IonSQL, Quino,
themommyvan, MSW Danny, Theelfkeeper, Fusion FREAK, Bigtymer2306, keviNxL, WooferCooker, Team H & K, Luv2Shroom
Re: How many cu. ft. ?
Hey, thanks guys for the help.
Re: How many cu. ft. ?
Just one more thing, would the size i gave be optimum space for 2 15" L7s ported?
Re: How many cu. ft. ?
I think 5 Cu. Ft each would be good..
@MIF , Activity:Drawing squares
What's the formula for the number of those squares???
Last edited by anonimnystefy (2012-02-12 02:22:18)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: @MIF , Activity:Drawing squares
Hi anonimnystefy;
Could you tell me the link please?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: @MIF , Activity:Drawing squares
it's on MIF page.
Activity:Drawing Squares
Last edited by anonimnystefy (2012-02-12 05:40:27)
Re: @MIF , Activity:Drawing squares
Did you construct a table of the small values by hand?
Re: @MIF , Activity:Drawing squares
nope.imma gonna.
Re: @MIF , Activity:Drawing squares
i have found that taking the 4th differences of the numbers gives 2,2,2,2,...
so we have: a_n=An^4+Bn^3+Cn^2+Dn+E. we can easily find those, but what has actually been bothering me is the same task but only on an m x n grid.
Re: @MIF , Activity:Drawing squares
Whoa! How about letting me see your sequence first? The one you got empirically.
Re: @MIF , Activity:Drawing squares
n |a_n
1 | 0
2 | 1
3 | 6
4 | 20
5 | 50
6 | 105
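The table above matches the count of all squares on an n x n grid of dots when tilted squares are included, which has the closed form n^2(n^2 - 1)/12. A quick brute-force check (my own sketch, not from the thread):

```python
def count_squares(n):
    """Count all squares (axis-aligned and tilted) with vertices on an n x n grid of dots."""
    pts = {(x, y) for x in range(n) for y in range(n)}
    found = 0
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            if (x1, y1) == (x2, y2):
                continue
            dx, dy = x2 - x1, y2 - y1
            # Rotate the directed edge 90 degrees to locate the other two vertices
            if (x2 - dy, y2 + dx) in pts and (x1 - dy, y1 + dx) in pts:
                found += 1
    return found // 4  # each square is generated by 4 directed edges

for n in range(1, 7):
    print(n, count_squares(n), n * n * (n * n - 1) // 12)  # brute force vs closed form
```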
Re: @MIF , Activity:Drawing squares
Now you actually did those by hand? Drawing tiny little squares until your fingers bled.
Re: @MIF , Activity:Drawing squares
didn't do it for 6, but after finding the 3rd differences i saw that they were all squares. but i can easily check for 6.
Re: @MIF , Activity:Drawing squares
First things first. I notice your sequence is shifted. You can adjust that.
Re: @MIF , Activity:Drawing squares
i used 1x1 grid to be just a dot.seemed more logical.
2x2 was this:
. .
. .
Re: @MIF , Activity:Drawing squares
Hmmmm, but the problem as outlined uses the convention of counting interior squares and not lattice points. Since we are counting squares do you not now think that it is more consistent to set the
problem up in terms of squares?
Re: @MIF , Activity:Drawing squares
ok, then just shift the whole sequence by one to the left.
Last edited by anonimnystefy (2012-02-12 08:12:47)
Re: @MIF , Activity:Drawing squares
Kayrect! Now what would you do to go about getting a formula, remember to shift your list first.
Re: @MIF , Activity:Drawing squares
Re: @MIF , Activity:Drawing squares
so we have: a_n=An^4+Bn^3+Cn^2+Dn+E.we can easily find those
From here.
Re: @MIF , Activity:Drawing squares
but it's better to expand that.get a better form for working with it.
Re: @MIF , Activity:Drawing squares
Maybe, but that form makes you look like a combinatorial genius. People will be in awe of you.
Re: @MIF , Activity:Drawing squares
maybe,but i know that you're a combinatorial genius,so you don't have to prove it.
Re: @MIF , Activity:Drawing squares
When I was 4 I was rated with the IQ of a genius. But now that I am 9, I am rated as an imbecile.
Re: @MIF , Activity:Drawing squares
i'm happy for you.
Re: @MIF , Activity:Drawing squares
Thanks, I am happy for me too. Now people come by and pat me on the head and tell me there will someday be a cure.
Re: @MIF , Activity:Drawing squares
what about an m x n grid, i.e. a rectangular grid?
Last edited by anonimnystefy (2012-02-13 03:11:14)
Re: @MIF , Activity:Drawing squares
For that I have:
I do not know if it is correct. Perhaps you would like to test it?
Break time! Pass me the pancakes!
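bobbym's formula itself didn't survive the copy above, but a standard closed form for an m x n cell grid (counting each tilted square by the axis-aligned s x s square it inscribes in) is the sum of s(m - s + 1)(n - s + 1) for s = 1..min(m, n). A brute-force check of that claim (my sketch, not necessarily bobbym's formula):

```python
def brute(m, n):
    """All squares (tilted included) with vertices on an (m+1) x (n+1) grid of dots."""
    pts = {(x, y) for x in range(m + 1) for y in range(n + 1)}
    found = 0
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            if (x1, y1) != (x2, y2):
                dx, dy = x2 - x1, y2 - y1
                if (x2 - dy, y2 + dx) in pts and (x1 - dy, y1 + dx) in pts:
                    found += 1
    return found // 4  # each square is generated by 4 directed edges

def closed_form(m, n):
    # Each tilted square inscribes in an s x s bounding square, s ways per position
    return sum(s * (m - s + 1) * (n - s + 1) for s in range(1, min(m, n) + 1))

assert all(brute(m, n) == closed_form(m, n) for m in range(1, 6) for n in range(1, 6))
print("closed form verified up to 5 x 5")
```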
Dimensional Analysis
1 Introduction
Dimensional analysis is a widely known, highly useful technique. Dimensional analysis can be considered a simplified type of scaling argument, as discussed in reference 1.
By way of background, note that dimensions are not the same as units. This becomes particularly clear when you consider dimensionless units, as discussed in reference 2. In this document, we are
talking about dimensions, not units.
* Contents
1 Introduction
2 Elementary Dimensional Analysis
3 Connection to Scaling
4 Advanced Dimensional Analysis … or Synthesis
2 Elementary Dimensional Analysis
Let’s start with the following simple scenario: Simplicio places a one-foot-long ruler into a one-gallon jug and observes that it just fits. He concludes:

    1 foot = 1 gallon              (1)
You can instantly detect that equation 1 is dimensionally unsound. A gallon is a measure of volume, so the RHS of this equation has dimensions of length cubed, whereas the LHS has dimensions of
plain old length.
The rule is that any valid equation must have the same dimensions on both sides. This rule is very powerful and very easy to apply. You should routinely use it to check your equations.
Some notation (involving square brackets) to help you perform such checks will be presented in section 4.
Elementary dimensional analysis is binary: It tells us, yes or no, whether a given equation is dimensionally consistent.
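A binary check of this kind is easy to mechanize. The sketch below (an illustration, not part of the original text) represents the dimensions of a quantity as a tuple of exponents over the base dimensions mass, length, and time:

```python
# Represent the dimensions of a quantity as a tuple of exponents over the
# base dimensions (mass, length, time).  An equation is dimensionally
# sound iff both sides carry identical exponent tuples.
LENGTH = (0, 1, 0)   # [l]
VOLUME = (0, 3, 0)   # [l]^3

def same_dimensions(lhs, rhs):
    """Return True iff the two sides have the same dimensions."""
    return lhs == rhs

# Simplicio's equation equates a length (feet) with a volume (gallons):
print(same_dimensions(LENGTH, VOLUME))  # False -- dimensionally unsound
```

The check is purely structural: it says nothing about the numerical values, only whether the two sides could possibly be equated.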
3 Connection to Scaling
Scaling arguments have been an important part of modern science since Day One (reference 3 or reference 4) and have remained important ever since.
The idea of scaling (as discussed in reference 1) is intimately connected with dimensional analysis. Specifically, we can quantify the problem by checking the scaling. In this case, try multiplying
both sides of equation 1 by a factor of 2. That gives:
2 feet = 2 gallons (2)
The new equation says Simplicio should be able to just fit a two-foot ruler into a two-gallon jug. However, if you try it, you will find that it doesn’t work. For a linear scale factor of K, length (feet) scales like K while volume (gallons) scales like K^3.
In contrast, whenever you have a dimensionally-sound equation, you will find that you can multiply both sides by some factor and get a new dimensionally-sound equation. For example, if we had gallons
on one side and cubic feet on the other, it would scale correctly.
Tangential remark: The reasoning here depends on the dimensions, not the units. Equation 1 would be dimensionally unsound even if you re-expressed it in other units.
Dimensional analysis is a subset of scaling. Sometimes you can make a scaling argument that goes significantly beyond anything you could do with dimensional analysis, e.g. the “mean free path”
scaling law as discussed in reference 1.
4 Advanced Dimensional Analysis … or Synthesis
We now turn to something more sophisticated. It should be called dimensional synthesis rather than dimensional analysis, because rather than analyzing a given equation it can be used to synthesize
new equations.
As before, the key idea is that we want an equation where both sides have consistent dimensions. The new trick is that we throw in some adjustable parameters, and adjust things until we achieve
consistent dimensions.
4.1 Example: Pendulum
Let’s do an example. Consider a simple pendulum of length l and mass m subject to a gravitational field of magnitude g. We want to know τ, the period of oscillation. We won’t be able to find the
period exactly, but we can find a scaling law, as follows:
τ ∝ m^a l^b g^c (3)
The dimensions in this equation are:
[t] = [m]^a [l]^b [l]^c [t]^−2c (4)
where a, b, and c are (for the moment) unknowns, to be determined by dimensional analysis. We have used the fact that g has dimensions of acceleration, i.e. dimensions of length per time squared.
The square brackets in such equations are important. Symbols of the form [⋯] should be read as “dimensions of ⋯”. In this case, [t] denotes dimensions of time, [m] denotes dimensions of mass, and [l]
denotes dimensions of length. In particular, [m] means m as in mass; it does not mean m as in meters of length.
Comparing dimensions on the two sides of equation 4, we get a system of three linear equations in three unknowns, namely
0 = a [m]
0 = b+c [l] (5)
1 = −2c [t]
In general, you can solve such systems using Gaussian elimination, as discussed in reference 5 … In simple cases like this, you can solve this system in your head. We find that a=0, b=1/2, and c=−1/2. Plugging in, that tells us that
τ ∝ √(l/g) (6)
We see that the period scales like mass to the zeroth power. It is worth checking that this makes sense, physically. Indeed it does make sense, for the following reason: the small-amplitude pendulum
can be considered a harmonic oscillator where the restoring force depends on gravitational force. More mass means more restoring force (which tends to speed up the oscillations) but more mass also
means more inertia (which tends to slow down the oscillations).
We also see that the period scales like the square root of the length, a famous result that dates back to 1638 (reference 3).
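The system in equation 5 can also be solved mechanically. The following sketch (an illustrative implementation, not part of the original text) uses Gaussian elimination with exact rational arithmetic from Python's standard `fractions` module, with the balance equations written in the unknowns (a, b, c):

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial
    pivoting, using exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

# Pendulum, equation 5: rows are the [m], [l], [t] balance equations
# for tau ~ m^a l^b g^c.
A = [[1, 0, 0],   # [m]:  a          = 0
     [0, 1, 1],   # [l]:  b + c      = 0
     [0, 0, -2]]  # [t]:       -2c   = 1
b = [0, 0, 1]
a_, b_, c_ = solve3(A, b)
print(a_, b_, c_)  # 0 1/2 -1/2
```

Exact rationals avoid any floating-point ambiguity about exponents like 1/2, which is convenient since dimensional exponents are typically small rationals.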
4.2 Example: Long Wavelength, Deep Water (“Gravity Waves”)
Let’s consider the following scenario: We have a wave traveling across a large body of water such as the ocean. The wave has a well-defined wavelength, i.e. the wave is monochromatic. The wavelength
is reasonably long (20 cm or more), but the wavelength is short compared to the depth of the water. We want to know the speed of propagation of the wave.
Intuition says that the only relevant physical parameters are the wavelength λ, the speed v[g], the density ρ, and the gravitational field strength g.
The analysis proceeds pretty much in parallel to the analysis in section 4.1. We start with the Ansatz
v[g] ∝ ρ^a λ^b g^c (7)
[l] [t]^−1 = [m]^a [l]^−3a [l]^b [l]^c [t]^−2c (8)
where a, b, and c are (for the moment) unknowns, to be determined by dimensional analysis.
From this we obtain a system of three linear equations in three unknowns:
0 = a [m]
1 = b+c−3a [l] (9)
−1 = −2c [t]
This is easy to solve. We obtain a=0 and b=c=1/2. Plugging in, we find the answer is:
v[g] ∝ √(g λ) (10)
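As a quick sanity check (an illustration, not from the text), substituting a=0 and b=c=1/2 back into the dimension exponent vectors over (mass, length, time) confirms the combination has the dimensions of velocity:

```python
# Dimension exponent vectors over (mass, length, time).
RHO = (1, -3, 0)   # density
LAM = (0, 1, 0)    # wavelength
G   = (0, 1, -2)   # gravitational field (acceleration)

# a=0, b=c=1/2: dims of rho^0 * lam^(1/2) * g^(1/2)
dims = tuple(0 * r + 0.5 * l + 0.5 * g for r, l, g in zip(RHO, LAM, G))
print(dims == (0.0, 1.0, -1.0))  # velocity: [l][t]^-1
```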
Remark: Really long waves are impressively fast; a tsunami can go 600 to 800 km per hour.
Also note that this is quite a departure from the usual introductory discussion of light waves and sound waves, where it is assumed that the speed is independent of the wavelength. Note that a wave
cannot maintain its shape if different Fourier components are traveling at different speeds, so the fundamental definition of “what is a wave” must be reconsidered.
We should check that it makes sense that the density drops out of the answer. Indeed this does make sense, for much the same reason that the mass dropped out in section 4.1. The restoring force
(which speeds up the wave) is proportional to the gravitational force, which depends on ρ, while the inertia (which slows down the wave) also depends on ρ in the same way.
4.3 Example: Short Wavelength, Deep Water (“Capillary Waves”)
Let’s consider a new scenario: We have a wave with a well-defined wavelength, i.e. a monochromatic wave. The wavelength is short in absolute terms (2 mm or less), and short compared to the depth of
the water. We want to know the speed of propagation of the wave.
Intuition says that the only relevant physical parameters are the wavelength λ, the speed v[c], the density ρ, and the surface tension s.
The analysis proceeds pretty much in parallel to the analysis in previous sections. We start with the Ansatz
v[c] ∝ ρ^a λ^b s^c (11)
[l] [t]^−1 = [m]^a [l]^−3a [l]^b [m]^c [t]^−2c (12)
where a, b, and c are (for the moment) unknowns, to be determined by dimensional analysis.
We have used the fact that the surface tension has units of force per unit length, and force has units of [m] [l] / [t]^2 (as you can infer from the equation F=ma or in innumerable other ways).
From this we obtain a system of three linear equations in three unknowns:
0 = a+c [m]
1 = b−3a [l] (13)
−1 = −2c [t]
This is easy to solve. We obtain c=1/2 and a=b=−1/2. Plugging in, we find the answer is:
v[c] ∝ √(s / (ρ λ)) (14)
You can check that the restoring force (s) makes the wave go faster, while the inertia (ρ) makes the wave go slower.
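The inverse-square-root dependence on wavelength can be checked numerically. In this sketch (an illustration, not from the text) the proportionality constant is taken to be 1, and the surface-tension and density values are merely representative of water:

```python
import math

# Speed of capillary waves from the scaling law v_c = sqrt(s / (rho * lam)),
# taking the proportionality constant to be 1 (a hypothetical normalization).
def v_capillary(s, rho, lam):
    return math.sqrt(s / (rho * lam))

s, rho = 0.072, 1000.0            # roughly water: N/m and kg/m^3
v1 = v_capillary(s, rho, 0.001)   # 1 mm wavelength
v4 = v_capillary(s, rho, 0.004)   # 4 mm wavelength
# Quadrupling the wavelength halves the speed (inverse square root):
print(v4 / v1)
```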
We see the speed of capillary waves scales inversely like the square root of wavelength, in contrast to section 4.2, where the speed of gravity waves scaled directly like the square root of wavelength.
Of course it is possible to derive an actual equation (not just a scaling relationship) for the speed of waves on water. See reference 6.
4.4 Example: Long Wavelength, Shallow Water
Let’s consider the following scenario: We have a wave traveling across a shallow body of water. The wave has a well-defined wavelength, i.e. the wave is monochromatic. The wavelength is long compared
to the depth (d) of the water, and long enough that the surface tension contribution is negligible.
By analogy to section 4.2, you should not be surprised to hear that the speed of propagation scales like the square root of the depth. In particular,
v ∝ √(g d) (15)
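The tsunami figure quoted in section 4.2 is consistent with this scaling law. Taking g ≈ 9.8 m/s² and an illustrative open-ocean depth of 4 km (a representative value, not given in the text):

```python
import math

g = 9.8          # gravitational field, m/s^2
d = 4000.0       # illustrative open-ocean depth, m (assumed, not from the text)
v = math.sqrt(g * d)   # shallow-water wave speed, m/s
v_kmh = v * 3.6
print(round(v_kmh))    # about 713 km/h, inside the quoted 600-800 range
```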
5 Limitations
5.1 False Negatives
1. The sort of elementary dimensional analysis discussed in section 2 is almost foolproof, but not quite. In unskilled hands, it will occasionally produce “false negative” results, i.e. it will
occasionally accuse a valid equation of being dimensionally unsound.
We can use equation 16 as an example. It expresses the valid and important point that molar entropy can be measured either in units of bits per particle, or in units of joules per kelvin per mole:
1 J/K/mol = 0.17 bit/particle (16)
1 bit/particle = 5.76 J/K/mol
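The conversion factors in equation 16 can be reproduced from the molar gas constant R and the nat-to-bit factor ln 2:

```python
import math

R = 8.314462618      # molar gas constant, J/(K mol)
LN2 = math.log(2)

# A molar entropy of 1 J/K/mol, per particle, in units of k_B (nats): 1/R.
# Converting nats to bits divides by ln 2.
bits_per_particle = 1.0 / (R * LN2)
print(round(bits_per_particle, 2))         # 0.17
print(round(1.0 / bits_per_particle, 2))   # 5.76
```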
Someone who looks at this equation without understanding it will get into trouble, because superficially it appears the LHS has different dimensions from the RHS. A more skillful practitioner
will realize that the energy per particle will necessarily scale like the temperature. The physics guarantees it. So even though the BIPM in Paris does not define the kelvin in terms of joules
per particle, the physics says the kelvin scales like joules per particle, and scaling is what matters, as discussed in reference 1.
Remember that dimensional analysis is not magic. It is not axiomatic. It is not some God-given 11th commandment. Instead, it is a logical corollary of the scaling laws. So if the units are
telling you one thing and the scaling is telling you another, trust the scaling.
The starting point and ending point for any discussion of equation 16 must recognize that it is physically sound. It scales correctly. Whether or not it is dimensionally sound doesn’t matter. If you want to make it dimensionally sound, you don’t change equation 16; you change your notion of “dimensions” so that dimensions of temperature are interchangeable with dimensions of energy per particle.
2. A similar example concerns the mole, which is officially defined to be an “amount of substance”, whatever that means. It turns out that in all practical situations, there is a one-to-one
correspondence between the “amount of substance” in a mole and the number of particles. It then becomes (at best) a matter of opinion whether “amount of substance” is the same thing as number of
particles. It’s a distinction without a difference, and if you overemphasize the distinction you can easily get false negative results.
3. A particularly thought-provoking example concerns the dimensions of time and distance. In a wide range of everyday applications, such as at a track meet, time and distance are very different, and
are measured in different units. However, in special relativity, time and distance are seen to be very very similar, and ought to be measured in the same units.
From this we conclude that two dimensions might or might not be equivalent, depending on context. A given expression might or might not be dimensionally sound, depending on context.
As always, keep in mind that there is more to physics than dimensional analysis. You need to know the physics of the actual situation to know whether it is possible to mix time and distance (as
in relativity) – or not possible to mix them (as at the track meet).
5.2 Illegitimate Dimensions
Not everything that is nominally dimensionless behaves as a dimensionless quantity should.
Here’s what we expect: As discussed above, if we have a quantity Y with dimensions of length cubed, and we scale up every length in the system by a factor of five, in simple cases we expect Y to
scale up by a factor of five cubed. By the same token, if we have a quantity Z that is dimensionless, our first guess is that it will scale like length to the zeroth power, i.e. that it will be
unchanged if we scale up the length.
As mentioned in section 5.1, dimensional analysis is usually just a stand-in for a scaling argument, and if the dimensions are telling you one thing while the scaling is telling you another, you
should trust the scaling.
Here’s an example of what can go wrong: Suppose Simplicio defines a new quantity called longivity, which is calculated by dividing the length by one inch. The longivity of a violin string is about
12. Longivity is nominally dimensionless, by construction. However, it does not behave as a dimensionless quantity should. If we scale up every length by a factor of 3, we (approximately) convert the violin into a string bass. The longivity of the strings goes up by a factor of 3. In other words, longivity has nominal dimensions of length to the zeroth power, but scales like length to the first power.
I use the term illegitimate dimensions to refer to any situation where a quantity has nominal dimensions that conflict with its scaling behavior. (Most commonly, illegitimate dimensions are used to
make a quantity appear dimensionless, but there are other possibilities. In theory, something that scales like volume could illegitimately be given units of length.)
Usually, when such a quantity is defined, some sort of explicit, arbitrary unit appears in the definition, such as the inch that appeared in the definition of longivity.
By way of contrast, it is legitimate to form a dimensionless quantity by dividing one physically-relevant thing by another physically relevant thing. For example, the aspect ratio of a wing is the
span divided by the chord. The aspect ratio is a legitimate, relevant, and useful quantity.
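The contrast between longivity and a legitimate dimensionless group is easy to demonstrate numerically. This sketch uses illustrative values; the `longivity` helper simply implements the (deliberately silly) definition from the text:

```python
# Longivity (length divided by one inch) is nominally dimensionless but
# scales like length; aspect ratio (span / chord) is a legitimate
# dimensionless group, invariant under rescaling all lengths.
INCH = 0.0254   # meters

def longivity(length_m):
    return length_m / INCH

def aspect_ratio(span_m, chord_m):
    return span_m / chord_m

span, chord, k = 10.0, 1.5, 3.0   # scale every length in the system by k
long_ratio = longivity(k * span) / longivity(span)
ar_ratio = aspect_ratio(k * span, k * chord) / aspect_ratio(span, chord)
print(long_ratio, ar_ratio)   # longivity scales like k; aspect ratio is unchanged
```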
An interesting example that straddles the border between illegitimate and legitimate concerns the activity and equilibrium quotient of chemical reactions, as discussed in reference 1.
5.3 Legitimate Dimensionless Groups
The sort of advanced dimensional analysis discussed in section 4 is nowhere near being foolproof. In skilled hands, dimensional analysis is a method for finding the right answer quickly ... but in
unskilled hands it can be a method for finding the wrong answer quickly, which is definitely not a good thing.
As an ultra-simple example, consider the fact that energies, Lagrangians, and torques all have the same dimensions. So, if you tried to equate a torque with a Lagrangian, the equation would be
nonsense but it would pass the dimensional test. This is an example of a false positive dimensional test.
As a more realistic example, if we tried to combine section 4.2 and section 4.3, including the effects of both gravity and surface tension, we would wind up with three equations in four unknowns, and
we would be stuck.
In such a situation, it is possible to get un-stuck, but doing so requires a good grasp of the physics of the situation. Often this boils down to knowing which variables are relevant. Continuing the
wave example, if we include the effects of both gravity and surface tension, we would be able to construct the following dimensionless quantity:
s / (ρ g λ^2) (17)
Similarly, if we tried to combine section 4.2 and section 4.4, we would encounter the dimensionless quantity
d / λ (18)
When you have a dimensionless quantity running around, you can multiply one side of an equation (such as equation 14) by that quantity raised to any power you want, and the result will be
dimensionally sound. If you blindly apply the dimensional analysis formalism to such a situation, you can get any answer you want, including innumerably many wrong answers. The solution to this
problem lies outside the scope of dimensional analysis. You need more powerful tools (such as scaling laws) that allow you to bring in more of the physics, as discussed in reference 1.
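The degeneracy can be made concrete. With exponents (a, b, c, e) on ρ, λ, g, and s there are four unknowns but only three balance equations, so more than one exponent choice passes the dimensional test. A sketch (illustrative, using dimension exponent vectors over mass, length, time):

```python
from fractions import Fraction as F

# Dimension exponent vectors over (mass, length, time).
RHO = (1, -3, 0)    # density
LAM = (0, 1, 0)     # wavelength
G   = (0, 1, -2)    # gravitational field
S   = (1, 0, -2)    # surface tension (force per length)
VEL = (0, 1, -1)

def dims_of(a, b, c, e):
    """Dimensions of rho^a * lam^b * g^c * s^e."""
    pows, vecs = (a, b, c, e), (RHO, LAM, G, S)
    return tuple(sum(F(p) * v[i] for p, v in zip(pows, vecs)) for i in range(3))

# Two different exponent choices both match the dimensions of velocity:
sol_gravity   = (0, F(1, 2), F(1, 2), 0)          # v ~ sqrt(g * lam)
sol_capillary = (F(-1, 2), F(-1, 2), 0, F(1, 2))  # v ~ sqrt(s / (rho * lam))
print(dims_of(*sol_gravity) == VEL, dims_of(*sol_capillary) == VEL)
```

Dimensional analysis alone cannot choose between these (or any product of one with a power of a dimensionless group); that choice requires physics, as the text says.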
The combination of section 4.2 and section 4.4 provides a nice example of the power and the limitations of dimensional analysis. It turns out that the shallow-water case can be solved exactly by
elementary methods. If you do the full-blown calculation, you will find that
v = √(g d) (19)
which is an equality, in contrast to equation 15 which was merely a proportionality.
The calculation for deep-water waves is very much more complex. Instead of doing the calculation, we rely on qualitative physical arguments to tell us that equation 19 cannot remain valid in deep
water. It would be very weird if the wavefunction of the wave could extend to arbitrarily great depths. There must be some sort of cutoff, some sort of limit as to how deep the influence of the wave
extends. If the water is deep enough, the depth must be immaterial. The only remaining length-scale in the problem is the wavelength, so the cutoff must scale like the wavelength, and the speed must
then scale according to equation 10.
Dimensional analysis, like many tools, mostly multiplies your existing skills, rather than merely adding to them. If you don’t know anything, dimensional analysis isn’t going to help you. On the
other hand, the more you know, the more powerful dimensional analysis and scaling arguments become.
Note that in this context, the dimensionless groups that appear in equation 17 and equation 18 are traditionally given names of the form π with a subscript, which is the basis for the name of the “Buckingham pi” theorem (reference 7).
Rather than talking about what dimensional analysis can’t do, it would be more constructive to talk about what scaling arguments can do. In reference 1 there are many examples of scaling laws, some
of which can easily be derived using dimensional arguments, but some of which cannot (such as the distance to the apparent horizon, and the monomer/dimer equilibrium density).
5.4 Dimensions Named After Units
Beware that sometimes a dimension is named after a unit. There are many examples, including voltage, acreage, mileage, et cetera. This is discussed in reference 8.
6 Anecdote: Gravitational Waves
Back when I was a senior in college, I was taking a course in General Relativity. The second-term final exam was a one-on-one oral exam. I showed up for my appointment at 8:00AM. I had stayed up all
night. (I hadn’t been studying; I had been too busy printing a million entries for the McDonald’s sweepstakes … but that’s another story.)
The dialog went something like this:
Kip: So, what would you like to be asked about?
jsd: Huh?
Kip: You get to choose the topic of the exam. If you’d been to class recently you’d know that.
jsd: Oh.
Kip: So, what topics do you find most interesting?
jsd: Well, the recent discussion of symmetries and Killing vectors seems really profound and interesting … but I don’t really understand it.
Kip: That’s OK. That stuff confuses me too.^1
jsd: How about gravitational waves and mechanical detectors. That’s something I might have some intuition about.
Kip: OK. Suppose I have two particles sitting in outer space. A gravitational wave comes along. Does the distance between the two particles change?
jsd: Hmmm. That’s tantamount to asking for a really fundamental definition of distance.
Kip: Yes.
jsd: I’d say no. The particles move, but to first order that just reflects the curvature of spacetime, and if everything moves the same, the distance doesn’t change.
Kip: That’s wrong.
jsd: Oh, yeah. I should have used a better definition of distance. Distance should be measured by rulers. There’s even a picture of that in your book, with two beads on a stick. The
gravitational wave makes the beads move relative to the stick. You could even measure the frictional heating as the beads rub against the stick.
Figure 1: Gravitational Wave Detector: Beads Sliding on a Stick
Kip: Let’s move on. Suppose there is a whole cloud of particles sitting around. A gravitational wave passes through. The particles wiggle around while the wave is there. The question is, after
the wave has gone, do the particles return to where they started?
jsd: Well, there will always be some Compton scattering that pushes them downrange.
Kip: What about the transverse direction?
jsd: Well, in analogy to electromagnetism, you could argue that ....
Kip: I know where you’re going with that, and it’s not going to work. That’s a nice result in electromagnetism, but it depends on charge neutrality.
jsd: Oh. I guess we’re not going to have a neutral balance of positive and negative mass, are we?
Kip: Let’s move on. Suppose we have a little green man standing on Mars, shaking his fist like this. What is the intensity of the resulting gravitational waves as received on Earth?
jsd: Well, the radiated power will be proportional to capital G, because that’s the relevant coupling constant for all such things. It will also depend on the reduced quadrupole moment, I-tick. In fact it will depend on the derivative of I-tick. Actually it has to be the third derivative, I-tick dot-dot-dot, because I-tick has dimensions of r squared dm, and anything less than the third derivative would be nonzero for two masses moving past each other at uniform velocity, and we know that’s not going to radiate. Actually it has to be the square of I-tick dot-dot-dot, for a number of reasons, including the fact that to make an energy out of G you need two factors of mass. Next, we need a factor of r squared in the denominator, because that’s just how stuff spreads out. And it looks like we are going to need a factor of c to the fifth in the denominator to make the units^2 come out right.
Kip: (...reaching over my shoulder to write on my paper...) You left out this factor of two.
jsd: Oh.
At this point the exam was over. I had gotten two out of three questions abjectly, totally, and fundamentally wrong. And it was clear that I hadn’t studied the gravitational radiation formula. But I
guess Kip figured out that anybody who could construct the formula on the spot – using nothing but dimensional analysis and scaling arguments – couldn’t be all bad. He even gave me a decent grade in
the course.
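The dimension bookkeeping in this exchange can be checked explicitly: G times the square of the third derivative of the quadrupole moment, divided by c to the fifth and r squared, does carry the dimensions of intensity (power per area). A sketch (an illustration, not from the text) using exponent vectors over (mass, length, time):

```python
from fractions import Fraction as F

# Dimension exponent vectors over (mass, length, time).
G_NEWTON  = (-1, 3, -2)   # Newton's constant, m^3 kg^-1 s^-2
I3DOT     = (1, 2, -3)    # third time derivative of the quadrupole moment (r^2 dm)
C         = (0, 1, -1)    # speed of light
R_DIST    = (0, 1, 0)     # source-to-receiver distance
INTENSITY = (1, 0, -3)    # power per area, kg s^-3

def product_dims(*terms):
    """Dimensions of a product of powers; terms are (vector, exponent) pairs."""
    out = [F(0), F(0), F(0)]
    for vec, p in terms:
        for i in range(3):
            out[i] += F(p) * vec[i]
    return tuple(out)

dims = product_dims((G_NEWTON, 1), (I3DOT, 2), (C, -5), (R_DIST, -2))
print(dims == INTENSITY)   # True: the units come out right
```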
7 References | {"url":"http://www.av8n.com/physics/dimensional-analysis.htm","timestamp":"2014-04-19T11:57:39Z","content_type":null,"content_length":"62482","record_id":"<urn:uuid:2475352c-1a2b-48a2-a9b8-3eda005d0395>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tell Forbes how you use R
Steve McNally of the Forbes Mean Business blog says R is a name you need to know for 2011. He cites some great examples of R in action: Facebook has used R to figure out that “just two data points
are significantly predictive of whether a user remains on Facebook: (i) having more than one session as a new...
Bayesian estimation with Markov Chain Monte Carlo using PyMC
Prof. Chris Fonnesbeck briefly introduces Bayesian inference, then discusses how to estimate Bayesian models with Markov Chain Monte Carlo using PyMC.PyMC is the premier Python package for doing MCMC
estimation, and Prof. Fonnesbeck is one of the pac...
Quantitative Ecology 2010-11-10 14:56:00
At last... I have been suffering with XEmacs displaying odd characters instead of the quotation marks that are used in R help files. This was driving me up the wall because it makes the files (and R
output in general) very hard to read; however, I fina...
Co-authorship Network of SSRN Conflict Studies eJournal
As part of my on-going research simulating network structure using graph motifs I have been collecting novel data sets to test and benchmark the method. Since I am a political scientist studying
conflict, it was suggested to me to collect a co-authorship network within this sub-discipline. Such a network is useful for several reasons;
Generating a quasi Poisson distribution, version 2
Here and there, I mentioned two codes to generated quasiPoisson random variables. And in both of them, the negative binomial approximation seems to be wrong. Recall that the negative binomial
distribution iswhere and in R, a negative binomial di...
Don’t be a Turkey
'Indeed, I am moving on: my new project is about methods on how to domesticate the unknown, exploit randomness, figure out how to live in a world we don't understand very well. While most human
thought (particularly since the enlightenment) has focused us on how to turn knowledge into decisions, my new mission is to...
Forecast estimation, evaluation and transformation
I’ve had a few emails lately about forecast evaluation and estimation criteria. Here is one I received today, along with some comments. I have a rather simple question regarding the use of MSE as
opposed to MAD and MAPE. If the parameters of a time series model are estimated by minimizing MSE, why do we | {"url":"http://www.r-bloggers.com/2010/11/page/15/","timestamp":"2014-04-21T14:59:38Z","content_type":null,"content_length":"36092","record_id":"<urn:uuid:556c0666-d96c-4134-9de8-4904272ba94d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
consider the principal ideal
I must admit that I have never heard of ideals in monoid theory, but just accepting your definition of xK I would say no.
Let [itex]A[/itex] be the free monoid on a singleton {y} so [itex]A=\{1,y,y^2,\ldots\}[/itex]. Let,
[tex]K = \{1,y^3,y^4,y^5,\ldots\}[/tex]
[tex]x = y[/tex]
It's trivial to verify [itex]x,x' \in A-K = \{y,y^2\}[/itex], [itex]x' \notin xK = \{y,y^4,y^5,\ldots\}[/itex] and:
[tex]xK \cap x'K = \{y^5,y^6,y^7,\ldots\}[/tex]
I don't immediately see an obvious property on A that would make it hold for arbitrary K except requiring A to be a group, or actually requiring exactly what you want. | {"url":"http://www.physicsforums.com/showthread.php?t=358184","timestamp":"2014-04-16T07:47:14Z","content_type":null,"content_length":"25220","record_id":"<urn:uuid:95ae90d4-83d5-4581-8f2e-047cd6bcf368>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Natural examples of finite dimensional spaces with interesting 2-type
Riemann surfaces provide interesting examples of 1-types - interesting as they have roles in diverse areas. However, apart from 2-dimensional lens spaces, I can't readily bring to mind natural
examples of spaces with non-trivial first two homotopy groups (non-trivial first $k$-invariant optional, I suppose). Given a crossed module one can get an interesting 2-type, but this is via geometric
realisation, so hardly finite-dimensional and not smooth in the usual sense (maybe in some exotic notion of smoothness).
Do natural examples of spaces with interesting 2-types turn up anywhere?
I tried to find out if I could construct one in a naive way at this question, but it fell over.
homotopy-theory gt.geometric-topology examples
It's been a while but aren't there theorems to this effect: there are no finite-dimensional $K(\pi,n)$ spaces for $n\geq 2$. So if you had a finite-dimensional 2-type, its universal cover would be a finite-dimensional $K(\pi,2)$. If I recall, it's some kind of Serre spectral sequence argument, showing that $H_i(K(\pi,n))$ is non-trivial for infinitely-many $i$. I may be mis-remembering? – Ryan Budney Aug 18 '10 at 4:14
I don't need a 2-type per se, but a space with an interesting 2-type. But this is a good point you raise +1. – David Roberts Aug 18 '10 at 4:35
The wedge $S^1 \vee S^2$ is interesting. $2$-knot complements in $S^4$ have an interesting $2$-type, as well. – Ryan Budney Aug 18 '10 at 4:38
"...2-knot complements in S^4..." - cool. This is the sort of thing I was after. If you want to put this as an answer I'll accept it. – David Roberts Aug 19 '10 at 0:29
The complement of an $n$-knot for $n>1$ is aspherical if and only if the knot complement has the homotopy-type of $S^1$. This is an old result of Dyer and Vasquez, 1973. The reference is in Hillman's book "2-knots and their groups" but Google books isn't bringing up that part of the book, and my actual copy is in my office... Google books does bring up Eckmann's 1976 proof, though. – Ryan Budney Dec 31 '10 at 22:51
3 Answers
The 2-type of a 4-manifold is an extremely interesting invariant. In fact, work of Hambleton and Kreck shows that in many cases it determines the homotopy type (if one adds the
intersection form as an obvious additional invariant). As a consequence, such 2-types have a very rich structure.
It's quite tricky to figure out which 2-types appear in this way, just like it is tricky to figure out which groups arise as fundamental groups of 3-manifolds.
This is a very interesting open problem, even if one puts aside the (difficult) 4-manifold questions and just works with 4-dimensional Poincare complexes.
It's been a while, but right about now this has become the most interesting answer to this question, and a potential application for some work I plan to do. Btw, nice to meet
you in Singapore recently. – David Roberts Feb 1 '13 at 6:02
Cappell-Shaneson knots are a special class of knots in homotopy 4-spheres. In a sense they were designed as an example of knots with a well-known but prescribed 2-type.
The definition is like this. Look at bundles over $S^1$ whose fiber is a once-punctured $(S^1)^3$. The monodromy is an element of $A \in GL_3(\mathbb Z)$. A non-trivial (but fun) exercise is
to check this manifold $M = \left((S^1)^3\setminus\{*\}\right) \rtimes_A S^1$ is the complement of a smoothly embedded $S^2$ in a homotopy $4$-sphere if and only if $det(A) = \pm 1$.
$\pi_2 M$ has a single generator as a module over $\pi_1 M$, but it's far from a free module over $\pi_1 M$. Perhaps the best way to describe $\pi_2 M$ is that it's a Laurent polynomial ring
up vote $\mathbb Z[x^\pm,y^\pm,z^\pm]$ where the $x,y,z$ correspond to the generators of $\pi_1 ((S^1)^3 \setminus\{*\})$ i.e. this is $H_2$ of the universal cover, which has a natural
4 down identification with $(\mathbb R^3 \setminus \mathbb Z^3) \times \mathbb R$. The action of $\pi_1 M$ is just the action on this covering space, so you get not just multiplication by units $x^
vote ay^bz^c$ but also the automorphisms of $\mathbb Z^3$ (coming from $A \in GL_3(\mathbb Z))$ acting on the exponents of the polynomials.
The homotopy 4-spheres that contain Cappell-Shaneson knots were once considered possible counter-examples to the smooth 4-dimensional Poincare conjecture. Recent work of Akbulut and Gompf
seem to have largely removed this possibility.
I like this answer, it gives a nice computational flavour to the construction. – David Roberts Dec 16 '12 at 23:18
In low dimensional topology there are a host of examples along the lines of the curve complex of a surface. Someone like Andy Putman will be able to provide a more detailed explanation, but
here is a summary. You build a simplicial complex in which vertices are isotopy classes of closed curves in a surface, and n-simplices are disjoint (n+1)-tuples of curves. If you require
that no curve bounds a disc (or an annulus if your surface has boundary) then it is fairly easy to see that the complex is finite dimensional. The mapping class group of the underlying
surface acts on this complex.
Now, the homotopy types of this complex and its variations, and of the quotient by the mapping class group, are very interesting objects. In particular, studying the low dimensional homotopy groups allows you to do things like construct presentations for the mapping class group.
This sort of construction is really a small industry in low dim topology/geometric group theory. Braid groups, automorphism groups of free groups, surface mapping class groups, 3-manifold
mapping class groups, and various subgroups of these, can all be studied via complexes of this type, and often we only understand the complex as far as its 2-type.
Doesn't the curve complex have the homotopy type of a wedge of (typically higher dimensional) spheres? I think Harer showed this and computed the dimension. In particular, isn't $\pi_2$
usually trivial? As you say, Andy will know. – HJRW Jan 2 '11 at 5:17
you're right, it is a wedge of spheres, but then the quotient by the mapping class group is the space with an interesting 2-type, since it is an approximation to the classifying space of
the mapping class group. So $\pi_2$ would be trivial, but having an interesting fundamental group makes a 2-type eligible for being considered interesting in my book. – Jeffrey
Giansiracusa Jan 2 '11 at 9:19
In that case, I don't think I understand the question. Why isn't 'any 4-manifold' a good answer? – HJRW Jan 2 '11 at 19:09
As I would use the terminology, an $n$-type is an equivalence class of spaces under the relation of being $n$-equivalent, so yes, 'any 4-manifold' would be a perfectly good answer as far
as I am concerned. – Jeffrey Giansiracusa Jan 3 '11 at 10:59
| {"url":"http://mathoverflow.net/questions/35925/natural-examples-of-finite-dimensional-spaces-with-interesting-2-type/73362","timestamp":"2014-04-17T13:00:41Z","content_type":null,"content_length":"73569","record_id":"<urn:uuid:84e086bc-3928-4e60-9624-84e086bc-3928-4e60-8c41>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
If n>1, show that n^4 + 4^n is never a prime?
$n$ clearly has to be odd, so $\tfrac{n+1}{2}$ is an integer. Then $n^4+4^n=\left(n^2+2^n\right)^2-2^{n+1}n^2=\left[n^2+2^n+2^{\frac{n+1}{2}}n\right]\left[n^2+2^n-2^{\frac{n+1}{2}}n\right]$. The factor $n^2+2^n+2^{\frac{n+1}{2}}n$ is clearly greater than 1; if you can show that $n^2+2^n-2^{\frac{n+1}{2}}n>1$ then you’re done ($n^4+4^n$ would be the product of two integers greater than one and so can’t be prime).
Okay, here we go. Prove that $n^2+2^n-2^{\frac{n+1}{2}}\cdot n>1$ for all odd integers $n>1$. First, check the cases $n=3$ and $n=5$ separately:
$3^2+2^3-2^{\frac{3+1}{2}}\cdot3=9+8-12=5>1$
$5^2+2^5-2^{\frac{5+1}{2}}\cdot5=25+32-40=17>1$
For odd integers $n\ge7$, I claim that $2^n>2^{\frac{n+1}{2}}\cdot n$. This can be proved by a slight variation of the method of induction. When $n=7$, $2^7=128>112=2^{\frac{7+1}{2}}\cdot7$. Suppose $2^n>2^{\frac{n+1}{2}}\cdot n$ for some odd integer $n\ge7$. Now consider $2^{\frac{(n+2)+1}{2}}\cdot(n+2)$. (NB: $n+2$ is the next odd integer.)
$2^{\frac{(n+2)+1}{2}}\cdot(n+2)=2\cdot2^{\frac{n+1}{2}}\cdot(n+2)$
$=2\cdot2^{\frac{n+1}{2}}\cdot n+2\cdot2^{\frac{n+1}{2}}\cdot2$
$<2\cdot2^n+2\cdot2^n$ *
$=4\cdot2^n=2^{n+2}$
* $\ 3<n\ \Rightarrow\ n+1+2<2n\ \Rightarrow\ \frac{n+1}{2}+1<n\ \Rightarrow\ 2\cdot2^{\frac{n+1}{2}}<2^n$
Hence, $2^n>2^{\frac{n+1}{2}}\cdot n$ for all odd integers $n>5$, and it follows that $n^2+2^n-2^{\frac{n+1}{2}}\cdot n>n^2>1$. | {"url":"http://mathhelpforum.com/number-theory/45416-if-n-1-show-n-4-4-n-never-prime.html","timestamp":"2014-04-20T17:31:43Z","content_type":null,"content_length":"41288","record_id":"<urn:uuid:b749bae9-e85a-468f-bd4e-b679c6b09797>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Linear Algebraic Groups: a Crash Course
Dave Anderson
January 24, 2011
This is a collection of notes for three lectures, designed to introduce
linear algebraic groups quickly in a course on Geometric Invariant Theory.
There are several good introductory textbooks; in particular, the books by
Humphreys [H], Springer [S], and Borel [B]. Here I merely distill some of
the material from Humphreys and Springer.
1 Definitions
We'll work over a fixed algebraically closed base field k.
Definition 1.1 An algebraic group G is a group object in the category
of varieties over k. That is, G is a group and a variety, and the maps
G × G → G and G → G
(g, h) ↦ gh        g ↦ g^{-1}
are morphisms of varieties. (And there is a distinguished k-point e ∈ G, the identity.)
A homomorphism of algebraic groups is a group homomorphism that
is also a map of varieties.
In schemey language, another way to say this is that the functor h_G :
Schemes → Sets factors through Groups. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/716/2849335.html","timestamp":"2014-04-17T05:40:30Z","content_type":null,"content_length":"8039","record_id":"<urn:uuid:968d918c-3558-49e1-ad13-c80951bb82dc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Upper Darby ACT Tutor
Find an Upper Darby ACT Tutor
...With dedication, every student succeeds, so don’t despair! Learning new disciplines keeps me very aware of the struggles all students face. Beyond academics, I spend my time backpacking,
kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design.
14 Subjects: including ACT Math, physics, ASVAB, calculus
...Education and Teaching Experience: I hold a BA in English from the University of Pennsylvania and an MA in English from the University of Oregon. I am currently pursuing a secondary English
teaching certification and MS in Secondary Education at Saint Joseph's University. At the University of ...
17 Subjects: including ACT Math, reading, English, grammar
...It's not that they aren't ready to think in this new way - they just need lots of time, practice, and patient understanding. I'll be able to quickly diagnose where your student's trouble spots
are and how he/she approaches the assignments. I will reinforce the concepts previously taught in class and drill the particular problem areas of your student.
23 Subjects: including ACT Math, reading, writing, geometry
...Learning doesn't have to be dull or life-sucking like it is in many schools. Perhaps you're ready to make a change in your own life and need a little help to get started. Tutoring can help you
hone your skills or to catch up in areas of weakness, and I'm eager to help!
47 Subjects: including ACT Math, chemistry, English, writing
...I have experience passing the GREs as well as the Praxis II for mathematics. Furthermore, I have experience tutoring students taking math classes up to and including calculus. I would happy to
share that knowledge and experience to help someone pass the math section of the ACT test.
16 Subjects: including ACT Math, English, calculus, physics | {"url":"http://www.purplemath.com/Upper_Darby_ACT_tutors.php","timestamp":"2014-04-16T07:24:33Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:cc134fc5-d1f6-443f-b1ca-c39058c38c1d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random Math Question
Limbic~Region has asked for the wisdom of the Perl Monks concerning the following question:
I popped into #perl today to catch the tail end of a question with Dominus
and another chatter regarding randomizing a list of 100_000 elements.
Dominus stated he would be surprised if any language had any built-in random number generator suitable for randomizing 100K elements because of the amount of entropy needed.
Well I wanted to be argumentative as it had been weeks since I used IRC. Once the other chatter was satisfied with pseudo-randomness, I said that repeating the randomization on larger and smaller
scales should result in an overall random list. I was thinking a reverse bin sort and here is the process I outlined. (Assume we have 64 elements in our list but only enough entropy to truly
randomize 8).
A = 01-08, B = 09-16, C = 17-24, D = 25-32
E = 33-40, F = 41-48, G = 49-56, H = 57-64
Shuffle A - H
In turn, shuffle all 8 elements of each group as though it were a single list.
Wash, rinse, repeat
At this point, I was no longer interested in arguing for the sake of arguing (too bad my high school didn't have a debate team). I conceded the point of "truly" random but asked Dominus if he had any
way of proving his assertion. I figured that it would be nearly impossible to tell the difference between a "truly" randomization of the list and one that resulted from many of my re-orderings. In
other words, I was happy to trade linear time and only processing the list once for missing bits of randomness. Unfortunately, he didn't know of one at that moment. So I have two questions.
• How many iterations of my process would it take before you had an acceptable^* fake?
• Using code, how can you determine the amount of randomness of a given list?
The entry on shuffling describes a process using cards, but I would like to see it applied to a list. * - For some definition of acceptable
Re: Random Math Question
by ickyb0d (Monk) on Oct 10, 2005 at 21:43 UTC
a little while ago i actually had to take a class on creating accurate simulations of systems... which included generating random numbers.
i believe that most computers (and their computer languages) are unable to really come up with a random set of numbers. most of the time most random number sets will require an initial
seed to generate the random numbers from. if one is not provided it simply uses the time (epoch?) as the random seed. most random numbers generated from a computer are called
pseudo-random because they can either be predicted from the seed, or they will begin to repeat at one point in time.
the only true way to get random numbers would be to have them generated by a physical process that is completely unpredictable (number of raindrops falling or perhaps radioactive decay).
Thus it's very hard to actually determine the randomness of an algorithm, you can only hope to make it more random by perhaps randomizing the seed every time, or something to that
some quick searches on comparing pseudo-random numbers to random numbers, or even looking up 'random number generators' should give you a little more insight onto your question.
sorry if i didn't answer your question initially. the thing is... that being random can mean, well... anything. it could mean they could all be the same number, or they could all
be completely different numbers for each iteration.
my guess is that you are interested in having different numbers. in this case you would want to determine the random variance of the set of numbers. This will tell you how spread
out the numbers are. You could also possibly take the correlation of all of your points (plotted on a graph). The lower the correlation the less related each iteration is to one
another; but i'm not really sure if that's a good measure of randomness.
sorry if i still didn't answer your question, statistics never really was my strong point. i still think your best bet is to seek out other algorithms and see which one works
best. if you don't have a lot of math background, it might be tough deciphering all the equations. but anyways, i hope this gives you some more insight about all of this.
Re: Random Math Question
by traveler (Parson) on Oct 10, 2005 at 21:54 UTC
Your first question is way byond my mathematical understanding.
Regarding the second, check out random.org. They have papers on random numbers including comments on how to test for true randomness. They also generate true random numbers for you...
Re: Random Math Question
by SciDude (Friar) on Oct 10, 2005 at 22:04 UTC
If each of your groups were randomized at exactly the same time (or perhaps with the same seed) then they should be identical.
Shuffling these groups may mix them up a bit, but only provides for a folded set of groups each of which was identical.
In a perfect process, using the same seed and shuffling procedure would result in an exact copy of A-H elements with each set created. I think we can all agree that the outcome of this
effort is not random.
The most important reference in this area is Knuth, D. 1981. (1st ed. 1969.) The Art of Computer Programming. Volume 2: Seminumerical Algorithms. Addison-Wesley.
I would also suggest a careful look at the Runs Test for Randomness - which is quite simple to understand and follow. Most test for randomness fall into the "runs" category.
If all else fails, simply relax your last requirement for suitable definition of acceptable and declare success!
The first dog barks... all other dogs bark at the first dog.
Re: Random Math Question
by Roy Johnson (Monsignor) on Oct 10, 2005 at 22:12 UTC
I may have misunderstood your suggestion, but it seems to me like your method would always keep elements in each group together. They'd be shuffled with respect to each other, but in a
continuous block. To get them to migrate and intermingle, you'd want to split each group in half and shuffle the half-size groups, then shuffle the members of (not-half-size) groups.
I think MJD's objection to the generator is that, if you're shuffling 100,000 elements, you need a generator that can give you a random number between 1 and 100,000. If your generator
doesn't have that, you can start with two random numbers, and use them as separate seeds. Each time you need a big random number, you get two small ones (which also become your two new
seeds). You combine the two small random numbers into one big random number.
You can do tricks (skip a number occasionally) to keep them from having synchronized repetitions, and you can vary your method of combining them, to help eliminate patterns.
Caution: Contents may have been coded under pressure.
Roy Johnson,
Since the groups themselves are getting shuffled as well, any 1 of the 100K numbers could end up in any position in the entire range. I had considered the selection of the groups
each iteration to also be random but didn't think it was worth it. I didn't really think my position was correct.
I could have had the characterization of the problem wrong as I did come in on the tail end of it. My understanding was that you already had a list of 100K numbers and you needed to
randomize the order. In this case, the numbers themselves don't matter just their positions. My process would be to apply the Fisher-Yates shuffle many times but on different scales.
Sometimes using chunks of the list as though they were single elements and sometimes as though each chunk were the entire list.
Although any number can end up in any position, without something to break up the groups between repetitions, the groups of numbers will remain grouped:
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or
desire allow.
Said Roy Johnson:
I think MJD's objection to the generator is that, if you're shuffling 100,000 elements, you need a generator that can give you a random number between 1 and 100,000. If your
generator doesn't have that, you can start with two random numbers, and use them as separate seeds.
No, that really wasn't my objection at all.
Re: Random Math Question
by blokhead (Monsignor) on Oct 10, 2005 at 22:22 UTC
I figured that it would be nearly impossible to tell the difference between a "truly" randomization of the list and one that resulted from many of my re-orderings.
Looks like you and Dominus have bumped into the very large open problem in theoretical computer science of pseudorandom generation. Essentially, a pseudorandom generator (PRG) is an
algorithm that takes a small "seed" of real randomness (say, n bits) and outputs a longer string (say, 2n bits) that "looks sufficiently random." Usually the definition of "looking
sufficiently random" means that no polynomial-time algorithm can distinguish the output of the PRG from truly random bits (with a certain probability). Update: In layman's terms, the
question is essentially: "Is it possible to tell (in a reasonable amount of time) how much randomness an algorithm uses just by looking at its output?"
Note that this is similar, but different from the notion of pseudorandom number generators (PRNGs) that you find in Perl and elsewhere. For these, "pseudorandom" means "they seem hard to
predict so we hope it's pseudorandom in the above sense." ;)
PRGs are absolutely essential for provably secure cryptography. Before anyone asks, modern cryptographic algorithms (like RSA) are definitely not provably secure. Their security is based
on widely-accepted (but unproven) assumptions that certain problems (in this case, discrete logarithm & factoring) are sufficiently difficult to compute.
That PRGs actually exist is even a stronger assumption than P ≠ NP, so this is a very difficult question. Most researchers believe that they do exist on some level. There is a great
wealth of research dealing with (assuming PRGs do exist, of course) how much they can expand their random seeds, how simple the PRG algorithms can be, etc.
Anyway, be encouraged: you are in good company thinking that there may be algorithms that don't use much randomness but whose outputs are impossible to distinguish from those that use
lots of randomness. On the other hand, since this is an extremely hard problem, don't hold it against Dominus that he wasn't able to come up with a distinguishing algorithm off the top
of his head. For that reason, be discouraged as well! Both sides of the problem are quite difficult ;)
Using code, how can you determine the amount of randomness of a given list?
An individual "shuffled" list is neither random nor non-random. Randomness is a property of the process, not the individual outputs. When talking about an algorithm which tries to
distinguish between a PRG and a truly random source, we (usually) allow for it to take multiple samples from the source before saying which kind of source it thinks it's getting. Even
then, we allow for some probabilistic error in its decision.
Regarding the question of randomizing a list of 100_000 elements, you need to sample uniformly from the space of permutations of 100_000 elements. This requires log2(factorial(100000)) =
1516705 bits because that is the entropy of the random variable you wish to sample.
At the risk of devolving into a purely theoretical, impractical exercise (if it's not already too late (which it is)), here goes nothing ;)
There are two cases...
1. If pseudorandom generation is impossible, then we can tell (by sampling its output) how much true randomness any algorithm uses (call this the true entropy). In this case,
the Mersenne Twister is nowhere near big enough. The MT has 2^19937 configurations, so a single MT has at most 19937 bits of entropy. This is nowhere near the 1.5 million
bits required to sample the space in question. There would be a polynomial-time algorithm that would be able to tell (by sampling its output) whether or not your algorithm
was using MT.
2. On the other hand, if the MT is really pseudorandom in the strong sense of my previous comment, then we can talk about not only its true entropy but also its computational
entropy, that is, the amount of entropy it can "fool" all polynomial-time algorithms into thinking it uses.
From what I recall, if pseudorandom generation turns out to be possible in this strong sense, it is quite reasonable for a function's computational entropy to be much higher
(say, by a squared factor) than its true entropy. In this case, MT could be sufficient to sample the desired space.
Essentially, if pseudorandom generation is possible, then bits from the pseudorandom generator are completely interchangable with truly random bits in the polynomial-time realm.
If there is ever a case where it made a (statistically) significant difference in an algorithms output, then already that gives you a distinguishing algorithm that contradicts
the definition of the pseudorandom generator! Neat, huh?
blokhead, Thanks for all the wonderful explanations.
Let's for this discussion assume that all we need is an algorithm that can potentially produce every p! permutation of a sequence (1..p)
I was wondering whether a two dimensional shuffling (not sure if that is the right term) would help us achieve such a suffle without having to resort to a random variable with
entropy 1516705 bits. Here are the steps i have in mind -
1. Partition the sequence into k-lists (each list containing n-elements) and that each list can be truly randomized. Also WLOG let's assume k <= n
2. For all i 1 to n and j = 1 to k,
a. generate integers (X,Y) ~ U from the planar region bounded by x = n and y = k.
b. Now we swap element(i,j) with element(X,Y).
Since each (i,j) is equally likely => i*n+j is equally likely. Is there anything wrong with this approach?
SK
Re: Random Math Question
by Dominus (Parson) on Oct 11, 2005 at 00:58 UTC
Said Limbic~Region:
I figured that it would be nearly impossible to tell the difference between a "truly" randomization of the list and one that resulted from many of my re-orderings. ... Unfortunately,
he didn't know of one at that moment.
Something I meant to say on IRC, but didn't get to, was that I don't think your question here was exactly well-posed. Suppose someone asked you if you would be able to distinguish a true
random number from one that was only pseudo-random. Of course, you can't; the question isn't really sensical. The question they need to be asking in that case is whether you would be
able to tell a sequence of random numbers from a sequence of pseudo-random numbers. And in that case the answer is that yes, methods for doing that are well-studied.
So the question I think you need to ask here is whether a sequence of arrays shuffled by your method would be distinguishable from a sequence of arrays shuffled into truly random orders.
And having thought about it a little more (but just a little) I think the answer is probably yes; you could apply analogs of many of the usual statistical tests for randomness to such
arrays, and find out that actually the method you were using wasn't very random at all.
Section 3.3 of Knuth's The Art of Computer Programming goes into these tests in great detail.
Setion 3.4.2 discusses random shuffling methods, including the Fisher-Yates algorithm, and then says:
R. Salfi has pointed out that Algorithm P cannot possibly generate more than m distinct permutations when we obtain the uniform U's with a linear congruential sequence of modulus m,
or indeed whenever we use a recurrence U[n+1] = f(U[n]) for which U[n] can take only m different values, because the final permutation in such cases is entirely determined by the
value of the first U that is generated. Thus for example, if m = 2^32, certain permutations of 13 elements will never occur, since 13! ≅ 1.42 × 2^32.
This was exactly my concern when I originally said that generating truly random permutations was impossible with the pseudorandom generators supplied with most programming languages,
including Perl. The space of permutations is just too big. However, Knuth continues:
Section 3.5 shows that we need not despair about this.
I haven't read section 3.5 in a long time, and I don't have time to pore over it now to figure out what Knuth meant by that, unfortunately. So I really don't know what the situation is.
So the question I think you need to ask here is whether a sequence of arrays shuffled by your method would be distinguishable from a sequence of arrays shuffled into truly random orders.
Oh, I totally agree now that I have had a night's rest and a few good replies. It was one of those things where, in trying to make a convincing argument, I lost sight of the forest
through the trees. As I indicated in another reply, I considered changing the method to change what elements appeared in each group for every iteration. Without that change, it
should be easy to see that the re-ordering isn't random.
I don't mind being wrong for the sake of a good discussion.
Said Limbic~Region:
As I indicated in another reply, I considered changing the method to change what elements appeared in each group for every iteration. Without that change, it should be easy
to see that the re-ordering isn't random.
As I kept saying on IRC, it doesn't matter how you change the method, because the results won't be random regardless of what method you choose. If you only have enough entropy to
randomize 8 things, and you try to generate random permutations of 64 things, you are not going to be able to generate all possible permutations.
You asked for a method that I could use to tell that your method wasn't working. Here's one: give me a sample of 200,000 of the "randomized" 64-element lists. You said "(Assume
we have 64 elements in our list but only enough entropy to truly randomize 8)." Then I can tell there's something fishy because the sample you gave me will have lots of duplicate
lists in it. You have only about log(8!) = 15.3 bits of entropy, so in the 200,000 items, there will be several that appear five times each. But the probability of such an
occurrence in a truly random sample of 200,000 shuffled lists is about 10^-356. So it'll be totally obvious that you're cheating.
This will be true regardless of what clever method you are using to mix up the items. You can't get blood from a stone. As I said last night, what you're doing is like taking a
16-bit PRNG and then using it to generate 32-bit numbers by concatenating two 16-bit numbers together. If you hand someone a list of your "random" 32-bit numbers, it will be
clear that they aren't really random.
Re: Random Math Question
by fluxion (Monk) on Oct 11, 2005 at 04:00 UTC
Re: Random Math Question
by sfink (Deacon) on Oct 11, 2005 at 04:09 UTC
The only precise definition of "random" that seems possible is if every possible permutation of the list is equiprobable. Which means that the minimum number of bits of entropy you need
is log2(100_000!), because if you had any less then there will be at least one permutation that is not derived from one of the seeds, and thus is not equiprobable.
That's a big number. Um... quick google search on "factorial approximation" shows that ln(100_000!) is about 100_000*ln(100_000)-100_000, and e is about 2, so you need about 100_000*ln
(100_000) which is about 1.2 million bits.
That's really not all that much information -- but way more than a computer can typically provide in a short amount of time. It's not a language issue; in fact, it's more of a hardware
issue. Just like us, computers can only perceive the world through external devices (senses), and so they can't "make up" bits of entropy (aka truly random numbers) on their own. They
need some sort of device that feeds in bits of entropy gradually from thermal noise or whatever.
Linux implements this, in the form of /dev/random. It pulls entropy from IRQ timings, mouse movements, keyboard event interarrival times, and more stuff that I don't understand. I see
one person who made his disks go crazy to measure the rate of entropy "production", and he saw 1-1.5 KB/sec. So that would take 20 minutes to come up with enough bits to randomize your
list of 100_000 things. Not infeasible, though most applications would rather not wait that long!
Of course, such truly random numbers generally aren't distinguishable from good pseudorandom numbers, so I wanted to mention my favorite way of gauging randomness:
In the early, early days of the web, I stumbled across a site (I think it was an HTML site; can't remember for sure) with a random number contest. People would send in their opinions of
the "most random number" between 1 and 100, and after a suitable period of time waiting for all submissions, the winner would be announced -- whichever number was chosen by the fewest
Works for me.
Re: Random Math Question
by blazar (Canon) on Oct 11, 2005 at 08:51 UTC
I popped into #perl today to catch the tail end of a question with Dominus and another chatter regarding randomizing a list of 100_000 elements. Dominus stated he would be surprised
if any language had any built-in random number generator suitable for randomizing 100K elements because of the amount of entropy needed.
A possibly relevant link that doesn't seem to have been mentioned yet: The Marsaglia Random Number CDROM including the Diehard Battery of Tests.
Re: Random Math Question
by Moron (Curate) on Oct 11, 2005 at 11:26 UTC
The concept of randomness carries with it a puzzling paradox. For example, whatever test you develop for randomness, one can argue that anything which passes the test conforms to a defined pattern and so cannot be random, the conclusion being that the validity of any such test is disproved by reductio ad absurdum. Localised shuffling cannot be even pseudo-random
because a minimum definition for randomness has to include that all candidates have an equal chance of selection. Obviously pseudo-random doesn't quite do that, although in a way it does
if you can manage to hide some of the calculation details from yourself, so that you can't make skewed predictions like the result being in a certain range.
Free your mind
Re: Random Math Question
by Perl Mouse (Chaplain) on Oct 11, 2005 at 13:26 UTC
I don't have much to add after Dominus's remarks, except perhaps rephrasing it a little.
If you have a set of N things, then there are N! possible permutations, or shufflings, of that set. A shuffle algorithm is fair (and random) if every possible permutation can be a result
of the algorithm, and with an equal chance. For instance, if you have the set "A, B, C", the algorithm should return "A, B, C" 1 out of 6 times, "A, C, B" 1 out of 6 times, "B, A, C" 1
out of 6 times, etc.
But the only difference between different runs of the program is the "state" of you random number generator, for whatever "state" is meaningful (seed for a typical PRNG, bitsequence to
be read from /dev/u?random). Your sequence of random numbers must be able to distinguish between at least N! - in this case 100,000!. For a PRNG that's available in Perl or many OSses,
where the sequence returned depends only on the seed, it means the seed must be large enough to have at least 100,000! different states.
As others have calculated, that requires more than a million bits. And this is only a lower bound, your algorithm must be good enough as well.
Alternatively, you must read more than a million bits from some device that generates random bits. /dev/random on Linux for instance, or a camera in front of a lava lamp. Brownian motion
and radioactive decay are other good sources to generate random bits with, although I do not know of any implementations.
Perl --((8:>*
Re: Random Math Question
by tbone1 (Monsignor) on Oct 11, 2005 at 13:47 UTC
Dominus stated he would be surprised if any language had any built-in random number generator suitable for randomizing 100K elements because of the amount of entropy needed.
If they could get our upper management in the code, they'd have enough entropy to cause the heat-death of the universe.
tbone1, YAPS (Yet Another Perl Schlub)
And remember, if he succeeds, so what.
- Chick McGee
Node Type: perlquestion [id://498952]
Approved by Skeeve
Front-paged by Aristotle
| {"url":"http://www.perlmonks.org/index.pl?node_id=498952","timestamp":"2014-04-21T00:07:46Z","content_type":null,"content_length":"61241","record_id":"<urn:uuid:f93733a9-c5d2-41ee-be23-23978fbfc290>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
The Right Way To Teach Sorting
June 08, 2011
Last week, I said that I had what I thought was a better way to teach sorting. This article describes that better way.
Last week, I said that C++ textbooks often make what I consider to be four mistakes:
• They teach readers how to write their own sorting algorithms before using the standard library.
• The first sort algorithm they teach has quadratic performance.
• They never teach how to write a sort algorithm that performs better.
• They never get around to explaining what a stable sort is or why it is useful.
I also said that I had what I thought was a better way to do it. This article describes that better way.
Programs that sort often have to deal with three separate, but interrelated, problems:
• Sorting
• Merging
• Comparison
I am mentioning comparison as a separate problem because students so often get it wrong. For example, I've lost count of how many times I've seen code like this:
struct Point { int x, y; };
bool compare(const Point& p, const Point& q)
{
    if (p.x < q.x && p.y < q.y)
        return true;
    return false;
}
Of course, the compare function could have used one line in place of four:
return p.x < q.x && p.y < q.y;
but either way, the code is wrong — at least if the intent is to use compare for sorting.
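To see concretely why, here is a small sketch (in Python, with points as plain tuples rather than the C++ struct): the "both coordinates smaller" test is not a strict weak ordering, because elements that are mutually incomparable under it are not transitively equivalent, a property sorting algorithms are entitled to assume.

```python
def bad_less(p, q):
    # The common mistake: "less" only when BOTH coordinates are smaller.
    return p[0] < q[0] and p[1] < q[1]

def good_less(p, q):
    # Lexicographic order: a genuine strict weak ordering.
    return p < q  # tuples compare lexicographically in Python

a, b, c = (0, 5), (3, 3), (1, 6)
# Under bad_less, a and b are "equivalent" (neither is less), and so are b and c...
assert not bad_less(a, b) and not bad_less(b, a)
assert not bad_less(b, c) and not bad_less(c, b)
# ...yet a IS strictly less than c, so "equivalence" is not transitive.
assert bad_less(a, c)
```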
In order to avoid this common mistake, I think that it is important to be sure that students understand what comparison means before they learn how to sort. In order to do so, I think that it is a
good idea to begin with merging, which is an easier problem than sorting.
Not only that, but once you know how to merge, you know how to sort, because of the following algorithm:
1. Is your sequence of length 0 or 1? If so, you're done.
2. Divide the sequence approximately in half, yielding two subsequences with lengths that differ by no more than 1.
3. Sort each subsequence.
4. Merge the two sorted subsequences.
You might think that (3) is problematic: How can we sort a sequence without first knowing how to sort it? However, (1) shows us how to sort very short sequences, and each time we reach (3), we make a
recursive call that deals with ever-shorter sequences until eventually we reach (1).
This algorithm is, of course, a recursive implementation of Mergesort. If it is implemented correctly, it is stable, and it runs in O(n log n) time. Of course, it consumes extra space; but O(n) extra
space is usually much better than O(n²) time.
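The four steps above can be sketched as follows (in Python rather than C++, and writing the merge out by hand rather than calling a library routine):

```python
def merge_sort(seq):
    """Recursive, stable mergesort following steps 1-4 above."""
    if len(seq) <= 1:                # step 1: length 0 or 1 is already sorted
        return list(seq)
    mid = len(seq) // 2              # step 2: split roughly in half
    left = merge_sort(seq[:mid])     # step 3: sort each half recursively
    right = merge_sort(seq[mid:])
    merged, i, j = [], 0, 0          # step 4: merge the sorted halves
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            merged.append(right[j]); j += 1
        else:                        # on ties, take from the left run
            merged.append(left[i]); i += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

The tie-breaking in the merge (taking from the left run on equal keys) is what makes the sort stable.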
Therefore, I think that a reasonable way to teach students about sorting is:
1. Use the standard sort algorithm.
2. Show students how to write a comparison function, explaining about order relations along the way.
3. Show how to merge sorted sequences, first using the standard merge algorithm and then writing it explicitly.
4. Show how to write Mergesort by combining merging and recursion.
It is probably true that these four steps together take more teaching time than simply instructing students on how to write a bubble sort or an insertion sort. However, not only do the students learn
about sorting, but they also learn about merging, comparison, and recursion. Moreover, they do so without picking up any bad habits along the way.
In geometry, a horizontal line is one which runs from left to right across the page. It comes from the word 'horizon', in the sense that horizontal lines are parallel to the horizon.
The horizon is horizontal
Its cousin is the vertical line which runs up and down the page. A vertical line is perpendicular to a horizontal line. (See perpendicular lines).
Coordinate geometry
A line will be horizontal if any two points on the line have the same y-coordinate.
See Horizontal line (coordinate geometry).
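That test can be sketched in a couple of lines of Python, representing points as (x, y) pairs:

```python
def is_horizontal(p, q):
    """Two distinct points lie on a horizontal line iff their y-coordinates match."""
    return p[1] == q[1]

assert is_horizontal((1, 4), (7, 4))      # same y: horizontal
assert not is_horizontal((1, 4), (1, 9))  # same x instead: vertical
```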
(C) 2009 Copyright Math Open Reference. All rights reserved
Grove Hall, MA Calculus Tutor
Find a Grove Hall, MA Calculus Tutor
...I obtained my Master's degree from MIT and my bachelor's degree from the University of Minnesota. I was employed my junior and senior years to tutor undergraduates. Most of my tutoring
experience was with physics and calculus material.
10 Subjects: including calculus, physics, geometry, algebra 2
...I take the topics in Algebra, Exponentials, Logarithms, Trigonometry, and make them easier to understand. Once my students begin to understand the material, positive results usually follow. I
have taught a course involving statistics and concentrated in several stats courses at the PhD level.
24 Subjects: including calculus, chemistry, physics, statistics
...I am familiar with a few Java IDEs as well, so I am able to tutor from a versatile standpoint. I received excellent scores in all areas on my first and only attempt at the SAT. I have
excellent reading and communication skills, and a background in Latin to help with vocabulary.
38 Subjects: including calculus, reading, English, writing
...Whether a student is trying to avoid dropping down a level, or if a student aspires to move up a level next year, students have been able to achieve that while working with me. Regardless of
the hole that students may dig themselves into, virtually every student I have ever worked with has incre...
13 Subjects: including calculus, geometry, GRE, algebra 1
...I am majoring in psychology (BA) and minoring in Hispanic studies, but as a student at a liberal arts college, I am continuing to explore different interests as much as my schedule allows me
to in academic fields such as linguistics, French, math, history, and English. I have a true passion for ...
30 Subjects: including calculus, reading, English, elementary (k-6th)
Related Grove Hall, MA Tutors
Grove Hall, MA Accounting Tutors
Grove Hall, MA ACT Tutors
Grove Hall, MA Algebra Tutors
Grove Hall, MA Algebra 2 Tutors
Grove Hall, MA Calculus Tutors
Grove Hall, MA Geometry Tutors
Grove Hall, MA Math Tutors
Grove Hall, MA Prealgebra Tutors
Grove Hall, MA Precalculus Tutors
Grove Hall, MA SAT Tutors
Grove Hall, MA SAT Math Tutors
Grove Hall, MA Science Tutors
Grove Hall, MA Statistics Tutors
Grove Hall, MA Trigonometry Tutors
Nearby Cities With calculus Tutor
Cambridgeport, MA calculus Tutors
Dorchester, MA calculus Tutors
East Braintree, MA calculus Tutors
East Milton, MA calculus Tutors
East Watertown, MA calculus Tutors
Kenmore, MA calculus Tutors
North Quincy, MA calculus Tutors
Quincy Center, MA calculus Tutors
Readville calculus Tutors
South Boston, MA calculus Tutors
South Quincy, MA calculus Tutors
Squantum, MA calculus Tutors
West Quincy, MA calculus Tutors
Weymouth Lndg, MA calculus Tutors
Wollaston, MA calculus Tutors
NVIDIA Interview Questions
Consider the statement
result = a ? b : c;
Implement the above statement without using the ternary operator or any conditional statements, so that result gets b or c depending on the value of a. Assume a is either 1 or 0 for simplicity.
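One common branch-free trick, sketched in Python under the assumption that a is 0 or 1 (in C the same idea reads result = a*b + !a*c; the function name is illustrative):

```python
def select(a, b, c):
    """result = a ? b : c without any conditionals, assuming a is 0 or 1."""
    return a * b + (1 - a) * c

assert select(1, 10, 20) == 10  # a true: picks b
assert select(0, 10, 20) == 20  # a false: picks c
```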
Write a multi threaded C code with one thread printing all even numbers and the other all odd numbers. The output should always be in sequence
ie. 0,1,2,3,4....etc
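A minimal sketch of one way to do it, shown here in Python with a condition variable rather than C with pthreads; turn, worker, and N are illustrative names, and the same pattern maps onto a pthread_mutex_t plus pthread_cond_t in C:

```python
import threading

N = 10
turn = 0                 # 0 => even thread's turn, 1 => odd thread's turn
cond = threading.Condition()
out = []

def worker(parity):
    global turn
    for n in range(parity, N, 2):
        with cond:
            while turn != parity:   # block until it is this thread's turn
                cond.wait()
            out.append(n)
            turn = 1 - parity       # hand over to the other thread
            cond.notify()

even = threading.Thread(target=worker, args=(0,))
odd = threading.Thread(target=worker, args=(1,))
even.start(); odd.start()
even.join(); odd.join()
print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```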
In Linux, we use virtual addresses, so each process thinks it has a 4 GB memory space even if the physical memory is only 2 GB. Now suppose we have no MMU and programmers use real physical
addresses in their programs. We only have a small amount of physical memory. How can we design the system?
We run two video game benchmarks on our newly designed SoC. The two benchmarks use the same instruction set. The benchmark with higher power consumption always works well, while the other
one always gets stuck. What could the problems be?
In the new mobile phone, we can choose either a 1 GHz single-core or a 500 MHz dual-core processor. What tradeoffs should we consider?
This is a hardware design problem; I could not figure it out. Suppose we have 2 pipelined hardware multipliers (or similar units). One works at 1 GHz with 2 operations executed in parallel at the
same time; the other works at 500 MHz with 4 operations. Suppose we have transistors of 3 types (low leakage (30%), middle leakage (50%), and high leakage (70%)), where "leaky" refers to the
transistor's leakage power. Which multiplier should we use for each of the 3 types of transistors?
How will you implement run-time polymorphism in C? There are two structs. There is a common function receiving only one argument (only one). The function should accept both base-struct and derived-
struct objects and do the corresponding action, i.e. if a base-struct object is passed, do the base struct's task, and vice versa.
The interviewer asked the following question.
char *s = "Hello";
The second print statement crashes sometimes. Why
Given an int, write code to return the number of bits that are 1 in O(m) time, where m is the number of bits that are 1.
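A standard O(m) answer is Brian Kernighan's trick: n & (n - 1) clears the lowest set bit, so the loop body runs exactly once per 1-bit. A sketch in Python (the same loop works verbatim in C):

```python
def count_set_bits(n):
    """Count 1-bits in O(m) time, m = number of set bits:
    each iteration of n &= n - 1 clears exactly one set bit."""
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

print(count_set_bits(0b101101))  # 4
```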
Create the mirror image of a binary tree.
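One way, sketched in Python with a minimal Node class (names are illustrative): recursively swap the left and right children at every node.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def mirror(node):
    """Swap left/right subtrees recursively, mirroring the tree in place."""
    if node is not None:
        node.left, node.right = mirror(node.right), mirror(node.left)
    return node

def inorder(node):
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

root = Node(2, Node(1), Node(3))   # in-order: [1, 2, 3]
mirror(root)
print(inorder(root))  # [3, 2, 1]
```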
For a given US based phone number, write a function to return all possible alphanumberic words that can be formed with that number, based on the keypad of a standard phone.
For example, one possible value for 1-800-623-6537 could be 1-800-MCDNLDS
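A brute-force sketch in Python using itertools.product; KEYPAD and keypad_words are illustrative names. Digits without letters (0 and 1) and punctuation pass through unchanged, and note that a full 11-digit number produces a combinatorially large list, so real code would filter candidates against a dictionary:

```python
from itertools import product

KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def keypad_words(number):
    """All letter renderings of a phone number on a standard keypad."""
    choices = [KEYPAD.get(ch, ch) for ch in number]
    return ["".join(w) for w in product(*choices)]

print(keypad_words("23"))
# ['AD', 'AE', 'AF', 'BD', 'BE', 'BF', 'CD', 'CE', 'CF']
```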
Write a function to generate a second array of numbers containing running average of N elements from the original array
So for instance if the original array is,
2,6,4,2,3 and N=3
result = 2,4,3,4,3
you can assume the corner elements can be filled with original elements where there are not enough elements to take avg of N elements
You have written a memory manager and after using it your coworker complains that he is facing severe issues of fragmentation. What could be the reason(s) and how can you fix it
What are mipmaps and why are they used?
Why do we use a 4x4 matrix for representing and calculating transformations of 3D points when that could be done with only a 3x3 matrix?
(The concept of homogeneous coordinates and how they allow translation to be included as a matrix operation.)
What are the different shaders (vertex, geometry, pixel)? Describe their roles and the order in which they are executed in the graphics pipeline.
Given y bytes, and you can transfer only x bytes at once: give a mathematical expression using only + - / * which gives the number of iterations needed to copy y bytes. (Don't give answers
that use the modulo operator.)
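One standard answer, assuming truncating integer division: the ceiling of y/x can be written as (y + x - 1) / x, with no modulo needed. A sketch in Python, where // is integer division:

```python
def transfers_needed(y, x):
    """Number of x-byte transfers to move y bytes, using only + - * /:
    ceil(y / x) == (y + x - 1) // x for truncating integer division."""
    return (y + x - 1) // x

assert transfers_needed(10, 4) == 3   # 4 + 4 + 2
assert transfers_needed(8, 4) == 2    # exact multiple
assert transfers_needed(1, 4) == 1
```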
Sequence of steps that happen in CPU, cache, TLB, VM, HDD leading to execution of “x = 7” which isn’t present in cache or sysmem nor translation in TLB. Also specify if any intrs, exceptions or
faults are generated.
Word growth of groups - Z^d has polynomial growth degree d
May 2nd 2012, 06:15 AM #1
I am trying to prove that Z^d has polynomial growth rate degree d. This seems to be a standard fact but I haven't come across a proof.
My attempt is:
Let {x_1, ..., x_d} be the standard generating set.
Then any word of length less than or equal to r is a product of the form x_1^(s_1) ... x_d^(s_d), where the s_i are integers and the sum of the |s_i| is less than or equal to r. So we can view
this as choosing d words in Z, each of length less than or equal to r. Hence the number of words of length less than or equal to r is at most (2r+1)^d. (The growth rate of Z is 2r+1,
proved by induction.)
So now we need to bound this below by a polynomial of degree d. My attempt at doing this is as follows: the number of words of length less than or equal to d*r is greater than or equal to (the
number of words in Z of length less than or equal to r)^d, i.e. (2r+1)^d. So we have bounded the growth rate at d*r below, for each r, by a polynomial in r of degree d.
Is this sufficient to conclude that the growth rate is polynomial of degree d?
Is there a proof of this fact which gives a formula for the growth function of Z^d (with respect to the standard basis)?
Thanks in advance for any help!
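For what it is worth, here is a sketch of both points, to be double-checked. Comparing growth at r against growth at d*r is harmless, because growth functions of groups are only compared up to the equivalence b_1 ≼ b_2 when b_1(r) ≤ C b_2(Cr) for some constant C, and r ↦ dr is exactly such a rescaling; so the two bounds together do give polynomial growth of degree exactly d. There is also a closed formula for the ball of radius r in Z^d with the standard generators, obtained by counting points according to the number k of nonzero coordinates (choose which k coordinates are nonzero, their signs, and their absolute values summing to at most r):

```latex
\beta_{\mathbb{Z}^d}(r) \;=\; \sum_{k=0}^{d} 2^{k} \binom{d}{k} \binom{r}{k}
```

Both bounds can be read off from this, since the right-hand side is a polynomial in r of degree d with leading coefficient 2^d / d! (and for d = 1 it recovers 2r + 1).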
Math 4 GED
Perimeter is the measure of the distance around a shape. The most basic way to calculate a perimeter is by adding up the length of all the sides. However, there are situations where you won’t be
given the length of all the sides, or might just want to calculate the perimeter faster. In these situations you can use formulas to calculate perimeters of specific shapes. These formulas will be
available to you during the GED math test.
Perimeter of a Square
Perimeter = 4 x side
When you need to find the perimeter of a square all you need to do is multiply one side by four.
Perimeter of a Rectangle
Perimeter = 2 x length + 2 x width
Multiply one of the short sides by two, multiply one of the long sides by two, then add those two numbers together.
Perimeter of a Triangle
Perimeter = side1 + side2 + side3
If the triangle does not have a right angle, the way to find the perimeter is by adding up the length of all three sides.
3 + 4 + 5 = 12
If you need to find the perimeter of a triangle with a right angle, you can find the perimeter even if you only know the lengths of two of the sides.
You do that by squaring each known side and using the Pythagorean Theorem (a² + b² = c²) to find the missing side: 5² + 4² = 25 + 16 = 41, so the third side is √41 ≈ 6.4.
Now you just add the three sides together as above to get the perimeter.
5 + 4 + 6.4 = 15.4
Circumference of a Circle
Circumference = π x diameter
When you talk about the distance around a circle the term circumference is used instead of radius. The diameter is how wide a circle is at its widest point.
π is approximately 3.14, so just use 3.14 for π (pi).
If you are only given a circle’s radius in a question, you can still find the circumference by multiplying the radius by two before multiplying by π, since the radius is half the diameter.
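The formulas above can be collected into a few small functions; a sketch in Python, where math.hypot computes the hypotenuse from the two legs via a² + b² = c² (function names are illustrative):

```python
import math

def perimeter_square(side):
    return 4 * side

def perimeter_rectangle(length, width):
    return 2 * length + 2 * width

def perimeter_right_triangle(a, b):
    """Perimeter from the two legs; math.hypot gives the hypotenuse."""
    return a + b + math.hypot(a, b)

def circumference(diameter):
    return math.pi * diameter

print(round(perimeter_right_triangle(5, 4), 1))  # 15.4, matching the example above
```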
How do you find the exact degrees of an angle?
Never mind! I read triangle properties and there it was but thank you anyways! your examples are AMAZING!!:)
An the exmaple of Pythagorean Theorem hw did you come up with the answer 41.
• The 41 comes from adding 25 and 16 together. The 25 comes from squaring (multiplying a number by itself) the length of one of the sides of the triangle that we know, 5. Five times 5 is 25. The 16
comes from squaring the other side we know, 4. I have some more about the Pythagorean theorem here.
□ how did you get 6.4 from? I am so confused!
☆ 6.4 is the square root of 41
how did you get 6.4 as the square root of 41?
I need to know how did you get 6.4 as the squareroot of 41 ?????? Very confused on this one!!
I,m still coming up with 1681 as the squareroot of 41 on my casio-fx260!!!& I,ve been using this brand of calculator for a while.Please tell did I miss something? Or can you explain how you came up
with the 6.4??? Need assistance ASAP! Thanks
• Hi, I think you’re confusing square roots and squares. When you find the square root of something you find what number times itself equals your number. When you square something you multiply that
number by itself. For example the square root of 16 is 4 since 4 times 4 is 16. But 16 squared is 256 (16 time 16). On the casio-fx260 the square and square root share the same button. It’s in
the middle of the top row of buttons and looks like x with a small two floating next to it. When you want to square a number, hit the number and that button. That’s what you did with 41. But if
you want the square root you want to use the button’s function in yellow above the button. It looks like a check mark with a line comming out of it. To use the yellow function of a button hit
shift then the button. So you put 41, shift (top left button), then the same button you used for squaring. You should get 6.4 something. I just rounded to 6.4 in the example above.
□ Wow ! Was,nt thinking logically ! Thanks sooooooo much for everything !
Someone just told me that I need a scientific calculator to find the squareroot of 41,is this true?? From my understanding the casio-fx260 is as scientific as its gonna get!!!
• You don’t need to have a scientific calculator to find square roots, but scientific calculators can also be used to find the square root. Most basic calculators have the square root button. The
casio-fx260 has it too, just hit 41 and the square root button.
□ I luv your site
In my GED 2013, every problem ask me to use ratios like 3:4:5 to solve if I don’t have any other variables to use its so confusing, Does Sin and cos better to use?
i got 6.5 can you show me what to do to find 6.4 rather than just the answer or using a calculator
Now you just add the tree sides together as above to get the perimeter
41 ( shift ) the second key from shift= 6.403124237 ( using the casio fx-260 )
So, 5+4+6.4=15.4
O.M.G… this was so help, wonder why i didnt find this site sooner? Im taking the ged math 2morrow and so NERVOUS… but i have faith that i will pass now knowing what i know about these formulas…
Thanks a Bunch, and Wish me LUCK!!! | {"url":"http://www.math4ged.com/perimeter/","timestamp":"2014-04-20T10:47:03Z","content_type":null,"content_length":"69541","record_id":"<urn:uuid:db417d86-6cbf-4f87-9299-ff900b00d3b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 70
, 1999
"... Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a
training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized ..."
Cited by 784 (8 self)
Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training
corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain-specific synonymy as well as with polysemous words. In contrast
to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval
experiments on a number of test collections indicate substantial performance gains over direct term matching methodsaswell as over LSI. In particular, the combination of models with different
dimensionalities has proven to be advantageous.
- IEEE TRANS. INFORM. THEORY , 1998
"... The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in
modulation and analog-to-digital conversion was first recognized during the early development of pulsecode modula ..."
Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and
analog-to-digital conversion was first recognized during the early development of pulsecode modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett
published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which
would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its
origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
- In Proc. of Uncertainty in Artificial Intelligence, UAI’99 , 1999
"... Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two--mode and co-occurrence data, which has applications in information retrieval and filtering,
natural language processing, machine learning from text, and in related areas. Compared to standard Latent Sema ..."
Cited by 529 (6 self)
Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two--mode and co-occurrence data, which has applications in information retrieval and filtering, natural
language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of
co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics.
In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over
Latent Semantic Analysis in a number of experiments.
- Machine Learning , 2001
"... Abstract. This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to
the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurren ..."
Cited by 446 (4 self)
Abstract. This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the
latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a
probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled
version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently
in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data
collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic
- Proceedings of the IEEE , 1998
"... this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads
of several diverse scientific and engineering disciplines including statistics and probability theory, ph ..."
Cited by 248 (11 self)
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of
several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and
psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and
renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been
attacking fundamentally similar optimization problems.
- In The First International Workshop on MapReduce and its Applications , 2010
"... MapReduce programming model has simplified the implementation of many data parallel applications. The simplicity of the programming model and the quality of services provided by many
implementations of MapReduce attract a lot of enthusiasm among distributed computing communities. From the years of e ..."
Cited by 91 (8 self)
MapReduce programming model has simplified the implementation of many data parallel applications. The simplicity of the programming model and the quality of services provided by many implementations
of MapReduce attract a lot of enthusiasm among distributed computing communities. From the years of experience in applying MapReduce to various scientific applications we identified a set of
extensions to the programming model and improvements to its architecture that will expand the applicability of MapReduce to more classes of applications. In this paper, we present the programming
model and the architecture of Twister an enhanced MapReduce runtime that supports iterative MapReduce computations efficiently. We also show performance comparisons of Twister with other similar
runtimes such as Hadoop and DryadLINQ for large scale data parallel applications.
"... We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which
enables to incorporate unlabeled data in the standard supervised learning. This regularizer can be applied to ..."
Cited by 81 (2 self)
We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables
to incorporate unlabeled data in the standard supervised learning. This regularizer can be applied to any model of posterior probabilities. Our approach provides a new motivation for some existing
semi-supervised learning algorithms which are particular or limiting instances of minimum entropy regularization. A series of experiments illustrates that the proposed solution benefits from
unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy
regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the “cluster assumption”. Finally, we also illustrate that the
method can be far superior to manifold learning in high dimension spaces, and also when the manifolds are generated by moving examples along the discriminating directions.
, 1993
"... Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which
jointly optimizes distortion errors and the codebook complexity, thereby, determining the size of the codebook. ..."
Cited by 54 (18 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly
optimizes distortion errors and the codebook complexity, thereby, determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference
vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic
quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray
level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained
vector quantizati...
- Fourth IEEE International Conference on eScience
"... Most scientific data analyses comprise analyzing voluminous data collected from various instruments. Efficient parallel/concurrent algorithms and frameworks are the key to meeting the
scalability and performance requirements entailed in such scientific data analyses. The recently introduced MapReduc ..."
Cited by 45 (9 self)
Most scientific data analyses comprise analyzing voluminous data collected from various instruments. Efficient parallel/concurrent algorithms and frameworks are the key to meeting the scalability and
performance requirements entailed in such scientific data analyses. The recently introduced MapReduce technique has gained a lot of attention from the scientific community for its applicability in
large parallel data analyses. Although there are many evaluations of the MapReduce technique using large textual data collections, there have been only a few evaluations for scientific data analyses.
The goals of this paper are twofold. First, we present our experience in applying the MapReduce technique for two scientific data analyses: (i) High Energy Physics data analyses; (ii) Kmeans
clustering. Second, we present CGL-MapReduce, a stream based MapReduce implementation and compare its performance with Hadoop. 1.
- IEEE Trans. Pattern Analysis and Machine Intelligence , 2003
"... Abstract—For several major applications of data analysis, objects are often not represented as feature vectors in a vector space, but rather by a matrix gathering pairwise proximities. Such
pairwise data often violates metricity and, therefore, cannot be naturally embedded in a vector space. Concern ..."
Cited by 43 (4 self)
Abstract—For several major applications of data analysis, objects are often not represented as feature vectors in a vector space, but rather by a matrix gathering pairwise proximities. Such pairwise
data often violates metricity and, therefore, cannot be naturally embedded in a vector space. Concerning the problem of unsupervised structure detection or clustering, in this paper, a new embedding
method for pairwise data into Euclidean vector spaces is introduced. We show that all clustering methods, which are invariant under additive shifts of the pairwise proximities, can be reformulated as
grouping problems in Euclidean spaces. The most prominent property of this constant shift embedding framework is the complete preservation of the cluster structure in the embedding space. Restating
pairwise clustering problems in vector spaces has several important consequences, such as the statistical description of the clusters by way of cluster prototypes, the generic extension of the
grouping procedure to a discriminative prediction rule, and the applicability of standard preprocessing methods like denoising or dimensionality reduction. Index Terms—Clustering, pairwise proximity
data, cost function, embedding, MDS.
Image Processing with Maple
Version 10 of Maple contains the ImageTools package, which provides some basic image operations. In this example we will use Maple to construct an edge detector, applied to the following image from the Florence, Tuscany post.
with (ImageTools);
Now, using the Read command, we can read the image into Maple and represent it as a 3-dimensional array. The first dimension is the height, the next dimension is the width, and the last dimension corresponds to the three color planes (red, green, and blue).
img := Read ("galileo.jpg");
We would like to perform the edge detection on the grayscale version of the image. Using the ToGrayscale command, the RGB color image is converted to a grayscale image.
g_img := ToGrayscale (img);
We will extract the edges using two Sobel operators: one for the horizontal edge component and the other for the vertical edge component. The two operators are defined by the following two matrices:
Gx = Matrix ([[1,2,1], [0,0,0],[-1,-2,-1]]) Gy = Matrix ([[-1,0,1],[-2,0,2],[-1,0,1]])
Gx := Matrix ([[1,2,1], [0,0,0],[-1,-2,-1]]);
Gy := Matrix ([[-1,0,1],[-2,0,2],[-1,0,1]]);
To get the horizontal and vertical edge components, we use the Convolution command that performs 2 dimensional discrete convolution of the grayscale image with the two matrices.
img_x := Convolution (g_img, Gx);
img_y := Convolution (g_img, Gy);
To get the combined edge map, we take the absolute value of the horizontal and vertical components and add them together.
edge := Array (abs (img_x) + abs (img_y), datatype=float[8]);
The values in the edge map range from 0 to about 3.8; however, the valid range for an image is from 0 to 1. We must rescale all the pixel values so that the brightest pixel is 1. To do this, we first find the darkest and brightest pixel values, and the difference between them.
min_v, max_v := rtable_scanblock (edge, [rtable_dims (edge)], 'Minimum', 'Maximum');
delta_v := max_v - min_v;
Using these values we can remap the pixel intensities to fit between 0 and 1.
img_edge := Array ((edge-min_v)/delta_v, order=C_order, datatype=float[8]);
Finally, we use the Write command to write the image into a file.
Write ("edge.jpg", img_edge);
The final image looks like this, where the edges are highlighted in white. The worksheet includes all the necessary commands.
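For readers without Maple, the same pipeline can be sketched in plain NumPy. This is a minimal illustration only: the `convolve2d` helper and the synthetic test image are invented for the example and are not part of the original worksheet.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' 2D convolution (kernel flipped, as in true convolution)."""
    kh, kw = kernel.shape
    h, w = img.shape
    k = kernel[::-1, ::-1]  # flip for convolution rather than correlation
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Sobel kernels, exactly as defined in the text.
Gx = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
Gy = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# Synthetic grayscale "image" with a vertical step edge down the middle.
g_img = np.zeros((8, 8))
g_img[:, 4:] = 1.0

# Combined edge map: sum of absolute horizontal and vertical responses.
edge = np.abs(convolve2d(g_img, Gx)) + np.abs(convolve2d(g_img, Gy))

# Rescale pixel values to [0, 1], as in the Maple worksheet.
min_v, max_v = edge.min(), edge.max()
img_edge = (edge - min_v) / (max_v - min_v)
```

On this synthetic input only the vertical Sobel kernel responds (the rows are constant, so the horizontal response cancels), and after rescaling the edge map contains exactly 0s and 1s.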
Review of Vectors
The sum of two vectors u and v yields
What is the dot product of two perpendicular vectors?
Which vector operation is most helpful when trying to find the area of a parallelogram?
Which vector operation is most helpful when trying to find the projection of one vector onto another?
The cross product satisfies all of the following criteria EXCEPT
What happens when a vector is multiplied by a scalar?
What is the sum of the vectors that point to the vertices of a cube centered at the origin (in 3-dim space)?
Which of the following strategies is useful in answering the previous question?
Which of the following could not be the sum of two 3-dimensional vectors in the x - y plane?
Which of the following could not be the cross product of two 3- dimensional vectors in the x - y plane?
Which of the following is not a similarity between the dot product and the cross product?
What is (1, 2, 5) + 3(1, 0, - 1) ?
What is (1, 2, 5)·(1, 0, - 1) ?
What is (1, 0, 1)×(0, 14, 0) ?
What is (1, 0, 1)·(0, 14, 0) ?
What is the x -component of the cross product (13, 23, 33)×(- 3, 0, 0) ?
What is (1, 0)×(0, 1) ?
The displacement of a man walking through the desert can be represented by a vector: the magnitude of the vector corresponds to the distance he has walked, while the direction of the vector corresponds to the direction in which he has walked. (This is perhaps the most intuitive example of vectors being applied to real problems.) The next 5 questions will deal with such displacement vectors.
If your dog's displacement vector (in a place where your house sits at the origin) is given by (3, 5) , how far is she from home?
If a man walks 3 miles east, then 4 miles north, what is his final displacement vector (in miles)?
If a rabbit makes 3 successive displacements, given by the vectors u , v , and w , what is its total displacement vector from the starting point?
Ernesto Mojito is trying to find his way home late at night. He should have walked in the direction of the unit vector u , but ended up at a displacement v from his original starting point. How far
has he gone in the right direction?
Let El Palacio Real be at the origin of Madrid. Assume El Museo del Prado lies (in kilometers) at (4, 2) , and Metropolis lies at (3, 0) . How far must a tourist walk to go from Metropolis to Prado?
(Hint: first find the displacement vector using vector subtraction, then compute its magnitude.)
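The hint above can be carried out directly in code; a quick numeric check using the coordinates given in the question:

```python
import math

prado = (4, 2)       # El Museo del Prado, in km from El Palacio Real
metropolis = (3, 0)  # Metropolis

# Step 1: displacement vector via vector subtraction.
dx = prado[0] - metropolis[0]
dy = prado[1] - metropolis[1]

# Step 2: its magnitude is the walking distance (as the crow flies).
distance = math.hypot(dx, dy)
print(distance)  # sqrt(1^2 + 2^2) = sqrt(5) ≈ 2.236 km
```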
The velocity of a moving car can be represented by a vector: the magnitude of the vector corresponds to the speed of the car, the direction of the vector corresponds to the direction in which the car
is moving. If the car speeds up or slows down, the magnitude of its corresponding velocity vector changes (gets longer or shorter). If the car turns, the direction of its velocity vector is also
altered (and will rotate to point in the new direction in which the car is heading). This idea will be useful in answering the following 5 questions.
If a car has initial velocity vector v and then doubles in speed, what is the car's new velocity vector?
If a car with initial velocity vector (1, 0) turns 90 degrees to the right, without changing its speed, what is its new velocity vector?
What is the speed of a car with velocity (3, 4) ?
A car with initial velocity (3, 4) makes a 37 degree turn to the left, drives 30 minutes in this direction and then turns 85 degrees to the right. After another 20 minutes of travel, the car turns again, and it is now unclear in which direction the car is traveling. However, the car is still traveling at its original speed. Which of the following is a candidate for the present velocity of the car?
A car with initial velocity (1, 0, 0) gets driven off a cliff. Which of the following is a candidate for the car's final velocity as it hits the ocean? (Assume the positive z -direction points upward
to the sky).
A line passing through the origin in 3-dimensional space can be characterized by a vector v : all points on the line can be written in the form t v , where t is a real number ( t = 0 yields the origin itself), and the full set of points (ranging over all values of t) is the whole line. For a line that does not pass through the origin, all the points can be written in the form u + t v , where u is a particular vector which can be chosen at will from the points which lie on the line. This idea is central to the following 5 questions.
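This parametrization also makes point membership easy to test numerically: p lies on the line u + t v exactly when p - u is parallel to v, i.e. their cross product vanishes. A small sketch (the helper name and example points are made up for illustration):

```python
import numpy as np

def on_line(p, u, v, tol=1e-9):
    """True if point p lies on the line u + t*v (v assumed nonzero)."""
    d = np.asarray(p, dtype=float) - np.asarray(u, dtype=float)
    # d parallel to v  <=>  cross product is (numerically) zero
    return np.linalg.norm(np.cross(d, np.asarray(v, dtype=float))) < tol

u = np.array([1.0, 2.0, 1.0])
v = np.array([3.0, 0.0, -1.0])

print(on_line(u + 2.5 * v, u, v))      # True: the point at t = 2.5
print(on_line([0.0, 0.0, 0.0], u, v))  # False: origin is not on this line
```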
Which of the following is a set of two distinct parallel lines?
Which of the following is a point on the line given by (1, 2, 1) + t(3, 0, - 1) ?
If one has two lines, given by u [1] + t v [1] and u [2] + t v [2] , which condition ensures that they will be parallel?
If one has two lines, given by u [1] + t v [1] and u [2] + t v [2] , which condition ensures that they will be perpendicular?
In the plane, which line is NOT the same as (1, 0) + t(2, 3) ?
Fibonacci in Technical Analysis
Fibonacci tools utilize special ratios that naturally occur in nature to help predict points of support or resistance. The Fibonacci numbers are 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, etc.; each term in the sequence is the sum of the previous two (i.e. 1+1=2, 2+3=5). The main ratio used is 0.618, found by dividing a Fibonacci number by the next number in the sequence (55/89 ≈ 0.618). The logic most often used by Fibonacci-based traders is that since Fibonacci numbers occur in nature, and the stock, futures, and currency markets are the creations of a product of nature (humans), the Fibonacci sequence should apply to the financial markets as well. There are many Fibonacci tools used by traders; they include:
• Fibonacci Retracements
Arguably the most heavily used Fibonacci tool is the Fibonacci Retracement. To calculate the Fibonacci Retracement levels, a significant low to a significant high should be found. From there, prices
should retrace the initial difference (low to high or high to low) by a ratio of the Fibonacci sequence, generally the 23.6%, 38.2%, 50%, 61.8%, or the 76.4% retracement.
Note that a trendline was drawn from a significant low (beginning of trend) to a significant high (end of trend). The chart below shows that Fibonacci Retracements can be used to retrace downtrend moves as well: Fibonacci retracements act as support and resistance. Notice after the bottom that price rallied to the 23.6% retracement level and was then promptly rejected downwards. After breaking resistance a few months later, the 23.6% retracement became support. Price rallied up to the 50% retracement level, where it ran into resistance. Price continued to fluctuate between the 38.2% retracement level (acting as support) and the 50% retracement level (acting as resistance).
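The retracement arithmetic described above is straightforward; a small sketch (the swing low and high prices are invented for illustration, and the ratio set from the text is assumed):

```python
def fib_retracements(low, high):
    """Price levels to which an up-move from `low` to `high` may retrace."""
    ratios = [0.236, 0.382, 0.5, 0.618, 0.764]
    span = high - low
    # A retracement of ratio r pulls back r * span from the high.
    return {r: high - r * span for r in ratios}

levels = fib_retracements(low=100.0, high=150.0)
for r, price in sorted(levels.items()):
    print(f"{r:.1%} retracement: {price:.2f}")
```

For a 100-to-150 move, the 50% retracement sits at 125.00 and the 61.8% retracement at 119.10.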
• Fibonacci Arcs
Fibonacci Arcs are percentage arcs based on the distance between major price highs and price lows. Therefore, with a major-high-to-major-low distance of 100 units, the 38.2% Fibonacci Arc would be a 38.2-unit semi-circle.
After the significant bear market, the rally was stopped by the 50% arc, which acted as resistance; the 38.2% arc then gave support, and price bounced between the 50% arc and the 38.2% arc for many months. After price broke through the resistance arc at 50%, it moved up to the next significant Fibonacci ratio, 61.8%, where it found a new resistance level. The prior resistance level at 50%, once broken, became a new support level. The next Fibonacci arc was at 100%, where price met resistance.
• Fibonacci Fans
Fibonacci Fans use Fibonacci ratios based on time and price to construct support and resistance trendlines; also, Fibonacci Fans are used to measure the speed of a trend's movement, higher or lower.
If prices move below a Fibonacci Fan trendline, then price is usually expected to fall further until the next Fibonacci Fan trendline level; therefore, Fibonacci Fan trendlines are expected to serve
as support for uptrending markets. Likewise, in a downtrend, if price rises to a Fibonacci Fan trendline, then that trendline is expected to act as resistance; if that price is pierced, then the next
Fibonacci Fan trendline higher is expected to act as resistance. The Fibonacci ratio is also used to predict areas of time in which price could change course.
• Fibonacci Time Extensions
Fibonacci Time Extensions are used to predict periods of price change (i.e. lows or highs). For example, after a downtrend, a reversal would be expected at a significant Fibonacci Time Extension
line. Similarly, after an uptrend, a reversal warning could occur if a Fibonacci Time Extension was soon approaching. The Fibonacci Time Extension tool is created by locating a significant high (low)
and finding a significant retracement or extension low (high).
Fibonacci Tools are very popular, possibly the very reason that they appear to work. Whether or not a trader believes Fibonacci ratios work beyond nature and into the financial markets, traders
should be aware of Fibonacci Retracements (most often used) and the other Fibonacci Tools. Because there are many traders out there who do believe that the Fibonacci ratios apply to the financial
markets, that means there are real supply and demand forces working on the markets at these important Fibonacci junctures. This is important because, after all, supply and demand is the concept that
moves the markets.
How to derive this equation
1. The problem statement, all variables and given/known data
I wanted to know how this equation is derived. Thanks.
2. Relevant equations
[tex] {v_{f}}^2 = {v_{i}}^2 + 2a\Delta d [/tex]
3. The attempt at a solution
[tex] v_{f} - v_{i} = \frac {\Delta d a} {\Delta v} [/tex]
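One standard route to this equation, assuming constant acceleration, is to eliminate the time t between the two basic kinematics relations:

```latex
v_f = v_i + a t \;\Rightarrow\; t = \frac{v_f - v_i}{a}

\Delta d = v_i t + \tfrac{1}{2} a t^2
         = v_i \frac{v_f - v_i}{a} + \frac{1}{2} a \left( \frac{v_f - v_i}{a} \right)^2
         = \frac{v_f^2 - v_i^2}{2a}

\Rightarrow\; v_f^2 = v_i^2 + 2 a \Delta d
```

The middle step expands to (2 v_i v_f - 2 v_i^2 + v_f^2 - 2 v_i v_f + v_i^2)/(2a), which collapses to (v_f^2 - v_i^2)/(2a); multiplying through by 2a and rearranging gives the result.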
Stoneham, MA Science Tutor
Find a Stoneham, MA Science Tutor
...After taking several courses as an undergraduate, and having taken an intensive Arabic summer course at Middlebury College, I was accepted at the Center for Arabic Studies Abroad in Cairo, Egypt. There I participated in a year-long course learning Arabic at a professional level. I passed a Fore...
47 Subjects: including chemistry, biostatistics, astronomy, electrical engineering
...I have MS degrees in Physics (University of Stuttgart, Stuttgart, Germany) and in Electrical Engineering (University of Florida, Gainesville, Florida). I am certified in Physics and Mathematics. I taught high school Physics for eight years.
6 Subjects: including physics, algebra 1, electrical engineering, prealgebra
...Of the hundreds of students I have worked with over the years, most see score increases following tutoring (nearly all students if they are motivated to do well and practice the specific test
strategies). I've helped coach students to 300+ point improvements on the SAT, 5+ point improvements on t...
26 Subjects: including ACT Science, English, linear algebra, algebra 1
...I use many examples and problems, starting with easy ones and working up to harder ones. As they progress I help them see how these particular examples and problems fit into the big ideas they
are studying. I have also found that study skills and organization play a large role in students' academic success, and that certain study techniques are particularly useful for math and physics.
9 Subjects: including physics, calculus, geometry, algebra 1
I am a recent graduate of MIT with 6 years of experience working with a wide range of students in grades 8-12 to improve SAT scores in the Reading, Math, and Writing sections. I have successfully
tutored students at all skill levels and have achieved measurable success in all cases. I am a patient...
18 Subjects: including chemistry, physical science, physics, microbiology
Towards improving the utilisation of university teaching space
- Proceedings of the Sixth International Conference on the Practice and Theory of Automated Timetabling (PATAT , 2006
"... Abstract. A standard problem within universities is that of teaching space allocation which can be thought of as the assignment of rooms and times to various teaching activities. The focus is
usually on courses that are expected to fit into one room. However, it can also happen that the course will ..."
Cited by 6 (5 self)
Add to MetaCart
Abstract. A standard problem within universities is that of teaching space allocation which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually
on courses that are expected to fit into one room. However, it can also happen that the course will need to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into
any one room. Another common example is that of seminars or tutorials. Although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events
dependent on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made within the context of limited space requirements. Institutions do
not have an unlimited number of teaching rooms, and need to effectively use those that they do have. The efficiency of space usage is usually measured by the overall ‘utilisation ’ which is basically
the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises; with a trade-off between satisfying preferences on splitting, a desire to
increase utilisation, and also to satisfy other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations themselves are
based on a local search method that attempts to optimise the space utilisation by means of a ‘dynamic splitting ’ strategy. The local moves are designed to improve utilisation and satisfy the other
constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives. 1
, 2008
"... Universities aim for good “Space Management ” so as to use the teaching space efficiently. Part of this task is to assign rooms and time-slots to teaching activities with limited numbers and
capacities of lecture theaters, seminar rooms, etc. It is also common that some teaching activities require s ..."
Cited by 4 (3 self)
Add to MetaCart
Universities aim for good “Space Management ” so as to use the teaching space efficiently. Part of this task is to assign rooms and time-slots to teaching activities with limited numbers and
capacities of lecture theaters, seminar rooms, etc. It is also common that some teaching activities require splitting into multiple events. For example, lectures can be too large to fit in one room
or good teaching practice requires that seminars/tutorials are taught in small groups. Then, space management involves decisions on splitting as well as the assignments to rooms and time-slots. These
decisions must be made whilst satisfying the pedagogic requirements of the institution and constraints on space resources. The efficiency of such management can be measured by the “utilisation”: the
percentage of available seat-hours actually used. In many institutions, the observed utilisation is unacceptably low, and this provides our underlying motivation: to study the factors that affect
teaching space utilisation, with the goal of improving it. We give a brief introduction to our work in this area, and then introduce a specific model for splitting. We present experimental results
that show threshold phenomena and associated easy-hard-easy patterns of computational difficulty. We discuss why such behaviour is of importance for space management.
"... In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource.
Often, each component is associated with different sets of soft constraints, and so with different measures of ..."
Cited by 4 (2 self)
Add to MetaCart
In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often,
each component is associated with different sets of soft constraints, and so with different measures of soft constraint violation. The goal is then to minimise a linear combination of such measures.
This paper studies an approach to such problems, which can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally
difficult component of a problem and the associated subset of objectives is considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the
complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to
contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality.
- Problem, Proceedings of the 7th PATAT Conference, 2008, http://www.cs.nott.ac.uk/∼jxm/timetabling /patat2008-paper.pdf
"... Abstract This paper describes a branch-and-cut procedure for an extension of the bounded colouring problem, generally known as curriculum-based university course timetabling. In particular, we
focus on Udine Course Timetabling [di Gaspero and Schaerf, J. Math. Model. Algorithms 5:1], which has been ..."
Cited by 4 (2 self)
Add to MetaCart
Abstract This paper describes a branch-and-cut procedure for an extension of the bounded colouring problem, generally known as curriculum-based university course timetabling. In particular, we focus
on Udine Course Timetabling [di Gaspero and Schaerf, J. Math. Model. Algorithms 5:1], which has been used in Track 3 of the 2007 International Timetabling Competition. First, we present an
alternative integer programming formulation for this problem, which uses a lower than usual number of variables and a mildly-increased number of constraints (exponential in the number of periods per
day). Second, we present the corresponding branch-and-cut procedure, where constraints from enumeration of event/free-period patterns, necessary to reach optimality, are added only when they are
violated. We also describe further problem-specific cuts from bounds implied by the soft constraints, cuts from patterns given by days of instruction and free days, and all related separation
routines. We also discuss applicability of standard cuts from graph colouring and weighted matching. The results of our preliminary experimentation with an implementation using ILOG Concert and CPLEX
10 are provided. Within 15 minutes, it is possible to find provably optimal solutions to two instances (comp01 and comp11) and good lower bounds for several other instances. Keywords: integer programming, branch-and-cut, cutting planes, soft constraints, educational timetabling, university course timetabling. (Edmund K. Burke et al.)
, 2007
"... The utilisation of University teaching space is notoriously low: rooms are often unused, or only half full. We expect that one of the reasons for this is overall mismatch the sizes of rooms and
the sizes events. For example, there might be an excess of large rooms. Good space planning should match t ..."
Cited by 3 (2 self)
Add to MetaCart
The utilisation of University teaching space is notoriously low: rooms are often unused, or only half full. We expect that one of the reasons for this is overall mismatch the sizes of rooms and the
sizes events. For example, there might be an excess of large rooms. Good space planning should match the set of rooms to the set of events whilst taking account of the pedagogic requirements of the
institution. We give methods to visualise and demonstrate the mismatch between the event and room-size profiles. We then provide methods to generate better room profiles. In particular, a method to
produce robust room profiles. The method is based on scenario-based ideas of stochastic programming. We give evidence that such robust room profiles allow the reliable achievement of higher levels of
utilisation.
, 2009
"... For many problems in Scheduling and Timetabling the choice of an mathematical programming formulation is determined by the formulation of the graph colouring component. This paper briefly
surveys seven known integer programming formulations of vertex colouring and introduces a new formulation using ..."
Cited by 2 (2 self)
Add to MetaCart
For many problems in Scheduling and Timetabling, the choice of a mathematical programming formulation is determined by the formulation of the graph colouring component. This paper briefly
seven known integer programming formulations of vertex colouring and introduces a new formulation using “supernodes”. In the definition of George and McIntyre [SIAM J. Numer. Anal. 15 (1978), no. 1,
90–112], “supernode” is a complete subgraph, where each two vertices have the same neighbourhood outside of the subgraph. Seen another way, the algorithm for obtaining the best possible partition of
an arbitrary graph into supernodes, which we give and show to be polynomial-time, makes it possible to use any formulation of vertex multicolouring to encode vertex colouring. The power of this
approach is shown on the benchmark problem of Udine Course Timetabling. Results from empirical tests on DIMACS colouring instances, in addition to instances from other timetabling applications, are
also provided and discussed.
"... Universities have to invest considerable financial and human resources in the provision of space for teaching activities such as lectures, seminars, tutorials and workshops. Naturally, they
would like such investments to be made wisely and efficiently. However, there is considerable evidence that in ..."
Add to MetaCart
Universities have to invest considerable financial and human resources in the provision of space for teaching activities such as lectures, seminars, tutorials and workshops. Naturally, they would
like such investments to be made wisely and efficiently. However, there is considerable evidence that in many Universities the resulting space is considerably underused. In a report by the Higher
Education Funding Council for England (HEFCE), roughly speaking, it was found that often space was only used half the time, and then only half filled [9]. At least on the face of it this seems like
an inefficient use of resources; it is natural to expect or hope that better planning for the space capacity would improve this. However, a fundamental problem has been that there was no real
foundation for deciding whether or not such usage levels are in fact an inevitable result of meeting the timetabling and teaching resources, or alternatively, to provide methods to improve the
situation. In this abstract we briefly overview our work towards remedying this situation. Here, we cannot hope to cover all work in the topic; instead the aim is to discuss our various strands of
research and how they are woven together. The space planning process needs to decide upon what space resources need to be provided
BarsFA - 4-transistor full adder
For my transistor-computer project (which has already been going for 3 years) I needed compact implementations of most of the common digital blocks, and the full adder is one of the most important among these.
Canonical implementation of CMOS full adder takes 28 transistors:
Modern implementations using transmission gates and a number of tricks reduce this count to 8-11, with stricter requirements for transistor selection. These schemes usually cannot be used with discrete transistors, as they rely on 4-terminal transistors and suffer from degradation of the logical-1 level, which becomes even more severe with discrete transistors, since they have Vt = 1.5-2 V compared to ~0.5 V for integrated transistors.
The smallest full adder I've seen used 6 transistors and capacitors at the inputs, but I am not sure how to make it work reliably in the real world. A known implementation using bipolar transistors also takes 22 transistors.
But can you make one using only 4 transistors? After thinking and trying a few variants, I got the following schematic working:
Simulated waveforms:
You can download the schematic for simulation in LTspice IV
How does it work? As the order of terms is not important, we can simply mix them in an analog way, and by tuning the threshold voltage of the double inverter we easily get the carry. Then we can subtract the carry from the analog sum using Q3, and we get the sum. Naturally, this all requires threshold-voltage tuning and simulation across the temperature range, as the circuit is quite sensitive to transistor selection, resistor values and temperature. The Schottky diodes are there to prevent transistor saturation, which would significantly reduce performance.
One could use MOSFETs, which would provide better temperature stability, but these transistors must have quite a low Vt.
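Whatever the transistor count, the circuit must reproduce the standard full-adder truth table: sum = a XOR b XOR cin, carry-out = majority(a, b, cin). A quick software check of those equations over all eight input combinations (this is reference logic only, not a model of the analog circuit above):

```python
from itertools import product

def full_adder(a, b, cin):
    """Reference one-bit full-adder logic."""
    s = a ^ b ^ cin                          # sum = a XOR b XOR cin
    cout = (a & b) | (a & cin) | (b & cin)   # carry = majority(a, b, cin)
    return s, cout

for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(a, b, cin)
    # The two-bit result (cout, s) must equal the arithmetic sum a + b + cin.
    assert 2 * cout + s == a + b + cin
    print(f"{a} {b} {cin} -> sum={s} cout={cout}")
```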
Griffith SAT Tutor
Find a Griffith SAT Tutor
...I have 1 year of experience teaching Algebra 2 and Honors Algebra 2. I have 6 years of experience teaching Geometry. I have 11 years of experience teaching Pre-Algebra.
14 Subjects: including SAT math, calculus, algebra 1, trigonometry
I am currently a graduate student at Chicago State University, pursuing my M.A. in English Literature. I am also an English tutor and graduate assistant in the English Department. I have
experience on the elementary level, as well as high school.
24 Subjects: including SAT reading, SAT writing, reading, English
...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help
all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon.
12 Subjects: including SAT math, calculus, geometry, algebra 1
...I am proud to be a product of CPS having attended elementary and high school in Chicago. I received a Doctorate degree in 2002 from Loyola University Chicago in Curriculum and Instruction. I
hold a Masters degree in Library Science and Communications Media and a Bachelor's degree in Early Childhood Education.
12 Subjects: including SAT reading, English, writing, grammar
...In order to ensure that our goals and assumptions are the same, I suggest an informal meeting or trial run before the start of official tutoring sessions.*I have taken courses in multivariable
calculus, probability theory, complex analysis, linear algebra, and abstract algebra. My GRE scores, wh...
18 Subjects: including SAT reading, SAT math, reading, chemistry
Nearby Cities With SAT Tutor
Crestwood, IL SAT Tutors
Crown Point, IN SAT Tutors
Dixmoor, IL SAT Tutors
Dyer, IN SAT Tutors
Glenwood, IL SAT Tutors
Highland, IN SAT Tutors
Lake Station SAT Tutors
Lansing, IL SAT Tutors
Lynwood, IL SAT Tutors
Merrillville SAT Tutors
Munster SAT Tutors
Saint John, IN SAT Tutors
Sauk Village, IL SAT Tutors
Schererville SAT Tutors
Steger SAT Tutors
Professor Kenneth C. Millett
Professor of Mathematics
Email: millett-at-math.ucsb.edu
Kenneth C. Millett is Professor of Mathematics at the University of California, Santa Barbara.
His short 2013 CV and an abbreviated list of publications with links to selected papers can be found here.
He has served as the Chair of The Chancellor's Outreach Advisory Board (COAB), as the Regional Director of the National Science Foundation funded California Alliance for Minority Participation^(1),
and as Director of UCSB's California Mathematics and Science Teaching Program^(2). Dr. Millett served as the appointed University of California delegate to the national Academic Assembly and was
elected Chair of the Western Regional Council of the College Board^(4). He has served as member of the American Mathematical Society's Subcommittee on Undergraduate Education^(5) and as a member of
the Advisory Board of the AMS-SIAM project, "Employment and the U.S. Mathematics Doctorate: Connections with Non-Academic Opportunities." He is a member of the Board of Governors and the Science
Policy Committee of the Mathematical Association of America. From 1983 through 1997 he was a member of the state wide Advisory Committee of the California Mathematics Project^(6) and served as its
Chair. He has been active in the work of the Mathematicians and Educational Reform Forum^(7) and the South Coast Mathematics Partnership^(8). Dr. Millett was the founding President and Executive
Director of the California Coalition for Mathematics and Science^(9). He is a member of the Mathematical Association of America, the American Mathematical Society, the American Association for the
Advancement of Science, the European Mathematical Society, the Societe Mathematique de France, the Association for Women in Mathematics, the Society for the Advancement of Chicanos and Native
Americans in Science, and Sigma Xi. In 1988 he received the Carl B. Allendoerfer Award and, in 1991, he received the Chauvenet Prize for an article on knot theory written with W. B. R. Lickorish. In
1998, he received the Award for Distinguished Public Service from the American Mathematical Society. In 2000, he was elected a Fellow of the American Association for the Advancement of Science. AAAS
President Mary Good congratulated him on this recognition at the Fellows Forum held at the 2001 annual meeting. In 2006, Dr. Millett was given the "Giant in Science" award by the QEM/MSE Network for
his "outstanding contributions to the field of mathematics and to minority participation in STEM disciplines." In 2012, he was elected a Fellow of the American Mathematical Society.
He has had three Ph.D. students.
Henry Clay Fickle, Knots, Z-Homology Spheres and Contractible 4-Manifolds, June 1981.
Jorge Alberto Calvo , Geometric Knot Theory, June 1998, is an Associate Professor at Ave Maria University in Naples, FL.
Teresita Ramirez-Rosas, Quadrisecants and Ropelength of Knots, June 2009
and was the Co-director for
Eleni Panagiotou, A study of the topological entanglement of polymers, June 2012 National Technical University of Athens, Greece (with Lambropoulou and Theodorou)
Personal Background
Born in Hustisford, Wisconsin on November 16, 1941, Ken Millett grew up in Oconomowoc, Wisconsin. He graduated from Oconomowoc High School in 1959. His parents, Clarence and Isola Millett, still live
there. His three sisters, Diane, Rita and Roxanne currently live in Pasadena, CA, San Antonio, TX and, Boulder, CO, respectively. He has two children: Rebecca, living in Cape Elizabeth, Maine with
her husband, Kevin Kobel, and their two children, Christopher and Hannah, and David, a graduate of the University of California at Santa Cruz. His wife, Janis Cox Millett, has previously worked for
the Office of the President of the University of California providing assistance to programs to increase access and success for all students, especially women and students of color. She assisted the
creation of the California Mathematics Project and, for two years, was on leave to the Achievement Council, a privately funded program. In 2000, Janis received her Ed. D. and served as the founding
director of the California Center for Effective Schools in the Givertz Graduate School of Education at UCSB. Ken and Janis live in Santa Barbara, California and Grambois, France.
Education Background
Ken Millett received his Bachelors Degree in Science from the Massachusetts Institute of Technology in 1963, received his Master of Science and Doctor of Philosophy Degrees from the University of
Wisconsin at Madison in 1964 and 1967, respectively. Following lecturer appointments at UCLA and MIT, he joined the faculty of the University of California at Santa Barbara in 1969. Since then he has
been a visiting professor at the Institut des Hautes Etudes Scientifiques, Princeton University, Occidental College, UCLA, MSRI, several French research institutes and universities, most recently the
Universite de Provence in Marseilles, and at the LOMI, Saint Petersburg. He has published over 50 scientific papers and edited four research volumes concerned with aspects of geometric topology, knot
theory, and their applications to mathematics, computer science, physics, chemistry, and molecular biology. He has also written articles on mathematics education and educational reform as well as
developing materials to increase public understanding and support for the renewal and reform of mathematics teaching and assessment.
^1 The California Alliance for Minority Participation (CAMP) is an alliance of the eight campuses of the University of California working in collaboration with the California State University and the
California Community Colleges to double the number of African American, Native American, Native Pacific Island, and Chicano/Latino students receiving advanced degrees in the mathematical sciences,
the life and physical sciences and engineering in California. In recognition of its success, the Regents of the University of California provide additional funding for the CAMP effort.
^2 The California Mathematics and Science Teaching Program provided fellowships to more than 200 students attending UCSB, Westmont College, Santa Barbara City College, Allan Hancock College, Ventura
College and, Oxnard College in connection with their work in regional mathematics and science classrooms. The principal mission of the program is to encourage students of color to explore mathematics
and science teaching careers and, in the course of their work, provide junior high and high school students with strong role models who can encourage them to become mathematically and scientifically
successful and to continue their studies at the college and university level. This program is partially supported by the University of California's Community Teaching Fellowship Program and by the
NSF funded California Alliance for Minority Participation.
^3 The Mathematics Achievement Program (MAP) was created in 1988 to promote mathematical achievement among students of color. MAP now supports over 100 students each quarter at all levels
of undergraduate study. MAP served as the model for programs in statistics, biological sciences, chemistry, geography, geology and physics that now form the core of the CAMP effort at UCSB.
^4 The goal of the College Board is "Educational Excellence for All Students." Established in 1900, it is a national, nonprofit association of about 3,000 education organizations committed to
achieving this goal. There are three national assemblies: Guidance and Admission, Academic Affairs, and College Scholarship Service. Among the Board's better known services are the Advanced Placement
Program and the SAT, PSAT/NMSQT. The principal thrusts of the Board are programs directed to promoting access and success of students of color, for example EQUITY 2000 and Pacesetter.
^5 The American Mathematical Society (AMS), founded in 1888 to further mathematical research and scholarship, has over 30,000 members. Its mission is to promote mathematical research, strengthen
mathematical education, and foster awareness and appreciation of mathematics and its connection to other disciplines and everyday life. The Committee on Undergraduate Education advises the Society on
issues concerning undergraduate mathematics programs. The Sloan Foundation has funded a joint effort of the AMS and the Society for Industrial and Applied Mathematics to undertake analysis of
employment of PhD's in mathematics outside academia.
^6 The California Mathematics Project funds seventeen sites in collaborative professional development programs devised to improve the quality of mathematics education in California and thereby
develop in all students at all grade levels throughout the state an enhanced sense of mathematical power.
^7 The Mathematicians and Educational Reform Forum is an alliance of teachers of mathematics and mathematics departments devoted to the improvement of mathematics education in the United
States. Dr. Millett serves on the advisory committee of the Departmental Network, a confederation of 13 leading mathematics departments.
^8 The "Partnership" engaged university, community college, high school, junior high school and elementary school teachers in work to improve the teaching and learning of mathematics in the South
Coast area (including Santa Barbara and Ventura Counties). With funding from the University of California, the participating school districts, and a grant from the California Eisenhower Grant program,
one of the principal activities of the Partnership is a summer program that brings university and community college students, especially students of color, interested in mathematics teaching careers
into an intensive two month summer program. During this program, participants receive training in the use of new mathematics curricula and teaching and assessment strategies, they work with
experienced teacher partners in summer mathematics programs for underrepresented students and they complete and present a capstone project. The 1995 program supported more than 20 students.
^9 The California Coalition for Mathematics and Science was an alliance of California's education, public policy, government, professional, and business leaders dedicated to a statewide collaborative
initiative seeking the systemic reform of mathematics education and the mathematical empowerment of all citizens for success in an increasingly technological world.
CrypTool-Online / Ciphers / Four-Square
Four-Square Cipher
test it
The Four-Square cipher is similar to the Playfair cipher. It was developed by the Frenchman Felix Marie Delastelle (1840-1902)^1. He also developed the Bifid cipher.
Contrary to the Playfair cipher, two Polybius matrices are used for the Four-Square cipher. Ideally, each matrix is constructed with a different password. The two Polybius matrices are then
paired with two ordered plaintext matrices. In figure 1, the plaintext matrices have a blue background and the Polybius matrices have a green background. The right upper matrix has been created with the key
"KRYPTOGRAPHIE" and the lower left matrix has been created with the key "BEISPIEL".
Fig. 1: Four-Square cipher (GE → IE)
For the message "GEHEIMNIS", the characters are split into pairs, analogously to the Playfair cipher. It is not necessary to replace doubly occurring characters. For an odd number of
characters, an additional character has to be inserted at the last position. "GEHEIMNIS" is therefore split into "GE HE IM NI SX". Now a rectangle is drawn whose corners are the G in the
upper left matrix and the E in the lower right matrix. The two remaining corners of the rectangle, which lie in the green fields, determine the characters of the ciphertext. GE is therefore encoded
as IE. And HE is encoded as II, as can be seen in figure 2.
Fig. 2: Four-Square cipher (HE → II)
The complete ciphertext is therefore: "IE II GM DC NX".
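A sketch of the method in code may help. The square construction below (row-major fill, J merged into I, X padding) is an assumption and may not match the matrices in the figures exactly, so it illustrates the algorithm rather than reproducing this example verbatim:

```python
# Illustrative Four-Square encryption over a 25-letter alphabet.
import string

def build_square(key=""):
    square = []
    for ch in (key.upper() + string.ascii_uppercase).replace("J", "I"):
        if ch.isalpha() and ch not in square:
            square.append(ch)
    return square  # 25 letters in row-major 5x5 order

def four_square_encrypt(plaintext, key_top_right, key_bottom_left):
    plain = build_square()                  # the two plaintext squares
    tr = build_square(key_top_right)        # upper-right cipher square
    bl = build_square(key_bottom_left)      # lower-left cipher square
    text = [c for c in plaintext.upper().replace("J", "I") if c.isalpha()]
    if len(text) % 2:
        text.append("X")                    # pad an odd-length message
    out = []
    for a, b in zip(text[::2], text[1::2]):
        ra, ca = divmod(plain.index(a), 5)  # a sits in the upper-left square
        rb, cb = divmod(plain.index(b), 5)  # b sits in the lower-right square
        out.append(tr[ra * 5 + cb])         # rectangle corner, upper right
        out.append(bl[rb * 5 + ca])         # rectangle corner, lower left
    return "".join(out)
```

With two different keys, a repeated-letter digraph like OTTO no longer maps to mirrored ciphertext pairs, which is the advantage over Playfair noted later in the article.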
The Four-Square cipher is very similar to the Playfair cipher in terms of security. It can be cracked very quickly given a sufficiently long message. It is also quite cumbersome to handle because of
the presence of two keys and four matrices. But this also leads to one advantage over the Playfair cipher: there, the word OTTO, for example, would be encoded as XY YX; the characters are reversed in
both the cleartext and the ciphertext. In the Four-Square cipher, this pattern is not preserved in the ciphertext. OTTO is encoded, according to figure 2, as DT QM.
The Four-Square cipher belongs to the monoalphabetic ciphers just like the Playfair cipher. Since pairs of characters are created during the encoding process, it also belongs to the class of
bigraphic methods.
^1 o.V.: "Félix Delastelle", http://en.wikipedia.org/wiki/Felix_Marie_Delastelle, 2009-02-20
The Right Way To Teach Sorting
June 08, 2011
Last week, I said that I had what I thought was a better way to teach sorting. This article describes that better way.
Last week, I said that C++ textbooks often make what I consider to be four mistakes:
• They teach readers how to write their own sorting algorithms before using the standard library.
• The first sort algorithm they teach has quadratic performance.
• They never teach how to write a sort algorithm that performs better; and
• They never get around to explaining what a stable sort is or why it is useful.
I also said that I had what I thought was a better way to do it. This article describes that better way.
Programs that sort often have to deal with three separate, but interrelated, problems:
• Sorting
• Merging
• Comparison
I am mentioning comparison as a separate problem because students so often get it wrong. For example, I've lost count of how many times I've seen code like this:
struct Point { int x, y; };
bool compare(const Point& p, const Point& q)
{
    if (p.x < q.x && p.y < q.y)
        return true;
    return false;
}
Of course, the compare function could have used one line in place of four:
return p.x < q.x && p.y < q.y;
but either way, the code is wrong — at least if the intent is to use compare for sorting.
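For contrast, a version that does work for sorting uses lexicographic order (a sketch of one correct strict weak ordering; the name and use of std::tie are illustrative, not from the article):

```cpp
#include <tuple>

struct Point { int x, y; };

// Lexicographic comparison: order by x, break ties by y. This is a strict
// weak ordering, so standard sort and merge algorithms can rely on it.
bool point_less(const Point& p, const Point& q)
{
    return std::tie(p.x, p.y) < std::tie(q.x, q.y);
}
```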
In order to avoid this common mistake, I think that it is important to be sure that students understand what comparison means before they learn how to sort. In order to do so, I think that it is a
good idea to begin with merging, which is an easier problem than sorting.
Not only that, but once you know how to merge, you know how to sort, because of the following algorithm:
1. Is your sequence of length 0 or 1? If so, you're done.
2. Divide the sequence approximately in half, yielding two subsequences with lengths that differ by no more than 1.
3. Sort each subsequence.
4. Merge the two sorted subsequences.
You might think that (3) is problematic: How can we sort a sequence without first knowing how to sort it? However, (1) shows us how to sort very short sequences, and each time we reach (3), we make a
recursive call that deals with ever-shorter sequences until eventually we reach (1).
This algorithm is, of course, a recursive implementation of Mergesort. If it is implemented correctly, it is stable, and it runs in O(n log n) time. Of course, it consumes extra space; but O(n) extra
space is usually much better than O(n²) time.
Therefore, I think that a reasonable way to teach students about sorting is:
1. Use the standard sort algorithm.
2. Show students how to write a comparison function, explaining about order relations along the way.
3. Show how to merge sorted sequences, first using the standard merge algorithm and then writing it explicitly.
4. Show how to write Mergesort by combining merging and recursion.
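Step 4 can be sketched as a short recursive Mergesort built on the standard library's merge (the function name and the choice to return a new vector are illustrative, not from the article):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Recursive Mergesort following the four steps above.
std::vector<int> merge_sort(const std::vector<int>& v)
{
    if (v.size() <= 1)
        return v;                                  // step 1: already sorted
    auto mid = v.begin() + v.size() / 2;           // step 2: split roughly in half
    std::vector<int> left = merge_sort(std::vector<int>(v.begin(), mid));
    std::vector<int> right = merge_sort(std::vector<int>(mid, v.end()));  // step 3
    std::vector<int> out;
    out.reserve(v.size());
    std::merge(left.begin(), left.end(),           // step 4: merge the halves
               right.begin(), right.end(),
               std::back_inserter(out));
    return out;
}
```

Because std::merge is stable, the whole sort is stable, which connects directly to the discussion of stable sorts.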
It is probably true that these four steps together take more teaching time than simply instructing students on how to write a bubble sort or an insertion sort. However, not only do the students learn
about sorting, but they also learn about merging, comparison, and recursion. Moreover, they do so without picking up any bad habits along the way.
Would This Meteor Make a Huge Crater? | Science Blogs | WIRED
• By Rhett Allain
• 07.02.12 |
• 8:29 am |
Here is a fun little video to promote Eden TV’s Science Month.
I think a meteor this size would do more than just crush a car. Let’s do a simple calculation.
How Big?
Here is a frame from the end of the video.
I am really not sure about the type of car this smashed. As a rough estimate, the meteorite seems to be approximately a sphere with a radius of 1 meter. But what really matters is the mass. The
mass depends on the density. This site puts the density of stony-type meteorites around 3.5 g/cm^3 and iron types around 7-8 g/cm^3.
Looking online at meteorites (I am absolutely a non-expert when it comes to meteorites – just to be clear), the Eden TV meteorite (Edeneorite) kind of looks like an iron meteorite. But there is a
case of a meteorite hitting a car. The famous Peekskill meteorite.
The Peekskill meteorite was a stony type and the Eden TV rock looks partially similar - so let me go with a density of 3.5 g/cm^3. This would put the mass of the rock at 1.47 x 10^4 kg (16 short
tons). Compare that mass to the mass of some type of SUV (like the Toyota Highlander), which has a mass of only 1,800 kg.
Just placing a meteorite this massive on an SUV would probably crush it.
How Much Energy?
This is the real question. How much kinetic energy would a rock this large have when it hit the ground? First, some assumptions:
• The rock started SUPER far from the Earth and at rest (clearly not true).
• There was no energy lost due to the atmosphere (again, not true).
• The mass of the rock didn’t change as it went through the atmosphere (yup, this is wrong too).
So, why do I make three wrong assumptions? Do three wrongs make a right? Well, each one of these assumptions makes the problem much easier than if I didn't assume them. Also, in the end, something
interesting could happen. What if I get a meteorite energy around the same value as a baseball's? Or maybe I find that the meteor has the same energy as a blast from the Death Star. In either of these cases, the
value will be wrong but will still say something about the real energy. Ok?
Since we are dealing with energy, we obviously need to use the work-energy principle. Let me take the rock plus the Earth as the system. In that case, I need two positions. One with the rock far away
and one with the rock on the surface of the Earth.
Here there are two types of energy. There is kinetic energy and gravitational potential energy. Since I am dealing with objects far from the surface of the Earth, I need to use the real form of the
gravitational potential energy.
G is the gravitational constant and M[E] is the mass of the Earth. The starting distance (R[far]) is very large so that the second term in the potential energy is almost zero. I could solve for the
final speed of the rock, but I just need the energy. Putting in values for the stuff I know, I get a final kinetic energy of 9 x 10^11 Joules.
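For readers who want to check the arithmetic, here is the same estimate in a few lines (the radius and density are the assumptions above; the constants are standard values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_E = 5.972e24     # mass of the Earth, kg
R_E = 6.371e6      # radius of the Earth, m

density = 3500.0   # stony meteorite, kg/m^3
radius = 1.0       # m, the rough estimate from the video frame
mass = density * (4.0 / 3.0) * math.pi * radius**3

# Work-energy principle with the rock starting at rest very far away:
# the potential energy G*M_E*m/R_E shows up as kinetic energy at the surface.
kinetic_energy = G * M_E * mass / R_E

print(f"mass ~ {mass:.3g} kg, kinetic energy ~ {kinetic_energy:.3g} J")
```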
Fine, that is a big energy. But how big? The Wikipedia page on nuclear weapon yields lists energies in terajoules. So, my estimate for the energy of this meteor impact would be around 0.9 TJ, smaller
than the estimated 50-60 TJ of the early nuclear weapons. However, in terms of TNT equivalent this would be about 200 tons of the explosive.
Even if I overestimated by a factor of 100, this would still be 2 tons of explosives. I am fairly certain that 2 tons of TNT would leave a crater with no sign of the vehicle it crashed into.
One More Comparison
What is something that could possibly smash that Eden TV vehicle like that? What if I dropped a similar vehicle from about 10 meters? Surely that would smash both cars up quite a bit? How much energy
would be in the dropped car right before it collides? For this calculation, I can use the simple form of gravitational potential energy.
Using a dropped car mass of 1,800 kg, the car would have about 1.8 x 10^5 Joules of energy. This would be equivalent to roughly 0.04 kg of TNT explosive (Wikipedia page on TNT equivalent).
Big difference.
In the end, I think Eden TV could have picked a more realistic looking meteorite. Or maybe that was the point. Put up a video with something that inspires people to explore this interesting question.
Image plane equation [Archive] - OpenGL Discussion and Help Forums
11-08-2004, 12:20 AM
I am rendering a simple 3D object (e.g. a cube, vertices only). I need to know the z-coordinate value of the points lying on the image plane. OpenGL sets up the projection matrix so that the
z-coordinate is preserved and used in the depth buffer. Can I assume that the focal distance/z-coordinate of points lying on the image plane after projection is equal to the near clipping
plane value?
To sum up, my question is: how do I find the focal length or, equivalently, the z-coordinate of points lying on the image plane after projection (using the OpenGL projection matrix, that is)?
Thank you.
Ticket #133
Related: Rank2Types, RankNTypes, LiberalTypeSynonyms, PolymorphicComponents, ScopedTypeVariables
Compiler support
• GHC: mostly (ExplicitForAll; still allows forall as an expression id)
• nhc98: none
• Hugs: mostly (+98; still allows forall as an expression id)
• UHC: mostly (none; still allows forall as an expression id)
• JHC: none
• LHC: none
ExplicitForAll enables the use of the keyword 'forall' to make a type explicitly polymorphic. Syntactically, it would mean the following change to Haskell 98:
• forall becomes a reserved word.
• . (dot) becomes a special (not reserved) operator.
• The following syntactic rule changes:
type → forall tyvars . type
| context => type
| ftype
ftype → btype -> type
| btype
gendecl → vars :: type
It does not allow the use of explicitly polymorphic types in any way not already allowed by Haskell 98 for implicitly polymorphic types.
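As a small illustration (the function names are made up; the LANGUAGE pragma is how GHC enables the extension), the explicitly and implicitly quantified signatures below denote the same type:

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- The explicitly quantified form and the Haskell 98 implicit form denote
-- the same type, so the two definitions are interchangeable.
applyTwice :: forall a . (a -> a) -> a -> a
applyTwice f x = f (f x)

applyTwice' :: (a -> a) -> a -> a
applyTwice' = applyTwice
```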
Report Delta
Changes relative to H2010 report.
In Section 4.1.2:
With one exception (that of the distinguished type variable in a class declaration (Section 4.3.1)), the type
variables in a Haskell type expression are all assumed to be universally quantified; there is no explicit syntax
for universal quantification [4]. For example, the type expression forall a . a -> a denotes the type ∀ a . a → a.
For clarity, however, we often write quantification explicitly when discussing the types of Haskell programs.
When we write an explicitly quantified type, the scope of the ∀ extends as far to the right as possible; for
example, ∀ a . a → a means ∀ a . (a → a).
The type variables in a type signature may be explicitly quantified with the forall keyword. All type
variables used in the type must be quantified, with one exception (that of the distinguished type
variable in a class declaration (Section 4.3.1), which must not be quantified). No additional type
variables may be quantified. The scope of the quantifier extends as far to the right as possible.
For example, the type expression forall a . a -> a denotes the type ∀ a . a → a. Quantifying the type variables is
optional. For example, the type expression a -> a also denotes the type ∀ a . a → a.
In Section 4.1.4:
(Eq a, Show a, Eq b) => [a] -> [b] -> String
forall a b . (Eq a, Show a, Eq b) => [a] -> [b] -> String
• Small and simple syntactic extension.
• Simplifies the later inclusion of semantic extensions that depend on it, e.g. Rank2Types.
• Easy to implement in tools that don't yet support the semantic extensions.
• The Report already mentions types using the explicit forall-quantified form, so only the grammar changes above are needed.
• A small and incremental extension with little value of its own, only serving as a stepping stone for the various semantic extensions.
Physics Laboratory
Momentum and Energy
NOTE: There will be a full report on this week's lab. It is due the following week, when your lab is not in session. You are required to bring your report to the CAP building and drop it in the
plastic bin outside the door to CAP 212 during the same time and weekday that your lab met the previous week! Please see the "Laboratory Report" and "Report Grading Rubric" links in the "Resources"
menu to the right for help with writing the report. Print the "Report Scoring Sheet" and use it as your Title Page.
In this lab we will be examining the conservation of momentum and energy using a collision between two carts.
In any real collision, energy is always lost. When scientists want to talk about the percentage of energy lost during a collision, they refer to the collision’s elasticity. An idealized collision
where no energy is lost (imagine a ball that will bounce forever) would be called an elastic collision. There is never a real collision in which no energy is lost, just as there is no ball that
will bounce forever, but some collisions have so little energy loss that they are approximately elastic, just as some situations have so little air resistance or friction that we
assume there is none at all.
Collisions in which the total kinetic energy of the two bodies after the collision does not equal their total kinetic energy before the collisions are called inelastic collisions. A completely
inelastic (or perfectly inelastic) collision is one in which the two colliding object move off together with the same velocity after the collision. Examples are a dropped ball that sticks to the
floor rather than bounces or railroad cars that couple together when they collide. In inelastic collisions some of the kinetic energy is transformed into other forms of energy such as heat, noise, or
stored elastic energy.
Momentum (p) is another quantity we could examine. Momentum is mass times velocity.
p = mv (1)
During a collision, each individual object in the system may experience a change in momentum. However, if there are no net external forces acting on a system, then the total momentum will be
conserved, no matter whether the collision is elastic or inelastic.
p[before] = p[after] (2)
A change in momentum is called an impulse (J).
J = Δp (3)
Impulse is also the product of average force and time.
J = FΔt (4)
When two carts collide, the elapsed time of the collision is the same for both. Newton's third law states that the carts will experience the same force, only in the opposite direction. Therefore, Eq.
(4) predicts that the carts should experience the same impulse, only in opposite directions, and so the overall change in momentum, in the absence of external forces, will be zero.
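The conservation claim in Eq. (2) is easy to illustrate numerically for a perfectly inelastic collision (the masses and speeds below are made-up values, not measurements from this lab):

```python
# Perfectly inelastic collision: a moving cart A hits a stationary cart B
# and the two stick together, sharing one final velocity.
m_a, v_a = 0.5, 0.4    # cart A: mass (kg) and initial velocity (m/s)
m_b, v_b = 0.5, 0.0    # cart B: stationary

p_before = m_a * v_a + m_b * v_b
v_final = p_before / (m_a + m_b)    # conservation of momentum fixes v_final
p_after = (m_a + m_b) * v_final

ke_before = 0.5 * m_a * v_a**2 + 0.5 * m_b * v_b**2
ke_after = 0.5 * (m_a + m_b) * v_final**2

# Momentum is conserved exactly, but kinetic energy drops:
print(p_before, p_after)      # equal
print(ke_before, ke_after)    # ke_after is smaller
```

Momentum comes out exactly conserved while kinetic energy decreases, which is the signature of an inelastic collision.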
Preliminary Questions
1. Think of three examples that can be approximated as totally elastic collisions.
2. Think of three examples of totally inelastic collisions.
3. Imagine you are dropping a ball. Is the momentum of the ball conserved as it falls? Explain your answer in terms of mass, velocity, and external forces.
4. Now think of the earth and the ball as a system as the ball falls toward the earth. Ignoring air friction, are there any other external forces other than the gravitational force between the earth
and the ball? Is the momentum of the earth/ball system conserved? How must the Earth react to the ball in order for momentum to be conserved?
5. Next think of a rear-end collision between a stationary car and a moving car in which the cars stick together. Assume that air resistance and the rolling friction between the car tires and the road
are small enough to ignore. Is energy conserved? Are the momenta of the individual cars conserved? What about the momentum of the two-car system?
Vernier dynamics track
Two low-friction dynamics carts with magnetic and Velcro™ bumpers
Extra 200g masses for pan balance
Double pan balance
Cart masses
Motion detector
LabPro and computer with Logger Pro
Activity 1. Predicting the results of collisions
Figure 1: Carts on a track with two motion detectors.
Set two carts on a flat track such that the magnets on the ends are facing each other (see Figure 1). When you push the carts toward each other, provided the carts do not actually touch, the result
should be a fairly elastic collision. On the opposite ends from the magnets are Velcro stickers. If the carts collide with these ends facing, the collision should be fairly inelastic.
Before actually observing collisions, let’s make some predictions.
For each of the following situations, predict how the momentum of each cart, and the momentum of the two-cart system, will change after the collision. The momentum of the two-cart system, also known
as the total momentum, is just the sum of the momenta of the two carts.
p[CartA] + p[CartB ] = p[total] (5)
You can recreate the table below, but make the cells larger to give yourself plenty of room to write your predictions.
A table (example below) showing the direction of the momentum of each cart and the system comprised of both carts – both before and after the collision – is a useful way of displaying this
information. Just a simple →, 0, or ← is all that is needed (no numbers!).
If the cart is moving to the left, its momentum arrow will point toward the left. If the cart is moving to the right, its momentum arrow will point toward the right. If Cart B is carrying
Mass C, then assume for your predictions that Cart B has a greatly increased mass.
If the magnitudes are different, or if the momentum of either cart has increased or decreased after the collision, indicate this by changing the length of the momentum arrow. Assume that Cart A is on
the left, and is always initially moving to the right.
Helpful hints: Do not cause violent collisions between carts! Keep the velocities low (gentle pushes). Also, position the mass on the cart so that it does not slide during the collision.
1a) A moving Cart A collides inelastically with a stationary Cart B.
1b) A moving Cart A collides elastically with a stationary Cart B.
2a) A moving Cart A collides inelastically with a stationary Cart B that is carrying Mass C.
2b) A moving Cart A collides elastically with a stationary Cart B that is carrying Mass C.
3a) A moving Cart A collides inelastically with Cart B that is moving towards it.
3b) A moving Cart A collides elastically with a Cart B that is moving towards it.
4a) A moving Cart A collides inelastically with a Cart B that is moving towards it carrying Mass C.
4b) A moving Cart A collides elastically with a Cart B that is moving towards it carrying Mass C.
│ ┌─────┬───────────────────────────────────────────────────────────────────────────┐ │
│ │ │ Momentum of a two-cart collision │ │
│ ├─────┼─────────────────────────────────────┬─────────────────────────────────────┤ │
│ │ │ Before the Collision │ After the Collision │ │
│ ├─────┼────────┬────────┬───────────────────┼────────┬────────┬───────────────────┤ │
│ │ │ Cart A │ Cart B │ Sum of both Carts │ Cart A │ Cart B │ Sum of both Carts │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 1a) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 1b) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 2a) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 2b) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 3a) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 3b) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 4a) │ │ │ │ │ │ │ │
│ ├─────┼────────┼────────┼───────────────────┼────────┼────────┼───────────────────┤ │
│ │ 4b) │ │ │ │ │ │ │ │
│ └─────┴────────┴────────┴───────────────────┴────────┴────────┴───────────────────┘ │
Table 1: Momentum of a two-cart collision
Activity 2: Graphing momentum and energy
In this activity you are going to graph momentum and kinetic energy during a collision (since the track is level, there should be no change in potential energy). You will need the masses and
velocities of Cart A, Cart B, and Mass C (Mass C can be the 1.00 kg mass that is already on the table).
Connect two motion detectors to Logger Pro, and set them up so that one looks at Cart A and the other looks at Cart B. Because the detectors face opposite directions, you should reverse the
direction of the detector that is looking at Cart B.
Open Logger Pro. Click "Collect" and give the carts a push. Identify motion detector A and motion detector B, then figure out and write down which corresponds to position 1, velocity 1, position 2,
and velocity 2 in Logger Pro. This will help with troubleshooting if something goes wrong. You will be converting these velocities into momentum and energy. If you are having problems getting data,
please ask your instructor for help.
Next convert the velocity data into momentum and energy data.
First, set up three "User Parameters" that contain the masses of Cart A, Cart B, and Cart B with Mass C on it. To do this, go to Data > User Parameters, and click Add. Label and enter all three
masses, including units.
Next, create calculated columns which calculate the momentum and kinetic energy of each cart for situation 1a). Instead of actual velocities, use the appropriate velocity column for Cart A or Cart B.
Instead of actually entering the masses numerically into your equation, make use of the User Parameters that you just created. Make sure that everything is correctly labeled, with units.
Momentum of Cart A =
Momentum of Cart B =
Energy of Cart A =
Energy of Cart B =
Show these calculations in your notebook, and also in your report!
Also calculate the total momentum and the total energy, which would just be the sum of the individuals.
Total Momentum = Momentum of Cart A + Momentum of Cart B (6)
Total Energy = Energy of Cart A + Energy of Cart B (7)
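As a cross-check of these calculated columns outside Logger Pro, the same quantities can be computed directly from velocity samples. A sketch in Python, with made-up masses and velocities:

```python
m_A, m_B = 0.50, 0.51                    # cart masses in kg (assumed values)
v_A = [0.40, 0.38, -0.05, -0.06]         # velocity samples for Cart A in m/s
v_B = [0.00, 0.01, 0.35, 0.34]           # velocity samples for Cart B in m/s

p_A = [m_A * v for v in v_A]             # momentum of Cart A
p_B = [m_B * v for v in v_B]             # momentum of Cart B
E_A = [0.5 * m_A * v ** 2 for v in v_A]  # kinetic energy of Cart A
E_B = [0.5 * m_B * v ** 2 for v in v_B]  # kinetic energy of Cart B

p_total = [a + b for a, b in zip(p_A, p_B)]  # Eq. (6)
E_total = [a + b for a, b in zip(E_A, E_B)]  # Eq. (7)
```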
Next you will change the position graph so that it plots your momenta. Left-click on the Y-axis of the upper graph, click "more", and set this graph to plot three calculated data sets: the momentum
of each individual cart and the total momentum.
Next change the velocity graph so that it will plot your energies. Left-click on Y-axis of the lower graph, click "more", and set this graph to plot the remaining three calculated data sets: the
energy of each individual cart and the total energy.
Now collect data for situation 1a), an inelastic (Velcro) collision between two carts. Are the results what you would expect? Can you identify each line on the graph as it corresponds to the momentum
or energy of Cart A and Cart B? Can you identify the total momentum? Can you identify where on the graph the collision takes place?
Call your instructor over to verify that you are getting good data and are correctly interpreting the graphs.
Recreate Table 1 above, but add another column. This column will be the percent difference between the total momentum before and after the collision. You will use Analyze > Interpolate on the momentum
graph to find the momentum of each cart, and the sum of both, just before and just after the collision.
Make another table of energy data like Table 2 below.
│ │Total KE before collision│Total KE after collision│% of KE lost during the collision │
│1a)│ │ │ │
│1b)│ │ │ │
│2a)│ │ │ │
│2b)│ │ │ │
│3a)│ │ │ │
│3b)│ │ │ │
│4a)│ │ │ │
│4b)│ │ │ │
Table 2: Kinetic Energy in collisions
Determine the total kinetic energy before the collision and the total energy after the collision using Analyze > Interpolate on the energy graph. Then calculate the percentage of kinetic energy lost
during the collision (see Eq. (8) below) for the third column.
Percent energy lost = (KE[before] - KE[after]) / KE[before] × 100% (8)
Show example calculations in your report. You don't have to show every single similar calculation as long as you display the results in a table.
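An example calculation for Eq. (8) can be sketched as follows (the interpolated KE values below are made up):

```python
def percent_energy_lost(ke_before, ke_after):
    """Eq. (8): fraction of kinetic energy lost in the collision, as a percentage."""
    return (ke_before - ke_after) / ke_before * 100

# made-up interpolated values from an inelastic run, in joules
print(percent_energy_lost(0.040, 0.010))   # about 75.0
```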
Print the graph of 1a) for your notebook. You will also print the graph of 1b), but not 2a) through 4b) (to save paper).
After you complete 1a), perform the other seven situations and fill in the tables.
IMPORTANT: Remember to update your calculated column equations when you put Mass C onto Cart B! If you add Mass C to the wrong cart, or forget to factor it into your momentum equations, you will
get incorrect results!
Discuss your results. Did they match your predictions? Why or why not?
Was momentum conserved in each collision? Explain.
Did the percent of energy lost depend on the type of collision? Explain.
Activity 3: Impulse
For each of the situations above, calculate the impulse (change in momentum) of each cart during the collision. You have already collected this data, so no new experimentation is needed. Put these
calculations in a table like Table 3 below, using the correct units. Again, show example calculations.
According to Newton's Third Law, the impulse of the two carts should be equal in magnitude. Using percent difference, compare the impulse of each cart during the collisions.
│ │ Impulse (J) during collisions │
│ │ Impulse of Cart A (units) │ Impulse of Cart B (units) │ % difference │
│ 1a) │ │ │ │
│ 1b) │ │ │ │
│ 2a) │ │ │ │
│ 2b) │ │ │ │
│ 3a) │ │ │ │
│ 3b) │ │ │ │
│ 4a) │ │ │ │
│ 4b) │ │ │ │
Table 3: Impulse during collisions
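The percent-difference comparison for Table 3 can be sketched as follows (the example impulse values are made up):

```python
def percent_difference(j_a, j_b):
    """Percent difference between the magnitudes of two impulses."""
    mag_a, mag_b = abs(j_a), abs(j_b)
    return abs(mag_a - mag_b) / ((mag_a + mag_b) / 2) * 100

# Newton's third law predicts equal magnitudes, opposite signs:
print(percent_difference(-0.25, 0.25))   # 0.0
print(percent_difference(-0.25, 0.24))   # about 4.1 (a small experimental mismatch)
```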
Did Newton's Third Law hold true? Were there any external forces at work? Describe the effects of these external forces.
colour dependent on the value at a certain point
So I plan to represent some data in terms of a sphere. I want to draw a sphere using my x, y, and z coordinates, which I think I know how to do, but then I would like to colour-code the surface of the
sphere depending on the values of my function, which depend on x, y, and z: for example, red for high values, blue for low, etc. Any ideas how I go about this?
Many thanks guys! :)
10 Comments
Show 7 older comments
The input was 3D arranged in a sphere :)
Ah, then spherical output representation becomes reasonable -- though surface of the sphere vs sphere as a volume becomes a consideration.
I take it you wrote your own spherical FFT routine? For any particular cuboid array input indices (I,J,K), what location is mapped to in your array that you put through your FFT routine? I presume
here that the FFT routine puts its results into exactly the same logical shape as what was passed to it? If so then you should be able to reverse the process to map the outputs back to cuboid
array indices. Not using sphere(), just based on the exact reverse of the mapping that was used in the input processing. Once the FFT values are mapped back to cuboid array locations, we can proceed
to do a voxel viewing or spherical surface generation.
Hi there, well yes, my samples were taken at coordinates that are arranged in a spherical volume. However, a map of the points of the surface where the colour reflects the value of the points would be
enough. Yes, the FFT routine retains the shape of the input, so for example if I simply use scatter all the points can be seen. But it's difficult to see a peak in the middle that's in red amongst
densely packed blue points, so I was thinking to just show the peak at the surface. The suggestion is really good, but how would I go about coding for this?
As I say, I don't need to go back to the original locations as the output is already in the correct locations. Hence I just need to use the points at the surface to draw the sphere and then somehow use
the FFT values at the points to map for colour? What do you reckon? PS I really appreciate you taking so much time Walter! :)
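Since the thread never shows code, here is one possible sketch of the idea in Python/NumPy/Matplotlib rather than MATLAB (the values `f` below are a made-up stand-in for the FFT magnitudes at the surface points): generate the surface points of the sphere, normalise the values to [0, 1], map them through a colormap, and pass the result as `facecolors`.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless-safe backend for this sketch
import matplotlib.pyplot as plt

# Parametrise the unit sphere's surface
theta, phi = np.meshgrid(np.linspace(0, np.pi, 60),
                         np.linspace(0, 2 * np.pi, 120), indexing="ij")
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)

# Dummy data with one "peak"; replace with your FFT values at these points
f = np.exp(-((x - 0.3) ** 2 + y ** 2 + z ** 2) * 5)

# Normalise to [0, 1] and map through a colormap: high values red, low blue
norm = (f - f.min()) / (f.max() - f.min())
colors = plt.cm.jet(norm)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, facecolors=colors, rstride=1, cstride=1,
                linewidth=0, antialiased=False)
fig.savefig("sphere.png")
```

In MATLAB itself the analogous move is `surf(X, Y, Z, C)`, where the fourth argument `C` holds your values and drives the colouring (with `shading interp` and `colorbar` to taste).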
Material Results
According to The Orange Grove, "This book covers elementary trigonometry. It is suitable for a one-semester course at the...
see more
Material Type:
Open Textbook
Michael Corral
Date Added:
Jan 06, 2011
Date Modified:
Nov 05, 2013
Trigonometry textbook, wiki
Material Type:
Open Textbook
Mutiple Authors
Date Added:
Nov 17, 2008
Date Modified:
Sep 03, 2013
According to The Orange Grove, this is "a book introducing basic concepts from computational number theory and algebra,...
see more
Material Type:
Open Textbook
Victor Shoup
Date Added:
Jan 05, 2011
Date Modified:
Jan 05, 2011
According to OER Commons, 'These are the lecture notes of a one-semester undergraduate course which we taught at SUNY...
see more
Material Type:
Open Textbook
Gerald Marchesi, Dennis Pixton, Matthias Beck
Date Added:
Feb 02, 2011
Date Modified:
Feb 02, 2011
'A First Course in Linear Algebra is an introductory textbook designed for university sophomores and juniors. Typically such...
see more
Material Type:
Open Textbook
Rob Beezer
Date Added:
Feb 26, 2013
Date Modified:
Sep 09, 2013
'This book is designed for the transition course between calculus and differential equations and the upper division...
see more
Material Type:
Open Textbook
Joseph E. Fields
Date Added:
Oct 14, 2013
Date Modified:
Nov 05, 2013
This is a free, online textbook that, according to the author, "is intended to suggest, it is as much an extended problem set...
see more
Material Type:
Open Textbook
John M. Erdman
Date Added:
Jan 25, 2011
Date Modified:
Jan 25, 2011
This is a free, online textbook. The table of contents can be found here: http://samizdat.mines.edu/latex/latextoc.ps
see more
Material Type:
Open Textbook
Harvey J. Greenberg
Date Added:
Apr 27, 2010
Date Modified:
Jan 03, 2014
This is a free, online textbook in which the author attempts to "summarize calculus."
Material Type:
Open Textbook
Karl Heinz Dovermann
Date Added:
Jun 10, 2011
Date Modified:
Jun 10, 2011
This is a free textbook that is offered by Amazon for reading on a Kindle. Anybody can read Kindle books—even without a...
see more
Material Type:
Open Textbook
John Stuart Mill
Date Added:
Nov 26, 2013
Date Modified:
Dec 08, 2013
perfect square trinomial/square of binomial
March 3rd 2011, 03:38 PM
perfect square trinomial/square of binomial
Directions: Find the value of C that makes the expression a perfect square trinomial. Then write the expression as the square of a binomial.
Problem: x² + 6x + c
March 3rd 2011, 03:48 PM
A perfect square trinomial will always be in the following form: $(a + b)^2 = a^2 + 2ab + b^2$
For your problem a is clearly equal to x. The linear term is then 6x = 2ab = 2(x)b. Thus we may identify 2b = 6. That makes b = 3. Can you finish this?
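The recipe generalises: for x^2 + bx + c, the completing value is c = (b/2)^2. A tiny sketch that spot-checks the resulting identity numerically:

```python
def completing_value(b):
    """For x^2 + b*x + c to be a perfect square, c must be (b/2)^2."""
    return (b / 2) ** 2

c = completing_value(6)          # 9.0, so x^2 + 6x + 9 = (x + 3)^2
for x in range(-10, 11):         # spot-check the identity at integer points
    assert x**2 + 6*x + 9 == (x + 3)**2
print(c)
```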
March 3rd 2011, 04:02 PM
Yes. Thank you for your help.
Del pezzo surfaces in positive characteristic
For me a Del Pezzo surface $X$ over an algebraically closed field of characteristic $p$ is an algebraic surface where the anticanonical bundle $\omega^{-1}_X$ or $-K_X$ is ample. (I prefer the second
notation, although it is not very correct.)
In characteristic 0, as far as I know, there is a classification: $X$ has to be isomorphic either to $\mathbb{P}^1\times \mathbb{P}^1$ or to a blow-up of $\mathbb{P}^2$ at $9-K_X^2\geq 0$ points in
general position, i.e. no 3 points on the same line and no 6 points on the same conic of $\mathbb{P}^2$.
I would like to know if there is such a classification in characteristic $p\geq 2$. As far as I know the classification is definitely the same for $p>3$, and probably even for $p=3$ and there are
'extra' surfaces for $p=2$.
The perfect answer would confirm whether these assertions are true, describe the extra cases (in particular in terms of which curves can live in them or minimality) and/or give a reference.
But anything is better than nothing, so even if you know a bit of this I would like to know.
1 The case of $X={\mathbb P}^1\times {\mathbb P}^1$ is missing in your list. – rita Sep 23 '11 at 14:05
1 Singular del Pezzo surfaces are also of interest; see "Non-normal del Pezzo surfaces" by Reid (MR1311389). Extra things do happen in char. p. – inkspot Sep 23 '11 at 16:36
Fixed the typo, thanks rita – Jesus Martinez Garcia Sep 23 '11 at 19:01
1 Answer
The classical classification of (smooth) Del Pezzo surfaces as blow-ups relies on the Kodaira vanishing theorem in characteristic zero, but is actually true over any algebraically
closed field. See for example Kollar's Rational curves on algebraic varieties book, section III.3. (This paper by Xie on the Kawamata vanishing theorem on rational surfaces in
characteristic $p$ might also be useful.)
There is also an interesting classification of Del Pezzo surfaces as complete intersections in weighted projective spaces, which holds over any base field (not necessarily
algebraically closed). Again, the reference is Kollar's book, chapter III.3.
Braingle: 'The Cube Resistors' Brain Teaser
The Cube Resistors
Science brain teasers require understanding of the physical or biological world and the laws that govern it.
Puzzle ID: #1623
Category: Science
Submitted By: anilrapire
Consider a cube, each edge of which has a resistor of resistance r on it. What is the resistance between two points on the same side of the cube but on opposite corners?
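The teaser can be checked numerically without giving the answer away: treat each edge as a conductance 1/r, build the graph Laplacian, inject a 1 A test current between the two chosen corners, and read off the voltage. A sketch in plain Python (the node labelling, the small Gaussian-elimination solver, and the choice r = 1 are all illustrative assumptions):

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def effective_resistance(n_nodes, edges, a, b, r=1.0):
    """Resistance between nodes a and b of a network of equal resistors r."""
    g = 1.0 / r
    L = [[0.0] * n_nodes for _ in range(n_nodes)]    # conductance Laplacian
    for u, v in edges:
        L[u][u] += g; L[v][v] += g
        L[u][v] -= g; L[v][u] -= g
    keep = [i for i in range(n_nodes) if i != b]     # ground node b
    A = [[L[i][j] for j in keep] for i in keep]
    rhs = [1.0 if i == a else 0.0 for i in keep]     # 1 A in at a, out at b
    v = solve(A, rhs)
    return v[keep.index(a)]                          # R = V_a / 1 A

# Cube: vertices are 3-bit labels; an edge joins labels differing in one bit.
edges = [(u, v) for u in range(8) for v in range(u + 1, 8)
         if bin(u ^ v).count("1") == 1]
# Nodes 0 (=000) and 3 (=011) are opposite corners of the same face.
print(effective_resistance(8, edges, 0, 3))
```

Running it prints the requested face-diagonal resistance (in units of r); choosing other node pairs gives the edge and body-diagonal cases as well.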
Proof of square root being irrational
September 14th 2007, 10:58 AM #1
Sep 2007
Proof of square root being irrational
Let N be a natural number such that the square root of N is not an integer. Prove that the square root of N is then irrational.
*The problem gives you a hint stating that:
Assume the square root of N is rational. Then the set X := {x in the natural numbers : x multiplied by the square root of N is a natural number} is nonempty. Show that if x is in X
and x' := (x multiplied by the square root of N) minus (x multiplied by [square root of N]), where [.] denotes the integer part, then x' is in X and x' < x, contradicting the well-ordering
of the natural numbers. Thus the square roots of 2, 3, 5, ... are irrational.
Someone posted this:
If you know the rational root theorem we have:
let y = sqrt(n); then y is a root of
y^2 - n = 0,
and if this has rational roots they are among the factors (positive or
negative) of n.
So if sqrt(n) is rational, the rational root theorem forces it to be an
integer, and [sqrt(n)]^2 = n, that is, n is a perfect square.
So we have that if sqrt(n) is rational then sqrt(n) is an integer; hence
if sqrt(n) is not an integer it is irrational
I'm pretty sure we have not covered the rational root theorem in class, and as much sense as that theorem makes, I don't think I can use that to solve the problem on a test if we have not covered
it yet.
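Even without the theorem's name, the posted argument can be sanity-checked by computer: for y^2 - n = 0 the only rational-root candidates are (plus or minus) the divisors of n, so sqrt(n) is rational exactly when some divisor squares to n, i.e. when n is a perfect square. A quick sketch:

```python
from math import isqrt

def sqrt_is_rational(n):
    """Rational root theorem on y^2 - n = 0: a rational root must be an
    integer divisor d of n, so sqrt(n) is rational iff some d has d*d == n."""
    return any(d * d == n for d in range(1, n + 1) if n % d == 0)

# cross-check against the direct perfect-square test
for n in range(1, 500):
    assert sqrt_is_rational(n) == (isqrt(n) ** 2 == n)
print("consistent for n = 1..499")
```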
September 14th 2007, 12:34 PM #2
Grand Panjandrum
Nov 2005
September 14th 2007, 01:07 PM #3
Sep 2007
Modular Functions
Date: 06/23/97 at 06:22:05
From: Mart de Graaf
Subject: Modular functions
I hope you can help me with this problem. Recently I saw a documentary
on the proof of Fermat's Last Theorem. There I encountered "Modular
Functions". They said these were functions over the complex plane with an
incredible amount of symmetry.
Can you tell me some more about modular functions? I got really
curious then, but I couldn't find any answers.
Date: 06/23/97 at 09:00:46
From: Doctor Anthony
Subject: Re: Modular functions
Dear Mart,
Modular functions are functions with super-symmetry, which means they
can be transformed in an infinity of different ways and yet remain
unaltered. They cannot be represented graphically because they exist
in hyperbolic space - they are complex, but with a real and imaginary
component along the x-axis and a real and imaginary component along
the y-axis.
A simple example of the type of transformation involved is:
If q = e^(pi*i*w), then q is a function of w, and if now w is replaced
by w' where
aw + b
w' = -------- ..(1) a, b, c, d integers such that ad-bc = 1
cw + d
The infinity of transformations (1) forms a group G and then with the
aid of functions F(0), F(1), F(2), F'(1) being themselves functions of
q we can construct further functions that remain unaltered for G or
for some subgroup of G.
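The group structure of the transformations (1) is easy to verify numerically: composing two such maps corresponds to multiplying their coefficient matrices, and ad - bc = 1 is preserved. A small sketch (the sample matrices T and S are the standard generators w -> w + 1 and w -> -1/w, and the test point w is arbitrary):

```python
# A transformation is stored as its coefficients (a, b, c, d) with ad - bc = 1,
# acting on w by w -> (a*w + b) / (c*w + d).
def act(M, w):
    a, b, c, d = M
    return (a * w + b) / (c * w + d)

def mul(M, N):
    """Matrix product of two coefficient tuples."""
    a, b, c, d = M
    e, f, g, h = N
    return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

T = (1, 1, 0, 1)    # w -> w + 1
S = (0, -1, 1, 0)   # w -> -1/w
w = 0.3 + 1.2j      # an arbitrary point in the upper half-plane

ST = mul(S, T)
assert ST[0] * ST[3] - ST[1] * ST[2] == 1           # determinant stays 1
assert abs(act(ST, w) - act(S, act(T, w))) < 1e-12  # composition = matrix product
print(ST)
```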
In the proof of Fermat's Last Theorem, use is made of the conjecture
that every elliptic equation is related to a modular form. If an
elliptic equation is found that cannot be related to a modular form,
then that equation has no solutions.
Elliptic equations are of the form y^2 = x^3 + ax^2 + bx + c with a,
b, c integers, and we require integer solutions for x and y. It was
shown that if A^n + B^n = C^n with A, B, C integers was a solution of
the Fermat equation, then this could be transformed into an elliptic
y^2 = x^3 + (A^n - B^n)x^2 - A^n*B^n*x
It turns out that this equation can never be modular, so since all
elliptic equations are modular, we cannot have A^n + B^n = C^n
What Andrew Wiles accomplished was to prove the Taniyama-Shimura
conjecture that every elliptic equation must be modular. From there
the rest of Fermat's Last Theorem falls into place.
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
[SciPy-user] Real Array Expressed as Complex Array
Anne Archibald peridot.faceted@gmail....
Thu Jul 3 15:57:00 CDT 2008
2008/7/3 Lorenzo Isella <lorenzo.isella@gmail.com>:
> Hello,
>> On Thu, Jul 3, 2008 at 9:39 AM, Lorenzo Isella <lorenzo.isella@gmail.com> wrote:
>>> > Since the result was that the array pot_ext_dimensionless is real, how
>>> > comes that it is expressed as a complex array (though the imaginary
>>> > part is always zero)?
>> It all depends on how you're calculating the pot_ext_dimensionless
>> array; clearly somewhere in there an operation makes it complex.
>> You'll have to show us how it's calculated.
> I think I solved the problem: I introduced some real elements taken from
> an array that has also some complex entries into pot_ext_dimensionless;
> although all the elements of pot_ext_dimensionless are all real, somehow
> scipy retains memory of these, once-existing, complex entries.
The key idea is that arrays have a data type: that is, each numpy
array, upon creation, specifies the type of all its contents. So your
numpy arrays are marked as containing complex numbers. The fact that
the imaginary part of these complex numbers is approximately or
exactly zero isn't relevant; they are still stored as a pair of
floating-point numbers. If you prefer, you can think of their type as
"potentially complex numbers". In any case, such numbers are printed
as a+bj even if a or b is zero, and various arithmetic operations
treat them as complex numbers. If you know that the answer should be
real, and you want to represent them more conveniently and
efficiently, you can simply take the real part.
>> You can always access the (real,imaginary) part of a complex array
>> with (pot_ext_dimensionless.real, pot_ext_dimensionless.imag)
>> But be careful, these arrays are not contiguous (they're a view into
>> the complex array). That wrinkle has bitten me before, but I can't
>> quite recall the circumstances. You can always make them contiguous
>> with numpy.ascontiguousarray().
> This sounds important and not 100% clear to me. Do you mean that if I
> have a complex array z and call real.z, I do not get in general an array
> with the same length as z, since purely imaginary entries are "skipped"
> rather than appearing as entries with zero real part, as one would expect?
No. Taking "X.real" is the same as the mathematical operation of
taking the real part:it throws away any imaginary part and interprets
what's left as real, whether it's zero or not. "Contiguous", in this
context, is a technical feature of numpy arrays that you should almost
never need to care about. (Numpy arrays can be homogeneous blocks of
memory, but they can also be homogeneous elements "strided" through
memory with other data in between. A few functions cannot deal with
this striding.)
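Anne's points are easy to see in a few lines of NumPy; the sample array is arbitrary:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = a * (1 + 0j)                  # one complex operand promotes the whole array
print(b.dtype)                    # complex128, even though every imag part is 0

r = b.real                        # float64 view of the real parts
print(np.array_equal(r, a))       # True: nothing is lost when imag parts are 0
print(r.flags['C_CONTIGUOUS'])    # False: the view strides over complex storage
rc = np.ascontiguousarray(r)      # packed float64 copy
print(rc.flags['C_CONTIGUOUS'])   # True
```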
Derry, NH Science Tutor
Find a Derry, NH Science Tutor
...I am a recent graduate from Saint Anselm College in Manchester NH. I graduated with a BA in Chemistry and the American Chemical Society Certification. I am very sociable,and I have been
tutoring students throughout my college career.
3 Subjects: including chemistry, physical science, organic chemistry
...I have my certification to teach both high school level math and chemistry in Massachusetts and have taught both over the past 6 years. I have also tutored a number of students in the
algebras, geometry and trigonometry through my school districts. I am a certified math teacher (grades 8-12) in...
6 Subjects: including chemistry, geometry, algebra 1, algebra 2
...I will provide additional resources and worksheets to the student that might help them achieve-not just proficiency- but mastery of these topics.I'm a professor of chemistry at a large
university in Massachusetts. My PhD and post-doctoral work focused on synthesis of complicated organic molecule...
1 Subject: organic chemistry
...I look forward to working with you soon!I studied piano for several years. Also, I have performed vocally with the Merrimack Valley Players and the Hatherly Players for ten years in concerts
and musical theater productions. At the high school where I work, I have run a chess club for about 6 years.
19 Subjects: including physics, calculus, geometry, SAT math
...Let me coach you toward your success.I currently work as a consultant for a medical company, editing and reviewing their nursing articles prior to posting on their website which has an
international audience. I also recently was recognized as a reviewer for a new edition of a Primary Care nursin...
6 Subjects: including physiology, anatomy, nursing, proofreading
Interesting math/programming problem....
July 20th 2013, 02:06 PM
Interesting math/programming problem....
The problem is simple, let me describe it:
Suppose i ask 5 questions to 3 students and they give their answers.
Every student is able to see what every student had answered.
All the questions have a YES/NO answer.
The correct answers are not given be me to the students, but i give to all students how many questions got right each of the students.
•Example: for the questions Q1,Q2,Q3,Q4,Q5 let's suppose the correct answers are 11001 (1 for YES and 0 for NO, so question 1 has the answer YES, while question 4 has the answer NO and question 5 also
has the answer YES) and the students don't know them.
•Also let's suppose the student-1 gave the answer to all questions 10000, student-2 gave 11111 and student-3 gave 10110 and all 3 students know the answers all the others gave.
•So now i give to the students the information that the student-1 had 3 correct answers, the student-2 had 3 correct answers also, while the student-3 had only 1 correct answer.
My question is:
►Can a student deduce the correct answers from this information?
IMPORTANT NOTE: I'm NOT speaking about this specific example, I'm interested in the general problem!
So let's see the general problem:
Let's say we have N questions being asked to K students.
So we get the answers Ak1, Ak2, ..., AkN given by the k-th student, with k from 1 to K, and Akx = 0 or 1 for every k from 1 to K and x from 1 to N.
And the numbers of total student's correct answers is denoted by Cj with j from 1 to K.
Example (6 questions, 4 students, random answers, random Cj numbers of correct answers) we are given the information:
Student-1: 100111
Student-2: 100001
Student-3: 001101
Student-4: 010111
C1= 4 (i.e. student-1 answered 4 questions correctly)
C2= 4
C3= 1
C4= 5
►Given that information, can we deduce in a GENERAL and STRAIGHTFORWARD (i.e. programmable) way what all the correct answers are? I.e. can we find the string of 1's and 0's of length N that validates the information given above?
►Is the aforementioned string unique?
(Note: any Cj equal to 0 or N gives us the desired string with the correct answers immediately.)
Special cases of the problem can be solved by examining multiple different cases and obtaining the solution by deductive reasoning, but that's not a general solution, since we have to use different cases and a different type of reasoning each time, depending on what the numbers N, K, Cj are.
I tried to solve the above using the XOR operator but I couldn't get anywhere.
Can you help me find the solution to the ► questions?
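For what it's worth, the ► questions can at least be attacked by exhaustive search: for quiz-sized N, all 2^N candidate answer keys can simply be enumerated and checked against the reported scores. A sketch in Python (the function and variable names are mine, not from the thread):

```python
from itertools import product

def consistent_keys(answers, scores):
    """Return every answer key (tuple of 0/1 of length N) that is
    consistent with the reported per-student scores C_j."""
    n = len(answers[0])
    return [key for key in product((0, 1), repeat=n)
            if all(sum(a == k for a, k in zip(ans, key)) == c
                   for ans, c in zip(answers, scores))]

# The 5-question, 3-student example from the post (true key 11001):
answers = [(1, 0, 0, 0, 0), (1, 1, 1, 1, 1), (1, 0, 1, 1, 0)]
keys = consistent_keys(answers, [3, 3, 1])
# keys == [(1, 1, 0, 0, 1)] -- here the key is recoverable and unique
```

Running the same function on the 6-question, 4-student numbers quoted above returns an empty list: no key at all is consistent with those (randomly chosen) scores. So in general the search either recovers a unique key, several candidates, or none; the cost is O(2^N · K · N), which is fine for quiz-sized N but not a closed-form answer to ►.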
July 20th 2013, 04:10 PM
Re: Interesting math/programming problem....
"Given that information, can we deduce in a GENERAL and STRAIGHTFORWARD (i.e. programmable) way what all the correct answers are? I.e. can we find the string of 1's and 0's of length N that validates the information given above?"
No, not always. For example, it might be that the n students all answer the m questions in exactly the same way. Knowing that each had i questions correct and m - i questions wrong tells us nothing about which answers were correct and which were wrong.
July 20th 2013, 11:45 PM
Re: Interesting math/programming problem....
"Given that information, can we deduce in a GENERAL and STRAIGHTFORWARD (i.e. programmable) way what all the correct answers are? I.e. can we find the string of 1's and 0's of length N that validates the information given above?"
No, not always. For example, it might be that the n students all answer the m questions in exactly the same way. Knowing that each had i questions correct and m - i questions wrong tells us nothing about which answers were correct and which were wrong.
Indeed, but in that case I should transform my question to ask for a GENERAL and STRAIGHTFORWARD (i.e. programmable) way to find whether the string of the correct answers can be determined or not, and if it can, to find it (in a general and straightforward way)....(Smile)
Question Re: Backdoor Roth w/ 2012 and 2013 Contributions
My wife and I contributed $5,000 each in the 2012 contribution year to our Roth IRAs. Due to unforeseen circumstances, we now exceed the 2012 Roth contribution limits. Therefore, with the help of
information on this board, I have successfully recharacterized the 2012 contributions to Traditional IRAs, which had no prior balance. So far so good.
Now, I want to convert the TIRAs to Roth, but I also want to make new IRA contributions for 2013 and get them into Roth one way or another. The problem is that we don't know whether we'll be over or
under the Roth contribution limits for 2013, but it's likely we'll be over again.
Therefore, I'm wondering if there's any disadvantage to adding $5,500 in non-deductible 2013 contributions to each TIRA (which currently include the recharacterized 2012 contributions) and then
converting the TIRAs (each including 2012 and 2013 non-deductible money) to Roth.
This seems to me to be the simplest option. I suppose that the only IRS implication will be to file Forms 8606 regarding the non-deductible TIRA contributions for 2012 and 2013, correct?
Thanks in advance for your time and thoughts on this. I read a number of previous threads but didn't find anything addressing this exact situation. Please let me know if I omitted anything of
importance from my explanation.
Re: Question Re: Backdoor Roth w/ 2012 and 2013 Contribution
As long as you have no pre-tax contributions in tIRAs, my understanding is that the Backdoor Roth is just as good as making a direct Roth IRA contribution if you were so eligible.
Re: Question Re: Backdoor Roth w/ 2012 and 2013 Contribution
You can do as you planned and won't have to worry about how high your income is in 2013. When you convert, any earnings in the recharacterized contribution will be taxable (the amount your
recharacterization transfer exceeded your original contribution).
You will then have a single conversion to report for 2013. If your conversion includes taxable income of $200 for example, and you need to take Roth distributions from this conversion before 5 years
have passed, the first $200 out is deemed to be the taxable portion of your conversion and will be subject to a 10% penalty unless you reach 59.5 first. The rest of the conversion was not taxable and
will not have a penalty.
That's the only difference vrs being able to make a regular Roth contribution in the first place, ie a possible 10% penalty on the taxable amount if you don't hold the conversion 5 years. You will
report the 2013 conversion on Form 8606, the same form (different section) used to report the non deductible TIRA contribution. A separate 8606 is required for each spouse.
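Alan S.'s $200 example can be put into a few lines of arithmetic. This is a hypothetical illustration with made-up figures, not tax advice; the function name and parameters are mine:

```python
def conversion_tax(converted, basis, withdrawn_early=0.0, penalty_rate=0.10):
    """Split a Roth conversion into taxable earnings vs. after-tax basis,
    and estimate the 10% early-distribution penalty that applies if the
    taxable slice comes out within 5 years and before age 59.5."""
    taxable = max(converted - basis, 0.0)      # earnings picked up at conversion
    penalized = min(withdrawn_early, taxable)  # taxable dollars are deemed out first
    return taxable, penalized * penalty_rate

# A $5,000 recharacterized contribution grows to $5,200 before converting;
# the whole conversion is then distributed within the 5-year window:
taxable, penalty = conversion_tax(5200.0, 5000.0, withdrawn_early=5200.0)
# taxable is 200.0 and penalty is 20.0; the remaining $5,000 of the
# distribution is basis and carries no penalty.
```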
Re: Question Re: Backdoor Roth w/ 2012 and 2013 Contribution
That's great. Thanks very much for your insight. This is what I am going to do.
Re: Question Re: Backdoor Roth w/ 2012 and 2013 Contribution
by Alan S.
A separate 8606 is required for each spouse.
How many spouses do you figure he has?
Sorry. Couldn't resist.
Actually, I wanted to thank you, Alan, for the explanation. I had googled my situation --- which was similar to fitz's --- and a Wikipedia entry made the top of the list. It said the same thing you did, but not as detailed. I was just about to do my standard Boglehead site search on Google for confirmation of the Wikipedia info --- site:Bogleheads.org search_term(s) --- when I noticed that this thread came up second on the google return list.
I was looking for disadvantages to a backdoor Roth. I wanted to get my mAGI to 170,000. I'm figuring that without any finagling, our mAGI is going to fall in the 190-200K range for 2013. I was going
to follow the suggestion from another poster here, whereby my wife would contribute ALL of her 2013 salary to her 401k, and I would cut her a check for that. She made about 20K in 2012 (she works
outside the home part-time --- very full-time inside the home), and will probably make a similar amount this year. Maybe a bit more. I was trying to figure out if it was worth the effort. It still
may be, but knowing I can still do a backdoor Roth without any grave consequences muddies the waters a little less.
SOLVE: 2(4c-4)-5=6c+1 (please i need to find the answer ASAP) :D
Start by distributing the 2 on the left side:
2(4c) - 2(4) - 5 = 6c + 1
8c - 8 - 5 = 6c + 1
Now collect like terms:
8c - 13 = 6c + 1
Now add 13 to both sides and subtract 6c from both sides:
2c = 14
Now divide both sides by 2:
c = 7
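A one-line way to double-check the result (a hypothetical quick check, not part of the original thread):

```python
c = 7
# Substituting back: both sides of 2(4c-4)-5 = 6c+1 evaluate to 43.
assert 2 * (4 * c - 4) - 5 == 6 * c + 1
```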
Thanks to both!!
Those Deceiving Error Bars
Have you ever looked at a histogram with the data displayed as counts per bin, in the form of points with error bars, and wondered whether those fluctuations and departures from the underlying hypothesized model (usually superimposed as a continuous line or histogram) were really significant or worth ignoring ?
The subject is one of the topics which takes the most time away in discussions which arise during talks at internal meetings of HEP experiments. Physicists in the audience will be always happy to
compare their ability of eye-fitting and to argue about whether there's a bump here or a mismodeling there. It is just as if we came with a built-in "goodness-of-fit" co-processor in the back of our
mind, and that was connected with our mouth without passing through those other parts of our brain handling the "think first" commandment. We do lose a lot of time in such discussions.
Besides the issue of deceiving zero-entry bins, there are several other reasons why one should be careful with such eyeballing comparisons, but by far the most important one is that the data, when
they consist in event counts per bin, are universally shown as points with error bars, and the error bars by default are drawn symmetrically above and below the observed count, and extend from N -
sqrt(N) to N + sqrt(N), if N is the bin content. In other words, the default is to use the fact that the event count, being a random variable drawn from a Poisson distribution, has a variance equal to the mean.
Here I should explain to the least knowledgeable readers what a Poisson distribution is. Any statistics textbook explains that the Poisson is a discrete distribution describing the probability to observe N counts when an average of m is expected. Its formula is P(N|m)=[exp(-m)* m^N]/N! (where ! is the symbol for the factorial, such that N!=N*(N-1)*(N-2)*...*1, and P(N|m) should be read as "the probability that I observe N given an expectation value of m").
So what's the problem with Poisson error bars ? The problem is that those error bars do not represent exactly what we would want them to. A "plus-or-minus-one-sigma" error bar is one which should "cover" the true value of the unknown quantity we have measured 68% of the time, on average: 68% is the fraction of the area of a Gaussian distribution contained within -1 and +1 sigma. For Gaussian-distributed random variables a 1-sigma error bar is always a sound idea, but for Poisson-distributed data it is not typically so. What's worse, we do not know what the true variance is, because we only have an estimate of it (N), while the variance is equal to the true value (m). In some cases this makes a huge difference.
Take a bin where you observe 9 event counts and the true value was 16: the variance is sqrt(16)=4, so you should assign an error bar of +-4 to your data point at 9. But you do not know the true
value, so you correctly estimate it as N=9, whose square root is 3. You thus proceed to plot a point at 9 and draw an error bar of +-3. Upon visually comparing your data point (9+-3) with the
expectation from a true model, drawn as a continuous histogram and having value 16 in that bin, you are led to believe you are observing a significant negative departure of event counts from the
model, since 9+-3 is over two "sigma" away from 16; 9 is instead less than two sigma away from 16+-4. So the practice of plotting +-sqrt(N) error bars deceives the eye of the user.
Worse still is the fact that the Poisson distribution, for small (m<50 or so) expected counts, is not really symmetric. This causes the +-sqrt(N) bar to misrepresent the situation very badly for
small N. Let us see this with an example.
Imagine you observe N=1 event count in a bin, and you want to draw two models on top of that observation: one model predicts m=0.01 events there, the other predicts m=1.99. Now, regardless of whether
m=0.01 or m=1.99 is the expectation of the event counts, if you see 1 event you are going to draw an error bar extending from 0 (i.e., 1-sqrt(1)) to 2 (1+sqrt(1)), thus apparently "covering" the
expectation value in both cases; but while for m=1.99 the probability to observe 1 event is very high (and thus the error bar around your N=1 data point should indeed cover 1.99), for m=0.01 the
probability to observe 1 event is very small: P(1|0.01)=exp(-0.01)*0.01^1/1!=0.01*exp(-0.01)=0.0099. N=1 should definitely not belong to a 1-sigma interval if the expectation is 0.01, since almost
all the probability is concentrated at N=0 in that case (P(0|0.01)=0.99)!
The solution, of course, is to try and draw error bars that correspond more precisely to the 68% coverage they should be taken to mean. But how to do that ? We simply cannot: as I explained above, we observe N, but we do not know m, so we do not know the variance. Rather, we should realize that the problem is ill-posed. If we observe N, that measurement has NO uncertainty: that is what we saw, with 100% probability. Instead, we should apply a paradigm shift, and insist that the uncertainty should be drawn around the model curve we want to compare our data points to, and not around the data points.
If our model predicts m=16, should we then draw an uncertainty bar, or some kind of shading, around that histogram value, extending from 16-sqrt(16) to 16+sqrt(16), i.e. from 12 to 20 ? That would be
almost okay, were it not for the asymmetric nature of the Poisson. Instead, we need to work out some prescription to count the probability of different event counts for any given m (where m, the
expectation value of the event counts, is not an integer!), finding an interval around m which contains 68% of it.
Sound prescriptions do exist. One is the "central interval": we start from the largest integer N not exceeding m, and proceed to move right and left, summing in turn the larger of the probabilities of N+1 and N-1 given m. We continue to sum until we exceed 68%: this gives us a continuous range of integer values which includes m and "covers" precisely as it should, given the Poisson nature of the distribution.
This is the approach advocated in a preprint recently produced by R. Aggarwal and A. Caldwell, titled "Error Bars for Distributions of Numbers of Events
". The paper also deals with the more complicated issue of how to include in the display of model uncertainty the systematics on the model prediction, finding a Bayesian solution to the problem which
I consider overkill for the problem at hand. I would be very happy, however, if particle physics experiments turned away from the sqrt(N) error bars and adopted the method of plotting box
uncertainties with different colours, as advocated in the cited paper. You would get something like what is shown in the figure on the right.
Note how the data points can now be immediately classified, and more soundly, according to how much they depart from the model, which is now not a line anymore, but a band giving the extra dimensionality of the problem (the model's probability density function, colour-coded green for 68% coverage, yellow for 95% coverage, and red for 99% coverage). I would be willing to get away without the red shading - 68% and 95% coverages would suffice, and are more in line with current styles of showing expected values adopted in many recent search results (the so-called "Brazil bands").
Despite the soundness of the approach advocated in the paper, though, I am willing to bet that it will be very hard to impossible to convince the HEP community to stop plotting Poisson error bars as
sqrt(N) and start using central intervals around model expectations. An attempt to do so was made in BaBar, but resulted in a failure -everybody continued to use the standard sqrt(N) prescription.
There is a definite amount of serendipity in the behaviour of the average HEP experimentalist, I gather!
The OPERA results for superluminal neutrinos as presented in their graphs would certainly look different if they followed the approach you advocate above.
Anonymous (not verified) | 12/22/11 | 10:31 AM
Mention the Poisson distribution and I think of what I read when I was studying it years ago, decades before the appearance of the Wikipedia article from which this extract comes:
The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in 1837 ...
The first practical application of this distribution was done by Ladislaus Bortkiewicz in 1898 when he was given the task to investigate the number of soldiers of the Prussian army killed
accidentally by horse kick; this experiment introduced Poisson distribution to the field of reliability engineering.
Over sixty years to find an application!
P.S. I have just noticed how ohwilleke has mentioned
a Poisson distribution or other discrete probability distribution that does well with low numbers
One of Bortkiewicz’s publications was Das Gesetz der kleinen Zahlen (The Law of Small Numbers), of special significance to Roulette players.
Robert H. Olley / Quondam Physics Department / University of Reading / England
Robert H Olley | 12/22/11 | 11:54 AM
Dear Tommaso, all your warnings are valid and I have experienced most of them in practice. But it's still true that they're only important if the statistics is poor and so is the confidence level.
When you have enough data, "0" in a bin doesn't occur; the shape of the Poisson distribution reduces to the Gaussian; sqrt(N) and sqrt(N-sqrt(N)) become indistinguishable as error bars, and so on.
There are situations in which the non-Gaussianity and other things are important but they're pretty rare. That's why e.g. for the Higgs Brazil graphs, Gibbs-like Gaussian treatment of all the error
bars is just fine and I actually prefer it over some excessively complicated formulae that elevate the risk of errors.
Luboš Motl (not verified) | 12/22/11 | 13:30 PM
Good point. In almost every situation where the distinction matters, the data don't have the statistical power to make a strong prediction anyway.
ohwilleke (not verified) | 12/22/11 | 15:34 PM
Hi Lubos,
the problem is that no matter how much data you have, in HEP you will always end up looking in the tails, where you have few events! That is what happens with new particle searches: we easily exclude resonances in a mass spectrum where there is a lot of data, and the new particle would produce a significant and visible contribution; but we are usually turned on by the small fluke occurring right on the tail, where statistical inference is the hardest to make, both because of the issues I have discussed above and because the tails are the less well known part of the underlying background.
So that is why the arguments in the paper I discussed are indeed relevant in HEP.
Tommaso Dorigo | 12/23/11 | 09:46 AM
FWIW, I am far more unhappy when I see physics papers that use a continuous normal distribution to model a discrete quantity that is ideal for a Poisson distribution or other discrete probability
distribution that does well with low numbers (e.g. studies of how many kinds of neutrinos, or strong force colors there are, or how many jets a collision has produced, or how many quarks are in an
unidentified hadron, or what the charge of a quantum physics scale entity is), than I am when error bars are off, simply because errors bar are routinely poor estimates of error, that hindsight
reveals are almost always too small in published work due to publication bias, anyway. It shows to me an unwillingness of the researcher to think seriously about what is going on in a statistical
model in the context of the experiment in question.
ohwilleke (not verified) | 12/22/11 | 15:31 PM
I might agree with what you say ohwilleke, but the point of my post is that the typical judgement of the physicist sitting in the back row at a physics analysis meeting usually manages to get the whole audience into pointless discussions, because error bars do deceive the eye. So a different style of plotting them would indeed ease matters in that department.
Tommaso Dorigo | 12/23/11 | 09:48 AM
“Those Deceiving Error Bars!”
Imagine those words as the translation of an Italian madrigal, set to a hitherto undiscovered work of Francesco Petrarca.
Robert H. Olley / Quondam Physics Department / University of Reading / England
Robert H Olley | 12/23/11 | 14:12 PM
Patrick Lockerby | 12/23/11 | 19:03 PM
Tommaso Dorigo | 12/24/11 | 03:54 AM
’Tis poetry, that’s what it is! (Or, to be more precise, Poetic Diction.)
But more that that, I think that such a system may indeed develop in the brains of those much accustomed to gazing on graphs – especially those presented at conferences using PowerPoint.
Meanwhile, for yourself, Patrick, just now on Meridian News they were recounting what happened in Lewes 175 years ago today:
On 27 December 1836, an avalanche occurred in Lewes, the worst ever recorded in Britain. A large build-up of snow on the nearby cliff slipped down onto a row of cottages called Boulters Row (now
part of South Street). About fifteen people were buried, and eight of these died. A pub in South Street is named The Snowdrop in memory of the event.
From the map, it is not the sort of place one associates with avalanches.
Robert H. Olley / Quondam Physics Department / University of Reading / England
Robert H Olley | 12/27/11 | 14:07 PM
As an old timer, I cannot approve this proposal, mainly because it inherently assumes that a model is correct and should take precedence over data.
Brazil plots are the ultimate magician's trick, imo, introducing unprecedented hidden levels of modeling uncertainty. At least simple square root error bars are not obfuscating; they are what they are, telling us what we might find if we repeated the experiment and looked at the data the prescribed number of times, irrespective of underlying models, imo. OK, when the numbers are small one can qualify the statistics, but introducing models in the error bars is like wagging the dog by the tail.
If you look at the plot in your blog http://www.science20.com/quantum_diaries_survivor/cms_bosons , it is evident that there is no need for a model to gauge the importance of the signal from the data.
anna v (not verified) | 12/24/11 | 02:00 AM
Hi Anna,
thanks for your opinion, which I understand but in part disagree with. The purpose of those coloured shadings, as described in the first part of the preprint I linked, is only to feed the user with a more visual description of the possible range of statistical uncertainty of the model prediction, if the model is correct. When we compare data to a model we know the model could be wrong, and we certainly cannot do a goodness-of-fit by eye which uses that uncertainty in our assessment. We usually just stop at asking ourselves whether the data and the model appear compatible as they are drawn. These shadings can address the issue, making it a bit easier for us.
Also, I insist that putting a Gaussian sqrt(N) bar on an observation is very unprincipled - we observed that datum, so there is no uncertainty.
Tommaso Dorigo | 12/24/11 | 04:00 AM
What do you think of the 'Pearson chisq' motivated error bars discussed for CDF here:
'upper error = 0.5 + sqrt(n+0.25), lower error = -0.5 + sqrt(n+0.25)' ?
It's somewhat of a hybrid between the default square root and a more thorough interval evaluation: plot the interval of true values mu for which the observed number would be within sqrt(mu) of the
true value.
There is a long discussion of different possible strategies and their 'coverage' here : http://www-cdf.fnal.gov/publications/cdf6438_coverage.pdf
Of course if one has a theoretical curve then the task of evaluating whether the observations agree with it can well expressed as fluctuations around the theory. But the frequentist's first task is
to produce meaningful error bars that are right the right fraction of the time...
stringph (not verified) | 12/26/11 | 18:12 PM
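The CDF prescription stringph quotes has a simple closed form: it is the interval of true means mu for which (n - mu)^2 <= mu. A quick sketch comparing it with the naive sqrt(n) bars (the function name is mine, not from the note):

```python
from math import sqrt

def pearson_bars(n):
    """Half-lengths (lower, upper) of the 'Pearson chi-square' error bar
    on an observed count n, i.e. the means mu with (n - mu)**2 <= mu."""
    return -0.5 + sqrt(n + 0.25), 0.5 + sqrt(n + 0.25)

for n in (0, 1, 9):
    lo, up = pearson_bars(n)
    print(n, round(lo, 3), round(up, 3), sqrt(n))
# n = 0 gives bars (0.0, 1.0): the error bar no longer collapses to
# zero, unlike the naive sqrt(0) = 0 prescription.
```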
As to "... forgetting to take into account bins where the data had fluctuated down to zero entries ..."
would that indicate that you should ignore
"... three ... ATLAS ... ZZ candidates ... at 126 GeV [that] is the source of the significance that ATLAS quotes at that mass..."
in light of
the fact that Figure 4(a) of ATLAS-CONF-2011-162.pdf
shows those 3 events as all being in a single bin (120-125 GeV)
of their ZZ-4l histogram for 4.8/fb
with the two adjacent bins (115-120) and (125-130) being empty
the 3 event total being consistent with background
for the region 115-130 GeV ?
Tony Smith (not verified) | 12/27/11 | 00:59 AM
Actually I was looking at Figure 4(b) with respect to my previous comment.
Tony Smith (not verified) | 12/27/11 | 05:32 AM
Momentum maps and matrix Poisson brackets.
I'm trying to understand how this solution works. The question was to deduce momentum maps for right and left actions of $SO(3)$ on $GL(3,R)$, and got them as $J_R = \frac{1}{2}(Q^TP-P^TQ)$ and $J_L=
\frac{1}{2}(PQ^T-QP^T)$ respectively, for $Q\in GL(3)$, $P\in GL(3)^*$. However, then I need to find if they Poisson commute. The solution is:
$$ \{ J_L,J_R \} = \frac{1}{4}\{(PQ^T-QP^T),(Q^TP-P^TQ)\} = \frac{1}{4}((P-P^T)(Q-Q^T)-(Q^T-Q)(P-P^T)) $$
But I don't understand how this works (specifically, how to compute a poisson bracket with matrices). There is no explicit definition of the poisson bracket in the solution. Elsewhere in the text
there was a Poisson bracket defined as
$$\{F,H\}=\operatorname{ tr}\left(\frac{\partial F}{\partial Q}\frac{\partial H}{\partial P}-\frac{\partial H}{\partial Q}\frac{\partial F}{\partial P}\right)$$
but that is for scalar functions $F,H$. I guess you could apply that formula to every entry of the $P,Q$'s but there must be a shortcut. Sorry if i've not given enough context, please ask. Thanks for
any help.
sg.symplectic-geometry dg.differential-geometry mp.mathematical-physics
Perhaps there is a problem with the notation to begin with: I guess your Poisson brackets should include a tensor product somewhere. Could you write down the Poisson brackets of the entries of your
matrices? – mathphysicist Jun 30 '11 at 12:32
Below I write as I would proceed, hoping it could be useful to you, but, by the way, could I know the motivation of this problem?
I would prefer to work on the real associative unitary algebra $g\equiv\mathfrak{gl}(n)$, for arbitrary $n$, instead of its unit group $GL(n)$. Let $\Phi_{R(L)}$ be the natural right (resp. left) action of $SO(n)$ on $g$, and $\Psi_{R(L)}$ its lift to $T^\ast g$.
The canonical symplectic $2$-form $\omega$ on $T^\ast g=g\times g^\ast$ is constant and given by the bilinear product $\omega((A_1,f_1),(A_2,f_2))=f_1(A_2)-f_2(A_1)$, for all $(A_1,f_1),(A_2,f_2)\in g\times g^\ast$.
Let us identify $g$ with $g^\ast$ through the linear isomorphism $A\mapsto \langle A,\cdot\rangle$, where $\langle\cdot,\cdot\rangle$ is the scalar product on $g$ defined by $\langle A,B\rangle=\textrm{tr}(A^T B)$ for all $A,B\in g$. Consequently let us identify $T^\ast g$ with $g\times g$.
With these identifications we find the following expressions for the symplectic form and the actions:
$\omega((A_1,B_1),(A_2,B_2))=\mathrm{Tr}(B_1^T A_2-B_2^T A_1)$,
$\Psi_R(O,(A,B))=(AO,BO)$, $\Psi_L(O,(A,B))=(OA,OB)$.
For any $X\in\mathfrak{so}(n)$, let $\zeta_{R(L)}^X$ denote the fundamental vector field on $g\times g$ corresponding to $X$ w.r.t. the action $\Psi_{R(L)}$. We find the following expressions:
$\zeta_R^X(A,B)=(AX,BX)$, $\zeta_L^X(A,B)=(XA,XB)$.
Now it is easy to find that:
$(\iota_{\zeta_R^X}\omega)(A,B):(P,Q)\to\mathrm{Tr}((BX)^T Q-P^T (AX))=\mathrm{Tr}(X^T(B^T Q+P^T A))$ is equal to the differential of $J_R^X(A,B)=\mathrm{Tr}(X^T(B^T A))$, and
$(\iota_{\zeta_L^X}\omega)(A,B):(P,Q)\to\mathrm{Tr}((XB)^T Q-P^T (XA))=\mathrm{Tr}(X^T(QB^T+AP^T))$ is equal to the differential of $J_L^X(A,B)=\mathrm{Tr}(X^T(AB^T))$.
So the actions $\Psi_{R(L)}$ are hamiltonian with momentum maps $J_{R(L)}:T^\ast g\cong g\times g \to\mathfrak{so}(n)^\ast$ defined by $J_{R(L)}(A,B):X\in\mathfrak{so}(n)\to J_{R(L)}^X(A,B)$.
In fact its value at an arbitrary $(A,B)\in T^\ast g\cong g\times g$ is equal to $\omega((AX,BX),(YA,YB))=\mathrm{Tr}((BX)^T YA-(YB)^T AX)=\mathrm{Tr}(-XB^T YA+B^T YAX)=0$.
This completes the proof.
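The final cancellation (a cyclic-trace argument) can also be spot-checked numerically; below is a small self-contained sketch in plain Python (my addition, not part of the original answer), evaluating $\omega(\zeta_R^X,\zeta_L^Y)$ at a random point of $T^\ast g\cong g\times g$ for random $X,Y\in\mathfrak{so}(3)$:

```python
import random

random.seed(0)
n = 3

def rand_matrix():
    return [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(n))

def antisym(M):  # M - M^T, an element of so(n)
    return [[M[i][j] - M[j][i] for j in range(n)] for i in range(n)]

A, B = rand_matrix(), rand_matrix()
X, Y = antisym(rand_matrix()), antisym(rand_matrix())

# omega(zeta_R^X, zeta_L^Y)(A,B) = Tr((BX)^T (YA) - (YB)^T (AX))
val = trace(matmul(transpose(matmul(B, X)), matmul(Y, A))) \
    - trace(matmul(transpose(matmul(Y, B)), matmul(A, X)))
assert abs(val) < 1e-9  # vanishes, up to floating-point roundoff
```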
Statements and negations (3.1) p. 88-93
Compound statements and connectives (3.2) p. 94-101
Express statements using symbols
Form the negation of statements
Form compound statements using connectives
Statements – a sentence that is either true or false,
but not both simultaneously, p. 89
17 < 9
11 > 5
All two digit numbers are greater than 1 digit numbers
Negation - a statement with an opposite “truth.”
When a statement is true its negation is false and
when the statement is false its negation is true. p. 89
Statements - “Symbolism”
Just as x can be used as a name for a number, a
symbol such as p can be used as a name for a statement.
When p is used as a name for a statement, the symbols
~p are used as a name for the negation of p.
Let p stand for “Miami is a city in Florida.”
Then ~p is the statement “Miami is not a city in Florida.”
“Quantified” Statements
A “quantified” statement is one that says something
about “all”, “some”, or “none” of the objects in a group.
“All students in the college are taking history.”
“Some students are taking mathematics.”
“No students are taking both mathematics and history.”
“Equivalent” Statements
In any language there are many ways to say the same
thing. The different linguistic constructions of a
statement are considered equivalent.
“All students in the college are taking history.”
“Every student in the college is taking history.”
“Some students are taking mathematics.”
“At least one student is taking mathematics.”
Negating Quantified Statements
The negation of a statement about “all” objects is
“not all”. “Not all” can often be expressed by “some
are not.”
p : All students in the college are taking history.
~p : Some students in the college are not taking history.
Negating Quantified Statements
The negation of a statement about “some” objects is
“not some”. “Not some” can often be expressed by
“none” or “not any.”
p : Some students are taking mathematics.
~p : None of the students are taking mathematics.
“Compound” Statements, 94
Simple statements can be connected with “and”, “Either
… or”, “If … then”, or “if and only if.” These more
complicated statements are called “compound.”
“Miami is a city in Florida” is a true statement.
“Atlanta is a city in Florida” is a false statement.
“Either Miami is a city in Florida or Atlanta is a city in
Florida” is a compound statement that is true.
“Miami is a city in Florida and Atlanta is a city in
Florida” is a compound statement that is false.
“And” Statements, p. 95
When two statements are represented by p and
q the compound “and” statement is p /\ q.
p: Harvard is a college.
q: Disney World is a college.
p/\q: Harvard is a college and Disney World is
a college.
p/\~q: Harvard is a college and Disney World is
not a college.
“Either ... or” Statements, p. 96
When two statements are represented by p and q the
compound “Either ... or” statement is p\/q.
p: The bill receives majority approval.
q: The bill becomes a law.
p\/q: The bill receives majority approval or the bill
becomes a law.
p\/ ~q: The bill receives majority approval or the
bill does not become a law.
“If ... then” Statements, p. 96
When two statements are represented by p and q the
compound “If ... then” statement is: p → q.
p: Ed is a poet.
q: Ed is a writer.
p → q: If Ed is a poet, then Ed is a writer.
q → p: If Ed is a writer, then Ed is a poet.
~q → ~p: If Ed is not a writer, then Ed is not a poet.
“If and only if” Statements, p.98
When two statements are represented by p and q the
compound “if and only if” statement is: p ↔ q.
p: The word is set.
q: The word has 464 meanings.
p ↔ q: The word is set if and only if the word has
464 meanings.
~q ↔ ~p: The word does not have 464 meanings if
and only if the word is not set.
Symbolic Logic, p. 99
Statements of Logic
Name Symbolic Form
Negation ~p
Conjunction p/\q
Disjunction p\/q
Conditional p → q
Biconditional p ↔ q
“Truth Tables” – Negation, p. 103
If a statement is true then its negation is false.
If the statement is false then its negation is true.
This can be represented in the form of a table
called a “truth table.”
p ~p
T F
F T
“Truth Tables” – Conjunction, p.104
The conjunction of two statements is true
only when both of them are true.
p q p/\q
T T T
T F F
F T F
F F F
“Truth Tables” – Disjunction, p.106
The disjunction of two statements is false
only when both of them are false.
p q p\/q
T T T
T F T
F T T
F F F
Constructing a Truth Table, p. 107
Construct a truth table for ((p/\~q)\/q).
p q ~q (p/\~q) (p/\~q)\/q
T T F F T
T F T T T
F T F F T
F F T F F
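The table above can also be generated mechanically. A minimal sketch in Python (the column order mirrors the slide's table):

```python
from itertools import product

def truth_table():
    """Rows (p, q, ~q, p AND ~q, (p AND ~q) OR q) of the truth table."""
    rows = []
    for p, q in product([True, False], repeat=2):
        not_q = not q
        conj = p and not_q        # p /\ ~q
        result = conj or q        # (p /\ ~q) \/ q
        rows.append((p, q, not_q, conj, result))
    return rows

for row in truth_table():
    print(" ".join("T" if v else "F" for v in row))
```

Running it prints the same four rows as the slide's table.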
Office hours M-F 9:00-10:15
Beach Bldg Room 113 or by appointment
Work p.93 #1-30 odd
Work p.101
Read p. 103-123 (3.3, 3.4) | {"url":"http://www.docstoc.com/docs/5274009/Statements-and-negations-(31)-p-88-93-Compound-statements-and","timestamp":"2014-04-18T01:41:39Z","content_type":null,"content_length":"58891","record_id":"<urn:uuid:6141edaf-0f19-4d23-be48-099b5a5c9320>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm trying to make a histogram with excel
This is what I get
It's supposed to look more like this
I'm having trouble importing the data into excel
from textedit...
that's ok :)
That's a bar graph, not a histogram.
What do those data represent? Are they all from one sample? If you're having trouble importing, sometimes copying the data to Word, or Notepad or something first helps - put all the numbers in a
single row. If you still have trouble, go back and put commas between all the numbers (Excel should recognize the spaces and put the numbers in different cells, though).
they're all from one sample. I'll try putting in commas
I was able to transfer the data over. The boundaries are in the attachment. How do I calculate the frequencies using excel?
I forget the particular function. With a data set that small, I would just count them myself.
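For anyone doing this outside Excel: the bin frequencies can be computed directly, e.g. in Python with NumPy (a sketch; the data values and class boundaries below are made-up placeholders, not the numbers from the attachment):

```python
import numpy as np

# placeholder sample data and class boundaries
data = [12, 15, 17, 21, 22, 25, 28, 31, 34, 35]
boundaries = [10, 20, 30, 40]

# counts[i] = number of values falling in [boundaries[i], boundaries[i+1])
counts, edges = np.histogram(data, bins=boundaries)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo}-{hi}: {c}")
```

In Excel itself, the analogous tool is the FREQUENCY array function (or the Histogram tool in the Analysis ToolPak).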
| {"url":"http://openstudy.com/updates/5071ec5de4b057a2860ceec7","timestamp":"2014-04-17T06:45:53Z","content_type":null,"content_length":"60981","record_id":"<urn:uuid:2d2fc02d-bc58-4642-8c5a-ee3c920fa23d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellevue, WA Algebra 1 Tutor
Find a Bellevue, WA Algebra 1 Tutor
...I am confident that with a little bit of time and motivation I can help any student understand and think more critically about the subject they are struggling with. In many cases, students
just need a push toward the more interesting side of learning, and I always encourage students to look past...
14 Subjects: including algebra 1, writing, geometry, biology
Background: I recently graduated from the University of Washington with a B.S. degree in chemistry. Throughout my college career I had a special focus in mathematics. Outside of school, current
events, video games, and the financial markets catch most of my attention--with the exception of my 5-month-old dog, Misha.
17 Subjects: including algebra 1, chemistry, geometry, physics
...I have completed over four years of University level coursework in Psychology ranging from Child Development to Neural basis of behaviors to Cognitive Psychology to Brain anatomy laboratories.
I have tutored University level Psychology courses for over two years and I truly enjoy teaching the su...
27 Subjects: including algebra 1, chemistry, reading, writing
...I believe in customizing my tutoring to what the student needs, which may include a math refresher to fill in some gaps that may exist. I have completed coursework for an English major at the
University of Washington and PhD coursework in English at Rutgers University. I can tutor reading, literature, grammar, and writing.
32 Subjects: including algebra 1, English, reading, writing
...I teach by breaking problems down into simple steps and keeping careful track of all quantities as we work. Working as a technical writer in the software industry, I wrote, edited,
illustrated, and published professional documentation. I have a deep understanding of English grammar and usage, and a keen eye for readability.
18 Subjects: including algebra 1, chemistry, biology, algebra 2
Related Bellevue, WA Tutors
Bellevue, WA Accounting Tutors
Bellevue, WA ACT Tutors
Bellevue, WA Algebra Tutors
Bellevue, WA Algebra 2 Tutors
Bellevue, WA Calculus Tutors
Bellevue, WA Geometry Tutors
Bellevue, WA Math Tutors
Bellevue, WA Prealgebra Tutors
Bellevue, WA Precalculus Tutors
Bellevue, WA SAT Tutors
Bellevue, WA SAT Math Tutors
Bellevue, WA Science Tutors
Bellevue, WA Statistics Tutors
Bellevue, WA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Beaux Arts Village, WA algebra 1 Tutors
Bothell algebra 1 Tutors
Burien, WA algebra 1 Tutors
Clyde Hill, WA algebra 1 Tutors
Hunts Point, WA algebra 1 Tutors
Issaquah algebra 1 Tutors
Kirkland, WA algebra 1 Tutors
Medina, WA algebra 1 Tutors
Mercer Island algebra 1 Tutors
Newcastle, WA algebra 1 Tutors
Redmond, WA algebra 1 Tutors
Renton algebra 1 Tutors
Seattle algebra 1 Tutors
Shoreline, WA algebra 1 Tutors
Yarrow Point, WA algebra 1 Tutors | {"url":"http://www.purplemath.com/Bellevue_WA_algebra_1_tutors.php","timestamp":"2014-04-19T02:21:15Z","content_type":null,"content_length":"24198","record_id":"<urn:uuid:048bfb41-ca01-47b1-aec2-ffb10d4b836d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hi there. I haven't been here in a while.
Is there a proper name for the interior part of a trigonometric function or is it simply called the inner function, or something to that effect? I have looked around online for an answer but I can't
seem to find anything.
For example, the function
does g(x) have a proper term or name? Or is it just "the inner function"?
Thank you very much. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=18093","timestamp":"2014-04-19T20:02:59Z","content_type":null,"content_length":"13619","record_id":"<urn:uuid:1397087e-e9fc-4ccb-a7b3-02de22388029>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Quentin Math Tutor
...I currently teach in a 90% Latino school where almost all of my communications with parents are in Spanish, and where the school administration often requests my written and oral translation
assistance. As a full-time classroom teacher for 9 years, I have advised and/or mentored hundreds of stud...
43 Subjects: including trigonometry, GED, GRE, public speaking
...Standards today are harder than ever and I believe that providing tools to help develop understanding and awareness, build confidence, improve grades and reduce test anxiety is the key to
having a student who is both successful and comfortable in the classroom. During our first session, we will ...
17 Subjects: including algebra 2, English, geometry, algebra 1
...My method relies on close observation and diagnosis. I'll help you identify where your strengths and weaknesses are and find any specific problems with your understanding of the material. Then,
we'll break your problem areas down into simple steps and walk through their applications.
29 Subjects: including algebra 2, elementary (k-6th), geometry, phonics
...Students are often waiting for the correct explanation that will get them over the hump and I can give that explanation. Just as there are hundreds of ways to prove the Pythagorean Theorem,
there's bound to be an explanation for you. Math isn't an abstract subject for geniuses, it's a practical tool set that just requires a lot of practice and hard work.
19 Subjects: including algebra 2, basketball, tennis, discrete math
...I have taken graduate course work related to childhood development and learning disorders. Also, I have received my doctoral degree in clinical psychology. I minored in sociology in my
undergraduate studies.
30 Subjects: including algebra 2, biology, grammar, prealgebra | {"url":"http://www.purplemath.com/san_quentin_math_tutors.php","timestamp":"2014-04-18T23:29:34Z","content_type":null,"content_length":"23797","record_id":"<urn:uuid:bf0c47a8-6e43-4e57-8ebb-9a0969bc3e73>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Issues with convection: what is a useful framework beyond bulk models of large N, non-interacting, scale-separated, equilibrium systems?
Seminar Room 1, Newton Institute
The representation of cumulus clouds presents some notoriously stubborn problems in climate modelling. The starting point for our representations is the well-known Arakawa and Schubert (1974) system
which describes interactions of cloud types ("plumes") with their environment. In some ways, this system has become brutally simplified: in applications, generally only a single "bulk" cloud type is
considered, there are assumed to be very many clouds present, and an equilibrium between convection and forcing is assumed to be rapidly reached. In other ways, the system has become greatly
complicated: the description of a plume is much more "sophisticated". In this talk, I want to consider what might be learnt from almost the opposite perspective: i.e., keep the plume description
brutally simple, but take seriously the implications of issues like finite cloud number (leading naturally to important stochastic effects), competitive communities of cloud types (leading to a
proposed relation for the co-existence of shallow and deep convection) and prognostic effects (leading to questions about how far equilibrium thinking holds).
| {"url":"http://www.newton.ac.uk/programmes/CLP/seminars/2010082716451.html","timestamp":"2014-04-17T00:56:40Z","content_type":null,"content_length":"6797","record_id":"<urn:uuid:89f6c0ae-44c8-4e56-b80b-0777c038643d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Gibbstown Statistics Tutors
...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania
certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma...
19 Subjects: including statistics, calculus, geometry, algebra 1
...My approach towards tutoring is simple. I perceive tutoring as not just an opportunity to furnish knowledge, but also a learning opportunity for me. Therefore when I tutor, I place myself at
the student's academic level as a learning partner.
23 Subjects: including statistics, physics, geometry, biology
...I consider Math to be one of the most enjoyable subjects since it is very linear. Math problems can be likened to little puzzles, for which you only need to uncover the correct answer. I hold
Bachelor of Science and Master of Science degrees.
20 Subjects: including statistics, reading, algebra 2, biology
Hello. My name is Mr. H. and I am announcing that I am available to tutor you.
13 Subjects: including statistics, chemistry, physics, biology
...I have passed all of the required PRAXIS tests for elementary education in both states. I student taught in a 3rd grade classroom for 16 weeks, tutored many students at the elementary level,
substitute taught in the elementary level, and completed fieldwork in 1st and 4th grade. I have taught middle school math for 6 years.
21 Subjects: including statistics, reading, algebra 1, SAT math | {"url":"http://www.algebrahelp.com/Gibbstown_statistics_tutors.jsp","timestamp":"2014-04-21T12:44:56Z","content_type":null,"content_length":"24807","record_id":"<urn:uuid:759e91f0-90fb-42ee-a55d-5143279418ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Absolute Value of Integers | Arithmetic
Do you live where it snows? How much snow has ever accumulated in 24 hours where you live?
Cameron is amazed at the record snowfall in Alaska. One time, the day began without any snow on the ground. Then it began to snow and within 24 hours there was 62 inches of snow.
We can use an integer to write the increase in snowfall, and we can use absolute value to show the distance between the depth of snow and the bare ground.
Do you know how to do this?
Understanding absolute value is the goal of this Concept. Pay attention and we will revisit this situation at the end of it.
Sometimes, when we look at an integer, we aren’t concerned with whether it is positive or negative, but we are interested in how far that number is from zero. Think about water. You might not be
concerned about whether the depth of a treasure chest is positive or negative, but simply how far it is from the surface.
This is where absolute value comes in.
What is absolute value?
The absolute value of a number is its distance from zero on the number line.
We can use symbols to represent the absolute value of a number. For example, we can write the absolute value of 3 as $|3|$
Writing an absolute value is very simple: you just leave off the positive or negative sign and count the number of units that the integer is from zero.
Find the absolute value of 3. Then determine what other integer has an absolute value equal to $|3|$
Look at the positive integer, 3, on the number line. It is 3 units from zero on the number line, so it has an absolute value of 3.
Now that you have found the absolute value of 3, we can find another integer with the same absolute value. Remember that with absolute value you are concerned with the distance an integer is from
zero and not with the sign.
Here is how we find another integer that is exactly 3 units from 0 on the number line. The negative integer, -3, is also 3 units from zero on the number line, so it has an absolute value of 3 also.
So, $|3|=|-3|=3$
This example shows that the positive integer, 3, and its opposite, -3, have the same absolute value. On a number line, opposites are found on opposite sides of zero. They are each the same distance
from zero on the number line. Because of this, any integer and its opposite will always have the same absolute value. To find the opposite of an integer, change the sign of the integer.
Just like we can find the absolute value of a number, we can also find the opposite of a number.
Find the opposite of each of these numbers: -16 and 900.
-16 is a negative integer. We can change the negative sign to a positive sign to find its opposite. The opposite of -16 is +16 or 16.
900 is the same thing as +900. We can change the positive sign to a negative sign to find its opposite. So, the opposite of 900 is -900.
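Both operations, taking the absolute value and taking the opposite, can be checked with a short Python sketch (`abs` is Python's built-in absolute value):

```python
# (value, its absolute value, its opposite) for a few integers
pairs = [(x, abs(x), -x) for x in [3, -3, -16, 900]]
for x, absolute, opposite in pairs:
    print(f"|{x}| = {absolute}, and the opposite of {x} is {opposite}")
```

Notice that |3| and |-3| come out the same, matching the number-line picture above: opposites sit the same distance from zero.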
Find the absolute value of each number.
Example A
Solution: $22$
Example B
Example C
Find the opposite of -18.
Here is the original problem once again.
Do you live where it snows? How much snow has ever accumulated in 24 hours where you live?
Cameron is amazed at the record snowfall in Alaska. One time, the day began without any snow on the ground. Then it began to snow and within 24 hours there was 62 inches of snow.
We can use an integer to write the increase in snowfall, and we can use absolute value to show the distance between the depth of snow and the bare ground.
Do you know how to do this?
To express both of these values using integers and absolute value, we can begin with the increase in snowfall. Because it is an increase in snowfall, we use a positive integer to express this amount.
To express the difference in snowfall accumulation and the bare ground, we use an absolute value.
The absolute value of 62 is 62.
This is how we can express this situation using integers.
Whole Numbers
the positive counting numbers including 0.
Fraction
a part of a whole written with a numerator and denominator.
Integers
positive whole numbers and their opposites. Positive and negative numbers.
Opposites
for a negative number, it has a positive partner. For a positive number, it has a negative partner.
Absolute Value
the distance that a number is from zero.
Guided Practice
Here is one for you to try on your own: find $|-234|$.
To identify the absolute value of this number, we have to think about the number of units it is from zero. Remember that absolute value does not concern positive or negative, but the distance that a
value is from zero.
$|-234| = 234$
This is our answer.
Video Review
- This James Sousa video is an introduction to integers and includes absolute value.
Directions: Write the opposite of each integer.
7. 20
8. -7
9. 22
10. -34
11. 0
12. -9
13. 14
14. 25
Directions: Find the absolute value of each number.
15. $|13|$
16. $|-11|$
17. $|-5|$
18. $|17|$
19. $|-9|$ | {"url":"http://www.ck12.org/concept/Absolute-Value-of-Integers-Grade-7/?eid=None&ref=None","timestamp":"2014-04-17T19:22:37Z","content_type":null,"content_length":"110533","record_id":"<urn:uuid:1c9bc5ce-8a51-46fe-afea-d59bd61468b6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analysis of Flow Cytometry Data by Matrix Relevance Learning Vector Quantization
Flow cytometry is a widely used technique for the analysis of cell populations in the study and diagnosis of human diseases. It yields large amounts of high-dimensional data, the analysis of which
would clearly benefit from efficient computational approaches aiming at automated diagnosis and decision support. This article presents our analysis of flow cytometry data in the framework of the
DREAM6/FlowCAP2 Molecular Classification of Acute Myeloid Leukemia (AML) Challenge, 2011. In the challenge, example data was provided for a set of 179 subjects, comprising healthy donors and 23 cases
of AML. The participants were asked to provide predictions with respect to the condition of 180 patients in a test set. We extracted feature vectors from the data in terms of single marker
statistics, including characteristic moments, median and interquartile range of the observed values. Subsequently, we applied Generalized Matrix Relevance Learning Vector Quantization (GMLVQ), a
machine learning technique which extends standard LVQ by an adaptive distance measure. Our method achieved the best possible performance with respect to the diagnoses of test set patients. The
extraction of features from the flow cytometry data is outlined in detail, the machine learning approach is discussed and classification results are presented. In addition, we illustrate how GMLVQ
can provide deeper insight into the problem by allowing to infer the relevance of specific markers and features for the diagnosis.
Citation: Biehl M, Bunte K, Schneider P (2013) Analysis of Flow Cytometry Data by Matrix Relevance Learning Vector Quantization. PLoS ONE 8(3): e59401. doi:10.1371/journal.pone.0059401
Editor: Avi Ma’ayan, Mount Sinai School of Medicine, United States of America
Received: April 5, 2012; Accepted: February 16, 2013; Published: March 18, 2013
Copyright: © 2013 Biehl et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: PS was supported by the Medical Research Council UK (Strategic Biomarker Grant G0801473). KB gratefully acknowledges funding by DFG (HA 2719/6-1) and the CITEC Centre of Excellence at the
University of Bielefeld. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
We present in this article our main results obtained in the context of the DREAM6/FlowCAP2 Molecular Classification of Acute Myeloid Leukemia Challenge 2011 [1]–[3]. This challenge was organized in a
joint effort by the Dialogue for Reverse Engineering Assessments and Methods (DREAM) project [3]–[6] and the Flow Cytometry: Critical Assessment of Population Identification Methods (FlowCAP)
initiative [2].
Flow cytometry constitutes a powerful technique which is widely used in medical research and clinical practice for the study and diagnosis of various diseases [7]. Flow cytometry measurements
typically yield a quantitative description of several tens or even hundreds of thousands of cells in a given sample. Light scatter and fluorescence properties are used to identify deviations from
normal cell size or structure and to quantify functional properties in terms of, e.g., protein marker expressions [7], [8]. The amount of available data, its high dimension, and the complexity of the
diagnosis tasks trigger a significant interest in systems for automated analysis and decision support.
Along these lines, the DREAM6/FlowCAP2 challenge addressed the analysis of given flow cytometry data, representing peripheral blood and bone marrow samples of, in total, 359 subjects. Some of these
corresponded to cases of Acute Myeloid Leukemia (AML) and the ultimate goal was to predict the condition of a number of patients whose diagnosis was unknown to the participants. Hence, the goal of
the challenge could be formulated as a machine learning problem: From the given example data with known diagnoses, criteria were to be inferred which then allowed for the classification of the test set.
We extracted feature vectors from the data in terms of a few characteristic quantities, summarizing the statistics of the observed marker values. Predictions were obtained by means of a specific
machine learning technique termed Generalized Matrix Relevance Learning Vector Quantization (GMLVQ) [9]–[11]. This prototype based method extends standard Learning Vector Quantization [12], [13] by
using Adaptive Distance Measures in Relevance LVQ, which motivated the acronym and team name Admire-LVQ. In the challenge, our team achieved the best possible performance with respect to the required
test set prediction.
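In GMLVQ, the distance underlying the prototype-based classification is a quadratic form, $d_\Lambda(\vec{w},\vec{x})=(\vec{x}-\vec{w})^\top\Lambda\,(\vec{x}-\vec{w})$ with a positive semi-definite relevance matrix $\Lambda=\Omega^\top\Omega$ adapted during training together with the prototypes [9]-[11]. The following sketch shows how such a classifier assigns labels once $\Omega$ and the prototypes are given (illustrative code, not the authors' implementation; the toy prototypes and data are made up):

```python
import numpy as np

def gmlvq_predict(X, prototypes, labels, Omega):
    """Nearest-prototype classification under the adaptive distance
    d(w, x) = (x - w)^T Omega^T Omega (x - w)."""
    # projecting by Omega turns the quadratic form into a plain
    # squared Euclidean distance in the projected space
    PX = X @ Omega.T
    PW = prototypes @ Omega.T
    d = ((PX[:, None, :] - PW[None, :, :]) ** 2).sum(axis=-1)
    return labels[np.argmin(d, axis=1)]

# toy example: two prototypes in 2D, identity relevance matrix
prototypes = np.array([[0.0, 0.0], [4.0, 4.0]])
labels = np.array([0, 1])          # e.g. healthy vs. AML
Omega = np.eye(2)
X = np.array([[0.5, -0.2], [3.8, 4.1]])
print(gmlvq_predict(X, prototypes, labels, Omega))  # [0 1]
```

During GMLVQ training, both the prototypes and $\Omega$ are optimized, so the learned $\Lambda$ indicates which features (and feature pairs) are relevant for the classification.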
In the following section a description of the data set and our analysis is given. Thereafter we present and discuss our main results and the obtained prediction. We conclude with a brief outlook on
possible extensions and future work.
Data Set and Analysis
In this section we first describe the extraction of features from the given data. The specific machine learning analysis based on Generalized Matrix Relevance Learning Vector Quantization is
outlined. Furthermore, its validation in terms of the given data set is discussed.
The data set provided in the challenge comprised 359 subjects. For each of these, a varying number of cells, on the order of a few thousands, had been analysed by means of flow cytometry, see [2],
[3] for details. The first 179 subjects served as the training data; label information was provided, specifying 23 subjects as AML patients. The remaining 156 subjects are referred to as healthy donors throughout this contribution. Note that the latter group of subjects includes a number of patients with a diagnosis different from AML [14].
The task was to predict the diagnosis with respect to a test set of 180 subjects for which no label information was provided. The total number of AML cases in the test set, 20, was also disclosed to
the participants. However, this information was not exploited in our approach. We have analysed the transformed and compensated flow cytometry data as provided by the organizers of the challenge [2],
[3]. In our analysis we omitted the non-specific isotype control data representing non-human binding antibodies, which corresponds to tube 8 in the data set [3].
In clinical practice, a possible workflow is to sort cells according to a small number of gating variables in a first step, identifying potentially degenerate or immature cells. Subsequently, the
selected cells are analysed according to the remaining markers, aiming at a reliable diagnosis and potential identification of the AML subtype [7], [8]. In our approach we follow a simpler, more
direct strategy in which we omit cell specific information. After visual inspection in terms of histograms we decided to represent the data by a limited number of statistical characteristics per
patient and marker. Moreover, we took into account all markers at once in order to assign each subject to one of the two classes in a single processing step.
Feature Extraction and Normalization
A key step in the design of a classifier in this challenge was the extraction of appropriate features from the provided data. The data corresponding to tubes 1–7 represents 31 characteristic
quantities per cell: the so-called Forward Scatter on linear scale (FS Lin), the Sideward Scatter on logarithmic scale (SS Log), and 29 fluorescence intensities on logarithmic scale quantifying the
expression of various surface proteins. All of these quantities are referred to as markers in the following. Table 1 lists the considered markers and the index which we refer to in the analysis.
Table 1. List of the 31 markers used in the analysis.
Note that the potential gating markers FS Lin, SS Log, and CD45-EDC were provided for all cells in the data set. The other 28 markers were measured in one tube only, representing a sub-population of
cells per subject. We rescaled all markers by the respective largest possible value so as to limit all observations to the interval $[0,1]$.
FS Lin can be interpreted as a measure of cell size, while SS Log roughly quantifies intracellular granularity [7]. Note furthermore that the expression of IgG1 was measured by means of four
different binding antigens. In our analysis, however, the corresponding values were formally treated as four independent markers.
For the purpose of a first, visual inspection, we computed histograms corresponding to the frequency of marker values in the training set. Figures 1 and 2 display histograms of 4 example markers: FS
Lin, SS Log, CD45-EDC, and CD10-PC7, for one patient per class (subjects … and 103). The main purpose of Figures 1 and 2 is to illustrate the extraction of feature vectors from the sample data, which is
described in the following.
Figure 1. Example histograms and extracted features: FS Lin and SS Log.
Histograms and extracted features correspond to one healthy donor (subject , upper panels) and one AML patient (subject , lower panels), respectively. Histograms display the frequency of a particular
marker value for visual inspection. Six features are extracted per patient and marker, corresponding to mean, standard deviation, skewness, kurtosis, median, and interquartile range of the observed
frequency of marker values, cf. Eq. (1). Here the first 12 components of the 186-dim. feature vectors are displayed before z-score transformation.
Figure 2. Example histograms and extracted features: CD45-EDC and CD10-PC7.
For further description see Figure 1. The quantities displayed here correspond to features 13–18 and 181–186 of the 186-dim. vectors before z-score transformation.
For each patient and marker a varying number of cell measurements, typically a few thousand, was made available. In our analysis, we did not make use of cell-specific information, as is done
frequently in terms of a so-called gating procedure in clinical practice [7], [8]. We extracted information only on the level of single marker statistics over the entire population of cells. A direct
classification of histograms using, for instance, entropic distance measures or statistical divergences would be feasible here [15], [16]. We resorted, however, to reducing the information to only
six quantities per marker which summarize the characteristics of the corresponding histogram. We denote by the value measured for marker in individual cell of patient . From the available data we
determined the following quantities:
In addition we computed (e) the median and (f) the interquartile range of the set of observed values. The skewness is a measure of asymmetry, with positive values indicating that more weight is contained in the left side of the histogram. The kurtosis quantifies how sharply peaked a histogram is. Note that in the quantity defined above, sometimes termed excess kurtosis in the literature, a constant 3 is subtracted, yielding zero in the case of normal densities.
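The six summary statistics per marker can be sketched as follows (a minimal NumPy version; the original analysis used Matlab code, see the Available Software section, and the function name here is hypothetical):

```python
import numpy as np

def marker_features(values):
    """Six summary statistics of one marker's value distribution:
    mean, standard deviation, skewness, excess kurtosis, median,
    and interquartile range, following Eq. (1)."""
    x = np.asarray(values, dtype=float)
    m = x.mean()
    s = x.std()                                    # population standard deviation
    skew = ((x - m) ** 3).mean() / s ** 3
    kurt = ((x - m) ** 4).mean() / s ** 4 - 3.0    # excess kurtosis: 0 for normal densities
    med = np.median(x)
    q75, q25 = np.percentile(x, [75, 25])
    return np.array([m, s, skew, kurt, med, q75 - q25])
```

Concatenating these six quantities over all 31 markers yields the 186-dimensional feature vector per subject.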
Hence we obtained, for each patient, a set of 6 quantities per marker. A particular subject was subsequently represented by the concatenated 186-dim. vector of characteristic features. As one example, the skewness of marker 17 (CD19-PE, see Table 1) observed for patient 42 corresponds to component 99 of the feature vector, since (17 − 1) · 6 + 3 = 99.
The features representing markers 1–3 (FS Lin, SS Log, CD45-EDC) and marker 31 (CD10-PC7) are shown for one example subject from each class in Figures 1 and 2, together with the corresponding extracted features.
In the training processes described in the following, we applied an additional z-score transformation: Given a (sub-)set of training examples, we computed the mean and standard deviation of each feature and rescaled all features in training, validation, or test data by subtracting the mean and subsequently dividing by the standard deviation. Consequently, the transformed features display zero mean and unit variance in the actual training set. While the transformation did not affect the classification performance, it enhances the interpretability of the results, in particular with respect to the relevance matrix discussed below.
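The training-set z-score transformation described above might be sketched as follows (hypothetical function names):

```python
import numpy as np

def zscore_fit(X_train):
    """Compute per-feature mean and standard deviation on the training set only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    return mu, sigma

def zscore_apply(X, mu, sigma):
    """Rescale any (training, validation, or test) data with the training statistics."""
    return (X - mu) / sigma
```

Applying the fitted statistics to the training set itself yields zero mean and unit variance per feature, as stated in the text.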
Matrix Relevance Learning Vector Quantization
We employed Generalized Matrix Relevance Learning Vector Quantization (GMLVQ) for the analysis of the obtained feature vectors. This highly flexible and powerful variant of LVQ is described in detail
in [9]–[11]. Here we employed the algorithm in its simplest setting with one prototype per class and a single, global relevance matrix as defined below.
The two classes, i.e. healthy donors (class 1) and AML patients (class 2), are represented by the prototype vectors , respectively. Given a particular z-score-transformed feature vector representing
one of the patients, its distance from the prototypes is determined as(2)
Here and are matrices and the specific parameterization of the distance guarantees non-negativity of the measure:(3)
In a simple Nearest Prototype Classification (NPC) scheme, a feature vector is assigned to class 1 if and to class 2, else. While serve as typical representatives of the classes, elements of the
symmetric matrix can be interpreted as to quantify the relevance of a pair of feature dimensions in the classification scheme.
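The adaptive distance of Eqs. (2, 3) can be sketched as below, assuming the standard GMLVQ parameterization in which the relevance matrix is the product Omega^T Omega:

```python
import numpy as np

def gmlvq_distance(x, w, Omega):
    """Squared adaptive distance d(w, x) = (x - w)^T Omega^T Omega (x - w).
    The factorization through Omega guarantees non-negativity, cf. Eq. (3)."""
    diff = Omega @ (x - w)
    return float(diff @ diff)
```

In the Nearest Prototype Classification scheme, a feature vector is assigned to the class of the prototype with the smaller distance.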
Both, prototypes and relevances, are determined in the same supervised training process. Given a set of examples with class labels , training is guided by the minimization of the cost function [9],
[17], [18](4)
where the index corresponds to the correct prototype with while identifies the wrong prototype. In general, the objective of training can be further specified by introducing a function in the cost
function, e.g. a sigmoidal [17]. Here, we resorted to the simple case of the identity function. Note that the contribution of a single example to the cost function is bounded. It is negative if the example is classified correctly and its absolute value relates to the margin of the classification.
Alternatively we refer to the closely related score which is computed as(5)
A value indicates that feature vector is assigned to class 1, healthy donors, with high certainty. Large values close to signal confident classification as an AML patient (class 2). The NPC scheme
can be reformulated as assigning vector to class 1 if and to class 2 else. While the score may serve as a relative measure of certainty in GMLVQ, it should not be interpreted directly as a
probability for AML. Note that any monotonically increasing function could be used to transform without modifying the actual ordering of patients according to .
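Eq. (5) itself did not survive extraction; one common choice consistent with the description, a relative-distance score in [0, 1], is sketched below. The precise form d1/(d1 + d2) is an assumption:

```python
import numpy as np

def gmlvq_score(x, w1, w2, Omega):
    """Relative-distance score in [0, 1]; values near 0 indicate class 1
    (healthy donor), values near 1 indicate class 2 (AML).  The exact
    form d1 / (d1 + d2) is assumed, not taken from the paper's Eq. (5)."""
    d1 = np.sum((Omega @ (x - w1)) ** 2)
    d2 = np.sum((Omega @ (x - w2)) ** 2)
    return float(d1 / (d1 + d2))
```

With this form, the NPC rule corresponds to the threshold 0.5, and any monotonically increasing transform leaves the patient ranking unchanged.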
We implemented the iterative optimization of , cf. Eq. (4), by means of a gradient descent procedure with respect to the adaptive quantities and . At iteration step , updates along the normalized
gradients and subsequent normalization of were performed:
with the time-dependent step sizes and . The full form of the gradient terms is given in [9], [11], [19], for instance. We employed gradient descent with waypoint averaging and step size control,
which has been introduced and described in greater detail in [19]: After a gradient step, Eq. (6), the achieved value of the cost function is compared with where
corresponds to the position in search space, on average over the last updates. The observation of signals oscillatory behavior of the iteration. In this case, we set and and reduced the step sizes by
a factor . All results presented here were obtained with parameters and in the waypoint averaging scheme. Initial step sizes were for prototypes and for matrix updates, respectively. In the problem
at hand, the obtained classification scheme and error rates turned out very robust with respect to the choice of these parameters.
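The waypoint-averaging scheme of [19] might be sketched as follows for a generic flat parameter vector; the constants eta, k, and factor are placeholders, not the values used in the paper:

```python
import numpy as np

def descend(cost, grad, theta, eta=0.05, k=5, factor=0.7, n_steps=40):
    """Gradient descent with waypoint averaging and step size control,
    following [19] in spirit.  If the cost after a step exceeds the cost
    at the average over the last k positions, jump back to that average
    and reduce the step size (oscillation detected)."""
    history = [theta.copy()]
    for _ in range(n_steps):
        g = grad(theta)
        theta = theta - eta * g / (np.linalg.norm(g) + 1e-12)  # normalized gradient step
        waypoint = np.mean(history[-k:], axis=0)
        if cost(theta) > cost(waypoint):   # oscillatory behavior: revert and shrink steps
            theta = waypoint.copy()
            eta *= factor
        history.append(theta.copy())
    return theta
```

In the actual GMLVQ training, separate step sizes are maintained for the prototypes and the matrix, and the matrix is re-normalized after each update.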
For a given training set, we initialized prototypes close to the corresponding class conditional means with small random deviations; similarly we chose the initial close to the identity matrix:(7)
where the Kronecker–Delta if and if . The components of and all elements of were generated independently according to a uniform density . Results were found to depend only very weakly on details of
the initialization.
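The initialization of Eq. (7) could look as follows; the perturbation magnitude eps is a hypothetical value:

```python
import numpy as np

def init_gmlvq(X, y, eps=0.01, rng=None):
    """Prototypes close to the class-conditional means, Omega close to the
    identity; independent uniform perturbations of magnitude eps, cf. Eq. (7)."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    w1 = X[y == 1].mean(axis=0) + rng.uniform(-eps, eps, n)
    w2 = X[y == 2].mean(axis=0) + rng.uniform(-eps, eps, n)
    Omega = np.eye(n) + rng.uniform(-eps, eps, (n, n))
    return w1, w2, Omega
```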
In order to evaluate the performance of the GMLVQ classifier before applying it to the test set, we employed a validation scheme based on randomized subsets of the available training data. In every
run we selected ca. 75% of the data from each class randomly, i.e. 17 of the 23 AML examples and 117 of the 156 healthy donors. These example data were used for training the GMLVQ system while the
remaining 45 served as a validation set. The random split of the data was repeated 50 times and, if not stated otherwise, results presented in this section were obtained on average over the
validation runs.
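One randomized split of this validation scheme might be implemented as:

```python
import numpy as np

def random_split(idx_aml, idx_healthy, n_aml=17, n_healthy=117, rng=None):
    """One randomized validation split: 17 of the 23 AML examples and 117 of
    the 156 healthy donors for training, the remaining 45 for validation."""
    rng = np.random.default_rng(rng)
    aml = rng.permutation(idx_aml)
    hea = rng.permutation(idx_healthy)
    train = np.concatenate([aml[:n_aml], hea[:n_healthy]])
    valid = np.concatenate([aml[n_aml:], hea[n_healthy:]])
    return train, valid
```

Repeating the split 50 times and averaging the resulting error rates reproduces the validation protocol described above.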
Figure 3 displays the averaged error rates of naïve Nearest Prototype Classification in the course of gradient based training. Note that an over-fitting effect was observed: Performing more than ca.
60 training steps decreased the error rates with respect to training examples to very low values. At the same time, however, validation set performance deteriorated. Closer inspection revealed that
this effect was essentially due to patient 116, listed as a case of AML in the training set. If contained in the validation set, this patient was consistently misclassified by the NPC scheme. If employed for training, on the contrary, the system eventually achieved agreement with the label, but at the expense of an increased error rate in the validation set.
Figure 3. Learning curves in the validation procedure.
Class specific and total error rates of Nearest Prototype Classification, corresponding to in Eq. (8), on average over 50 randomized validation runs. The upper panel corresponds to the performance in
the respective training set, the lower panel displays error rates with respect to the validation set. The curves correspond to including patient in training or validation set.
Based on this observation, we employed an early stopping strategy, terminating the training process after 40 gradient steps. When omitting patient 116 from the training set or re-labeling the subject as a healthy donor, the learning curves converged smoothly and overfitting was no longer observed. Moreover, we obtained virtually the same classification, i.e. the same order of scores with respect to the test set patients, in all these scenarios. The precise numerical results reported in the following section were obtained by means of the early stopping strategy, including subject 116 labelled as an AML case.
In addition to the error rates of the naïve NPC scheme we also evaluated the validation set performance in terms of the Receiver Operating Characteristics (ROC) [20]. By introducing a threshold , the
GMLVQ scheme can be biased with respect to one of the two classes:(8)
with the score defined in Eq. (5). For thresholds in the given range we computed the corresponding class-wise error rates with respect to the validation set on average over the 50 training runs, yielding
the threshold-averaged ROC curves [20] displayed in Figure 4.
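The class-wise rates on a fixed threshold grid, as needed for the threshold-averaged ROC of [20], can be sketched as:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """False and true positive rates of the biased classifier of Eq. (8) on a
    fixed grid of thresholds; averaging these points over the 50 validation
    runs yields the threshold-averaged ROC of [20].  AML (class 2) is taken
    as the positive class."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    pos = labels == 2
    fpr, tpr = [], []
    for th in thresholds:
        pred_pos = scores > th
        tpr.append(np.mean(pred_pos[pos]))
        fpr.append(np.mean(pred_pos[~pos]))
    return np.array(fpr), np.array(tpr)
```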
Figure 4. Validation set performance.
Threshold-averaged ROC as obtained in the validation runs using labeled data. The curves correspond to using the data set including patient 116 (lower, blue line) and excluding patient 116 from the analysis completely (upper, red line), respectively.
The ROC analysis revealed very high sensitivity (true positive rate) and specificity (1 − false positive rate) with respect to the validation set performance, with a correspondingly high Area Under Curve (AUC) [20]. In addition, removal of patient 116 from the data set resulted in an almost perfect ROC. Given the close to error-free classification we refrained from employing complementary performance
measures such as precision/recall or other characteristics [20]. For the same reason, we did not compare the validation performance of the simple GMLVQ scheme with more sophisticated settings or
alternative classifiers.
Results and Discussion
Final results, including the test set scores, were obtained using all 179 training samples for training. In addition, we performed an average over 50 randomized initializations in order to rule out an
influence of the initial configuration of the GMLVQ system. In each run, 40 gradient steps were performed with waypoint averaging and step size control as described above. The final test set scores
were obtained on average over the 50 randomized training runs.
Before discussing the outcome of the GMLVQ training in terms of prototypes and relevances we present the actual test set predictions.
Test Set Prediction
Figure 5 displays the GMLVQ based scores , cf. Eq. 5, with respect to the 180 test set patients. Values close to correspond to patients that are identified as AML patients with high certainty, while
small correspond to a classification as healthy donor. It would be very interesting to study potential correlations of the scores with additional information about the patients, e.g. measures of the
severity of the AML cases. Unfortunately such information was not disclosed and is not available for the given data set.
Figure 5. Test set predictions.
GMLVQ based score vs. patient number in the test set (left panel) and ordered according to (right panel). The dotted line marks an example posterior choice of the threshold , cf. Eq. (5), for crisp
classification yielding correct prediction of 20 AML patients in the test set.
Although it was known to the participants that the test set contained 20 AML cases, we did not make explicit use of this information. In the GMLVQ training, a threshold value does not have to be
specified. The result in terms of scores and the corresponding ranking of test set patients is independent of the actual number of AML cases. In a practical context, and if a crisp classification is
the goal, the actual value of should be set according to domain expert (user) preferences concerning the compromise between sensitivity and specificity. The example threshold value marked in the
right panel of Figure 5 was chosen a posteriori for illustration purposes only and is neither a result nor a parameter of the training process. With respect to performance in the challenge, the comparison with the unknown test set labels after submission of the predictions [3] revealed that the 20 patients with highest GMLVQ score corresponded precisely to the 20 AML patients in the test set. Hence, we achieved the best possible prediction according to Receiver Operating Characteristics or other evaluation methods like precision/recall, which only depend on the order of scores and the corresponding ranking of patients.
The obtained classifier can be illustrated in terms of a two-dimensional visualization: Figure 6 displays the training and test data in terms of projections on the leading eigenvectors of the
relevance matrix [21]. Two rather well separated clusters can be identified which reflect the assignment of classes. Note that the training set subject (patient 116) that was consistently
misclassified by the NPC scheme is, indeed, located in the cluster representing healthy donors. This relates to the overfitting behavior discussed in greater detail in the previous section.
Figure 6. Visualization of the data set as obtained by GMLVQ.
Projections of normalized feature vectors on the leading eigenvectors of are displayed. Green circles correspond to healthy donors, red symbols mark AML patients in the training set, while blue dots
represent test set data. Stars indicate the positions of the prototypes. The red arrow marks patient 116 in the training set, who is labeled as AML but is misclassified persistently for a large range of thresholds, cf. Eq. (5).
It is remarkable that error-free classification of the test set data was obtained by a number of teams who extracted different features from the data and used a variety of classification approaches
[1]. For example, Vilar et al. employed a histogram based classifier in connection with the Kullback-Leibler divergence used as an entropic distance measure [16]. Amar et al. also extracted
statistical moments from the data, but applied Support Vector Machine Regression, subsequently [22]. Logistic Regression was applied successfully by Manninen et al. [23]. Strickert and Seifert based
their predictions on a method termed Correlative Matrix Mapping [24]. Using their software library Jstacs [25], Keilwagen and Grau built a weighted ensemble of classifiers which also achieved perfect test set classification.
An additional ranking of the best performing teams was suggested by the organizers in retrospect [1], [3], [26]. It hinges on interpreting the submitted scores as probabilistic assignments and on the
reliability of the test set labels. In our opinion, the suggested posterior ranking according to, e.g., the Pearson correlation between scores and the test set class labels is questionable, see also
the DREAM6 discussion forum at [3].
Characteristics of the GMLVQ Classifier
Apart from yielding the actual classification scheme, the GMLVQ analysis provides insights into the structure of the data which become available by inspection of the prototypes and relevance matrix.
The interpretability of the classifier has proven useful in several applications and facilitates discussions with the respective domain experts [27], [28].
Figure 7 visualizes the difference vector of prototypes representing healthy donors (1) and AML patients (2), respectively. For the sake of clarity, we have shown only the 31 components which
correspond to the features of Eq. (1a). A positive difference corresponds to markers which display a greater value in the AML prototype compared to the typical healthy donor in the data set, examples being HLA-DR-FITC, CD117-PE, and CD34-PC5. Example markers which display reduced values in AML patients are CD15-FITC, CD16-PC5, and CD10-PC7.
Figure 7. GMVLQ prototypes.
Components of the difference vector corresponding to the feature , cf. Eq. (1), as represented by the AML prototype and healthy donor prototype . Positive bars indicate that is typically greater in
AML patients than in healthy donors.
In addition we analysed the resulting relevance matrix . We focused on the diagonal elements which formally accumulate the importance of feature for the resulting classification.
The direct interpretation of is simplified if all features assume values on the same order of magnitude. This condition was realized here by the explicit z-score transformation mentioned above.
Moreover, it is important to note that, given a particular set of feature vectors and prototypes , a continuum of matrices may exist which yield the same distances , cf. Eq. (3) and, hence, the same
classification scheme with respect to the training data. This ambiguity problem is particularly pronounced for inter-dependent or highly correlated features in high dimension. Resulting difficulties
concerning the interpretation of the relevance matrix in terms of feature relevances are discussed in depth in [28]. There, schemes for posterior regularization are suggested which provide unique, interpretable relevances and which we also applied here. Note that arbitrary vectors from the null-space, or kernel, of the data matrix can be added to the rows of a given transformation matrix without changing the GMLVQ cost function (4) and the actual classification of the training data. In [28] a column space projection, cf. Eq. (9), is suggested in order to remove these contributions; the corresponding projection is constructed from the eigenvectors of the relevance matrix with eigenvalue zero.
Zero eigenvalues of reflect the presence of linear dependent or strongly correlated features and the corresponding eigenvectors mark directions in input space in which training samples and prototypes
do not vary. In the data considered here, one clearly expects dependencies between related markers, the four versions of IgG1 being an obvious example. In addition, extracted features like the standard deviation and the interquartile range, or the mean and the median, should be strongly correlated.
For the following discussion we determined by means of a posterior column space projection (9) with retaining only the leading eigendirections of with eigenvalues . Thereafter, the matrix was
normalized again to satisfy and we computed the regularized .
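The posterior column-space projection of Eq. (9) might be sketched as follows; the eigenvalue cut-off tol and the trace normalization are assumptions:

```python
import numpy as np

def regularize_omega(Omega, tol=1e-10):
    """Posterior column-space projection, cf. Eq. (9) and [28]: remove from
    the rows of Omega all contributions from the null space of the relevance
    matrix, i.e. keep only eigendirections with eigenvalues above the
    (hypothetical) cut-off tol, then re-normalize the relevance matrix."""
    Lam = Omega.T @ Omega
    vals, vecs = np.linalg.eigh(Lam)
    keep = vals > tol
    P = vecs[:, keep] @ vecs[:, keep].T    # projector onto the retained eigenspace
    Omega_reg = Omega @ P
    Lam_reg = Omega_reg.T @ Omega_reg
    Lam_reg /= np.trace(Lam_reg)           # normalization so that the trace equals one
    return Omega_reg, Lam_reg
```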
It is remarkable that, in the given problem, this posterior regularization has very little influence on the test set classification. In particular, the ordering of test set scores obtained with the
regularized system is the same as the one presented in the previous section. This suggests that the correlations and dependencies observed in the training set are already representative for the
entire data. In [28] example problems are presented where the posterior regularization has a non-trivial effect also on test set performance.
Figure 8 displays the diagonal entries of for all 186 features. After regularization, the heuristic interpretation of as the relevance or significance of feature in the classification is justified
[28]. The figure displays the features in groups of six, corresponding to the 31 markers, cf. Table 1.
Figure 8. Relevance profile.
Diagonal relevances of the features. Vertical grid lines separate the groups of 6 quantities corresponding to each of the 31 markers, cf. Table 1. Marker numbers are given explicitly for 7 highly relevant markers.
A relatively small number of markers appears to contribute the most significant features: FS-Lin (1), SS-Log (2), CD15-FITC (7), CD117-PE (16), CD16-PC5 (21), CD34-PC5 (23), and CD10-PC7 (31). A more detailed discussion of the obtained relevances provides further, valuable information: For instance, the histogram shape as measured by skewness and kurtosis appears to be of minor importance with respect to
marker 16 (CD117-PE), while measures of the corresponding histogram width (std, iqr) seem to represent significant differences between AML patients and healthy donors. On the contrary, for CD10-PC7
(marker 31) skewness and kurtosis carry most discriminative power.
While several of the above mentioned markers have been discussed as relevant in the context of AML in the literature, see e.g. [7], [28], [29], [30], their expression characteristics can vary considerably with the actual AML variant. For instance, both HLA-DR-positive and HLA-DR-negative types of AML exist [31], and the same applies to several other markers.
Due to the limited size of the data set and because information about AML subtypes was not disclosed, one should not over-interpret the results presented above. It is very likely that our findings in
terms of relevances and prototypes are highly specific for the provided data set which seems to represent particular types of AML only. Nevertheless, our results demonstrate the interpretability of
the GMLVQ approach and illustrate how the method could be used for efficient biomarker selection in collaboration with domain experts.
Obviously, the outcome and interpretation of relevance parameters depend on the precise form of the distance measure, Eq. (3), or, more generally, on the parameterization of the classifier. For instance, systems with a diagonal relevance matrix could only take into account single features and would disregard the discriminative power of particular pairs of features. Accordingly, features which display
low relevance in our scheme might become significant in more complex classifiers. Nevertheless we believe that our method provides valuable insight into the discriminative power of features and pairs
of features. The following simple experiment further illustrates this claim: We ranked features according to the corresponding and restricted the obtained GMLVQ classifier to the use of only 18
features for classification. All other features were omitted when evaluating distances and scores, cf. Eqs. (3,5), no re-training of the system was performed. The restricted classifier was evaluated
in terms of its test set ROC. Close to perfect test set classification with an AUC was retained when using only the leading 18 features which all are derived from the above mentioned 7 markers. It is
interesting to note that also the following two subsets of 18 features, i.e. with relevance ranks 19–36 and 37–54, yielded excellent test set performance. Figure 9 shows how the resulting AUC
decreases for subsequent subsets of 18 features with decreasing relevance. Performance deteriorated when subsets of features with very low relevance were used, resulting in essentially random class
assignments with AUC.
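Restricting the already trained classifier to a feature subset without re-training, as in the experiment above, might look like this; the score form d1/(d1 + d2) is again an assumption:

```python
import numpy as np

def subset_scores(X, w1, w2, Omega, feature_idx):
    """Scores of the trained classifier restricted to a feature subset:
    all other features are simply omitted from the distances (no
    re-training), as in the ranking experiment of Figure 9.  The relative
    score d1 / (d1 + d2) is an assumed form of Eq. (5)."""
    Om = Omega[:, feature_idx]                       # keep only the selected columns
    D1 = (X[:, feature_idx] - w1[feature_idx]).T     # deviations from prototype 1
    D2 = (X[:, feature_idx] - w2[feature_idx]).T
    d1 = np.sum((Om @ D1) ** 2, axis=0)
    d2 = np.sum((Om @ D2) ** 2, axis=0)
    return d1 / (d1 + d2)
```

Evaluating the resulting scores on the test set for successive relevance-ranked subsets of 18 features yields the AUC curve of Figure 9.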
Figure 9. Test set performance (AUC of ROC) for the GMLVQ system restricted to subsequent subsets of 18 features which are ordered according to diagonal relevance .
The AUC deteriorates from values close to 1 for highly relevant features to AUC when using 18 features of low relevance.
A more reliable determination of discriminative markers, and even more so, the selection of a minimal set of features for correct classification would require systematic validation studies including
the re-training of the GMLVQ system on the respective feature sets. Due to the limitations of the data provided in the challenge we postponed this line of research to forthcoming studies.
More challenging data sets will have to be inspected to further demonstrate the usefulness of the approach in the analysis of flow cytometry data. This should, of course, include the systematic
comparison with other methods. A comparison of various classifiers in the context of the FlowCAP2/DREAM6 challenge can be found in [1].
The identification of leukemia subtypes in a larger study population requires the introduction of several prototypes representing the class of AML patients. The extension of GMLVQ in terms of
localized distance measures [9], [11] appears also promising in this context.
The reliable identification of feature relevances for marker selection should also be based on larger, more representative data sets. For a successful application of GMLVQ for bio-marker selection in
the context of tumor classification see [27]. The application of multi-class, potentially localized, GMLVQ will open new routes to the identification of discriminative markers in the differential
diagnosis of AML subtypes. In forthcoming studies, the consideration of histogram specific distance measures will also be studied along the lines of [15].
The analysis presented here was based on the entire cell population of a given subject. More general problems, including the above mentioned identification of AML subtypes, might require an analysis
on the level of individual cells. We intend to consider the development of prototype based automated gating procedures in forthcoming projects.
Available Software
The specific Matlab code used to generate our contribution to the DREAM6/FlowCAP2 Molecular Classification of Acute Myeloid Leukemia Challenge 2011 is publicly available at http://www.the-dream-project.org/story/code [3].
A Matlab toolbox for Relevance and Matrix adaptation in Learning Vector Quantization, including GMLVQ and important variants, is made available at http://matlabserver.cs.rug.nl/gmlvqweb/web/ [32].
We would like to thank all members and supporters of the DREAM project and the FlowCAP initiative for the organization of this highly interesting challenge. We furthermore thank Wade T. Rogers for
permission to use the data for this publication. We are indebted to Marc Strickert, Michael Seifert, Jens Keilwagen, and Jan Grau for drawing our attention to the challenge and for useful
discussions. M.B. thanks Carmen Wesemeier for providing a brief introduction to the clinical practice of flow cytometry.
Author Contributions
Development and implementation of algorithms: MB KB PS. Analyzed the data: MB KB PS. Wrote the paper: MB KB PS.
Upper Semicontinuous Property of Uniform Attractors for the 2D Nonautonomous Navier-Stokes Equations with Damping
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 861292, 11 pages
Research Article
College of Mathematics and Information Science, Henan Normal University, Xinxiang, Henan 453007, China
Received 25 May 2013; Revised 20 August 2013; Accepted 21 August 2013
Academic Editor: Ahmed El-Sayed
Copyright © 2013 Xin-Guang Yang and Jun-Tao Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Our aim is to investigate the long-time behavior in terms of upper semicontinuous property of uniform attractors for the 2D nonautonomous Navier-Stokes equations with linear damping and nonautonomous
perturbation external force, that is, the convergence of corresponding attractors when the perturbation tends to zero.
1. Introduction
In the present paper, we investigate the long-time behavior of uniform attractors for the nonautonomous 2D Navier-Stokes equations with damping and singular external force that governs the motion of
incompressible fluid where is a bounded domain with smooth boundary , is the kinematic viscosity of the fluid, is the velocity vector field which is unknown, is the pressure, is positive constant, ,
and is a small positive parameter.
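The display equations (1)–(4) did not survive extraction. A standard formulation of the 2D Navier-Stokes system with linear damping that is consistent with the surrounding description reads as follows; the splitting of the external force into an averaged part and a perturbation scaled by the small parameter is an assumption, suggested by the averaged case in (5)–(8):

```latex
\begin{align}
  &\partial_t u - \nu \Delta u + (u \cdot \nabla)\, u + \alpha u + \nabla p
      = g_0(x,t) + \varepsilon\, g_1(x,t),
      && x \in \Omega,\; t > \tau, \\
  &\nabla \cdot u = 0, && x \in \Omega,\; t > \tau, \\
  &u = 0, && x \in \partial\Omega,\; t > \tau, \\
  &u(x,\tau) = u_\tau(x), && x \in \Omega,
\end{align}
```

Here the term with the positive damping coefficient is the linear damping, and setting the small parameter to zero formally recovers the averaged system (5)–(8).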
Along with (1)–(4), we consider the averaged Navier-Stokes equation with damping formally corresponding to the case .
The function represents the external forces of problem (1)–(4) for and problem (5)–(8) for , respectively.
The functions and are taken from the space of translational bounded functions in , namely, for some constants .
We denote and note that is of the order as .
As a straightforward consequence of (9), we have
When in (5)–(8), the system reduces to the well-known 2D incompressible Navier-Stokes equation:
Since the last century, the global well-posedness and large-time behavior of solutions to the Navier-Stokes equations have attracted many mathematicians to study. For the well posedness of 3D
incompressible Navier-Stokes equations, in 1934, Leray [1, 2] derived the existence of weak solution by weak convergence method; Hopf [3] improved Leray's result and obtained the familiar Leray-Hopf
weak solution in 1951. Since the Navier-Stokes equations lack an appropriate a priori estimate and are strongly nonlinear, the existence of strong solutions remains open. For the
infinite-dimensional dynamical systems, Sell [4] constructed the semiflow generated by the weak solution which lacks the global regularity and obtained the existence of global attractor of the
incompressible Navier-Stokes equations on any bounded smooth domain; Cheskidov and Foias [5] introduced a weak global attractor with respect to the weak topology of the natural phase space for 3D
Navier-Stokes equation with periodic boundary; Flandoli and Schmalfuß [6] deduced the existence of weak solutions and attractors for 3D Navier-Stokes equations with nonregular force; Kloeden and
Valero [7] investigated the weak connection of the attainability set of weak solutions of 3D Navier-Stokes equations; Cutland [8] obtained the existence of global solutions for the 3D Navier-Stokes
equations with small samples and germs; Chepyzhov and Vishik [9–11] investigated the trajectory attractors for 3D nonautonomous incompressible Navier-Stokes system which is based on the works of
Leray and Hopf. Using the weak convergence topology of the space (see below for the definition), Kapustyan and Valero [12] proved the existence of a weak attractor in both autonomous and
nonautonomous cases and gave an existence result of strong attractors. Kapustyan et al. [13] considered a revised 3D incompressible Navier-Stokes equations generated by an optimal control problem and
proved the existence of pullback attractors by constructing a dynamical multivalued process. For more results of the well-posedness and long-time behavior of the 2D autonomous incompressible
Navier-Stokes equations, such as the existence of global solutions, the existence of global attractors, Hausdorff dimension, and inertial manifold approximation, we can refer to Ladyzhenskaya [14],
Robinson [15], Sell and You [16], and Temam [17, 18]. Moreover, Caraballo and Real [19] derived the existence of global attractor for 2D autonomous incompressible Navier-Stokes equation with delays;
Chepyzhov and Vishik [20, 21] investigated the long-time behavior and convergence of corresponding uniform (global) attractors for the 2D Navier-Stokes equation with singularly oscillating forces as
the external force tend to be steady state by virtue of linearization method and estimate the corresponding difference equations; Foias and Temam [22, 23] gave a survey about the geometric properties
of solutions and the connection between solutions, dynamical systems, and turbulence for Navier-Stokes equations, such as the existence of -limit sets; Rosa [24] and Hou and Li [25] obtained the
existence of global (uniform) attractors for the 2D autonomous (nonautonomous) incompressible Navier-Stokes equations in some unbounded domain, respectively; Lu et al. [26] and Lu [27] proved the
existence of uniform attractors for 2D nonautonomous incompressible Navier-Stokes equations with normal or less regular normal external force by establishing a new dynamical systems framework;
Miranville and Wang [28] derived the attractors for nonautonomous nonhomogeneous Navier-Stokes equations.
However, the infinite-dimensional dynamical systems for the 3D incompressible Navier-Stokes equations have not yet been completely resolved, and many mathematicians have turned to this challenging problem. In this regard, some attention has been paid to the Navier-Stokes equations with damping. Let us recall some known results for the 3D incompressible Navier-Stokes equations with damping. For
the 3D autonomous Navier-Stokes equation with damping, the authors of [29] showed that the initial boundary value problem of a 3D Navier-Stokes equation with damping has a unique weak solution and
Song and Hou [30] derived the global attractors for the same autonomous system. Kalantarov and Titi [31] investigated the Navier-Stokes-Voight equations as an inviscid regularization of the 3D
incompressible Navier-Stokes equations, and further obtained the existence of global attractors for Navier-Stokes-Voight equations. Recently, Qin et al. [32] showed the existence of uniform
attractors by the uniform condition-(C) and a weak-continuity method to obtain uniform asymptotic compactness in and . However, there are few results on the upper and lower semicontinuity of attractors for the nonautonomous system in the perturbed case. In this paper, we will show the long-time behavior in terms of the upper semicontinuity of uniform attractors for problem (1)–(4), that is, the convergence of the corresponding attractors as the perturbation tends to zero.
This paper is organized as follows: in Section 2, we give some preliminaries on uniform attractors; in Section 3, the uniform boundedness of the uniform attractors of the 2D Navier-Stokes equations with damping for is obtained; the main result is stated in the last section.
2. Some Preliminaries of Uniform Attractors
The Hausdorff semidistance in from one set to another set is defined as () is the generic Lebesgue space and is the usual Sobolev space. We set , is the closure of the set in topology with norm or ,
is the closure of the set in topology, and is the closure of the set in topology.
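The displayed definition appears to have been lost in extraction. For reference, the standard Hausdorff semidistance from a set A to a set B in a normed space X (a reconstruction matching the standard usage in attractor theory) is:

```latex
\operatorname{dist}_X(A, B) \;=\; \sup_{a \in A} \, \inf_{b \in B} \, \| a - b \|_X
```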
The family of functions denotes a local Bochner integration function class, and denotes all translation bounded functions, which satisfy for all ; that is, is translation bounded in . is a translation compact function in . Obviously, .
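The translation-boundedness condition lost here is, in standard Chepyzhov-Vishik notation, the following; this is a reconstruction, with X standing for whichever Hilbert space the paper uses:

```latex
\| g \|_{L^2_b}^2 \;=\; \sup_{t \in \mathbb{R}} \int_t^{t+1} \| g(s) \|_X^2 \, ds \;<\; \infty
```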
Operator is the Helmholtz-Leray orthogonal projection in onto the space , is the Stokes operator subject to the nonslip homogeneous Dirichlet boundary condition with the domain , is a self-adjoint positive definite operator on with domain , and is the first eigenvalue of the Stokes operator ; we define the Hilbert space as with its inner product and norm topology as .
The problems (1)–(4) and (5)–(8) can be written in a generalized abstract form where the pressure has disappeared by the application of the Leray-Helmholtz projection , and is the bilinear
operator. The bilinear form can be extended as a continuous trilinear operator and satisfies
Firstly, we give some lemmas, which can be found in [20], and then derive some new results to prove the uniform boundedness of the corresponding attractors in Section 3.
Lemma 1. For each , every nonnegative locally summable function on and every , one has for all .
Proof. See, for example, Chepyzhov et al. [20].
Lemma 2. Let satisfy, for almost every , the differential inequality , where, for every , the scalar functions and satisfy for some , , and . Then
Proof. See, for example, Chepyzhov et al. [20].
The existence of global solution and uniform attractor for (17)–(20) can be derived by similar methods as [33].
Theorem 3. (1) Assume , ; then problem (17)–(20) possesses a unique global weak solution which satisfies Moreover, one chooses an arbitrary nonautonomous external force and fixed, the global solution
generates a process (, , ) which is continuous with respect to , where is a symbol which belongs to the symbol space , and means the closure in the topology .
(2) Assume that , ; then the family of processes , () generated by the global weak solution of problem (17)–(20) possesses a uniform (with respect to ) attractor in .
Theorem 4. Assume that , the functions and are taken from the space of translation bounded functions in , and (10)–(13) hold; then the family of processes generated by the global solution of problem (1)–(4) possesses uniform (with respect to ) attractors for any fixed in .
Proof. By an argument similar to that in [33], we choose as in [33]; since and are translation bounded in , for any fixed we can deduce that is translation bounded in and obtain the existence of uniformly compact attractors for any fixed .
Theorem 5. If the function is taken from the space of translation bounded functions in , then the processes generated by system (5)–(8) have a uniformly (with respect to ) compact attractor in .
Proof. By a technique similar to that in [33], we can easily deduce the existence of a uniformly compact attractor if we choose , since is translation bounded in .
The structure of the uniform attractor will be discussed as follows: since the functions and are translation bounded and satisfy (10)–(13), the global solution of problem (1)–(4) generates the family
of processes acting on by the formula , , where is a solution to (1)–(4).
Similar to the procedure in [33] and by Theorem 4, the processes class has a uniformly (with respect to ) absorbing set which is bounded in for any fixed , which means that for any bounded set ,
there exists a time such that
Hence, are also uniformly absorbing with respect to as or which belongs to , is the integer part of .
The processes have a uniform global attractor as uniform -set where denotes the closure in and is an arbitrarily uniformly bounded absorbing set of the processes ; here, we can set .
On the other hand, for each fixed , is also bounded in , since (). Assuming , then . Besides, if and , then for some and .
Next, we consider the equation class as follows to describe the structure of the uniform attractor
For every external force , by the well-posedness of the abstract equation (17), we can derive that (38) generates a family of processes on , which shares similar properties with , corresponding to the original equation (1) with external force . Moreover, from Theorem 3 we know the map is -continuous.
Definition 6. The kernel of (17) is the family of all complete orbits which are uniformly bounded in . The set is called the kernel section of at time . For every , the following representation
(complete orbit) of uniform attractors of (1) holds:
Definition 7. The structure of uniform attractors for problem (5)–(8) can be described as the uniform -set or kernel section:
3. Uniform Boundedness of in
Firstly, we consider the auxiliary linear equation with nonautonomous external force and give some useful estimates and then prove the uniform boundedness of in .
Considering the linear equation we obtain the following lemmas.
Lemma 8. Assume ; then problem (43) has a unique solution Moreover, the following inequalities hold for every and some constant , independent of the initial time .
Proof. Firstly, similar to the discussion in [32] or [34], by the Galerkin approximation method we can obtain the existence of a global solution; here we omit the details.
Then, multiplying (43) by , , and , respectively, and using the Poincaré inequality, we get . Applying the Gronwall inequality to (51) and (53), and integrating over for (50), (52), and (53), we can easily complete the proof.
Setting , , , we have the following lemma.
Lemma 9. Let . Assume that holds for some constant . Then the solution to the following Cauchy problem with satisfies the inequality where constant is independent of .
Proof. Noting that and then using (54) and (57), we can deduce the following estimates of as
From Lemmas 2 and 8, we have
Similarly, we derive that
Hence, using the Poincaré inequality, by (45)–(47) and (58)–(60), we derive
Next, we set which implies that for any , since in (55).
Integrating (55) with respect to time from to , we see that is a solution to the problem such that we can deduce that from (62) to (63).
Using (46) and (61), we conclude
Noting that , , and , using (58), (68), and (69), we derive that
Hence, by (51), (58), and (62), we conclude
Combining (70) and (71), the proof for the lemma is finished.
Now, we will use the auxiliary linear equation and some estimates to prove the uniform boundedness of in . For convenience, we set and assume for some constants since and are translation bounded in .
Theorem 10. The attractors of problem (1)–(4) with (or (5)–(8) with ) are uniformly (with respect to ) bounded in , namely,
Proof. Let be the solution to (1)–(4) with the initial data as . For , we consider the auxiliary linear equation
By Lemma 9, we have the estimate
Multiplying (75) with and integrating over , using the boundary value condition, we derive that
By the Gronwall inequality and similarly to (60), noting that as tends to infinity, we can set such that , since .
Setting the function as , which satisfies the problem , where is a solution of problem (1)–(4), is a solution to (75), and is the bilinear operator defined in Section 2.
Taking the scalar product of (80) with in , we obtain . Here we use the properties (21)-(22) of the trilinear operator; we observe that
so that
Inserting (82)–(84) into (81) and then using the inequality and (78), we have which implies that where
Therefore, from Theorem 3, we derive from (88) that for any ,
Applying Lemma 2 with , , , , we get
Recalling that and using (85) and (90), we end up with for all .
Thus, for every , the processes have an absorbing set
On the other hand, if , the processes also possess an absorbing set
In conclusion, for every , the set is an absorbing set for the processes which is independent of . Since , (74) follows and hence the proof is finished.
4. Convergence of to
Next, we will study the difference of two solutions for (1) with and (4) with , which share the same initial data. Denote with belonging to the absorbing set which can be found in Section 3. In
particular, for , since , we obtain for some , as the size of depends on .
Lemma 11. For every , , and , the difference where satisfies the estimate for some positive constants and , both independent of .
Proof. Since the difference solves the difference fulfills the Cauchy problem where is the solution to (75).
Taking inner product in of (101) with , we obtain
Noting we derive
Next, we estimate each term on the right-hand side of (104).
Applying (22) to (27), we find
Hence, from (105) to (107), we obtain where and satisfy (76) and (96), respectively, and
Thus, it follows from (102) and (104) that
Noting that , by the Gronwall inequality, we get
Consequently, holds for some positive constants and .
Finally, since , using (76) to control , we may obtain where is a positive constant.
Next, we want to generalize Lemma 11 to derive the convergence of corresponding uniform attractors. Let the external force in (38) be , then satisfies inequality (73).
Define and we have
For any , we observe that is a solution to (38) with external force and . For , we investigate the property of the difference
Lemma 12. The inequality holds; here and are defined as in Lemma 11.
Proof. By a discussion similar to the proof of Lemma 11, replacing , , and by , , and , respectively, noting that (96) still holds for and that the family , (), is -continuous, and using (116) in place of (73), we can complete the proof of the lemma.
The main result of this paper reads as follows.
Theorem 13. Let , and let (73) hold. Then the uniform attractor for problem (1)–(4) converges to of problem (5)–(8) in the limit in the following sense:
Proof. For , , from (110)-(111), we obtain that there exists a complete bounded trajectory of (38), with some external force such that .
We choose such that
From the equality and applying Lemma 12 with , , we obtain
On the other hand, the set attracts all sets uniformly when . Then, for all , there exists some time , which is independent of , such that
Choosing and using (123)-(124), we readily get
Since and is arbitrary, taking the limit , we can prove the theorem.
Xin-Guang Yang was in part supported by the Young Teacher Research Fund of Henan Normal University (qd12104) and the Innovational Scientists and Technicians Troop Construction Projects of Henan
Province (no. 114200510011). Jun-Tao Li was in part supported by the Natural Science Foundation of China (61203293), Foundation of Henan Educational Committee (2011B120005), Key Scientific and
Technological Project of Henan Province (122102210131), and College Young Teachers Program of Henan Province (2012GGJS-063).
1. J. Leray, “Essai sur les mouvements plans d’un liquide visqueux que limitent des parois,” Journal de Mathématiques Pures et Appliquées, vol. 13, pp. 331–418, 1934.
2. J. Leray, “Sur le mouvement d'un liquide visqueux emplissant l'espace,” Acta Mathematica, vol. 63, no. 1, pp. 193–248, 1934.
3. E. Hopf, “Über die Anfangswertaufgabe für die hydrodynamischen Grundgleichungen,” Mathematische Nachrichten, vol. 4, pp. 213–231, 1951.
4. G. R. Sell, “Global attractors for the three-dimensional Navier-Stokes equations,” Journal of Dynamics and Differential Equations, vol. 8, no. 1, pp. 1–33, 1996.
5. A. Cheskidov and C. Foias, “On global attractors of the 3D Navier-Stokes equations,” Journal of Differential Equations, vol. 231, no. 2, pp. 714–754, 2006.
6. F. Flandoli and B. Schmalfuß, “Weak solutions and attractors for three-dimensional Navier-Stokes equations with nonregular force,” Journal of Dynamics and Differential Equations, vol. 11, no. 2, pp. 355–398, 1999.
7. P. E. Kloeden and J. Valero, “The weak connectedness of the attainability set of weak solutions of the three-dimensional Navier-Stokes equations,” Proceedings of The Royal Society of London A, vol. 463, no. 2082, pp. 1491–1508, 2007.
8. N. J. Cutland, “Global attractors for small samples and germs of 3D Navier-Stokes equations,” Nonlinear Analysis: Theory, Methods & Applications, vol. 62, no. 2, pp. 265–281, 2005.
9. V. V. Chepyzhov and M. I. Vishik, “Evolution equations and their trajectory attractors,” Journal de Mathématiques Pures et Appliquées, vol. 76, no. 10, pp. 913–964, 1997.
10. V. V. Chepyzhov and M. I. Vishik, Attractors for Equations of Mathematical Physics, American Mathematical Society, Providence, RI, USA, 2002.
11. M. I. Vishik and V. V. Chepyzhov, “Trajectory and global attractors of the three-dimensional Navier-Stokes system,” Mathematical Notes, vol. 71, no. 2, pp. 177–193, 2002.
12. A. V. Kapustyan and J. Valero, “Weak and strong attractors for the 3D Navier-Stokes system,” Journal of Differential Equations, vol. 240, no. 2, pp. 249–278, 2007.
13. O. V. Kapustyan, P. O. Kasyanov, and J. Valero, “Pullback attractors for a class of extremal solutions of the 3D Navier-Stokes system,” Journal of Mathematical Analysis and Applications, vol. 373, no. 2, pp. 535–547, 2011.
14. O. A. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, Gordon and Breach Science Publishers, New York, NY, USA, 1969.
15. J. C. Robinson, Infinite-Dimensional Dynamical Systems, Cambridge University Press, Cambridge, UK, 2001.
16. G. R. Sell and Y. You, Dynamics of Evolutionary Equations, Springer, New York, NY, USA, 2002.
17. R. Temam, Navier-Stokes Equations, Theory and Numerical Analysis, North-Holland, Amsterdam, The Netherlands, 1979.
18. R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer, Berlin, Germany, 2nd edition, 1997.
19. T. Caraballo and J. Real, “Attractors for 2D-Navier-Stokes models with delays,” Journal of Differential Equations, vol. 205, no. 2, pp. 271–297, 2004.
20. V. V. Chepyzhov, M. I. Vishik, and V. Pata, “Averaging of 2D Navier-Stokes equations with singularly oscillating forces,” Nonlinearity, vol. 22, no. 2, pp. 351–370, 2009.
21. M. I. Vishik and V. V. Chepyzhov, “The global attractor of a nonautonomous two-dimensional Navier-Stokes system with a singularly oscillating external force,” Doklady Mathematics, vol. 413, no. 3, pp. 236–239, 2007.
22. C. Foias and R. Temam, “The connection between the Navier-Stokes equations, dynamical systems, and turbulence theory,” in Directions in Partial Differential Equations, pp. 55–73, Academic Press, New York, NY, USA, 1987.
23. C. Foias and R. Temam, “Some analytic and geometric properties of the solutions of the evolution Navier-Stokes equations,” Journal de Mathématiques Pures et Appliquées, vol. 58, no. 3, pp. 339–368, 1979.
24. R. Rosa, “The global attractor for the 2D Navier-Stokes flow on some unbounded domains,” Nonlinear Analysis: Theory, Methods & Applications, vol. 32, no. 1, pp. 71–85, 1998.
25. Y. Hou and K. Li, “The uniform attractor for the 2D non-autonomous Navier-Stokes flow in some unbounded domain,” Nonlinear Analysis: Theory, Methods & Applications, vol. 58, no. 5-6, pp. 609–630, 2004.
26. S. Lu, H. Wu, and C. Zhong, “Attractors for nonautonomous 2D Navier-Stokes equations with normal external forces,” Discrete and Continuous Dynamical Systems A, vol. 13, no. 3, pp. 701–719, 2005.
27. S. Lu, “Attractors for nonautonomous 2D Navier-Stokes equations with less regular normal forces,” Journal of Differential Equations, vol. 230, no. 1, pp. 196–212, 2006.
28. A. Miranville and X. Wang, “Attractors for nonautonomous nonhomogeneous Navier-Stokes equations,” Nonlinearity, vol. 10, no. 5, pp. 1047–1061, 1997.
29. X. Cai and Q. Jiu, “Weak and strong solutions for the incompressible Navier-Stokes equations with damping,” Journal of Mathematical Analysis and Applications, vol. 343, no. 2, pp. 799–809, 2008.
30. X.-L. Song and Y.-R. Hou, “Attractors for the three-dimensional incompressible Navier-Stokes equations with damping,” Discrete and Continuous Dynamical Systems A, vol. 31, no. 1, pp. 239–252, 2011.
31. V. K. Kalantarov and E. S. Titi, “Global attractors and determining modes for the 3D Navier-Stokes-Voight equations,” Chinese Annals of Mathematics B, vol. 30, no. 6, pp. 697–714, 2009.
32. Y. Qin, X. Yang, and X. Liu, “Uniform attractors for a 3D non-autonomous Navier-Stokes-Voight equations,” in press.
33. X. Yang, “Uniform attractors for the 2D non-autonomous Navier-Stokes equations with damping,” Acta Mathematica Scientia, in press.
34. Y. Qin, X. Yang, and X. Liu, “Averaging of a 3D Navier-Stokes-Voight equation with singularly oscillating forces,” Nonlinear Analysis: Real World Applications, vol. 13, no. 2, pp. 893–904, 2012.
Formula class
Join Date
Aug 2010
I want to make a class with the following (pseudo)-API:
__________________________________________________ _______________
class: Formula
public Formula(String formulaInStringForm)
This constructor takes in a String representation of a formula. The formula may contain the following operators:
Furthermore, the formula may contain the variable X. X may appear multiple times in the formula, but the variable must always be named X. The String is read from left to right, and the operators are applied in the order of the 4 groups given above. When the String does not represent a legal formula, the behaviour of the constructor is left undefined.
public double getResult(double X)
This method returns f(X), where f is the function given to the constructor.
__________________________________________________ _______________
This class would be very useful in many applications, because it gives the user of the program not only the possibility to specify values, but also to specify an entire formula. The question is, how do you write such a class?
My first question is, is there a class in the standard library that already does this?
If not, my second question is are there convenient ways to implement such a class?
Implementing reading the bracket operator isn't difficult using recursion. In pseudo-code:
when(there is another bracket left in the String)
find the matching closing bracket
new Formula(formulaInStringForm.subString(position first bracket+1, position last bracket);
However, implementing the other operators isn't so easy. My thoughts were to make objects that execute these operations (an object that adds 2 numbers, for example). All those objects could be put in order into some kind of tree structure. When an X input is given, it is sent into each of the leaves. The results of 2 leaves are sent to the node above them, resulting in 1 result. So the whole tree is traversed, with the end result popping out at the top.
Am I on the good track?
Join Date
Feb 2011
Whew. This isn't a complete solution, but you should wind up with a recursive method like this:
Java Code:
public double parse(String formula) {
    formula = formula.trim();
    if (formula.startsWith("(") && formula.endsWith(")")) {
        // strip off the parentheses
        return parse(formula.substring(1, formula.length() - 1));
    } else if (formula.contains("*")) {
        // handle multiplication by splitting formula on the "*" character
        // (split takes a regex, so the '*' must be escaped)
        String[] operands = formula.split("\\*");
        double product = 1.0;
        for (String op : operands) {
            product *= parse(op);
        }
        return product;
    } else if (formula.contains("+")) {
        // etc. -- handle the remaining operator groups the same way
        throw new UnsupportedOperationException("not implemented yet");
    } else {
        return Double.parseDouble(formula);
    }
}
Join Date
Aug 2010
I think you misunderstood me. The class should take the formula only once as a String and then be able to fill in many X values, one after another, through a method. So not:
public double Formula (String formula)
but:
public Formula(String formula)
public double fillIntoFormula(double X)
Also, this isn't about getting the exact implementation; I just want to come up with a concept that is workable without missing more convenient solutions. The idea of a tree consisting of operator objects could work, I think, but would be complex. I want to know if anyone else has a better/easier solution.
Join Date
Feb 2011
Ah, right. I forgot about the x business. Put parse() in your Formula class. Assign the formula string to an instance variable. Then implement fillIntoFormula() to call parse() recursively,
but change the final else of the big if/else statement to return the value of X.
Trust me, the recursive method is the way to go, not a whole tree of operator objects, as that'd be complete overkill. Besides, if you trace out the recursive calls, you'll see that they'll
form a tree.
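Putting the thread's advice together, here is a minimal compilable sketch of the Formula class. Only parentheses and the "*" operator are handled, mirroring the earlier snippet; the treatment of X and everything beyond the pseudo-API names is an assumption for illustration.

```java
// Sketch: store the formula String once, then evaluate it for many X values.
// Only "(", ")" and "*" are handled; the remaining operator groups from the
// pseudo-API would be added as further else-if branches in parse().
class Formula {
    private final String formula;

    public Formula(String formula) {
        this.formula = formula;
    }

    // Returns f(X): re-parses the stored String with X substituted at the leaves.
    public double fillIntoFormula(double x) {
        return parse(formula, x);
    }

    private double parse(String f, double x) {
        f = f.trim();
        if (f.startsWith("(") && f.endsWith(")")) {
            // strip off the parentheses
            return parse(f.substring(1, f.length() - 1), x);
        } else if (f.contains("*")) {
            // split takes a regex, so the '*' must be escaped
            String[] operands = f.split("\\*");
            double product = 1.0;
            for (String op : operands) {
                product *= parse(op, x);
            }
            return product;
        } else if (f.equalsIgnoreCase("X")) {
            // an X leaf evaluates to the supplied value
            return x;
        } else {
            return Double.parseDouble(f);
        }
    }
}
```

Note that the recursive calls trace out exactly the operator tree described earlier, without building it as objects; a production parser would also need to handle operator precedence and parentheses that do not wrap the whole expression.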
the encyclopedic entry of common rafter
"Carpenter's square" redirects here. For the plant, see Scrophularia marilandica.
The steel square is a tool that carpenters and other tradesmen use. Today the steel square is more commonly referred to as the framing square. It consists of a large arm and a smaller one, which meet
at an angle of 90 degrees (a right angle). It can also be made of metals like aluminum, which is light and resistant to rust. The wider arm, two inches wide, is called the blade; the narrower arm,
one and a half inches wide, the tongue. The square has many uses, including laying out common rafters, hip rafters and stairs. It has a diagonal scale, board foot scale and an octagonal scale. On the
newer framing squares there are degree conversions for different pitches and fractional equivalents.
Carpenter's squares are very similar to steel squares.
Use in stair framing
Stairs usually consist of three components. They are the stringer, the tread and the riser. The stringer is the structural member that carries the load of the staircase, the tread is the horizontal
part that is stepped on, and the riser board is the vertical part which runs the width of the structure. There are many types of stairs: open, closed, fully housed, and winding, to mention a few.
Laying out a staircase requires rudimentary math. There are numerous building codes to which staircases must conform. In an open area the designer can incorporate a more desirable staircase. In a
confined area this becomes more challenging. In most staircases there is one more rise than there are treads.
1. The rise (vertical measurement), and the run (horizontal measurement). Note that the stringer will rest partially on the horizontal surface.
2. This is a two-by-twelve piece of lumber. A framing square is placed on the lumber so that the desired rise and tread marks meet the edge of the board. The outline of the square is traced. The
square is slid up the board until the tread is placed on the mark and the process is repeated.
3. The board is cut along the dotted lines, and the top plumb cut and the bottom level cut are traced by holding the square on the opposite side.
4. The stringer in this example has two pieces of tread stock. This allows for a slight overhang. There is also a space in between the boards. The bottom of the stringer must be cut to the thickness
of the tread. This step is called dropping the stringer. After one stringer is cut this piece becomes the pattern that is traced onto the remaining stringers.
Use in roof framing
There is a table of numbers on the face side of the steel square; this is called the rafter table. The rafter table allows the carpenter to make quick calculations based on the Pythagorean theorem. The table is organized by columns that correspond to various pitches of the roof. Each column describes a different roof inclination (pitch) and contains the following information.
1. Common rafter per foot of run. The common rafter connects the peak of a roof (the ridge) to the base of a roof (the plate). This number gives the length (hypotenuse) of the common rafter per twelve units of horizontal distance (run).
2. Hip or valley rafter per foot of run. The hip or valley rafter also connects the ridge to the plate, but lies at a 45-degree angle to the common rafter. This number gives the length of the hip or valley rafter per seventeen units of run.
3. Difference in lengths of jack rafters. The jack rafters lie in the same plane as the common rafter but connect the top plate (the wall) or ridge board to the hip or valley rafter, respectively. Since the hip or valley rafter meets the ridge board and the common rafter at angles of 45 degrees, the jack rafters have varying lengths where they intersect the hip or valley. Depending on the spacing of the rafters, their lengths vary by a constant factor; this number is the common difference.
4. This angle can be cut on the fly by aligning this given number on the blade of the steel square and the twelve-inch mark on the tongue, and drawing a line along the tongue.
5. Hip, valley, and cripple rafters are all cut in a similar way.
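As an illustration (not part of the original entry), the rafter-table entries for a given pitch follow directly from the Pythagorean theorem; the seventeen-unit figure for hips and valleys approximates the diagonal run 12 times sqrt(2), about 16.97.

```java
// Reproduce rafter-table entries from the Pythagorean theorem.
// "rise" is the roof's rise in inches per 12 inches (one foot) of run.
class RafterTable {
    // Length of common rafter per foot of run: hypotenuse of (12, rise).
    static double commonPerFootOfRun(double rise) {
        return Math.sqrt(12.0 * 12.0 + rise * rise);
    }

    // Hip/valley rafters run diagonally across the plan at 45 degrees,
    // so they see 17 units (about 12 * sqrt(2)) of run per foot of common run.
    static double hipValleyPerFootOfRun(double rise) {
        return Math.sqrt(17.0 * 17.0 + rise * rise);
    }
}
```

For a 5-in-12 roof this gives 13 inches of common rafter per foot of run, the familiar 5-12-13 right triangle.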
Use of the octagon scale
Use of the Diagonal scale
A carpenter's square, or set square, is a guide for establishing right angles (ninety-degree angles), usually made of metal and in the shape of a right triangle.
math combination sample
math combination sample Related topics: "algebra 2" calculator
6th grade multiplication online class
math answers for ap prob sol,act
fractions and ratios
ti 83 + solve for variables
solving equations
algebra 2 with trigonometry prentice hall question | test | problems
Author Message
Jrimnenn Posted: Wednesday 19th of Aug 08:55
Hey dudes, I have just completed one week of my college, and am getting a bit worried about my math combination sample homework. I just don’t seem to grasp the topics. How can one expect me to do my homework then? Please guide me.
IlbendF Posted: Thursday 20th of Aug 07:20
I think I know what you are looking for. Check out Algebrator. This is an excellent product that helps you get your homework done faster and right. It can assist you with problems in math combination sample, hypotenuse-leg similarity and more.
Majnatto Posted: Friday 21st of Aug 09:54
I totally agree, Algebrator is great! I am really good in algebra now, and I have the highest grades in the class! It helped me even with the most challenging math problems, like those on converting fractions or the quadratic formula. I really think you should try it.
From: Ontario
Outafnymintjo Posted: Saturday 22nd of Aug 08:24
linear inequalities, side-angle-side similarity and inequalities were a nightmare for me until I found Algebrator, which is really the best math program that I have come across. I have used it through many algebra classes, including Algebra 1 and Basic Math. Just typing in the math problem and clicking on Solve, Algebrator generates a step-by-step solution to the problem, and my math homework would be ready. I highly recommend the program.
vlidol Posted: Monday 24th of Aug 07:09
I hope this software has an easy navigation. Can I have a look at it?
Mov Posted: Wednesday 26th of Aug 11:54
You can download this software from http://www.softmath.com/links-to-algebra.html. There are some demos available to see if it is really what you are looking for and if you find it
good, you can get a licensed version for a nominal amount.
• It's the only way to access our downloadable files;
• You can use our search box tool;
• Registered users see fewer Adverts;
• You will receive our 'irregular' newsletters;
• It's free.
Unless specified otherwise in the individual descriptions, MathSticks resources are licensed under a Creative Commons Licence.
You are free to use, share, copy, distribute and transmit the work, provided that you give mathsticks.com credit for the work and logos remain intact. You may not alter, transform, or build upon the
work, nor may you use it in any form for commercial purposes.
Falling Object Model
If h is the height measured in feet, t is the number of seconds the object has fallen and s is the initial speed (in ft/sec), then the model for height of a falling object is
h = –16t^2 + st.
The “–16t^2” term comes from the acceleration due to gravity. If the value of h is in meters and s in meters/sec, the equation becomes
h = –5t^2 + st.
A ball is thrown vertically upward with an initial speed of 80 ft/sec. How high will the ball be after 3 seconds?
t = 3 and s = 80 ft/sec
So, h = –16(3)^2 + 80(3)
= –144 + 240
= 96 feet
These equations are simplified. They ignore air resistance and the gravitational constant is approximate. Also, this model only works for the surface of the Earth (at sea level). The model on other
planets will be different because their gravity is different. For example, on the surface of the moon, with h in meters and s in m/sec, the falling object model is h = –0.8t^2 + st.
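The model and the worked example above can be checked with a few lines of code (a sketch; the function names are ours, not from the page):

```python
def height_ft(t, s):
    """Height in feet after t seconds for an object thrown upward with
    initial speed s (ft/sec), using the simplified model h = -16t^2 + st."""
    return -16 * t**2 + s * t

def height_m(t, s):
    """Metric version of the same model: h = -5t^2 + st (h in meters, s in m/sec)."""
    return -5 * t**2 + s * t

# The worked example from the text: a ball thrown up at 80 ft/sec, after 3 seconds.
print(height_ft(3, 80))  # -16*9 + 80*3 = -144 + 240 = 96 feet
```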
Tropical Fish Keeping - Aquarium fish care and resources - Test.
Is there a Fourth of July in England?
How many birthdays does the average man have?
Some months have 31 days; how many have 28?
How many outs are there in an inning?
Is it legal for a man in California to marry his widow's sister?
Divide 30 by 1/2 and add 10. What is the answer?
If there are 3 apples and you take away 2, how many do you have?
A doctor gives you three pills telling you to take one every half hour.
How many minutes would the pills last?
A farmer has 17 sheep, and all but 9 die. How many are left?
How many animals of each sex did Moses take on the ark?
How many two cent stamps are there in a dozen?
Falina 10-25-2007 11:26 AM
Re: Test.
Is there a Fourth of July in England?
Yes but it is not celebrated.
How many birthdays does the average man have?
1 - the same one each year.
Some months have 31 days; how many have 28?
All of them.
How many outs are there in an inning?
Is it legal for a man in California to marry his widow's sister?
No, for he is dead.
Divide 30 by 1/2 and add 10. What is the answer?
If there are 3 apples and you take away 2, how many do you have?
A doctor gives you three pills telling you to take one every half hour.
How many minutes would the pills last?
A farmer has 17 sheep, and all but 9 die. How many are left?
9 live sheep.
How many animals of each sex did Moses take on the ark?
How many two cent stamps are there in a dozen?
I wasn't sure on the inning one. I think it's something to do with baseball but not sure. How did I do?
Falina 10-25-2007 12:15 PM
Originally Posted by Daz
Spoil sport :twisted:
VICTORY IS MINE! :twisted:
JouteiMike 10-25-2007 07:13 PM
Well that was....fun.
fish_4_all 10-25-2007 07:31 PM
6 outs per inning, 3 per side.
Falina 10-26-2007 04:05 AM
Originally Posted by fish_4_all
6 outs per inning, 3 per side.
I think it depends whether we are talking about baseball or cricket. However, since I loathe both, I'm in no position to argue.
Maths Tricks
Let us learn about algebra booleana
Algebra booleana is a logical calculus of truth values, developed by George Boole in the 1840s. Algebra booleana resembles the algebra of real numbers, but with the numeric operations of addition (+), multiplication (×) and negation (−) replaced by the respective logical operations of conjunction (∧), disjunction (∨) and complement (¬).
The laws of Boolean algebra can be defined axiomatically as certain equations called axioms, together with their logical consequences called theorems, or semantically as those equations which are true for every possible assignment of 0 or 1 to their variables. The axiomatic approach is sound and complete in the sense that it proves respectively
neither more nor fewer laws than the semantic approach.
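Because the semantic approach quantifies over every possible 0/1 assignment, any candidate law can be checked by brute-force enumeration. A small sketch of this (our own illustration, not from the original post; all names are ours), verifying De Morgan's law and distributivity over the two-element Boolean algebra:

```python
from itertools import product

# Boolean operations on {0, 1}: conjunction, disjunction, complement.
AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

def holds_for_all(law, nvars):
    """Check a candidate law under every assignment of 0/1 to its variables."""
    return all(law(*vals) for vals in product((0, 1), repeat=nvars))

# De Morgan: not(x and y) == (not x) or (not y)
de_morgan = lambda x, y: NOT(AND(x, y)) == OR(NOT(x), NOT(y))
# Distributivity: x and (y or z) == (x and y) or (x and z)
distrib = lambda x, y, z: AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))

print(holds_for_all(de_morgan, 2), holds_for_all(distrib, 3))  # True True
```

The same `holds_for_all` check reports False for a non-law such as `x or y == x and y`, which fails at the assignment (0, 1).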
In our next blog we shall learn about
ferrous metals list
I hope the above explanation was useful. Keep reading and leave your comments.
MSP Synthesis Tutorial 2: Tremolo and Ring Modulation
From Cycling '74 Wiki
Click here to open the tutorial patch: Media:02yTremoloAndRingMod.maxpat
Multiplying signals
In the previous tutorial we added sine tones together to make a complex tone. In this tutorial we will see how a very different effect can be achieved by multiplying signals. Multiplying one wave by
another - i.e., multiplying their instantaneous amplitudes, sample by sample - creates an effect known as ring modulation (or, more generally, amplitude modulation). ‘Modulation’ in this case simply
means change; the amplitude of one waveform is changed continuously by the amplitude of another.
Technical detail:
Multiplication of waveforms in the time domain is equivalent to convolution
of waveforms in the frequency domain. One way to understand convolution is as the superimposition of one spectrum on every frequency of another spectrum. Given two spectra S1 and S2, each of which contains many different frequencies all at different amplitudes, make a copy of S1 at the location of every frequency in S2, with each copy scaled by the amplitude of that particular frequency of S2.
Since a cosine wave has equal amplitude at both positive and negative frequencies, its spectrum contains energy (equally divided) at both f and -f. When convolved with another cosine wave, then, a
scaled copy of (both the positive and negative frequency components of) the one wave is centered around both the positive and negative frequency components of the other.
Multiplication in the time domain is equivalent to convolution in the frequency domain.
In our example patch, we multiply two sinusoidal tones. Ring modulation (multiplication) can be performed with any signals, and in fact the most sonically interesting uses of ring modulation involve
complex tones. However, we'll stick to sine tones in this example for the sake of simplicity, to allow you to hear clearly the effects of signal multiplication.
The tutorial patch contains two cycle~ objects, and the outlet of each one is connected to one of the inlets of a *~ object. However, the output of one of the cycle~ objects is first scaled by an
additional *~ object, which provides control of the over-all amplitude of the result. (Without this, the over-all amplitude of the product of the two cycle~ objects would always be 1.)
When you first open the tutorial patch, a loadbang object initializes the frequency and amplitude of the oscillators. One oscillator is at an audio frequency of 1000 Hz. The other is at a sub-audio
frequency of 0.1 Hz (one cycle every ten seconds). The 1000 Hz tone is the one we hear (this is termed the carrier oscillator), and it is modulated by the other wave (called the modulator) such that
we hear the amplitude of the 1000 Hz tone dip to 0 whenever the 0.1 Hz cosine goes to 0. (Twice per cycle, meaning once every five seconds.)
• Click on the ezdac~ to turn audio on and raise the volume on the gain~ slider. You will hear the amplitude of the 1000 Hz tone rise and fall according to the cosine curve of the modulator, which
completes one full cycle every ten seconds. (When the modulator is negative, it inverts the carrier, but we don't hear the difference, so the effect is of two equivalent dips in amplitude per
modulation period.)
The amplitude is equal to the product of the two waves. Since the peak amplitude of the carrier is 1, the over-all amplitude is equal to the amplitude of the modulator.
• Drag on the number box labeled Amplitude to adjust the sound to a comfortable level. Click on the message box containing the number 1 to change the modulator rate.
With the modulator rate set at 1, you hear the amplitude dip to 0 two times per second. Such a periodic fluctuation in amplitude is known as tremolo. (Note that this is distinct from vibrato, a term
usually used to describe a periodic fluctuation in pitch or frequency.) The perceived rate of tremolo is equal to two times the modulator rate, since the amplitude goes to 0 twice per cycle. As
described on the previous page, ring modulation produces the sum and difference frequencies, so you're actually hearing the frequencies 1001 Hz and 999 Hz, and the 2 Hz beating due to the
interference between those two frequencies.
• One at a time, click on the message boxes containing 2 and 4. What tremolo rates do you hear? The sound is still like a single tone of fluctuating amplitude because the sum and difference tones are too close
in frequency for you to separate them successfully, but can you calculate what frequencies you're actually hearing?
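The frequencies in question can be tabulated directly: for each modulator rate, ring modulating the 1000 Hz carrier produces sidebands at the sum and difference of the two frequencies (a sketch of the arithmetic; the function name is ours):

```python
def sidebands(carrier_hz, modulator_hz):
    """Ring modulating two sinusoids yields energy at the difference and
    sum of their frequencies (the sidebands)."""
    return carrier_hz - modulator_hz, carrier_hz + modulator_hz

for rate in (0.1, 1, 2, 4, 8, 16, 32, 50):
    lo, hi = sidebands(1000, rate)
    # The perceived tremolo/beat rate is twice the modulator rate.
    print(f"modulator {rate} Hz -> sidebands {lo} Hz and {hi} Hz, beating at {2 * rate} Hz")
```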
• Now try setting the rate of the modulator to 8 Hz, then 16 Hz.
In these cases the rate of tremolo borders on the audio range. We can no longer hear the tremolo as distinct fluctuations, and the tremolo just adds a unique sort of ‘roughness’ to the sound. The sum
and difference frequencies are now far enough apart that they no longer fuse together in our perception as a single tone, but they still lie within what psychoacousticians call the critical band.
Within this critical band we have trouble hearing the two separate tones as a pitch interval, presumably because they both affect the same region of our basilar membrane.
• Try setting the rate of the modulator to 32 Hz, then 50 Hz.
At a modulation rate of 32 Hz, you can hear the two tones as a pitch interval (approximately a minor second), but the sensation of roughness persists. With a modulation rate of 50 Hz, the sum and
difference frequencies are 1050 Hz and 950 Hz - a pitch interval almost as great as a major second - and the roughness is mostly gone. You might also hear the tremolo rate itself, as a tone at 100
You can see that this type of modulation produces new frequencies not present in the carrier and modulator tones. These additional frequencies, on either side of the carrier frequency, are often
called sidebands.
• Listen to the remaining modulation rates.
At certain modulation rates, all the sidebands are aligned in a harmonic relationship. With a modulation rate of 200 Hz, for example, the tremolo rate is 400 Hz and the sum and difference frequencies
are 800 Hz and 1200 Hz. Similarly, with a modulation rate of 500 Hz, the tremolo rate is 1000 Hz and the sum and difference frequencies are 500 Hz and 1500 Hz. In these cases, the sidebands fuse
together more tightly as a single complex tone.
• Experiment with other carrier and modulator frequencies by typing other values into the number box objects. Note how different ratios of frequencies
create different harmonic (or inharmonic) sidebands.
Multiplication of two digital signals is comparable to the analog audio technique known as ring modulation. Ring modulation is a type of amplitude modulation - changing the amplitude of one tone
(termed the carrier) with the amplitude of another tone (called the modulator). Multiplication of signals in the time domain is equivalent to convolution of spectra in the frequency domain.
Multiplying an audio signal by a sub-audio signal results in regular fluctuations of amplitude known as tremolo. Multiplication of signals creates sidebands - additional frequencies not present in
the original tones. Multiplying two sinusoidal tones produces energy at the sum and difference of the two frequencies. This can create beating due to interference of waves with similar frequencies,
or can create a fused complex tone when the frequencies are harmonically related. When two signals are multiplied, the output amplitude is determined by the product of the carrier and modulator amplitudes.
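The fact that multiplying two sinusoids produces exactly the sum and difference frequencies is the product-to-sum identity cos a · cos b = (1/2)[cos(a − b) + cos(a + b)]. A quick numerical check of this (our own sketch, not part of the Max patch; the sample rate and frequencies are arbitrary):

```python
import math

f_c, f_m = 1000.0, 200.0  # carrier and modulator frequencies (Hz)

for n in range(100):
    t = n / 44100.0  # sample times at a standard audio rate
    product_wave = math.cos(2 * math.pi * f_c * t) * math.cos(2 * math.pi * f_m * t)
    # Equal-amplitude sidebands at f_c - f_m and f_c + f_m:
    sideband_sum = 0.5 * (math.cos(2 * math.pi * (f_c - f_m) * t)
                          + math.cos(2 * math.pi * (f_c + f_m) * t))
    assert abs(product_wave - sideband_sum) < 1e-12
print("ring modulation = sum of sidebands, sample by sample")
```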
What is the integral of cos 2x? - Homework Help - eNotes.com
What is the integral of cos 2x?
We have to find the integral of cos 2x.
Int[ cos 2x dx]
let 2x = u
du = 2*dx
dx = (1/2)*du
=> Int [ cos u * (1/2) du]
=> (1/2) sin u + C
substitute u = 2x
=> (1/2) sin 2x + C
The required integral of cos 2x is (1/2)*sin 2x + C
The integral is the antiderivative of the function.
In this case, differentiating would use the chain rule, whereas the antiderivative (the integral) applies the same rule in the opposite direction. Thus, the integral of cos 2x is (1/2)*sin 2x + C.
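The answer can be sanity-checked numerically: differentiating (1/2)*sin 2x with a centered difference should recover cos 2x (a sketch; the step size and sample points are arbitrary):

```python
import math

def antiderivative(x):
    # The claimed antiderivative of cos 2x (the constant C drops out on differentiating).
    return 0.5 * math.sin(2 * x)

h = 1e-6
for x in (0.0, 0.5, 1.0, 2.0, -1.3):
    numeric_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric_derivative - math.cos(2 * x)) < 1e-8
print("d/dx [(1/2) sin 2x] = cos 2x (numerically)")
```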
Yitwah Cheung's Homepage - Research
(with N. Chevallier) Hausdorff dimension of singular vectors, [pdf], submitted
Abstract: We prove the Hausdorff dimension of the set of singular vectors in R^d is d^2/(d+1) for d>1.
(with B. Weiss) TBD, in preparation
Abstract: We give examples of divergent trajectories satisfying a conjecture of Wolfgang Schmidt on successive minima of a lattice, with arbitrarily slow rates of divergence.
(with A. Eskin) Slow Divergence and Unique Ergodicity [ pdf ], arXiv:0711.0240v1., preprint
Abstract: Masur showed that a Teichmuller geodesic that is recurrent in the moduli space of closed Riemann surfaces is necessarily determined by a quadratic differential with a uniquely ergodic
vertical foliation. In this paper, we show that a divergent Teichmuller geodesic satisfying a certain slow rate of divergence is also necessarily determined by a quadratic differential with unique
ergodic vertical foliation. As an application, we sketch a proof of a complete characterization of the set of nonergodic directions in any double cover of the flat torus branched over two points.
(with J. Chaika and H. Masur) Winning games for bounded geodesics in Teichmuller discs [ pdf ], Journal Modern Dynamics, to appear
Abstract: We prove that for the flat surface defined by a holomorphic quadratic differential the set of directions such that the corresponding Teichmueller geodesic lies in a compact set in the
corresponding stratum is a winning set in Schmidt game. This generalizes a classical result in the case of the torus due to Schmidt and strengthens a result of Kleinbock and Weiss.
(with J. Athreya) A Poincare section for the horocycle flow on the space of lattices [ pdf], Int. Math. Res. Not., to appear
Abstract: In this paper, we show that the "BCZ map" introduced by Boca-Cobeli-Zaharescu is a Poincare section of the horocycle flow on the modular surface. Using this discovery, we develop the
ergodic properties of the BCZ map, proving in particular that it is ergodic and has zero entropy with respect to Lebesgue measure. We also show that many earlier results on statistical properties of
Farey fractions follow from the equidistribution principle for closed horocycle orbits. As one further application of the BCZ map, we compute the average depth of a horocycle orbit relative to the
number of visits into the cusp.
(with A. Goetz and A. Quas) Piecewise Isometries, Uniform Distribution and 3 log 2 - π^2/8 [ pdf ], Ergod. Th. Dynam. Sys., 32, (2012), 1862-1888.
Abstract: We use analytic tools to study a simple family of piecewise isometries of the plane parameterized by an angle parameter. In previous work we showed the existence of large numbers of
periodic points, each surrounded by a 'periodic island'. We also proved conservativity of the systems as infinite measure-preserving transformations. In experiments it is observed that the periodic
islands fill up a large part of the phase space and it has been asked whether the periodic islands form a set of full measure. In this paper we study the periodic islands around an important family
of periodic orbits and demonstrate that for all angle parameters that are irrational multiples of π the islands have asymptotic density in the plane of 3 log 2 - π^2/8 ≈ 0.846.
(with P. Hubert and H. Masur) Dichotomy for the Hausdorff dimension of the set of nonergodic directions [ pdf ], Inventiones, 183, (2011), 337-383.
Abstract: We consider billiards in a (1/2)-by-1 rectangle with a barrier midway along a vertical side. Let NE be the set of directions theta such that the flow in direction theta is not ergodic. We
show that the Hausdorff dimension of the set NE is either 0 or 1/2, with the latter occurring if and only if the length of the barrier satisfies the condition that the sum of (loglog q[k+1])/q[k] is
finite, where q[k] is the denominator of the kth convergent of the length of the barrier.
Hausdorff dimension of the set of Singular Pairs [ pdf ], Annals of Math., 173, (2011), 127-167.
Abstract: In this paper we show that the Hausdorff dimension of the set of singular pairs is 4/3. We also show that the action of diag(e^t,e^t,e^-2t) on SL[3]R/SL[3]Z admits divergent
trajectories that exit to infinity at arbitrarily slow prescribed rates, answering a question of A.N. Starkov. As a by-product of the analysis, we obtain a higher dimensional generalisation of the
basic inequalities satisfied by convergents of continued fractions. As an illustration of the techniques used to compute Hausdorff dimension, we show that the set of real numbers with divergent
partial quotients has Hausdorff dimension 1/2.
(with P.Hubert and H.Masur) Topological Dichotomy and Strict Ergodicity for Translation Surfaces [ pdf ] Ergod. Th. Dynam. Sys., 28 (2008), 1729--1748.
Abstract: In this paper the authors find examples of translation surfaces that have infinitely generated Veech groups, satisfy the topological dichotomy property that for every direction either the
flow in that direction is completely periodic or minimal, and yet have minimal but non uniquely ergodic directions.
(with A. Eskin) Unique Ergodicity of Translation Flows [ pdf | ps ] Fields Institute Communications 51 (2007) 213-222.
Abstract: This preliminary report contains a sketch of the proof of the following result: a slowly divergent Teichmuller geodesic satisfying a certain logarithmic law is determined by a uniquely
ergodic measured foliation.
Hausdorff dimension of the set of points on divergent trajectories of a homogeneous flow on a product space [pdf | ps] Ergod. Th. Dynam. Sys., 27 (2007), 65--85.
Abstract: In this paper we compute the Hausdorff dimension of the set D_n of points on divergent trajectories of the homogeneous flow induced by a certain one-parameter subgroup of G=SL(2,R) acting
by left multiplication on the product space G^n/Gamma^n, where Gamma=SL(2,Z). We prove that the Hausdorff dimension of D_n equals 3n-(1/2) for any n greater than one.
(with H. Masur) Minimal nonergodic directions on genus 2 translation surfaces [ pdf | ps ] Ergod. Th. Dynam. Sys. 26 (2006), 341--351.
Abstract: It is well-known that on any Veech surface, the dynamics in any minimal direction is uniquely ergodic. In this paper, it is shown that for any genus 2 translation surface which is not a
Veech surface there are uncountably many minimal but not uniquely ergodic directions. Slides [ pdf | ps ] for talk at Midwest Dynamics Seminar, October 2005.
(with H. Masur) A divergent Teichmuller geodesic with uniquely ergodic vertical foliation [ pdf | ps ] (Israel J. Math. 152 (2006), 1--15.
Abstract: We construct an example of a quadratic differential whose vertical foliation is uniquely ergodic and such that the Teichmuller geodesic determined by the quadratic differential diverges in
the moduli space of Riemann surfaces.
Slowly divergent geodesics in moduli space [ pdf | ps ] (Conform. Geom. Dyn. 8 (2004), 167--189.)
Abstract: Slowly divergent geodesics in the moduli space of Riemann surfaces of genus at least 2 are constructed via cyclic branched covers of the torus. Nonergodic examples (i.e. geodesics whose
defining quadratic differential has nonergodic vertical foliation) diverging to infinity at sublinear rates are constructed using a Diophantine condition. Examples with an arbitrarily slow
prescribed growth rate are also exhibited.
Hausdorff dimension of the set of nonergodic directions [ pdf | ps ] (Ann. of Math., 158 (2003), 661--678.)
Abstract: It is known that nonergodic directions in a rational billard form a subset of the unit circle with Hausdorff dimension at most 1/2. Explicit examples realizing the dimension 1/2 are
constructed using Diophantine numbers and continued fractions. A lower estimate on the number of primitive lattice points in certain subsets of the plane is used in the construction.
Information Theory b-log
Update (4/3/2014): I believe I have solved the conjecture, and proven it to be correct. I will make a preprint available shortly. The original blog post remains available below. — Tom
I have an extremal conjecture that I have been working on intermittently with some colleagues (including Jiantao Jiao, Tsachy Weissman, Chandra Nair, and Kartik Venkat). Despite our efforts, we have
not been able to prove it. Hence, I thought I would experiment with online collaboration by offering it to the broader IT community.
In order to make things interesting, we are offering a $1000 prize for the first correct proof or counterexample! Feel free to post your thoughts in the public comments. You can also email me if you
have questions or want to bounce some ideas off me.
Although I have no way of enforcing them, please abide by the following ground rules:
1. If you decide to work on this conjecture, please send me an email to let me know that you are doing so. As part of this experiment with online collaboration, I want to gauge how many people
become involved at various degrees.
2. If you solve the conjecture or make significant progress, please keep me informed.
3. If you repost this conjecture, or publish any results, please cite this blog post appropriately.
One final disclaimer: this post is meant to be a brief introduction to the conjecture, with a few partial results to get the conversation started; it is not an exhaustive account of the approaches we
have tried.
1. The Conjecture
Conjecture 1. Suppose ${X,Y}$ are jointly Gaussian, each with unit variance and correlation ${\rho}$. Then, for any ${U,V}$ satisfying ${U-X-Y-V}$, the following inequality holds:
$\displaystyle 2^{-2I(Y;U)} 2^{-2I(X;V|U)} \geq (1-\rho^2)+ \rho^2 2^{-2I(X;U)} 2^{-2I(Y;V|U)} . \ \ \ \ \ (1)$
2. Partial Results
There are several partial results which suggest the validity of Conjecture 1. Moreover, numerical experiments have not produced a counterexample.
Conjecture 1 extends the following well-known consequence of the conditional entropy power inequality to include long Markov chains.
Lemma 1 (Oohama, 1997). Suppose ${X,Y}$ are jointly Gaussian, each with unit variance and correlation ${\rho}$. Then, for any ${U}$ satisfying ${U-X-Y}$, the following inequality holds:
$\displaystyle 2^{-2 I(Y;U)} \geq 1-\rho^2+\rho^2 2^{-2I(X;U)}. \ \ \ \ \ (2)$
Proof: Consider any ${U}$ satisfying ${U-X-Y}$. Let ${X_u, Y_u}$ denote the random variables ${X,Y}$ conditioned on ${U=u}$. By Markovity and the definition of ${X,Y}$, we have that ${Y_u = \rho X_u + Z}$, where ${Z\sim N(0,1-\rho^2)}$ is independent of ${X_u}$. Hence, the conditional entropy power inequality implies that
$\displaystyle 2^{2h(Y|U)} \geq \rho^2 2^{2h(X|U)} + 2 \pi e(1-\rho^2) = 2 \pi e \rho^2 2^{-2I(X;U)} + 2 \pi e(1-\rho^2). \ \ \ \ \ (3)$
From here, the lemma easily follows. $\Box$
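As a side check (our own illustration, not in the original post): when U is jointly Gaussian with X and E[XU] = ρ_u, the closed forms 2^{-2I(X;U)} = 1 − ρ_u² and 2^{-2I(Y;U)} = 1 − ρ²ρ_u² make the bound (2) hold with equality, which a short script verifies across a grid:

```python
# Check Oohama's bound (2) for jointly Gaussian U with E[XU] = rho_u:
# with 2^{-2 I(X;U)} = 1 - rho_u^2 and 2^{-2 I(Y;U)} = 1 - rho^2 rho_u^2,
# the left and right sides of (2) agree exactly.
for i in range(1, 10):
    rho = i / 10.0
    for j in range(0, 10):
        rho_u = j / 10.0
        lhs = 1 - rho**2 * rho_u**2                  # 2^{-2 I(Y;U)}
        rhs = 1 - rho**2 + rho**2 * (1 - rho_u**2)   # bound from (2)
        assert abs(lhs - rhs) < 1e-12
print("Gaussian auxiliaries attain equality in (2)")
```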
Lemma 1 can be applied to prove the following special case of Conjecture 1. This result subsumes most of the special cases I can think of analyzing analytically.
Proposition 1. Suppose ${X,Y}$ are jointly Gaussian, each with unit variance and correlation ${\rho}$. If ${U-X-Y}$ are jointly Gaussian and ${U-X-Y-V}$, then
$\displaystyle 2^{-2I(Y;U)} 2^{-2I(X;V|U)} \geq (1-\rho^2)+ \rho^2 2^{-2I(X;U)} 2^{-2I(Y;V|U)}. \ \ \ \ \ (4)$
Proof: Without loss of generality, we can assume that ${U}$ has zero mean and unit variance. Define ${\rho_u = E[XU]}$. Since ${U-X-Y}$ are jointly Gaussian, we have
$\displaystyle I(X;U) =\frac{1}{2}\log\frac{1}{1-\rho_u^2} \ \ \ \ \ (5)$
$\displaystyle I(Y;U) =\frac{1}{2}\log\frac{1}{1-\rho^2\rho_u^2}. \ \ \ \ \ (6)$
Let ${X_u,Y_u,V_u}$ denote the random variables ${X,Y,V}$ conditioned on ${U=u}$, respectively. Define ${\rho_{XY|u}}$ to be the correlation coefficient between ${X_u}$ and ${Y_u}$. It is readily
verified that
$\displaystyle \rho_{XY|u} = \frac{\rho\sqrt{1-\rho_u^2}}{\sqrt{1-\rho^2\rho_u^2}}, \ \ \ \ \ (7)$
which does not depend on the particular value of ${u}$. By plugging (5)-(7) into (4), we see that (4) is equivalent to
$\displaystyle 2^{-2I(X;V|U)} \geq (1-\rho_{XY|u}^2)+ \rho_{XY|u}^2 2^{-2I(Y;V|U)}. \ \ \ \ \ (8)$
For every ${u}$, the variables ${X_u,Y_u}$ are jointly Gaussian with correlation coefficient ${\rho_{XY|u}}$ and ${X_u-Y_u-V_u}$ form a Markov chain, hence Lemma 1 implies
$\displaystyle 2^{-2I(X_u;V_u)} \geq (1-\rho_{XY|u}^2)+ \rho_{XY|u}^2 2^{-2I(Y_u;V_u)}. \ \ \ \ \ (9)$
The desired inequality (8) follows by convexity of
$\displaystyle \log\left[(1-\rho_{XY|u}^2)+ \rho_{XY|u}^2 2^{-2z}\right] \ \ \ \ \ (10)$
as a function of ${z}$. $\Box$
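Formula (7) can be double-checked against a direct conditional-covariance computation (a sketch; the function names are ours): for jointly Gaussian (U, X, Y) with E[XU] = ρ_u, E[XY] = ρ and U-X-Y Markov, one has Cov(X,Y|U) = ρ(1 − ρ_u²), Var(X|U) = 1 − ρ_u² and Var(Y|U) = 1 − ρ²ρ_u².

```python
import math

def rho_xy_given_u(rho, rho_u):
    """Conditional correlation of X and Y given U, computed from the
    conditional covariance algebra for jointly Gaussian (U, X, Y) with
    E[XU] = rho_u, E[XY] = rho, hence E[YU] = rho * rho_u by Markovity."""
    cov = rho - rho_u * (rho * rho_u)   # Cov(X, Y | U) = rho (1 - rho_u^2)
    var_x = 1 - rho_u**2                # Var(X | U)
    var_y = 1 - (rho * rho_u)**2        # Var(Y | U)
    return cov / math.sqrt(var_x * var_y)

def formula_7(rho, rho_u):
    """The closed form (7) from the proof."""
    return rho * math.sqrt(1 - rho_u**2) / math.sqrt(1 - rho**2 * rho_u**2)

for i in range(1, 10):
    for j in range(0, 10):
        rho, rho_u = i / 10.0, j / 10.0
        assert abs(rho_xy_given_u(rho, rho_u) - formula_7(rho, rho_u)) < 1e-12
print("formula (7) matches the partial-correlation computation")
```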
3. Equivalent Forms
There are many equivalent forms of Conjecture 1. For example, (1) can be replaced by the symmetric inequality
$\displaystyle 2^{-2(I(X;V)+I(Y;U))} \geq (1-\rho^2)2^{-2I(U;V)}+ \rho^2 2^{-2(I(X;U)+I(Y;V))}. \ \ \ \ \ (11)$
Alternatively, we can consider dual forms of Conjecture 1. For instance, one such form is stated as follows:
Conjecture 1′. Suppose ${X,Y}$ are jointly Gaussian, each with unit variance and correlation ${\rho}$. For ${\lambda\in [{1}/({1+\rho^2}),1]}$, the infimum of
$\displaystyle I(X,Y;U,V)-\lambda\Big(I(X;UV)+I(Y;UV)\Big), \ \ \ \ \ (12)$
taken over all ${U,V}$ satisfying ${U-X-Y-V}$ is attained when ${U,X,Y,V}$ are jointly Gaussian.
President Obama bestows 2011 National Medals of Science and Technology
including Claude E. Shannon Award winner Sol Golomb
Nevanlinna Prize
Nominations of people born on or after January 1, 1974
for outstanding contributions in Mathematical Aspects of Information Sciences including:
1. All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information
processing and modelling of intelligence.
2. Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.
Nomination Procedure: http://www.mathunion.org/general/prizes/nevanlinna/details/
The Epijournal: a new publication model
Information and Inference (new journal)
The first issue of Information and Inference has just appeared:
It includes the following editorial:
In recent years, a great deal of energy and talent have been devoted to new research problems arising from our era of abundant and varied data/information. These efforts have combined advanced
methods drawn from across the spectrum of established academic disciplines: discrete and applied mathematics, computer science, theoretical statistics, physics, engineering, biology and even finance.
This new journal is designed to serve as a meeting place for ideas connecting the theory and application of information and inference from across these disciplines.
While the frontiers of research involving information and inference are dynamic, we are currently planning to publish in information theory, statistical inference, network analysis, numerical
analysis, learning theory, applied and computational harmonic analysis, probability, combinatorics, signal and image processing, and high-dimensional geometry; we also encourage papers not fitting
the above description, but which expose novel problems, innovative data types, surprising connections between disciplines and alternative approaches to inference. This first issue exemplifies this
topical diversity of the subject matter, linked by the use of sophisticated mathematical modelling, techniques of analysis, and focus on timely applications.
To enhance the impact of each manuscript, authors are encouraged to provide software to illustrate their algorithm and where possible replicate the experiments presented in their manuscripts.
Manuscripts with accompanying software are marked as “reproducible” and have the software linked on the journal website under supplementary material. It is with pleasure that we welcome the scientific community to this new publication venue.
Robert Calderbank David L. Donoho John Shawe-Taylor Jared Tanner
Comparing Variability of Random Variables
Consider exchangeable random variables ${X_1, \ldots, X_n, \ldots}$. A couple of facts seem quite intuitive:
Statement 1. The “variability” of the sample mean ${S_m = \frac{1}{m} \sum_{i=1}^{m} X_i}$ decreases with ${m}$.
Statement 2. Let the average of functions ${f_1, f_2, \ldots, f_n}$ be defined as ${\overline{f} (x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x)}$. Then ${\max_{1\leq i \leq n} \overline{f}(X_i)}$ is less
“variable” than ${\max_{1\leq i \leq n} f_i (X_i)}$.
To make these statements precise, one faces the fundamental question of comparing two random variables ${W}$ and ${Z}$ (or more precisely comparing two distributions). One common way we think of
ordering random variables is the notion of stochastic dominance:
$\displaystyle W \leq_{st} Z \Leftrightarrow F_W(t) \geq F_Z(t) \ \ \ \mbox{ for all real } t.$
However, this notion is suitable only when one is concerned with the actual size of the random quantities of interest. In our scenario, a more natural order is one that compares the variability
of two random variables (or, more precisely, of the two distributions). It turns out that a very useful such notion, used in a variety of fields, is due to
Ross (1983): random variable ${W}$ is said to be stochastically less variable than random variable ${Z}$ (denoted by ${\leq_v}$) when every risk-averse decision maker would choose ${W}$ over ${Z}$
(given they have similar means). More precisely, for random variables ${W}$ and ${Z}$ with finite means
$\displaystyle W \leq_{v} Z \Leftrightarrow \mathbb{E}[f(W)] \leq \mathbb{E}[f(Z)] \ \ \mbox{ for all increasing convex functions } f \in \mathcal{F}$
where ${\mathcal{F}}$ is the set of functions for which the above expectations exist.
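As a quick sanity check of this definition (not part of the original post), the order can be probed with the increasing convex "stop-loss" functions f_t(x) = max(x - t, 0), which are known to generate it. A minimal Monte Carlo sketch with a hypothetical pair: W the average of two Exp(1) draws, Z a single Exp(1) draw (both with mean 1):

```python
import random

# Empirical check of the variability (increasing convex) order.
# Hypothetical example: W = average of two Exp(1) draws, Z = one Exp(1)
# draw.  Both have mean 1, but W should be less variable, so
# E[(W - t)_+] <= E[(Z - t)_+] for every t.

def stop_loss(samples, t):
    """Estimate E[(X - t)_+] from a list of samples."""
    return sum(max(x - t, 0.0) for x in samples) / len(samples)

rng = random.Random(0)
n = 100_000
z = [rng.expovariate(1.0) for _ in range(n)]
w = [(rng.expovariate(1.0) + rng.expovariate(1.0)) / 2 for _ in range(n)]

for t in (0.5, 1.0, 2.0, 4.0):
    assert stop_loss(w, t) < stop_loss(z, t)
```

Here the stop-loss functions stand in for the full family of increasing convex f; passing this check is of course only evidence, not a proof, of the ordering.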
One interesting, but perhaps not entirely obvious, fact is that the ordering ${W\leq_v Z}$ is equivalent to saying that there is a sequence of mean-preserving spreads that in the limit
transforms the distribution of ${W}$ into the distribution of another random variable ${W'}$ with finite mean such that ${W'\leq_{st} Z}$! Also, using results of Hardy, Littlewood and Pólya (1929),
the stochastic variability order introduced above can be shown to be equivalent to the Lorenz (1905) ordering used in economics to measure income inequality.
Now with this, we are ready to formalize our previous statements. The first statement is actually due to Arnold and Villasenor (1986):
$\displaystyle \frac{1}{m} \sum_{i=1}^{m} X_i \leq_v \frac{1}{m-1} \sum_{i=1}^{m-1} X_i \ \ \ \ \ \ \ \ \ \ \ \ \mbox{for all }\ \ m \in \mathbb{N}.$
Note that when you apply this fact to a sequence of iid random variables with finite mean ${\mu}$, it strengthens the strong law of large numbers: it ensures that the almost sure convergence of
the sample mean to the mean value ${\mu}$ occurs with monotonically decreasing variability as the sample size grows.
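A Monte Carlo illustration of this monotone decrease (a hypothetical sketch using iid Exp(1) draws; the code is not from the post): the increasing convex functional E[(S_m - t)_+] should shrink as m grows.

```python
import random

# Monte Carlo look at Statement 1 for iid Exp(1) draws (a hypothetical
# choice of distribution): the increasing convex functional
# E[(S_m - t)_+] should decrease as the sample size m grows.
rng = random.Random(1)
reps, t = 50_000, 1.5

def mean_stop_loss(m):
    total = 0.0
    for _ in range(reps):
        s = sum(rng.expovariate(1.0) for _ in range(m)) / m
        total += max(s - t, 0.0)
    return total / reps

vals = [mean_stop_loss(m) for m in (1, 2, 4, 8)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```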
The second statement comes up in proving certain optimality results on sharing parallel servers in fork-join queueing systems (J. 2008) and has a similar flavor:
$\displaystyle \max_{1\leq i \leq n} \overline{f}(X_i) \leq_v \max_{1\leq i \leq n} f_i (X_i).$
The cleanest way to prove both statements, to the best of my knowledge, is based on the following theorem first proved by Blackwell in 1953 (later strengthened to random elements in separable Banach
spaces by Strassen in 1965, hence referred to by some as Strassen’s theorem):
Theorem 1 Let ${W}$ and ${Z}$ be two random variables with finite means. A necessary and sufficient condition for ${W \leq_v Z}$ is that there are two random variables ${\hat{W}}$ and ${\hat{Z}}$
with the same marginals as ${W}$ and ${Z}$, respectively, such that ${\mathbb{E}[\hat{Z} |\hat{W}] \geq \hat{W}}$ almost surely.
For instance, to prove the first statement we consider ${\hat{W} = W = \frac{1}{n} \sum_{i=1}^n X_i}$ and ${Z = \frac{1}{n-1} \sum_{i=1}^{n-1} X_i}$. All that is necessary now is to note that ${\hat{Z} := \frac{1}{n-1} \sum_{i\in I, i \neq J} X_i}$, where ${J}$ is an independent uniform random variable on the set ${I := \{1,2, \ldots, n\}}$, has the same distribution as ${Z}$. Furthermore,
$\displaystyle \mathbb{E} [ \hat{Z} \mid W ] = \mathbb{E} \Big[ \frac{1}{n} \sum_{j=1}^{n} \Big( \frac{1}{n-1} \sum_{i\in I, i \neq j} X_i \Big) \Big| W \Big] = \mathbb{E} \Big[ \frac{1}{n} \sum_{j=1}^{n} X_j \Big| W \Big] = W.$
Similarly to prove the second statement, one can construct ${\hat{Z}}$ by selecting a random permutation of functions ${f_1, \ldots, f_n}$.
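The identity behind the proof (conditionally on the sample values, averaging the leave-one-out means over a uniformly chosen index recovers the full sample mean) can be checked deterministically; this small sketch uses an arbitrary made-up data vector:

```python
from fractions import Fraction

# Deterministic check of the identity used in the proof: averaging the
# leave-one-out means (1/(n-1)) * sum_{i != j} x_i over a uniform index j
# gives back the full sample mean, i.e. E[Z_hat | W] = W.
x = [Fraction(v) for v in (3, 1, 4, 1, 5, 9)]   # arbitrary example data
n = len(x)
loo_means = [(sum(x) - x[j]) / (n - 1) for j in range(n)]
assert sum(loo_means) / n == sum(x) / n
```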
The face of randomness or Poisson for civilians
Information and the origin of life
Fill in the following
Solar Energy Help ASAP
Below you see a plot of an ideal and a non-ideal JV curve. Based on this plot and assuming an irradiance of 1000 W/m2, give approximate numerical values for the questions below: I. Estimate the
ideal Fill Factor (in %): II. Estimate the non-ideal Fill Factor (in %): III. ...
Thursday, October 17, 2013 at 3:48pm by Anonymous
5th Grade Math
0.38 = (3 X 0.1) + ( X ) My sons paper instructs us to fill in the missing values. We understand that we are to fill in the two values in the second set of parenthesis, but that is all we understand.
Please help...I feel so inadequate because I can't help my child with this ...
Tuesday, September 7, 2010 at 8:00pm by Sabrina
Advanced Maths
A reservoir is fed by two pipes of different diameter. The pipe with the larger diameter takes three hours less than the smaller pipe to fill the reservoir. if both pipes are opened simultaneously,
the reservoir can fill in two hours. Calculate how long it takes the pipe with ...
Saturday, February 15, 2014 at 12:40pm by Michelle
Algebra 1
Let tank volume = V in gallons. First pipe fill rate = Q1 = V/12 gpm; second pipe fill rate = Q2 = V/10 gpm. Let Q3 = third pipe fill rate in gallons/min. Then 4 minutes = V/(Q1 + Q2 + Q3) = V/(V/12 +
V/10 + Q3) = 1/(1/12 + 1/10 + Q3/V), so 1/3 + 2/5 + 4 Q3/V = 1, hence 4 Q3/V = 4/15 and Q3/V = ...
Friday, February 22, 2013 at 6:41pm by drwls
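The rate bookkeeping in the answer above can be redone with exact rational arithmetic; this sketch assumes the fill times implied by the working (12 min, 10 min, and 4 min for all three together):

```python
from fractions import Fraction

# Work-rate check: two pipes fill in 12 min and 10 min; all three pipes
# together fill in 4 min.  Solve for the third pipe's rate and time.
t1, t2, together = Fraction(12), Fraction(10), Fraction(4)
rate3 = 1 / together - 1 / t1 - 1 / t2   # third pipe's rate in tanks/min
t3 = 1 / rate3
assert rate3 == Fraction(1, 15)
assert t3 == 15   # the third pipe alone would fill the tank in 15 minutes
```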
I really need help with this question, please. Break-even equation. Fill in the blank. The following table contains selected data concerning several outpatient clinics in the new Ambulatory Care Center
at Hope University Hospital. Fill in the missing information. A. PRICE PER ...
Wednesday, August 14, 2013 at 6:57pm by Mae
Computers- Microsoft Word
Merge can be used to ... ... make address labels for a list of people. ... fill in names and addresses in form letters. ... fill in specific terms in form letters according to which name the letter
is being addressed to. and others.
Tuesday, January 29, 2008 at 8:09pm by Writeacher
Environmental studies
Are there problems with mitigating impacts on wetlands/streams at an off-site location within the same watershed? i.e. fill a wetland in Maryland and mitigate for it or create it in Virginia. Should
the federal government (U.S. Army Corps of Engineers) or a developer decide ...
Tuesday, June 3, 2008 at 8:10pm by Thano
(This is a question where we have to fill in the blank) So: 2 3 5 ? 12 11 16 26 41 ? It's a number sequence question and I need to fill in the ? as well as write a rule in words. PLEASE HELP!(And the
top number has to relate with the bottom number)
Tuesday, August 10, 2010 at 10:17am by Marianne
5th grade Math
One tap can fill a barrel in 3 minutes and another can fill it in 6 minutes. Both taps are turned on together for 1 minute. What fraction of the barrel will be filled in 1 minute?
Sunday, April 26, 2009 at 7:23pm by Serena
One pipe can fill a swimming pool in 8 hours. Another pipe takes 12 hours. How long will it take to fill the pool if both pipes are used simultaneously?
Monday, May 24, 2010 at 9:54pm by TinaG
It takes 1 and 3/4 hours to fill a new underground gasoline storage tank. What part of the tank would be filled if the gasoline had been shut off after 1 hour?
Tuesday, July 16, 2013 at 7:23pm by vincent
1. it takes: ____ electrons to fill the 6d subshell of an atom. 2. It takes _____ electrons to fill the n=6 shell of an atom. I thought it was 22 and 72?
Wednesday, March 12, 2014 at 9:26pm by abby
Using the following six digits, 1, 2, 4, 6, 8, and 9, fill in the blanks below to make the maximum product that you can. Use each digit only once. How does an understanding of place value help
determine the best solution to this problem?
Monday, January 17, 2011 at 9:56pm by Melissa
Jim can fill a pool in 30 min, sue can do it in 45 min, Tony can do it in 90min, how long would it take all three to fill the pool together? How would i figure this out?
Thursday, June 28, 2012 at 5:10pm by Anonymous
you fill up 40 gallon pool putting in 2 gallons water every minute, if someone scoops out 1 gallon every 8 minutes, how long would it take to fill the pool?
Thursday, January 24, 2013 at 6:21pm by Lisa
Math- PLEASE HELP
MS Wilkens went to the gas station to fill her car. Her gas tank is 20% full. The cost of gas is $4.29 per gallon. How much will it cost to fill the tank? Use the conversion 1 gallon = 231 in cubed.
They showed a pic. of a tank with length=1.75ft width=1.25 ft height=11 in I did ...
Sunday, March 17, 2013 at 4:43pm by Bri
Use the Microsoft Web site microsoft.com/windows/compatibility to research whether a home or lab PC that does not have Windows Vista installed qualifies for Vista. Fill in the following table and
print the Web pages showing whether each hardware device and application ...
Wednesday, December 1, 2010 at 12:45am by Anonymous
colin is buying dirt to fill a garden bed that is a 9 foot by 16 foot rectangle. if he wants to fill it to a depth of 4 inches, how many cubic yards of dirt does he need? if dirt costs $25 per yard,
how much will the project cost?
Thursday, April 19, 2012 at 12:50am by christian
I love chocolate and coffee. I begin by making one cup of hot chocolate. I drink half of my cup and then I fill the cup with coffee. I stir the new mixture and drink half of the cup and then fill the
cup again with coffee. I continue this pattern until I have consumed two full...
Monday, October 10, 2011 at 10:03pm by Help Please
One pump fills a tank two times as fast as another pump. If the pumps work together they fill the tank in 18 minutes. How long does it take each pump working alone to fill the tank?
Sunday, February 2, 2014 at 7:56pm by Tim
A 5/8 inch (inside) diameter garden hose is used to fill a round swimming pool 7.3m in diameter.How long will it take to fill the pool to a depth of 1.0m if water issues from the hose at a speed of
0.50m/s ?
Monday, November 5, 2012 at 1:56pm by Jokella
Math 8R - Homework Help!: Part 1
Part 1: Fill in the definitions I need the definitions for the following words: solution evaluate inverse operations
Monday, October 29, 2012 at 2:30pm by Laruen
I don't get how to fill in the blanks. Show the products for cellular respiration by completing the following chemical reaction equation. Give the correct formula for oxygen as well. Glucose+_________
---> ________+______+___________
Wednesday, May 16, 2007 at 7:09pm by Steve
Algebra 2
An extruder can fill an empty bin in 2 hours and a packaging crew can empty a full bin in 5 hours. If a bin is half full when an extruder begins to fill it and a crew begins to empty it how long will
it take to fill the bin?
Thursday, November 3, 2011 at 10:54am by GY
How many electrons would be required to fill the following sublevels? *3s *4s *3d *4d *3p *2d *3f *4d Choices are as follows A. 2 B 6 C 10 D 14 E There is no such orbital
Sunday, November 7, 2010 at 5:29pm by Kimberly
PSY210 APPENDIX G
According to Sternberg, a person can experience eight general types of love. In the following table, a type of love as identified by Sternberg is in the left column. In the center column, write the
combination of components that type of love demonstrates (intimacy, passion, or...
Monday, October 19, 2009 at 3:05am by Matt
the pet shop owner told jean to fill her new tank 3/4 full with water. Jean filled it 9/12 full. what fraction of the tank does Jean still need to fill?
Saturday, February 23, 2013 at 1:45pm by Lissette
The pet shop owner told jean to fill the new fish tank 3/4 full of water.Jean filled the tank 9/12 full.What fraction of the tank does she still need to fill?
Wednesday, March 21, 2012 at 9:14pm by kelly
the pet shop owner told Jean to fill her new fish tank 3/4 full with water. Jean filled it 9/12 full. What fraction of the tank does Jean still need to fill??
Saturday, November 10, 2012 at 10:54am by Marie
the pet shop owner told Jean to fill her new fish tank 3/4 full with water. Jean filled it 9/12 full . what fraction of the tank does Jean still need to fill?
Monday, March 4, 2013 at 4:08pm by jessica
The pet shop owner told Jean to fill her new fish tank 3/4 full with water. Jean filled it 9/12 full. What fraction of the tank does Jean still need to fill?
Monday, February 10, 2014 at 4:08pm by Anne
what is being said in the following line: “Foul devil, for God’s sake, hence, and trouble us not; For thou hast made the happy earth thy hell, Fill’d it with cursing cries and deep exclaims.”
Friday, January 15, 2010 at 10:42am by kim
Complete the following chart with examples of each type of sculpture. Fill in the title of the artwork, the artist, the year made, and the medium. In the last column, determine whether the work is
additive, subtractive, or neither. I need an example of how to post
Saturday, March 13, 2010 at 11:29pm by Anonymous
It takes 10 hours to fill a pool with water, and 20 hours to drain it. If the pool is empty and the drain is open, how long will it take to fill the pool?
Wednesday, April 17, 2013 at 9:00pm by Rayyan
How many electrons would be required to fill the following sublevels? This is what I got *3s = A *4s = A *3d = C *4d =C *3p =B *2d= E *3f =E *4d =C Choices are as follows A. 2 B 6 C 10 D 14 E There
is no such orbital
Sunday, November 7, 2010 at 6:22pm by Kimberly
English 10-Check/Help
Part 1: Poem Structure Check my following answers below, and I need help with theme? Dx In this part, you will analyze the structure of the poem. Please fill out the following. Title of Poem: Annabel
Lee Poet: Edgar Allan Poe Theme: Type of Poem: Narrative poetry Style of ...
Wednesday, May 1, 2013 at 10:21am by Victoria
I am using the inventory method at the moment but am stuck on how to fill in the inventory for __AgI+__Fe2(CO3)3--> __FeI3+__Ag2CO3. I think the brackets are throwing me. Here's each element before and after
------------------------------------ Ag 1 2 I 1 3 Fe 6 1 CO 3 3 I don't know if this is ...
Sunday, May 16, 2010 at 12:17am by ANDY
All are correct, except the last part of the last sentence reads strangely. You wouldn't fill out the same forms over and over, every day, would you? Try this: "Please take these forms home, fill
them out, and bring them back tomorrow."
Tuesday, February 3, 2009 at 5:13am by Writeacher
Calculate the following, expressing the answer in scientific notation with the correct number of significant figures: (8.86 + 1.0 * 10^-3) / 3.610 * 10^-3 please help me write out the equation and
fill in the appropriate data thanks for your help Jiskha
Sunday, February 8, 2009 at 12:38pm by y912f
Tap A fills up a water tank in 45 mins. Tap A fills up_____(ugh soo right here for the answer... do i put it in a fraction or something?)of the tank in one min. Tap B fills up the same tank in 75
mins. Tap B fills up_____of the tank in one min Tap A and tap B fill together up(...
Friday, April 13, 2007 at 5:37pm by karen j.
Math Middle School
Please help me solve this challenge problem: I love hot chocolate and coffee. I began by making one cup of hot chocolate. I drink half of my cup and then I fill the cup with coffee. I stir the new
mixture and drink half of the cup and then fill the cup again with coffee. I ...
Tuesday, October 11, 2011 at 8:58pm by HELP PLEASE!!!!!
Jim can fill a pool carrying bucks of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 ½ hours. How quickly can all three fill the pool together?
Sunday, September 23, 2007 at 8:34pm by Jen
Jim can fill a pool carrying buckets of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 ½ hours. How quickly can all three fill the pool together?
Sunday, November 29, 2009 at 8:02pm by Britany
A circular swimming pool is 4 feet deep and has a diameter of 15 feet. If it takes 7.5 gallons of water to fill one cubic foot, to the nearest whole number, how many gallons are needed to fill this
swimming pool.
Monday, March 14, 2011 at 11:04am by Anonymous
. Jim can fill a pool carrying buckets of water in 30 minutes. Courtney can do the same job in 45 minutes. Bob can do the same job in 1 ½ hours. How quickly can all three fill the pool together?
Sunday, December 12, 2010 at 12:10am by Anonymous
list the sequnce in which the following orbitals fill up: 1s , 2s , 3s, 4s , 5s, 6s, 7s, 2p, 3p , 4p, 5p , 6p , 7p , 3d, 4d, 5d, 6d, 4f, 5f
Wednesday, October 20, 2010 at 6:27pm by Raynique
Jim can fill a pool carrying buckets of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 and a half hours. How quickly can all three fill the pool together?
Wednesday, March 4, 2009 at 10:51am by Nicole
Jim can fill a pool carrying buckets of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 ½ hours. How quickly can all three fill the pool together? ANYBODY
KNOW THE ANSWER??
Wednesday, December 9, 2009 at 10:25pm by matixbdm
Pipe A takes twice the time to fill up the pool as pipe B does.The pool can be filled up in 2 h when both pipes are opened How long will it take each pipe to fill it up alone Plz slove it in Linear
Monday, May 27, 2013 at 9:32am by Mary
The large one can fill in x hours, then the small one in (x+6) hours. During each hour, working together, the two hoses can fill 1/x+1/(x+6) of the pool, or (2x+6)/(x(x+6)). Since using both hoses,
it fills the pool in 4 hours, each hour fills 1/4 of the pool, therefore: 1/x...
Saturday, November 5, 2011 at 10:18pm by MathMate
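Finishing the algebra in the answer above (the setup 1/x + 1/(x+6) = 1/4 rearranges to x^2 - 2x - 24 = 0), a small numeric check of the positive root:

```python
import math

# Quadratic from the two-hose problem: 1/x + 1/(x+6) = 1/4 rearranges
# to x**2 - 2*x - 24 = 0; the positive root is the large hose's time.
a, b, c = 1.0, -2.0, -24.0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
assert abs(x - 6.0) < 1e-12            # large hose: 6 h, small hose: 12 h
assert abs(1 / x + 1 / (x + 6) - 0.25) < 1e-12
```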
how do i balance the following(fill in the blanks): 1.) __BF3 + __Li2SO3 --> __B2(SO3)3 + __LiF 2.) __CaCO3 + __H3PO4 --> __Ca3(PO4)2 + __H2CO3 3.) __B2Br6 + __HNO3 --> __B(NO3)3 + __HBr
Monday, December 15, 2008 at 8:46pm by lyne
6th grade science
Think about it. For a gas the molecules are randomly oriented usually with a great deal of space between them (except at high pressures) and they fill the space in which they are located. For a
liquid the molecules are closer together but they do not fill the space in which ...
Wednesday, September 23, 2009 at 7:48pm by DrBob222
the fill pipe for a tank can fill the tank in 4 hours and the drain pipe can drain in 2 hours. If both pipes are accidentally opened, how long will it take to empty a half-filled tank?
Wednesday, September 24, 2008 at 8:23pm by Anonymous
Revise the following sentences to remove all pronoun errors. 41. When a police officer fires his gun, he must fill out a lot of paperwork. 42. When a teacher writes an exam, they think about what
aspects of student learning they want to test.
Thursday, July 28, 2011 at 1:07am by bill
1.) The first person who requires your assistance is Tina the bearded lady. Tina’s side job is to look after the elephants (she double majored at circus university.) a. Tina must fill the
semi-circular trough shown above with peanuts. Tina knows that the store bought peanuts ...
Saturday, February 11, 2012 at 5:41pm by harry
represent the relationship between the number of triangles and the perimeter of the figures they form. fill in table below. triangle 1: 6-6-5(b) triangle 2:6-6-5(b)&(H) triangle 3: 6-6-5 (b)-5 (B) 5
(h) fill in number of triangles-1, 2, 3, and perimeter they have? Help
Monday, March 4, 2013 at 9:03pm by Brieson
Calculate the RMM (relative molar mass) for NiCl2.6H2O = (58.69 + 35.45*2 + 6*(16 + 2*1.008)) = 237.69. Mass required for one litre of 0.5 M solution = 0.5*237.69 = 118.84 g. Fill a 1 litre volumetric flask with
about 800 ml of distilled water and gradually introduce the crystalline ...
Tuesday, August 24, 2010 at 7:08pm by MathMate
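The molar-mass arithmetic above can be scripted as a quick check (the atomic weights Ni 58.69, Cl 35.45, O 16.00, H 1.008 are assumed, matching the expression in the answer):

```python
# Recomputing the relative molar mass of NiCl2.6H2O and the mass needed
# for 1 L of 0.5 M solution, using the same atomic weights as above.
rmm = 58.69 + 2 * 35.45 + 6 * (16.00 + 2 * 1.008)
mass_for_1l_of_half_molar = 0.5 * rmm      # grams of NiCl2.6H2O needed
assert abs(rmm - 237.686) < 1e-6
assert abs(mass_for_1l_of_half_molar - 118.843) < 1e-6
```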
Fill in the missing information in the following table of four neutral atoms Symbol- 37Cl number of protons- 36 46 number of neutrons- 42 number of electrons- 33 mass number- 77 106
Wednesday, January 25, 2012 at 11:41pm by Tiffany
A circular swimming pool is 4 feet deep and has a diameter of 15 feet. If it takes 7.5 gallons of water to fill one cubic foot, to the nearest whole number, how many gallons are needed to fill this
swimming pool. Diameter = 2r A = π r^2 V = 4A W = 7.5V
Tuesday, March 15, 2011 at 10:45am by Anonymous
Math - Reposted - Mahi's question
Fill the seven quart. Empty into the five quart, leaving two. Mark the 7 quart jar at that 2 quart level. dump the five quart contents. Pour the 2 quarts into the five, marking its level. Fill the
seven quart to the 2 quart level where marked. Fill the five quart to the two ...
Friday, February 9, 2007 at 1:05pm by bobpursley
The following is kinetic rate data taken as absorbance of reactant vs. time, with columns A, ln(A), time/min: 0.532, ??, 0.0; 0.484, ??, 2.54; 0.442, ??, 5.12; 0.395, ??, 8.35; 0.350, ??, 11.95. Fill in the ln(A) column.
Monday, September 24, 2012 at 11:15pm by Brunette
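The missing ln(A) column can be computed directly from the absorbance readings quoted in the question:

```python
import math

# Fill in the ln(A) column from the (A, time) pairs quoted above.
data = [(0.532, 0.0), (0.484, 2.54), (0.442, 5.12),
        (0.395, 8.35), (0.350, 11.95)]
ln_a = [math.log(a) for a, _t in data]
assert abs(ln_a[0] - (-0.6312)) < 1e-3
```

The resulting ln(A) values decrease roughly linearly in time, which is what one would plot to test for first-order kinetics.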
It takes 96 tiles to fill a 2-foot by 3-foot rectangle. How many tiles would it take to fill a 4-foot by 6-foot rectangle?
Tuesday, January 4, 2011 at 1:02pm by heather
3/4 full is the same as saying the aquarium initially holds 15 gallons of water. There are 4 quarts in 1 gallon. Filling the aquarium will take 5 more gallons, which is 20 quarts, so the Vashis
will have to fill the 2-quart pitcher 10 times. So 10 is ...
Saturday, March 19, 2011 at 1:39pm by Rodney
Microsoft Excel
I have a workbook in excel, and when I create a new worksheet, it shows as "Sheet 1", but I want it to automatically create or fill the dates. Not the column but the Worksheet tab. I want to
automatically fill the date for all the worksheet tab. I hope this make sense. Thank you.
Sunday, January 20, 2013 at 5:51pm by Jesus L
Punnett Square for F1 Cross ¨C Expected Genetic Outcomes F1 Parent, genes: _________ (student to fill in the blanks) ¡â alleles > ¡á alleles v F1 Parent, genes:__________ (student fill in blank)
Sunday, July 8, 2012 at 7:12pm by Cato
A swimming pool size is 42 m long and 55 m wide with an average depth of 7.5 m. Using the density of water as 1 g/ml: A. how many gallons of water fill the pool B. The weight of water in lbs after
the pool is full C. How long will it take to fill the pool if a pump fills the ...
Friday, September 27, 2013 at 10:43pm by wendy
Using the following six digits, 1, 2, 4, 6, 8, and 9, fill in the blanks below to make the maximum product that you can. Use each digit only once. How does an understanding of place value help
determine the best solution to this problem? __ __ __ x __ __ __
Wednesday, September 22, 2010 at 6:11pm by james
college algebra
water from a tank is being used for irrigation while the tank is being filled. The two pipes can fill the tank in 6h and 8h, respectively. the outlet pipe can empty the tank in 24h. How long would it
take to fill the tank with all three pipes open? round the answer to the ...
Wednesday, December 12, 2012 at 12:43pm by Zachery
Consider an open economy in which the central bank targets the interest rate. Fill in the blanks in the following statement using the options below: If the central bank sells domestic currency in the
foreign exchange market then, ceteris paribus, unless it carries out open ...
Sunday, June 10, 2007 at 11:18am by Lucy Wang
given two containers, one holding 4 gal. and the other holding 3 gal. how does one measure out two gal.? Fill the 3 gal, pour it in the 4 gal. Fill the 3 gal again, then pour it into the 4 gal until filled.
How much is left in the 3 gal? thank you bobpursley!!!!!!!!!!!
Thursday, April 26, 2007 at 6:18pm by Ethan
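The pour sequence described above can be confirmed mechanically with a small breadth-first search over jug states (capacities 4 and 3 and the target of 2 gallons are taken from the question):

```python
from collections import deque

# BFS over (4-gal, 3-gal) jug states, confirming that exactly 2 gallons
# can be measured with a 4-gal and a 3-gal container.
CAPS = (4, 3)

def moves(state):
    a, b = state
    yield (CAPS[0], b)                    # fill the 4-gal jug
    yield (a, CAPS[1])                    # fill the 3-gal jug
    yield (0, b)                          # empty the 4-gal jug
    yield (a, 0)                          # empty the 3-gal jug
    p = min(a, CAPS[1] - b)               # pour 4-gal into 3-gal
    yield (a - p, b + p)
    p = min(b, CAPS[0] - a)               # pour 3-gal into 4-gal
    yield (a + p, b - p)

def reachable(target):
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        state = queue.popleft()
        if target in state:
            return True
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert reachable(2)
```

The same search generalizes to any pair of capacities; the state space is finite, so the BFS always terminates.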
There are 6 ways to fill first, then 5 ways to fill second, and 4 ways to hand out third prize. So there are 6*5*4 or 120 ways, as Jordan said. Except Jordan meant to say 120 different permutations
and not combinations. Permutations imply positioning, while combinations do not.
Sunday, August 23, 2009 at 7:33pm by Reiny
Jim can fill a pool carrying buckets of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 ½ hours. How quickly can all three fill the pool together? A. 12
minutes B. 15 minutes C. 21 minutes D. 23 minutes E. 28 minutes
Thursday, March 19, 2009 at 10:46pm by sharon
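The multiple-choice question above is a standard combined work-rate problem; exact fractions settle it (times 30, 45, and 90 minutes from the question):

```python
from fractions import Fraction

# Combined work-rate computation for the three pool-fillers above.
rate = Fraction(1, 30) + Fraction(1, 45) + Fraction(1, 90)
minutes = 1 / rate
assert minutes == 15        # choice B
```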
ms.smith has a new fish aquarium that measures 45.3 cm long, 23.4 cm wide and 28 cm high. She has had a problem with the fish jumping out of the tank. Today she is cleaning and decides to fill the
tank to a level that is 4 cm from the top to see if that will help. How many ...
Monday, October 22, 2007 at 8:16pm by kelsy
[1.5%] ANOVA: Three groups of 5 students were tested in their ability to correctly answer a 10-question quiz under different formats. Group 1 had a True/False quiz, Group 2 had a Fill-In quiz, and
Group 3 had a Multiple Choice quiz. Their scores were as follows: True/False ...
Monday, May 17, 2010 at 2:39pm by TJohnson
Speeds for a randomly selected sample of n = 36 vehicles will be recorded. Determine the values that fill in the blanks in the following sentence. For samples of n = 36 vehicles, there is about a 95%
chance that the mean vehicle speed will be between ___ and ___.
Wednesday, October 3, 2012 at 9:42pm by margie wilcox
Solid sodium azide (NaN3) is used in air bags to fill them quickly upon impact. The solid NaN3 undergoes a decomposition reaction to produce Na(s) and N2(g). a. Provide the balanced equation for the
decomposition of NaN3. b. What mass, in grams, of NaN3 is needed to fill the ...
Wednesday, November 6, 2013 at 10:53pm by Anonymous
Sorry... Consider the function f(x) = 8.5x − cos(x) + 2 on the interval 0 ≤ x ≤ 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c) = k for which values of c and k?
Fill in the following mathematical statements, giving an interval with ...
Saturday, February 5, 2011 at 11:17pm by Abigail
Sorry... Consider the function f(x) = 8.5x − cos(x) + 2 on the interval 0 ≤ x ≤ 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c) = k for which values of c and k?
Fill in the following mathematical statements, giving an interval with ...
Sunday, February 6, 2011 at 3:36pm by Abigail
if an input pipe could fill the tank in x hours, and the drain could empty it in y hours with no input, 4/x - 1/y = 1/6 3/x - 1/y = 1/10 x=15,y=10 so, if n pipes fill the tank in 3 hours, n/15 - 1/10
= 1/3 n = 13/2 Hmmm. I was expecting an integer.
Tuesday, June 26, 2012 at 4:30pm by Steve
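The elimination in the answer above can be re-derived with exact rationals (equations as given: 4/x - 1/y = 1/6 and 3/x - 1/y = 1/10, then n/x - 1/y = 1/3):

```python
from fractions import Fraction

# Solve the two-equation system by subtracting the equations, then find
# how many input pipes n would fill the tank in 3 hours.
inv_x = Fraction(1, 6) - Fraction(1, 10)      # subtracting eliminates 1/y
x = 1 / inv_x
inv_y = 3 * inv_x - Fraction(1, 10)
y = 1 / inv_y
n = (Fraction(1, 3) + inv_y) * x              # from n/x - 1/y = 1/3
assert (x, y) == (15, 10)
assert n == Fraction(13, 2)                   # not an integer, as noted
```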
Algebra 1
You drive a car that runs on ethanol and gas. You have a 20-gallon tank to fill and you can buy fuel that is either 25 percent ethanol or 85 percent ethanol. How much of each type of fuel should
you buy to fill your tank so that it is 50 percent ethanol?
Thursday, April 21, 2011 at 9:55pm by Stephanie
Consider the function f(x) = 8.5x − cos(x) + 2 on the interval 0 ≤ x ≤ 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c) = k for which values of c and k? Fill in the
following mathematical statements, giving an interval with non-zero length in ...
Saturday, February 5, 2011 at 11:17pm by Abigail
Well, you are trying to fill the inside of a rectangular pool with water, so unless you already tried this I would find the volume of the pool (120 x 45 x 10), which gives us 54000. Then you would divide the volume of
the pool by the rate at which it is filled, so 54000/15 (gal. per min.) = 3600. ...
Tuesday, July 26, 2011 at 11:37am by Gregory
pam scored 78 on a test that had 4 fill-in questions worth 7 points each and 24 multiple-choice questions worth 3 points each. She had one fill-in question wrong. How many multiple-choice questions
did pam get right?
Friday, October 28, 2011 at 4:22am by Sharday
A) For every gallon you add, the cost increases by $3.03. Therefore, $3.03 is the cost per gallon. B) C(2) = 3.03(2) C(2) = $6.06 C) C(9) = 3.03(9) C(9) = 27.27 D) A negative number would be
inappropriate - you cannot fill your tank with negative gallons of gas. E) Assuming ...
Saturday, July 18, 2009 at 1:40pm by Marth
A tank fitted with two pipes is to be filled with water. One pipe can fill it in 5 hours. After it has been open for 3 hours, the second pipe is opened and the tank is filled in 4 hours more. How long
would it take the second pipe alone to fill the tank?
Wednesday, September 14, 2011 at 7:39am by jannaline inguito
i have a 36 in. wide by 18 in. length by 20 in high rectangular aquarium that needs to be filled. i have a 12 in high cylindrical bucket with a 12 in diameter to fill the tank. if the aquarium
contains 3 in of gravel and the bucket is to be filled to a depth of 11 in, how many...
Monday, February 20, 2012 at 5:35pm by mikey
nisha is following a recipe for tomato sauce that calls for 1 3/4 tsps of oregano. she is using a measuring spoon that holds 1/8 tsp. how many times will she need to fill the measuring spoon with
oregano to make the tomato sauce? I thought it was multiplication, but I was wrong :(
Monday, July 1, 2013 at 5:15pm by andra
Oh and the qn is How long will it take tap A to fill the tank how long will it take tap B to fill the tank help
Friday, September 10, 2010 at 10:39am by Jamie
College Math
An inlet pipe on a swimming pool can be used to fill the pool in 24 hours. The drain pipe can be used to empty the pool in 40 hours. If the pool is one-third filled and then the drain pipe is
accidentally opened, how long will it take to fill the pool?
Sunday, July 3, 2011 at 4:46pm by Lamont
Algebra I
Pump A, working alone, can fill a tank in 3 hours, and pump B can fill the same tank in 2 hrs. If the tank is empty to start and pump A is switched on for one hour, after which pump B is also
switched on and the two work together, how many minutes will pump B have been working...
Wednesday, June 30, 2010 at 5:59pm by Meredith
An inlet pipe on a swimming pool can be used to fill the pool in 24 hours. The drain pipe can be used to empty the pool in 40 hours. If the pool is one-third filled and then the drain pipe is
accidentally opened, how long will it take to fill the pool?
Sunday, July 3, 2011 at 4:28pm by Lamont
An inlet pipe on a swimming pool can be used to fill the pool in 28 hours. The drain pipe can be used to empty the pool in 42 hours. If the pool is one-third filled and then the drain pipe is
accidentally opened, how long will it take to fill the pool?
Sunday, July 10, 2011 at 10:45pm by brian
College physics
What is the effect of inserting a ferromagnet inside a winding coil where current flows? The effect is to Fill in the Blank 01 (increase/decrease) the magnetic field inside the coil compared to the
field that would have been produced in free space. This also Fill in the Blank ...
Wednesday, July 28, 2010 at 1:10pm by katie
Fill the 3-pint bucket from the 4-pint bucket. Fill the 1-pint bucket from the 3-pint bucket, leaving you 2 pints. Pour the 1-pint contents into the 4-pint bucket.
Wednesday, December 1, 2010 at 10:02pm by PsyDAG
Which of the following contains only positive statements, rather than normative statements? A. The unemployment rate is high mainly because companies are hiring immigrants to fill positions that
would have been offered to natives. Companies should hire more native workers. B. ...
Saturday, August 15, 2009 at 5:14pm by AJ
Human Relations
Which of the following steps will elicit a meaningful feedback from a passive person? A. confront the person. B. engage in a small talk to fill the uncomfortable empty pauses during a discussion. C.
Set time limits. D. Ask close-ended questions. I think the correct answer ...
Sunday, March 27, 2011 at 9:58am by Nancy
Microsoft Excel
Make the top of column A read "angle" and the top of column B read "sine(theta)". Next, fill in column A. Then go to column B and, in the first cell, enter the formula =sin(A1). There are a number of ways to then fill in the rest of column B with the same formula.
Tuesday, June 14, 2011 at 10:24pm by bobpursley
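For comparison, the same angle/sine table the spreadsheet produces can be generated in a few lines of Python. This is just a sketch; the angle range (0.0 to 1.0 radians in steps of 0.1) is my own choice, not from the question:

```python
import math

# Column A: angles in radians; column B: the =sin(A1) formula filled down.
angles = [i * 0.1 for i in range(11)]
sines = [math.sin(a) for a in angles]

for a, s in zip(angles, sines):
    print(f"{a:.1f}\t{s:.4f}")
```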
Gayle had 5 1/3 pounds of candy. She put 1 1/2 pounds of candy into each gift bag. How many gift bags could she completely fill? What fraction of a gift bag will she have left? What fraction of a pound of candy will she have left?
Wednesday, September 21, 2011 at 6:21pm by Michelle
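The candy-bag question is exact-fraction arithmetic, so Python's `fractions` module answers all three parts cleanly (a sketch; names are mine):

```python
from fractions import Fraction

candy = Fraction(16, 3)    # 5 1/3 pounds
per_bag = Fraction(3, 2)   # 1 1/2 pounds per bag

full_bags = candy // per_bag              # whole bags she can fill
candy_left = candy - full_bags * per_bag  # pounds left over
bag_fraction = candy_left / per_bag       # fraction of one more bag

print(full_bags, candy_left, bag_fraction)  # 3 5/6 5/9
```

She fills 3 bags completely, with 5/6 of a pound of candy left, which is 5/9 of a gift bag.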
C++ Programming
Draw a flowchart and write a C++ code for a program that reads two double numbers and then the program should calculate and display the sum and difference of the two numbers. Apply the following
format: Set the width for printing to 10 and use & as a fill character. The number...
Thursday, February 27, 2014 at 11:17am by Anonymous
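The assignment above asks for C++ (`setw`/`setfill`), but the requested formatting — width 10 with `&` as the fill character — has a direct analogue in Python's format specifications, which may help in sketching the logic. This is not the assignment's C++ code, just an illustration:

```python
a, b = 12.5, 3.25  # example inputs; the program reads two doubles

# Width 10, '&' as the fill character, right-aligned.
print(f"{a + b:&>10}")  # &&&&&15.75
print(f"{a - b:&>10}")  # &&&&&&9.25
```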
Consistency of minimizers and the slln for stochastic programs
Results 1 - 10 of 12
, 2000
Cited by 27 (12 self)
Quantitative stability of optimal values and solution sets to stochastic programming problems is studied when the underlying probability distribution varies in some metric space of probability
measures. We give conditions that imply that a stochastic program behaves stable with respect to a minimal information (m.i.) probability metric that is naturally associated with the data of the
program. Canonical metrics bounding the m.i. metric are derived for specific models, namely for linear two-stage, mixed-integer two-stage and chance constrained models. The corresponding quantitative
stability results as well as some consequences for asymptotic properties of empirical approximations extend earlier results in this direction. In particular, rates of convergence in probability are
derived under metric entropy conditions. Finally, we study stability properties of stable investment portfolios having minimal risk with respect to the spectral measure and stability index of the
- Corpus-based work on discourse markers such as 'and', 'if', 'but', 1997
Cited by 12 (2 self)
: Epi-convergence in distribution is a useful tool in establishing limiting distributions of "argmin" estimators; however, it is not always easy to find the epi-limit of a given sequence of objective
functions. In this paper, we define the notion of stochastic equi-lower-semicontinuity of a sequence of random objective functions. It is shown that epi-convergence in distribution and finite
dimensional convergence in distribution (to a given limit) of a sequence of random objective functions are equivalent under this condition. Key words and phrases: argmin estimators, convergence in
distribution, epi-convergence, equi-semicontinuity. AMS 1991 subject classifications: Primary 62F12, 60F05; Secondary 62E20, 60F17. Running head: Stochastic equi-semicontinuity.

1 Introduction

Many statistical estimators are defined as the minimizer (or maximizer) of some objective function; common examples include maximum likelihood estimation and M-estimation. Since any maximization problem
can be re-exp...
Cited by 6 (1 self)
We study sample approximations of chance constrained problems. In particular, we consider the sample average approximation (SAA) approach and discuss the convergence properties of the resulting problem. We discuss how one can use the SAA method to obtain good candidate solutions for chance constrained problems. (Departamento de Matemática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro)
Cited by 5 (3 self)
An analysis of convex stochastic programs is provided if the underlying probability distribution is subjected to (small) perturbations. It is shown, in particular, that ε-approximate solution sets of
convex stochastic programs behave Lipschitz continuous with respect to certain distances of probability distributions that are generated by the relevant integrands. It is shown that these results
apply to linear two-stage stochastic programs with random recourse. Consequences are discussed on associating Fortet-Mourier metrics to two-stage models and on the asymptotic behavior of empirical
estimates of such models, respectively.
"... informs doi 10.1287/moor.1060.0222 ..."
, 2008
Cited by 2 (2 self)
This paper presents a Nash equilibrium model where the underlying objective functions involve uncertainties and nonsmoothness. The well known sample average approximation method is applied to solve
the problem and the first order equilibrium conditions are characterized in terms of Clarke generalized gradients. Under some moderate conditions, it is shown that with probability one, a statistical
estimator obtained from sample average approximate equilibrium problem converges to its true counterpart. Moreover, under some calmness conditions of the generalized gradients and metric regularity
of the set-valued mappings which characterize the first order equilibrium conditions, it is shown that with probability approaching one exponentially fast with the increase of sample size, the
statistical estimator converge to its true counterparts. Finally, the model is applied to an equilibrium problem in electricity market. Key words. Stochastic Nash equilibrium, exponential
convergence, H-calmness, Clarke generalized gradients, metric regularity.
, 1999
Cited by 1 (0 self)
. To justify the use of sampling to solve stochastic programming problems one usually relies on a law of large numbers for random lsc (lower semicontinuous) functions when the samples come from
independent, identical experiments. If the samples come from a stationary process, one can appeal to the ergodic theorem proved here. The proof relies on the `scalarization' of random lsc functions.
1 Introduction Stochastic programming models can be viewed as extensions of linear and nonlinear programming models to accommodate situations in which only information of a probabilistic nature is
available about some of the parameters of the problem. The following formulation includes both the stochastic programming with recourse models and the stochastic programming with chance constraints
models:

min E{f₀(ξ, x)}   (1)   subject to E{fᵢ(ξ, x)} ≤ 0, i = 1, …, m, x ∈ ℝⁿ,

where ξ is a random vector with support Ξ ⊆ ℝᴺ, P is a probability distribution function on ℝᴺ, f ...
, 2009
Abstract We study sample approximations of chance constrained problems. In particular, we consider the sample average approximation (SAA) approach and discuss the convergence properties of the
resulting problem. We discuss how one can use the SAA method to obtain good candidate solutions for chance constrained problems. Numerical experiments are performed to correctly tune the parameters
involved in the SAA. In addition, we present a method for constructing statistical lower bounds for the optimal value of the considered problem and discuss how one should tune the underlying
parameters. We apply the SAA to two chance constrained problems. The first is a linear portfolio selection problem with returns following a multivariate lognormal distribution. The second is a joint
chance constrained version of a simple blending problem.
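As a rough illustration of the SAA idea described in these abstracts (my own toy example, not the authors' experiments): a true chance constraint such as P(ξ ≤ x) ≥ 0.9 is replaced by its empirical counterpart over a finite sample, and the sample problem is solved instead.

```python
import random

random.seed(0)

def saa_feasible(x, sample, alpha=0.9):
    """Empirical (SAA) version of the chance constraint P(xi <= x) >= alpha."""
    hits = sum(1 for xi in sample if xi <= x)
    return hits / len(sample) >= alpha

# Sample average approximation with N standard-normal draws.
sample = [random.gauss(0, 1) for _ in range(10_000)]

# Smallest x on a grid that the SAA problem deems feasible; it should
# approach the true 0.9-quantile (about 1.2816) as the sample grows.
grid = [i / 100 for i in range(-300, 301)]
x_saa = min(x for x in grid if saa_feasible(x, sample))
print(x_saa)
```

With 10,000 draws the SAA solution sits close to the true quantile, which is the convergence behavior the papers analyze rigorously.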
Abstract. An analysis of convex stochastic programs is provided when the underlying probability distribution is subjected to (small) perturbations. It is shown, in particular, that ε-approximate
solution sets of convex stochastic programs behave Lipschitz continuously with respect to certain distances of probability distributions that are generated by the relevant integrands. It is shown
that these results apply to linear two-stage stochastic programs with random recourse. We discuss the consequences on associating Fortet–Mourier metrics to two-stage models and on the asymptotic
behavior of empirical estimates of such models, respectively.
Mathematics Magazine - December 2008
Matrices and Tilings with Right Trominoes
Carlos Ueno
After the introduction of polyominoes by S. W. Golomb, many questions have arisen concerning tilings of plane regions with these figures. In this article we focus our attention on the study of
tilings with right trominoes, also called L-trominoes. We give an explicit, recursive way of obtaining transfer matrices for tilings of this kind which allows us to obtain (via computer calculations)
results about enumerating tilings of rectangles and the corresponding generating functions, as well as some statements about the existence of tilings in a class of regions called strips.
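A brute-force companion to the article's transfer-matrix approach (my own sketch — the article's matrices are far more efficient): count L-tromino tilings of an m×n rectangle by backtracking, using the fact that a right tromino is any three cells of a 2×2 square.

```python
def count_tilings(m, n):
    """Count tilings of an m x n board by right (L-) trominoes."""
    if (m * n) % 3:
        return 0
    full = (1 << (m * n)) - 1

    def solve(occupied):
        if occupied == full:
            return 1
        # Lowest unset bit = first empty cell in row-major order.
        low = ~occupied & (occupied + 1)
        cell = low.bit_length() - 1
        r, c = divmod(cell, n)
        total = 0
        # A right tromino is a 2x2 square minus one cell, so try every
        # 2x2 block containing (r, c) and every omitted corner.
        for r0 in (r - 1, r):
            for c0 in (c - 1, c):
                if r0 < 0 or c0 < 0 or r0 + 1 >= m or c0 + 1 >= n:
                    continue
                square = ((r0, c0), (r0, c0 + 1), (r0 + 1, c0), (r0 + 1, c0 + 1))
                for omit in square:
                    cells = [p for p in square if p != omit]
                    if (r, c) not in cells:
                        continue
                    mask = 0
                    for rr, cc in cells:
                        mask |= 1 << (rr * n + cc)
                    if occupied & mask:
                        continue
                    total += solve(occupied | mask)
        return total

    return solve(0)

print(count_tilings(2, 3), count_tilings(2, 6))  # 2 4
```

Each 2×3 block admits exactly two L-tromino tilings, which the counter reproduces; for large rectangles the transfer matrices of the article replace this exponential search.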
The Mystery of Robert Adrain
Frank J. Swetz
Robert Adrain (1775-1843) was a leading mathematician and educator in the early United States. Although self-educated in mathematics, he held professorships in the subject at Columbia, Rutgers, and
the University of Pennsylvania. He was an avid problem-solver, researcher, and editor of several mathematical related journals. He has been credited with doing some of the first “real mathematics”
in the U.S. Among his accomplishments are a derivation of a probability distribution for errors and successful applications of the Method of Least Squares. Despite his contributions to the growth of
American mathematics, Adrain has received little attention. This article calls attention to that fact and suggests further consideration of this man’s accomplishments, particularly as a pioneering
mathematics educator.
Touching the Z_2 in Three-Dimensional Rotations
Vesna Stojanoska and Orlin Stoytchev
Rotations in three-dimensional space have the following fascinating property: a full 360° rotation of an object is topologically nontrivial, i.e., you cannot deform this motion to the trivial one, but a 720° rotation is trivial. This fact can be demonstrated by attaching three (or more) strands to the object you rotate and fixing their other ends to your desk surface. By rotating the object
you produce a kind of a braid. Then, without further rotating the object, you try to unplait the braid by moving the strands around. The braids obtained from an even number of full rotations, no
matter how complicated, are trivial, while those coming from an odd number of rotations are nontrivial. This connection between three-dimensional rotations and braids can be made rigorous to prove
that the fundamental group of SO(3) is indeed Z_2.
Packing Squares in a Square
Iwan Praton
Here is a bounty problem from Paul Erdös: Place n nonoverlapping squares (not necessarily of the same size) inside a unit square. What is the largest possible value for the sum of the side lengths of
the n squares? Around 1932 Erdös conjectured that when n = k^2 + 1 the answer is k. In 1995 he and Soifer provided conjectures for all values of n. This note shows that it suffices to prove the
original conjecture; that is, the original conjecture actually implies the more general conjecture.
Why is the Sum of Independent Normal Random Variables Normal?
Bennett Eisenberg and Rosemary Sullivan
The fact that the sum of independent normal random variables is normal is fundamental in probability and statistics. The standard proofs using convolutions and moment generating functions do not give much insight into why this is true. This paper gives two more proofs, one geometric and one algebraic, that provide more insight into why the sum of independent normal random variables must be normal.
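A quick numerical check of the claim (my own sketch, independent of the paper's two proofs): the convolution of two normal densities should match the normal density whose variance is the sum of the two variances.

```python
import math

def npdf(x, sigma):
    """Density of N(0, sigma^2) at x."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

s1, s2 = 1.0, 2.0
z = 1.0

# Convolution integral (f1 * f2)(z) by a Riemann sum over [-10, 10].
h = 0.01
conv = sum(npdf(x, s1) * npdf(z - x, s2) * h
           for x in (i * h for i in range(-1000, 1001)))

# Density of N(0, s1^2 + s2^2) at z.
direct = npdf(z, math.sqrt(s1 * s1 + s2 * s2))

print(conv, direct)  # the two values agree closely
```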
A Converse to a Theorem on Linear Fractional Transformations
Xia Hua
An interesting geometric fact about a linear fractional transformation (also called Möbius transformation and bilinear transformation) is that it maps circles and lines to circles and lines in a
bijective fashion. Naturally we want to ask: what can we say about an arbitrary bijective function that maps circles and lines onto circles and lines? We will show that any such function is either a
linear fractional transformation or the complex conjugate of a linear fractional transformation.
Sublimital Analysis
Thomas Q. Sibley
The Bolzano-Weierstrass theorem asserts, under appropriate circumstances, the convergence of some subsequence of a sequence. While this famous theorem ignores the actual limit of the subsequence, it
is natural to investigate such limits. This note characterizes the set of possible limits of subsequences of a given sequence.
Proof Without Words: Isosceles Dissections
We show that every triangle can be dissected into four isosceles triangles, that every acute triangle can be dissected into three isosceles triangles, and that a triangle can be dissected into two
isosceles triangles if and only if one angle is three times another or the triangle is right angled. This item was motivated by an earlier proof without words that dissected an arbitrary triangle
into six isosceles triangles.
Proof Without Words: Exponential Inequalities
Angel Plaza
This paper gives two visual proofs of the following exponential inequalities:
Polynomial function
August 25th 2009, 04:59 PM #1
Jan 2009
Polynomial function
I can't figure out how to do this problem. I tried using the quadratic formula, but ended up getting a negative under the square root. Any help would be greatly appreciated!
The target weight w for an adult man of medium build is:
w(h)=0.0728h^2 - 6.986h + 289 where h is in inches and w is in pounds. If a man of medium build has achieved his target weight of 170 lb, how tall is he?
I can't figure out how to do this problem. I tried using the quadratic formula, but ended up getting a negative under the square root. Any help would be greatly appreciated!
The target weight w for an adult man of medium build is:
w(h)=0.0728h^2 - 6.986h + 289 where h is in inches and w is in pounds. If a man of medium build has achieved his target weight of 170 lb, how tall is he?
$170=0.0728h^2 - 6.986h + 289$
$0 = 0.0728h^2 - 6.986h + 119$
$a = 0.0728$
$b = -6.986$
$c = 119$
$h = \frac{-(-6.986) \pm \sqrt{(-6.986)^2 - 4(0.0728)(119)}}{2(0.0728)}$
$h = 73.817542...$
$h = 22.143996...$
which solution for $h$ makes sense in the context of the problem?
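The arithmetic in the worked solution above is easy to double-check in Python (a sketch):

```python
import math

# w(h) = 0.0728 h^2 - 6.986 h + 289, set equal to 170 lb.
a, b, c = 0.0728, -6.986, 119

disc = b * b - 4 * a * c
h1 = (-b + math.sqrt(disc)) / (2 * a)
h2 = (-b - math.sqrt(disc)) / (2 * a)

print(h1, h2)  # about 73.82 and 22.14 inches
```

Only the first root, roughly 73.8 inches (about 6 ft 2 in), is a plausible height for an adult man; a 22-inch height is not.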
August 25th 2009, 05:11 PM #2
Ecological Archives
Ecological Archives A018-044-A1
Jian Yang, Hong S. He, and Stephen R. Shifley. 2008. Spatial controls of occurrence and spread of wildfires in the Missouri Ozark Highlands. Ecological Applications 18:1212 1225.
Appendix A. Separate the contribution of vegetation type to fire occurrence from other factors in calculating fire initiation probability and fire ignition rate.
Statement of the problem
The landscape fire succession simulation model, LANDIS, simulates fire occurrence based on a statistical association of fire occurrence density (λ) with a set of spatial covariates (Z) including road proximity, distance to nearest city, land ownership, topography, and vegetation type. Fire occurrence density λ for any location u on the landscape is specified through a log-linear Poisson process model

log λ(u) = β^T Z(u),     (A.1)

where β is the vector of coefficients to be estimated for each of the corresponding covariates, and T is the matrix transpose operation. Among all of the above factors, only the change of vegetation
type over time is explicitly simulated in LANDIS. Other factors are assumed to be constant over time in the model. Therefore, separating the contribution of vegetation to fire occurrence from the
contribution of other factors can improve the model's computing performance in two aspects: (1) computational memory load can be greatly reduced when several maps of spatial covariates (e.g., slope,
elevation, aspect, distance to nearest city) are replaced with only a map of fire ignition rate in the simulation to account for the contribution of all factors other than vegetation type; (2)
computational cost can also be reduced when the calculation of fire occurrence density is simplified into the result of the two contributions.
Let the vector of coefficients (β) in Eq. A.1 be a set of two components: a vector (P) of fire initiation probability coefficients and a vector (α) containing all other coefficients; written in matrix format, β = [P, α]^T. Furthermore, we write the list of spatial covariates (Z) in two parts, Z = [V, W]^T, where V is the vector indicating which vegetation type the location u belongs to, and W is the vector containing all other spatial covariates. Then Eq. A.1 can be written as

log λ(u) = [P, α] [V, W]^T.     (A.2)

Simplifying the matrix operation on the right-hand side of Eq. A.2, we get

log λ(u) = P^T V + α^T W.     (A.3)

Taking the exponential function transformation on both sides, we get
λ(u) = exp(P^T V) · exp(α^T W).     (A.4)

Equation A.4 essentially indicates that fire occurrence density λ is determined by two components (i.e., contribution from vegetation type and contribution from other factors). This is consistent with the hierarchical fire frequency model shown in our paper, as fire occurrence density is a product of fire initiation probability p and ignition rate b:

λ(u) = p · b.     (A.5)
Comparing A.4 and A.5, we decided to assume fire initiation probability

p = 0.1 · exp(P^T V)

and fire ignition rate

b = 10 · exp(α^T W).

Notice that we multiplied exp(P^T V) by the coefficient 0.1 to normalize the value into the range of 0 and 1.
Estimation of fire initiation probability and fire ignition rate map
The coefficients estimated from the log-linear Poisson model and fire initiation probabilities for different vegetation types are shown in Table A1. The final model included polynomial transformed
variables (e.g., D^2) of some continuous covariates (e.g., road proximity, D). Fire initiation probabilities for vegetation types (Table A1) were calculated using Eq. A.4. Fire ignition rate was
calculated using Eq. A.5. The derived map of ignition rate (Fig. A1) depicts the contribution of all other factors including road proximity, distance to nearest city, ownership, slope, and aspect.
The estimated fire initiation probabilities with respect to different vegetation types and spatially explicit fire ignition rate map were used as parameters and input data in the LANDIS simulation.
TABLE A1. Estimated coefficients of the predictor variables used in the final log-linear Poisson model, and the corresponding fire initiation probabilities for the vegetation types.
│Variable │Coefficient│Fire initiation probability│
│Vegetation type │ │ │
│Grassland │ 0.591 │ 0.181 │
│Open woodland │ 0.789 │ 0.220 │
│Deciduous forest │ 0.857 │ 0.236 │
│Oak-pine mixed forest │ 0.870 │ 0.239 │
│Road proximity │ │ │
│D:distance to nearest road (m) │ -1.6e-03 │ │
│D^2 │ 3.2e-07 │ │
│Ownership │ │ │
│Private inholdings │ 2.6e+00 │ │
│Public lands │ 3.2e+00 │ │
│Slope │ │ │
│S: Slope (degree)               │ 2.0e-02   │                           │
│S^2 │ -6.6e-04 │ │
│Aspect │ │ │
│Xeric │ 1.4e-01 │ │
│Flat │ 4.5e-01 │ │
│Municipality proximity │ │ │
│T:Distance to nearest towns (m) │ -6.4e-05 │ │
│T^2 │ -9.8e-09 │ │
│T^3 │ 3.2e-13 │ │
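The vegetation-type probabilities in Table A1 can be reproduced from the fitted coefficients with the 0.1 normalization described in the text (a sketch; the coefficient values are copied from the table):

```python
import math

coefficients = {
    "Grassland": 0.591,
    "Open woodland": 0.789,
    "Deciduous forest": 0.857,
    "Oak-pine mixed forest": 0.870,
}

# Fire initiation probability: p = 0.1 * exp(coefficient).
probs = {veg: round(0.1 * math.exp(b), 3) for veg, b in coefficients.items()}
print(probs)  # {'Grassland': 0.181, 'Open woodland': 0.22, ...}
```

The rounded values match the table's fire initiation probability column.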
FIG. A1. The map of estimated fire ignition rate, defined as the number of potential ignitions per square kilometer per decade.
Taking the second derivative - implicit differentiation
March 20th 2013, 11:40 AM #1
Mar 2013
so cal
Taking the second derivative - implicit differentiation
The original equation is this:
2x^2 -3y^2 = 0
(2x^2 -3y^2)' = (0)'
4x - 6y(dy/dx) = 0
I got the first derivative....which is correct according to my solutions manual: (2x/3y)
I took the derivative of this:
4x - 6y(dy/dx) = 0
(4x - 6y(dy/dx))' = (0)'
and got this using the product rule:
(6y)' ..... 6(dy/dx)
4 - 6(dy/dx)*(dy/dx) - 6y*(dy/dx)^2 = 0 ----> correct according to the solutions manual
__________________________________________________ __________________
This is where I went wrong....
I tried to isolate (dy/dx)^2 here:
4 = 6(dy/dx)^2 + 6y(dy/dx)^2
4 = [(dy/dx)^2][(6)(1+y)]
dividing out [(6)(1+y)]
This is how the solutions manual does it:
(dy/dx)^2 = -(3(dy/dx)^2 - 2) / (3y)
= - (8/9y^3)
What did I do wrong when trying to isolate (dy/dx)^2 ???
Also ... a side question... is there an easier way to type out this stuff? Like a program I can use?
Thank you to whoever answers this....I have been trying to figure this out for like 2 hours
Re: Taking the second derivative - implicit differentiation
I haven't done implicit differentiation in a while, so I might be missing something ... but I managed to get down to the next-to-last line in your book's solution. How they managed to simplify
that final numerator to -8, that I don't know.
See the attached pdf.
A suggestion. Maybe your notation is confusing you. Remember that, though the first derivative can written dy/dx, the second derivative is NOT "(dy/dx)^2" and is not written that way. I suggest
you write y' for the first derivative and y'' for the second (so you have to remember that you're differentiating with respect to x).
In the pdf, since I'd figured out y', I substituted for that in the equation for y''. I'm not sure I follow your presentation, so I don't know whether you did that or not.
I hope this helps.
Re: Taking the second derivative - implicit differentiation
Ok I sort of get it more now that I realize I was writing it down wrong but I still can only make it to the second to last line....
Re: Taking the second derivative - implicit differentiation
Hi girl19!
The original equation is this:
2x^2 -3y^2 = 0
(2x^2 -3y^2)' = (0)'
4x - 6y(dy/dx) = 0
I got the first derivative....which is correct according to my solutions manual: (2x/3y)
I took the derivative of this:
4x - 6y(dy/dx) = 0
(4x - 6y(dy/dx))' = (0)'
and got this using the product rule:
(6y)' ..... 6(dy/dx)
That last is not quite correct.
It should be
$\left(\frac{dy}{dx}\right)' = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d^2y}{dx^2}$
In particular:
$\frac{d^2y}{dx^2} \neq \left(\frac{dy}{dx}\right)^2$
4 - 6(dy/dx)*(dy/dx) - 6y*(dy/dx)^2 = 0 ----> correct according the solutions manual
__________________________________________________ __________________
This is where I went wrong....
I tried to isolate (dy/dx)^2 here:
4 = 6(dy/dx)^2 + 6y(dy/dx)^2
So this should be:
$4=6\left(\frac{dy}{dx}\right)^2 + 6y\left(\frac{d^2y}{dx^2}\right)$
Now you can substitute what you found earlier: $\frac{dy}{dx} = \frac{2x}{3y}$, and get:
$4=6\left(\frac{2x}{3y}\right)^2 + 6y\left(\frac{d^2y}{dx^2}\right)$
4 = [(dy/dx)^2][(6)(1+y)]
dividing out [(6)(1+y)]
This is how the solutions manual does it:
(dy/dx)^2 = -(3(dy/dx)^2 - 2) / (3y)
= - (8/9y^3)
What did I do wrong when trying to isolate (dy/dx)^2 ???
So you mixed up $\frac{d^2y}{dx^2}$ and $\left(\frac{dy}{dx}\right)^2$.
Also ... a side question... is there an easier way to type out this stuff? Like a program I can use?
Thank you to whoever answers this....I have been trying to figure this out for like 2 hours
Yes there is.
Not a program, but a special way of typing that works on this forum.
If you type for instance [TEX]x_1^2 + 2y_2 = 3[/TEX], you get:
$x_1^2 + 2y_2 = 3$
This is called $\LaTeX$.
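A side note on the numbers: with the curve written as 2x² − 3y² = c, the algebra above gives y″ = −2c/(9y³), so the manual's −8/(9y³) corresponds to c = 4 rather than the 0 in the first post (with c = 0 the branches are straight lines and y″ = 0). A quick numerical check, assuming c = 4 (my assumption, not from the thread):

```python
import math

# Upper branch of 2x^2 - 3y^2 = 4 (assumed right-hand side; see note above).
def y(x):
    return math.sqrt((2 * x * x - 4) / 3)

x0, h = 2.0, 1e-4

# Second derivative by central differences.
y2_numeric = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / (h * h)

# The manual's formula.
y2_formula = -8 / (9 * y(x0) ** 3)

print(y2_numeric, y2_formula)  # both about -0.577
```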
How would you express the following?
August 2nd 2012, 11:59 AM #1
Aug 2012
Down Under
How would you express the following?
If s(m,n) = d has n how would u say
I don't get why it's different depending on the order of the quantifier
Please explain?
Re: How would you express the following?
What does "s(m,n) = d has n" mean?
Concerning the order of quantifiers, it is true that for every person x there exists a person y such that y is the mother of x. However, it is not true that there exists a person y such that for
every person x it is the case that y is the mother of x.
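The mother-of example can be made concrete on a finite domain, where the two quantifier orders become nested `all`/`any` calls (a toy Python sketch; the names are mine):

```python
people = ["alice", "bob"]
mothers = ["carol", "dana"]
mother_of = {("carol", "alice"), ("dana", "bob")}  # (y, x): y is x's mother

# For every x there exists a y with mother_of(y, x): true here.
forall_exists = all(any((y, x) in mother_of for y in mothers)
                    for x in people)

# There exists one y who is the mother of every x: false here.
exists_forall = any(all((y, x) in mother_of for x in people)
                    for y in mothers)

print(forall_exists, exists_forall)  # True False
```

Swapping the quantifiers swaps the nesting of `all` and `any`, and the truth value changes exactly as the explanation above describes.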
Re: How would you express the following?
sorry i meant s(m,n) means m has n
Re: How would you express the following?
What does "m has n" mean? Are m and n numbers, which is what they usually denote in mathematics, or is m a person?
which is the period of oscillation of a pendulum with L = 0.5625m?
For small angles: \[T=2\pi \sqrt{\frac{ L }{g }} \approx 1.50\ \text{s}\] (taking g = 9.81 m/s²)
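For reference, the small-angle formula evaluates as follows (a sketch; g = 9.81 m/s² is my assumption):

```python
import math

L = 0.5625   # pendulum length, meters
g = 9.81     # gravitational acceleration, m/s^2 (assumed)

T = 2 * math.pi * math.sqrt(L / g)
print(T)  # about 1.50 seconds
```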
How would I factor these polynomials completely?
August 29th 2007, 07:08 PM #1
Junior Member
Aug 2007
How would I factor these polynomials completely?
I was supposed to factor $16p^2-64$, and following the steps I ended up with $8(2p^2-8)$ I don't think that's factored COMPLETELY though?
Then I was supposed to factor $4n^2-32n+48$ , and I got $4(n^2-8n+12)$ . I don't think that's factored completely either?! Were there different steps I was supposed to follow or something?
your input would be greatly appreciated!
I don't think I'm familiar with a formula for factoring expressions like the first one.
Yes, I've heard of foiling. I think it may be starting to come back to me, how to turn them into 'foil' problems. Is that the thing where I would need to find two numbers whose product is 48, and
the sum would have to be -32? And then you somehow turn that into a 'foil' problem? I can't remember exactly...
here's the formula. try it and tell me what you get.
$x^2 - y^2 = (x + y)(x - y)$
Yes, I've heard of foiling. I think it may be starting to come back to me, how to turn them into 'foil' problems. Is that the thing where I would need to find two numbers whose product is 48, and the sum would have to be -32? And then you somehow turn that into a 'foil' problem? I can't remember exactly...

You already made it simpler, didn't you, by factoring out the 4? Don't worry about 48 and 32; now all you have to worry about is 8 and 12. (By the way, if you were factoring the original expression it would be even harder, since the coefficient of $n^2$ is not 1. With experience you should be able to foil it, but beginners have a whole process they have to go through to factor by groups and such.)
anyway, you were right so far, now you just have to put it together. let's say the two numbers you found were $a$ and $b$ (and they can be positive or negative), then the answer would be in the form
$(n + a)(n + b)$
contracting the original expression into that form is called "foiling." try it
Last edited by Jhevon; August 29th 2007 at 09:33 PM.
ok, so for the first one would that be: $16(p-2)(p+2)$ ?
I'm still working on the second one...
Last edited by deathtolife04; August 29th 2007 at 07:59 PM.
ok, yeah, that sounds better
so for the second one I figured out that -6 and -2 have a product of 12 and a sum of -8
so, how would i write my final answer? i think if i just write it 4(n-6)(n-2) that would be wrong? cuz then it would look like you just had to distribute the 4 to the n-6?????
ok, yeah, that sounds better
so for the second one I figured out that -6 and -2 have a product of 12 and a sum of -8
so, how would i write my final answer? i think if i just write it 4(n-6)(n-2) that would be wrong? cuz then it would look like you just had to distribute the 4 to the n-6?????
write (2n - 12)(2n - 4)
i split the 4 into 2 times 2, distributed one 2 in the first set of brackets and the other in the second. it is what you would get if you foiled directly
on second thought, i think it may be better if you keep the constants out in front, it looks messier, but it is factored "completely" when in that form
ok, thanks!
Last edited by deathtolife04; August 29th 2007 at 08:48 PM.
write (2n - 12)(2n - 4)
i split the 4 into 2 times 2, distributed one 2 in the first set of brackets and the other in the second. it is what you would get if you foiled directly
on second thought, i think it may be better if you keep the constants out in front, it looks messier, but it is factored "completely" when in that form
actually, so would i need to put a 4 in front of each set of parenthesis? or keep it as 4(n-6)(n-2)
Last edited by ThePerfectHacker; September 2nd 2007 at 02:20 PM.
yeah, i think i'm good now! thanks
let me clarify, though. do what i did before as in, keep it with ONE 4, right?
so: 4(n-6)(n-2)
but then wouldn't they think that the 4 should only be distributed through the FIRST set of parenthesis? or does it matter?
to "factor completely" means we should be able to "pull anything else out." if you distribute the 4, then it will become something you can "pull out"
so leave it as it was the first time, which is what i believe you have here
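A quick way to verify both final answers is a numeric spot-check in Python (this snippet is illustrative, not from the thread). Two degree-2 polynomials that agree at more than two points are identical, so evaluating both sides at a handful of integers is enough:

```python
# Spot-check of the two factorizations discussed above.
def same_poly(f, g, points=range(-5, 6)):
    # Degree-2 polynomials agreeing at more than 2 points are identical.
    return all(f(x) == g(x) for x in points)

# 16p^2 - 64 = 16(p - 2)(p + 2)
assert same_poly(lambda p: 16*p**2 - 64, lambda p: 16*(p - 2)*(p + 2))

# 4n^2 - 32n + 48 = 4(n - 6)(n - 2)
assert same_poly(lambda n: 4*n**2 - 32*n + 48, lambda n: 4*(n - 6)*(n - 2))
print("both factorizations check out")
```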
| {"url":"http://mathhelpforum.com/algebra/18204-how-would-i-factor-these-polynomials-completely.html","timestamp":"2014-04-17T15:17:15Z","content_type":null,"content_length":"81928","record_id":"<urn:uuid:15b1281f-3aea-4c32-b1e1-58c47a967b11>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
NAG Library
NAG Library Routine Document
1 Purpose
G13BBF filters a time series by a transfer function model.
2 Specification
SUBROUTINE G13BBF ( Y, NY, MR, NMR, PAR, NPAR, CY, WA, IWA, B, NB, IFAIL)
INTEGER NY, MR(NMR), NMR, NPAR, IWA, NB, IFAIL
REAL (KIND=nag_wp) Y(NY), PAR(NPAR), CY, WA(IWA), B(NB)
3 Description
From a given series ${y}_{1},{y}_{2},\dots ,{y}_{n}$, a new series ${b}_{1},{b}_{2},\dots ,{b}_{n}$ is calculated using a supplied (filtering) transfer function model according to the equation

$b_{t}=\delta_{1}b_{t-1}+\delta_{2}b_{t-2}+\cdots +\delta_{p}b_{t-p}+\omega_{0}y_{t-b}-\omega_{1}y_{t-b-1}-\cdots -\omega_{q}y_{t-b-q}.$ (1)
As in the use of G13BAF, large transient errors may arise in the early values of ${b}_{t}$ due to ignorance of the series values before the start of the data, and two possibilities are allowed.
(i) The equation (1) is applied from $t=1+b+q,\dots ,n$ so all terms in ${y}_{t}$ on the right-hand side of (1) are known, the unknown set of values ${b}_{t}$ for $t=b+q,\dots ,b+q+1-p$ being taken as zero.
(ii) The unknown values of ${y}_{t}$ for $t\le 0$ are estimated by backforecasting exactly as for G13BAF.
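As an informal illustration (not part of the NAG Library documentation), the recurrence (1) under option (i) can be sketched in a few lines of Python; the function name `filter_tf` and its argument layout are invented for this sketch, and no backforecasting is done (pre-period values of ${b}_{t}$ are taken as zero).

```python
def filter_tf(y, delta, omega, b):
    """Sketch of recurrence (1), option (i); NOT the NAG implementation.
    y:     series y_1..y_n as a 0-indexed list
    delta: AR-like parameters [delta_1, ..., delta_p]
    omega: MA-like parameters [omega_0, ..., omega_q]
    b:     pure delay
    """
    n = len(y)
    q = len(omega) - 1
    out = [0.0] * n
    # apply (1) from t = 1 + b + q (1-based), i.e., index b + q (0-based);
    # earlier b_t values are left at zero, as in option (i)
    for t in range(b + q, n):
        acc = sum(d * out[t - 1 - i]
                  for i, d in enumerate(delta) if t - 1 - i >= 0)
        acc += omega[0] * y[t - b]
        acc -= sum(omega[j] * y[t - b - j] for j in range(1, q + 1))
        out[t] = acc
    return out
```

For example, with $\delta = (0.5)$, $\omega = (1)$ and $b=0$, the series $(1, 2, 3)$ filters to $(1.0, 2.5, 4.25)$.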
4 References
Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day
5 Parameters
1: Y(NY) – REAL (KIND=nag_wp) array Input
On entry: the ${Q}_{y}^{\prime }$ backforecasts, starting with the backforecast at time $1-{Q}_{y}^{\prime }$ and ending with the backforecast at time $0$, followed by the time series starting at time $1$, where ${Q}_{y}^{\prime }={\mathbf{MR}}\left(6\right)+{\mathbf{MR}}\left(9\right)×{\mathbf{MR}}\left(10\right)$. If there are no backforecasts, either because the ARIMA model for the time series is not known or because it is known but has no moving average terms, then the time series starts at the beginning of Y.
2: NY – INTEGER Input
On entry: the total number of backforecasts and time series data points in array Y.
Constraint: ${\mathbf{NY}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1+{Q}_{y}^{\prime },{\mathbf{NPAR}}\right)$.
3: MR(NMR) – INTEGER array Input
On entry: the orders vector for the filtering transfer function model, followed by the orders vector for the ARIMA model for the time series if the latter is known. The transfer function model orders appear in the standard form as given in the G13 Chapter Introduction. Note that if the ARIMA model for the time series is supplied, then the routine will assume that the first ${Q}_{y}^{\prime }$ values of the array Y are backforecasts.
Constraints:
the filtering model is restricted in the following way:
□ ${\mathbf{MR}}\left(1\right)\text{, }{\mathbf{MR}}\left(2\right)\text{, }{\mathbf{MR}}\left(3\right)\ge 0$.
the ARIMA model for the time series is restricted in the following ways:
□ ${\mathbf{MR}}\left(\mathit{k}\right)\ge 0$, for $\mathit{k}=4,5,\dots ,10$;
□ if ${\mathbf{MR}}\left(10\right)=0$, ${\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(8\right)+{\mathbf{MR}}\left(9\right)=0$;
□ if ${\mathbf{MR}}\left(10\right)\ne 0$, ${\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(8\right)+{\mathbf{MR}}\left(9\right)\ne 0$;
□ ${\mathbf{MR}}\left(10\right)\ne 1$.
4: NMR – INTEGER Input
On entry: the number of values supplied in the array MR. It takes the value $3$ if no ARIMA model for the time series is supplied, but otherwise it takes the value $10$. Thus NMR acts as an indicator as to whether backforecasting can be carried out.
Constraint: ${\mathbf{NMR}}=3$ or $10$.
5: PAR(NPAR) – REAL (KIND=nag_wp) arrayInput
On entry: the parameters of the filtering transfer function model followed by the parameters of the ARIMA model for the time series. In the transfer function model the parameters are in the
standard order of MA-like followed by AR-like operator parameters. In the ARIMA model the parameters are in the standard order of non-seasonal AR and MA followed by seasonal AR and MA.
6: NPAR – INTEGER Input
On entry: the total number of parameters held in array PAR.
Constraints:
□ if ${\mathbf{NMR}}=3$, ${\mathbf{NPAR}}={\mathbf{MR}}\left(2\right)+{\mathbf{MR}}\left(3\right)+1$;
□ if ${\mathbf{NMR}}=10$, ${\mathbf{NPAR}}={\mathbf{MR}}\left(2\right)+{\mathbf{MR}}\left(3\right)+1+{\mathbf{MR}}\left(4\right)+{\mathbf{MR}}\left(6\right)+{\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(9\right)$.
7: CY – REAL (KIND=nag_wp) Input
On entry: if the ARIMA model for the time series is known (i.e., ${\mathbf{NMR}}=10$), CY must specify the constant term of the ARIMA model for the time series. If this model is not known (i.e., ${\mathbf{NMR}}=3$), then CY is not used.
8: WA(IWA) – REAL (KIND=nag_wp) arrayWorkspace
9: IWA – INTEGER Input
On entry: the dimension of the array WA as declared in the (sub)program from which G13BBF is called.
Constraints:
let $K={\mathbf{MR}}\left(3\right)+{\mathbf{MR}}\left(4\right)+{\mathbf{MR}}\left(5\right)+\left({\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(8\right)\right)×{\mathbf{MR}}\left(10\right)$,
□ if ${\mathbf{NMR}}=3$, ${\mathbf{IWA}}\ge {\mathbf{MR}}\left(1\right)+{\mathbf{NPAR}}$;
□ if ${\mathbf{NMR}}=10$, ${\mathbf{IWA}}\ge {\mathbf{MR}}\left(1\right)+{\mathbf{NPAR}}+K×\left(K+2\right)$.
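For illustration only (not part of the NAG interface), the two bounds above can be encoded directly. `min_iwa` below is a hypothetical helper that takes the orders vector MR as a 0-indexed Python list (so `mr[0]` is MR(1)) together with NPAR:

```python
def min_iwa(mr, npar):
    """Evaluate the documented lower bound on IWA (illustrative helper,
    not a NAG routine).  mr is 0-indexed: mr[0] holds MR(1), etc."""
    if len(mr) == 3:          # NMR = 3: no ARIMA model supplied
        return mr[0] + npar
    # NMR = 10: K = MR(3) + MR(4) + MR(5) + (MR(7) + MR(8)) * MR(10)
    k = mr[2] + mr[3] + mr[4] + (mr[6] + mr[7]) * mr[9]
    return mr[0] + npar + k * (k + 2)
```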
10: B(NB) – REAL (KIND=nag_wp) array Output
On exit: the filtered output series. If the ARIMA model for the time series was known, and hence ${Q}_{y}^{\prime }$ backforecasts were supplied in Y, then B contains the ${Q}_{y}^{\prime }$ ‘filtered’ backforecasts followed by the filtered series. Otherwise, the filtered series begins at the start of B, just as the original series began at the start of Y. In either case, if the value of the series at time $t$ is held in ${\mathbf{Y}}\left(t\right)$, then the filtered value at time $t$ is held in ${\mathbf{B}}\left(t\right)$.
11: NB – INTEGER Input
On entry: the dimension of the array B as declared in the (sub)program from which G13BBF is called.
In addition to holding the returned filtered series, B is also used as an intermediate work array if the ARIMA model for the time series is known.
Constraints:
□ if ${\mathbf{NMR}}=3$, ${\mathbf{NB}}\ge {\mathbf{NY}}$;
□ if ${\mathbf{NMR}}=10$, ${\mathbf{NB}}\ge {\mathbf{NY}}+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{MR}}\left(1\right)+{\mathbf{MR}}\left(2\right),{\mathbf{MR}}\left(3\right)\right)$.
12: IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
On entry, ${\mathbf{NMR}}\ne 3$ and ${\mathbf{NMR}}\ne 10$,
or ${\mathbf{MR}}\left(\mathit{i}\right)<0$, for $\mathit{i}=1,2,\dots ,{\mathbf{NMR}}$,
or ${\mathbf{NMR}}=10$ and ${\mathbf{MR}}\left(10\right)=1$,
or ${\mathbf{NMR}}=10$ and ${\mathbf{MR}}\left(10\right)=0$ and ${\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(8\right)+{\mathbf{MR}}\left(9\right)\ne 0$,
or ${\mathbf{NMR}}=10$ and ${\mathbf{MR}}\left(10\right)\ne 0$, and ${\mathbf{MR}}\left(7\right)+{\mathbf{MR}}\left(8\right)+{\mathbf{MR}}\left(9\right)=0$,
or NPAR is inconsistent with the contents of MR,
or WA is too small,
or B is too small.
A supplied model has parameter values which have failed the validity test.
The supplied time series is too short to carry out the requested filtering successfully.
This only occurs when an ARIMA model for the time series has been supplied. The matrix which is used to solve for the starting values for MA filtering is singular.
Internal memory allocation failed.
7 Accuracy
Accuracy and stability are high except when the AR-like parameters are close to the invertibility boundary. All calculations are performed in basic precision except for one inner product type
calculation which on machines of low precision is performed in additional precision.
8 Further Comments
If an ARIMA model is supplied, a local workspace array of fixed length is allocated internally by G13BBF. The total size of this array amounts to $K$ integer elements, where $K$ is the expression defined in the description of the parameter IWA.
The time taken by G13BBF is roughly proportional to the product of the length of the series and the number of parameters in the filtering model, with an appreciable increase if an ARIMA model is supplied for the time series.
9 Example
This example reads a time series, one univariate ARIMA model for the series and the filtering transfer function model. The required initial backforecasts are calculated by a preliminary call to another routine and inserted at the start of the series, and G13BBF is then called to perform the filtering.
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/FL/nagdoc_fl24/html/G13/g13bbf.html","timestamp":"2014-04-18T15:00:56Z","content_type":null,"content_length":"33629","record_id":"<urn:uuid:5673604e-8590-4075-bb12-9e2757194122>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
20 search hits
Chiral model for dense, hot and strange hadronic matter (1999)
Detlef Zschiesche Panajotis Papazoglou Christian W. Beckmann Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
Introduction: Until now it has not been possible to determine the equation of state (EOS) of hadronic matter from QCD. One successfully applied alternative way to describe the hadronic world at high densities and temperatures is to use effective models like the RMF models [1], where the relevant degrees of freedom are baryons and mesons instead of quarks and gluons. Since approximate chiral symmetry is an essential feature of QCD, it should be a useful concept for building and restricting effective models. It has been shown [2,3] that effective sigma-omega models including SU(2) chiral symmetry are able to obtain a reasonable description of nuclear matter and finite nuclei. Recently [4] we have shown that an extended SU(3) × SU(3) chiral sigma-omega model is able to describe nuclear matter ground state properties, vacuum properties and finite nuclei satisfactorily. This model includes the lowest SU(3) multiplets of the baryons (octet and decuplet [5]) and the spin-0 and spin-1 mesons as the relevant degrees of freedom. Here we will discuss the predictions of this model for dense, hot, and strange hadronic matter.
Critical review of quark gluon plasma signals (2000)
Detlef Zschiesche Lars Gerland Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
Compelling evidence for a new form of matter has been claimed to be formed in Pb+Pb collisions at the SPS. We critically review two suggested signatures for this new state of matter: first, the suppression of the J/psi, which should be strongly suppressed in the QGP by two different mechanisms, the color screening [1] and the QCD photoeffect [2]; secondly, the measured particle ratios, in particular strange hadronic ratios, which might signal freeze-out from a quark-gluon phase.
Critical review of quark gluon plasma signatures (1999)
Stefan Scherer Steffen A. Bass Marcus Bleicher Mohamed Belkacem Larissa V. Bravina Jörg Brachmann Adrian Dumitru Christoph Ernst Lars Gerland Markus Hofmann Ludwig Neise Manuel Reiter Sven Soff
Christian Spieles Henning Weber Eugene E. Zabrodin Detlef Zschiesche Joachim A. Maruhn Horst Stöcker Walter Greiner
Nonequilibrium models (three-fluid hydrodynamics and UrQMD) are used to discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that these two models - although they treat the most interesting early phase of the collisions quite differently (thermalizing QGP vs. coherent color fields with virtual particles) - both yield a reasonable agreement with a large variety of the available heavy ion data.
Current status of quark gluon plasma signals (2001)
Detlef Zschiesche Steffen A. Bass Marcus Bleicher Jörg Brachmann Lars Gerland Kerstin Paech Stefan Scherer Sven Soff Christian Spieles Henning Weber Horst Stöcker Walter Greiner
Compelling evidence for the creation of a new form of matter has been claimed to be found in Pb+Pb collisions at the SPS. We discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals like J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow unambiguously show that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning down the properties of hot, dense QCD matter from data.
Effects of Dirac sea polarization on hadronic properties : a Chiral SU(3) approach (2003)
Amruta Mishra K. Balazs Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
Abstract: The effect of vacuum fluctuations on the in-medium hadronic properties is investigated using a chiral SU(3) model in the nonlinear realization. The effect of the baryon Dirac sea is seen to modify hadronic properties, and in contrast to a calculation in the mean-field approximation it is seen to give rise to a significant drop of the vector meson masses in hot and dense matter. This effect is taken into account through the summation of baryonic tadpole diagrams in the relativistic Hartree approximation (RHA), where the baryon self-energy is modified due to interactions with both the non-strange and the strange scalar fields.
Enhanced strange particle yields : signal of a phase of massless particles? (2000)
Sven Soff Detlef Zschiesche Marcus Bleicher Christoph Hartnack Mohamed Belkacem Larissa V. Bravina Eugene E. Zabrodin Steffen A. Bass Horst Stöcker Walter Greiner
The yields of strange particles are calculated with the UrQMD model for p+Pb and Pb+Pb collisions at 158 AGeV and compared to experimental data. The yields are enhanced in central collisions compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses or, equivalently, an increase of the string tension provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances such as the Lambda*(1520) resonance or the phi meson are considerably affected by hadronic rescattering of the decay products.
GEANT4 : a simulation toolkit (2003)
S. Agostinelli Dennis Dean Dietrich Walter Greiner Kerstin Anja Paech Stefan Scherer Horst Stöcker Henning Weber Detlef Zschiesche et al. Geant4 Collaboration
Abstract Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The
physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy
range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex
geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It
has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear
physics, accelerator design, space engineering and medical physics. PACS: 07.05.Tp; 13; 23
Hadrons in dense resonance matter: a chiral SU(3) approach (2000)
Detlef Zschiesche Panajotis Papazoglou Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
A nonlinear chiral SU(3) approach including the spin-3/2 decuplet is developed to describe dense matter. The coupling constants of the baryon resonances to the scalar mesons are determined from the decuplet vacuum masses and SU(3) symmetry relations. Different methods of mass generation show significant differences in the properties of the spin-3/2 particles and in the nuclear equation of state.
Hypermatter in chiral field theory (1997)
Panajotis Papazoglou Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner
Abstract. A generalized Lagrangian for the description of hadronic matter based on the linear SU(3)L × SU(3)R σ-model is proposed. Besides the baryon octet and the spin-0 and spin-1 nonets, a gluon condensate associated with broken scale invariance is incorporated. The observed values for the vacuum masses of the baryons and mesons are reproduced. In mean-field approximation, vector and scalar interactions yield a saturating nuclear equation of state. Finite nuclei can be reasonably described, too. The condensates and the effective baryon masses at finite baryon density and temperature are discussed.
Impact of baryon resonances on the chiral phase transition at finite temperature and density (2004)
Detlef Zschiesche Gebhard Zeeb Stefan Schramm Horst Stöcker
We study the phase diagram of a generalized chiral SU(3)-flavor model in mean-field approximation. In particular, the influence of the baryon resonances, and their couplings to the scalar and vector fields, on the characteristics of the chiral phase transition as a function of temperature and baryon-chemical potential is investigated. Present and future finite-density lattice calculations might constrain the couplings of the fields to the baryons. The results are compared to recent lattice QCD calculations and it is shown that it is non-trivial to obtain, simultaneously, stable cold nuclear matter. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Detlef+Zschiesche%22/start/0/rows/10/author_facetfq/Horst+St%C3%B6cker/sortfield/title/sortorder/asc","timestamp":"2014-04-20T13:42:28Z","content_type":null,"content_length":"55853","record_id":"<urn:uuid:28406028-0cb2-4602-9d4c-c6eb4f46aff8>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"}