Posts by dEE
Total # Posts: 757
what are the legal consequences of using steroids illegally.
Explain your changes?
Explain your changes?
3x+9y=-9, -x-y=9
If a population has a standard deviation σ of 25 units, what is the standard error of the mean if samples of size 100 are selected?
What is the confidence interval for c= 0.95 x = 5.3, s = 0.5 , and n =53
Find the area under the standard normal curve for the following: (A) P(z < -1.80) (B) P(-1.93 < z < 0) (C) P(-0.17 < z < 2.05)
how and what do I post from the cash receipts journal to the general journal
Gas is confined in a tank at a pressure of 11 ATM and a temperature of 25 degree Celsius. If 2/3 of the gas is withdrawn and the temperature is raised to 75 degree Celsius, what is the new pressure
in the tank?
Use Chebyshev's theorem to find what percent of the values will fall between 226 and 340 for a data set with a mean of 283 and standard deviation of 19.
Why should we check if our data is binomial?
give an example where the median would be less than the mean.
given 100,90,80,75,70,60,30,25,5,4 what score corresponds to the 70th percentile?
social and criminal justice
Hello I have a 2 to 3 page paper due on Monday 11/14, 2011 and I am having trouble finding good reliable information to complete the assignment. The instructions are to: Prepare a paper identifying
at least three changes expected in the field of criminal justice over the next...
relative frequency distribution
a rectangular table measures 3/5 of a meter long by 1/7 of a meter wide. what is the area
Algebra II
b/3 + 6=82
a student attaches a mass to the end of a 0.80 m string. The system will be whirled in a horizontal circular path at 31.5 m/s. The maximum tension the string can withstand without breaking is 250 N.
What is the maximum mass the student can use?
a jet plane flying at 550 m/s makes a curve. The radius of the circle in which the plane is flying is 9.0 X 10^3 m. What centripetal acceleration does the plane experience? Express this acceleration
relative to g, the acceleration due to gravity.
The average distance separating earth and moon is 384000 km. What is the net gravitational force exerted by earth and moon on a 3.00 * 10^4 kg spaceship located halfway between them.
A vehicle is travelling at 25.0 m/s. Its brakes provide an acceleration of -3.75 m/s^2 [forward]. What is the driver's maximum reaction time if she is to avoid hitting an obstacle 95.0 m away? The
answer is a number less than 0.5 second. Can you tell me the exact number and ...
describe the implied powers doctrine.
If the base of an isosceles triangle has a length of 10, what is the shortest possible integral value for the length of each of the congruent sides?
Starting from rest, a 5.00 kg block slides 2.50 m down a rough 30.0 degree incline. The coefficient of kinetic friction between block and the incline is .436. Determine the work done by the friction
force between block and incline and the work done on the normal force
B= 5 (given) L=h+1= 8 H= 7
I got 1.12/.55: .05x - .6x + 1.2 = .08, so -.55x = -1.12, and x = -1.12/-.55
Basic Communication Skills
Environment? Location?
21/3=7 answer 7
Physics please help
Starting from rest, a 5.00 kg block slides 2.50 m down a rough 30.0 degree incline. The coefficient of kinetic friction between block and the incline is .436. Determine the work done by the friction
force between block and incline and the work done on the normal force?
-31 and 0
Is that supposed to be 2X or X^2?
B+4=g B+g=18 b+b+4=18 2b=18-4 2b=14 b=14/2 b=7
Starting from rest, a 5.00 kg block slides 2.50 m down a rough 30.0 degree incline. The coefficient of kinetic friction between block and the incline is .436. Determine the work done by the friction
force between block and incline and the work done on the normal force?
Change 427 mL to liters. Multiply the liters by the molarity (1.73) and your answer will be in moles. Find the molar mass of K3PO4, then convert your moles to grams using the molar mass.
Convert the grams to moles by using the molar mass, then convert mL to liters. M = mol/liters
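To make the first procedure concrete, here is a hedged worked instance using the values mentioned above (427 mL of a 1.73 M K3PO4 solution): 0.427 L × 1.73 mol/L = 0.739 mol of K3PO4; molar mass of K3PO4 = 3(39.10) + 30.97 + 4(16.00) = 212.27 g/mol; mass = 0.739 mol × 212.27 g/mol ≈ 156.8 g.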
ANSWERS FOR TEACHER AID TEXT 02604500
ANSWERS FOR TEACHER AID TEST 02604500
Add the masses on the bottom, not subtract... sorry, I hit the wrong button.
T = (2gm1m2)/(m1+m2) A = g(m1-m2)/(m1+m2) D = 1/2at^2
Ok I tried an didn't even come close to the right answer.
The coefficient of static friction between the 3.00 kg crate and the 35.0 degree incline is .300. What minimum force must be applied to keep the crate from sliding down the incline?
-15.00 m/s= Voy- 1/2(10.0 m/s) (.20 sec)^2 Voy = initial velocity and is equal to 0
Thank you!!
Two objects with masses of 3.00 kg are connected by a light string that passes over a frictionless pulley. What is the distance each object will move in the first second of motion if both objects
start from rest.
Media Influences on America
Bill Goodykoontz states that media and popular culture often understand society better than straight coverage. Why is this so? Can you name any examples?
The sum of two forces, one having a magnitude of 8 N acting due west and other having a magnitude of 6 N acting due north is ___. What is the magnitude of that force? Answer in units of N
Thanks so much!!!
An object with mass m1=5.00 kg rests on a frictionless horizontal table and is connected to a cable that passes over a pulley and is then fastened to a hanging object with mass m2= 10.0 kg. Find the
acceleration of each object and the tension in the cable.
An object with mass m1=5.00 kg rests on a frictionless horizontal table and is connected to a cable that passes over a pulley and is then fastened to a hanging object with mass m2= 10.0 kg. Find the
acceleration of each object and the tension in the cable.
angle 3=3x+9
Reiny, thanks!
Decision Criteria
You have available five different investment strategies and their respective payoffs for various states-of-nature as shown in the chart below. Which investment would you make under the different
decision criteria? States of Nature Sever Decline Moderate Decline Stable Moderate...
Smitty's Bar and Grill has brand name recognition of 61% around the world. Assume we randomly select 12 people. The assumptions of a Bernoulli process are met. What is the probability a) exactly 5
of the 12 recognize the name of Smitty's Bar and Grill? b) 5 or fewer re...
The Ace AirLines has a policy of overbooking flights. The random variable x represents the number of passengers who cannot be boarded because there were more passengers than seats. x P(x) 0 0.061 1
0.132 2 0.255 3 0.345 4 0.207 a)Is this a probability distribution? How do you ...
it's called order of operations, pre-algebra
how do you get the answer: 32-13+21=114
80x=100 X represents the number of hours. Solve for x. 100/80=?
100/40 = 2.5 Which is 2 and a half hours.
40x=100 X represents the number of hours. Solve for x.
3.5 x 10^4 and 9.82 x 10^8
Finite Math
Pivot the system about the element in row 2, column 1. [3 3| 2] [ 9 4| 1]
Finite Math
A dietitian wishes to plan a meal around three foods. The meal is to include 16,200 units of vitamin A, 4,620 units of vitamin C, and 680 units of calcium. The number of units of the vitamins and
calcium in each ounce of the foods is summarized in the table below. Determine th...
Finite Math
Solve the system of linear equations, using the Gauss-Jordan elimination method. (If there is no solution, enter NO SOLUTION. If there are infinitely many solutions involving one parameter, enter the
solution using t for the last variable. If the solution involves two paramete...
Find the equation of the circle that satisfies the given conditions. Center at the origin and passes through (7, 9) 130 = ?
All sages provide both wisdom and inspiration. Since Dasha's speech contained wisdom and greatly inspired her audience, Dasha is a sage. Which one of the following points out the flaw in the argument?
Calculate the monthly finance charge for the following credit card transaction. Assume that it takes 10 days for a payment to be received and recorded and that the month is 30 days long. Assume 365
days in a year. (Round your answer to the nearest cent.) $3,000 balance, 21% ra...
Hi - I'm finding this question a bit tricky and think I answered it wrong, can someone please verify for me? Many thanks.... In triangle EFG, find the value of f, if e = 7.3 cm, g = 8.7 cm and ∠E = 73°. Please help
Ahhh...I see! Thank you for helping, I would have never have worked this out correctly.
how will I see that the angle is 45 degrees? Don't we have to calculate that? This is what I'm asking you about. How is ∠O calculated so it comes up to 45 degrees? I have 2 sides and have to find the
angle as it's not given before I can proceed to find the third...
Hi - I'm reposting this thread please read the bottom...thanks. Ajax is 8 km due west of Oshawa. Uxbridge is 16 km NW of Oshawa. How far is it from Ajax to Uxbridge? Explain whether you have enough
information to solve this problem. my answer was 13.856 km but not confiden...
Hi - I'm reposting this thread as there was some encoding issues in previous version that I clarified near bottom. Thanks. Hi - I'm having problems drawing a diagram for the following question. The
part I find unclear is "they mark a point A, 120 m along the edge ...
Thank you, that's what I got too:)
Maybe all my questions have this problem. THanks for telling me how to fix it. The original question I posted was: Noah and Brianna want to calculate the distance between their houses which are
opposite sides of a water park. They mark a point, A, 120 m along the edge of the w...
Hi - I'm having problems drawing a diagram for the following question. The part I find unclear is "they mark a point A, 120 m along the edge of the waterpark from Brianna's house. I'm wondering if
point A is the waterpark. See question. My answer to this quest...
Hi - How did you get 45 degrees? Think that's the part I'm lost with.
Ajax is 8 km due west of Oshawa. Uxbridge is 16 km NW of Oshawa. How far is it from Ajax to Uxbridge? Explain whether you have enough information to solve this problem. my answer was 13.856 km but
not confident how I found the angle oshawa to begin with. I used Cos Oshawa = a/...
Hi sorry - the ¢ was meant to be a triangle symbol that I guess formatted differently on this page. Sorry
Hi - I'm finding this question a bit tricky and think I answered it wrong, can someone please verify for me? Many thanks.... In triangle EFG, find the value of f, if e = 7.3 cm, g = 8.7 cm and ∠E = 73°.
my answer was 4.73 cm but I'm not conf...
Thank you for clarifying! Also appreciate how quickly you helped me. Very good math skills :)
An architect designs a house that is 12 m wide. The rafters holding up the roof are equal length and meet at an angle of 68°. The rafters extend 0.6 m beyond the supporting wall. How long are the
rafters? Hi - I was wondering if you could help me with this question. I'...
after the play_________, several critics made scathing comments about it
Health care
Is the best topic to talk to the doctors about not being able to afford RTs, and that we are able to refer them to other places?
Health care
You are a new administrator at a hospital, the icu & er are suffering a shortage of respiratory therapists, you need to gather information for a meeting with them, what information do you need before
the meeting? develop a tactical plan, what are the major issues? how does it ...
why were the plebeians and patricians in conflict?
thanks :)
Why were the Etruscans considered to be the greatest influence on early Rome?
Factor x^3 + 8x^2 + 4x + 32
Is this a couplet? If this be error and upon me proved, I never writ, nor no man loved.
test the null hypothesis and interpret findings on toxin levels between farm, urban and suburban homes
What is software that monitors incoming communications and filters out those that are from untrusted sites or fit a profile of suspicious activities
How much HNO3 can be made from 25.0g of NO2?
Assuming that the air temperature drops 6.5 Celsius degrees per 1000 m, compute the temperature at the tropopause if the sea-level air temperature is 15 C and the altitude of the tropopause is 11 km.
Human Resources
The question is describe the process for tracking and evaluating training effectiveness. Not sure what the process is??
If an ocean wave has a wavelength of 15.0 m and its period is 10 s, what is the speed of the wave?
Human Resources
What are the 5 recruitment strategies or methods in health care? They can be internal or external. Thank you.
Which is correct.. Who does the play have as its star? or Whom does the play have as its star? Thank you.
Find the area of the largest rectangle that can be inscribed in a semicircle of radius r.
Find the point on the parabola y^2=2x that is closest to the point (2,4)
anonimnystefy wrote:
That is my solution with the plus/minus sign inside the root taken as +. I am sure taking the minus sign would work as well.
No, the minus will not work in this case, as we are finding y^2 and y^2 can't be negative.
But that's not my big problem... my problem is that I can't get to the answer using this equation.
And the correct answer for x and y in my book and everywhere else is
Another thing I want to say: when I write the equation
to this, i.e., moving the equation from one side to the other,
I get the correct answer. Is it here that I am doing something wrong?
Sine: Other graphics
Weierstrass-Riemann function
The everywhere continuous, but nowhere differentiable Weierstrass–Riemann functions (red), (green), and (blue). Here .
Weyl sums
Weyl sums of the form for various .
Various parametrized surfaces
Sphere parametrized through .
Torus parametrized through .
Klein bottle parametrized through .
Steiner's Roman surface parametrized through .
Cross cap parametrized through .
Fractional derivatives
Real part (left graphic) and imaginary part (right graphic) of the fractional derivatives of .
Real part of the fractional derivatives of over the .
Derivation in paper - calculus - How?
April 30th 2008, 03:49 AM #1
Hi all
the paper is Pybus(2000) An integrated framework for the inference of viral population history from reconstructed genealogies
wanted some help with this
$U = \exp\left(-\int_{t_i}^{g_i+t_i} \frac{\binom{i}{2}}{N_e(x)}\, dx\right)$
In the paper they solve U for gi and get
$g_i \binom{i}{2} = -\ln(U) \cdot \left(\frac{1}{g_i}\int_{t_i}^{g_i+t_i} \frac{dx}{N_e(x)}\right)^{-1}$
I don't know how he has done the rearranging. I'm not even sure it is correct.
Can someone please show me how? Or, if it's too much effort, tell me what method he used.
For those interested, U is a uniform unit random variable. Solving for g_i will generate a variate sample from the equation.
Thank you guys so much for the help
Last edited by chogo; April 30th 2008 at 04:24 AM.
Follow Math Help Forum on Facebook and Google+
The exponent on the integral expression gives it away.
After the logarithm is introduced, simply multiply and divide the integral side by $g_{i}$. Leave the numerator $g_{i}$ outside the integral expression and put the denominator $g_{i}$ inside the
integral expression.
$i \choose 2$ is also constant and has been removed from the integral expression before division of the integral expression to the other side.
Watch out for these things. In practical applications, integrals, derivative, logarithms, and various other things sometimes are substituted with some form of linear approximation. That does not
appear to have happened here, but keep your eyes out for it in your future readings.
Last edited by TKHunny; May 1st 2008 at 04:20 AM.
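Concretely, here is the chain of steps described above, writing $C = \binom{i}{2}$ for the constant (a sketch of the algebra only, in the paper's notation):
$U = \exp\left(-C\int_{t_i}^{g_i+t_i}\frac{dx}{N_e(x)}\right) \;\Rightarrow\; \ln U = -C\int_{t_i}^{g_i+t_i}\frac{dx}{N_e(x)}$
Multiplying and dividing the right side by $g_i$ (leaving one factor of $g_i$ outside and pushing the other inside, as TKHunny says):
$\ln U = -C\, g_i \cdot \frac{1}{g_i}\int_{t_i}^{g_i+t_i}\frac{dx}{N_e(x)} \;\Rightarrow\; g_i\, C = -\ln U \cdot \left(\frac{1}{g_i}\int_{t_i}^{g_i+t_i}\frac{dx}{N_e(x)}\right)^{-1}$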
The lack of use of Latex here at MHF
July 24th 2012, 07:02 AM #1
It has come to my attention how many forum members here avoid the LaTeX formula editor system.
For simple problems this can be given the benefit of the doubt, but for more detailed problems, each of which requires a more detailed response, you are, unfortunately, impaired. The lack of parentheses contributes to this quite significantly.
If you lack the confidence to post using LaTeX for the first time with no prior experience, like myself, then you can go 'advanced' and continue to review your post indefinitely, until you are happy you have got the 'hang of it'. It will take you (roughly) a whole evening to iron out any of your mistakes whilst using the formula editor system. Practice makes perfect.
Side note: you can find all the relevant code to 'punch in' here: Latex Math Symbols
Re: The lack of use of Latex here at MHF
I have a problem with LaTeX - I don't know how to put mathematical equations and symbols in listings. I use the listings package and it offers great-looking listings, but it doesn't allow math symbols in $ ... $. Another package, algorithms, allows math, but its listings don't look as good as those from listings (the problem is that algorithms demands a new line after every if, then,
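One workaround worth trying (a minimal sketch, not taken from this thread): the listings package has a mathescape option that lets $...$ switch to math mode inside a listing.

    \documentclass{article}
    \usepackage{listings}
    \lstset{mathescape=true} % lets $...$ produce mathematics inside listings
    \begin{document}
    \begin{lstlisting}
    for i = 1 to n
        s = s + $x_i^2$
    \end{lstlisting}
    \end{document}

With mathescape enabled, the $x_i^2$ above is typeset as mathematics while the rest of the listing keeps its verbatim look.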
May 18th 2010, 01:05 PM #1
Consider the equation below.
f(x) = 2cos^2(x) - 4sin(x)
0 ≤ x ≤ 2π
(a) Find the interval on which f is increasing.
(b) Find the local minimum and maximum values of f.
(c) Find the inflection points. (Order your answers from smallest to largest x-value.)
( , ) (smaller x value)
( , ) (larger x value)
Find the interval on which f is concave up.
Find the intervals on which f is concave down.
Set f'(x) = 0 to find critical values in the given interval.
Find the sign of f'(x) between critical values to determine whether f(x) is increasing or decreasing.
Set f''(x) = 0 to find critical values in the given interval.
Find the sign of f''(x) between critical values to determine whether f(x) is concave up or down ... inflection points occur where concavity changes.
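As a hedged illustration of the first step (assuming the interval is 0 ≤ x ≤ 2π): f'(x) = -4sin(x)cos(x) - 4cos(x) = -4cos(x)(sin(x) + 1). Since sin(x) + 1 ≥ 0 everywhere, f'(x) = 0 exactly where cos(x) = 0 or sin(x) = -1, i.e., at x = π/2 and x = 3π/2, and the sign of f'(x) is the sign of -cos(x), so f is increasing on (π/2, 3π/2).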
int formatting???
Ranch Hand wrote:
I have an int (unknown length) that I need to format so there's a comma every 3 digits, like "1,000,203,2203". I know about DecimalFormat df1 = new DecimalFormat(), but I'm not quite sure what to put in the parentheses. If I knew the int was 4 digits, I could do "0,000", but I'm not sure how to go about formatting an unknown number of digits. Let me know! Thanks!
Ranch Hand replied:
You're on the right track, just use #'s instead of 0's in your DecimalFormatter, like this: "#,###". If the number is more than 6 digits, it will automatically stick in another "," at the right places so you don't have to guess at the size. Try it out and you'll see what I mean.
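A minimal, self-contained sketch of that advice (the class name GroupingDemo and the sample values are illustrative, not from the thread):

    import java.text.DecimalFormat;
    import java.text.NumberFormat;

    public class GroupingDemo {
        public static void main(String[] args) {
            // "#,###" inserts a grouping comma every three digits,
            // however many digits the value has.
            DecimalFormat df = new DecimalFormat("#,###");
            System.out.println(df.format(1000203));     // 1,000,203
            System.out.println(df.format(42));          // 42
            System.out.println(df.format(2147483647));  // 2,147,483,647

            // Locale-aware alternative that applies the same grouping:
            NumberFormat nf = NumberFormat.getIntegerInstance();
            System.out.println(nf.format(1000203));
        }
    }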
29.3 Maxwell's Consistent Equations
As we have noted, Ampere's law must be modified in the presence of time dependent currents, because otherwise we would have
$0 = \nabla \cdot (\nabla \times \vec{B}) = \frac{4\pi}{c}\,\nabla \cdot \vec{j} = -\frac{4\pi}{c}\,\frac{\partial \rho}{\partial t}$
By considering what happens when (say alternating) current is passed through a circuit with a gap, Maxwell realized that consistency requires that there must be something that contributes to the curl
of $\vec{B}$ in the gap, if there is such a thing in the circuit.
But the gap is empty space to a first approximation; the only thing present in it is the electric field $\vec{E}$ that is produced by the charge density $\rho$.
Recall that in electrostatics we had the relation $\nabla \cdot \vec{E} = 4\pi\rho$.
We can see then that a consistent way to change Ampere's law to take into account varying current is to add a term $-\frac{1}{c}\frac{\partial \vec{E}}{\partial t}$ to its left side, to produce
$\nabla \times \vec{B} - \frac{1}{c}\frac{\partial \vec{E}}{\partial t} = \frac{4\pi}{c}\,\vec{j}$
Exercise 29.1 Take the divergence of both sides here to verify that this equation is consistent with charge conservation.
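A sketch of that verification: since the divergence of a curl vanishes identically, taking the divergence of both sides gives
$0 = \nabla \cdot (\nabla \times \vec{B}) = \frac{1}{c}\frac{\partial}{\partial t}(\nabla \cdot \vec{E}) + \frac{4\pi}{c}\,\nabla \cdot \vec{j}$
and substituting $\nabla \cdot \vec{E} = 4\pi\rho$ yields $\frac{\partial \rho}{\partial t} + \nabla \cdot \vec{j} = 0$, the continuity equation expressing charge conservation.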
We can put together all the differential equations satisfied by electric and magnetic fields into the following list. These are called "Maxwell's Equations", although they are much simpler than those
he published in 1874. Those had the same content, more or less, but were 20 equations in 20 variables.
$\nabla \times \vec{B} - \frac{1}{c}\frac{\partial \vec{E}}{\partial t} = \frac{4\pi}{c}\,\vec{j}$
$\frac{1}{c}\frac{\partial \vec{B}}{\partial t} + \nabla \times \vec{E} = \vec{0}$
$\nabla \cdot \vec{B} = 0$
$\nabla \cdot \vec{E} = 4\pi\rho$
The present forms of these equations were obtained by Heaviside who introduced vector notation, divergence and curl.
One reason that progress in this area was slow, from Faraday's law of 1831 to Maxwell's equations in 1874 was the difficulty people had in describing multidimensional phenomena and in understanding
the equations they wrote, without vector notation.
We have here ignored the effects of matter on electric and magnetic fields except in providing electrical current and charge.
In fact, matter consists of charge particles with both signs and magnetic dipoles, (like little magnets) and these are affected by electric and magnetic fields.
Electric fields cause charge to move in conductors, and polarize non-conductors. By attracting charge of one sign and repelling that of the other, they make non-conductors behave as if they were full of electric dipoles.
As a result, non-conductors modify the effects of fields within them by the fields produced by the polarization, and physicists describe these things by defining two kinds of electric fields, $\vec{D}$ and $\vec{E}$, and two kinds of magnetic fields, $\vec{B}$ and $\vec{H}$, one of which is the field produced by the actual charge distribution, and the other the field including the effects of polarization in matter as well. You will study these things in your physics course.
ECE 5605 Stochastic Signals and Systems (3C)
Current Prerequisites & Course Offering
For current prerequisites for a particular course, and to view course offerings for a particular semester, see the Virginia Tech Course Timetables.
5605: Engineering applications of probability theory, random variables, and random processes. Topics include: Gaussian and non-Gaussian random variables, correlation and stationarity of random processes. Time and frequency response of linear systems to random inputs using both classical transform and modern state-space techniques.
What is the reason for this course?
The analysis of system response to stochastic signals and noise is fundamental for the understanding of advanced system analysis and synthesis.
Typically offered: Fall. Program Area: Communications.
Prerequisites: STAT 4714.
Why are these prerequisites or corequisites required?
A basic course in probability and statistics such as STAT 4714 provides the necessary background in probability theory and random variables that the beginning graduate student should have for ECPE
5605, which is an advanced treatment of probability and stochastic processes. ECPE 5606 is the second course in the sequence, which requires ECPE 5605 as prerequisite.
Department Syllabus Information:
Major Measurable Learning Objectives:
• analyze the response of linear systems to both deterministic and random input processes.
• design system structures to meet desired performance objectives for both continuous and discrete time applications.
Course Topics
Topic Percentage
Probability space, sigma fields; probability axioms, conditional probability, random variables 10%
Probability distributions and density functions; independent and conditional random variables 10%
Two or more random variables; functions of random variables expectations, moments; characteristic functions 10%
Correlation; covariance; parameter estimation; multivariate normal variables random sequences and stochastic convergence; Central Limit Theorem 10%
Stochastic processes; Gaussian, exponential, random phase sinusoids in continuous and discrete time 20%
Strict and wide-sense stationary processes; correlation functions and expected values 20%
Linear transformations on random variables; linear system response to stochastic processes; ergodicity; power spectral density. 20%
Project Euler — problem 13
July 9, 2012
By Tony
The 13th in Project Euler is one big number problem:
Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
Obviously, there are some limits to the machine representation of numbers. In R, 2^(-1074) is the smallest non-zero positive number, and the largest is just under 2^1024. The numbers in the 13th problem are definitely within this range. There is, though, one global parameter to be modified. By default, if one sums up these 100 numbers naively, the result prints as 5.537376e+51, because the number of digits to print is 7 by default. Thus, to get the first 10 digits, you need to adjust this parameter a little bit.
options(digits = 10)
digits <- scan("clipboard", what = 0)
result <- sum(digits)
# the digits before "e" are the answer
# only nine digits will show, as the tenth digit is zero
Two more things to be addressed.
• The R package gmp is a dedicated package for big-number calculation.
• The first 10 digits of the sum depend mostly on the first 10 digits of the 100 numbers. You could simply calculate the sum of these 100 leading chunks (for safety, take the first 11 digits of each) to get the result. Beware that options(digits) still needs to be changed.
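A hedged sketch of the gmp route (assuming the 100 numbers sit in a text file, one 50-digit number per line; the file name is illustrative, and sum() on bigz vectors should give an exact result):

    library(gmp)
    lines <- readLines("numbers.txt")    # 100 fifty-digit numbers as text
    big   <- as.bigz(lines)              # exact arbitrary-precision integers
    total <- sum(big)                    # exact sum, no floating-point loss
    substr(as.character(total), 1, 10)   # first ten digits of the answer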
Derivatives of partial fractions
I'm having an issue with one problem. We're asked to break the problem down into partial fractions to solve for the integral.
Well, I'm stuck on one. I'm being asked for the values of A, B, and C for the following problem.
∫((9x^2+13x-83)/((x-3)(x^2 + 16)))dx
I can get it worked down pretty far, but I'm continually given back that I've done it incorrectly.
Broken down:
(9x^2+13x-83)/((x-3)(x^2 + 16)) = A/(x-3) + (Bx + C)/(x^2 + 16)
From there, I get (multiplying both sides by the common denominator, and now only working with the right side as it's the only one that needs to be worked):
A(x^2+12) + Bx(x-3) + C(x-3)
= ax^2 + 12A + Bx^2 - 3Bx + Cx - 3C
= (A+B)x^2 + (C-3B)x + (12A - 3C)
-9x^2 + 13x -83 = (A+B)x^2 + (C-3B)x + (12A - 3C)
Breaking it down further, I get:
A + B = -9
3B - C = 13
12A - 3C = -83
I'm thinking something may be wrong with my values in the second part. Any help would be appreciated. Whatever answers I put in, I'm marked as incorrect. I went so far as to plug the last little part I posted into Wolfram|Alpha to confirm, and it spits out the same answers I've been getting. I get the feeling that I did something incorrectly earlier on.
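For what it's worth, the slip appears to be in the expansion step: the factor should be x^2 + 16, not x^2 + 12, and the numerator's leading term is 9x^2, not -9x^2. With those corrections the setup reads
9x^2 + 13x - 83 = A(x^2 + 16) + (Bx + C)(x - 3) = (A + B)x^2 + (C - 3B)x + (16A - 3C),
so the system is A + B = 9, C - 3B = 13, 16A - 3C = -83. Setting x = 3 in the first equation above gives 25A = 37, i.e., A = 37/25, and B and C follow from the other two equations.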
Can someone help me please... (Integral/Diff. Calculus)
February 24th 2010, 12:26 AM #1
I was reviewing my lessons on our "Strength of Materials" and I came up with this problem.
I got stuck on this thing.
How did the final equation/formula come about?
I'm kinda curious.
Help me please!!
Thanks a bunch in advance!
$\frac{(0.25)(20\pi)^2}{JG}= \frac{1600\pi^2}{JG}$
I see that there is no "JG" in the last formula. Presumably it has been replaced by some other formula. To get that final expression you must have $JG = 512\pi (0.20)^4 (12\times 10^6)$.
I mean. can you explain to me, step by step (if you could) like how did they derive the equation, then integrate. and what are the derivatives that were used if there are.
btw, it's the formula for Torsion in strngth of materials,
theta = angle
J = torsion constant for the section
G = Modulus of elesticity
T = Torsion
L = Length ....
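A sketch of where the formula comes from (the standard mechanics-of-materials argument, not taken from the attached problem): over a shaft element of length $dx$ carrying torque $T$, the twist is
$d\theta = \frac{T\,dx}{JG}$
so integrating along the shaft,
$\theta = \int_0^L \frac{T}{JG}\,dx = \frac{TL}{JG}$
when $T$, $J$, and $G$ are constant along the length; if they vary with $x$, the integral must be evaluated directly.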
User gowers
Website: gowers.wordpress.com
Member for: 4 years, 6 months
Last seen: Mar 16 at 23:40
Profile views: 29,449
Bio: Mathematics professor at Cambridge
Apr Euler-Lagrange equation for several occurrences of integrals
3 revised Corrected spelling in title
Apr Almost orthogonal vectors
3 comment Looking at this almost a further year later, I'm still confused by Bill's remark, because what I wrote in the previous comment seems (i) correct and (ii) the standard volume argument
that he discusses. Can anyone shed light on this?
Mar What are the Applications of Hypergraphs
31 comment Here are two partial explanations for why algorithms based on hypergraphs are less common than algorithms based on graphs. 1. Some polynomial-time algorithms for graphs turn into
NP-complete problems when you try to generalize them to hypergraphs (e.g., finding a perfect matching). 2. We often use graphs to model symmetric binary relations, and symmetric binary
relations appear much more frequently than symmetric ternary relations (and beyond).
Mar functions satisfying “one-one iff onto”
31 comment Does $f$ have to be continuous, or something like that? Otherwise, the result seems to be trivially false because you can mess about with the map on a set of measure zero.
Mar Family of subsets such that there are at most two sets containing two given elements
30 comment It is perhaps worth adding that the above construction is generated by two standard tricks. The first is to dualize the problem by defining $T_i$ to be the set of $k$ such that $i\in
S_k$ and reformulating the conditions in terms of the $T_i$. (The main one says that the maximum intersection of any two $T_i$ is 2.) The other trick is to use graphs of polynomials to
get plenty of sets with small intersections.
Mar Family of subsets such that there are at most two sets containing two given elements
30 comment Thanks a lot -- I've edited it now.
Mar Family of subsets such that there are at most two sets containing two given elements
30 revised edited body
30 answered Family of subsets such that there are at most two sets containing two given elements
27 awarded Good Answer
Mar Family of subsets such that there are at most two sets containing two given elements
25 comment Have you tried taking the characteristic functions of the $S_i$, adding them up, and looking at the $\ell_2$ norm? The condition on the $S_i$ should put a strong condition on the average
inner product, and then the Cauchy-Schwarz inequality should give a bound the other way. I feel this ought to work, but can't quite be certain without writing it down.
25 asked Best known constant for parallel sorting
Mar Do good math jokes exist?
23 comment Approximately two and a half years later I see that I didn't write what I intended to write. I did of course intend to write "compact" -- or else the joke makes no sense. In other words,
Andrew Stacey's version is what I intended (except that in my version there was just one examiner).
21 awarded Nice Answer
12 awarded Notable Question
Feb How to mentor an exceptional high school student?
28 comment I've always wondered how people know they've been downvoted. Is it by repeatedly checking the number of votes so that one catches it after it decreases and before it increases again? Or
is there some more efficient method that I've been too stupid to work out?
15 awarded Notable Question
Feb Economical hard word problem
7 comment I don't know whether this suits my motivation until I've messed around with it for a while. But it looks promising, so many thanks. (Sorry for not writing this earlier -- I've been away
from Mathoverflow for a while.)
1 awarded Good Question
Jan awarded Nice Answer | {"url":"http://mathoverflow.net/users/1459/gowers?tab=activity&sort=all&page=5","timestamp":"2014-04-18T08:55:11Z","content_type":null,"content_length":"47156","record_id":"<urn:uuid:e61adaa1-efe0-4cd6-ba23-8933c6f58edf>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 24
- ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS , 1999
"... ..."
- in Proc. DAC , 1998
"... We describe the verification of the IM: a large, complex (12,000 gates and 1100 latches) circuit that detects and marks the boundaries between Intel architecture (IA-32) instructions. We
verified a gate-level model of the IM against an implementation-independent specification of IA-32 instruction le ..."
Cited by 31 (6 self)
Add to MetaCart
We describe the verification of the IM: a large, complex (12,000 gates and 1100 latches) circuit that detects and marks the boundaries between Intel architecture (IA-32) instructions. We verified a
gate-level model of the IM against an implementation-independent specification of IA-32 instruction lengths. We used theorem proving to derive 56 model-checking runs and to verify that the
model-checking runs imply that the IM meets the specification for all possible sequences of IA-32 instructions. Our verification discovered eight previously unknown bugs.
1 Introduction
The Intel
architecture (IA-32) instruction set has several hundred opcodes. The opcode length is variable, as are the lengths of operand and address displacement data. The architecture also includes the notion
of prefix bytes, which change the semantics of the subsequent instruction. Two of the prefixes (h66, h67) can affect the length of the instruction. A single instruction may have multiple prefix
bytes, but overall ...
- Formal Hardware Verification , 1996
"... ion The main problem with model checking is the state explosion problem -- the state space grows exponentially with system size. Two methods have some popularity in attacking this problem:
compositional methods and abstraction. While they cannot solve the problem in general, they do offer significa ..."
Cited by 26 (6 self)
Add to MetaCart
The main problem with model checking is the state explosion problem -- the state space grows exponentially with system size. Two methods have some popularity in attacking this problem:
compositional methods and abstraction. While they cannot solve the problem in general, they do offer significant improvements in performance. The direct method of verifying that a circuit has a
property f is to show the model M satisfies f . The idea behind abstraction is that instead of verifying property f of model M , we verify property f A of model MA and the answer we get helps us
answer the original problem. The system MA is an abstraction of the system M . One possibility is to build an abstraction MA that is equivalent (e.g. bisimilar [48]) to M . This sometimes leads to
performance advantages if the state space of MA is smaller than M . This type of abstraction would more likely be used in model comparison (e.g. as in [38]). Typically, the behaviour of an
abstraction is not equivalent...
, 1997
"... Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the
hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level s ..."
Cited by 19 (1 self)
Add to MetaCart
Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the
hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level specification but have implementations that exhibit highly nondeterministic behaviors. A
typical example of such hardware systems are processors. At the high level, the sequencing model inherent in processors is the sequential execution model. The underlying implementation, however, uses
features such as nondeterministic interface protocols, instruction pipelines, and multiple instruction issue which leads to nondeterministic behaviors. The goal is to develop a methodology with which
a designer can show that a circuit fulfills the abstract specification of the desired system behavior. The abstract specification describes the high-level behavior of the system independent of any
timing or implem...
, 2003
"... The paper presents a collection of 93 different bugs, detected in formal verification of 65 student designs that include: 1) singleissue pipelined DLX processors; 2) extensions with exceptions
and branch prediction; and 3) dual-issue superscalar implementations. The processors were described in a hi ..."
Cited by 17 (4 self)
Add to MetaCart
The paper presents a collection of 93 different bugs, detected in formal verification of 65 student designs that include: 1) single-issue pipelined DLX processors; 2) extensions with exceptions
branch prediction; and 3) dual-issue superscalar implementations. The processors were described in a high-level HDL, and were formally verified with an automatic tool flow. The bugs are analyzed and
classified, and can be used in research on microprocessor testing.
- in DATE , 2005
"... In this paper we describe a fully-automated methodology for formal verification of fused-multiply-add floating point units (FPUs). Our methodology verifies an implementation FPU against a simple
reference model derived from the processor’s architectural specification, which may include all aspects o ..."
Cited by 13 (5 self)
Add to MetaCart
In this paper we describe a fully-automated methodology for formal verification of fused-multiply-add floating point units (FPUs). Our methodology verifies an implementation FPU against a simple
reference model derived from the processor’s architectural specification, which may include all aspects of the IEEE specification including denormal operands and exceptions. Our strategy uses a
combination of BDD- and SAT-based symbolic simulation. To make this verification task tractable, we use a combination of casesplitting, multiplier isolation, and automatic model reduction techniques.
The case-splitting is defined only in terms of the reference model, which makes this approach easily portable to new designs. The methodology is directly applicable to multi-GHz industrial
implementation models (e.g., HDL or gate-level circuit representations) that contain all details of the high-performance transistorlevel model, such as aggressive pipelining, clocking, etc.
Experimental results are provided to demonstrate the computational efficiency of this approach. 1
- FORMAL METHODS IN COMPUTER-AIDED DESIGN (FMCAD '96) , 1996
"... A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE
standard for floating point arithmetic. The utility of the general specification is illustrated using a numb ..."
Cited by 11 (1 self)
Add to MetaCart
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard
for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
- In CHARME 2001, volume 2144 of LNCS , 2001
"... We report on the formal verification of the floating point unit used in the VAMP processor. The FPU is fully IEEE compliant, and supports denormals and exceptions in hardware. The supported
operations are addition, subtraction, multiplication, division, comparison, and conversions. The hardware is v ..."
Cited by 11 (6 self)
Add to MetaCart
We report on the formal verification of the floating point unit used in the VAMP processor. The FPU is fully IEEE compliant, and supports denormals and exceptions in hardware. The supported
operations are addition, subtraction, multiplication, division, comparison, and conversions. The hardware is verified on the gate level against a formal description of the IEEE standard by means of
the theorem prover PVS.
- In GI Jahrestagung 2000 , 2000
"... . This paper presents status results of a microprocessor verification project. The authors verify a complete 32-bit RISC microprocessor including the floating point unit and the control logic of
the pipeline. The paper describes a formal definition of a "correct" microprocessor. This correctness ..."
Cited by 8 (4 self)
Add to MetaCart
This paper presents status results of a microprocessor verification project. The authors verify a complete 32-bit RISC microprocessor including the floating point unit and the control logic of
pipeline. The paper describes a formal definition of a "correct" microprocessor. This correctness criterion is proven for an implementation using formal methods. All proofs are verified mechanically
by means of the theorem proving system PVS. 1 Introduction Microprocessor design is an error-prone process. With increasing complexity of current microprocessor designs, formal verification has
become crucial. In order to achieve completely verified designs, adjusting the design process itself plays an important role: the more high-level information on the design is available, the faster
the verification can be done. The authors re-designed a simple RISC processor, the DLX [1], with respect to verifiability. The design includes the complete pipe control and forwarding logic. The
- LECTURE NOTES IN COMPUTER SCIENCE , 1998
"... The floating-point(FP) division bug in Intel's Pentium processor and the overflow flag erratum of the FIST instruction in Intel's Pentium Pro and Pentium II processor have demonstrated the
importance and the difficulty of verifying FP arithmetic circuits. In this paper, we present the verificatio ..."
Cited by 8 (0 self)
Add to MetaCart
The floating-point (FP) division bug in Intel's Pentium processor and the overflow flag erratum of the FIST instruction in Intel's Pentium Pro and Pentium II processor have demonstrated the importance
and the difficulty of verifying FP arithmetic circuits. In this paper, we present the verification of FP adders with reusable specifications, using extended word-level SMV, which is improved by using
the Multiplicative Power HDDs (*PHDDs), and by incorporating conditional symbolic simulation as well as a short-circuiting technique. Based on the case analysis, the specifications of FP adders are
divided into several hundreds of implementation-independent sub-specifications. We applied our system and these specifications to verify the IEEE double precision FP adder in the Aurora III Chip at
the University of Michigan. Our system found several design errors in this FP adder and generated one counterexample for each error within several minutes. A variant of the corrected FP adder is
created to illustrate the capability of our system to handle different FP adder designs. For each of the FP adders, the verification task finished in 2 CPU hours on a Sun UltraSPARC-II server.
Understanding Series stuff
April 20th 2008, 09:12 AM #1
Now I have managed to solve the attached problem using some instructions given to me, but I really do not understand how this works.
Under what premise can I assume that $a_{n}=t^n$? Then I solve the equation $t^n = 3t^{n-1}-t^{n-2}$, and I was told that $a_{n}$ is a linear combination of the roots.
I solve this question fine; I just really do not understand any of the theory.
Could someone show me some reading material or a proof of this method?
Many thanks
Given a sequence $a_n$, let
$A(x) = \sum_{n = 0}^\infty \frac{a_n}{n!}\, x^n$
(this is called the exponential generating function of the sequence).
Note that $A'(x)$ is the exponential generating function of $a_{n+1}$ (try differentiating).
So if we have the recurrence equation $a_{n+2} = c_1 \cdot a_{n+1} + c_2 \cdot a_n$, we must have $A''(x) = c_1 \cdot A'(x) + c_2 \cdot A(x)$.
Suppose there are two different solutions to the equation $t^2 - c_1 t - c_2 = 0$ (1) (maybe complex).
Then the general solution to this differential equation is $A(x) = A \cdot e^{t_1 x} + B \cdot e^{t_2 x}$ (for some constants A and B that depend on the initial conditions), where $t_1$ and $t_2$ are the roots of equation (1).
Therefore:
$A(x) = A \cdot e^{t_1 x} + B \cdot e^{t_2 x} = A \sum_{n = 0}^\infty \frac{t_1^n}{n!}\, x^n + B \sum_{n = 0}^\infty \frac{t_2^n}{n!}\, x^n$
Thus:
$A(x) = \sum_{n = 0}^\infty \frac{A \cdot t_1^n + B \cdot t_2^n}{n!}\, x^n$
And we finally get $A \cdot t_1^n + B \cdot t_2^n = a_n$ (because of the uniqueness of the power series expansion of a function).
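Applied to the recurrence in the original post, $a_n = 3a_{n-1} - t\,a_{n-2}$ with coefficients $c_1 = 3$ and $c_2 = -1$, this gives a quick worked check of the method:
$t^2 = 3t - 1 \;\Rightarrow\; t = \frac{3 \pm \sqrt{5}}{2}, \qquad a_n = A\left(\frac{3+\sqrt{5}}{2}\right)^n + B\left(\frac{3-\sqrt{5}}{2}\right)^n$
with $A$ and $B$ fixed by the first two terms of the sequence.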
Read here to find out more about this topic: http://www.mathhelpforum.com/math-he...tionology.html
Development of Transition Pieces
Figure 8-17. Transition pieces.
The true lengths of the elements are not shown in either the top or front view, but each would be equal in length to the hypotenuse of a right triangle, having one leg equal in length to the projected element in the top view and the other leg equal to the height of the projected element in the front view. When it is necessary to find the true lengths of a number of edges, or elements, a true-length diagram is drawn adjacent to the front view. This prevents the front view from being cluttered with lines.
and the radius equal to the true length of element 0-6, draw an arc. With 7 as center and the radius equal to distance 6-7 in the top view, draw a second arc
intersecting the first at point 6. Draw element 0-6 on the development. With 0 as center and the radius equal to 8-13 the true length of element 0-5, draw an arc. With 6 as
center and the radius equal to distance 5-6 in the top view, draw a second arc intersecting the fast point 5. Draw element 0-5 on the development. This is repeated
until all the element lines are located on the development view. This development does not show a seam allowance. DEVELOPMENT OF TRANSITION PIECES Transition pieces are usually made to connect
two different forms, such as round pipes to square pipes. These transition pieces will usually fit the definition of a nondevelopable surface that must be developed by approximation. This is done by
assuming the surface to be made from a series of triangular surfaces laid side-by-side to form the development. This form of development is known as triangulation (fig. 8-17). | {"url":"http://draftingmanuals.tpub.com/14040/css/14040_139.htm","timestamp":"2014-04-21T04:32:23Z","content_type":null,"content_length":"18942","record_id":"<urn:uuid:5242e3f5-eec0-4987-91d2-ac2516a3e616>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fuzzy Expert Systems
This is the second part in a three-part series of introductory articles on the fuzzy field. The preceding article was titled "What is Fuzzy Logic?", and the next article will be titled "What is Fuzzy Control?".
One point I didn't make in my previous article, "What is Fuzzy Logic", is that in practice, the terms fuzzy subset and membership function get used nearly interchangeably. I'll probably slip up and
swap back and forth some - my apologies in advance.
What is a Fuzzy Expert System?
Put as simply as possible, a fuzzy expert system is an expert system that uses fuzzy logic instead of Boolean logic. In other words, a fuzzy expert system is a collection of membership functions and
rules that are used to reason about data. Unlike conventional expert systems, which are mainly symbolic reasoning engines, fuzzy expert systems are oriented toward numerical processing.
The rules in a fuzzy expert system are usually of a form similar to the following:
if x is low and y is high then z = medium
where x and y are input variables (names for know data values), z is an output variable (a name for a data value to be computed), low is a membership function (fuzzy subset) defined on x, high is a
membership function defined on y, and medium is a membership function defined on z. The part of the rule between the "if" and "then" is the rule's _premise_ or _antecedent_. This is a fuzzy logic
expression that describes to what degree the rule is applicable. The part of the rule following the "then" is the rule's _conclusion_ or _consequent_. This part of the rule assigns a membership
function to each of one or more output variables. Most tools for working with fuzzy expert systems allow more than one conclusion per rule.
A typical fuzzy expert system has more than one rule. The entire group of rules is collectively known as a _rulebase_ or _knowledge base_.
The Inference Process
With the definition of the rules and membership functions in hand, we now need to know how to apply this knowledge to specific values of the input variables to compute the values of the output
variables. This process is referred to as _inferencing_. In a fuzzy expert system, the inference process is a combination of four subprocesses: _fuzzification_, _inference_, _composition_, and
_defuzzification_. The defuzzification subprocess is optional.
For the sake of example in the following discussion, assume that the variables x, y, and z all take on values in the interval [ 0, 10 ], and that we have the following membership functions and rules
low(t) = 1 - t / 10
high(t) = t / 10
rule 1: if x is low and y is low then z is high
rule 2: if x is low and y is high then z is low
rule 3: if x is high and y is low then z is low
rule 4: if x is high and y is high then z is high
Notice that instead of assigning a single value to the output variable z, each rule assigns an entire fuzzy subset (low or high).
1. In this example, low(t)+high(t)=1.0 for all t. This is not required, but it is fairly common.
2. The value of t at which low(t) is maximum is the same as the value of t at which high(t) is minimum, and vice-versa. This is also not required, but fairly common.
3. The same membership functions are used for all variables. This isn't required, and is also *not* common.
In the fuzzification subprocess, the membership functions defined on the input variables are applied to their actual values, to determine the degree of truth for each rule premise. The degree of
truth for a rule's premise is sometimes referred to as its _alpha_. If a rule's premise has a nonzero degree of truth (if the rule applies at all...) then the rule is said to _fire_.
For example:
x y low(x) high(x) low(y) high(y) alpha1 alpha2 alpha3 alpha4
0.0 0.0 1.0 0.0 1.0 0.0 1.0 0.0 0.0 0.0
0.0 3.2 1.0 0.0 0.68 0.32 0.68 0.32 0.0 0.0
0.0 6.1 1.0 0.0 0.39 0.61 0.39 0.61 0.0 0.0
0.0 10.0 1.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0
3.2 0.0 0.68 0.32 1.0 0.0 0.68 0.0 0.32 0.0
6.1 0.0 0.39 0.61 1.0 0.0 0.39 0.0 0.61 0.0
10.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 1.0 0.0
3.2 3.1 0.68 0.32 0.69 0.31 0.68 0.31 0.32 0.31
3.2 3.3 0.68 0.32 0.67 0.33 0.67 0.33 0.32 0.32
10.0 10.0 0.0 1.0 0.0 1.0 0.0 0.0 0.0 1.0
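A minimal sketch of the fuzzification step in code, using the membership functions and rules defined above (the fuzzy AND is taken as min, as is conventional; the function names are mine):

    def low(t):
        return 1 - t / 10.0

    def high(t):
        return t / 10.0

    def alphas(x, y):
        """Premise degrees of truth for the four example rules."""
        return (min(low(x),  low(y)),    # rule 1: x low  and y low
                min(low(x),  high(y)),   # rule 2: x low  and y high
                min(high(x), low(y)),    # rule 3: x high and y low
                min(high(x), high(y)))   # rule 4: x high and y high

    print(alphas(0.0, 3.2))  # (0.68, 0.32, 0.0, 0.0), matching the table row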
In the inference subprocess, the truth value for the premise of each rule is computed, and applied to the conclusion part of each rule. This results in one fuzzy subset to be assigned to each output
variable for each rule.
I've only seen two _inference methods_ or _inference rules_: _MIN_ and _PRODUCT_. In MIN inferencing, the output membership function is clipped off at a height corresponding to the rule premise's
computed degree of truth. This corresponds to the traditional interpretation of the fuzzy logic AND operation. In PRODUCT inferencing, the output membership function is scaled by the rule premise's
computed degree of truth.
Due to the limitations of posting this as raw ASCII, I can't draw you a decent diagram of the results of these methods. Therefore I'll give the example results in the same functional notation I used
for the membership functions above.
For example, let's look at rule 1 for x = 0.0 and y = 3.2. As shown in the table above, the premise degree of truth works out to 0.68. For this rule, MIN inferencing will assign z the fuzzy subset
defined by the membership function:
rule1(z) = { z / 10, if z <= 6.8
0.68, if z >= 6.8 }
For the same conditions, PRODUCT inferencing will assign z the fuzzy subset defined by the membership function:
rule1(z) = 0.68 * high(z)
= 0.068 * z
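In code (again my sketch, not the FAQ author's), the two inference methods differ only in how the premise's degree of truth is applied to the output membership function:

#include <math.h>    /* fmin; on glibc, link with -lm */
#include <stdio.h>

static double high(double t) { return t / 10.0; }

/* Rule 1's output fuzzy subset when the premise degree of truth is 0.68: */
static double rule1_min(double z)  { return fmin(0.68, high(z)); }  /* MIN: clip at 0.68 */
static double rule1_prod(double z) { return 0.68 * high(z); }       /* PRODUCT: scale by 0.68 */

int main(void)
{
    printf("z = 5: MIN %.2f, PRODUCT %.2f\n", rule1_min(5.0), rule1_prod(5.0));  /* 0.50, 0.34 */
    return 0;
}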
Note: I'm using slightly nonstandard terminology here. In most texts, the term "inference method" is used to mean the combination of the things I'm referring to separately here as "inference" and
"composition." Therefore, you'll see terms such as "MAX-MIN inference" and "SUM-PRODUCT inference" in the literature. They mean the combination of MAX composition and MIN inference, or SUM
composition and PRODUCT inference respectively, to use my terminology. You'll also see the reverse terms "MIN-MAX" and "PRODUCT-SUM" - these mean the same things as the reverse order. I think it's
clearer to describe the two processes separately.
In the composition subprocess, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable.
I'm familiar with two _composition rules_: _MAX composition_ and _SUM composition_. In MAX composition, the combined output fuzzy subset is constructed by taking the pointwise maximum over all of the
fuzzy subsets assigned to the output variable by the inference rule. In SUM composition the combined output fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy subsets
assigned to the output variable by the inference rule. Note that this can result in truth values greater than one! For this reason, SUM composition is only used when it will be followed by a
defuzzification method, such as the CENTROID method, that doesn't have a problem with this odd case.
For example, assume x = 0.0 and y = 3.2. MIN inferencing would assign the following four fuzzy subsets to z:
rule1(z) = { z / 10, if z <= 6.8
0.68, if z >= 6.8 }
rule2(z) = { 0.32, if z <= 6.8
1 - z / 10, if z >= 6.8 }
rule3(z) = 0.0
rule4(z) = 0.0
MAX composition would result in the fuzzy subset:
fuzzy(z) = { 0.32, if z <= 3.2
z / 10, if 3.2 <= z <= 6.8
0.68, if z >= 6.8 }
PRODUCT inferencing would assign the following four fuzzy subsets to z:
rule1(z) = 0.068 * z
rule2(z) = 0.32 - 0.032 * z
rule3(z) = 0.0
rule4(z) = 0.0
SUM composition would result in the fuzzy subset:
fuzzy(z) = 0.32 + 0.036 * z
Sometimes it is useful to just examine the fuzzy subsets that are the result of the composition process, but more often, this _fuzzy value_ needs to be converted to a single number - a _crisp value_.
This is what the defuzzification subprocess does.
There are more defuzzification methods than you can shake a stick at. A couple of years ago, Mizumoto did a short paper that compared roughly thirty defuzzification methods. Two of the more common
techniques are the CENTROID and MAXIMUM methods. In the CENTROID method, the crisp value of the output variable is computed by finding the variable value of the center of gravity of the membership
function for the fuzzy value. In the MAXIMUM method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable. There are
several variations of the MAXIMUM method that differ only in what they do when there is more than one variable value at which this maximum truth value occurs. One of these, the AVERAGE-OF-MAXIMA
method, returns the average of the variable values at which the maximum truth value occurs.
For example, go back to our previous examples. Using MAX-MIN inferencing and AVERAGE-OF-MAXIMA defuzzification results in a crisp value of 8.4 for z. Using PRODUCT-SUM inferencing and CENTROID
defuzzification results in a crisp value of 5.6 for z.
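Both numbers can be checked with a brute-force sketch (mine; crude Riemann sums over z in [0, 10], compile with -lm):

#include <math.h>
#include <stdio.h>

static double low(double t)  { return 1.0 - t / 10.0; }
static double high(double t) { return t / 10.0; }

/* Composed output subsets for x = 0.0, y = 3.2 (alphas 0.68 and 0.32). */
static double max_min(double z)  { return fmax(fmin(0.68, high(z)), fmin(0.32, low(z))); }
static double sum_prod(double z) { return 0.68 * high(z) + 0.32 * low(z); }

int main(void)
{
    double dz = 1e-4, num = 0.0, den = 0.0, peak = 0.0, zsum = 0.0;
    long   n = 0;

    for (double z = 0.0; z <= 10.0; z += dz) {
        num += z * sum_prod(z) * dz;               /* centroid numerator   */
        den += sum_prod(z) * dz;                   /* centroid denominator */
        if (max_min(z) > peak) peak = max_min(z);  /* height of the plateau */
    }
    for (double z = 0.0; z <= 10.0; z += dz)
        if (max_min(z) >= peak - 1e-9) { zsum += z; n++; }  /* collect the maxima */

    printf("PRODUCT-SUM + CENTROID:      %.2f\n", num / den);  /* 5.6 */
    printf("MAX-MIN + AVERAGE-OF-MAXIMA: %.2f\n", zsum / n);   /* 8.4 */
    return 0;
}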
Note: sometimes the composition and defuzzification processes are combined, taking advantage of mathematical relationships that simplify the process of computing the final output variable values.
After all this ...
Where are Fuzzy Expert Systems Used?
To date, fuzzy expert systems are the most common use of fuzzy logic. They are used in several wide-ranging fields, including:
• Linear and nonlinear control.
• Pattern recognition.
• Financial systems.
and many others I can't think of. It's late. I'm going home! :-)
Erik Horstkotte, Togai InfraLogic, Inc.
The World's Source for Fuzzy Logic Solutions (The company, not me!)
erik@til.com, gordius!til!erik - (714) 975-8522
info@til.com for info, fuzzy-server@til.com for fuzzy mail-server
| {"url":"http://www.austinlinks.com/Fuzzy/expert-systems.html","timestamp":"2014-04-18T00:56:09Z","content_type":null,"content_length":"13856","record_id":"<urn:uuid:eee2de16-8d0d-4f2f-92bd-fa2ce0ee4d7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
using the Lagrange Theorem
July 29th 2010, 10:58 AM #1
Junior Member
Sep 2009
using the Lagrange Theorem
Suppose G is a group of order 48 and H is a subgroup of order 12, then how many distinct right cosets of H are there in G?
I thought it was 6. Am I right? If not, can you show me which way to go?
Lagrange's theorem: if $G$ is a finite group and $H\leq G$ then $|G|=[G:H]\cdot |H|$ .
Now you can see your mistake.
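(Spelled out: the number of distinct right cosets is the index $[G:H] = |G|/|H| = 48/12 = 4$, not 6.)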
July 29th 2010, 01:09 PM #2
Oct 2009 | {"url":"http://mathhelpforum.com/differential-geometry/152325-using-lagrange-theorem.html","timestamp":"2014-04-17T16:02:51Z","content_type":null,"content_length":"33263","record_id":"<urn:uuid:e13b1234-9e4f-4e7e-b068-4cbb4fb528c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colonia Statistics Tutor
...I earned my Bachelor of Arts in Sociology from The College of New Jersey in May 2013. Since graduation, I have worked at a Non Profit Organization in Central Jersey, but am now looking to
pursue my passion for education. I have completed numerous hours of volunteer work which has included tutoring students in a variety of subjects and working with elementary aged children.
49 Subjects: including statistics, Spanish, English, reading
...I then analyze the discussion to find areas in the students' foundation that may need more work. Group rates can be offered for most situations, but if you are interested in learning any of the
above subjects for the first time, it may be helpful to consider the small group dynamic! STANDARDIZE...
34 Subjects: including statistics, chemistry, physics, calculus
...I am very familiar with the format of the GMAT and can efficiently prepare any student to succeed on the test. I started playing volleyball in high school in a structured and competitive
environment. After high school, I played in advanced organized leagues on the Jersey Shore and Northern New Jersey for over 10 years.
23 Subjects: including statistics, English, writing, geometry
Over the past 3 years I have tutored both through WyzAnt and in tutoring centers in northern New Jersey. I tutor mainly in mathematics, and I hold a Bachelor's in Mathematics from Rutgers
University. I find that the most detrimental thing to a mathematics student is lacking a core foundation.
21 Subjects: including statistics, calculus, geometry, accounting
...In addition, I have several years of experience in the areas of applied microbiology and plant physiology. I taught the Portals to Academic Study Success (PASS) course at Rutgers for two years.
This course focuses on time management and study skills for freshmen undergraduate students but most all of the skills would be relevant to high school and younger students as well.
27 Subjects: including statistics, chemistry, English, SAT reading | {"url":"http://www.purplemath.com/Colonia_statistics_tutors.php","timestamp":"2014-04-20T14:06:44Z","content_type":null,"content_length":"24107","record_id":"<urn:uuid:acd1c8f5-e8a9-4e9f-a358-095114edeb83>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Why would one prefer ZFC to ZC?
Harvey Friedman friedman at math.ohio-state.edu
Tue Jan 26 20:21:09 EST 2010
On Jan 26, 2010, at 5:09 PM, Jeremy Bem wrote:
> Why would someone prefer ZFC? [over ZC]
One reason to consider:
THEOREM. Every Borel set in the Euclidean plane = RxR, symmetric about
the line y = x, contains or is disjoint from the graph of a Borel
function from R into R.
The above theorem is provable in ZFC but not in ZC.
Harvey Friedman
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2010-January/014337.html","timestamp":"2014-04-20T16:39:26Z","content_type":null,"content_length":"2811","record_id":"<urn:uuid:6f589928-4a49-407a-b086-a4639677985f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
User Albert harold
Nov Weak amenability and quasi central bounded approximate identity
7 comment Thank u so much Alvin, You really helped me. It was true!
7 awarded Commentator
Nov Bounded approximate identity and kernel of algebra homomorphism
7 comment Every ideal with quasi-central bounded approximate identity in weakly amenable Banach algebras, has trace extension property, so it is weakly amenable!
6 accepted Bounded approximate identity and kernel of algebra homomorphism
Nov Weak amenability and quasi central bounded approximate identity
6 revised added 5 characters in body
6 asked Weak amenability and quasi central bounded approximate identity
Nov Bounded approximate identity and kernel of algebra homomorphism
6 comment No, I want to ker(T) has a bounded approximate identity. In particular, when A is weakly amenable and has a quasi-central bounded approximate identity, I want to see ker(T) has a bounded
approximate identity!
Nov Bounded approximate identity and kernel of algebra homomorphism
5 comment Thanks ALVIN. $\cal A$ is not amenable, but $\cal A$ is weakly amenable and has a quasi-central bounded approximate identity. What do you think about it?
Nov Bounded approximate identity and kernel of algebra homomorphism
5 revised added 4 characters in body
Nov Bounded approximate identity and kernel of algebra homomorphism
5 revised added 44 characters in body
4 asked Bounded approximate identity and kernel of algebra homomorphism
28 accepted When can we “displace” an ultrafilter limit with another limit?
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment If we can prove above notion, we see that for every $\phi\in \Delta_{\cal A}$, $\phi$-amenability is equivalent to ultra $\phi$-amenability. We say $\cal A$ is ultra $\phi$-amenable if
for every ultrafilter $\cal U$, $(\cal A)_{\cal U}$ is $(\phi)_{\cal U}$-amenable.
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment And I mean that $a_i\in \cal A$ for all $i\in F$ and $\cal U$ is a free ultrafilter on the index set $F$, and both nets are bounded. Thanks....
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment Yes, I want to study ultra $\phi$-amenability and ultra character amenability. I want to show that if $\phi\in \Delta_{\cal A}$, and $\cal A$ is $\phi$-amenable, then $(\cal A)_{\cal U}$
is $(\phi)_{\cal U}$-amenable, for every ultrafilter $\cal U$. But I confront to interchanging limits.
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment In uniformly convergence, we can displace limit operators. My goal is only inform that is this true when one of the limits is the ultrafilter limit? (because behavior of ultrafilter
limit, is different from the other limits.), thank you so much.
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment It isnt for master theses!
Jun When can we “displace” an ultrafilter limit with another limit?
27 revised deleted 12 characters in body
Jun When can we “displace” an ultrafilter limit with another limit?
27 comment In which cases this notion can be true?
Jun asked When can we “displace” an ultrafilter limit with another limit? | {"url":"http://mathoverflow.net/users/27066/albert-harold?tab=activity","timestamp":"2014-04-21T15:31:36Z","content_type":null,"content_length":"45802","record_id":"<urn:uuid:07569ce6-d0cd-4a3b-87f8-ec0aada07034>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do you find the area of an acute triangle when you're only given the side lengths????
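One standard answer is Heron's formula: with sides a, b, c, let s = (a + b + c)/2; then Area = sqrt(s(s - a)(s - b)(s - c)). For a 3-4-5 triangle, for instance, s = 6 and Area = sqrt(6*3*2*1) = 6.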
| {"url":"http://openstudy.com/updates/4dfb6c930b8bbe4f12e6433f","timestamp":"2014-04-16T22:45:42Z","content_type":null,"content_length":"45509","record_id":"<urn:uuid:20e72e7e-00ba-419a-abd7-f4576429c1e7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
On the False Discovery Rate and an Asymptotically Optimal Rejection Curve
Helmut Finner, Thorsten Dickhaus and Markus Roters
The Annals of Statistics, Volume 37, Number 2, 2009. ISSN 0090-5364
In this paper we introduce and investigate a new rejection curve for asymptotic control of the false discovery rate (FDR) in multiple hypotheses testing problems. We first give a heuristic motivation
for this new curve and propose some procedures related to it. Then we introduce a set of possible assumptions and give a unifying short proof of FDR control for procedures based on Simes’ critical
values, whereby certain types of dependency are allowed. This methodology of proof is then applied to other fixed rejection curves including the proposed new curve. Among others, we investigate the
problem of finding least favorable parameter configurations such that the FDR becomes largest. We then derive a series of results concerning asymptotic FDR control for procedures based on the new
curve and discuss several example procedures in more detail. A main result will be an asymptotic optimality statement for various procedures based on the new curve in the class of fixed rejection
curves. Finally, we briefly discuss strict FDR control for a finite number of hypotheses.
| {"url":"http://eprints.pascal-network.org/archive/00006797/","timestamp":"2014-04-18T15:42:49Z","content_type":null,"content_length":"8312","record_id":"<urn:uuid:ec288534-1f9c-4575-821d-d77fe22efd7a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic statistical analysis in genetic case-control studies
This protocol describes how to perform basic statistical analysis in a population-based genetic association case-control study. The steps described involve the (i) appropriate selection of measures
of association and relevance of disease models; (ii) appropriate selection of tests of association; (iii) visualization and interpretation of results; (iv) consideration of appropriate methods to
control for multiple testing; and (v) replication strategies. Assuming no previous experience with software such as PLINK, R or Haploview, we describe how to use these popular tools for handling
single-nucleotide polymorphism data in order to carry out tests of association and visualize and interpret results. This protocol assumes that data quality assessment and control has been performed,
as described in a previous protocol, so that samples and markers deemed to have the potential to introduce bias to the study have been identified and removed. Study design, marker selection and
quality control of case-control studies have also been discussed in earlier protocols. The protocol should take ~1 h to complete. | {"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3154648/?lang=en-ca","timestamp":"2014-04-20T05:56:39Z","content_type":null,"content_length":"160352","record_id":"<urn:uuid:3754ca24-1b88-4000-87c0-f913b1784335>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: scalar question
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: scalar question
From "Michael Blasnik" <michael.blasnik@verizon.net>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: scalar question
Date Fri, 21 Jul 2006 16:04:37 -0400
I don't think this suggested approach will handle missing values properly since it will treat them as zeros added to the row sums and give no indication of a reduced number of observations.
Getting back to the original approach. I'm not sure if you wouldn't be better off with a reshape long to get what you want, but if you keep with the current approach, I would change the middle of the
loop to :
summarize V`i', meanonly
scalar average_V`i'= r(mean)
scalar z= z + cond( average_V`i'<. , average_V`i',0)
The cond() function will avoid adding a missing value into your sum (but it will replace it with 0). You may want to count how many missing values you end up with (in another scalar counter). I also
added the meanonly option to the summarize command to make it faster.
Michael Blasnik
----- Original Message ----- "Radu Ban" <raduban@gmail.com> wrote
you can try the -egen rsum- command and then take the mean, as the sum
of means is equal to mean of sums. for example:
egen rowsum = rsum(V*)
summarize rowsum
scalar z = r(mean)
drop rowsum
2006/7/21, Jeffrey W Ladewig <jeffrey.ladewig@uconn.edu>:
I am using a simple while loop statement (see below) to add the mean of each
variable in a series. The program runs fine except if one of the variables
contains all missing values (there are reasons why I need to keep the
variables). If, for instance, the 500th variable contains all missing
values, then the scalar (i.e., average_V`i') for the 500th variable equals a
missing value (of course), but the additive scalar (i.e., z) from that point
forward only reports missing values. That is, my additive scalar stops
adding. I have been programming a bypass around each of these problematic
variables, but is there a command or something that I could use instead?
scalar z = 0
local i = 1
while `i' <= 1000 {
summarize V`i'
scalar average_V`i'= r(mean)
scalar z= z + average_V`i'
local i = `i' + 1
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-07/msg00665.html","timestamp":"2014-04-17T00:54:07Z","content_type":null,"content_length":"8422","record_id":"<urn:uuid:25acf069-6561-4fb5-b3bb-3d9344033ed8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Re: How teaching factors rather than multiplicand & multiplier
confuses kids!
Replies: 1 Last Post: Nov 9, 2012 2:14 PM
Re: How teaching factors rather than multiplicand & multiplier confuses kids!
Posted: Nov 9, 2012 2:14 PM
On Nov 9, 2012, at 1:31 PM, Joe Niederberger <niederberger@comcast.net> wrote:
> And besides, do you really need a specific point in time to "ascend" to the symbolic level? Like Monday at 3?
Symbolic? I was just talking about the number level. You know, what is the square root of 256 or why does 1/7 result in a repeating decimal or why is a number divisible by 4 when the last two digits
are divisible by 4. I don't think you can cover those things, and a hundred other such things, with bananas and apples.
Bob Hansen
| {"url":"http://mathforum.org/kb/message.jspa?messageID=7920712","timestamp":"2014-04-16T16:39:22Z","content_type":null,"content_length":"18893","record_id":"<urn:uuid:cb69b6ef-c5ea-4387-bae9-ed210458498f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: Linux Programmer's Manual (3)
Updated: 2008-08-11
Index Return to Main Contents
feclearexcept, fegetexceptflag, feraiseexcept, fesetexceptflag, fetestexcept, fegetenv, fegetround, feholdexcept, fesetround, fesetenv, feupdateenv, feenableexcept, fedisableexcept, fegetexcept -
floating-point rounding and exception handling
#include <fenv.h>
int feclearexcept(int excepts);
int fegetexceptflag(fexcept_t *flagp, int excepts);
int feraiseexcept(int excepts);
int fesetexceptflag(const fexcept_t *flagp, int excepts);
int fetestexcept(int excepts);
int fegetround(void);
int fesetround(int rounding_mode);
int fegetenv(fenv_t *envp);
int feholdexcept(fenv_t *envp);
int fesetenv(const fenv_t *envp);
int feupdateenv(const fenv_t *envp);
These eleven functions were defined in C99, and describe the handling of floating-point rounding and exceptions (overflow, zero-divide etc.).
The divide-by-zero exception occurs when an operation on finite numbers produces infinity as the exact answer.
The overflow exception occurs when a result has to be represented as a floating-point number, but has (much) larger absolute value than the largest (finite) floating-point number that is
The underflow exception occurs when a result has to be represented as a floating-point number, but has smaller absolute value than the smallest positive normalized floating-point number (and would
lose much accuracy when represented as a denormalized number).
The inexact exception occurs when the rounded result of an operation is not equal to the infinite precision result. It may occur whenever overflow or underflow occurs.
The invalid exception occurs when there is no well-defined result for an operation, as for 0/0 or infinity - infinity or sqrt(-1).
Exception handling
Exceptions are represented in two ways: as a single bit (exception present/absent), and these bits correspond in some implementation-defined way with bit positions in an integer, and also as an
opaque structure that may contain more information about the exception (perhaps the code address where it occurred).
Each of the macros FE_DIVBYZERO, FE_INEXACT, FE_INVALID, FE_OVERFLOW, FE_UNDERFLOW is defined when the implementation supports handling of the corresponding exception, and if so then defines the
corresponding bit(s), so that one can call exception handling functions, for example, using the integer argument FE_OVERFLOW|FE_UNDERFLOW. Other exceptions may be supported. The macro FE_ALL_EXCEPT
is the bitwise OR of all bits corresponding to supported exceptions.
The feclearexcept() function clears the supported exceptions represented by the bits in its argument.
The fegetexceptflag() function stores a representation of the state of the exception flags represented by the argument excepts in the opaque object *flagp.
The feraiseexcept() function raises the supported exceptions represented by the bits in excepts.
The fesetexceptflag() function sets the complete status for the exceptions represented by excepts to the value *flagp. This value must have been obtained by an earlier call of fegetexceptflag() with
a last argument that contained all bits in excepts.
The fetestexcept() function returns a word in which the bits are set that were set in the argument excepts and for which the corresponding exception is currently set.
Rounding mode
The rounding mode determines how the result of floating-point operations is treated when the result cannot be exactly represented in the significand. Various rounding modes may be provided: round to
nearest (the default), round up (towards positive infinity), round down (towards negative infinity), and round towards zero.
Each of the macros FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, and FE_TOWARDZERO is defined when the implementation supports getting and setting the corresponding rounding direction.
The fegetround() function returns the macro corresponding to the current rounding mode.
The fesetround() function sets the rounding mode as specified by its argument and returns zero when it was successful.
C99 and POSIX.1-2008 specify an identifier, FLT_ROUNDS, defined in <float.h>, which indicates the implementation-defined rounding behavior for floating-point addition. This identifier has one of the
following values:
-1 The rounding mode is not determinable.
0 Rounding is towards 0.
1 Rounding is towards nearest number.
2 Rounding is towards positive infinity.
3 Rounding is towards negative infinity.
Other values represent machine-dependent, non-standard rounding modes.
The value of FLT_ROUNDS should reflect the current rounding mode as set by fesetround() (but see BUGS).
Floating-point environment
The entire floating-point environment, including control modes and status flags, can be handled as one opaque object, of type fenv_t. The default environment is denoted by FE_DFL_ENV (of type const fenv_t *). This is the environment setup at program start and it is defined by ISO C to have round to nearest, all exceptions cleared and a non-stop (continue on exceptions) mode.
The fegetenv() function saves the current floating-point environment in the object *envp.
The feholdexcept() function does the same, then clears all exception flags, and sets a non-stop (continue on exceptions) mode, if available. It returns zero when successful.
The fesetenv() function restores the floating-point environment from the object *envp. This object must be known to be valid, for example, the result of a call to fegetenv() or feholdexcept() or
equal to FE_DFL_ENV. This call does not raise exceptions.
The feupdateenv() function installs the floating-point environment represented by the object *envp, except that currently raised exceptions are not cleared. After calling this function, the raised
exceptions will be a bitwise OR of those previously set with those in *envp. As before, the object *envp must be known to be valid.
These functions return zero on success and non-zero if an error occurred.
These functions first appeared in glibc in version 2.1.
IEC 60559 (IEC 559:1989), ANSI/IEEE 854, C99, POSIX.1-2001.
Glibc Notes
If possible, the GNU C Library defines a macro FE_NOMASK_ENV which represents an environment where every exception raised causes a trap to occur. You can test for this macro using #ifdef. It is only defined if _GNU_SOURCE is defined. The C99 standard does not define a way to set individual bits in the floating-point mask, for example, to trap on specific flags. glibc 2.2 supports the functions feenableexcept() and fedisableexcept() to set individual floating-point traps, and fegetexcept() to query the state.
#define _GNU_SOURCE
#include <fenv.h>
int feenableexcept(int excepts);
int fedisableexcept(int excepts);
int fegetexcept(void);
The feenableexcept() and fedisableexcept() functions enable (disable) traps for each of the exceptions represented by excepts and return the previous set of enabled exceptions when successful, and -1
otherwise. The fegetexcept() function returns the set of all currently enabled exceptions.
Bugs
C99 specifies that the value of FLT_ROUNDS should reflect changes to the current rounding mode, as set by fesetround(). Currently, this does not occur: FLT_ROUNDS always has the value 1.
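Example
By way of illustration, a minimal test program might look like this (the sketch is mine, not part of the original page; on glibc, link with -lm):

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;   /* volatile keeps the division from being folded at compile time */

    feclearexcept(FE_ALL_EXCEPT);
    double x = 1.0 / zero;        /* raises the divide-by-zero exception */
    if (fetestexcept(FE_DIVBYZERO))
        printf("FE_DIVBYZERO raised, x = %g\n", x);

    if (fesetround(FE_DOWNWARD) == 0)   /* round towards negative infinity */
        printf("rounding mode set to FE_DOWNWARD\n");
    return 0;
}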
This page is part of release 3.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
| {"url":"https://www.linux.com/learn/docs/man/fesetenv3","timestamp":"2014-04-17T00:12:33Z","content_type":null,"content_length":"48040","record_id":"<urn:uuid:6c0c7ee0-a21a-4223-b1f2-5cf905a7cead>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Hammond, IN Calculus Tutor
Find a Hammond, IN Calculus Tutor
...I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in
Greece. Thus I bring first hand knowledge to your history studies.
41 Subjects: including calculus, chemistry, physics, English
...I am licensed in both math and physics at the high school level. I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus,
advanced placement calculus, integrated chemistry/physics, and physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course.
12 Subjects: including calculus, physics, geometry, algebra 1
...I am currently a student at Purdue University Calumet, and I am majoring in math education. One day I plan on being a high school math teacher. I specifically I want to tutor in math for
elementary, middle, or high school students.
9 Subjects: including calculus, algebra 2, precalculus, elementary (k-6th)
Hello to all students, I am Ed, a college graduate with a Bachelor of Science degree. I have instructed grammar and high school children in the math and science fields before and helped them
acquire the knowledge tools for them to be successful in higher school courses in college. I feel that this...
14 Subjects: including calculus, chemistry, geometry, algebra 1
I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of
advance mathematical concepts. (Linear Algebra, Calculus, Differential Equations) I create an individualized...
12 Subjects: including calculus, geometry, algebra 1, trigonometry | {"url":"http://www.purplemath.com/Hammond_IN_calculus_tutors.php","timestamp":"2014-04-18T03:56:43Z","content_type":null,"content_length":"23988","record_id":"<urn:uuid:1cedc392-8685-4e35-90f3-a3ead5759775>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Psycho Punk
Joined: 19 Aug 2003
Total posts: 19610
Location: Dublin
Gender: Male
Posted: 17-12-2013 21:24 Post subject: Early Polynesians used binary to ease mental arithmetic
Early Polynesians used binary to ease mental arithmetic
19:00 16 December 2013 by Jacob Aron
Polynesian islanders spoke the language of computers centuries before the first programmer was born. It seems that inhabitants of Mangareva island in French Polynesia created their own particular
hybrid of decimal and binary number systems to do mental arithmetic.
The binary system can represent any number using a string of 1s and 0s. Though it is used by computers today, it was first described by the 17th century mathematician Gottfried Leibniz.
First you create a series of columns, each one devoted to a different power of 2, starting with 1 (2^0), followed by 2 (2^1), 4 (2^2), 8 (2^3) and so on. You then put 1s in the columns needed to make up
the target number and 0s in the rest. So 2 is represented by 1 0, while 8 is represented by 1 0 0 0, and 13 is represented by 1 1 0 1, and so on.
Andrea Bender and Sieghard Beller at the University of Bergen in Norway studied the Mangarevan language, which dates back to AD 1500, or even earlier. The pair say that, as in the decimal system,
there are Mangarevan words for the numbers 1 through 9. Beyond those, the islanders only had words for 10 (takau), 20 (paua), 40 (tataua) and 80 (varu) – the binary powers multiplied by 10. So they
used the binary system to count in 10s, but added 1 to 9 in the normal way.
Numerical mastery
For example, instead of the widespread system of adding the tens column in the sum 73+80 by remembering that 7+8=15, under the Mangarevan system you combine "forty twenty ten three" with "eighty" to
get "eighty forty twenty ten three", or 153.
In this way, the binary component of their counting system may have simplified calculations, since you only need to remember how to combine four numbers when counting in 10s. "They invented these
binary steps, which make calculations easier," Bender suggests.
She says the system may originally have come about because the islanders tallied culturally important items such as coconuts and octopuses in groups of 1, 2, 4 and 8. But Mangarevans also traded
items over long distances – including as far as Hawaii – and in bulk, and so would have needed a way to efficiently count much larger numbers.
Bender's team is particularly intrigued by the way that the islanders combined the two number systems. "Mangarevans had found a way to compensate for the downsides of a purely binary system by mixing
decimal and binary steps in a well-balanced manner, thus demonstrating numerical mastery on an advanced level," they write. "Mangarevan deserves a prominent position in theorising on numerical
Mangarevan is still spoken by a very few people today, but with only 600 speakers left as of 2011, it is classified as "in trouble". And modern speakers no longer use the old counting system.
Journal reference: PNAS, DOI: 10.1073/pnas.1309160110
Additional reporting by Celeste Biever and Victoria Jaggard | {"url":"http://www.forteantimes.com/forum/viewtopic.php?p=1377334","timestamp":"2014-04-17T00:48:41Z","content_type":null,"content_length":"24085","record_id":"<urn:uuid:62e24f06-a40c-44ed-b9da-f74ca9ffb189>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Awi Federgruen
Allocation policies and cost approximations for multilocation inventory systems
Coauthor(s): Paul Zipkin.
Consider a central depot that supplies several locations experiencing random demands. Periodically, the depot may place an order for exogenous supply. Orders arrive after a fixed leadtime, and are
then allocated among the several locations. Each allocation reaches its destination after a further delay. We consider the special case where the penalty-cost/holding-cost ratio is constant over the
locations. Several approaches are given to approximate the dynamic program describing the problem. Each approach provides both a near-optimal order policy and an approximation of the optimal cost of
the original problem. In addition, simple but effective allocation policies are discussed.
Source: Naval Research Logistics
Exact Citation:
Federgruen, Awi, and Paul Zipkin. "Allocation policies and cost approximations for multilocation inventory systems." Naval Research Logistics 31, no. 1 (1984): 97-129.
Volume: 31
Number: 1
Pages: 97-129
Date: 1984 | {"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=af7&pub=4024","timestamp":"2014-04-16T13:32:43Z","content_type":null,"content_length":"4139","record_id":"<urn:uuid:9d3b6331-868d-4e32-bf36-747f5d40423b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric Formulas
Formulas wouldn't tell us squat if we couldn't then turn around and apply them to situations in the real world. Fortunately, we can do just that. Formulas can help us figure out how to deal with,
plan for or manipulate objects of all different shapes and sizes, both two-dimensional and three-dimensional. The two-dimensional ones are easier, as you may imagine—let's start there. We're not a
throw-you-in-the-deep-end-of-the-pool-to-teach-you-to-swim kind of people.
Two-dimensional shapes appear a lot in the world. A soccer field is a rectangle, the body of a bicycle may be a triangle, and some cakes have circular tops. These are all three-dimensional objects,
but with two-dimensional parts. Practical considerations aside, two-dimensional shapes are good for math problems because we can draw them on paper. Because most paper that we know of is more or less flat.
We will go through some familiar shapes in the next few pages. For each shape we will give you a formula for the perimeter, meaning the distance around the outside of the shape, and a formula for the
area, meaning the size of the surface covered by the shape. Perimeter is measured in units of length such as inches, feet or miles. Area is measured in units of length squared, such as square inches,
square feet or square miles. Perimeter usually works better in poetry. For example, Robert Frost never would have been such a hit if he'd written that he had "square miles to go before I sleep." | {"url":"http://www.shmoop.com/algebraic-expressions/geometric-formulas-help.html","timestamp":"2014-04-18T08:39:29Z","content_type":null,"content_length":"33916","record_id":"<urn:uuid:07156d3f-d791-47bd-89dd-74ebf72cb6e2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal probability calculations
Let X~N(0,1). Compute each in terms of the function Φ. (0 being μ and 1 being σ²)
And evaluate it numerically.
1. P(X<=-5)
2. P(-2<=X<=7)
3. P(X>=3)
for the first one I get $P(X\le -5)=\Phi(-5)$.
But how do I evaluate it?
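For the numbers: $\Phi(-5)\approx 2.9\times 10^{-7}$, $P(-2\le X\le 7)=\Phi(7)-\Phi(-2)\approx 0.9772$, and $P(X\ge 3)=1-\Phi(3)\approx 0.0013$ (read off a standard normal table or computed in software).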
this is a wiki page on it
Normal distribution - Wikipedia, the free encyclopedia | {"url":"http://mathhelpforum.com/statistics/158852-normal-probability-calculations.html","timestamp":"2014-04-18T07:15:37Z","content_type":null,"content_length":"48912","record_id":"<urn:uuid:1de053ef-86c0-4d0f-8ad1-96df2745c56f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with a simple c++ hailstone length program
Newbie Poster
1 post since Sep 2012
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
http://www.cs.ecu.edu/~rws/c3300/prog1.pdf (for anyone that wants to view exactly whats needed for turn in)
Hello! I'm trying to write a c++ program that takes in the lower and upper limits of a bunch of numbers and output the length of the longest hailstone sequence. Posted below is what I have so far,
but I always get the last number in the sequence for the longest length. I don't know how to make the transition to show the longest hailstone sequnce like it should be. Any help is appreciated!
//Justin Chestnutt
//CSCI 3300
//Program 1
//Program reads in user input for upper and lower limits to print out hailstone sequence for numbers in the limit range and determine the longest sequence.
#include <iostream>
using namespace std;
//Hailstone sequence takes any number as an input and divides it by 2 if the number is even otherwise multiply by 3 and add 1 if the number is odd.
//The idea is that no matter what the user inputs you will eventually come to the answer of 1 at the end of the sequence.
//Hailstone function takes the input of the current number then calculates the next number in the hailstone sequence
int Hailstone(int n)
{
    if(n % 2 == 0)
        n /= 2;          //number is even
    else
        n = 3 * n + 1;   //number is odd
    return n;            //returns the value
}
//Calculates length of the hailstone sequence
int lengthHail(int num)
{
    int count = 1;
    while(num != 1)
    {
        num = Hailstone(num);
        count++;         //adds one to count everytime a calculation is done
    }
    return count;
}
//Ask for user input and reads in two integers that represent the lower limit and upper limit.
int main()
{
    int lower, upper;
    int count = 0;

    cout << "Enter Lower Limit.\t";
    cin >> lower;
    cout << "Enter Upper Limit.\t";
    cin >> upper;

    for(int i = lower; i <= upper; i++)
    {
        lower = i;
        while(lower != 1)
        {
            cout << " " << Hailstone(lower) << " " << endl;
            lower = Hailstone(lower);
        }
    }
    cout << "\nLength of longest sequence is " << lengthHail(upper) << ".\n" << endl;
    return 0;
}
Posting Sage w/ dash of thyme
9,363 posts since May 2006
Reputation Points: 2,905 [?]
Q&As Helped to Solve: 1,151 [?]
Skill Endorsements: 45 [?]
•Team Colleague
Print the program out
Grab another piece of paper and a pencil
Sit at your desk and go through the program line by line
Write down the variable names and what gets loaded into them
Follow the code until you see something that doesn't look right and figure out why.
np complete
Posting Pro in Training
414 posts since Sep 2010
Reputation Points: 8 [?]
Q&As Helped to Solve: 38 [?]
Skill Endorsements: 0 [?]
Check line 42.
In line 55, why are you calling lengthHail() method ? You are yourself counting the number of numbers printed in line 53. Print that instead of lengthHail(). It will print 1 less than the required
number , thats because it doesn't counts the initial number. So add 1 while printing count.
cout << "\nLength of longest sequence is " << count + 1 << ".\n" << endl;
Newbie Poster
2 posts since Jun 2013
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
Angles are often measured in degrees, minutes, and seconds. There are 360 degrees in a circle, 60 minutes in one degree, and 60 seconds in one minute. Write a program that reads two angular
measurements given in degrees, minutes, and seconds, and then calculates and prints their sum. Constants are required for this program. | {"url":"http://www.daniweb.com/software-development/cpp/threads/434133/help-with-a-simple-c-hailstone-length-program","timestamp":"2014-04-20T03:25:03Z","content_type":null,"content_length":"40231","record_id":"<urn:uuid:c5636a18-0378-401d-8646-445716a56953>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> FMIL and WLSMV
Ryan Marek posted on Tuesday, December 11, 2012 - 9:15 am
I'm currently a graduate student who is new to this area of statistics in general. I apologize in advanced if this has been addressed earlier.
I'm currently modeling some CFAs to produce latent factors at various time points. The indicators are categorical (1 = Presence; 0 = Absence). I have around 800 cases for the first time point, 600
cases for the second time point, and 300 cases for the third time point. I used the WLSMV estimator and found excellent fitting models as well as model invariance. However, I have been told that I
should be using FIML to handle my missing data. I'm curious as to what I should do.
Is there a way to use FIML with WLSMV? If not, what would you suggest I do or do you feel the analytic plan is fine as is.
Also, any recommended readings would be most helpful!
Linda K. Muthen posted on Tuesday, December 11, 2012 - 10:49 am
You can change the estimator to ML and you will be using FIML. With maximum likelihood and categorical dependent variables, numerical integration is required. Each factor requires one dimension of
integration. A model with more than four factors can be computationally demanding.
Ryan Marek posted on Tuesday, December 11, 2012 - 2:31 pm
Thanks for the advice! I have 3 factors for each time point. When doing one dimension of integration, do I just use this syntax or do I need to specify something else?
Linda K. Muthen posted on Wednesday, December 12, 2012 - 5:43 am
You don't need to specify anything to obtain numerical integration is most cases. How many time points do you have?
Ryan Marek posted on Wednesday, December 12, 2012 - 7:17 am
We're taking two approaches. I am looking to model 3 latent constructs (pre-surgery, 1 month post-surgery, and 3 months post-surgery) for a longitudinal study.
For now, I believe my adviser would like to try to take a more applied approach. We have a measure we used at the first time point (pre-surgery) and we'd like to demonstrate how latent construct
modeling can help clean up our post-surgical measures to show how our pre-surgical measure can adequately predict these latent constructs 1 and 3 months from surgery. In this case, we would have two
time points, but we'd not really be modeling change across time.
Ryan Marek posted on Wednesday, December 12, 2012 - 7:21 am
Let me be more clear in my above message. We have 3 latent constructs for three time points in our first approach to model change across time. In our second approach, we have 3 latent constructs for
two time points and want to our measure to predict these latent constructs.
Linda K. Muthen posted on Thursday, December 13, 2012 - 9:05 am
Your models have too many dimensions of integration to be practical using maximum likelihood. Weighted least squares does not handle your missing data properly. I would suggest using Bayes with the
default of non-informative priors. This handles missing data in a full-information way like maximum likelihood.
J Owens posted on Wednesday, January 02, 2013 - 9:52 am
Dear Dr. Muthen:
I am running a path analytic model with a categorical endogenous mediating variable. I am estimating nested models and need the Chi-square goodness-of-fit results to test for statistical significance
using the sequential/forward constraint imposition method. As a result, I am using the WLS estimator with the theta parameterization. However, I also have missing data and would like to use FIML
estimation to handle it. Is this possible? If so, what command would I use to tell MPlus to use FIML for the missing data?
Many thanks!
Linda K. Muthen posted on Wednesday, January 02, 2013 - 10:11 am
FIML refers to maximum likelihood estimation not weighted least squares. The default in Mplus is to use all available information for all estimators. If you have a lot of missing data, I suggest
using maximum likelihood estimation or multiple imputation. However, you will not obtain chi-square values for these methods that can be used for testing nested models.
Betsy Lehman posted on Thursday, March 13, 2014 - 10:15 pm
Dear Drs. Muthen,
I am reading your response to this question, experiencing some confusion about it. I, too, am running a path analysis with a categorical variable. My categorical variable is a dichotomous covariate
in a path model with 11 other continuous variables. As I understand it, FIML is preferred for its ability to handle missing data and non-normality (I have a fair amount of missing data, skewness, and
kurtosis), and yet, it is not advised for a model that has a categorical variable. It seems that WLS or WLSMV is preferred for a model with categorical variables. Is this correct? Specifically,
is it correct that FIML is not appropriate if one of my variables is dichotomous?
Thanks so much for your help.
Linda K. Muthen posted on Friday, March 14, 2014 - 6:47 am
No, this is not correct. Both WLSMV and ML can be used with categorical outcomes. ML is preferred if there is a lot of missing data and not too many latent variables with categorical indicators.
Please note that the scale of an observed exogenous variable is not an issue. All observed exogenous covariates in regression are treated as continuous whether they are binary or continuous.
Betsy Lehman posted on Saturday, March 15, 2014 - 11:14 am
Thank you so much for clearing this up! As I'm sure you know, wading around in the statistics world can be pretty overwhelming sometimes. I really appreciate your help.
| {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=23&page=11420","timestamp":"2014-04-16T16:04:59Z","content_type":null,"content_length":"32653","record_id":"<urn:uuid:2c9a3eac-2e80-4fc8-b03e-0265accfc952>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on the distinction between policy capturing and judgment analysis
Comments on the distinction between "policy capturing" and "judgment analysis"
In the 60's and early 70's "policy capturing" was universally used. In the 70's and the 80's "judgment analysis" came into widespread use, but both terms were considered synonymous. Now we have a clear
view of the distinction between these two terms, and we can use the correct term when appropriate. The following comments trace this evolution and describe the current view.
From the Brunswik list archives (1995)
>>> Posting number 180, dated 1 Aug 1995 09:08:55
Date: Tue, 1 Aug 1995 09:08:55 -0400
From: Tom Stewart
Subject: Social judgment theory, judgment analysis, and policy capturing
Following up on Ken Hammond's message about the origins of social judgment theory, and also in response to a private message from Len Dalgleish, I've been doing a little investigation of the terms
"policy capturing" and "judgment analysis."
First, judgment analysis (the term I strongly prefer) is not synonomous with SJT. Judgment analysis is just a method for modeling judgment. It is often used in SJT research, and it is useful in
implementing some of the remedies suggested by SJT, but there is much more to SJT.
Both terms originated at about the same time, in the early 60's, in connection with work being done at Lackland AFB (see appended messages from Joe Ward and Jim Naylor). "Policy capturing" was first
used by Christal and Bottenberg. Naylor and Wherry, who were doing contract work with Lackland used the acronym "JAN" (Judgment ANalysis) in 1965 to refer to a technique for clustering judges based
on their judgment policies. Sam Houston (1974), also associated with Lackland, wrote a monograph with judgment analysis in the title. Classic papers in the Brunswikian tradition, such as Hammond
(1955), Hoffman (1960), and Slovic and Lichtenstein (1973) do not use either term.
For a long time, I have been encouraging the use of the term "judgment analysis" instead of "policy capturing." I can remember the exact moment when my distaste for the term "policy capturing"
peaked: I was sitting next to Len Adelman at a meeting when somebody mentioned policy capturing. I looked over at Len and he was doodling a stick figure carrying a butterfly net and chasing a
"policy." I then realized that we don't capture anything, we analyze judgment. Furthermore only a few people (mostly psychologists) have used the word "policy" to mean "judgment policy." To everyone
else, it has a different meaning. I am happy that Ray Cooksey has used "judgment analysis" in his forthcoming book.
Of course, "policy capturing" is still in widespread use. In bibliographic searches, one of the best ways to find studies that use judgment analysis is to search for the keywords "policy capturing."
In the current issue of Medical Decision Making is an article with "policy capturing" in its title. Among its authors are highly respected subscribers to this list who are well acquainted with
judgment analysis. Presumably, they chose "policy capturing" for a reason. They may want to respond to this message.
One unfortunate terminological hybrid must be stopped. I have seen policy capturing and judgment analysis combined into "policy analysis." We really don't need a third term to refer to the same
procedure. This one is obviously unacceptable since policy analysis has long been used to refer to a completely different type of inquiry. I hope we can put a stop to it before it spreads.
..Tom Stewart
Hammond, K. R. (1955). Probabilistic functioning and the clinical method. Psychological Review, 62, 255-262.
Hoffman, P.J. (1960). The paramorphic representation of clinical judgment. Psychological Bulletin, 57, 116-131.
Houston, S. R. (1974). Judgment analysis: Tool for decision makers. New York: MSS Information Corporation.
Naylor, J. C., and Wherry, R. J., Sr. (1965). The use of simulated stimuli and the "JAN" technique to capture and cluster the policies of raters. Educational and Psychological Measurement, 25(4).
Slovic, P., and S. Lichtenstein, (1973). Comparison of Bayesian and regression approaches to the study of information processing in judgment. In L. Rappoport and D. A. Summers (Eds.), Human Judgment
and Social Interaction. New York: Holt, Rinehart & Winston, pp. 16-108.
Comments from Joe Ward:
I have just talked with Bob Bottenberg about your questions. We both agree that THE ORIGIN OF TERMS "POLICY CAPTURING" and "JUDGMENT ANALYSIS (JAN)" as used at the Personnel Research Laboratory at
Lackland AFB came from informal discussions among Ray Christal, Bob Bottenberg and Joe Ward. The terms may have been used earlier and may have been publicly documented earlier by others; however, Bob
and I believe that we did not know of the use of these expressions by others.
The idea of "POLICY CAPTURING" using REGRESSION MODELS to "mathematically capture a policy" was stimulated from the recognition in the 1950's that the PERSONNEL ASSIGNMENT PROBLEM (assigning persons
to jobs, or "person-job match") was mathematically the same as the TRANSPORTATION PROBLEM OF LINEAR PROGRAMMING.
When we talked about trying to "assign personnel to jobs to maximize "payoff" or "value" to the Air Force", the universal response was that "But we don't know the "payoff" or "value" of each person
on each job"! Then our response was "Then we do not need the many counselors who are trying to put each person into the "right" job!" "So we can just assign the personnel at random"!
Of course, the counselors probably were doing something toward "maximizing" the "payoff" of the assignment process. So we decided that we might be able to "capture" the counselors "policy" with one
or more regression models and then we could fill in the "payoff" array with the predicted values from the regression model(s). Then I developed the DECISION INDEX to provide counselors with
information that would allow them to approximate the "optimum" assignment of personnel to jobs. The DECISION INDEX is about the best approach to the "sequential" personnel assignment problem.
Later we incorporated the idea of "clustering" regression equations to determine how many different "policies" were involved.
Several of the original Air Force Publications have been published in the Journal of Experimental Education. This happened because Jack Schmid at Univ. of Northern Colorado at Greeley was editor of
the J of Exp. Ed. Sam Houston did a lot of Policy Capturing while he was at Univ. of Northern Colorado, and Sam's wife did her dissertation (at UNC) using Policy Capturing in the study of Pornography.
The most concentrated source is in the J of Exp. Ed. vol. 36, No.4, Summer'68 But the J of Exp. Ed. contains other examples.
This volume contains:
1. JAN: A TECHNIQUE FOR ANALYZING GROUP JUDGMENT by Ray Christal (Footnote #2 indicates "Dr. Joe H. Ward, Jr. is credited with suggesting use of a least-squares-weighted regression formula to capture
the policy of a single rater: also see (2)." The (2) is Paul Hoffman's article, The Paramorphic Representation of Clinical Judgments", Psychological Bulletin, LVII (1960), pp. 116-131.
2. SELECTING A HAREM - AND OTHER APPLICATIONS OF THE POLICY-CAPTURING MODEL by Ray Christal.
JAN - which has been used to describe both the "POLICY CAPTURING" using regression models and the "CLUSTERING" of the POLICY EQUATIONS -combines regression modeling and "CLUSTERING" that was first
defined in 1961 in HIERARCHICAL GROUPING TO MAXIMIZE PAYOFF and then revised and published in the J. of the ASA in 1963. This original hierarchical clustering algorithm is now contained in many of
the statistical packages.
The DECISION INDEX was first described in THE COUNSELING ASSIGNMENT PROBLEM by Ward in Psychometrika, 23, 55-65.
For reference to the origin of POLICY SPECIFYING see J. of Exp. Ed. v.48,1
CREATING MATHEMATICAL MODELS OF JUDGMENT PROCESSES: FROM POLICY-CAPTURING TO POLICY-SPECIFYING by Joe Ward
Policy Specifying provides a procedure to create prediction models and allows the judge to observe the output of the models. Then the models are modified and the judge takes another look. This continues until the output of the function seems OK. This approach was developed for the Air Force PROMIS system for recruiting. The idea was created to allow the judge to express "INTERACTIONS" among variables more easily. Our observation with "POLICY CAPTURING" is that it is not easy for judges to express interactions and non-linearities; POLICY SPECIFYING MAKES THIS EASIER.
About Policy Capturing vs. Policy Specifying
Policy Capturing attempts to predict the judgments from a mathematical model.
Policy Specifying attempts to allow the judge to create a mathematical model that produces output values ("judgments") that are desired by the judge. Rather than make judgments and then try to allow
the regression model to reproduce the judgments, the Policy Specifying approach allows the judge to define a model that hopefully produces output that expresses the judge's policy. This approach
seems to allow for the expression of interactions and non-linearities.
-- Joe
Comments from Jim Naylor:
It gets a bit murky, particularly since I cannot locate a copy of the Bottenberg and Christal Tech Report from 1961.
Bottenberg, R. A. & Christal, R. E. An iterative technique for clustering criteria which retains optimum predictive efficiency. WADD-TN-61-30, AD-261 615. Lackland AFB, Tex.: Personnel Research
Laboratory, Wright Air Development Division, March, 1961.
My memory is that the first ACTUAL use of the term "policy capturing" in print was not in the above, but in the tech report
Naylor, J. C., and Wherry, R. J. Feasibility of Distinguishing Supervisor's Policies in Evaluation of Subordinates by Using Ratings of Simulated Job Incumbents. PRL-TR-64-25. Personnel Research Laboratory, Aerospace Medical Division, Air Force Systems Command, Lackland Air Force Base, Texas, October, 1964.
I have a copy of the above and the term policy capturing is used numerous times.
The first journal article to use both the terms "policy capturing" and "Judgment Analysis" was the 1965 Naylor and Wherry paper in EPM.
The first ACTUAL use of the Judgment Analysis term was in the tech report of Christal:
Christal, R. E. JAN: A Technique for Analyzing Individual and Group Judgment. Lackland AFB, Tex.: PRL-TDR-63-3, AD-403-813. Personnel Research Laboratory, Aerospace Medical Division, February, 1963.
Confused! Hope this helps some.
James C. Naylor, Ph.D. 614-292-3038 Office
Chair, Department of Psychology 614-292-4537 Fax
The Ohio State University
142 Townshend Hall naylor.2@osu.edu
Additional comment from Naylor:
Looks fine to me. I don't disagree with any of your comments. I'm glad Joe could confirm that the Bottenberg and Christal paper contains both terms. My own concern about using the term Judgment Analysis to replace policy capturing is that I have always seen JAN as a very specific technique developed at Lackland for clustering policies. The first stage of that process involved capturing the policies of individual raters using linear regression. To me these two terms are quite explicit in meaning and are NOT substitutable.
>>>End of Posting number 180, dated 1 Aug 1995 09:08:55
Further comments from Tom Stewart (1998)
Policy capturing is simply the application of regression analysis to modeling judgment. It is not based on the lens model or any Brunswikian ideas. Policy capturing studies are atheoretical, generally do not include the environment side of the lens model, generally use orthogonal designs, generally use linear regression (although ANOVA and conjoint measurement are also used), and often confront people with unfamiliar tasks.
"Judgment analysis" refers to modeling of judgment in the Brunswik/Hammond tradition. This requires an analysis of the environment, which policy capturing does not include. Furthermore, judgment analysis is clearly not wedded to regression.
Judgment analysis requires studying both sides of the lens model and using a task that is familiar to subjects, as well as gathering data under representative conditions. Representative design is meaningless unless the task is based on a naturally-occurring judgment problem.
Comments from Ray Cooksey (1998)
(Author of Judgment Analysis: Theory, Methods, and Applications)
Policy capturing has traditionally only referred to the modeling of judgments and is almost always associated with multiple regression models. However, this is just a small aspect of judgment
analysis which involves a full implementation of representative design, considerations of task ecology as well as judgment process, and need not be tied to multiple regression analysis. In the
context of judgment analysis, 'policy capturing' in the Bottenberg, Ward & Naylor sense, is exclusively linked to the single system model which ignores the existence of an ecological criterion.
However, it is still quite common for people who do 'judgment analysis' to refer to the process of modeling judgments as 'policy capturing'.
Thus, there is a technical use of 'policy capturing' which refers to a stand-alone method linked only in a very minor way to Brunswik's ideas (if one uses the single system conceptualization) and a
less formal sense which refers to the methodological exercise of modeling judgments in the larger context of at least a double system lens model. The former sense is synonymous with multiple
regression modeling of judgments made on a series of profiles (usually hypothetical) whereas the latter sense potentially encompasses any statistical, mathematical or modeling procedure that allows
one to model or 'capture' judgments as well as models of an ecological criterion (which then permits the measurement of achievement). My view is this latter sense is the one which continues to keep
'policy capturing' alive as a term of reference for what we do and is why the term is so hard to extinguish. I deliberately titled my book 'Judgment Analysis' to signal this but even I am guilty of
slipping into using the term in its less formal sense (witness chapter 4 in my book, which is entitled 'Capturing Judgment Policies'!). The debate is further confused if we throw the label 'Social
Judgment Theory' into the fray! This label is largely synonymous with Judgment Analysis as it has evolved - but both labels are still commonly used (witness an entire edition of Thinking & Reasoning
devoted to SJT). What I think we are seeing is the slow process of evolution at work, which for a period of time, means that many related 'species' may co-exist before one comes to predominate. Tom
is arguing for Judgment Analysis to predominate and I agree - it signals best what we are doing.
Linearity is only a part of judgment analysis by association with regression models which are most commonly linear in composition. However, nothing in Judgment Analysis technically requires an
assumption of linearity and there are many examples of nonlinear judgment modeling in the literature. However, I signaled in Chapter 8 of my book that new modeling techniques need to be developed
which are inherently nonlinear so as to better represent dynamic judgment systems (including models that admit fuzzy logic and chaotic dynamics). Such models are only now beginning to emerge. Whether
or not they will do a better job than linear models remains to be seen - many would invoke the law of parsimony and say that if nonlinear models add only incrementally to what a linear model can
predict, then stick with the linear model.
My personal view, however, is that we have over-simplified our modeling systems for long enough and it is time to 'complexify' them so as to more appropriately encompass the constraints, contexts and conditions under which judgments are made. For this type of effort, multiple regression and related statistical models just will not do. In fact, we may not yet have the mathematical technology available to do such modeling (although recent developments in dynamic systems theory and modeling look promising), and this means that qualitative mapping approaches may be the first approach to attacking the problem (a perspective I am currently developing with respect to the judgments and decisions made by organisational managers and CEOs as well as magistrates and court judges).
Associate Professor Ray W. Cooksey
Department of Marketing & Management
University of New England
Armidale, NSW 2351
phone: +61 2 6773 2563
fax: +61 2 6773 3914
email: rcooksey@metz.une.edu.au
Web Site: http://metz.une.edu.au/~rcooksey
[Numpy-discussion] missing array type
Travis Oliphant oliphant.travis at ieee.org
Mon Feb 27 22:02:04 CST 2006
Sasha wrote:
>On 2/27/06, Travis Oliphant <oliphant at ee.byu.edu> wrote:
>>.... I think 0-stride arrays are acceptable (I
>>think you can make them now but you have to provide your own memory,
>Not really. Ndarray constructor has never allowed zeros in strides. It
>was possible to set strides to a tuple containing zeros after
>construction in some cases. I've changed that in r2054
><http://projects.scipy.org/scipy/numpy/changeset/2054>. Currently
>zero strides are not allowed.
Ah, right. It was only possible to do it in C-code. But, it is
possible to do it in C-code.
Since Colin has expressed some reservations, it's probably a good idea
to continue the discussion before doing anything.
One issue I have with zero-stride arrays is that they are essentially what broadcasting is all about. Recently there has been a discussion about
bringing repmat functionality over. The repmat function is used in some
array languages largely because there is no such thing as broadcasting
and arrays are not ND.
Perhaps what is desired instead is rather than play games with indexing
on a two dimensional array you simply define the appropriate
4-dimensional array. Currently you can define the size of the new
dimensions to be 1 and they will act like 0-strided arrays when you
operate with other arrays of any desired shape. Zero-strided arrays are
actually quite fundamental to the notion of broadcasting.
[Soap Box]
I've been annoyed for several years that the idea of linear operators is
constrained in most libraries to 2 dimensions. There are many times I
want to find an inverse of an operator that is most naturally expressed
with 6 dimensions. I have to myself play games with indexing to give
the computer a matrix it can understand. Why is that? I think the
computer should be doing the work of raveling and unraveling those
indices for me. I think we have the opportunity in NumPy/SciPy to be
much more general. A tensor class that handles the "index-raveling"
that so many people have become conditioned to think is necessary could
and should be handled by the class. If you've ever written
finite-element code you should know exactly what I mean.
[End Soap Box]
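A sketch of the index games being complained about, and of NumPy doing the raveling for you; the shapes here are illustrative only:

```python
import numpy as np

# A "linear operator" with 6 indices: it maps 3-index objects to
# 3-index objects, like a stiffness tensor in finite-element code.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3, 4, 2, 3, 4))   # (output..., input...)
x = rng.normal(size=(2, 3, 4))
b = np.tensordot(A, x, axes=3)            # apply the operator

# The manual index game: flatten to an ordinary matrix, solve,
# and unravel the result back to its natural 3-index shape.
n = x.size
x_back = np.linalg.solve(A.reshape(n, n), b.ravel()).reshape(x.shape)
print(np.allclose(x, x_back))             # True

# numpy can also handle the raveling internally:
print(np.allclose(np.linalg.tensorsolve(A, b), x))
```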
On the one hand, we could just tell people to try and use broadcasting
so that zero-strided arrays show up in Python in definitive ways. On
the other we can just expose the power of zero-strided arrays to Python
and let people come up with their own rules. I lean toward giving
people the capability and letting them show me what it can do.
The only thing controversial, I think is the behavior of outputs on
ufuncs for strided arrays. Currently ufunc outputs always have full
strides unless an output array is given. Changing this default
behavior would require some justification (not to mention some code
tweaking). I'm not immediately inclined to change it even if
zero-strided arrays are allowed to be created from Python.
>In order to make zero stride arrays really useful, they should survive
>transformation by ufunc. With my patch if x is a zero-stride array of
>length N, then exp(x) is a regular array and exp is called N times to
>compute the result. That would be a much bigger project. As a first
>step, I would just disallow using zero-stride arrays as output to
>avoid problems with inplace operations.
Hmm.. Could you show us again what you mean by these problems and the
better behavior that could happen if ufuncs were changed?
More information about the Numpy-discussion mailing list
Wolfram Demonstrations Project
Vertical Pendulum Seismometer
A classical pendulum seismometer consists of a spring, a mass (black), and a damping device (light orange). These are all connected to a rigid frame that is fixed to the ground. When the ground
moves, the mass is not able to move exactly in sync because of the inertia of the mass. The differential movement between mass and frame is recorded as a seismogram. The amplitude and phase
differences between true ground motion and relative mass motion depend on the damping constant and the ratio of the frame-motion frequency to the eigenfrequency of the seismometer. The plot on the
right shows these differences with the black dot indicating the current mass position.
Seismometers are the key tool in studying earthquake sources and the Earth's interior using seismic waves. In its simplest form, a seismometer consists of a movable mass attached to a rigid frame by a spring and a damping system. The relative movement of the mass with respect to the frame is recorded as a seismogram. Its relation to the true ground motion $u_g$ is given by the seismometer equation

$$\ddot{x}_r + 2\epsilon\,\dot{x}_r + \omega_0^2\,x_r = -\ddot{u}_g,$$

where $x_r$ is the relative displacement of the mass, $\epsilon = d/(2m)$ is the damping term and $\omega_0 = \sqrt{k/m}$ is the eigenfrequency; $d$ is the friction coefficient, $k$ is the spring constant, and $m$ is the attached mass; $h = \epsilon/\omega_0$ is called the damping constant. Depending on the value of $h$ and the input frequency, the movement the mass experiences differs in amplitude and phase from the motion of the frame. For input frequencies close to the eigenfrequency of the system and small damping constants, the system exhibits resonance.
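A minimal sketch (Python) of the amplitude and phase differences just described, assuming harmonic ground motion $u_g = U e^{i\omega t}$; here `h` is the damping constant and `r` the ratio of driving frequency to eigenfrequency:

```python
import numpy as np

# Relative mass motion over ground motion, X/U = r^2 / (1 - r^2 + 2j*h*r),
# which follows from the seismometer equation for harmonic input.
def response(r, h):
    H = r**2 / (1.0 - r**2 + 2j * h * r)
    return np.abs(H), np.angle(H)        # amplitude ratio, phase shift

for r in (0.1, 1.0, 10.0):
    amp, ph = response(r, h=0.7)
    print(f"w/w0 = {r:5.1f}: amplitude = {amp:.3f}, phase = {ph:+.2f} rad")
# Small h with r near 1 gives resonance; for r >> 1 the amplitude tends
# to 1 (the mass stays put while the frame moves underneath it).
```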
F. Scherbaum, Of Poles and Zeros: Fundamentals of Digital Seismology, 2nd ed., Norwell, MA: Springer, 2007.
Biblioteca Digital do IPB: An introduction to the level set methods and its applications
Title: An introduction to the level set methods and its applications
Authors: Reis, Ilda; Tavares, João Manuel R.S.; Jorge, R.M. Natal
Keywords: Image analysis; Level set method; Implicit function
Issue Date: 2008
Citation: Reis, Ilda; Tavares, João Manuel R. S.; Jorge, R. M. Natal (2008) - An introduction to the level set methods and its applications. In IACM-ECCOMAS. Venice, Italy.
Abstract: Finding a mathematical model which describes the evolution of an interface (in this context, an interface is understood as the boundary between two separate and closed regions, each one having a volume measure different from zero) over time, like a burning flame or breaking waves, can be a challenging problem. The main difficulties arise when sharp corners appear or different parts of the interface split or merge [1]. That kind of interface can be modeled as the embedded zero level set of an implicit time-dependent function, so the evolving interface can be followed by tracking the zero level set of that implicit function. This technique, known as the Level Set Method, was introduced by Osher and Sethian [2]. The idea behind this method [3] is to start with a closed curve in two dimensions (or a surface in three dimensions) and allow the curve to move perpendicularly to itself with a speed $F$. If the sign of the speed is preserved, the location of the propagating front can be computed as the arrival time $T(x, y)$ of the front as it crosses the point $(x, y)$. In this case, the equation that describes this arrival time is $|\nabla T|\,F = 1$, with $T = 0$ on the initial curve. In the general case, the interface cannot be considered as the level set of a spatially-dependent function, because the arrival time is not a single-valued function. The way to address this difficulty is to represent the initial curve implicitly as a zero level set of a function in one higher dimension. So, at any time, the front is given by the zero level set of the time-dependent function $\phi$, referred to as the level set function. Mathematically, the interface at time $t$ is the set $\{x(t) : \phi(x(t), t) = 0\}$. Applying the chain rule and some algebraic manipulation, we obtain the level set equation: $\phi_t + F\,|\nabla \phi| = 0$, given $\phi(x, t=0)$. This method is a powerful mathematical and computational tool for tracking the evolution of curves/surfaces along image sequences. The main advantage comes from a different approach, similar to the Eulerian formulation: instead of tracking a curve through time, the Level Set Method evolves a curve by updating the level set function at fixed coordinates through time [4]. This approach [3], which handles topological merging and breaking in a natural way, is easily generalized to any other dimensional space and does not require that the moving front behave as an explicit function. The Level Set Method has been widely applied in different areas [3] like geometry, grid generation, image enhancement and noise removal in image processing, shape detection and recognition in image analysis, and combustion and crystal growth analysis, among others. Our purpose is to use this approach in the segmentation of structures represented in medical images. This task is very important for an adequate medical diagnosis, for example in the location of anatomical structures or even in the analysis of their motion. The main difficulties [4] are due to the large variability in the structure shapes and the considerable quantity of noise that acquired images can have. We designed a computational platform in C++, using the Visual Studio .NET 2005 environment, and integrated in it the computational library OpenCV (http://sourceforge.net/projects/opencvlibrary), which gave us the possibility of using a great quantity of basic algorithms available for image processing and analysis. Now, we are implementing the above described technique to segment anatomical structures represented in medical images. Our final goal is to estimate the material properties of anatomical structures segmented and tracked along image sequences. In this presentation, we describe the Level Set methodology, exhibit some of its possible applications, present our segmentation method under development, and show some of its experimental results.
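To make the level set equation concrete, here is a minimal NumPy sketch of one explicit, first-order upwind time step of $\phi_t + F|\nabla\phi| = 0$ for a constant outward speed $F > 0$. This is only an illustration of the method, not the authors' C++/OpenCV implementation; the periodic boundaries given by `np.roll` are a simplification.

```python
import numpy as np

# One explicit time step with Godunov upwinding (valid for F > 0);
# dt must satisfy a CFL condition for stability.
def level_set_step(phi, F=1.0, dx=1.0, dt=0.25):
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward diff, x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward diff,  x
    dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward diff, y
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward diff,  y
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return phi - dt * F * grad

# Signed distance to a circle of radius 10; the zero level set is the front.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
for _ in range(10):
    phi = level_set_step(phi)
print("front radius after 10 steps ~", 10 + 10 * 0.25)  # circle grows outward
```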
Peer reviewed: yes
URI: http://hdl.handle.net/10198/4765
ISBN: 978-84-96736-55-9
Appears in: DEMAT - Resumos em Proceedings Não Indexados ao ISI
Deranged permutations
Garrett Rowe (Ranch Hand) wrote:

For a 2-element list with 2 distinct elements, [0, 1], a permutation exists where all elements are in a different position than in the original list. We can say that the maximum "derangement" of the list is 2.

For a 3-element list with 2 distinct elements, [0, 1, 1], every permutation has at least one element that is in the same position in the permuted list as in the original list. The maximum derangement of this list is also 2.

Is there a formula for the generalization of this observation? I.e., for an M-element list with N distinct elements (where N <= M), what is the maximum possible derangement X of a permutation of that list?

I've tried googling, but I can't come up with the right keywords, and the math quickly goes over my head.

Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter
Paul Clapham wrote:

It can't just be a function of M and N, because (up to isomorphism) there are two different 4-element lists with 2 distinct elements, namely [0, 0, 1, 1] and [0, 1, 1, 1]. And they have different maximum derangements: the first's MD is 4 and the second's is 2.

In general for a list with K zeroes and L ones, where K <= L, the maximum derangement is 2K, unless I'm missing something. But extending that to 3 or more distinct elements isn't nearly as obvious, although I haven't spent any time poking at it.
Garrett Rowe wrote:

Paul Clapham wrote: It can't just be a function of M and N, because (up to isomorphism) there are two different 4-element lists with 2 distinct elements, namely [0, 0, 1, 1] and [0, 1, 1, 1]. And they have different maximum derangements: the first's MD is 4 and the second's is 2.

Good point. I hadn't thought that through.

Paul Clapham wrote: In general for a list with K zeroes and L ones, where K <= L, the maximum derangement is 2K, unless I'm missing something. But extending that to 3 or more distinct elements isn't nearly as obvious, although I haven't spent any time poking at it.

Unfortunately the problem I was trying to solve might be a bit more difficult. I was looking at the Best Shuffle problem on Rosetta Code and trying to figure out if there is an algorithm for deciding ahead of time the maximum derangement of the target word. You could then theoretically continuously random-shuffle and check until you came up with a permutation with maximum derangement.
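A brute-force check of the quantities discussed in this thread (Python; only practical for short lists, but enough to test conjectures like the 2K rule):

```python
from itertools import permutations

# Maximum "derangement" of a small list: the largest number of positions
# at which some permutation of the list differs from the original.
def max_derangement(xs):
    return max(sum(a != b for a, b in zip(xs, p))
               for p in set(permutations(xs)))

print(max_derangement([0, 1]))        # 2
print(max_derangement([0, 1, 1]))     # 2
print(max_derangement([0, 0, 1, 1]))  # 4
print(max_derangement([0, 1, 1, 1]))  # 2
```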
Paul Clapham wrote:

Garrett Rowe wrote: Unfortunately the problem I was trying to solve might be a bit more difficult.

It might, or it might not. It might depart from the triviality of the case I already analyzed in a linear way, or it might turn out to be more like calculating the number of partitions of an integer. My spidey math sense is leaning towards the latter but I wouldn't bet big money on either outcome at this point in time.
A Bartender wrote:

Garrett Rowe wrote: For a 3 element list with 2 distinct elements [0, 1, 1] every permutation has at least one element that is in the same position in the permuted list as the original list. The maximum derangement of this list is also 2.

Maybe I'm slow... why is the derangement 2? If there is no permutation where all the elements are different, why wouldn't it be 1?

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
Paul Clapham wrote:

It's the number of differing positions. For the original list [0, 1, 1], for example, the permutation [1, 0, 1] differs in the first two positions, so their "derangement" is two.
The Bartender wrote:

Ah... thanks. It's not the number of different arrangements with all elements moved, but how many elements are in a different position for each permutation, then the maximum of those... thanks!
In theoretical particle physics, the non-commutative Standard Model, mainly due to the French mathematician Alain Connes, uses his noncommutative geometry to devise an extension of the Standard Model to include a modified form of general relativity. This unification implies a few constraints on the parameters of the Standard Model. Under an additional assumption, known as the "big desert" hypothesis, one of these constraints determines the mass of the Higgs boson to be around 170 GeV, comfortably within the range of the Large Hadron Collider. Recent Tevatron experiments exclude a Higgs mass of 158 to 175 GeV at the 95% confidence level.[1] However, the previously computed Higgs mass was found to have an error, and more recent calculations are in line with the measured Higgs mass.[2]

Current physical theory features four elementary forces: the gravitational force, the electromagnetic force, the weak force, and the strong force. Gravity has an elegant and experimentally precise theory: general relativity. It is based on Riemannian geometry and interprets the gravitational force as curvature of spacetime. Its Lagrangian formulation requires only two empirical parameters, the gravitational constant and the cosmological constant.

The other three forces also have a Lagrangian theory, called the Standard Model. Its underlying idea is that they are mediated by the exchange of spin-1 particles, the so-called gauge bosons. The one responsible for electromagnetism is the photon. The weak force is mediated by the W and Z bosons; the strong force, by gluons. The gauge Lagrangian is much more complicated than the gravitational one: at present, it involves some 30 real parameters, a number that could increase. What is more, the gauge Lagrangian must also contain a spin-0 particle, the Higgs boson, to give mass to the spin-1/2 and spin-1 particles. This particle has yet to be observed, and if it is not detected at the Large Hadron Collider in Geneva, the consistency of the Standard Model is in doubt.

Alain Connes has generalized Bernhard Riemann's geometry to noncommutative geometry. It describes spaces with curvature and uncertainty. Historically, the first example of such a geometry is quantum mechanics, which introduced the uncertainty relation by turning the classical observables of position and momentum into noncommuting operators. Noncommutative geometry is still sufficiently similar to Riemannian geometry that Connes was able to rederive general relativity. In doing so, he obtained the gauge Lagrangian as a companion of the gravitational one, a truly geometric unification of all four fundamental interactions. Connes has thus devised a fully geometric formulation of the Standard Model, where all the parameters are geometric invariants of a noncommutative space. A result is that parameters like the electron mass are now analogous to purely mathematical constants like pi. In 1929 Weyl wrote Einstein that any unified theory would need to include the metric tensor, a gauge field, and a matter field. Einstein considered the Einstein-Maxwell-Dirac system by 1930. He probably didn't develop it because he was unable to geometricize it. It can now be geometricized as a non-commutative geometry.

References
1. ^ The TEVNPH Working Group [1]
2. ^ Resilience of the Spectral Standard Model [2]
Although Aristotle in general had a more empirical and experimental attitude than Plato, modern science did not come into its own until Plato's Pythagorean confidence in the mathematical nature of the world returned with Kepler, Galileo, and Newton. For instance, Aristotle, relying on a theory of opposites that is now only of historical interest, rejected Plato's attempt to match the Platonic Solids with the elements -- while Plato's expectations are realized in mineralogy and crystallography, where the Platonic Solids occur naturally. -- Plato and Aristotle, Up and Down, Kelley L. Ross, Ph.D.
The goal of string theory is to explain the "?" in the above diagram.
I enjoyed the Livescribe demonstration by Clifford of Asymptotia, along with the explanation of Quantum Gravity. The two pillars were, for me, very emblematic of the "pillars of science." This, as well as the arch, is very fitting to me of what becomes self-evident: under such an examination, the two areas Clifford is talking about, Quantum Mechanics and General Relativity, are the pillars, and the attempts at unification bridge them.
The Yorck Project: 10.000 Meisterwerke der Malerei. DVD-ROM, 2002. ISBN 3936122202. Distributed by DIRECTMEDIA Publishing GmbH.
In my view, that question mark marks where the location in Clifford's diagrams relates to the Aristotelian arch above.
A number of ordinary mechanical quantities take on a different form as the speed approaches the speed of light.
Relativistic Mechanical Quantities (link)
Kinematic Time Shift Calculation
Hafele and Keating Experiment
Usefulness of the Quantity pc
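A small worked sketch of the quantities those links cover, using the standard relations γ = 1/√(1 − β²) and E² = (pc)² + (mc²)²; the electron rest energy is the only physical input here.

```python
import math

def gamma(v_over_c):
    # Lorentz factor: grows without bound as v approaches c.
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

m_c2 = 0.511          # electron rest energy, MeV
v = 0.9               # speed as a fraction of c
g = gamma(v)
pc = g * m_c2 * v     # momentum times c, in MeV
E = g * m_c2          # total energy, in MeV
print(f"gamma = {g:.3f}, pc = {pc:.3f} MeV, E = {E:.3f} MeV")
# The invariant: E^2 - (pc)^2 should equal (mc^2)^2.
print("E^2 - (pc)^2 =", round(E**2 - pc**2, 6), "vs (mc^2)^2 =", round(m_c2**2, 6))
```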
Calorimeters for High Energy Physics experiments – part 1
April 6, 2008 by Dorigo
the first tau-neutrino "appearing" out of several billion billion muon neutrinos
Also See:
Leptons are involved in several processes such as beta decay.
Composition: Elementary particle
Statistics: Fermionic
Generation: 1st, 2nd, 3rd
Interactions: Electromagnetism, Gravitation, Weak
Symbol: l
Antiparticle: Antilepton (l̄)
Types: 6 (electron, electron neutrino, muon, muon neutrino, tau, tau neutrino)
Electric charge: +1 e, 0 e, −1 e
Color charge: No
Spin: 1⁄2
A lepton is an elementary particle and a fundamental constituent of matter.^[1] The best known of all leptons is the electron which governs nearly all of chemistry as it is found in atoms and is
directly tied to all chemical properties. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged
leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed.
There are six types of leptons, known as flavours, forming three generations.^[2] The first generation is the electronic leptons, comprising the electron (e−) and electron neutrino (ν_e); the second is the muonic leptons, comprising the muon (μ−) and muon neutrino (ν_μ); and the third is the tauonic leptons, comprising the tau (τ−) and the tau neutrino (ν_τ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons through a process of particle decay: the transformation from a higher
mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high energy collisions (such as
those involving cosmic rays and those carried out in particle accelerators).
Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks however, leptons are not subject to the strong interaction, but they are subject to the other
three fundamental interactions: gravitation, electromagnetism (excluding neutrinos, which are electrically neutral), and the weak interaction. For every lepton flavor there is a corresponding
type of antiparticle, known as antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. However, according to certain theories,
neutrinos may be their own antiparticle, but it is not currently known whether this is the case or not.
The first charged lepton, the electron, was theorized in the mid-19th century by several scientists^[3]^[4]^[5] and was discovered in 1897 by J. J. Thomson.^[6] The next lepton to be observed was
the muon, discovered by Carl D. Anderson in 1936, but it was erroneously classified as a meson at the time.^[7] After investigation, it was realized that the muon did not have the expected
properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particle to be proposed.^[8] The first neutrino,
the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay.^[8] It was first observed in the Cowan–Reines neutrino experiment conducted by
Clyde Cowan and Frederick Reines in 1956.^[8]^[9] The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz and Jack Steinberger,^[10] and the tau discovered between 1974 and
1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory.^[11] The tau neutrino remained elusive until July 2000, when
the DONUT collaboration from Fermilab announced its discovery.^[12]^[13]
Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons. Exotic atoms with muons and taus instead of electrons can also
be synthesized, as well as lepton–antilepton particles such as positronium.
2011 Review of Particle Physics.
Please use this CITATION: K. Nakamura et al. (Particle Data Group), Journal of Physics G37, 075021 (2010) and 2011 partial update for the 2012 edition.
Main Components of CNGS
A 400 GeV/c proton beam is extracted from the SPS in 10.5-microsecond short pulses of 2.4×10^13 protons per pulse. The proton beam is transported through the transfer line TT41 to the CNGS target T40. The target consists of a series of graphite rods, which are cooled by a recirculated helium flow. Secondary pions and kaons of positive charge produced in the target are focused into a parallel beam by a system of two pulsed magnetic lenses, called horn and reflector. A 1 km long evacuated decay pipe allows the pions and kaons to decay into their daughter particles; of interest here is mainly the decay into muon neutrinos and muons. The remaining hadrons (protons, pions, kaons) are absorbed in an iron beam dump with a graphite core. The muons are monitored in two sets of detectors downstream of the dump. Further downstream, the muons are absorbed in the rock while the neutrinos continue their travel towards Gran Sasso.
For me it has been an interesting journey in trying to understand the full context of an event in space sending information throughout the cosmos, in ways that are not limited to the matter configurations that would affect signals of those events.
In astrophysics, the most widely discussed mechanism of particle acceleration is the first-order Fermi process operating at collisionless shocks. It is based on the idea that particles undergo
stochastic elastic scatterings both upstream and downstream of the shock front. This causes particles to wander across the shock repeatedly. On each crossing, they receive an energy boost as a
result of the relative motion of the upstream and downstream plasmas. At non-relativistic shocks, scattering causes particles to diffuse in space, and the mechanism, termed "diffusive shock
acceleration," is widely thought to be responsible for the acceleration of cosmic rays in supernova remnants. At relativistic shocks, the transport process is not spatial diffusion, but the
first-order Fermi mechanism operates nevertheless (for reviews, see Kirk & Duffy 1999; Hillas 2005). In fact, the first ab initio demonstrations of this process using particle-in-cell (PIC)
simulations have recently been presented for the relativistic case (Spitkovsky 2008b; Martins et al. 2009; Sironi & Spitkovsky 2009).
Several factors, such as the lifetime of the shock front or its spatial extent, can limit the energy to which particles can be accelerated in this process. However, even in the absence of these,
acceleration will ultimately cease when the radiative energy losses that are inevitably associated with the scattering process overwhelm the energy gains obtained upon crossing the shock. Exactly
when this happens depends on the details of the scattering process. See: RADIATIVE SIGNATURES OF RELATIVISTIC SHOCKS
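A toy Monte Carlo of the mechanism described in the quote; the fractional energy gain per crossing and the escape probability below are illustrative numbers, not fits to any shock model. Repeated multiplicative gains with a fixed escape chance produce the characteristic power-law spectrum.

```python
import random

random.seed(2)
gain, p_escape = 0.1, 0.1   # illustrative values only
energies = []
for _ in range(20000):
    E = 1.0
    while random.random() > p_escape:   # keep crossing until escape
        E *= 1.0 + gain                 # energy boost per shock crossing
    energies.append(E)

# Expected integral spectrum N(>E) ~ E^-s with
# s = ln(1/(1 - p_escape)) / ln(1 + gain) ~ 1.1 for these values.
for cut in (10.0, 100.0):
    frac = sum(E > cut for E in energies) / len(energies)
    print(f"fraction above {cut:5.0f}x injection energy: {frac:.4f}")
```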
So, on soliton expressions: while the familiar animation of the boat traveling down the channel does not seem to offer itself here in the blog, for me this was the idea of the experimental processes unfolding at the LHC. Does the collision point create shock waves and particle sprays as jets?
In mathematics and physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it travels at constant speed. Solitons are caused by a
cancellation of nonlinear and dispersive effects in the medium. (The term "dispersive effects" refers to a property of certain systems where the speed of the waves varies according to frequency.)
Solitons arise as the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described by John
Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation".
So in a sense the shock front (the horn), with respect to Gran Sasso, is for me the idea that such a front becomes a dispersive element in the medium of the Earth: the Earth's densities give us a means to measure relativistic interpretations and to assign density determinations within the Earth. Yet there are things not held to this distinction; they move on past such targets, showing that cosmological considerations are just as relevant today as the experimental avenues we set up toward identifying this relationship here on Earth.
For more than a decade, scientists have seen evidence that the three known types of neutrinos can morph into each other. Experiments have found that muon neutrinos disappear, with some of the
best measurements provided by the MINOS experiment. Scientists think that a large fraction of these muon neutrinos transform into tau neutrinos, which so far have been very hard to detect, and
they suspect that a tiny fraction transform into electron neutrinos. See: Fermilab experiment weighs in on neutrino mystery
When looking out at the universe, does such a perspective hold for those not looking past the real toward the abstract? To understand the distance measure of the binary star of Taylor and Hulse, such signals need to be understood in relation to what is transmitted out into the cosmos. How are we measuring that distance? Those who are more abstractly gifted may see the waves generated in gravitational expression. So this becomes a means with which to ask: if the binary stars are getting closer, how is this distance measured? You see?
Measurement of the neutrino velocity with the OPERA detector in the CNGS beam
Pressure and heat melt protons and neutrons into a new state of matter - the quark gluon plasma.
Now you must know that this entry holds a philosophical perspective and reflects the mandate of the Night Light Mining Company: to explore the potential of planetary and geological data gained from scientific analysis, to help the society of Earth move farther out into space, and to colonize.
Why are Planets Round?
It is always interesting to see water in space.
Image: NASA/JPL-
Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from
radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to
planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment."
With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a
result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes.
I wanted to explore the philosophical bent first, as it sets the tone for analysis not only of the potential of planets but of what we can gain from understanding the place of values we set around ourselves.
Two-dimensional analogy of space–time distortion. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of
space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime. See: Spacetime
Be it known then, that such universality can exist in principle around this "central core" that such equatorial measures are distinctive and related to the equatorial possibility of Inverse Square
Law, that as a mathematical principle, this is brought to bear on how we solidify the substance of the elemental table, that we can say, indeed, that such values can be assigned in "refractive light"
to values which are built to become "round in planetary constitution."
The life cycle of a lunar impact and associated time and spatial scales. The LCROSS measurement methods are "layered" in response to the rapidly evolving impact environment. See: Impact: Lunar CRater Observation Satellite (LCROSS)
It becomes an evolutionary discourse, then: what began from universality "in principle" can become such a state as is evident in the framework of elemental consideration, so one might say, indeed, that it is "this constitution" that signifies the relevance to the spacetime fabric and its settled orbit.
See Also:
Isostatic Adjustment is Why Planets are Round?
Wegener proposed that the continents floated somewhat like icebergs in water. Wegener also noted that the continents move up and down to maintain equilibrium in a process called isostasy. - Alfred Wegener
Just thought I would add this for consideration. The GRACE satellite does a wonderful job of discerning this feature? Amalgamating differing perspectives allows one to encapsulate a larger view of the reality of Earth. More than the sphere. More than what Joseph Campbell describes:
The Power of Myth With Bill Moyers, by Joseph Campbell , Introduction that Bill Moyers writes,
"Campbell was no pessimist. He believed there is a "point of wisdom beyond the conflicts of illusion and truth by which lives can be put back together again." Finding it is the "prime question of
the time." In his final years he was striving for a new synthesis of science and spirit. "The shift from a geocentric to a heliocentric world view," he wrote after the astronauts touched the
moon, "seemed to have removed man from the center-and the center seemed so important...
While one can indeed approximate according to the spherical cow, in terms of events in the cosmos, I was being more specific when it comes to demonstrating a geometrical feature of the sphere in
terms of the geometry of the Centroid. This feature is embedded in the validation of the sphere in regard to gravity?
(See the NASA/JPL explanation of isostatic adjustment quoted above.)
It was important to see how such planets form given their mass and densities, which I thought would show how such a valuation could be seen in relation to the variance of gravity. So:
Isostasy (Greek isos = "equal", stásis = "standstill") is a term used in geology to refer to the state of gravitational equilibrium between the earth's lithosphere and asthenosphere such that the
tectonic plates "float" at an elevation which depends on their thickness and density. This concept is invoked to explain how different topographic heights can exist at the Earth's surface. When a
certain area of lithosphere reaches the state of isostasy, it is said to be in isostatic equilibrium. Isostasy is not a process that upsets equilibrium, but rather one which restores it (a
negative feedback). It is generally accepted that the earth is a dynamic system that responds to loads in many different ways, however isostasy provides an important 'view' of the processes that
are actually happening. Nevertheless, certain areas (such as the Himalayas) are not in isostatic equilibrium, which has forced researchers to identify other reasons to explain their topographic
heights (in the case of the Himalayas, by proposing that their elevation is being "propped-up" by the force of the impacting Indian plate).
In the simplest example, isostasy is the principle of buoyancy observed by Archimedes in his bath, where he saw that when an object was immersed, an amount of water equal in volume to that of the
object was displaced. On a geological scale, isostasy can be observed where the Earth's strong lithosphere exerts stress on the weaker asthenosphere which, over geological time flows laterally
such that the load of the lithosphere is accommodated by height adjustments.
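The Archimedes picture in the quote can be put in numbers; a minimal sketch with typical textbook densities (not measured values):

```python
# Airy-style isostasy: a crustal block "floats" in the denser mantle,
# so the fraction submerged equals the density ratio, as for an iceberg.
rho_crust = 2800.0    # kg/m^3, continental crust (typical value)
rho_mantle = 3300.0   # kg/m^3, upper mantle (typical value)

thickness = 35.0                                  # km of crust
submerged = thickness * rho_crust / rho_mantle    # depth of the "root"
freeboard = thickness - submerged                 # height riding above
print(f"root: {submerged:.1f} km, freeboard: {freeboard:.1f} km")
```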
Such strength variances can be attributed to the height at which this measure is taken (time clocks and such), and validation in terms of the inverse square law helps to identify this strength and weakness, according to the nature of the mass and density of the planet.
As one of the fields which obey the general inverse square law, the gravity field can be put in the form shown below, showing that the acceleration of gravity, g, is an expression of the
intensity of the gravity field.
See: Hyperphysics-Inverse Square Law-Gravity
It is important, then, that the energy needed to overcome the pull of the Earth is assigned its energy value, so that such calculations are validated in the escape velocity. There are other ways to measure spots in space when holding a bulk view of reality in regard to gravity concentrations and their locations.
See: Hyperphysics-Gravity-Escape Velocity
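A short sketch tying the two Hyperphysics pages together: the inverse square form of g, and the escape velocity that follows from setting kinetic energy against gravitational potential energy.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24           # kg, Earth's mass
R = 6.371e6            # m, Earth's mean radius

def g(r):
    # Field intensity falls off as the inverse square of distance.
    return G * M / r**2

v_esc = math.sqrt(2 * G * M / R)   # ~11.2 km/s at the surface
print(f"g at surface: {g(R):.2f} m/s^2, at 2R: {g(2*R):.2f} m/s^2")
print(f"escape velocity: {v_esc/1000:.1f} km/s")
```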
As I ponder the very basis of my thoughts about geometry, based on the very fabric of our thinking minds, it has always been a reductionist one in my mind: that the truth of reality would be a geometrical one.
The emergence of Maxwell's equations had to be included in the development of GR? Any Gaussian interpretation was necessary, so that the UV coordinates were well understood from that perspective as well. This would be inclusive in the approach to the development of GR. As a hobbyist of the history of science, along with the developments of today, I might seem less than adequate to the adventure, but I persevere.
On the Hypotheses which lie at the Bases of Geometry.
Bernhard Riemann
Translated by William Kingdon Clifford
[Nature, Vol. VIII. Nos. 183, 184, pp. 14--17, 36, 37.]
It is known that geometry assumes, as things given, both the notion of space and the first principles of constructions in space. She gives definitions of them which are merely nominal, while the
true determinations appear in the form of axioms. The relation of these assumptions remains consequently in darkness; we neither perceive whether and how far their connection is necessary, nor a
priori, whether it is possible.
From Euclid to Legendre (to name the most famous of modern reforming geometers) this darkness was cleared up neither by mathematicians nor by such philosophers as concerned themselves with it.
The reason of this is doubtless that the general notion of multiply extended magnitudes (in which space-magnitudes are included) remained entirely unworked. I have in the first place, therefore,
set myself the task of constructing the notion of a multiply extended magnitude out of general notions of magnitude. It will follow from this that a multiply extended magnitude is capable of
different measure-relations, and consequently that space is only a particular case of a triply extended magnitude. But hence flows as a necessary consequence that the propositions of geometry
cannot be derived from general notions of magnitude, but that the properties which distinguish space from other conceivable triply extended magnitudes are only to be deduced from experience. Thus
arises the problem, to discover the simplest matters of fact from which the measure-relations of space may be determined; a problem which from the nature of the case is not completely
determinate, since there may be several systems of matters of fact which suffice to determine the measure-relations of space - the most important system for our present purpose being that which
Euclid has laid down as a foundation. These matters of fact are - like all matters of fact - not necessary, but only of empirical certainty; they are hypotheses. We may therefore investigate
their probability, which within the limits of observation is of course very great, and inquire about the justice of their extension beyond the limits of observation, on the side both of the
infinitely great and of the infinitely small.
For me the education comes when I am lured by interest into a history spoken to by Stefan and Bee of Backreaction: the "way of thought" that preceded the advent of General Relativity.
Einstein urged astronomers to measure the effect of gravity on starlight, as in this 1913 letter to the American G.E. Hale. They could not respond until the First World War ended.
Translation of the letter from Einstein to the American G.E. Hale, by Stefan of BACKREACTION
Zurich, 14 October 1913
Highly esteemed colleague,
a simple theoretical consideration makes it plausible to assume that light rays will experience a deviation in a gravitational field.
[Grav. field] [Light ray]
At the rim of the Sun, this deflection should amount to 0.84" and decrease as 1/R (R = Sonnenradius [struck out] distance from the centre of the Sun).
[Earth] [Sun]
Thus, it would be of utter interest to know up to which proximity to the Sun bright fixed stars can be seen using the strongest magnification in plain daylight (without eclipse).
Fast Forward to an Effect
Bending light around a massive object from a distant source. The orange arrows show the apparent position of the background source. The white arrows show the path of the light from the true position
of the source.
The fact that this does not happen when gravitational lensing applies is due to the distinction between the straight lines imagined by Euclidean intuition and the geodesics of space-time. In fact, just as distances and lengths in special relativity can be defined in terms of the motion of electromagnetic radiation in a vacuum, so can the notion of a straight geodesic in general relativity.
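For a sense of the numbers in Einstein's letter: his 1913 figure is the "Newtonian" (half) deflection α = 2GM/(c²R), which full general relativity later doubled. A quick check with modern constants (hence roughly 0.87" rather than his 0.84"):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m, solar radius
c = 2.998e8            # m/s

alpha = 2 * G * M_sun / (c**2 * R_sun)      # deflection in radians
arcsec = alpha * 180 / math.pi * 3600
print(f"deflection at the solar rim: {arcsec:.2f} arcsec")  # ~0.87"
print(f"full GR value (doubled): {2*arcsec:.2f} arcsec")    # ~1.75"
```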
To me, gravitational lensing is a cumulative affair: such a geometry, borne in mind, could have passed beyond the postulates of Euclid and found its way to leaving an "indelible impression" that the resources of the mind, in a simple system, intuit.
Einstein, in the paragraph below, makes this clear as he ponders his relationship with Newton and the move to thinking about Poincaré.
The move to non-Euclidean geometries assumes that where Euclid leaves off, the basis of spacetime begins. So a statement such as "where there is no gravitational field, the spacetime is flat" should be followed by a Euclidean physical constant of a straight line = c?
I attach special importance to the view of geometry which I have just set forth, because without it I should have been unable to formulate the theory of relativity. ... In a system of reference
rotating relatively to an inert system, the laws of disposition of rigid bodies do not correspond to the rules of Euclidean geometry on account of the Lorentz contraction; thus if we admit
non-inert systems we must abandon Euclidean geometry. ... If we deny the relation between the body of axiomatic Euclidean geometry and the practically-rigid body of reality, we readily arrive at
the following view, which was entertained by that acute and profound thinker, H. Poincare:--Euclidean geometry is distinguished above all other imaginable axiomatic geometries by its simplicity.
Now since axiomatic geometry by itself contains no assertions as to the reality which can be experienced, but can do so only in combination with physical laws, it should be possible and
reasonable ... to retain Euclidean geometry. For if contradictions between theory and experience manifest themselves, we should rather decide to change physical laws than to change axiomatic
Euclidean geometry. If we deny the relation between the practically-rigid body and geometry, we shall indeed not easily free ourselves from the convention that Euclidean geometry is to be
retained as the simplest. (33-4)
It is never easy for me to see how I could have moved from Euclid's postulates to graduating to my "sense of things," to have adopted this "new way of seeing" that also accumulates toward the inclusion of gravity as a concept relevant to all aspects of the way in which one can see reality.
Susan Holmes- Statistician Persi Diaconis' mechanical coin flipper.
In football's inaugural kickoff coin toss, the coin is not caught but allowed to bounce on the ground. That introduces an extra complication, one mathematicians have yet to sort out.
Persi Diaconis See here.
The Ground State
There is always an "inverse order to Gravity" that helps one see in ways that we are not accustomed to. The methods of "prospective measurements" in science have taken a radical turn? Satellites, as a measure, have focused our views.
While one may now look at the "sun in a different way," it had to first display itself across the "neutrino Sudbury screen" before we knew to picture the sun in the way we now do. It was progressive, in the way the sun now forms a picture of what we now know in measure.
So you try and bring it all together under this "new way of seeing" and hopefully your account of "the way reality is," is shared by others who now understand what the heck I am doing?
To get a simple physical understanding of what the acoustic oscillations are, it may be helpful to change the perspective. Normally, the common way of presenting the phenomenon has been in terms of standing waves where the analysis is done in Fourier space. But the baryon-photon fluid really is just carrying sound waves, and the dispersion relation is even pretty linear. So let's instead think of things in terms of traveling waves in real space. - Steward Observatory, University of Arizona, c 2005
"Uncertainty" has this way of rearing it's head once we reduce our perspective to the microscopic principals(sand), yet, on the other side of the coin, how is it that only 5% of mass determination
allows us to see the universe mapped in the way it has in regards to the CMB?
There is this "entropic valuation" and with it, temperature. Some do not like the porridge "to hot or to cold," with regards to "living in a place" within the universe.
So I'll repeat the blog comment entry here in this blog so one can gather some of what I mean.
At 2:56 AM, December 11, 2007, Plato said...
As a lay person, with regard to the complexity of the language (sound) and the universe, it is sometimes reduced to "seeing in ways that are much easier to deal with," although of course it may not be the same for everyone? :)
:) Something good science people "do not want to hear?"
Good link in html.
The launching of the sound waves is very similar to dropping a rock in a pond and seeing the circular wave come off (obviously that a gravity wave, not a compressional wave, but I’m focusing
on the geometry). The difference here is that the area where the “rock” entered is still the most likely region to form galaxies; the spherical shell that it produced is only carrying 5% of
the mass.
Hopefully, this demystifies the effect: we’re seeing the imprint of spherical sound waves launched from the sites of dark matter overdensities in the early universe. But also I hope it makes
it more clear as to why this effect is so robust: the propagation of sound in the baryon-photon plasma is very simple, and all we’re doing is measuring how far it got.
"Mapping," had to begin somewhere. Whatever that may mean,one may think of Mendeleeev or Newlands.
Generally Grouping Order increases the density of objects within a frame of reference, resulting in a more pronounced single object.
"Sand with pebbles" on a beach? It had to arise from someplace?
The other side of the Coin is?
This recording was produced by converting into audible sounds some of the radar echoes received by Huygens during the last few kilometres of its descent onto Titan. As the probe approaches
the ground, both the pitch and intensity increase. Scientists will use the intensity of the echoes to speculate about the nature of the surface.
and not to be undone.
Mass results in an increase in the gravitational force exerted by an object. Density fluctuations on the surface of the Earth and in the underlying mantle are thus reflected in variations in the gravity field. As the twin GRACE satellites orbit the Earth together, these gravity field variations cause infinitesimal changes in the distance between the two. These changes will be measured with unprecedented accuracy by the instruments aboard GRACE, leading to a more precise rendering of the gravitational field than has ever been possible to date.
Layman pondering.
So now you have this "comprehensive view" I have gained on the way I am seeing the universe. You can "now see" how diverse the application of sound-in-analogy is. It is helping me to develop the "Colour of Gravity" as an artistic endeavour. I refrain from calling it "scientific," lest I be labelled a crackpot.
A Synesthetic View on Life.
Who knows how I can put these things together and come up with what I do. Yet it has not gone unnoticed that such concepts could merge into one another and come out with some tangible result as an "artistic effort." Some may be used to the paintings of Kandinsky (abstract), yet the plethora of imaging that unfolds in the conceptual framework might have been self-evident, from such a chaotic mess of the layman's view here?
At this point in the development, although geometry provided a common framework for all the forces, there was still no way to complete the unification by combining quantum theory and general
relativity. Since quantum theory deals with the very small and general relativity with the very large, many physicists feel that, for all practical purposes, there is no need to attempt such an
ultimate unification. Others however disagree, arguing that physicists should never give up on this ultimate search, and for these the hunt for this final unification is the ‘holy grail’. (Michael Atiyah)
The search for this "cup that overflows" is at the heart of all who venture for the lifeblood of the mystery of life. While Atiyah speaks to a unification of quantum theory and relativity, it is not without an understanding, on Einstein's part, of what he gained from Marcel Grossmann: that such a descriptive geometry could lead Einstein to discover the very basis of General Relativity?
Marcel Grossmann was a mathematician, and a friend and classmate of Albert Einstein. He became a Professor of Mathematics at the Federal Polytechnic Institute in Zurich, today the ETH Zurich,
specialising in descriptive geometry.
So what use is "this history" in the face of the unification of the very large with the very small? How far back should one go to know that the previous steps were helping to shape perspective for the future? Allow perspective to be changed, so that new avenues of research can spring forth.
Gaspard Monge, Comte de Péluse (portrait by Naigeon in the Musée de Beaune). Born: 9 May 1746 in Beaune, Bourgogne, France.
Died: 28 July 1818 in Paris, France. He was a French mathematician and the inventor of descriptive geometry.
Monge contributed (1770–1790) to the Memoirs of the Academy of Turin, the Mémoires des savantes étrangers of the Academy of Paris, the Mémoires of the same Academy, and the Annales de chimie,
various mathematical and physical papers. Among these may be noticed the memoir "Sur la théorie des déblais et des remblais" (Mém. de l’acad. de Paris, 1781), which, while giving a remarkably
elegant investigation in regard to the problem of earth-work referred to in the title, establishes in connection with it his capital discovery of the curves of curvature of a surface. Leonhard
Euler, in his paper on curvature in the Berlin Memoirs for 1760, had considered, not the normals of the surface, but the normals of the plane sections through a particular normal, so that the
question of the intersection of successive normals of the surface had never presented itself to him. Monge's memoir just referred to gives the ordinary differential equation of the curves of
curvature, and establishes the general theory in a very satisfactory manner; but the application to the interesting particular case of the ellipsoid was first made by him in a later paper in
1795. (Monge's 1781 memoir is also the earliest known anticipation of Linear Programming type of problems, in particular of the transportation problem. Related to that, the Monge soil-transport
problem leads to a weak-topology definition of a distance between distributions rediscovered many times since by such as L. V. Kantorovich, P. Levy, L. N. Wasserstein, and a number of others; and
bearing their names in various combinations in various contexts.) A memoir in the volume for 1783 relates to the production of water by the combustion of hydrogen; but Monge's results had been
anticipated by Henry Cavendish.
Descriptive geometry
Example of four different 2D representations of the same 3D object
Descriptive geometry is the branch of geometry which allows the representation of three-dimensional objects in two dimensions, by using a specific set of procedures. The resulting techniques are
important for engineering, architecture, design and in art. [1] The theoretical basis for descriptive geometry is provided by planar geometric projections. Gaspard Monge is usually considered the
"father of descriptive geometry". He first developed his techniques to solve geometric problems in 1765 while working as a draftsman for military fortifications, and later published his findings.
Monge's protocols allow an imaginary object to be drawn in such a way that it may be 3-D modeled. All geometric aspects of the imaginary object are accounted for in true size/to-scale and shape,
and can be imaged as seen from any position in space. All images are represented on a two-dimensional drawing surface.
Descriptive geometry uses the image-creating technique of imaginary, parallel projectors emanating from an imaginary object and intersecting an imaginary plane of projection at right angles. The
cumulative points of intersections create the desired image.
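A tiny sketch of that projection idea (my own illustration in Python, not from the quoted text): for axis-aligned projection planes, parallel projectors at right angles reduce to simply dropping one coordinate.

def project(points, plane):
    """Orthographically project 3D points onto a principal plane:
    'front' drops y, 'top' drops z, 'side' drops x."""
    drop = {"front": 1, "top": 2, "side": 0}[plane]
    return [tuple(c for i, c in enumerate(p) if i != drop) for p in points]

# A unit cube viewed three ways: three different 2D images, one 3D object,
# which is the core idea of descriptive geometry.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
for view in ("front", "top", "side"):
    print(view, sorted(set(project(cube, view))))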
So, given the tools, we learnt to see how objects within a referenced space, given to such coordinates, have been defined in that same space. Where is this point within that reference frame?
What is born within that point, such that through it there is an emergent product? It becomes a thing of expression from nothing? Its design and all, manifested as an entropic valuation of the cooling period?
Crystalline shapes born by design, and by element; from whence does its motivation come? An arrow of time?
A Condensative Result exists, where "energy concentrates" and expresses outward.
I mean, if I were to put on my eyeglasses, and these glasses were given to a way of seeing this universe, why not look at the whole universe bathed in such a spacetime fabric?
This is an opportunity to get "two birds" with one stone?
I was thinking of Garrett's E8 Theory article and Stefan's here.
On March 31, 2006 the high-resolution gravity field model EIGEN-GL04C has been released. This model is a combination of GRACE and LAGEOS mission plus 0.5 x 0.5 degrees gravimetry and
altimetry surface data and is complete to degree and order 360 in terms of spherical harmonic coefficients.
High-resolution combination gravity models are essential for all applications where a precise knowledge of the static gravity potential and its gradients is needed in the medium and short
wavelength spectrum. Typical examples are precise orbit determination of geodetic and altimeter satellites or the study of the Earth's crust and mantle mass distribution.
But, various geodetic and altimeter applications request also a pure satellite-only gravity model. As an example, the ocean dynamic topography and the derived geostrophic surface currents,
both derived from altimeter measurements and an oceanic geoid, would be strongly correlated with the mean sea surface height model used to derive terrestrial gravity data for the combination.
Therefore, the satellite-only part of EIGEN-GL04C is provided here as EIGEN-GL04S1. The contributing GRACE and Lageos data are already described in the EIGEN-GL04C description. The
satellite-only model has been derived from EIGEN-GL04C by reduction of the terrestrial normal equation system and is complete up to degree and order 150.
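As a worked note (standard geodesy, added here; not taken from the quoted release): such models tabulate normalized coefficients of a spherical-harmonic expansion of the geopotential,

\[
V(r,\phi,\lambda) = \frac{GM}{r}\left[1 + \sum_{n=2}^{N}\sum_{m=0}^{n}\left(\frac{a}{r}\right)^{n} \bar{P}_{nm}(\sin\phi)\left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\right],
\]

so "complete to degree and order 360" means N = 360, a spatial half-wavelength of roughly 20,000 km / 360, or about 55 km.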
How many really understand/see the production of gravitational waves in regards to Taylor and Hulse?
To see Stefan's correlation in terms of "wave production" is a dynamical quality to what is still being experimentally looked for by LIGO?
As scientists, do you know this?
6:41 AM, November 11, 2007
See here
Thus the binary pulsar PSR1913+16 provides a powerful test of the predictions of the behavior of time perceived by a distant observer according to Einstein's Theory of Relativity.
Since we know the theory of Relativity is about Gravity, then how is it the applications can be extended to the way we see "anew" in our world?
A sphere, our earth, not so round anymore.
Uncle has tried to correct me on "isostatic adjustment."
Derek Sears, professor of cosmochemistry at the University of Arkansas, explains. See here
Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from
radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to
planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment."
With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a
result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes. K. Shumacker, Scientific American
Do not have time to follow up at this moment.
7:02 AM, November 11, 2007
.....and here.
In the context of the post and its differences, I may not have pointed to the substance of the post, yet I would have dealt with my problem in seeing.
In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or
contracting sphere) or cylindrically symmetric (like a spinning disk).
A simple example is the spinning dumbbell. Set upon one end, so that one side of the dumbbell is on the ground and the other end is pointing up, the dumbbell will not radiate when it spins around its vertical axis but will radiate if it tumbles end-over-end. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. If we imagine an
extreme case in which the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would
be given off.
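A worked equation makes the dumbbell example concrete (textbook results, my addition): the radiated power is driven by the third time derivative of the traceless mass quadrupole, which vanishes for an axisymmetric spin but not for end-over-end tumbling,

\[
P = \frac{G}{5c^{5}}\left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij}\right\rangle ,
\]

and for two masses m_1, m_2 in a circular orbit of separation a,

\[
P = \frac{32}{5}\,\frac{G^{4}}{c^{5}}\,\frac{(m_1 m_2)^{2}(m_1+m_2)}{a^{5}} .
\]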
Given the context of the "whole universe," what is actually pervading it, if one did not include gravity?
So singularities are pointing to the beginning, yet we do not know if we should just say "the Big Bang," because one would have had to calculate the energy used, and where it came from, "previous" to its manifesting?
So some will have this philosophical position about "nothing(?)," and "everything as already existing."
Wherever there are no gravitational waves, the spacetime is flat. One would have to define these two variances: one from understanding the relation to "radiation," and the other from "perfectly spherically symmetric" motion.
Grossmann is getting his doctorate on a topic that is connected with non-Euclidean geometry. I don’t know what it is.
Einstein to Mileva Maric, 1902
Animal Navigation
The long-distance navigational abilities of animals have fascinated humans for centuries and challenged scientists for decades. How is a butterfly with a brain weighing less than 0.02 grams able
to find its way to a very specific wintering site thousands of kilometers away, even though it has never been there before? And, how does a migratory bird circumnavigate the globe with a
precision unobtainable by human navigators before the emergence of GPS satellites? To answer these questions, multi-disciplinary approaches are needed. A very good example of such an approach on
shorter distance navigation is the classical ongoing studies on foraging trips of Cataglyphis desert ants. My Nachwuchsgruppe [junior research group] intends to use mathematical modelling, physics, quantum chemistry,
molecular biology, neurobiology, computer simulations and newly developed laboratory equipment in combination with behavioral experiments and analyses of field data to achieve a better
understanding of the behavioral and physiological mechanisms of long distance navigation in insects and birds.
Tony Smith has some interesting information in response to a post by Clifford of Asymptotia.
Clifford writes:
This is simply fascinating. I heard about it on NPR. While it is well known that birds are sensitive to the earth’s magnetic field, and use it to navigate, apparently it’s only been recently
shown that this sensitivity is connected directly to the visual system (at least in some birds). The idea seems to be that the bird has evolved a mechanism for essentially seeing the magnetic
field, presumably in the sense that magnetic information is encoded in the visual field and mapped to the brain along with the usual visual data
While my post has been insulted by cutting it short (and by stamping it and proclaiming irrelevance), I'd like to think otherwise, even in the face of the streamlining that Clifford likes to do. His blog; he can do what he wants, of course.
In any case, it seems reasonable to agree with Buhler, who concludes in his biography of Gauss that "the oft-told story according to which Gauss wanted to decide the question [of whether space is
perfectly Euclidean] by measuring a particularly large triangle is, as far as we know, a myth."
So I'll repeat my post here, including the part that he deleted. You had to know how to see the relevance of the proposition of birds in relation to the magnetic field of the earth, to know why the bird relation is so important.
On Magnetic vision
Rupert Sheldrake has had similar thoughts on this topic.
"Numerous experiments on homing have already been carried out with pigeons. Nevertheless, after nearly a century of dedicated but frustrating research, no one knows how pigeons home, and all
attempts to explain their navigational ability in terms of known senses and physical forces have so far proved unsuccessful. Researchers in this field readily admit the problem. 'The amazing
flexibility of homing and migrating birds has been a puzzle for years. Remove cue after cue, and yet animals still retain some backup strategy for establishing flight direction.' 'The problem of
navigation remains essentially unsolved.'
Many academics might have steered clear because of the thoughts and subjects he pursues? It seems to me that if this information is credible, then some of Rupert's work has some substance to it, and hence it brings some credibility to the academic outlook?
Update: Here I am adding some thoughts in regards to Rupert Sheldrake that I was having while reading his work. He had basically denounced the idea of birds having a physiological connection to magnetic fields, because of not having any information to support the magnetic vision Clifford is talking about. So Rupert moves beyond this speculation, to create an idea about what he calls Morphic Resonance with regards to animals.
So Rupert presents future data and theoretics in face of what we now know in terms of the neurological basis is experimentally being talked about in the article in question Clifford is writing about.
On How to see in the Non-Euclidean Geometrical World
8.6 On Gauss's Mountains
One of the most famous stories about Gauss depicts him measuring the angles of the great triangle formed by the mountain peaks of Hohenhagen, Inselberg, and Brocken for evidence that the geometry
of space is non-Euclidean. It's certainly true that Gauss acquired geodetic survey data during his ten-year involvement in mapping the Kingdom of Hanover during the years from 1818 to 1832, and
this data included some large "test triangles", notably the one connecting those three mountain peaks, which could be used to check for accumulated errors in the smaller triangles. It's also
true that Gauss understood how the intrinsic curvature of the Earth's surface would theoretically result in slight discrepancies when fitting the smaller triangles inside the larger triangles,
although in practice this effect is negligible, because the Earth's curvature is so slight relative to even the largest triangles that can be visually measured on the surface. Still, Gauss
computed the magnitude of this effect for the large test triangles because, as he wrote to Olbers, "the honor of science demands that one understand the nature of this inequality clearly". (The
government officials who commissioned Gauss to perform the survey might have recalled Napoleon's remark that Laplace as head of the Department of the Interior had "brought the theory of the
infinitely small to administration".) It is sometimes said that the "inequality" which Gauss had in mind was the possible curvature of space itself, but taken in context it seems he was referring
to the curvature of the Earth's surface.
See:Reflections on Relativity
As a layperson, Riemann and Gauss were instrumental in helping me see beyond what we were accustomed to in the Euclidean, so I find Clifford's blog post extremely interesting as well. Maybe there is even a biological/physiological input into our senses as well? Who knows?:)
Einstein's youth and the compass become the motivation that drives the vision of what exists beyond what was acceptable in that youth. The mystery. It creates a new method of viewing the world beyond the magnetic, to help us include the gravitational view as well.
From an early age, young Albert showed great interest in the world around him. When he was five years old, his father gave him a compass, and the child was enchanted by the device and intrigued by the fact that the needle followed an invisible field to point always in the direction of the north pole. Reminiscing in old age, Einstein mentioned this incident as one of the factors that perhaps motivated him years later to study the gravitational field. God's Equation, by Amir D. Aczel, pg. 14
While something could exist that is abstract, like for instance the Gaussian arc, this inclusion in the value of general relativity is well known. The exchange with Mileva quoted above was a key to Einstein's path toward developing General Relativity, and without it, "electromagnetism would not, and could not," have been included geometrically in the theory of GR?
It was in succession that "gravitational wave production" came to be understood, in regards to Taylor and Hulse.
The theory of relativity predicts that, as it orbits the Sun, Mercury does not exactly retrace the same path each time, but rather swings around over time. We say therefore that the perihelion --
the point on its orbit when Mercury is closest to the Sun -- advances.
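For the record (the textbook formula, not from the quoted source): general relativity gives a perihelion advance per orbit of

\[
\Delta\varphi = \frac{6\pi G M}{c^{2}\, a\, (1-e^{2})},
\]

which for Mercury works out to the famous ~43 arcseconds per century beyond the Newtonian perturbations.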
I would think this pendulum exercise would make a deeper impression if held in concert with the way one might have looked at Mercury's orbit.
Or at binary pulsar PSR 1913+16 of Taylor and Hulse. These are macroscopic valuations of what the pendulum means. Would this not be true? See: Harmonic Oscillation
I guess not every string theorist would know this? Maybe even Bee would understand that "German" is replaced by another form of seeing, using abstract language, for how everything can be seen in relation to the ground state? Where there are no gravitational waves, spacetime is flat.
You had to know how such views on the navigation of the birds could have a direct link to the evolutionary output of the biology and physiology of the species. What Toposense?
Yes, it's a process where the mathematical minds look at knitting and such, in such modular forms, to have said, "hey, there is a space of thinking" in which we can do really fancy twists and such.
One thing us humans can certainly do is construct the monumental world reality with straight lines and such, in the Euclidean view. But nature was there before we thought to change all its curves.
But the truth is, the Earth's topography is highly variable with mountains, valleys, plains, and deep ocean trenches. As a consequence of this variable topography, the density of Earth's surface
varies. These fluctuations in density cause slight variations in the gravity field, which, remarkably, GRACE can detect from space. See: The Mind Field
See here for more info on Grace.
Look out into the wild world that nature itself presents, and tell me what the ancient mind did not see. Native Americans lived closer to nature. Hopefully you'll understand why it is we must engage ourselves in experiencing the views of nature?:)
Mandalic Construction
See: The Last Mimzy
The "Ancient Medicine wheels" might have been place accordingly? Do you imagine seeing in the abstract world, the magnetic view we see of earth in it's different disguise?
So that last line about the "medicine wheels" probably caused Clifford to do what he did in regards to the post I wrote.
Yes I am creating a direct link between the Medicine Wheels and the Medicine Wheel as a Mandala constructed by early Native Americans. Where they were shamanically placed on the earth.
What is a Medicine Wheel?
The term "medicine wheel" was first applied to the Big Horn Medicine Wheel in Wyoming, the most southern one known. That site consists of a central cairn or rock pile surrounded by a circle of
stone; lines of cobbles link the central cairn and the surrounding circle. The whole structure looks rather like a wagon wheel laid out on the ground, with the central cairn forming the hub, the
radiating cobble lines the spokes, and the surrounding circle the rim. The "medicine" part of the name implies that it was of religious significance to Native peoples.
Figure 4 - Distribution of medicine wheel sites east of the Rockies
What was of importance is the underlying psychological patterns that exist in the forms of mandalas. That such a thing as the Medicine Wheel would retain an impact from one's life to another life.
There are various forms of mandalas with distinct concepts and different purposes. The individual representations range from the so-called Cosmic Mandalas, which transmit the ancient knowledge of
the development of the universe and the world-systems which represents a high point among Mandalas dedicated to meditation; to the Mandalas of the Medicine Buddha which demonstrates how the
Buddha-power radiates in all directions, portraying the healing power of the Buddha.
It would not be easy to understand this "seed mandala" as it makes its way into conscious recognition. It arises to awareness through the subconscious pathway, during our susceptibility in dream time.
This open accessibility is the understanding that there is a closer connection to the universality of being, and the realization that the degree beyond the "emotive body" is developing the understanding of the "mental one," as well as leading to "the spiritual one."
This comparative view is analogous to development beyond the abstract view we see of earth in its gravitational form.
However, the signals that scientists hope to measure with LISA and other gravitational wave detectors are best described as "sounds." If we could hear them, here are some of the possible sounds
of a gravitational wave generated by the movement of a small body inspiralling into a black hole.
It would be much like an "energy packet" that would contain all that is demonstrated in "extravagant patterns." It may look like a "flower in real life," or an "intricate pattern," while encouraging the person to explore these doorways and move on from them.
That seed contains all of the history we have supplanted to it, by how we built previously, and embeds all the philosophy we had learnt from it.
The Emotional Body of the Earth
Would, to me, seem very emotive in terms of its weather. How such weather patterns spread across the earth. Also, it would not seem so strange, then, that while we would have seen polarization aspects in the cosmos, in terms of magnetic field variances in relation to north and south, we would see "this of value" in the earth as well?
So would the earth have its positive and negative developments in relation to aspects of its weather? Most certainly psychological; when the snows have lasted so long, one could indeed wish for warmer weather, but that's not what I mean. I mean that on a physiological level, such ionic generations would indeed cause the state of the human body to react.
Gradient theorem: why F = -grad(U)?
If we put them into a common context, then F is a force field. A line integral of F represents the work done by the force field on a free particle that traverses that path. The amount of work done is
equal to the particle's gain in kinetic energy, which is equal to its loss of potential energy. So when physicists flip the sign, they are accounting for changes in potential energy rather than
changes in kinetic energy.
Outside this physical context, there is no reason to flip the sign. You just use the fundamental theorem of line integrals.
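To spell out the bookkeeping (a standard derivation, added for clarity): with F = -grad(U), the gradient theorem gives

\[
W = \int_{a}^{b}\mathbf{F}\cdot d\mathbf{r} = -\int_{a}^{b}\nabla U\cdot d\mathbf{r} = U(\mathbf{a}) - U(\mathbf{b}) = -\Delta U,
\]

and since the work-energy theorem says W = \Delta K, the minus sign is exactly what makes \Delta K + \Delta U = 0, i.e. conservation of mechanical energy.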
Monday, February 4th, 2013
At least eight people—journalists, colleagues, blog readers—have now asked my opinion of a recent paper by Ross Anderson and Robert Brady, entitled “Why quantum computing is hard and quantum
cryptography is not provably secure.” Where to begin?
1. Based on a “soliton” model—which seems to be almost a local-hidden-variable model, though not quite—the paper advances the prediction that quantum computation will never be possible with more
than 3 or 4 qubits. (Where “3 or 4” are not just convenient small numbers, but actually arise from the geometry of spacetime.) I wonder: before uploading their paper, did the authors check
whether their prediction was, y’know, already falsified? How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions—not to
mention the famous experiments with superconducting Josephson junctions, buckyballs, and so forth that have demonstrated the reality of entanglement among many thousands of particles (albeit not
yet in a “controllable” form)?
2. The paper also predicts that, even with 3 qubits, general entanglement will only be possible if the qubits are not collinear; with 4 qubits, general entanglement will only be possible if the
qubits are not coplanar. Are the authors aware that, in ion-trap experiments (like those of David Wineland that recently won the Nobel Prize), the qubits generally are arranged in a line? See
for example this paper, whose abstract reads in part: “Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap.”
3. Finally, the paper argues that, because entanglement might not be a real phenomenon, the security of quantum key distribution remains an open question. Again: are the authors aware that the most
practical QKD schemes, like BB84, never use entanglement at all? And that therefore, even if the paper’s quasi-local-hidden-variable model were viable (which it’s not), it still wouldn’t justify
the claim in the title that “…quantum cryptography is not provably secure”?
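To make the BB84 point concrete, here is a toy sketch (my own illustration, plain Python; no real quantum hardware implied): Alice prepares single qubits in randomly chosen bases, Bob measures in randomly chosen bases, and sifting keeps the matching-basis positions. Note that no entangled state appears anywhere in the protocol.

import random

def bb84_sift(n=32, seed=0):
    rng = random.Random(seed)
    alice_bits = [rng.randrange(2) for _ in range(n)]
    alice_bases = [rng.choice("ZX") for _ in range(n)]  # preparation basis
    bob_bases = [rng.choice("ZX") for _ in range(n)]    # measurement basis
    sifted = []
    for bit, a, b in zip(alice_bits, alice_bases, bob_bases):
        if a == b:
            sifted.append(bit)  # matching basis: Bob's outcome is Alice's bit
        # mismatched basis: Bob's outcome would be random, so it is discarded
    return sifted

print(bb84_sift())  # on average about half the positions survive sifting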
Yeah, this paper is pretty uninformed even by the usual standards of attempted quantum-mechanics-overthrowings. Let me now offer three more general thoughts.
First thought: it’s ironic that I’m increasingly seeing eye-to-eye with Lubos Motl—who once called me “the most corrupt piece of moral trash”—in his rantings against the world’s
“anti-quantum-mechanical crackpots.” Let me put it this way: David Deutsch, Chris Fuchs, Sheldon Goldstein, and Roger Penrose hold views about quantum mechanics that are diametrically opposed to one
another’s. Yet each of these very different physicists has earned my admiration, because each, in his own way, is trying to listen to whatever quantum mechanics is saying about how the world works.
However, there are also people all of whose “thoughts” about quantum mechanics are motivated by the urge to plug their ears and shut out whatever quantum mechanics is saying—to show how whatever
naïve ideas they had before learning QM might still be right, and how all the experiments of the last century that seem to indicate otherwise might still be wiggled around. Like monarchists or
segregationists, these people have been consistently on the losing side of history for generations—so it’s surprising, to someone like me, that they continue to show up totally unfazed and itching
for battle, like the knight from Monty Python and the Holy Grail with his arms and legs hacked off. (“Bell’s Theorem? Just a flesh wound!”)
Like any physical theory, of course quantum mechanics might someday be superseded by an even deeper theory. If and when that happens, it will rank alongside Newton’s apple, Einstein’s elevator, and
the discovery of QM itself among the great turning points in the history of physics. But it’s crucial to understand that that’s not what we’re discussing here. Here we’re discussing the possibility
that quantum mechanics is wrong, not for some deep reason, but for a trivial reason that was somehow overlooked since the 1920s—that there’s some simple classical model that would make everyone
exclaim, “oh! well, I guess that whole framework of exponentially-large Hilbert space was completely superfluous, then. why did anyone ever imagine it was needed?” And the probability of that is
comparable to the probability that the Moon is made of Gruyère. If you’re a Bayesian with a sane prior, stuff like this shouldn’t even register.
Second thought: this paper illustrates, better than any other I’ve seen, how despite appearances, the “quantum computing will clearly be practical in a few years!” camp and the “quantum computing is
clearly impossible!” camp aren’t actually opposed to each other. Instead, they’re simply two sides of the same coin. Anderson and Brady start from the “puzzling” fact that, despite what they call
“the investment of tremendous funding resources worldwide” over the last decade, quantum computing still hasn’t progressed beyond a few qubits, and propose to overthrow quantum mechanics as a way to
resolve the puzzle. To me, this is like arguing in 1835 that, since Charles Babbage still hasn’t succeeded in building a scalable classical computer, we need to rewrite the laws of physics in order
to explain why classical computing is impossible. I.e., it’s a form of argument that only makes sense if you’ve adopted what one might call the “Hype Axiom”: the axiom that any technology that’s
possible sometime in the future, must in fact be possible within the next few years.
Third thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and
Brady’s persuasive, and vice versa. Dyakonov agrees that scalable QC will never work, but he ridicules the idea that we’d need to modify quantum mechanics itself to explain why. Anderson and Brady,
by contrast, are so eager to modify QM that they don’t mind contradicting a mountain of existing experiments. Indeed, the question occurs to me of whether there’s any pair of quantum computing
skeptics whose arguments for why QC can’t work are compatible with one another’s. (Maybe Alicki and Dyakonov?)
But enough of this. The truth is that, at this point in my life, I find it infinitely more interesting to watch my two-week-old daughter Lily, as she discovers the wonderful world of shapes, colors,
sounds, and smells, than to watch Anderson and Brady, as they fail to discover the wonderful world of many-particle quantum mechanics. So I’m issuing an appeal to the quantum computing and
information community. Please, in the comments section of this post, explain what you thought of the Anderson-Brady paper. Don’t leave me alone to respond to this stuff; I don’t have the time or
the energy. If you get quantum probability, then stand up and be measured!
Breakthrough Toward Quantum Computing
Soulskill posted more than 2 years ago | from the advancing-in-fundamental-increments dept.
redwolfe7707 writes "Qubit registers have been a hard thing to construct; this looks to be a substantial advance in the multiple entanglements required for their use. Quoting: 'Olivier Pfister, a
professor of physics in the University of Virginia's College of Arts & Sciences, has just published findings in the journal Physical Review Letters demonstrating a breakthrough in the creation of
massive numbers of entangled qubits, more precisely a multilevel variant thereof called Qmodes. ... Pfister and researchers in his lab used sophisticated lasers to engineer 15 groups of four
entangled Qmodes each, for a total of 60 measurable Qmodes, the most ever created. They believe they may have created as many as 150 groups, or 600 Qmodes, but could measure only 60 with the
techniques they used.'" In related news, research published in the New Journal of Physics (abstract) shows "how quantum and classical data can be interlaced in a real-world fiber optics network,
taking a step toward distributing quantum information to the home, and with it a quantum internet."
First Quantum Post (0)
GameboyRMH (1153867) | more than 2 years ago | (#36803138)
Users on quantum PCs will see a dirty joke in place of this text.
Re:First Quantum Post (1)
blair1q (305137) | more than 2 years ago | (#36803274)
But not if they actually try to read it.
Re:First Quantum Post (1)
Schmorgluck (1293264) | more than 2 years ago | (#36803638)
Actually, since you couldn't know for certain that your post would be first before you had posted it, it was in a quantum state of being first and not-first until you collapsed its wave function by
posting it.
Or something like that...
Re:First Quantum Post (0)
| more than 2 years ago | (#36806048)
thank you
Quantum Internet? (0)
Synn (6288) | more than 2 years ago | (#36803164)
Quantum internet, really? Will I be eating Quantum pop tarts while I surf Quantum porn?
Re:Quantum Internet? (1, Funny)
clyde_cadiddlehopper (1052112) | more than 2 years ago | (#36803210)
Cue Schrodinger's Goatse in 3-2-1...
Re:Quantum Internet? (0)
tom17 (659054) | more than 2 years ago | (#36803238)
Are you saying you like *dead* stretched anuses?
Re:Quantum Internet? (1)
Alsee (515537) | more than 2 years ago | (#36805568)
Quantum link to goatse... you don't know if you saw it or not, but you're absolutely certain that you don't want to know.
Re:Quantum Internet? (1)
Greyfox (87712) | more than 2 years ago | (#36805692)
Hmm. Looks like a wormhole...
Re:Quantum Internet? (1)
Thing 1 (178996) | more than 2 years ago | (#36807250)
Hmm. Looks like a wormhole...
What, Morgan Freeman's?
Re:Quantum Internet? (1)
Lanteran (1883836) | more than 2 years ago | (#36806052)
You won't know if this link is goatse until you click it and collapse the wave function!
Re:Quantum Internet? (2, Funny)
| more than 2 years ago | (#36803230)
Quantum porn?
He or She does all possible things with another He or She all at once. If you really like what you see, don't blink because it will be something different by the time your eyelids open back up.
Re:Quantum Internet? (0)
| more than 2 years ago | (#36803842)
Actually I'm pretty sure that already exists. Quantum computing or no, rule 34 is absolute.
Re:Quantum Internet? (0)
| more than 2 years ago | (#36803906)
And what's more, they're always in a super position...
Re:Quantum Internet? (1)
Dogtanian (588974) | more than 2 years ago | (#36804952)
Quantum porn?
He or She does all possible things with another He or She all at once. If you really like what you see, don't blink because it will be something different by the time your eyelids open back up.
You mean like a pornographic version of that Doctor Who episode with the scary statues?
Re:Quantum Internet? (1)
vlm (69642) | more than 2 years ago | (#36803240)
Quantum internet, really? Will I be eating Quantum pop tarts while I surf Quantum porn?
It will almost certainly be the marketing term of the decade.
Much as I had "turbo sunglasses" in the 80s, because nothing says glare reduction like a turbocharger, and I bought a nano "i-pod" some years ago, i- as in internet, when ironically it's probably the only piece of consumer end-user electronics Apple sold that decade without a web browser.
I'm sure I'll see stickers to put on my quantum computer that somehow make it faster, and quantum tennis shoes, RSN.
Re:Quantum Internet? (1)
Dogtanian (588974) | more than 2 years ago | (#36805288)
I bought a nano "i-pod" some years ago, i- as in internet when ironically its probably the only piece of consumer end-user electronics apple sold that decade without a web browser.
Sure, but remember the "i" prefix originated (as far as Apple were concerned) with the original iMac, not the iPod. The former was Apple's "comeback" product and culturally prominent at the time
(remember the late-90s translucent coloured plastic fad it sparked). In that case it *did* supposedly stand for "Internet".
I'm assuming that the name "iPod" was then chosen to piggyback on the success and name recognition of the iMac, regardless of whether the "i" was relevant. The fact that the iPod was even more
successful than the iMac makes it easy to forget that it didn't originate Apple's iNaming scheme(!)
Re:Quantum Internet? (1)
narcc (412956) | more than 2 years ago | (#36808620)
In the before time, in the long long ago, we had tons of stuff prefixed with "e" or "i" -- how Apple managed its now near monopoly on that particular lower-case vowel prefix is anyone's guess.
Re:Quantum Internet? (1)
Sulphur (1548251) | more than 2 years ago | (#36808846)
In the before time, in the long long ago, we had tons of stuff prefixed with "e" or "i" -- how Apple managed its now near monopoly on that particular lower-case vowel prefix is anyone's guess.
Favorite letter?
Re:Quantum Internet? (0)
| more than 2 years ago | (#36803396)
Quantum porn: 50% chance of being straight porn, 50% chance of being gay porn, and you don't know which it turns out to be until you are "done".
Re:Quantum Internet? (1)
equex (747231) | more than 2 years ago | (#36803768)
Haha! Why AC? This is a good joke!
Re:Quantum Internet? (0)
| more than 2 years ago | (#36804324)
Because you can't know if a quantum joke is good or bad until you posted it.
Re:Quantum Internet? (1)
ThePeices (635180) | more than 2 years ago | (#36803424)
"Quantum internet, really? Will I be eating Quantum pop tarts while I surf Quantum porn?"
No, you will not be going down on slutty pop music singers while looking at porn on a quantum computer.
You are just not an A-list celebrity.
Re:Quantum Internet? (1)
mrops (927562) | more than 2 years ago | (#36804274)
You wouldn't know if it's porn or a preacher giving a sermon until you hit play, and then as soon as you observe it, it won't be either.
Re:Quantum Internet? (1)
tsotha (720379) | more than 2 years ago | (#36806488)
You will be surfing porn and /. at the same time. And, probably, eating both chocolate and strawberry pop tarts.
*stare* (2)
Spigot the Bear (2318678) | more than 2 years ago | (#36803166)
Right, magic, got it.
Obligatory: But Will It Ecrypt My Google Mail (0)
| more than 2 years ago | (#36803176)
so the N.S.A [youtube.com] can't eavesdrop?
Yours In D.C.,
K. Trout
Re:Obligatory: But Will It Ecrypt My Google Mail (1)
peragrin (659227) | more than 2 years ago | (#36804976)
Sure but china has already hacked your password anyway so there isn't much point.
How do they work? (2)
Normal Dan (1053064) | more than 2 years ago | (#36803196)
How would one read the output of a quantum computer if the quantum state changes upon observation? Wouldn't it just spit out random numbers?
Re:How do they work? (1)
| more than 2 years ago | (#36803284)
Wouldn't it just spit out random numbers?
Typically, yes, but this can be more useful than it sounds! What you want to do is perform a number of quantum operations [wikipedia.org] such that, when you do measure your qubit, you will get the
answer you want with probability >0.5. The operations you can perform on a qubit are very limited (you'll notice there are no AND operations, OR operations or anything that would allow you an IF
statement) and you almost never get an answer that's correct with absolute certainty, but they could still be fantastically useful. With just a few operations you can sometimes coerce a qubit into a state such that it will give you the correct answer with probability between 0.5 and 1.0. Once you have that, it's just a matter of repeating the computation enough times to be confident.
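A quick numerical illustration of that last point (hypothetical numbers, plain Python): if a single run is correct with probability p > 0.5, a majority vote over k independent runs is correct far more often.

import random

def majority_correct(p=0.6, k=25, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        right = sum(rng.random() < p for _ in range(k))  # correct runs
        wins += right > k // 2                           # majority correct?
    return wins / trials

print(majority_correct())  # about 0.85, even though a single run is only 0.6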
Re:How do they work? (2)
vlm (69642) | more than 2 years ago | (#36803476)
How would one read the output of a quantum computer if the quantum state changes upon observation? Wouldn't it just spit out random numbers?
One term to google for is decoherence.
The two paragraph wikipedia answer is at:
http://en.wikipedia.org/wiki/Quantum_computer#Operation [wikipedia.org]
The multi-page answer at quantiki is at:
http://www.quantiki.org/wiki/Basic_concepts_in_quantum_computation#Decoherence_and_recoherence [quantiki.org]
My crappy slashdot car analogy is the internal state of my car is almost infinitely complicated, O2 sensor levels and thermostat bypass fractions. But you could theoretically compute an algebraic
equation that boils down to: can I drive 400 miles on a tank of gas? The answer at the end is, is the engine running or not, just one binary bit. All the hidden internal variables and states, zillions of them, like coolant temp, O2 loop state, etc., all collapse down to one bit. It's not really important what the internal state is when the engine shuts off, or if it shut off because the O2 loop leaned out of spec, or the fuel pump control loop went haywire when it sucked air, or...
So, a 1024 qubit computer has 2^1024 internal variables all of a vaguely analog complex number value, and those 2^1024 values collapse down to a mere 1024 bits when you factor my RSA key... There's a
darn near infinite number of possible values that collapse down to 1024 bits and you randomly get one of them.
Re:How do they work? (0)
| more than 2 years ago | (#36803534)
quantum computers are probabilistic rather than deterministic... take a single qubit for example; it can be a zero, a one, or a linear superposition of both... when we measure a quantum system, its wavefunction collapses into one of the 2 states: a 1 or a zero. You then average across multiple readings to get the actual value... For example, say the qubit was in an equal superposition of 1 and 0... that means when we measure, the probability of getting a 1 is 0.5 and the probability of getting a zero is 0.5. If we average across a large enough ensemble we recover the state of being in an equal superposition.
Re:How do they work? (0)
| more than 2 years ago | (#36805676)
A qubit in an eigenstate (1 or 0) will remain in that state until the state is destroyed (by measuring a complementary variable).
Re:How do they work? (3, Informative)
Alsee (515537) | more than 2 years ago | (#36805460)
We can compare it to rolling dice, where the dice can be loaded to shift the percentages. If we put a weight on the 1, we might roll a 1 half the time and randomly get 2 through 6 the rest of the time. In quantum mechanics we can perform calculations that change how the dice are loaded. Ideally, we can load the die so strongly that 2 through 6 are driven down to zero percent, and 100% of the time we "randomly" roll a 1.
Depending on the particular problem and the particular technology used, certain parts of the computer might not be working with a perfect-clean 100%. Particular parts of the computer might have the dice loaded to 99.9% randomly roll a particular result. Obviously we don't want a computer that's only 99.9% right :D
Different kinds of quantum computers deal with that in different ways. The simplest example is a quantum computer that works on a beam of photons or something. A beam of light might contain a
trillion photons per nanosecond. If 99.9% of those photons randomly come out on the right answer, the right answer obviously lights up brightly. The 0.1% of photons lighting up the various wrong
answers will be too dim to notice.
For some quantum calculations they use the very simple technique of just running it a dozen times or something. There's basically a 99% chance you'll get a dozen matching correct answers, and a 1% chance you'll get eleven matching correct answers and one random error that you throw away. There is a minuscule chance you'll get (and throw away) two or possibly three garbage results out of the dozen, but the only way you would ever get a wrong answer from that is if the same wrong answer came up seven or more times at once, and mathematically that won't happen in a million or billion years.
However for a "real" desktop-type quantum computer, there's a much more complicated and powerful technique they would build in... error correcting bits and error correcting codes. By adding in a few extra bits, the computer can automatically spot and correct any random wrong values as soon as they appear. All of the automatic error correction stuff might make the computer something like 50% bigger or maybe twice the size, but it can easily match the (effectively zero) error rate of standard computers.
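Rough arithmetic behind that "million or billion years" claim (assuming, for illustration, a 1% per-run error rate): the chance that 7 or more of 12 runs are wrong at all, never mind wrong in the same way, is already astronomically small.

from math import comb

p, n = 0.01, 12
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(7, n + 1))
print(tail)  # about 8e-12, before even requiring the wrong answers to match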
Re:How do they work? (0)
| more than 2 years ago | (#36808502)
For some quantum calculations they use the very simple technique of just running it a dozen times or something. There's basically a 99% chance you'll get a dozen matching correct answers, and a 1%
chance you'll get eleven matching correct answers and one random error that you throw away. There is a minuscule chance you'll get (and throw away) two or possibly three garbage results out of the
dozen, but the only way you would ever get a wrong answer from that is if the same wrong answer came up seven or more times at once, and mathematically that won't happen in a million or billion years.
Mostly correct, but one minor point: most of the calculations we want to use quantum computers for give answers that are easy to verify, and so you usually only need to run the program once. Factorization, for example (although Shor's algorithm is actually more complicated than this): the quantum algorithm gives you the factors, and then a classical computer can be used to check the result. If the result is wrong, you just rerun the program; but unless it is, you don't need to.
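That check-and-retry pattern, sketched in Python; noisy_factor_oracle here is a made-up stand-in for where the quantum step would go, not a real algorithm, and n is assumed composite.

import random

def noisy_factor_oracle(n, rng):
    """Hypothetical stand-in: returns a correct factor pair only some of
    the time, the way a noisy quantum subroutine might."""
    if rng.random() < 0.6:
        for d in range(2, n):
            if n % d == 0:
                return d, n // d
    return rng.randrange(2, n), rng.randrange(2, n)  # garbage answer

def factor_with_verification(n, seed=0):
    rng = random.Random(seed)
    while True:
        p, q = noisy_factor_oracle(n, rng)
        if p * q == n and 1 < p < n:  # cheap classical check
            return p, q               # wrong answers are simply retried

print(factor_with_verification(15))  # (3, 5)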
Re:How do they work? (1)
mgiuca (1040724) | more than 2 years ago | (#36867902)
Couldn't you also use a regular computer to verify it with 100% accuracy (for some classes of problems)?
For example, I can (as I understand it) use a quantum computer to find the two prime factors of a semiprime number. Say that the QC can give me the correct answer with, say, 60% accuracy. Now I just
need to take the answer and use a regular computer to multiply the two numbers and see if they give me the original number. If they don't, I ask the QC again until it gets it right. No need to
continuously run the computation to reduce the probability of error. Is there some reason this might not work?
Re:How do they work? (1)
Alsee (515537) | more than 2 years ago | (#36881128)
Yes, for most kinds of problems a normal computer can quickly verify whether a quantum result is correct. However I don't think a normal computer can verify certain kinds of "non-existence" or
"optimum" results. For example if you plug in a problem and the quantum computer says no solution exists, a normal computer can't confirm that. Or if you plug in a traveling salesman problem asking
for the shortest route and it gives you a short-route answer, a normal computer can obviously calculate the length of that route, but the normal computer generally can't verify that no shorter route exists.
Re:How do they work? (1)
mgiuca (1040724) | more than 2 years ago | (#36882308)
Thanks! Very informative. And ... Slashdot requires that I say more than just this. Hmm. Extremely informative?
Re:How do they work? (0)
| more than 2 years ago | (#36811396)
The quantum state doesn't always change upon observation. It depends on what physical property you measure and what physical property the state characterizes (in technical terms, "of which it is an eigenstate"). So it is really connected to the Heisenberg "uncertainty" principle: for example, if you know the position of a quantum system (i.e. its quantum state is a position eigenstate), then measurement of its momentum (mass times velocity in classical physics) becomes random. However, measurements of its position will give nonrandom, ultraprecise results in the absence of back-action (another story).
Note that quantum physics can be used to perform measurements that are much more sensitive and precise than classical measurements, so the inherent randomness of quantum mechanics (The Old One
playing at dice as Einstein put it) do not preclude extremely precise observations... and computations!
um... (1)
Charliemopps (1157495) | more than 2 years ago | (#36803248)
Someone clarify this for me: I thought that currently we could only entangle photons, and the photon entanglement could be explained by classical optics physics. So while it's "technically"
entanglement, it's not what we are really after. Do we need to entangle non-photon particles or will photons be good enough?
Re:um... (5, Informative)
jmizrahi (1409493) | more than 2 years ago | (#36804754)
Neither statement is true. First, we have entangled many systems other than photons. We have entangled trapped ions, neutral Rydberg atoms, superconducting qubits, nuclear spin states, and the list
goes on. There are advantages and disadvantages to each quantum computing architecture. One of the fundamental issues facing all quantum computing architectures is the question of scalability. It is
not always clear how to go from 1 or 2 qubits to thousands or millions of qubits. Some architectures, such as trapped ions, lend themselves naturally to scaling. The significance of this work is that
up to this point, it has been unclear how you might scale a photonic quantum computer. The authors of this paper have taken the first steps towards overcoming that obstacle. As to your second
statement, observed photon entanglement cannot be explained via classical optics. It has been shown to violate a Bell inequality, which is the hallmark of non-classicality in quantum mechanics.
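For the Bell-inequality point, a plain-math illustration (my addition, not from the comment): the singlet-state correlation E(a, b) = -cos(a - b) pushes the CHSH quantity to 2*sqrt(2), whereas any local classical model is bounded by 2.

from math import cos, pi, sqrt

def E(a, b):
    return -cos(a - b)  # quantum prediction for spin-singlet correlations

a1, a2, b1, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * sqrt(2))  # both ~2.828, beating the classical bound of 2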
imagine a beowulf makerbot.. (1)
decora (1710862) | more than 2 years ago | (#36803322)
of these things...
Price Pfister? (1)
nog_lorp (896553) | more than 2 years ago | (#36803350)
I hear Price Pfister is releasing a breakthrough new design in Commodes, called the Qmode!
am I the only one ? (1)
Chuby007 (1961870) | more than 2 years ago | (#36803444)
am I the only one that has no idea of what that post means ? don't lie !
Re:am I the only one ? (0)
| more than 2 years ago | (#36803496)
Yeah. No clue here.
Re:am I the only one ? (0)
| more than 2 years ago | (#36804062)
Beats the hell outa me what it means or how it would apply to me! Maybe it's like Google's
"Feeling Lucky" search, you take whatever comes along!
Fucking Qubits... (0)
| more than 2 years ago | (#36803766)
Fucking Qubits, how do they work?
Re:Fucking Qubits... (1)
maxwell demon (590494) | more than 2 years ago | (#36804150)
You don't want your qubits to fuck. When they fuck, they don't work.
But maybe that's why we have so much trouble with getting more qubits. If we let them fuck, maybe they'll multiply by themselves!
Level 60!?! (1)
Kamiza Ikioi (893310) | more than 2 years ago | (#36803948)
I never got past level 3 in Q-bert! First Pacman, then Donkey Kong, now Q-bert. This is getting serious.
I'll believe it when I see it. (0)
| more than 2 years ago | (#36805064)
Right now, quantum computers are much further from reality than human-level AI. The problem is decoherence, something that cannot be overcome unless we essentially either A) find an existing physical system in biology that overcomes it or B) rewrite the laws of physics to gain a better understanding of QM. Physicists are headed in exactly the wrong direction with their current superstring theories and M-theory. If they would just let the math guide them, the solution would be obvious. Instead, their mathematics just gets increasingly complicated. I'm not holding my breath for quantum computing.
Re:I'll believe it when I see it. (1)
Wandering Idiot (563842) | more than 2 years ago | (#36807926)
I don't suppose you'd care to elaborate on what your brilliantly simple Theory of Everything involves, would you? (Presumably not, since it's much easier to just imply you have one and act smug,
rather than proposing a theory and running the risk of actual criticism)
Re:I'll believe it when I see it. (1)
Maritz (1829006) | more than 2 years ago | (#36813600)
Physicists are headed in exactly the wrong direction with their current superstring theories and M-theory. If they would just let the math guide them, the solution would be obvious.
Please, elaborate. In detail. I wouldn't understand but it would be fun to try and ask someone who would understand to put it into thick bastard terms for me. You ought to drop a quick email to the
likes of Brian Greene, Ed Witten etc if you get a minute.
this is not a breakthrough... (1)
Gravis Zero (934156) | more than 2 years ago | (#36806136)
it's a quantum leap. :)
Hey... (1)
tsotha (720379) | more than 2 years ago | (#36806492)
How many of these "breakthroughs" are going to have to happen before I can actually buy something. It's like a breakthrough and not a breakthrough at the same time.
Got $10 million? Was: Re:Hey... (0)
| more than 2 years ago | (#36807018)
Re:Hey... (1)
Sulphur (1548251) | more than 2 years ago | (#36808886)
How many of these "breakthroughs" are going to have to happen before I can actually buy something. It's like a breakthrough and not a breakthrough at the same time.
As soon as they get 640k qbits. No one will ever need more than that.
Quantum mechanics is... (1)
jamiesan (715069) | more than 2 years ago | (#36807094)
Just the universe's way of waffling. Schrodinger: "Are you being wishy-washy?" Universe: "Well... yes and no."
Cue the Sam Beckett jokes (1)
PDX (412820) | more than 2 years ago | (#36808074)
If the WMAP cold spot is an alternate universe, then a tachyon beam might be able to break past the dimensional barriers that exist between universes. If the other universe has two cold spots, then a hub of data could be formed. Imagine the total output of every universe's data collections piped across dimensional barriers. The rate of data is limited by the phase data and the rotation of the beam. Multi-verse theory has proved correct. The downside is not knowing if anyone can survive in the other universe. The challenge is to detect FTL signals.
LOL cats (1)
jjbarrows (958997) | more than 2 years ago | (#36809604)
Should be funny, not dead and/or alive, long live classic internet!
Will I get a quantum camera... (1)
gestalt_n_pepper (991155) | more than 2 years ago | (#36809948)
to see what might have been?
Bad description of entanglement (1)
harryjohnston (1118069) | more than 2 years ago | (#36820094)
From TFA: "imagine that two people, each tossing a coin on their own and keeping a record of the results, compared this data after a few coin tosses and found that they always had identical outcomes,
even though each result, heads or tails, would still occur randomly from one toss to the next". That's badly wrong. (Although I'm sure the researcher understands quantum mechanics, it was probably
the PR guy who got it wrong!)
Entanglement really isn't all that mysterious; it just seems strange if you haven't gotten your head around non-commuting observables. Entangled particles are the quantum analogue of classical
correlations - so it isn't as if two people are tossing separate coins, which of course aren't correlated.
Instead, imagine choosing a playing card at random from a shuffled deck and (without looking) cutting it in half and putting the two halves in separate envelopes. Keep one envelope and send the other
to a friend living near Alpha Centauri. Open the envelopes at the same (pre-arranged) time. Gee whiz, you both simultaneously see two halves of the same card. Magic! (Well, maybe not.)
That's the classical playing card. A quantum playing card is weird: you can't see whether the card is black or red and whether it is odd or even at the same time. If you find out whether the card is
black or red the number on the card changes at random; if you find out whether it is odd or even the suit of the card changes at random. Just to really make things awkward, you can choose to make a
measurement that one third looks at the card's colour and two thirds looks at whether the card is odd or even (yes, I know that doesn't even make sense but that's the way it works). Then ... if you
cut a whole bunch of cards in half, do different measurements each time, and take care of a few loopholes, you find that the statistics you get prove that until you looked at each card (or half of
it) it didn't actually have a specific colour or a specific number, just a wavefunction describing the probabilities. This is called Bell's Inequality.
My advice: if you don't need to understand it, don't bother trying. The important point is that it's the quantum cards (non-commuting observables) that are weird, not the fact that you can cut them
in half (entanglement).
(Incidentally, if the card has been cut into two, and you look at the colour of each half, the numbers on the two halves change independently of one another. The entangled cards aren't mystically
bound together forever. Only the initial measurement is the same.)
{"url":"http://beta.slashdot.org/story/154948","timestamp":"2014-04-24T17:03:19Z","content_type":null,"content_length":"199205","record_id":"<urn:uuid:6b5fe5cf-7cfb-46e9-8e6c-844fe91a084a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
how to create smooth (3D) cameras for games
Does anyone know any nice articles/books/tutorials on how to create different types of cameras? (follow cameras, third person cameras, first person cameras, cinematic cameras, etc)
I’m trying to implement some third person follow camera but it looks rather jittery and crappy. Thanks :)
Take a look at the source of Linderdaum Engine
There is a GameCamera.h file where you will find the low-level camera implementation. In CameraPositioner.h there are some behaviors for what you are looking for (3rd person, trajectory, etc).
Well, I have a problem, I’d appreciate it if anyone could help. I’m not really a huge transformation guru, especially in 3D, especially when it concerns rotations.
I’ve been busy working on a 3D shooter game, and I’ve now managed to implement a decent 1st and 3rd person camera, but now the next problem: I have to update the movement of the player according to
the camera…
I realised that, if I could find the ‘2D’ angle between the front/look vector of the camera,
and (0,0,1) in a default coordinate system, I could have a predefined set of direction vectors.
Then I could rotate that vector with the angle between the camera’s look vector and the front vector, to get the result directions I want. But I’m not sure how to do this entirely.
Here’s some pseudo-code, since we tend to understand that better than words anyway:
vec3 direction(0.0f, 0.0f, 0.0f);
if (keydown(up)) direction += vec3(0.0f, 0.0f, 1.0f);
if (keydown(down)) direction += vec3(0.0f, 0.0f, -1.0f);
if (keydown(left)) direction += vec3(-1.0f, 0.0f, 0.0f);
if (keydown(right)) direction += vec3(1.0f, 0.0f, 0.0f);
direction.normalize(); // the predefined direction vector
vec3 lookW = camera->GetLook();
vec3 Front(0.0f, 0.0f, 1.0f);
vec3 Up(0.0f, 1.0f, 0.0f);
float angle = GetClockWiseAngleAlongAxis(/*axis*/ Up, /*vec 1*/ lookW, /*vec 2*/ Front);
direction.rotateAlongAxis(/*axis*/ Up, /*radians*/ angle);
This would give me the result I want. But I’m not sure
how to code ‘GetClockWiseAngleAlongAxis’
In this case, I intend for that function to return the angle between vector lookW and vector Front, but in ‘2D’ or in other words, negating the Y component here.
When I’m writing this edit, I realise that’s actually pretty easily done by just setting the Y component to 0. But still! I need to find the angle in a clockwise/anticlockwise direction: I cannot allow for it to find the ‘smallest’ angle, since then it might rotate left when it’s supposed to rotate right and vice versa.
Forgive me for my bad math terminology =/
Can anyone help?
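For reference, one common way to compute such a signed angle uses the cross product for the sign and the dot product for the magnitude. Here's a sketch of that idea (in Python for brevity; translate to your vec3 class as needed. The function name and the tuple representation are my own, and it assumes both vectors have already been flattened into the plane perpendicular to the axis, e.g. Y components zeroed, with a unit-length axis):

import math

def signed_angle_about_axis(a, b, axis):
    # Signed angle (radians) that rotates vector a onto vector b about axis.
    # The sign comes from projecting cross(a, b) onto the axis; the exact
    # sign convention depends on your handedness.
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    sign = cx * axis[0] + cy * axis[1] + cz * axis[2]
    return math.atan2(sign, dot)

# e.g. signed_angle_about_axis((1, 0, 0), (0, 0, 1), (0, 1, 0)) returns -pi/2
# under the usual right-handed convention, while the unsigned angle is pi/2.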
EDIT: I Found an answer to my question at : http://stackoverflow.com/questions/5188561/signed-angle-between-two-3d-vectors-with-same-origin-within-the-same-plane-reci | {"url":"http://devmaster.net/posts/20763/how-to-create-smooth-3d-cameras-for-games","timestamp":"2014-04-19T15:44:51Z","content_type":null,"content_length":"16391","record_id":"<urn:uuid:55fe9e7f-b7c4-473f-96df-8a7b01472971>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logic Puzzle Forums - View Single Post - Today's Challenge - Find the fault in the ladder
Here's the puzzle at the bottleneck. Don't look unless you've really worked it over. There are some complementary exclusion sets and 2x2 sets in there that took me a few iterations to develop. I
don't want to spoil your fun.
At this point only clues 7 and 8 are left.
Can you figure it out from here? I'll post the key to solving this in a later reply if anyone asks; but not until asked. | {"url":"http://www.logic-puzzles.org/forum/showpost.php?p=1165&postcount=2","timestamp":"2014-04-16T19:10:22Z","content_type":null,"content_length":"17143","record_id":"<urn:uuid:19f9546b-0c43-4d7f-81b2-e689dd13dd15>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
The maximum number of infected individuals in SIS epidemic models: Computational techniques and quasi-stationary distributions
Artalejo, Jesús R. and Economou, A. and López-Herrero, M.J. (2010) The maximum number of infected individuals in SIS epidemic models: Computational techniques and quasi-stationary distributions.
Journal of Computational and Applied Mathematics, 233 (10). pp. 2563-2574. ISSN 0377-0427
This is the latest version of this item.
Restricted to Repository staff only until 31 December 2020.
Official URL: http://www.sciencedirect.com/science/article/pii/S0377042709007341
We study the maximum number of infected individuals observed during an epidemic for a Susceptible–Infected–Susceptible (SIS) model which corresponds to a birth–death process with an absorbing state.
We develop computational schemes for the corresponding distributions in a transient regime and till absorption. Moreover, we study the distribution of the current number of infected individuals given
that the maximum number during the epidemic has not exceeded a given threshold. In this sense, some quasi-stationary distributions of a related process are also discussed.
Item Type: Article
Uncontrolled Keywords: Stochastic SIS epidemic model; Maximum number of infected individuals; Extinction time; Transient distribution; Quasi-stationary distribution; Absorption probabilities
Subjects: Medical sciences > Biology > Ecology
ID Code: 15803
Deposited On: 02 Jul 2012 11:19
Last Modified: 06 Feb 2014 10:31
{"url":"http://eprints.ucm.es/15803/","timestamp":"2014-04-19T02:52:31Z","content_type":null,"content_length":"39766","record_id":"<urn:uuid:f88c955d-9e5f-451b-a3ac-a799264b5a8f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample size
How do you choose a sample size in SPSS? What are the main factors that influence this size?
Actually, SPSS cannot directly calculate a required sample size, but there are some separate (expensive) products that can be used for this purpose. A number of very useful websites exist for power/sample size calculations that you can find with Google, and many of these have free calculators. Also, power/sample size calculations are design specific: you need to have an idea of how you will analyze the data, the size of the effect you want to find, the variation in the data, the level of significance and the population size as well. Sample sizes are needed in statistics because it is not always possible to conduct an analysis of a whole population. In that case, SPSS can help you to choose your sample in a snap. I describe the process below.
1. First, select the 'Data' menu and then click 'Select Cases'.
2. Check the 'Random sample of cases' button, then check the 'Filtered' button.
3. Click 'Sample' in the center of the dialog box, then check the 'Approximately' button.
4. Type a percentage into the box. For example, type '5%' if you want to sample 5 percent of the population.
5. Click 'Continue' and then click 'OK'.
This completes the selection of a random sample. Similarly, if you know exactly how many cases you want to select (the sample size), click the 'Exactly' button, type the exact number that you want, and in 'cases from the first' mention how many cases the sample should be chosen from. Then click 'Continue' and then click 'OK'.
To check that the system has picked a representative sample:
1. Run a descriptive output for the variable.
2. Click 'Analyze' and then click 'Frequencies'.
3. Click the variable in the left window of the dialog box, then click the arrow key in the middle to transfer the variable to the right-hand window.
4. Click 'Statistics', then place check marks in 'Mean', 'Std. Deviation' and 'S.E. mean'.
5. Click 'Continue' and then click 'OK'.
This will deliver a report that will tell you whether you have chosen a representative sample. If you haven't, run through the procedure again with a different percentage in your SPSS research.
When doing an SPSS research analysis, the following factors should be considered when deciding on how big your sample size should be:
1. Purpose of the study – if your SPSS research entails doing a segmentation of the market or building predictive models via SPSS’ multivariate techniques (e.g. factor analysis, regression analysis,
cluster analysis), you will need a bigger sample size. The acceptable ratio of the number of cases or respondents to be included in the study is 20 times the number of variables to be included in the
multivariate analysis.
2. The available population size or sampling frame also plays an important role in deciding on how big or small your sample size should be.
3. One should also consider the level of precision to be used. This is also called the margin of error. The normal margin of error for any study is at 10%. Lower the margin would require larger
sample size.
4. Aside from the margin of error, one should also consider the degree of variability to be assumed in the study. A normal study uses a variance of 0.5.
5. Time and cost constraints are also important factors to be considered when deciding the sample size for any SPSS research. | {"url":"http://www.spss-research.com/sample-size/","timestamp":"2014-04-20T03:23:13Z","content_type":null,"content_length":"106416","record_id":"<urn:uuid:334f3c9d-f374-4e9d-87a6-a4aaced2f603>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Given that AE || BD, solve for x. The diagram is not drawn to scale.
{"url":"http://openstudy.com/updates/514bd6e9e4b05e69bfacf700","timestamp":"2014-04-17T06:49:54Z","content_type":null,"content_length":"47708","record_id":"<urn:uuid:8032b5a2-9aa1-41c8-82bf-5d2840082dee>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Weird trig question
May 1st 2008, 11:13 AM #1
May 2008
Weird trig question
Hi, I'll be glad if anyone could help me with this one:
(A special trigonometry question)
A bus and a cab leave from point A.
The cab heads towards point C, the bus towards B.
They both get there at the same time.
I should mention the points A, B, C create a simple, non-even (scalene) triangle.
After that, they both turn and drive across the line BC and meet at point D.
The cab driver then continues to point B, and then back to A.
The bus driver "cuts" the triangle and goes straight back to A from point D.
The cab driver arrives at point A 43.75 minutes after the bus.
The bus's speed is 56.25% of the cab's.
You need to use trig in order to find the angle BAC.
I'd be very grateful if anyone helps me out! :-)
{"url":"http://mathhelpforum.com/trigonometry/36806-wierd-trig-question.html","timestamp":"2014-04-17T19:46:48Z","content_type":null,"content_length":"28792","record_id":"<urn:uuid:91d1f826-cb4c-4d67-a671-a70b72d381a2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors
San Diego, CA 92131
Lita: Award-Winning Math Tutor
Lita has thirteen years of experience as a private-school classroom teacher, with experience teaching Pre-Algebra, First and Second year Algebra, Geometry, Pre-Calculus (Advanced Mathematics or Math Analysis), Trigonometry, Calculus, AP Calculus AB, and SAT/ACT review...
Offering 10 subjects including calculus | {"url":"http://www.wyzant.com/geo_Carlsbad_CA_calculus_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-20T16:09:51Z","content_type":null,"content_length":"60435","record_id":"<urn:uuid:11fda549-dfc8-4579-8e09-cedd38062550>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
The zeros of the digamma function
I wonder what work has been done on the zeros of the digamma function and on the values of the gamma function at such points (on the negative real axis). Any help please :)
special-functions complex-analysis hypergeometric-functions reference-request
1 I've fixed the tags and added a link. I think you might like to consult the FAQs on how to ask a really good question. In particular, you might provide some background on where you are coming from
and what you know already. – David Roberts Aug 3 '12 at 0:41
1 Answer
See, for example, P. Sebah, X. Gourdon, Introduction to the Gamma Function, available here.
Topic 5.1.5, page 13, is about zeros of the digamma function. We can see that on the negative axis, the digamma function has a single zero between each pair of consecutive negative integers (the poles of the gamma function).
The authors present the first five zeros of the digamma function on the negative axis with 50 decimal places.
ADDED:
I have found this beautiful manuscript written by Hermite, with reference to gamma functions: Cours de M. Hermite, Librairie Scientifique A. Hermann, 1883.
Also, the complete Oeuvres de Charles Hermite is available here.
Another reference is NIST Digital Library of Mathematical Functions
In a research paper that I did I found something which is equivalent to what is shown on pg 13 of that article: $x_n \sim -n + 1/\log n$. And if we let $d_n$ denote the absolute value of the gamma function at those points, then I found that $n!\,d_n/\log n \to e$ as $n \to \infty$. Now my professor wants me to relate these results to the mathematical world by citing the similar results that others have found before me. I saw that the author of that article says that the first result was mentioned by Hermite, but do you by any chance know the specific article or book of Hermite that has that? Thanks!!! – Tri Ngo Aug 3 '12 at 14:47
@Tri Ngo: Please, see ADDED in the above answer. – Papiro Aug 3 '12 at 16:48
Thanks PaPiro, I wonder if they have ever translated his work into English. – Tri Ngo Aug 6 '12 at 14:18
{"url":"http://mathoverflow.net/questions/103824/the-zeros-of-the-digamma-function/103842","timestamp":"2014-04-19T04:56:11Z","content_type":null,"content_length":"57117","record_id":"<urn:uuid:1ecdfb25-f64e-46f2-bb75-ebef70f2e460>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Groups of change with totals from 1 to 100 cents using the least amount of coins.
An illustration of a Roman Numeral clock showing 6:40.
An illustration of a right triangle with sides, 40, 75, and x. Additional height of x is added to 75.…
Triangle with sides labeled with numerals 28, 36, 40 and angles A, B, C labeled. | {"url":"http://etc.usf.edu/clipart/keyword/40","timestamp":"2014-04-19T01:59:08Z","content_type":null,"content_length":"11419","record_id":"<urn:uuid:2d7fe8d2-013f-47f6-929d-930a0c083e70>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adding and Subtracting Rational Expressions
...with the same denominator
Adding or subtracting rational expressions with the same denominator is like adding or subtracting fractions with the same denominator. We add or subtract the numerators and keep the denominator the
same. This arrangement is fine with the denominator, who likes himself just the way he is. Good for him.
Sample Problems
Addition of rational expressions is relatively straightforward. We add the numerators and simplify by collecting like terms. If you ever have a friend over, though, don't ask if they want to look at
your term collection. Personal experience tells us that they probably won't be interested.
Sample Problem
We add the numerators and keep the denominator the same, which yields
Next, we simplify by collecting like terms in the numerator to find
That's it. So easy a caveman could do it. Great, now we'll be hearing from those guys in the Geico commercials.
With subtraction, though, there is one extra thing to be careful of: When subtracting, make sure to keep track of signs.
Sample Problem
Subtracting the numerators gives us
Notice that we're subtracting the whole chunk (x – 4), so we need to be careful with signs when we simplify. We don't want to blow this. Hm? Oh...blow chunks. Very funny.
The nice thing about adding or subtracting rational expressions with the same denominator is that we don't need to think about the denominator. We copy the denominator and worry about getting the
numerator right. If only we didn't need to worry about the numerator, either, math would be perfect.
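The sample expressions above were images on the original page, so here's an illustrative pair of our own (hypothetical, not the originals) showing both operations with a shared denominator:

2x/(x + 5) + (3x - 1)/(x + 5) = (2x + 3x - 1)/(x + 5) = (5x - 1)/(x + 5)

4x/(x - 2) - (x - 4)/(x - 2) = (4x - (x - 4))/(x - 2) = (3x + 4)/(x - 2)

Notice how the subtraction subtracts the whole chunk (x - 4), signs and all.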
...with different denominators
When asked to add or subtract rational expressions with different denominators, we first need to find the least common denominator (LCD) of the expressions. After turning each expression into an
equivalent expression where the denominator is the LCD, we can add or subtract as we did earlier. We like being able to do things we did earlier, because it means we need to learn less new stuff. Our
brains are getting full.
To find the LCD of a pair of fractions, we first factor the denominators. The LCD must contain every factor from each denominator. The number of times a factor appears in the LCD must be the same as
the largest number of times the factor appears in any one denominator. Don't try shorting the LCD. It will know, and it won't be happy about it. Plus, it knows a guy.
First, we'll do this with number-type fractions.
Sample Problem
Find the LCD of
To find the LCD, first we factor the denominators of the two fractions to find
The factors in the denominators are 5 and 6, so the LCD must have factors of 5 and 6. Since the factor 5 occurs twice in one of the denominators, the factor 5 must also occur twice in the LCD. The
LCD of the two fractions is
5 × 5 × 6 = 150.
To rewrite a fraction so that it has the LCD as a denominator, we multiply the fraction by a clever form of 1. The clever form of 1 uses the factors of the LCD that aren't in the denominator yet.
Clever, right? Like a fox.
To do this with rational expressions instead of rational numbers, we need to factor polynomials instead of numbers. It's a good idea to simplify each rational expression first to keep the LCD as
simple as possible. We don't want the LCD to be complicated, because then we'll have that Avril Lavigne song stuck in our head all day. Again.
Sample Problem
Find the LCD of the rational expressions
For each expression, write the equivalent expression where the denominator is the LCD.
First, factor the expressions to find
We can simplify the first expression, so the rational expressions we'll deal with going forward are
The LCD must contain every factor in either denominator. Since none of these factors occurs more than once, we don't need to do any fancy-shmancy finagling and the LCD is simply
(x^2 + 2)(x + 3)(x + 2).
There's no reason to do extra work, so we'll leave the LCD like that. Someone else can come along and clean it up if they'd like. To write a rational expression over a common denominator, we multiply
the expression by a clever form of 1. The clever form of 1 needs to use the factors that are in the LCD but not in the fraction's denominator yet. Gosh, so many rules. What is this, Soviet Russia?
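For instance (again illustrative, since the original expressions were images): to put a fraction with denominator (x + 3)(x + 2) over the LCD (x^2 + 2)(x + 3)(x + 2), multiply by the clever form of 1 built from the missing factor:

1/((x + 3)(x + 2)) × (x^2 + 2)/(x^2 + 2) = (x^2 + 2)/((x^2 + 2)(x + 3)(x + 2))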
Now that we know how to find LCDs, we can add and subtract rational expressions that have different denominators. First, we put the rational expressions over the same denominator. Then, we add or
subtract them according to what the problem tells us to do. If the problem tells us to jump off a cliff, we do that too, but only off a very short cliff. Gotta be smart about these things.
Sample Problem
First we find the LCD of the two expressions, which is (x + 1)(x + 2). The equivalent rational expressions are
Now that we have expressions with the same denominator, we can do the addition:
This expression simplifies to
which is our final answer. Unless your teacher asks you to multiply out the denominator, don't bother. If your teacher asks you to jump off a cliff, tell her the problem beat her to it, and you're
already on it. | {"url":"http://www.shmoop.com/polynomial-division-rational-expressions/adding-subtracting.html","timestamp":"2014-04-19T02:02:32Z","content_type":null,"content_length":"43767","record_id":"<urn:uuid:c447f1f8-f0d5-44fe-b1b1-9321c8b608fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
At an ocean port, the water has a max depth of 4 m above the mean level at 8 A.M., and the period is 12.4 h. Find the depth of the water at 10 A.M.
{"url":"http://openstudy.com/updates/4f0fac84e4b04f0f8a91d23f","timestamp":"2014-04-19T15:13:30Z","content_type":null,"content_length":"74971","record_id":"<urn:uuid:e967690d-8332-44a0-8cc6-7e1c7967ab8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Stats on Solving Times in the Daily Competitions - Updated!
This page shows some data for solving sudokus and jigsaw sudokus over a 160 day period (6th March to 18th August). All the time bands shown refer to these bands:
<=5, <=10, <=15, <=20, <=30, <=40, <=50, <=60, <=120, >120 minutes
We have ignored all "Don't Know" selections and incorrect answers.
There is a thread open in the forum for your comments and feedback: here.
Graph 1. This shows the total number of correct solutions in each time band for Sudokus. For the Daily Sudoku Competition there are more moderate puzzles per week than any other grade, which gives moderates the biggest total line.
Graph 2. This shows the total number of correct solutions in each time band for Jigsaw Sudokus. For the Daily Jigsaw Competition there are more gentle puzzles per week than any other grade, which gives gentles the biggest total line.
Graph 3 and 4. The Average Time to Solve is published on the solutions in the archive. These two graphs show these numbers coloured by grade. By this measure most gradings have been successful. Tough Sudokus show the most variation and bleeding into other bands. Something for us to watch.
Graph 5 and 6. This is the normalised graph for Sudoku time bands. Normalised means we've turned the numbers into percentages. From this we can smooth out the different quantities of grades to properly compare them. Gentle and diabolical puzzles are certainly shown to be different, but it looks like moderates and tough puzzles need to be more differentiated in the grading algorithm.
Graph 7 and 8. This is the normalised graphs for Jigsaws. Moderate and Tough appear to have similar time profiles.
There is a thread open in the forum for your comments and feedback: COMMENTS. | {"url":"http://www.sudoku.org.uk/solvetimes.htm","timestamp":"2014-04-17T13:40:13Z","content_type":null,"content_length":"10986","record_id":"<urn:uuid:3de965fb-e449-4de3-b6f3-2ab5f37721d4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Proving the existence of a limit?
Just because (a_n - a_{n+1}) -> 0 doesn't mean a_n is Cauchy. Again, a counterexample to this claim is a_n = log(n). We have log(n) - log(n+1) = log(n/(n+1)) -> log(1) = 0, but if this were Cauchy,
then it would be convergent (since the reals are complete), and it clearly isn't.
It turns out that being bounded isn't good enough either, although finding a counterexample was trickier. At any rate: try a_n = exp(i(1 + 1/2 + ... + 1/n)). This doesn't converge - it goes around
the unit circle in the complex plane. On the other hand,
[tex]a_n - a_{n+1} = \exp\left(i \left(1 + \frac{1}{2} + ... + \frac{1}{n}\right)\right)\left(1 - \exp\left(\frac{i}{n+1}\right)\right) \to 0.[/tex]
Okay, how about [tex]a_n-a_{n-1} \le k \left( \frac{1}{n}-\frac{1}{n-1} \right) [/tex] for all n greater than M | {"url":"http://www.physicsforums.com/showpost.php?p=1614196&postcount=12","timestamp":"2014-04-19T17:31:57Z","content_type":null,"content_length":"8308","record_id":"<urn:uuid:94d0c622-c92d-42e0-a17b-7c5e051af0ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
AB Testing And Statistical Significance
When you or your client wants to test a completely new element, in cases in which the result may effect sales or conversions, an AB test is usually the best approach. Unfortunately, AB testing needs
a lot of visitors to work properly and so often, we end up making decisions based on the results in the first few days. This is, of course, a bad approach as even though the new element might
initially perform better, you might eventually find that in the long run, the original delivers more conversions.
You may choose to test anything on a page ranging from single elements (like headings, text content, images, call to actions, offers, colour) to complete layout changes. The aim of these tests can be
• Trying to increase ROI by purposely optimizing a page to get more sales
• Trying to increase sign-ups/user interaction by adding new elements to the page
• Making sure you don’t negatively effect a page by introducing some brand changes
As you can see from the last one, the aim of a test isn't always to find a winner; sometimes it is good enough to make sure that B is not worse than A.
Statistical Significance
There is something called “Statistical Significance” which is basically the point at which your tool can accurately tell you whether one element is definitely better than the other. Also known as the
confidence level, the higher its value, the lower the chances are that the test result happened by chance. In other words, if you have a confidence level of 90%, then there is only a 10% chance that
the result is random, and not actually because one element is really better than another. To explain this, imagine this scenario:
You have two tables at home. You grab a coin and flip it a 100 times on the first table – 53 heads. You then do the same thing on the second table and get 48 heads. Does this mean that on the first
table you are more likely to get a head when flipping a coin? Of course not! It is completely random and will never be statistically significant, even if you had to flip the coins a million times per
Similarly, when doing an AB test, we need to keep in mind there is always some randomness to the test. This means that although one of the options might initially seem like it’s the best option, you
might later find out that the advantage was completely random.
Several big sites like Google, Amazon and Firefox are constantly using AB testing to launch big changes. The bigger the site, the faster the test will reach statistical significance.
How is it calculated?
First off, you will need quite a large sample for tests to become statistically valid. The most common confidence level used is 95%. When this value is reached, we can generally assume that the test is now statistically valid.
While there seem to be different ways to calculate it, the simplest way to calculate it (without considering the sample size) is as follows: The difference between the two results must be larger than
the square root of the sum of the two results. In other words, if A got 25 conversions, and B got 29, then the total is 54.
The difference between the two values is 4. The square root of 54 is… 7.34846923.
Since this value is higher than the difference (4), then this result is not statistically valid. In reality, this is not enough since you must also consider the size of the sample which can make an
enormous difference. There are many formulas that you can use and different companies have different levels at which they can be confident about a change they just tested.
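For concreteness, here's that rule of thumb as a tiny Python sketch (the function name is mine; as noted above, this quick check ignores sample size, so treat it only as a first filter):

from math import sqrt

def quick_significance(conversions_a, conversions_b):
    # The rule of thumb above: the difference between the two conversion
    # counts must exceed the square root of their sum.
    diff = abs(conversions_a - conversions_b)
    threshold = sqrt(conversions_a + conversions_b)
    return diff > threshold

# The example from the text: 25 vs. 29 conversions.
print(quick_significance(25, 29))  # False: 4 < 7.348..., not significant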
I won’t go into the details of the formulas myself, you can have a look at the wikiHow page which does a good job at explaining. You will definitely need the total visitors amount, and the total
conversions amount of each item you’re testing to measure statistical significance.
I suggest you play around with the following pages to learn more: | {"url":"http://www.webgeekly.com/lessons/website-testing/ab-testing-statistical-significance/","timestamp":"2014-04-18T21:01:31Z","content_type":null,"content_length":"39993","record_id":"<urn:uuid:ab3ee293-4176-468d-aa05-d10a1463dfba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many stone is 70 kg?
You asked:
How many stone is 70 kg?
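For reference (our own arithmetic, not part of the original page), using 1 stone = 14 lb = 6.35029 kg:
70 kg ÷ 6.35029 kg/stone ≈ 11.02 stone, i.e. about 11 stone 0.3 pounds.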
{"url":"http://www.evi.com/q/how_many_stone_is_70_kg","timestamp":"2014-04-19T20:05:21Z","content_type":null,"content_length":"53749","record_id":"<urn:uuid:b9f1c519-46c7-48bf-8ae9-dd58913af8ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordered Triple Proof
August 2nd 2008, 05:28 PM #1
Junior Member
Mar 2008
This is how I've defined an ordered triple: ((a,b),c) is the set { {{a,1},{b,2}} , {c,3} }. Based on the fact that I've already proved equality for ordered pairs (a,b) = (c,d) iff a=c, b=d, I'm
trying to prove the same for the triples. This is what I've got:
((a,b),c) = ((d,e),f) if
{ {{a,1},{b,2}} , {c,3} } = { {{d,1},{e,2}} , {f,3} }
These sets are equal if:
i) {{a,1},{b,2}} = {{d,1},{e,2}}, which is true iff a=d, b=e
and {c,3} = {f,3}, which is true iff c=f.
Thus, in this case, a=d, b=e, c=f.
ii) {{a,1},{b,2}} = {f,3} and {c,3} = {{d,1},{e,2}}
What do I do about this second case? Can I ignore it because it is impossible for them to be equal? I can't figure out how to deal with it. Is there a problem in my definition for an ordered
triple? I know my definition for an ordered pair is different from the usual {{a},{a,b}} but that is what my book is using. Any help would be appreciated!
August 2nd 2008, 05:56 PM #2
Global Moderator
Nov 2005
New York City
Say $(a,b,c) = (d,e,f)$.
By definition $((a,b),c) = ((d,e),f)$.
But you already proved this for pairs.
Thus $(a,b)=(d,e)$ and $c=f$.
Using pairs again, $a=d$ and $b=e$. | {"url":"http://mathhelpforum.com/discrete-math/45157-ordered-triple-proof.html","timestamp":"2014-04-16T08:12:04Z","content_type":null,"content_length":"33899","record_id":"<urn:uuid:3fcd100c-3e76-4c41-9815-01adad8303c6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Portola Valley Geometry Tutor
Find a Portola Valley Geometry Tutor
...This helps us get to know each other and establish a good relationship. I also greatly appreciate feedback so I can continually improve as a tutor. I have a somewhat busy schedule, though I
will be as flexible as possible with time.
27 Subjects: including geometry, chemistry, calculus, physics
...My specialty is in Microeconomics, but I am very familiar with all the major aspects of free-market economic theory, including Macroeconomics, Econometrics, Money & Banking and International
Economics. I have strong Financial background/experience: I am a Chartered Financial Analyst (Level I), I...
22 Subjects: including geometry, calculus, statistics, accounting
I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students.
11 Subjects: including geometry, calculus, statistics, Chinese
I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra,
trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including geometry, calculus, statistics, algebra 1
...I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical and technical skills.I have years of
experience in discrete math. I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject.
49 Subjects: including geometry, calculus, physics, statistics | {"url":"http://www.purplemath.com/Portola_Valley_Geometry_tutors.php","timestamp":"2014-04-18T15:53:23Z","content_type":null,"content_length":"24130","record_id":"<urn:uuid:2527ef75-2a84-45ef-a806-173d759ba601>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Huffman Code Discussion and Implementation
by Michael Dipperstein
This is one of those pages documenting an effort that never seems to end. I thought it would end, but I keep coming up with things to try. This effort grew from a little curiosity. One day, my copy
of "Numerical Recipes In C" fell open to the section on Huffman Coding. The algorithm looked fairly simple, but the source code that followed looked pretty complicated and relied on the vector
library used throughout the book.
The complexity of the source in the book caused me to search the web for clearer source. Unfortunately, all I found was source further obfuscated by either C++ or Java language structures. Instead of
searching any further, I decided to write my own implementation using what I hope is easy to follow ANSI C.
I thought that I could put everything to rest after implementing the basic Huffman algorithm. I thought wrong. Mark Nelson of DataCompression.info had mentioned that there are canonical Huffman codes
which require less information to be stored in encoded files so that they may be decoded later. Now I have an easy to follow (I hope) ANSI C implementation of encoding and decoding using canonical
Huffman codes.
As time passes, I've been tempted to make other enhancements to my implementation, and I've created different versions of code. Depending on what you're looking for, one version might suit you better
than another.
Click here for information on the different versions of my code, as well as instructions for downloading and building my source code.
The rest of this page discusses the results of my effort.
Algorithm Overview
Huffman coding is a statistical technique which attempts to reduce the amount of bits required to represent a string of symbols. The algorithm accomplishes its goals by allowing symbols to vary in
length. Shorter codes are assigned to the most frequently used symbols, and longer codes to the symbols which appear less frequently in the string (that's where the statistical part comes in).
Arithmetic coding is another statistical coding technique.
Building a Huffman Tree
The Huffman code for an alphabet (set of symbols) may be generated by constructing a binary tree with nodes containing the symbols to be encoded and their probabilities of occurrence. The tree may be
constructed as follows:
Step 1. Create a parentless node for each symbol. Each node should include the symbol and its probability.
Step 2. Select the two parentless nodes with the lowest probabilities.
Step 3. Create a new node which is the parent of the two lowest probability nodes.
Step 4. Assign the new node a probability equal to the sum of its children's probabilities.
Step 5. Repeat from Step 2 until there is only one parentless node left.
The code for each symbol may be obtained by tracing a path to the symbol from the root of the tree. A 1 is assigned for a branch in one direction and a 0 is assigned for a branch in the other
direction. For example a symbol which is reached by branching right twice, then left once may be represented by the pattern '110'. The figure below depicts codes for nodes of a sample tree.
              root
             /    \
           (0)    (1)
                  /  \
               (10)  (11)
                     /   \
                 (110)   (111)
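The implementation discussed on this page is ANSI C; purely as an illustration of Steps 1 through 5 above, here is a compact Python sketch of our own (not the author's code):

import heapq
from collections import Counter

def build_huffman_tree(data):
    # Steps 1-5: repeatedly merge the two parentless nodes with the lowest
    # counts.  Leaves are symbols; internal nodes are (left, right) tuples.
    heap = [(n, i, sym) for i, (sym, n) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # tie-breaker so heapq never compares the nodes
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)   # lowest probability
        n2, _, right = heapq.heappop(heap)  # second lowest
        heapq.heappush(heap, (n1 + n2, next_id, (left, right)))
        next_id += 1
    return heap[0][2]

def assign_codes(node, prefix=""):
    # Trace a path to each leaf: one branch appends '0', the other '1'.
    if not isinstance(node, tuple):  # leaf
        return {node: prefix or "0"}
    codes = assign_codes(node[0], prefix + "0")
    codes.update(assign_codes(node[1], prefix + "1"))
    return codes

print(assign_codes(build_huffman_tree("abracadabra")))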
Once a Huffman tree is built, Canonical Huffman codes, which require less information to rebuild, may be generated by the following steps:
Step 1. Remember the lengths of the codes resulting from a Huffman tree generated per above.
Step 2. Sort the symbols to be encoded by the lengths of their codes (use symbol value to break ties).
Step 3. Initialize the current code to all zeros and assign code values to symbols from longest to shortest code as follows:
1. If the current code length is greater than the length of the code for the current symbol, right shift off the extra bits.
2. Assign the code to the current symbol.
3. Increment the code value.
4. Get the symbol with the next longest code.
5. Repeat from A until all symbols are assigned codes.
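Again purely illustrative Python (our sketch of the steps above, not the article's C library):

def canonical_codes(code_lengths):
    # Step 2: sort by code length, using the symbol value to break ties.
    ordered = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
    codes = {}
    code = 0
    current_length = code_lengths[ordered[-1]]  # start with the longest code
    for sym in reversed(ordered):
        # Step 3.1: right shift off the extra bits when the length shrinks.
        code >>= current_length - code_lengths[sym]
        current_length = code_lengths[sym]
        # Steps 3.2 and 3.3: assign the code, then increment it.
        codes[sym] = format(code, "0{}b".format(current_length))
        code += 1
    return codes

# Code lengths from a tree like the 6-symbol example later on this page:
print(canonical_codes({"F": 1, "E": 2, "D": 3, "C": 4, "A": 5, "B": 5}))
# {'B': '00000', 'A': '00001', 'C': '0001', 'D': '001', 'E': '01', 'F': '1'}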
Encoding Data
Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with it's code.
Decoding Data
If you know the Huffman code for some encoded data, decoding may be accomplished by reading the encoded data one bit at a time. Once the bits read match the code for a symbol, write out the symbol and
start collecting bits again. See Decoding Encode Files for details.
A copy of the section from "Numerical Recipes In C" which started this whole effort may be found at http://lib-www.lanl.gov/numerical/bookcpdf/c20-4.pdf.
A copy of one David Huffman's original publications about his algorithm may be found at http://compression.graphicon.ru/download/articles/huff/huffman_1952_minimum-redundancy-codes.pdf .
A discussion on Huffman codes including canonical Huffman codes may be found at http://www.compressconsult.com/huffman/.
Further discussion of Huffman Coding with links to other documentation and libraries may be found at http://datacompression.info/Huffman.shtml.
Implementation Issues
What is a Symbol
One of the first questions that needs to be resolved before you start is "What is a symbol?". For my implementation a symbol is any 8-bit combination as well as an End Of File (EOF) marker. This
means that there are 257 possible symbols in any code.
Handling End-of-File (EOF)
The EOF is of particular importance, because it is likely that an encoded file will not have a number of bits that is a integral multiple of 8. Most file systems require that files be stored in
bytes, so it's likely that encoded files will have spare bits. If you don't know where the EOF is, the spare bits may be decoded as an extra symbol.
At the time I sat down to implement Huffman's algorithm, there were two ways that I could think of for handling the EOF. It could either be encoded as a symbol, or ignored. Later I learned about the
"bijective" method for handling the EOF. For information on the "bijective" method refer to SCOTT's "one to one" compression discussion.
Ignoring the EOF requires that a count of the number of symbols encoded be maintained so that decoding can stop after all real symbols have been decoded and any spare bits can be ignored.
Encoding the EOF has the advantage of not requiring a count of the number of symbols encoded in a file. When I originally started out I thought that a 257th symbol would allow for the possibility of
a 17 bit code. And I didn't want to have to deal with 17 bit values in C. As it turns out a 257th symbol will create the possibility of a 256 bit code and I ended up writing a library that could
handle 256 bit codes anyway. (See Code Generation.)
Consequently, I have two different implementations, a 0.1 version that contains a count of the number of symbols to be decoded, and a versions 0.2 and later that encode the EOF.
Code Generation
The source code that I have provided generates a unique Huffman tree based on the number of occurrences of symbols within the file to be encoded. The result is a Huffman code that yields an optimal
compression ratio for the file to be encoded. The algorithm to generate a Huffman tree and the extra steps required to build a canonical Huffman code are outlined above.
Using character counts to generate a tree means that a character may not occur more often than it can be counted. The counters used in my implementation are of the type unsigned int, therefore a
character may not occur more than UINT_MAX times. My implementation checks for this and issues an error. If larger counts are required the program is easily modifiable.
Code Length
In general, a Huffman code for an N symbol alphabet may yield symbols with a maximum code length of N - 1. Following the rules outlined above, it can be shown that if, at every step that combines the two parentless nodes with the lowest probability, only one of the combined nodes already has children, an N symbol alphabet (for even N) will have two codes of length N - 1.
Example: Given a 6 symbol alphabet with the following symbol probabilities: A = 1, B = 2, C = 4, D = 8, E = 16, F = 32
Step 1. Combine A and B into AB with a probability of 3.
Step 2. Combine AB and C into ABC with a probability of 7.
Step 3. Combine ABC and D into ABCD with a probability of 15.
Step 4. Combine ABCD and E into ABCDE with a probability of 31.
Step 5. Combine ABCDE and F into ABCDEF with a probability of 63.
The following tree results:
              ABCDEF
              /    \
           (0)F    ABCDE
                   /    \
               (10)E    ABCD
                        /    \
                   (110)D    ABC
                             /    \
                       (1110)C    AB
                                  /    \
                           (11110)A    B(11111)
In order to handle a 256 character alphabet, which may require code lengths of up to 255 bits, I created libraries that perform standard bit operations on arrays of unsigned characters. Versions
prior to 0.3 use a library designed specifically for 256 bit arrays. Later versions use a library designed for arbitrary length arrays. Though I haven't used my libraries outside of this application,
they are written in the same portable ANSI C as the rest of my Huffman code library.
Writing Encoded Files
I chose to write my encoded files in two parts. The first part contains information used to reconstruct the Huffman code (a header) and the second part contains the encoded data.
In order to decode files, the decoding algorithm must know what code was used to encode the data. Being unable to come up with a clean way to store the tree itself, I chose to store information about
the encoded symbols.
To reconstruct a traditional Huffman code, I chose to store a list of all the symbols and their counts. By using the symbol counts and the same tree generation algorithm that the encoding algorithm uses, a tree matching the encoding tree may be constructed.
To save some space, I only stored the non-zero symbol counts, and the end of count data is indicated by an entry for a character zero with a count of zero. The EOF count is not stored in my
implementations that encode the EOF, both the encoder and decoder assume that there is only one EOF.
Canonical Huffman codes usually take less information to reconstruct than traditional Huffman codes. To reconstruct a canonical Huffman code, you only need to know the length of the code for each
symbol and the rules used to generate the code. The header generated by my canonical Huffman algorithm consists of the code length for each symbol. If the EOF is not encoded the total number of
encoded symbols is also included in the header.
Encoded Data
The encoding of the original data immediately follows the header. One natural by-product of a canonical Huffman code is a table containing symbols and their codes. This table allows for fast lookup of
codes. If symbol codes are stored in tree form, the tree must be searched for each symbol to be encoded. Instead of searching the leaves of the Huffman tree each time a symbol is to be encoded, my
traditional Huffman implementation builds a table of codes for each symbol. The table is built by performing a depth first traversal of the Huffman tree and storing the codes for the leaves as they
are reached.
With a table of codes, writing encoded data is simple. Read a symbol to be encoded, and write the code for that symbol. Since symbols may not be integral bytes in length, care needs to be taken when
writing each symbol. Bits need to be aggregated into bytes. In my 0.1 version of code, all the aggregation is done in-line. My versions 0.2 and later use a library to handle writing any number of
bits to a file.
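As a one-line illustration of table-driven encoding (Python again, with the bits kept as a string for clarity rather than packed into bytes):

def encode(data, codes):
    # Look each symbol up in the code table and concatenate the bit strings.
    return "".join(codes[sym] for sym in data)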
Decoding Encode Files
Like encoding a file, decoding a file is a two step process. First the header data is read in, and the Huffman code for each symbol is reconstructed. Then the encoded data is read and decoded.
I have read that the fastest method for decoding symbols is to read the encoded file one bit at time and traverse the Huffman tree until a leaf containing a symbol is reached. However I have also
read that it is faster to store the codes for each symbol in an array sorted by code length and search for a match every time a bit is read in. I have yet to see a proof for either side.
I do know that the tree method is faster for the worst case encoding where all symbols are 8 bits long. In this case the 8-bit code will lead to a symbol 8 levels down the tree, but a binary search
on 256 symbols is O(log2(256)), about 8 comparisons per search, and with the array method a search may be repeated each time a bit is read in.
Since conventional Huffman encoding naturally leads to the construction of a tree for decoding, I chose the tree method here. The encoded file is read one bit at a time, and the tree is traversed
according to each of the bits. When a bit causes a leaf of the tree to be reached, the symbol contained in that leaf is written to the decoded file, and traversal starts again from the root of the
Canonical Huffman encoding naturally leads to the construction of an array of symbols sorted by the size of their code. Consequently, I chose the array method for decoding files encoded with a
canonical Huffman code. The encoded file is read one bit at time, with each bit accumulating in a string of undecoded bits. Then all the codes of a length matching the string length are compared to
the string. If a match is found, the string is decoded as the matching symbol and the bit string is cleared. The process repeats itself until all symbols have been decoded.
Portability Issues
All the source code that I have provided is written in strict ANSI-C. I would expect it to build correctly on any machine with an ANSI-C compiler. I have tested the code compiled with gcc on Linux
and mingw on Windows 98 and 2000.
The code makes no assumptions about the size of types or byte order (endian), so it should be able to run on all platforms. However type size and byte order issues will prevent files that are encoded
on one platform from being decoded on another platform. The code also assumes that an array of unsigned char will be allocated in a contiguous block of memory.
Actual Software
I am releasing my implementations of Huffman's algorithms under the LGPL. If you've actually read this page to get all the way down here, you already know that I have different implementations. In
general earlier versions are simpler (maybe easier to follow) and later versions are easier to use as libraries and better suited for projects taking advantage of the LGPL. In some cases later
version also fix minor bugs.
Each version is contained in its own zipped archive which includes the source files and brief instructions for building an executable. None of the archives contain executable programs. A copy of the
archives may be obtained by clicking on the links below.
Version Comment
Version 0.81: Incorporates Emanuele Giaquinta's patch to eliminate a redundant check during the canonical decode process.
Version 0.8: Replaces getopt with my optlist library. Explicitly licenses the library under LGPL version 3.
Version 0.7: Uses the latest bit file library. This may fix memory access errors with non-GNU compilers.
Version 0.6: Functions and data structures common to canonical and standard Huffman encoding routines are declared once and shared, rather than declared twice as static. Makefile builds the code as a
library to ease compliance with the LGPL.
Version 0.5: Avoids name space conflicts between huffman.c and chuffman.c by renaming functions required by routines outside of the library and declaring local functions as static. Sample code
demonstrates both canonical and standard Huffman codes without the need to recompile. Uses the latest bit file library for files.
Version 0.4: Makes huffman.c and chuffman.c more library-like by removing main and adding a header file with prototypes for the encode/decode functions.
Version 0.3: Uses a generic bit array library for handling symbol codes.
Version 0.2: Uses an encoded EOF to determine the end of encoded data. Handles bitwise file reads and writes with calls to my bit stream file library.
Version 0.1: Uses a symbol count to determine the end of encoded data. Handles bitwise file reads and writes with in-line code. Adds sample usage of the encode/decode functions.
If you have any further questions or comments, you may contact me by e-mail. My e-mail address is: mdipper@alumni.engr.ucsb.edu
For more information on the Huffman coding algorithm and other compression algorithms, visit DataCompression.info.
Parallel planes and intersecting planes
June 8th 2010, 07:44 AM
Parallel planes and intersecting planes
Hey! I am having trouble with a question, and I am hoping one of you can help me with it!
For what values of k will the planes 2x - 6y + 4z + 3 = 0 and 3x - 9y + 6z + k = 0
i) not intersect?
ii) intersect in a line?
iii) intersect in a plane?
i) for them to not intersect the planes must be parallel. If I multiply plane (1) by 1.5 I get the values in plane (2). Therefore if I set the k value to something that's not that scalar multiple of
the constant term in (1), the planes will be parallel and distinct. So:
3x - 9y + 6z + 4 = 0
ii) I believe these two planes to be parallel, so by this logic these two planes can never meet. Therefore no k value will make these planes intersect in a line.
*Unsure about this
iii) I don't understand what the question means by "intersect in a plane". If it means the two planes are exactly the same, then I would set the k value to the scalar multiple of the constant term in
the first equation. So:
1.5x3 = 4.5
3x - 9y + 6z + 4.5 = 0
I have no idea if I am correct in my thinking here or not. I hope someone here can help me in these solutions!
June 8th 2010, 08:19 AM
i) The scalar doesn't matter; the two planes are already parallel, so it is just their level, and that is at $k \neq 3$.
ii) undefined
iii) They intersect in a plane at $k = 3$; by logic, a plane cannot be skewed unless it is bounded, and these are unbounded, so the only alternative is that $k = 3$, where their cross product $= 0$.
June 8th 2010, 08:25 AM
Shouldn't it be $k \neq \frac{9}{2}$?
And likewise for part (iii), $k=\frac{9}{2}$
Also, for part (ii), I would say no solutions (like the OP) rather than undefined. (For a little more info, I agree with quasi's assessment on this page.)
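A quick check of that value: multiplying the first plane's equation by $\frac{3}{2}$ gives
$$ \tfrac{3}{2}\,(2x - 6y + 4z + 3) = 3x - 9y + 6z + \tfrac{9}{2} = 0, $$
so the two planes coincide (intersect in a plane) exactly when $k = \tfrac{9}{2}$, never intersect when $k \neq \tfrac{9}{2}$, and intersect in a line for no value of $k$.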
Capitol Heights SAT Math Tutor
Find a Capitol Heights SAT Math Tutor
...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about helping students succeed in math because I believe a
true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics.
22 Subjects: including SAT math, calculus, geometry, GRE
...For more than 20 years, I have helped others market their skills appropriately, develop new skills, and take the steps to land the job of their dreams. I am the first in my coal mining family
to attend college, and was admitted to Harvard, Early Admission, Navy ROTC. In addition to the ROTC Sch...
82 Subjects: including SAT math, English, Spanish, GRE
Hello students and parents! I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry,
Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Che...
11 Subjects: including SAT math, chemistry, calculus, French
...As a cum laude graduate of The George Washington University School of Public Health and one of the tutors at the Enrichment Centers Inc., I have had the opportunity of working with many
students and help them reach their uttermost potential and see their grades improve remarkably. My key subject...
28 Subjects: including SAT math, reading, physics, statistics
...I always start with how a writer organizes and carries out research and focuses his/her theme. I move on to how to outline the writing project (or assignment). I then teach basic paragraphing -
the structure first, then the diction (style), and then the transition. I will work with the studen...
22 Subjects: including SAT math, reading, English, writing
Physics Forums - View Single Post - Finding a min gate input count realization of enabling logic...?
You do not fully state what your problem is. Can it be assumed that there are seven input variable lines (the six "S" lines, and the enable M)? If so, then there are 2^7, or 128, possible input line
combinations. That makes for a long truth table, but it isn't difficult to work with. There are two basic approaches: 1) you can put everything into a single 128-combination
truth table, or 2) for each desired output line, you can eliminate all inputs that don't pertain to it, and then make four truth tables, one for each of those desired outputs.
First, though, you need to specify clearly what the inputs mean, and from this what input combination drives each of the outputs.
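A quick sketch of approach 1), assuming the seven inputs are the six S lines plus the enable M, and using a made-up output function just to show the mechanics of enumerating all 128 rows:

#include <stdio.h>

int main(void)
{
    /* Bit 6 = enable M, bits 5..0 = S6..S1 (this bit assignment is an assumption). */
    for (int v = 0; v < 128; v++)
    {
        int m = (v >> 6) & 1;
        int s[7];                           /* s[1]..s[6] used */
        for (int i = 1; i <= 6; i++)
            s[i] = (v >> (i - 1)) & 1;

        /* Placeholder logic: output asserted when enabled and S1, S2 agree.
           Substitute the real enabling logic once the inputs are specified. */
        int out = m & !(s[1] ^ s[2]);

        printf("M=%d S6..S1=%d%d%d%d%d%d -> out=%d\n",
               m, s[6], s[5], s[4], s[3], s[2], s[1], out);
    }
    return 0;
}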
East Norriton, PA Calculus Tutor
Find an East Norriton, PA Calculus Tutor
...I am especially personable and I know I have the ability to inspire students to have success beyond their expectations especially with the creative method I use for teaching. I believe that
the best approach for teaching is to help students conceptualize some seemingly abstract topics in these s...
16 Subjects: including calculus, Spanish, physics, algebra 1
...Fail to learn it right and you will be in big trouble the rest of the way. I teach it right. Algebra 2 is a lot harder than it used to be.
23 Subjects: including calculus, English, geometry, statistics
...Solve probability and statistics problems I successfully obtained B.S. in Business Administration. Related classes include: 1. Principles of Microeconomics 2.
27 Subjects: including calculus, geometry, statistics, algebra 1
...I am especially well-versed in helping people to make the transition from Windows to Mac, as I assisted my parents in the process. My services would be best suited to helping customize a Mac
in order to best serve the needs of the user. I used MatLAB during my junior and senior year of college to help model civil engineering problems.
21 Subjects: including calculus, reading, physics, geometry
...Since 2007 I have taught SAT math prep seminars for Temple University, and have developed several techniques that help students answer questions quickly and with reduced mental effort. My methods
help students gain confidence in their ability to take the SAT, and improve their scores. I have a Bac...
22 Subjects: including calculus, writing, geometry, statistics
Related East Norriton, PA Tutors
East Norriton, PA Accounting Tutors
East Norriton, PA ACT Tutors
East Norriton, PA Algebra Tutors
East Norriton, PA Algebra 2 Tutors
East Norriton, PA Calculus Tutors
East Norriton, PA Geometry Tutors
East Norriton, PA Math Tutors
East Norriton, PA Prealgebra Tutors
East Norriton, PA Precalculus Tutors
East Norriton, PA SAT Tutors
East Norriton, PA SAT Math Tutors
East Norriton, PA Science Tutors
East Norriton, PA Statistics Tutors
East Norriton, PA Trigonometry Tutors
Nearby Cities With calculus Tutor
Center Square, PA calculus Tutors
Eagleville, PA calculus Tutors
Jeffersonville, PA calculus Tutors
Lawncrest, PA calculus Tutors
Limerick, PA calculus Tutors
Lower Gwynedd, PA calculus Tutors
Lower Merion, PA calculus Tutors
Norristown, PA calculus Tutors
Plymouth Valley, PA calculus Tutors
Radnor, PA calculus Tutors
Tredyffrin, PA calculus Tutors
Trooper, PA calculus Tutors
Upper Chichester, PA calculus Tutors
Upper Gwynedd, PA calculus Tutors
West Bradford, PA calculus Tutors
Math Forum Discussions
Topic: Who has to take algebra 1 exam
Replies: 6 Last Post: Aug 30, 2013 7:14 PM
Re: Who has to take algebra 1 exam
Posted: Aug 29, 2013 4:25 PM
Date Subject Author
8/29/13 Who has to take algebra 1 exam Holly Brightman
8/29/13 Re: Who has to take algebra 1 exam Gen Lagattuta
8/29/13 Re: Who has to take algebra 1 exam Kate Martin-Bridge
8/29/13 Re: Who has to take algebra 1 exam Evelyne Stalzer
8/30/13 Re: Who has to take algebra 1 exam Cheryl Stockwell
8/30/13 Re: Who has to take algebra 1 exam Ryley David
8/29/13 Re: Who has to take algebra 1 exam michelle Van Etten
Here's the question you clicked on:
Name the following organic molecule
Analysis and Design of a Novel Compact Multiband Printed Monopole Antenna
International Journal of Antennas and Propagation
Volume 2013 (2013), Article ID 694819, 8 pages
Application Article
Analysis and Design of a Novel Compact Multiband Printed Monopole Antenna
EMC Lab, School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
Received 7 December 2012; Revised 2 May 2013; Accepted 29 May 2013
Academic Editor: Renato Cicchetti
Copyright © 2013 Junjun Wang and Xudong He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
A compact multiband printed monopole antenna is presented. The proposed antenna, composed of a modified broadband T-shaped monopole antenna integrating some band-notch structures in the metallic
patch, is excited by means of a microstrip line. To calculate the bandwidth starting frequency (BSF) of the T-shaped broadband antenna, an improved formula is proposed and discussed. The multiband
operation is achieved by etching three inverted U-shaped slots on the radiant patch. By changing the length of the notch slots, operation bands of the multiband antenna can be adjusted conveniently.
The antenna is simulated in Ansoft HFSS and then fabricated and measured. The measurement results show that the proposed antenna operates at 2.25–2.7 GHz, 3.25–3.6 GHz, 4.95–6.2 GHz, and 7–8 GHz,
covering the operation bands of Bluetooth, WiMAX, WLAN, and the downlink of the X-band satellite communication system, thus making it a proper candidate for multiband devices.
1. Introduction
With the rapid development of the communication technology, there is a great demand for antennas suitable to operate with dual- or multibands characteristics in wireless communication devices, such
as mobile phones and laptops. Printed antennas have been paid great attention in recent years because of their compact size, low profile, light weight, and low cost. A great quantities of printed
antennas for dual- or multibands operations have been reported in the literature [1–12]. In [1–7], the authors have presented several kinds of printed monopole or dipole antennas for dual-band
operation. In [8–12], slot antennas have been utilized. Antennas mentioned above achieve dual- or multibands operation; however, they usually have complicated structures or narrow bandwidth, and
their operation bands cannot be adjusted easily either.
Recently, a lot of wideband antennas have been proposed because of the wide operation band and high date rate [13–15]. To avoid the interference between UWB (ultrawideband) antennas and narrow
bandwidth communication systems, antenna designers have proposed several UWB antennas with band-notch characteristics [16–19]. The above-mentioned design solutions provide us a different way to
achieve the multiband operation.
In this paper, we present a novel compact multiband monopole antenna based on broadband antenna theory and employing band-notch techniques. A T-shaped monopole antenna is designed to achieve a
broad impedance bandwidth. Then, using band-notch techniques, three inverted U-shaped slots are etched on the metallic patch to reject the undesired bands; in
this way the multiband operation is achieved. The operation bands of the proposed antenna can be adjusted conveniently by changing the length of each band-notch slot.
The organization of the paper is as follows. In Section 2, the configuration of the proposed antenna is introduced. An improved formula for computing the bandwidth starting frequency (BSF) with
higher accuracy is proposed and discussed. Then the broadband characteristic of the T-shaped monopole antenna is analyzed. Finally, the frequency behaviour of the band-notch structures consisting of
three inverted U-shaped slots etched on the metallic patch is investigated. Results of the proposed antenna (return loss, normalized radiation pattern, and peak gain) are presented and discussed in
Section 3, while some conclusions are drawn in Section 4.
2. Antenna Analysis and Design
Figure 1 shows the geometry of the proposed multiband antenna. The proposed antenna is printed on a low-cost FR4 substrate with relative permittivity of 4.4, dielectric loss tangent of 0.02, and
thickness of 1mm. A T-shaped patch is printed on one side of the substrate and a truncated ground plane on the other. The T-shaped patch is realized by removing two symmetric metal notches at the
bottom of a rectangular patch in order to improve significantly the impedance matching of the monopole antenna at high frequency. Three inverted U-shaped slots with different sizes are etched on the
T-shaped patch to reject the undesired frequency bands of the proposed multiband antenna. The commercial software Ansoft HFSS has been adopted for the analysis and design of the proposed antenna. The
optimized geometrical parameters describing the antenna are reported in Table 1.
An improved formula useful to calculate the BSF in terms of the antenna parameters is proposed and discussed firstly. Then the effects of the antenna parameters are investigated. Finally, the
inverted U-shaped band-notch structures are studied.
2.1. Improved Formula for BSF of the Broadband Antenna
For a broadband antenna, the BSF and bandwidth are two important factors to evaluate its frequency performance. An accurate formula to calculate BSF of a broadband antenna is quite necessary for
antenna designers to save simulation time and accelerate the design process. In this paper, an improved formula is presented to provide a much more accurate prediction of BSF of the T-shaped
Kumar and Ray have proposed a formula to calculate the BSF of a planar monopole [19]. Thomas and Sreenivasan improved the formula by considering the effect of the substrate [20]. Equation (1) is the formula proposed by Thomas and Sreenivasan, in which $L$ denotes the length of the monopole (both the planar monopole and the equivalent cylinder monopole), $g$ denotes the gap between the ground plane and the monopole, and $r$ denotes the radius of the equivalent cylinder monopole. The equivalent radius is expressed as in (2), where $A$ denotes the area of the radiant patch and $2\pi r L$ the area of the side face of the equivalent cylinder monopole; $\varepsilon_{\text{eff}}$ is the effective dielectric constant of the air-substrate composite dielectric and can be calculated by $\varepsilon_{\text{eff}} = (\varepsilon_r + 1)/2$, where $\varepsilon_r$ denotes the relative dielectric constant of the substrate. The parameters $L$, $g$, and $r$ appearing in (1) and (2) are expressed in millimeters.
However, (1) is not accurate enough to calculate the BSF of the T-shaped monopole antenna because the parameter $g$ does not take into account the effect of the two bevel cuts on the feeding gap. Therefore, we propose to replace it by an effective parameter $g_{\text{eff}}$, defined in (3), where $W_1$ denotes the width of the higher edge of the radiant patch, and $g$, $L$, and $r$ have the same meanings as in (1); substituting the dimensions used in this paper, (3) reduces to (4). The modified formula to calculate the BSF of the T-shaped monopole is (5). After performing some numerical simulations, it is found that the values of the BSF calculated by (5) are smaller than the simulated ones, so a calibration factor with a value of 1.145 is introduced, turning (5) into (6).
Figure 2 shows the calculated and simulated BSF of the T-shaped monopole for several parameter sweeps, with the other parameters of the broadband antenna kept unchanged. From Figures 2(a) and 2(b) it can be observed that, in those cases, the values of BSF calculated by (1) and (6) have almost the same accuracy. In the cases of Figures 2(c) and 2(d), however, the values of BSF calculated by (6) are clearly more accurate than those calculated by (1) when compared to the simulated ones, validating the accuracy of (6). The relative error of the proposed BSF formula compared with the simulation has been calculated; the maximum and mean values are 11.07% and 4.06%, respectively.
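For orientation, the planar-monopole estimate from Kumar and Ray [19] that (1) builds on can be written (a standard form from their book, quoted here as background; the monopole length $l$, equivalent radius $r$, and feed gap $p$ are expressed in centimeters):
$$ f_{L} \approx \frac{7.2}{l + r + p}\ \text{GHz}, \qquad r = \frac{A}{2\pi l}. $$
Equations (1) and (5)-(6) refine this estimate with the substrate term involving $\varepsilon_{\text{eff}}$, the effective gap $g_{\text{eff}}$, and the calibration factor 1.145.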
2.2. Broadband Antenna Design
In this section, the parameters of the broadband antenna are analyzed and discussed in detail. Figure 3 shows the frequency behavior of the scattering parameter for the different dimensions of the
T-shaped structure. From Figures 3(a) and 3(b), it can be observed that the two cuts at the lower edge of the radiant patch have a significant effect on the impedance matching at high frequency;
correspondingly, the impedance matching at higher frequency is improved. It is also seen in Figures 3(a) and 3(b) that the BSF of the broadband antenna decreases as the cuts are enlarged. The reason
is that the effective gap between the ground plane and the radiant patch increases while the width or length of the cuts increases. For the same reason, the BSF also decreases for the parameter
sweeps shown in Figures 3(c) and 3(d); in the latter case the impedance matching at higher frequency degrades as well, because a longer radiant patch provides a longer current path at lower
frequency, and thus a lower BSF.
2.3. Multiband Antenna Design
Based on the broadband antenna design, three inverted U-shaped slots are etched on the T-shaped radiant patch to reject the undesired frequency bands, thus achieving the multiband operation. The
resonant frequency of each inverted U-shaped slot can be approximately calculated by (7), reported in [21]:
$$ f_i = \frac{c}{2 L_i \sqrt{\varepsilon_{\text{eff}}}}, \qquad i = 1, 2, 3, $$
where $f_i$ denotes the resonant frequency of the $i$th band-notch structure, $L_i$ denotes the length, expressed in millimeters, of the $i$th band-notch structure, and $c$ is the speed of light in free space. Equation (7) predicts a decrement of the resonant frequency as the parameter $L_i$ is increased.
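As a quick numerical illustration (my own sketch based on the reconstructed form of (7) above; the target frequencies are examples, not the paper's final dimensions), the slot length needed for a given notch frequency can be computed as:

#include <stdio.h>
#include <math.h>

/* Slot length in mm for a target notch frequency in GHz,
   per f = c / (2 * L * sqrt(eps_eff)). */
static double slot_length_mm(double f_ghz, double eps_r)
{
    double eps_eff = (eps_r + 1.0) / 2.0;   /* air-substrate composite */
    double c = 299.792458;                  /* speed of light in mm/ns (1 GHz = 1/ns) */
    return c / (2.0 * f_ghz * sqrt(eps_eff));
}

int main(void)
{
    /* FR4 substrate (eps_r = 4.4) as used in the paper. */
    printf("5.5 GHz notch: L = %.1f mm\n", slot_length_mm(5.5, 4.4));
    printf("3.5 GHz notch: L = %.1f mm\n", slot_length_mm(3.5, 4.4));
    return 0;
}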
Figure 4 shows the frequency behavior of the scattering parameter for different values of the slot lengths $L_1$, $L_2$, and $L_3$. It can be seen that, with the increase of the length of the
band-notch structures, the resonant frequency decreases and the bandwidth also changes, verifying the behavior predicted by (7). It is also found that the width and the position of the inverted
U-slots have an effect on the frequency performance of the band-notch structures.
Figure 5 shows the frequency behavior of the scattering parameter of the broadband antenna with only one inverted U-shaped slot and with all the three slots (the proposed multiband antenna). It can
be seen that each band-notch structure can work independently and has little effect on the frequency performance of the other band-notch structures.
The maps of the surface current distributions, excited on the monopole antenna at each one of the operative frequency bands, are shown in Figure 6.
3. Results and Discussion
The proposed antenna has been fabricated and then measured using an Agilent E5071C vector network analyzer. Figure 7 shows the simulated and measured reflection coefficient of the proposed multiband antenna. It can be seen
that reasonable agreement has been achieved between the simulated and measured results. Because of the fabrication tolerances and of the perturbation effect caused by the SMA connector, there are
some discrepancies between the two results. The fluctuation of relative permittivity and loss tangent of the FR4 substrate at high frequency also contributes to the disagreement.
The radiated electric field has a linear polarization for the proposed antenna. Figure 8 shows the computed radiation patterns of the proposed antenna at the working frequency of 2.4, 3.5, 5.5, and
7.5GHz. It can be seen that the proposed antenna, similar to the typical monopole antennas, has nearly omnidirectional radiation pattern on -plane except at 7.5GHz. The degradation at 7.5GHz can
be explained as the following: with the increasing frequency the electrical length of the antenna is more than the half wavelength; then the surface current distributed on the radiant patch will be
destructive, thus degradation of the radiation pattern at this frequency is observed.
Figure 9 shows the simulated peak gain of the multiband antenna in the proposed frequency bands. From the figure it appears that the peak gain increases as the frequency increases. The deviation
between the maximum and minimum peak gain in each operation band is less than 1.5 dBi.
4. Conclusion
A multiband antenna based on a broadband antenna and the band-notch structures has been presented. A compact T-shaped monopole antenna with three inverted U-shaped slots has been adopted to achieve a
multiband frequency behavior. After simulation and optimization in Ansoft HFSS, the proposed antenna is fabricated and measured. The measured results have shown that the frequency range of the
proposed antenna can cover the operation bands of Bluetooth (2.4–2.484 GHz), WiMAX (3.3–3.69 GHz), WLAN (5.15–5.875 GHz), and the downlink of the X-band satellite communication system
(7.25–7.75 GHz). An improved formula useful to calculate the BSF of a general T-shaped monopole antenna has been proposed and discussed. Comparison between the simulated and calculated BSF for
different parameters of the T-shaped monopole has shown the good accuracy of the modified formula presented in this work.
Conflict of Interests
The authors declare that they have no conflict of interests regarding the software Ansoft HFSS used in this paper.
1. A. Mehdipour, A. R. Sebak, C. W. Trueman, and T. A. Denidni, “Compact multiband planar antenna for 2.4/3.5/5.2/5.8-GHz wireless applications,” IEEE Antennas and Wireless Propagation Letters, vol. 11, pp. 144–147, 2012.
2. F. Mirzamohammadi, J. Nourinia, and C. Ghobadi, “A novel dual-wideband monopole-like microstrip antenna with controllable frequency response,” IEEE Antennas and Wireless Propagation Letters, vol. 11, pp. 289–292, 2012.
3. M. J. Kim, C. S. Cho, and J. Kim, “A dual band printed dipole antenna with spiral structure for WLAN application,” IEEE Microwave and Wireless Components Letters, vol. 15, no. 12, pp. 910–912, 2005.
4. O. Tze-Meng, T. K. Geok, and A. W. Reza, “A dual-band omni-directional microstrip antenna,” Progress in Electromagnetics Research, vol. 106, pp. 363–376, 2010.
5. K. G. Thomas and M. Sreenivasan, “Compact CPW-fed dual-band antenna,” Electronics Letters, vol. 46, no. 1, pp. 13–14, 2010.
6. N. Zhang, P. Li, B. Liu, X. W. Shi, and Y. J. Wang, “Dual-band and low cross-polarisation printed dipole antenna with L-slot and tapered structure for WLAN applications,” Electronics Letters, vol. 47, no. 6, pp. 360–361, 2011.
7. Y. L. Kuo and K. L. Wong, “Printed double-T monopole antenna for 2.4/5.2 GHz dual-band WLAN operations,” IEEE Transactions on Antennas and Propagation, vol. 51, no. 9, pp. 2187–2192, 2003.
8. W. C. Liu, C. M. Wu, and N. C. Chu, “A compact CPW-fed slotted patch antenna for dual-band operation,” IEEE Antennas and Wireless Propagation Letters, vol. 9, pp. 110–113, 2010.
9. M. N. Mahmoud and R. Baktur, “A dual band microstrip-fed slot antenna,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 5, pp. 1720–1724, 2011.
10. A. A. Gheethan and D. E. Anagnostou, “Broadband and dual-band coplanar folded-slot antennas (CFSAs),” IEEE Antennas and Propagation Magazine, vol. 53, no. 1, pp. 80–89, 2011.
11. C. Hsieh, T. Chiu, and C. Lai, “Compact dual-band slot antenna at the corner of the ground plane,” IEEE Transactions on Antennas and Propagation, vol. 57, no. 10, pp. 3423–3426, 2009.
12. Y.-C. Lee and J.-S. Sun, “Compact printed slot antennas for wireless dual- and multi-band operations,” Progress in Electromagnetics Research, vol. 88, pp. 289–305, 2008.
13. M. R. Ghaderi and F. Mohajeri, “A compact hexagonal wide-slot antenna with microstrip-fed monopole for UWB application,” IEEE Antennas and Wireless Propagation Letters, vol. 10, pp. 682–685, 2011.
14. N. Chahat, M. Zhadobov, R. Sauleau, and K. Ito, “A compact UWB antenna for on-body applications,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 4, pp. 1123–1131, 2011.
15. G. Cappelletti, D. Caratelli, R. Cicchetti, and M. Simeoni, “A low-profile printed drop-shaped dipole antenna for wide-band wireless applications,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 10, pp. 3526–3535, 2011.
16. C. C. Lin, P. Jin, and R. W. Ziolkowski, “Single, dual and tri-band-notched ultrawideband (UWB) antennas using capacitively loaded loop (CLL) resonators,” IEEE Transactions on Antennas and Propagation, vol. 60, no. 1, pp. 102–109, 2012.
17. W. T. Li, Y. Q. Hei, W. Feng, and X. W. Shi, “Planar antenna for 3G/Bluetooth/WiMAX and UWB applications with dual band-notched characteristics,” IEEE Antennas and Wireless Propagation Letters, vol. 11, pp. 61–64, 2012.
18. A. A. Gheethan and D. E. Anagnostou, “Dual band-reject UWB antenna with sharp rejection of narrow and closely-spaced bands,” IEEE Transactions on Antennas and Propagation, vol. 60, no. 4, pp. 2071–2076, 2012.
19. G. Kumar and K. P. Ray, Broadband Microstrip Antennas, Artech House, Boston, Mass, USA, 2003.
20. K. G. Thomas and M. Sreenivasan, “A simple ultrawideband planar rectangular printed antenna with band dispensation,” IEEE Transactions on Antennas and Propagation, vol. 58, no. 1, pp. 27–34, 2010.
21. Q. X. Chu and Y. Y. Yang, “3.5/5.5 GHz dual band-notch ultra-wideband antenna,” Electronics Letters, vol. 44, no. 3, pp. 172–174, 2008.
SPOJ.com - Problem PYRSUM2
SPOJ Problem Set (classical)
9138. Pyramid Sums 2
Problem code: PYRSUM2
This is a harder version of PYRSUM
Tommy is stacking square blocks in columns labelled from 1 to 1000000 (10^6). Since it can be quite boring writing out the locations of every block, he instead specifies a set of 2D pyramids that,
when built on top of each other, will make the shape he wants. Pyramids always have height H=(W+1)/2 and take up N=H^2 blocks, so it is quite easy for him to work out how many blocks he will need in
total from this description.
What is not so easy is working out how many blocks he will need to build in the space that occurs in the range between two columns (inclusive!). Given a set of instructions consisting of either
• “build [centre] [w]” (build another pyramid, width [w] with its midpoint at [centre]) or
• “count [left] [right]” (count the number of blocks added so far within the range of these columns inclusive)
you must try to answer the queries as quickly as possible.
Input
N (1 <= N <= 200000), the number of operations to perform, followed by
N lines, each containing one operation (as detailed above).
Output
Please use 64-bit counters as the result may overflow a 32-bit container!
build 5 3
build 6 5
count 4 7
Visualisation of example test case: [ASCII drawing of the two overlapping pyramids, not reproduced here]
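To make the block arithmetic concrete, here is a small illustrative C helper (my own sketch; it is far too slow for the stated limits, which call for something like a Fenwick tree supporting higher-order range updates) that computes one pyramid's contribution to a count range directly from H=(W+1)/2:

#include <stdio.h>

/* Blocks contributed to columns [l, r] by a pyramid of width w centred at c.
   Column x of the pyramid holds max(0, H - |x - c|) blocks, where H = (w + 1) / 2. */
static long long pyramid_count(long long c, long long w, long long l, long long r)
{
    long long h = (w + 1) / 2, total = 0;
    for (long long x = l; x <= r; x++)
    {
        long long d = (x > c) ? x - c : c - x;
        if (d < h)
            total += h - d;
    }
    return total;
}

int main(void)
{
    /* The example above: build 5 3, build 6 5, count 4 7. */
    long long ans = pyramid_count(5, 3, 4, 7) + pyramid_count(6, 5, 4, 7);
    printf("%lld\n", ans);   /* prints 12 by this reasoning */
    return 0;
}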
Added by: Robin Lee
Date: 2011-07-08
Time limit: 3s
Source limit: 50000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: All
Resource: Self
2014-02-03 17:18:06 Robin Lee
There is no point in submitting solutions copy-pasted from the Internet; these will all be disqualified.
Last edit: 2014-02-04 02:16:42
A prime ideal principle in commutative algebra.
(English) Zbl 1168.13002
Let $R$ be a commutative ring with identity. It is a “metatheorem” in commutative algebra that an ideal maximal with respect to some property is often prime. Of course the best known and probably
most important is Krull’s result that an ideal maximal with respect to missing a multiplicatively closed set is prime. Also, an ideal maximal with respect to not being principal, invertible, or
finitely generated or an ideal maximal among annihilators of nonzero elements of a module is prime. This delightful paper actually gives such a metatheorem, the Prime Ideal Principle.
Let $\mathfrak{F}$ be a family of ideals of $R$ with $R\in \mathfrak{F}$. Then $\mathfrak{F}$ is an Oka family (resp., Ako family) if for an ideal $I$ of $R$ and $a,b\in R$, $(I,a),(I:a)\in \mathfrak{F}$ implies $I\in \mathfrak{F}$ (resp., $(I,a),(I,b)\in \mathfrak{F}$ implies $(I,ab)\in \mathfrak{F}$). The Prime Ideal Principle states that if $\mathfrak{F}$ is an Oka or Ako family, then any ideal maximal in the complement of $\mathfrak{F}$ is prime, i.e., $\operatorname{Max}(\mathfrak{F}^{c})\subseteq \operatorname{Spec}(R)$. Hence if $\mathfrak{F}$ is an Oka or Ako family in $R$, every nonempty chain of ideals in $\mathfrak{F}$ has an upper bound in $\mathfrak{F}$, and all primes belong to $\mathfrak{F}$, then all ideals of $R$ belong to $\mathfrak{F}$.
From these two results we recapture the results listed above plus many more and the well known consequences such as $R$ is noetherian (resp. a Dedekind domain, a PIR) if every nonzero prime ideal is
finitely generated (resp., invertible, principal). The paper studies Oka and Ako families and related types of families in detail.
Many more applications of the Prime Ideal Principle are given, some of them new, such as the following: a ring $R$ is Artinian if and only if for each prime ideal $P$ of $R$, $P$ is finitely generated
and $R/P$ is finitely cogenerated. The work is also interpreted in terms of categories of cyclic modules.
This paper was a joy to read and should be read by all those interested in commutative algebra.
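To see the principle in action (a standard application, sketched here for concreteness): given a multiplicatively closed set $S\subseteq R$, let $\mathfrak{F}=\{I : I\cap S\neq\emptyset\}\cup\{R\}$. This is an Oka family, for if $s\in (I,a)\cap S$ and $t\in (I:a)\cap S$, then
$$ st \in (I,a)(I:a) \subseteq I + a(I:a) \subseteq I, $$
while $st\in S$ because $S$ is multiplicatively closed, so $I\in\mathfrak{F}$. An ideal maximal with respect to missing $S$ is maximal in the complement of $\mathfrak{F}$, hence prime by the Prime Ideal Principle, recovering Krull's theorem.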
13A15 Ideals; multiplicative ideal theory
Heuristic Minimization for Synchronous Relations
V. Singhal, Y. Watanabe and Robert K. Brayton
EECS Department
University of California, Berkeley
Technical Report No. UCB/ERL M93/30
Optimization for synchronous systems is an important problem in logic synthesis. However, the full utilization of don't care information for sequential synthesis is far from being solved.
Synchronous boolean relations can represent sequential don't care information in synchronous systems. This allows greater flexibility in expressing don't care information than ordinary boolean
relations relating the input and output spaces. Synchronous relations can be used to specify sequential designs both at the finite state machine level and at the level of combinational elements and
latches. In this report we also show that the synchronous relation formulation can be used to find a minimal sum-of-products form which implements a function compatible with an arbitrary set of
boolean relations. The main objective of this report is to present a heuristic approach to find a minimal implementation for a given synchronous relation.
BibTeX citation:
@techreport{Singhal:M93/30,
    Author = {Singhal, V. and Watanabe, Y. and Brayton, Robert K.},
Title = {Heuristic Minimization for Synchronous Relations},
Institution = {EECS Department, University of California, Berkeley},
Year = {1993},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/2326.html},
Number = {UCB/ERL M93/30},
    Abstract = {Optimization for synchronous systems is an important problem in logic synthesis. However, the full utilization of don't care information for sequential synthesis is far from being solved. Synchronous boolean relations can represent sequential don't care information in synchronous systems. This allows greater flexibility in expressing don't care information than ordinary boolean relations relating the input and output spaces. Synchronous relations can be used to specify sequential designs both at the finite state machine level and at the level of combinational elements and latches. In this report we also show that the synchronous relation formulation can be used to find a minimal sum-of-products form which implements a function compatible with an arbitrary set of boolean relations. The main objective of this report is to present a heuristic approach to find a minimal implementation for a given synchronous relation.}
}
EndNote citation:
%0 Report
%A Singhal, V.
%A Watanabe, Y.
%A Brayton, Robert K.
%T Heuristic Minimization for Synchronous Relations
%I EECS Department, University of California, Berkeley
%D 1993
%@ UCB/ERL M93/30
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/2326.html
%F Singhal:M93/30
Math Forum Discussions
Topic: The "expected sign" of a multiple regression coefficient
Replies: 9 Last Post: Apr 27, 2005 12:52 PM
Re: The "expected sign" of a multiple regression coefficient
Posted: Apr 18, 2005 6:15 PM
Richard Ulrich wrote:
> On 17 Apr 2005 20:28:12 -0700, "Reef Fish"
> <Large_Nassau_Grouper@Yahoo.com> wrote:
> [snip, much...]
> >
> > The "expected sign" of a (multiple) regression coefficient is the
> > single ERROR most often committed by social scientists and
> > economists in
> > their interpretation of regression coefficients.
> I seem to differ greatly from you on the nature of this error....
> >
> > Over the years, I have not found a SINGLE CASE in which a reason
> > was given (nor hinted) on where the "expectation" of the expected sign
> > came from.
> Deaf to all explanations? no, making a rhetorical point.
"not given (nor hinted)" -- how could it be heard? :-)
I am used to hearing rhetoric. Yours is not even a good rhetorical point.
> >
> > The ERROR was always when the user thinks of the sign of a multiple
> > regression coefficient as the sign of the SIMPLE correlation between
> > that X and Y, whereas the SIGN of the coefficient is the sign of the
> > PARTIAL correlation between that X and Y, GIVEN all the rest of the
> > independent variables in the regression!
> Yes, the sign of the simple correlation is important to
> pay attention to.
But it has no relevance to the sign of a MULTIPLE regression coefficient.
> [...]
> >
> > One of the latest national news is the NEW SAT exam, consisting of
> > Verbal, Quantitative, and Essay. Let's say those are THREE of
> > 10 independent variables used to predict (or fit) the GPR of
> [snip, invented example of "opposite-sign" prediction.]
Not invented. Been through that process nearly 30 years ago!
> Here is a *real* example of opposite-sign prediction.
> It was either the SAT or another achievement test which figured
> out that they achieved more reliable estimation of Verbal by
> subtracting off some of the achievement on a Reading Speed
> sub-scale that was computed internally (not reported to users).
> Folks who read faster could get farther through the test, without
> knowing more, so it provided a *rational* correction.
What's your point? Were you using regression methods to build a
PREDICTION model with the available data?
I was teaching a graduate course in Data Analysis in which each
student chose his own REAL data sets to learn how to do multiple
regression and model building throughout the course.
(That's why I've analyzed THOUSANDS of real data sets with regression
methods from those graduate classes alone!)
One student was working for the Admissions Office of the University
and used the real data used BY the university to build its prediction
model(s). It was an extensive dataset with thousands of observations,
with the usual SAT scores, a Math Achievement Score, some nominal and
categorical variables in the students' applications, to FIT to the
students' Grade Point Average at the end of their Freshman year, so
that the "prediction models" were used in subsequent year to help
decide whether to admit certain students, based on the predicted
performance at the end of the Freshman year.
That turned out to be a GOLDEN set of data to use for pedagogical
purposes to demonstrate the "expected sign" fallacy as well as
showing how the SIGN of the SAT math variable in the predictive
models can be POSITIVE or NEGATIVE, statistically insignificantly
negative, OR statistically significantly negative, ALL depending on
what OTHER variables are in the predictive models. The variable
that would make the sign of the SAT Math variable NEGATIVE in
predicting GPR was the PRESENCE of the Math Achievement Score in
the same model, in combination with certain other variables.
Thus, one could in fact manipulate the SIGN of the SAT Math score
coefficient at will, to be positive OR negative, where there is
no question that the SIMPLE correlation between that variable and
GPR is consistently and significantly POSITIVE.
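[A side note on the algebra behind such sign flips: with two standardized predictors, the coefficient of the first is
$$ \beta_1^{*} = \frac{r_{y1} - r_{y2}\, r_{12}}{1 - r_{12}^{2}}, $$
so with illustrative values $r_{y1}=0.5$, $r_{y2}=0.7$, $r_{12}=0.8$ one gets $\beta_1^{*} = (0.5 - 0.56)/0.36 \approx -0.17$: a negative multiple regression coefficient despite a clearly positive simple correlation. These numbers are made up for illustration, not taken from the Admissions data.]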
One important fact to remember is that this is a PREDICTIVE model.
Granted it would be difficult to explain to the unwashed why
the sign of the SAT score is NEGATIVE!
"Does that mean the lower my SAT Math score, the better chance I have
to be admitted?" would be the obvious question by the student or
the parents.
YES, if you knew the PREDICTION model and use it to cheat! :-)
Sort of like using insider information in stock trading. Remember
Martha? :) THEN, indeed a student could deliberately score low
on the SAT Math and enhance his/her chances of admission IF HE KNEW
THAT MODEL was used.
The PREDICTIVE model is neither meant to be a CAUSAL model nor
a CONTROL model. To use it as such would just be another common
abuse of regression models.
Over the years, those predicted models (developed by MY graduate
students in the course, not the models actually used by the Admissions
Office) with a NEGATIVE sign for SAT Math, consistently stood
all tests of cross-validation, subsampling, and the rest of the
data-analytic techniques to see if a developed model is "stable"
and holds for future predictions.
Contrary to your unsubstantiated speculation:
RU> We are avoiding artifacts that
RU> will not be consistent between samples.
> Here is a rule, I think, for using opposite-sign predictors:
> Make sure that they actually work. I think, too, there will
> have to be a face-valid interpretation of them. The easiest
> instances that I know of have involved pairs of variables,
> so that the (B-k*A) term can be explicitly used in prediction;
> also, you can figure out separately whether (B-k*A) works
> better than another model of difference, like k*log(B/A).
You are just using your ad hoc way of trying to explain the
effect of the PARTIAL correlation information imbedded in a
predictive model in which your intuition about its sign was wrong.
> >
> > The MIS-interpretation of the "expected sign" of multiple
> > regression coefficient gave rise to a flurry of papers on Ridge Regression,
> > whose sole purpose was to make the observed sign "correct", when they
> > give NO reason (nor even know that those are not SIMPLE correlation
> > signs) why any sign should be expected to be "positive" or "negative".
> But, I take it, you forever *missed* the explanation for why
> people did not like the opposite-sign predictions: They
> didn't hold up.
WRONG! I explained why they don't like it, because it's counter-
intuitive and people MIS-interpret such predictive models as if they
"explain" or "control" the GPR average in the students performance.
TWO or THREE wrongs do not make one right!
> That's why the ridge-regressions *did* tend
> to work -- they replicated. "Reduced variance" was the goal.
That is why Ridge Regression was a fad for a few years. It was a
naked emperor promoted by those who MISINTERPRET the signs of the
multiple regression coefficients.
If the Ridge Regression enthusiasts wanted to use MSE as their criterion
rather than LS, they could play Stein's game and publish some results
irrelevant to sound application.
> >
> > I have rejected more submitted journal papers based on that faulty
> > false premise than you can imagine. But such misinterpretations are
> > EVERYWHERE in the applied journals of economics and the social sciences.
> >
> *All* those people seem to differ greatly from you, on the
> nature of the problems in regressions.
*All* those who thought the "expected" sign should be the same as the
"expected sign" of a simple correlation are WRONG, without exception.
(completely orthogonal X's excepted, as noted in my introduction).
That is both a theoretical AND an empirical FACT.
> EVERYWHERE ... it seems to me like this ought to have provoked
> a response of curiosity. Do you *still* not wonder why?
There is nothing to wonder why, when I knew the fallacy of those who
misinterpret coefficients, and I know the theoretical as well as
proven empirical reasons (as in the University Admission data).
What is there left to wonder?
I routinely took published articles from economics and the social
sciences as lecture material in my graduate classes to PROVE that
the authors were WRONG when they say "the expected sign", because
they were dealing with models with a large number of independent
(they like to use fancy words like "exogenous" too, as if that
added anything of substance) variables and they say variable
X is "expected" to have a positive sign in some model when they did not
even TELL what the other variables are, let alone reason WHY the
partial correlation should be expected to be positive!
You KNOW then and there, they were thinking "simple correlation" when
it should have been the "partial correlation", whose sign (expected
or observed) depends CRITICALLY and ENTIRELY on what OTHER variables
are in the fitted model!
> [snip, some]
> >
> > I've seen the "expected sign" MIS-interpreted every time I've seen
> > that term used in a multiple regression context; I've NEVER seen
> > anyone argue on why that sign is expected to be "positive" or
> > "negative" by arguing from a partial correlation point of view!
> Artifact, Bob, artifact.
I have reason to believe that you are in the camp of the "expected
sign" abusers, from everything you've said in this post!
> We are avoiding artifacts that
> will not be consistent between samples.
That's only YOUR unsubstantiated speculations. I've explained how
the Admission Office data could consistently support, and significantly
support PREDICTIVE models of GPR in which the SIGN of the SAT Math
variable was NEGATIVE.
> Here is some
> background of why your rant does not move me, and
> must have frustrated a good many good researchers whom
> you have reviewed.
I consider that my contribution to stop/lessen statistical ABUSE and
statistical POLLUTION, by those in the social and economic sciences
who are no more equipped to practice statistics than they are to
practice brain surgery or law, after reading a book or a chapter
of a computer manual and thinking they can practice statistics correctly.
> Psychometrics figured out a long time ago that rating
> scales are not created by multiple regression of items.
> (Certain ideas in making *scales*, I believe, have carried over
> usefully to good intuitions while *using* multiple regression.)
Don't change the subject.
We are NOT talking about rating scales or any of what psychometricians
do. We are SPECIFICALLY talking about the CORRECT and PROPER
interpretation of the SIGN of multiple regression coefficients.
> The most common way to create additive scales in the
> social sciences makes use of simple sums of items, or
> of item responses "0,1,2,3". It takes a huge sample to
> justify using differential weighting of items, or of scores
> (for all items, or for single items).
I've had colleagues who talked about "unit weighting" too. They were
talking about STATISTICS. They were exercising psychometric voodoo and
quackery in the name of statistics!
< irrelevant tangent to the interpretation of SIGN of the multiple
regression coefficients snipped >
> Now, if the opposite sign can replicate, I would certainly
> search for the reason.
The reasons would be PARTIAL correlation of one variable with another
in the PRESENCE of the remaining variables.
Simple! Too bad you never learned that.
> However, these suppressors are usually accidents.
Your inexperience with REAL data in "model building" using regression
methods showed.
> Hope this helps.
Sorry, your post didn't help one bit in explaining away the common ERROR
committed by those who are totally oblivious to the DIFFERENCE between the
"expected sign" of a simple correlation and that of a PARTIAL correlation.
Hope this helps. If not, I am not surprised.
Of all the textbooks on regression, the one that best articulates the
FALLACY of the "expected sign" phenomenon (by people like yourself
and the other social scientists and economists) is the book by
Mosteller and Tukey!! "Data Analysis and Regression" (1977).
Get a copy of that book, and read the relevant chapters related to
partial correlations (and the misnomer of "keeping the other variables
constnat" when speaking of the given variables in a partial corr.),
and try to read it CAREFULLY and read it WELL.
> _
> --
> Rich Ulrich, wpilib@pitt.edu
> http://www.pitt.edu/~wpilib/index.html
-- Bob.
Date Subject Author
4/18/05 Re: The "expected sign" of a multiple regression coefficient Large_Nassau_Grouper@Yahoo.com
4/19/05 Re: The "expected sign" of a multiple regression coefficient Richard Ulrich
4/20/05 Re: The "expected sign" of a multiple regression coefficient Jim Clark
4/20/05 Re: The "expected sign" of a multiple regression coefficient Large_Nassau_Grouper@Yahoo.com
4/27/05 Re: The "expected sign" of a multiple regression coefficient David Reilly
4/27/05 Re: The "expected sign" of a multiple regression coefficient Large_Nassau_Grouper@Yahoo.com
4/27/05 Re: The "expected sign" of a multiple regression coefficient Large_Nassau_Grouper@Yahoo.com
4/20/05 Re: The "expected sign" of a multiple regression coefficient Richard Ulrich
Toward efficient agnostic learning
Results 1 - 10 of 155
"... We study the question of determining whether an unknown function has a particular property or is ffl-far from any function with that property. A property testing algorithm is given a sample of
the value of the function on instances drawn according to some distribution, and possibly may query the fun ..."
Cited by 421 (57 self)
Add to MetaCart
We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of the
value of the function on instances drawn according to some distribution, and possibly may query the function on instances of its choice. First, we establish some connections between property testing
and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being k-colorable or having a ρ-clique (clique of
density ρ w.r.t. the vertex set). Our graph property testing algorithms are probabilistic and make assertions which are correct with high probability, utilizing only poly(1/ε) edge-queries into the
graph, where ε is the distance parameter. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph
which corre...
- JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY , 1997
"... We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the ..."
Cited by 317 (66 self)
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit
sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum
achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching
leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in
this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
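A minimal sketch of the multiplicative-weights style of algorithm analyzed in this line of work (the paper's exact algorithm and tuning differ; the penalty factor beta below is an arbitrary placeholder):

import random

def predict_with_experts(expert_preds, outcomes, beta=0.5):
    """Combine binary expert predictions by multiplicative weights.

    expert_preds: one list of 0/1 predictions per round.
    outcomes:     the true bit for each round.
    Returns the algorithm's (randomized) number of mistakes.
    """
    w = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict 1 with probability equal to the weighted vote for 1.
        p1 = sum(wi for wi, p in zip(w, preds) if p == 1) / sum(w)
        guess = 1 if random.random() < p1 else 0
        mistakes += (guess != y)
        # Shrink the weight of every expert that was wrong this round.
        w = [wi * (beta if p != y else 1.0) for wi, p in zip(w, preds)]
    return mistakes

Randomizing the prediction, rather than taking a deterministic majority vote, is what allows the expected regret to scale with the square root of the best expert's mistake count.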
- Information and Computation , 1995
"... this paper, we concentrate on linear predictors. To any vector u ∈ R ..."
- MACHINE LEARNING , 2002
"... We consider the following clustering problem: we have a complete graph on n vertices (items), where each edge (u, v) is labeled either + or − depending on whether u and v have been deemed to be
similar or different. The goal is to produce a partition of the vertices (a clustering) that agrees as mu ..."
Cited by 222 (4 self)
We consider the following clustering problem: we have a complete graph on n vertices (items), where each edge (u, v) is labeled either + or − depending on whether u and v have been deemed to be similar
or different. The goal is to produce a partition of the vertices (a clustering) that agrees as much as possible with the edge labels. That is, we want a clustering that maximizes the number of +
edges within clusters, plus the number of − edges between clusters (equivalently, minimizes the number of disagreements: the number of − edges inside clusters plus the number of + edges between
clusters). This formulation is motivated from a document clustering problem in which one has a pairwise similarity function f learned from past data, and the goal is to partition the current set of
documents in a way that correlates with f as much as possible; it can also be viewed as a kind of "agnostic learning" problem. An interesting
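One simple procedure in the spirit of this problem, added for concreteness (this is the later randomized-pivot heuristic, not the algorithm of the paper itself; the input encoding is invented):

import random

def pivot_cluster(vertices, plus_edges):
    """Cluster a complete +/- labeled graph by random pivoting.

    plus_edges holds frozenset({u, v}) for every '+' pair; all other
    pairs are implicitly '-'. Each pivot grabs its remaining '+'
    neighbors as one cluster, then the loop continues on the rest.
    """
    rest = list(vertices)
    clusters = []
    while rest:
        pivot = random.choice(rest)
        cluster = [v for v in rest
                   if v == pivot or frozenset((pivot, v)) in plus_edges]
        clusters.append(cluster)
        rest = [v for v in rest if v not in cluster]
    return clusters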
- Journal of Computer and System Sciences , 1993
"... In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior---thus, the same input
may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic c ..."
Cited by 197 (8 self)
In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior---thus, the same input may
sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured
variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general
in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of
p-concepts, we study and develop in detail an underlying theory of learning p-concepts. 1 Introduction Consider the following scenarios: A meteorologist is attempting to predict tomorrow's weather as
accurately as pos...
- IN PROCEEDINGS OF THE TWENTY-SIXTH ANNUAL SYMPOSIUM ON THEORY OF COMPUTING , 1994
"... We present new results on the well-studied problem of learning DNF expressions. We prove that an algorithm due to Kushilevitz and Mansour [13] can be used to weakly learn DNF formulas with
membership queries with respect to the uniform distribution. This is the first positive result known for learn ..."
Cited by 118 (23 self)
We present new results on the well-studied problem of learning DNF expressions. We prove that an algorithm due to Kushilevitz and Mansour [13] can be used to weakly learn DNF formulas with membership
queries with respect to the uniform distribution. This is the first positive result known for learning general DNF in polynomial time in a nontrivial model. Our results should be contrasted with those
of Kharitonov [12], who proved that AC^0 is not efficiently learnable in this model based on cryptographic assumptions. We also present efficient learning algorithms in various models for the read-k and
SAT-k subclasses of DNF. We then turn our attention to the recently introduced statistical query model of learning [9]. This model is a restricted version of the popular Probably Approximately
Correct (PAC) model, and practically every PAC learning algorithm falls into the statistical query model [9]. We prove that DNF and decision trees are not even weakly learnable in polynomial time in
this model. This result is information-theoretic and therefore does not rely on any unproven assumptions, and demonstrates that no straightforward modification of the existing algorithms for learning
various restricted forms of DNF and decision trees will solve the general problem. These lower bounds are a corollary of a more general characterization of the complexity of statistical query
learning in terms of the number of uncorrelated functions in the concept class. The underlying tool for all of our results is the Fourier analysis of the concept class to be learned.
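The Fourier tool mentioned in the last sentence can be illustrated by a sampling estimator; a correlation of this form is exactly the kind of quantity a statistical-query oracle supplies. A sketch, with f, n, S, and the sample count all placeholders:

import random

def estimate_fourier(f, n, S, samples=10000):
    """Estimate the Fourier coefficient f_hat(S) of f: {0,1}^n -> {-1,+1}.

    f_hat(S) = E_x[f(x) * chi_S(x)] with x uniform, where
    chi_S(x) = (-1)^(sum of x_i for i in S).
    """
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        chi = (-1) ** sum(x[i] for i in S)
        total += f(x) * chi
    return total / samples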
, 1997
"... In the model selection problem, we must balance the complexity of a statistical model with its goodness of fit to the training data. This problem arises repeatedly in statistical estimation,
machine learning, and scientific inquiry in general. ..."
Cited by 110 (5 self)
In the model selection problem, we must balance the complexity of a statistical model with its goodness of fit to the training data. This problem arises repeatedly in statistical estimation, machine
learning, and scientific inquiry in general.
- Neural Computation , 1997
"... In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this
estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact ..."
Cited by 100 (0 self)
In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate
is not much worse than that of the training error estimate. The name sanity-check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the
training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for limited cases in the
prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of
hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability, and apply it to obtain
sanity-check b...
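For concreteness, the leave-one-out estimate under discussion is the following procedure, sketched here for the 1-nearest-neighbor rule that the earlier hypothesis-stability bounds applied to (the dataset format is hypothetical):

def loo_error_1nn(points, labels):
    """Leave-one-out error of 1-nearest-neighbor on a labeled sample.

    Each point is held out in turn and classified by its nearest
    neighbor among the rest; the fraction misclassified is returned.
    """
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    errors = 0
    for i, (x, y) in enumerate(zip(points, labels)):
        j = min((k for k in range(len(points)) if k != i),
                key=lambda k: dist2(points[k], x))
        errors += (labels[j] != y)
    return errors / len(points)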
, 1995
"... Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree-d polynomials which agree with f on a tiny but
non-negligible fraction, δ, of the input space ..."
Cited by 87 (18 self)
Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree-d polynomials which agree with f on a tiny but
non-negligible fraction, δ, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1/δ and n and exponential in d,
provided δ is Ω(√(d/|F|)). For the special case when d = 1, we solve this problem for any |F| > 0; in this case the running time of our algorithm is bounded by a polynomial in 1/δ and n. Our
algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this task for the case when F = GF(2) (and d = 1). The task is closely related to the model of agnostic
learning of Kearns et al. [21] (see also [27, 28, 22]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output
relationship of the function, and the goal of the learner is to come up with a simple explanation which best fits the examples. Therefore the best explanation may account for only part of the
phenomena. In some situations, when the phenomena appear very irregular, providing an explanation which fits only part of them is better than nothing. Interestingly, Kearns et al. did not consider
the use of queries (but rather examples drawn from an arbitrary distribution) as they were skeptical that queries could be of any help. We show that queries do seem to help (see below).
, 1995
"... It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning
performance degrades when the representational power of the neuron is overstrained, i.e., if more complex conce ..."
Cited by 84 (0 self)
It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning
performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that the problem of learning a probably
almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same
hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristical learning nor
learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
Plymouth Valley, PA
Find a Plymouth Valley, PA Precalculus Tutor
...My credentials include over 10 years tutoring experience and over 4 years professional teaching experience. I received 800/800 on the GRE math section and perfect marks on the Praxis I math
section, as well as the Award for Excellence on the Praxis II mathematics content test. I possess clean FBI/criminal history and Child Abuse clearances.
58 Subjects: including precalculus, reading, chemistry, calculus
...When this is the case, the student will likely do better on the ACT, probably because the questions are more straight forward. The ACT math section covers trigonometry and elements of
pre-calculus while the SAT goes only through algebra 2. A major difference between the SAT and the ACT is that the ACT has a science section.
23 Subjects: including precalculus, English, calculus, geometry
...I can offer endless alternative ways of presenting material to help you understand. My education includes graduate level physics and math. My teaching experience includes teaching high school
physics and over 20 years of teaching science one-on-one and to groups as a professional physicist in the public and private sectors.
10 Subjects: including precalculus, calculus, physics, geometry
...Discover methods for factoring trinomials quickly and easily. Understand slopes and line equations. Identify a slope from two points or draw a line with a slope and a point.
27 Subjects: including precalculus, calculus, statistics, geometry
...Aside from that, I occasionally tutored high school mathematics and other more advanced college courses, such as Advanced Calculus, Logic and Set Theory, Foundations of Math, and Abstract
Algebra. Many of these subjects I also tutored privately. In addition to this, I've done substantial work i...
26 Subjects: including precalculus, English, writing, reading
Related Plymouth Valley, PA Tutors
Plymouth Valley, PA Accounting Tutors
Plymouth Valley, PA ACT Tutors
Plymouth Valley, PA Algebra Tutors
Plymouth Valley, PA Algebra 2 Tutors
Plymouth Valley, PA Calculus Tutors
Plymouth Valley, PA Geometry Tutors
Plymouth Valley, PA Math Tutors
Plymouth Valley, PA Prealgebra Tutors
Plymouth Valley, PA Precalculus Tutors
Plymouth Valley, PA SAT Tutors
Plymouth Valley, PA SAT Math Tutors
Plymouth Valley, PA Science Tutors
Plymouth Valley, PA Statistics Tutors
Plymouth Valley, PA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Broad Axe, PA precalculus Tutors
Center Square, PA precalculus Tutors
Cynwyd, PA precalculus Tutors
Erdenheim, PA precalculus Tutors
Jarrettown, PA precalculus Tutors
Lafayette Hill precalculus Tutors
Miquon, PA precalculus Tutors
Ogontz Campus, PA precalculus Tutors
Penllyn, PA precalculus Tutors
Plymouth Meeting precalculus Tutors
Rosemont, PA precalculus Tutors
Saint Davids, PA precalculus Tutors
Southeastern precalculus Tutors
Strafford, PA precalculus Tutors
Upton, PA precalculus Tutors
[FOM] Weak categorical theories of arithmetic
Aatu Koskensilta Aatu.Koskensilta at uta.fi
Mon Jun 11 05:59:37 EDT 2012
I asked for an example of an axiomatizable, and preferably finitely
axiomatizable, second-order theory of arithmetic that's categorical
but proof-theoretically weak. Unless I'm mistaken, I can now answer
this question.
Here's an example with infinitely many axioms, using the failure of
compactness for second-order logic. We simply take as axioms every
sentence of the form
0 =/= 1 & 1 =/= 2 & 2 =/= 0 & 2 =/= 3 & 3 =/= 1 & 3 =/= 0 & ... ,
saying that the first n numerals name distinct objects, together with
If there are infinitely many objects, the usual axioms of
second-order arithmetic hold.
We also know that the set of sentences that have no finite models
is not recursively enumerable (by Trakhtenbrot's theorem). So for any
given system of (sound) rules of inference, there is necessarily a
sentence A such that A |= "there are infinitely many objects" but not
A |- "there are infinitely many objects". We get a proof-theoretically
weak but categorical finitely axiomatizable theory by taking as axioms
such an A and "if there are infinitely many objects, the usual axioms
of second-order arithmetic hold". Perhaps someone can think of a nice
A (for a standard deductive system for second-order logic)?
Aatu Koskensilta (aatu.koskensilta at uta.fi)
"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus
North Wales Prealgebra Tutor
Find a North Wales Prealgebra Tutor
...Math education has improved in recent decades while grammar education has suffered. I had tough, old-school English teachers and professors, which is why my grammar skills far surpass those of
most English teachers I have known. Most people think of geometry as a math course, which, of course, it is.
23 Subjects: including prealgebra, English, calculus, geometry
...You will be amazed at how easy it will be to familiarize yourself with the various aspects of this program. An understanding of algebra is a foundational skill to virtually all topics in
higher-level mathematics, and it is useful in science, statistics, accounting, and numerous other professional and academic areas. 1. Describe basic operations or numbers and signs. 2.
27 Subjects: including prealgebra, calculus, ACT Math, economics
...It upsets me when I hear students say, 'I'm just not good in math!' Comments like that typically mean that a math teacher along the way wasn't able to present the material in a way that made
sense to the student. I've never met a student who didn't understand once we as a team figured out how t...
9 Subjects: including prealgebra, geometry, precalculus, algebra 2
...I enjoy tutoring students one-on-one, and watching them become stronger math students. I like to help them build their confidence and problem solving ability as well as their skills.I taught
Algebra to 8th and 9th grade students for over 5 years. I taught geometry to high school students every year for nine years.
3 Subjects: including prealgebra, geometry, algebra 1
...I have experience teaching and tutoring American, World, and European History as well as U.S. Government and Politics. I specialize in AP preparation, as well as writing, research, and note
taking skills.
12 Subjects: including prealgebra, reading, algebra 1, special needs
Conquering Math - Exam Tips
When most people think of having to solve a math problem on the certification exam this is what they picture in their mind:
In reality, the math is not complicated if you don’t let it intimidate you. Here’s a few pointers to help you prepare for, and ace the math portion of your water distribution certification exam.
• Practice solving every type of problem you expect to see on the certification exam. For example: water velocity, pipe volume, tank surface areas, residual pressures, etc.
• Read the problem carefully. Some of the information provided is not needed to solve the problem. Make sure you understand what the problem is asking you to solve, then find the correct formula.
• Use the formula sheet for reference. You will be provided a formula sheet for use during the exam so it’s not necessary to memorize them, although it may happen naturally after
enough repetitions.
• Don’t be too quick to look for help solving the problem. Try your best to work through it on your own. Once you’re completely stuck then refer to a tutorial or get help from someone that knows
how to work the problem.
• Avoid simple errors. The most common mistakes are the little things… a misplaced decimal, transposed numbers, using the wrong units, etc.
• No matter how confident you are in your math skills, double check your work before completing your exam. The best mathematicians make mistakes, especially when solving multi-part equations.
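For example, a typical pipe-volume problem runs through the formula sheet like this (the dimensions below are made up for illustration): 100 feet of 6-inch (0.5 ft) diameter pipe holds
$$V = \pi r^2 L = \pi \times (0.25\ \text{ft})^2 \times 100\ \text{ft} \approx 19.6\ \text{ft}^3, \qquad 19.6 \times 7.48\ \tfrac{\text{gal}}{\text{ft}^3} \approx 147\ \text{gallons}.$$
The two classic slip-ups, using the 6-inch diameter where the radius belongs and skipping the 7.48 gallons-per-cubic-foot conversion, are exactly the kind of simple errors the tips above warn about.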
Proofs of Baire category theorem
I would like to have a list of proofs of the fact that the real line is not meager (also very useful would be a reference to such a list, if it already exists somewhere).
My motivation is the following: in the article Definably complete and Baire structures we defined a first-order notion of Baire structures, and I would like to prove that every definably complete
ordered field is definably Baire. To do that, a possible approach would be to take a proof of the fact that $\mathbb R$ is not meager, and adapt it to the first-order situation. The main obstacle to
such an adaptation is the fact that we cannot define sets by recursion.
gn.general-topology real-analysis lo.logic
11 I'm struggling to think of two proofs that are interestingly different. Don't they all basically involve finding a nested sequence of closed sets with intersection in the complement of the meagre
set? – gowers Oct 9 '10 at 21:45
One somewhat concrete proof is presented in Simpson's Subsystems of second order arithmetic. The proof of the Baire category theorem there is does not literally define sets by recursion, which
also cannot be done in weak subsystems of second-order arithmetic. However, I would view that proof as essentially just an effectivization of the usual proof, and so if you do not have the
ability to define even sequences of points by recursion then it is not clear that you will be able to recast the proof in your setting. – Carl Mummert Oct 11 '10 at 0:53
@gowers: there are at least two different proofs of the fact that R is not meager (corresponding to the two cases of Baire category theorem): one using the fact that R is a complete metric space,
the other using the fact that [0,1] is compact. – Antongiulio Oct 11 '10 at 10:05
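For reference, the nested-sets skeleton both versions share runs roughly as follows (a standard sketch added for context, not taken from the thread): given closed nowhere dense sets $F_1, F_2, \dots$ with $\mathbb{R} = \bigcup_n F_n$, recursively choose nonempty closed intervals $I_1 \supseteq I_2 \supseteq \cdots$ with $I_n \cap F_n = \emptyset$, which is possible because each $F_n$ is closed and nowhere dense; completeness of $\mathbb{R}$ (or compactness of $I_1$) yields a point $x \in \bigcap_n I_n$ that lies in no $F_n$, a contradiction. The recursive choice of the $I_n$ is precisely the step the question identifies as unavailable in the definable setting.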
Every piece of code is a theorem. It is a sequence of logical conclusions, each based on the one before, leading up to desired behavior. To validate that behavior, you need to prove the theorem. Even
though most compilers don't prove those theorems for you, they can still provide some assistance.
Q.E.D. coding is based on the combined works of great mathematicians and practitioners in the field of software. Some of the influences include:
• Bertrand Meyer - author of Object Oriented Software Construction
• Robert C. Martin - author of Design Principles and Design Patterns
• Euclid - author of The Elements
• Alan Turing - author of Computing Machinery and Machine Intelligence
• Donald Knuth - author of The Art of Computer Programming
Software has a long tradition as applied mathematics. Q.E.D. coding draws from that tradition to define a practice of creating reliable, provable software.
LAS Supportive Courses
MTH 501 - Topics in Applied Mathematics I (3 hours)
Theory, applications, and algorithms for basic problems of modern applied mathematics. Symmetric linear systems, minimum principles, equilibrium equations, calculus of variations, orthogonal
expansions, and complex variables. Prerequisite: MTH 224 or 345.
MTH 502 - Topics in Applied Mathematics II (3 hours)
Continuation of MTH 501. Selected numerical algorithms: Fast Fourier transform, initial value problems, stability, z-transforms, and linear programming. Prerequisite: MTH 501 or consent of
MTH 510 - Numerical Methods I (3 hours)
Introduction to numerical and computational aspects of various mathematical topics: finite precision, solutions of non-linear equations, interpolation, approximation, linear systems of equations, and
integration. Cross listed as CS 510. Prerequisite: CS 101; MTH 207 and 223.
MTH 511 - Numerical Methods II (3 hours)
Continuation of CS/MTH 510: further techniques of integration, ordinary differential equations, numerical linear algebra, nonlinear systems of equations, boundary value problems, and optimization.
Cross listed as CS 511. Prerequisite: MTH 224 or 345; CS/MTH 510.
MTH 514 - Partial Differential Equations (3 hours)
Fourier series and applications to solutions of partial differential equations. Separation of variables, eigenfunction expansions, Bessel functions, Green's functions, Fourier and Laplace transforms.
Prerequisite: MTH 224 or 345.
PHL 551 - Reading in Philosophy (1-3 hours)
Directed individual study. Prerequisite: 6 hours in philosophy; senior or graduate standing; consent of department chair.
PHL 552 - Reading in Philosophy (1-3 hours)
Directed individual study. Prerequisite: 6 hours in philosophy; senior or graduate standing; consent of department chair.
PHY 501 - Quantum Mechanics I (3 hours)
Inadequacies of classical physics when applied to problems in atomic and nuclear physics. Development of mathematical formalism used in basic quantum theory, with applications to simple models of
physical systems. Prerequisite: PHY 301; PHY 202 or 303, 306 or consent of instructor. MTH 207 recommended.
PHY 502 - Quantum Mechanics II (3 hours)
The mathematical formalism of quantum mechanics with applications to problems of electron spin and many-particle systems will be studied along with the development of approximation techniques with
applications to complex physical systems. Prerequisite: PHY 501.
PHY 539 - Topics in Theoretical Physics (3 hours)
Topics of special interest which may vary each time course is offered. Topic stated in current Schedule of Classes. Prerequisite: PHY 301, 305, 501; consent of instructor.
PHY 541 - Physics Basics (2 hours)
Numerical and graphical analysis of data; basic mechanics including Newton's laws and gas laws; hydrostatics and hydrodynamics; energy conservation principles; thermal physics; electricity and
magnetism; and solubility and transport processes. Only students in the Nurse Administered Anesthesia Program may register.
PHY 555 - Independent Readings (1-3 hours)
Individually assigned reading assignments of relevant topics in physics or astronomy. Prerequisite: senior or graduate standing; background appropriate to the study; consent of instructor.
PHY 563 - Special Problems in Physics (1-3 hours)
Qualified students work on an individually assigned problem and prepare oral and written reports on the problem solution. Approved for off-campus programs when required. May be repeated for a maximum
of 6 hrs. credit. Prerequisite: physics preparation sufficient for the problem; consent of instructor and Department Chair.
PHY 568 - Condensed Matter Physics (3 hours)
Introduction to the physics of the solid state and other condensed matter especially for students of physics, materials science, and engineering; structure of crystals; molecular binding in solids,
thermal properties, introduction to energy band structure and its relation to charge transport in solids, semiconductors, superconductivity. Co-requisite: PHY 306., Prerequisite: Physics majors: PHY
301, 202 or 303; PHY 305. Other majors need instructor consent.
Political Science
PLS 583 - Reading in Political Science (1-3 hours)
Individual in-depth work on a subject approved and supervised by a PLS faculty member. For highly qualified students. Prerequisite: senior standing; political science major; consent of instructor.
PLS 584 - Reading in Political Science (1-3 hours)
Individual in-depth work on a subject approved and supervised by a PLS faculty member. For highly qualified students. Prerequisite: senior standing; political science major; consent of instructor.
SOC 571 - Field Studies (1-3 hours)
Individual research. Prerequisite: senior or graduate standing and consent of Department Chair.
Recent posts by TTrashCAN on Kongregate
Topic: The Arts / [group]canopyart – artist community
I'm not an alt. I've just been gone. How dare you.
Topic: The Arts / The Pixel Galleria™
Originally posted by rawismojo:
Why not post on your main?
It was easier to copy the images over on this account.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Originally posted by pivotman99:
I made a forum, so we don't crowd here as much. I'll mod staff and Pirate.
I suppose that will work better than one thread. Make me an admin, I'll start putting some things together.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
I declare dictatorship until piratemonkey returns, and we've talked.
Firstly, stop coming up with random ideas and art.
We need to focus on the main things first, and make a design document.
I don’t know how we’re going to do this with so many people with ideas, but right now, everyone just stop posting art and ideas.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
You know what, I think this is getting out of hand again, and I'm slightly annoyed that you guys made something canopyart themed w/out my permission.
+, we are doing this all wrong, such as diving right in and not planning it out.
If we are going to finish this, I suggest that we let me direct people around, since I’ve done projects before, and know how to make the pieces mesh together properly.
Also, I agree with inward. This is better off in the collab forum.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
BTW, I may have found us a programmer
I’ll know tomorrow for sure.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Originally posted by pivotman99:
Originally posted by TTrashCAN:
Originally posted by pivotman99:
You enter your name.
You rescue all the members of canopy over the game, and they teach you skills. (A.K.A. I’m underwater.)
You collect leaves and cash them in at stores.
Your final weapon… Is a banana.
That’s….. ummm….. yeah…..retarded…the banana part.
You’re… Ummhhmmm… Retarded… *cough*France cough….
Care to explain the France comment?
I’m not saying your entire post is bad, it’s just… A BANANA?
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Hmm… I don’t really like the idea of a final fantasy battle system… seems much better with just a regular platformer.
What I think would be a cooler idea is that you have to rescue other people. When you do, you can switch to their point of view and control them, each new person you save gives you new abilities to pass the levels.
A puzzle game. :)
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Originally posted by blardjosh:
Originally posted by ForeverLoading:
Terabyte… Smart thinkin!
Blard, you can help. Join Canopy Art too. :P
I did post an application to canopy art, I don’t think I was accepted though. At least not yet.
You were accepted, but newer applications aren't put up on the front page.
Anyone can be accepted.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Originally posted by ForeverLoading:
Look at my pixel art :D
Teach me your ways!
Lol. Maybe I’ll make a tutorial sometime, but I suck at it.
Topic: The Arts / Canopy collab II. Lets keep this one organised.
Originally posted by ForeverLoading:
Originally posted by TTrashCAN:
There we go. Now, what do I need to do?
Think of some enemies… And make them :D
We need a stalactite… To fall from the ceiling and kill you.
Could you program a simple thing where if your 1 block away from under the stalactite it falls and if it touches you you lose HP?
I think it would be better if we made the engine first.
Topic: The Arts / It's true.....
Yeah, alts can be pretty lame at times.
Topic: Off-topic / What would be the coolest way to demolish a building?
Originally posted by InfiniteHunter:
Originally posted by KooqieToyShow:
I once was thinking: How about we seal a building so that no air would come in or out with a vacuum in one of the windows, then we turn on the vacuum? When something is sealed
that way and air is sucked out, the container shrivels into a ball, so I figured that could be done with a building and the building would end up in a ball of bricks and
concrete!?
First off, that’s nearly impossible. We’d have to seal up everything completely, which itself is kinda hard to do. Then, in order to suck drywall out of a standard-sized window, we
need a vacuum with the strength of a 747 to suck it out.
Personally, I’m a fan of blowing it up with several (million) sticks of dynamite.
Topic: The Arts / The Story of Jimmy Boy
I love the wife that has the moustache! I lol'd.
Topic: The Arts / The Story of Jimmy Boy
That is awesome. Pure epicness. You win +1 internets.
Topic: The Arts / Smiley Code
It's optional. If you don't want it, you don't download the code. It's personal preference, and a lot of people do like smileys.
Topic: General Gaming / Crush the Castle levels
The assembling castle!
If you can solve this, I honor you:
w_i:1199.75,356.75|w_i:1181.6,354.9|w_i:1158.1,354.9|w_i:1136.45,354.9|w_i:1116.5,353.15|w_i:1093.45,354.95|w_i:1069.75,353.5|w_i:1071.7,225.75|w_i:1095.2,227.6|w_i:1120.1,227|w_i:1140.4,228.55|w_i:1161.9,225.3|w_i:1183.8,229.85|w_i:926.75,236.95|w_i:1201.6,228.2|w_i:888.5,232.45|w_i:1199.55,101.9|w_i:1181.7,103|w_i:1161.45,100.9|w_i:1139.9,104.7|
To select it, click the beginning of the line, press Shift, then press End.
Topic: The Arts / Smiley Code
Ever wish you could write :O and have it come out to <img src="http://www.darkervisions.com/gallery/smiley.jpg" width=25 px height=25 px>?
I’ve been trying to create a greasemonkey code to do that, but unfortunately, I’m no great programmer. Is there a programmer in the art forum that is willing to help me out?
Villa Rica, PR Trigonometry Tutor
Find a Villa Rica, PR Trigonometry Tutor
...I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24 hour cancellation policy, I often provide make-up sessions. I usually tutor students
in a public library close to their home, however I will travel to another location if that is more convenient for the student.
8 Subjects: including trigonometry, statistics, algebra 1, algebra 2
...My highest ACT Math Score: 35 My highest ACT Science Score: 30 My highest SAT Math Score: 780 Scored a 5 on AP Calculus AB ExamI scored a 5 on the AP Calculus AB Exam, which is the highest
possible score. I have knowledge of math subjects that are lower than Calculus. I have a year's worth of peer-tutoring experience in Chemistry.
17 Subjects: including trigonometry, chemistry, calculus, geometry
...I have worked with VBA macros and design and modification of reports – both simple and complicated. I have used Microsoft Windows daily since the release of version 3.0 in 1990. Since then I
have worked with Windows 95, Windows 98, Windows for Workgroups 3.11 and NT 3.1 and more recently Windows 7 and 8.
126 Subjects: including trigonometry, chemistry, English, calculus
...Try me and you will never regret! I have just graduated from Georgia Tech with a degree in nuclear and radiological engineering, I have been tutoring people from over the place in this topic
since 2009, and I am well qualified. Try me and you will never regret!
10 Subjects: including trigonometry, calculus, physics, algebra 1
I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including trigonometry, statistics, algebra 1, algebra 2
Related Villa Rica, PR Tutors
Villa Rica, PR Accounting Tutors
Villa Rica, PR ACT Tutors
Villa Rica, PR Algebra Tutors
Villa Rica, PR Algebra 2 Tutors
Villa Rica, PR Calculus Tutors
Villa Rica, PR Geometry Tutors
Villa Rica, PR Math Tutors
Villa Rica, PR Prealgebra Tutors
Villa Rica, PR Precalculus Tutors
Villa Rica, PR SAT Tutors
Villa Rica, PR SAT Math Tutors
Villa Rica, PR Science Tutors
Villa Rica, PR Statistics Tutors
Villa Rica, PR Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Acworth, GA trigonometry Tutors
Austell trigonometry Tutors
Carrollton, GA trigonometry Tutors
Dallas, GA trigonometry Tutors
Douglasville trigonometry Tutors
Fayetteville, GA trigonometry Tutors
Forest Park, GA trigonometry Tutors
Hiram, GA trigonometry Tutors
Newnan trigonometry Tutors
Oxford, AL trigonometry Tutors
Powder Springs, GA trigonometry Tutors
Temple, GA trigonometry Tutors
Tyrone, GA trigonometry Tutors
Union City, GA trigonometry Tutors
Villa Rica trigonometry Tutors | {"url":"http://www.purplemath.com/Villa_Rica_PR_trigonometry_tutors.php","timestamp":"2014-04-21T04:50:30Z","content_type":null,"content_length":"24489","record_id":"<urn:uuid:a6404118-8ded-49ab-92b2-f10cc30a2815>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random variables - density functions
January 14th 2009, 06:11 PM #1
(a) If Z1 and Z2 are independent and have N(0,1) densities, let
X1 = a11 Z1 + a12 Z2 + c1 and let X2 = a21 Z1 + a22 Z2 + c2
where a11, a12, a21, a22, c1 and c2 are constants. Determine the joint density of X1 and X2.
(b) Suppose that the random variable X on (0,1) has density
f(x) = 3x^2, 0 < x < 1
Determine the density functions of
(i) U = X^(1/2)
(ii) V = -log(X)
(c) Now suppose Y is uniformly distributed on (0,1) and independent of X. Determine the density of
W = max(X,Y)
Part B (i):
We are given that $f_X(x) = 3X^2, X \in [0,1]$. And given that $U = X^{1/2}$, we need the distribution of $U$
Step 1: Determine the domain of the random variable $U$. Since $X$ is at most $1$, $U$ is at most $1$. And similar, because $X$ is at least $0$, $U$ is at least $0$.
So $U \in [0,1]$ as well.
Step 2: find the CDF of $U$: $F_U(u)$
$= P(U < u)$
$= P(X^{1/2} < u)$
$=P(X < u^2)$
Step 3: find the pdf of $U$
And since $f_U(u) = \frac{\partial F_U}{\partial U}$ and we concluded that $F_U(u) = F_X(u^2)$, it must be the case that:
$f_U(u) = \frac{\partial F_X}{\partial U}$
$= f_x(u^2) \frac{\partial (u^2)}{\partial u}$
$=2u f_x(u^2)$
$= 2u \cdot 3(u^2)^2 = 6u^5$
Step 4: Verify that indeed this is a pdf:
$\int_0^1 6u^5 du = 1$ as desired.
Part B (ii):
This one is a bit tougher. But we can proceed as before:
Step 1: The boundaries of $X$ are $0,1$. So if we let $V = - log(x)$, then the boundaries of $V$ become $0$ and positive infinity.
So $V \in [0, \infty]$
Step 2: find the CDF of $V$: $F_V(v)$
$= P(V < v)$
$= P(-log(X) < v)$
$= P(log(X) > -v)$
$= P(X > 10^{-v})$
$= 1 - F_X(10^{-v})$
Step 3: find the PDF of $V$: $f_V(v)$
$= \frac{\partial [1 - F_X(10^{-v})]}{\partial v}$
$= \ln(10)(10^{-v}) f_X(10^{-v})$
$= 3 \ln(10) (1/100)^v (10^{-v})$
Step 4: verify that this is indeed a valid pdf:
$\int_0^\infty 3 ln(10) (1/100)^v (10^{-v}) dv = 1$ as desired.
Part C:
We proceed as before by finding the cdf of $W$:
$= P(W < w)$
Now, here's the trick. Since we have defined $W = max(X,Y)$ or that $W$ represents the larger of $X$ and $Y$, it means that whenever $W$ is less than a constant $w$, then it must be the case that
both $X$ and $Y$ are less than that constant $w$. So we can determine that:
$= P(W < w)$
$= P(X < w, Y < w)$
$= P(X < w) P(Y < w)$ (by independent of $X$ and $Y$)
$= F_X(w) F_Y(w)$
We know that $F_X(x) = \int f_X(x) dx = \int 3X^2 dx= X^3$ and that $F_Y(y) = \int f_Y(y) dy = \int 1 dy = Y$
So $F_X(w) F_Y(w) = (W^3)(W) = W^4$
We conclude that the cdf of $W$ is: $F_W(w) = W^4$
Then, of course, the pdf of $W$ is: $f_W(w) = 4W^3, W \in [0,1]$
Verify that this is a valid pdf and you are done.
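The three derived densities are easy to sanity-check numerically; the simulation below is an addition to the thread, with arbitrary sample size and test point:

import math, random

N = 200000
xs = [random.random() ** (1 / 3) for _ in range(N)]  # F_X(x) = x^3, so X = U^(1/3)
ys = [random.random() for _ in range(N)]

u = [x ** 0.5 for x in xs]               # U = X^(1/2): F_U(t) = t^6
v = [-math.log10(x) for x in xs]         # V = -log10(X): F_V(t) = 1 - 10^(-3t)
w = [max(x, y) for x, y in zip(xs, ys)]  # W = max(X, Y): F_W(t) = t^4

t = 0.5
print(sum(ui <= t for ui in u) / N, t ** 6)
print(sum(vi <= t for vi in v) / N, 1 - 10 ** (-3 * t))
print(sum(wi <= t for wi in w) / N, t ** 4)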
Part A:
I am too tired lol...just scroll to the section that says "bivariate case" at: http://en.wikipedia.org/wiki/Bivariate_normal
Plug in the appropriate variables into the formula...
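Spelling out part (a) with the standard multivariate normal facts (this computation is an addition, not worked in the thread): writing $A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix}$ and $c = (c_1, c_2)^T$, the vector $(X_1, X_2)^T = AZ + c$ is bivariate normal with mean $c$ and covariance $\Sigma = AA^T$, so, assuming $A$ is invertible,
$$f_{X_1,X_2}(x) = \frac{1}{2\pi\sqrt{\det \Sigma}}\, \exp\!\left(-\tfrac{1}{2}(x-c)^T \Sigma^{-1} (x-c)\right), \qquad \Sigma = \begin{pmatrix} a_{11}^2+a_{12}^2 & a_{11}a_{21}+a_{12}a_{22}\\ a_{11}a_{21}+a_{12}a_{22} & a_{21}^2+a_{22}^2 \end{pmatrix}.$$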
Last edited by Last_Singularity; January 14th 2009 at 07:25 PM.
January 14th 2009, 06:53 PM #2
Suitland Algebra 2 Tutor
...All 3 of my children took Chemistry when they were in high school. I was very successful as their tutor. I enjoy science and I am very patient.
15 Subjects: including algebra 2, chemistry, physics, calculus
...There are numerous topic areas that must be worked. In a meeting with a new student in Algebra 2, I talk with the student in an attempt to identify the ones for which he/she may need
assistance. To be sure, there are links between the topic areas.
13 Subjects: including algebra 2, chemistry, calculus, physics
...I have taught Chemistry to High School students and University freshmen for more than 4 years. The course generally covers the following topics with greater depth and application being done in
honors courses: Classification of Matter States of Matter Atomic Theory Periodicity Formula Writing Ch...
20 Subjects: including algebra 2, chemistry, algebra 1, GED
...There we tailored a curriculum to teach these women conversational English and the essentials that they would need for work, doctor visits, and everyday interactions. Also, this past summer
(2012) I assisted in establishing a summer camp in Cartago, Costa Rica for school-aged children. As a camp counselor, I interacted with the children in Spanish and gave English lessons as well.
17 Subjects: including algebra 2, Spanish, geometry, physics
...I love math. I love programming. With every class I've taken, I always excelled in these subjects.
17 Subjects: including algebra 2, calculus, algebra 1, trigonometry
First-order System and the Wave Equation
We recall that the set of PDEs which describes the evolution of the voltage and current distributions along a lossless, source-free transmission line in (1+1)D is:
$$L \frac{\partial i}{\partial t} = -\frac{\partial v}{\partial x}, \qquad C \frac{\partial v}{\partial t} = -\frac{\partial i}{\partial x}$$
where $i$ and $v$ are, respectively, the current in and voltage across the lines, and $L$ and $C$, both assumed strictly positive everywhere, are the inductance and capacitance per unit length. For the moment, we
will leave aside the discussion of boundary conditions, and deal only with the Cauchy problem (i.e., we assume the spatial domain of the problem to be the entire axis). Note also that this system
includes the vocal tract model (1.20) as a special case, under an appropriate set of variable and parameter replacements.
As discussed in §4.2.3, if we assume that $L$ and $C$ are constant, then the set of equations can be reduced to a single second-order equation in the voltage alone:
$$\frac{\partial^2 v}{\partial t^2} = \gamma^2 \frac{\partial^2 v}{\partial x^2}$$
where the wave speed, here denoted $\gamma$, is given by
$$\gamma = \frac{1}{\sqrt{LC}}$$
This equation and its analogues in higher dimensions (see Appendix A) are collectively known as the wave equation. The solution, as mentioned in §4.2.3, can be written in terms of traveling waves. In
the (1+1)D case, we can write an identical wave equation in the current alone, but this does not hold in higher dimensions.
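Ahead of the centered-difference schemes taken up in the next section, a minimal interleaved update for this first-order system can be sketched as follows (the grid sizes, unit L and C, Gaussian initial pulse, and clamped boundaries are placeholder choices, not the schemes developed in the text):

import math

L, C = 1.0, 1.0                    # per-unit-length inductance and capacitance
nx, dx = 200, 0.01
dt = 0.9 * dx * math.sqrt(L * C)   # below the stability limit dt <= dx*sqrt(LC)

v = [math.exp(-(((k - nx // 2) * dx) / 0.05) ** 2) for k in range(nx)]
i = [0.0] * (nx - 1)               # current lives on the staggered half-grid

for _ in range(300):
    # L di/dt = -dv/dx: update currents between adjacent voltage nodes
    for k in range(nx - 1):
        i[k] -= dt / (L * dx) * (v[k + 1] - v[k])
    # C dv/dt = -di/dx: update interior voltage nodes
    for k in range(1, nx - 1):
        v[k] -= dt / (C * dx) * (i[k] - i[k - 1])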
Stefan Bilbao 2002-01-22
When is the restriction map on global sections an embedding
Given a scheme $X$ with generic point p and a quasi-coherent sheaf $F$ on $X$. Viewing $X$ as a scheme over $Spec(\mathbb{Z})$, let us assume $f: X \rightarrow Spec(\mathbb{Z})$ is a proper map.
What conditions have $X$ and $F$ to satisfy, so that one can embed the $\mathbb{Z}$-module $F(X)=H^0(X,F)$ in $F_p$, respectively when is the restriction map $h: F(X) \rightarrow F_p$ injective?
Are there some mild conditions, like $X$ integral and $F$ coherent or torsion free?
ag.algebraic-geometry ac.commutative-algebra
If $X$ is integral and $F$ is torsion-free, then for any non-empty affine open subset $U$ of $X$, the canonical map $F(U)\to F_p$ is injective. So $F(X)\to F_p$ is injective. You don't need the hypothesis on $X \to Spec(\mathbb Z)$. If $X$ is not necessarily reduced, then the flatness of $F$ over $X$ is also enough (same proof).
Thanks a lot for your answer. – TonyS Mar 1 '10 at 21:03
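Unwinding the affine case behind the accepted answer (standard sheaf-module dictionary; the symbols $A$, $M$, $K$ are introduced here): on an affine open $U = \operatorname{Spec} A$ of the integral scheme $X$, the ring $A$ is a domain with fraction field $K$, and $F|_U = \widetilde{M}$ with $M = F(U)$. The stalk at the generic point is $F_p = M \otimes_A K$, and the kernel of $M \to M \otimes_A K$ is exactly the torsion submodule of $M$, so torsion-freeness gives $F(U) \hookrightarrow F_p$. A global section vanishing in $F_p$ then restricts to zero on every affine open, hence is zero.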
Kernel Experiment - 4.0 updated 2012.05.28 - Page 7 - i8000 Omnia II Android ROM Discussion
4.0 667MHz-166MHz-160MB-24bpp: Wi-Fi does not work / GSM works.
ipaq3870: How do I get Wi-Fi working?
ipaq3870: +++ for the work on developing the kernel.
Check the type of security on the Wi-Fi router. Sometimes it will not pick up more strongly secured Wi-Fi networks. Check with another Wi-Fi access point.
FOM: epsilon terms and choice
Matt Insall montez at rollanet.org
Mon Sep 18 17:08:33 EDT 2000
Professor Shavrukov,
You posed the following question to Professor Kanovei:
Let F be a class. Suppose the class E is an equivalence relation
on F. Then there exists a class function C : F -> F s.t.
for all x in F, x E Cx;
for all x,y in F, x E y implies Cx = Cy.
Do you know if this follows from GC without Foundation?
I am wondering if the following works or helps:
1. Let GNB denote the (usual) first-order theory of classes without the
axiom of foundation.
2. Let L be the first-order language obtained from the language for GNB by
adding a unary predicate, ``class''.
3. Let T be the theory of L with the following axioms:
(forall x){[(x is in X)iff(x is in Y)] implies X=Y}
(forall X_1,...,X_n)(thereis Y)(forall x)[x is in Y iff phi(X_1,...,X_n,x)]
(forall x)(forall y)[(x is in y) iff (x is a class)]
(forall x)(forall y){[(x is in y)&(y is a class)] iff (x is a set)}
(forall x,y)(thereis z)(forall w)[{[w is in z] iff [(w=x) or (w=y)]}&{[z is
a set] iff [(x is a set)&(y is a set)]}]
(forall x)(thereis y)(forall z)[{[z is in y] iff (thereis w)[(z is in w)&(w
is in x)]}&{(y is a set) iff (x is a set)}]
(forall x)(thereis y)(forall z)[{[z is in y] iff (forall w)[(w is in z)
implies (w is in x)]}&{(y is a set) iff (x is a set)}]
(thereis x)(thereis y)(thereis z)(thereis w){[(forall t)(t is not in x)]&[x
is in y]&[y is in z]&[z is in w]&(forall p)[(p is in y) implies (thereis
q){(q is in y)&(forall r)[(r is in q) iff (r is in p) or (r=p)]}]}
(forall f)(forall x){[f is a function] implies [{(x is a class) implies
(f[x] is a class)}&{(x is a set) implies (f[x] is a set)}]}
(thereis f){[f is a function]&(forall x)[(x is a set) implies {(thereis y)[y
is in x] implies [f(x) is in x]}]}
(forall f){[f is a function] implies (thereis x)[(x is a class) & {(thereis
y)[y is in x] & [f(x) is not in x]}]}
4. Is T a conservative, relatively consistent extension of GNB?
5. Let NEC be the statement that negates (in L) the one you asked about:
(thereis x){(x is a class)&(thereis e)[e is an equivalence on x]&(forall
f)[(f is a function from x into x) implies {(forall p)[(p is in x) implies
((p,f(p)) is in e)] implies [(thereis y)(thereis z){(y is in x)&(z is in
x)&((y,z) is in e)&(f(y) =/= f(z))}]}]}
6. Let EC be the statement (in L) that you asked about: EC=not(NEC).
7. Is NEC a theorem of T?
8. Is EC a theorem of T?
9. Does any subcollection of these questions constitute a reasonable
reduction of your question to a simpler problem?
Dr. Matt Insall
Kearny, NJ Statistics Tutor
Find a Kearny, NJ Statistics Tutor
...Prior to that, I was on the Middle School Tennis Team and took private lessons on a clay court and community recreation group classes. I was a tennis instructor for two consecutive summers at
a pool club. I've been studying and practicing yoga now (specifically vinyasa style) for two years.
22 Subjects: including statistics, English, reading, algebra 1
...Discrete Math is a collection of various other Math subjects including Logic, Combinatorics, Graph Theory, Algorithms, and more. I have studied each of these, either in a class devoted to them
(like a course in Symbolic Logic alone) or in classes which contained them as special topics (for instan...
32 Subjects: including statistics, physics, calculus, geometry
...When a student does well, I feel like I have done well. Some of the students have told me that they wish I was their teacher. I really enjoy what I do and love children/people.I am currently
tutoring students from K-6th grade, and have been doing well.
47 Subjects: including statistics, chemistry, reading, accounting
...I used Java in stand-alone programs and web based applications. Java is my native language. I taught Computer programming for many years in professional colleges.
24 Subjects: including statistics, calculus, geometry, algebra 1
...I am a patient and effective professor and alter my teaching style to meet the learning needs of my students. In addition to teaching, I have tutored students in statistics for the past six
years and have worked with students on writing/editing skills outside of the classroom. While I have work...
9 Subjects: including statistics, reading, writing, social studies
Related Kearny, NJ Tutors
Kearny, NJ Accounting Tutors
Kearny, NJ ACT Tutors
Kearny, NJ Algebra Tutors
Kearny, NJ Algebra 2 Tutors
Kearny, NJ Calculus Tutors
Kearny, NJ Geometry Tutors
Kearny, NJ Math Tutors
Kearny, NJ Prealgebra Tutors
Kearny, NJ Precalculus Tutors
Kearny, NJ SAT Tutors
Kearny, NJ SAT Math Tutors
Kearny, NJ Science Tutors
Kearny, NJ Statistics Tutors
Kearny, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Belleville, NJ statistics Tutors
Bloomfield, NJ statistics Tutors
East Newark, NJ statistics Tutors
East Orange statistics Tutors
Glen Ridge statistics Tutors
Harrison, NJ statistics Tutors
Irvington, NJ statistics Tutors
Lyndhurst, NJ statistics Tutors
Montclair, NJ statistics Tutors
Newark, NJ statistics Tutors
North Arlington statistics Tutors
Nutley statistics Tutors
Orange, NJ statistics Tutors
South Kearny, NJ statistics Tutors
West Orange statistics Tutors
The data type Type and its friends
GHC compiles a typed programming language, and GHC's intermediate language is explicitly typed. So the data type that GHC uses to represent types is of central importance.
The first thing to realise is that GHC uses a single data type for types, even though there are two different "views".
• The "typechecker view" (or "source view") regards the type as a Haskell type, complete with implicit parameters, class constraints, and the like. For example:
forall a. (Eq a, %x::Int) => a -> Int
• The "core view" regards the type as a Core-language type, where class and implicit parameter constraints are treated as function arguments:
forall a. Eq a -> Int -> a -> Int
These two "views" are supported by a family of functions operating over that view:
The module TypeRep exposes the representation because a few other modules (Type, TcType, Unify, etc) work directly on its representation. However, you should not lightly pattern-match on Type; it is
meant to be an abstract type. Instead, try to use functions defined by Type, TcType etc.
The single data type Type is used to represent
• Types (possibly of higher kind); e.g. [Int], Maybe
• Coercions; e.g. trans (sym g) h
• Kinds (which classify types and coercions); e.g. (* -> *), T :=: [Int]
• Sorts (which classify types); e.g. TY, CO
GHC's use of coercions and equality constraints is important enough to deserve its own page.
The representation of Type
Here, then is the representation of types (see compiler/types/TypeRep.lhs for more details):
data Type = TyVarTy TyVar          -- Type variable
          | AppTy Type Type        -- Application
          | TyConApp TyCon [Type]  -- Type constructor application
          | FunTy Type Type        -- Arrow type
          | ForAllTy TyVar Type    -- Polymorphic type
          | PredTy PredType        -- Type constraint
          | NoteTy TyNote Type     -- Annotation

data PredType = ClassP Class [Type]        -- Class predicate
              | IParam (IPName Name) Type  -- Implicit parameter
              | EqPred Type Type           -- Equality predicate (ty1 :=: ty2)

data TyNote = FTVNote TyVarSet  -- The free type variables of the noted expression
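To make the shape of these declarations concrete, here is a small self-contained sketch. The primed names are simplified stand-ins (strings instead of GHC's real TyVar and TyCon structures), so this is illustrative only, not GHC source; it builds the representation of the source type Maybe Int -> Int:

-- Simplified, self-contained mirror of the constructors above (illustrative only;
-- GHC's real TyVar and TyCon are rich data types, not strings).
type TyVar' = String
type TyCon' = String

data Type' = TyVarTy' TyVar'
           | AppTy' Type' Type'
           | TyConApp' TyCon' [Type']
           | FunTy' Type' Type'
           | ForAllTy' TyVar' Type'
  deriving Show

-- The source type  Maybe Int -> Int  as a Type' value:
example :: Type'
example = FunTy' (TyConApp' "Maybe" [TyConApp' "Int" []]) (TyConApp' "Int" [])

main :: IO ()
main = print example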
Kinds are represented as types:
type Kind = Type
Basic kinds are now represented using type constructors, e.g. the kind * is represented as
liftedTypeKind :: Kind
liftedTypeKind = TyConApp liftedTypeKindTyCon []
where liftedTypeKindTyCon is a built-in PrimTyCon. The arrow type constructor is used as the arrow kind constructor, e.g. the kind * -> * is represented internally as
FunTy liftedTypeKind liftedTypeKind
It's easy to extract the kind of a type, or the sort of a kind:
typeKind :: Type -> Kind
The "sort" of a kind is always one of the sorts: TY (for kinds that classify normal types) or CO (for kinds that classify coercion evidence). The coercion kind, T1 :=: T2, is represented by PredTy
(EqPred T1 T2).
Type variables
Type variables are represented by the TyVar constructor of the data type Var.
Type variables range over both types (possibly of higher kind) and coercions. You could tell the difference between the two by taking the typeKind of the kind of the type variable and seeing whether you get sort TY or CO, but for efficiency the TyVar keeps a boolean flag and offers a function:
isCoercionVar :: TyVar -> Bool | {"url":"https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/TypeType?version=1","timestamp":"2014-04-18T11:19:06Z","content_type":null,"content_length":"15549","record_id":"<urn:uuid:5be2c1c0-16b8-4792-8c16-747e4f6327f6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Just Me!
im ok w spending $40 on food but wont buy a $40 shirt
So true
(via lohanthony)
Im such a great friend
(via the-absolute-best-posts)
Happy Birthday Heath
today he would’ve been 34
I love you heath
Always in our hearts x
(via bethanierenee)
modern day rebels
(via bethanierenee)
We’re here. We’re queer. We fuck shit up.
(via mileycryrus)
Tell me Pink don’t look like Justin Bieber
Except she has manlier legs
(via mileycryrus)
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square
donuts in than circle donuts if the circumference of the circle touched each of the corners of the square donut.
So you might end up with more donuts.
But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole?
A round donut with radius R₁ occupies the same space as a square donut with side 2R₁. If the center circle of the round donut has radius R₂ and the hole of the square donut has side 2R₂, then the area of the round donut is πR₁² − πR₂², while the area of the square donut is 4R₁² − 4R₂². Already this says that a full box of square donuts has more donut per donut than a full box of round donuts.
The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R₂ = R₁/4) and substituting into the expressions above, the square donut has about 27.3% more donut (round: 15πR₁²/16 ≈ 2.95R₁²; square: 15R₁²/4 = 3.75R₁²). Now, assuming a large center hole (R₂ = 3R₁/4), the square donut again has about 27.3% more (round: 7πR₁²/16 ≈ 1.37R₁²; square: 7R₁²/4 = 1.75R₁²). This tells us that, whatever the hole size, we'll have an approximately 27% bigger donut if it's square than if it's round.
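The agreement between the two cases is no accident; the hole radius cancels from the ratio entirely:
$$\frac{4R_1^2 - 4R_2^2}{\pi R_1^2 - \pi R_2^2} = \frac{4}{\pi} \approx 1.273,$$
so a square donut always has exactly 4/π times as much dough as the round donut it replaces, about 27.3% more.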
tl;dr: Square donuts have about 27% more donut per donut in the same space as round ones.
Thank you donut side of Tumblr.
This is the highest and best use of conic sections I have ever seen.
(Source: nimstrz, via thacandyman)
Happy 43rd Birthday Selena Quintanilla-Perez {April 16, 1971}
(via castillop)
This is getting ridiculous…
(via castillop) | {"url":"http://marleneray.tumblr.com/page/3","timestamp":"2014-04-21T01:59:46Z","content_type":null,"content_length":"60414","record_id":"<urn:uuid:1fabb08e-47c6-4389-b630-453b59682673>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to solve for x in this trig jumble?
By the way, I'm interested in the easier way to solve it.
Since the approach already given is pretty simple in itself, I guess there is no problem posting the other way, which seems simpler to me. Either way is pretty straightforward. However, to me it
seems more direct to do this.
2cosx - 2cos2x = 0
2cosx = 2cos2x
cosx = cos2x
acos(cosx) = acos(cos2x), with proper consideration for periodicity (take -pi < x <= pi as one period) and for the range of acos
Of course, one can get the answer visually from here, but to be formal, consider the range of the acos function (i.e. 0 <= acos(arg) <= pi)
for 0<x<pi/2 we get x=2x which gives x=0
for pi/2<x<pi we get x=2(-x+pi) which gives x=2pi/3
then symmetry about x=0 gives x=-2pi/3
So the three solutions in one period are x=0, 2pi/3, -2pi/3
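In compact form, the same answers fall out of the standard identity for equal cosines:
$$\cos A = \cos B \iff A = \pm B + 2\pi k, \quad k \in \mathbb{Z},$$
so here $2x = \pm x + 2\pi k$, giving $x = 2\pi k$ or $x = \frac{2\pi k}{3}$, which in one period is exactly $x = 0, \pm\frac{2\pi}{3}$.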
Then these answers will repeat every integer multiple of 2pi | {"url":"http://www.physicsforums.com/showthread.php?p=3192035","timestamp":"2014-04-20T11:17:18Z","content_type":null,"content_length":"33293","record_id":"<urn:uuid:ce940680-cd96-4a7d-a3fd-47282d8a7572>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
prove that any number that is a square must have one of the following for its units digit: 0,1,4,5,6,9.
Consider a two-digit number with units digit $a$ and tens digit $b$, so $a$ and $b$ are both non-negative integers less than 10. The number can then be written as $10b + a$, and $(10b + a)^2 = 100b^2 + 20ab + a^2$.
Now $100b^2$ and $20ab$ both have 0 at the units place, so the units digit of $100b^2 + 20ab + a^2$ is decided by $a^2$. And for $a$ a single-digit integer, you only get $0,1,4,5,6,9$ at the units place of $a^2$.
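Indeed, running through all ten possible units digits: $0^2=0$, $1^2=1$, $2^2=4$, $3^2=9$, $4^2=16$, $5^2=25$, $6^2=36$, $7^2=49$, $8^2=64$, $9^2=81$, so the units digit of $a^2$ is always one of $0,1,4,5,6,9$.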
You can extend this proof for three digit number 100c + 10b + a. Same logic will be applied in this case | {"url":"http://mathhelpforum.com/number-theory/75930-need-help.html","timestamp":"2014-04-17T18:48:52Z","content_type":null,"content_length":"31902","record_id":"<urn:uuid:a51cf421-0920-4f85-937e-9010b6422ca8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Equations with Squares and Cubes - Concept
Many of the formulas for calculating volumes (prism volume, volume of cylinders, volume of cones formulas and volume of pyramids) require solving equations with square roots or cube roots. To solve
these equations, we use many of the same principles that we learned when solving equations with square roots in Algebra, such as isolating the unknown variable and simplifying.
In Geometry you're going to solve with square roots and with cube roots. So we'll look at 3 quick problems here.
The first one you have an equation with one variable r and r is being squared. So you're going to have to do a couple of steps here. First step is you're going to eliminate what's multiplying r
squared and that is 4 pi. So I'm going to divide both sides by 4 pi. So it should look familiar, it's something that you did last year in Algebra. 4 pi divided by 4 pi is 1. Anything divided by
itself is 1. The only thing left on the right side is r squared. On the left side here we have pi divided by pi which is 1, and we have 80 divided by 4, which is 20. So to solve this for r, I need to
undo squaring, which is square rooting. So for the square root of 20, the way I like to simplify that is to think of it as two square roots being multiplied together. I could say that 20 is 10 times 2, but I don't know either of those square roots as a whole number, so instead I can write it out as the square root of 4 times the square root of 5. The square root of 4 is 2. So we are going to say that r is
equal to 2 times the square root of 5.
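In symbols, the first problem (assuming, from the cancellations described, that the equation was $4\pi r^2 = 80\pi$) is:
$$4\pi r^2 = 80\pi \;\Rightarrow\; r^2 = 20 \;\Rightarrow\; r = \sqrt{20} = \sqrt{4}\,\sqrt{5} = 2\sqrt{5}.$$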
Let's look at two more. Here's the next one. x cubed is equal to 27. Well to undo cubing something, I'm going to take not the square root but the cube root. So the cube root of x cubed is going to be
x. I have to do the same operation on the other side and the cube root of 27 is going to be 3. Now if we go back to our first problem, something that we'll notice is that I could have said that this
was positive or negative 2 times the square root of 5. Since we're in Geometry and we're almost always talking about distances, we're going to almost always take the positive root. Because, 2 times
the square root of 5 times itself is 20 and if I took the negative of that multiplied by itself we'd end up with 20 as well. So there it could be two answers. If we go back to this cube root however,
if I said x could be -3, let's just look at this real briefly. -3 cubed. -3 times -3 is 9. So I'll have 9 times -3 which is -27. So notice that with the cube root you're only going to end up with
this one answer. The negative is not going to be one of your answers.
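In symbols: $x^3 = 27 \Rightarrow x = \sqrt[3]{27} = 3$, and $x = -3$ fails because $(-3)^3 = -27 \neq 27$; odd powers preserve sign, so a real cube root is unique.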
The last one that we're going to look at is something that you'll be solving when you're talking about the volume of the sphere. To isolate r here, first we're going to take the reciprocal of four
thirds. So I'm going to multiply both sides by three fourths. So it's three fourths times 823. I'm going to type that into my calculator. Three fourths times 823 is 617.25. So what we've done is
we've isolated pi times r cubed. I can't take the cube root just yet so what I'm going to do is I'm going to divide both sides by pi. So now I'm going to get another decimal, I'm going to divide this
by pi and I get 196.5 we'll round. 196.5 is equal to r cubed.
Now that the only thing that we have here is an r cubed, we can take the cube root and isolate r. So we take the cube root of both sides and I'm going to say that the cube root of our cube is r, and
in my calculator. What I'm going to type, if your teacher hasn't shown you, the way you type this in is you're going to type in 196.5 and then tell it to raise it to a fraction, because a fractional root, or excuse me, a fractional exponent, is actually taking a root. So here you're going to raise it to the one third power. So in my calculator I'm going to type in 196.5 and we'll raise it
to, now remember to have these parentheses here, otherwise your calculator will just raise it to the first and then divide everything by 3. So I'm going to say one third and I get 5.8. So I'm going
to write that little bit over here. r = 5.8 and we don't know what our units are so we'll just leave it like that.
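The whole sphere calculation compresses to one formula, solving $V = \frac{4}{3}\pi r^3$ for $r$, with $V = 823$ as in the problem:
$$r = \left(\frac{3V}{4\pi}\right)^{1/3} = \left(\frac{3 \cdot 823}{4\pi}\right)^{1/3} \approx 5.8.$$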
So remember that when you're trying to solve problems with surface area, any time something is squared, any time something is cubed, you're going to be taking the square root or the cube root to
isolate your variables.
square root cube root | {"url":"https://www.brightstorm.com/math/geometry/volume/solving-equations-with-squares-and-cubes/","timestamp":"2014-04-17T16:00:25Z","content_type":null,"content_length":"59625","record_id":"<urn:uuid:be56e05e-a4d3-437e-a034-775eeef5abc4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help finding the x-intercepts
how do you find the x-intercepts if you have 0 = x^3 -9x^2 + 15x + 30
I assume you mean "if you have $y= x^3- 9x^2+ 15x+ 30$" so the x-intercepts are where y= 0: $x^3- 9x^2+ 15x+ 30= 0$. Of course, you find them by solving that equation. I recommend crossing your
fingers and hoping that there is at least one rational root. The rational root theorem tells us that if there is a rational root, it must be an integer that evenly divides the constant term, 30. The
only possible rational roots are $\pm 1$, $\pm 2$, $\pm 3$, $\pm 5$, $\pm 6$, $\pm 10$, $\pm 15$, $\pm 30$. Try those and see if one or more satisfy the equation. If one does, you can divide by the
corresponding factor to reduce to a quadratic equation.
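If no rational root turns up, the bracketing search described next can be automated; here is a minimal bisection sketch (illustrative Haskell, not part of the original answer), applied to this cubic:

-- Bisection on f(x) = x^3 - 9x^2 + 15x + 30, which changes sign on [-2, -1]
-- (f(-2) = -44 < 0 and f(-1) = 5 > 0), so the real root lies in between.
f :: Double -> Double
f x = x^3 - 9*x^2 + 15*x + 30

bisect :: (Double -> Double) -> Double -> Double -> Int -> Double
bisect _ lo hi 0 = (lo + hi) / 2
bisect g lo hi n
  | g lo * g mid <= 0 = bisect g lo mid (n - 1)  -- sign change in left half
  | otherwise         = bisect g mid hi (n - 1)  -- sign change in right half
  where mid = (lo + hi) / 2

main :: IO ()
main = print (bisect f (-2) (-1) 60)  -- prints roughly -1.133, the single real root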
As it happens, this particular polynomial has no "nice" rational roots. To find the one real root, you'll have to use numerical methods. Note that y = -44 at x = -2, but y = 5 at x = -1. Then
somewhere between x = -2 and x = -1, you must have y = 0. When x = -1.5, y = -16.125. So y = 0 must be between x = -1.5 and x = -1. Try, say, x = -1.25. And so forth, until you've got enough decimal
places of accuracy. | {"url":"http://mathhelpforum.com/pre-calculus/81956-need-help-finding-x-intercepts.html","timestamp":"2014-04-18T07:27:11Z","content_type":null,"content_length":"37237","record_id":"<urn:uuid:884ec419-9873-4793-a971-bc5396b1dd00>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why yet another course in physics?
Answer to this question is vital to justify yet another course (book) on the subject, specially when, there exists brilliant books on the shelves, successfully meeting the requirement of schools.
Understandably, each of these books / courses has been developed through a rigorous process, conforming to a very high level of standards prescribed by state education boards. Why then yet another
course (book)? Matter of fact, this question had been uppermost in my mind before I undertook the commitment to take up this project. A good part of the reason lies in the basic nature of creative
urge involved in writing and shaping a book. Besides, as an author, I had the strong conviction like others that a subject matter can always be treated in yet another way, which may be a shade
different and may be a shade better than earlier efforts. This belief probably clinched my initiation into this project. Further :
1 : It is no wonder that books have been published regularly - many of which have contributed significantly to the understanding of the nature and natural events. Also, there is no doubt that there
has been a general improvement in the breadth and depth of the material and style of presentation in the new books, leading to a better appreciation among the readers about the powerful theories,
propounded by great human minds of all time. However, one book differs to other in content, treatment, emphasis and presentation. This book is different on this count.
2 : Fundamental laws of physics are simple in construct. Take the example of Newton’s second law: F = ma. This could not have been simpler. Yet, it takes a great deal of insight and practice
to get to the best of mechanics – a branch of physics, which is largely described by this simple construct. The simplicity of fundamental laws, matter of fact, is one of the greatest wonders of
nature. Difficulty arises, mostly, from the complexity of the context of natural phenomena, which are generally culmination of a series of smaller events interwoven in various ways. The challenge
here is to resolve complex natural phenomena into simpler components, which can then be subjected to the theories of physics. Resolution of complex natural phenomena into simpler components is an
important consideration in physics. This book keeps this aspect of physics central to its treatment of the subject matter.
3 : Overwhelming and awesome reach of theories in physics, inadvertently, introduces a sense of finality and there is a tendency to take an approach towards the study of physics, which is serene and
cautious – short of ‘do_not_fool_around’ kind of approach. This book takes calculated risk to play around with the hypotheses and theories to initiate readers to think deeper and appreciate physics
with all its nuances. The book is structured and developed from the perspective of inquisitive young minds and not from the perspective of a matured mind, tending to accept theory at its face value.
This shift in approach is the cornerstone of subject treatment in this book.
4 : Mathematics fine tunes physics laws and gives it a quantitative stature. Most of the extension of physical laws into the realm of application is possible with the intelligent use of mathematical
tools at our disposal. Further, adaptation of physical laws in mathematical form is concise and accurate. Consider the magnetic force on a moving charge given by :
F = q ( v × B )
The mathematical expression is complete and accurate. It tells us about both magnitude and direction of the magnetic force on the moving charge. Matter of fact, direction of force in relation to
velocity of charge and magnetic field is difficult to predict without this formula. Either, we rely on additional rules like Fleming’s left hand rule or interpret the vector quantities on the right
hand side of the equation in accordance with rules concerning cross product of two vectors. The choice of mathematical vector interpretation is found to avoid confusion as Fleming’s rule requires
that we memorize direction of each of the vectors by a specific finger from the set of three fingers stretched in mutually perpendicular directions. There is great deal of uncertainty involved. You
may forget to remember the correct hand (left or right), correct fingers (first, middle or thumb)and what each of them represents. On the other hand, vector interpretation has no element to memorize
to predict the direction of magnetic force! Such is the power of mathematical notation of physical law.
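For instance, the magnetic force direction requires nothing to memorize once the cross product is expanded in components, a standard identity added here for illustration:
$$\vec{v} \times \vec{B} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ v_x & v_y & v_z \\ B_x & B_y & B_z \end{vmatrix} = (v_y B_z - v_z B_y)\,\hat{i} + (v_z B_x - v_x B_z)\,\hat{j} + (v_x B_y - v_y B_x)\,\hat{k}.$$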
In this sense, mathematics is a powerful tool and preferred language of expression in physics. Separate modules are devoted to describe mathematics relevant to physics in order to prime readers
before these tools are used in the context of physics.
5 : The fundamental laws/theories of physics are universal and the result of great insight into the realm of physical proceedings. New constructs and principles are difficult to come by. The last defining moment in physics was the development of quantum mechanics by Erwin Schrödinger and Werner Heisenberg in 1925-26. Since then, there has been advancement in particle physics and electronics, but no further aggregation of new theories of a fundamental nature. Physics, however, has progressed a great deal in its application to other spheres of science, including engineering,
medicine and information technology. This book assigns due emphasis to this aspect of applied physics.
6 : Various Boards of State Education prescribe well thought out framework and standards for development of physics text book for classroom teaching. This book emphasizes these standards and goals
set up by the Boards. | {"url":"http://cnx.org/content/m13249/latest/?collection=col10322/1.175/","timestamp":"2014-04-21T02:14:58Z","content_type":null,"content_length":"88867","record_id":"<urn:uuid:4b0d4586-d04d-4537-9ab8-1cddf63dcc22>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factoring a Quadratic Polynomial
This is the Classroom Tips & Techniques article for the May, 2011 Maplesoft Reporter, which, after publication, finds its way into the Maple Application Center. The article takes the liberty to rail
against the stress placed on a particular manipulative skill in the precalculus curriculum, and likewise, I take the liberty to post it as a blog. The windmill at which I tilt is the "skill" of
factoring a quadratic polynomial by inspection, a technique in which I find little intrinsic value.
My guess is that for historic reasons, factoring a quadratic was the way to obtain its zeros. The essence of the concept one would want a student to absorb is the factor-remainder theorem, so finding
zeros becomes important. But demanding that students learn about the factor-remainder theorem via the travail of factoring a quadratic by inspection seems to me rather senseless, given that shortly,
the student learns to complete the square and thereby obtain the roots of a quadratic equation. In fact, the quadratic formula is derived by completing the square, and not by "factoring."
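For the record, that derivation is only a few lines; completing the square on $ax^2 + bx + c = 0$ (with $a \neq 0$) gives
$$a\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a} \;\Longrightarrow\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$
and the factorization $a(x - r_1)(x - r_2)$ then follows from the two roots: zeros first, factors second, exactly as in the cubic case.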
I remember in my high school math curriculum (mid 1950s) that I learned to multiply and divide large numbers via the addition and subtraction of their logarithms. This material disappeared from the
curriculum as soon as calculators became a commodity in the 1970s. If the curriculum can change in one way because of technology, why, using the same technology, can't it change in another? The
higher cognitive merit is in understanding the relationship between zeros and factors. It isn't really necessary to torment students with factoring-by-inspection as a way of finding zeros. There are
other ways - either use an electronic technology or use the quadratic formula.
Indeed, consider how a cubic is factored in the precalculus curriculum. First, zeros are found; then from the zeros, the factors are written. Just the opposite of the process imposed in the quadratic case.
Finally, I would conjecture that in some appropriate space, the set of quadratics that can be factored by "inspection" has measure zero. Any quadratic worth factoring probably doesn't yield to
"inspection" anyway.
OK, now that I've ranted, I'd like to finish with a tool I just wrote, a tool for helping a student with the task of factoring a quadratic polynomial. I never thought I'd find myself creating such an
applet, but I get asked about this as part of many of the webinars I present for Maplesoft. Just recently, after again getting that question, I found I just couldn't let it go. I kept thinking about
what it takes for a student to master the appropriate skill. So, the tool I built reflects the way I think about the task. | {"url":"http://www.mapleprimes.com/maplesoftblog/120910-Factoring-A-Quadratic-Polynomial","timestamp":"2014-04-19T17:03:16Z","content_type":null,"content_length":"77036","record_id":"<urn:uuid:d8253ec7-545d-48db-a6cc-43e0e462dd6a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"} |
when does the Hartree-Fock approximation fail?
1. The problem statement, all variables and given/known data
Hi, I've read from Wikipedia that in the Hartree-Fock approximation, "Each energy eigenfunction is assumed to be describable by a single Slater determinant".
My question is... if the approximation fails and the system has to be described by linear combinations of more than one type of Slater determinants, what type of wave functions would they be
determinants of? (besides the one-electron wave functions) | {"url":"http://www.physicsforums.com/showthread.php?t=215474","timestamp":"2014-04-20T08:37:07Z","content_type":null,"content_length":"31069","record_id":"<urn:uuid:3ea95b76-4cae-433c-ba12-6df55d2795ee>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
ICIAM 2011: Mathematicians Gone Solar!
February 5, 2012
Fadil Santosa, director of the Institute for Mathematics and its Applications at the University of Minnesota, captures highlights of a four-part minisymposium, "Mathematical Sciences in Solar Energy
Research," held at ICIAM 2011. He and Henry Warchall of the National Science Foundation's Division of Mathematical Sciences organized the sessions.
The amount of solar energy that reaches the earth in one hour is sufficient to supply the world's energy needs for one year. Harvesting this energy efficiently is a huge challenge. Most solar cells
have low efficiency and are not very cost-effective. Other current solar energy technologies have drawbacks as well. There is a need for new technology that can capture the energy from the sun. To
develop this technology, we need to understand what the challenges are and to engage a community of researchers who can devote their attention to this important problem.
We organized the four-part minisymposium to showcase cutting-edge mathematical research in solar energy. Our hope was that the sessions would strengthen the solar energy community by connecting
researchers working in this field and by encouraging others to join them.
The idea for the session grew out of NSF's Solar Energy Initiative (SOLAR), which involves the Divisions of Chemistry, Materials Research, and Mathematical Sciences. SOLAR supports interdisciplinary
efforts by groups of researchers to address the scientific challenges of highly efficient harvesting, conversion, and storage of solar energy. The intention of the program creators was to encourage
new collaborations in which the mathematical sciences would be linked in a synergistic way with the chemical and materials sciences to develop novel, potentially transformative approaches in an area
of much activity but largely incremental advances. What is somewhat unusual about this program is that each project must have three (or more) co-principal investigators, one each from chemistry,
materials, and the mathematical sciences.
SOLAR was launched in 2008 and ended in 2011. To mobilize the mathematical sciences community, IMA organized the workshop Scientific Challenges in Solar Energy Conversion and Storage, which was held
in November 2008. Seventeen projects were funded in FY 2009 and 2010; five additional projects were funded in FY 2011. Without this creative effort on the part of NSF, it is likely that few
mathematicians would be working in this nationally important research area.
Believing that the time had come for an overview of solar energy research activities in which mathematical scientists are engaged, we invited the PIs from the 17 projects funded under the SOLAR
program in FY 2009 and 2010 to participate in the minisymposium. Twelve of the 17 projects were represented.
The presentations fell roughly into two categories: materials modeling and process modeling. All the speakers discussed research related to photovoltaic solar cells, with one exception: Irene Gamba
of UT Austin gave an interesting talk on a process that uses solar power to create hydrogen fuel (see Figure 1). This very complex photoelectrochemical process can be modeled by a system of partial
differential equations. In addition to the huge potential impact of this work, it has already produced new and exciting mathematics. Analysis of the PDEs will require new techniques as these
equations are very different from standard ones. Moreover, there is a need to develop computational techniques to solve these PDEs.
Figure 1. Energy conversion strategies for light coming in and fuel going out come in three forms: natural biological photosynthesis in homogeneous chemistry (left), electricity in photovoltaics
(right), and conversion of fluid into fuel by photoelectrochemistry (center). In photoelectrochemical experiments, irradiation of an electrode with light that is absorbed by the electrode material
causes the production of a current (a photocurrent) that depends on wavelength, electrode potential, and solution composition and provides information about the nature of the photoprocess, its
energetics, and its kinetics. Photoelectrochemical studies are frequently carried out to obtain a better understanding of the nature of electron transfer in semiconductor electrode/solution
Models of photoelectrochemical processes for water splitting are simulated as semiconductor–electrolyte systems with a reacting flux transfer interface condition, involving the electron–hole pair redox reaction. These simulations are done under "dark" (only electron–hole recombinations) and "illuminated"
(recombinations plus a photo generation term proportional to the absorption coefficient of the semiconductor and the corresponding surface photon flux). These simulations are based on a
Gummel–Schwarz iteration algorithm for a drift–diffusion–Poisson system for the semiconductor region and a Poisson–Nernst–Planck system for the redox pair in the electrolyte region. From "On the
modeling and simulation of semiconductor–electrolyte interfaces," Y. He, I.M. Gamba, A. Bard, H.C. Lee, and K. Ren, ICES, UT Austin, 2011.
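For readers unfamiliar with the terminology, one standard form of the drift–diffusion–Poisson system named in the caption (sign conventions vary by author) couples the electrostatic potential $\psi$ to the electron and hole densities $n$ and $p$:
$$-\nabla \cdot (\epsilon \nabla \psi) = q\,(p - n + C), \qquad \nabla \cdot J_n = qR, \qquad \nabla \cdot J_p = -qR,$$
$$J_n = q\,(D_n \nabla n - \mu_n n \nabla \psi), \qquad J_p = -q\,(D_p \nabla p + \mu_p p \nabla \psi),$$
where $C$ is the doping profile and $R$ the recombination rate.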
Among the speakers on materials modeling was Hector Ceniceros of UC Santa Barbara, who described the modeling of materials needed in organic solar cells. The approach he advocated is based on
field-theoretic models for polymeric fluids (see Figure 2). Keith Promislow of Michigan State discussed his work in modeling the microstructures in solid-state dye-sensitized solar cells. The
phase-field method he developed is capable of capturing a range of microstructures that can emerge in these systems. (An article on Promislow's work appears in this issue.) Tobias Schneider of
Harvard described a process for creating laser-doped silicon, which is a way to produce more efficient solar cells.
Figure 2. Designed electronically active interfacial materials for polymer-blend solar cells. Top, schematic of an all-polymer bulk heterojunction with a multifunctional, tri-block electroactive
polymer surfactant additive. Bottom, due to limited exciton diffusion lengths in polymers, donor and acceptor must be dispersed in a bicontinuous morphology. Bicontinuous microemulsion morphologies
are investigated via polymer field-theoretic models and simulations. Courtesy of project PIs Chabinyk, Ceniceros, Fredrickson, and Hawker.
Optics figure in a big way in solar cells. Boaz Ilan of UC Merced and Stephen McDowall of Western Washington University discussed the creation of devices that concentrate light using luminescent
materials, modeled at the macroscopic level. At the microscopic level, photonic bandgap phenomena can be exploited to increase the efficiency of solar cells, the approach proposed in the talk of
Christoph Kirsch of the University of North Carolina.
In a talk on modeling charge transport in a solar cell, Robert Krasny of the University of Michigan pointed out that a lot of basic processes need to be better understood and modeled before we can
apply optimization tools to increase the efficiency of solar collectors.
Modeling at the electronic-excitation level is another important part of efforts to develop better and cheaper solar cells, as exemplified by the work of Zhaojun Bai of UC Davis and of John Lowengrub
of UC Irvine.
As these sessions made clear, mathematicians are already making a difference in solar energy research. Their collaborative work has indicated new directions to explore, new experiments to conduct,
and a better understanding of the fundamental phenomena at play. We are encouraged to see that the NSF investment in solar energy research is potentially transformative. It is not a stretch to
predict that new mathematical research areas will emerge from these efforts. What a glorious and sunny opportunity for mathematics!
Readers can find a complete list of speakers in the four-part minisymposium by following the links at http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=12247. | {"url":"http://www.siam.org/news/news.php?id=1951","timestamp":"2014-04-20T00:05:09Z","content_type":null,"content_length":"15290","record_id":"<urn:uuid:059d5693-87fc-42a7-b9e5-6e4ef3a059bc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are gaps in $n^2\sqrt{2}$ Poissonian?
I would like to know about the gaps in the sequence $n^2 \sqrt{2} \mod 1$. Van der Corput's trick shows that $n^2 \sqrt{2}$ is equidistributed on the circle: for large $N$, the fraction $$ \frac{\# \{ 1 \leq n \leq N : a < n^2 \sqrt{2} \mod 1 < b \} }{N} \approx b-a.$$ Do the spacings between these values approach the Poisson distribution (as they would for uniformly random numbers)? Is there a proof of this involving homogeneous spaces? This is different from Elkies and McMullen's paper, where they consider $\sqrt{n} \mod 1$ and relate it to the 5D space of Euclidean lattices.
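(For definiteness: "Poissonian" means that if $x_1 \leq \cdots \leq x_N$ are the ordered values of $n^2\sqrt{2} \bmod 1$ for $n \leq N$, then the normalized gaps $s_n = N(x_{n+1} - x_n)$ have limiting density $e^{-s}$, just as for $N$ independent uniform points.)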
nt.number-theory ds.dynamical-systems
1 Answer
Rudnick and Sarnak (MR1628282) conjecture that any not-too-well-approximable $\alpha$ has this property in place of $\sqrt{2}$, in particular any algebraic number should have this property.
They prove that almost every $\alpha$ (in the Lebesgue sense) has this property, but no specific $\alpha$ has been exhibited so far. Rudnick-Sarnak-Zaharescu (MR1839285) and Zaharescu (MR1957276) have examined what happens for too-well-approximable $\alpha$'s. The above mentioned papers and their AMS reviews should serve as good starting points. | {"url":"http://mathoverflow.net/questions/69040/are-gaps-in-n2-sqrt2-poissonian","timestamp":"2014-04-21T02:51:59Z","content_type":null,"content_length":"50184","record_id":"<urn:uuid:176dcaff-6842-48f8-858c-8007c3ad5eac>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}