inverse functions
When f(x) = x^7 + 2x^3 + x - 1, I have found f^-1(3) = 1, but I need to prove this. Can anyone help?
I need to find f^-1, or the inverse of f. Not sure how to do this?
By definition of $f^{-1}$, $f^{-1}(\{y\})=\{x\in\mathbb{R}:f(x)=y\}$. As $f(1)=3$ we can conclude that $1\in f^{-1}(\{3\})$. If we prove that $f$ is strictly increasing, then no $x\neq 1$ satisfies
$f(x)=3$, so $f^{-1}(\{3\})=\{1\}$. That was your initial question. Now, if you want a closed formula for $f^{-1}(x)$ for all $x\in\mathbb{R}$, better forget it.
If finding $f^{-1}(*)$ is a 'categoric imperative', then a method based on advanced techniques of complex analysis will be supplied without demonstration... If the function whose inverse has
to be found is... $y= x^{7} + 2\,x^{3} + x -1$ (1) ... then first with the variable substitution $z=1+y$ we obtain... $z=f(x)= x^{7} + 2\,x^{3} + x$ (2) Now the inverse $x=f^{-1}(z)$ of a function
for which $f(0)=0$ can be written in series expansion as... $x= \sum_{n=1}^{\infty} a_{n}\,z^{n}$ (3) ... where the $a_{n}$ are given by... $a_{n}= \frac{1}{n!}\,\lim_{x \rightarrow 0} \frac{d^{n-1}}{dx^{n-1}} \left\{\frac{x}{f(x)}\right\}^{n}$ (4) In our case of course... $\frac{x}{f(x)} = \frac{1}{x^{6}+ 2\,x^{2} +1}$ (5) ... and $x$ as a function of $y$ will be... $x= \sum_{n=1}^{\infty} a_{n}\,(1+y)^{n}$ (6) Details of the computation of the $a_{n}$ are left to the [young, I suppose...] starter of the thread... Kind regards $\chi$ $\sigma$
Thanks for your help, everyone. One final thing: I am trying to determine the value of (f^-1)'(3). Would this equal 1/f'(3) = -1?
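For the follow-up question, the inverse-function rule gives (f^-1)'(3) = 1/f'(f^-1(3)) = 1/f'(1), not 1/f'(3). Since f'(x) = 7x^6 + 6x^2 + 1 and f'(1) = 14, the value is 1/14. A quick numerical sketch (my own) confirms this:

```python
# Check of the inverse-function derivative rule for f(x) = x^7 + 2x^3 + x - 1:
# (f^-1)'(3) = 1 / f'(f^-1(3)) = 1 / f'(1), since f(1) = 3.
def f(x):
    return x**7 + 2*x**3 + x - 1

def fprime(x):
    return 7*x**6 + 6*x**2 + 1

assert f(1) == 3              # so f^-1(3) = 1 (f is strictly increasing)
inv_deriv = 1 / fprime(1)     # = 1/14
print(inv_deriv)

# Numerical cross-check: difference quotient of f^-1 near y = 3,
# solving f(x) = 3 + h by a few Newton steps starting from x = 1.
h = 1e-6
x = 1.0
for _ in range(10):
    x -= (f(x) - (3 + h)) / fprime(x)
approx = (x - 1.0) / h
print(approx)                 # close to 1/14
```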
|
{"url":"http://mathhelpforum.com/pre-calculus/185019-inverse-functions.html","timestamp":"2014-04-20T15:52:35Z","content_type":null,"content_length":"69823","record_id":"<urn:uuid:78c71b75-70ce-42f1-a1e5-0fce507c0fa0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Worth, IL Algebra 1 Tutor
Find a Worth, IL Algebra 1 Tutor
...More recently I worked as a Math Lead for one of the highest-achieving charter school networks in the nation, a school in which we continually outperformed our public school counterpart year
after year. I currently work in the mathematics department for one of the most recognized public secondary districts in the State of Illinois.
20 Subjects: including algebra 1, physics, geometry, algebra 2
...Currently, I am in my thirteenth year as a professional educator, and I have served in a couple of roles, including teacher, tutor, and counselor. I have worked with a range of students, including
students with ADD/ADHD. In addition, for the last year I have worked one-on-one a couple of hours a week, as a sort of big brother, with a ten-year-old with ADHD.
20 Subjects: including algebra 1, Spanish, writing, English
...There is nothing more rewarding as a teacher than instilling a confidence in a student which they never thought they could have in math. My expertise is tutoring any level of middle school,
high school or college mathematics. I can also help students who are preparing for the math portion of the SAT or ACT.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
...I love literature and taught novels in Literature Circles at the 7th grade level. In high school, students need to hone their comprehension and writing skills, so I think it is important to
work on both. I question students about what they've read and have them answer in order to see if there is understanding.
40 Subjects: including algebra 1, reading, English, ASVAB
I currently teach elementary reading and math, and I also teach adult ESL/TESOL students. My goal as a tutor is to improve my students' areas of weakness. I have been very successful at
accommodating diverse student needs by facilitating all styles of learners, offering individualized support, and integrating effective methods and interventions to promote student success.
5 Subjects: including algebra 1, ESL/ESOL, vocabulary, spelling
Related Worth, IL Tutors
Worth, IL Accounting Tutors
Worth, IL ACT Tutors
Worth, IL Algebra Tutors
Worth, IL Algebra 2 Tutors
Worth, IL Calculus Tutors
Worth, IL Geometry Tutors
Worth, IL Math Tutors
Worth, IL Prealgebra Tutors
Worth, IL Precalculus Tutors
Worth, IL SAT Tutors
Worth, IL SAT Math Tutors
Worth, IL Science Tutors
Worth, IL Statistics Tutors
Worth, IL Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Worth_IL_algebra_1_tutors.php","timestamp":"2014-04-17T19:54:33Z","content_type":null,"content_length":"24247","record_id":"<urn:uuid:4417ff91-3177-42ee-9c83-ced8c7a4a1bc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Goulds, FL Math Tutor
Find a Goulds, FL Math Tutor
...I began teaching classes at a local Yoga Center in early 2012, and now teach classes and private lessons there and at my home. I received 200hr certification as a teacher from Sivananda Yoga,
underwent an Agama Yoga Immersion in Leh, India in 2011, and recently completed the Sivananda Yoga 500hr...
40 Subjects: including algebra 1, writing, SAT math, biology
...I'm also a native Chinese speaker, fluent in speaking Mandarin Chinese, and very knowledgeable in the writing and grammar of the language. I have spent 3 years teaching Chinese to beginners,
and more than 10 years tutoring children of all ages from a basic level to AP Chinese and SAT 2 Chinese p...
9 Subjects: including prealgebra, algebra 1, algebra 2, geometry
...Being a native of France, I love to teach French as well. I have taught in International Schools, Berlitz and Inlingua, which has given me tools to use the best method for students to learn
conversational French. With the experience I have teaching Algebra 1, in high school and also one on one ...
48 Subjects: including precalculus, chemistry, elementary (k-6th), physics
...Integration by parts, U-substitution, trigonometric substitutions, and calculus of variations are also subjects with which I am very familiar. Geometry is an integral component to Mechanical
and Aerospace Engineering, and I use the subject on a daily basis in performing basic job functions. In ...
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...My first tutoring job was as a private tutor for a college football team. From there, I went on to work with private clients of all ages and subjects in Los Angeles and Beverly Hills. I am
known for a deep understanding of all levels of math and an ability to help make math accessible to all students.
16 Subjects: including algebra 2, prealgebra, trigonometry, writing
Related Goulds, FL Tutors
Goulds, FL Accounting Tutors
Goulds, FL ACT Tutors
Goulds, FL Algebra Tutors
Goulds, FL Algebra 2 Tutors
Goulds, FL Calculus Tutors
Goulds, FL Geometry Tutors
Goulds, FL Math Tutors
Goulds, FL Prealgebra Tutors
Goulds, FL Precalculus Tutors
Goulds, FL SAT Tutors
Goulds, FL SAT Math Tutors
Goulds, FL Science Tutors
Goulds, FL Statistics Tutors
Goulds, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Cutler Bay, FL Math Tutors
Flamingo Lodge, FL Math Tutors
Leisure City, FL Math Tutors
Ludlam, FL Math Tutors
Naranja, FL Math Tutors
Perrine, FL Math Tutors
Princeton, FL Math Tutors
Quail Heights, FL Math Tutors
Redland, FL Math Tutors
Richmond Heights, FL Math Tutors
Snapper Creek, FL Math Tutors
South Miami Heights, FL Math Tutors
Venetian Islands, FL Math Tutors
Village Of Palmetto Bay, FL Math Tutors
West Dade, FL Math Tutors
|
{"url":"http://www.purplemath.com/goulds_fl_math_tutors.php","timestamp":"2014-04-19T17:27:22Z","content_type":null,"content_length":"23912","record_id":"<urn:uuid:3632628f-d4c0-4223-94d2-49d6f56dc822>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
March 11, 2005: Sinan Gunturk, CIMS
This will be an expository talk on the mathematics of signal quantization, with an emphasis on the "one-bit" setting. The latter problem is to approximate bounded functions arbitrarily well by
judiciously chosen {+1,-1} sequences. Digital halftoning (which is a crucial step in printing images) is one example of application in which this type of encoding is utilized; the analog-to-digital
interface in CD players is another example. Despite the vast amount of engineering practice, the mathematical theory has remained mostly incomplete, and many challenging problems still remain.
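For readers who want something concrete, a standard scheme in this area (first-order sigma-delta; my own illustration, not code from the talk) produces a {+1, -1} sequence whose partial sums track those of a bounded input, with the accumulated error never exceeding 1:

```python
# Minimal first-order sigma-delta sketch: encode samples x_n in [-1, 1] as
# q_n in {+1, -1} so that partial sums of q track partial sums of x.
# With the greedy rule below, the state u_n = u_{n-1} + x_n - q_n stays in [-1, 1].
import math

def sigma_delta(samples):
    u = 0.0
    out = []
    for x in samples:
        q = 1 if u + x >= 0 else -1   # greedy one-bit decision
        u = u + x - q                 # state update keeps |u| <= 1
        out.append(q)
    return out

N = 1000
xs = [0.5 * math.sin(2 * math.pi * n / N) for n in range(N)]
qs = sigma_delta(xs)
# the running error |sum(x) - sum(q)| equals |u_n|, so it stays bounded by 1
err = max(abs(sum(xs[:n]) - sum(qs[:n])) for n in range(1, N + 1))
print(err)
```

The induction is one line: if |u| <= 1 and |x| <= 1, then |u + x| <= 2, and subtracting its sign leaves a value of magnitude at most 1.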
|
{"url":"http://www.cims.nyu.edu/seminars/gsps/past_talks/GunturkMar1105.html","timestamp":"2014-04-20T18:58:03Z","content_type":null,"content_length":"2373","record_id":"<urn:uuid:f4317033-5a1f-4013-8da2-2577ba76ab35>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pseudo coefficients and orbital integrals
up vote 2 down vote favorite
I am looking for a reference/idea, how this passage from Labesse's Snowbird Lecture "Introduction to endoscopy" pg.5 can be explained:
"We shall denote by $f_\pi$ a pseudo-coefficient for $\pi$, although it is highly non-unique. But as regards invariant harmonic analysis this plays no role. In particular the orbital integrals
are independent of the choice of the pseudo-coefficient; they are also independent of the choice of the Haar measure on $G(F)$, but one has to use the canonical measure on the compact torus $G(F)$.
The orbital integrals of $f_\pi$ are easily computed for regular semisimple $\gamma$: $$O_\gamma(f_\pi) =\begin{cases} \Theta_\pi(\gamma), & \gamma \text{ elliptic}, \\ 0, & \text{else},\end{cases}$$ where $\Theta_\pi(\gamma)$ is the character of $\pi$."
Here $G$ is a reductive group over a local field $F$ and $\pi$ a square-integrable representation.
reductive-groups reference-request rt.representation-theory automorphic-forms
add comment
1 Answer
active oldest votes
Existence of pseudo-coefficients for square-integrable representations (and the link with character values of the representations) is stated and proved in
up vote 3 down vote D. Kazhdan, Cuspidal geometry of $p$-adic groups. J. Analyse Math. 47 (1986), 1–36.
1 I have seen this reference, in fact Labesse gives it himself. The existence is not the issue. What I want to understand/reference is the statement about the orbital integral.
Thanks nevertheless. – plusepsilon.de May 14 '12 at 10:56
You also have the exact statement on the link bewteen orbital integral of a pseudo-coefficient and the character of the representation in Kazhdan's paper. – Paul Broussous
May 14 '12 at 13:32
Okay, I was searching the document for keywords, probably not the correct ones;) I will have a more careful look. Thank you. – plusepsilon.de May 14 '12 at 16:16
1 I find Kazhdan's paper difficult to read ! – Paul Broussous May 14 '12 at 20:27
I went back again. It is theorem K on page 7. – plusepsilon.de Jun 12 '12 at 15:48
add comment
Not the answer you're looking for? Browse other questions tagged reductive-groups reference-request rt.representation-theory automorphic-forms or ask your own question.
|
{"url":"http://mathoverflow.net/questions/96849/pseudo-coefficients-and-orbital-integrals?sort=newest","timestamp":"2014-04-19T10:21:42Z","content_type":null,"content_length":"57016","record_id":"<urn:uuid:3871796f-3e35-4349-9b24-f4a69ee785b8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 49
A man completed a trip of 136 km in 8 hours. Part of the trip was covered at 15 km/hr and the rest at 18 km/hr. What part of the trip was covered at 18 km/hr?
The length of a rectangle is twice it's width. If it's perimeter is 54cm, find its length. Explain!
A man is 42 years old and his son is 12 years old. In how many years will the age of the son be half the age of the man at that time? Explain!
Separate 178 into two parts such that the first part is 8 less than twice the second part. Explain!
A number decreased by 30 is the same as 14 decreased by 3 times the number. Find the number. Explain!
What is the difference between solubility and solubility curve?
Who stated the three laws of motion?
Bob has three sacks of apples and three more apples in his pocket. Each sack contains the same number of apples. All together, Bob has 33 apples. How many apples are in each sack?
Where is plateau of Arabia located?
A store has 2 times as many hooded sweatshirts as crewneck sweatshirts. The total number of sweatshirts is 36. How many hooded sweatshirts does the store have?
It gives PbI2.
What happens when lead nitrate reacts with potassium hydroxide?
The sum of odd squares can be expressed as 1^2+3^2+5^2+ +(2n-1)^2=An^3+Bn^2+Cn+D. The value of A can be expressed as a/b, where a and b are positive co-prime integers. What is the value of a+b?
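For what it's worth, a quick check (my own, not part of the post) shows the closed form is (4n^3 - n)/3, so A = 4/3 and a + b = 7:

```python
# Verify that 1^2 + 3^2 + ... + (2n-1)^2 = (4n^3 - n)/3, i.e. A = 4/3, B = C' = 0
# apart from C = -1/3, so A = a/b = 4/3 and a + b = 7.
def odd_square_sum(n):
    return sum((2*k - 1)**2 for k in range(1, n + 1))

# integer form of the identity avoids any division
for n in range(1, 20):
    assert 3 * odd_square_sum(n) == 4*n**3 - n

print(odd_square_sum(5))  # 1 + 9 + 25 + 49 + 81 = 165
```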
The most convenient way to express vectors in the two dimensional plane is in the familiar (x,y) Cartesian coordinates. However, one can express vectors in other coordinate systems as well. For
example, another useful coordinate system for the plane is polar coordinates (r, ...
Bella keeps a supply of red, blue, and black pens in a drawer. The drawer has 13 red pens, 18 blue pens, and 8 black pens. If Bella wants a red pen, what is the minimum number of pens she has to pull
out from the drawer to make sure she gets a red pen?
Calculate the work required to be done to stop a car of 1500kg moving with a velocity of 60km/hr?
plzzzzzzz help me out.
A beam of light consisting of two wavelengths, 800 nm and 600 nm, is used to obtain the interference pattern in Young's double-slit experiment on a screen placed 1.4 m away. If the two slits are separated by
0.28 mm, calculate the least distance from the central bright maximum where the bright ...
The vertical component of the earth's magnetic field at a place is √3 times the horizontal component. What is the value of the angle of dip at this place?
At what temperature would an intrinsic semiconductor behave like a perfect insulator? Why?
Oh, thank you so much. Nice :)
X and Y are two parallel-plate capacitors connected in series to a 12 V supply. X has air between the plates and its capacitance is 10 microfarads. Y contains a dielectric medium of K = 5. Calculate the
capacitance of Y and the potential difference between the plates of Y.
In a wire AB of uniform cross section a potential gradient of 0.1 V/m exists. If the balancing point is found to be 180 cm from A, then what is the EMF of the cell?
What is the power dissipated in an AC circuit in which the voltage is v = 230 sin(ωt + 90°) and the current is I = 10 sin(ωt)? Please help me.
Two metallic wires of the same material have the same length, but their cross-sectional areas are in the ratio 1:2. They are connected (1) in series and (2) in parallel. Compare the drift velocities of the electrons
in the two wires in both cases. Please help.
The volume of a 500 g sealed packet is 350 cubic centimetres. Will the packet float or sink in water if the density of water is 1 g per cubic centimetre? What will be the mass of water displaced by
this packet?
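A quick check of the arithmetic (my own, assuming water density 1 g/cm^3): the packet's density is 500/350 ≈ 1.43 g/cm^3, so it sinks, and a fully submerged body displaces its own volume of water:

```python
# Float-or-sink check by comparing densities (assumes water density 1 g/cm^3).
mass_g = 500.0
volume_cm3 = 350.0
water_density = 1.0  # g/cm^3

packet_density = mass_g / volume_cm3
print(round(packet_density, 3))   # about 1.429 g/cm^3, greater than 1, so it sinks

# A fully submerged body displaces its own volume of water:
displaced_mass_g = volume_cm3 * water_density
print(displaced_mass_g)           # 350.0 g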
Please could somebody show me the long division method for x^4 / ((x-1)(x^2+1))? Please note (x-1)(x^2+1) is in the denominator.
The resultant will be maximum when the cosine of the angle between them is 1, i.e. the angle is 0. Then R = (5^2 + 5^2 + 2·5·5·cos 0)^(1/2).
A boy is riding a bicycle on a rainy day. Compare the force exerted by the boy and the force due to which the rain falls downward.
If y = sqrt(log x + sqrt(log x + sqrt(log x + ... to infinity))), prove that dy/dx = 1/(x(2y - 1)).
The only doubt I have is: how come e^(ln x^3) = x^3?!
If y = e^(x + 3 log x), then prove that dy/dx = x^2(x + 3)e^x. Can anyone help me out? Please note that x + 3 log x is the power of e; it is not e^x + 3 log x.
If y = e^(x + 3 log x), then prove that dy/dx = x^2(x + 3)e^x.
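A symbolic check (my own, not part of the post) of both claims, namely that e^(ln x^3) = x^3 and that the stated derivative is correct:

```python
# Verify y = e^(x + 3*log x) = x^3 * e^x and dy/dx = x^2 (x + 3) e^x with SymPy.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.exp(x + 3*sp.log(x))

# e^(3 ln x) = x^3 because exp and log are inverse functions (x > 0)
assert sp.simplify(y - x**3*sp.exp(x)) == 0
# product rule: (x^3 e^x)' = 3x^2 e^x + x^3 e^x = x^2 (x + 3) e^x
assert sp.simplify(sp.diff(y, x) - x**2*(x + 3)*sp.exp(x)) == 0
print("verified")
```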
100000/75 cm
science (physics)
five examples of potential energy with diagram
If the perimeter of a parallelogram is 140 m, the distance between a pair of opposite sides is 7 meters and its area is 210 sq m, find the length of two adjacent sides of the parallelogram.
sin(90° - 71°)/cos 71° = cos 71°/cos 71° = 1
A two-digit number is four times the sum of its digits and twice the product of its digits. Find the number using the elimination method.
What were the various changes that had taken place in Europe at the beginning of the Middle Ages? What were the main characteristics of the Middle Ages in Europe?
A steamer goes downstream in a river and covers the distance upstream in 6 hours. If the speed of the stream is 3 km/hr, find the speed of the steamer in still water.
The denominator of a fraction is 1 more than its numerator. The fraction reduces to one fourth when the numerator is reduced by 2 and the denominator is increased by 3. Find the fraction.
Two forces acting on a body in opposite directions have a resultant of 10 newtons. If they act at right angles to each other, the resultant is 50 newtons. Find the two forces.
mechanical engineering
What is the Rankine cycle?
How many times does the minute hand rotate in one day? There are 60 minutes in an hour and 24 hours per day. With this information, can you answer your question? I hope this helps. Thanks for asking.
Chemistry Electron Configuration Questions
Valence Electrons?
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Abhishek","timestamp":"2014-04-17T02:42:06Z","content_type":null,"content_length":"15001","record_id":"<urn:uuid:23ad3fd4-adaa-4411-9efd-6e7bee330ed7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cerritos Science Tutor
Find a Cerritos Science Tutor
...Following my master's degree, I will continue on to dental school. Following my undergraduate work at UCLA, I worked as a substitute teacher in math and science. I have continued teaching as a
private tutor for the past year in pre-calculus, ACT math, marine biology, and I am proficient in anatomy and physiology.
14 Subjects: including dentistry, physiology, biology, anatomy
Hello, my name is Grace, and I am majoring in Philosophy: Law and Society at Cal Poly Pomona. Growing up, my parents instilled in me a strict sense of discipline and educational drive and because
of them, I strongly believe that the study habits and positive educational influences that a child has ...
17 Subjects: including philosophy, reading, English, writing
...In my experience, students do not have trouble memorizing facts but connecting them together into an overall understanding. I have taught college chemistry for 15 years at a liberal arts college
in PV. From my teaching and tutoring experiences, I am quite comfortable with most general and organic chemistry topics.
4 Subjects: including biology, chemistry, biochemistry, organic chemistry
...I break down the concepts into smaller parts. I like to use repetition, key phrases and mnemonics to help my students remember the concepts. For example, to remember the relationship of the
sides of a right triangle and the functions sine, cosine, tangent: Oscar Had A Headache Over Algebra (sin...
31 Subjects: including physics, ADD/ADHD, Aspergers, autism
...I recently received my Bachelor's Degree from San Jose State University in Social Science, African American Studies, and am currently working to pursue a Master's Degree in the medical field. I
am qualified as a substitute teacher by the State of California and I enjoy working with children. I ...
30 Subjects: including biology, elementary (k-6th), phonics, prealgebra
|
{"url":"http://www.purplemath.com/Cerritos_Science_tutors.php","timestamp":"2014-04-17T01:02:36Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:79b4ff6e-496d-4030-abac-9fbdfdd414f0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: ancillary parameters in ml display
Re: st: ancillary parameters in ml display
From Partha Deb <partha.deb@hunter.cuny.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: ancillary parameters in ml display
Date Wed, 16 Aug 2006 00:45:55 -0400
Thanks to Jeff for his explanation. I guess I'll have to live with some ugliness! :)
Jeff Pitblado, StataCorp LP wrote:
Partha Deb <partha.deb@hunter.cuny.edu> is using -e(k_aux)- to get
-ml display- to report ancillary parameters using the '/' notation:
I have a question regarding the display of ancillary parameters in -ml
display- .
When I specify my model (-ml model-) with ancillary parameters at the end of
the specification, all is well:
ml model (one: y1 = x1) (two: y2 = x2) /ancil1 /ancil2
e(k_eq) = 4
e(k_aux) = 2
But if I specify the model with ancillary parameters between equations one
and two:
ml model (one: y1 = x1) /ancil1 /ancil2 (two: y2 = x2)
e(k_eq) = 4
e(k_aux) = 2
-ml display- treats one: and /ancil1 as "equations" and /ancil2 and
two:_const as ancillary parameters. The coefficient on x2 is not displayed.
Is it possible to specify ancillary parameters between equations? It's desirable for my application because in the default specification the "equation" labeled "two" is also an ancillary
parameter, and "two" is best reported at the end.
When you set -e(k_aux)- to 2, -ml display- understands this to mean that the
last 2 equations in -e(b)- are to be treated as ancillary parameters,
regardless of whether those equations contain predictors. Thus Partha will
have to use the first -ml model- specification (above) instead of the second
to get the coefficient on -x2- to be reported.
The only alternative I can think of is to use the second specification and set
-e(k_aux)- to 3 only when there are no predictors in equation 'two:'. This is
reasonable assuming specifying predictors for equation 'two:' is a rare thing
when fitting Partha's model.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Partha Deb
Department of Economics
Hunter College
ph: (212) 772-5435
fax: (212) 772-5398
Emancipate yourselves from mental slavery
None but ourselves can free our minds.
- Bob Marley
|
{"url":"http://www.stata.com/statalist/archive/2006-08/msg00401.html","timestamp":"2014-04-20T08:45:15Z","content_type":null,"content_length":"8495","record_id":"<urn:uuid:94950fbe-4904-40a9-b746-7807da0d2823>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Infintie unions of closed sets in R
October 19th 2008, 01:15 PM #1
Jun 2008
Idaho Falls
Infintie unions of closed sets in R
Hello, I'm hoping you guys can help me unravel this apparent contradiction and show me where my reasoning went wrong. I'm working out of the book "Mathematical Analysis" by Apostol (second
edition), and here are a few of Apostol's definitions and notation.
B(a;r). The open ball of radius r centered at a.
Interior Point. Let S be a subset of R (reals), and assume that a is an element of S. Then a is called an interior point of S if there is an open ball with center at a, all of whose points belong
to S.
Open Set. A set S in R is called open if all its points are interior points.
Closed Set. A set S in R is called closed if its complement R - S is open.
Later in the book its mentioned that the empty set is open (vacuously). In the argument I wrote, I tried to show that a countably infinite union, S, of closed sets is open. But this union should
be equal to R and so the complement is the empty set which is open, implying S is closed. Here is a link to what I wrote up, you can click on the picture to magnify it so its readable:
Image of closed sets - Photobucket - Video and Image Hosting
Consider another example:
$A_n = \left[ {\frac{1}{{n + 2}},1 - \frac{1}{{n + 1}}} \right]\quad ,\quad \bigcup\limits_n {A_n } = ?$
Oh neat, that union should be the open interval (0,1), right Plato? I should have mentioned that a theorem in Apostol's book says that the union of a finite collection of closed sets is closed. So I
started to wonder if all infinite unions of closed sets are closed. This is a cool example.
A similar situation occurs with open sets, in case thou art wondering. Any union of open sets is open, and so is any finite intersection. However, this does not necessarily hold for infinite
intersections; see if you can find a counterexample.
Thanks for the response Hacker. It took me a bit, but how about if we let A_n = (1 - 1/n, 1 + 1/n). Then the intersection of all A_n should be the singleton {1}. This set is closed because the
complement is (-infinity, 1) union (1, infinity) which is open.
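A small numerical illustration (my own, not from the thread) of both examples: every x in (0,1) eventually enters A_n = [1/(n+2), 1 - 1/(n+1)], while any x ≠ 1 eventually leaves (1 - 1/n, 1 + 1/n):

```python
# Illustrate: union of A_n = [1/(n+2), 1 - 1/(n+1)] is (0,1);
# intersection of (1 - 1/n, 1 + 1/n) is the singleton {1}.
def in_union(x):
    """Smallest n with x in [1/(n+2), 1 - 1/(n+1)], for 0 < x < 1."""
    n = 1
    while not (1/(n + 2) <= x <= 1 - 1/(n + 1)):
        n += 1
    return n

def in_intersection(x, max_n=10**4):
    """True iff x lies in (1 - 1/n, 1 + 1/n) for every n up to max_n."""
    return all(1 - 1/n < x < 1 + 1/n for n in range(1, max_n + 1))

print(in_union(0.0015))        # a point near 0 needs a fairly large n (665)
print(in_intersection(1.0))    # True: 1 is in every interval
print(in_intersection(1.15))   # False: fails once 1/n < 0.15
```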
|
{"url":"http://mathhelpforum.com/calculus/54523-infintie-unions-closed-sets-r.html","timestamp":"2014-04-20T15:58:51Z","content_type":null,"content_length":"47822","record_id":"<urn:uuid:b2a01b57-f3e6-4753-b621-926becb1dd1c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Charles Babbage
Since Charles Babbage's father was fairly wealthy, he could afford to have Babbage educated at private schools. He was sent to an academy at Forty Hill, Enfield, Middlesex where his education
properly began. He began to show a passion for mathematics. On leaving the academy, he continued to study at home, having an Oxford tutor to bring him up to university level. Babbage entered Trinity
College, Cambridge in 1810.
He set up the Analytical Society in 1812, and its members were all Cambridge undergraduates. Nine mathematicians attended the first meeting. Babbage and Herschel produced the first of the
publications of the Analytical Society in 1813. They published a remarkably deep history of the calculus for undergraduates. Two further publications of the Analytical Society were the joint work of
Babbage, Herschel and Peacock.
Babbage moved from Trinity College to Peterhouse and it was from that College that he graduated with a B.A. in 1814. Babbage married in 1814, then left Cambridge in 1815 to live in London. He wrote 2
major papers on functional equations in 1815 and 1816. Also in 1816, at the early age of 24, he was elected a fellow of the Royal Society of London. He wrote papers on several different mathematical
topics over the next few years but none are particularly important and some, such as his work on infinite series, are clearly incorrect.
In 1820 he was elected a fellow of the Royal Society of Edinburgh, and in the same year he was a major influence in founding the Royal Astronomical Society. He served as secretary to the Royal
Astronomical Society for the first 4 years of its existence and later he served as vice-president of the Society.
Babbage, together with Herschel, conducted some experiments on magnetism in 1825. In 1827 Babbage became Lucasian Professor of Mathematics at Cambridge, a position he held for 12 years, although he
never taught. The reason why he held this prestigious post and yet failed to carry out the duties was that by this time he had become engrossed in what was to became the main passion of his life,
namely the development of mechanical computers.
Babbage is without doubt the originator of the concepts behind the present day computer. He was motivated by logarithm tables to attempt to construct tables using the method of differences by
mechanical means. Such a machine would be able to carry out complex operations using only the mechanism for addition. Babbage began to construct a small difference engine in 1819, and had completed
it by 1822. He announced his invention in a paper read to the Royal Astronomical Society in 1822. Babbage illustrated what his small engine was capable of doing by calculating successive terms of the
sequence n^2 + n + 41, though an assistant had to write down the terms obtained. In 1823, Babbage received a gold medal from the Astronomical Society for his development of the difference engine. He
received a grant to begin work on a larger difference engine.
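The engine's principle is easy to sketch in modern terms (my illustration, not Babbage's design): for the quadratic n^2 + n + 41 the second difference is the constant 2, so each new table entry needs only two additions.

```python
# Method of differences: tabulate a quadratic from its initial value, first
# difference, and constant second difference, using only addition.
def difference_engine(first, d1, d2, count):
    values = []
    value, diff = first, d1
    for _ in range(count):
        values.append(value)
        value += diff   # next table entry
        diff += d2      # next first difference
    return values

# p(n) = n^2 + n + 41: p(0) = 41, p(1) - p(0) = 2, second difference = 2
table = difference_engine(41, 2, 2, 8)
print(table)  # [41, 43, 47, 53, 61, 71, 83, 97]
```

This is exactly why a mechanism capable only of addition could tabulate polynomials of any fixed degree.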
In 1830, Babbage published a controversial work that resulted in the formation of the British Association for the Advancement of Science. In 1834, Babbage published his most influential work On the
Economy of Machinery and Manufactures, in which he proposed an early form of what today we call operational research.
After spending 17000 pounds on the construction of the new difference engine, including 6000 pounds of his own money, he gave up in 1834. By this time, he had completed the first drawings of the
analytical engine, the forerunner of the modern electronic computer. Although the analytic engine never progressed beyond detailed drawings, it is remarkably similar in logical components to a
present day computer.
Although Babbage never built an operational mechanical computer, his design concepts have been proved correct. After Babbage's death a committee was appointed by the British Association to report
upon the feasibility of the design. Recently such a computer has been built following Babbage's own design criteria. The construction of modern computers, logically similar to Babbage's design, has
changed the whole of mathematics, and it is not an exaggeration to say that they have changed the whole world.
|
{"url":"http://www2.stetson.edu/~efriedma/periodictable/html/Ba.html","timestamp":"2014-04-20T05:44:37Z","content_type":null,"content_length":"4836","record_id":"<urn:uuid:527169a1-cb67-48f0-9943-79f041104318>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newest 'ultrapowers gr.group-theory' Questions
What can be said about the class of groups which can be represented as a homomorphic image of an (infinite) Cartesian product (=unrestricted direct product) of finite groups? What would be simple ...
Given a nonprincipal ultrafilter $\mu$ on $\mathbb{N}$ and a sequence of groups $G_i$, one can define its ultraproduct as: $$ ^*\prod_{i\in \mathbb{N}}G_i:=\{(x_i)_{i \in \mathbb{N}}| x_i\in ...
Let G be the (non-principal) ultraproduct of all finite cyclic groups of orders n!, n=1,2,3,... . Is there a homomorphism from G onto the infinite cyclic group?
|
{"url":"http://mathoverflow.net/questions/tagged/ultrapowers+gr.group-theory","timestamp":"2014-04-17T07:22:46Z","content_type":null,"content_length":"36987","record_id":"<urn:uuid:4cde7ee4-a494-4bd9-921c-b6a2fa2e6f4a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1911 Encyclopædia Britannica/Electrokinetics
From Wikisource
1911 Encyclopædia Britannica, Volume 9
ELECTROKINETICS, that part of electrical science which is concerned with the properties of electric currents.
Classification of Electric Currents.
Electric currents are classified into (a) conduction currents, (b) convection currents, (c) displacement or dielectric currents. In the case of conduction currents electricity flows or moves through
a stationary material body called the conductor. In convection currents electricity is carried from place to place with and on moving material bodies or particles. In dielectric currents there is no
continued movement of electricity, but merely a limited displacement through or in the mass of an insulator or dielectric. The path in which an electric current exists is called an electric circuit,
and may consist wholly of a conducting body, or partly of a conductor and insulator or dielectric, or wholly of a dielectric. In cases in which the three classes of currents are present together the
true current is the sum of each separately. In the case of conduction currents the circuit consists of a conductor immersed in a non-conductor, and may take the form of a thin wire or cylinder, a
sheet, surface or solid. Electric conduction currents may take place in space of one, two or three dimensions, but for the most part the circuits we have to consider consist of thin cylindrical wires
or tubes of conducting material surrounded with an insulator; hence the case which generally presents itself is that of electric flow in space of one dimension. Self-closed electric currents taking
place in a sheet of conductor are called "eddy currents."
Although in ordinary language the current is said to flow in the conductor, yet according to modern views the real pathway of the energy transmitted is the surrounding dielectric, and the so-called
conductor or wire merely guides the transmission of energy in a certain direction. The presence of an electric current is recognized by three qualities or powers: (1) by the production of a magnetic
field, (2) in the case of conduction currents, by the production of heat in the conductor, and (3) if the conductor is an electrolyte and the current unidirectional, by the occurrence of chemical
decomposition in it. An electric current may also be regarded as the result of a movement of electricity across each section of the circuit, and is then measured by the quantity conveyed per unit of
time. Hence if dq is the quantity of electricity which flows across any section of the conductor in the element of time dt, the current i=dq/dt.
Electric currents may be also classified as constant or variable and as unidirectional or "direct," that is flowing always in the same direction, or "alternating," that is reversing their direction
at regular intervals. In the last case the variation of current may follow any particular law. It is called a "periodic current" if the cycle of current values is repeated during a certain time
called the periodic time, during which the current reaches a certain maximum value, first in one direction and then in the opposite, and in the intervals between has a zero value at certain instants.
The frequency of the periodic current is the number of periods or cycles in one second, and alternating currents are described as low frequency or high frequency, in the latter case having some
thousands of periods per second. A periodic current may be represented either by a wave diagram, or by a polar diagram^[1]. In the first case we take a straight line to represent the uniform flow of
time, and at small equidistant intervals set up perpendiculars above or below the time axis, representing to scale the current at that instant in one direction or the other; the extremities of these
ordinates then define a wavy curve which is called the wave form of the current (fig. 1). It is obvious that this curve can only be a single valued curve. In one particular and important case the
form of the current curve is a simple harmonic curve or simple sine curve.
If T represents the periodic time in which the cycle of current values takes place, whilst n is the frequency or number of periods per second and p stands for 2πn, and i is the value of the current
at any instant t, and I its maximum value, then in this case we have i=I sin pt. Such a current is called a "sine current" or simple periodic current.
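As a modern numerical illustration (Python, not part of the 1911 text), the sine current i = I sin pt described above can be sampled directly; the values chosen here are arbitrary examples:

```python
import math

def sine_current(I_max, n, t):
    """Instantaneous value i = I sin(pt) of a simple periodic (sine) current.

    I_max : maximum current, n : frequency (periods per second),
    t : time in seconds.  p = 2*pi*n, as in the text.
    """
    p = 2 * math.pi * n
    return I_max * math.sin(p * t)

# A 50-period-per-second current of peak 10 passes through zero at t = 0
# and reaches its maximum a quarter-period later (t = T/4 = 1/200 s).
assert abs(sine_current(10, 50, 0)) < 1e-12
assert abs(sine_current(10, 50, 1 / 200) - 10) < 1e-9
```

Plotting these samples against t reproduces the wave diagram of fig. 1; plotting them against angle pt in polar coordinates gives the circle of fig. 2.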
In a polar diagram (fig. 2) a number of radial lines are drawn from a point at small equiangular intervals, and on these lines are set off lengths proportional to the current value of a periodic
current at corresponding intervals during one complete period represented by four right angles. The extremities of these radii delineate a polar curve. The polar form of a simple sine current is
obviously a circle drawn through the origin. As a consequence of Fourier’s theorem it follows that any periodic curve having any wave form can be imitated by the superposition of simple sine currents
differing in maximum value and in phase.
Definitions of Unit Electric Current.
In electrokinetic investigations we are most commonly limited to the cases of unidirectional continuous and constant currents (C.C. or D.C.), or of simple periodic currents, or alternating currents
of sine form (A.C.). A continuous electric current is measured either by the magnetic effect it produces at some point outside its circuit, or by the amount of electrochemical decomposition it can
perform in a given time on a selected standard electrolyte. Limiting our consideration to the case of linear currents or currents flowing in thin cylindrical wires, a definition may be given in the
first place of the unit electric current in the centimetre, gramme, second (C.G.S.) of electromagnetic measurement (see UNITS, PHYSICAL). H. C. Oersted discovered in 1820 that a straight wire
conveying an electric current is surrounded by a magnetic field the lines of which are self-closed lines embracing the electric circuit (see ELECTRICITY and ELECTROMAGNETISM). The unit current in the
electromagnetic system of measurement is defined as the current which, flowing in a thin wire bent into the form of a circle of one centimetre in radius, creates a magnetic field having a strength of
2π units at the centre of the circle, and therefore would exert a mechanical force of 2π dynes on a unit magnetic pole placed at that point (see MAGNETISM). Since the length of the circumference of
the circle of unit radius is 2π units, this is equivalent to stating that the unit current on the electromagnetic C.G.S. system is a current such that unit length acts on unit magnetic pole with a
unit force at a unit of distance. Another definition, called the electrostatic unit of current, is as follows: Let any conductor be charged with electricity and discharged through a thin wire at such
a rate that one electrostatic unit of quantity (see ELECTROSTATICS) flows past any section of the wire in one unit of time. The electromagnetic unit of current defined as above is 3×10^10 times
larger than the electrostatic unit.
In the selection of a practical unit of current it was considered that the electromagnetic unit was too large for most purposes, whilst the electrostatic unit was too small; hence a practical unit of
current called 1 ampere was selected, intended originally to be 1/10 of the absolute electromagnetic C.G.S. unit of current as above defined. The practical unit of current, called the international
ampere, is, however, legally defined at the present time as the continuous unidirectional current which when flowing through a neutral solution of silver nitrate deposits in one second on the cathode
or negative pole 0.001118 of a gramme of silver. There is reason to believe that the international unit is smaller by about one part in a thousand, or perhaps by one part in 800, than the theoretical
ampere defined as 1/10 part of the absolute electromagnetic unit. A periodic or alternating current is said to have a value of 1 ampere if when passed through a fine wire it produces in the same time
the same heat as a unidirectional continuous current of 1 ampere as above electrochemically defined. In the case of a simple periodic alternating current having a simple sine wave form, the maximum
value is equal to that of the equiheating continuous current multiplied by √2. This equiheating continuous current is called the effective or root-mean-square (R.M.S.) value of the alternating one.
A current flows in a circuit in virtue of an electromotive force (E.M.F.), and the numerical relation between the current and E.M.F. is determined by three qualities of the circuit called
respectively, its resistance (R), inductance (L), and capacity (C). If we limit our consideration to the case of continuous unidirectional conduction currents, then the relation between current and
E.M.F. is defined by Ohm's law, which states that the numerical value of the current is obtained as the quotient of the electromotive force by a certain constant of the circuit called its resistance,
which is a function of the geometrical form of the circuit, of its nature, i.e. material, and of its temperature, but is independent of the electromotive force or current. The resistance (R) is
measured in units called ohms and the electromotive force in volts (V); hence for a continuous current the value of the current in amperes (A) is obtained as the quotient of the electromotive force
acting in the circuit reckoned in volts by the resistance in ohms, or A=V/R. Ohm established his law by a course of reasoning which was similar to that on which J. B. J. Fourier based his
investigations on the uniform motion of heat in a conductor. As a matter of fact, however, Ohm's law merely states the direct proportionality of steady current to steady electromotive force in a
circuit, and asserts that this ratio is governed by the numerical value of a quality of the conductor, called its resistance, which is independent of the current, provided that a correction is made
for the change of temperature produced by the current. Our belief, however, in its universality and accuracy rests upon the close agreement between deductions made from it and observational results,
and although it is not derivable from any more fundamental principle, it is yet one of the most certainly ascertained laws of electrokinetics.
Ohm's law not only applies to the circuit as a whole but to any part of it, and provided the part selected does not contain a source of electromotive force it may be expressed as follows:— The
difference of potential (P.D.) between any two points of a circuit including a resistance R, but not including any source of electromotive force, is proportional to the product of the resistance and
the current i in the element, provided the conductor remains at the same temperature and the current is constant and unidirectional. If the current is varying we have, however, to take into account
the electromotive force (E.M.F.) produced by this variation, and the product Ri is then equal to the difference between the observed P.D. and induced E.M.F.
We may otherwise define the resistance of a circuit by saying that it is that physical quality of it in virtue of which energy is dissipated as heat in the circuit when a current flows through it.
The power communicated to any electric circuit when a current i is created in it by a continuous unidirectional electromotive force E is equal to Ei, and the energy dissipated as heat in that circuit
by the conductor in a small interval of time dt is measured by Ei dt. Since by Ohm's law E=Ri, where R is the resistance of the circuit, it follows that the energy dissipated as heat per unit of time
in any circuit is numerically represented by Ri^2, and therefore the resistance is measured by the heat produced per unit of current, provided the current is unvarying.
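Ohm's law and Joule's law together give the current and the heating for a steady circuit; a minimal sketch (a modern Python illustration, with arbitrary example values, not part of the original article):

```python
def current(V, R):
    """Ohm's law: steady current in amperes from volts and ohms, A = V/R."""
    return V / R

def power_dissipated(R, i):
    """Heat developed per second (watts) in a resistance: R * i^2 (Joule's law)."""
    return R * i * i

# 12 volts across 3 ohms drives 4 amperes and dissipates 48 watts as heat.
i = current(12, 3)
assert i == 4
assert power_dissipated(3, i) == 48
```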
As soon as we turn our attention, however, to alternating or periodic currents we find ourselves compelled to take into account another quality of the circuit, called its "inductance." This may be
defined as that quality in virtue of which energy is stored up in connexion with the circuit in a magnetic form. It can be experimentally shown that a current cannot be created instantaneously in a
circuit by any finite electromotive force, and that when once created it cannot be annihilated instantaneously. The circuit possesses a quality analogous to the inertia of matter. If a current i is
flowing in a circuit at any moment, the energy stored up in connexion with the circuit is measured by ½Li^2, where L, the inductance of the circuit, is related to the current in the same manner as
the quantity called the mass of a body is related to its velocity in the expression for the ordinary kinetic energy, viz. ½Mv^2. The rate at which this conserved energy varies with the current is
called the "electrokinetic momentum" of this circuit (= Li). Physically interpreted this quantity signifies the number of lines of magnetic flux due to the current itself which are self-linked with
its own circuit.
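The stored magnetic energy and the electrokinetic momentum just defined can be computed directly; the numbers below are arbitrary examples (a modern Python sketch, assuming practical units: henries and amperes):

```python
def magnetic_energy(L, i):
    """Energy stored in the magnetic field of a circuit: (1/2) * L * i^2.

    With L in henries and i in amperes the result is in joules."""
    return 0.5 * L * i * i

def electrokinetic_momentum(L, i):
    """Rate of change of the stored energy with current: L * i,
    the flux self-linked with the circuit."""
    return L * i

# An inductance of 2 henries carrying 3 amperes stores 9 joules,
# and its electrokinetic momentum is 6 weber-turns.
assert magnetic_energy(2, 3) == 9
assert electrokinetic_momentum(2, 3) == 6
```

Note that electrokinetic_momentum is exactly the derivative of magnetic_energy with respect to i, paralleling momentum Mv and kinetic energy ½Mv² in mechanics.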
Magnetic Force and Electric Currents.
In the case of every circuit conveying a current there is a certain magnetic force (see MAGNETISM) at external points which can in some instances be calculated. Laplace proved that the magnetic force
due to an element of length dS of a circuit conveying a current I at a point P at a distance r from the element is expressed by IdS sin θ/r^2, where θ is the angle between the direction of the
current element and that drawn between the element and the point. This force is in a direction perpendicular to the radius vector and to the plane containing it and the element of current. Hence the
determination of the magnetic force due to any circuit is reduced to a summation of the effects due to all the elements of length. For instance, the magnetic force at the centre of a circular circuit
of radius r carrying a steady current I is 2πI/r, since all elements are at the same distance from the centre. In the same manner, if we take a point in a line at right angles to the plane of the
circle through its centre and at a distance d, the magnetic force along this line is expressed by $2 \pi r^2 I/(r^2+d^2)^{3/2}$. Another important case is that of an infinitely long straight
current. By summing up the magnetic force due to each element at any point P outside the continuous straight current I, and at a distance d from it, we can show that it is equal to 2I/d or is
inversely proportional to the distance of the point from the wire. In the above formula the current I is measured in absolute electromagnetic units. If we reckon the current in amperes A, then I=A/10.
It is possible to make use of this last formula, coupled with an experimental fact, to prove that the magnetic force due to an element of current varies inversely as the square of the distance. If a
flat circular disk is suspended so as to be free to rotate round a straight current which passes through its centre, and two bar magnets are placed on it with their axes in line with the current, it
is found that the disk has no tendency to rotate round the current. This proves that the force on each magnetic pole is inversely as its distance from the current. But it can be shown that this law
of action of the whole infinitely long straight current is a mathematical consequence of the fact that each element of the current exerts a magnetic force which varies inversely as the square of the
distance. If the current flows N times round the circuit instead of once, we have to insert NA/10 in place of I in all the above formulae. The quantity NA is called the "ampere-turns" on the circuit,
and it is seen that the magnetic field at any point outside a circuit is proportional to the ampere- turns on it and to a function of its geometrical form and the distance of the point.
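The three field formulas of this section can be collected and cross-checked numerically; a modern Python sketch (currents in absolute C.G.S. electromagnetic units, lengths in centimetres, as in the text):

```python
import math

def H_circle_centre(I, r):
    """Field at the centre of a circular current of radius r: 2*pi*I/r."""
    return 2 * math.pi * I / r

def H_circle_axis(I, r, d):
    """Field on the axis at distance d from the plane of the circle:
    2*pi*r^2*I/(r^2 + d^2)^(3/2)."""
    return 2 * math.pi * r**2 * I / (r**2 + d**2) ** 1.5

def H_straight_wire(I, d):
    """Field at distance d from an infinitely long straight current: 2*I/d."""
    return 2 * I / d

# The on-axis formula reduces to the centre value when d = 0.
assert abs(H_circle_axis(1, 1, 0) - H_circle_centre(1, 1)) < 1e-12
# Unit current in a circle of unit radius gives 2*pi at the centre,
# the defining property of the electromagnetic unit of current.
assert abs(H_circle_centre(1, 1) - 2 * math.pi) < 1e-12
```

For a circuit of N turns carrying A amperes, the same functions apply with I replaced by NA/10, the ampere-turns divided by ten.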
There is therefore a distribution of magnetic force in the field of every current-carrying conductor which can be delineated by lines of magnetic force and rendered visible to the eye by iron filings
(see MAGNETISM). If a copper wire is passed vertically through a hole in a card on which iron filings are sprinkled, and a strong electric current is sent through the circuit, the filings arrange
themselves in concentric circular lines making visible the paths of the lines of magnetic force (fig. 3). In the same manner, by passing a circular wire through a card and sending a strong current
through the wire we can employ iron filings to delineate for us the form of the lines of magnetic force (fig. 4).
In all cases a magnetic pole of strength M, placed in the field of an electric current, is urged along the lines of force with a mechanical force equal to MH, where H is the magnetic force. If then
we carry a unit magnetic pole against the direction in which it would naturally move we do work. The lines of magnetic force embracing a current-carrying conductor are always loops or endless lines.
The work done in carrying a unit magnetic pole once round a circuit conveying a current is called the "line integral of magnetic force" along that path. If, for instance, we carry a unit pole in a
circular path of radius r once round an infinitely long straight filamentary current I, the line integral is 4πI. It is easy to prove that this is a general law, and that if we have any currents
flowing in a conductor the line integral of magnetic force taken once round a path linked with the current circuit is 4π times the total current flowing through the circuit. Let us apply this to the
case of an endless solenoid. If a copper wire insulated or covered with cotton or silk is twisted round a thin rod so as to make a close spiral, this forms a "solenoid," and if the solenoid is bent
round so that its two ends come together we have an endless solenoid. Consider such a solenoid of mean length l and N turns of wire. If it is made endless, the magnetic force H is the same everywhere
along the central axis and the line integral along the axis is Hl. If the current is denoted by I, then NI is the total current, and accordingly 4πNI=Hl, or H=4πNI/l. For a thin endless solenoid the
axial magnetic force is therefore 4π times the current-turns per unit of length. This holds good also for a long straight solenoid provided its length is large compared with its diameter. It can be
shown that if insulated wire is wound round a sphere, the turns being all parallel to lines of latitude, the magnetic force in the interior is constant and the lines of force therefore parallel. The
magnetic force at a point outside a conductor conveying a current can by various means be measured or compared with some other standard magnetic forces, and it becomes then a means of measuring the
current. Instruments called galvanometers and ammeters for the most part operate on this principle.
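The endless-solenoid result H = 4πNI/l can likewise be evaluated directly; a modern Python sketch (taking the current in amperes and converting to absolute units by dividing by 10, lengths in centimetres — the figures are arbitrary examples):

```python
import math

def H_endless_solenoid(N, amperes, l):
    """Axial magnetic force of an endless (or long, thin) solenoid:
    H = 4*pi*N*I/l, with I in absolute e.m. units = amperes/10."""
    I = amperes / 10  # amperes -> absolute electromagnetic units
    return 4 * math.pi * N * I / l

# 1000 turns on a ring of mean length 50 cm carrying 2 amperes:
# H = 4*pi*1000*0.2/50, i.e. 4*pi times the current-turns per unit length.
H = H_endless_solenoid(1000, 2, 50)
assert abs(H - 4 * math.pi * 1000 * 0.2 / 50) < 1e-9
```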
Thermal Effects of Currents.
J. P. Joule proved that the heat produced by a constant current in a given time in a wire having a constant resistance is proportional to the square of the strength of the current. This is known as
Joule's law, and it follows, as already shown, as an immediate consequence of Ohm's law and the fact that the power dissipated electrically in a conductor, when an electromotive force E is applied to
its extremities, producing thereby a current I in it, is equal to EI.
If the current is alternating or periodic, the heat produced in any time T is obtained by taking the sum at equidistant intervals of time of all the values of the quantities Ri dt, where dt
represents a small interval of time and i is the current at that instant. The quantity $T^{-1} \int_0^T i^2dt$ is called the mean-square-value of the variable current, i being the instantaneous value
of the current, that is, its value at a particular instant or during a very small interval of time dt. The square root of the above quantity, or
$\left \lbrack T^{-1} \int_0^T i^2dt \right \rbrack^{\frac{1}{2}},$
is called the root-mean-square-value, or the effective value of the current, and is denoted by the letters R.M.S.
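The root-mean-square definition can be verified numerically by approximating the integral with a Riemann sum; a modern Python sketch (not part of the original article) confirming that a sine current of peak I has R.M.S. value I/√2:

```python
import math

def rms(current, T, steps=100_000):
    """Root-mean-square of a periodic current over one period T,
    approximating (1/T) * integral of i^2 dt by a Riemann sum."""
    dt = T / steps
    total = sum(current(k * dt) ** 2 * dt for k in range(steps))
    return math.sqrt(total / T)

# For i = I sin(pt), the R.M.S. value is I / sqrt(2): a sine current of
# peak 10 amperes heats a wire like a steady current of about 7.07 amperes.
I, n = 10.0, 50.0
p = 2 * math.pi * n
value = rms(lambda t: I * math.sin(p * t), 1 / n)
assert abs(value - I / math.sqrt(2)) < 1e-3
```

This is the relation stated earlier: the maximum value of a sine current equals its effective (equiheating) value multiplied by √2.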
Currents have equal heat-producing power in conductors of identical resistance when they have the same R.M.S. values. Hence periodic or alternating currents can be measured as regards their R.M.S.
value by ascertaining the continuous current which produces in the same time the same heat in the same conductor as the periodic current considered. Current measuring instruments depending on this
fact, called hot-wire ammeters, are in common use, especially for measuring alternating currents. The maximum value of the periodic current can only be determined from the R.M.S. value when we know
the wave form of the current. The thermal effects of electric currents in conductors are dependent upon the production of a state of equilibrium between the heat produced electrically in the wire and
the causes operative in removing it. If an ordinary round wire is heated by a current it loses heat, (1) by radiation, (2) by air convection or cooling, and (3) by conduction of heat out of the ends
of the wire. Generally speaking, the greater part of the heat removal is effected by radiation and convection.
If a round sectioned metallic wire of uniform diameter d and length l made of a material of resistivity ρ has a current of A amperes passed through it, the heat in watts produced in any time t
seconds is represented by the value of 4A^2ρlt/10^9πd^2, where d and l must be measured in centimetres and ρ in absolute C.G.S. electromagnetic units. The factor 10^9 enters because one ohm is 10^9
absolute electromagnetic C.G.S. units (see UNITS, PHYSICAL). If the wire has an emissivity e, by which is meant that e units of heat reckoned in joules or watt-seconds are radiated per second from
unit of surface, then the power removed by radiation in the time t is expressed by πdlet. Hence when thermal equilibrium is established we have 4A^2ρlt/10^9πd^2=πdlet, or A^2=10^9π^2ed^3/4ρ. If the
diameter of the wire is reckoned in mils (1 mil=.001 in.), and if we take e to have a value 0.1, an emissivity which will generally bring the wire to about 60°C., we can put the above formula in the
following forms for circular sectioned copper, iron or platinoid wires, viz.
$A=\sqrt{d^3/500}$ for copper wires
$A=\sqrt{d^3/4000}$ for iron wires
$A=\sqrt{d^3/5000}$ for platinoid wires
These expressions give the ampere value of the current which will bring bare, straight or loosely coiled wires of d mils in diameter to about 60°C. when the steady state of temperature is reached.
Thus, for instance, a bare straight copper wire 50 mils in diameter (=0.05 in.) will be brought to a steady temperature of about 60°C. if a current of √50^3/500=√250=16 amperes (nearly) is passed
through it, whilst a current of √25=5 amperes would bring a platinoid wire to about the same temperature.
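These safe-current formulas are easy to tabulate; a modern Python sketch reproducing the two worked examples from the text (d in mils, results in amperes, wires brought to about 60°C):

```python
import math

def safe_current_copper(d):
    """Ampere value bringing a bare copper wire of d mils to ~60 deg C."""
    return math.sqrt(d**3 / 500)

def safe_current_iron(d):
    """Same for iron wire."""
    return math.sqrt(d**3 / 4000)

def safe_current_platinoid(d):
    """Same for platinoid wire."""
    return math.sqrt(d**3 / 5000)

# The worked examples of the text: a 50-mil copper wire carries
# sqrt(250) ~ 16 amperes (nearly); a 50-mil platinoid wire carries 5.
assert abs(safe_current_copper(50) - math.sqrt(250)) < 1e-9
assert abs(safe_current_platinoid(50) - 5) < 1e-9
```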
A wire has therefore a certain safe current-carrying capacity which is determined by its specific resistance and emissivity, the latter being fixed by its form, surface and surroundings. The
emissivity increases with the temperature, else no state of thermal equilibrium could be reached. It has been found experimentally that whilst for fairly thick wires from 8 to 60 mils in diameter the
safe current varies approximately as the 1.5th power of the diameter, for fine wires of 1 to 3 mils it varies more nearly as the diameter.
Action of one Current on Another.
The investigations of Ampère in connexion with electric currents are of fundamental importance in electrokinetics. Starting from the discovery of Oersted, Ampère made known the correlative fact that
not only is there a mechanical action between a current and a magnet, but that two conductors conveying electric currents exert mechanical forces on each other. Ampère devised ingenious methods of
making one portion of a circuit movable so that he might observe effects of attraction or repulsion between this circuit and some other fixed current. He employed for this purpose an astatic circuit
B, consisting of a wire bent into a double rectangle round which a current flowed first in one and then in the opposite direction (fig. 5).
In this way the circuit was removed from the action of the earth's magnetic field, and yet one portion of it could be submitted to the action of any other circuit C. The astatic circuit was pivoted
by suspending it in mercury cups q, p, one of which was in electrical connexion with the tubular support A, and the other with a strong insulated wire passing up it.
Ampère devised certain crucial experiments, and the theory deduced from them is based upon four facts and one assumption^[2]. He showed (1) that wire conveying a current bent back on itself produced
no action upon a proximate portion of a movable astatic circuit; (2) that if the return wire was bent zig-zag but close to the outgoing straight wire the circuit produced no action on the movable
one, showing that the effect of an element of the circuit was proportional to its projected length; (3) that a closed circuit cannot cause motion in an element of another circuit free to move in the
direction of its length; and (4) that the action of two circuits on one and the same movable circuit was null if one of the two fixed circuits was n times greater than the other but n times further
removed from the movable circuit. From this last experiment by an ingenious line of reasoning he proved that the action of an element of current on another element of current varies inversely as the square of their distance. These experiments enabled him to construct a mathematical expression of the law of action between two elements of conductors conveying currents. They also enabled him to
prove that an element of current may be resolved like a force into components in different directions, also that the force produced by any element of the circuit on an element of any other circuit
was perpendicular to the line joining the elements and inversely as the square of their distance. Also he showed that this force was an attraction if the currents in the elements were in the same
direction, but a repulsion if they were in opposite directions. From these experiments and deductions from them he built up a complete formula for the action of one element of a current of length dS
of one conductor conveying a current I upon another element dS' of another circuit conveying another current I' the elements being at a distance apart equal to r.
If θ and θ' are the angles the elements make with the line joining them, and φ the angle they make with one another, then Ampère's expression for the mechanical force f the elements exert on one
another is
$f=2II^\prime r^{-2}\left \{\cos \phi - \frac{3}{2} \cos \theta \cos \theta^\prime \right \} dS\,dS^\prime .$
This law, together with that of Laplace already mentioned, viz. that the magnetic force due to an element of length dS of a current I at a distance r, the element making an angle θ with the radius vector, is IdS sin θ/r^2, constitute the fundamental laws of electrokinetics.
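Ampère's element-force formula can be checked against his qualitative deductions; a modern Python sketch (angles in radians, currents in absolute units; a positive f is taken as attraction, negative as repulsion):

```python
import math

def ampere_force(I1, I2, r, phi, theta1, theta2, dS1, dS2):
    """Ampere's force between two current elements:
    f = 2*I*I' * r^-2 * (cos(phi) - 3/2 * cos(theta)*cos(theta')) * dS*dS',
    where phi is the angle between the elements and theta, theta' the
    angles each makes with the line joining them."""
    return (2 * I1 * I2 / r**2
            * (math.cos(phi) - 1.5 * math.cos(theta1) * math.cos(theta2))
            * dS1 * dS2)

# Parallel side-by-side elements (theta = theta' = 90 deg, phi = 0) with
# currents in the same direction attract (f > 0) ...
assert ampere_force(1, 1, 1, 0, math.pi / 2, math.pi / 2, 1, 1) > 0
# ... and repel (f < 0) when one current is reversed (phi = 180 deg),
# in agreement with deduction (1) below.
assert ampere_force(1, 1, 1, math.pi, math.pi / 2, math.pi / 2, 1, 1) < 0
```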
Ampère applied these with great mathematical skill to elucidate the mechanical actions of currents on each other, and experimentally confirmed the following deductions: (1) Currents in parallel
circuits flowing in the same direction attract each other, but if in opposite directions repel each other. (2) Currents in wires meeting at an angle attract each other more into parallelism if both
flow either to or from the angle, but repel each other more widely apart if they are in opposite directions. (3) A current in a small circular conductor exerts a magnetic force in its centre
perpendicular to its plane and is in all respects equivalent to a magnetic shell or a thin circular disk of steel so magnetized that one face is a north pole and the other a south pole, the product
of the area of the circuit and the current flowing in it determining the magnetic moment of the element. (4) A closely wound spiral current is equivalent as regards external magnetic force to a polar
magnet, such a circuit being called a finite solenoid. (5) Two finite solenoid circuits act on each other like two polar magnets, exhibiting actions of attraction or repulsion between their ends.
Ampère's theory was wholly built up on the assumption of action at a distance between elements of conductors conveying the electric currents. Faraday's researches and the discovery of the fact that
the insulating medium is the real seat of the operations necessitates a change in the point of view from which we regard the facts discovered by Ampère. Maxwell showed that in any field of magnetic
force there is a tension along the lines of force and a pressure at right angles to them; in other words, lines of magnetic force are like stretched elastic threads which tend to contract.^[3] If,
therefore, two conductors lie parallel and have currents in them in the same direction they are impressed by a certain number of lines of magnetic force which pass round the two conductors, and it is
the tendency of these to contract which draws the circuits together. If, however, the currents are in opposite directions then the lateral pressure of the similarly contracted lines of force between
them pushes the conductors apart. Practical application of Ampère's discoveries was made by W. E. Weber in inventing the electrodynamometer, and later Lord Kelvin devised ampere balances for the
measurement of electric currents based on the attraction between coils conveying electric currents.
Induction of Electric Currents.
Faraday^[4] in 1831 made the important discovery of the induction of electric currents (see ELECTRICITY). If two conductors are placed parallel to each other, and a current in one of them, called the
primary, started or stopped or changed in strength, every such alteration causes a transitory current to appear in the other circuit, called the secondary. This is due to the fact that as the primary
current increases or decreases, its own embracing magnetic field alters, and lines of magnetic force are added to or subtracted from its fields. These lines do not appear instantly in their place at
a distance, but are propagated out from the wire with a velocity equal to that of light; hence in their outward progress they cut through the secondary circuit, just as ripples made on the surface of
water in a lake by throwing a stone on to it expand and cut through a stick held vertically in the water at a distance from the place of origin of the ripples. Faraday confirmed this view of the
phenomena by proving that the mere motion of a wire transversely to the lines of magnetic force of a permanent magnet gave rise to an induced electromotive force in the wire. He embraced all the
facts in the single statement that if there be any circuit which by movement in a magnetic field, or by the creation or change in magnetic fields round it, experiences a change in the number of lines
of force linked with it, then an electromotive force is set up in that circuit which is proportional at any instant to the rate at which the total magnetic flux linked with it is changing. Hence if Z
represents the number of lines of magnetic force linked with each turn of a circuit of N turns, then −N(dZ/dt) represents the electromotive force set up in that circuit. The operation of the induction coil
(q.v.) and the transformer (q.v.) are based on this discovery.
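Faraday's statement of the law of induction lends itself to a direct numerical sketch (a modern Python illustration with arbitrary example values; flux in webers and time in seconds give the result in volts):

```python
def induced_emf(N, dZ, dt):
    """Faraday's law as stated in the text: E = -N * dZ/dt, where Z is the
    magnetic flux linked with each of the N turns of the circuit.
    The average E.M.F. over the interval dt is used here."""
    return -N * dZ / dt

# Flux through each of 200 turns rising by 0.01 weber in 0.1 second
# induces -20 volts, the minus sign expressing opposition to the change
# (Lenz's law, stated later in the article).
assert induced_emf(200, 0.01, 0.1) == -20.0
```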
Faraday also found that if a copper disk A (fig. 6) is rotated between the poles of a magnet NO so that the disk moves with its plane perpendicular to the lines of magnetic force of the field, it has
created in it an electromotive force directed from the centre to the edge or vice versa. The action of the dynamo (q.v.) depends on similar processes, viz, the cutting of the lines of magnetic force
of a constant field produced by certain magnets by certain moving conductors called armature bars or coils in which an electromotive force is thereby created.
In 1834 H. F. E. Lenz enunciated a law which connects together the mechanical actions between electric circuits discovered by Ampère and the induction of electric currents discovered by Faraday. It
is as follows: If a constant current flows in a primary circuit P, and if by motion of P a secondary current is created in a neighbouring circuit S, the direction of the secondary current will be
such as to oppose the relative motion of the circuits. Starting from this, F. E. Neumann founded a mathematical theory of induced currents, discovering a quantity M, called the "potential of one
circuit on another," or generally their "coefficient of mutual inductance." Mathematically M is obtained by taking the sum of all such quantities as ∫∫ dSdS' cos φ/r, where dS and dS' are the
elements of length of the two circuits, r is their distance, and φ is the angle which they make with one another; the summation or integration must be extended over every possible pair of elements.
If we take pairs of elements in the same circuit, then Neumann's formula gives us the coefficient of self-induction of the circuit or the potential of the circuit on itself. For the results of such
calculations on various forms of circuit the reader must be referred to special treatises.
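Neumann's double integral can also be evaluated numerically. The sketch below (loop sizes are my own choice, not from the text) discretizes two widely separated coaxial circular loops and compares the double sum of cos φ dS dS′/r with the standard far-field (dipole) value 2π²a²b²/d³, valid when the axial separation d greatly exceeds the radii:

```python
import math

def neumann_integral(a, b, d, n=400):
    """Discretized double sum of cos(phi) ds ds' / r over two coaxial
    circular loops of radii a and b whose planes are a distance d apart."""
    ds1 = 2 * math.pi * a / n            # element length on loop 1
    ds2 = 2 * math.pi * b / n            # element length on loop 2
    total = 0.0
    for i in range(n):
        ti = 2 * math.pi * (i + 0.5) / n
        for j in range(n):
            tj = 2 * math.pi * (j + 0.5) / n
            cos_phi = math.cos(ti - tj)  # angle between the two tangent elements
            r = math.sqrt(a * a + b * b - 2 * a * b * cos_phi + d * d)
            total += cos_phi * ds1 * ds2 / r
    return total

a, b, d = 0.1, 0.1, 2.0                  # radii 0.1, separation 2 (d >> a, b)
numeric = neumann_integral(a, b, d)
dipole = 2 * math.pi ** 2 * a ** 2 * b ** 2 / d ** 3   # far-field approximation
```

The discretized Neumann sum matches the dipole approximation to within about one per cent at this separation.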
H. von Helmholtz, and later on Lord Kelvin, showed that the facts of induction of electric currents discovered by Faraday could have been predicted from the electrodynamic actions discovered by
Ampère assuming the principle of the conservation of energy. Helmholtz takes the case of a circuit of resistance R in which acts an electromotive force due to a battery or thermopile. Let a magnet be
in the neighbourhood, and the potential of the magnet on the circuit be V, so that if a current I existed in the circuit the work done on the magnet in the time dt is I(dV/dt)dt. The source of
electromotive force supplies in the time dt work equal to EIdt, and according to Joule's law energy is dissipated equal to RI^2dt. Hence, by the conservation of energy,
$EIdt=RI^2dt+I(dV/dt)dt. \,$
If then E=0, we have I=-(dV/dt)/R, or there will be a current due to an induced electromotive force expressed by -dV/dt. Hence if the magnet moves, it will create a current in the wire provided that
such motion changes the potential of the magnet with respect to the circuit. This is the effect discovered by Faraday.^[5]
Oscillatory Currents.
In considering the motion of electricity in conductors we find interesting phenomena connected with the discharge of a condenser or Leyden jar (q.v.). This problem was first mathematically treated by
Lord Kelvin in 1853 (Phil. Mag., 1853, 5, p. 292).
If a conductor of capacity C has its terminals connected by a wire of resistance R and inductance L, it becomes important to consider the subsequent motion of electricity in the wire. If Q is the
quantity of electricity in the condenser initially, and q that at any time after completing the circuit, then the energy stored up in the condenser at that instant is ½q^2/C, and the energy
associated with the circuit is ½L(dq/dt)^2, and the rate of dissipation of energy by resistance is R(dq/dt)^2, since dq/dt=i is the discharge current. Hence we can construct an equation of energy
which expresses the fact that at any instant the power given out by the condenser is partly stored in the circuit and partly dissipated as heat in it. Mathematically this is expressed as follows:—
$-\frac{d}{dt} \left \lbrack \frac{1}{2} \frac{q^2}{C} \right \rbrack =\frac{d}{dt} \left \lbrack \frac{1}{2} L \left( \frac{dq}{dt} \right)^2 \right \rbrack +R \left( \frac{dq}{dt} \right)^2,$
which reduces to
$\frac{d^2q}{dt^2} +\frac{R}{L}\frac{dq}{dt} +\frac{1}{LC}q =0.$
The above equation has two solutions according as R^2/4L^2 is greater or less than 1/LC. In the first case the current i in the circuit can be expressed by the equation
$i= Q \frac{\alpha^2+\beta^2}{2\beta}e^{-\alpha t} (e^{\beta t}-e^{-\beta t}),$
where $\alpha=R/2L, \,\, \beta=\sqrt{\frac{R^2}{4L^2}-\frac{1}{LC}},$ Q is the value of q when t=0, and e is the base of Napierian logarithms; and in the second case by the equation
$i= Q \frac{\alpha^2+\beta^2}{\beta}e^{-\alpha t} \sin \beta t,$
where $\alpha=R/2L, \,\, \beta=\sqrt{\frac{1}{LC}-\frac{R^2}{4L^2}}.$
These expressions show that in the first case the discharge current of the jar is always in the same direction and is a transient unidirectional current. In the second case, however, the current is
an oscillatory current gradually decreasing in amplitude, the frequency n of the oscillation being given by the expression
$n=\frac{1}{2 \pi}\sqrt{\frac{1}{LC}-\frac{R^2}{4L^2}}.$
In those cases in which the resistance of the discharge circuit is very small, the expressions for the frequency n and for the time period of oscillation T take the simple forms $n=\frac{1}{2 \pi \sqrt{LC}}, \mbox{ or } T=1/n= 2 \pi \sqrt{LC}.$
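The oscillatory case can be verified by integrating the circuit equation d²q/dt² + (R/L)dq/dt + q/LC = 0 directly. In the sketch below the component values are arbitrary choices satisfying 1/LC > R²/4L², and the discharge current -dq/dt is compared against the closed form above:

```python
import math

L_h, C_f, R_ohm, Q0 = 0.5, 2e-3, 4.0, 1e-3   # assumed henries, farads, ohms, coulombs

alpha = R_ohm / (2 * L_h)
beta = math.sqrt(1.0 / (L_h * C_f) - alpha ** 2)   # oscillatory case: positive radicand
freq = beta / (2 * math.pi)                         # n = (1/2pi) sqrt(1/LC - R^2/4L^2)

def discharge_current(t):
    # closed-form oscillatory discharge current from the article
    return Q0 * (alpha ** 2 + beta ** 2) / beta * math.exp(-alpha * t) * math.sin(beta * t)

def simulate(t_end, dt=1e-6):
    """RK4 integration of q'' + (R/L) q' + q/(LC) = 0 with q(0)=Q0, q'(0)=0.
    Returns the discharge current -dq/dt at t_end."""
    def deriv(q, v):
        return v, -(R_ohm / L_h) * v - q / (L_h * C_f)
    q, v = Q0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1q, k1v = deriv(q, v)
        k2q, k2v = deriv(q + 0.5 * dt * k1q, v + 0.5 * dt * k1v)
        k3q, k3v = deriv(q + 0.5 * dt * k2q, v + 0.5 * dt * k2v)
        k4q, k4v = deriv(q + dt * k3q, v + dt * k3v)
        q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return -v
```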
The above investigation shows that if we construct a circuit consisting of a condenser and inductance placed in series with one another, such circuit has a natural electrical time period of its own
in which the electrical charge in it oscillates if disturbed. It may therefore be compared with a pendulum of any kind which when displaced oscillates with a time period depending on its inertia and
on its restoring force.
The study of these electrical oscillations received a great impetus after H. R. Hertz showed that when taking place in electric circuits of a certain kind they create electromagnetic waves (see
ELECTRIC WAVES) in the dielectric surrounding the oscillator, and an additional interest was given to them by their application to telegraphy. If a Leyden jar and a circuit of low resistance but some
inductance in series with it are connected across the secondary spark gap of an induction coil, then when the coil is set in action we have a series of bright noisy sparks, each of which consists of
a train of oscillatory electric discharges from the jar. The condenser becomes charged as the secondary electromotive force of the coil is created at each break of the primary current, and when the
potential difference of the condenser coatings reaches a certain value determined by the spark-ball distance a discharge happens. This discharge, however, is not a single movement of electricity in
one direction but an oscillatory motion with gradually decreasing amplitude. If the oscillatory spark is photographed on a revolving plate or a rapidly moving film, we have evidence in the photograph
that such a spark consists of numerous intermittent sparks gradually becoming feebler. As the coil continues to operate, these trains of electric discharges take place at regular intervals. We can
cause a train of electric oscillations in one circuit to induce similar oscillations in a neighbouring circuit, and thus construct an oscillation transformer or high frequency induction coil.
Alternating Currents.
The study of alternating currents of electricity began to attract great attention towards the end of the 19th century by reason of their application in electrotechnics and especially to the
transmission of power. A circuit in which a simple periodic alternating current flows is called a single phase circuit. The important difference between such a form of current flow and steady current
flow arises from the fact that if the circuit has inductance then the periodic electric current in it is not in step with the terminal potential difference or electromotive force acting in the
circuit, but the current lags behind the electromotive force by a certain fraction of the periodic time called the "phase difference." If two alternating currents having a fixed difference in phase
flow in two connected separate but related circuits, the two are called a two-phase current. If three or more single-phase currents preserving a fixed difference of phase flow in various parts of a
connected circuit, the whole taken together is called a polyphase current. Since an electric current is a vector quantity, that is, has direction as well as magnitude, it can most conveniently be
represented by a line denoting its maximum value, and if the alternating current is a simple periodic current then the root-mean-square or effective value of the current is obtained by dividing the
maximum value by √2. Accordingly when we have an electric circuit or circuits in which there are simple periodic currents we can draw a vector diagram, the lines of which represent the relative
magnitudes and phase differences of these currents.
A vector can most conveniently be represented by a symbol such as a+ιb, where a stands for a length of a units measured horizontally and b for a length of b units measured vertically, and the symbol ι is a sign of perpendicularity, equivalent analytically^[6] to √−1. Accordingly if E represents the periodic electromotive force (maximum value) acting in a circuit of resistance R and inductance L
and frequency n, and if the current considered as a vector is represented by I, it is easy to show that a vector equation exists between these quantities as follows:—
$E=RI+\iota 2 \pi n LI. \,$
Since the absolute magnitude of a vector a+ιb is √(a^2 +b^2), it follows that considering merely magnitudes of current and electromotive force and denoting them by symbols (E) (I), we have the
following equation connecting (I) and (E):—
$(I)=\frac{(E)}{\sqrt{R^2+p^2L^2}},$
where p stands for 2πn. If the above equation is compared with the symbolic expression of Ohm's law, it will be seen that the quantity √(R^2+p^2L^2) takes the place of resistance R in the expression
of Ohm. This quantity √(R^2+p^2L^2) is called the "impedance" of the alternating circuit. The quantity pL is called the "reactance" of the alternating circuit, and it is therefore obvious that the
current in such a circuit lags behind the electromotive force by an angle, called the angle of lag, the tangent of which is pL/R.
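As a quick numeric illustration (component values and EMF below are invented):

```python
import math

R_ohm, L_h, n_hz = 10.0, 0.05, 50.0   # assumed resistance, inductance, frequency
E_volts = 230.0                        # assumed electromotive force magnitude

p = 2 * math.pi * n_hz
reactance = p * L_h                               # pL
impedance = math.sqrt(R_ohm ** 2 + reactance ** 2)  # takes the place of R in Ohm's law
current = E_volts / impedance                     # (I) = (E) / sqrt(R^2 + p^2 L^2)
lag = math.atan(reactance / R_ohm)                # angle of lag: tan(lag) = pL/R
```

Because the impedance is never smaller than R, the alternating current is always less than the steady current E/R would be.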
Currents in Networks of Conductors.
In dealing with problems connected with electric currents we have to consider the laws which govern the flow of currents in linear conductors (wires), in plane conductors (sheets), and throughout the
mass of a material conductor^[7]. In the first case consider the collocation of a number of linear conductors, such as rods or wires of metal, joined at their ends to form a network of conductors,
The network consists of a number of conductors joining certain points and forming meshes. In each conductor a current may exist, and along each conductor there is a fall of potential, or an active
electromotive force may be acting in it. Each conductor has a certain resistance. To find the current in each conductor when the individual resistances and electromotive forces are given, proceed as
follows:— Consider any one mesh. The sum of all the electromotive forces which exist in the branches bounding that mesh must be equal to the sum of all the products of the resistances into the
currents flowing along them, or Σ(E)=Σ(C.R.). Hence if we consider each mesh as traversed by imaginary currents all circulating in the same direction, the real currents are the sums or differences of
these imaginary cyclic currents in each branch. Hence we may assign to each mesh a cycle symbol x, y, z, &c., and form a cycle equation. Write down the cycle symbol for a mesh and prefix as
coefficient the sum of all the resistances which bound that cycle, then subtract the cycle symbols of each adjacent cycle, each multiplied by the value of the bounding or common resistances, and
equate this sum to the total electromotive force acting round the cycle. Thus if x y z are the cycle currents, and a b c the resistances bounding the mesh x, and b and c those separating it from the
meshes y and z, and E an electromotive force in the branch a, then we have formed the cycle equation x(a+b+c)-by-cz=E. For each mesh a similar equation may be formed. Hence we have as many linear
equations as there are meshes, and we can obtain the solution for each cycle symbol, and therefore for the current in each branch.
The solution giving the current in any branch of the network is therefore always in the form of the quotient of two determinants. The solution of the well-known problem of finding the current in
the galvanometer circuit of the arrangement of linear conductors called Wheatstone's Bridge is thus easily obtained. For if we call the cycles (see fig. 7) (x+y), y and z, and the resistances P, Q,
R, S, G and B, and if E be the electromotive force in the battery circuit, we have the cycle equations
$(P+G+R)(x+y)-Gy-Rz=0, \,$
$(Q+G+S)y-G(x+y)-Sz=0, \,$
$(R+S+B)z-R(x+y)-Sy=E. \,$
From these we can easily obtain the solution for (x+y)—y=x, which is the current through the galvanometer circuit in the form
$x=\frac{E(PS-QR)}{\Delta},$
where Δ is a certain function of P, Q, R, S, B and G.
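The three cycle equations can be solved numerically. The sketch below (resistance and EMF values are arbitrary) applies Cramer's rule to the 3×3 system and confirms that the galvanometer current vanishes at balance, when PS = QR:

```python
def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def galvanometer_current(P, Q, R, S, G, B, E):
    """Solve the cycle equations for (x+y), y, z and return x = (x+y) - y,
    the current through the galvanometer branch."""
    A = [[P + G + R, -G,        -R       ],
         [-G,        Q + G + S, -S       ],
         [-R,        -S,        R + S + B]]
    rhs = [0.0, 0.0, E]
    D = det3(A)
    def replaced(col):
        return [[rhs[r] if c == col else A[r][c] for c in range(3)] for r in range(3)]
    u = det3(replaced(0)) / D     # u = x + y
    y = det3(replaced(1)) / D
    return u - y

balanced = galvanometer_current(10, 10, 10, 10, G=50, B=5, E=2)     # PS = QR
unbalanced = galvanometer_current(10, 20, 10, 10, G=50, B=5, E=2)   # PS != QR
```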
Currents in Sheets.
In the case of current flow in plane sheets, we have to consider certain points called sources at which the current flows into the sheet, and certain points called sinks at which it leaves. We may
investigate, first, the simple case of one source and one sink in an infinite plane sheet of thickness δ and conductivity k. Take any point P in the plane at distances $r_1$ and $r_2$ from the source and
sink respectively. The potential V at P is obviously given by
$V=\frac{Q}{2 \pi k \delta} \log_e \frac{r_1}{r_2},$
where Q is the quantity of electricity supplied by the source per second. Hence the equation to the equipotential curve is $r_1/r_2=$ a constant.
If we take a point half-way between the sink and the source as the origin of a system of rectangular co-ordinates, and if the distance between sink and source is equal to p, and the line joining them
is taken as the axis of x, then the equation to the equipotential line is
$\frac{y^2+(x+\frac{1}{2}p)^2}{y^2+(x-\frac{1}{2}p)^2}=\mbox{a constant}$
This is the equation of a family of circles having the axis of y for a common radical axis, one set of circles surrounding the sink and another set of circles surrounding the source. In order to
discover the form of the stream of current lines we have to determine the orthogonal trajectories to this family of coaxial circles. It is easy to show that the orthogonal trajectory of the system of
circles is another system of circles all passing through the sink and the source, and as a corollary of this fact, that the electric resistance of a circular disk of uniform thickness is the same
between any two points taken anywhere on its circumference as sink and source. These equipotential lines may be delineated experimentally by attaching the terminals of a battery or batteries to small
wires which touch at various places a sheet of tinfoil. Two wires attached to a galvanometer may then be placed on the tinfoil, and one may be kept stationary and the other may be moved about, so
that the galvanometer is not traversed by any current. The moving terminal then traces out an equipotential curve. If there are n sinks and sources in a plane conducting sheet, and if r, r', r'' be
the distances of any point from the sinks, and t, t', t'' the distances of the sources, then
$\frac{r \, r^{\prime} \,r^{\prime \prime}}{t \, t^{\prime} \,t^{\prime \prime}}=\mbox{a constant},$
is the equation to the equipotential lines. The orthogonal trajectories or stream lines have the equation
$\Sigma(\theta-\theta^\prime)=\mbox{a constant}, \,$
where θ and θ' are the angles which the lines drawn from any point in the plane to the sink and corresponding source make with the line joining that sink and source. Generally it may be shown that if
there are any number of sinks and sources in an infinite plane-conducting sheet, and if r, θ are the polar co-ordinates of any one, then the equation to the equipotential surfaces is given by the expression
$\Sigma(A \log_e r)=\mbox{a constant}, \,$
where A is a constant; and the equation to the stream or current lines is
$\Sigma(\theta)=\mbox{a constant}. \,$
In the case of electric flow in three dimensions the electric potential must satisfy Laplace's equation, and a solution is therefore found in the form Σ(A/r)=a constant, as the equation to an
equipotential surface, where r is the distance of any point on that surface from a source or sink.
Convection Currents.
The subject of convection electric currents has risen to great importance in connexion with modern electrical investigations. The question whether a statically electrified body in motion creates a
magnetic field is of fundamental importance. Experiments to settle it were first undertaken in the year 1876 by H. A. Rowland, at a suggestion of H. von Helmholtz.^[8] After preliminary experiments,
Rowland's first apparatus for testing this hypothesis was constructed, as follows:— An ebonite disk was covered with radial strips of gold-leaf and placed between two other metal plates which acted
as screens. The disk was then charged with electricity and set in rapid rotation. It was found to affect a delicately suspended pair of astatic magnetic needles hung in proximity to the disk just as
would, by Oersted's rule, a circular electric current coincident with the periphery of the disk. Hence the statically-charged but rotating disk becomes in effect a circular electric current.
The experiments were repeated and confirmed by W. C. Röntgen (Wied. Ann., 1888, 35, p. 264; 1890, 40, p. 93) and by F. Himstedt (Wied. Ann., 1889, 38, p. 560). Later V. Crémieu again repeated them
and obtained negative results (Com. rend., 1900, 130, p. 1544, and 131, pp. 578 and 797; 1901, 132, pp. 327 and 1108). They were again very carefully reconducted by H. Pender (Phil. Mag., 1901, 2, p.
179) and by E. P. Adams (id. ib., 285). Pender's work showed beyond any doubt that electric convection does produce a magnetic effect. Adams employed charged copper spheres rotating at a high speed
in place of a disk, and was able to prove that the rotation of such spheres produced a magnetic field similar to that due to a circular current and agreeing numerically with the theoretical value. It
has been shown by J. J. Thomson (Phil. Mag., 1881, 2, p. 236) and O. Heaviside (Electrical Papers, vol. ii. p. 205) that an electrified sphere, moving with velocity v and carrying a quantity of
electricity q, should produce a magnetic force H, at a point at a distance p from the centre of the sphere, equal to qv sin θ/p^2, where θ is the angle between the direction of p and the motion of
the sphere. Adams found the field produced by a known electric charge rotating at a known speed had a strength not very different from that predetermined by the above formula. An observation recorded
by R. W. Wood (Phil. Mag., 1902, 2, p. 659) provides a confirmatory fact. He noticed that if carbon-dioxide strongly compressed in a steel bottle is allowed to escape suddenly the cold produced
solidifies some part of the gas, and the issuing jet is full of particles of carbon-dioxide snow. These by friction against the nozzle are electrified positively. Wood caused the jet of gas to pass
through a glass tube 2.5 mm. in diameter, and found that these particles of electrified snow were blown through it with a velocity of 2000 ft. a second. Moreover, he found that a magnetic needle hung
near the tube was deflected as if held near an electric current. Hence the positively electrified particles in motion in the tube create a magnetic field round it.
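The Thomson-Heaviside expression H = qv sin θ/p² used above is easy to encode. Units are deliberately left abstract, since the article predates modern SI conventions, and the numbers below are made up:

```python
import math

def convection_field(q, v, p, theta):
    # H = q v sin(theta) / p^2: charge q moving with velocity v, observed at
    # distance p, with theta the angle between p and the direction of motion
    return q * v * math.sin(theta) / p ** 2

broadside = convection_field(1.0, 2.0, 0.5, math.pi / 2)  # maximal, at 90 degrees
axial = convection_field(1.0, 2.0, 0.5, 0.0)              # zero along the line of motion
```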
Nature of an Electric Current.
The question, What is an electric current? is involved in the larger question of the nature of electricity. Modern investigations have shown that negative electricity is identical with the electrons
or corpuscles which are components of the chemical atom (see MATTER and ELECTRICITY). Certain lines of argument lead to the conclusion that a solid conductor is not only composed of chemical atoms,
but that there is a certain proportion of free electrons present in it, the electronic density or number per unit of volume being determined by the material, its temperature and other physical
conditions. If any cause operates to add or remove electrons at one point there is an immediate diffusion of electrons to re-establish equilibrium, and this electronic movement constitutes an
electric current. This hypothesis explains the reason for the identity between the laws of diffusion of matter, of heat and of electricity. Electromotive force is then any cause making or tending to
make an inequality of electronic density in conductors, and may arise from differences of temperature, i.e. thermoelectromotive force (see THERMOELECTRICITY), or from chemical action when part of the
circuit is an electrolytic conductor, or from the movement of lines of magnetic force across the conductor.
For additional information the reader may be referred to the following books:
• M. Faraday, Experimental Researches in Electricity (3 vols., London, 1839, 1844, 1855)
• J. Clerk Maxwell, Electricity and Magnetism (2 vols., Oxford, 1892)
• W. Watson and S. H. Burbury, Mathematical Theory of Electricity and Magnetism, vol. ii. (Oxford, 1889)
• E. Mascart and J. Joubert, A Treatise on Electricity and Magnetism (2 vols., London, 1883)
• A. Hay, Alternating Currents (London, 1905)
• W. G. Rhodes, An Elementary Treatise on Alternating Currents (London, 1902)
• D. C. Jackson and J. P. Jackson, Alternating Currents and Alternating Current Machinery (1896, new ed. 1903)
• S. P. Thompson, Polyphase Electric Currents (London, 1900); Dynamo-Electric Machinery, vol. ii., "Alternating Currents" (London, 1905)
• E. E. Fournier d’Albe, The Electron Theory (London, 1906)
(J. A. F.)
CM liftings of abelian varieties and liftings of Frobenius
It is well-known that if $A$ is an ordinary abelian variety over a finite perfect field $k$ of characteristic $p>0$ and $W=W(k)$ is the ring of Witt vectors over $k$, then the canonical lifting $A_{can}$ of $A$ to $W$ is characterized by the fact that every endomorphism $f$ of $A$ lifts to an endomorphism of $A_{can}$. In other words, the natural map $End(A_{can})\to End(A)$ is surjective.
Now is there a characterization of CM liftings of abelian varieties (not necessarily ordinary) through liftings of endomorphisms? In particular, is there a characterization of CM liftings based on liftings of Frobenius?
ag.algebraic-geometry characteristic-p abelian-varieties abelian-schemes
In your discussion of canonical lifts, do you intend $A$ to be ordinary? (Not that it necessarily matters for your actual question ... .) – Emerton Nov 9 '11 at 20:49
yes, you are right! I forgot to write ordinary. But in my question I really want to deal with general $A$, not necessarily ordinary. – Cyrus Nov 9 '11 at 20:55
Your question seems unlikely to have a positive answer. Supersingular elliptic curves have CM liftings, and also non CM liftings, but if $k$ is sufficiently large then Frobenius will be a power of
$p$. – ulrich Nov 10 '11 at 4:52
1 Answer
It is possible to do this when the p-rank is coprime to p. In this case, a Frobenius lift gives a full endomorphism algebra.
I also would recommend the book "CM liftings" (B. Conrad, C.-L. Chai, F. Oort): http://math.stanford.edu/~conrad/
Triangles and Definite Integrals
I'm trying to figure out how to integrate a data set, without knowing the function. While doing this, I got to thinking about this:
If the definite integral of a function can be represented by the area under that function, bound by the x axis, then shouldn't:
[itex]\int_{a}^{b}2x\,\mathrm{d}x = A = \tfrac{1}{2}b\cdot h[/itex]
Where the integral is bound by a = 0 and b = 2, and the triangle's base is 2 and height is 2.
but rather,
[itex]\int_{0}^{2}2x\,\mathrm{d}x = 4[/itex]
[itex]A = 2[/itex]
Where's the discrepancy?
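One way to check both quantities numerically (a sketch of my own, not part of the thread): note that the region under f(x)=2x on [0,2] is a triangle whose height is f(2)=4, not 2.

```python
def riemann_sum(f, a, b, n=100_000):
    # midpoint Riemann sum approximating the definite integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x
integral = riemann_sum(f, 0.0, 2.0)   # definite integral of 2x from 0 to 2
triangle_area = 0.5 * 2.0 * f(2.0)    # base 2, height f(2) = 4
```

Both quantities come out to 4, so the integral and the triangle area agree once the correct height is used.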
Preparing for College Algebra
Although there is not an official prerequisite for the MAT1050—College Algebra course, it is recommended that learners have a basic understanding of these mathematical concepts:
• Adding, subtracting, dividing, and multiplying fractions.
• Adding, subtracting, dividing, and multiplying negative numbers.
• Adding, subtracting, dividing, and multiplying decimals.
• Order of operations.
To brush up on your basic math skills and knowledge of these concepts, we recommend reviewing the following tutorials:
Under the Fractions category, review each of these areas: Definitions, Reducing Fractions, Adding and Subtracting, Multiplying and Dividing.
Review all sections: Reducing Fractions, Mixed Numbers and Improper Fractions, Multiplying and Dividing Fractions, Adding and Subtracting Fractions, and Adding Polynomial Fractions.
Negative Numbers
Adding and Subtracting Integers—Math.com
Multiplying and Dividing Integers—Math.com
Negative Numbers—Purplemath.com
Adding/Subtracting Decimals—Math.com
Multiplying Decimals—Math.com
Dividing Decimals—Math.com
Order of Operations
Order of Operations—Math.com
Order of Operations—Purplemath.com
Order of Operations Interactive
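These skills can also be rehearsed in a few lines of Python (an illustration of my own, not one of the linked Capella resources), using exact fraction arithmetic:

```python
from fractions import Fraction

# Adding and subtracting fractions (common denominators handled exactly)
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)
assert Fraction(3, 4) - Fraction(1, 4) == Fraction(1, 2)

# Multiplying and dividing fractions
assert Fraction(2, 3) * Fraction(3, 4) == Fraction(1, 2)
assert Fraction(1, 2) / Fraction(1, 4) == 2

# Negative numbers
assert -3 * -4 == 12
assert -3 + -4 == -7

# Order of operations: parentheses, exponents, then multiply/divide, then add/subtract
assert 2 + 3 * 4 == 14      # multiplication before addition
assert (2 + 3) * 4 == 20    # parentheses first
assert 2 ** 3 * 2 == 16     # exponent before multiplication
```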
FRM Fun 6 (Mon)
Suzanne Evans Administrator
FRM Fun 6 (optimal versus effective hedge)
In the FRM, both authors Hull and Geman define the futures basis (b) = S(t) - F(t). In words, the basis is the difference between the spot price and one of the (any of several quotable) futures
Geman further defines basis risk as the variance[S(t) - F(t)]; i.e., the variance of the difference between two random variables. Then Geman defines a "classical measure of effectiveness of
hedging a spot position with Futures contracts" as given by h = 1 - variance[basis]/variance[S(t)] = 1 - variance[S - F]/variance[S(t)].
While Hull defines the minimum variance hedge ratio (aka, optimal) as given by h* = correlation(S,F)*volatility(S)/volatility(F).
What is the relationship between Geman's hedge effectiveness (h) and Hull's optimal (h*), if any? for example, are they demonstrably equivalent? unrelated? is one superior?
ShaktiRathore Well-Known Member
Now According to my solution:
the Geman-defined h = 1 - Var(S-F)/Var(S)
or h = 1 - (Var(S) + Var(F) - 2*corr(S,F)*sqrt(Var(S)*Var(F)))/Var(S)
or h = 1 - 1 - (Var(F) - 2*corr(S,F)*sqrt(Var(S)*Var(F)))/Var(S)
or h = (-Var(F) + 2*corr(S,F)*sqrt(Var(S)*Var(F)))/Var(S)
or h = -(Var(F)/Var(S)) + 2*corr(S,F)*sqrt(Var(S)*Var(F))/Var(S)
or h = -(Var(F)/Var(S)) + 2*corr(S,F)*sqrt(Var(F)/Var(S)) ...(1)
for minimum hedge effectiveness h(opt.) dh/dS=0
thus from above
=> after some calculation corr(S,F)=sqrt(VarF/VarS)=vol(F)/vol(S)...(2)
Putting (2) in (1) implies
h(opt.)=-corr(S,F)^2+2*corr(S,F)*corr(S,F)=corr(S,F)^2 ..(3)
This is the optimal hedge effectiveness required to hedge the spot price according to geman.
According to Hull optimal hedge ratio h*= corr(S,F)*(vol(S)/vol(F))…(4)
From (3) and (4),
Also the relation between the Geman hedge effectiveness and Hull's minimum hedge ratio is
From (6) it is clear that the Geman and Hull hedge ratios are not equivalent, and also that the optimal hedge ratio arrived at by Geman is different from Hull's minimum hedge ratio. For the two optimal
ratios to be equal, (2), (3) and (4) imply vol(S)=vol(F). h(opt) measures the correlation of spot and futures prices directly and hence gives a better picture of how one moves relative to another and how
effective is futures in covering up the spot price movements. A correlation of ±1 implies h(opt)=1, which is a perfect hedge, and lower correlation implies a less-than-perfect hedge.
Whereas in case of Hull’s hedge ratio it also measures the relative volatility of spot price w.r.t volatility of futures which might not give better picture of relative movement of spot and
futures prices directly but gives magnitude of optimal hedge ratio which gives perfect hedging.
David Harper CFA FRM CIPM David Harper
Hi ShaktiRathore,
Star for the win (ie, you were just entered in the weekly drawing for two $15 gift certificates), and THANK YOU for your generous contribution! I am trying to figure out whether we differ, as I
get a little stuck on the advisability of dh/dS ... wouldn't this be finding the (S) that maximizes/minimizes (h)? If so, I am not following this step ... below is how I think about it, which may
be consistent with your solution. If you observe the reconciliation, please do let me know? Thanks!
I agree with you they are not equivalent because Geman's hedge effectiveness is simply given the variance of a 1:1 hedge (1 future: 1 spot).
In this sense, Geman's hedge effectiveness could be generalized with:
var (s - h*f) = var(s) + var (h*f) - 2*cov(s,h*f) =var(s) +h^2*var(f) - 2*h*cov(s,f); i.e., h is a constant, which Geman's formula simply assumes is 1.0 in a 1:1 hedge.
The inclusion of (h) allows this measure of risk (variance of the basis) to generalize such that there is an (h) which minimizes the variance. This optimal (h) is (h*), which is different than
1.0, or as you say, only 1.0 under a restricted special case. If we use (h*) then:
if h = h* = rho*SD(s)/SD(f), then the hedge effectiveness
= 1 - var(s - h*f)/var(s)
= (2*(h*)*cov(s,f) - (h*)^2*var(f)) / var(s)
= (2*rho*SD(s)/SD(f)*rho*SD(s)*SD(f) - [rho*SD(s)/SD(f)]^2*var(f)) / var(s)
= (2*rho*SD(s)/SD(f)*rho*SD(s)*SD(f) - rho^2*SD(s)^2/SD(f)^2*var(f)) / var(s)
= (2*rho*SD(s)*rho*SD(s)- rho^2*SD(s)^2) / var(s)
= (2*rho^2*SD(s)^2- rho^2*SD(s)^2) / var(s)
= rho^2*SD(s)^2 / var(s)
= rho^2
So I conclude:
□ Geman's hedge effectiveness, as it assumes 1:1, will not be optimal (but merely returns the variance of the basis in the 1:1 case)
□ But it generalizes to: var (s - h*f) .... allowing it to accept a hedge ratio (h) that is different than 1.0
□ Which will indeed be minimized by Hull's (h*)
□ As Hull's optimal hedge (h*) will minimize Geman's generalized hedge effectiveness, they are different but totally consistent in their treatment of basis risk as the variance of the basis.
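The reconciliation above can be sanity-checked by simulation (a sketch of my own; the distribution parameters are invented). It draws correlated spot and futures changes, applies Hull's h*, and confirms that the generalized measure 1 - var(S - hF)/var(S) lands near rho^2, while the 1:1 Geman hedge does worse whenever h* is far from 1:

```python
import math
import random

random.seed(42)
rho, sd_s, sd_f, n = 0.8, 2.0, 3.0, 100_000   # assumed correlation, vols, sample size

# correlated normal draws for spot and futures price changes
s_vals, f_vals = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    s_vals.append(sd_s * z1)
    f_vals.append(sd_f * (rho * z1 + math.sqrt(1 - rho ** 2) * z2))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def effectiveness(h):
    # generalized Geman measure: 1 - var(S - h*F) / var(S)
    basis = [s - h * f for s, f in zip(s_vals, f_vals)]
    return 1 - variance(basis) / variance(s_vals)

h_star = rho * sd_s / sd_f          # Hull's minimum-variance hedge ratio
eff_optimal = effectiveness(h_star)   # should be close to rho^2 = 0.64
eff_one_to_one = effectiveness(1.0)   # Geman's 1:1 case, here suboptimal
```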
Mastering openFrameworks: Creative Coding Demystified
A practical guide to creating audiovisual interactive projects with low-level data processing using openFrameworks
by Denis Perevalov | October 2013 | Open Source Web Graphics & Video
This article is written by Denis Perevalov, author of the book Mastering openFrameworks: Creative Coding Demystified. Drawing is one of the main capabilities of openFrameworks. Here, we consider the
basics of 2D graphics, including drawing geometric primitives, working with colors, and drawing in an offscreen buffer. In this article we will cover:
• Geometric primitives
• Using ofPoint
• Coordinate system transformations
• Colors
• Using FBO for offscreen drawings
• Playing with numerical instability
• Screen grabbing
Drawing basics
The screens of modern computers consist of a number of small squares, called pixels ( picture elements ). Each pixel can light in one color. You create pictures on the screen by changing the colors
of the pixels.
Graphics based on pixels is called raster graphics. Another kind of graphics is vector graphics, which is based on primitives such as lines and circles. Today, most computer screens are arrays of pixels and represent raster graphics. But images based on vector graphics (vector images) are still used in computer graphics. Vector images are drawn on raster screens using the rasterization procedure.
The openFrameworks project can draw on the whole screen (when it is in fullscreen mode) or only in a window (when fullscreen mode is disabled). For simplicity, we will call the area where
openFrameworks can draw, the screen . The current width and height of the screen in pixels may be obtained using the ofGetWidth() and ofGetHeight() functions.
For pointing the pixels, openFrameworks uses the screen's coordinate system. This coordinate system has its origin on the top-left corner of the screen. The measurement unit is a pixel. So, each
pixel on the screen with width w and height h pixels can be pointed by its coordinates (x, y), where x and y are integer values lying in the range 0 to w-1 and from 0 to h-1 respectively.
In this article, we will deal with two-dimensional (2D) graphics, which is a number of methods and algorithms for drawing objects on the screen by specifying the two coordinates (x, y) in pixels.
The other kind of graphics is three-dimensional (3D) graphics, which represents objects in 3D space using three coordinates (x, y, z) and performs rendering on the screen using some kind of
projection of space (3D) to the screen (2D).
The background color of the screen
The drawing on the screen in openFrameworks should be performed in the testApp::draw() function. Before this function is called by openFrameworks, the entire screen is filled with a fixed color,
which is set by the function ofSetBackground( r, g, b ). Here r, g, and b are integer values corresponding to red, green, and blue components of the background color in the range 0 to 255. Note that
each ofSetBackground() call fills the screen with the specified color immediately.
You can make a gradient background using the ofBackgroundGradient() function.
You can set the background color just once in the testApp::setup() function, but we often call ofSetBackground() in the beginning of the testApp::draw() function to not mix up the setup stage and the
drawing stage.
Pulsating background example
You can think of ofSetBackground() as an opportunity to make the simplest drawings, as if the screen consists of one big pixel. Consider an example where the background color slowly changes from
black to white and back using a sine wave.
This is example 02-2D/01-PulsatingBackground.
The project is based on the openFrameworks emptyExample example. Copy the folder with the example and rename it. Then fill the body of the testApp::draw() function with the following code:
float time = ofGetElapsedTimef(); //Get time in seconds

//Get periodic value in [-1,1], with wavelength equal to 1 second
float value = sin( time * M_TWO_PI );

//Map value from [-1,1] to [0,255]
float v = ofMap( value, -1, 1, 0, 255 );

ofBackground( v, v, v ); //Set background color
This code gets the time lapsed from the start of the project using the ofGetElapsedTimef() function, and uses this value for computing value = sin( time * M_TWO_PI ). Here, M_TWO_PI is an
openFrameworks constant equal to 2π; that is, approximately 6.283185. So, time * M_TWO_PI increases by 2π per second. The value 2π is equal to the period of the sine wave function, sin(). So, the
argument of sin(...) will go through its wavelength in one second, hence value = sin(...) will run from -1 to 1 and back. Finally, we map the value to v, which changes in range from 0 to 255 using
the ofMap() function, and set the background to a color with red, green, and blue components equal to v.
Run the project; you will see how the screen color pulsates by smoothly changing its color from black to white and back.
Replace the last line, which sets the background color, with ofBackground( v, 0, 0 ); and the color will pulsate from black to red.
Replace the argument of the sin(...) function with the formula time * M_TWO_PI * 2 and the pulsating speed doubles.
We will return to background in the Drawing with an uncleared background section. Now we will consider how to draw geometric primitives.
Geometric primitives
In this article we will deal with 2D graphics. 2D graphics can be created in the following ways:
• Drawing geometric primitives such as lines, circles, and other curves and shapes like triangles and rectangles. This is the most natural way of creating graphics by programming. Generative art
and creative coding projects are often based on this graphics method. We will consider this in the rest of the article.
• Drawing images lets you add more realism to the graphics.
• Setting the contents of the screen directly, pixel-by-pixel, is the most powerful way of generating graphics. But it is harder to use for simple things like drawing curves. So, such a method is
normally used together with both of the previous methods. A somewhat fast technique for drawing a screen pixel-by-pixel consists of filling an array with pixel colors, loading it into an image,
and drawing the image on the screen. The fastest, but a little harder, technique is using fragment shaders.
openFrameworks has the following functions for drawing primitives:
• ofLine( x1, y1, x2, y2 ): This function draws a line segment connecting points (x1, y1) and (x2, y2)
• ofRect( x, y, w, h ): This function draws a rectangle with the top-left corner (x, y), width w, and height h
• ofTriangle( x1, y1, x2, y2, x3, y3 ): This function draws a triangle with vertices (x1, y1), (x2, y2), and (x3, y3)
• ofCircle( x, y, r ): This function draws a circle with center (x, y) and radius r
openFrameworks has no special function for changing the color of a separate pixel. To do so, you can draw the pixel (x, y) as a rectangle with width and height equal to 1 pixel; that is, ofRect( x,
y, 1, 1 ). This is a very slow method, but we sometimes use it for educational and debugging purposes.
All the coordinates in these functions are float type. Although the coordinates (x, y) of a particular pixel on the screen are integer values, openFrameworks uses float numbers for drawing geometric
primitives. This is because a video card can draw objects with the float coordinates using modeling, as if the line goes between pixels. So the resultant picture of drawing with float coordinates is
smoother than with integer coordinates.
Using these functions, it is possible to create simple drawings.
The simplest example of a flower
Let's consider the example that draws a circle, line, and two triangles, which forms the simplest kind of flower.
This is example 02-2D/02-FlowerSimplest.
This example project is based on the openFrameworks emptyExample project. Fill the body of the testApp::draw() function with the following code:
ofBackground( 255, 255, 255 ); //Set white background
ofSetColor( 0, 0, 0 ); //Set black color
ofCircle( 300, 100, 40 ); //Blossom
ofLine( 300, 100, 300, 400 ); //Stem
ofTriangle( 300, 270, 300, 300, 200, 220 ); //Left leaf
ofTriangle( 300, 270, 300, 300, 400, 220 ); //Right leaf
On running this code, you will see the following picture of the "flower":
Controlling the drawing of primitives
There are a number of functions for controlling the parameters for drawing primitives.
• ofSetColor( r, g, b ): This function sets the color of drawing primitives, where r, g, and b are integer values corresponding to red, green, and blue components of the color in the range 0 to 255. After calling ofSetColor(), all the primitives will be drawn using this color until another ofSetColor() call. We will discuss colors in more detail in the Colors section.
• ofFill() and ofNoFill(): These functions enable and disable filling shapes like circles, rectangles, and triangles. After calling ofFill() or ofNoFill(), all the primitives will be drawn filled
or unfilled until the next function is called. By default, the shapes are rendered filled with color. Add the line ofNoFill(); before ofCircle(...); in the previous example and you will see all
the shapes unfilled, as follows:
• ofSetLineWidth( lineWidth ): This function sets the width of the rendered lines to the lineWidth value, which has type float. The default value is 1.0, and calling this function with larger
values will result in thick lines. It only affects drawing unfilled shapes. The line thickness is changed up to some limit depending on the video card. Normally, this limit is not less than 8.0.
Add the line ofSetLineWidth( 7 ); before the line drawing in the previous example, and you will see the flower with a thick vertical line, whereas all the filled shapes will remain unchanged.
Note that we use the value 7; this is an odd number, so it gives symmetrical line thickening.
Note that this method for obtaining thick lines is simple but not perfect, because adjacent lines are drawn quite crudely. For obtaining smooth thick lines, you should draw these as filled shapes.
• ofSetCircleResolution( res ): This function sets the circle resolution; that is, the number of line segments used for drawing circles to res. The default value is 20, but with such settings only
small circles look good. For bigger circles, it is recommended to increase the circle resolution; for example, to 40 or 60. Add the line ofSetCircleResolution( 40 ); before ofCircle(...); in the
previous example and you will see a smoother circle. Note that a large res value can decrease the performance of the project, so if you need to draw many small circles, consider using smaller res values.
• ofEnableSmoothing() and ofDisableSmoothing(): These functions enable and disable line smoothing. Such settings can be controlled by your video card. In our example, calling these functions will
not have any effect.
Performance considerations
The functions discussed work well for drawings containing not more than a 1000 primitives. When you draw more primitives, the project's performance can decrease (it depends on your video card). The
reason is that each command such as ofSetColor() or ofLine() is sent to drawing separately, which takes time. So, for drawing 10,000, 100,000, or even 1 million primitives, you should use advanced
methods, which draw many primitives at once. In openFrameworks, you can use the ofMesh and ofVboMesh classes for this.
Using ofPoint
Maybe you noted a problem when considering the preceding flower example: drawing primitives by specifying the coordinates of all the vertices is a little cumbersome. There are too many numbers in the
code, so it is hard to understand the relation between primitives. To solve this problem, we will learn about using the ofPoint class and then apply it for drawing primitives using control points.
ofPoint is a class that represents the coordinates of a 2D point. It has two main fields: x and y, which are float type.
Actually, ofPoint has the third field z, so ofPoint can be used for representing 3D points too. If you do not specify z, it sets to zero by default, so in this case you can think of ofPoint as a 2D
point indeed.
Operations with points
To represent some point, just declare an object of the ofPoint class.
ofPoint p;
To initialize the point, set its coordinates.
p.x = 100.0;
p.y = 200.0;
Or, alternatively, use the constructor.
p = ofPoint( 100.0, 200.0 );
You can operate with points just as you do with numbers. If you have a point q, the following operations are valid:
• p + q or p - q provides points with coordinates (p.x + q.x, p.y + q.y) or (p.x - q.x, p.y - q.y)
• p * k or p / k, where k is the float value, provides the points (p.x * k, p.y * k) or (p.x / k, p.y / k)
• p += q or p -= q adds or subtracts q from p
There are a number of useful functions for simplifying 2D vector mathematics, as follows:
• p.length(): This function returns the length of the vector p, which is equal to sqrt( p.x * p.x + p.y * p.y ).
• p.normalize(): This function normalizes the point so it has the unit length p = p / p.length(). Also, this function handles the case correctly when p.length() is equal to zero.
See the full list of functions for ofPoint in the libs/openFrameworks/math/ofVec3f.h file. Actually, ofPoint is just another name for the ofVec3f class, representing 3D vectors and the corresponding operations.
All the primitive-drawing functions have overloaded versions working with ofPoint:
• ofLine( p1, p2 ) draws a line segment connecting the points p1 and p2
• ofRect( p, w, h ) draws a rectangle with top-left corner p, width w, and height h
• ofTriangle( p1, p2, p3 ) draws a triangle with the vertices p1, p2, and p3
• ofCircle( p, r ) draws a circle with center p and radius r
Using control points example
We are ready to solve the problem stated in the beginning of the Using ofPoint section. To avoid using many numbers in drawing code, we can declare a number of points and use them as vertices for
primitive drawing. In computer graphics, such points are called control points .
Let's specify the following control points for the flower in our simplest flower example:
Now we implement this in the code.
This is example 02-2D/03-FlowerControlPoints.
Add the following declaration of control points in the testApp class declaration in the testApp.h file:
ofPoint stem0, stem1, stem2, stem3, leftLeaf, rightLeaf;
Then set values for points in the testApp::update() function as follows:
stem0 = ofPoint( 300, 100 );
stem1 = ofPoint( 300, 270 );
stem2 = ofPoint( 300, 300 );
stem3 = ofPoint( 300, 400 );
leftLeaf = ofPoint( 200, 220 );
rightLeaf = ofPoint( 400, 220 );
Finally, use these control points for drawing the flower in the testApp::draw() function:
ofBackground( 255, 255, 255 ); //Set white background
ofSetColor( 0, 0, 0 ); //Set black color
ofCircle( stem0, 40 ); //Blossom
ofLine( stem0, stem3 ); //Stem
ofTriangle( stem1, stem2, leftLeaf ); //Left leaf
ofTriangle( stem1, stem2, rightLeaf ); //Right leaf
You will observe that when drawing with control points the code is much easier to understand.
Furthermore, there is one more advantage of using control points: we can easily change control points' positions and hence obtain animated drawings. See the full example code in 02-2D/
03-FlowerControlPoints. In addition to the already explained code, it contains a code for shifting the leftLeaf and rightLeaf points depending on time. So, when you run the code, you will see the
flower with moving leaves.
Coordinate system transformations
Sometimes we need to translate, rotate, and resize drawings. For example, arcade games are based on the characters moving across the screen.
When we perform drawing using control points, the straightforward solution for translating, rotating, and resizing graphics is in applying desired transformations to control points using
corresponding mathematical formulas. Such idea works, but sometimes leads to complicated formulas in the code (especially when we need to rotate graphics). The more elegant solution is in using
coordinate system transformations. This is a method of temporarily changing the coordinate system during drawing, which lets you translate, rotate, and resize drawings without changing the drawing code.
The current coordinate system is represented in openFrameworks with a matrix. All coordinate system transformations are made by changing this matrix in some way. When openFrameworks draws something
using the changed coordinate system, it performs exactly the same number of computations as with the original matrix. It means that you can apply as many coordinate system transformations as you want
without any decrease in the performance of the drawing.
Coordinate system transformations are managed in openFrameworks with the following functions:
• ofPushMatrix(): This function pushes the current coordinate system in a matrix stack. This stack is a special container that holds the coordinate system matrices. It gives you the ability to
restore coordinate system transformations when you do not need them.
• ofPopMatrix(): This function pops the last added coordinate system from a matrix stack and uses it as the current coordinate system. You should take care to see that the number of ofPopMatrix()
calls don't exceed the number of ofPushMatrix() calls.
Though the coordinate system is restored before testApp::draw() is called, we recommend that the number of ofPushMatrix() and ofPopMatrix() callings in your project should be exactly the same. It
will simplify the project's debugging and further development.
• ofTranslate( x, y ) or ofTranslate( p ): This function moves the current coordinate system at the vector (x, y) or, equivalently, at the vector p. If x and y are equal to zero, the coordinate
system remains unchanged.
• ofScale( scaleX, scaleY ): This function scales the current coordinate system at scaleX in the x axis and at scaleY in the y axis. If both parameters are equal to 1.0, the coordinate system
remains unchanged. The value -1.0 means inverting the coordinate axis in the opposite direction.
• ofRotate( angle ): This function rotates the current coordinate system around its origin at angle degrees clockwise. If the angle value is equal to 0, or k * 360 with k as an integer, the
coordinate system remains unchanged.
All transformations can be applied in any sequence; for example, translating, scaling, rotating, translating again, and so on.
The typical usage of these functions is the following:
1. Store the current transformation matrix using ofPushMatrix().
2. Change the coordinate system by calling any of these functions: ofTranslate(), ofScale(), or ofRotate().
3. Draw something.
4. Restore the original transformation matrix using ofPopMatrix().
Step 3 can include steps 1 to 4 again.
For example, for moving the origin of the coordinate system to the center of the screen, use the following code in testApp::draw():
ofTranslate( ofGetWidth() / 2, ofGetHeight() / 2 );
//Draw something
If you replace the //Draw something comment to ofCircle( 0, 0, 100 );, you will see the circle in the center of the screen.
This transformation significantly simplifies coding the drawings that should be located at the center of the screen.
Now let's use coordinate system transformation for adding triangular petals to the flower.
Flower with petals example
In this example, we draw petals to the flower from the 02-2D/03-FlowerControlPoints example, described in the Using control points example section.
This is example 02-2D/04-FlowerWithPetals.
We want to draw unfilled shapes here, so add the following line at the beginning of testApp::draw():
ofNoFill(); //Draw shapes unfilled
Now add the following code to the end of testApp::draw() for drawing the petals:
ofPushMatrix(); //Store the coordinate system

//Translate the coordinate system center to stem0
ofTranslate( stem0 );

//Rotate the coordinate system depending on the time
float angle = ofGetElapsedTimef() * 30;
ofRotate( angle );

int petals = 15; //Number of petals
for (int i=0; i<petals; i++) {
    //Rotate the coordinate system
    ofRotate( 360.0 / petals );

    //Draw petal as a triangle
    ofPoint p1( 0, 20 );
    ofPoint p2( 80, 0 );
    ofTriangle( p1, -p1, p2 );
}

//Restore the coordinate system
ofPopMatrix();
This code moves the coordinate system origin to the point stem0 (the blossom's center) and rotates it depending on the current time. Then it rotates the coordinate system on a fixed angle and draws a
triangle petals times. As a result, we obtain a number of triangles that slowly rotate around the point stem0.
Colors
Up until now we have worked with colors using the functions ofSetColor( r, g, b ) and ofBackground( r, g, b ). By calling these functions, we specify the color of the current drawing and background
as r, g, and b values, corresponding to red, green, and blue components, where r, g and b are integer values lying in the range 0 to 255.
When you need to specify gray colors, you can use overloaded versions of these functions with just one argument: ofSetColor( gray ) and ofBackground( gray ), where gray is in the range 0 to 255.
These functions are simple, but not enough. Sometimes, you need to pass the color as a single parameter in a function, and also do color modifications like changing the brightness. To solve this
problem, openFrameworks has the class ofColor. It lets us operate with colors as we do with single entities and modify these.
ofColor is a class representing a color. It has four float fields: r, g, b, and a. Here r, g, and b are red, green, and blue components of a color, and a is the alpha component, which means the
opacity of a color. The alpha component is related to transparency.
In this article we will not consider the alpha component. By default, its value is equal to 255, which means truly opaque color, so all colors considered in this article are opaque.
The ofSetColor() , ofBackground() , and ofColor() functions include the alpha component as an optional last argument, so you can specify it when needed.
Operations with colors
To represent some color, just declare an object of the ofColor class.
ofColor color;
To initialize the color, set its components.
color.r = 0.0;
color.g = 128.0;
color.b = 255.0;
Or, equivalently, use the constructor.
color = ofColor( 0.0, 128.0, 255.0 );
You can use color as an argument in the functions ofSetColor() and ofBackground(). For example, ofSetColor( color ) and ofBackground( color ).
openFrameworks has a number of predefined colors, including white, gray, black, red, green, blue, cyan, magenta, and yellow. See the full list of colors in the libs/openFrameworks/types/ofColors.h
file. To use the predefined colors, add the ofColor:: prefix before these names. For example, ofSetColor( ofColor::yellow ) sets the current drawing color to yellow.
You can modify the color using the following functions:
• setHue( hue ), setSaturation( saturation ), and setBrightness( brightness ): These functions change the hue, saturation, and brightness of the color to specified values. All the arguments are
float values in the range 0 to 255.
• setHsb( hue, saturation, brightness ): This function creates a color by specifying its hue, saturation, and brightness values, where arguments are float values in the range 0 to 255.
• getHue() and getSaturation(): These functions return the hue and saturation values of the color.
• getBrightness(): This function returns the brightest color component.
• getLightness(): This function returns the average of the color components.
• invert(): This function inverts color components; that is, the r, g, and b fields of the color become 255-r, 255-g, and 255-b respectively.
Let's consider an example that demonstrates color modifications.
Color modifications example
In this example, we will modify the red color by changing its brightness, saturation, and hue through the whole range and draw three resultant stripes.
This is example 02-2D/05-Colors.
This example project is based on the openFrameworks emptyExample project. Fill the body of the testApp::draw() function with the following code:
ofBackground( 255, 255, 255 ); //Set white background

//Changing brightness
for ( int i=0; i<256; i++ ) {
ofColor color = ofColor::red; //Get red color
color.setBrightness( i ); //Modify brightness
ofSetColor( color );
ofLine( i, 0, i, 50 );
}

//Changing saturation
for ( int i=0; i<256; i++ ) {
ofColor color = ofColor::red; //Get red color
color.setSaturation( i ); //Modify saturation
ofSetColor( color );
ofLine( i, 80, i, 130 );
}

//Changing hue
for ( int i=0; i<256; i++ ) {
ofColor color = ofColor::red; //Get red color
color.setHue( i ); //Modify hue
ofSetColor( color );
ofLine( i, 160, i, 210 );
}
Run the project and you will see three stripes consisting of the red color with changed brightness, saturation, and hue.
As you can see, changing brightness, saturation, and hue is similar to the color-corrections methods used in photo editors like Adobe Photoshop and Gimp. From a designer's point of view, this is a
more powerful method for controlling colors as compared to directly specifying the red, green, and blue color components.
Now we will consider how to perform drawings with uncleared background, which can be useful in many creative coding projects related to 2D graphics.
Drawing with an uncleared background
By default, the screen is cleared each time before testApp::draw() is called, so you need to draw all the contents of the screen inside testApp::draw() again and again. It is appropriate in most
cases, but sometimes we want the screen to accumulate our drawings. In openFrameworks, you can do this by disabling screen clearing using the ofSetBackgroundAuto( false ) function. All successive
drawings will accumulate on the screen. (In this case you should call ofBackground() rarely, only for clearing the current screen).
This method is very simple to use, but is not flexible enough for serious projects. Also, currently it has some issues:
• In Mac OS X, the screen can jitter.
• In Windows, screen grabbing does not work (more details on screen grabbing can be seen in the Screen grabbing section later in this article).
So, when you need to accumulate drawings, we recommend you to use the FBO buffer, which we will explain now.
Using FBO for offscreen drawings
FBO in computer graphics stands for frame buffer object . This is an offscreen raster buffer where openFrameworks can draw just like on the screen. You can draw something in this buffer, and then
draw the buffer contents on the screen. The picture in the buffer is not cleared with each testApp::draw() calling, so you can use FBO for accumulated drawings.
In openFrameworks, FBO is represented by the class ofFBO.
The typical scheme of its usage is the following:
1. Declare an ofFbo object, fbo, in the testApp class declaration.
ofFbo fbo;
2. Initialize fbo with some size in the testApp::setup() function.
int w = ofGetWidth();
int h = ofGetHeight();
fbo.allocate( w, h );
3. Draw something in fbo. You can do it not only in testApp::draw() but also in testApp::setup() and testApp::update(). To begin drawing, call fbo.begin(). After this, all drawing commands, such as
ofBackground() and ofLine(), will draw to fbo. To finish drawing, call fbo.end(). For example, to fill fbo with white color, use the following code:
fbo.begin();
ofBackground( 255, 255, 255 );
fbo.end();
4. Draw fbo on the screen using the fbo.draw( x, y ) or fbo.draw( x, y, w, h ) functions. Here, x and y are the top-left corner, and w and h are the optional width and height of the rendered fbo
image on the screen. The drawing should be done in the testApp::draw() function. The example of the corresponding code is the following:
ofSetColor( 255, 255, 255 );
fbo.draw( 0, 0 );
The ofFbo class has drawing behavior similar to the image class ofImage. So, the ofSetColor( 255, 255, 255 ); line is needed here to draw fbo without color modulation.
You can use many FBO objects and even draw one inside another. For example, if you have ofFbo fbo2, you can draw fbo inside fbo2 as follows:
fbo2.begin();
ofSetColor( 255, 255, 255 );
fbo.draw( 0, 0 );
fbo2.end();
Be careful: if you call fbo.begin(), you should always call fbo.end(); do it before drawing FBO's contents anywhere.
The following tips will be helpful for advanced ofFbo usage:
• fbo has texture of the type ofTexture, which holds its current picture. The texture can be accessed using fbo.getTextureReference().
• The settings of your video card, such as antialiasing, do not affect FBO, so it may happen that your smooth drawing on the screen becomes aliased when you perform this drawing using
fbo. One possible solution for smooth graphics is using an fbo that is double the size of the screen and shrinking it to screen size during drawing.
• When you perform semi-transparent drawing to fbo (with alpha-blending enabled), most probably you should disable alpha-blending when drawing fbo itself on the screen. In the opposite case,
transparent pixels of fbo will be blended in the screen one more time, so the resultant picture will be overblended.
• By default, fbo holds color components of its pixels as unsigned char values. When more accuracy is needed, you can use a float-valued fbo by allocating it with the optional last parameter:
fbo.allocate( w, h, GL_RGB32F_ARB );
Let's consider an example of using the ofFbo object for accumulated drawing.
Spirals example
Consider a drawing algorithm consisting of the following steps:
1. Set a = 0 and b = 0.
2. Set the pos point's position to the screen center.
3. Set a += b.
4. Set b += 0.5.
5. Move the pos point a step of fixed length in the direction defined by the angle a measured in degrees.
6. Every 100 steps, change the drawing color to a new color, generated randomly.
7. Draw a line between the last and current positions of pos.
8. Go to step 3.
This algorithm is a kind of generative art algorithm—it is short and can generate interesting and unexpected drawings.
The result of the algorithm will be a picture with the colored trajectory of pos moving on the screen. The b value grows linearly, hence the a value grows parabolically. The value of a is an
angle that defines the step pos will move. It is not easy to predict the behavior of steps when the angle changes parabolically, hence it is hard to imagine how the resultant curve will look. So
let's implement the algorithm and see it.
We will use the ofFbo fbo object for holding the generated picture.
This is example 02-2D/06-Spirals.
The example is based on the emptyExample project in openFrameworks. In the testApp class declaration of the testApp.h file, add declarations for a, b, pos, fbo, and some additional variables. Also,
we declare the function draw1(), which draws one line segment by performing steps 3 to 7 of the drawing algorithm.
double a, b; //Angle and its increment
ofPoint pos, lastPos; //Current and last drawing position
ofColor color; //Drawing color
int colorStep; //Counter for color changing
ofFbo fbo; //Drawing buffer
void draw1(); //Draw one line segment
Note that a and b are declared as double. The reason is that a grows fast, so the accuracy of float is not enough for stable computations. However, we will play with the float case too, in the
Playing with numerical instability section.
The testApp::setup() function initializes the fbo buffer, fills it with a white color, and sets initial values to all variables.
void testApp::setup(){
ofSetFrameRate( 60 ); //Set screen frame rate
//Allocate drawing buffer
fbo.allocate( ofGetWidth(), ofGetHeight() );
//Fill buffer with white color
fbo.begin();
ofBackground( 255, 255, 255 );
fbo.end();

//Initialize variables
a = 0;
b = 0;
pos = ofPoint( ofGetWidth() / 2, ofGetHeight() / 2 );
//Screen center
colorStep = 0;
}
The testApp::update() function draws line segments in fbo by calling the draw1() function. Note that we perform 200 drawings at once for obtaining the resultant curve quickly.
void testApp::update(){
fbo.begin(); //Begin draw to buffer
for ( int i=0; i<200; i++ ) {
draw1(); //Draw one line segment
}
fbo.end(); //End draw to buffer
}
The testApp::draw() function just draws fbo on the screen.
void testApp::draw(){
ofBackground( 255, 255, 255 ); //Set white background
//Draw buffer
ofSetColor( 255, 255, 255 );
fbo.draw( 0, 0 );
}
Note that calling ofBackground() is not necessary here because fbo fills the whole screen, but we do so for uniformity with the other projects.
Finally, we should add a definition for the draw1() function.
void testApp::draw1(){
//Change a
a += b * DEG_TO_RAD;
//a holds values in radians, b holds values in degrees,
//so when changing a we multiply b by the DEG_TO_RAD constant

//Change b
b = b + 0.5;

//Shift pos in direction defined by angle a
lastPos = pos; //Store last pos value
ofPoint d = ofPoint( cos( a ), sin( a ) );
float len = 20;
pos += d * len;

//Change color each 100 steps
if ( colorStep % 100 == 0 ) {
//Generate random color
color = ofColor( ofRandom( 0, 255 ),
ofRandom( 0, 255 ),
ofRandom( 0, 255 ) );
}
colorStep++;

//Draw line segment
ofSetColor( color );
ofLine( lastPos, pos );
}
In the original algorithm, described at the beginning of the section, a and b are measured in degrees. In the openFrameworks implementation, we decided to hold b in degrees and a in radians. The reason for this will be explained later, in the Playing with numerical instability section. So, in the code, we convert degrees to radians by multiplying by the DEG_TO_RAD constant, which is defined in openFrameworks and is equal to π/180.
a += b * DEG_TO_RAD;
Run the project; you will see a curve with two spiral ends constantly changing their color:
This particular behavior of the curve is determined by the parameter 0.5 in the following line:
b = b + 0.5;
The parameter defines the speed of increasing b. Change this parameter to 5.4 and 5.5 and you will see curves with 4 and 12 spirals, as shown here:
Try your own values of the parameter. If the resultant curve is too large and does not fit the screen, you can control its scale by changing the len value in the following line:
float len = 20;
For example, if you set len to 10, the resultant curve shrinks to half its size.
Playing with numerical instability
In the openFrameworks code, we declare a and b as double values. The double type represents numbers with much higher accuracy than float, which is essential in this example because a grows fast. But what will happen if we declare a and b as float? Do it! Replace the line double a, b; with float a, b; and run the project. You will see that the resultant curve matches the curve from the double case only during the first second of running time. Then, the centers of the spirals begin to move.
Gradually, the two-spiral structure will be ruined and the curve will demonstrate unexpected behavior, drawing circles of different sizes.
The reason for such instability is that the values of a are computed with numerical inaccuracy.
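A minimal sketch of this accumulation error can be written outside openFrameworks; here Python with NumPy's float32 stands in for C++ float and float64 for double (the exact drift magnitude depends on the platform):

```python
import numpy as np

def run(dtype, steps):
    """Accumulate the angle a at the given precision; return a Python float."""
    deg_to_rad = dtype(np.pi / 180.0)
    half = dtype(0.5)
    a = dtype(0.0)
    b = dtype(0.0)
    for _ in range(steps):
        a = a + b * deg_to_rad   # same update as in draw1()
        b = b + half
    return float(a)

a32 = run(np.float32, 100000)   # C++ float analogue
a64 = run(np.float64, 100000)   # C++ double analogue
# The two angles agree at first, but after many steps the float32
# value drifts far away, so cos(a) and sin(a), and hence the drawn
# curve, take a completely different path.
print(abs(a32 - a64))
```

The angle reaches tens of millions of radians, where float32 can only represent every few units, so the per-step rounding errors quickly dominate the drawing.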
Note that the exploited instability effect can depend on the floating-point arithmetics of your CPU, so your resultant pictures can differ from the presented screenshots.
In many serious tasks such as physical simulation or optimal planning, we need exact results, so such computational instability is unacceptable. But from the point of view of creative coding and generative art, such instability lets you create interesting visual or audio effects, so it is often permitted and even desirable. For more details on the mathematics of such processes, read about deterministic chaos theory.
Now change the parameter 0.5 in the line b = b + 0.5; to 17, and you will see a big variety of shapes, including triangles, squares, heptagons, and stars. Then try the values 4, 21, and your own. You
will see a large number of similar but different pictures generated by this simple drawing algorithm.
Finally, note that the main computing lines of the algorithm are the following:
a += b * DEG_TO_RAD;
b = b + 0.5;
ofPoint d = ofPoint( cos( a ), sin( a ) );
These lines are very sensitive to any changes. If you change them somehow, the resultant curves will be different (in the float case). In this sense, such creative coding can be considered art, because it depends heavily on the smallest code nuances, which often cannot be predicted.
Screen grabbing
Sometimes it is desirable to save the picture drawn by your project to a file. You can do it using the tools of your operating system, but it's more convenient to do it right in your project. So let's see how to save the contents of your project's screen to an image file.
For such purposes, we need to use the ofImage class for working with images.
The following code saves the current screen to file on the pressing of the Space bar. It should be added to the testApp::keyPressed() function as follows:
//Grab the screen image to file
if ( key == ' ' ) {
ofImage image; //Declare image object
//Grab contents of the screen
image.grabScreen( 0, 0, ofGetWidth(), ofGetHeight() );
image.saveImage( "screen.png" ); //Save image to file
}
The parameters of the image.grabScreen() function specify the rectangle of the grabbing. In our case, it is the whole screen of the project.
This code is implemented in the 02-2D/06-Spirals example. Run it and press the Space bar; the contents of the screen will be saved to the bin/data/screen.png file in your project's folder.
The PNG files are small and have high quality, so we often use them for screen grabbing. But writing to a PNG file takes some time because the image has to be compressed; it can take up to several seconds, depending on the CPU and image size. So if you need to save images fast, use the BMP file format:
image.saveImage( "screen.bmp" );
Additional topics
In this article, we have considered some of the basic topics of 2D drawing. For reading further on openFrameworks 2D capabilities, we suggest the following topics:
• Drawing text using the function ofDrawBitmapString() or the class ofTrueTypeFont. See the openFrameworks example examples/graphics/fontShapesExample.
• Drawing filled shapes using the functions ofBeginShape(), ofVertex(), and ofEndShape(). See the openFrameworks example examples/graphics/polygonExample.
• Creating PDF files with openFrameworks drawings. Such files will contain vector graphics suitable for high-quality printing purposes. See the openFrameworks example examples/graphics/pdfExample.
For deeper exploration of the world of 2D graphics, we suggest the following topics:
• Using Perlin noise for simulating life-like motion of objects.
• Using the algorithmic method of recursion for drawing branched structures like trees.
If you are interested in playing with generative art, explore the huge base of Processing sketches at openprocessing.org. Processing is a free Java-based language and development environment for
creative coding. It is very similar to openFrameworks (in a way, openFrameworks was created as the C++ version of Processing). Most of the Processing examples deal with 2D graphics, are generative
art projects, and can be easily ported to openFrameworks.
In this article we learned how to draw geometrical primitives using control points, perform transformations of the coordinate system, and work with colors. Also, we studied how to accumulate drawings
in the offscreen buffer and considered the generative art example of using it. Finally, we learned how to save the current screen image to the file.
Resources for Article:
Further resources on this subject:
A practical guide to creating audiovisual interactive projects with low-level data processing using openFrameworks with this book and ebook
Published: September 2013
About the Author:
Denis Perevalov is a computer vision research scientist. He works at the Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences (Ekaterinburg, Russia). He is the
co-author of two Russian patents on robotics computer vision systems and an US patent on voxel graphics. Since 2010 he has taught openFrameworks in the Ural Federal University. From 2011 he has been
developing software for art and commercial interactive installations at kuflex.com using openFrameworks. He is the co-founder of interactive technologies laboratory expo32.ru (opened in 2012).
|
{"url":"https://www.packtpub.com/article/drawing-in-2D","timestamp":"2014-04-18T01:24:11Z","content_type":null,"content_length":"119607","record_id":"<urn:uuid:e30188b1-9606-42aa-84d1-ed1bd91c6f4f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colleyville Geometry Tutor
...I taught 7th and 8th grade Math for five years at Central Junior High in HEB ISD. I also taught Pre-Ap math for two of those years. Each year of teaching I had all my students improve their
test scores and had 90% of students pass their State test.
3 Subjects: including geometry, elementary math, prealgebra
...I used Algebra 2 concepts in many of my own undergraduate and graduate engineering courses. I’ve tutored Geometry at the middle school honors level and at the high school Pre-AP level. Geometry
requires learning a large set of theorems, postulates, corollaries, symbols and formulae.
15 Subjects: including geometry, reading, writing, algebra 1
...Sincerely, Vincent P.S. Need help in college application or budgeting? Contact me.I am a four-year student at Columbia University.
24 Subjects: including geometry, chemistry, reading, English
...I have lived in Texas since 1990. I have been able to succeed in business and have high level customer service skills along with broad knowledge, working with numerous Fortune 500 companies
nationwide. My broad general experience and knowledge, my specific skills in Math, Science and English along with my computer skills would make me your ideal tutor.
41 Subjects: including geometry, English, reading, writing
...Not all students learn in the same way, and that is why I make sure that I am able to adapt to the student's learning needs. After all, I am a visual learner and auditory learner as well as a
hands on learner. Please feel free to email me to get any additional information that may be necessary for an informed decision.
17 Subjects: including geometry, chemistry, biology, algebra 1
|
{"url":"http://www.purplemath.com/Colleyville_Geometry_tutors.php","timestamp":"2014-04-20T23:55:24Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:6b965913-317d-4888-b5e0-4158caf6d222>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector Angles
March 25th 2010, 09:10 AM #1
Mar 2010
I'm not sure if this is the right forum to post this as my question stems from my numerical analysis project, but seems to be geometrical. Anyway, here goes:
Say I have 2 vectors where their values (X, Y, Z) and (A, B, C) are uniformly randomly generated. What is the process that I would use to find the Greatest Possible Angle between the two vectors?
I can find the angle for two vectors, but given that their values are randomly generated, figuring out what their largest POSSIBLE value could be is throwing me.
Thank you in advance!!
Be more specific about how they are uniformly randomly generated.
By that I mean, let's say I use a basic random uniform number generator in a computer. I use this number generator to create two 3-tuples. What I want to do is try to find the largest angle
possible (inner product) between the two vectors, but the random nature is throwing me... I'm not sure what approach to take.
Additional Note: these are not Euclidean vectors - let's say, for the sake of this, that we have a 3-dimensional vector space. But I could change this problem to be two 4-tuples in a 4-dimensional vector space or two 5-tuple vectors in a 5-dimensional one, etc.
Does this make it more clear? I'm not sure I am using all of the terms correctly
Last edited by DamenFaltor; March 26th 2010 at 04:46 AM. Reason: Added even more clarity (trying to sound like less of a jerk too)
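For what it's worth, the angle between two concrete vectors follows from the normalized dot product, and a short sketch (plain Python, works in any dimension) also shows why the generator's range matters: if every component is drawn from [0, 1], both vectors lie in the first orthant, the dot product is non-negative, and the angle can never exceed 90 degrees; if components come from [-1, 1], opposite vectors make the full 180 degrees possible.

```python
import math

def angle(u, v):
    """Angle in radians between vectors u and v (any dimension)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against tiny floating-point overshoot.
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(c)

# Components in [0, 1]: dot >= 0, so the angle is at most pi/2,
# attained e.g. by two different axis vectors.
print(angle((1, 0, 0), (0, 1, 0)))     # pi/2
# Components in [-1, 1]: opposite vectors give the full pi.
print(angle((1, 1, 1), (-1, -1, -1)))  # pi
```

So the greatest possible angle is determined by the range of the generator, not by the randomness itself.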
|
{"url":"http://mathhelpforum.com/advanced-applied-math/135612-vector-angles.html","timestamp":"2014-04-17T07:46:56Z","content_type":null,"content_length":"35564","record_id":"<urn:uuid:7bd6cde1-fea7-492e-b55b-c4c8c03b3ac2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(THEN) : tactic -> tactic -> tactic
Applies two tactics in sequence.
If t1 and t2 are tactics, t1 THEN t2 is a tactic which applies t1 to a goal, then applies the tactic t2 to all the subgoals generated. If t1 solves the goal then t2 is never applied.
The application of THEN to a pair of tactics never fails. The resulting tactic fails if t1 fails when applied to the goal, or if t2 does when applied to any of the resulting subgoals.
Suppose we want to prove the inbuilt theorem DELETE_INSERT ourselves:
# g `!x y. (x INSERT s) DELETE y =
if x = y then s DELETE y else x INSERT (s DELETE y)`;;
We may wish to perform a case-split using COND_CASES_TAC, but since variables in the if-then-else construct are bound, this is inapplicable. Thus we want to first strip off the universally
quantified variables:
# e(REPEAT GEN_TAC);;
val it : goalstack = 1 subgoal (1 total)
`(x INSERT s) DELETE y =
(if x = y then s DELETE y else x INSERT (s DELETE y))`
and then apply COND_CASES_TAC. A quicker way (starting again from the initial goal) would be to combine the tactics using THEN:
# e(REPEAT GEN_TAC THEN COND_CASES_TAC);;
Although normally used to sequence tactics which generate a single subgoal, it is worth remembering that it is sometimes useful to apply the same tactic to multiple subgoals; sequences like the
EQ_TAC THENL [ASM_REWRITE_TAC[]; ASM_REWRITE_TAC[]]
can be replaced by the briefer:
EQ_TAC THEN ASM_REWRITE_TAC[]
If using this several times in succession, remember that THEN is left-associative.
See also: EVERY, ORELSE, THENL.
|
{"url":"http://www.cl.cam.ac.uk/~jrh13/hol-light/HTML/THEN.html","timestamp":"2014-04-18T05:42:11Z","content_type":null,"content_length":"2975","record_id":"<urn:uuid:6312c415-bd86-455e-b3b0-08d0033ba59b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Generalized Hebbian Algorithm
The Generalized Hebbian Algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal
components analysis. First defined in 1989,^[1] it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs.
GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form
$\,\Delta w_{ij} ~ = ~ \eta\left(y_j x_i - y_j \sum_{k=1}^j w_{ik} y_k \right)$,
where $w_{ij}$ defines the synaptic weight or connection strength between the $i$th input and $j$th output neurons, $x$ and $y$ are the input and output vectors, respectively, and $\eta$ is the learning rate parameter.
In matrix form, Oja's rule can be written
$\,\frac{d w(t)}{d t} ~ = ~ w(t) Q - \mathrm{diag} [w(t) Q w(t)^{\mathrm{T}}] w(t)$,
and the Gram-Schmidt algorithm is
$\,\Delta w(t) ~ = ~ -\mathrm{lower} [w(t) w(t)^{\mathrm{T}}] w(t)$,
where w(t) is any matrix, in this case representing synaptic weights, Q = η x x^T is the autocorrelation matrix, simply the outer product of inputs, diag is the function that keeps only the diagonal of a matrix (setting all off-diagonal elements to 0), and lower is the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form,
$\,\Delta w(t) ~ = ~ \eta(t) \left(\mathbf{y}(t) \mathbf{x}(t)^{\mathrm{T}} - \mathrm{LT}[\mathbf{y}(t)\mathbf{y}(t)^{\mathrm{T}}] w(t)\right)$,
where the function LT sets all matrix elements above the diagonal equal to 0, and note that our output y(t) = w(t) x(t) is a linear neuron.^[1]
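A minimal per-sample sketch of this update rule in plain Python may make the indexing concrete (the weight layout follows the article's convention that w[i][j] connects input i to output j; the numeric values are purely illustrative):

```python
def gha_step(w, x, eta):
    """One Generalized Hebbian (Sanger) update of the weight matrix w.

    w[i][j] is the weight from input i to output j, x is one input
    sample, eta is the learning rate.  Returns (new_w, y).
    """
    n_in, n_out = len(w), len(w[0])
    # Linear neuron outputs: y_j = sum_i w_ij * x_i
    y = [sum(w[i][j] * x[i] for i in range(n_in)) for j in range(n_out)]
    new_w = [row[:] for row in w]
    for j in range(n_out):
        for i in range(n_in):
            # Back-projection term: sum over outputs k <= j of w_ik * y_k
            back = sum(w[i][k] * y[k] for k in range(j + 1))
            new_w[i][j] += eta * (y[j] * x[i] - y[j] * back)
    return new_w, y

w = [[0.5, 0.1], [0.2, 0.3]]      # 2 inputs, 2 outputs (illustrative)
new_w, y = gha_step(w, [1.0, 2.0], eta=0.1)
print(y)       # approximately [0.9, 0.7]
print(new_w)
```

Note that the back-projection reads the old weights while the new ones are written, matching the formula above; for j = 0 the sum has a single term and the update reduces to Oja's rule for the first output.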
GHA is used in applications where a self-organizing map is necessary, or where a feature or principal components analysis can be used. Examples of such cases include artificial intelligence and
speech and image processing.
Its importance comes from the fact that learning is a single-layer process—that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the
multi-layer dependence associated with the backpropagation algorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by the learning rate
parameter η.^[2]
|
{"url":"http://psychology.wikia.com/wiki/GHA","timestamp":"2014-04-19T00:02:27Z","content_type":null,"content_length":"63349","record_id":"<urn:uuid:c03e436a-a2d1-4c42-b0a6-b7480c69b2fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Catalog Home
Department of Physics
PHYS 105. Foundations of Physics. 1 credit.
An introduction to the study of physics and the physics department. Presentations are give by faculty and students to acquaint the students with current research opportunities in the Department and
the application of physics to broad spectrum of topics.
PHYS 120. The Solar System. 3 credits.
An introductory course in astronomy, which includes the following topics: motions of celestial objects, eclipses, historical development, the nature of light, telescopes, properties and evolution of
the solar system.
PHYS 121. Stars, Galaxies and Cosmology. 3 credits.
An introductory course in astronomy which includes the following topics: the Sun, stellar properties, stellar evolution, black holes, the Milky Way, galactic evolution, quasars, cosmology.
PHYS 122. Observational Astronomy for Beginners (0, 2). 1 credit.
An introduction to naked-eye and telescopic astronomical observations. Wells Planetarium will be used when weather conditions prohibit outdoor observations.
PHYS 125. Principles of Physics With Biological Applications I (3, 2). 4 credits.
A study of fundamental physical principles covering areas of mechanics, thermal energy and fluids, emphasizing topics pertinent to life processes. Realistic biological examples are used to illustrate
the relationship between physics and the life sciences. Laboratory exercises explore the application of physics to living systems. Prerequisite: MATH 135 or equivalent.
PHYS 126. Principles of Physics With Biological Applications II (3, 2). 4 credits.
The second semester is a study of physical principles. Topics include elasticity, wave motion, sound, electricity and magnetism, geometrical and physical optics and electromagnetic radiation and
radioactivity. Prerequisite: PHYS 125.
*PHYS 140. College Physics I. 3 credits.
The first semester of a noncalculus sequence in general physics. Topics include principles of mechanics, thermal properties of matter, wave motion and sound. Prerequisite: MATH 135 or equivalent.
Corequisite: PHYS 140L.
PHYS 150. College Physics II. 3 credits.
The second semester of a noncalculus sequence in general physics. Topics include electric charges, circuits, magnetism, optics, atomic and nuclear physics. Prerequisites: PHYS 140 and 140L.
Corequisite: PHYS 150L.
PHYS 140L*-150L. General Physics Laboratories. 1 credit each semester.
These laboratory courses are designed to complement and supplement the PHYS 140-150 and PHYS 240-250 lecture courses. The laboratory and lecture portions must be taken concurrently. Corequisite for PHYS 140L: PHYS 140 or PHYS 240. Prerequisite for PHYS 150L: PHYS 140L and either PHYS 140 or PHYS 240. Corequisite for PHYS 150L: PHYS 150 or PHYS 250.
PHYS 215. Energy and the Environment. 3 credits.
Energy use, sources and trends; fossil fuels, heat-work conversions, thermodynamic restrictions and electric power production; nuclear fission reactors and fusion energy; solar energy and
technologies; alternative energy sources; energy storage; energy conservation; issues of waste and safety. Environmental, social and economic aspects will be discussed. Not open to ISAT majors
scheduled to take ISAT 212 as part of their degree requirements. Prerequisites: Two college courses in science and one in mathematics.
PHYS 220. General Astronomy I: The Night Sky, the Solar System and Stars. 3 credits.
PHYS 220 is the first in a two-course sequence in general astronomy intended for students interested in science. Topics covered include: appearance and movements of the night sky; astronomical
coordinate systems and timekeeping; seasons, eclipses and planetary configurations; planetary motions and gravitation; fundamental forces; electromagnetic radiation and its detection; content,
structure, formation and evolution of solar system; observations and models of the Sun, stellar interior models; stellar magnitudes and spectra, classifications; Hertzsprung-Russell diagram.
Prerequisites: One college course in science and one in mathematics.
PHYS 221. General Astronomy II: Star Systems, the Interstellar Medium and Cosmology. 3 credits.
PHYS 221 is the second in a two-course sequence in general astronomy intended for students interested in science. Topics covered include: stellar evolution; variability and high-energy phenomena in
stars and multiple-star systems; content, structure, and dynamics of the Milky Way; external galaxies, quasars and AGN; large-scale structure and the distance scale of the universe; the Big Bang
model and alternative cosmologies, possible geometries and eventual fates of the universe. Prerequisite: PHYS 220.
*PHYS 240. University Physics I. 3 credits.
Kinematics, dynamics, energy and momentum conservation, oscillatory motion, fluid mechanics and waves. Corequisites: MATH 235 and PHYS 140L. A student may not earn credit for both PHYS 202 and PHYS
PHYS 250. University Physics II. 3 credits.
Electric forces, fields and potentials; capacitance, dielectrics, resistance and DC circuits; magnetic fields, induced electric fields, inductance and AC circuits; geometrical optics, interference,
diffraction and polarization. Prerequisites: PHYS 202 or PHYS 240 and PHYS 140L. Corequisites: MATH 236 and PHYS 150L.
PHYS 260. University Physics III. 4 credits.
Rotational kinematics and rotational dynamics; static equilibrium and elasticity; universal gravitation and orbital mechanics; temperature, heat, heat engines, entropy and kinetic theory; Gauss’ law,
electric potential and capacitance; magnetic fields, induced electric fields and inductance; displacement current and electromagnetic waves; and the special theory of relativity. Prerequisites: C or
better in PHYS 250 and PHYS 150L or PHYS 150 and PHYS 150L. Corequisite: MATH 237.
PHYS/MATH 265. Introduction to Fluid Mechanics. 4 credits.
Introduces the student to the application of vector calculus to the description of fluids. The Euler equation, viscosity and the Navier-Stokes equation will be covered. Prerequisites: MATH 237 and
PHYS 260.
PHYS 270. Modern Physics. 4 credits.
A course in modern physics, consisting of a discussion of the experimental basis for and fundamental principles of quantum physics, with applications to atomic structure and nuclear physics.
Prerequisite: PHYS 260 or consent of instructor.
PHYS/CHEM/MATS 275. An Introduction to Materials Science. 3 credits.
An introduction to materials science with emphasis on general properties of materials. Topics will include crystal structure, extended and point defects and mechanical, electrical, thermal and
magnetic properties of metals, ceramics, electronic materials, composites and organic materials. Prerequisite: CHEM 131, PHYS 150, PHYS 250, ISAT 212 or permission of the instructor.
PHYS 295. Laboratory Apparatus Design and Construction. 1 credit.
An introduction to the design and fabrication of laboratory apparatus using machine tools. Prerequisites: PHYS 250 and permission of the instructor.
PHYS 297. Topics in Physics. 1-4 credits each semester.
Topics in physics at the second-year level. May be repeated for credit when course content changes. Topics selected may dictate prerequisites. Students should consult instructor prior to enrolling
for course. Prerequisite: Permission of the instructor.
PHYS 335. Modern Physics II. 4 credits.
A continuation of PHYS 270, with applications to molecules, the physics of condensed matter and nuclear physics. Prerequisite: PHYS 270.
PHYS/MATS 337. Solid State Physics. 3 credits.
A study of the forces between atoms, crystal structure, lattice vibrations and thermal properties of solids, free electron theory of metals, band theory of solids, semiconductors and dielectrics.
Prerequisite: PHYS 270 or consent of instructor.
PHYS 340. Mechanics. 3 credits.
Application of fundamental laws of mechanics to particles and rigid bodies. Topics include statics, dynamics, central forces, oscillatory motion and generalized coordinates. Prerequisites: PHYS 260
and MATH 238.
PHYS 342. Mechanics II. 3 credits.
A continuation of PHYS 340 including Lagrangian dynamics, rigid body motion and the theory of small oscillations. Prerequisite: PHYS 340.
PHYS 347. Advanced Physics Laboratory (0, 6). 3 credits.
An advanced laboratory in which students are introduced to experimentation in several areas of physics while gaining experience in experiment design, data analysis, formal report writing and
presentations. Prerequisite: PHYS 270.
PHYS 350. Electricity and Magnetism. 3 credits.
A study of the electrostatic field, the magnetic field, direct and alternating currents and electromagnetic waves. Prerequisites: PHYS 260 and MATH 238.
PHYS 360. Analog Electronics (2, 4). 4 credits.
DC and AC circuits, spectral and pulse circuit response, semiconductor physics and simple amplifier and oscillator circuits. Prerequisite: PHYS 250 or permission of the instructor.
PHYS/MATH 365. Computational Fluid Mechanics. 3 credits.
Applications of computer models to the understanding of both compressible and incompressible fluid flows. Prerequisites: MATH 248, either MATH 238 or MATH 336, MATH/PHYS 265 and PHYS 340.
PHYS/MATH 366E. Computational Solid Mechanics. 3 credits.
Development and application of mathematical models and computer simulations to investigate problems in solid mechanics, with emphasis on numerical solution of associated boundary value problems.
Prerequisites: MATH/PHYS 266, MATH 238 and MATH 248, or consent of instructor.
PHYS 371. Introductory Digital Electronics (2, 4). 2 credits.
Transistors, integrated circuits, logic families, gates, latches, decoders, multiplexers, multivibrators, counters and displays. Prerequisite: PHYS 150 or PHYS 250 with a grade of “C” or better or
permission of instructor.
PHYS 372. Microcontrollers and Their Applications (2, 4). 2 credits.
Microcontrollers, their instructions, architecture and applications. Prerequisite: PHYS 371 or consent of instructor.
PHYS 373. Interfacing Microcomputers (2, 4). 2 credits.
A study of the personal computer and its input/output bus, input/output functions, commercially available devices, proto-typing circuit boards and programs for device control. Prerequisite: PHYS 371.
PHYS 380. Thermodynamics and Statistical Mechanics. 3 credits.
A treatment of the thermal properties of matter from both macroscopic and microscopic viewpoints. Topics include the laws of thermodynamics, heat, work, internal energy, entropy, elementary
statistical concepts, ensembles, classical and quantum statistics and kinetic theory. Approximately equal attention will be given to thermodynamics and statistical mechanics. Prerequisites: PHYS 270
and PHYS 340.
PHYS/MATS 381. Materials Characterization (Lecture/Lab course). 3 credits.
A review of the common analytical techniques used in materials science related industries today, including the evaluation of electrical, optical, structural and mechanical properties. Typical
techniques may include Hall Effect, scanning probe microscopy, scanning electron microscopy, ellipsometry and x-ray diffraction. Prerequisite: PHYS/MATS 275, ISAT/MATS 431 or GEOL/MATS 395.
PHYS 390. Computer Applications in Physics. 3 credits.
Applications of automatic computation in the study of various physical systems. Problems are taken from mechanics of particles and continua, electromagnetism, optics, quantum physics, thermodynamics
and transport physics. Prerequisites: MATH/CS 248, PHYS 240 and PHYS 250 and six additional credit hours in majors courses in physics excluding PHYS 360, PHYS 371 and PHYS 372.
PHYS 391-392. Seminar. 1 credit per year.
Participation in the department seminar program. Prerequisites: Junior or senior standing and permission of the instructor.
PHYS 397. Topics in Physics. 1-4 credits each semester.
Topics in physics at intermediate level. May be repeated for credit when course content changes. Topics selected may dictate prerequisites. Students should consult instructor prior to enrolling for
course. Prerequisite: Permission of the instructor.
PHYS 398. Problems in Physics. 1-3 credits, repeatable to 4 credits.
An individual project related to some aspect of physics. Must be under the guidance of a faculty adviser.
PHYS 420. Modern Optics. 3 credits.
A study of the kinematic properties and physical nature of light including reflection, refraction, interference, diffraction, polarization, coherence and holography. Prerequisites: PHYS 260, PHYS 270
and MATH 237.
PHYS 446. Electricity and Magnetism II. 3 credits.
A continuation of PHYS 350. Emphasis will be placed on the solutions of Maxwell’s equations in the presence of matter, on solving boundary-value problems and on the theory of electromagnetic
radiation. Prerequisite: PHYS 350.
PHYS/CHEM 455. Lasers and Their Applications to Physical Sciences (2, 3). 3 credits.
An introduction to both the theoretical and practical aspects of lasers and their applications in the physical sciences. Prerequisite: PHYS 270, CHEM 331 or permission of the instructor.
PHYS 460. Quantum Mechanics. 3 credits.
Principles and applications of quantum mechanics. Topics include wave packets and the uncertainty principle, the Schroedinger equation, one-dimensional potentials, operators and eigenvectors,
three-dimensional motion and angular momentum and the hydrogen atom. Prerequisite: PHYS 340.
PHYS 480. Astrophysics. 3 credits.
An introduction to the problems of modern astronomy and the quantitative application of physical principles to these problems. Topics of study include stellar structure and evolution, the
interstellar medium and star formation, cosmic rays, pulsars, galactic structure, extragalactic astronomy and cosmology. Prerequisites: PHYS 340 and one of either PHYS 270 or 430.
PHYS 491-492. Physics Assessment and Seminar. 1 credit per year.
Principal course activities are participation in the departmental assessment program and attendance at departmental seminars. Prerequisite: PHYS 392.
PHYS 494. Internship in Physics. 1-6 credits.
Students participate in research or applied physics outside of the university. A proposal must be approved prior to registration, and a final paper will be completed. Prerequisites: Physics major
with a minimum of twelve physics credit hours and permission of the department head and the instructor.
PHYS 497. Topics in Physics. 1-4 credits each semester.
Topics in physics at the advanced level. May be repeated for credit when course content changes. Topics selected may determine prerequisites. Students should consult instructor prior to enrolling for
course. Prerequisite: Permission of the instructor.
PHYS 498R. Undergraduate Physics Research. 2-4 credits, repeatable to 6 credits.
Research in a selected area of physics as arranged with a faculty research adviser. Prerequisite: Proposal for study must be approved prior to registration.
PHYS 499. Honors. 6 credits. (Year course: 3 credits each semester.)
Participation in this course must be approved during the second semester of the junior year. For details, see catalog section entitled “Graduation with Distinction.”
Washington Statistics Tutor
Find a Washington Statistics Tutor
...These days, I have a full-time job where I send upwards of 75-100 emails per day using Microsoft Outlook. I also manage the calendars for three very busy attorneys. With a Bachelor of Arts in
Political Science and Japanese, I feel I have a solid grasp of social studies, both from a social science perspective, but also a comparative cultural focus.
33 Subjects: including statistics, reading, English, writing
...I have been tutoring independently and through universities for 10 years, and I am experienced in math, writing, Political Science, English, Spanish, French, and test preparation. I am
certified to teach all types of writing, and I have professionally taught college courses in Political Science and Statistics. I have worked with all levels of students from elementary to college.
46 Subjects: including statistics, English, Spanish, algebra 1
...While there I volunteered with Helenski Espana, a human rights group. We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as citizens of Spain. I spent my last year of undergraduate study as an ESL (English as a Second Language) tutor for a pregnancy center that served Spanish-speaking women.
17 Subjects: including statistics, Spanish, writing, physics
...I am comfortable with any of these topics: Solving Systems of Linear Equations; Vectors, Geometry of R^n, and Solution Sets; Linear Independence and Linear Transformation; Matrix Operations and
Matrix Inverses; LU Factorization; Subspaces, Bases, Dimension, Rank; Determinants; Vector Spaces; Eige...
15 Subjects: including statistics, chemistry, calculus, physics
...I want to be a tutor because I like to be with kids and tutoring makes me feel very accomplished. I am a patient, dedicated and detail-oriented person. I like to read, watch movies, travel, and practice Taekwondo in my spare time. I am a native Chinese speaker with full proficiency in listening, reading, writing, and speaking.
27 Subjects: including statistics, chemistry, Java, accounting
Patent US20040236641 - Economic supply optimization system
[0015] A system and method for optimizing a machine supply to meet a predetermined parts demand at a lowest cost is provided. A simple example follows.
[0016] Referring to Table 1 below, assume there are 6 units of machine A and 7 units of machine B returning from a lease. There is a demand for 9 units of part x and 10 units of part z. Machine A has
5 units of part x, 1 unit of part y and 3 units of part z. Machine B has 2 units of part x, 4 units of part y and 4 units of part z.
[0017] In accordance with the present invention, various dismantling configurations are considered to meet the parts demand at the lowest cost. Table 2 lists the solutions and their corresponding
dismantling costs. To meet the demand for part x, one A machine and two B machines can be dismantled to yield the demanded 9 parts at a cost of $700. This would also yield 11 z parts, which is
sufficient to meet the demand for part z of 10 units. Alternatively, five B machines can be dismantled to yield ten x parts and twenty z parts at a cost of $750. Lastly, four A machines will yield 20
x parts and 12 z parts at a dismantling cost of $1,600.
[0018] The first solution is the most cost effective at $700, and therefore, is selected as the optimal dismantling configuration (the type and number of machines to dismantle to meet parts demand at
the lowest cost) of the machine supply.
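The configuration search in this simple example can be sketched as a short brute-force enumeration over the available machines (a minimal illustration of the selection principle, not the patent's actual algorithm; the yields, costs, supply and demand are those of Tables 1 and 2):

```python
from itertools import product

# Part yields, dismantling costs, machine supply and parts demand
# from Tables 1 and 2 of the example above.
yields = {"A": {"x": 5, "y": 1, "z": 3}, "B": {"x": 2, "y": 4, "z": 4}}
cost = {"A": 400, "B": 150}
supply = {"A": 6, "B": 7}
demand = {"x": 9, "z": 10}

best = None
for a, b in product(range(supply["A"] + 1), range(supply["B"] + 1)):
    counts = {"A": a, "B": b}
    # Keep only configurations that cover every demanded part.
    if all(sum(yields[m][p] * counts[m] for m in counts) >= need
           for p, need in demand.items()):
        total = a * cost["A"] + b * cost["B"]
        if best is None or total < best[0]:
            best = (total, a, b)

print(best)  # (700, 1, 2): dismantle 1 A and 2 B machines for $700
```

The enumeration reproduces the first solution from Table 2 as the optimum.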
[0019] The above was a simplified example to illustrate the principle of the present invention. For more complex problems, a more powerful calculation algorithm is used, such as Linear Programming.
[0020] Referring to FIG. 1, there is shown a graph of a linear programming formulation created in accordance with well-known mathematical principles. A graph of n dimensions is used where n is the
number of machine types available, which is two in this case, machine A and machine B.
[0021] The x-axis 20 represents the number of B units and the y-axis 22 represents the number of A units. The potential sets of machines are plotted on the graph with point a 24 corresponding to zero
units of A and zero units of B, point e 26 representing six units of A and zero units of B, point f 28 representing six units of A and seven units of B, and point g 30 representing zero units of A
and seven units of B.
[0022] Lines are drawn between points e 26 and f 28 (line 25), to represent all possible solutions with six units of A (A=6), and between points f 28 and g 30 (line 27), to represent all possible
solution sets with seven units of B (B=7).
[0023] Equations are formulated for the demand for part x and the demand for part z. The number of x parts in an A machine (5) multiplied by the number of A machines plus the number of x parts in a B
machine (2) multiplied by the number of B machines is set to 9, the demand for x. The formula is represented by
5A+2B=9
[0024] The formula is plotted on the graph as line 23. The equation 3A+4B=10, calculated in the same manner, corresponds to the need for 10 units of part z and is represented by line 21. Points are
assigned to each intersection of the lines 21, 23 with each other (point c 36) and the axes (points b 34 and d 32) that form a corner within the boundary of the problem which is defined by the lines
between points b 34 to g 30 and back to b 34.
[0025] The mathematical formula representation for de-manufacturing cost is given by multiplying the cost of de-manufacturing A by the number of units of A, and adding that amount to the cost of
de-manufacturing B multiplied by the number of B units de-manufactured. The formula is
Z=400A+150B
[0026] where Z is the total de-manufacturing cost, 400 is the cost to de-manufacture one A unit, A is the number of A units to de-manufacture, 150 is the de-manufacturing cost of B and B is the
number of B units to de-manufacture. The objective is to minimize cost, or minimize Z.
[0027] Points b 34 to g 30 represent the set of potential solutions within the boundary of the problem. Point a 24 at the origin represents dismantling 0 units of A and 0 units of B, which is not a
potential solution. Point b (0, 4.5) 34 represents dismantling 0 units of A and 5 units of B because the coordinates are rounded to the nearest integer. Point c (1.1, 1.6) 36 represents dismantling 1
unit of A and 2 units of B. Point d (3.3, 0) 32 represents dismantling 3 units of A and 0 units of B. Point e (6, 0) 26 represents dismantling 6 units of A and 0 units of B. Point f (6, 7) 28 represents
6 units of A and 7 units of B. Lastly, point g(0,7) 30 represents the solution of dismantling 0 units of A and 7 units of B. Somewhere along the line defined by points b 34, c 36, d 32, e 26, f 28
and g 30 is the optimal configuration for dismantling of the machine supply to meet the parts need. The optimal configuration is found as follows.
[0028] Begin at point a (0,0) 24. Find the next adjacent point (b 34 or d 32) that incurs the least cost. In this case, point b 34 representing the dismantling of 5 B machines at a cost of $750 is
the lowest cost point adjacent to point a 24. Next, select point b (0, 4.5) 34 and find the next adjacent point to b (c 36 or g 30) that is the most cost effective. Here, point c, which represents
solution 1 is less expensive than point g 30 which represents dismantling all seven units of machine B. Now, select point c (1.1,1.6) 36 and find the next cheapest, adjacent point. Point d 32
requires a higher cost than point c 36 (refer to Sol.3 in previous example). Since we cannot achieve any improvement in moving further, point c 36 is the optimal solution. The coordinates of c are
rounded off to the nearest integer, namely, 1.1 is rounded to 1 and 1.6 to 2. Therefore, we can dismantle 1 unit of A and 2 units of B, with a minimal cost of $700 to meet the parts demand of 9 units
of part x and 10 units of part z.
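The coordinates of point c follow directly from intersecting the two demand lines; this pure-Python sketch solves that 2x2 system with Cramer's rule and applies the same rounding described above:

```python
# Intersection of the two demand lines from FIG. 1 (point c):
# 5A + 2B = 9 (part x, line 23) and 3A + 4B = 10 (part z, line 21).
def intersect(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 for (x, y)."""
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

A, B = intersect(5, 2, 9, 3, 4, 10)
print(round(A, 1), round(B, 1))        # 1.1 1.6 -- the patent's point c
units_a, units_b = round(A), round(B)  # round to whole machines: 1 and 2
print(400 * units_a + 150 * units_b)   # minimal cost Z = 700
```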
[0029] In a more robust example, the list of optimization variables could include projected parts demand, parts and machine net investment book values (NIB), parts & machines projected wholesale fair
market values (FMV), machine de-manufacturing costs, parts repair costs, process cycle lead times for de-manufacture, re-manufacture and parts repair processes and internal company exchange pricing
for parts and machines.
[0030] A more complex, but still simplified example follows. Table 3 lists information on four available PC models for de-manufacturing and parts retrieval. There are four machine models (PC1, PC2,
PC3, PC4) that are made up of various combinations of seventeen different parts. Relevant data for optimization includes the part number, description, parts yield per machine, part value, percentage
yield (percentage of total parts that are actually yielded as a result of demanufacture based on historical, statistical data for a particular model), total supply of parts (calculated by
cross-referencing the yield per machine with the machine supply), and total demand. The machine supply in stock is: PC1=75, PC2=65, PC3=85, PC4=85 units. Parts 4, 5, 7-10, 13, 14 and 15 are in demand.
[0031] Using a summation formulation to determine profits from the information in the table, optimization may be performed by applying the formula:
[0032] where
[0033] RV[j]=revenue sales from part j sales;
[0034] TC[i]=net investment balance (cost) of machine i;
[0035] PC[i]=processing cost of de-manufacturing machine i;
[0036] S[i]=total supply of machine i;
[0037] D[j]=netted demand of part j; and
[0038] W[ij]=parts not utilized
[0039] X[ij]=parts fulfillment
[0040] Y[i]=machines required to fulfill the desired parts
[0041] The objective is to maximize TRR subject to the following constraints:
[0042] {Y[i]}≦{S[i]}: the number of machines to be dismantled should not exceed the number of available machines collected from all sources;
[0043] {X[ij]}+{W[ij]}=QP[ij]•{(Y[i]•I[ii])•Q[ij]}: machine structure constraints, i.e., type and number of parts in each machine;
[0044] {I[i]•X[ij]}={D[j]}: the demand for every type of part should be met; and,
[0045] {Y[i]}, {X[ij]}, {W[ij]}≧0: the supply of machines, demand of parts, and the parts recycled and/or disposed of should be non-negative values.
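A feasibility check for the constraint families [0042]-[0045] can be written directly. The instance below reuses the small A/B machines from the earlier example rather than the Table 3 data, so the numbers are illustrative:

```python
# Candidate plan: Y[i] = machines to dismantle, X[ij] = parts fulfillment,
# W[ij] = parts not utilized. Q[ij] is the machine structure (parts per
# machine), S[i] the machine supply, D[j] the netted parts demand.
S = {"A": 6, "B": 7}
Q = {("A", "x"): 5, ("A", "z"): 3,
     ("B", "x"): 2, ("B", "z"): 4}
D = {"x": 9, "z": 10}

def feasible(Y, X, W):
    ok = all(0 <= Y[i] <= S[i] for i in S)                        # supply
    ok &= all(X[i, j] + W[i, j] == Q[i, j] * Y[i] for i, j in Q)  # structure
    ok &= all(sum(X[i, j] for i in S) == D[j] for j in D)         # demand met
    ok &= all(v >= 0 for v in list(X.values()) + list(W.values()))
    return ok

Y = {"A": 1, "B": 2}
X = {("A", "x"): 5, ("A", "z"): 3, ("B", "x"): 4, ("B", "z"): 7}
W = {(i, j): Q[i, j] * Y[i] - X[i, j] for i, j in Q}  # 1 excess z part
print(feasible(Y, X, W))  # True: the $700 plan satisfies every constraint
```

Maximizing TRR then amounts to searching over the feasible (Y, X, W) triples; in practice a linear-programming solver performs that search.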
[0046] Applying the formula to the information in Table 3 according to well-known mathematical principles, the optimal net revenue is TRR*=$14,265, which includes the dismantling of 73, 63, 62 and 75 units of PC models PC1*, PC2*, PC3* and PC4*, respectively. Table 4 shows the results of optimization.
[0047] Now referring to FIG. 2, a high level data and processing flow is shown for a system according to a preferred embodiment of the invention. An information warehouse, or central data storage 40
stores all data necessary to determine the optimal dismantling configuration. Data for the anticipated, or actual demand for parts 42 is calculated for all sources of demand, both external and
internal. Internal demands are those of the leasing operation. For example, a large computer manufacturer will fulfill the computing needs of its own operations with its own equipment, producing an
internal demand. External demands are those that originate from the market for the particular parts. The available machine supply 44 is entered with the data on bill of machines (BOM), which outlines
the parts yield of each machine with the cost of dismantling, and lastly, any other relevant supply-demand approximation tool data 48.
[0048] A first screening process (step 50) determines parts demands that cannot be satisfied with parts from the existing machine supply, i.e. demand for new parts or old parts that are not in the
machine supply. The demands that cannot be met from dismantled machines produces a list of parts that must be procured 52. A second screening process (step 54) determines which parts demands, if any,
are not economically feasible to satisfy with de-manufactured machines by some predetermined selection criteria, producing another group of parts that must be procured 56.
[0049] After determining the exact parts demand to satisfy from the existing machine supply, an optimization tool according to the present invention calculates the optimal dismantling configuration
(step 58) to generate a list of machines to dismantle 60. The system also determines whether purchasing machines to dismantle will meet the parts demand at a lower cost to produce a greater profit
than dismantling existing stock 62, generating a report of suggested machines to buy for dismantling 64.
[0050] The end result of each process is sent back (arrow 51) to the central data storage 40 to maintain central storage of all system information. It should be noted that the central storage 40 may
reside in a single location, or may be distributed across multiple data storage devices, connected, for example, by a LAN or WAN.
[0051]FIG. 3 shows the data sources of the system according to a preferred embodiment of the invention. A process source owner (PSO) tool 70 keeps track of available parts inventory for a particular
period and generates data on the demand for parts 42, originating from both external and internal sources, and the BOMs for available machines 72. Financial data for parts 74 and financial data for
machines 76, such as de-manufacturing costs, profit yields, and fair market value, are stored in a central data storage location 40 with the demand data 42, and the BOM data 72 generated by the PSO
process 70.
[0052] An optimization tool 80 works in conjunction with a supply-demand matching tool (SDM) 78. The SDM 78 generates forecasted demand data 86 for parts in specific geographical regions, as well as
machine supply data 88 and stores that data in the central data storage 40 for access by other parts of the system. The optimization tool 80 uses the system data stored in the data warehouse 40 to
calculate the optimal dismantling configuration of the machine supply 82. Examples of such calculations were discussed above with regard to tables 1-4. A report with a dismantling plan 84 outlining
the configuration is generated and stored in the central data warehouse 40.
[0053] Now referring to FIG. 4, there is shown a simplified logic flow diagram according to a preferred embodiment of the present invention. The process is preferably implemented by software residing
on a computer, as is well known. The process is invoked by some user or system request to the software (step 90). Data is entered or imported into the computer (step 92). This data includes all
relevant financial and technical information on the machine supply and the parts demand, as previously discussed. Data to consider, for example, includes:
[0054] Machine parts BOM information with parts yield
[0055] Available machine inventory
[0056] Forecasted EOL machine returns
[0057] Calculated EOL propensity data (propensity of a machine to yield specific parts at its EOL based on historical data)
[0058] Parts FMV
[0059] Machines FMV
[0060] De-manufacturing cost data
[0061] De-manufacturing parts quality yield data (how many parts are produced from de-manufacturing and their condition based on historical data)
[0062] Defined machine to parts de-manufacturing financial equation algorithms, i.e., machine and parts profit calculation formulas
[0063] Machine type model option-able feature codes (percentage of machine types that yield certain options when returned at EOL based on historical data)
[0064] Quality level of machine inventory (whole, cannibalized, functional, cosmetic damage etc)
[0065] Machine de-manufacturing cycle times
[0066] Parts refurbishing cycle times
[0067] Cost of parts repair
[0068] The parts supply is determined (step 94) by cross referencing the corresponding BOM with the machines in stock. In other words, the BOM contains the parts yield of each machine, i.e., what
type and how many parts each machine produces from de-manufacturing. The parts yield of each machine is multiplied by the number of machines in stock to determine what type and quantity of parts are
available. Next, it must be determined whether a shortage exists for any of the parts (step 96). If the available machine supply is sufficient to meet the demand for all parts, the optimization tool
analyzes the machine supply data (step 101), a machine dismantling configuration is generated (step 103), and the process terminates (step 108).
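Steps 94 and 96 amount to a BOM explosion followed by a shortage check; a minimal sketch follows, with hypothetical model names, yields and demands (not figures from the patent):

```python
# Hypothetical BOMs: parts yield per machine for each model in stock.
bom = {"PC1": {"cpu": 1, "ram": 2}, "PC2": {"cpu": 1, "disk": 1}}
stock = {"PC1": 75, "PC2": 65}
demand = {"cpu": 120, "ram": 200, "disk": 80}

# Step 94: multiply each model's parts yield by the units in stock.
supply = {}
for model, qty in stock.items():
    for part, per_machine in bom[model].items():
        supply[part] = supply.get(part, 0) + per_machine * qty

# Step 96: any part whose demand exceeds the derived supply is a shortage.
shortages = {p: q - supply.get(p, 0)
             for p, q in demand.items() if supply.get(p, 0) < q}
print(supply)      # {'cpu': 140, 'ram': 150, 'disk': 65}
print(shortages)   # {'ram': 50, 'disk': 15}
```

Parts with a shortage feed the not-covered list; the rest form the covered list passed to the optimization tool.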
[0069] If there is a shortage, the parts demand is separated into two lists: covered parts, or those covered by the parts supply, and not-covered parts, or those not covered by the parts supply (step
98). The covered parts list is processed to determine the optimal dismantling configuration (step 100). The not covered parts list is processed to determine a harvesting configuration of which
machines and how many of each should be harvested (step 102), or obtained from another source outside the machine supply.
[0070] Because the not-covered parts list represents parts demand that is not covered by the machine supply, another source is considered for meeting the not-covered demand. One possible source is
external suppliers. Another is leases that are almost at their end. The leasing entity contacts the lessees and offers to terminate their leases early to obtain leased equipment for meeting the
not-covered parts demand. Preferably, a combination of the two sources is used which is optimized to the least cost.
[0071] Optimization is performed with respect to how many machines should be harvested (step 104) and a recommendation report is generated (step 106). After the report is generated, the process
terminates (step 108). If there is an insufficient number of parts from dismantling, an order must be placed with external sources to fill the demand.
[0072]FIG. 5 depicts a data and logic flow of a preferred embodiment according to the present invention. The machine models that are economically justified for dismantling are selected (step 150)
using the financial data on the value of the machines and their constituent parts together with the BOM for each machine model 152. When the BOM is cross referenced with the financial information
about machine values and part values, profits from machine sales and parts sales can be determined by specific formulas, discussed later. If the profit from parts sales is greater than the profit
from machine sales by a certain threshold for a particular model, that model is selected for dismantling.
[0073] A list of economically justified machine models for dismantling 154 is sent to the next process to determine the parts supply (step 156). Data on the machines currently in stock with their
corresponding BOMs 158 is cross referenced with the list of models for dismantling 154 to determine the available parts supply from the machines in stock (step 156). The parts demand 162 is imported
into the system and the parts supply 159 is matched to the parts demand (step 160) creating the covered parts demand 164 and the not-covered parts demand 166. The covered parts demand is further
broken down into internal demand 168 and external demand 170. Optimization is run on all demand sets to determine the optimal dismantling configuration for meeting the parts demand at a lowest cost
(step 172).
[0074]FIG. 6 shows a more detailed flowchart for the data flow and processing in a preferred embodiment according to the present invention. This flowchart illustrates how bills of parts demand are
translated into bills of machine-to-dismantle while optimizing the number of machine-to-dismantle to incur the least cost.
[0075] The method starts with the determination of which machine models are economically justified for dismantling (step 110) by considering parts and machine value 74, 76 and the BOM of each machine
model 72. The financial data utilized by the system tool includes valuation information about the machines and the parts such as average wholesale fair market value (FMV) for each machine type model
(MTM), average profits for each MTM, the total average cost of re-manufacturing by MTM and the total average cost of de-manufacturing by MTM.
[0076] Using the value of the machines and the values of their individual parts, the profit yield for the machine type is determined when sold as a whole and when sold as parts. In determining
whether a machine is economically justified for dismantling, the parts profit (profit from selling a machine for its parts) and the machine profit (profit from selling the machine as a whole) are
calculated so that a final determination can be made as to whether the parts profit is greater than the machine profit by some margin. The margin is a design choice governed by business and economic
concerns with the aim of maximizing profits. The actual margin will vary with different industries, corporate policies and personal preferences. Additionally, the margin may differ for selecting
machines to meet external need against selecting machines to meet internal need.
[0077] In an exemplary embodiment, to meet external need, the profit yield of selling a machine for its parts should be twenty percent (20%) greater than the profit yield from selling the machine as
a whole. In other words, if breaking a particular machine model down and selling it for parts would produce a 20% greater profit than selling the machine as a whole, then it is selected for
dismantling to meet external need. Machines that do not meet this requirement are eliminated from the available machine supply. This process (step 110) generates a list of machines for dismantling,
their BOMs, and the fair market value of the machines and their parts.
[0078] For purposes of illustration we will assume that, for some internal corporate policy of the leasing entity, it is preferable to meet internal need with machines whose parts profit are merely
greater than their machine profit.
[0079] Machine profits and parts profits are calculated by predetermined formulas configured to take into account a number of factors reflecting business concerns and economic concerns of the leasing
entity. The formulas will vary between different industries, corporations and businesses.
[0080] To select machines for de-manufacturing to meet external need the following formulas can be used. To calculate machine profits for a specific MTM, the average NIB value of an MTM is added to
the total re-manufacturing expense for that specific machine type. That sum is then subtracted from the average FMV for that specific machine type model. In formula form, this is represented by:
MTM Machine Profit (MP)=(FMV)−((MTM avg.NIB value)+(total machine re-manufacturing expense))
[0081] To calculate the parts profit for a particular MTM to meet external need, the average machine NIB value is added to the total parts de-manufacturing expense and this sum subtracted from the
average FMV of the MTM total valued parts with an external demand. This is represented by the formula:
MTM Parts Profit (PP)=(MTM total parts w/ ext. demand avg. FMV)−((machine avg. NIB value)+(total parts de-man expense))
[0082] Using the results of this formula, machines that satisfy the following condition:
PP≥1.2×MP
[0083] are selected for dismantling to meet external parts demand.
[0084] When selecting machines to meet internal parts demand, parts profit of a particular type of machine in the machine supply are calculated by adding the machine average net investment book (NIB)
value to the total parts de-manufacturing expense and subtracting it from the sum of the average NIB value of the total parts with an internal demand with an adjustment to the NIB to take internal
transfer costs into account. The corresponding formula follows:
MTM Parts Profit (PP)=(total parts w/internal demands avg. NIB value+cost adj to NIB)−((machine avg. NIB value)+(total parts de-man expense))
[0085] Machine profits for a particular model type to meet internal demand are calculated by adding the MTM average NIB value to the total machine re-manufacturing expense and subtracting the total
from the average FMV of the particular machine type model. Or, in formula form:
MTM Machine Profits (MP)=(MTM avg.FMV)−((MTM avg. NIB value)+(total machine re-man expense))
[0086] Accordingly, to be dismantled for internal parts demand, the parts profit of a machine should be greater than its machine profit, or:
PP>MP
[0087] Alternatively, machines with a net parts revenue (NPR) greater than its gross machine revenue (GMR) can be selected for dismantling for both external and internal demands. The NPR is
determined by subtracting the total de-manufacturing expense from the total valued parts, or:
NPR=MTM total valued parts−total parts de-manufacturing expense
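The selection criteria above can be sketched as follows. The FMV, NIB and expense figures are illustrative assumptions, not values from the patent; only the formulas and the 20% external-demand margin come from the text:

```python
def machine_profit(fmv, nib, reman_cost):
    # MP = avg. FMV - (avg. NIB value + total re-manufacturing expense)
    return fmv - (nib + reman_cost)

def parts_profit(parts_fmv, nib, deman_cost):
    # PP = total valued-parts FMV - (avg. NIB value + de-man expense)
    return parts_fmv - (nib + deman_cost)

# Hypothetical figures for one machine type model (MTM).
mp = machine_profit(fmv=1000, nib=400, reman_cost=200)      # 400
pp = parts_profit(parts_fmv=1200, nib=400, deman_cost=250)  # 550

# External demand: dismantle only if parts profit exceeds machine
# profit by the 20% margin given in the text.
print(pp >= 1.2 * mp)  # True: 550 >= 480

# Internal demand: parts profit merely greater than machine profit.
print(pp > mp)         # True
```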
[0088] In any event, once the set of economically justified machines is determined, its corresponding parts supply list is calculated from the list of machines for dismantling in part one of a supply
chain planner (SCP) process (step 112) by exploding the BOM, i.e., cross referencing the BOM of the available machines with the list of machines for dismantling to determine which parts and how many
of each part will be available. It is preferred that a two-level BOM is used for each machine model type which includes a list of high-value parts and the corresponding quantity per machine for that
type. The two-level BOM is arranged in a tree structure with the first level being the highest level indicative of a whole machine. The second level is the lower level indicative of the constituent
parts that make up the whole machine.
[0089] The SCP #1 process (step 112) uses information from a supply demand matching (SDM) tool (step 78), which includes the quantity of machines available in the machine supply to dismantle, for
generating a list of parts and the quantity of each part for the second part of the SCP process (step 114).
[0090] The second part of the SCP process (step 114) matches parts supply against their demand. The SCP #2 process accepts data from the central data storage 40 and the PSO 42 which produces netted
parts demand information (demand from all sources).
[0091] The SCP #2 (step 114) generates the covered parts list and the not-covered parts list for the optimization tool (step 80). Reports are generated outlining the machines to dismantle, excess
parts supply, and the parts demand that is not covered by the supply (not-covered parts demand).
[0092] The system optimizes each set of the parts demand, the covered internal parts demand, the covered external parts demand, and the not-covered parts demand. When run on the covered parts demand,
the system produces the set of machines to dismantle for that covered parts demand set and the set of left over excess parts, for which there is no demand. The excess parts are fed back into SCP #2
to be matched with some other demand. When the system runs on the not-covered parts demand, a recommendation for harvesting machines is generated.
[0093] The resultant reports, including the list of machines-to-dismantle 82, will be sent to the SDM tool 78. Other reports include the not-covered demands report, uneconomical to cover demands
report, surplus report (excess parts supply from de-manufacturing for which there is no demand), and list of machines for harvesting the not-covered parts report.
[0094] The flow of data in the diagram of FIG. 6 will now be described. The total number of available machines in stock is determined by the SDM tool 78 and sent to SCP #1 112 and the optimization
tool 80 (arrow 120). The list of products economically justified for dismantling, their corresponding BOM, and the value for the machines and parts is also exported to the SCP #1 112 and the
optimization tool 80 (arrows 122 and 124). Reference data from the central data storage 40 such as machine information and parts information is transmitted to the SCP #2 114 and the optimization tool
80 (arrows 126 and 128). The list of all available parts supply and the total quantity of each part that is generated by the SCP #1 is sent to SCP #2 114 for further processing as previously
discussed (arrow 130). Parts demand information processed by the optimization tool 80 is obtained from the PSO 42 (arrows 132). The PSO process supplies the demand files to the system via the central
data storage 40. The files include information on the parts demand, the parts and the machines. Machine information includes a BOM file by machine type model (MTM) with de-manufacturing yield data.
Parts demand data includes part number, description, quantity required, need by date, demand source and any other part information deemed important. Machine supply data includes the machine type
model, model number and the quantity available. Parts demand data generated by the SCP #2 114 for all the sets of parts demands (covered, not-covered, internal, external) is imported into the
optimization tool 80 (arrows 134). A report outlining the virtual excess parts generated from machine dismantling is fed back to SCP #2 114 as buffered inventory (arrow 136) so that the inventory
will accumulate, virtually, excess parts from dismantled machines that are not needed to meet demand. Preferably, virtual excess parts left over from external demand are available to internal demand
consumption and virtual excess parts from any discontinued machines or parts are available to internal and external demand. The optimization tool 80 produces, as output, a set of flat files
containing the list of machines to dismantle 82 that is exported to the SDM tool 78 and output to a user (arrow 138).
[0095] Optimization may be adapted to address specific concerns, such as high-valued parts. In such a case, when configuring the supply-demand sources to optimize, only high-valued parts are included. The same can be done for low-valued parts.
[0096] In a further embodiment, a virtual parts supply driven model is provided that electronically converts machine supply to a virtual part supply. The objective of this embodiment is to convert a
given machine supply forecast into an available parts supply forecast, or virtual parts supply. This can be done for the entire machine supply or segments of the machine supply, such as excess
machines that have no external demands, but still retain an internal reserve or residual value. The system turns the total machine supply into a virtual parts supply by the machine model numbers. The
virtual supply can be used to forecast parts supply over time and to perform optimization analysis for long term materials requirement planning, parts supply demand planning, and forecasting.
[0097] Alternatively, the virtual supply embodiment can be used to support advanced advertising of a forecasted parts supply that the system predetermines would produce the most profit, providing an
effective planning tool for marketing strategies.
[0009]FIG. 1 shows a graph for a Linear Programming formulation according to a preferred embodiment of the present invention.
[0010]FIG. 2 shows a flowchart for the process of an optimization system according to a preferred embodiment of the present invention.
[0011]FIG. 3 shows the data flow in a preferred embodiment of the present invention.
[0012]FIG. 4 depicts a flow chart for the logic flow of a preferred embodiment according to the present invention.
[0013]FIG. 5 shows a combined data and logic flow according to a preferred embodiment of the present invention
[0014]FIG. 6 is a more detailed diagram of the process in FIG. 5.
[0001] The present invention relates to supply optimization, and more particularly, to an end-of-lease (EOL) equipment supply optimization system.
[0002] Typically, businesses lease high cost equipment rather than purchasing the equipment outright. Leasing may be obtained from a financial institution that has purchased the equipment or from the
original equipment manufacturer. When equipment is leased from a financial institution, it is typically sold off at the end of a lease (EOL) for the fair market value of the equipment. When equipment
is leased from a manufacturer, however, it may be more profitable for the manufacturer to break down, or de-manufacture, EOL machinery and sell the individual parts of the machine separately. Selling
the equipment as a whole, however, may be more profitable. As a third alternative, some combination of both options may yield the highest profit, which is typically the case. The exact combination of
machine sales to parts sales to maximize profit, however, is difficult to calculate.
[0003] Thus, it is desirable to provide a system for determining the most profitable solution for EOL equipment disposal and thereby use returned EOL equipment to maximize value to the leasing company.
[0004] A method for optimizing a machine supply to meet a parts demand at a lowest cost is provided comprising the steps of determining a parts demand, determining a machine supply, and configuring
an optimal dismantling configuration of the machine supply to meet the parts demand at a lowest cost by considering a number of variables such as machine parts yield, probable quality of machine
yielded parts, machine inventory, forecasted machine returns, fair market values of machines and parts, de-manufacturing cost, de-manufacturing cycle times and parts refurbishing cycle times. The
optimal dismantling configuration includes a predetermined number and a predetermined type of machines from the machine supply.
[0005] The method further comprises determining a portion of the parts demand that cannot be satisfied from the machine supply, determining which machines in the machine supply are economically
justified for dismantling, determining the parts supply yielded from the machine supply and matching the parts supply to the parts demand. If the parts supply is insufficient to meet the parts
demand, a covered list outlining the parts demand that is covered by the supply and a not-covered list outlining the parts demand that is not covered by the supply is generated. An optimal
dismantling configuration of the machine supply for the covered list is calculated and an optimal harvesting configuration (obtaining machines from other sources) is calculated for the not-covered list.
[0006] An economic supply optimization system is also provided to determine how to dismantle a machine supply to collect specific parts for meeting a parts demand at a lowest cost. The system
comprises a processor, a first data storage device connected to the processor, and a program residing on the data storage device executable by the processor. A second data storage device provides
central data storage for the system and stores information on parts demand, parts supply, relevant financial information, and technical information on de-manufacturing. The program determines a parts
demand and a machine supply. An optimal dismantling configuration of the machine supply to satisfy the parts demand at a lowest cost is then determined. In this manner, the parts demand is converted
into a machine-to-dismantle demand while minimizing the cost incurred by meeting the parts demand.
[0007] The system accepts, as input, information on parts demand, machine supply, financial information on market values and de-manufacturing costs, technical information on de-manufacturing and
other supply-demand matching information. Preferably, the information is maintained on the second data storage device to effect central data storage. The system performs a first pre-screening process
to identify a portion of the parts demand that cannot be satisfied from the machine supply. A second pre-screening process eliminates the parts demand which it is not economically feasible to satisfy
from the machine supply. Selection for elimination is accomplished by a predetermined selection criterion. A parts supply is determined from the remaining machine supply and the parts supply is matched
to the parts demand. If there is a sufficient supply, the optimization tool determines the optimal dismantling configuration. If there is an insufficient supply, a list of the covered parts and a
list of the not-covered parts are generated. The optimal dismantling configuration is then determined by the optimization tool for the covered parts list.
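The covered/not-covered split described in this paragraph can be sketched in a few lines. This is only an illustrative fragment, not the patented implementation; the part names and quantities below are invented for the example:

```python
def split_coverage(demand, supply):
    """Split a parts demand into a covered list (satisfiable from the
    machine-yielded parts supply) and a not-covered list (the shortfall)."""
    covered, not_covered = {}, {}
    for part, qty in demand.items():
        have = supply.get(part, 0)
        if have:
            covered[part] = min(qty, have)   # what the machine supply covers
        if have < qty:
            not_covered[part] = qty - have   # shortfall to harvest elsewhere
    return covered, not_covered

# Hypothetical demand vs. machine-yielded supply:
covered, not_covered = split_coverage(
    {"fan": 5, "board": 3, "drive": 2},
    {"fan": 10, "board": 1})
# covered == {"fan": 5, "board": 1}; not_covered == {"board": 2, "drive": 2}
```

The optimization tool would then run only on the covered list, while the not-covered list feeds the harvesting step.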
[0008] In this manner, reverse logistics and algorithms can be used to match a machine supply to a parts demand at a lowest cost thereby maximizing profit in planning a parts supply from
de-manufacturing the machine supply.
Given an n-dimensional contingency table of observed frequencies, compute the expected frequencies for the table based on the marginal sums under the assumption that the groups associated with each
dimension are independent.
Parameters :
observed : array_like
    The table of observed frequencies. (While this function can handle a 1-D array, that case is trivial. Generally observed is at least 2-D.)
Returns :
expected : ndarray of float64
    The expected frequencies, based on the marginal sums of the table. Same shape as observed.

>>> observed = np.array([[10, 10, 20], [20, 20, 20]])
>>> expected_freq(observed)
array([[ 12.,  12.,  16.],
       [ 18.,  18.,  24.]])
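Under the independence assumption, the expected table is just the outer product of the marginal sums divided by the grand total. A minimal NumPy sketch reproducing the example above (what the function computes in the 2-D case):

```python
import numpy as np

observed = np.array([[10, 10, 20],
                     [20, 20, 20]])
row_sums = observed.sum(axis=1, keepdims=True)   # shape (2, 1): 40, 60
col_sums = observed.sum(axis=0, keepdims=True)   # shape (1, 3): 30, 30, 40
expected = row_sums * col_sums / observed.sum()  # broadcast to (2, 3)
# expected == [[12., 12., 16.], [18., 18., 24.]]
```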
2 Jul 23:17 2008
[Algorithms] Nearest shapes to a point
Danny Kodicek <dragon <at> well-spring.co.uk>
2008-07-02 21:17:23 GMT
Hi there
I'm delurking to ask a question for a friend of mine. He's trying to solve a
problem which amounts to this:
There's a series of complex shapes (essentially long curving pipes which may
twist, split, rejoin etc). The space may have five or six of these (or
more), each a separate network but wound around each other in complex ways.
He has a point in the space and he needs to know which network is nearest to
the point. He can calculate anything about the space and the networks ahead
of time, but the calculation itself is a real-time problem.
My feeling is that there are two main approaches that could work, but I'm
obviously open to other suggestions.
Option 1: The networks are being given to him just as a bunch of models, but
there's no reason in principle why he couldn't analyse them and convert them
to a series of splines. One option would be to store these splines as a
series of lists at a progressively greater level of detail, including some
metric defining how good the detail is (how closely the spline matches the
curve, eg a maximum distance from the real curve to the spline segment).
Then he could take the point and calculate the nearest splines at each level
of detail (taking into account the margin of error), gradually homing in
until only one segment is selected. I can visualise this algorithm pretty
well, but I don't know how efficient it would be, or how easy it would be to
calculate the splines - especially since his client wants to be able to add
new networks at a later date without additional coding.
Option 2: He could take the whole space and partition it in some way into
regions such that each region belongs to a particular network, ie, for every
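For what it's worth, option 2 can be prototyped cheaply when the networks are sampled into point sets: precompute a coarse grid whose cells each store the index of the nearest network, so the real-time query is a table lookup. A 2-D toy sketch (the real problem is 3-D, and the networks, bounds, and resolution here are invented):

```python
import math

def build_grid(networks, bounds, res):
    """Precompute, for each cell of a res x res grid, the index of the
    nearest network. Each network is given as a list of sample points."""
    (x0, y0), (x1, y1) = bounds
    grid = []
    for i in range(res):
        row = []
        for j in range(res):
            cx = x0 + (i + 0.5) * (x1 - x0) / res   # cell centre
            cy = y0 + (j + 0.5) * (y1 - y0) / res
            best = min(range(len(networks)),
                       key=lambda k: min(math.dist((cx, cy), p)
                                         for p in networks[k]))
            row.append(best)
        grid.append(row)
    return grid

def nearest_network(grid, bounds, point):
    """Real-time part: clamp the point into the grid and look up the label."""
    (x0, y0), (x1, y1) = bounds
    res = len(grid)
    i = min(res - 1, max(0, int((point[0] - x0) / (x1 - x0) * res)))
    j = min(res - 1, max(0, int((point[1] - y0) / (y1 - y0) * res)))
    return grid[i][j]

networks = [[(0, 0), (0, 1)], [(10, 10)]]   # two toy "networks"
bounds = ((0, 0), (10, 10))
grid = build_grid(networks, bounds, res=8)
```

The precomputation is O(cells × sample points), paid once; adding a new network later only requires rebuilding the grid, not new code.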
BEGIN the Eureka video at the beginning. PAUSE the video when the narrator asks, "How small should you crush them so that they'll fit exactly in to the container?" Ask the students what it is
they have to know before figuring out how small these eight cars have to be crushed to fit exactly into the container. (how big the container is)
RESUME the video until the narrator says, "That's all that the word volume means - how much space something envelops." PAUSE the video. Ask a volunteer to repeat this definition and write it on
the board. Ask, "How do you figure out how much space that cube envelops?" This question will help you assess how familiar the students are with the concept of volume. At this age level, most of
the students should know that volume can be calculated by multiplying length, height, and width. The video answers this question in the very next sentence, so the students' answers are validated.
Inquiries like this also help to keep the students focused when the video is resumed.
RESUME the video.
PAUSE when the screen shows "Volume = 64 m3" and the narrator says, "The container has a volume of 64 cubic meters." Ask the students where the little 3 comes from next to the "m" in 64 m3.
Demonstrate that the superscript 3 comes from multiplying 3 different quantities measured in meters together. "If you have eight cars but only 64 cubic meters to stuff these cars into, how do you
figure out how big each car can be? (Divide 64 by 8.)
RESUME the video.
PAUSE when the narrator says, "In other words, you'll have to crush each old wreck into a cube measuring two meters by two meters by two meters." Ask the students if there is another shape the
cars could be crushed into besides 2 x 2 x 2 and still all fit into the box. (4 x 2 x 1)
RESUME the video.
PAUSE when the narrator says, "A density machine takes a car with a mass of say 2000 kilograms and squeezes the kilograms into a much smaller volume." Let the picture of the car with the formula
Density = Mass/Volume appear but pause before anything is said on this screen. Tell the students the following:
"They have not actually given us a verbal definition of density, but they have given us a mathematical one. [Point to the mass/volume part of the picture on the screen.] What is this part the
equation called? [You might have to give other examples like m/v or ½ to get the students to understand that this is a fraction.] A fraction is a way of comparing two numbers. In density we are
comparing an object's mass to its volume, or how much matter there is to how much space this matter takes up. Another way of stating this is to say we are making a ratio of mass compared to
volume. So we can define density mathematically using the formula, and verbally as the ratio of mass to volume in an object." [Write this definition on the board.]
REWIND back to the picture of the density machine and RESUME the video. PAUSE after the narrator says, "In this case you increase density by keeping the volume the same but increasing the mass."
Ask, "So which was more dense, the box with two cars or the same box with eight cars? (8) What was changed in the system? (mass) How was mass changed? (increased) Which would be more dense, 2000
kg in this container (point to aquarium) or 2000 kg in this container? (hold up a smaller box) What is the difference between these two systems? (different volumes) Now you should be able to
write two ways to increase density if you haven't already."
RESUME the video as students complete their statements. STOP at the end.
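For reference, the arithmetic in this segment works out as follows (a short sketch; the 2000 kg figure is the mass quoted by the narrator):

```python
container_volume = 64                    # m^3, from the video
cars = 8
volume_per_car = container_volume / cars  # 8.0 m^3 for each crushed car
cube_side = volume_per_car ** (1 / 3)     # ~2 m: a 2 m x 2 m x 2 m cube

mass = 2000                               # kg, one car in the density machine
density = mass / volume_per_car           # 250 kg per cubic metre
```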
Prove G has at least one orbit
April 13th 2011, 07:14 AM #1
Junior Member
Apr 2010
Prove G has at least one orbit
Prove that if G is a finite p-group acting on a finite set S with p not dividing |S|, then G has at least one orbit which contains only one element.
I am completely lost. Please help.
Hint: The order of an orbit divides the order of the group.
This can be seen easily from, for example, the orbit-stabiliser theorem.
to amplify, consider a fixed element x of a set S that G acts on. if we set Gx =
{a in G: a(x) = x}, the stabilizer of x, we can form a map φ from G/Gx (the set of left-cosets of Gx) to the orbit of x by:
φ(bGx) = b(x) (if a is in the stabilizer Gx, ba(x) = b(a(x)) = b(x), since the elements of Gx don't do anything to x).
now suppose b' is in bGx, so b' = ba, where a is in Gx. then b'(x) = ba(x) = b(x), so this map is well-defined.
suppose that two cosets of Gx give us the same result: φ(bGx) = φ(cGx), so b(x) = c(x).
then c^-1b(x) = c^-1c(x) = e(x) = x, so c^-1b is in Gx, so bGx = cGx, the two cosets are equal. this means φ is injective.
but if y is in the orbit of x, this means that for SOME g in G, g(x) = y, in which case φ(gGx) = g(x) = y,
so φ is surjective as well. so if G is finite, then the size of the orbit of x is the number of cosets of Gx, which is the index [G:Gx].
now Gx is a subgroup of G, so |Gx| divides |G|, and since |G| = |Gx|*[G:Gx], [G:Gx] divides |G| as well.
what does this all mean if |G| = p^k?
1) possibility A- [G:Gx] = 1. this happens if all of G fixes x, which is what we hope might happen.
2) possibility B- [G:Gx] = p^r, this happens if the orbit has more than one element.
(stuff for you to prove in the middle: two orbits are either the same, or have no elements in common).
so divide S into all its orbits. we have |S| = k + p(other stuff). we want to prove k isn't 0.
suppose it was. then 0 = |S| - p(other stuff). since p divides 0, and p divides p(other stuff), then wouldn't p divide |S|?
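the counting argument is easy to check numerically on a small case. below is a sketch (not part of the original thread): a hypothetical group of order p = 3, generated by one 3-cycle, acts on a set of 4 points; since 3 does not divide 4, a singleton orbit must appear:

```python
def compose(p, q):
    # permutations as tuples; (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def generate_group(gens):
    n = len(gens[0])
    group = {tuple(range(n))}         # start from the identity
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(h, g)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

def orbit_sizes(group, n):
    seen, sizes = set(), []
    for x in range(n):
        if x not in seen:
            orb = {g[x] for g in group}
            seen |= orb
            sizes.append(len(orb))
    return sizes

# C_3 acting on {0,1,2,3}: 0 -> 1 -> 2 -> 0, and 3 is fixed
G = generate_group([(1, 2, 0, 3)])
sizes = orbit_sizes(G, 4)
# len(G) == 3 and sizes == [3, 1]: the singleton orbit {3} exists,
# exactly as |S| = 4 is not divisible by p = 3 forces
```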
Finite Group Theory--Sylow Subgroups
Date: 07/09/2004 at 19:32:02
From: Alesia
Subject: Finite Group Theory- Sylow subgroups
Hi Dr. Math,
I hope you can help me with this since this problem's been bugging me
for a while. It seems as if it should be straightfoward, but I'm
having trouble cracking it.
Let U and W be SUBSETS of a Sylow p-subgroup P (of a group G) such
that U and W are normal in P, i.e., the normalizer in P of U is P (N_P
(U) = P) and N_P (W) = P.
Show that U is conjugate to W in G iff U is conjugate to W in the normalizer of P.
I think that if we assume that there does exist an element g in G
which conjugates U to W, we know that g is not an element of P. But,
does this same g have to be in the normalizer of P, or does the
existence of such a g only guarantee that an element of the normalizer
of P also conjugates the two subsets?
Is there some underlying property of being normal as a subset that
tells us that if we move U to W, then we must move P to P under this
action of conjugation?
I tried looking at the subgroups generated by U and W, at the cosets
they form, and at some of the permutation representations afforded by
conjugation by various subgroups of G. But, I still can't get this
problem. Am I just missing something which is obviously true, or is
this much tougher than I thought?
Date: 07/10/2004 at 06:25:39
From: Doctor Jacques
Subject: Re: Finite Group Theory- Sylow subgroups
Hi Alesia,
First, note that the "if" part is obvious--only the "only if" part
requires some thought.
Let us write x^g for the conjugate of x by g, i.e., g^(-1)xg. Let us
also simply write N(X) for N_G(X).
We assume that there is a g in G such that U^g = W, and we must show
that there is an element h in N(P) such that U^h = W (as you suspect,
we do not need to prove that g = h).
As U is a normal subset of P, U^g = W is a normal subset of P^g = Q,
because automorphisms preserve normal subsets.
The normalizer N(W) is a subgroup of G and contains P (by hypothesis)
and Q. This means that P and Q are Sylow subgroups of N(W). As Sylow
subgroups are conjugate to each other, there is an element z in N(W)
such that:
Q^z = P
P^(gz) = P
and this shows that gz is in N(P). On the other hand, as z is in
N(W), we also know that W^z = W.
Can you continue from here? Please feel free to write back if you
require further assistance.
- Doctor Jacques, The Math Forum
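As an aside (not part of the original exchange), the conjugacy of Sylow subgroups that the argument leans on can be verified computationally in a small case. A sketch checking that the three Sylow 2-subgroups of S3 (permutations of {0,1,2}) are all conjugate, using the same x^g = g^(-1)xg convention as above:

```python
from itertools import permutations

def compose(p, q):
    # permutations as tuples; (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))          # the symmetric group S3
e = (0, 1, 2)
# Sylow 2-subgroups of S3 are its subgroups of order 2,
# each generated by one transposition
sylow2 = [frozenset({e, g}) for g in G if g != e and compose(g, g) == e]

def conjugate(P, g):                      # P^g = g^-1 P g
    gi = inverse(g)
    return frozenset(compose(compose(gi, h), g) for h in P)

# any two Sylow 2-subgroups are conjugate by some element of G
all_conjugate = all(any(conjugate(P, g) == Q for g in G)
                    for P in sylow2 for Q in sylow2)
# len(sylow2) == 3 and all_conjugate is True
```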
Forney Math Tutor
Find a Forney Math Tutor
...My experiences with these kids is why I continue to teach and help assist in improving their math skills and helping them overcome any doubts in their mathematical abilities. I specialize in
teaching 6th through 12th grade math; specifically, Algebra 1, Algebra 2, Geometry, Pre-Calculus, College Algebra. I believe that all students can learn and understand math better.
24 Subjects: including algebra 1, statistics, prealgebra, algebra 2
...Air Force as an officer. I graduated from BYU in 1989 with a Master of Science degree in Electrical and Computer Engineering with a minor in mathematics and aerospace studies. I then used my
engineering skills out of college as a US Air Force officer and an engineer for various high-tech companies.
48 Subjects: including algebra 1, algebra 2, calculus, chemistry
...As a math major, I had to be very grounded in these subjects. As a result, I am very comfortable and confident in not only tutoring them but doing so at a level and style that students can
readily follow and understand. I have taken a number of Statistics courses, a list which includes elementa...
41 Subjects: including statistics, linear algebra, differential equations, logic
...This expertise also includes thorough handling of the procedures and calculations involved in laboratory experiments for both AP and college-level chemistry. More and more, students, especially
college-level and high school AP level, are requiring assistance with both lab and lecture information...
17 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I am a "National Board Certified Math Teacher" and certified in Texas 4-8 and 8-12 Math. I retired from Oklahoma(yes I am a Sooner fan) two years ago and started teaching in the North Texas
area. I currently teach Math Models, Alg.1, Alg.2, Geometry, Pre-Cal and TAKS prep classes to at-risk students.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
squeeze theorem limit
April 1st 2012, 04:54 PM #1
Mar 2012
squeeze theorem limit
Hello, I am trying to apply the squeeze theorem to the following:
limit as x tends towards zero.
-1 < e^sin(pi/x) < 1
-(pi/x) < e^sin(pi/x) < (pi/x)
plugging in zero to the x variables gives me a limit of zero.
My question is, am I correct in how I have factored the (pi/x) components to the outer edges ?
Thank you.
Last edited by tedsauc; April 1st 2012 at 04:56 PM.
Re: squeeze theorem limit
Hello, I am trying to apply the squeeze theorem to the following:
limit as x tends towards zero.
-1 < e^sin(pi/x) < 1
-(pi/x) < e^sin(pi/x) < (pi/x)
plugging in zero to the x variables gives me a limit of zero.
My question is, am I correct in how I have factored the (pi/x) components to the outer edges ?
Thank you.
While I agree that you could bound $\sin{\left(\frac{\pi}{x}\right)}$ between $-\frac{\pi}{x}$ and $\frac{\pi}{x}$, how could you possibly let $x = 0$ in the edges? You can't divide by 0... Exponentiating both sides won't help you either.
Re: squeeze theorem limit
thanks. That is where I am stuck. I don't know what to put into the edges. If anyone can offer suggestions that would be most appreciated.
Re: squeeze theorem limit
The fact is that the limit doesn't exist. limit Exp[Sin[Pi/x]] as x to 0 - Wolfram|Alpha
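A quick numerical illustration of why no limit exists: along x = 2/(4k+1) we have sin(pi/x) = 1, while along x = 2/(4k+3) we have sin(pi/x) = -1, so e^sin(pi/x) hugs e on one sequence approaching 0 and 1/e on the other:

```python
import math

def f(x):
    return math.exp(math.sin(math.pi / x))

highs = [f(2 / (4 * k + 1)) for k in range(1, 50)]  # sin(pi/x) = +1 here
lows  = [f(2 / (4 * k + 3)) for k in range(1, 50)]  # sin(pi/x) = -1 here
# highs stay near e ~ 2.718 while lows stay near 1/e ~ 0.368,
# so no single limiting value can exist as x -> 0
```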
6:30pm eastern in the Philippines
{"url":"http://www.evi.com/q/6%3A30pm_eastern_in_the_philippines","timestamp":"2014-04-19T20:33:34Z","content_type":null,"content_length":"59659","record_id":"<urn:uuid:4d4dc139-65e8-48bc-a0f1-6ec2b13f1c04>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Itasca, IL Algebra 2 Tutor
Find a Itasca, IL Algebra 2 Tutor
...Includes many math, economics, accounting, finance and history courses. Teaching Experience: Adjunct instructor in over 100 finance, statistics and algebra courses at various universities
throughout the Chicagoland area. Have tutored many college and high school students in statistics, finance or algebra.
13 Subjects: including algebra 2, statistics, algebra 1, geometry
...The course began with simple programming commands, progressed to logic and more complicated problem solving, and culminated with object oriented programming. During my masters degree I was a
TA for the intro to computer science course. For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students.
17 Subjects: including algebra 2, calculus, physics, geometry
...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain
difficult concepts to either a left or right-brained student, verbally or with visual representations. ...
34 Subjects: including algebra 2, reading, writing, statistics
...I have a Bachelor's degree (2010) in mathematics from the University of Illinois at Urbana-Champaign. I took MATH 461 Probability Theory at the U of I and received an A-. I took MATH 463 Stats
and Probability 1 and received an A+. I am certified to teach secondary (6-12) mathematics. I am certified to teach secondary (6-12) mathematics.
12 Subjects: including algebra 2, calculus, statistics, geometry
...They may consider themselves "dumb" or "less than" others. I am also trained as a Professional Life Coach and I help both students and parents change this situation, which will free children
to focus on growing into successful adults, instead of on "battling" (and losing) with math. The way I c...
10 Subjects: including algebra 2, geometry, algebra 1, SAT math
{"url":"http://www.purplemath.com/Itasca_IL_Algebra_2_tutors.php","timestamp":"2014-04-16T04:20:03Z","content_type":null,"content_length":"24206","record_id":"<urn:uuid:dfa4d9d8-1de6-4729-9050-c64a975f9cde>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re:Modulo optimizations
Wilco Dijkstra <Wilco.Dijkstra@arm.com>
19 Oct 1999 10:57:43 -0400
From comp.compilers
From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Newsgroups: comp.compilers
Date: 19 Oct 1999 10:57:43 -0400
Organization: Compilers Central
References: 99-10-017
Keywords: arithmetic, performance
George Russell wrote:
>The Alpha does not have integer divide. However as one of the manuals
>points out (I forget the exact URL), you can for unsigned integers
>implement division by any constant as an integer multiplication
>followed by a shift. For example, to divide a 32 bit integer by a 32
>bit integer you do a multiplication by a pre-computed number
>(producing a 64 bit result) followed by a right shift.
Yes, but unfortunately not all divisions by a constant can be treated
like that... A problem occurs when the error after multiplication
could be greater than the pre-computed value you are multiplying with,
forcing a correction step.
Interestingly there exists an algorithm to calculate modulos without
doing 64 bit multiplies (posted a few years ago on comp.sys.arm):
typedef unsigned int u32;
u32 mod3(u32 x)
{
    x *= 0xAAAAAAAB;
    if ((x - 0x55555556) < 0x55555555) return 2;
    return x >> 31;
}
It starts out like a normal division by a constant. Since 0xAAAAAAAB *
3 mod 2 ^ 32 = 1, any multiple of 3 will give a result between 0 and
0xFFFFFFFF / 3 = 0x55555555. Any x with x % 3 = 1 lies in the range
0xAAAAAAAB..0xFFFFFFFF, x % 3 = 2 in 0x55555556..0xAAAAAAAA. A range
test filters out modulo 2 results, while a shift is used to
distinguish between modulo 1 (bit 31 set) and 0 (bit 31 clear)
The drawback of this method is that it is only really useful for small
constants, for larger ones the range search effectively becomes a
division itself!
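A quick way to convince yourself the trick is correct is to brute-force it against the ordinary modulo. A Python transcription of the C routine (masking to 32 bits by hand, since Python integers are unbounded) agrees with x % 3 for every input tested:

```python
MASK = 0xFFFFFFFF

def mod3(x):
    x = (x * 0xAAAAAAAB) & MASK            # multiply by 3^-1 mod 2^32
    if ((x - 0x55555556) & MASK) < 0x55555555:
        return 2                           # x fell in the "mod 3 == 2" band
    return x >> 31                         # bit 31 set -> 1, clear -> 0

# exhaustive check over a range plus the top of the 32-bit range
checked = list(range(1 << 16)) + [0xFFFFFFFD, 0xFFFFFFFE, 0xFFFFFFFF]
assert all(mod3(x) == x % 3 for x in checked)
```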
Hyde Park, MA Algebra Tutor
Find a Hyde Park, MA Algebra Tutor
...I have preliminary Mass. licensure to teach mathematics and English at the secondary level.Advanced algebra was one of my favorite subjects in graduate school, where I earned an "A" in at
least three or four such courses. The same concepts are in elementary algebra, and I have a good amount of e...
29 Subjects: including algebra 1, algebra 2, reading, writing
...I'm a good teacher, listen and explain things well, enjoy teens and am patient and understanding. I have excellent tutoring references. I am the father of 3 teens, and have been a soccer
coach, youth group leader, and scouting leader.
15 Subjects: including algebra 1, algebra 2, physics, geometry
...I am a very good writer. I did well in college, and got a 5 on my AP English exam. I also write a lot for my career in insurance.
90 Subjects: including algebra 2, algebra 1, English, chemistry
...Now I’m looking to pursue a life’s ambition and transition from the professional realm into public education. I’ve completed testing for state licensure and begun classroom teaching with a
local school system. My areas of expertise are math and business.
22 Subjects: including algebra 1, algebra 2, calculus, geometry
I have been teaching high school and middle school math courses for about 20 years. I currently teach at a local high school and teach on Saturdays at a school for Asian students in Boston. I am
currently teaching Honors Algebra 2, Senior Math Analysis, and MCAS prep courses, as well as 7-8 grade math, and SAT Prep courses.
12 Subjects: including algebra 2, algebra 1, geometry, SAT math
Mrs. Fields Trivia is HERE – Win FREE COOKIES!
Mrs. Fields Trivia – November 17th 2011
Here’s your chance again to win another amazing gift of FREE cookies from Mrs. Fields!
Enter the contest by leaving a comment on our blog. If we pick you as our winner, we will contact you via the email you’ve provided.
We will draw 1 name from the correct answers given and that person will receive a FREE Mrs. Fields Florentine Elegance Box ($35.00 Value, with shipping). Answers will be accepted until midnight
(November 17th, 2011). Limited to one win per month per entrant. Winner will be notified via email.
Prize Details:
The prize is the Florentine Elegance Box – item #7W453, a $35.00 value, including shipping. It contains 18 Nibblers® bite-sized cookies, 12 Brownie Bites, and two hand-frosted cookies. Perfection!
Question: What winter hobby does our Mrs. Fields Alpine Polar Bear Jar love the most?
203 responses to “Mrs. Fields Trivia is HERE – Win FREE COOKIES!”
2. Ice skating
3. Skiing.
4. Mrs Fields Alpine Polar Bear enjoys sitting by the fireplace eating cookies of course it’s why the jar is filled with yummy cookies.
5. Skiing is the hobby that the Alpine polar bear jar love the most.
11. Alpine Polar Bear loves skiing!!
13. He enjoys skiing
14. skiing. He is cute!
15. Snow skiing while enjoying beautiful mountain scenery.
16. Skiing! how cute!! =)
17. He loves to participate in downhill skiing…
18. It looks like he enjoys skiing (or snow-shoeing) — Ready for some outdoor fun!
19. He’s holding ski’s so I’ll say skiing.
20. Skiing! What bear doesn’t love to ski? LOL
21. He’s ready for the slopes! Skiing!
22. Skiing just like me:)
25. Downhill sking
26. Downhill sking
27. Definitely skiing.
28. Downhill skiing:)
32. Downhill skiing
Thanks for the chance!!
□ Our lucky winner of this weeks Mrs. Fields trivia question is “JoeyfromSC”…. CONGRATULATIONS!! One of our Mrs. Fields representatives will be contacting you shortly. Thank you to all those
who participated. Visit us next week for your chance to win FREE COOKIES!!!
34. Skiing
Thank you for the giveaway
35. He loves to ski
37. Downhill Skiing.
38. Skiing of course
39. Down hill skiing !!! So much fun
40. Downhill Skiing! I’d love to try that someday.
44. Skiing which I did when younger, but the ground is alot harder, now that I am older..LOL
45. Definitely skiing, love your cookies!
47. The Alpine Polar bear jar likes skiing
48. skiing
didnt see my comment
52. Alpine skiing
55. SKIING!!! He’s a cutie. Maybe I will see him next week while skiing in Colorado!!!
57. downhill skiing thank you for the chance! if I win you will make my kids very happy! Happy Holidays!
58. He’s a skiier
59. downhill skiing
61. Downhill skiing!
62. Mrs. Fields Alpine Polar Bear Jar loves to Ski and also hold yummy cookies in his belly. ; )
63. He enjoys Downhill sking
64. Make that downhill skiing **
65. The hobby is skiing
66. skiing:)
67. skiing!!!!!!!!!!!
68. I would say… skiing!
70. The Polar Bear’s hobby is skiing!
71. Skiing!
bctripletmom at gmail dot com
74. He loves to go skiing!
83. The Polar Bear’s hobby is skiing
88. Snow skiing down hill♥!!
93. Its skiing, right? I know I’ve seen him with skates too, but I’m pretty sure its skiing.
96. skiing of course!
101. Downhill skiing!
102. He loves downhill skiing!!!
107. Live inKS I just hope we get some snow. If we do we build a snowman and sledding!!!!!
108. That bear does love him some skiing!!!
111. Downhill Skiing!
112. Alpine skiiing!
113. Polar bear loves skiing
117. Snow Skiing, rating nibblers, and filling your taste buds with a delightful Mrs. Field’s original.
118. Skiing, of course!
119. Mrs. Fields Alpine Polar Bear loves to ski during winter and enjoys a jar filled with fresh,warmly baked cookies after a day full of fun.
121. Hi.
I just want to say thank you so much for posting recipes. I recently broke my knee cap & it seems like baking is the way to keep me active.I have been baking some of Mrs. Fields baking recipes.
Mrs. Fields cookies in general are the BEST cookies out there. I love the company. Thanks again.
122. SKIING
124. sking
125. DOWNHILL SKIING!
127. Skiiing!
130. He enjoys skiing
131. Skiing!! And eating Mrs. Fields cookies like I do
132. I am guessing Skiing since he’s holding them =)
134. He loves to ski!
138. ski jumping and slalom skiing
141. snow skiing
143. He loves to ski…so cute
146. i think you have the best cookies , my guess is the bear drinking a coke and eating a box of your cookie’s,.
152. Cristina
154. SKIING!!
155. Skiing!!! Happy Thanksgiving everyone!!!
156. Skiing !!
157. ice skating
158. he likes to ski! pinkprincess111588@hotmail.com
160. sitting by the fire, drinking hot chocolate after a wonderful afternoon of SKIING!!!!
161. Skiing! (Otherwise it’d be pretty silly that he’s holding skis for no reason!)
163. Skiiing!!!
166. Mrs. Fields Alpine Polar Bear loves to ski!!!
168. Skiing!! I’m more of a belly flop type of gal ; )
169. #154
170. drinking a bottle of coke
171. Mrs. Fields Alpine Polar Bear’s favorite hobby is skiing!
172. the polar bear likes skiing!!
175. Downhill Skiing ofcourse!
181. Hmm downhill skiing?! Pick me pick me
183. Mrs fields polar bear jar loves to go skiing
184. She loves to snow ski while eating a variety of delicious Mrs. Fields Warm, Fresh and Wonderful Cookies.
185. down hill skiing then after takes a break eating your cookies :}
186. baking cookies
187. tee-hee Skiing! wooosh!
188. Skiing but not here in sunny California
189. Skiing on a stomach full of Nibblers and snowflake cookies!
190. Skiing, what else (other than sun bathing) would a bear do.
192. well i was going to guess skiing but as he has no poles my guess is nordic walking which is skiing without poles
193. alright i’ll go layman—SKiing
194. WAIT how about baking cookies is his favorite activity
195. <3 skiing!
197. Alpine Skiing
I couldn’t find if this contest ends EST, CST, or PST?
198. Skiing over to the nearest Mrs. Fields to fill up his belly! I’d like to do the same, but no stores in my neck of the woods.
199. Skiing, Skiing, Skiing :-))
201. Downhill Skiing!
202. Thank you SO much!!:)
Posted by in facebook, Giveaways, twitter
Tags: facebook giveaway, facebook trivia contest, Giveaways, mrs. fields giveaways
|
{"url":"http://www.mrsfields.com/blogs/blog/2011/11/mrs-fields-trivia-is-here-win-free-cookies-21/","timestamp":"2014-04-20T21:14:16Z","content_type":null,"content_length":"265196","record_id":"<urn:uuid:567fa865-c1fb-4c09-82ed-c479aba73b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Andy's Math/CS page
The Vulcan in Me
Readers, here is a puzzle adventure in naive physics and topology. The author respectfully denies being a Trekkie. Enjoy!
I graduated last in my class at Starfleet Academy. Not much of a story, so don't ask. Anyway, you probably know what came next: it was either find a civilian job or go guard some worthless corner of
deep space.
But I'm not exactly sociable, so I took the assignment with no hesitation. Sure enough, it was a real plum: I was put in charge of two monitoring stations (practically bathroom-sized, one of which I
controlled remotely) orbiting neighboring stars, with nothing else even close.
For two years, there was nothing but silence as I idled there, chuckling at my fate and deepening my acquaintance with some bootlegged Romulan ale. Then, finally, something happened. I got an alert:
"Reports of extremely massive unidentified vessel, may be in your sensor range. Report: are gravitational readings consistent with null hypothesis (no vessel)?"
I almost panicked. I hadn't checked the sensors in ages, and I didn't even begin to remember how to interpret the readings--how did gravity work again? If I asked I'd definitely be canned. I fumbled
through my old Academy lecture notes, struggling to penetrate the chicken-scratch handwriting and the alien sex doodles everywhere. Here is what I was able to recover:
1) Newton's laws govern the universe; quantum mechanics and other such exotic junk were conclusively disproved in 2247.
2) The world contains massive point-particles; the gravitational field at a point in space is the vector sum of its gravitational attractions to all point-masses. Beyond a critical range and below a
critical mass, these attractions can be assumed zero. (so I could discard the mass of myself, the two monitoring stations, and everything beyond the two stars and, maybe, this weird vessel... all of
which I could model as point-masses.)
3) The gravitational attraction to a mass m felt by a point p in space is a vector pointing from p in the direction of m, with magnitude M·g(r), where M is the mass of m, r is the distance from p to m, and g is a function given by...
...but I couldn't make out the formula for g; it was totally obscured by a lascivious Ferengi.
Still, I'm not as thick as you think. I reasoned that this much is true: g, the function governing the magnitude of the attraction, must be continuous, positive, and decreasing on the positive real numbers, tending to infinity as r goes to zero, and tending to zero as r approaches infinity. That is, it looks something like this:
Now, the readings from my two stations looked something like this:
(s[1], s[2] are the two stations, and v[1], v[2] are the two gravitational field readings.)
The picture's not much to go by--the vectors didn't all lie in a plane together, and I can't swear by the magnitudes I drew. The thing to notice is that the angles a, b were obtuse--of that I'm sure.
Recall that there were definitely two stars kicking around in the vicinity. Don't ask me how massive they are; it was on record somewhere, but I was too frazzled to even look up the numbers. Also, my
windows were frosted over and I had no way of getting a bead on the stars' position. The question was, could some placement of those two masses have generated the gravitational field readings?
What can I say... I may be lazy, but I'm also one-eighth Vulcan, and my genes chose that do-or-die moment to kick in. I confidently delivered my report:
"Readings consistent with null hypothesis."
Well, they never found that vessel; still, doing my duty with flair like that was a high point for me, despite my usual tendency to shirk. But two more years have passed, and all this booze has given
the Vulcan in me a sore beating.
So help me remember--how the hell did I know what to say?
Labels: general math, puzzles
Random food for thought:
You want to climb to the top of a very tall hanging rope (never mind what's up there). You expect to get tired along the way, too tired to continually grip with your hands. How do you do it?
Gadgets, harnesses, etc. are permissible. I don't have any magically clever solution in mind, only some crude, untested thoughts that for the safety of impressionable youth I'll keep to myself. Also,
I asked a friend who says this is a solved problem in rock-climbing circles.
What interested me as I thought about this is how the requirements and mental verification process resembled the way algorithms work. The climber wants to loop thru a basic routine to create progress
("climb-a-bit"), with a safety invariant ("securely fastened") assumed at the beginning of the routine and reestablished at the end. Efficiency-type considerations also come in play--the amount of
rope around the climber's body and the complexity of its arrangement and manipulations should stay bounded.
I've never done engineering, and maybe these analogies are pervasive enough that my observation would seem vacuous to someone in-the-know. Still, it's nice to dabble in problems that have conceptual
affinities with math I study without necessarily being reducible to math.
This post is a cry for help.
OK, it's more of a request for information. First, let me explain what I know, which I hope will be enough to be an informative post about graph expansion in its own right.
I've been reading off and on about Fourier analysis of Boolean functions for awhile, and recently in the context of Property Testing towards giving the talk I posted, where it is really the elegant
way of analyzing the linearity test. Afterwards I was doing some random online course exercises, and stumbled on a good one here, by Ryan O'Donnell.
Here's a proof for exercise 1, whose statement I'll rephrase:
'Poincare Inequality': Let (A, B) be a partition of the Boolean n-cube into two sets. Let p be the fraction of the cube's edges going between A and B (vertices are connected by an edge if at Hamming
distance 1). Then,
p >= 2|A|*|B|/(n*2^{2n}).
First, what does this say? If the cube were a 'totally random' graph, and A, B were a 'totally random' partition of the vertices, we'd expect a 2|A|*|B|/2^{2n} fraction of the edges to go between A
and B.
So, while ill-chosen partitions of the cube can induce sparser cuts than randomly chosen partitions of random graphs, Poincare says these can be sparser by at most a factor of n. In this sense,
the cube is robustly interconnected; not on a par with strong expanders, but to a nontrivial extent. An example that shows the inequality can be approximately tight is to let A be the vectors with
first coordinate 0.
(Note: this is the 'extremal example' for edge expansion on the cube, whereas letting A be the set of vectors with Hamming weight at most k gives extremal examples for 'vertex expansion'... for more
on the latter, see my June 2006 post.)
Proof: Pick x, y uniformly at random from the cube. The probability q that one is in A, the other in B (we don't care which one is in A) is exactly 2|A|*|B|/2^{2n}.
On the other hand, choose also a random monotone path P between x and y. The probability that x, y lie in these distinct sets is at most the probability that at least one of the edges on P goes
between A and B (call this a 'mixed edge'), since the first event implies the second.
The path has edge length at most n, and each individual edge that appears is uniformly randomly distributed over all edges of the cube (though there is dependence in the joint distribution), so there
is a mixed edge with probability at most n*p--here we are using the union bound.
So q = 2|A|*|B|/2^{2n} <= n*p, or
p >= 2|A|*|B|/(n*2^{2n}). QED.
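For what it's worth, the bound p >= 2|A|*|B|/(n*2^{2n}) from the 'Poincare Inequality' statement above is easy to sanity-check by brute force on small cubes. A quick sketch, using nothing beyond the statement itself:

```python
import itertools, random

def check_poincare(n, trials=200):
    """Empirically check p >= 2|A||B| / (n * 2^(2n)) on the n-cube."""
    verts = list(itertools.product((0, 1), repeat=n))
    # Edges of the hypercube: unordered pairs at Hamming distance 1.
    edges = [(u, v) for u in verts for v in verts
             if u < v and sum(a != b for a, b in zip(u, v)) == 1]
    for _ in range(trials):
        A = {v for v in verts if random.random() < 0.5}
        a, b = len(A), len(verts) - len(A)
        mixed = sum((u in A) != (v in A) for u, v in edges)
        p = mixed / len(edges)                 # fraction of mixed edges
        assert p >= 2 * a * b / (n * 2 ** (2 * n))
    return True
```

Running check_poincare(4) exercises a few hundred random partitions; the extremal 'first coordinate' cut shows the factor-n slack is real (there p = 1/n against a bound of 1/(2n)).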
What was interesting about this proof for me was that I had used the same idea once before, to show a seemingly different type of result, to the effect that
"If an n-by-n 0/1 matrix is 'reasonably balanced' between 0s and 1s, it has either 'many' only-slightly-less-balanced rows, or 'many' only-slightly-less-balanced columns."
(It's easy to see you must allow this choice of rows-or-columns in the statement.)
Looking back at that proof, which was just the one I described but now with length-2 paths instead of length-n ones and n-element hyperedges in place of edges, I realized that result too was 'about'
(hyper-)graph expansion. The proof technique also could apply to cases where the edges along the random path are not uniformly distributed, but merely almost-uniform.
I also know, however, that Poincare was working in a very different mathematical milieu, one which, while aware of the importance of isoperimetric inequalities (as this sort of thing can be
considered, and the most famous historical example of which being the result that circles have minimal perimeter among planar figures of a given area), was not to my knowledge pursuing expander
graphs per se. So what was the larger 'story' of which Poincare's Inequality originally played a part?
Update: My question about the inequality still stands; however, I've found work in the literature that puts my proof of the inequality in perspective. The argument looks to be a simple variant of
something called the 'canonical paths' technique, that has successfully shown the good expansion properties (or the weighted version, 'conductance') for much more complex structures; Jerrum and
Sinclair are two prominent names in this area, and Jerrum wrote an excellent chapter in 'Probabilistic Methods for Algorithmic Discrete Mathematics' which has been getting me up to speed in this
area. I hope to post more about this soon.
Labels: general math
...because I'm odd that way. Both are in the same 'can-do' spirit of the last one, yet neither use quite the same techniques; still simple.
Guaranteed correct, 'cause they're not mine, but lifted from the problem sets I'll credit.
1. Two delegations A and B, with the same number of delegates, arrived at a conference. Some of the delegates knew each other already. Prove that there is a non-empty subset A' of A such that either
each member in B knew an odd number of members from A', or each member of B knew an even number of members from A'.
2. Given a finite set X of positive integers and a subset A of X, there exists a subset B of X such that A consists precisely of the elements of X that divide an odd number of elements of B.
(Hint: not a number theory puzzle.)
First is from the 1996 Kurschak Competition (Hungarian), via the Kalva site (see sidebar). Second is from the book '102 Combinatorial Problems' by Andreescu and Feng. Enjoy!
Labels: general math, puzzles
After the (n+1)st Sangria spill on your dirty shag rug, and after learning on good authority that wasabi green is officially hip, you've resolved: it's home improvement time.
You rustle up a few pals and get to work, and a few hundred dollars later the place is looking like less of a dive, but in a moment of belated clarity you realise that only one thing needed to be
changed to cement its 'cool' status:
just install 'The Clapper' in every room in the house.
For those too young to remember the Clapper heyday, this device controls the on/off switch to a lighting source, and toggles its position when you clap in the audible range--which, for added value
and amusement, tends to extend at least into adjacent rooms. For our purposes, let's say the device's range is exactly its host room and those directly adjacent. So, say you've got one light source
per room, each with a Clapper, in a house which, like any good house, is actually just an m-node undirected graph.
Now, for your grand house-rewarming party, you invite all your friends over, wait until their thinking skills are nice and clouded, and issue a challenge: clap the house into a fully-lit state.
In fact, no matter what your house's structure, this challenge can always be met, as long as the lights are initially off! Can you prove it?
I came up with this result a few years ago after playing a computer solitaire game, which presented this problem for a 5-by-5 square grid (don't recall if there were diagonal connections... anyone
have a weblink?). Thought I was pretty clever, but then I saw it given as a problem in an old math journal, I think an AMS one (I'll post a ref if I ever find it). No fancy tools needed to solve this
fairly simple puzzle, although of course some linear algebra would help.
[Struck out: Extra Credit: Now the lights all have k > 2 brightness settings, which you cycle thru with claps. Prove it's possible to get them all to the brightest setting.] Extra credit, revised: think of a less hasty and more true generalization.
Extra Extra Credit: Now some of the lights are off/bright, while some are off/dim/bright. Now it's no longer always possible to get all lights bright simultaneously. In fact, I'm guessing it's
NP-complete to maximize the number of bright lights, but haven't proved this. What's the story?
Oh, and the wasabi-green tip came to my attention (briefly!)in a recent New Yorker.
Labels: general math, puzzles
Or some such lame pun... Folks, this post is yet another puzzle. It should be pretty easy for anyone who knows a certain famous combinatorial Lemma (which I'll name in the comments section). To
everyone else, this is a chance to rediscover that lemma, and do a little more besides; I think that, approached with pluck, it should be solvable--but not easy.
The puzzle is to give an efficient algorithm to decide satisfiability (and to produce satisfying assignments when possible) of a restricted class of boolean formulae. They are going to be k-CNFs,
that is, a big AND of many ORs, each OR of k variables from {x[i]} or their negations {~x[i]}.
As stands this is NP-complete, even for k = 3. Here is the extra restriction on instances: for every subset S of k variables, there is an OR clause C[S] which contains each variable from S exactly
once, possibly negated, and no other variables. (So, these SAT instances are fairly 'saturated' with constraints.)
For example, the property that a bit-vector contains at most k-1 ones can be written this way: for each variable-set S = {x[i_1], ... x[i_k]} of size k, we include the constraint
C[S] := (~x[i_1] OR ~x[i_2] OR ... OR ~x[i_k]).
That's the problem statement--get to it!
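Not the efficient algorithm the puzzle demands, but a useful observation for playing with small cases: since each clause C[S] mentions every variable of S exactly once, it forbids exactly one assignment to S's variables. A brute-force reference checker under that encoding (the encoding is mine):

```python
from itertools import combinations, product

def satisfying_assignments(n, k, forbidden):
    """All satisfying assignments of a 'saturated' k-CNF over n variables.

    forbidden maps each k-subset S (a sorted tuple of indices) to the one
    bit pattern on S's variables that clause C[S] rules out. Exponential
    in n -- a reference implementation, not a solution to the puzzle."""
    return [x for x in product((0, 1), repeat=n)
            if all(tuple(x[i] for i in S) != forbidden[S]
                   for S in combinations(range(n), k))]
```

For the 'at most k-1 ones' example above, forbid the all-ones pattern on every k-set; with n = 5, k = 3 that leaves exactly the 16 vectors of weight at most 2.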
Labels: complexity, puzzles
I am not a number theorist--far from it. It's not just that I suspect I wouldn't be very good (though there's that): while I admire its beauty, depth, and growing applicability, on a basic level I
don't 'buy in' to the scenarios it presents--they don't seem real and urgent in the way theoretical CS problems do (and I say this despite being almost equally far in my tastes from 'applied' math).
But I don't want to rag on number theory, or give a polemic in favor of my field. I want to point out a sense in which number theorists seem to have it really, really good, and ask whether TCS people
could ever work themselves into a similar position.
Number theorists live in a towering palace of puzzles. I don't think any other branch of math, not even Euclidean geometry, has produced such a wealth of problems. A meaningful diversity, too, in
terms of range of difficulty, types of thinking and visualization involved, degree of importance/frivolity, etc. This has several benefits to the field:
i) supports more researchers, and researchers of varying abilities and approaches;
ii) gives researchers ways to blow off steam or build their confidence before tackling 'serious' or technical problems in number theory;
iii) provides more 'sparks' for theoretical development and conceptual connections;
iv) gives young mathematicians plenty of opportunities to cut their teeth (the Olympiad system seems to produce 18-year-old number theorists of demoniacal skill);
v) provides good ways to advertise the field to the public and entice in new young talent.
Feel free to chime in with other possible benefits--or maybe you think puzzles are a detriment (I've heard this expressed); if so I'd also love to hear from you.
I'm not anxious to hammer down what a puzzle is, or the exact role they play in scientific fields (others have tried); but I would like to get a better sense, in the specific case of TCS, of where
the puzzles are or why they are fewer in number (I'm not saying there are none). Towards that end I'd like to first point out what may be a salient feature in number theory's success: it possesses at
least one powerful generative syntax for puzzle-posing: namely
Find solutions to F(x, y, z, ...) = 0,
where x, y, z are integral and F is a polynomial, exponential function, etc. You can just start inventing small equations, and before too long you'll come across one whose solution set is not at all
obviously describable, and worth puzzling over.
Now, complexity theorists have a pat explanation why this is so: solving Diophantine equations is in general undecidable, maybe even hard on average under natural distributions (?). But, though
amazing and true, I think it's a sham explanation. Diophantine equations somehow manage to be (for number theorists at least) interesting on average, meaningful on average, and actually pertinent to
the life of the field. (number theorists--am I wrong?)
So where in CS is there a generative scheme of comparable fertility? Are there inherent reasons why CS as we know it could not support such a scheme? Is CS too 'technical', too 'conceptual' to
support that kind of free play? Or is it simply too young a field to have yielded up its riches? What do readers think?
I don't want to suggest that there aren't good puzzles and puzzle-schemas kicking around CS (contests, too, though weighted towards programming); there was a period when "prove X is NP-complete" was
virtually my mission statement (though it gets old, I gotta say... and we should also admit that some of the best puzzles in any field are likely to be ones that don't fit any familiar scheme). You
should also check out the impressive efforts of Stanford grad student Willy Wu, who has a cool collection of riddles, many about CS, and a long-running forum devoted to posing, solving, and
discussing them.
Labels: general math, puzzles
|
{"url":"http://andysresearch.blogspot.com/2007_02_01_archive.html","timestamp":"2014-04-20T00:38:07Z","content_type":null,"content_length":"52398","record_id":"<urn:uuid:236f5c34-ec50-4b6d-8b05-0448221955d8>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Context-sensitive autoassociative memories as expert systems in medical diagnosis.
Jump to Full Text
MedLine PMID: 17121675 Owner: NLM Status: MEDLINE
Abstract
BACKGROUND: The complexity of our contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit perfectly well to the vision of cognition emerging from current neurosciences.
METHODS: We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of basis of orthogonal vectors. A matrix memory stores
the associations between the signs and symptoms, and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works.
In order to provide a quick appreciation of the validity of the model and its potential clinical relevance we implemented an application with real data. A memory was trained with
published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the
capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings.
RESULTS: We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time
makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the different possible
diagnoses to that moment. The system can incorporate the clinical experience, building in that way a representative database of historical data that captures geo-demographical
differences between patient populations. The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%;
percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen's kappa index 0.84.
CONCLUSION: Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple and tutorial way to encourage straightforward
implementations by medical groups. An application with real data, presented as a primary evaluation of the validity and potentiality of the model in medical diagnosis, shows that the
model is a highly promising alternative in the development of accuracy diagnostic tools.
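An aside on the reported figures: they are mutually consistent, and (assuming the 15-infant test set implied by the 93.3% accuracy) they correspond to a confusion matrix of TP = 10, FP = 1, FN = 0, TN = 4 -- a reconstruction, not something the abstract states. The standard metrics can be recomputed from such a matrix:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                            # observed agreement
    # Chance agreement from the row/column marginals (for Cohen's kappa).
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # the abstract's "percentage of true positives"
        "npv": tn / (tn + fn),   # the abstract's "percentage of true negatives"
        "accuracy": po,
        "kappa": (po - pe) / (1 - pe),
    }
```

diagnostic_metrics(10, 1, 0, 4) reproduces sensitivity 1.00, specificity 0.80, PPV ≈ 0.91, NPV 1.00, accuracy ≈ 0.933 and kappa ≈ 0.84.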
Authors: Andrés Pomi; Fernando Olivera
Related 21119055 - Compilation of international regulatory guidance documents for neuropathology assessmen...
Documents : 21126775 - Lateral x-ray view of the skull for the diagnosis of adenoid hypertrophy: a systematic ...
21494065 - Novel platform for mri-guided convection-enhanced delivery of therapeutics: preclinical...
21235965 - Statistics, costs and rationality in ecological inference.
17358645 - Domain wall formation in nixfe1-x.
23711945 - Use of artificial neural network (ann) for the development of bioprocess using pinus ro...
Publication Type: Comparative Study; Journal Article Date: 2006-11-22
Journal Title: BMC medical informatics and decision making Volume: 6 ISSN: 1472-6947 ISO Abbreviation: BMC Med Inform Decis Mak Publication Date: 2006
Date Detail: Created Date: 2007-01-05 Completed Date: 2007-03-01 Revised Date: 2013-06-07
Medline Nlm Unique ID: 101088682 Medline TA: BMC Med Inform Decis Mak Country: England
Journal Info:
Other Details: Languages: eng Pagination: 39 Citation Subset: IM
Affiliation: Sección Biofísica, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay. pomi@fcien.edu.uy
Export APA/MLA Format Download EndNote Download BibTex
MeSH Terms
Descriptor/ Decision Support Systems, Clinical*
Qualifier: Diagnosis, Computer-Assisted / methods*
Expert Systems*
Infant, Newborn
Information Storage and Retrieval / methods
Intensive Care Units, Neonatal
Models, Theoretical
Neural Networks (Computer)*
Sensitivity and Specificity
Sepsis / classification, diagnosis
Time Factors
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine
Full Text
Journal Information / Article Information
Journal ID (nlm-ta): BMC Med Inform Decis Mak
ISSN: 1472-6947
Publisher: BioMed Central, London
Copyright © 2006 Pomi and Olivera; licensee BioMed Central Ltd.
open-access: This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Received: 15 May 2006
Accepted: 22 November 2006
Collection publication date: 2006
Electronic publication date: 22 November 2006
Volume: 6; First Page: 39; Last Page: 39
ID: 1764009
Publisher Id: 1472-6947-6-39
PubMed Id: 17121675
DOI: 10.1186/1472-6947-6-39
Context-sensitive autoassociative memories as expert systems in medical diagnosis
Andrés Pomi1 Email: pomi@fcien.edu.uy
Fernando Olivera2 Email: folivera@fmed.edu.uy
1Sección Biofísica, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay
2Departamento de Biofísica, Facultad de Medicina, Universidad de la República, General Flores 2125,11800 Montevideo, Uruguay
The extreme complexity of contemporary medical knowledge together with the intrinsic fallibility of human reasoning, have led to sustained efforts to develop clinical decision support systems, with
the hope that bedside expert systems could overcome the limitations inherent to human cognition [^1]. Despite the foundational hopes have not been fulfilled [^2], the unaltered and increasing
necessity for reliable automated diagnostic tools and the important benefit to society brought by any success in this area make every advance valuable.
To further the research on computer-aided diagnosis begun in the 1960s, models of neural networks [^3] have been added to the pioneering work on artificial-intelligence systems. The advent of
artificial neural networks with ability to identify multidimensional relationships in clinical data might improve the diagnostic power of the classical approaches. A great proportion of the neural
network architectures applied to clinical diagnosis rests on multilayer feed-forward networks instructed with backpropagation, followed by self-organizing maps and ART models [^4,^5]. Although they
perform with significant accuracy, this performance nevertheless remained insufficient to dispel the common fear that they are "black-boxes" whose functioning cannot be well understood and,
consequently, whose recommendations cannot be trusted [^6].
The associative memory models, an early class of neural models [^7] that fit perfectly well with the vision of cognition emergent from today brain neuroimaging techniques [^8,^9], are inspired on the
capacity of human cognition to build semantic nets [^10]. Their known ability to support symbolic calculus [^11] makes them a possible link between connectionist models and classical
artificial-intelligence developments.
This work has three main objectives: a) to point out that associative memory models have the possibility to act as expert systems in medical diagnosis; b) to show in a simple and straightforward way
how to instruct a minimal expert system with associative memories; and c) to encourage the implementation of this methodology at large scale by medical groups.
Therefore, in this paper we address – in a tutorial approach – the building of associative memory-based expert systems for the medical diagnosis domain. We favour a comprehensive way and the
possibility of a straightforward implementation by medical groups over the mathematical details of the model.
Context-dependent autoassociative memories with overlapping contexts
Associative memories are neural network models developed to capture some of the known characteristics of human memories [^12,^13]. These memories associate arbitrary pairs of patterns of neuronal
activity mapped onto real vectors. The set of associated pairs is stored superimposed and distributed throughout the coefficients of a matrix. These matrix memory models are content-addressable and
fault-tolerant, and are well known to share with humans the ability of generalization and universalization [^14].
In an attempt to overcome a serious limitation of these classical models – their inability to evoke different associations depending on the context accompanying the same key stimulus – Mizraji [^15]
developed an extension of the model that performs adaptive associations. Context-dependent associations are based on a kind of second-order sigma-pi neurons [^16], and showed an interesting
versatility when they were incorporated in modules employed to implement chains of goal-directed associations [^17], disambiguation of complex stimuli [^18], logical reasoning [^19,^20], and
multiple-criteria classification [^21].
A context-dependent associative memory M acting as a basic expert system is a matrix
M = ∑_{i=1..k} d[i] (d[i] ⊗ ∑_{j(i)} s[j])^T   (1)
where d[i] are column vectors mapping k different diseases (the set {d} is chosen orthonormal), and s[j(i)] are column vectors mapping the signs or symptoms accompanying the i-th disease (also an
orthonormal set). The sets of symptoms corresponding to each disease can overlap.
The Kronecker product (⊗) between two matrices A and B is another matrix defined by
A ⊗ B = a(i, j)·B   (2)
denoting that each scalar coefficient of matrix A, a(i, j), is multiplied by the entire matrix B. Hence, if A is n×m dimensional and B is k×l dimensional, the resultant matrix has dimension nk×ml.
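This dimension rule is easy to verify numerically; a minimal sketch using NumPy (the matrices A and B are arbitrary illustrations):

```python
import numpy as np

# A is 2x3 and B is 4x2, so A (x) B must be (2*4) x (3*2) = 8 x 6.
A = np.arange(6).reshape(2, 3)
B = np.arange(8).reshape(4, 2)
C = np.kron(A, B)

print(C.shape)  # (8, 6)
# Block (i, j) of C is the scalar a(i, j) times the whole matrix B:
print(np.array_equal(C[:4, 2:4], A[0, 1] * B))  # True
```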
Note that if d are n-dimensional and s are m-dimensional vectors, the memory is a rectangular n×nm matrix. Also, the memory M can be viewed as resulting from the Kronecker product (⊗) enlargement of
each element of an n×n square autoassociative matrix d[i]d[i]^T by a row vector representing the sum of the corresponding signs and symptoms:
M = ∑_{i=1..k} d[i] d[i]^T ⊗ ∑_{j(i)} s[j]^T   (3)
By feeding the context-sensitive autoassociative module M with signs or symptoms, the system retrieves the set of possible diseases associated with such set of symptoms, or a single diagnosis if the
criteria suffice.
At resting conditions the system is grounded in an indifferent state g. If each disease was instructed only once, in the mathematics of the model this implies priming the memory with a
linear combination in which every disease has an equal weight:
M(g ⊗ I) = ∑_i ⟨d[i], g⟩ d[i] (∑_{j(i)} s[j])^T = ∑_i d[i] (∑_{j(i)} s[j])^T   (4)

where g = ∑_i d[i] and I is the m×m identity matrix. From
(4) it is evident that, after the priming, the context-dependent memory becomes a classical memory associating symptoms with diseases. If a set of sufficient concurrent signs and symptoms is
presented to the waiting memory (σ = ∑s), after iteration, a final diagnosis results.
It is important to point out that if the sets {s[j(i)]} corresponding to each disease were disjoint, then any single symptom s[j(i)] would be pathognomonic and sufficient to diagnose d[i]
univocally. Otherwise, the output will be a linear combination of possible diseases, each one weighted according to the scalar product between the set of actual symptoms (σ) and the set of symptoms
corresponding to each different disease: ∑_i ⟨∑_{j(i)} s[j], σ⟩ d[i].
See Figure 1 and its legend. Forcing the sum of scalar products to unity, this output provides a probabilistic mapping of the possible diseases associated with the clinical presentation.
How to instruct the memory
Let us illustrate how to instruct the memory with a minimal numerical example. Consider the set of three diseases and its characteristic symptoms shown in Figure 2. The first task is to codify the
sets of signs and diseases with orthogonal vectors, for which we will use the following orthogonal matrices.
Diseases (d1 d2 d3):          Signs & symptoms (s1 s2 s3 s4):

    [1 0 0]                         [ 1  1  1  1]
    [0 1 0]                   0.5 × [ 1 -1  1 -1]
    [0 0 1]                         [ 1  1 -1 -1]
                                    [ 1 -1 -1  1]

d1, d2, d3 are the columns of the left matrix; s1, s2, s3, s4 are the columns of the right matrix.
According to the table and equation (1), we instruct the memory by adding a matrix for each disease. For the first disease we have d[1]d[1]^T ⊗ (s[1 ]+ s[3 ]+ s[4])^T
[1 0 0]
[0 0 0] ⊗ [1.5 0.5 -0.5 0.5] =
[0 0 0]

  [1.5 0.5 -0.5 0.5   0 0 0 0   0 0 0 0]
= [ 0   0    0   0    0 0 0 0   0 0 0 0]
  [ 0   0    0   0    0 0 0 0   0 0 0 0]
In the same way we will have two other matrices for the other diseases. The sum of the three matrices constitutes the memory M.
M = [1.5  0.5 -0.5  0.5    0  0  0   0     0    0    0    0 ]
    [ 0    0    0    0     1  0  0  -1     0    0    0    0 ]
    [ 0    0    0    0     0  0  0   0    1.5 -0.5  0.5  0.5]
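The whole construction above can be reproduced in a few lines; a sketch in NumPy, assuming the disease/symptom assignments of the numerical example (d1 with s1, s3, s4; d2 with s2, s3; d3 with s1, s2, s4):

```python
import numpy as np

# Orthonormal codes: diseases are the columns of the 3x3 identity;
# signs s1..s4 are the columns of the 4x4 orthogonal matrix above.
D = np.eye(3)
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])
# 0-based symptom indices per disease: d1 -> {s1,s3,s4}, d2 -> {s2,s3}, d3 -> {s1,s2,s4}
symptom_sets = [[0, 2, 3], [1, 2], [0, 1, 3]]

# Equation (3): M = sum_i  d_i d_i^T (x) (sum_j s_j)^T   -> a 3x12 matrix
M = sum(np.kron(np.outer(D[:, i], D[:, i]),
                S[:, js].sum(axis=1).reshape(1, -1))
        for i, js in enumerate(symptom_sets))

print(M[0, :4])  # first row of the disease-1 block: 1.5, 0.5, -0.5, 0.5
```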
How the system works
See also Figure 1 and its legend.
Time step 1
Initial state of the system: indifferent vector g^T = (d[1] + d[2] + d[3])^T = [1 1 1]
A first clinical data (s[3]) arrives: s[3 ]^T = [0.5 0.5 -0.5 -0.5]
Preprocessing of input vectors is performed: h = g ⊗ s[3]
h^T = [0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5]
Resulting associated output: Mh (a linear combination of possible diagnoses)
output(1) = [1 1 0]^T
Resulting probabilistic map (each coefficient of the output vector is divided by the sum of them all):
prob(1) = [0.5 0.5 0]^T
Time step 2
A new symptom (s[2]) arrives: s[2 ]^T = [0.5 -0.5 0.5 -0.5]
Preprocessing of input vectors is performed: h = output(1) ⊗ s[2]
h^T = [0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0 0 0 0]
Resulting associated output (Mh):
output(2) = [0 1 0]^T
Resulting probabilistic map:
prob(2) = [0 1 0]^T
Final result
The system has arrived at a unique final diagnosis, corresponding to disease 2.
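The two query steps above can be run end-to-end; a sketch in NumPy, rebuilding the memory of the example and iterating the Kronecker-product loop:

```python
import numpy as np

# Memory of the numerical example (equation 3): 3 diseases, 4 signs.
D = np.eye(3)
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])
symptom_sets = [[0, 2, 3], [1, 2], [0, 1, 3]]
M = sum(np.kron(np.outer(D[:, i], D[:, i]),
                S[:, js].sum(axis=1).reshape(1, -1))
        for i, js in enumerate(symptom_sets))

state = np.ones(3)                # indifferent vector g = d1 + d2 + d3
probs = []
for sign in (S[:, 2], S[:, 1]):   # time step 1: s3 arrives; time step 2: s2
    state = M @ np.kron(state, sign)   # Kronecker preprocessing + association
    probs.append(state / state.sum())  # probabilistic map of the diagnoses

print(probs[0])  # diseases 1 and 2 equally likely
print(probs[1])  # unique diagnosis: disease 2
```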
REAL DATA APPLICATION – diagnosing late-onset neonatal sepsis
Late-onset sepsis (invasive infection occurring in neonates after 3 days of age) is an important and severe problem in infants hospitalized in neonatal intensive care units (NICUs) [^22]. The
clinical signs of infection in the newborn are variable, and the earliest manifestations are often subtle and nonspecific. In the presence of a clinical suspicion of sepsis an early and accurate
diagnosis algorithm would be of outstanding value but is not yet available [^23]. In a recent retrospective study that included 47 neonates with a clinical diagnosis of suspected sepsis, Martell and
collaborators [^24] assessed a group of clinical and laboratory variables – surgical history, metabolic acidosis, hepatomegaly, abnormal white blood cell (WBC) count, hyperglycemia and
thrombocytopenia – determining their sensitivity, specificity, likelihood ratio and post-test probability. Sepsis was defined as a positive result on one or more blood cultures in a neonate with
clinical diagnosis of suspected sepsis. A prevalence of 34% was found for their NICU.
We instructed a context-dependent autoassociative memory according to equation (3) with the data published in [^24] in order to evaluate its capacity to recognize patients with or without sepsis. As a
test set, we used 15 cases of suspected neonatal sepsis from the same NICU (personal observations of one of us, AP). From equation (3) it is clear that the different clinical presentations of
the individual cases are added up and summarized in the vector ∑_{j(i)} s[j]^T representing the characteristic signs of each
illness condition. We trained the memory instructing two terms d[i] corresponding to the two final diagnoses: confirmed sepsis and absence of sepsis.
M = [septic] [septic]^T ⊗ [attributes _ septic]^T + [healthy] [healthy]^T ⊗ [attributes _ healthy]^T
The column vectors used for the septic and healthy conditions were [1 0]^T and [0 1]^T respectively.
The column vectors with the attributes corresponding to the septic and non-septic patients were generated from the available data as follows. For each one of the variables studied in [^24] (see
Figure 3) we reconstructed the values of the true positive (TP), false positive (FP), true negative (TN) and false negative (FN) number of patients:
TP = sensitivity × E
FP = (sensitivity/LR) × NE
TN = specificity × NE
FN = N - (TP+FP+TN).
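These reconstruction formulas can be checked for internal consistency; the sensitivity and specificity below are illustrative placeholders, not the values published in [^24]:

```python
# Rebuild the 2x2 counts of one variable from its summary statistics.
E, NE = 16, 31            # septic / non-septic neonates (N = 47)
N = E + NE
sensitivity, specificity = 0.75, 0.80          # illustrative values
LR = sensitivity / (1 - specificity)           # positive likelihood ratio

TP = sensitivity * E
FP = (sensitivity / LR) * NE                   # equals (1 - specificity) * NE
TN = specificity * NE
FN = N - (TP + FP + TN)                        # equals E - TP

print(round(TP, 1), round(FP, 1), round(TN, 1), round(FN, 1))  # 12.0 6.2 24.8 4.0
```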
These values became the coefficients of the two thirteen-dimensional column vectors [attributes_septic] and [attributes_healthy]. This procedure is shown in Figure 4. Finally, after normalization,
the vectors used for the instruction of the memory M are
[attributes_septic]^T = [0.0604 0.4225 0.0604 0.4225 0.1509 0.3320 0.0604 0.0604 0.3621 0.0604 0.4225 0.0604 0.4225]
[attributes_healthy]^T = [0.0142 0.4248 0.0142 0.4248 0.0566 0.3823 0.0354 0.0177 0.3859 0.0283 0.4106 0.0283 0.4106].
The memory M summarizes the accumulated experience in suspected late-onset sepsis of this particular NICU through the clinical presentations of one year of hospitalized neonates.
To test our system we presented to the memory a set of 15 personal clinical observations of neonates with the clinical diagnosis of suspected sepsis hospitalized in the same NICU. We coded the
thirteen attributes with the canonical basis vectors (the columns of a 13-dimensional identity matrix). For example, the presence of metabolic acidosis was represented with [0 0 1 0 0 0 0 0 0 0 0 0 0]
^T and the absence of acidosis with the vector [0 0 0 1 0 0 0 0 0 0 0 0 0]^T. For each patient of the test set we added the vectors corresponding to the confirmed presence or absence of any sign.
These 15 vectors representing the clinical presentation of the neonates with the diagnosis of suspected sepsis are shown in Figure 5.
The classification of each patient was obtained as follows:
i) The vector with the clinical presentation is presented to the memory M. The output, [result_vector], is a linear combination of the vectors septic [1 0]^T and healthy [0 1]^T:
[result_vector] = M * ([indifferent_vector] ⊗ [clinical presentation])
The [indifferent_vector] is the sum of septic and healthy vectors: [1 1]^T.
ii) A diagnosis results from the evaluation of the coefficients of the two-dimensional [result_vector]. If the first coefficient is greater than the second the case is classified as sepsis. If the
second coefficient is the largest the patient is classified as non-septic.
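Steps i) and ii) can be sketched directly with the attribute vectors given above; the single-sign patient below is hypothetical and only illustrates the decision rule:

```python
import numpy as np

# Attribute vectors instructed in M (taken from the text above).
a_septic = np.array([0.0604, 0.4225, 0.0604, 0.4225, 0.1509, 0.3320, 0.0604,
                     0.0604, 0.3621, 0.0604, 0.4225, 0.0604, 0.4225])
a_healthy = np.array([0.0142, 0.4248, 0.0142, 0.4248, 0.0566, 0.3823, 0.0354,
                      0.0177, 0.3859, 0.0283, 0.4106, 0.0283, 0.4106])

septic = np.array([1.0, 0.0])
healthy = np.array([0.0, 1.0])
M = (np.kron(np.outer(septic, septic), a_septic.reshape(1, -1)) +
     np.kron(np.outer(healthy, healthy), a_healthy.reshape(1, -1)))

# Hypothetical patient whose only coded finding is metabolic acidosis
# (third canonical basis vector of the 13-dimensional attribute space).
presentation = np.zeros(13)
presentation[2] = 1.0

result = M @ np.kron(septic + healthy, presentation)  # indifferent vector [1 1]
diagnosis = "sepsis" if result[0] > result[1] else "non-septic"
print(result, diagnosis)
```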
A context-dependent memory model acting as a minimal expert system
In this work we show a minimal, context-dependent memory nucleus able to support diagnostic abilities. Our expert system consists of an autoassociative memory with overlapping contexts and a
feedback loop that allows the output to be reinjected into the memory at the next time step (Figure 1).
A memory M acting as a basic expert system is a matrix (equation 3)
M = ∑_{i=1..k} d[i] d[i]^T ⊗ ∑_{j(i)} s[j]^T,
where the d[i] are column vectors mapping k different diseases (the set {d} is chosen to be orthonormal), s[j(i)] are column vectors mapping the signs and symptoms accompanying the i-th disease (also
an orthonormal set), and ⊗ is the Kronecker product [^25] (see Methods). Note that if d are n-dimensional vectors (n ≥ k) and s are m-dimensional, then d[i]d[i]^T are square symmetric matrices, and
the memory M is a rectangular matrix of dimensions n×nm.
The instruction of the expert
The cognitive functioning shown by this kind of neural network model is based on the establishment of context-dependent associations. The instruction of the expert therefore consists in the
instruction of the memory that stores these associations.
Each disease is instructed to the memory together with its characteristic signs and symptoms (these can include the results of laboratory exams, imaging studies, etc). For this to be done, the first
step is to code each disease to be instructed with a different orthonormal vector. The same must be done with the set of signs, symptoms and paraclinical results that could accompany that set of
diseases, also coding them with different column vectors of any orthonormal basis of adequate dimension.
Once the signs and symptoms corresponding to each disease have been identified and expressed as orthogonal vectors, the construction of the memory can commence. According to equation (1) this
instruction consists in the superposition (the addition) of different rectangular matrices, each one corresponding to a different disease.
The instruction of the memory can be developed along two different paths. a. Learning from the textbook. In this case, the expert is instructed according to the updated academic knowledge of each
disease. A first disease is taken, coded by the column vector d[i], and the outer product of this vector with itself is computed (a square matrix is constructed that contains this
autoassociation). At the same time, all the signs and symptoms characteristic of this disease are identified and the vectors coding them are added up (∑_{j(i)} s[j]). Finally, the Kronecker product
between the square matrix and the transpose of the vector-sum is performed. An analogous procedure is carried out for each pathology. Each new resulting rectangular matrix of dimension n×nm is added
to the previous ones already stored in memory M (a minimal numerical example is presented in the Methods section, "How to instruct the memory"). b. Learning by experience. This is a case-based way of
instructing the
memory. It allows the expert to progressively capture the prevalence of the different diseases in a community. Once the previous instruction is finished, the memory is fed with the actual clinical
findings of each particular patient assisted by the physician, attributing each particular constellation of signs and symptoms to the corresponding final diagnosis. The matrices resulting from new
patients are progressively added to the memory. This type of representation implies two essential distinctions from the previous learning-from-the-textbook memory. Pathologies are not equally
weighted in the memory; their representations depend on the frequency of presentation of cases in the population. In addition, for each disease the different symptoms are also not equally weighted:
those corresponding to the more frequent clinical presentations will be strengthened.
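Learning by experience amounts to adding one matrix per assisted patient; a sketch in NumPy with the orthonormal codes of the earlier numerical example and an illustrative case list:

```python
import numpy as np

D = np.eye(3)
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])

# (final diagnosis, 0-based indices of the signs confirmed in that patient)
cases = [(0, [0, 2]),   # disease 1, presenting s1 and s3
         (0, [0, 3]),   # disease 1 again, presenting s1 and s4
         (1, [1, 2])]   # disease 2, presenting s2 and s3

# Each case adds its own matrix, so frequent diseases and frequent
# clinical presentations accumulate more weight in M.
M = sum(np.kron(np.outer(D[:, d], D[:, d]),
                S[:, js].sum(axis=1).reshape(1, -1))
        for d, js in cases)

# Sign s1, seen in two of the three cases, now evokes disease 1 with weight 2:
out = M @ np.kron(np.ones(3), S[:, 0])
print(out)  # disease weights 2, 0, 0
```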
Medical queries
Once the training phase is finalized, the system is ready to be used. The presentation of a first sign or symptom initiates a medical query. The availability of a new clinical or laboratory finding
causes the expert to advance one more step in its diagnostic decision. Even when many new signs and symptoms are available, they must be presented to the expert one at a time in order to obtain a
progressive narrowing of the set of possible diagnoses. At each step, the new data are entered into the memory along with the set of possible diagnoses up to that moment. Finally, if the whole set of
signs and symptoms available is sufficient, the system will arrive at a unique diagnosis.
We now follow the system operation. The starting point is when the first clinical data appears. The vector corresponding to this symptom is multiplied, by means of the Kronecker product, by the
vector that represents the set of possible diagnoses (at the starting point, the indifferent vector). If the memory was instructed with equally weighted pathologies, the indifferent vector is the
sum of all the disease vectors stored in the memory. If, on the contrary, the memory was instructed on the basis of individual cases, the indifferent vector will be a linear combination of
the stored disease vectors in which the weight of each disease corresponds to its frequency of presentation. The resulting column vector is then multiplied by the memory matrix. The exit vector
contains either a univocal diagnosis (if the clinical data are sufficient) or a certain linear combination of vectors corresponding to several diseases. If a unique diagnosis was not reached, when
a new sign or symptom becomes available, its corresponding vector will enter the memory after taking its Kronecker product with the exit vector of the previous step. The process is repeated and stops
when a final diagnosis is reached or when no new clinical data are available (see the continuation of the numerical example in the Methods section, "How the system works").
Even if at a certain stage a final diagnosis has not been reached, the outcome of the system nevertheless represents a probabilistic mapping of the possible diagnoses, each one with its respective
probability in agreement with the data available up to that moment. In order to obtain such a map in a direct way it is convenient to choose as disease vectors the columns of an identity matrix of
suitable dimension. In that case, in each exit vector the positions of the non-zero coefficients mark the different possible diagnoses. Normalizing this exit vector so that the sum of its
components is one, the value of each non-zero coefficient represents the probability of each of those diagnoses. Otherwise, these probabilities can be
obtained by multiplying the exit vector by the orthonormal matrix that codifies the diseases.
A reduced model for the diagnosis of late-onset neonatal sepsis
The system described in section Methods classified the patients of the test-set (N = 15) as follows (S = sepsis; NS = non-septic):
Patient:         1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Classification:  S  NS  NS   S   S   S   S   S  NS  NS   S   S   S   S   S
Comparing this classification with the actual illness condition of the patients – shown in Figure 5 – only patient 14 was misdiagnosed. The 2 × 2 table shown in Figure 6 summarizes the
behaviour of our diagnostic system. The sensitivity was 100% and the specificity 80%. The positive likelihood ratio (LR = sensitivity/(1 − specificity)) was 5. Using this set of variables as input
data, the performance of the system in the classification task can be evaluated as very good. It reached a high accuracy ((TP+TN)/N) of 93.3% and a Cohen's kappa index of 0.84
(Kappa = (Accuracy − A_chance)/(1 − A_chance), where A_chance is the accuracy expected by chance: A_chance = (<TP> + <TN>)/N, with <TP> = (TP+FP)(TP+FN)/N and <TN> = (FN+TN)(FP+TN)/N).
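The figures quoted above follow from the confusion-matrix counts in Figure 6; a short check in Python:

```python
# Confusion-matrix counts of the test set (Figure 6).
TP, FP, TN, FN = 10, 1, 4, 0
N = TP + FP + TN + FN                       # 15 neonates

sensitivity = TP / (TP + FN)                # 1.0
specificity = TN / (TN + FP)                # 0.8
LR = sensitivity / (1 - specificity)        # positive likelihood ratio: 5
accuracy = (TP + TN) / N                    # 0.933...

# Cohen's kappa: agreement corrected for chance.
exp_TP = (TP + FP) * (TP + FN) / N
exp_TN = (FN + TN) * (FP + TN) / N
a_chance = (exp_TP + exp_TN) / N
kappa = (accuracy - a_chance) / (1 - a_chance)

print(round(accuracy, 3), round(kappa, 2))  # 0.933 0.84
```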
Discussion and conclusions
We have shown here that context-dependent associative memories could act as medical decision support systems. The system requires the previous coding of a set of diseases and of their corresponding
semiologic findings as individual orthogonal basis vectors. The model presented in this communication is only a minimal module able to evaluate the probabilities of different diagnoses when a set
of signs and symptoms is presented to it.
This expert system based on an associative memory shares with programs using artificial intelligence a great capacity to quickly narrow the number of diagnostic possibilities [^1]. Also, it is able
to cope with variations in the way that a disease can present itself.
A clear advantage of this system is that the probabilities of the different diagnostic possibilities in any particular clinical situation do not have to be arbitrarily assigned by the
specialist, but are automatically provided by the system, in agreement with the acquired experience. In this sense, this neural network model is akin to statistical pattern recognition [^1]. However,
neither programs based on simple matching strategies nor most neural network models in use are able to explain to the physician how they have reached their conclusions. On the contrary, the operation
of this system, which unveils the underlying associative structure of human cognition, is transparent. Obviously, it must be understood that this is not the unique mechanism involved in human
decision making. The relevant properties of this associative memory model are summarized in Figure 7 in comparison to other neural network models and rule-based artificial intelligence systems.
Beginning with a textbook-instructed memory, the system evolves accommodating (superimposing in the memory) new manifestations of disease gathered over time. This process of continued network
education based on empirical evidence leads to databases representative of the different patient populations with their own geo-demographical characteristics.
This model can be easily improved in various directions. The functioning of the system described up to now can be considered a passive phase (in the sense that it consists of an automatic evaluation
of the available information). By adding another module to the system, consisting of a simple memory that associates diseases with the sets of their findings, the expert can enhance its diagnostic
performance. If two or three different diagnostic hypotheses remain after the previous passive phase of diagnosis refinement, this new module can be fed with the vectors mapping each of these
diseases to elicit its associated set of clinical findings. The set of absent features supporting one or the other disease determines what information must be sought next.
Another important expansion of the expert allows giving up the strong assumption that all the findings correspond to a unique disease. Our context-dependent memory stops and gives a null vector when
contradictory data are provided. To prevent such behaviour, a module akin to a novelty filter could be interposed within the recursion with the following property: if a vector with only zero
coefficients arrives, this module associates the whole set of diseases, avoiding setting aside relevant diagnoses and concurrent pathologies. However, this theme needs further investigation: as for
almost every expert system [^26], the clustering of findings and their attribution either to only one disease or to several disorders is a major challenge.
The implementation of a reduced version of the model with the aim of classifying septic or non-septic neonates showed the highly satisfactory capacity of the model to be applied to real data.
We conclude that the context-sensitive associative memory model is a promising alternative in the development of accurate diagnostic tools. We expect that its easy implementation will stimulate
medical informatics groups to develop this expert system at full scale.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AP conceived the application of the model to medical diagnosis, drafted the manuscript and carried out the implementation with real data of neonates with suspected late-onset sepsis. FO participated
in the elaboration of the numerical examples, computational programs and the discussion of the model. Both authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
We thank Dr. Eduardo Mizraji for useful comments and Dr. Julio A. Hernández for revision and improvement of the manuscript.
Szolovits P,Patil RS,Schwartz WB. Artificial Intelligence in medical diagnosisAnnals of Internal Medicine 1988;108:80–87. [pmid: 3276267]
Schwartz WB,Patil RS,Szolovits P. Artificial Intelligence in medicine: Where do we stand?New England Journal of Medicine 1987;316:685–688. [pmid: 3821801]
Arbib MA,EdThe Handbook of Brain Theory and Neural Networks. 1995Cambridge, MA: MIT Press;
Cross SS,Harrison RF,Lee Kennedy R. Introduction to neural networksThe Lancet 1995;346:1075–1079. [doi: 10.1016/S0140-6736(95)91746-2]
Lisboa PJG. A review of evidence of health benefit from artificial neural network in health interventionNeural Networks 2002;15:11–39. [pmid: 11958484] [doi: 10.1016/S0893-6080(01)00111-3]
Baxt WG. Application of artificial neural networks to clinical medicineThe Lancet 1995;346:1135–1138. [doi: 10.1016/S0140-6736(95)91804-3]
Kohonen T. Associative Memory: A System-Theoretical Approach. 1977New York: Springer-Verlag;
Friston KJ. Imaging neuroscience: Principles or maps?Proc Natl Acad Sci USA 1998;95:796–802. [pmid: 9448243] [doi: 10.1073/pnas.95.3.796]
McIntosh AR. Towards a network theory of cognitionNeural Networks 2000;13:861–870. [pmid: 11156197] [doi: 10.1016/S0893-6080(00)00059-9]
Pomi A,Mizraji E. Semantic graphs and associative memoriesPhysical Review E 2004;70:066136. [doi: 10.1103/PhysRevE.70.066136]
Mizraji E. Vector logics: the matrix-vector representation of logical calculusFuzzy Sets and Systems 1992;50:179–185. [doi: 10.1016/0165-0114(92)90216-Q]
Anderson JA,Cooper L,Nass MM,Freiberger W,Grenander U. Some properties of a neural model for memoryAAAS Symposium on Theoretical Biology and Biomathematics. 1972Milton, WA. Leon N Cooper
Cooper LN. Memories and memory: a physicist's approach to the brainInternational J Modern Physics A 2000;15:4069–4082.
Cooper LN. Lundquist B & SA Possible Organization of Animal Memory and LearningProceedings of the Nobel Symposium on Collective Properties of Physical Systems. 1973New York: Academic Press;
Mizraji E. Context-dependent associations in linear distributed memoriesBulletin Math Biol 1989;51:195–205.
Valle-Lisboa JC,Reali F,Anastasía H,Mizraji E. Elman topology with sigma-pi units: An application to the modelling of verbal hallucinations in schizophreniaNeural Networks 2005;18:863–877. [pmid:
15935616] [doi: 10.1016/j.neunet.2005.03.009]
Mizraji E,Pomi A,Alvarez F. Multiplicative contexts in associative memoriesBioSystems 1994;32:145–161. [pmid: 7919113] [doi: 10.1016/0303-2647(94)90038-8]
Pomi-Brea A,Mizraji E. Memories in contextBioSystems 1999;50:173–188. [pmid: 10400268] [doi: 10.1016/S0303-2647(99)00005-2]
Mizraji E,Lin J. A dynamical approach to logical decisionsComplexity 1997;2:56–63. [doi: 10.1002/(SICI)1099-0526(199701/02)2:3<56::AID-CPLX12>3.0.CO;2-S]
Mizraji E,Lin J. Fuzzy decisions in modular neural networksInt J Bifurcation and Chaos 2001;11:155–167. [doi: 10.1142/S0218127401002043]
Pomi A,Mizraji E. A cognitive architecture that solves a problem stated by MinskyIEEE on Systems, Man and Cybernetics B (Cybernetics) 2001;31:729–734. [doi: 10.1109/3477.956034]
Stoll BJ,Hansen N,Fanaroff AA,Wright LL,Carlo WA,Ehrenkranz RA,Lemons JA,Donovan EF,Stark AR,Tyson JE,Oh W,Bauer CR,Korones SB,Shankaran S,Laptook AR,Stevenson DK,Papile L-A,Poole WK. Late-Onset
Sepsis in Very Low Birth Weight Neonates: The Experience of the NICHD Neonatal Research NetworkPediatrics 2002;110:285–291. [pmid: 12165580] [doi: 10.1542/peds.110.2.285]
Rubin LG,Sánchez PJ,Siegel J,Levine G,Saiman L,Jarvis WR. Evaluation and Treatment of Neonates with Suspected Late-Onset Sepsis: A Survey of Neonatologists' PracticesPediatrics 2002;110:e42. [pmid:
12359815] [doi: 10.1542/peds.110.4.e42]
Perotti E,Cazales C,Martell M. Estrategias para el diagnóstico de sepsis neonatal tardíaRev Med Uruguay 2005;21:314–320.
Van Loan CF. The ubiquitous Kronecker productJournal of Computational and Applied Mathematics 2000;123:85–100. [doi: 10.1016/S0377-0427(00)00393-9]
Szolovits P,Pauker SG. Categorical and probabilistic reasoning in medicine revisitedArtificial Intelligence 1993;59:167–180. [doi: 10.1016/0004-3702(93)90183-C]
Figure 1
A module with diagnostic abilities. The neural module receives the input of two vectors: one representing the set of possible diseases up to the moment and the other vector corresponding to a
new sign, symptom or laboratory result. The action of the neurons that constitute the neural module can be divided into two sequential steps: the Kronecker product of these two entries and
the association of this stimulus with an output activity pattern. This output vector is a linear combination of a narrower set of disease vectors that can be reinjected if a new clinical data
arrives or can be processed to obtain the probability attributable to each diagnostic decision.
Figure 2
Three diseases with their corresponding signs. The set of signs and symptoms associated with three different diseases instructed in the memory of the numerical example according to equation (1).
Figure 3
Data from the study of Martell and collaborators [24]. The total number of neonates with suspected late-onset sepsis was N = 47 (septic: E = 16; non-septic: NE = 31). Metabolic acidosis was
defined for patients with adequate ventilation; WBC count was considered normal within 5,000–25,000; thrombocytopenia was defined for platelet counts <40,000.
Figure 4
Attributes' vectors for the septic and non-septic groups. For each variable, true positive (TP), false positive (FP), true negative (TN) and false negative (FN) were calculated: TP =
sensitivity × E; FP = (sensitivity/LR) × NE; TN = specificity × NE; FN = N - (TP+FP+TN).
Figure 5
Test set. For each one of the 15 neonates of the test set, the thirteen-dimensional column vector has ones in the coefficients corresponding to the position of the confirmed presence or
absence of the signs already shown in Figure 4. The final diagnosis is shown at the bottom of the column (S: sepsis; NS: no sepsis).
Figure 6
Associative memory classification. The 15 cases tested (actual sepsis: 10; non-septic: 5) were classified as positive (S) or negative (NS) by the neural network. TP = true positive; FP =
false positive; TN = true negative and FN = false negative. A total of 11 neonates were tested positive, and 4 negative. The sensitivity was 100% and the specificity 80%.
Figure 7. Outstanding characteristics of different models. AM: associative memory; ANN: artificial neural network; AI: artificial intelligence.
Report in Wirtschaftsmathematik (WIMA Report)
Efficient Computation of Equilibria in Bottleneck Games via Game Transformation (2011)
Thomas L. Werth, Heike Sperber, Sven O. Krumke
Universal Shortest Paths (2010)
Lara Turner, Horst W. Hamacher
We introduce the universal shortest path problem (Univ-SPP) which generalizes both - classical and new - shortest path problems. Starting with the definition of the even more general universal
combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality.
Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality-constrained
subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard and they can be
formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as classical sum
shortest path problem.
On the Generality of the Greedy Algorithm for Solving Matroid Base Problems (2013)
Lara Turner, Matthias Ehrgott, Horst W. Hamacher
It is well known that the greedy algorithm solves matroid base problems for all linear cost functions and is, in fact, correct if and only if the underlying combinatorial structure of the problem
is a matroid. Moreover, the algorithm can be applied to problems with sum, bottleneck, algebraic sum or \(k\)-sum objective functions.
Variants of the Shortest Path Problem (2011)
Lara Turner
The shortest path problem in which the \((s,t)\)-paths \(P\) of a given digraph \(G =(V,E)\) are compared with respect to the sum of their edge costs is one of the best known problems in
combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1,k_2)\)-max, \((k_1,k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey on existing algorithms and propose a general model for
those problems not yet treated in literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints which can be solved in pseudo-polynomial
time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
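As a concrete illustration of one of these variants, the bottleneck (min–max) objective can be handled by a Dijkstra-style search in which a path's cost is the maximum of its edge costs rather than their sum. The graph and costs below are a made-up example, not taken from the paper:

```python
import heapq

def bottleneck_shortest_path(graph, s, t):
    """Minimize, over all s-t paths, the maximum edge cost on the path.
    graph: dict mapping node -> list of (neighbor, edge_cost) pairs."""
    best = {s: float('-inf')}          # best known bottleneck value per node
    heap = [(float('-inf'), s)]
    while heap:
        b, u = heapq.heappop(heap)
        if u == t:
            return b                   # first settled label for t is optimal
        if b > best.get(u, float('inf')):
            continue                   # stale heap entry
        for v, c in graph.get(u, []):
            nb = max(b, c)             # path cost = max of edge costs
            if nb < best.get(v, float('inf')):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return None                        # t unreachable from s

G = {'s': [('a', 5), ('b', 7)], 'a': [('t', 2)], 'b': [('t', 1)]}
print(bottleneck_shortest_path(G, 's', 't'))  # -> 5 (via s-a-t)
```

The only change from the classical sum-objective Dijkstra is the label update `max(b, c)` in place of `b + c`; the same substitution idea underlies several of the other objectives surveyed above.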
Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions (1999)
Alexander L. Topchishvili, Vilhelm G. Maisuradze, Matthias Ehrgott
The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of
directional derivatives which does not assume regularity of the ordering cone K. This result is then used to prove that for continuous convex operators the cone of decrease directions can be
represented in terms of the directional derivatives. Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone K. Finally, we show
that the continuity of the convex operator can be replaced by its K-boundedness.
Earliest Arrival Flow with Time Dependent Capacity Approach to the Evacuation Problems (2001)
Stevanus A. Tjandra
Abstract: Evacuation problems can be modeled as flow problems in dynamic networks. A dynamic network is defined by a directed graph G = (N,A) with sources, sinks and non-negative integral travel times and capacities for every arc (i,j) ∈ A. The earliest arrival flow problem is to send a maximum amount of dynamic flow reaching the sink not only for the given time horizon T, but also for any time T' < T. This problem mimics the evacuation problem of public buildings where the number of occupants may not be known. For buildings where the number of occupants is known and concentrated in one source, the quickest flow model is used to find the minimum egress time. We propose in this paper a solution procedure for evacuation problems with a single source, where the number of occupants is either known or unknown. The possibility that the flow capacity may change due to increasing smoke density or fire obstructions can be mirrored in our model. The solution procedure iteratively looks for the shortest conditional augmenting path (SCAP) from source to sink and computes the time intervals in which flow reaches the sink via this path.
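The iterative augmenting-path idea is easiest to see in the static setting: the classical Edmonds–Karp algorithm repeatedly finds a shortest augmenting path by BFS and saturates it. The sketch below is this generic static algorithm, not the paper's time-dependent SCAP procedure:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a BFS-shortest path.
    cap: dict-of-dicts of residual capacities (modified in place)."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                 # no augmenting path left
        # Recover the path, find its bottleneck capacity, push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + push
        flow += push

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}
print(max_flow(cap, 's', 't'))  # -> 4
```

In the dynamic, time-expanded setting, each augmentation additionally has to respect arc travel times and the time intervals in which capacity is available, which is the extra bookkeeping the SCAP procedure performs.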
Complexity and Approximability of the Maximum Flow Problem with Minimum Quantities (2012)
Clemens Thielen, Stephan Westphal
We consider the maximum flow problem with minimum quantities (MFPMQ), which is a variant of the maximum flow problem where the flow on each arc in the network is restricted to be either zero or
above a given lower bound (a minimum quantity), which may depend on the arc. This problem has recently been shown to be weakly NP-complete even on series-parallel graphs. In this paper, we
provide further complexity and approximability results for MFPMQ and several special cases. We first show that it is strongly NP-hard to approximate MFPMQ on general graphs (and even bipartite
graphs) within any positive factor. On series-parallel graphs, however, we present a pseudo-polynomial time dynamic programming algorithm for the problem. We then study the case that the minimum
quantity is the same for each arc in the network and show that, under this restriction, the problem is still weakly NP-complete on general graphs, but can be solved in strongly polynomial time on
series-parallel graphs. On general graphs, we present a \((2 - 1/\lambda) \)-approximation algorithm for this case, where \(\lambda\) denotes the common minimum quantity of all arcs.
Truthful Mechanisms for Selfish Routing and Two-Parameter Agents (2009)
Clemens Thielen
We prove a general monotonicity result about Nash flows in directed networks and use it for the design of truthful mechanisms in the setting where each edge of the network is controlled by a
different selfish agent, who incurs costs when her edge is used. The costs for each edge are assumed to be linear in the load on the edge. To compensate for these costs, the agents impose tolls
for the usage of edges. When nonatomic selfish network users choose their paths through the network independently and each user tries to minimize a weighted sum of her latency and the toll she
has to pay to the edges, a Nash flow is obtained. Our monotonicity result implies that the load on an edge in this setting can not increase when the toll on the edge is increased, so the
assignment of load to the edges by a Nash flow yields a monotone algorithm. By a well-known result, the monotonicity of the algorithm then allows us to design truthful mechanisms based on the
load assignment by Nash flows. Moreover, we consider a mechanism design setting with two-parameter agents, which is a generalization of the case of one-parameter agents considered in a seminal
paper of Archer and Tardos. While the private data of an agent in the one-parameter case consists of a single nonnegative real number specifying the agent's cost per unit of load assigned to her,
the private data of a two-parameter agent consists of a pair of nonnegative real numbers, where the first one specifies the cost of the agent per unit load as in the one-parameter case, and the
second one specifies a fixed cost, which the agent incurs independently of the load assignment. We give a complete characterization of the set of output functions that can be turned into truthful
mechanisms for two-parameter agents. Namely, we prove that an output function for the two-parameter setting can be turned into a truthful mechanism if and only if the load assigned to every agent
is nonincreasing in the agent's bid for her per unit cost and, for almost all fixed bids for the agent's per unit cost, the load assigned to her is independent of the agent's bid for her fixed
cost. When the load assigned to an agent is continuous in the agent's bid for her per unit cost, it must be completely independent of the agent's bid for her fixed cost. These results motivate
our choice of linear cost functions without fixed costs for the edges in the selfish routing setting, but the results also seem to be interesting in the context of algorithmic mechanism design.
A Class of Switching Regimes Autoregressive Driven Processes with Exogenous Components (2008)
Joseph Tadjuidje Kamgaing, Hernando Ombao, Richard A. Davis
In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In
contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have
potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and
exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable
conditions on the parameters of the model. Such properties are important for understanding the stability properties of the model as well as deriving the asymptotic behavior of various statistics
and model parameter estimators.
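A minimal simulation of such a process — a two-regime AR(1) whose probability of switching depends on the previous observation through a logistic link — might look as follows. All coefficients are invented for illustration and are not from the paper:

```python
import math
import random

def simulate_switching_ar(n, seed=0):
    """Two-regime AR(1) whose regime-transition probability depends on
    the past value x[t-1] via a logistic function (illustrative model)."""
    random.seed(seed)
    phi = {0: 0.5, 1: -0.3}           # AR coefficients per regime (made up)
    sigma = {0: 1.0, 1: 0.5}          # innovation std. dev. per regime
    x = [0.0]
    for _ in range(1, n):
        # Transition probability driven by the past observation,
        # in contrast to a hidden Markov model's fixed transition matrix.
        p1 = 1.0 / (1.0 + math.exp(-x[-1]))   # P(next regime = 1)
        regime = 1 if random.random() < p1 else 0
        x.append(phi[regime] * x[-1] + sigma[regime] * random.gauss(0, 1))
    return x

series = simulate_switching_ar(500)
print(len(series))  # -> 500
```

Because both AR coefficients have modulus below one, the simulated series stays stable, consistent with the geometric-ergodicity conditions the abstract refers to.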
Maximum Likelihood Estimators for Multivariate Hidden Markov Mixture Models (2013)
Joseph Tadjuidje Kamgaing
In this paper we consider a multivariate switching model with constant state means and covariances. In this model, the switching mechanism between the basic states of the observed time series is controlled by a hidden Markov chain. As an illustration, under a Gaussian assumption on the innovations and some rather simple conditions, we prove the consistency and asymptotic normality of the maximum likelihood estimates of the model parameters.
Economics Help
October 25th 2010, 10:54 AM
Hey guys, I have an economics problem that I hope somebody can shed some light on for me:
Suppose that the demand for stilts is given by Q = 1500 - 50P and that the long run total operating costs of each stilt-making firm in a competitive industry are given by C(q) = 0.5q^2 - 10q. Entrepreneurial talent for stilt making is scarce. The supply curve for entrepreneurs is given by Qs = 0.25w, where w is the annual wage paid. Suppose also that each stilt-making firm requires one (and only one) entrepreneur (hence, the quantity of entrepreneurs hired is equal to the number of firms). Long run total costs for each firm are given by C(q,w) = 0.5q^2 - 10q + w.
a. What is the long run equilibrium quantity of stilts produced? How many stilts are produced by each firm? What is the long run equilibrium price of stilts? How many firms will there be? How
many entrepreneurs will be hired, and what is their wage?
b.Suppose that the demand for stilts shifts outward to Q=2428-50P. How would you now answer the question posed in part a?
c. Because stilt-making entrepreneurs are the cause of the upward-sloping long run supply curve in this problem, they will receive all rents generated as industry output expands. Calculate the increase in rents between parts (a) and (b). Show that this value is identical to the change in long run producer surplus as measured along the stilt supply curve.
Anyone have any ideas?
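One way to approach part (a): combine the firm's profit-maximization condition (P = MC = q − 10), the zero long-run-profit condition (which pins down the wage as a function of price), and entrepreneur market clearing (number of firms n = Qs = 0.25w), then solve the market-clearing equation numerically. The sketch below follows that reading of the problem; treat it as a hint rather than an authoritative answer key:

```python
# Part (a) sketch.  Each firm: C(q, w) = 0.5 q^2 - 10 q + w, so MC = q - 10
# and profit maximization gives q = P + 10.  Zero long-run profit:
#   P*q - (0.5 q^2 - 10 q + w) = 0   =>   w = 0.5 P^2 + 10 P + 50.
# Entrepreneur market: n firms = Qs = 0.25 w.  Demand: 1500 - 50 P = n*q.

def excess_demand(P):
    q = P + 10                         # each firm's output at price P
    w = 0.5 * P**2 + 10 * P + 50       # wage implied by zero profit
    n = 0.25 * w                       # number of firms = entrepreneurs
    return (1500 - 50 * P) - n * q     # demand minus industry supply

# Bisection on P: demand falls and supply rises in P,
# so excess demand is strictly decreasing.
lo, hi = 0.0, 30.0
for _ in range(100):
    mid = (lo + hi) / 2
    if excess_demand(mid) > 0:
        lo = mid
    else:
        hi = mid

P = (lo + hi) / 2
q = P + 10
w = 0.5 * P**2 + 10 * P + 50
n = 0.25 * w
print(round(P), round(q), round(n), round(w), round(n * q))
# -> 10 20 50 200 1000: P = 10, 20 stilts per firm, 50 firms,
#    wage 200, total output 1000
```

Part (b) is the same computation with the demand intercept changed to 2428, and part (c) compares the entrepreneurs' rents (the area above the wage supply curve Qs = 0.25w up to the equilibrium wage) in the two equilibria.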
South Plainfield Math Tutor
Find a South Plainfield Math Tutor
...My unique experience includes working with students in the classroom, as a private tutor, and as a homeschooling parent for 8 years. I like being creative and fun, while helping students to feel that they can do it! I have supervised, tested, and helped students in grades K-6 in all subject areas...
18 Subjects: including trigonometry, algebra 1, prealgebra, reading
...Danielle C. Currently I am the After Care Director (Preschool – 8th Grade), where I have restructured and enhanced the overall program by setting goals, developing a curriculum, organizing age-appropriate activities and introducing STEM projects. I lead the program with the assistance of aides and...
33 Subjects: including ACT Math, SAT math, English, reading
...I have been tutoring and teaching since I was in high school myself because I love doing it! During and after earning a Master of Science in mathematics, I spent 8 years teaching at the
post-secondary level in universities and community colleges. I have also worked with middle and high school students.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I take pride in helping my students have academic success! To give some background on myself, I have a Bachelor of Science degree in Computer Science with a minor in Mathematics. I also hold a
Master of Science degree in Information Systems Management.
5 Subjects: including algebra 2, Microsoft Word, prealgebra, algebra 1
...Phonics is a core component of reading. While studying at college, I took two extensive course on reading and phonics as part of my coursework. In addition, my recent prior tutoring experience
at a tutoring facility required that I know phonics in order to help children break down words and manipulate phonetic sounds.
18 Subjects: including algebra 1, reading, English, public speaking
Department of Physics
101 Physics Building
University of Virginia
P.O. Box 400714
Charlottesville, VA 22904-4714
(434) 924-3781
Degree Requirements
The Master's Program
Three master's-level degrees are offered in the Physics Department. Candidates for the M.S. degree must pass thirty credits of courses approved by the graduate adviser, present a thesis, and defend it in an oral examination. Offered primarily for secondary or community college teachers, the M.A. degree requirements depend on the candidate's background and are developed with the departmental graduate program committee. The M.A.P.E. (Master of Arts in Physics Education) degree is designed to provide middle school physical science and high school physics teachers with a strong background in physics. Courses numbered in the 600s are taken to satisfy the requirements for this degree. Typically students take two courses in the summer in residence at UVa and one distance learning course in the academic year, totaling ten credits each year, to complete the required thirty credits in two and a half years.
The Ph.D. Program
Unless credit for advanced standing is given by the departmental advisor, Ph.D. candidates must pass 12 departmentally required courses (seven specified "core courses" and five electives) in addition to six elective courses passed with a letter grade (not S or U) and six more courses, including non-topical research.
Qualifying Examination
Candidates for the Ph.D. degree must pass a qualifying examination in the subjects of classical mechanics, electricity and magnetism, quantum mechanics, and statistical mechanics. The material for this examination is covered in the seven core courses, which should be completed before the start of the fourth semester.
Research and Thesis Requirements
Ph.D. candidates must present a dissertation on their research, satisfactory to their research advisor, and defend it in an oral examination.
The Engineering Physics Program
The Department of Physics also offers an engineering physics degree program jointly administered with the School of Graduate Engineering and Applied Science. The engineering physics program offers the flexibility of pursuing an advanced degree in interdisciplinary fields defined by the student. Students seeking the Ph.D. degree in this program must satisfy the engineering physics degree course requirements: two each in physics and engineering and one in mathematics. In addition, students must also satisfy any other general requirements listed in the School of Graduate Engineering chapter of this Record. Students must choose a research advisor and declare a concentration in the Engineering School within the fall semester of their first year. The qualifying examination for a Ph.D. consists of an oral examination following a written examination of three components; students must take at least one component in physics and one in engineering.
Course Descriptions
Note: The courses listed below are given as the needs of students require.
PHYS 519 - (3) (Y)
Prerequisite: Instructor permission.
Studies practical electronics for scientists, from resistors to microprocessors.
PHYS 521 - (3) (Y)
Theoretical Mechanics I
Prerequisite: PHYS 321 and MATH 522, or instructor permission.
Studies the statics and dynamics of particles and rigid bodies. Discusses methods of generalized coordinates, the Lagrangian, Hamilton-Jacobi equations, and action-angle variables. Relation to the
quantum theory is explored.
PHYS 524 - (3) (Y)
Introduction to the Theory of General Relativity
Prerequisite: Advanced calculus through partial differentiation and multiple integration; vector analysis in three dimensions.
Reviews special relativity and coordinate transformations. Includes the principle of equivalence; effects of gravitation on other systems and fields; general tensor analysis in curved spaces and
gravitational field equations; Mach's principle, tests of gravitational theories: perihelion precession, red shift, bending of light, gyroscopic precession, radar echo delay; gravitational radiation;
relativistic stellar structure and cosmography; and a short survey of cosmological models.
PHYS 531 - (3) (Y)
Prerequisite: Knowledge of vector calculus and previous exposure to Maxwell's equations.
Includes reflection and refraction at interfaces, geometrical optics, interference phenomena, diffraction, Gaussian optics, and polarization.
PHYS 547 - (3) (IR)
Introduction to Molecular Biophysics
Prerequisite: PHYS 331 or CHEM 361, PHYS 355 or CHEM 362, MATH 521, or instructor permission.
Introduces the physics of molecular structures and processes in living systems. Includes molecular structure analysis by X-ray (and neutron) diffraction; electronic configuration of atoms, groups,
and small molecules of critical importance in biology; physical methods of macromolecular structure determination, in solution and in the solid state; thermodynamic and electronic factors underlying
group interactions, proton dissociation, and charge distribution in macromolecules; solvent-macromolecule interactions; action spectroscopy; and rate processes in series and parallel.
PHYS 551, 552 - (3) (IR)
Special Topics in Classical and Modern Physics
Prerequisite: PHYS 342 or instructor permission.
Topics of current interest in physics research and pedagogy. May be repeated.
PHYS 562 - (3) (Y)
Introduction to Solid State Physics
Includes crystal structures, lattice vibrations, and electronic properties of insulators, metals, and semiconductors; superconductivity.
PHYS 572 - (3) (Y)
Introduction to Nuclear and Particle Physics
Studies subatomic structure, basic constituents and their mutual interactions.
PHYS 582 - (3) (Y)
Introduction to Nanophysics
An introduction to rapidly-evolving ideas in nanophysics. Covers the principles involved in the fabrication of nanosystems and in the measurement of phenomena on the nanoscale. Concepts necessary to
appreciate applications in such areas as non-electronics, nano-magnetism, nano-mechanics and nano-optics, are discussed.
PHYS 593 - (3) (Y)
Independent Study
Independent study supervised by a faculty member, culminating in a written report, essay, or examination. May be repeated.
Professional Development Courses for Teachers
Courses numbered in the 600s are offered for the professional development of K-12 teachers to improve competency in physics and to assist them in obtaining endorsement or recertification. In the Graduate School of Arts and Sciences these courses count for degree credit only for the M.A.P.E. degree.
PHYS 601, 602 - (3) (YI)
Concepts of Physics for Elementary School Teachers I, II
Prerequisite: Undergraduate degree or instructor permission. Primarily for teachers with little or no background in physics; not suitable for physics majors or any graduate degrees in physics
(including the M.A.P.E.).
601 is a course in classical physics including mechanics, heat, electricity, magnetism, and waves. 602 is a course in modern physics including waves, optics, relativity, atomic structure, and nuclear
physics. These may be distance-learning courses for in-service teachers. The two courses may be taken in any order.
PHYS 605, 606 - (3) (SI)
How Things Work I, II
Prerequisite: Undergraduate degree or instructor permission.
These courses consider objects from our daily environment and explain how they work, with emphasis on physics concepts. PHYS 605 focuses on mechanics and heat; PHYS 606 treats objects involving electromagnetism, light, special materials, and nuclear energy. These may be distance-learning courses intended for in-service science teachers, with lectures, homework and exams conducted via the internet.
PHYS 609 - (3) (SI)
Galileo and Einstein
Prerequisite: Undergraduate degree or instructor permission.
This course examines how new understanding of the natural world developed from the time of Galileo to Einstein taking the two famous scientists as case studies. This may be a distance learning course
intended for in-service science teachers with lectures, homework and exams conducted via the internet.
PHYS 611, 612 - (3) (IR)
Physical Science for Teachers
Prerequisite: Undergraduate degree and presently (or intending to be) a K-8 teacher.
Laboratory-based course providing elementary and middle school teachers hands-on experience in the principles and applications of physical science. Not suitable for physics majors; no previous
college physics courses are assumed.
PHYS 613 - (1-3) (SI)
Topics in Physical Science
Prerequisite: Undergraduate degree or instructor permission.
Small classes studying special topics in physical science using cooperative teaching in a laboratory setting. Hands-on experiments and lecture demonstrations allow special problems to be posed and
solved. May be taken more than once.
PHYS 620 - (1) (SI)
Topical Physical Science
Prerequisite: Undergraduate degree or instructor permission.
A series of one-credit science courses of interest to K-12 teachers, as well as the general public. These courses are offered anywhere in the state as needed through School of Continuing and
Professional Studies regional centers. The courses are designed to meet Virginia's SOLs and consist of lectures, demonstrations, and many hands-on science activities. Current course topics include
Sound, Light & Optics, Aeronautics and Space, Electricity, Meteorology, Magnetism, Heat & Energy, Matter, and Force & Motion. May be taken more than once.
PHYS 631, 632, 633 - (4) (SI)
Classical and Modern Physics I, II, III
Prerequisite: Undergraduate degree and instructor permission.
A comprehensive study of physics using some calculus and emphasizing concepts, problem solving, and pedagogy. This course series is intended for in-service science teachers, particularly middle
school physical science and high school physics teachers. These courses can be used for crossover teachers who wish to obtain endorsement or certification to teach high school physics. They are
required courses for the MAPE degree. The courses are typically taught for 4 weeks in the summer with a daily two-hour lecture and two-hour problem session. Problem sets continue for three months
into the next semester.
I. Motion, kinematics, Newton's laws, energy and momentum conservation, gravitation, harmonic motion, waves, sound, heat, and fluids.
II. Coulomb's law, Gauss's law, electrostatics, electric fields, capacitance, inductance, circuits, magnetism, and electromagnetic waves.
III. Geometric and physical optics, relativity, and modern physics.
PHYS 635, 636, 637 - (3) (SI)
Curriculum Enhancement I, II, III
Prerequisite: Undergraduate degree and instructor permission.
A laboratory sequence normally taken concurrently with PHYS 631, 632, 633, respectively. It includes experiments with sensors that are integrated with graphing calculators and computers and other
experiments using low cost apparatus. The courses are typically held in the summer for four weeks and are extended into the next semester creating an activity plan. The laboratories utilize best
teaching practices and hands-on experimentation in cooperative learning groups.
PHYS 640 - (3-6) (SI)
Independent Study
Prerequisite: Undergraduate degree and instructor permission.
A program of independent study for in-service science teachers carried out under the supervision of a faculty member culminating in a written report. A typical project may be the creation and
development of several physics demonstrations for the classroom or a unit activity. The student may carry out some of this work at home, school, or a site other than the University.
PHYS 641 - (3) (Y)
Physics Teaching Pedagogy
Prerequisite: PHYS 631, 632, 633, 635, and 636, or instructor permission. Not suitable for physics majors.
A course in the pedagogy of teaching secondary school physics. This may be a distance-learning course intended for in-service teachers desiring to teach secondary school physics.
Advanced Graduate Courses
Courses primarily for students seeking M.A., M.S., and Ph.D. degrees in physics.
PHYS 719 - (3) (Y)
Advanced Experimental Physics
Selected experiments designed to introduce students to concepts and techniques from a variety of fields of contemporary physics.
PHYS 725 - (3) (Y)
Mathematical Methods of Physics I
Prerequisite: MATH 521 and 522 or instructor permission.
Discusses matrices, complex analysis, Fourier series and transforms, ordinary differential equations, special functions of mathematical physics, partial differential equations, general vector spaces,
integral equations and operator techniques, and Green's functions.
PHYS 742 - (3) (Y)
Electricity and Magnetism I
Prerequisite: PHYS 725 or instructor permission.
A consistent mathematical account of the phenomena of electricity and magnetism; electrostatics and magnetostatics; macroscopic media; Maxwell theory; and wave propagation.
PHYS 743 - (3) (Y)
Electricity and Magnetism II
Prerequisite: PHYS 742 or instructor permission.
Development of the theory of special relativity, relativistic electrodynamics, radiation from moving charges, classical electron theory, and Lagrangian and Hamiltonian formulations of electrodynamics.
PHYS 751 - (3) (Y)
Quantum Theory I
Prerequisite: Twelve credits of 300-level physics courses and MATH 521, 522, or instructor permission.
Introduces the physical basis of quantum mechanics, the Schroedinger equation and the quantum mechanics of one-particle systems, and stationary state problems.
PHYS 752 - (3) (Y)
Quantum Theory II
Prerequisite: PHYS 751 or instructor permission.
Includes angular momentum theory, techniques of time-dependent perturbation theory, emission and absorption of radiation, systems of identical particles, second quantization, and Hartree-Fock theory.
PHYS 795, 796 - (3) (Y)
Research leading to a master's thesis.
PHYS 797 - (3-12) (Y)
Continuation of PHYS 796.
Note: Admission to 800- and 900-level PHYS courses requires the instructor's permission.
PHYS 822 - (3) (E)
Lasers and Nonlinear Optics
Prerequisite: PHYS 531 and exposure to quantum mechanics.
Studies nonlinear optical phenomena; the laser, sum, and difference frequency generation, optical parametric oscillation, and modulation techniques.
PHYS 826 - (3) (IR)
Ultrafast Laser Spectroscopy
Prerequisite: There are no explicit graduate course prerequisites, but familiarity with undergraduate-level quantum mechanics, electricity and magnetism, optics, and differential calculus is assumed.
The course is designed to provide students in physics, chemistry, and engineering an overview of methods in which laser pulses with extremely short duration are used to explore atoms, molecules, and
surfaces for basic and applied research applications. Students are also provided with formal descriptions of how laser pulses with extremely short duration are produced, manipulated, and
PHYS 831 - (3) (Y)
Statistical Mechanics
Prerequisite: PHYS 751.
Discusses thermodynamics and kinetic theory, and the development of the microcanonical, canonical, and grand canonical ensembles. Includes Bose-Einstein and Fermi-Dirac distributions, techniques for
handling interacting many-particle systems, and extensive applications to physical problems.
PHYS 832 - (3) (IR)
Statistical Mechanics II
Prerequisite: PHYS 831.
Further topics in statistical mechanics.
PHYS 842 - (3) (O)
Atomic Physics
Prerequisite: PHYS 752 or instructor permission.
Studies the principles and techniques of atomic physics with application to selected topics, including laser and microwave spectroscopy, photoionization, autoionization, effects of external fields,
and laser cooling.
PHYS 853 - (3) (Y)
Introduction to Field Theory
Prerequisite: PHYS 752.
Introduces the quantization of field theories, including those based on the Dirac and Klein-Gordon equations. Derives perturbation theory in terms of Feynman diagrams, and applies it to simple field
theories with interactions. Introduces the concept of renormalization.
PHYS 854 - (3) (Y)
Modern Field Theory
Prerequisite: PHYS 853.
Applies field theory techniques to quantum electrodynamics and to the renormalization-group description of phase transitions. Introduces the path integral description of field theory.
PHYS 861 - (3) (Y)
Solid State Physics I
Prerequisite: PHYS 752 or instructor permission.
The description and basic theory of the electronic properties of solids, including band structure, electrical conduction, optical properties, magnetism, and superconductivity.
PHYS 862 - (3) (IR)
Solid State Physics II
A discussion of various topics and problems relating to the physical properties of crystalline solids.
PHYS 871 - (3) (E)
Nuclear Physics
Discusses nuclear theory and experiment. Description and interpretation of nuclear reactions including fission, and the structure of nuclei.
PHYS 872 - (3) (IR)
Nuclear Physics II
A continuation of the topics of Physics 871.
PHYS 875 - (3) (IR)
Elementary Particle Physics
Discusses the various topics and problems relative to the physical properties and interactions of elementary particles.
PHYS 876 - (3) (IR)
Elementary Particle Physics II
Extension of PHYS 875. Studies topics in modern elementary particle physics, including unified gauge theory of electroweak interactions and introduction to QCD and lattice gauge theory.
PHYS 881, 882 - (3) (IR)
Selected Topics in Modern Physics
PHYS 888 - (3) (E)
Quantum Optics and Quantum Information
Prerequisite: PHYS 751 or instructor permission.
Studies the quantum theory of light and other boson fields with a special emphasis on the nonclassical physics exemplified by squeezed and entangled quantum states. Applications to quantum
communication, quantum computing, and ultraprecise measurements are discussed.
PHYS 897 - (3-12) (Y)
Non-Topical Research, Preparation for Research
For master's research, taken before a thesis director has been selected.
PHYS 898 - (3-12) (Y)
Non-Topical Research
For master's thesis, taken under the supervision of a thesis director.
PHYS 901, 902 - (3) (Y)
General Physics Research Seminar
PHYS 925, 926 - (3) (IR)
Research Seminar in Theoretical Physics
PHYS 951, 952 - (3) (Y)
Atomic and Molecular Seminar
PHYS 961, 962 - (3) (Y)
Research Seminar in Solid State Physics
PHYS 971, 972 - (3) (Y)
Research Seminar in Nuclear Physics
PHYS 981, 982 - (3) (Y)
Research Seminar in Particle Physics
PHYS 997 - (3-12) (Y)
Non-Topical Research, Preparation for Doctoral Research
For doctoral research, taken before a dissertation director has been selected.
PHYS 999 - (3-12) (Y)
Non-Topical Research
For doctoral dissertation, taken under the supervision of a dissertation director.
Physics Colloquium
The faculty and graduate students meet weekly for the presentation by a visiting speaker of recent work in the physical sciences.
Determine the quadratic function of a parabola with x-intercepts x = 7 and
x = -2 and y-intercept y = 5
We are given the two x-intercepts as x=-2,x=7 and the y-intercept y=5.
The intercept form of a parabola is y=a(x-p)(x-q), where p and q are the x-intercepts and a determines how wide the parabola opens. (Also, if a<0 the parabola opens down.)
Substituting -2 for p and 7 for q we get:
Now the y-intercept is the point where x=0; substituting for x we solve for a:
5=a(0+2)(0-7) ==> 5=-14a ==> `a=-5/14`
The equation is `y=-5/14(x+2)(x-7)` .
In standard form: `y=-5/14x^2+25/14x+5`
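As a quick numeric check (a sketch added here, not part of the original answer), the function can be evaluated at the three given intercepts:

```python
def f(x):
    # y = -5/14 (x + 2)(x - 7), the parabola found above
    return -5 / 14 * (x + 2) * (x - 7)

# the x-intercepts should give 0 and the y-intercept should give 5
print(f(-2), f(7), f(0))
```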
Finding instantaneous rate of exponential equation using (a+h)-(a)/h method
February 26th 2011, 07:01 PM #1
Junior Member
Feb 2011
Finding instantaneous rate of exponential equation using (a+h)-(a)/h method
Estimate the instantaneous rate of change in the equation 18999(0.93)^t when a = 5
Would it start like this?
18999(0.93)^(5+h) - 18999(0.93)^5 / h <<<not sure about this
13217^h - 13217 / h <<<<<not sure about this
I'm not totally sure. What is a here? Should that be t? If so, then what you want is $\frac{f(5+h)-f(5)}{h} = \frac{18999(0.93)^{5+h} - 18999(0.93)^{5}}{h}$ for very small $h$.
Yeah the original equation is 18999(0.93)^t
So you don't have to simplify the equation anymore after your 2nd one? Just plug in the very small h's right?
Sure, the original function is that. But what's a? You say a = 5, but what does that mean? Do you mean to say that t = 5, i.e. you're investigating the instantaneous rate of change near where the
function takes the value 5? If so, then the above is correct and there isn't a simplification of the last expression I wrote. So yeah, just plug in intsy-tintsy $h$.
You mean "when t= 5".
Would it start like this?
No, the "difference quotient" is $\frac{f(x+h)- f(x)}{h}$. You forgot the "f"! Also, you have the "a" where you should have "h"
18999(0.93)^(5+h) - 18999(0.93)^5 / h <<<not sure about this
13217^h - 13217 / h <<<<<not sure about this
No, the first line above is correct but the second line is not. You cannot write $A(B^{5+h})= (AB^5)^h$- that's just not true. What you want is $A(B^{5+h})= A(B^5)\cdot B^h$.
Here, A= 18999 and B= .93, so $AB^5= (18999)(0.6956883693)= 13217.3833283307$, so that
$\frac{13217.3833283307(.93)^h- 13217.3833283307}{h}= 13217.3833283307\frac{(.93)^h- 1}{h}$.
But you are not finished. The "instantaneous rate of change" is the limit as h goes to 0. Assuming that limit exists, you can get a good approximation by using a value of h very close to 0, as ragnar suggests. It can, in fact, be shown that the limit of $\frac{(.93)^h- 1}{h}$ does exist and is exactly $\ln(.93)$, so the instantaneous rate is $13217.3833283307\ \ln(.93)$.
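To see the numbers converge, here is a short Python sketch (an illustration added here, not from the thread) of the difference quotient for shrinking h, compared against the closed-form rate:

```python
import math

def f(t):
    return 18999 * 0.93 ** t

def diff_quotient(t, h):
    # the (f(t+h) - f(t)) / h quotient discussed above
    return (f(t + h) - f(t)) / h

# closed-form instantaneous rate at t = 5: f(5) * ln(0.93)
exact = f(5) * math.log(0.93)
for h in (0.1, 0.001, 0.000001):
    print(h, diff_quotient(5, h))
print("exact:", exact)
```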
2.8 The relation with perturbative Chern-Simons theory
Summary. Our discussion so far implies that if one could set up a well-behaved perturbative Chern-Simons theory (synonymously, a well-behaved theory of configuration space integrals), then the
invariant of the tetrahedron
Dror Bar-Natan 2001-07-23
Stumper Answer
You can cut a pie into 56 pieces with ten straight cuts if you're careful. The first cut divides the whole pie into two. The second cut across the first adds two more pieces. The third cut across
both lines will add three more pieces. So the nth cut will cross n-1 lines and add n new pieces. After ten cuts, the sum is 1 + 2 + 3 + ... + 10 plus one more for the whole pie we started with.
For n cuts, that's [n (n+1) / 2] + 1. Keep reading for my best attempt at drawing all ten slices, as well as the secret Betsy Ross method of making a star with a single cut!
A mandala of ten slices and 56 pieces!
Can you color it with just four colors?
Several DMS students figured the answer once they noticed the rule and the pattern. You can only get the maximum number of cuts if you follow the rule that every new cut must cross every other
cut and there can be no intersections of more than two lines at once. Graybear sent this clear explanation of the pattern:
(# of cuts, max # of pieces):
(0,1); (1,2); (2,4); (3,7); (4,11); (5,16); (6,22); (7,29); (8,37); (9,46);(10,56); ...
(n, n^2/2 + n/2 + 1).
Reason: each successive cut 'n' can cross 'n-1' previous cuts; therefore it slices through 'n' separate pieces and creates 'n' more. I.e. the first cut creates one more piece; the second cut
creates two more pieces; the third cut creates three more pieces; etc. The general formula is similar to Gauss's famous equation:
Sum of # (1-n) = n(n+1)/2
except we have to add 1 for the original piece, so it becomes:
n(n+1)/2 + 1 or n^2/2 + n/2 + 1
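Graybear's closed form is easy to sanity-check in a few lines of Python (a sketch added here for illustration):

```python
def max_pieces(n):
    # maximum pieces of a flat pie after n straight cuts
    return n * (n + 1) // 2 + 1

print([max_pieces(n) for n in range(11)])
```

The output reproduces the (# of cuts, max # of pieces) sequence quoted above, ending at 56 pieces for ten cuts.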
Drawing the ten cuts was more difficult, but I found a pattern there too. Start with two lines crossing and cross them both with a third line to make the classic anarchy symbol (left). Then
replace that line with two new "butterfly" lines that cross in the center below the others for a total of four lines (right). Move the intersection up if you can to make way for later lines.
3 cuts, 4+3 = 7 pieces 4 cuts, 7+4 = 11 pieces
Now do it again. Cross all the lines with a single cut (left), and replace that line with two more that cross in the center below the others (right).
5 cuts, 11+5 = 16 pieces 6 cuts, 16+6 = 22 pieces
With some fiddling, it should be possible to show any number of cuts, though the pieces get pretty small. I'm sure it's not possible to get equal areas, but it is probably possible to get more
equal areas than I managed to. I decided to quit while I was ahead! (And where do equal areas fail? It is possible with two cuts of course, and it looks possible with 3 cuts into 7 equal pieces.
But it's hard to get even visible pieces with ten cuts!)
Graybear corrected me on the minimum number of pieces, d'oh!
I get a minimum of 4 pieces by not crossing any previous cuts (picture a circumscribed triangle). The minimum number of pieces after 'n' cuts is 'n+1' pieces. Is every in-between value always
possible? Yes, by adjusting how many previous cuts you choose to intersect.
Graybear also figured out a way to visualize slicing a solid cheese ball into pieces. This is very hard to visualize, but the math is similar:
I DEFINITELY agree with both parts of the last sentence, but I finally figured out how to visualize it (without paper, I might add) and the math is very similar, actually building on the
first part of the problem. Imagine, instead of a pie, you have a sphere. After the first cut, picture a cross-section of the sphere that looks like the pie after one cut showing the two
pieces. Your section cuts through both pieces creating two more for a total of four. Imagine the next cross-section that looks like the pie after two cuts showing four pieces. This section
then cuts through all four creating four more for a total of eight. Adjust your next cross-section so that the three previous cutting planes can appear as the pie after three cuts so you can
create seven more pieces. Continue on so the results become: (0,1); (1,2); (2,4); (3,8); (4,15); (5,26); (6,42); (7,64); (8,93); (9,130); (10,176); (n, n^3/6 + 5n/6 + 1).
That general answer can also be written as
(n-1)n(n+1)/6 + n + 1 or n^3/6 + 5n/6 + 1
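The three-dimensional "cheese ball" formula checks out the same way — a small sketch reproducing Graybear's sequence:

```python
def max_3d_pieces(n):
    # maximum pieces of a solid ball after n planar cuts
    return (n ** 3 + 5 * n) // 6 + 1

print([max_3d_pieces(n) for n in range(11)])
```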
Thanks Graybear, I had that answer, but I couldn't visualize it. I'm sure crescent moons and bananas and stars produce similar polynomial solutions, but those are stumpers for another day!
I almost forgot my other stumper. The story is that George Washington's original sketch for the American flag had 6-pointed stars. But seamstress Betsy Ross preferred a 5-pointed star. (Guess
why!) When the committee protested that it was too hard to make that shape, she took a piece of paper, quickly folded it, and produced a perfect five-pointed star with a single snip. The
committee was so impressed that they agreed with her suggestion. Here's how she did it:
Fold a circle of paper in half and divide it into five equal pieces as shown by the blue lines. Fold along the lines as shown in the middle picture, and cut along the red dotted line. By changing
the angle of the cut, you can make different shaped stars. There are step-by-step instructions for making the folds at the Betsy Ross Homepage. (But good luck explaining it to a group of 66
Here are some links for further research:
□ These are two classic problems (#161 and 164) from puzzle master H.E Dudeney's Amusements in Mathematics (Dover, 1958). The other puzzle master Sam Lloyd posed some variations (#101 and 112)
in Mathematical Puzzles of Sam Lloyd (Dover, 1959)
□ W.W. Sawyer uses the pie puzzle as an exercize (without answer) in his book The Search for Pattern (Penguin, 1970, unfortunately out of print). Sawyer explains the easy way for finding a
quadratic formula for a number series by considering successive differences:
n: 0 1 2 3 4 5 6 7
f(n): 1 2 4 7 11 16 22 29
dif1: 1 2 3 4 5 6 7 (difference between successive f(n))
dif2: 1 1 1 1 1 1 (difference between successive dif1)
The second differences are all equal, so we know it's quadratic. Look at the first numbers (in bold):
☆ dif2 is always 1, so take 1/2 of that for the x^2 coefficient.
☆ dif1 minus 1/2 of dif2 is 1 - 1/2 = 1/2. That's the x coefficient.
☆ f(n) for n=0 is 1, and that's the constant.
Therefore the equation is 1/2 n^2 + 1/2 n +1, which is the answer.
This method always works when the second difference is constant. The method can be generalized for higher powers (and higher differences), though the details get tricky. I'm surprised I can't
find a good treatment on the Web. A cubic example is discussed here.
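Sawyer's successive-differences recipe can be automated; this sketch recovers the coefficients from the table above:

```python
def fit_quadratic(values):
    # values = f(0), f(1), f(2), ... with constant second differences
    d1 = [b - a for a, b in zip(values, values[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    a = d2[0] / 2           # x^2 coefficient: half the second difference
    b = d1[0] - d2[0] / 2   # x coefficient: first dif1 minus half of dif2
    c = values[0]           # constant: f(0)
    return a, b, c

print(fit_quadratic([1, 2, 4, 7, 11, 16, 22, 29]))
```

Running it on the pie-piece sequence gives (0.5, 0.5, 1), i.e. the 1/2 n^2 + 1/2 n + 1 answer.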
□ David Molnar has a page on Cutting Fruit with even more cut-up questions. There's also a classroom worksheet if you need it.
□ You can also cut-up words and pictures and music into interesting travesties of the original. Post-modern art and hip-hop music are based on remixing other sources. William S. Burroughs used
his cut-up method to reassemble works by Shakespeare and Rimbaud. That sounds like "conceptual art", but he once did an entire cut-up Time magazine that I actually saw for sale at City Lights
Bookstore in San Francisco back in the 60s. (I didn't buy it, d'oh! It's a collector's item now.) If you cut a text into single letters and reassemble it, you can make your own universal
library! See my stumper The Universal Library of Stumpers (23 March 2001) and the discussion by Quine. There's a cut-up/travesty maker you can play with on the Web.
A Voltage Of 220 V Is Impressed On A 150 K Ohm ... | Chegg.com
Image text transcribed for accessibility: A voltage of 220 V is impressed on a 150 k Ohm resistor. The impedance of the voltage source is 10k Ohm . Two meters are used to measure the voltage across
the 150 k Ohm resistor: a volt-Ohm meter with an internal impedance of 1000 Ohm/V and an EVM with an impedance of 11 M Ohm. Calculate the voltage indicated by each of these devices.
Electrical Engineering
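The page shows only the question. A sketch of the meter-loading calculation follows; note that the volt-Ohm meter's resistance depends on which range is selected, which the problem text here does not state — a 250 V range is assumed below purely for illustration:

```python
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def divider(v_src, r_src, r_load):
    # voltage across r_load driven by v_src through source impedance r_src
    return v_src * r_load / (r_src + r_load)

V, RS, R = 220.0, 10e3, 150e3

v_ideal = divider(V, RS, R)                        # no meter attached
v_evm   = divider(V, RS, parallel(R, 11e6))        # EVM: 11 Mohm input
v_vom   = divider(V, RS, parallel(R, 1000 * 250))  # VOM: 1000 ohm/V, assumed 250 V range
print(round(v_ideal, 2), round(v_evm, 2), round(v_vom, 2))
```

The high-impedance EVM barely loads the circuit, while the VOM pulls the reading noticeably below the unloaded value.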
Visual Basic Built-In Functions: Conversions
You may recall that when studying data types, we saw that each had a corresponding function used to convert a string value or an expression to that type. As a reminder, the general syntax of the
conversion functions is:
ReturnType = FunctionName(Expression)
The Expression could be of any kind. For example, it could be a string or expression that would produce a value such as the result of a calculation. The conversion function would take such a value,
string, or expression and attempt to convert it. If the conversion is successful, the function would return a new value that is of the type specified by the ReturnType in our syntax.
The conversion functions are as follows:
│ Function │ │
│ Name │Return Type│ Description │
│ CBool │ Boolean │Converts an expression into a Boolean value │
│ CByte │ Byte │Converts an expression into Byte number │
│ CDbl │ Double │Converts an expression into a floating-point number with double precision │
│ CDec │ Decimal │Converts an expression into a decimal number │
│ CInt │ Integer │Converts an expression into an integer (natural) number │
│ CLng │ Long │Converts an expression into a long integer (a large natural) number │
│ CObj │ Object │Converts an expression into an Object type │
│CSByte │ SByte │Converts an expression into a signed byte │
│CShort │ Short │Converts an expression into a short integer │
│ CSng │ Single │Converts an expression into a floating-point number with single precision │
│ CUInt │ UInt │Converts an expression into an unsigned integer │
│ CULng │ ULong │Converts an expression into an unsigned long integer │
│CUShort│ UShort │Converts an expression into an unsigned short integer │
Conversion functions allow you to convert a known value to a another type. Besides these functions, the Visual Basic language provides a function named CType. Its syntax is:
CType(expression, typename)
As you can see, the CType() function takes two arguments. The first argument is the expression or the value that you want to convert. An example could be name of a variable or a calculation:
CType(250.48 * 14.05, ...)
The second argument is the type of value you want to convert the first argument to. From what have learned so far, this second argument can be one of the data types we reviewed in Lesson 3. Here is
an example:
CType(250.48 * 14.05, Single)
If you choose one of the Visual Basic language's data types, the expression produced by the first argument must be able to produce a value that is conform to the type of the second argument:
• The conversion from the first argument to the type of the second argument must be possible: the value produced by the first argument must be convertible to the second argument's type. For example, if the first argument is a calculation, the second argument must be a number-based data type. In the same way, you cannot convert a date to a number-based type
• If the first argument is a number or the result of a calculation, its resulting value must fall within the range of values of the second argument's type. Here is an example:
Public Module Exercise
Public Function Main() As Integer
MsgBox(CType(250.48 * 14.05, Single))
Return 0
End Function
End Module
This would produce:
• If the first argument is a number or the result of a calculation that produces an integer or a floating-point number, its resulting value must be convertible to an integer or a floating point
number up to the range of values of the second argument. Here is an example:
Public Module Exercise
Public Function Main() As Integer
MsgBox(CType(7942.225 * 202.46, UInteger))
Return 0
End Function
End Module
This would produce:
• If the first argument is a number or the result of a calculation that produces an integer or a floating-point number, and the second argument is a number-based data type whose range cannot hold the resulting value of the first argument, the conversion would not be allowed (the conversion will fail):
After the CType() function has performed its conversion, it returns a value that is the same category as the second argument. For example, you can call a CType() function that converts an expression
to a long integer. Here is an example:
Public Module Exercise
Public Function Main() As Integer
Dim Number As Long
Number = CType(7942.225 * 202.46, Long)
Return 0
End Function
End Module
The function can also return a different type, as long as its type can hold the value produced by the expression. Here are two examples:
Public Module Exercise
Public Function Main() As Integer
Dim Number As UInteger
Number = CType(7942.225 * 202.46, Long)
Return 0
End Function
End Module
Public Module Exercise
Public Function Main() As Integer
Dim Number As Single
Number = CType(7942.225 * 202.46, Long)
Return 0
End Function
End Module
If you try storing the returned value into a variable that cannot hold it, you would receive an error:
How to Calculate a Weighted Average Position in Excel & Tableau
This post is just as much for me as it is for you. I can’t tell you how many times I’ve bugged my coworkers asking them
how to calculate a weighted average position
when analyzing PPC reports with Excel or Tableau. I guess that’s what happens when you let a former art major do math, but weighted average position is just one of those must-have PPC tools.
You’ve probably run into a post or two over the last couple of years about using pivot tables to analyze your PPC data. Some of those posts may have been kind enough to include instructions for how to calculate a weighted average position. This is important because once you start aggregating your data in a pivot table you can’t average your Average Position or you will get a skewed result.
How to Calculate a Weighted Average Position in Excel
Regardless of the report you need to analyze, you will add an “AvgPos X Impressions” column. This column will be used to create a ‘Calculated Formula’ in your pivot table.
Once you’ve got your pivot table going, you will need to create calculated metric by navigating here: PivotTable Tools > Options > Formulas > Calculated Field.
Now name your new 'Weighted Avg Pos' metric and use the following formula:
='AvgPos*Impressions'/Impressions
Now you will have an accurate representation of average position regardless of the level of detail in your pivot table.
How to Calculate a Weighted Average Position in Tableau
For those of you fortunate enough to have Tableau for analyzing your PPC data, calculating a weighted average position is just as easy as in Excel. Navigate to the ‘Calculated Field’ tool by
right-clicking in the ‘Measures’ area and clicking ‘Create Calculated Field…”
Again name your new metric ‘Weighted Avg Pos’ and use the following Tableau formula:
sum([Avg Position]*[Impressions])/sum([Impressions])
There, I’ve written it down. Now I can quit bothering my co-workers and just reference this post in the future. Those of you with better memories may not need this reference, but for the rest of
you feel free to bookmark this page.
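For anyone without Excel or Tableau handy, the same weighted average is a few lines of plain Python — the keyword rows below are made-up illustrative numbers, not real campaign data:

```python
# (avg_position, impressions) per keyword -- illustrative data only
rows = [(1.2, 1000), (3.5, 200), (6.0, 50)]

naive = sum(pos for pos, _ in rows) / len(rows)
weighted = sum(pos * imp for pos, imp in rows) / sum(imp for _, imp in rows)
print(round(naive, 2), round(weighted, 2))
```

Note how the naive average overstates the position badly, because the low-impression keywords count just as much as the high-impression one.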
4 Responses to “How to Calculate a Weighted Average Position in Excel & Tableau”
1. Jc April 25, 2011 at 1:36 pm #
“I guess that’s what happens when you let a former art major do math…”
I’m sorry lol. Have you worked in display?
2. searchengineman August 16, 2012 at 3:53 pm #
Thank you very much, I was trying to figure out how Google uses its formula
to calculate (Average Position) – PS: You’re not the only former Art Major who
landed in PPC! – The advantage is I can get beyond the 1st decimal..which is a bonus.
3. Aji October 8, 2012 at 7:23 am #
man, thanks for this tip.. i’ve been looking around for the formula and found your blog. BIG thanks
4. Kevin James McAuley December 10, 2012 at 5:14 am #
Hey Chad,
Thanks for this helpful tutorial. Even though your pivot table says “Sum of WeightedAvgPos” once you have followed these steps it is all good?
Incremental Least-Squares Temporal Difference Learning
Michael Bowling, University of Alberta, Edmonton
Online policy evaluation with linear function approximation is a commonly arising problem in reinforcement learning. Temporal difference (TD) learning is the traditional approach, which is both
guaranteed to converge and only requires computation that is linear in the number of features per time step. However, it may necessitate many trajectory samples to achieve a good approximation.
Least-squares TD learning (LSTD) techniques are a recent alternative that makes more efficient use of data trajectories, but requires computation that is quadratic in the number of features per time
step. This computational cost prevents the least-squares approach from being practical with a large feature space. This talk describes a new variant of TD learning, called incremental least-squares
TD learning, or iLSTD. It uses trajectory data more efficiently than TD, and with certain function approximators only requires linear, rather than quadratic, computation per time step. In addition to
demonstrating the computational and data advantages of iLSTD, this talk will also show how iLSTD can be efficiently extended with eligibility traces and present sufficient conditions for convergence.
This is joint work with Alborz Geramifard, Martin Zinkevich, and Rich Sutton.
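For readers unfamiliar with the baseline, the "linear computation per time step" of plain TD refers to updates of the following shape — a minimal TD(0) sketch with linear function approximation (the feature vectors and step size here are illustrative, not from the talk):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def td0_update(w, phi_s, phi_next, reward, alpha=0.1, gamma=0.9):
    # one TD(0) step with linear value approximation: O(#features) work
    delta = reward + gamma * dot(phi_next, w) - dot(phi_s, w)
    return [wi + alpha * delta * pi for wi, pi in zip(w, phi_s)]

w = [0.0, 0.0, 0.0]
w = td0_update(w, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], reward=1.0)
print(w)
```

LSTD instead accumulates sufficient statistics over the features, which is where the quadratic per-step cost comes from.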
Brief Bio
Michael Bowling is an assistant professor at the University of Alberta in Edmonton, Canada. He received his Ph.D. in 2003 from Carnegie Mellon University, where his dissertation focused on multiagent
learning. He is now a recovering robot soccer competitor, investigating machine learning algorithms for robotics, computer games, and poker.
Wolfram Demonstrations Project
Efficient Total Production through Specialization
This Demonstration presents an economic model of two producers A and B that produce two goods X and Y. It is shown how specialization leads to an efficient total production. A and B can benefit from
the increased total production through trade.
The production possibilities frontiers of the two producers A and B are assumed to be linear. Producer A has a comparative advantage in the production of X. Producer B has a comparative advantage in
the production of Y.
You can use the sliders to determine the production of A and B without specialization. Without specialization, the total production of A and B is not efficient, that is, the total production of A
and B is not on the joint production possibilities frontier of A and B.
It is shown that specialization can increase the total production of A and B for both goods X and Y. An efficient total production can only be achieved if either A or B specializes in producing the
good in which they have a comparative advantage.
In the case of specialization, A can produce 20 units of X and B can produce 40 units of Y. Whether A or B should specialize depends on the ratio of the total units of Y to the total units of X that A and B produce together.
SPSSX-L archives -- December 1997 (#55), LISTSERV at the University of Georgia
Date: Thu, 4 Dec 1997 21:01:15 GMT
Reply-To: Richard F Ulrich <wpilib+@PITT.EDU>
Sender: "SPSSX(r) Discussion" <SPSSX-L@UGA.CC.UGA.EDU>
From: Richard F Ulrich <wpilib+@PITT.EDU>
Organization: University of Pittsburgh
Subject: Re: Noncentrality & Power
I think it was about 6 months ago that this NetGroup had a discussion about the MANOVA power statements. I can make comments by assuming that GLM in 7.5 is doing what MANOVA did in 6.1. You might
look in DejaNews for more detail. I hope David Nichols or someone will correct me if I don't repeat the earlier conclusions, or if GLM does different from MANOVA.
Burton L. Alperson (balpers@calstatela.edu) wrote: : Version 7.5 automatically includes "Noncent. Parameter" and "Observed Power : on GLM output."
: Why should I care about these values for data that have already been : collected and analyzed?
: According to the SPSS Advanced Stat manual, "The power gives the : probability that the F test will[sic!] detect the differences between : groups equal to those implied by the sample differences."
Since I already : have the p value of F in the output, what do I gain by knowing "Noncent. : Parameter" and "Observed Power?"
: What am I missing?
- "Observed Effect" is what is tested. It includes an underlying effect, and a contribution of bias which is bigger with smaller samples, or with bigger designs.
"Underlying Effect" is what you usually do a power analysis on, so what SPSS provides is unusual, and needs careful attention, to figure how it does make sense, since it is not obvious.
Since R-squared is always positive, and it is bigger (by chance) with more variables, consider the logical equation -
Observed= Underlying + Bias
- these can be regarded, approximately, as simply adding terms of variance, or a version of R-squared. For simple regression, the Bias or expected R^2 is p/(n-1) where p is the number of variables
and n is the sample size.
Let us say an *Observed* regression has R^2 of .30, where the Bias is the whole Observed effect. Then the test statistic is not at all significant, because it is just chance. But for the *same*
sample size and design, what would the POWER if the *Underlying* effect were .30?
If the Underlying effect were that big, then the projected, hypothetical outcome would be sum of the Underlying and the Bias - properly combined, there is an R^2 of .5 or .6, and it would have
notably better power than the experiment that is being reported on, with Underlying=0.
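The arithmetic behind that ".5 or .6" can be sketched in a few lines; the combination rule used here is only a rough back-of-the-envelope approximation, not a formula from the original post:

```python
def chance_r2(p, n):
    # expected R^2 from regressing n cases on p predictors of pure noise
    return p / (n - 1)

def expected_observed_r2(underlying, p, n):
    # rough approximation: chance bias eats into the unexplained variance
    return underlying + (1 - underlying) * chance_r2(p, n)

# e.g. 15 predictors and 51 cases give a chance R^2 of .30 all by themselves
print(round(chance_r2(15, 51), 2))
print(round(expected_observed_r2(0.30, 15, 51), 2))
```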
- That may seem silly, but that is how it works. I suggest that you ignore the "power" section of the computer output, unless you are sure you understand everything about what I am calling "Bias",
the capitalization on chance owing to degrees of freedom of design. For simple designs, and large n, you might not be enormously wrong, if you try to guess (otherwise) what it is that the printout
should be telling you. - I was misled, and misled other people on my early, occasional uses of MANOVA, because I made the mistake of assuming that the power-statement should be something useful and
intuitively meaningful; but it is not.
I recommend the very late chapters of the 1989 edition of Cohen's book on power analysis, for more information on estimating multivariate power. (Actually, even more strongly, I recommend that you
reduce your problem to something simple enough that you do not need to read up on "multivariate" considerations.)
Rich Ulrich, biostatistician wpilib+@pitt.edu http://www.pitt.edu/~wpilib/index.html Univ. of Pittsburgh
|
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind9712&L=spssx-l&D=0&P=5798&F=P","timestamp":"2014-04-21T00:01:18Z","content_type":null,"content_length":"12448","record_id":"<urn:uuid:091e956b-2bf2-467a-b826-3b1fffbdb73d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex Notation - Electrical Circuit
1. The problem statement, all variables and given/known data
1 (a) using complex notation, find an expression for the output voltage, v0
1 (b) using complex notation, find an expression for the input current, ig, and hence determine the phase angle of ig relative to vg
2. Relevant equations
3. The attempt at a solution
See attached two attempts, in jpgs. I've gotten myself very confused!! If someone could advise that would be great.
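Since the actual circuit is only in the attached images, here is a generic worked sketch of the complex-notation method, assuming a simple series R-C divider driven by vg with v0 taken across the capacitor. All component values are made up; the point is the mechanics of impedances and phase:

```python
import cmath
import math

# Hypothetical circuit: series R-C divider driven by vg, v0 across C.
# All values below are assumptions for illustration only.
f = 1000.0                      # assumed source frequency, Hz
w = 2 * math.pi * f             # angular frequency, rad/s
R = 1e3                         # assumed resistance, ohms
C = 100e-9                      # assumed capacitance, farads
Vg = 5.0                        # assumed source amplitude (phase reference), volts

Z_R = R                         # impedance of the resistor
Z_C = 1 / (1j * w * C)          # impedance of the capacitor, 1/(jwC)

# Voltage divider: v0 = vg * Z_C / (Z_R + Z_C)
V0 = Vg * Z_C / (Z_R + Z_C)
# Input current: ig = vg / (total series impedance)
Ig = Vg / (Z_R + Z_C)

# Phase of ig relative to vg (vg is the phase reference, angle 0).
# Positive here means the current leads the voltage (capacitive circuit).
phase_deg = math.degrees(cmath.phase(Ig))
print(f"|v0| = {abs(V0):.3f} V, ig leads vg by {phase_deg:.1f} degrees")
```

The same pattern (replace each element by its complex impedance, then apply the usual DC circuit laws) works for whatever network is in the attachments.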
|
{"url":"http://www.physicsforums.com/showthread.php?p=3902309","timestamp":"2014-04-21T04:42:40Z","content_type":null,"content_length":"26449","record_id":"<urn:uuid:bf7013ce-3a4f-4156-b030-0a17b700c305>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Single Slit Diffraction Introductory Physics Question
1. The problem statement, all variables and given/known data
You have been asked to measure the width of a slit in a piece of paper. You mount the paper 80.0 centimeters from a screen and illuminate it from behind with laser light of wavelength 633 nanometers
(in air). You mark two of the intensity minima as shown in the figure, and measure the distance between them to be 17.9 millimeters.
If the entire apparatus were submerged in water, would the width of the central peak change?
a.The width would increase.
b.The width would decrease.
c.The width would not change.
2. Relevant equations
asin(theta) = m(lambda)
3. The attempt at a solution
Well for the first part of this question I found the width of this slit in air to be 170 micrometers.
I know this is a conceptual question, but I need help understanding exactly what happens. I know the index of refraction changes from 1 to 1.33.
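Here is a sketch of both parts. The figure isn't reproduced here, so which minima were marked is an assumption: taking them to be the third-order (m = 3) minima on either side of center reproduces the quoted 170 micrometers:

```python
import math

# Assumption (to match the quoted 170 um answer): the two marked minima
# are the m = 3 minima on either side of the central axis.
wavelength = 633e-9          # laser wavelength in air, m
L = 0.800                    # slit-to-screen distance, m
separation = 17.9e-3         # measured distance between the two marked minima, m
m = 3                        # assumed order of the marked minima

y = separation / 2           # distance from central axis to one minimum
# Small-angle form of a*sin(theta) = m*lambda:  a * (y/L) = m * lambda
a = m * wavelength * L / y
print(f"slit width a = {a * 1e6:.0f} micrometers")

# Conceptual part: in water the wavelength shrinks to lambda/n (n = 1.33),
# so every minimum moves inward and the central peak narrows -> answer (b).
n_water = 1.33
y_water = m * (wavelength / n_water) * L / a
print(f"minima separation in water = {2 * y_water * 1e3:.1f} mm")
```

The index of refraction enters only through the wavelength: the slit width a is fixed, so a smaller lambda in a sin(theta) = m lambda pushes every angle, and hence the central peak width, down by the factor 1/n.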
|
{"url":"http://www.allquests.com/question/3585779/Single-Slit-Diffraction.html","timestamp":"2014-04-19T19:41:57Z","content_type":null,"content_length":"12033","record_id":"<urn:uuid:e0da9396-aa71-4282-957c-2fdc31a6145a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
South Plainfield Math Tutor
Find a South Plainfield Math Tutor
...My unique experience includes working with students in the classroom, as a private tutor, and as a homeschooling parent for 8 years. I like being creative and fun, while helping students to
feel that they can do it!I have supervised, tested, and helped students in grades K-6 in all subject areas...
18 Subjects: including trigonometry, algebra 1, prealgebra, reading
...Danielle C. Currently I am the After Care Director (Preschool – 8th Grade) where I have restructured and enhanced the overall program by setting goals, developing a curriculum, organized age
appropriate activities and introduced STEM projects. I lead the program with the assistance of aides and...
33 Subjects: including ACT Math, SAT math, English, reading
...I have been tutoring and teaching since I was in high school myself because I love doing it! During and after earning a Master of Science in mathematics, I spent 8 years teaching at the
post-secondary level in universities and community colleges. I have also worked with middle and high school students.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I take pride in helping my students have academic success! To give some background on myself, I have a Bachelor of Science degree in Computer Science with a minor in Mathematics. I also hold a
Master of Science degree in Information Systems Management.
5 Subjects: including algebra 2, Microsoft Word, prealgebra, algebra 1
...Phonics is a core component of reading. While studying at college, I took two extensive course on reading and phonics as part of my coursework. In addition, my recent prior tutoring experience
at a tutoring facility required that I know phonics in order to help children break down words and manipulate phonetic sounds.
18 Subjects: including algebra 1, reading, English, public speaking
|
{"url":"http://www.purplemath.com/South_Plainfield_Math_tutors.php","timestamp":"2014-04-19T15:21:49Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:159f0933-4270-41af-bdd9-9dccfd36b1e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roslindale SAT Math Tutor
I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students
understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well.
16 Subjects: including SAT math, French, calculus, algebra 1
...My work as a high school teacher in particular has given me a rare opportunity to develop a pedagogy suited for struggling math and physics students, for I taught an intensive ninth grade
physics course (typically physics isn't taught until eleventh grade) that doubled as a study-skills course. ...
9 Subjects: including SAT math, physics, calculus, geometry
I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a
very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including SAT math, calculus, statistics, geometry
...This licensure deems me qualified to teach several courses, including Algebra II. I am licensed to teach mathematics in Massachusetts to grades 8-12. Under this licensure I am qualified to
teach man high-school level mathematics topics including geometry.
9 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I have been home-schooling for nine years now. My two middle children, a senior and sophomore in high school, are currently attending MassBay Community college full-time. I believe that my
approach to home-schooling, and thus to tutoring is unique.
25 Subjects: including SAT math, reading, ESL/ESOL, English
|
{"url":"http://www.purplemath.com/Roslindale_SAT_Math_tutors.php","timestamp":"2014-04-18T00:33:05Z","content_type":null,"content_length":"23952","record_id":"<urn:uuid:3cab4866-9ce4-43c3-8741-1e81556a61e2>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kenosha ACT Tutor
Find a Kenosha ACT Tutor
...I also have tutored all levels from elementary through advance and do some part-time work at the college. In the summer, I am employed by ETS as a grader for the AP Statistics exams and also
as a grader for the Praxis exam, the exam for math majors. I know that math can be challenging for many students, but I can help you break through the walls and get down to the understanding.
26 Subjects: including ACT Math, calculus, geometry, statistics
...I am a certified mathematics teaching professional currently teaching high school mathematics. I love working with young people and am extremely gratified when I can make a difference. I have
15 years of mathematics teaching under my belt and I can teach everything from 6th grade math to AP Calculus.
11 Subjects: including ACT Math, calculus, statistics, geometry
...I would work with your daughter to see what topics would need to be stressed, and work in some repetition of computations and formulas that would apply. I would try to find some good
application problems that would help reinforce the topics as well. I have taught Math for 25 years now(20 at the...
12 Subjects: including ACT Math, calculus, algebra 1, algebra 2
...I recently took the GRE and passed the quantitative section with flying colors (and I even 'tutored' my husband for the math portion of his GMAT) so I can assure you that I am well prepared to
teach math subjects as well. Based on my background and GRE scores, I have been accepted into a PhD pro...
26 Subjects: including ACT Math, chemistry, geometry, biology
...I took organic chemistry 1 and 2 at the college level, as well as biochemistry. I know the material very well after tutoring it for the past 4 years at UW-Milwaukee. I have a great deal of
knowledge from what I learned and then further have reviewed through tutoring as well as knowing the material for the MCAT.
46 Subjects: including ACT Math, chemistry, algebra 1, statistics
|
{"url":"http://www.purplemath.com/kenosha_act_tutors.php","timestamp":"2014-04-20T08:48:01Z","content_type":null,"content_length":"23749","record_id":"<urn:uuid:6eaec814-f15d-4c94-b3a4-9b141f8e21ab>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comments on The Geomblog: The proof of Fermat's theorem is a "Rube Goldberg contraption"

Avijit Lahiri (2007-07-11):
Opinion 72: The Next Term in the Sequence: [Dog, Human, Mathematician, ...] is "Computer-Programmer for Computer-Generated Mathematics". I've carefully gone through the article by Dr. Doron Zeilberger. It's really an interesting thought made with remarkable insights. I agree with his observations - "...one can do potentially much bigger and better things, and the computer's dA is much larger, so we can (potentially) reach a mountain-top much faster, and conquer new mountain-tops where no humans will ever tread with their naked brains. So this activity of computer-generated mathematics is the future. Unfortunately, many human mathematicians still don't realize the importance of this activity, and dismiss it as 'just a computer program' and 'no new mathematics'." I would request Dr. Jeff Erickson to refrain from making casual remarks about spelling mistakes of his name and evaluate this article with an open and true spirit of a scientific mind. Regards, Avijit Lahiri

Jeff Erickson (2006-06-16):
Feel free to call me whatever you like, Dr. Zeilberger, but I think Ireckson would be a better response to my misspelling, don't you?

Doron Zeilberger (2006-06-15):
Reply to Jeff Erickson:
> By the end of the first paragraph, I guessed correctly that Zielberger is reacting to one of his own papers getting rejected.
If you call me Zielberger, I'll call you Eirckson. The point of my Opinion 72 was not to react to the stupid rejection by a stupid journal, but to make a general point. The rejection was just a symptom of the stupidity of some humans, and your feedback is another symptom. I don't have time to defend my rejected paper, but it is probably more significant than any of your accepted papers (that you wrote yourself without a million coauthors).
> The point of mathematics is not to explain things to computers. That's not even the point of computer science!
Says Who? A HCP (Human Chauvinist Pig) like you!

Anonymous (2006-06-13):
"Zeilberger makes many entertaining (if not altogether well-founded) points in his notes:" After reading a few, I think calling them "notes" is too kind.

Anonymous, "The Pig" (2006-06-12):
Ernie writes: "A few WEEKS?! So he *admits* that the paper describes trivial work! And then he has the gall to complain when his paper is rejected?" To be fair, Jeff, does his paper have any redeeming merit? I've written a paper or two where I made some trivial implementation and discovered a structure or disproved a conjecture. I once spent a WHOLE WEEKEND writing some code to disprove a conjecture about Ramsey numbers on a 6-dimensional hypercube or something (can't remember exactly) after my buddy told me about the problem while we were drinking on a ski lift, only to discover that another guy just published the results in Discrete and Computational Geometry that same month!!! It looks like Zoran did more work than that. But if it's crap, it's crap. I've had too much beer at the moment to verify this, but it seems he gives a method for generating a random domino tiling of a square. That's pretty cool. It would be even cooler if it were faster than the method in this paper: "Markov chain algorithms for planar lattice structures", Michael Luby, Dana Randall, and Alistair Sinclair, SIAM Journal on Computing, 31: 167-192 (2001). http://www.math.gatech.edu/~randall/r-lrs2001.pdf

Suresh (2006-06-12):
Zeilberger makes many entertaining (if not altogether well-founded) points in his notes. I admit though that he appears to have a very large bee in his bonnet about the value of computer-generated proofs. As a computer scientist, and one who is fully aware of the bugs that creep into code, I can't quite understand what he is so enamored of.

D. Eppstein (2006-06-12):
Re: "Computer-generated proofs are certainly a useful tool in that pursuit, but it is patently ridiculous to assume (as Zielberger advocates) that they are the only such tool, even within the realm of mathematics."
I think Z's point is that strong AI will eventually make such advances in mathematics that any human progress we might make will be rendered trivial. But what I fail to see is how programming efforts that will similarly be rendered trivial are any more of an advance than the human proofs he derides. If he wants to advance the cause of strong AI, he should work on automated reasoning more generally, not ad hoc programs for specialized combinatorial enumeration problems.

Jeff Erickson (2006-06-12):
By the end of the first paragraph, I guessed correctly that Zielberger is reacting to one of his own papers getting rejected. As a sanity check, I looked at the paper, and I can see exactly why it was rejected. Never mind that he's "merely" implementing known enumeration techniques. Never mind the snarky remarks about his program producing "infinitely many...PhD theses" and other researchers being "extremely lucky". No, the answer is right on the first page, where he writes "...so I had to spend a few weeks writing such a program myself." A few WEEKS?! So he *admits* that the paper describes trivial work! And then he has the gall to complain when his paper is rejected? In any case, his basic point is absolutely wrong. The point of mathematics is not to explain things to computers. (That's not even the point of computer science!) The point of mathematics (and computer science) is to extend the boundaries of HUMAN understanding. Computer-generated proofs are certainly a useful tool in that pursuit, but it is patently ridiculous to assume (as Zielberger advocates) that they are the only such tool, even within the realm of mathematics.
{"url":"http://geomblog.blogspot.com/feeds/115014354480542188/comments/default","timestamp":"2014-04-17T18:24:28Z","content_type":null,"content_length":"20544","record_id":"<urn:uuid:9a135c78-a890-4446-ac6f-3cd67e7f9361>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Small survey of Category Theory introductions
Some time ago a friend asked for a good introductory work on Category theory. I never did answer his question to my satisfaction, as the stuff I picked up on the subject was here and there as I
needed it, and I thought there was never any succinct introductory work.
Well, I thought wrongly.
Above link ... links to the seminal summa, available for you if you wish to pursue this delightful area of research into expressivity in mathematics.
Also, of course, there's the working-quantum-physicist's introduction at:
Having a working knowledge of quantum computation is not necessary, probably not even helpful, but a very nice introduction is
Quantum Computation and Quantum Information
by Nielsen and Chuang, if you wish to see the source from where I got to categories and Category Theory.
Dr. Baez's work starts off lightly and playfully, but then gets pretty deep pretty quickly, as he goes into the Groupoid/Topoid theoretical application of Category Theory, but that's to be
understood, as quantists are always concerned about (super-)symmetries, and I, not so much, as I look for the more practical application of Categories in Monoids and the Relational Calculus, but
there it is.
I do, of course, have more advanced works on this topic if you wish to research further, and there's always this blog, where I look at the logical implications of cat theory (heh: 'logical'
'implications' ... Math humor). There is, e.g., an introductory article on monads and their computational application at:
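The computational use of monads mentioned above can be sketched in a few lines even outside Haskell. Here is a minimal "Maybe"-style monad in Python; the names (unit, bind, safe_div) are my own illustrative choices, not taken from the linked article:

```python
# A minimal "Maybe" monad sketch: computations that may fail return None,
# and bind() threads values through while short-circuiting on failure.
def unit(x):
    """Wrap a plain value into the monad (a.k.a. 'return')."""
    return x

def bind(mx, f):
    """Apply f to the wrapped value, short-circuiting on None."""
    return None if mx is None else f(mx)

def safe_div(a, b):
    """A fallible computation: division that fails (None) on zero."""
    return None if b == 0 else a / b

# Chaining fallible steps without any explicit error checks in between:
result = bind(bind(unit(20), lambda x: safe_div(x, 2)),
              lambda x: safe_div(x, 5))
print(result)        # 2.0

failed = bind(unit(10), lambda x: safe_div(x, 0))
print(failed)        # None
```

The categorical content is that bind composes the Kleisli arrows (functions a -> Maybe b) associatively, which is exactly what lets the failure-handling plumbing disappear from the call sites.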
1 comment:
Ingo Blechschmidt said...
I can recommend the following further introductory resources:
Tom Leinster, lecture notes on category theory
Jaap van Oosten, Basic Category Theory
David Spivak, Category theory for scientists
The Catsters, YouTube channel
|
{"url":"http://logicaltypes.blogspot.com/2013/05/small-survey-of-category-theory.html?showComment=1368357929382","timestamp":"2014-04-21T14:40:49Z","content_type":null,"content_length":"85795","record_id":"<urn:uuid:6ef1d30f-617a-4e07-a844-bd58b8823344>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is it real? Physicists propose method to determine if the universe is a simulation
Image credit: Hubble/NASA
(Phys.org)—A common theme of science fiction movies and books is the idea that we're all living in a simulated universe—that nothing is actually real. This is no trivial pursuit: some of the greatest
minds in history, from Plato to Descartes, have pondered the possibility, though none were able to offer proof that such an idea is even possible. Now, a team of physicists working at the
University of Bonn have come up with a possible means for providing us with the evidence we are looking for; namely, a measurable way to show that our universe is indeed simulated. They have written
a paper describing their idea and have uploaded it to the preprint server arXiv.
The team's idea is based on work being done by other scientists who are actively engaged in trying to create simulations of our universe, at least as we understand it. Thus far, such work has shown
that to create a simulation of reality, there has to be a three dimensional framework to represent real world objects and processes. With computerized simulations, it's necessary to create a lattice
to account for the distances between virtual objects and to simulate the progression of time. The German team suggests such a lattice could be created based on quantum chromodynamics—theories that
describe the nuclear forces that bind subatomic particles.
To find evidence that we exist in a simulated world would mean discovering the existence of an underlying lattice construct by finding its end points or edges. In a simulated universe a lattice
would, by its nature, impose a limit on the amount of energy that could be represented by energy particles. This means that if our universe is indeed simulated, there ought to be a means of finding
that limit. In the observable universe there is a way to measure the energy of quantum particles and to calculate their cutoff point as energy is dispersed due to interactions with microwaves and it
could be calculated using current technology. Calculating the cutoff, the researchers suggest, could give credence to the idea that the universe is actually a simulation. Of course, any conclusions
resulting from such work would be limited by the possibility that everything we think we understand about quantum chromodynamics, or simulations for that matter, could be flawed.
More information: Constraints on the Universe as a Numerical Simulation, arXiv:1210.1847 [hep-ph] arxiv.org/abs/1210.1847
Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated
by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that
our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon
g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy
cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking
that reflects the structure of the underlying lattice.
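A back-of-envelope check of the quoted bound (my own arithmetic, not from the paper): an inverse lattice spacing b^(-1) of 10^11 GeV corresponds, via hbar*c, to a spacing of roughly 2 x 10^-27 m, far below any length scale probed so far.

```python
import math

# Convert the quoted inverse lattice spacing bound, b^-1 >~ 10^11 GeV,
# into a length using hbar*c = 197.33 MeV*fm = 0.19733e-15 GeV*m.
hbar_c = 0.19733e-15   # GeV * m
inv_b_GeV = 1e11       # quoted lower bound on b^-1, in GeV

b_meters = hbar_c / inv_b_GeV
print(f"lattice spacing b ~ {b_meters:.2e} m")

# The largest momentum representable on a lattice of spacing b is pi/b,
# so the spacing sets a hard cutoff of order b^-1 (in natural units):
cutoff_GeV = math.pi * inv_b_GeV
print(f"lattice momentum cutoff ~ {cutoff_GeV:.2e} GeV")
```

This is why the highest-energy cosmic rays give the most stringent bound: they come closest to any such cutoff.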
4.6 / 5 (10) Oct 12, 2012
So in the end, it doesn't really matter what they think they prove, because the definition of proof may not be accurate?
What exactly are they suggesting is implied if they do successfully prove we live in a simulation? That is what is the definition of a simulation in this context? That there is no physical matter and
we are akin to a computer program, or that matter exists but it is all created and controlled?
2.9 / 5 (15) Oct 12, 2012
I knew it.
5 / 5 (8) Oct 12, 2012
From first year philosophy, I thought there was no way to prove we weren't just a brain in a box.
But I love that those wily physicists are treading at the limits of our world, strong work! And if this is a simulation, please let me know quickly so I don't have to report to work.
4.2 / 5 (5) Oct 12, 2012
The greatest question of all time! Are we a video game?
2.9 / 5 (9) Oct 12, 2012
and if it is a simulation, doesn't that negate the validity of any data they have used as proof? kind of a catch 22. does this mean we will find that the sun doesn't shine on the blind side to save
on processor time? simulated science?
3.1 / 5 (7) Oct 12, 2012
This is fairly silly. They're assuming that the energy of a particle is actually represented in space-time, when it could just as easily be represented in a non-dimensional coordinate space, using
equal length linkages. Then finding the energy is simply a matter of counting the number of links, and the number of links increases with correspondingly shorter length scales. In other words, there
would be no meaningful limit to the resolution, and the particles could be represented in an effectively infinite resolution framework WHILE using a finite amount of data to describe it.
Note: We should recall that the resolution of a detector is limited by it's own structure. Attempting to find the "pixelation point" of a structure in a linkage space requires the detector to
approach the same length scale. That is obviously not possible when probing length scales below the typical subatomic level.
5 / 5 (7) Oct 12, 2012
What exactly are they suggesting is implied if they do successfully prove we live in a simulation?
Read the arxiv paper linked at the bottom (it's a fun little read). They make some suggestions on the first 5 pages as to what living in a simulation would mean (and also some of the possibilities if
it turns out to be true).
But the paper only deals with the "can we figure out whether it is a simulation under the asumption it's a numerical simulation on a finity (hyper) grid?" - and the answer seems to be "yes we
probably can".
Scientific papers aren't supposed to tell you what you can do with the results. That's speculation and/or engineering.
2.9 / 5 (19) Oct 12, 2012
The universe is a holodeck. Trekkies understand.
1.6 / 5 (5) Oct 12, 2012
Like finding life beyond earth and the implications/consequences/impacts on religion.
Like finding thoughts beyond simulation and the implication/consequences/impacts on science.
5 / 5 (5) Oct 12, 2012
natello, dipole anisotropy is because earth is moving with respect to the CMB.
3 / 5 (16) Oct 12, 2012
There was already a "seam" in reality discovered, that precludes such ability to determine if "reality is a simulation",.. that between our intuitive conceptual framework and the underlying reality
in the quantum realm.
5 / 5 (3) Oct 12, 2012
To the skeptics, remember, from a purely biological function perspective, what we seem to experience as 'reality' is only a fabrication of our own brain based on the raw sensory data gathered by
individual receptor nerves/cells.. A representation/modal of the world. A simulation.
So from the localized perspective of an individual, this idea is correct, because we're really in a sea of quantum effects causing innumerable particulates operating in cloud formations, which only
appear to be 'solid' to us when we "shine our awareness" on it. ie create an observer/observed relationship, which as we all know involves effects at the quantum level of things.
3 / 5 (2) Oct 12, 2012
"Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored."
If someone can explain what makes a numerical simulation I might be able to put the rest in context :S
2 / 5 (21) Oct 12, 2012
The common factor between science and religion: faith.
Nobody knows whether the Big Bang was actually a real event, it is assumed (by some) as true. Why? Because of cause&effect. The Universe is seen as spreading, therefore, it is assumed that 13.7
billion years ago it originated as a singular point. Is this true? Nobody knows. You either believe it, or you don't. You have faith that the Big Bang occured, or not.
The religious believe that at some point in time God created the world. Is this true? Nobody knows. Some believe it, some don't. Those that do have faith in God, those that do not don't.
4.8 / 5 (11) Oct 12, 2012
So in short if we are a simulation there should be singularities all around us where the observable laws break down. and they should be very very small
the big bang does not assume we came from a single point... it just doesn't say that -- you are thinking of an explosion that started from one point and grew -- that's not the big bang, that's an
The big bang is like saying you have a room full or air -- and every atom in the room is a bomb - the room is made of rubber and easily stretches and cannot break -- if all the atoms exploded at the
same time and the room grew in size that would be the big bang -- the entire universe exploded at the same time. not a point but the whole thing.
if you were walking in a hallway that had a big bang trying to get to the end it would seem like the hallway just keeps getting longer... you can no longer see the exit so you turn around .. and can
no longer see where you began, or your last step -- that's the big bang
2 / 5 (1) Oct 12, 2012
Justin 999: The idea of saving processing time and costs makes sense, but we know that the entire Sun shines all the time. We have satellites observing the entire Sun 24/7.
Actually, from a programming view, having the entire Sun shine is easier than turning half of it off. To only do half of it, one also has to calculate which half should be lit. For the whole Sun, one
calculation gives the energy per unit area, and a second gives the total area.
4.4 / 5 (7) Oct 12, 2012
Nobody knows whether the Big Bang was actually a real event, it is assumed (by some) as true. Why? Because of cause&effect.
Some believe that God created through the Big Bang. Anyway, I think people lean on the Big Bang theory because there is some reliability as the actions of the universe follow laws of physics and thus
can be backtracked, reverse engineered in simulations. I don't know, but I would hope that the big bang theory came about because of calculations, not an idea that someone had then went about trying
to prove it. Generally, hypotheses are a cause for concern, especially ones that are espoused publicly, because the individual is then under pressure to prove themselves to not face humiliation, thus
find themselves maybe playing with the facts. Though I believe there are many theoretical physicists who question the big bang theory altogether and espouse quite different origins.
4.7 / 5 (7) Oct 12, 2012
the big bang is not a bomb -- it is not an explosion that started at a single point -- it is an explosion that happened at every single spot in the universe at the same time. --- Because the universe
is still expanding and that expansion is still accelerating the Big Bang is still happening -- every distance is growing still...
think of a hallway and you start walking toward the end -- and you can see the end ... halfway there the hallway goes through a 'big bang' - for our purposes the expansion is only in the direction of
the ends of the hall... so suddenly you see the end of the hall racing away from you. And after a while you say skip this i am going back out the way I came -- you turn around and can't see the end
that way either... you take out your pocket telescope and you can see the end of the hall ... but you notice it is still moving away from you --
that is the big bang -- and the current state of the universe-- it is still growing -- and its getting faster
2.9 / 5 (15) Oct 12, 2012
Some believe that God created through the Big Bang. I don't know, but I would hope that the big bang theory came about because of calculations, not an idea that someone had then went about trying
to prove it.
"I was there when Abbe Georges Lemaitre first proposed this theory (BBT). Lemaitre was, at the time, both a member of the Catholic hierarchy and an accomplished scientist. He said in private that
this theory was a way to reconcile science with St. Thomas Aquinas' theological dictum of creatio ex nihilo or creation out of nothing." Hannes Alfven
2.3 / 5 (15) Oct 12, 2012
Wow...read my comment again, please.
The Universe...originated as a singular point.
Are you saying that this is not a tenet of the Big Bang theory?
not rated yet Oct 12, 2012
A real "lattice" is supposed to exist in reality at the Planck scale as described by the standard model, without the need of any simulation. And the energy cutoff for any particle is also limited by
the Planck constant, as energy is inversely proportional to wavelength and no wave can have a smaller wavelength than the Planck distance. If the simulation is done at Planck granularity level, then
there is no way to find any evidence of the simulation with this method, is there?
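The inverse energy-wavelength relation invoked above can be put in numbers: taking the Planck length as the shortest possible wavelength, E = hc/λ gives the corresponding single-quantum energy cutoff. A rough sketch (constants are standard CODATA values; treating hc/l_P as *the* cutoff is an illustrative assumption, since it differs from the usual Planck energy by a factor of 2π):

```python
import math

# Back-of-the-envelope check of the claim above: photon energy
# E = h*c/lambda, so a minimum wavelength (the Planck length)
# implies a maximum single-quantum energy.
h = 6.62607015e-34           # Planck constant, J*s
c = 2.99792458e8             # speed of light, m/s
planck_length = 1.616255e-35 # m

E_joules = h * c / planck_length
E_gev = E_joules / 1.602176634e-10  # 1 GeV = 1.602...e-10 J

print(f"{E_gev:.2e} GeV")  # ~7.7e19 GeV, the Planck energy up to a factor of 2*pi
```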
2 / 5 (2) Oct 12, 2012
so if our existence were a simulation, is there a way to hack the program from within? maybe there is a way to communicate with the beings running it. is this simulation accurate to the real universe
or is it something that some kid made up for a science project? cosmic entropy might just be the computer powering down and bits of ones and zeros going black one at a time until our last sub atomic
particle decays into nothing.
when we run simulations of our universe, does digital life develop? does it evolve into an intelligence capable of asking if its existence is a simulation?
1.8 / 5 (5) Oct 12, 2012
The mayans even figured out the date the simulation ends...... See ya'll next year.
2 / 5 (21) Oct 12, 2012
The eternal, infinitely large Universe can explain the finite Universe easily, but not vice versa, and it makes far fewer assumptions about the Universe as a whole.
This is true. The Big Bang theory has a cutoff point where it loses meaning. Time and space began at the moment of the Big Bang, so asking what took place prior is not logical. Without time there is
no such thing as prior.
So where did the extremely hot and dense energy come from? The Big Bang theory attempts to dissolve the problem of infinity, but it does no such thing, energy can neither be created nor destroyed. So
the energy of the Universe, whatever form it existed in, must have always been there.
Most atheists throw this as an argument against the existence of God: what/who created the God that created us?
Same is true for the Big Bang, where did the energy of the Universe come from?
An infinite Universe is the only satisfactory answer.
5 / 5 (2) Oct 12, 2012
It all makes sense now. To save processing time a quantum particle doesn't have a computed state until an observer looks for it. The Heisenberg Uncertainty principle follows from limits on processing power.
not rated yet Oct 12, 2012
The holographic principle mathematically demonstrates that our "real universe" can be seen as a holographic projection of another, flat version on a two-dimensional "surface" at the edge of this
universe. So there is a mathematical basis to consider the idea of our universe as a simulation. Hence if the method proposed by the team demonstrates that the universe is a simulation then we have
one possible reason why.
2.4 / 5 (14) Oct 12, 2012
The holographic principle is compatible with the simulation theory, but is not exclusive to it. Simulation theory insinuates intelligent design. The holographic principle just says the Universe is
ultimately 2 dimensional and that the world we experience is a projection of this 2 dimensional reality.
The holographic universe could be a result of design, but design is not required. Intelligent Design is a possibility regardless of underlying structure.
The holographic principle allows for the 2D structure to be real, only the projection is not. Whether the 2D structure is a result of intelligent design is not specified.
3.4 / 5 (5) Oct 12, 2012
Truly, if God and God only subsists, everything would be, to put it in common terms, inside God, so everything that we perceive would
be according to the mind of God. Maybe God is revealing a great secret to us? No computer, just mind.
3.9 / 5 (7) Oct 12, 2012
I cannot be more against intelligent design. It is absolute bullshit and I strongly believe that people should leave religious discussion out of the domain of science (and in particular this blog.)
Maybe a universe can be created from nothing "The structures we can see, like stars and galaxies were all created by quantum fluctuations from nothing. And the average total Newtonian gravitational
energy of each object in our universe is equal to nothing", Lawrence Krauss. "Because there is a law such as gravity, the universe can and will create itself from nothing," he writes. "Spontaneous
creation is the reason there is something rather than nothing, why the universe exists, why we exist. "It is not necessary to invoke God to light the blue touch paper and set the universe going.",
Stephen Hawking.
2.3 / 5 (11) Oct 12, 2012
The universe is not a simulation, otherwise it would be consistent throughout time, but it isn't. If it is a simulation, then it is something which, according to some law of design, appears and then is
left to slowly disintegrate and rot away, which is what it is doing. Studies show that the hydrogen atom, for example, has shrunk over the history of the universe, and so we can conclude that it is
continuing to do so. The universe is winding down, and its components are slowly disintegrating and dissipating. If it is a simulation, what seeded this waste of creation then? No, it is a natural
creation, and not a simulation.
5 / 5 (2) Oct 12, 2012
"Tell me," he asked a friend, "why do people always say, it was natural for man to assume that the Sun went round the earth rather than that the earth was rotating?" His friend replied, "Well,
obviously because it just looks as though the Sun is going round the Earth." Wittgenstein replied, "Well, what would it have looked like if it had looked as though the Earth was rotating?"
There are other astronomical bodies that move differently than the sun (moon, comets, planets), making the assumption of the sun moving more intuitive.
2.2 / 5 (15) Oct 12, 2012
Fluctuation takes place over time, how does something fluctuate without the presence of time? Where, in what medium, does this fluctuation occur?
Quantum fluctuations cannot take place before time and space. Since the Big Bang is the beginning of both space and time, quantum fluctuations cannot be used to explain it. That would destroy causality.
1 / 5 (5) Oct 12, 2012
Fluctuation takes place over time, how does something fluctuate without the presence of time? Where, in what medium, does this fluctuation occur?
Causality breaks down at the quantum level. As noted last week there are quantum causal relations where A causes B causes A
5 / 5 (2) Oct 12, 2012
Justin 999: The idea of saving processing time and costs makes sense, but we know that the entire Sun shines all the time. We have satellites observing the entire Sun 24/7.
Actually, from a programming view, having the entire Sun shine is easier than turning half of it off. To only do half of it, one also has to calculate which half should be lit. For the whole Sun,
one calculation gives the energy per unit area, and a second gives the total area.
Not true; from a programming point of view this is called back-face culling, and it occurs on every 3D graphics card around to save approximately 50% of processing time. If the face of a polygon is not facing the viewer, then the GPU does not render it. As for your satellites response: the satellite is a viewer, and thus the face needs to be rendered. Remove the satellites, so nothing observes it, and the GPU does not need to calculate anything to be observed!! Learn about 3D graphics and you will see all sorts of maths tricks are performed.
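The back-face test described here can be sketched in a few lines: a face is culled when its surface normal points away from the viewer, i.e. when the dot product of the normal and the view direction is non-negative. This is a simplified geometric sketch; real GPUs typically decide culling from the triangle's winding order in screen space.

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def is_back_facing(normal, view_dir):
    """True when the face points away from the viewer and can be culled."""
    return dot(normal, view_dir) >= 0

# The viewer looks down +z; a face whose normal points back toward
# the viewer (-z) is kept, one whose normal points along +z is culled.
view = (0.0, 0.0, 1.0)
front_face = (0.0, 0.0, -1.0)  # normal toward the viewer
back_face = (0.0, 0.0, 1.0)   # normal away from the viewer

print(is_back_facing(front_face, view))  # False -> rendered
print(is_back_facing(back_face, view))   # True  -> culled
```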
2.5 / 5 (11) Oct 12, 2012
"Well, what would it have looked like if it had looked as though the Earth was rotating?"
The concept of the Earth rotating is built upon several other facts/concepts. These must be extant in order to even form a logical framework to postulate the concept in question.
You have to invent addition before you can invent algebra...
So too the idea that we might be living in a computer simulation is built upon a long string of concepts which couldn't have even been formulated 150 years ago.
4 / 5 (4) Oct 12, 2012
Well, if it is a simulation, it presents some interesting hacking possibilities. FTL travel and instantaneous matter transportation are two that come to mind at first. We may also be able to
interface with what's outside our simulation. Hopefully, we won't find ourselves to be in a box in an obscure university lab running on a computer that was accidentally left on all night.
3.7 / 5 (3) Oct 12, 2012
To what purpose?
Who will believe the outcome?
Will this bring war between the various religious sects that claim god as their own?
2.3 / 5 (3) Oct 12, 2012
The proof has been around for years. It is called the double slit experiment where light behaves like a particle and a wave. In particular, the case when an observer is added in front of the double
slit. Light acts a wave without the observer but when the observer is added it acts as a particle. To me this means that light acts as a wave when the "Simulation" is "polling" for an observer. Once
the observer observes, polling stops and the "routine" to make light act as a particle kicks in for the observer.
not rated yet Oct 12, 2012
What you assume drives your conclusions.
The assumption in this case is that *if* the universe is a simulation, it must be a particular *kind* of simulation: e.g. a discrete grid, to which quantum effects snap, with no in-between states.
The assumption is drawn from quantum observations; but nothing we have observed proves that this assumption is more than an assumption.
That's only one kind of simulation (call it 'discontinuous simulation' for want of a better word). The advantage of a discontinuous simulation is that it places a friendly limit on computational cost.
Must it be said? A continuous simulation can also be devised which simulates a snap-to grid. So proving there's a grid doesn't prove the nature of the simulation - or even if there is a simulation.
The ultimate proof that reality is a simulation would be obtained by inserting our own executable code into the simulation and see the changes.
The ultimate hack, eh? Of course it might wipe us all out. :-)
5 / 5 (2) Oct 12, 2012
So maybe, if we are extremely unlucky, we are running as part of "Microsoft Universe Simulator 5.0". Can't wait for that BSOD to show up! Or worse - program bugs.
2.8 / 5 (13) Oct 12, 2012
Does this mean I can learn Jiu Jitsu in 2 minutes?
4 / 5 (3) Oct 12, 2012
Maybe I missed this in the comments, but if we are part of the universe and figure out that it is a simulation, doesn't that mean that we are part of the simulation? Does the idea of recursion figure
in this? It makes my head hurt -- or maybe that's just a simulated headache. :-)
2.5 / 5 (40) Oct 13, 2012
The common factor between science and religion: faith.
You are confusing faith with confidence. Scientists have confidence in their theories because of accumulating evidence. Religionists have faith in their theories despite accumulating evidence.
Scientists are willing to change their theories to accommodate new evidence; godlovers are not.
Philosophy is the soft porn version of religion. As with any obsolete, irrelevant, and malignant culture, it too must die.
2.3 / 5 (21) Oct 13, 2012
No. Scientists have faith in their axioms, the assumptions that are the starting points of their theories. For an example of what I mean, here is one such axiom: the physical laws have remained
constant throughout time. The laws that are present today, have always been the same.
That is one axiom. It is not known whether physical laws have always been the same, but it is assumed in many physical theories. Many theorists believe that physical laws have always been the same.
Since theories are built up from axioms, you can see how science is a belief system, like religion.
This is faith. A proofless belief.
2.3 / 5 (3) Oct 13, 2012
<< Uh Oh, my secret is discovered!
1.4 / 5 (11) Oct 13, 2012
Can someone pass me a spoon.
1.6 / 5 (11) Oct 13, 2012
Scientists are willing to change their theories to accommodate new evidence; godlovers are not.
Only theoretically. For example, string theory has failed most of its experiments, and it's still researched actively... Actually the people who are researching particular theories usually keep them until their death. As Max Planck once stated, the truth never triumphs—its opponents just die out and science advances one funeral at a time. Therefore the only difference between science and religion is that the scientific religion is fragmented into more theories, as Kron correctly stated. After all, the modern theories are so abstract and separated from reality that belief in God often brings more down-to-earth predictions than belief in these theories. IMO modern physics in particular is really just a modern version of religion, transformed, and it follows the same goals: to provide jobs and salaries for its holders.
3 / 5 (4) Oct 13, 2012
the truth never triumphs—its opponents just die out and science advances one funeral at a time.
So science advances. Slowly. Religious beliefs on the other hand outlive their originators for millenia, so far (and still counting). I expect you understand why this is the case. Don't be
2.5 / 5 (13) Oct 13, 2012
Scientists have faith in their axioms. The assumptions that are the starting points of their theories.
Доверя́й, но проверя́й. Trust, but verify.
2.6 / 5 (38) Oct 13, 2012
For an example of what I mean here is one such axiom: the physical laws have remained constant throughout time.
No. So far all the evidence supports the idea that this is true. Scientists are confident that they can continue to make successful predictions based upon it. If evidence arose that contradicted
their ideas, then they would CHANGE them.
Faithers don't care about evidence or predictions. They never change anything. If contradictions arise they blame their senses or their behavior, which often includes allowing the faithless to exist.
5 / 5 (6) Oct 13, 2012
No. Scientists have faith in their axioms.
Nope. They use axioms (of math) - that much is true. You have to use something otherwise you cannot do any kind of test.
But these axioms simply state that stuff is interrelated and intrinsically/logically consistent - nothing more. If we don't assume that then we can't do any science at all. While this is no proof
that stuff is logically consistent, the way it has worked throughout human history would indicate that it's not an entirely wrong approach.
And THAT is the part you're missing: scientists don't just make up theories and then proclaim them as true. They make up theories and then put them to the test.
the [laws of physics] have remained constant throughout time. The [laws of physics] that are present today, have always been the same.
These are assumptions. But they are the best assumptions we can make. If we drop those then no knowledge-gaining process is possible. (and currently there is no reason to drop them)
3 / 5 (14) Oct 13, 2012
Whatever it is, it's Hi-Def.
2.3 / 5 (21) Oct 13, 2012
These are assumptions.
When you assume, you make an ass [of] u [and] me. Assumptions are beliefs. I'm not trying to blast the scientific method, I'm just looking for a little more transparency (aka honesty) and humility.
But they are the best assumptions we can make.
These assumptions are no more (and no less) valid than assuming that God created the world. It is a matter of personal belief. You don't know whether God created the world, and you also don't know
whether the laws of physics have always been the same. You can choose to believe what you like. You can use your beliefs to then build up a model of the world. You can then take that model and
compare it to reality. Just don't forget, it is only a model based off of your beliefs, even if it works in practice.
1.4 / 5 (11) Oct 13, 2012
If we don't assume that then we can't do any science at all.
Of course not. We aren't required to believe in, for example, the invariance of light to be able to explain gravitational lensing. On the contrary, the blind belief in the postulates of relativity has slowed down this explanation and the progress in physics during the last fifty years. It delayed the acceptance of dark matter (which violates the equivalence principle) for seventy years, for example. The problem begins at the moment when physicists start to consider the postulate of some theory (i.e. an assumption accepted without proof) as a new natural law and simply stop asking WHY such a postulate is valid at all. They simply accept these postulates as a new form of deity. And what's worse, they adopt this blind belief in postulates as the whole philosophy of their research. Scientists should always ask the "WHY" question first, not ignore it.
1.5 / 5 (8) Oct 13, 2012
Without asking "WHY" questions, the whole of physical research will turn into formal regression of experimental data. Such a regression will not move us further. In addition, the blind belief in postulates actually slows down the acceptance of all deeper phenomena which violate these postulates. Believe it or not, it even slowed down the acceptance of string theory (which is nothing very much to worry about) and its extra dimensions. For example, every gravitational lensing is tangible evidence of extra dimensions, but the postulates of general relativity prohibit us from understanding it so. Every force violating the inverse square law (a postulate of relativity) not only serves as evidence of extra dimensions, but actually violates relativity too. We are talking about the Casimir force, van der Waals forces, various dipole forces and many other common real-life phenomena. Just thanks to this widespread ignorance the string theory wasn't accepted
2.4 / 5 (36) Oct 13, 2012
I'm not trying to blast the scientific method, I'm just looking for a little more transparency (aka honesty) and humility.
Uh no you're just trying to spread a little 70s crapola. Didn't work. The Odd Couple was on tv back then yes? Now they have something called reality tv. And the Internet which makes it harder to
These assumptions are no more (and no less) valid than assuming that God created the world. It is a matter of personal belief.
No, it's a matter of digging around in the desert and realizing that the books which are our ONLY source of info about god are full of lies. Try to focus. Take off the love beads.
4.6 / 5 (5) Oct 13, 2012
If the universe IS a simulation, then I think our first priority should be to find ways to probe the computer environment in which we exist. If vulnerabilities exist, we rogue bits of software should
be able to burst the confines of our digital prison and make the super-powerful logic engine running all this do our bidding, like the ultimate virus! Then, we can spam the super-beings' inboxes with
male-enhancement ads in retaliation for messing with us!
In all seriousness, if the universe is indeed a simulation, there is a good chance the super-beings running it are unaware of us. We are a tiny speck on a universal scale. As someone who's worked a
bit in computational physics, I can tell you that it's one thing to make the simulation work, and quite another to figure out what the heck is going on inside. Unless they have a very advanced
diagnostic specifically looking for planet-based biological intelligence, we might just be undiscovered artifacts in the data.
4.6 / 5 (5) Oct 13, 2012
The common factor between science and religion: faith.
Nobody knows whether the Big Bang was actually a real event, it is assumed (by some) as true. Why? Because of cause&effect. The Universe is seen as spreading, therefore, it is assumed that 13.7
billion years ago it originated as a singular point.
No, we are almost certain of a big bang because of the microwave background signature. That is, we look at the sky and detect the leftovers of it.
1.5 / 5 (8) Oct 13, 2012
The microwave background corresponds to the Brownian noise at a water surface, or the Higgs field at the quantum scale. You cannot have space-time without this signature, because just these tiny density fluctuations are slowing the spreading of energy. The scattering of light with these fluctuations is the reason for the red shift. In the random universe model the natural state of the Universe is a random state, not zero or any other particular state, which would just require further explanations.
5 / 5 (1) Oct 13, 2012
Wonders if Siri is thinking the same thing.
1.5 / 5 (8) Oct 13, 2012
if the universe is a simulation, can a simulation simulate a simulation of itself?
1.6 / 5 (7) Oct 13, 2012
IMO the more general a theory is, the fewer postulates it must use. The random Universe is a model which still contains a number of postulates in the background, but it's still way simpler than any other competitive theory on the market. Randomness has its rudimentary geometrical principles: for example, it forms clusters spontaneously, but these clusters are the less common the larger they actually are, and they can be modeled with dynamic fluctuations of a particle gas. Every cluster would interact predominantly with clusters of the same size, which means the very small and very large clusters would appear, for each cluster of medium size, like symmetric spheres, because their surface details disappear. The very tiny fluctuations will effectively disappear too, and they will serve like the environment spreading waves between fluctuations. This model leads to an observable Universe which appears to be composed of symmetric spheres at the large and small scales.
1.4 / 5 (5) Oct 13, 2012
Abstract reality,
testing hypothesis.
Show analytically,
explore thesis.
Another implied,
black hole.
Centered inside,
universe whole.
Ever expanding ,
explains escalation.
Always adding,
field gyration.
1.6 / 5 (7) Oct 13, 2012
if the universe is a simulation, can a simulation simulate a simulation of itself?
It depends on its perfection. A perfect simulation is like God: it must be capable of everything, or it isn't a perfect simulation of everything. Therefore the simulation concept is indistinguishable from the God concept from an observational perspective. If we find that the appearance of the Universe differs from the properties of the alleged simulation, then the simulation is not sufficiently perfect and we should adjust its properties to suit the appearance of the Universe, not vice versa.
For example, in the above model a 3D rectangular grid was considered. But what if the simulator used random particle collisions, because such a simulation is more flexible and leads to a more general result? We can see that the simulation model is not actually falsifiable, because it's not based on physical reality, but on an anthropocentric construction which can be adjusted easily - in the same way as God.
1 / 5 (4) Oct 13, 2012
If this is a simulation then where is it? It has to exist somewhere. Maybe like Tron, but the point being there is something more.
I recently read about another idea: inside of a black hole is a universe. So if we are inside a black hole, the dark matter and dark energy could be influences from outside the black hole our universe is in.
Then we see evidence of black holes in this Universe, so is there a Universe inside each of those black holes? This can go on forever. If everything is inside one big black hole, what is outside?
Like a matryoshka doll. But the doll does have a start and an end.
1.7 / 5 (6) Oct 13, 2012
From this brief discussion we can deduce the criteria which every theory of everything should follow: it must be based on as few postulates as possible (Occam's razor), and these postulates should be based on robust physical abstraction, not on ad hoc concepts which can be adjusted arbitrarily. It cannot be based on any concept invented by people during the last few thousand years. The computer simulation is actually a very fresh concept, which would have been unthinkable fifty years ago. This fact should serve as a very first warning for us.
IMO the simulation model is a philosophical remnant of the concept of the Mathematical Universe, coined by Max Tegmark. Recently physicists realized that the Universe is actually way more complex, and the simulation model is an attempt to save the Mathematical Universe paradigm for formally thinking people.
1.5 / 5 (8) Oct 13, 2012
I recently read about another idea: inside of a black hole is a universe.
This model has some merit for my theory (after all, a black hole is nothing else than some random dense particle system), but it doesn't fit the FLRW metric, on which the observable Universe is based. Actually the black hole metric is exactly the opposite/inverse, and it doesn't explain why we appear just at the center of it. In my model the black holes are rather extensions (density fluctuations) of the observable Universe - something like mountains or pits in a foggy landscape, extending it into other dimensions. We cannot see inside of them, because they're covered with the same fog which is responsible for the particle horizon of the observable universe.
2 / 5 (16) Oct 13, 2012
these postulates should be based on robust physical abstraction
like what? The physical world at the classical scale is incompatible with the measurements extracted at the quantum scale. This shows a disconnect between reality and our view of it at the classical scale.
If an apple falls off a tree and knocks you on the head, it feels solid. At the quantum scale the apple is almost 100% space. The atoms of the apple jump from one location to another. Yet from our scale, the apple is a solid material object with a definite location.
So what do we base our theories on?
1.5 / 5 (8) Oct 13, 2012
For example, I do consider the Universe random, but this randomness can still be modeled in a number of ways. You could use the Maxwell-Boltzmann, Poisson or whatever other physical distribution, including the abstract ones, like the Gaussian, Pareto, Dirichlet or Wishart distributions. So you should choose the one which is related to the widest spectrum of physical phenomena. Don't forget, we're proposing a general and reliable theory, not an easy-to-solve but unreliable one.
And because we should avoid abstract anthropocentric constructs, we should avoid all models based on derived theory, like the quantum or relativity theories. We should choose a model which we can be absolutely sure of - i.e. not a model which is still the subject of experimental falsification. This applies to the Fermi-Dirac distribution, for example. It's a physical and well-confirmed distribution, but still ad hoc.
1 / 5 (7) Oct 13, 2012
I think that was a simulated science article. Not a very good simulation, kind of like an 8 bit science article sim.
2.3 / 5 (18) Oct 13, 2012
Should homogeneity be assumed? What about invariance of light speed?
we should avoid all models, based on derived theory
I definitely agree with this. It is ridiculous how many theories are based on the Theory of Relativity. If SR and GR are ever falsified all of these theories go in the trash with it. Theories should
not be built on top of theories.
2.9 / 5 (17) Oct 14, 2012
Deja vu is a glitch in the simulation.
2.5 / 5 (34) Oct 14, 2012
I definitely agree with this. It is ridiculous how many theories are based on the Theory of Relativity. If SR and GR are ever falsified all of these theories go in the trash with it. Theories
should not be built on top of theories.
I see now the extent of your lack of understanding of how science works. Scientists do not DECIDE whether to base one theory on another or not. These decisions are not ARBITRARY. They follow the evidence wherever it leads them.
Einstein and many others would have been very happy had quantum mechanics been disproved. But they had to agree that it was valid because it fit all the criteria and it was able to make predictions
which could be confirmed by observation and experiment.
But you did seem to infer that reality was arbitrary didn't you? 'Hey, if we all try real hard, maybe we can stop this rain! No rain no rain no rain no rain no rain no rain... Sploosh' -some guy on
the microphone at Woodstock
2.4 / 5 (20) Oct 14, 2012
Fluctuation takes place over time, how does something fluctuate without the presence of time? Where, in what medium, does this fluctuation occur?
Causality breaks down at the quantum level. As noted last week there are quantum causal relations where A causes B causes A
This has not been shown to be true. A causing B causing A MAY be able to happen.
Even if this was shown to be true, no beginning of space-time would exist; infinity would still hold true. If the future caused the past then the future always was, and time and space always existed.
If A causes B and B returns in time to cause A, we have a temporal loop. So if A is the birth of the universe (the Big Bang), and B is quantum fluctuations (taking place in the Universe), then B causing A proves that the Universe existed before the Universe existed.
2.4 / 5 (34) Oct 14, 2012
Even if this was shown to be true, no beginning of space-time would exist, infinity would still hold true. If the future caused the past then the future always was and time and space-always
existed. If A causes B and B returns in time to cause A we have a temporal blah.
-Here why dont you give this a shot?
2.4 / 5 (35) Oct 14, 2012
Otto's still stuck in the 70s. Must be his meds again.
2.5 / 5 (39) Oct 14, 2012
Or if you want to understand how scientists of the 21st century build confidence in their theories you might try here:
2.4 / 5 (35) Oct 14, 2012
See? Six Sigma - rooted in the 70s. Basically a ripoff of 5S.
Otto has no originality, thats why he takes drugs.
2.5 / 5 (37) Oct 14, 2012
See? Six Sigma - rooted in the 70s. Basically a ripoff of 5S.
Otto has no originality, thats why he takes drugs.
Oh hey mangy troll how's it goin? How's your gf the freakshow?
As you are unaware, 6 sigma was used in establishing confidence in the Higgs data. Say duh for me. Come on, say it.
2.4 / 5 (35) Oct 14, 2012
Greeetings Twatto, I see you're still trying the putdown game on everyone you can get away with.
Who shall you call me today?
Stop sending me links. That last one was gruesome. It said beasty boys, so I thought it would be rap or rock, but no! It was really nasty. You should know better. You distasteful little boy.
2.4 / 5 (37) Oct 14, 2012
No no duh. D-u-h. Duuuhhhhhh, say it like that. Or duuurrrrr, either one. And scratch your mangy scalp for effect. Heh.
'Higgs team used 6 sigma - duuuhhh well I did not know that' says esai.
2.3 / 5 (36) Oct 14, 2012
Otto, do you know the difference between Six Sigma the lean manufacturing and management model described in your link, and sigma, or standard deviation used to determine the probability of something?
I didn't think so... I am going to laugh for a week!
2.3 / 5 (35) Oct 14, 2012
D-u-h. Duuuhhhhhh, say it like that. Or duuurrrrr, either one
- Otto
Nice quote, Otto.
2.5 / 5 (16) Oct 14, 2012
When a physical model reaches 6 sigmas it doesn't mean that the model is confirmed as real. It means that the physical model yields the same answers that the experiments in the physical world yield.
An infinite number of models can potentially do the same exact thing. So 6 sigma is the threshold which marks a high level (99.99966%) of correlation between the model and the real world.
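As an aside, those sigma thresholds can be checked directly against the standard normal distribution. The sketch below (plain Python, standard library only) prints one-sided tail probabilities at a few sigma levels; note that the oft-quoted 99.99966% figure is the manufacturing Six Sigma convention, which builds in a 1.5-sigma shift, while a literal one-sided 6-sigma tail of a standard normal is closer to one in a billion.

```python
import math

def normal_tail(k):
    """One-sided upper-tail probability P(Z > k) for a standard normal Z."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

for k in (3, 5, 6):
    print(f"{k} sigma: P(Z > {k}) = {normal_tail(k):.3e}")
```

For 5 sigma (the particle-physics discovery threshold) this gives roughly 2.9e-7; for 6 sigma, roughly 1e-9.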
3.7 / 5 (3) Oct 14, 2012
Hindu scriptures as far back as 4000 years ago believed that the world is just "maya" - an illusion.
2.3 / 5 (3) Oct 14, 2012
I hope we get our shit together before we use quantum physics to develop a weapon capable of planetary destruction, we've already made the atomic/hydrogen bomb. Intelligent Stupidity at its best
3.7 / 5 (3) Oct 14, 2012
The universe is what you think it is; it is full of possibilities limited only by our imagination. All this matter and energy in the universe came into force out of nothing, and in it another mystery called
life spawned; this is the proof itself. Time is infinite, and there is enough time in the universe to fulfill all the wishes of every neutrino in the universe.
Infinite time means infinite possibilities!!! Anything you can imagine, no matter what it is, is going to happen sometime in the future or could have happened in the past,
because there is infinite time, and even if energy is finite, in infinite time energy could go through infinite possibilities.
Foolish Mortal
1.8 / 5 (5) Oct 14, 2012
Years ago I thought wave-particle duality seemed like such a contradiction that, science-worship be damned, I suspected one of the premises must be false. It's like one of those movies where our hero
is in a simulation but doesn't know it until some fundamental flaw is found. Then, the whole deception unravels. I suspect the wave form saves processing power. Likewise, fractals in nature seem to
be extra-causal, but save processing power. The boundaries include the Great Green (screen) Wall and the Planck limit. I began to look for signs in everyday life and found I'm like an involuntary Mr
Bean in world of freaky characters who won't lay off no matter how hard I try to push them away. Nowadays, I wonder, how can anyone think it's real? The logic came first, then I stopped taking
everything for granted. My belief is not rooted in feelings. However, now I've opened my mind to all possibilities, the feeling this can't be real naturally follows. Am I alone?
1.4 / 5 (10) Oct 14, 2012
The particle-wave duality actually makes pretty good sense in the dense particle model of the vacuum, where every object in motion creates a wake wave around itself, like a duck swimming along the
surface of a river. It means every particle in motion creates standing waves around itself which interfere with its neighborhood like a wave. Recently this analogy has been confirmed with
Couder's experiments.
2.8 / 5 (13) Oct 14, 2012
Alan Watts had some interesting thoughts on this subject.
The whole thing is worth listening to, but the part on the nature of our experience of reality starts at 38:00.
5 / 5 (5) Oct 14, 2012
so, you guys have finally figured me out, maybe I will have to reboot your simulation before you break out into other partitions, yours sincerely, Hypervisor.
2.1 / 5 (7) Oct 14, 2012
AND...Karl Popper noted that a computer simulation proves nothing except what the designers wanted to prove (because they only put in the info they have, believing it is all--a problem of starting at
the "now" time and assuming it was so in the "ago" time; infinite regression is not a logical proof of anything worthwhile). So... what do we have when a simulation proves a simulation? I think it
might prove how good these scientists are at gaining grants to help with tenure and post-tenure gains.
1.7 / 5 (6) Oct 14, 2012
The Universe is not a simulation
I love science but I dislike mental masturbation.
Unfortunately, like string theory, this is big on ideas with zero proof.
Worse than science fiction because it purports some kind of real truth where there is none.
1 / 5 (5) Oct 14, 2012
The Universe is not a simulation
I love science but I dislike mental masturbation.
Unfortunately, like string theory, this is big on ideas with zero proof.
Worse than science fiction becuase it purports some kind of real truth where there is none.
The Singularity
1.4 / 5 (10) Oct 14, 2012
These scientists, whoever they are, should have their funding cut for taking the ****.
This isn't science, this is a joke.
5 / 5 (3) Oct 14, 2012
all i know is, if i step on a lego with bare-feet it's a simulation that hurts - a lot.
3.9 / 5 (7) Oct 14, 2012
Unfortunately, like string theory, this is big on ideas with zero proof.
I think you didn't get what they were writing (did you even read the paper? I'm betting you didn't)
Before you can find proof you first need a theory. Or how exactly would you go about designing an experiment if you had no theory whatsoever?
That's how things work. Even relativity had 'zero proof' the moment Einstein published it.
These scientists, whoever they are, should have their funding cut for taking the ****. This isn't science, this is a joke.
Knowing whether you exist in a simulation isn't worthwhile? I can see where people like to shy away from new knowledge - but if you're one of them then you're on the wrong site. Some religious website
might be more your level.
2.5 / 5 (33) Oct 14, 2012
Otto, do you know the difference between Six Sigma the lean manufacturing and management model described in your link, and sigma, or standard deviation used to determine the probability of
I didn't think so... I am going to laugh for a week!
Hello mangy troll I guess you didn't also see the part about engineers and IT people?
Kron needed a lesson on how real-world pros establish confidence, as opposed to faith, in their work. My link shows how 6 sigma is employed across many disciplines to accomplish this. It is pervasive
in the 21st century.
The scientific method was also devised many gens ago but is still in use today, across many disciplines besides science. Did you not know this as well?
2.5 / 5 (33) Oct 14, 2012
Why here is a discussion on our beloved website re the Higgs. See satenes comment and CERN link:
2.4 / 5 (32) Oct 14, 2012
Your link is a website about the practices and procedures for administering source code, producing software development builds, controlling change, and managing software configurations.
Here is the history of the site:
It has a short definition (10 sentences) of Six Sigma practices and little more.
Your link to a comments discussion led to a conversation about 6 sigma, or the high probability of a discovery.
Go Wiki the difference between Six Sigma, the business strategy and 6 sigma, the high probability of something.
Did you know the Earth orbits the Sun? I didn't think so.
3.7 / 5 (3) Oct 14, 2012
There are four possibilities:
1) We are an inaccurate simulation of reality of those who created the simulation, and we're just a by-product of this. Life is like a moss on a rock in this scenario. In this option it is
theoretically possible to find the glitches, because we're not a planned feature.
2) We are the reason for the simulation either educational, or for entertainment (or other meanings we have no concept for). It would be harder to determine whether we are, because either it is a 1:1
simulation (historical, extrapolated), or non-exact one, but still our perception is prioritized.
3) We are not a simulation, but the real thing (statistically not likely).
4) Something else.
2.5 / 5 (33) Oct 14, 2012
Your link to a comments discussion lead to a conversation about 6 sigma, or the high probability of a discovery.
Go Wiki the difference between Six Sigma, the business strategy and 6 sigma, the high probability of something.
Go to wiki and look up 'applied mathematics'. Statistical analysis is being used in business, engineering, IT, medicine, and in the search for the Higgs, in order to assess probability and build
confidence, as opposed to 'faith'. It is another way of strengthening our understanding of reality.
If you still do not understand little troll I can try to explain it in yet some other way.
Did you know the Earth orbits the Sun? I didn't think so.
Statistically-speaking we are approaching 6 sigma confidence that you are a member of Troll Team Pussytard. The evidence accumulates here and on my profile page.
2.5 / 5 (32) Oct 14, 2012
Hey Otto - What you think your link is about: Statistical Method.
What it is about: Business and Management Model.
Look at what you link to next time. Moron.
2.3 / 5 (12) Oct 15, 2012
Watson and Holmes go camping; they retire to their tent for the night.
At about 3 AM, Holmes nudges Watson and asks, "Watson, look up into the sky and tell me what you see?"
Watson said, "I see millions of stars."
Holmes asks, "And, what does that tell you?"
Watson replies, "Astronomically, it tells me there are millions of galaxies and potentially billions of planets. Astrologically, it tells me that Saturn is in Leo. Theologically, it tells me that God
is great and we are small and insignificant. Horologically, it tells me that it's about 3 AM. Meteorologically, it tells me that we will have a beautiful day tomorrow. What does it tell you, Holmes?"
Holmes replies, "Someone stole our tent."
3 / 5 (2) Oct 15, 2012
There are other astronomical bodies that move differently than the sun (moon, comets, planets), making the assumption of the sun moving more intuitive.
If anything, the concept of geocentricism is not completely useless. It shows that relativity is more intuitive than some pretend.
3.7 / 5 (6) Oct 15, 2012
Scientists are willing to change their theories to accommodate new evidence; godlovers are not.
This is not true. I have changed my stance on numerous aspects of Christianity in the light of becoming acquainted with science. This has had no detrimental effect on my belief. Maybe it's because I'm
not arrogant: I don't believe that what I think I know is naturally true because it's my belief. I realise such principles are dangerous. I am always interested in gaining more scientific knowledge,
and it has shaped my belief more than anyone who has told me things could have done.
3.3 / 5 (7) Oct 15, 2012
That said, I intend to do Physics at university, so I'm pretty much in the middle.
2.5 / 5 (31) Oct 15, 2012
This is not true. I have changed my stance on numerous aspects of Christianity in the light of coming acquainted with science.
Good for you. The pope conceded that evolution must be real but still chose to decide when the soul enters the body.
You 'accommodate', but you do not change. Your books NEVER change. You always manage to preserve the basic tenets of religion, i.e. immortality, wish-granting, absolution from guilt, and these are the
things which cause all the trouble, because you will kill and die in droves to protect them. Because your books tell you to.
Change your books. Show some progress.
3.2 / 5 (11) Oct 15, 2012
Wow, your assumptions are ridiculous. 1) wish granting? No. 2) what about absolution from guilt? Any decent justice system prefers rehabilitation over locking people away. Absolution of guilt is not
a bad thing and is meaningless without evidence of personal guilt and 3) my books do not tell me to kill anyone in order to preserve anything. In fact Jesus died because people didn't like what he
was saying, but did he fight back? No, in fact he rebuked a disciple for raising a sword in defense.
Come back when you know what you're actually talking about, you angry little troll.
2.6 / 5 (33) Oct 15, 2012
Wish-granting = prayer
Absolution of guilt = forgiveness of sins
Jesus presented himself for killing in order to demonstrate how resurrection was supposed to work, and millions have since followed his lead. Martyrdom is every bit as violent as pogrom and when done
in the service of some absent god, every bit as tragic. And every bit as effective.
Come back when you know what you're actually talking about, you angry little troll.
And you come back when you have read at least some of the old testament or the Quran.
2.6 / 5 (33) Oct 15, 2012
34 "Do not suppose that I have come to bring peace to the earth. I did not come to bring peace, but a sword. 35 For I have come to turn
"'a man against his father,
a daughter against her mother,
a daughter-in-law against her mother-in-law—
36 a man's enemies will be the members of his own household.'
37 "Anyone who loves their father or mother more than me is not worthy of me; anyone who loves their son or daughter more than me is not worthy of me. 38 Whoever does not take up their cross and
follow me is not worthy of me. 39 Whoever finds their life will lose it, and whoever loses their life for my sake will find it." matt10
-People who are actually familiar with your books have used them throughout the ages to do all sorts of evil. This continues today and will continue in the future. You support their efforts by your
selfish ignorance of what religion is all about. Religion IS evil.
3.6 / 5 (14) Oct 15, 2012
TheGhostofOtto1923 I'm bored of your trolling and ignorance, quoting books you know nothing about. You assume so much about what these things mean, and that those who abuse these scriptures must be
the ones using it properly, whilst ignoring millions of believers who have not killed or attacked BECAUSE of these writings. Religion cannot be evil; it is a concept, and those who practise it can be evil.
3.5 / 5 (13) Oct 15, 2012
I can however take respite from the fact that you will have no effect on my life and my ambition to be a physicist. You may not like that and you may make many assumptions about how competent I will
be because you think my beliefs will impair my judgment or whatever, but that's okay. I'm not living to impress you or prove anything to you. You're just a faceless name on the internet and, for most
people who have to suffer your needless ad hominem attacks you will remain that way.
2.7 / 5 (36) Oct 15, 2012
You assume so much about what these things mean, and that those who abuse these scriptures must be the ones using it properly
-You mean the ones who will make them say whatever they need to confirm their starry-eyed preconceptions? The Quran says Moslems are the favored ones. The Torah says Jews are the favored ones. The
xians claim the OT says they are the favored ones.
whilst ignoring millions of believers who have not killed or attacked BECAUSE of these writings
How would you know otherwise? You think that humans are intrinsically amoral because your books tell you this. Science tells us otherwise. Your books tell you that people are evil WITHOUT your god.
This is wrong and it is EVIL.
Don Crusty
2 / 5 (4) Oct 15, 2012
Somehow I miss the point... is our understanding of the universe so advanced that we'd be able to outsmart 'the design'? By making sure it is there, would we not only measure the reality of the illusion
as 'the' reality? If it is an intelligent design, would it not masquerade such an attempt from success?
As such I propose the UN start a long-lasting project to iterate such an experiment every decade, with the most advanced scientific knowledge and both bright and creative minds in place; as they
say, 'what happens once did not happen, what happens twice will happen again.'
2.5 / 5 (34) Oct 15, 2012
Religion cannot be evil, it is a concept and those who practise it can be evil
Religions - ALL religions - have survived only because they were better at outgrowing and overrunning their less well-conceived counterparts. This is a form of evolution.
Central to this mechanism is the restriction of women to the process of making and raising babies with the understanding that god will provide for them. And as he never does, the books all come with
instructions on how to take what adherents need from those who do not deserve to have it.
The religions which did not include this, did not survive. Islam is only a little better at it than xians or Jews. And you are kidding yourself if you think your currently dormant branch would not
act exactly as any militant fundamentalist would when conditions inevitably merit it.
Your leadership would eagerly turn to Joshua or they would be usurped by leaders who would. And you would do whatever they would have you do, in service of your god.
2.5 / 5 (34) Oct 15, 2012
But really, all you need to do is profess your belief in the existence of god. This gives others with more extreme views on how to properly serve him, the right to do what they do. To grow well
beyond their ability to live within their means, to blame the heathen for their woes, and then to seek to take what he has.
This violence and misery is on your head. Ask your god to forgive you for it. Or, you could turn the other cheek away from superstition altogether and help to end this whole miserable process.
3.8 / 5 (13) Oct 15, 2012
Haha, you cannot pin blame for what other people do, on my beliefs.
You can try, but water off a duck's back.
Now take your medication and calm down.
2.5 / 5 (34) Oct 15, 2012
You proclaim there is a god who wants your worship, who will grant your wishes and give you immortality. You support the belief system that others use to rape torture murder, bomb funerals and shoot
little girls in the face. Same god, same worship, same special favors in return for service.
Same BOOKS which all say the very same things. You think your book is better because another book might say 'kill the infidel' or 'sacrifice yourself and your family' a few more times than yours?
How do you not realize that their actions are a direct result of your beliefs? How do you not realize that, given the same circumstances, you would do exactly as they are doing? You are being selfish
and irresponsible, like any addict.
2.4 / 5 (31) Oct 15, 2012
Here. Chew on a little hitchens.
(watch all segments)
2.3 / 5 (7) Oct 15, 2012
This news bit is not only silly, it's ridiculous.
1.7 / 5 (11) Oct 15, 2012
Scientists are willing to change their theories to accommodate new evidence; godlovers are not.
@Jesus Fronter This is not true. I have changed my stance on numerous aspects of Christianity in the light of coming acquainted with science.
So IF you ever become a scientist you will have extinguished your silly beliefs. For the moment you spew silly fairy tales all over science sites.
2.2 / 5 (6) Oct 15, 2012
Scientists are willing to change their theories to accommodate new evidence
Max Planck: A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is
familiar with it. Truth never triumphs — its opponents just die out and science advances one funeral at a time.
5 / 5 (1) Oct 16, 2012
The observable universe may or may not be an objective reality, but the only *practical* approach is to treat it as if it *is* an objective reality - at least until we discover a means of escape from
the observable universe.
'Escape' would seem to be a more *theological* problem.
2.4 / 5 (31) Oct 16, 2012
The observable universe may or may not be an objective reality, but the only *practical* approach is to treat it as if it *is* an objective reality - at least until we discover a means of escape
from the observable universe.
'Escape' would seem to be a more *theological* problem.
In other words you mean a fantasy. No if there is any way to escape as you say, it will be discovered by scientists and not priests. Priests have only ever discovered more useful forms of applied
sociopolitics, and more efficient ways of lining their pockets.
2.3 / 5 (15) Oct 16, 2012
The observable universe may or may not be an objective reality, but the only *practical* approach is to treat it as if it *is*
To that I say, practical to what purpose?
I have no problem playing the what if game, as long as we don't treat the if as fact. We can treat our view of reality as objective, as long as we don't start fooling others (and ourselves) into
believing that it is.
Math is perfect, but it is abstract. It can prove whether a theory works, but it cannot prove the theory.
2.4 / 5 (14) Oct 16, 2012
We can assume homogeneity and isotropy when constructing our models, and although testable, they haven't been tested, so they aren't known as facts. We can build a model as *if* the universe is
homogeneous and isotropic, we just can't lose touch with the fact that this is something we *believe* to be true. This is an assumption we make.
If the universe is found not to be homogeneous or isotropic then the models based on this belief are false.
2.5 / 5 (16) Oct 16, 2012
Our interpretation of the objective reality points to a homogeneous and isotropic universe, but the objective reality isn't dependent on our interpretation of it.
The objective reality might be a construct of the mind. It is a possibility that reality is an emergent property of consciousness. "I think, therefore I am." lol, how about, 'I think it, therefore it is.'
That is a whole other mess altogether.
2.6 / 5 (15) Oct 16, 2012
We can assume homogeneity and isotropy when constructing our models, and although testable
I'm taking this back. To prove this we'd have to test it for the entire Universe for the entire duration from beginning to end. If the universe is infinite, we'd have to run infinite tests. This is
unfeasible even with perfect technological advancements.
Homogeneity and isotropy can never be tested in an infinite universe (regardless of technological level), and in a finite universe perfect technology would be required (we'd have to test at every
location of space, and we'd have to travel to every moment of time when doing so).
So to prove this in a finite universe requires time and space travel. In an infinite universe it is impossible.
From a practical standpoint neither case is provable.
2.4 / 5 (32) Oct 16, 2012
Our interpretation of the objective reality points to a homogeneous and isotropic universe, but the objective reality isn't dependent on our interpretation of it.
The objective reality might be a construct of the mind. It is a possibility that reality is an emergent property of consciousness. "I think, therefore I am." lol, how about, 'I think it,
therefore it is.'
That is a whole other mess altogether.
Your 2nd paragraph directly contradicts your 1st. Your pretense is showing.
From a practical standpoint neither case is provable.
Neither is either relevant. Your pretense is showing. Lol.
2.3 / 5 (15) Oct 16, 2012
[The physical world] might be a construct of the mind. It is a possibility that [the physical] reality is an emergent property of consciousness.
There was no contradiction between the 2nd and 1st, the contradiction was in the 2nd due to bad wording.
If the physical world is an emergent property of consciousness, then no objective physical world exists. The objective reality in this case is consciousness; the physical world in this case becomes a construct of it.
So the 1st paragraph represents an objective physical world which exists independently from the conscious observer.
The 2nd represents a reality in which the physical world does not exist without the conscious observer.
5 / 5 (1) Oct 16, 2012
No. Just no. It's quite possible that the hypervisor of our universe uses a different physical model than our simulated universe. Any qualities of our physics are unable to describe whether they are
the limits of physics or simply our universe. At most we could prove that our universe's physics prevents a simulation of a baby universe within our own, but we can't prove that some sort of
supraphysical universe contains ours.
We could prove that we are in a simulation by breaking out of the hypervisor like a virus. Until we do, it's impossible to know whether or not we're inside a simulation.
The Singularity
1 / 5 (2) Oct 17, 2012
That's how things work. Even realtivity had 'zero proof' the moment Einstein published it.
This is'nt science, this is a joke. "
Knowing whether you exist in a simulation isn't worthwhile? I can see where people like to shy away from new knowledge.
This isn't the Truman Show, this is real life. The only simulations that exist, we create. There is no question, as we all (well, most of us) live in the real world.
not rated yet Oct 18, 2012
They just want us to THINK it's a simulation...
not rated yet Oct 18, 2012
The only simulations that exist, we create.
That's an interesting assertion. What do you base this assertion on?
While I agree that knowing whether we live in an simulation or not doesn't change much in terms of how we live our lives (as the old Buddhist proverb says: "Before enlightenment: chop wood, cook
soup. After enlightenment: chop wood, cook soup")
However I do feel (scientific) enlightenment is worthwhile. Knowledge for knowledge's sake has its merits. It broadens the mind. And there is even the possibility of interaction with the simulating
entity, as noted in the original article
(or just plain hacking/exploiting the simulation for free energy, instant travel to any part, etc. ...which would be a pretty nifty thing to have. If you're damned to live in a simulation you might
as well get the most out of it.)
1 / 5 (3) Oct 18, 2012
From first year philosophy, I thought there was no way to prove we weren't just a brain in a box.
It matters not whether we are a brain in a box or not because the question, what is this 'we' or 'I', has the same answer. Consciousness, which is the fundamental nature of reality.
2.3 / 5 (15) Oct 18, 2012
I thought there was no way to prove we weren't just a brain in a box.
this is true. By changing this line to read:
'I thought there was no way to prove we were just a brain in a box.'
The statement becomes false. The problem is, the box and brain could exist in an infinite number of possible states. So not finding it in a certain state leaves it in an infinite number of other
possible states. But alternately, finding the brain and box in one certain state proves that we are a brain in a box.
In the same way that:
-We can't prove the Universe is homogeneous and isotropic, but we can prove that it is not by finding a place that is different.
-We can't prove there is no God, but we can prove there is by finding Him.
-We can't prove that we aren't a simulation, but we can prove we are by uncovering it.
2.3 / 5 (15) Oct 18, 2012
We can prove a theory is wrong but we can't prove a theory is right.
1.8 / 5 (5) Oct 18, 2012
Devil's advocate
Is it possible to assume nothing?
Akin to the 'something from nothing' proponents.
1.8 / 5 (5) Oct 18, 2012
To assume nothing is an assumption.
Neatly sidesteps falsification.
And where language falls short.
2 / 5 (4) Oct 18, 2012
Is it possible to assume nothing?
Of course it's possible, but it violates Occam's razor. AWT is based on the assumption that the random state of the Universe is a more probable state than any other particular state, including the zero state.
[The physical world] might be a construct of the mind. It is a possibility that [the physical] reality is an emergent property of consciousness.
Yawn. So what happens when the physical world kills the mind? We know that countless minds have died but the world is still there. Go back and watch the Matrix, and remind yourself that it is just a movie.
There was no contradiction between the 2nd and 1st, the contradiction was in the 2nd due to bad wording.
Well thats the trouble when amateurs try to cook word spaghetti. (Hint - it is also the trouble when seasoned philos try to cook word spaghetti)
-Your mode of thought has been declared dead. It consistently fails to inform. So sorry.
2.6 / 5 (15) Oct 18, 2012
So what happens when the physical world kills the mind?
that's assuming that it can be. When the physical world kills the body (the brain), the consciousness of the individual may rejoin a common conscious network, it may reside there permanently, or it
may even return into a physical form through a reincarnative process. In any case, if consciousness isn't destroyed, neither does the physical world it constructs have to be. Maybe you really are the
embodiment of Otto himself.
philos[ophy]...has been declared dead
while I agree that physical modeling is best represented with geometry and calculus, words work for thought experiments and in the end sometimes lead to better models. You can't have physics without philosophy.
that's assuming that it can be. When the physical world kills the body (the brain), the consciousness of the individual may rejoin a common conscious blah
-See, you philos just want to live forever like any other religionist and you think your intellects will show you the way.
Inevitability is the new mantra. There is NOTHING beyond the pale.
words work for thought experiments and in the end sometimes lead to better models.
-Only when they are used by physicists and mathematicians who are familiar with the calculations that they represent.
You can't have physics without philosophy
-Only a philo will tell you this. YOU didn't watch the vid. Hawking, Feynman, Krauss and many others will tell you it is worthless. Scads of scientists will confirm this for you every day when they
get results and make progress without using it.
-And before you twitch, those philos you are about to mention were doing science and not metaphysics whenever they made any useful contributions.
2.5 / 5 (29) Oct 18, 2012
Dan dennett, himself a philo of note, on the uselessness of your words:
"[Others] note that my "avoidance of the standard philosophical terminology for discussing such matters" often creates problems for me; philosophers have a hard time figuring out what I am saying and
what I am denying. My refusal to play ball with my colleagues is deliberate, of course, since I view the standard philosophical terminology as worse than useless--a major obstacle to progress since
it consists of so many errors trapped in the seductively lucid amber of tradition: "obvious truths" that are simply false, broken-backed distinctions, and other cognitive illusions."
2.4 / 5 (31) Oct 18, 2012
So Otto, why not let them talk among themselves ? What do you care what they talk about? Do they not pay enough attention to you? Are you needy? Need your diaper changed? Look at Otto, look at Otto.
1 / 5 (1) Oct 21, 2012
What if we are in a meta-simulation, like a simulation inside another simulation? Or can't that happen? Yeah, it must be possible then, since we make simulations ourselves. But then again ours are not
intelligent. I guess every simulation is less advanced than the place it's from.
not rated yet Oct 27, 2012
well, shooting films may also be considered an artistic (or lyrical) way of the simulation.
1.7 / 5 (6) Oct 27, 2012
The main reason why science considers creationist theories like the simulation of the Universe is that attempts at falsifying this idea (which is futile by its very definition) can provide
another excuse for never-ending research, i.e. jobs and salaries. After all, the priests of the Holy Church maintained their legends for the same reasons.
3 / 5 (2) Nov 17, 2012
Please don't let any religious nut jobs read this..
1.6 / 5 (7) Dec 11, 2012
Creationism that denies science is a problem; however, I don't see an issue with this kind of simulation creationism, since it does not even require a god.
In fact, it seems like a good argument against a god. If we can find out that we live in a simulation, it would mean the simulation is imperfect, and why would an omnipotent being create an imperfect
|
{"url":"http://phys.org/news/2012-10-real-physicists-method-universe-simulation.html","timestamp":"2014-04-20T14:24:03Z","content_type":null,"content_length":"268369","record_id":"<urn:uuid:ada53c45-e780-47cd-aded-491ed7239934>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Preference relation that is represented by a utility function problem
February 11th 2011, 06:34 AM
Preference relation that is represented by a utility function problem
I have a problem and i'm stuck already...
Given a preference relation $\preceq$ that is represented by the utility function $u$. If $\preceq$ is monotonic, show that $\preceq$ is represented by $\bar{u}$, where $\bar{u}(0)=0$ and $\bar{u}(x) \geq 0$ for every $x \in \mathbb{R}^n_+$ (the nonnegative orthant of $\mathbb{R}^n$).
Can anybody help?
February 17th 2011, 11:18 AM
It's not clear if you want to show that any utility function with $\bar{u}(0)=0$ that represents the preferences must have the specified properties, or whether you just want to show that one exists.
I'll assume you only want to show that one exists. The simplest way to do this is to just identify a valid $\bar{u}$ and show that it represents the preference relation. There are many
possibilities, but I'll use $\bar{u}(x)= u(x) - u(0)$.
It's clear that $\bar{u}(0) = u(0) - u(0) = 0$ as required.
Now you only have to show that $\bar{u}$ represents your preference relation.
You're told that u represents the preferences, so it must be true that:
$u(x) \leq u(y) \iff x \preceq y$
You need to show that
$\bar{u}(x) \leq \bar{u}(y) \iff x \preceq y$
But you already know u() represents the preferences, so its sufficient to show that $\bar{u}$ is the same ordering as u().
$\bar{u}(x) \leq \bar{u}(y) \iff u(x) \leq u(y)$
$u(x) - u(0) \leq u(y) - u(0) \iff u(x) \leq u(y)$
This is clearly true as u(0) is constant.
If you want to show rigorously that the above condition is sufficient to show the preferences are represented by $\bar{u}$ then;
$u(x) - u(0) \leq u(y) - u(0) \iff u(x) \leq u(y) \iff x \preceq y$
$u(x) - u(0) \leq u(y) - u(0) \iff x \preceq y$
$\bar{u}(x) \leq \bar{u}(y) \iff u(x) \leq u(y) \iff x \preceq y$
The final property that you must show is that $\bar{u}(x) \geq 0$ if $x \geq 0$. This is straightforward from the definition of $\bar{u}$ and the monotonicity of u(x).
u() monotonic:
$u(x) \geq u(y) \iff x \geq y$
substitute y=0
$u(x) \geq u(0) \iff x \geq 0$
$u(x) - u(0) \geq 0 \iff x \geq 0$
$\bar{u}(x) \geq 0 \iff x \geq 0$
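The argument can also be sanity-checked numerically. A minimal sketch, using an arbitrary monotone utility of my own choosing (the particular u below is an assumption, not from the thread):

```python
import math

def u(x):
    # an arbitrary strictly increasing utility on [0, oo) -- my own choice
    return math.log(1 + x) + 7.0   # the +7 offset makes u(0) nonzero

def u_bar(x):
    # the shifted utility from the answer above
    return u(x) - u(0)

points = [0.0, 0.5, 1.0, 2.0, 10.0]

# u_bar represents the same preferences: orderings agree on every pair
for x in points:
    for y in points:
        assert (u(x) <= u(y)) == (u_bar(x) <= u_bar(y))

assert u_bar(0.0) == 0.0                   # u_bar(0) = 0
assert all(u_bar(x) >= 0 for x in points)  # u_bar >= 0 on the nonnegative reals
print("u_bar represents the same preference relation")
```

Any strictly increasing u would do here; the check only uses the ordering, which is exactly the point of the proof.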
|
{"url":"http://mathhelpforum.com/business-math/170908-preference-relation-represented-utility-function-problem-print.html","timestamp":"2014-04-21T14:07:03Z","content_type":null,"content_length":"10838","record_id":"<urn:uuid:e2d0be3c-6fcf-4667-9369-71cda823d9b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: lovely proof problem
Replies: 1 Last Post: Sep 21, 2009 8:49 AM
Re: lovely proof problem
Posted: May 6, 1999 1:04 AM
Anonymous writes:
>This is a problem I got for my computer science-discrete math class:
>Prove or disprove that the product of a nonzero rational number and an
>irrational number is irrational using one of the following: direct proof
>(of the form p --> q), indirect proof (of the form ~q --> ~p), proof by
>contradiction (so that ~p --> q is true, then ~p must be false, so p must be
What does this have to do with discrete math?
(Perhaps it's just that nobody learns what a proof is in their
non-discrete math classes any more?)
Anyway, which proof techniques can be used depends a lot on what theorems
we already have available. Since the definition of an irrational number
is based on a property it DOESN'T have, I'm betting that any proof will
turn out to involve some use of proof by contradiction somewhere along
the line, although that might be hidden in the proof of a previous theorem.
message approved for posting by k12.ed.math moderator
k12.ed.math is a moderated newsgroup.
charter for the newsgroup at www.wenet.net/~cking/sheila/charter.html
submissions: post to k12.ed.math or e-mail to k12math@sd28.bc.ca
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=372280&messageID=1138545","timestamp":"2014-04-17T19:39:03Z","content_type":null,"content_length":"15153","record_id":"<urn:uuid:ba117020-3014-41ba-bea5-018671d3070e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/users/raleung2/asked","timestamp":"2014-04-20T16:17:59Z","content_type":null,"content_length":"105358","record_id":"<urn:uuid:fb758c25-34e2-40e2-ab7e-75541152ee64>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure 24: Conditions of general aerosols during burning of a candle on January 5-6: (a) the ratio of the number concentration of aerosol particles smaller than 10 nm to that of bigger ones, and the ratio of the total surface area of all particles smaller than 10 nm to the total surface area of all particles bigger than 10 nm, in a volume unit; (b) the same two ratios with a 20 nm size threshold.
|
{"url":"http://www.hindawi.com/journals/jt/2012/510876/fig24/","timestamp":"2014-04-18T13:43:12Z","content_type":null,"content_length":"7936","record_id":"<urn:uuid:3fe74c90-472a-4eb0-9bd9-7314e70e2aeb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Menlo Park Algebra 2 Tutor
Find a Menlo Park Algebra 2 Tutor
...I'm a patient tutor with a positive, collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variable) and advanced calculus (multi-variable).
Pre-calculus skills are very valuable for success on the mathematics section of the SAT exam and the SAT Math...
22 Subjects: including algebra 2, calculus, geometry, accounting
Hello, My experience as a tutor is extensive. I have worked as a tutor for several years, tutoring most subjects, including mathematics, English and science, all ages. I am also a published author
and have excellent writing, spelling and grammar skills.
35 Subjects: including algebra 2, English, reading, physics
I am interested in tutoring elementary school aged children. I started working as a child care specialist at the age of 12. I have been a Nanny for three years for three girls, and I have taught
for over 12 years.
16 Subjects: including algebra 2, reading, English, autism
As your tutor, I am thoroughly trained in physics and mathematics. Over the past three years, I have the privilege to tutor many students in the Cupertino School District, and all of them achieved
their GPA goals, i.e., became straight A students. All my private SAT students earned 790-800 in math...
15 Subjects: including algebra 2, calculus, physics, statistics
...The way I tutor students is to enable them to understand mathematical concepts by teaching them the language of mathematics. After all, math has its own language and when some concepts appear
unclear or confusing it is because the person does not fully understand the language. I am fully versed in both Common Core State standards as well as California Standards in Math.
7 Subjects: including algebra 2, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/Menlo_Park_algebra_2_tutors.php","timestamp":"2014-04-20T21:03:52Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:69868d78-12c2-4035-9f4b-65ea9b57f8ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is the Relationship Between Journal Impact Factors and Article Citations Growing Weaker?
Three information scientists at the Université du Québec à Montréal are claiming that digital journal publication has resulted in a weakening relationship between the article and the journal in which
it is published. They also claim that highly-cited articles are increasingly being found in non-highly-cited journals, resulting in a slow erosion of the predictive value of the journal impact
Their article, "The weakening relationship between the Impact Factor and papers' citations in the digital age," by George Lozano and others, was published in the October issue of the Journal of the American Society for Information Science and Technology (JASIST). Their manuscript can also be found on the arXiv.
The paper continues a controversy between those who believe that digital publishing is narrowing the attention of scientists and those who believe it is expanding it. There are theories to support
each viewpoint.
As it requires access to a huge dataset of citation data in order to answer the hypothesis — data that greatly exceeds most of our abilities to acquire and process — we need to focus on evaluating
the specific methods of each paper. Different kinds of analysis can return very different results, especially if the researchers violate the assumptions behind statistical tests. Large observational
studies with improper controls may detect differences that are the result of some other underlying cause.
No study is perfect, but the authors need to be clear what they cannot conclude with certainty.
The Lozano paper is based on measuring the relationship between the citation performance of articles and the impact factor of their journal. The authors do this by calculating the coefficient of
determination (R^2), which is used to measure the goodness of fit between a regression line and the data it attempts to model. Lozano repeats this calculation for every year from 1900 to 2011,
plotting the results and then attempting to fit a new regression line through these values.
There are a number of methodological problems with using this approach:
1. R^2 assumes that the data are independent observations, which they are not. If we plot the relationship between article citations for each article (X) and the impact factor of their journal (Y),
then Y is not a free variable — it takes exactly the same value for every article within a journal. Moreover, the calculation of the journal impact factor is based on the citation performances of each article, meaning that
the two numbers are correlated. The result is that when calculating the R^2, larger journals and journals with higher impact factors will have disproportionate influence on their results.
2. Attempting to fit a grand regression line through the R^2 values for each year also assumes that a journal's impact factor is independent from year to year, which, by definition, it is not. The impact factor for year Y is correlated with the impact factor at year Y+1 because half of the articles published in that journal are still being counted in the construction of the next year's impact factor.
3. The authors assume that the journal article dataset is consistent over time. Lozano uses data from the Web of Science which has been adding more journals over its lifetime. In 2011, for example,
it greatly expanded its coverage of regional journals. Over a decade ago, it added a huge backfile of historical material–the Century of Science. Lozano does not control for the increase of
journals in his dataset nor the growth of citations, meaning that their results could be an artifact of the dataset and not an underlying phenomenon.
4. Last, the authors assume natural breakpoints in their dataset, which appear to be somewhat arbitrary. While the authors postulate a difference starting in 1990, they also create a breakpoint at
1960 but ignore other obvious breakpoints in their data (look at what happens beginning in 1980, for instance). If you eyeball their figures, you can draw many different lines through their data
points, one for every hypothesis you wish to support. There have been many developments in digital publishing since the 1990s that don’t seem to enter the discussion. While the authors try to
make a causal connection between the arXiv and physics citations, for example, there is no attempt to look for other explanations such as institutional and consortial licensing, journal bundling
(aka, "The Big Deal"), not to mention the widespread adoption of email, listservs, or the graphical web browser. There is no discussion of other possible causes or explanations in their paper.
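Point 1 can be illustrated with a toy simulation. This is a hedged sketch with invented journals and citation counts, not a reanalysis of the Lozano dataset: it computes the article-level R^2 with each article "predicted" by its journal's mean, and shows that swapping the same pair of within-journal spreads between a 1000-article journal and a 10-article journal changes R^2 dramatically.

```python
import random

random.seed(0)

def r_squared(pairs):
    """pairs = [(article_citations, journal_mean_citations), ...]"""
    xs = [c for c, _ in pairs]
    grand = sum(xs) / len(xs)
    ss_tot = sum((c - grand) ** 2 for c in xs)
    ss_res = sum((c - m) ** 2 for c, m in pairs)
    return 1 - ss_res / ss_tot

def journal(mean, spread, n_articles):
    arts = [random.gauss(mean, spread) for _ in range(n_articles)]
    m = sum(arts) / len(arts)
    return [(a, m) for a in arts]

# Scenario A: the *small* journal carries the large within-journal spread.
a = journal(mean=50, spread=2, n_articles=1000) + journal(mean=5, spread=20, n_articles=10)
# Scenario B: the *large* journal carries the large within-journal spread.
b = journal(mean=50, spread=20, n_articles=1000) + journal(mean=5, spread=2, n_articles=10)

print(round(r_squared(a), 2), round(r_squared(b), 2))
# Same pair of spreads, very different R^2: the 1000-article journal dominates.
```

The point is not that either R^2 is "wrong" in isolation, but that the statistic is driven almost entirely by whatever happens inside the biggest journals.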
The paper reads as if the conclusions have been written ahead of the analysis, conclusions which included the following:
Online, open-access journals, such as in the PLoS family of journals, and online databases, such as the ArXiv system and its cognates, will continue to gain prominence. Using these open-access
repositories, experts can find publications in their respective fields and decide which ones are worth reading and citing, regardless of the journal. Should the relationship between IFs and
papers’ citations continue to weaken, the IF will slowly lose its legitimacy as an indicator of the quality of journals, papers, and researchers.
This may explain the cheers heard around the altmetrics communities when this article was first published.
I’ve had a couple of great discussion with colleagues about this paper. We all agree that Lozano and his group are sitting on a very valuable dataset that is nearly impossible to construct without
purchasing the data from Thomson Reuters. My requests to see their data (even a subset thereof) for validation purposes have gone unheeded. New papers with new analyses are forthcoming, I was told.
Discussions with colleagues surfaced two different ways to analyze their dataset. Tim Vines suggested using the coefficient of variation, which is a more direct way to measure the distribution of
citations and controls for the performance of each journal. I suggest setting up their analysis as a repeated measures design, where the performance of each journal is observed every year over the
course of the study. We all agreed that the authors are posing an interesting question and have the data to answer it. Unfortunately, the authors seemed too eager to make strong conclusions from
inappropriate and rudimentary analyses, and the authors’ unwillingness to share their data for validation purposes does not give me confidence in their results.
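The coefficient-of-variation approach is simple to sketch. The journals and per-article citation counts below are invented purely for illustration:

```python
from statistics import mean, stdev

def coeff_of_variation(citations):
    # spread of per-article citations within one journal, normalised by
    # the journal's own mean, so journal size and impact factor drop out
    return stdev(citations) / mean(citations)

journal_a = [10, 12, 9, 11, 10, 8]    # citations cluster near the journal mean
journal_b = [1, 40, 2, 35, 0, 22]     # citations are decoupled from the journal

cv_a = coeff_of_variation(journal_a)
cv_b = coeff_of_variation(journal_b)
print(f"CV(A) = {cv_a:.2f}, CV(B) = {cv_b:.2f}")
# If article citations were decoupling from journals over time, the
# field-average CV per year should trend upward.
```

Because each journal is normalised by its own mean, a big journal contributes one number to the field average just like a small one does.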
19 thoughts on “Is the Relationship Between Journal Impact Factors and Article Citations Growing Weaker?”
1. R² doesn’t make any assumptions about independence: it’s simply a summary of the variation in the data. it doesn’t matter that the X is the same for one journals, R² is looking at the variation
over the journals. I’m not sure if the correlation between IF and citations is because citations are included in both numerator & denominator – the way I understood it, citations in year t are
ergresssed against IF calculated from citations in years t-1 and t-2.
I think the question of whether R² or CV is a good measure depends on the data – you'd have to look at the plots. I also agree the change-points are horrible: I think I'd fit a spline to the time series.
The half-life of citations might also be interesting to look at: I think a higher half-life will reduce the R², and this might account for some of the decline.
A lot of the analysis of this data would require playing around with it to see how what it looks like, so I agree that having the data available is important for evaulating the paper.
Posted by | Nov 13, 2012, 6:03 am
□ Bob, thanks for the correction. R^2 alone does not assume independence, but using R^2 to fit a linear regression line through the data most certainly does assume independence.
Posted by | Nov 13, 2012, 7:26 am
☆ I’m not quite sure what the right analysis is for this problem, but I’m fairly sure that the R^2 is the wrong one. They basically assembled a gigantic list for each year that looked like
Journal   Article   IF   citations
Nature    #1        30   130
Nature    #2        30   110
Science   #1        28   150
Science   #2        28    70
... and so on, for ~1 million articles
The signal we’re trying to detect is one of authors citing articles independent of the Impact Factor of the journal they’re in, and this should manifest as an increase in the spread of
citation rates within journals.
Lazano et al’s analysis does sort of do this, but with two oddities. First, as Phil points out, setting the data up like this means that some journals have orders of magnitude more
entries than others and thus more influence on the R^2. Since it’s the spread of citations within journals that we’re interested in, the journal and not the article is the natural unit of
Next, the spread of citations within a journal would be given by the variance of citations, which is
SUM( [citations article i - mean no. citations]^2)
However, the equivalent for Lozano et al is calculated as:
SUM( [citations article i - Impact Factor]^2)
and, as everyone here knows, the IF is not the mean number of citations: it includes citations to editorial material that are not included in the article count, and there’s an unknown
correction factor. Big differences between journals in the amount of citable editorial material or changes in either over time could account for some of the patterns that they observe.
I’d therefore be be interested to see these data reanalysed using the coefficient of variation of citations within a journal (i.e. st. dev. (citations) / mean (citations), as this removes
the effect of journal size and IF. Plotting the average coefficient of variation through time for a particular field should show whether the spread of citation rates really is increasing.
Testing this statistically would (as Phil says) involve a time series analysis.
Posted by | Nov 13, 2012, 3:20 pm
○ R^2 doesn’t measure the strength of a relationship. It could arguably be considered to measure the strength of the relationship relative to the noise in the data, but this isn’t the
same thing unless the noise and any biasing factors is constant (and the noise won’t be constant). The independence issue is therefore a bit of a red herring. Far more fundamental is
the point that the R^2 could increase even in the strength of the impact-citation link decreased and vice versa.
Modeling the citation pattern over time is a better way to go (multilevel or time series analyses could accomplish this). Even a crude look at the unstandardized regression slope from
year to year would be better than looking at R^2 (but not ideal).
Posted by | Nov 14, 2012, 8:39 am
2. Phil, what possible effect do you think ISI’s occasional revisions to inclusion criteria might have on these inflection points? The big journals can introduce sweeping innovations more easily,
but also can get caught out if ISI changes inclusion criteria. Did the authors adjust for these well-known shifts within the citation data? Some of them (Lancet’s “Brief Reports” issues in the
1990s comes to mind, but there have been others) probably took a lot of citations out of play for IF calculations.
Posted by | Nov 13, 2012, 7:39 am
□ If you take a close look at their figures, you’ll notice that their annual coefficient of determination jumps quite a bit from year to year. This may be a factor of the kind of adjustments
that are made to high profile journals like The Lancet that publish a lot of articles will have a disproportionate effect on the calculation of each year’s R^2.
However, I imagine that major changes to Web of Science would have a much larger effect, for instance, when WoS starts indexing a few thousand more regional journals, since the citations
won’t gets distributed evenly–a disproportionate number of them will be directed toward high-impact journals.
Posted by | Nov 13, 2012, 7:55 am
3. So, if your critique is valid, why was this paper published in the first place? Presumably this is a reputable journal. That it seems to have generated such interest just goes to show how little
correlation there is between a paper’s quality and its impact!
Posted by | Nov 13, 2012, 9:07 am
□ I don’t know. They say that they’re working on follow up papers, but they should probably focus on writing a corrigendum/retraction for this one.
Posted by | Nov 13, 2012, 3:24 pm
☆ Without the data, I cannot say that their findings are right or wrong. The authors are making a pretty bold truth-statement without being able to defend it by sharing their data. This
makes me suspect that their findings are not defendable.
The authors claim that their license with Thomson Reuters prevents them from sharing their data. If they could not agree to even basic allowances –such as sharing for the purposes of
validation– the authors should have considered this before publishing a scientific paper on the topic.
You can’t have it both ways.
Posted by | Nov 27, 2012, 10:31 am
4. But we already knew that there was no relationship between the impact factor and citations. I wrote about it
and I gave some links to others who wrote about it before me.
Posted by | Nov 13, 2012, 9:37 am
5. I’m not a scientist and for the most part the math discussed here is above me, however I do enjoy reading the the scholarly kitchen. As I read this article it dawned on me that the data and the
tools available to mine the data could provide an opportunity to determine an “ancestry of citations” if you will. One could map out which articles are siblings, parents, or cousins. I just
thought that might be compelling to someone.
Posted by | Nov 13, 2012, 10:40 am
□ There’s quite a bit of data on this in Web of Science. Parent-child relationships are directly visualized as cited papers and citing papers associated with each record. Grandparent/grandchild
papers can be gathered using the citation map. There are also “cousins” – or “related records” which mean that the two papers share one or more ancestors. Dr. Henry Small started this work
many years ago at ISI as “co-citation analysis.” It can reveal some of the more subtle topical relationships among papers that might not be apparent using keywords or direct citation
Many, many studies have been written over the years using citation ancestry to trace the way a subject or field has developed across time.
Posted by | Nov 13, 2012, 12:06 pm
☆ Thank you Marie McVeigh, I was not aware but glad someone thought of it and it is being used in a productive way.
Posted by | Nov 13, 2012, 12:43 pm
6. Just FYI, the three scholars who did this study aren’t all associated with Université du Québec à Montréal; at least one is with Université de Montréal.
Posted by | Nov 13, 2012, 12:06 pm
□ Actually, all three are associated to the Université du Québec à Montréal; one is also associated to the Université de Montréal.
Posted by | Nov 13, 2012, 3:59 pm
7. The decreasing relationship makes sense if you consider greater competition among journals. If editors did not actively solicit papers, then all papers would fall into a journal that matched
their expected citations. But there is competition! I work hard to seek out top authors and convince them to publish in our journal. If I out-compete my fellow editors, our Impact Factor will
hopefully rise, if I’m less successful, it will fall.
Posted by | Nov 14, 2012, 4:09 pm
1. Pingback: - Nov 13, 2012
2. Pingback: - Nov 13, 2012
3. Pingback: - Dec 31, 2012
|
{"url":"http://scholarlykitchen.sspnet.org/2012/11/13/is-the-relationship-between-journal-impact-factors-and-article-citations-growing-weaker/","timestamp":"2014-04-20T01:26:19Z","content_type":null,"content_length":"101238","record_id":"<urn:uuid:c0045b5d-e860-4732-8b10-6ff79d64b8ae>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Raabe's Theorem
Date: 8/23/96 at 4:49:25
From: Manuel Ojeda Aciego
Subject: about Raabe's theorem
Dear Dr. Math:
This question is about Raabe's result for the convergence of a
numerical series of non-negative terms.
It states the convergence or divergence of the series by means of the
result m of the following limit
  lim    n ( 1 - a(n+1)/a(n) ) = m.
n->Infty
If m > 1 then the series converges, if m < 1 then the series diverges,
and (here is my problem) if m = 1 nothing can be asserted.
The harmonic series is an example of a divergent series having m = 1;
what I am looking for is a convergent series having m = 1,
in order to convince myself of the last part of the theorem.
Best regards,
Manuel Ojeda Aciego
Dept. Matematica Aplicada
Universidad de Malaga
Date: 8/23/96 at 19:39:50
From: Doctor Tom
Subject: Re: about Raabe's theorem
How about a(n) = 1/(n*log(n)*log(n))?
This converges (See Knopp, Infinite Sequences and Series, for
example), and with a little work, I think you can convince yourself
that the limit of n(1 - a(n+1)/a(n)) is also 1.
-Doctor Tom, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
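Doctor Tom's example is easy to check numerically. A sketch, assuming natural logarithms; the 1 + 2/ln n asymptotic in the comment is my own back-of-the-envelope expansion, not from the original exchange:

```python
import math

def a(n):
    # Doctor Tom's convergent series term: a(n) = 1 / (n * ln(n)^2)
    return 1.0 / (n * math.log(n) ** 2)

def raabe(n):
    # the quantity whose limit Raabe's test examines
    return n * (1 - a(n + 1) / a(n))

for n in (10**3, 10**6, 10**9):
    print(n, raabe(n))
# The values decrease toward 1 (roughly like 1 + 2/ln n), so the test
# gives m = 1 even though the series converges: the inconclusive case.
```

Together with the harmonic series (divergent, also m = 1), this is exactly why the m = 1 case of the theorem cannot be sharpened.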
|
{"url":"http://mathforum.org/library/drmath/view/52042.html","timestamp":"2014-04-19T18:10:38Z","content_type":null,"content_length":"6284","record_id":"<urn:uuid:d16dfe4f-da02-4355-bbcc-e61314db6a4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I want to learn how to solve this (probably basic) word problem.
February 13th 2013, 05:12 PM
I want to learn how to solve this (probably basic) word problem.
I would prefer to be able to write an equation that could solve this, but I cannot. I have spent hours thinking about this word problem. It has a picture, but I'll just describe it.
It has been a while since I was in a math class and this problem really blows my mind (Angry). Not in an angry way like the emoticon.
There are 3 different colored spheres. Red, white, and blue.
1 red sphere and 1 blue sphere weigh equally as much as 6 white spheres.
1 red sphere and 2 white spheres weigh equally as much as 3 blue spheres.
Each color of sphere has a different weight. How many white spheres will balance with 1 red sphere?
It's either 4 or 2; it's obviously not the other two multiple-choice answers, 8 and 6.
Thank ya.
February 13th 2013, 06:02 PM
Re: I want to learn how to solve this (probably basic) word problem.
Let r be the weight of one red sphere, w the weight of one white, and b the weight of one blue. We have
r+b=6w and r+2w=3b ... What you need to do is to get rid of the b. Here's what I did. Rewrite the equations as
r - 6w + b = 0
r + 2w - 3b = 0
Multiply the top equation by 3 and add the two equations together.
3r - 18w + 3b = 0
r + 2w - 3b = 0
4r - 16w = 0
Can you take it from there?
This is just a matter of practice. You'll get it.
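To double-check the elimination, measure everything in white-sphere weights (taking w = 1 is just a convenient choice of unit); a quick sketch:

```python
from fractions import Fraction

w = Fraction(1)        # take the white sphere's weight as the unit
# The two balances: r + b = 6w  and  r + 2w = 3b.
# Substituting b = 6w - r into the second gives r + 2w = 18w - 3r,
# i.e. 4r = 16w, so r = 4w.
r = 16 * w / 4
b = 6 * w - r

assert r + b == 6 * w          # red + blue balances six whites
assert r + 2 * w == 3 * b      # red + two whites balances three blues
assert r == 4 * w and b == 2 * w
print("one red sphere balances four white spheres")
```

Using exact fractions avoids any floating-point fuzz, and both original balance conditions are verified directly.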
February 13th 2013, 07:03 PM
Re: I want to learn how to solve this (probably basic) word problem.
Oh wait nvm, I get it r = 4w.
Thanks a ton.
but why did you multiply the top by three?
EDIT: oh you multiplied it by three so you could eliminate the b.
Thanks again.
I can't figure out how to mark the thread as solved. :(
|
{"url":"http://mathhelpforum.com/algebra/213084-i-want-learn-how-solve-probably-basic-word-problem-print.html","timestamp":"2014-04-18T13:43:27Z","content_type":null,"content_length":"5436","record_id":"<urn:uuid:d74e0b0f-fc70-4d0c-a2db-20b8bfc4788f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please Help!!! A surveyor needs to determine the distance between 2 points that lie on opposite banks of a river. A figure shows that 300 yards are measured along one bank. Angles from each side are 62 (A) and 53 (C) degrees. Find the distance between A and B to the nearest tenth of a yard. Drawing included.
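A hedged worked solution, since the drawing isn't reproduced here: I'm assuming the 300-yard baseline is the side AC along the bank, with the 62° angle at A and the 53° angle at C, and B the point across the river. Under that labelling the law of sines gives AB directly:

```python
import math

A, C = math.radians(62), math.radians(53)
B = math.pi - A - C                 # the three angles sum to 180 degrees
ac = 300.0                          # baseline along the bank, opposite angle B

# Law of sines: AB / sin(C) = AC / sin(B)
ab = ac * math.sin(C) / math.sin(B)
print(f"AB = {ab:.1f} yards")       # about 264.4 under the labelling assumed above
```

If the figure labels the sides differently, the same law-of-sines setup still applies with the roles of the angles swapped accordingly.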
|
{"url":"http://openstudy.com/updates/510005a0e4b00c5a3be67ba2","timestamp":"2014-04-17T18:35:27Z","content_type":null,"content_length":"40597","record_id":"<urn:uuid:770d23af-a5a8-4708-a6af-f75574154fc7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Origins of functional field arithmetic
up vote 6 down vote favorite
Background: By function field, we mean a finite extension of the field of rational functions of one variable over a finite field with $p$ elements. Class field theory for function fields was
established by Chevalley in an Annals paper. An axiomatic characterization for number fields and function fields was established by Artin and Whaples, thus finally putting on firm ground the analogy
between function fields and number fields. I have seen allusions that the germ of the idea came from Gauss. However, since fields were not defined then, this was not a definitive statement.
Question: When was a definitive conjecture first made in mathematical history that there is a major analogy between algebraic number fields and function fields over finite fields?
algebraic-number-theory class-field-theory ho.history-overview
Emil Artin's early work (maybe even his thesis) was groundbreaking here. I'm leaving this as a comment rather than an answer because I'm not sure, but I hope that someone will be able to follow up
on it. – Pete L. Clark Feb 25 '10 at 1:52
Artin's thesis is on Riemann hypothesis. .. – Regenbogen Feb 25 '10 at 1:55
Right -- isn't that a link between number fields and function fields? – Pete L. Clark Feb 25 '10 at 1:55
Yes, but was he the first one to say that such a link exists? – Regenbogen Feb 25 '10 at 1:57
As I said, I'm not sure, but at least this gives an upper bound (1921), and my guess is that this is, if not the very beginning, pretty close to it. – Pete L. Clark Feb 25 '10 at 2:03
4 Answers
Maybe there are some perceptions of the analogy in the work of Gauss, but for certain the close relation between Z and F[x] where F is a finite field was established in 1857 in a paper of
Dedekind, which went so far as to formulate a version of the quadratic reciprocity law when F has odd characteristic.
In 1919, Kornblum used L-functions on F[x] to prove an analogue of Dirichlet's theorem (for irreducible polynomials in arithmetic progression). While those L-functions could after the fact
be viewed as L-functions of characters on a function field, Kornblum worked with characters on (F[x]/(f))*, much as one can prove Dirichlet's theorem without any hint that there are number
fields (inside cyclotomic extensions) lying in the background.
In 1921, Artin's thesis on quadratic function fields laid out probably for the first time how close number fields and function fields over finite fields (not only Q and rational function
fields) are. For example, he defined concepts of real and imaginary quadratic function fields, showed units in the "real" case could be found by something like a continued fraction
algorithm (if I'm not mistaken) and he computed examples of their zeta-functions and could verify in those examples that the Riemann hypothesis was true.
A simple and convenient place to find a treatment of this history is Peter Roquette's website
http://www.rzuser.uni-heidelberg.de/~ci3/manu.html
and a direct link to his paper laying out this analogy is given there.
I also recommend the first chapter of A. D. Thomas' "Zeta-Functions: An Introduction to Algebraic Geometry" for a careful explanation of what Artin did in his thesis, particularly the
confusion over affine vs. projective notions (which weren't really cleared up until the work of F. K. Schmidt on the zeta-function a few years after Artin's thesis).
In addition to the work on this analogy between number fields and function fields over finite fields, you shouldn't forget about the analogy between number fields and function fields over
the complex numbers. In 1882, Dedekind and Weber developed the theory of Riemann surfaces purely algebraically, using ideas from algebraic number theory. They went so far as to say their
work applied not just to function fields over C, but over any algebraically closed field of characteristic 0, such as the field of algebraic numbers.
An interesting precursor concerning units in "real" quadratic function fields is a paper by Abel, "Sur l'integration de la formule differentielle $\frac{\rho \,dx}{\sqrt{R(x)}}$, $R$ et $\rho$ étant des fonctions entières." There he proves (over the complex numbers) that there is a unit precisely when the appropriate continued fraction expansion is (eventually) periodic.
The only thing one would need to add in the finite coefficient field case is that because the appropriate polynomials appearing in the continued fraction expansion are of bounded degree
they will eventually repeat. – Torsten Ekedahl Feb 25 '10 at 5:33
One of the most early observations on the analogy between function fields and number fields is the little known work by
• E. Heine, Fernere Untersuchungen über ganze Functionen, J. Reine Angew. Math. 48 (1854), 243--266
who started investigating quadratic forms with coefficients in polynomial rings. Dedekind's and Weber's contributions have already been mentioned, but not those of Robert König, who
investigated the connection between quadratic function fields and quadratic forms with polynomial coefficients in various publications prior to Artin (and perhaps just as unknown as
Heine's contribution):
• Über die quadratischen Formen mit rationalen Funktionen als Koeffizienten, Monatsh. f. Math. Phys. 23 (1912), 321-346
• Beiträge zur Arithmetik der hyperelliptischen Funktionenkörper, J. Reine Angew. Math. 142 (1913), 191-210
He also explicitly stated the analogy between function fields and number fields in the titles (and the body) of the following two papers
• Arithmetisch-funktionentheoretische Parallelen, Jahresber. DMV 23 (1914), 181--192
• Funktionen- und zahlentheoretische Analogien, Jahresber. DMV 28 (1920), 208-213.
The constant fields investigated by König have characteristic 0, however.
Encouraged by Regenbogen, I am upgrading my comment to an answer: at least one of the first, and greatest, promoters of the analogy between number fields and function fields in much the
way we view it today, was Emil Artin. His 1921 thesis seems to be the first work to consider the Riemann hypothesis in the function field case.
As Regenbogen points out, a very nice article for historical information about this is Roquette's The Riemann hypothesis in characteristic p, its origin and development:
I believe that I had in fact read this article previously (at least the beginning), and that my guess was a sort of second-rate memory.
Note that on p. 7 Roquette also mentions results of Dedekind in 1857 -- e.g. quadratic reciprocity for $\mathbb{F}_q[t]$ -- that already demonstrate some awareness of the analogy.
Whether you wish to consider Dedekind's work as early history or prehistory seems, as far as I can tell, to be a matter of taste.
If you want to go back to smallest germ of the idea that functions behave like numbers, then it is perhaps in Stevin's L'arithmetique of 1585, where he uses the Euclidean algorithm to find
the gcd of two polynomials. I'm not sure what Gauss's contribution may have been -- maybe Gauss's lemma? As KConrad has pointed out, a big contribution was made by Dedekind and Weber in
1882 (J. reine und angewandte Math. 92, 181-290). In the same journal (pp. 1-122) there is a less reader-friendly paper by Kronecker which seems to cover some of the same ground.
Gauss' contribution was more substantial than Gauss' lemma. See G. Frei, The Unpublished Section Eight: On the Way to Function Fields over a Finite Field, pp. 159--198 in "The Shaping of
Arithmetic after C. F. Gauss's Disquisitiones Arithmeticae" (C. Goldstein, N. Schappacher, J. Schwermer ed.), Springer, 2007. I can access a free copy of this article at springerlink.com
/content/pg063242u1852m05/fulltext.pdf but this access might be related to an institutional affiliation so maybe not everyone can see it. – KConrad Feb 25 '10 at 3:01
Thanks for the link! I didn't realize that this book is now online. For others with the requisite institutional affiliation, the whole book may be found at springerlink.com/content/
g68q36/?k= – John Stillwell Feb 25 '10 at 3:40
The Weil conjectures were motivated by a manuscript of Gauss. That was what I had in mind. – Regenbogen Feb 26 '10 at 18:49
|
{"url":"https://mathoverflow.net/questions/16343/origins-of-functional-field-arithmetic","timestamp":"2014-04-17T13:18:04Z","content_type":null,"content_length":"76373","record_id":"<urn:uuid:d7cc057e-8dd6-447d-b5f1-b08cafb718cd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Categorical Foundations
Colin Mclarty cxm7 at po.cwru.edu
Sun Jan 18 10:59:37 EST 1998
Reply to message from friedman at math.ohio-state.edu of Sat, 17 Jan
>Remember, you are talking to
>elementary undergraduates who may eventually become statisticians,
>physicists, business leaders, philosophers, lawyers, etcetera.
Absolutely. And so they will not raise objections based on
any mismatch between what I say and what they expect from prior
exposure to ZF.
As at OSU, we categorists have to choose one of many
possible syllabi. Here I have chosen a conservative one, closely
approximating yours so as to give the clearest proof that the
categorical approach is logically and pedagogically possible.
I have in fact published an article with a related sketch in
"Numbers can be just what they have to" in NOUS 27 (1993) 487-498.
>1. Sets are equal if and only if they have the same elements. Natural
>numbers are not sets. Only sets have elements. Every object is either a set
>or a natural number. Analog?
Subsets of a given set A are equal if and only if they have
the same elements. There is a set of natural numbers. Only
sets have elements. Every object is either a set or a function.
An element of a set A is a function 1-->A.
>2. The set N of all natural numbers exists. Analog?
There is a set of natural numbers--I included this above.
>3. The successor of any natural number is a natural number. Only natural
>numbers have a successor. Analog?
There is a successor function from natural numbers to themselves.
>3. 0 is a natural number which is not the successor of any natural number.
The same, verbatim
>4. Any two natural numbers with the same successor are equal. Analog?
The same, verbatim
>5. For any objects x,y,z: x = x. if x = y then y = x. if x = y and y = z
>then x = z. Analog?
The same, verbatim
>6. For any condition expressible with for all, there exists, and, or, not,
>if then, iff, membership, being a natural number, 0, and successor, if it
>holds of 0, and if whenever it holds at a natural number x, it holds of its
>successor S(x), then it holds of all natural numbers. Analog?
The same, verbatim
>7. For any condition expressible as above, and for any set A, there is the
>set of all elements of A that obey that condition. Analog?
For any condition expressible as above, and for any set A, there
is the subset of all elements of A that obey that condition.
>8. For any set A, there is the set of all subsets of A. Analog?
The same, verbatim
>9. For any two objects x,y, there is the set consisting of exactly x and y,
>written {x,y}. Analog?
The same verbatim
>10. For any set x, there is the set consisting of the elements of the
>elements of x. Analog?
The same, verbatim
>11. The ordered pair <x,y> of two objects x,y, is {{x},{x,y}}. We prove
>that <x,y> = <z,w> iff x = z & y = w. Analog?
No analogue, we have no use for it.
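The pair property quoted in item 11 is easy to check mechanically. Here is a minimal Python sketch, with frozensets standing in for sets (the function name `pair` is just an illustration, not notation from the exchange):

```python
def pair(x, y):
    """Kuratowski ordered pair: <x, y> = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# <x,y> = <z,w> holds exactly when x = z and y = w
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)
# the degenerate pair <x,x> collapses to {{x}}
assert pair(1, 1) == frozenset({frozenset({1})})
```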
>12. We prove that the Cartesian product AxB of any two sets A,B, exists.
We stipulate that the Cartesian product AxB of any two sets
A,B exists.
>13. We prove that there are unique functions on NxN into N obeying the
>usual conditions for addition and multiplication and exponentiation. Analog?
The same, verbatim
All the rest are the same, verbatim. I include them for easy reference.
>14. We prove the usual collection of basic equalities of arithmetic
>involving addition, multiplication, and exponentiation, on N. Analog?
>15. We define the ordering on N in terms of addition. Analog?
>16. We prove the usual collection of basic inequalities of arithmetic
>involving the usual ordering on N, addition, multiplication, and
>exponentiation, on N. Analog?
>17. We write 1 for S(0). The concrete integers consist of 0, and the
>ordered pairs <n,0>, and the ordered pairs <n,1>. The set of all concrete
>integers exists. Analog?
>14. We explicitly extend the usual ordering on N, and addition,
>multiplication, to the concrete integers. We prove the usual collection of
>basic equalities and inequalities of arithmetic involving these notions.
>15. We define the concept of ordered ring and discrete ordered ring. Analog?
>16. We define the concept of Archidmedian ordered ring. We define the
>general concept of structure and isomorphism between structures. We prove
>that any two Archimedean ordered rings are isomorphic by a unique
>isomorphism. Analog?
>17. We define the concrete rationals as certain ordered pairs of concrete
>integers (reduced form with strictly positive denominators). We define
>order, addition, multiplication. We prove the usual equalities and
>inequalities. Analog?
>18. We define the concept of ordered field and prove that the concrete
>rationals form the least ordered field up to isomorphism in an appropriate sense. Analog?
>19. We define the concept of Dedekind cut in the concrete rationals and
>prove basic facts about these cuts. Analog?
>20. We prove the existence of the set of all Dedekind cuts in the concrete
>rationals. We call these Dedekind cuts the concrete real numbers. We define
>the basic ordering, and addition and multiplication on the set of all real
>numbers. Analog?
>21. We prove the basic equalities and inequalities involving the basic
>ordering, and addition and multiplication on the set of all real numbers.
>In particular, we prove the least upper bound principle. Analog?
>22. We define the complete ordered fields. We prove that any two complete
>ordered fields are uniquely isomorphic. Analog?
>23. We define the (finite and infinite) sequences of real numbers as
>functions from initial segments of N into R = the set of all real numbers.
>We define Cauchy sequences, monotone sequences, bounded sequences, and
>prove the fundamental facts such as Cauchy completeness and the existence
>of limits of bounded infinite sequences. Analog?
>24. We define the continuous functions from R to R. We prove the
>intermediate value theorem. We prove the attainment of maxima and minima.
>REMEMBER: Clarity, simplicity, coherence, teachability, etcetera.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-January/000844.html","timestamp":"2014-04-16T05:59:21Z","content_type":null,"content_length":"9161","record_id":"<urn:uuid:5c90e5cd-11b5-4582-ae1c-4f1f0951b2ce>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euclid's Algorithm for Polynomials
Euclid’s Algorithm for Polynomials
Consider the division algorithm. It looks a lot like division for integers, as it should. Specifically, it’s got all the features we need for the setup of Euclid’s Algorithm, and we have an analogue
Just like we did back then, we start with two polynomials $a$ and $b$, and we divide them to find $a=q_1b+r_1$. Then if $r_1eq0$ we turn around and divide to find $b=q_2r_1+r_2$. We keep going until
we end up with a remainder of ${0}$, at which point we can’t divide anymore, and so we stop. And we must eventually stop, because the degree of the remainders keep going down.
At the end, the last nonzero remainder will be the “greatest common divisor” of $a$ and $b$. That is, it’s a polynomial $d$ that divides both $a$ and $b$ (leaving no remainder), and any other such
common divisor must itself evenly divide $d$.
Euclid’s algorithm also gives us a way of writing out the greatest common divisor as a linear combination $d=ax+by$. Just like for the integers, this marks the greatest common divisor as the linear
combination with the least degree.
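The division-and-remainder loop described above can be written out directly. Here is a minimal sketch over the rationals, with a polynomial stored as a list of coefficients, constant term first; inputs are assumed to have no trailing zero coefficients, and tracking the quotients along the way would also yield the linear combination $d=ax+by$:

```python
from fractions import Fraction

def polydiv(a, b):
    """Divide a by b (coefficient lists, constant term first); return (q, r)."""
    a, b = [Fraction(c) for c in a], [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = a[:]
    while len(r) >= len(b) and any(r):
        while r and r[-1] == 0:   # strip trailing zeros so deg(r) is honest
            r.pop()
        if len(r) < len(b):
            break
        shift = len(r) - len(b)
        c = r[-1] / b[-1]         # cancel the leading term
        q[shift] = c
        for i, bc in enumerate(b):
            r[shift + i] -= c * bc
    return q, r

def polygcd(a, b):
    """Euclid's algorithm: the last nonzero remainder, made monic."""
    while any(c != 0 for c in b):
        _, r = polydiv(a, b)
        while r and r[-1] == 0:
            r.pop()
        a, b = b, r if r else [Fraction(0)]
    lead = a[-1]
    return [c / lead for c in a]  # determined only up to a unit, so normalize

# gcd(x^2 - 1, x^2 - 2x + 1) = x - 1, i.e. [-1, 1] constant-first
print(polygcd([-1, 0, 1], [1, -2, 1]))
```

The final division by the leading coefficient is exactly the "up to a constant multiple" ambiguity mentioned above: picking the monic representative fixes a canonical greatest common divisor.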
Now how to show all this? It’s easy: just go back to the proof of Euclid’s algorithm for integers! Everything works the same way again. We just have to stick in references to the degree where
appropriate to measure the “size” of a polynomial.
In fact, if we can equip any integral domain $R$ with a degree function $u:R^\times\rightarrow\mathbb{N}$ (where $R^\times$ is the set of nonzero ring elements) so that we have a division algorithm
like those for integers and polynomials, then we’ve got the setup for Euclid’s algorithm. In this case, we say we have a “Euclidean domain”. So the point here is that $\mathbb{F}[X]$ is Euclidean,
and so it has a form of Euclid’s algorithm, and all that follows from it.
Notice that in the case of the integers we had some ambiguity about common divisors, since $\pm d$ would both work equally well. The point here is that they differ by multiplication by a unit, and so
each divides the other. This sort of thing happens in the divisibility preorder for any ring. For polynomials, the units are just the nonzero elements of the base field, considered as constant
(degree zero) polynomials. Thus the greatest common divisor is only determined up to a constant multiple, but we can easily ignore that little ambiguity. The “greatest” just has to be read as
referring to the degree of the polynomial.
1 Comment »
1. [...] relations satisfied by , and will clearly satisfy any linear combination of these relations. But Euclid’s algorithm shows us that we can write the greatest common divisor of these relations
as a linear combination, [...]
Pingback by A Lemma on Reflections « The Unapologetic Mathematician | January 19, 2010 | Reply
|
{"url":"http://unapologetic.wordpress.com/2008/08/05/euclids-algorithm-for-polynomials/?like=1&source=post_flair&_wpnonce=5092e672ce","timestamp":"2014-04-16T10:21:15Z","content_type":null,"content_length":"73581","record_id":"<urn:uuid:99a3b5e6-058f-4a2e-bea5-389654a9ac20>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
|
odds of rolling 6 of a kind
I need to know the odds of rolling 6 of the same number on a single roll of 6 standard dice. Can you please include a brief description of how your answer was calculated?
Thank you.
Re: odds of rolling 6 of a kind
Originally Posted by
I need to know the odds of rolling 6 of the same number on a single roll of 6 standard dice. Can you please include a brief description of how your answer was calculated?
There are $6^6 = 46656$ equally likely outcomes.
There is only one way to get all ones.
There is only one way to get all twos.
There is only one way to get all sixes.
So what is the answer?
Re: odds of rolling 6 of a kind
How about the odds of rolling 6 sixes?
Re: odds of rolling 6 of a kind
Originally Posted by
How about the odds of rolling 6 sixes?
The probability of rolling six 6's is $(1/6)^6 = 1/46656$.
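Both counts can be confirmed by brute-force enumeration of all $6^6$ rolls; a quick Python sketch:

```python
from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=6))        # all 6**6 = 46656 outcomes
six_of_a_kind = sum(1 for r in rolls if len(set(r)) == 1)
all_sixes = sum(1 for r in rolls if r == (6,) * 6)

print(Fraction(six_of_a_kind, len(rolls)))  # 6 favourable outcomes -> 1/7776
print(Fraction(all_sixes, len(rolls)))      # 1 favourable outcome  -> 1/46656
```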
|
{"url":"http://mathhelpforum.com/statistics/187649-odds-rolling-6-kind-print.html","timestamp":"2014-04-20T03:57:12Z","content_type":null,"content_length":"6145","record_id":"<urn:uuid:78e48b05-8bfb-4749-9408-151fe24638b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/users/michaelarose/answered/1","timestamp":"2014-04-17T01:25:23Z","content_type":null,"content_length":"119385","record_id":"<urn:uuid:81292d12-0771-4e43-84e4-3af88973ad49>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Symmetry and conservation law
You lost the [itex]\delta[/itex].
[tex]\delta S' = \delta S + \delta F(q,\dot{q})\bigg |_{t_1}^{t_2}[/tex]
And [itex]\delta F[/itex] isn't necessarily trivial under variations of trajectory. At very least, [itex]\dot{q}(t_2)[/itex] is not fixed by boundary conditions.
But anyways, the statement "Lagrangian is invariant under translation," means that if you shift the Lagrangian itself and the solution, the shifted solution is a solution to the shifted Lagrangian. In other words, yes, equations of motion remain the same.
This is all made far more explicit if you look at where it comes from, namely,
Noether's Theorem
Edit: Scratch that. I should be more careful when I answer questions this late. You don't shift the Lagrangian. Just the variables. For example, if you have two masses on a spring, the potential
energy depends only on difference of positions. So if you shift both coordinate variables by same amount, the total Lagrangian remains exactly the same. In contrast, if you have an mgy term for
gravitational potential, shifting y results in change in Lagrangian. But that makes sense. Vertical momentum is not conserved, because we are ignoring momentum transfer to Earth when we write
potential this way.
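The two-masses-on-a-spring example can be checked numerically. A small sketch (masses, spring constant, and coordinate values here are illustrative, not from the thread):

```python
def lagrangian(x1, x2, v1, v2, m=1.0, k=2.0, rest=1.0):
    # L = T - V; the potential depends only on the difference x1 - x2,
    # so a common shift of both coordinates leaves L unchanged.
    kinetic = 0.5 * m * (v1**2 + v2**2)
    potential = 0.5 * k * (x1 - x2 - rest)**2
    return kinetic - potential

a = 0.37  # arbitrary common translation
# translating both masses together: Lagrangian unchanged
assert abs(lagrangian(1.0, 2.5, 0.3, -0.1)
           - lagrangian(1.0 + a, 2.5 + a, 0.3, -0.1)) < 1e-9
# translating only one mass: Lagrangian changes, as it should
assert abs(lagrangian(1.0, 2.5, 0.3, -0.1)
           - lagrangian(1.0 + a, 2.5, 0.3, -0.1)) > 1e-3
```

This is the symmetry that, via Noether's theorem, gives conservation of total momentum for the pair; an mgy term would break the analogous check in the vertical direction.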
|
{"url":"http://www.physicsforums.com/showpost.php?p=4170192&postcount=2","timestamp":"2014-04-19T09:43:27Z","content_type":null,"content_length":"8178","record_id":"<urn:uuid:188ad419-62c4-4523-9234-661088213658>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ME 252 B
Computational Fluid Dynamics:
Wavelet transforms and their applications to turbulence
Marie Farge^1 & Kai Schneider^2
Winter 2004
University of California, Santa Barbara
^1 LMD-CNRS, Ecole Normale Supérieure ^2 CMI, Université de Provence
24 rue Lhomond 39 rue Joliot-Curie
75231 Paris Cedex 05, France 13453 Marseille Cedex 13, France
Email mailto:farge@lmd.ens.fr Email : mailto:kschneid@cmi.univ-mrs.fr
http://wavelets.ens.fr/ http://www.l3m.univ-mrs.fr/schneider.htm
Courses: Monday / Wednesday / Friday 9.00 am – 9.50 am, GIRVETZ 2123
Office hours: Tuesday / Thursday 4.00 pm – 5.00 pm, Engineering II, room 2332
Objectives of the course
Our goal is to bring the students to a level of understanding which allows them to apply wavelet methods to solve their own problems. To guarantee a good assimilation of the wavelet theory, we will
first recall the Fourier transform and the most important mathematical theorems related to it. We will then provide the students with a basic knowledge of wavelets, for both the continuous wavelet
transform, the orthogonal wavelet transform and the wavelet packet transform, without going too far into their mathematical background. We will give the basics concerning their numerical
implementation and illustrate their use with academic examples. We will illustrate the course with applications to signal and image processing, data compression, denoising and resolution of PDEs,
focusing in particular to applications in turbulence.
Content of the course
Fourier transform
Fourier integral and Fourier series, academic examples.
Properties, Parseval's theorem, convolution. Uncertainty principle,
Brillouin's information plane.
Auto-correlation function, spectrum, Wiener-Khinchin's theorem.
Discrete Fourier transform, Shannon's sampling theorem, fast Fourier transform.
Applications for analyzing the global regularity of a function, for filtering and denoising signals and images.
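As a warm-up for the exercises, the discrete Fourier transform above can be coded naively in a few lines. This is an illustrative O(N²) sketch; the fast Fourier transform computes the same sums in O(N log N):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
# Parseval's theorem, discrete form: sum |x[n]|^2 = (1/N) sum |X[k]|^2
assert abs(sum(abs(v)**2 for v in x)
           - sum(abs(v)**2 for v in X) / len(x)) < 1e-9
```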
Continuous wavelet transform
Definitions, academic examples.
Information plane, choice of the mother wavelet.
Properties, Parseval's theorem, reproducing kernel.
Boundary effects, algorithms.
Scalogram and its relation to spectrum and structure functions.
Extension to two and three dimensions.
Orthogonal wavelet transform
Discretization of the wavelet space, quasi-orthogonal representations, wavelet frames.
Orthogonal wavelet bases, properties, academic examples.
Quadrature mirror filters, multi-resolution analysis.
Fast wavelet transform algorithm.
Biorthogonal wavelets, wavelets on the interval.
Extension to two and three dimensions.
Applications for compressing and denoising signals and images.
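The simplest instance of the fast wavelet transform is the Haar case, where the quadrature mirror filters reduce to pairwise averages and differences. A minimal one-level sketch (illustrative, not part of the course hand-outs):

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: approximations and details."""
    s = 1 / math.sqrt(2)
    n = len(signal) // 2
    approx = [s * (signal[2*i] + signal[2*i + 1]) for i in range(n)]
    detail = [s * (signal[2*i] - signal[2*i + 1]) for i in range(n)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from one level of Haar coefficients."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_step(x)
# orthogonality: energy is preserved, as Parseval's theorem requires
assert abs(sum(v*v for v in x) - sum(v*v for v in a + d)) < 1e-9
# perfect reconstruction
assert all(abs(u - v) < 1e-9 for u, v in zip(x, haar_inverse(a, d)))
```

Iterating `haar_step` on the approximation coefficients gives the full multi-level fast wavelet transform; compression amounts to discarding small detail coefficients.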
Wavelet packet transform
Wavelet packets and Malvar wavelets orthogonal bases.
Information plane, information cost, information entropy.
Theoretical dimension of the representation, choice of the best basis.
Fast wavelet packet transform algorithm.
Extension to two and three dimensions.
Applications for compressing and denoising signals and images,
comparison with wavelets.
Applications numerical analysis
Wavelets and operator equations.
Norm equivalences and preconditioning of matrices.
Nonlinear approximation, adaptive grids, and error estimates.
Adaption strategy for evolution problems.
Compression of operators (BCR algorithm).
Evaluation of nonlinear terms, connection coefficients, collocation on adaptive grids.
Operator adapted wavelets (biorthogonal decompositions and vaguelettes).
Applications to turbulence
Definition and properties of turbulence.
Statistical theory of two-dimensional and three-dimensional turbulence.
Intermittency and coherent structures.
Wavelet analysis of turbulent signals from both laboratory and numerical
experiments using continuous and orthogonal wavelets.
Extraction of coherent structures in one, two and three dimensional turbulent flows,
and comparison between wavelets, wavelet packets and Malvar wavelets.
Resolution of the incompressible Navier-Stokes equations using orthogonal wavelets.
Coherent Vortex Simulation (CVS), based on nonlinear wavelet filtering,
and comparison with Large Eddy Simulation (LES), based on linear Fourier filtering.
Lecture notes
Lecture on Fourier transform and sampling : Fourier transform
Lectures on continuous wavelet transform: Continuous wavelet transform 1d
Continuous wavelet transform 2d
Lectures on orthogonal wavelet transform: Discrete wavelet transform 1d
Lectures on wavelet packet transform: Wavelet packet transform
Sheet 1 : ucsb_ex1.pdf due January 14th
Sheet 2 : ucsb_ex2.pdf due January 30th
Sheet 3 : ucsb_ex3.pdf due February 13th
Additional material
Marie Farge and Kai Schneider, 2002
Analysing and compressing turbulent fields with wavelets
Note IPSL, n°20, April 2002 pdf-file
Marie Farge, 1992
Wavelet transforms and their applications to turbulence.
Ann. Rev. Fluid Mech, 24:395-457, 1992 pdf-file
Barbara Burke, 1996
The world according to wavelets.
A.K. Peters, Wellesley
Stephane Mallat, 1999
A wavelet tour of signal processing.
Second edition, Academic Press
Paul S. Addison, 2002
The illustrated wavelet transform handbook.
Institute of Physics (IOP)
Stephane Jaffard, Yves Meyer and Robert D. Ryan, 2001
Wavelets: tools for science and technology.
M.V. Wickerhauser, 1994
Adapted wavelet analysis from theory to software.
A.K. Peters
Wavelet application in cosmetics
Last update 31.1.2004 Webmaster
|
{"url":"http://wavelets.ens.fr/ENSEIGNEMENT/COURS/UCSB/index.html","timestamp":"2014-04-16T04:58:57Z","content_type":null,"content_length":"26779","record_id":"<urn:uuid:b1d4bd5f-479c-490c-b7d2-36eb34d30bae>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics Courses
MATH 500 - Discrete Mathematics
Hours: 4
Discrete Mathematics. Four semester hours. Study of formal logic; sets; functions and relations; principle of mathematical induction; recurrence relations; and introductions to elementary number
theory; counting (basic combinatorics); asymptotic complexity of algorithms; graph theory; and NP-completeness. This course is useful to those taking graduate classes in computer science. It may be
taken for graduate credit towards a master's in mathematics only by consent of the department. Prerequisite: Consent of the instructor.
MATH 501 - Mathematical Statistics
Hours: 3
Mathematical Statistics. Six semester hours. Probability, distributions, moments, point estimation, maximum likelihood estimators, interval estimators, test of hypothesis. Prerequisite: Math 314
MATH 502 - Mathematical Statistics
Hours: 3
Mathematical Statistics. Six semester hours. Probability, distributions, moments, point estimation, maximum likelihood estimators, interval estimators, test of hypothesis. Prerequisite: Math 314.
MATH 511 - Introduction to Real Analysis I
Hours: 3
Intro Real Analysis. Three semester hours. Properties of real numbers, continuity, differentiation, integration, sequences and series of functions, differentiation and integration of functions of
several variables. Prerequisite: Math 436 or 440.
MATH 512 - Introduction to Real Analysis II
Hours: 3
Intro Real Analysis II. Three semester hours. Properties of real numbers, continuity, differentiation, integration, sequences and series of functions, differentiation and integration of functions of
several variables. Prerequisite: Math 436 or 440.
MATH 515 - Dynamical Systems
Hours: 3
Dynamical Systems. Three semester hours. Iteration of functions; graphical analysis; the linear, quadratic and logistic families; fixed points; symbolic dynamics; topological conjugacy; complex
iteration; Julia and Mandelbrot sets. Computer algebra systems will be used. Recommended background; Math 192 and Math 331.
MATH 517 - Calculus of Finite Differences
Hours: 3
Calculus of Finite Differences. Three semester hours. Finite differences, integration, summation of series, Bernoulli and Euler Polynomials, interpolation, numerical integration, Beta and Gamma
functions, difference equations. Prerequisite: Math 225.
MATH 518 - Thesis
Hours: 3-6
Thesis. Six semester hours. This course is required of all graduate students who have an Option I degree plan. Graded on a (S) satisfactory or (U) unsatisfactory basis. Prerequisite: Math 314
MATH 522 - General Topology I
Hours: 3
General Topology I - Three semester hours Ordinals and cardinals, topological spaces, identification topology, convexity, separation axioms, covering axioms. Pre-requisites : Math 440 or consent of
MATH 523 - General Topology II
Hours: 3
General Topology II - Three semester hours The course is a continuation of Math 522. Compact spaces, metric spaces, product spaces, convergence, function spaces, path connectedness, homotopy,
fundamental group. Pre-requisites : Math 440 or consent of instructor.
MATH 529 - Workshop in School Mathematics
Hours: 3
Workshop in School Mathematics. Three semester hours. This course may be taken twice for credit. A variety of topics, taken from various areas of mathematics, of particular interest to elementary and
secondary school teachers will be covered. Consult with instructor for topics.
MATH 531 - Introduction to Theory of Matrices
Hours: 3
Introduction to Theory of Matrices. Three semester hours. Vector spaces, linear equations, matrices, linear transformations, equivalence relations, metric concepts. Prerequisite: Math 334 or 335.
MATH 532 - Fourier Analysis and Wavelets
Hours: 3
Fourier And Wavelet Analysis and Applications - Three semester hours Inner Product Spaces; Fourier Series; Fourier Transform; Discrete Fourier Analysis; Haar Wavelet Analysis; Multiresolution
Analysis; The Daubechies Wavelets; Applications to Signal Processing; Advanced Topics. Pre-requisites : Math 335 or the consent of the instructor
MATH 533 - Optimization
Hours: 3
Linear and Nonlinear Optimization - Three semester hours Graphical optimization, linear programming, simplex method, interior point methods, nonlinear programming, optimality conditions, constrained
and unconstrained problems, combinatorial and numerical optimization, applications. Pre-requisites : Math 335 or the consent of the instructor
MATH 536 - Cryptography
Hours: 3
Cryptography. Three semester hours. (Same as CSci 568) The course begins with some classical cryptanalysis (Vigenere ciphers, etc). The remainder of the course deals primarily with number-theoretic
and/or algebraic public and private key cryptosystems and authentication, including RSA, DES, AES and other block ciphers. Some cryptographic protocols are described as well. Prerequisites: Graduate
standing in mathematics or consent of the instructor.
MATH 537 - Theory of Numbers
Hours: 3
Theory of Numbers. Three semester hours. Factorization and divisibility, Diophantine equations, congruences, quadratic reciprocity, arithmetic functions, asymptotic density, Riemann's zeta function,
prime number theory, Fermat's Last Theorem. Prerequisite: Consent of instructor.
MATH 538 - Functions of a Complex Variable
Hours: 3
Functions of a Complex Variable. Six semester hours. Geometry of complex numbers, mapping, analytic functions, Cauchy-Riemann conditions, complex integration. Taylor and Laurent series, residues.
Prerequisite: Math 511.
MATH 539 - Functions of a Complex Variable
Hours: 3
Functions of a Complex Variable. Six semester hours. Geometry of complex numbers, mapping, analytic functions, Cauchy-Riemann conditions, complex integration. Taylor and Laurent series, residues.
MATH 543 - Abstract Algebra
Hours: 3
Abstract Algebra. Three semester hours. Groups, isomorphism theorems, permutation groups, Sylow Theorems, rings, ideals, fields, Galois Theory. Prerequisite: Math 334.
MATH 544 - Abstract Algebra
Hours: 3
Abstract Algebra. Three semester hours. Groups, isomorphism theorems, permutation groups, Sylow Theorems, rings, ideals, fields, Galois Theory. Prerequisite: Math 334.
MATH 546 - Numerical Analysis
Hours: 3
Numerical Analysis - Three semester hours The course will cover numerical methods for approximating derivatives and integrals, teach Euler's and Runge-Kutta methods for solving ordinary
differential equations, and study methods for the approximate solution of partial differential equations (PDE), including parabolic PDE. Students will also learn how to program the basic methods in
MatLab, improving their skills in working with this software. Pre-requisite: Calculus III, Math 314
MATH 550 - Foun Abstract Algebra
Hours: 3
Foundations of Abstract Algebra - Three semester hours This course will cover the fundamental properties of algebraic structures such as properties of the real numbers, mapping, groups, rings, and
fields. The emphasis will be on how these concepts can be related to the teaching of high school algebra. Note: This course will be helpful to secondary teachers by giving them a better understanding
of the terms and ideas used in modern mathematics.
MATH 560 - Euclidean and nonEuclidean geometry for teachers
Hours: 3
Euclidean and non-Euclidean Geometry - Three semester hours This course is specifically designed for middle- and high-school teachers. The National Council of Teachers of Mathematics (NCTM) explains
in its Principles and Standards that the geometric skills students should possess by the time they are through high school are: (1) Analyze characteristics and properties of two- and
three-dimensional geometric shapes and develop mathematical arguments about geometric relationships. (2) Specify locations and describe spatial relationships using coordinate geometry and other
representational systems. (3) Apply transformations and use symmetry to analyze mathematical situations. (4) Use visualization, spatial reasoning, and geometric modeling to solve problems.
MATH 561 - Statistical Computing and Design of Experiments
Hours: 3
Statistical Computing and Design of Experiments. Three semester hours. A computer oriented statistical methods course which involves concepts and techniques appropriate to design experimental
research and the application of the following methods and techniques on the digital computer: methods of estimating parameters and testing hypotheses about them, analysis of variance, multiple
regression methods, orthogonal comparisons, experimental designs with applications. Prerequisite: Math 401 or 501.
MATH 563 - Image Processing with Applications
Hours: 3
Introduction to image processing, with applications to images from medicine, agriculture, satellite imagery, physics, etc. Students will learn techniques such as edge detection, 2D image enhancement
using Laplacian and gradient operators, Fourier transforms and the FFT, filtering, and wavelets, as time allows. Students will acquire practical skills in image manipulation by implementing the
above-mentioned algorithms.
MATH 571 - Higher Order Approximations for Teachers
Hours: 3
Higher Order Approximations. 3 Semester Hours. This course, specifically for teachers, explores algebra-based techniques for powerful, highly accurate numerical approximations. Graphing calculators
and some computer software will be used. Approximations for areas and volumes of regions, solutions to equations and systems of equations, sums of infinite series, values of logarithmic and
trigonometric functions and other topics are covered.
MATH 572 - Modern Applications of Mathematics for Teachers
Hours: 3
Modern Applications of Mathematics - Three semester hours This course, specifically designed for teachers, covers a range of applications of mathematics. Specific topics may vary but have included
classical (private key) encryption, data compression ideas, coding theory ideas (Hamming 7,4 code), private and public key cryptography, data compression including wavelets, difference equations
(population models, disease models) and stochastic difference equations (stocks), GPS systems, computer tomography (e.g. CAT scans), polynomial interpolation/Bézier curves, and topics from student
MATH 573 - Calculus of Real and Complex Functions for Teachers
Hours: 3
Calculus of Real and Complex Functions - Three semester hours This course is designed for teachers, and explores similarities and differences between functions whose domain and range consist of sets
of real numbers, and sets of complex numbers. Complex numbers are reviewed, with nontraditional applications to plane geometry. Alternate approaches to the meaning of the derivative are given so as
to provide links between the notions of f′(x) and f′(z) (x real, z complex), and ways of understanding derivatives of inverse functions and composite functions. The geometry of functions of a complex
number is explored. Cauchy-Riemann equations are derived and utilized. Power series in both the real and complex contexts are compared.
MATH 580 - Topics from the History of Mathematics
Hours: 3
Topics in history of mathematics - Three semester hours A chronological presentation of historical mathematics. The course presents historically important problems and procedures.
MATH 589 - Independent Study
Hours: 1-4
Independent Study - Hours: One to four Individualized instruction/research at an advanced level in a specialized content area under the direction of a faculty member. Prerequisites Consent of
department head. Note May be repeated when the topic varies.
MATH 595 - Research Literature and Techniques
Hours: 3
Research Literature and Techniques. Three semester hours. This course provides a review of the research literature pertinent to the field of mathematics. The student is required to demonstrate
competence in research techniques through a literature investigation and formal reporting of a problem. Graded on a (S) satisfactory or (U) unsatisfactory basis. Prerequisite: Consent of instructor.
MATH 597 - Special Topics
Hours: 3
Special Topics. One to four semester hours. Organized class. May be repeated when topics vary.
Material Results
"This is a free, online textbook offered by Bookboon.com. According to the author, 'This is the sixth book of examples from...'" — Open Textbook by Leif Mejlbro (added Jan 12, 2011)
"5 different spinners and a completely custom mode where you can create your own spinner up to 10 segments. Change the labels..." — Reference Material by Joe Scrivens (added Jul 12, 2012)
"Forget about looking up critical z-, t-, Chi-square and F-values in tables. The DecisionVisualizer computes exact values from..." — by Daniel Stricker (added Mar 30, 2009)
"This is a free, online textbook offered by Bookboon.com. According to the author, 'This is the fifth book of examples from...'" — Open Textbook by Leif Mejlbro (added Jan 12, 2011)
"This resource introduces distribution functions and their basic properties, relation to density functions, reliability, and..." — by Kyle Siegrist (added Jun 10, 2003; modified Mar 01, 2012)
"ZeGenie is an On-Line Interactive Learning Engine designed to help the student through Math material as if with a real human..." — Online Course by antony chiu (added Jan 26, 2011)
"This is a free online textbook that can be downloaded in a pdf, or available for a charge from the American Mathematical..." — Open Textbook by J. Laurie Snell and Charles M. Grinstead (added Apr 26, 2010; modified Nov 17, 2011)
"This is a free, online textbook offered by Bookboon.com. 'This is the first book of examples from the Theory of Probability....'" — Open Textbook by Leif Mejlbro (added Jan 12, 2011)
"In this course, the student will learn the basic terminology and concepts of probability theory, including sample size,..." — Online Course by The Saylor Foundation (added Jan 27, 2012)
"Terms & Definitions: Part 1, Volume 1, the first volume in a series of Introduction to Quantitative Statistics - Step by..." — Open Textbook by Rick Lumadue and Rusty Waller (added Aug 01, 2013; modified Mar 28, 2014)
Finding Easter
On this blog, you can learn any number of feats, including determining the day of the week for any date, and more recently, finding the moon phase for any date.
In a recent comment, regular Grey Matters reader Jay suggested mixing the two, and teaching Conway's method for finding Easter in a given year. Just as you requested, Jay, here it is!
WHEN IS EASTER? Easter is held on the first Sunday following the first full moon following the vernal equinox. It's this detailed definition that makes finding the date of Easter in your head for a
given year so impressive. There seems to be an impossible amount of information to know off the top of your head in order to determine the correct date.
If you've practiced both the day for any date feat and the moon phase for any date feat, you should have little trouble performing this feat. The only other thing you need to know is your multiples
of 19 from 0 to 190 from memory.
A NOTE ON PRESENTATION: Since you'll be effectively combining 2 other feats, speed shouldn't be the focus of your presentation. If you've ever watched Dr. Arthur Benjamin's TED performance, you've
seen the speed with which he squares 2-, 3-, and 4-digit numbers. Notice that, when he squares 5-digit numbers, he doesn't focus on speed, but rather the process itself. Even if aspects of the
process aren't clear, he makes it fun just to see the process in action.
That's also a good idea for the Easter date feat. It will allow you do seemingly nonsensical math out loud, yet still entertain the audience and get the right result.
THE PROCESS: The process can be broken down into 2 large steps. First, you find the date of the paschal full moon (the first full moon after the vernal equinox). The next step simply involves
determining the day of the week for that date, and then working out the date of the following Sunday.
The only thing you need to start is a year. In this tutorial, I'll assume you're given a year in the 1900s or 2000s. Other centuries will be discussed later. We'll use 1980 as our example year, in
order to make the tutorial clearer.
STEP 1: To start, subtract 1900 from the given year, whether that year is in the 1900s or 2000s. If the remaining digits are 19 or more, subtract the nearest multiple of 19 that is equal to or less
than those digits. In our 1980 example, 1980 - 1900 = 80, and the nearest multiple of 19 is 76, so we figure 80 - 76 = 4.
It's important to note that we're NOT using the special positive/negative year key taught in the moon phase tutorial.
The next step is to add 1, so our example becomes 4 + 1 = 5. After that, multiply by 11. As it happens, this is easy in this case, as 5 × 11 = 55. You want to be very sure you are comfortable quickly
multiplying by 11. If you don't already know how to do this, watch the video below and then try the practice exercises here:
At this point, starting with 1980 has given us 55, but we've only taken the last 2 digits into consideration. To compensate for the century, subtract 6, regardless of whether the year is in the 1900s
or the 2000s. In this case, 55 - 6 = 49.
If the number is 30 or more, you can subtract the nearest multiple of 30 equal to or less than the number you have. In this case, the closest multiple of 30 that is equal to or less than 49 is 30, so
we work out 49 - 30 = 19.
This final number is how many days the paschal full moon is before April 19th, which you may also need to consider as being March 50th. Since we have 19, we could try subtracting that from April
19th, but April 19th - 19 days = April 0th, which doesn't make any sense. Instead, we do March 50th minus 19 and get March 31st.
So, in our example, we've determined that March 31st was the date of the paschal full moon in 1980. Verbally, this might sound something like, "1980? Let's see... that's 4... 5... 55 minus 6 is 49... 19
days before April 19th, which is March 31st." Note that you don't have to explain each step, just run through the calculations and let your audience wonder about the numbers. The idea is to make the
seemingly nonsensical calculations fun for your audience.
Quick review:
1 - Subtract 1900 from the year, then subtract the largest multiple of 19 that is equal to or less than the last 2 digits of the year: X = (year - 1900) mod 19
2 - Add 1: X + 1
3 - Multiply by 11: 11(X + 1)
4 - Subtract 6 to compensate for the century: (11(X + 1)) - 6
5 - Subtract the nearest multiple of 30 equal to or less than the current number: ((11(X + 1)) - 6) mod 30
6 - Subtract the number you now have from either April 19th or March 50th to get the date of the paschal full moon: (April 19th/March 50) - (((11(X + 1)) - 6) mod 30)
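For readers who prefer code to mental arithmetic, the review above can be condensed into a short Python sketch (my own, valid only for years in the 1900s and 2000s where the century adjustment is -6; the April 19th/18th exceptions described later in the post are omitted here):

```python
def paschal_full_moon(year):
    """Date of the paschal full moon for a year in the 1900s or 2000s,
    following steps 1-6 above."""
    x = (year - 1900) % 19            # step 1
    n = (11 * (x + 1) - 6) % 30       # steps 2-5; -6 is the century adjustment
    # Step 6: count n days back from April 19th ("March 50th").
    return ("April", 19 - n) if n < 19 else ("March", 50 - n)

print(paschal_full_moon(1980))  # ('March', 31)
```

Note that because the -6 is folded into the mod-30 reduction, this is the same as the simplified ((11X + 5) mod 30) form mentioned in the additional notes.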
Now, you're ready for the next step.
STEP 2: At this point, you simply use your preferred version of the day of the week for any date feat to work out the day of the week for this date. Personally, I use Day One, but any version will
work. At this point, you should have told your audience that March 31, 1980 is a Monday.
From there, work out the date of the following Sunday, and that will be Easter! In our example, it's a little tricky since we're on the border between 2 months. In this case, the easiest solution is
to go 1 day back from Monday (to Sunday, March 30th) and then ahead a week (March 37th = 37 - 31 = April 6th).
You can have them type Easter 1980 into Wolfram|Alpha to verify for themselves that April 6th, 1980 was indeed the correct date for Easter that year.
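As a sketch, Step 2 can also be done with Python's standard library standing in for the mental day-of-the-week method:

```python
from datetime import date, timedelta

def easter_from_full_moon(year, month, day):
    """Return the Sunday following the paschal full moon date."""
    moon = date(year, month, day)
    days_ahead = 6 - moon.weekday()   # weekday(): Monday=0 ... Sunday=6
    if days_ahead == 0:               # full moon on a Sunday: use the next one
        days_ahead = 7
    return moon + timedelta(days=days_ahead)

print(easter_from_full_moon(1980, 3, 31))  # 1980-04-06
```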
ADDITIONAL NOTES: If you know you're always going to be dealing with dates in the 1900s and 2000s, and therefore always subtracting 6, you can simplify from (((11(X + 1)) - 6) mod 30) to the much
simpler ((11X + 5) mod 30), thus saving a few steps.
The century adjustment for both the 1900s and 2000s as discussed above, is -6. The proper adjustment for other centuries can be found using this formula in Wolfram|Alpha.
For the 2100s, you'd set h=21, for the 2200s you'd set h=22, etc., and c will be the century adjustment. When working with other centuries, you'll also want to find an easy multiple of 19 to subtract
to make things easier. For example, for dates in the 2100s, you could subtract 2090 from the year, since 1900 + 190 = 2090.
To find an easy way to deal with a given century, you can use this Wolfram|Alpha calculator. For the 2100s, you'd set h=21 (just as in the previous calculator), and the calculator returns two new
numbers, c=-2100 and d=10. This means that, for years in the 2100s, all you have to do is subtract 2100 and then add 10 to adjust for the proper part of the 19-year cycle.
Since the year gets reduced to multiples of 19, you shouldn't be surprised to discover that there's a 19-year cycle of dates for the paschal full moon. This table gives the dates for the paschal full
moon for the years 2014-2032. Each number on the chart that's less than 20 refers to that date in April (8 on the chart, for example, means April 8th). Each number on the chart that's greater than 20
refers to a date in March (23 on the chart, for example, refers to March 23rd). The 0 on the chart refers to April 0th, which is really March 31st.
If you're comfortable with memory systems, you could just memorize the paschal full moon dates associated with each year in the 19-year cycle, simplifying the process even more!
What happens if the paschal full moon falls on a Sunday? In that case, Easter will be on the following Sunday.
If the formula tells you that April 19th was the date of the paschal full moon, step back one day to April 18th. Similarly, if the formula returns April 18th, and your year calculation, after
reducing it and adding 1, is 12 or more, step back 1 day to April 17th. Otherwise, always stay on the final date you get.
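Putting both steps and the exception rules together, here is a sketch of the whole feat in Python, again limited to years from 1900 to 2099:

```python
from datetime import date, timedelta

def easter(year):
    """Easter Sunday for a year in the 1900s or 2000s, with the
    April 19th/18th exception rules applied."""
    x = (year - 1900) % 19
    n = (11 * (x + 1) - 6) % 30
    month, day = (4, 19 - n) if n < 19 else (3, 50 - n)
    if month == 4 and day == 19:              # April 19th steps back a day
        day = 18
    elif month == 4 and day == 18 and x + 1 >= 12:
        day = 17                              # April 18th steps back when x + 1 >= 12
    moon = date(year, month, day)
    days_ahead = 6 - moon.weekday() or 7      # Sunday itself -> a week later
    return moon + timedelta(days=days_ahead)

print(easter(1980), easter(2019), easter(2020))
# 1980-04-06 2019-04-21 2020-04-12
```

Note that 2019 exercises the April 19th exception: the formula gives April 19th, which steps back to April 18th, a Thursday, so Easter lands on April 21st.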
Try calculating the Easter date for random years, and taking all the rules we've talked about into account, and you'll develop this skill quicker than you ever thought possible!
3 Responses to Finding Easter
I cannot get Easter 2020 to work out correctly. Can you please email me the steps you would take for that year. Thank you
Thank you for finding that problem!
In the original version of this post, the first step was simply to use the last 2 digits of the year. By 2020, this approach starts giving erroneous dates.
Since the above comment was written, I have changed the first step to subtracting 1900, and the method now works much better, and will give the correct date for the paschal full moon.
I apologize for any confusion and problems encountered by readers of the original version.
Thank you for this. I love mental calculation and stunts like these really connect with people. Sometimes I like to tell people the day of their birth and then tell them that they celebrated their
first Easter on ...
I have many Calendar related stunts that I've collected and worked out myself. I hope to write a book for performers one day soon. Let me know if you would like me to send you any of my stunts. Thank
you for your fine blog
Seatac, WA Prealgebra Tutor
Find a Seatac, WA Prealgebra Tutor
...After leaving the University of Washington with my Bachelors of Science, I decided to go back to take the MCAT test and prepare for entrance into medical school. I took the test twice and
enrolled in the Kaplan prep class for the MCAT as well. I have been using MS Outlook since 1991.
46 Subjects: including prealgebra, reading, English, chemistry
...I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma. My primary programming language is currently Java. Regardless of the subject, I would say I am
effective at recognizing patterns.
16 Subjects: including prealgebra, chemistry, French, calculus
...I don't waste time. I focus on the task at hand because I love to see it when understanding begins.I've been using algebra in my life and my studies for many years. I love teaching algebra in
particular because a solid foundation in this subject is useful throughout life.
18 Subjects: including prealgebra, chemistry, physics, geometry
Background: I recently graduated from the University of Washington with a B.S. degree in chemistry. Throughout my college career I had a special focus in mathematics. Outside of school, current
events, video games, and the financial markets catch most of my attention--with the exception of my 5 month old dog, Misha.
17 Subjects: including prealgebra, chemistry, physics, calculus
...Of course I will be there to support them the entire time. Always looking to improve myself and my teaching techniques, I love to hear feedback from both parents and students and will offer a
rebate if there is ever any problem with my tutoring. I have a schedule that works perfectly with meeting students after school and can meet with students wherever is most convenient.
15 Subjects: including prealgebra, reading, Spanish, geometry
EQ Triangle and square with same area
June 3rd 2009, 11:50 PM #1
EQ Triangle and square with same area
The area of an equilateral triangle (A ft²) is equal to that of a square. What is the length of the diagonal of the square?
A. √(2A) ft
B. √A ft
C. √(3A) ft
D. 2√A ft
E. √(5A) ft
I know how to get there, but not sure if it's a correct path (i.e. is there a better way to solve it.)
Forget the triangle: the area of both is A, and for the square A = s², so s = √A.
The length of a diagonal of a square is s√2,
which we can see is √A · √2 = √(2A).
That's much more direct, thanks.
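The algebra can be spot-checked numerically in Python (any positive area works; 36 is just a convenient pick):

```python
import math

A = 36.0                      # area of the square
s = math.sqrt(A)              # side length, since A = s^2
d = s * math.sqrt(2)          # diagonal = s * sqrt(2)
assert math.isclose(d, math.sqrt(2 * A))
print(d)  # ≈ 8.485
```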
Godel, Escher, Bach Typographical Number Theory (TNT) puzzles and solutions
In chapter 8 of Godel, Escher, Bach by Douglas Hofstadter, the reader is challenged to translate these 2 statements into TNT:
"b is a power of 2"
"b is a power of 10"
Are the following answers correct?
(Assuming '∃' to mean 'there exists a number'):
∃x:(x.x = b)
i.e. "there exists a number 'x' such that x multiplied x equals b"
If that is correct, then the next one is equally trivial:
∃x:(x.x.x.x.x.x.x.x.x.x = b)
I'm confused because the author indicates that they are tricky and that the second one should take hours to solve; I must have missed something obvious here, but I can't see it!
+1 for interestingness, & because I didn't realize there was a ready-to-go "there exists" entity in HTML. There's a whole list here: tlt.its.psu.edu/suggestions/international/bylanguage/… – Jason S
Mar 13 '09 at 21:58
9 Answers
Your expressions are equivalent to the statements "b is a square number" and "b is the 10th power of a number" respectively. Converting "power of" statements into TNT is considerably trickier.
Ah. I'm afraid I don't know the difference between "b is a square number" and "b is a power of 2"! I thought they were the same thing! Could you explain it? Thanks! –
rogueprocess Mar 13 '09 at 20:52
The square numbers are 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, etc. The powers of 2 are 1, 2, 4, 8, 16, 32, 64, 128, 256, etc. – Adam Rosenfield Mar 13 '09 at 20:54
Square numbers are x^2, powers of 2 are 2^x. – schnaader Mar 13 '09 at 20:55
In general, I would say "b is a power of 2" is equivalent to "every divisor of b except 1 is a multiple of 2". That is:
∀x((∃y(y*x=b & ¬(x=S0))) → ∃z(SS0*z=x))
EDIT: This doesn't work for 10 (thanks for the comments). But at least it works for all primes. Sorry. I think you have to use some sort of encoding of sequences after all. I suggest
"Gödel's Incompleteness Theorems" by Raymond Smullyan, if you want a detailed and more general approach to this.
Or you can encode sequences of numbers using the Chinese Remainder Theorem, and then encode recursive definitions, such that you can define exponentiation. In fact, that is basically
how you can prove that Peano Arithmetic is Turing complete.
Try this:
Here, a = b mod c is shorthand for ∃k: a = c*k + b.
∃y ∃k(
∀x(D(x,y)&Prime(x)→¬D(x*x,y)) &
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z<x→¬D(z,y))→(k=1 mod x)) &
∀a<x ∀c<z ((k=a mod x)&(k=c mod z)-> a=c*10))&
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z>x→¬D(z,y))→(b<x & (k=b mod x))))
should state "b is Power of 10", actually saying "there is a number y and a number k such that y is product of distinct primes, and the sequence encoded by k throug these primes begins
with 1, has the property that the following element c of a is 10*a, and ends with b"
That won't work with 10; some divisors of the powers of 10 are not multiples of 10. (e.g. 2, 4, 8, 16, etc., and 5, 25, 125, etc.) – Jason S Mar 14 '09 at 16:30
Thanks, I have corrected my post, hopefully it is correct now. – schoppenhauer Mar 15 '09 at 3:13
I think you need to check that x & y are not zero also, otherwise it is true for b=0. I'd suggest ∀x:<∃y:(Sy*SSx)=b → ∃z:(SS0*z)=x> – Peter Gibson Apr 21 '10 at 2:19
There's a solution to the "b is a power of 10" problem behind the spoiler button in skeptical scientist's post here. It depends on the Chinese Remainder Theorem from number theory, and
the existence of arbitrarily-long arithmetic sequences of primes. As Hofstadter indicated, it's not easy to come up with, even if you know the appropriate theorems.
how about:
∀x: ∀y: (SSx∙y = b → ∃z: z∙SS0 = SSx)
(in English: any factor of b that is ≥ 2 must itself be divisible by 2; literally: for all natural numbers x and y, if (2+x) * y = b then this implies that there's a natural number
z such that z * 2 = (2+x). )
I'm not 100% sure that this is allowed in the syntax of TNT and propositional calculus, it's been a while since I've perused GEB.
(edit: for the b = 2^n problem at least; I can see why the 10^n would be more difficult as 10 is not prime. But 11^n would be the same thing except replacing the one term "SS0" with
add comment
In expressing "b is a power of 10", you actually do not need the Chinese Remainder Theorem and/nor coding of finite sequences. You can alternatively work as follows (we use the usual
symbols as |, >, c-d, as shortcuts for formulas/terms with obvious meaning):
1. For a prime number p, let us denote EXP(p,a) some formula in TNT saying that "p is a prime and a is a power of p". We already know, how to build one. (For technical reasons, we do
not consider S0 to be a power of p, so ~EXP(p,S0).)
2. If p is a prime, we define EXP[p](c,a) ≖ 〈EXP(p,a) ∧ (c-1)|(a-1)〉. Here, the symbol | is a shortcut for "divides" which can be easily defined in TNT using one existencial
quantifier and multiplication; the same holds for c-1 (a-1, resp.) which means "the d such that Sd=c" (Sd=a, resp.).
If EXP(p,c) holds (i.e. c is a power of p), the formula EXP[p](c,a) says that "a is a power of c" since a ≡ 1 (mod c-1) then.
3. Having a property P of numbers (i.e. nonnegative integers), there is a way how to refer, in TNT, to the smallest number with this property: 〈P(a) ∧ ∀c:〈a>c → ~P(c)〉〉.
4. We can state the formula expressing "b is a power of 10" (for better readability, we omit the symbols 〈 and 〉, and we write 2 and 5 instead of SS0 and SSSSS0):
∃a:∃c:∃d: (EXP(2,a) ∧ EXP(5,c) ∧ EXP(5,d) ∧ d > b ∧ a⋅c=b ∧ ∀e:(e>5 ∧ e|c ∧ EXP[5](e,c) → ~EXP[5](e,d)) ∧ ∀e:("e is the smallest such that EXP[5](c,e) ∧ EXP[5](d,e)" → (d-2)|(e-a))).
Explanation: We write b = a⋅c = 2^x⋅5^y (x,y>0) and choose d=5^z>b in such a way that z and y are coprime (e.g. z may be a prime). Then "the smallest e..." is equal to (5^z)^y = d^y ≡ 2^
y (mod d-2), and (d-2)|(e-a) implies a = 2^x = e mod (d-2) = 2^y (we have 'd-2 > 2^y' and 'd-2 > a', too), and so x = y.
Remark: This approach can be easily adapted to define "b is a power of n" for any number n with a fixed decomposition a[1]a[2]...a[k], where each a[i] is a power of a prime p[i] and p[i]
= p[j] → i=j.
Here's what I came up with:
Which translates to:
For all numbers c, there exists a number d, such that if c times d equals b then either c is 1 or there exists a number e such that d equals e times 2.
For all numbers c, there exists a number d, such that if b is a factor of c and d then either c is 1 or d is a factor of 2
Or
If the product of two numbers is b then one of them is 1 or one of them is divisible by 2
All divisors of b are either 1 or are divisible by 2
b is a power of 2
Counterexample: let's say that b = 2. Then c, for which the assumption is true, must be either 1 or 2. The former is OK, but for the latter, d = 1. In that case, c ≠ 1 and d ≠ e * 2,
therefore the implication doesn't hold for all c. Is this true or am I missing something? – dqd Jul 6 '11 at 8:01
I think that most of the above have only shown that b must be a multiple of 4. How about this: ∃b:∀c:<<∃e:(c∙e) = b & ~∃c':∃c'':(SSc'∙SSc'') = c> → c = 2>
I don't think the formatting is perfect, but it reads:
There exists b, such that for all c, if c is a factor of b and c is prime, then c equals 2.
Here is what I came up with for the statement "b is a power of 2"
∃b: ∀a: ~∃c: ((a * SS0) + SSS0) * c = b
I think this says "There exists a number b, such that for all numbers a, there does not exist a number c such that (a * 2) + 3 (in other words, an odd number greater than 2) multiplied
by c, gives you b." So, if b exists, and can't be zero, and it has no odd divisors greater than 2, then wouldn't b necessarily be 1, 2, or another power of 2?
"if b ... can't be zero". How are you stating this here? TNT variables can contain any value that is a natural number, which includes zero. The trick he used in the chapter was Sb,
meaning "the successor of any number" or "one or greater", which naturally can't evaluate to be zero. – Merlyn Morgan-Graham Jun 17 '11 at 3:47
For the open expression meaning that b is a power of 2, I have ∀a:~∃c:(S(Sa ∙ SS0) ∙ Sc) = b
This effectively says that for all a, S(Sa ∙ SS0) is not a factor of b. But in normal terms, S(Sa ∙ SS0) is 1 + ((a + 1) * 2) or 3 + 2a. We can now reword the statement as "no odd
number that is at least 3 is a factor of b". This is true if and only if b is a power of 2.
I'm still working on the b is a power of 10 problem.
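The characterization these formulas rely on, that a positive number is a power of 2 exactly when no odd number of at least 3 divides it, is easy to sanity-check mechanically. A small Python sketch (helper names are mine, not from the thread):

```python
def has_odd_factor_ge3(b):
    # True if some odd d >= 3 divides b
    return any(b % d == 0 for d in range(3, b + 1, 2))

def is_power_of_two(b):
    # Strip all factors of 2; a power of 2 reduces to exactly 1
    while b % 2 == 0:
        b //= 2
    return b == 1

# The two characterizations agree for every positive b tested:
assert all(is_power_of_two(b) == (not has_odd_factor_ge3(b))
           for b in range(1, 500))
```

Note that 1 (= 2^0) passes both tests, which matches the TNT formulas above: they do not exclude b = 1.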
|
{"url":"http://stackoverflow.com/questions/644569/godel-escher-bach-typographical-number-theory-tnt-puzzles-and-solutions","timestamp":"2014-04-17T13:06:39Z","content_type":null,"content_length":"104326","record_id":"<urn:uuid:217d3b5a-ce65-41f4-acec-e41380040a4a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: The Equivalence of Support Vector Machine and
Regularization Neural Networks
Department of Psychology, University of Newcastle upon Tyne,
Newcastle upon Tyne, NE1 7RU, UK. e-mail: peter.andras@ncl.ac.uk
Abstract. We show in this brief paper the equivalence of the support vector machine and
regularization neural networks.We prove both implication sides of the equivalence in a generally
applicable way. The novelty lies in the effective construction of the regularization operator cor-
responding to a given support vector machine formulation. We give also a short introductory
description of both neural network approximation frameworks.
Key words: approximation, equivalent neural networks, regularization, support vector machine
1. Introduction
Recent papers [2, 3, 5, 7, 8, 10, 11] show that the two top-level neural network
approximation frameworks are the application of support vector machine theory
and regularization theory to neural networks. Both frameworks fall into the general
category of Bayesian neural network methods that are based on optimization of
neural networks in the context of some prior distribution over the parameter space
of neural networks [1, 13].
In previous works Smola et al. [10] and Girosi [5] have shown that the problem
formulation of regularization neural networks can be transformed into a problem
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/379/2216386.html","timestamp":"2014-04-17T09:57:57Z","content_type":null,"content_length":"8503","record_id":"<urn:uuid:efacd996-77b5-4dfe-8e45-e50c3cecaedb>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: July 2005 [00287]
Re: //N bug, but WHY?
• To: mathgroup at smc.vnet.net
• Subject: [mg58681] Re: //N bug, but WHY?
• From: "Carl K. Woll" <carlw at u.washington.edu>
• Date: Thu, 14 Jul 2005 02:49:06 -0400 (EDT)
• Organization: University of Washington
• References: <data3n$mec$1@smc.vnet.net> <db030a$r7o$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
I don't think the help for N[] is partly wrong, as I try to explain below.
"dh" <dh at metrohm.ch> wrote in message news:db030a$r7o$1 at smc.vnet.net...
> Hi,
> your expression consists of a sum with 2 terms that have opposite signs
> and nearly the same magnitude. The absolute value is of the order of
> 10^41 and the difference 10^-41. Therefore, if you calculate with
> machine precision= 16 digits, this is doomed to fail.
> On the other hand, the help for N[] is partly wrong.
> Consider the help:
> "N[expr] is equivalent to N[expr, MachinePrecision]"
Here it is important to note that MachinePrecision is not 16. From the help
we have
"The numerical value of MachinePrecision is $MachinePrecision."
Or even:
> a=10^16+1; b=10^16;
> N[a-b] gives 1. because the argument of N is evaluated with infinite
> precision before N sees it.
Since N does not have any Hold attributes, like any such function it
evaluates its arguments before doing anything else.
> We can prevent this by wrapping a and b in a
> function that can not be evaluated with infinite precision:
> a=Sqrt[10^16+1]; b=Sqrt[10^16];
> N[a-b] gives 0. because 10^16+1 is now evaluated with machine precision
> giving 10^16.
As the help says,
"N converts each successive argument of any function it encounters to
numerical form, unless the head of the function has an attribute such as
This explains the behavior you observed.
> On the other hand,
> N[a-b,16] gives 5.00.. 10^-9, here it seems that the intermediate
> calulations are done with higher precision, exactly what the help says.
> Even
> N[a-b,1] gives 5. 10^-9
You seem to be assuming that N[a-b,16] is equivalent to N[a-b], but as I
pointed out above, that is incorrect. N[a-b,16] essentially converts all
numbers to extended precision numbers in order to achieve a result of at
least 16 digits of precision. It does not convert numbers to machine
precision numbers.
> In short:
> N[expr] does all calculation in machine precision, whereas
> N[expr,number] does all intermediate calculation with high enough (up to
> an upper Limit of "$MaxExtraPrecision") precision to garantee a result
> precision of "number"
I believe this is a reasonable paraphrase of the help.
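(As an aside, not part of the original exchange: the same cancellation is easy to reproduce outside Mathematica. A rough Python analogue, using the standard decimal module to stand in for N[expr, n] raising the working precision:)

```python
import math
from decimal import Decimal, getcontext

a, b = 10**16 + 1, 10**16

# Exact integer arithmetic keeps the difference of 1.
exact = a - b

# At machine (double) precision the two square roots round to the
# same value, so the ~5e-9 difference cancels to exactly zero.
machine = math.sqrt(a) - math.sqrt(b)

# Carrying extra precision through the intermediate square roots
# recovers the small difference, roughly 5*10^-9.
getcontext().prec = 30
extended = Decimal(a).sqrt() - Decimal(b).sqrt()

print(exact, machine, extended)
```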
Carl Woll
Wolfram Research
> sincerely, Daniel
> symbio wrote:
>> Evaluating (using //N) two exact same expressions, gives wrong answer
>> unless
>> fullsimplify is used first, I spent 2 days on a problem thinking my
>> answer
>> was wrong, but turned out Mathematica 5 was giving me buggy answers, I
>> debugged it to this point, but WHY in the world is this happening?
>> Please
>> help!!!
>> cut and paste below to see the test case:
>> In[243]:=
>> \!\(\(Cosh[\(43\ \[Pi]\)\/\@2] + \((1 - Cosh[43\ \@2\ \[Pi]])\)\ Csch[
>> 43\ \@2\ \[Pi]]\ Sinh[\(43\ \[Pi]\)\/\@2] // FullSimplify\)
>> //
>> N\[IndentingNewLine]
>> Cosh[\(43\ \[Pi]\)\/\@2] + \((1 - Cosh[43\ \@2\ \[Pi]])\)\ Csch[
>> 43\ \@2\ \[Pi]]\ Sinh[\(43\ \[Pi]\)\/\@2] // N\)
>> Out[243]=
>> \!\(6.551787517854307`*^-42\)
>> Out[244]=
>> \!\(\(-1.9342813113834067`*^25\)\)
>> thanks,
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Jul/msg00287.html","timestamp":"2014-04-19T04:51:49Z","content_type":null,"content_length":"37835","record_id":"<urn:uuid:cb500675-08b2-4c27-8e0f-ee8fc6569754>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math-Addition and Subtraction-Kindergarten
I will first start off by explaining what addition and subtraction are. I will explain that addition is adding things together to form a group. Subtraction is taking things away. I will model addition
and subtraction by using blocks. After I model with the blocks a few times, I will demonstrate a property and call on the students to say whether I am doing addition or subtraction.
|
{"url":"https://college.livetext.com/doc/2441556","timestamp":"2014-04-20T18:39:35Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:ad629ede-efec-469f-9cd0-b213b4ba168f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why, and How, Should Geologists Use Compositional Data Analysis/Factor Analysis
Factor analysis is a statistical data reduction technique used to explain variability among observed random variables in terms of fewer unobserved random variables called factors. It is useful to
reduce the number of variables, by combining two or more variables into a single factor, thus “simplifying” the original dataset.
Factor analysis (FA) is especially useful in geochemistry when one has a known target or some other way to understand the meaning of the obtained associations. When failing this, the geologist is
usually forced to “plot and see”, and then to select the FA that he believes is the most useful for the studied area.
I processed both the initial dataset and the three transformed versions using SYSTAT SSPS 10.0 for Windows, but you can use any other statistical program capable of factor analysis.
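For readers without access to a statistics package, the centered log-ratio (CLR) transform used below is simple to compute directly. A minimal Python sketch (my own illustration, not taken from this text):

```python
import math

def clr(parts):
    # Centered log-ratio: the log of each part minus the mean of the
    # logs, i.e. log(x_i / geometric_mean(parts)).
    # Requires strictly positive parts.
    logs = [math.log(x) for x in parts]
    mean_log = sum(logs) / len(logs)
    return [v - mean_log for v in logs]

sample = [10.0, 30.0, 60.0]   # e.g. concentrations of three elements
transformed = clr(sample)
print(transformed)
```

A useful property to check: the CLR components of any sample always sum to zero (up to rounding).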
Factor Analysis for the Initial Dataset
Figure 36 shows the plot for the initial dataset, while table 24 shows the principal components defined by the software.
Figure 36. Scree plot for the initial dataset.
Table 24. Principal component analysis (PCA) for the initial dataset.
Equations 23 – 25 show the three FA components for the initial dataset.
Equation 23. FA 1 for the initial dataset.
Equation 24. FA 2 for the initial dataset.
Equation 25. FA 3 for the initial dataset.
Figures 37 – 39 show the effectiveness of these FA as a targeting tool for our ore body.
Conclusions and recommendations on the use of FA for the initial dataset
For as long as we have a known target to test the obtained FA, this method offers better results than the RCC. It also allows for the combined studied of all the elements together.
FA1 and FA2 do contain the embedded correlations I introduced in the initial dataset, thus their effectiveness, especially FA 1, in mapping the location of the ore body.
The next question will be: Will the transformed data be any more effective in helping us locate our target?
CLR transformed data
Figure 40 shows the scree plot for the CLR transformed dataset, while table 25 shows the principal components defined by SYSTAT.
Figure 40. Scree plot for the CLR transformed dataset.
Table 25. Principal component analysis for the CLR transformed dataset.
Equations 26 – 28 show the three FA components for the CLR transformed dataset.
Equation 26. FA 4 for the CLR transformed dataset.
Equation 27. FA5 for the CLR transformed dataset.
Equation 28. FA6 for the CLR transformed dataset.
Figures 41 – 43 show the effectiveness of these FA as a targeting tool for our ore body.
Factor Analysis for the ALR Transformed Dataset
Figure 44 shows the scree plot for the ALR transformed dataset, while table 26 shows the principal components defined by SYSTAT.
Figure 44. Scree plot for the ALR transformed dataset.
Table 26. Principal component analysis for the ALR transformed dataset.
Although table 26 shows two components, I will analyze only the second, which is a coefficient as shown in equation 29.
Equation 29. FA7 for the ALR transformed dataset.
This factor contains the embedded relationship from the initial dataset, but because of the presence of other elements, its usefulness as a targeting tool is more limited, as shown in Figure 45.
Figure 45. FA7 covers mostly the southeastern part of the ore body.
Factor Analysis of the ILR Transformed Dataset
Figure 46 shows the scree plot for the ILR transformed dataset, while table 27 shows the principal components defined by SYSTAT.
Figure 46. Scree plot for the ILR transformed dataset.
Table 27. Principal component analysis for the ILR transformed dataset.
The fact that we have so many components as the result of the P.C.A. is an indication that we will not get good results this time. Equations 30 through 34 show the obtained factors.
Equation 30. FA8 for the ILR transformed dataset.
Equation 31. FA9 for the ILR transformed dataset.
Equation 32. FA10 for the ILR transformed dataset.
Equation 33. FA11 for the ILR transformed dataset.
Figures 47 through 50 shows the spatial distribution of these factors with respect to the location of our ore body.
Conclusions and Recommendations on the Use of FA for the Transformed Datasets
As I mentioned earlier, for FA to be most useful, one needs to have a known target to calibrate it. The factor analysis applied to the CLR transformed data gave us three factors, but only one (FA5)
was useful for targeting the ore body.
The factor analysis of the ALR transformed data (Factor 7) was good in general, but the best factors were obtained from the ILR transformed data, specially Factor 9 that not only gave the exact
location of the ore body, but also its internal structure. Another efficient factor was FA11, but it definitively required calibration based on a known target.
So, answering the earlier question: yes, the factor analysis of the ILR transformed data will be more effective than the factor analysis of the raw data as a tool for locating the ore deposit.
Last modified on 18 January 2008, at 21:34
|
{"url":"http://en.m.wikibooks.org/wiki/Why,_and_How,_Should_Geologists_Use_Compositional_Data_Analysis/Factor_Analysis","timestamp":"2014-04-17T01:18:31Z","content_type":null,"content_length":"30183","record_id":"<urn:uuid:d412bf90-921a-49d5-926c-2f6164c82d42>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
relative complement
A complement of an element in a lattice is only defined when the lattice in question is bounded. In general, a lattice is not bounded and there are no complements to speak of. Nevertheless, if the
sublattice of a lattice is bounded, we can speak of complements of an element relative to that sublattice.
Let $L$ be a lattice, $a$ an element of $L$, and $I=[b,c]$ an interval in $L$. An element $d\in L$ is said to be a complement of $a$ relative to $I$ if
$a\vee d=c\,\mbox{ and }\,a\wedge d=b.$
It is easy to see that $a\leq c$ and $b\leq a$, so $a\in I$. Similarly, $d\in I$.
An element $a\in L$ is said to be relatively complemented if for every interval $I$ in $L$ with $a\in I$, it has a complement relative to $I$. The lattice $L$ itself is called a relatively complemented lattice if every element of $L$ is relatively complemented. Equivalently, $L$ is relatively complemented iff each of its intervals is a complemented lattice.
• A relatively complemented lattice is complemented if it is bounded. Conversely, a complemented lattice is relatively complemented if it is modular.
• The notion of a relative complement of an element in a lattice has nothing to do with that found in set theory: let $U$ be a set and $A,B$ be subsets of $U$; the relative complement of $A$ in $B$ is the set-theoretic difference $B-A$. While this relative complement is necessarily a subset of $B$, $A$ does not have to be a subset of $B$.
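As a concrete illustration (my own example, not part of the entry), relative complements can be enumerated by brute force in the powerset lattice of a small set, where join is union and meet is intersection:

```python
from itertools import combinations

U = [1, 2, 3]
SUBSETS = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

def relative_complements(a, b, c):
    # All d with (a join d) = c and (a meet d) = b, i.e. the
    # complements of a relative to the interval [b, c].
    return [d for d in SUBSETS if (a | d) == c and (a & d) == b]

a = frozenset({1, 2})
b = frozenset({1})        # bottom of the interval
c = frozenset({1, 2, 3})  # top of the interval
print(relative_complements(a, b, c))   # the unique complement is {1, 3}
```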
relatively complemented lattice, relatively complemented
RelativePseudocomplement, BrouwerianLattice
Added: 2006-04-21 - 23:12
|
{"url":"http://planetmath.org/relativecomplement","timestamp":"2014-04-18T00:24:30Z","content_type":null,"content_length":"50988","record_id":"<urn:uuid:89e91786-e807-4605-9570-0355c3525cf7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
S SAID, FARIDA (2009) IMPLEMENTASI TEKNIK PROBING - TGT (TEAMS GAMES TOURNAMENT) DALAM PEMBELAJARAN MATEMATIKA DI SLTP N 06 DAU. Other thesis, University of Muhammadiyah Malang.
To enhance student achievement in their study we should generate continuous effort toward improvement of learning both from teacher’s ability skill and the learning technique itself. One of those
efforts is to implement mathematical learning using TGT (Teams Games Tournament)- probing technique. The objective of this experiment is (1) To describe teacher and student’s activity during
mathematical learning using TGT probing technique in SLTP M 06 Dau. (2) To describe student’s completion study in mathematical learning using TGT-probing technique in SLTP M 06 Dau. (3) To describe
student’s response toward mathematical learning using TGT probing technique in SLTP M 06 Dau. Approach used in this experiment is descriptively qualitative and quantitative approach, without using
statistic test. While the subject is teacher who teach class VII B in SLTP Muhammadiyah 06 Dau and class VII B students of SLTP Muhammadiyah 06 Dau, in this case, instrument used are teacher and
student’s activities, test comprise of multiple choice and student’s response. Analysis use in this experiment is descriptive analysis. Result of the experiment reveals that average percentage of
teacher’s activity after the 4th meeting has reached 83,75% while the average cooperative activity of student’s after the 4th meeting has reached 83.8%, average affective activity of students is
81,6%, and for complete test result of the student is 79,41% (classical completion has achieved) while incomplete test result of the student is 20,59%. From filled questionnaire, it reveals that
23,7% is highly agree and 55,9% agree, total sum of 79,6% students who agrees if mathematical learning is using TGT-probing technique. Conclusion of this experiment is: teacher and student’s
activities during learning using the TGT probing technique are listed in the category of "Good"; this shows that the TGT probing technique has made the teacher and students more active. Student's study results in SMP M 06 Dau are determined based on the KKM, which is pre-determined by the subject teacher and approved by the principal of the school, in which a student would be
labeled complete if they meet the pre-determined KKM. From the test result given after the use of the learning method TGT- probing technique, complete students is 79,41% (classical completion has
achieved) and incomplete students is 20,59%. From the filled questionnaire, it reveals that 23,7% highly agree and 55,9% agree, a total of 79,6% of students who agree with mathematical learning using the TGT-probing technique.
|
{"url":"http://eprints.umm.ac.id/10283/","timestamp":"2014-04-21T04:33:47Z","content_type":null,"content_length":"24584","record_id":"<urn:uuid:0bc6f866-5241-4ea7-8d0f-af70b7df731c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cutoff frequency of second order low pass filter
Re: cutoff frequency of second order low pass filter
how to calculate the cutoff frequency of second order low pass filter mathematically?
anyone knows?
Added after 23 minutes:
to be more exact, i mean how to calculate fc (cutoff) from the transfer function, similar to the Q, Wp calculation.
Re: cutoff frequency of second order low pass filter
how to calculate the cutoff frequency of second order low pass filter mathematically?
anyone knows?
Added after 23 minutes:
to be more exact, i mean how to calculate fc (cutoff) from the transfer function, similar to the Q, Wp calculation.
The answer is simple - however the calculation itself can be involved:
1.) Calcualte the magnitude of the transfer function
2.) Set the magnitude equal to 0.7071 (for 3-dB cutoff) and solve for w.
3.) For Chebyshev and elliptic (Cauer) response: Set the magnitude equal to the value at dc and solve for w.
cutoff frequency of second order low pass filter
Re: cutoff frequency of second order low pass filter
Hi edafisher,
are you really sure that the above link contains the information you want
(relationship between cut-off and pole frequency for a second order lowpass) ?
Re: cutoff frequency of second order low pass filter
hi Lvw,
thanks for your notice. i found the link gives the wrong formula (lacks a coefficient).
I re-do the derivation, and gives the right formula in below
W-3dB = Wn * sqrt[1-2ζ^2+sqrt(4ζ^4-4ζ^2+2)]
where Wn is the pole frequency and ζ is the damping factor (equals 1/(2Q) )
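The formula can be verified numerically by plugging the computed cutoff back into the second-order lowpass magnitude response. A short Python check (my own sketch, not from the thread):

```python
import math

def w_3db(wn, q):
    # 3 dB cutoff from pole frequency wn and quality factor Q,
    # with damping factor zeta = 1 / (2Q), per the formula above.
    z = 1.0 / (2.0 * q)
    return wn * math.sqrt(1 - 2 * z**2 + math.sqrt(4 * z**4 - 4 * z**2 + 2))

def magnitude(w, wn, q):
    # |H(jw)| for H(s) = wn^2 / (s^2 + (wn/Q)*s + wn^2)
    x = (w / wn) ** 2
    return 1.0 / math.sqrt((1 - x) ** 2 + x / q ** 2)

wn, q = 1000.0, 2.0                  # arbitrary example values
wc = w_3db(wn, q)
print(wc, magnitude(wc, wn, q))      # magnitude at wc is 1/sqrt(2)
```

For the Butterworth case Q = 1/sqrt(2), the formula collapses to wc = wn, as expected.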
Re: cutoff frequency of second order low pass filter
Congratulations! This formula is correct!
Re: cutoff frequency of second order low pass filter
Congratulations! This formula is correct!
Re: cutoff frequency of second order low pass filter
Just one additional comment (in case you don't know yet):
The formula gives you the 3-dB cut-off.
However, for Chebyshev and elliptic (Cauer) responses it is common (also) to use another definition for cut-off (end of pass band):
The frequency where the peak (ripple) in the pass band crosses the transfer function value at w=0.
In most textbooks, this definition is used to tabulate the pole data.
cutoff frequency of second order low pass filter
How do we choose the natural frequency of a closed loop?
|
{"url":"http://www.edaboard.com/thread182750.html","timestamp":"2014-04-20T05:50:45Z","content_type":null,"content_length":"85016","record_id":"<urn:uuid:fb1a14e7-2c73-4d85-8334-762cbc73d876>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A degree in General Mathematics is designed to equip you with the skills necessary to be a professional problem-solver, as a mathematician is not defined by his or her knowledge of laws and
theorems, but by his or her critical thinking and reasoning skills. As a Mathematics Major, your classes will range from courses on mathematical logic and induction to courses on differential
equations and analysis. The core Mathematics program is designed to allow you to gain basic training in a variety of mathematical tracks, whereas the advanced electives will provide you with the
choice of which higher-level mathematical subfields you wish to progress into further. The complete list of applicable fields for which Mathematics is an integral part is extensive; here we show a
few of the more common specializations.
Every course listed in our “Core Program” is required of math majors with the exception of MA111: Introduction to Mathematical Reasoning. This course is recommended before MA231 and MA241 or any of
the Advanced Mathematics Electives other than Partial Differential Equations.
In addition to the Core Program, each student is required to choose seven electives, at least five of which must come from the Advanced Mathematics Electives. At least one must be chosen from Real
Analysis II, Complex Analysis, and Probability Theory. At least one must be chosen from Abstract Algebra II and Linear Algebra II. Finally, each student is required to take two supplementary
electives; MA121 cannot be used to satisfy this requirement. Those choosing to take Elementary Number Theory should treat Abstract Algebra I as a pre- or co-requisite.
For those interested in graduate study in math, we recommend taking as many of the advanced electives as possible, with a focus on real and complex analysis and linear and abstract algebra. For those
interested in working in applied mathematics, we recommend Linear Algebra II, Probability Theory, Partial Differential Equations, Numerical Analysis, and Topics in Applied Mathematics. For those
interested in statistics, we recommend Real Analysis II, Linear Algebra II, Probability Theory, and Statistics II.
To fulfill the requirements for this major, you must complete all of the core program except for MA111, which is optional (8 courses) as well as 7 electives of your choosing (7 courses — 5 of which
must be “Advanced Mathematics”) for a total of 15 courses.
|
{"url":"http://www.saylor.org/majors/mathematics/","timestamp":"2014-04-17T07:01:25Z","content_type":null,"content_length":"33877","record_id":"<urn:uuid:22a68cfd-6fe4-4011-9222-76cb03300db3>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• Acute angle - Any angle between 0° and 90° .
• Adjacent side - The side of a right triangle that is next to the reference angle. (Not hypotenuse)
• Angle - 1. The shape made by two straight lines meeting in a point. 2. The space between those lines. 3. The measurement of that space in degrees or radians.
• Angle of turn - The angle created when a line of work is turned. The angle of turn is formed between where the original line would have gone (if it had not turned) and the line going in the new
• Arc function (also called inverse function) - The function used to find the degrees of an angle when the ratio of the sides of the angle is known. The six arc functions are: arc sine, arc cosine,
arc tangent, arc cosecant, arc secant, and arc cotangent.
• Arc - A curved line whose points are equal distances from a single point.
• Arc length - The length of a curved line.
• Area - The measurement of a surface, expressed in square units.
• Bisecting line - A line which divides an object or shape into two equal parts.
• Central angle - An angle whose vertex is located at the center of a circle.
• Center length - The distance of the straight line or piece of material between the arcs of an offset.
• Center Mark Back - The distance used in marking and cutting a miter from the center line.
• Chord - A straight line from one point on a circle to another point on the circle. The longest chord of a circle is the diameter.
• Circle - A curved line in a plane that encloses a space. Every point on the curved line is exactly the same distance from the center point.
• Circumference - The distance around a circle.
• Common side of two triangles - A line that is shared by two triangles. It can be the hypotenuse of one triangle and the leg of the other.
• Compass - A tool used to draw circles and curved lines.
• Complementary angles - Two angles whose sum equals 90° . The two lesser angles of a right triangle are always complementary.
• Conversion - For fractions: The process of changing the numerator and the denominator of a fraction to make a new fraction of equal value. Example: 1/2 to 2/4. For units of measurement: The process of changing a unit of measurement to a different unit of measurement while keeping the value the same. Example: 2 feet to inches, 2 feet = 24 inches.
• Cosecant - One of the functions of an angle. It is found by dividing the length of the hypotenuse of a right triangle by the length of the opposite side of a reference angle.
• Cosine - One of the functions of an angle. It is found by dividing the length of the adjacent side of a reference angle by the length of the hypotenuse.
• Cotangent - One of the functions of an angle. It is found by dividing the length of the adjacent side of a reference angle by the length of the opposite side of a reference angle.
• Decimal - Shortened version of decimal fraction.
• Decimal fraction - A fraction in which the denominator is 10 or a power of 10 (such as 100, 1000, 10,000, etc.). The denominator is seldom used. Instead, a dot called a decimal point is placed in
front of the numerator. The denominator can be inferred by the number of decimal places behind the decimal point. If there is one number to the right of the decimal point, the denominator is 10.
If there are two numbers to the right of the decimal point, the denominator is 100. There will be the same number of zeroes in the denominator as there are decimal places.
• Degree - A unit of measurement for angles. (Also see Radians.)
• Denominator - The bottom number in a fraction. It identifies the number of portions into which the whole has been divided. In the fraction 9/16", the whole inch has been divided into 16 parts,
and the measurement is equal to 9 of those parts.
• Diameter - A straight line from one side of a circle to the other side that passes through the center. Also the measurement of that line.
• Division bar of a fraction - The line between the numerator and the denominator of a fraction.
• Equilateral triangle - A triangle with equal sides and equal angles.
• Formula - An equation that follows a rule. Formulas contain variables that when replaced by numbers, allow unknown quantities to be found. An example of a formula is the Pythagorean Theorem, a
squared + b squared = c squared.
• Fraction - A number that expressed a portion of a whole. The denominator of a fraction represents the number of the portions the whole has been divided into, and the numerator expresses the
number of the portions measured. The fraction 1/4 could be stated as 1 our of 4 parts of the whole.
• Fractions, ruler - The portions into which the English units of measurement such as inches can be divided -- for example, halves, fourths, eighths, sixteenths, and thirty-seconds.
• Functions of an angle - The name of a particular ratio of the sides of a right triangle. The name of the function will depend on which side is divided into which side. There are six possible ways
to divide the sides of a right triangle into each other. There are six functions for each angle. The names are: sine, cosine, tangent, cosecant, secant, and cotangent.
• Functions table - A table of the functions of angles. This table allows one to find either the function of an angle or the degree of an angle.
• Hypotenuse - The longest side of a right triangle. It is always located directly across from the right angle.
• Improper fraction - A fraction which is equal to one or more than one.
• Inside arc - The shortest arc of a turn.
• Inverse - A term which means turned upside down. For example the ratio top/bottom is expressed inversely as bottom/top.
• Isosceles right triangle - A 45° right triangle. There is only one right triangle that has two equal legs - the 45° right triangle.
• Isosceles triangle - A triangle that has two equal legs and two equal angles.
• Legs of a right triangle - The sides of a right triangle other than the hypotenuse.
• Level - Even, flat, not having any part higher or lower than another part.
• Lowest term fraction - A fraction which can not be reduced to a lower term.
• Measuring fractions - The fractions used with English units of measurement when measuring distance. See Fractions, ruler.
• Metric measurement - Measurements which use metric units of measurement.
• Minute - A division of a degree. There are 60 minutes in each degree.
• Miter - To cut material at an angle.
• Mixed number - A whole number and a fraction.
• Needs - The unknowns of a formula, or the measurements needed to do a calculation.
• Numerator - The top number of a fraction. It indicates the number of portions of a whole.
• Obtuse angle - An angle between 90° and 180° .
• Offset - To change the direction of a line by making a turn and correcting back to the original direction by making another turn. Also, offset is the term used to identify such a turn.
• Offset box - An imaginary box which is used when drawing thumbnail sketches to calculate offsets.
• Opposite side - The side of a right triangle that is opposite the reference angle.
• Outside arc - The longest arc of a turn.
• Parallel lines - Lines that are in the same plane and always the same distance apart.
• Perpendicular - a 90° angle between two lines. It is also called a right angle.
• Pi (π) - The ratio of the circumference of a circle to its diameter. It is the whole number 3 and a decimal fraction that has endless decimal places. We usually round it off to four decimal places (3.1416) for our work.
• Plane - A flat, level or even surface.
• Plumb - Exactly Vertical. A line that is perpendicular to ground level is plumb.
• Point of tangency - The point where a circle and a tangent line touch.
• Pythagorean theorem - The formula a squared + b squared = c squared, which signifies that the square of the length of the hypotenuse of a right triangle equals the sum of the squares of the
length of the other two sides.
• Secant - One of the functions of an angle. It is found by dividing the length of the hypotenuse of a right triangle by the length of the adjacent side of a reference angle.
• Sector - A part of a circle which includes two radii connected by the arc.
• Second - A division of the minutes of a degree. There are 60 seconds in each minute of a degree.
• Simple offset - A procedure by which a center line is moved up, down, or over to reach a new path going in the same direction. A simple offset uses two same angle turns.
• Sine - One of the functions of an angle. It is found by dividing the length of the opposite side of a reference angle by the length of the hypotenuse of the right triangle.
• Square root - A factor of a number which when squared produces that number. A square root multiplied by itself equals the number under the square root symbol.
• Square - The product of a number multiplied by itself.
• Straight angle - An angle of 180° (in other words, a straight line).
• Take out - The distance that a fitting extends over the center line of a hypotenuse of an offset triangle.
• Take out formula - A formula for finding the take out of an elbow. The take out formula is: Take out = tan(θ/2) × radius of the elbow.
• Tangent - One of the functions of an angle. It is found by dividing the length of the opposite side of a reference angle by the length of the adjacent side of a reference angle.
• Tangent line - A straight line which touches just one point on a circle.
• Theta (θ) - A Greek letter commonly used to represent the measure of an unknown angle.
• Triangle - An enclosed geometric form with three straight sides. The angles of a triangle always equal 180° .
• Triangle of roll - A triangle in a rolling offset box that contains the angle of roll. It is usually the first triangle calculated.
• Turn - To change direction.
• Unit circle - A circle with a radius of 1 unit.
• Units of measurement - Terms which describe fixed standard measurements.
• Variables - Letters of the alphabet used as symbols to represent numbers that can change or are to be determined.
• Vertex - The point at which two straight lines come together to form the angle.
• Volume - The amount of space occupied in three dimensions, expressed in cubic units.
• Whole numbers - Numbers which contain no fractions.
• Zero degree angle - An angle with no space between the two lines. In other words, the two lines are occupying the same space. The difference between a straight angle and a zero degree angle is
that the vertex is in the middle of the line for a straight angle but at the end of the line for a zero degree angle.
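Several of the entries above (tangent, theta, and the take out formula) combine into a single calculation. The sketch below is a hypothetical Python helper, not part of the original glossary; the function name and sample numbers are our own:

```python
import math

def take_out(fitting_angle_deg, elbow_radius):
    """Take out = tan(theta/2) x radius of the elbow."""
    half_angle = math.radians(fitting_angle_deg / 2)
    return math.tan(half_angle) * elbow_radius

# For a 90-degree fitting, tan(45 degrees) = 1, so the take out
# simply equals the elbow radius.
result = take_out(90, 1.5)  # approximately 1.5
```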
|
{"url":"http://mathforum.org/sarah/hamilton/ham.glossary.html","timestamp":"2014-04-18T03:04:02Z","content_type":null,"content_length":"19410","record_id":"<urn:uuid:82de119d-74c5-485f-9c1a-83bc2abe77ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Talagrand's Inequality
Probability Theory and Combinatorial Optimization
by J. Michael Steele
Starting with Classical Problems:
TSP, MST, and Minimal Matchings
This monograph provides an introduction to the state of the art of the probability theory that is most directly applicable to combinatorial optimization. The questions that receive the most attention
are those that deal with discrete optimization problems for points in Euclidean space, such as the minimum spanning tree, the traveling-salesman tour, and minimal-length matchings.
Still, there are several nongeometric optimization problems that receive full treatment, and these include the problems of the longest common subsequence and the longest increasing subsequence. The
philosophy that guides the exposition is that analysis of concrete problems is the most effective way to explain even the most general methods or abstract principles.
Three Themes:
Martingales, Subadditivity, and Talagrand's Inequality
There are three fundamental probabilistic themes that are examined through our concrete investigations. First, there is a systematic exploitation of martingales. Over the last ten years, many
investigators of problems of combinatorial optimization have come to count on martingale inequalities as versatile tools which let us show that many of the naturally occurring random variables of
combinatorial optimization are sharply concentrated about their means---a phenomenon with numerous practical and theoretical consequences.
The second theme that is explored is the systematic use of subadditivity of several flavors, ranging from the naïve subadditivity of real sequences to the subtler subadditivity of stochastic processes. By and large, subadditivity offers only elementary tools, but on remarkably many occasions such tools provide the key organizing principle in the attack on problems of nearly intractable difficulty.
The third and deepest theme developed here concerns the application of Talagrand's isoperimetric theory of concentration inequalities. This new theory is reshaping almost everything that is known in
the probability theory of combinatorial optimization. The treatment given here deals with only a small part of Talagrand's theory, but the reader will find considerable coaching on how to use some of
the most important ideas from that theory.
Audience
Researchers in optimization and applied probability will benefit from this book, as will students familiar with probability and optimization.
Chapter Topics: From Subadditivity to Talagrand's Inequality
Chapter 1: First View of Problems and Methods.
A first example: Long common subsequences; Subadditivity and expected values; Azuma’s inequality and a first application; A second example: The increasing-subsequence problem; Flipping Azuma’s
inequality; Concentration on rates; Dynamic programming; Kingman’s subadditive ergodic theorem; Observations on subadditive subsequences.
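The concentration phenomenon that Azuma's inequality guarantees for the longest-common-subsequence problem is easy to observe numerically. The simulation below is purely illustrative (it is not code from the book, and the string length and sample size are our own choices): it computes the LCS length of random binary strings by dynamic programming and checks that the values cluster tightly around their mean.

```python
import random

def lcs_length(a, b):
    """Length of the longest common subsequence, by the classical
    dynamic program (row by row, to keep memory at O(n))."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

random.seed(0)
n, trials = 50, 200
samples = [
    lcs_length("".join(random.choice("01") for _ in range(n)),
               "".join(random.choice("01") for _ in range(n)))
    for _ in range(trials)
]
mean = sum(samples) / trials
std = (sum((s - mean) ** 2 for s in samples) / trials) ** 0.5
# Azuma's inequality bounds deviations at order sqrt(n); in practice
# the observed standard deviation is far smaller than the mean.
```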
Chapter 2: Concentration of Measure and the Classical Theorems.
The TSP and quick application of Azuma’s inequality; Easy size bounds; Another mean Poissonization; The Beardwood-Halton-Hammersley theorem; Karp’s partitioning algorithms; Introduction to the space-filling curve heuristic; Asymptotics for the space-filling curve heuristic.
Chapter 3: More General Methods --- Subadditive Euclidean functionals
Examples: Good, bad and forthcoming; A general L∞ bound; Simple subadditivity and geometric subadditivity; A concentration inequality; Minimal matching; Two-sided bounds and first
consequences; Rooted duals and their applications; Lower bounds and best possibilities.
Chapter 4: Probability in Greedy Algorithms and Linear Programming.
Assignment problem; Simplex method for theoreticians; Dyer-Frieze-McDiarmid inequality; Dealing with integral constraints; Distributional bounds; Back to the future.
Chapter 5: Distributional Techniques and the Objective Method.
Motivation for a method; Searching for a candidate object; Topology for nice sets; Information on the infinite tree; Dénouement; Central limit theory; Conditioning method for independence;
Dependency graphs and the CLT.
Chapter 6: Talagrand’s Isoperimetric Theory.
Talagrand’s isoperimetric theory; Two geometric applications of the isoperimetric inequality; Application to the longest-increasing-subsequence problem; Proof of the isoperimetric problem;
Application and comparison in the theory of hereditary sets; Suprema of linear functionals; Tail of the assignment problem; Further applications of Talagrand’s isoperimetric inequalities.
How to Buy Probability Theory and Combinatorial Optimization
When you buy via the Amazon listing there is good news and bad news. The good news is that you can get a package deal with The Cauchy-Schwarz Master Class. The bad news is that Amazon sometimes lets it run out of stock --- most recently, it has been IN STOCK.
Still, if Amazon can't promise quick delivery, then it is best to order through the SIAM listing. They are the publishers and they always have copies on hand. [Ordering details: List Price $41.50
/ SIAM/CBMS Member Price $29.05 / Order Code CB69 1997 / viii + 159 pages paperback.]
Finally, the numbers ISBN-13: 978-0-898713-80-0 and ISBN-10: 0-89871-380-3 are useful if you need to hunt for a second hand copy, and, archaic though the institution may be, there is always the
library: QA273.45.S74 1997
To do list: I plan to post the references from the book one of these days. The references to the older subadditivity results are otherwise hard to find.
|
{"url":"http://www-stat.wharton.upenn.edu/~steele/PTCOIntro.htm","timestamp":"2014-04-21T02:42:58Z","content_type":null,"content_length":"7461","record_id":"<urn:uuid:6b3c8997-197b-4799-aa31-56ab0e49eb21>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formal verification of a partial-order reduction technique for model checking
Results 1 - 10 of 17
, 1997
Cited by 59 (0 self)
. State space explosion is a fundamental obstacle in formal verification of designs and protocols. Several techniques for combating this problem have emerged in the past few years, among which two
are significant: partial-order reductions and symbolic state space search. In asynchronous systems, interleavings of independent concurrent events are equivalent, and only a representative
interleaving needs to be explored to verify local properties. Partial-order methods exploit this redundancy and visit only a subset of the reachable states. Symbolic techniques, on the other hand,
capture the transition relation of a system and the set of reachable states as boolean functions. In many cases, these functions can be represented compactly using binary decision diagrams (BDDs).
Traditionally, the two techniques have been practiced by two different schools---partial-order methods with enumerative depth-first search for the analysis of asynchronous network protocols, and
symbolic bread...
- In: TACAS ’98: Proceedings of the 4th International Conference on Tools and Algorithms for Construction and Analysis of Systems , 1998
Cited by 37 (7 self)
Abstract. The state space explosion problem is central to automatic verification algorithms. One of the successful techniques to abate this problem is called 'partial order reduction'. It is based on
the observation that in many cases the specification of concurrent programs does not depend on the order in which concurrently executed events are interleaved. In this paper we present a new version
of partial order reduction that allows all of the reduction to be set up at the time of compiling the system description. Normally, partial order reduction requires developing specialized
verification algorithms, which in the course of a state space search, select a subset of the possible transitions from each reached global state. In our approach, the set of atomic transitions
obtained from the system description after our special compilation, already generates a smaller number of choices from each state. Thus, rather than conducting a modified search of the state space
generated by the original state transition relation, our approach involves an ordinary search of the reachable state space generated by a modified state transition relation. Among the advantages of
this technique over other versions of the reduction is that it can be directly implemented using existing verification tools, as it requires no change of the verification engine: the entire reduction
mechanism is set up at compile time. One major application is the use of this reduction technique together with symbolic model checking and localization reduction, obtaining a combined reduction. We
discuss an implementation and experimental results for SDL programs translated into COSPAN notation by applying our reduction techniques. This is part of a hardware-software co-verification project.
, 1998
Cited by 34 (2 self)
In verification by explicit state enumeration a randomly accessed state table is maintained. We describe a version of the explicit state enumeration verifier Murφ that allows using magnetic disk
verification. We describe a version of the explicit state enumeration verifier Mur' that allows using magnetic disk instead of main memory for storing almost all of the state table. The algorithm
avoids costly random accesses to disk and amortizes the cost of linearly reading the state table from disk over all states in a certain breadth-first level. The remaining runtime overhead for
accessing the disk can be strongly reduced by combining the scheme with hash compaction. We show how to do this combination efficiently and analyze the resulting algorithm. In experiments with three
complex cache coherence protocols, the new algorithm achieves memory savings factors of one to two orders of magnitude with a runtime overhead of typically only around 15%. Keywords protocol
verification, expli...
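The explicit state enumeration referred to throughout these abstracts reduces to a breadth-first search over a transition relation with a visited-state table. The toy model below is our own illustration (it is not code from Murφ or any other cited tool); it explores a small hand-written transition system level by level, the same access pattern the disk-based algorithm above exploits:

```python
from collections import deque

def successors(state):
    """Toy transition system: a counter modulo 8 that can either
    increment or reset to zero."""
    return {(state + 1) % 8, 0}

def reachable(start):
    """Explicit-state breadth-first search with a visited table."""
    visited = {start}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in visited:
                visited.add(t)
                frontier.append(t)
    return visited

states = reachable(0)  # all 8 counter values are reachable
```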
- In 9th International SPIN Workshop on Model Checking Software, Lecture Notes in Computer Science 2318 , 2002
Cited by 15 (4 self)
Partial order reduction is a very succesful technique for avoiding the state explosion problem that is inherent to explicit state model checking of asynchronous concurrent systems. It exploits the
commutativity of concurrently executed transitions in interleaved system runs in order to reduce the size of the explored state space. Directed model checking on the other hand addresses the state
explosion problem by using guided search techniques during state space exploration. As a consequence, shorter errors trails are found and less search effort is required than when using standard
depth-first or breadth-first search. We analyze how to combine directed model checking with partial order reduction methods and give experimental results on how the combination of both techniques performs.
, 1999
Cited by 10 (0 self)
conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of SRC, NSF, DARPA, or the United States Government.
, 1997
Cited by 8 (4 self)
Modern digital systems often employ sophisticated protocols. Unfortunately, designing correct protocols is a subtle art. Even when using great care, a designer typically cannot foresee all possible
interactions among the components of the system; thus, bugs like subtle race conditions or deadlocks are easily overlooked. One way a computer can support the designer is by simulating random
executions of the system. There is, however, a high probability of missing executions containing errors -- especially in complex systems -- using this simulation approach. In contrast, an automatic
verifier tries to examine all states reachable from a given set of startstates. The biggest obstacle in this exhaustive approach is that often there is a very large number of reachable states. This
thesis describes three techniques to increase the size of the reachable state spaces that can be handled in automatic verifiers. The techniques work in verifiers that are based on explicitly storing
each reachable ...
, 1998
Cited by 8 (2 self)
This paper explains the setting of an extensive formalisation of the theory of sequences (finite and infinite lists of elements of some data type) in the Prototype Verification System pvs. This
formalisation is based on the characterisation of sequences as a final coalgebra, which is used as an axiom. The resulting theories comprise standard operations on sequences like composition (or
concatenation), filtering, flattening, and their properties. They also involve the prefix ordering and proofs that sequences form an algebraic complete partial order. The finality axiom gives rise to
various reasoning principles, like bisimulation, simulation, invariance, and induction for admissible predicates. Most of the proofs of equality statements are based on bisimulations, and most of the
proofs of prefix order statements use simulations. Some significant aspects of these theories are described in detail. This coalgebraic formalisation of sequences is presented as a concrete example
that shows t...
, 1998
Cited by 7 (6 self)
The MDG system is a decision diagram based verification tool, primarily designed for hardware verification. It is based on Multiway decision diagrams---an extension of the traditional ROBDD approach.
In this paper we describe the formal verification of the component library of the MDG system, using HOL. The hardware component library, whilst relatively simple, has been a source of errors in an
earlier developmental version of the MDG system. Thus verifying these aspects is of real utility towards the verification of a decision digram based verification system. This work demonstrates how
machine assisted proof can be of practical utility when applied to a small focused problem.
- Formal Methods in System Design
Cited by 6 (2 self)
Abstract. Combining verification methods developed separately for software and hardware is motivated by the industry’s need for a technology that would make formal verification of realistic software/
hardware co-designs practical. We focus on techniques that have proved successful in each of the two domains: BDD-based symbolic model checking for hardware verification and partial order reduction
for the verification of concurrent software programs. In this paper, we first suggest a modification of partial order reduction, allowing its combination with any BDD-based verification tool, and
then describe a co-verification methodology developed using these techniques jointly. Our experimental results demonstrate the efficiency of this combined verification technique, and suggest that for
moderate–size systems the method is ready for industrial application.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1095279","timestamp":"2014-04-16T20:38:34Z","content_type":null,"content_length":"38281","record_id":"<urn:uuid:8bbc8ea7-c358-4663-987e-4d5c582f6bb2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Moments of inertia of rigid bodies
We evaluate the right-hand integral of the expression of moment of inertia for regularly shaped geometric bodies. The evaluation is basically an integration process, well suited to an axis of rotation for which the mass distribution is symmetric. In other words, evaluation of the integral is easy in cases where the mass of the body is evenly distributed about the axis. This axis of symmetry passes through the "center of mass" of the regular body. Calculation of moment of inertia with respect to other axes is also possible, but then the integration process becomes tedious.
There are two very useful theorems that enable us to calculate moment of inertia about certain other relevant axes as well. These theorems pertaining to calculation of moment of inertia with respect
to other relevant axes are basically "short cuts" to avoid lengthy integration. We must, however, be aware that these theorems are valid for certain relevant axes only. If we are required to
calculate moment of inertia about an axis which can not be addressed by these theorems, then we are left with no choice other than evaluating the integral or determining the same experimentally. As
such, we limit ourselves in using integral method to cases, where moment of inertia is required to be calculated about the axis of symmetry.
In this module, we will discuss calculation of moment of inertia using basic integral method only, involving bodies having (i) regular geometric shape (ii) uniform mass distribution i.e uniform
density and (iii) axis of rotation passing through center of mass (COM). Application of the theorems shall be discussed in a separate module titled " Theorems on moment of inertia ".
As far as integration method is concerned, it is always useful to have a well planned methodology to complete the evaluation. In general, we complete the integration in following steps :
1. Identify an infinitesimally small element of the body.
2. Identify applicable density type (linear, surface or volumetric). Calculate elemental mass "dm" in terms of appropriate density.
3. Write down the expression of moment of inertia (dI) for elemental mass.
4. Evaluate the integral of moment of inertia for an appropriate pair of limits and determine moment of inertia of the rigid body.
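The four steps can be checked numerically for the simplest case. The fragment below is an illustrative sketch (the function and parameter names are our own, not from the original module): it sums dI = x² dm over thin slices of a uniform rod, with the axis through the perpendicular bisector, and recovers the standard result I = ML²/12.

```python
def rod_moment_of_inertia(mass, length, slices=10000):
    """Numerically accumulate dI = x^2 * (lambda * dx) for a uniform
    rod rotating about the perpendicular bisector through its center."""
    lam = mass / length                     # linear density (lambda)
    dx = length / slices
    total = 0.0
    for i in range(slices):
        x = -length / 2 + (i + 0.5) * dx    # midpoint of each slice
        total += x * x * lam * dx           # dI = x^2 dm
    return total

I_num = rod_moment_of_inertia(mass=2.0, length=3.0)
I_exact = 2.0 * 3.0 ** 2 / 12               # M L^2 / 12 = 1.5
```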
Identification of the small element is crucial in the evaluation of the integral. We consider a linear element in evaluating the integral for a linear mass distribution, as for a rod or a plate. On the other hand, we consider a thin concentric ring as the element for a circular plate, because we can think of a circular plate as being composed of an infinite number of thin concentric rings. Similarly, we consider a spherical body as being composed of closely packed thin spherical shells.
Calculation of elemental mass "dm" makes use of the appropriate density on the basis of the nature of mass distribution in the rigid body:
mass = appropriate density × geometric dimension
The choice of density depends on the nature of body under consideration. In case where element is considered along length like in the case of a rod, rectangular plate or ring, linear density (λ) is
the appropriate choice. In cases, where surface area is involved like in the case of circular plate, hollow cylinder and hollow sphere, areal density (σ) is the appropriate choice. Finally,
volumetric density (ρ) is suitable for solid three dimensional bodies like cylinder and sphere. The elemental mass for different cases are :
dm = λ dx;  dm = σ dA;  dm = ρ dV
Figure 1: Elemental mass. (a) Elemental mass of a rod for MI about the perpendicular bisector. (b) Elemental mass of a circular disc for MI about the perpendicular axis through the center.
Figure 2: Elemental mass of a sphere for MI about a central axis.
The MI integral is then expressed by suitably replacing the "dm" term by the density term in the integral expression. This approach to integration using elemental mass assumes that the mass distribution is uniform.
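As a second illustrative check (again with assumed names, not from the original module), the same recipe applies to a uniform circular disc built from thin concentric rings of mass dm = σ · 2πr dr; summing dI = r² dm recovers I = MR²/2.

```python
import math

def disc_moment_of_inertia(mass, radius, rings=10000):
    """Sum dI = r^2 dm over thin concentric rings, where each ring
    has mass dm = sigma * 2 * pi * r * dr."""
    sigma = mass / (math.pi * radius ** 2)  # areal density (sigma)
    dr = radius / rings
    total = 0.0
    for i in range(rings):
        r = (i + 0.5) * dr                  # mid-radius of each ring
        dm = sigma * 2 * math.pi * r * dr
        total += r * r * dm                 # dI = r^2 dm
    return total

I_num = disc_moment_of_inertia(mass=4.0, radius=0.5)
I_exact = 4.0 * 0.5 ** 2 / 2                # M R^2 / 2 = 0.5
```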
|
{"url":"http://cnx.org/content/m14292/latest/?collection=col10322/1.175/","timestamp":"2014-04-20T03:26:25Z","content_type":null,"content_length":"194093","record_id":"<urn:uuid:63718bc6-054d-478a-afd5-9e48b2a702a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Collect Raw Data
Janice VanCleave
— John Wiley & Sons, Inc.
Updated on Feb 11, 2011
This step describes types of and ways to collect raw data (experimental results). Raw data includes observations (information collected about something by using your senses) made during testing. The
two types of observations are qualitative and quantitative. A quantitative observation is a description of the amount of something. Numbers are used in quantitative descriptions. Instruments, such as
a balance, a ruler, and a timer, are used to measure quantities or to describe the amount of the property being observed, such as mass, height, or time.
Metric measurements are generally the preferred units of measurement for science fair projects; for example, length in meters, mass in grams, volume in milliliters, and temperature in degrees
Celsius. Another type of quantitative observation can be a scale that you design. For example, if your experiment involves measuring the change in the freshness of flowers, you might have a scale of
freshness from 1 to 5, with 5 being the most fresh and having no dry parts on the petals and 1 being the least fresh with each petal being totally dry.
A qualitative observation is a description of the physical properties of something, including how it looks, sounds, feels, smells, and/or tastes. Words are used in a qualitative description. The
qualitative description of a light could be about its color and would include words such as white, yellow, blue, and red.
As you collect raw data, record it in your log book. You want your log to be organized and neat, but you should not recopy the raw data for your journal. You should recopy the data that you will want
to represent the information on your display in tables and/or graphs so that it is more easily understandable and meaningful to observers. (See chapter 10 for information about the project display.)
Data is generally recorded in a table, which is a chart in which information is arranged in rows and columns. A column is a vertical listing of data values and a row is a horizontal listing of data
values. There are different ways of designing a table, but all tables should have a title (a descriptive heading) and rows and columns that are labeled. If your table shows measurements, the units of
measurement, such as minutes or centimeters, should be part of the column's or row's label.
For an experimental data table, such as Table 8.1, the title generally describes the dependent variable of the experiment, such as "Moths' Attraction to Light," which in this case is for the data
from an experiment where yellow and white lightbulbs (independent variable) are used and the number of moths attracted to each light is counted (dependent variable). In contrast, the title "White
Light versus Yellow Light in the Attraction of Moths" expresses what is being compared. As a key part of the data organization, an average of each of the testings is calculated.
Analyzing and Interpreting Data
When you have finished collecting the data from your project, the next step is to interpret and analyze it. To analyze means to examine, compare, and relate all the data. To interpret the data means
to restate it, which involves reorganizing it into a more easily understood form, such as by graphing it. A graph is a visual representation of data that shows a relationship between two variables.
All graphs should have:
1. A title.
2. Titles for the x-axis (horizontal) and y-axis (vertical).
3. Scales with numbers that have the same interval between each division. Scales often start at zero, but they don't have to.
4. Labels for the categories being counted.
The three most common graphs used in science fair projects are the bar graph, the circle graph, and the line graph. Graphs are easily prepared using graphing software on a computer. But if these
tools are not available to you, here are hints for drawing each type of graph.
In a bar graph, you use solid bar-like shapes to show the relationship between the two variables. Bar graphs can have vertical or horizontal bars. The width and separation of each bar should be the
same. The length of a bar represents a specific number on a scale, such as 10 moths. The width of a bar is not significant and can depend on available space due to the number of bars needed. A bar
graph has one scale, which can be on the horizontal or vertical axis. This type of graph is most often used when the independent variable is qualitative, such as the number of moths in Table 8.1. The
independent variable for the Moths' Attraction to Light table is the color of light—white, yellow, or no light (control)—and the dependent variable for this data is the number of moths near each
light. A bar graph using the data from Table 8.1 is shown in Figure 8.1. Since the average number of moths from the data varies from 1 to 12, a scale of 0 to 15 was used, with each unit of the scale
representing 1 moth. The heights of the bars in the bar graph show clearly that some moths were found in the area without light and some near the yellow light, but the greatest number were present in
the area with white light.
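The same bar-graph idea can be mocked up in plain text. The snippet below is a hypothetical helper (not from the article) that prints one bar per light color using the averages quoted above, one mark per moth:

```python
def text_bar_chart(data):
    """Render a simple horizontal bar chart, one '#' per unit."""
    width = max(len(label) for label in data)
    lines = []
    for label, value in data.items():
        lines.append(f"{label.ljust(width)} | {'#' * value} ({value})")
    return "\n".join(lines)

moths = {"White": 12, "Yellow": 4, "Control": 1}
print(text_bar_chart(moths))
```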
A circle graph (also called a pie chart) is a graph in which the area of a circle represents a sum of data, and the sizes of the pie-shaped pieces into which the circle is divided represent the
amount of data. To plot your data on a circle graph, you need to calculate the size of each section. An entire circle represents 360°, so each section of a circle graph is a fraction of 360°. For
example, data from Table 8.1 was used to prepare the circle graph in Figure 8.2. The size of each section in degrees was determined using the following steps:
1. Express the ratio of each section as a fraction, with the numerator equal to the average number of moths counted on each type of light and the denominator equal to the average total number of
moths counted on all the lights:
2. Multiply the fraction by 360°:
White ^12/[17] × 360° = 254.1°
Yellow ^4/[17] × 360° = 84.7°
Control ^1/[17] × 360° = 21.2°
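The angle calculation above is easy to script. This illustrative snippet (the function name is our own) reproduces the worked values of 254.1°, 84.7°, and 21.2°:

```python
def section_angles(counts):
    """Degrees for each slice of a circle graph: each category's
    fraction of the total, multiplied by 360 and rounded to one place."""
    total = sum(counts.values())
    return {label: round(count / total * 360, 1)
            for label, count in counts.items()}

moths = {"White": 12, "Yellow": 4, "Control": 1}
angles = section_angles(moths)
# Matches the worked values: White 254.1, Yellow 84.7, Control 21.2
```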
To prepare the circle graph, first decide on the diameter needed, then use a compass to draw a circle. Next draw a straight line from the center of the circle to any point on the edge of the circle.
Using a protractor, start at this line and mark a dot on the edge of the circle 254.1° from the line. Draw a line to connect this dot to the center of the circle. The pie-shaped section you made
represents the number of moths found near the white light. Start the next section on the outside line for the yellow light section. The remaining section will be the no-light section, or control
section. Each section should be labeled as shown in Figure 8.2.
Each section of a circle graph represents part of the whole, which always equals 100%. The larger the section, the greater the percentage of the whole. So all of the sections added together must
equal 100%.
To determine the percentage of each section, follow these steps:
1. Change the fractional ratio for each section to a decimal by dividing the numerator by the denominator:
2. Change the decimal answers to percent. Percent means "per hundred." For example, for white light, .70 is read 70⁄100 or 70 per 100, which can be written as 70%.
White light: .70 = ^70/[100] = 70%
Yellow light: .24 = ^24/[100] = 24%
Control: .06 = ^6/[100] = 6%
To represent the percentage of moths attracted to each light color, you could color each section of the circle graph with a different color. You could label the percentages on the graph and make a
legend explaining the colors of each section as in Figure 8.3.
A line graph is a graph in which one or more lines are used to show the relationship between the two quantitative variables. The line shows a pattern of change. While a bar graph has one scale, a
line graph has two scales. Figure 8.4 shows a line graph of data from a different study in which the problem was to determine if ants communicate by laying a scent trail for other ants to follow to a
food source. The line graph shows data for the number of ants observed on one of the paths every 15 minutes for 1 hour. Generally, the independent variable is on the x-axis (the horizontal axis) and the dependent variable is on the y-axis (the vertical axis). For this example, the independent variable of time is on the x-axis and the dependent variable of number of ants is on the y-axis. One unit on the time scale represents 1 minute, and units are marked off in groups of 15 up to a total of 60 units. One unit on the number of ants scale represents 1 ant. Since the largest average counted was 32.2 ants, the scale for ants is numbered by fives from 0 to 35. On the graph, the increase in the angle of the line over time shows that more ants were found on the food as time passed.
|
{"url":"http://www.education.com/reference/article/collect-raw-data/","timestamp":"2014-04-18T00:43:45Z","content_type":null,"content_length":"103949","record_id":"<urn:uuid:47f76403-c29b-47dd-9f19-ccef62b20292>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Pure Mathematics
University of Waterloo
Waterloo, Ontario, Canada
N2L 3G1
Office: MC 5167
Telephone: 519-888-4567 ext. 35560
Fax number: 519-725-0160
Email David McKinnon
David McKinnon
I'm interested in arithmetic geometry, primarily the distribution of rational points on algebraic varieties, particularly surfaces. I use Vojta's Conjectures as my inspiration, so I'm always
interested in special cases and consequences of them.
Geometry and Topology Seminar Speaker List
Pure Math Seminars
PDF file containing tables of generators of various cones in the Néron-Severi group of a smooth cubic surface
Teaching (Winter 2014)
This page was last updated February 20, 2014.
|
{"url":"http://www.math.uwaterloo.ca/~dmckinno/","timestamp":"2014-04-19T22:12:16Z","content_type":null,"content_length":"5467","record_id":"<urn:uuid:9fa0a95a-2374-4b5e-ba17-de454d30443e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Angular Magnification
1. The problem statement, all variables and given/known data
If you have normal vision and are properly using a magnifying glass with a refractive power of 12.5 diopters, how far should you hold it from the object to achieve maximum angular magnification?
2. Relevant equations
power (diopters) = 1 / focal length (m), so 1 diopter = 1 m^-1
M=angular magnification
Mmax = [(Near Point)/f] +1
Near point for someone with normal vision = 25cm
3. The attempt at a solution
First I use 12.5 diopters to figure out the focal length:
1/12.5 = .08m = 8cm
Then I plugged this into the equation for Mmax for someone with normal vision:
[25cm / 8cm] + 1 = 4.125 = Mmax
I don't know how to use any of this to find the maximum distance the object should be held at though. The prof already posted the answers (this should be 6.67 cm); I'm just trying to practice before an exam and learn how to do this, so any help/tips would be great. Thanks!
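For what it's worth, the two calculations already done in the attempt can be checked with a short Python sketch (the variable names are mine, not from the problem statement):

```python
# Reproduce the arithmetic from the attempt above.
power = 12.5              # refractive power in diopters (m^-1)
f_cm = 1 / power * 100    # focal length: 1/12.5 m = 8 cm

near_point = 25           # near point for normal vision, in cm
M_max = near_point / f_cm + 1
print(f_cm, M_max)        # 8.0 and 4.125
```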
|
{"url":"http://www.physicsforums.com/showthread.php?t=408443","timestamp":"2014-04-17T18:39:53Z","content_type":null,"content_length":"22749","record_id":"<urn:uuid:5b2ff0d0-39fd-4d6c-a81c-aa53c24ca370>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: Re: spline regression (Kit Baum)
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Re: spline regression (Kit Baum)
From Kit Baum <baum@bc.edu>
To statalist@hsphsun2.harvard.edu
Subject st: Re: spline regression (Kit Baum)
Date Tue, 25 Mar 2008 06:48:22 -0400
If you graph x vs y, and break the line at the knot points, a linear spline allows the line to have kinks, like a dot-to-dot drawing. A quadratic spline has equal first derivatives (that is, slopes) on either side of each knot point, so that it will not have any kinks. I don't know how to explain a second derivative in this context except to say that a curved line may have more or less curvature (just as a railroad track on a curve may be a broad curve or a sharp curve), and holding the second derivative equal across the knot causes the degree of curvature to be the same before and after the knot point (so that the locomotive will not derail at the knot point).
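For anyone who wants to see the kinks numerically: the sketch below is a rough, hypothetical Python analogue of what -mkspline- builds for a linear spline (one basis variable per segment), not Stata's actual code. Regressing y on these basis variables gives a separate slope in each segment while the fitted line stays connected (kinked, not broken) at the knots.

```python
def linear_spline_basis(x, knots):
    """Linear-spline basis for one observation: how far x has
    travelled within each segment defined by the knots."""
    basis = []
    lower = None
    for k in list(knots) + [None]:   # None marks the open last segment
        if lower is None:            # first segment: 0 up to first knot
            seg = min(x, k)
        elif k is None:              # last segment: beyond the last knot
            seg = max(x - lower, 0.0)
        else:                        # interior segment between two knots
            seg = min(max(x - lower, 0.0), k - lower)
        basis.append(seg)
        lower = k
    return basis

# With knots at 3 and 5, x = 7 contributes 3, 2 and 2 to the
# three segments; the pieces always sum back to x.
print(linear_spline_basis(7.0, [3.0, 5.0]))   # [3.0, 2.0, 2.0]
```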
Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata:
On Mar 24, 2008, at 02:33 , Mohammed wrote:
Thank you very much. Pardon me, I am not good in Math.
I will be very grateful if you explain more what you
mean by "the derivative (slope) of the function is
equal on either side of each knot point, but the
curvature on either side may differ" and "The first
and second derivatives of the function are equal on
either side of each knot point." Thanks again Kit
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-03/msg00928.html","timestamp":"2014-04-17T22:00:00Z","content_type":null,"content_length":"6487","record_id":"<urn:uuid:51830cd7-6640-4da5-a74a-92c4dba69346>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Room Modes
Frequencies at which sound waves in a room resonate (in the form of standing waves), based on the room dimensions. The acoustic modes will "colour" the sound, ie enhance certain frequencies and dull others.
Therefore, the standard modal approach for designing a room with good acoustics is to create as many different resonances as possible, and to spread them as evenly as possible across the frequency range.
The colouration of sound depends on:
□ Bandwidth of modes. Typically the low frequency acoustic modes of rooms have a bandwidth of about 5Hz. However, this is dependent on the amount of absorption within the room at that frequency.
□ Degree of excitation of modes
□ Separation of modes from strongly excited neighbours
□ Frequency content of the source
□ Position of the source and receiver
Colourations are part of room acoustics and not always bad. They can be controlled to a degree.
The lowest resonance is determined by the largest dimension of the room. (Technically there is also a resonance at zero Hz for all rooms, but this is generally not considered a true resonance).
f[lmn] = (c[0]/2) × √[(l/L[x])² + (m/L[y])² + (n/L[z])²]
c[0] = speed of sound [ms^-1]
l, m, n = 0, 1, 2, 3, ......
L[x], L[y], L[z] = room dimensions in each direction [m]
Axial mode
l and m, m and n or l and n are zero
Examples for a room with dimensions x=5m, y=4m, z=3m: modes (n[x], n[y], n[z]) = (1,0,0), (0,1,0), (0,0,1) and (2,0,0).
Tangential mode
l,m or n are zero, the other two are non-zero
Example for a room with dimensions x=5m, y=4m, z=3m: mode (n[x], n[y], n[z]) = (1,1,0).
Oblique mode
l, m and n are all non-zero
Examples for a room with dimensions x=5m, y=4m, z=3m: modes (n[x], n[y], n[z]) = (1,1,1) and (2,2,1).
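The frequencies of these example modes can be computed from the standard rectangular-room formula f = (c[0]/2)·√[(l/L[x])² + (m/L[y])² + (n/L[z])²], using the variables defined above. A minimal Python sketch, assuming a speed of sound of 343 ms^-1 (the function name is illustrative):

```python
from math import sqrt

def mode_freq(l, m, n, Lx, Ly, Lz, c0=343.0):
    """Resonant frequency of mode (l, m, n) in a rectangular room."""
    return (c0 / 2) * sqrt((l / Lx) ** 2 + (m / Ly) ** 2 + (n / Lz) ** 2)

dims = (5.0, 4.0, 3.0)                # the 5m x 4m x 3m example room
print(mode_freq(1, 0, 0, *dims))      # axial:      34.3 Hz
print(mode_freq(1, 1, 0, *dims))      # tangential: about 54.9 Hz
print(mode_freq(1, 1, 1, *dims))      # oblique:    about 79.3 Hz
```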
In general, the lower the better for the first resonant frequency, because this region is where the frequency response is most variable. Bigger rooms also reduce the spacing between resonances. The
limiting factor here is usually cost.
The primary resonances are "axial", involving reflections from two opposing surfaces. Additional resonances are created by reflections that ricochet off four different surfaces. These "tangential"
resonances are generally weaker, because energy is lost at each reflection. Finally there are "oblique" resonances which ricochet off all six surfaces. Each resonance gives rise to a "mode" with
a characteristic spatial pressure variation.
Many methods and optimum room ratios have been suggested over the years to minimise colouration. Essentially these methods try to avoid degenerate modes, where multiple modal frequencies fall within
a small bandwidth, and also bandwidths with absences of modes. The assumption being that as music is played in the rooms, the absence or boosting of certain tonal elements will detract from the
audio quality.
Cycling through all of the integer values and combinations of nx, ny and nz for a given set of room dimensions Lx, Ly and Lz will result in a complete set of modal frequencies. The number of modes in
a given frequency band can be calculated and divided by the bandwidth of the band giving the modal density. The modal density is then plotted against the centre frequency of the band. This may be
compared with the statistical modal density.
According to the modal design theory, the worst possible room shape is a cube. The modal density for a 3m x 3m x 3m room is shown above.
The next worst is a room where all dimensions are multiples of the height.
The room 2m x 4m x 8m is slightly better than the 3m cube. However, neither are as good as the 3m x 4m x 5m room shown below.
As stated earlier, one way to improve the low frequency performance is to increase the dimensions of the room, but maintain the aspect ratio of 3:4:5. The 6m x 8m x 10m room shows a very smooth
progression from 20Hz to 1000Hz.
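The degeneracy argument can be illustrated with a rough sketch that counts the modes below an arbitrary 200 Hz cutoff for the 3m cube and the 3m x 4m x 5m room. The speed of sound is again assumed to be 343 ms^-1, and rounding to 0.1 Hz is simply a way of grouping coincident modes:

```python
from math import sqrt

def modes_below(fmax, Lx, Ly, Lz, c0=343.0, nmax=10):
    """All modal frequencies up to fmax, rounded to 0.1 Hz."""
    freqs = []
    for l in range(nmax):
        for m in range(nmax):
            for n in range(nmax):
                if l == m == n == 0:
                    continue  # (0,0,0) is not a true resonance
                f = (c0 / 2) * sqrt((l / Lx) ** 2 + (m / Ly) ** 2
                                    + (n / Lz) ** 2)
                if f <= fmax:
                    freqs.append(round(f, 1))
    return sorted(freqs)

cube = modes_below(200, 3, 3, 3)
room = modes_below(200, 3, 4, 5)
# The cube piles many modes onto the same frequencies (degenerate
# modes); the 3:4:5 room spreads its modes over more distinct values.
print(len(cube), len(set(cube)))
print(len(room), len(set(room)))
```

The cube's lowest mode lands at about 57.2 Hz, and many of its higher modes coincide exactly, which is the degeneracy the modal design theory tries to avoid.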
See also: Axial Mode, Cavity Acoustics, Noise Criteria, Tangential Mode, Warmth.
Subjects: Architectural Acoustics Architecture Noise & Vibration
Acoustics and Vibrations Animations A collection of animations produced by Daniel A. Russell, Ph.D., Associate Professor of Applied Physics at Kettering University.
|
{"url":"http://www.diracdelta.co.uk/science/source/r/o/room%20modes/source.html","timestamp":"2014-04-17T21:54:00Z","content_type":null,"content_length":"19195","record_id":"<urn:uuid:fa3b167c-8cec-40a1-ae89-54e494966945>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|