content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Type Space Associated with Schrödinger Operators on Stratified Lie Groups
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 483951, 13 pages
Research Article
The Higher Order Riesz Transform and BMO Type Space Associated with Schrödinger Operators on Stratified Lie Groups
^1School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
^2Department of Mathematics, Shanghai University, Shanghai 200444, China
Received 1 October 2013; Accepted 7 November 2013
Academic Editor: Yoshihiro Sawano
Copyright © 2013 Yu Liu and Jianfeng Dong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Assume that is a stratified Lie group and is the homogeneous dimension of . Let be the sub-Laplacian on and a nonnegative potential belonging to certain reverse Hölder class for . Let be a
Schrödinger operator on the stratified Lie group . In this paper, we prove the boundedness of some integral operators related to , such as , , and () on the space BMO[L](G).
1. Introduction
In recent years, some problems related to Schrödinger operators on the Euclidean space with nonnegative potentials have been investigated by a number of scholars (cf. [1–12], etc.). Later, more
scholars want to generalize the above results related to Schrödinger operators to a more general setting, such as Heisenberg group, nilpotent Lie groups, and spaces of homogeneous type (cf. [13–24],
etc.). The auxiliary function plays an important role in the Harmonic analysis problems related to Schrödinger operators. Recently, Yang et al. introduced the admissible function. It is known that
the auxiliary function is a special case of the admissible function. Accordingly, they investigated function spaces, such as , , and Hardy space, related to the admissible function in [22, 24]. Among
the above problems, Riesz transforms and higher order Riesz transforms related to Schrödinger operators are one of hottest issues. Their boundedness has been obtained by Shen [13] and Li [4] in the
different settings. Dziubański and Zienkiewicz proved that Riesz transforms related to Schrödinger operators are bounded from Hardy spaces associated with Schrödinger operators into in [1]. Endpoint
boundedness of Riesz transforms related to Schrödinger operators had been investigated in [11, 25]. Dong and Liu established the spaces associated with Schrödinger operators for the Riesz transform
related to Schrödinger operators in [26]. Lin et al. obtained the corresponding results on the Heisenberg group in [14, 15]. Just now, Dong and Liu established the estimates for the higher order
Riesz transform in [27]. The aim of this paper is to obtain the estimates for the higher order transform on stratified Lie groups.
Firstly, we recall some basic facts of stratified Lie groups (cf. [28]). A Lie group is called stratified if it is nilpotent, connected, and simple connected, and its Lie algebra admits a vector
space decomposition such that for and . If is stratified, its Lie algebra admits a family of dilations, namely, Assume that is a Lie group with underlying manifold for some positive integer .
inherits dilations from : if and , we write where . The map is an automorphism of . The left (or right) Haar measure on is simply , which is the Lebesgue measure on . For any measurable set , denote
by the measure of . The inverse of any is simply . The group law has the following form: for some polynomials in .
The number is called the homogeneous dimension of . We fix a homogeneous norm function on , which is smooth away from , where is the unit element of . Thus, for all for all , and if . The homogeneous
norm induces a quasi-metric which is defined by . In particular, and . The ball of radius centered at is written by The measure of is where is a constant. In particular set for and .
Let be a basis for (viewed as left-invariant vector fields on ). Following [29], one can define a left invariant metric associated with which is called the Carnot-Caratheodory metric: let , and for
every define Let us define
The Carnot-Caratheodory metric is equivalent to the quasi-metric . From the results of Nagel et al. in [29], we deduce that there exists a constant such that, for any ,
It follows from [28] that , , are skew adjoint; that is, . Let be the sub-Laplacian on . This operator (which is hypoelliptic by Hörmander’s theorem in [30]) plays the same fundamental role on as the
ordinary does on . The gradient operator is denoted by .
Definition 1. A nonnegative locally integrable function on is said to belong to the reverse Hölder class if there exists such that the reverse Hölder inequality holds for every ball in .
Moreover, a locally bounded nonnegative function if there exists a positive constant such that holds for every ball in .
Furthermore, it is easy to see that for any .
Let be a Schrödinger operator on the stratified Lie group , where is a nonnegative potential belonging to the reverse Hölder class for some . Denote by the higher order Riesz transform. Accordingly,
denote by its dual operator.
It follows from [13] that the integral operators and are bounded on for and is bounded on for . Lin et al. introduced the Hardy type space related to the Schrödinger operator on the Heisenberg group
in [14]. The dual space of is the type space investigated by Lin and Liu in [15]. and were also introduced as applications of results in [11, 22].
Next, we recall the definition of and . Since and , the Schrödinger operator generates a () semigroup . The maximal function with respect to the semigroup is given by The Hardy space associated with
the Schrödinger operator is defined as follows in terms of the maximal function mentioned above.
Definition 2. A function is said to be in if the semigroup maximal function belongs to . The norm of such a function is defined by
Assume for . The auxiliary function is defined by It follows from Lemma 9 in Section 2 that for any .
The dual space of is the type space (cf. [22]). Let be a locally integrable function on and be a ball. Set
Definition 3. Let be a locally integrable function on . One says if
It is clear that and . Some remarks are given as follows.
Remark 4. Let . If , then there exists a positive constant : The above inequality can be easily deduced by Lemma3.1 in [11].
Similar to Remark1 in [15], we conclude that a function if and only if there exist some suitable constants and depending on and satisfying whenever such that
Our main results are given as follows.
Theorem 5. Suppose for some . Then the operators and are bounded on the space .
Theorem 6. Suppose for some , for some , and and for some positive constant . Then operator is bounded on the space .
It shoud be noted that because the left invariant vector fields in are skew-adjoint and they interact with convolution (see (41) for the details), we generalized the main results in [27] to the
stratified Lie groups instead of nilpotent Lie groups.
This paper is organized as follows. In Section 2, we collect some known facts about the auxiliary function . Section 3 gives some estimates of kernel for some operators in this paper. Section 4 gives
the proof of the boundedness of on the space . In Section 5, we establish the boundedness of . Finally, we give some examples for the potentials which satisfy the assumptions in Theorem 6 in
different settings.
Throughout this paper, we will use to denote the positive constant, which is not necessarily the same at each occurrence and may depend on the dimension , and the constant in (9). By and , we mean
that there exist some constants such that and , respectively.
2. Some Lemmas about the Auxiliary Function
In this section, we collect some known results about auxiliary function . We refer to [13] for the details. Throughout this section, unless otherwise indicated, we always assume that for some .
Lemma 7. is a doubling measure; that is, there exists a constant such that
Lemma 8. There exist constants such that In particular, if .
Lemma 9. There exists such that, for ,
Lemma 10. If , then Moreover,
Lemma 11. There exist and such that Moreover, if , then there exists such that
3. Estimates for the Kernels
In this section we will investigate some necessary estimates about the kernel of the operators in the paper.
Let be the heat kernel of the semigroup ,, associated with . Via Theorem 4.2 of [31], the following estimates hold true; that is, there exist positive constants and such that where is the unit
element of . Moreover, for any and , by using (3.5) in [13] we obtain Let Then for ,
Let be the fundamental solution of the operator for . In particular, we denote by . Then we have the following.
Proposition 12. There exists a positive constant such that for .
Proof. Equations (29) and (30) have been proved by Li in [13]. We only need to show that (32) holds true, because (31) and (33) can be proved similarly.
By (26) and (28), Firstly, for , we have In addition, for any positive integer , Therefore, Secondly, we have Therefore, (32) holds true.
Moreover, we need some other basic facts of fundamental solutions for sub-Laplacian on the stratified Lie group (see [32]).
In the first place, we use the standard notations , , and for the spaces of functions with compact support, functions, and distributions on .
A measurable function on will be called homogeneous of degree if for all . Likewise, a distribution will be called homogeneous of degree if for all and . A distribution which is away from and
homogeneous of degree will be called a kernel of type .
A differential operator will be called homogeneous of degree if for all and . Since is stratified, is homogeneous of degree if and only if . In particular, sub-Laplacian is homogeneous of degree 2.
It follows from [32] that if is a kernel of type and is homogeneous of degree , then is a kernel of type .
For sub-Laplacian , is homogeneous of due to the fact that is homogeneous of , where . In addition, by using Proposition1.7 in [28], we have
By [28], the left-invariant fields are formally skew-adjoint; that is, Moreover, interacts with convolution in the following way:
Let be the fundamental solution of for . Then, Theorem3.6 in [13] implies that In particular, is the fundamental solution of Schrödinger operator , which satisfies the following.(i)For each there
exists such that (ii)If , then for each there exists such that where the above estimate can be deduced by Lemma5.1 in [13].
The operator is defined by where the kernel . Also, its adjoint operator is defined by where the kernel . Since , then .
Moreover, we also need other estimates for the kernel and in order to prove the main results.
Lemma 13. Assume for . Let and . Then for any there exists such that where is the constant appearing in Lemma 8 and is the constant appearing in Lemma 11.
Proof. Let be the solution of in the ball . By Lemma3.2 in [13], we choose such that on , , and , where and are fixed constants, which are independent of and .
For , By (31) and (32) we have
By (9), the Calderón-Zygmund estimates, and Lemma 11, Let and . Then is a solution of in . By the above inequality and Lemma 8, we immediately have
This finishes the proof of Lemma 13.
Lemma 14. Suppose for some and for some . Then where and , if for some positive constant .
Proof. Note that . Let be the fundamental solution of . Then, we have It follows that By (41), we have Set . Thus, we have
For ,
Firstly, by (43) and (30), it holds that Secondly, via Lemmas 9 and 11, we similarly have where .
Now, we turn to estimating . By Lemma 7, (43), and (30), we have that if we choose large enough.
A similar argument implies that
The proof is completed.
Lemma 15. Suppose for some . Let be the conjugate index of . (1) and are bounded on the space , where .(2) is bounded on the space for .
The above lemmas hold true due to Theorem 4.1 and Theorem in [13], respectively.
4. The Boundedness of and on
Proof of Theorem 5. To prove Theorem 5, we adopt the method used in the proof of Theorem1.6 in [27].
Suppose and . Firstly, we suppose . Set . Then, we have where denotes the characteristic function of the set . Since is bounded on , by Remark 4, we have
Let . By Lemma 8, . Set . By using (43) and Lemma 11, we obtain where we choose large enough. Thus The above argument also shows that is well defined on without the ambiguity of an additive constant.
Suppose . Set . Then, we can write Via Lemma 8, for any . Similar to (66), we have Note that Set . By Hölder inequality and (9), we have, for any , where . Then, we have
Therefore, we prove that and , where is an absolute constant independent of .
Since , as an immediate consequence, is a bounded operator on .
The proof is completed.
5. The Boundedness of on
Proof of Theorem 6. Similar to the proof of Theorem1.7 in [27], we show that Theorem 6 holds true.
Suppose and . Firstly, we suppose . Set . Then, we decompose as
Due to Lemma 15, we conclude that is bounded on . By Remark 4, we have
Let . By Lemma 8, . Set . Then, by Hölder inequality and Lemma 13, we have where we choose sufficiently large.
Thus The above argument also shows that are well defined on without the ambiguity of an additive constant.
Suppose . Set . Then, we can write as follows: Note that for any . Similar to (66), we have To complete the proof of the theorem, by Remark 4, we only need to prove that there exists a constant such
that The left side of (78) is bounded by where is the dual operators of classical higher order Riesz transform . Let and . Note that . It is clear that , (cf. [28, Page 148]); therefore, It follows
that By Lemma 14, we get
Since , . It is easy to see that By the same argument and noting that , that is, , Because for , then . Thus, the last series converges. Using the fractional integral and the condition , we have
where and . Thus It remains to show Let , where satisfies . Note that . Set Since are bounded on , by Remark 4, we have Since is homogeneous of degree , then by (39), | {"url":"http://www.hindawi.com/journals/jfs/2013/483951/","timestamp":"2014-04-17T19:40:41Z","content_type":null,"content_length":"1045475","record_id":"<urn:uuid:e5bd29a6-950e-402a-8740-85633ac6085e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chance Workshop
Group Presentations
Dartmouth College
July 1998
I. Black Smokers and Nicotine--An Absorbing Issue
Melissa Cass, Carol Janik, Josephine Rodriguez, Steve Terry,
Beth Walters
II. Non-Cents: A Simulation Activity
Kal Godbole, Leona Mirza, Alice Richardson, Mark Rizzardi
III. Clinical Trials
Charlotte Buffington, Angela Hare, Ellen Musen, Nancy Roper
IV. Therapeutic Touch
Eli Brettler, Rita Kolb, Raj Prasad
V. Chasing the Home Run Record
Michael Dutko, Richard Iltis, Steve Samuels, Linda Thiel,
John Wasik
VI. Where Have All the Boys Gone?
Steve Givant, Eunice Goldberg, Ellen King, Barbara Stewart
I. Black Smokers and Nicotine--An Absorbing Issue
Melissa Cass, Carol Janik, Josephine Rodriguez, Steve Terry, Beth Walters
Workshop Handouts
Studies suggest black absorb more nicotine: Link to higher cancer rate is hinted. The Boston Globe, 8 July 1998, p. By Richard Saltus.
Studies show that black smokers absorb more nicotine. The Valley News, 8 July 1998, pB1. By John Schwartz.
Black smokers retain more nicotine, ABC News Online, July 10, 1998.
[links updated daily--it does not appear that archives can be accessed!]
Discussion questions based on above articles.
Additional References
The group also distributed copies of the following three articles from the July 8 issue of JAMA:
Caraballo, Ralph S. et. al., Racial and ethnic differences in serum cotinine levels of cigarette smokers. Journal of the American Medical Association 1998, 280: 135-139.
Prez-Stable, Elisio J. et. al., Nicotine metabolism and intake in black and white smokers. Journal of the American Medical Association 1998, 280: 152-156.
(Editorial), Pharmacogenetics and ethnoracial differences in smoking. Journal of the American Medical Association 1998, 280: 179-180.
II. Non-Cents: A Simulation Activity
Kal Godbole, Leona Mirza, Alice Richardson, Mark Rizzardi
Workshop Handouts
Exceprt from article in Milwaukee Journal, May 1992.
Worksheets for classroom simulation activities and discussion questions.
III. Clinical Trials
Charlotte Buffington, Angela Hare, Ellen Musen, Nancy Roper
Workshop Handouts
Magazine ad for ZYRTEC (allergy medication) & discussion questions
Quiz on terms from medical experiments
Additional References
Jessica Utts, Seeing Through Statistics. Chapter 5 "Experiments and Observational Studies"
IV. Therapeutic Touch
Eli Brettler, Rita Kolb, Raj Prasad
Workshop Handouts
Hallelujah! Science looks at prayer for friend and fungus. The New York Times on the Web, 5 April 1998. By Jeff Stryker.
Rosa, Linda et. al., A close look at therapeutic touch. Journal of the American Medical Association, 1998, 279, 1005-1010.
Chance class discussion handout to accompany above
Additional References
Eli Brettler posted the group's handout to his web page, including links to the articles online!
V. In Pursuit of the Home Run Record
Michael Dutko, Richard Iltis, Steve Samuels, Linda Thiel, John Wasik
Workshop Handouts
McGwire gets better, and a record looks more vulnerable. The New York Times, 9 July 1998, A1. By Buster Olney
Discussion questions based on the above article
Additional References
VegasINSIDER online sports betting site report on wagers regarding MdGwire
SportingNews online statistics on McGwire
Players who have hit 30 home runs before the All-star game.
Plaryes who hit 30 or more homeruns after the All-star game.
Side-by-side boxplots of McGwire's home run distances, home and away
and time series plot of home run distances, with home/away plotting symbols
VI. Where Have All the Boys Gone?
Steve Givant, Eunice Goldberg, Ellen King, Barbara Stewart
Workshop Handouts
¥ Alpert, Mark. Where have all the boys gone? Scientific American, July 1998 issue online:
¥ Discussion questions based on the article
Additional References
¥ Online JAMA abstract. Davis. D.L. et. al., Reduced ration of male to female births in several industrial countries. A sentinel health indicator? Journal of the American Medical Association 1998,
279: 1018-1023.
¥ Vital Statistics. No. 89. Birth and Birth Rates: 1970-1992. (Photocopy from US Statistical Abstract)
¥ Report: Environmental factors may support dip inmale birth rates.
The group sent us an extended write-up after the meeting. It is reproduced on the following pages.
Chance Workshop
July 1998
An Addendum to the Group Presentation:
Ellen King
Eunice Goldberg
Barbara Stewart
Steven Givant
from Eunice Goldberg, in consultation with Ellen King
Albert, Mark
Where Have All the Boys Gone
Scientific American -July 1998
Other Articles and Sources:
Tanner, Lindsey
Report: Environmental factors may support dip in male birth rates
Chicago, Associated Press, 1998
Davis, Gottlieb, Stamplinitzky
Reduced Ratio of Male to Female Births in Several Industrial Countries
JAMA Abstracts, April 1, 1998
U.S. National Center for Health Statistics: Vital Statistics of the United States (Birth rates by gender and ethnicity, infant mortality, ages of mothers and fathers, etc.)
JAMA, New England Journal of Medicine
Recommended searches
Information on male-female ratios in the rest of the world
Government policies that promote one gender over the other (China, India, Middle East, Latin America, etc.)
Long range effects of unbalanced ratios
Environmental factors that may be gender biased.
War, etc.
Recommended Activity--Population Simulation Problem
Several versions of this problem have circulated:
One version is that a country prefers males and families keep having children until a male is born. A second version says that females are preferred and therefore families keep having children until
they have a female.
Other versions say that a country does not want to take a chance on having too many girls-- so once a girl is born, the family must quit having children. The government does not care how many boys
are born.
What can be learned?
1.The analysis of the article helps students become quantitatively literate. By asking questions and doing research, the students will learn to evaluate and make sense of data.
2. The problem helps students learn about issues of probability, simulation, and distributions of data.
3. There can be meaningful integration of subject areas: social studies and mathematics and/or science.
For the analysis of the article students could possibly: learn to define the issues being presented and investigate whether the data actually explains or enlightens
Think about ways to obtain needed information either through further research or investigation
ask other questions about this issue that are compelling or need clarification and should be investigated, ie. explain the relationship, if any, between a declining ratio and a declining birth rate
interpret, analyze, question, and evaluate data: enough, convincing, misleading, contradictory, etc.
think about what information is needed to inform the issue: demographics, original studies, census data, etc.
learn to find backup data: internet, journals, newspapers, experts, etc.
gain experience looking at large quantities of information and sifting out what is important and/or meaningful
learn to ask "what ifs": what if the ratio became very disparate? what if it continued for many generations? what if science changes the way babies are reproduced?
By doing the problem students could:
generate their own data
have the experience of dealing with a data set that is messy yet still shows the theoretical patterns
look at measures of central tendency: mean, median, mode
look at spread: standard deviation and eyeballing the distribution
learn to simulate two-choice problems using different two choice simulations
learn the need for lots of data before patterns appear--learn that any one sample may not show the pattern-- or even a few samples may not--
compare the results of experimental simulations to the theoretical model of this problem
compare the variation between the number of boys or girls (depending on the version) in individual families-- and the average for many families.
look at the shape of the distributions for average number of boys vs. girls, and the distribution of the number of families with zero girls, 1 girl, 2 girls, 3 girls, etc.
compare real life to the simulation:
Is there a physical limit on the number of children one family can have?
Is the ratio really 50-50?
Can we simulate other ratios using these methods?
What do those other ratios say for future generations?
Are the simulations reasonable models?
Time frame: At least four hours of class time
(Copies of the article, questions presented in class, additional articles and data can be included here)
I am adding a copy of the problem as I used it in a workshop. It is similar to what Ellen said she did.
Males Preferred
Does it Work?
There are countries in the world where wives are considered to be a failure unless they produce a male child. King Henry the VIII beheaded several wives because they didn't produce a male heir.
There are still countries in the world that express a preference for boys. A certain unnamed country prefers boys to girls and thinks it has come up with a way to guarantee more boys. The leaders of
the country developed this family-planning scheme:
A couple will continue to bear children until a son is born, at which time they will stop having children.
Do you think this family-planning scheme will produce more boys or more girls?
Explain how you made your decision.
(As presented by Eunice Goldberg at Barat College, 1998)
SAMPLE 1: Using coin tosses simulate the generation of 20 families. Let heads represent boys and tails represent girls.
# GIRLS
SAMPLE 2: Using coin tosses simulate the generation of 20 families. Let heads represent boys and tails represent girls.
SAMPLE 3: Using THE RANDOM NUMBER TABLE to simulate the generation of 20 families. Let even numbers stand for boys and odd numbers stand for girls.
SAMPLE 4: Using THE RANDOM NUMBER TABLE to simulate the generation of 20 families. Let even numbers stand for boys and odd numbers stand for girls.
SAMPLE 5: Using a DIE simulate the generation of 20 families. Let 1,2, or 3 be a boy and 4, 5, and 6 be a girl.
SAMPLE 6: Using DICE simulate the generation of 20 families. Let 1,2, or 3 be a boy and 4,5, and 6 be a girl.
Answer the following questions:
1. How many families did you generate altogether?____________
2. How many families had zero girls?_______
3. How many families had exactly 1 girl?_____
4. How many families had exactly 2 girls?_____
5. How many families had exactly 3 girls?____
6. Do you see any patterns?_______
7. What number occurred the most often?____
8. What is the average number of girls in a family?_______
Notes to me:
Make charts:
1. Make a spread sheet of samples derived from coin tosses, random number tables, and tosses of die. Subject name on the side, type of sample across the top. (This can be done in EXCEL-- with a split
screen and a graph so the graph will change as data is added in-- or else it can be done on the board where students make dots along the distribution values.)
2. Look at the shape of the data after each set of data has been input. Does it gradually get a more normal shape?
Choices: Make separate plots for coins, random number tables, and die-- all should have similar shape--and put all on one plot to see how the shape normalizes.
3. What is the overall mean?
4. Find the mean for each subject's six samples. Then find the mean of those means. How does it compare to the overall mean? Look at the histogram of that data. How does it compare to the overall
histogram? Look at the spread.
CONCEPT: THE MEAN OF THE MEANS SHOULD BE CLOSE TO THE OVERALL (POPULATION) MEAN.
5. Find the mean for each of the six samples. How do the different methods of sampling compare to each other?
What is the mean of the six samples?
How does it compare to the overall mean?
How does the graph compare to the graph of the overall data and the data of the means of the subjects?
Make a chart of individual families.
How many times were there zero girls?
How many times was/were there 1 girl? 2 girls? 3 girls? 4 girls?
CONCEPT: ZERO GIRLS IN A FAMILY WAS THE MODE, AND EACH INCREASE OF A GIRL WAS HALF AGAIN AS MANY FAMILIES AS THE PREVIOUS NUMBER.
What does the distribution of boys look like? What is the mean, median, and mode?
Compare that to the distribution of girls. Does this method produce more boys? | {"url":"http://www.dartmouth.edu/~chance/course/Workshops/Presentations.html","timestamp":"2014-04-16T16:57:51Z","content_type":null,"content_length":"16028","record_id":"<urn:uuid:f55acd3e-b485-476e-9367-e14370d9eec6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How many zeros does this function have?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f87546fe4b0505bf086e267","timestamp":"2014-04-19T07:12:40Z","content_type":null,"content_length":"40344","record_id":"<urn:uuid:332baed7-d7a7-4833-965c-9daf675a2bbe>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Circles Resources
Consider the points (4,-7) and (-6, 13). (a) Find the midpoint. Show work. (b) If the point you found in (a) is the center of a circle, and the other two points are points on the circle, find...
How do you find the x intercepts and y intercepts of a circle with a given circle besides graphing?
The circle is not in the formula already. The equation is x^2+y^2+4x-6y+4=0 Also I am not sure whether I should complete the square or not.
Optimization Problems
A piece of wire 200 cm is cut into two pieces: one a square and another a circle. Where should the cut be made if the sum of the two areas (square and circle) is to be a minimum.
Optimization Problems
A piece of plexiglass is in the shape of a semicircle with radius 2 m. Determine the dimensions of the rectangle with the greatest area that can be cut from the piece of plexiglass.
How to you put x^2+y^2=x+2 into the standard form of a circle without graphing?
I know the standard form of a circle is (x-h)^2+(y-k)^2=r^2 but I don't see how to put x^2+y^2=x+2 into that form.
if the radius is 18 inches then what is the diameter
i need help with calculating circles and spheres
finding the center of a circle given the equation
what is the center of the circle with the equation: x squared plus y squared minus 6x plus 2y minus 6 equals 0
Cirlce P is tangent to each side of ABCD. AB= 20, BC=11, and DC= 14. Let AQ= x and find AD
helppppppppppppppppp :(
AB and AC are tangents to circle O, and OC= 5X. find OC.
AB= 14+4X AC= 19-6x Making clear OC is just the radius its not the whole diameter. Help me please! I don't know what to do after I make equal the two tangents (I know they are congruent)...
I need help on finding the Circumference and Area of Circles.
I need help on this, my math teacher is not a very good teacher when it comes to teaching. Please explain this to me. This > π < is the pi symbol. Thanks! Solve using: ...
A real life application problem involving the conic : circle ?
Include an explanation (answer)
A problem (real life application) and explanation (answer)
Using all four (4) conic sections: Cirlcle, Ellipse, Hyperbola, and Parabola. | {"url":"http://www.wyzant.com/resources/circles","timestamp":"2014-04-18T01:37:53Z","content_type":null,"content_length":"52734","record_id":"<urn:uuid:68b859d2-8a8c-499e-9cbb-02df429451b4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
%0 Journal Article %D 2007 %T A Globally Convergent Filter Method for MPECs %A Sven Leyffer %A T. S. Munson %X
We propose a new method for mathematical programs with complementarity constraints that is globally convergent to B-stationary points. The method solves a linear program with complementarity
constraints to obtain an estimate of the active set. It then fixes the activities and solves an equality-constrained quadratic program to obtain fast convergence. The method uses a filter to promote
global convergence. We establish convergence to B-stationary points.
%8 09/2007 %G eng %1 http://www.mcs.anl.gov/papers/P1457.pdf | {"url":"http://www.anl.gov/publications/export/tagged/4957","timestamp":"2014-04-18T05:33:48Z","content_type":null,"content_length":"1311","record_id":"<urn:uuid:0d2c2269-faac-4e69-88fb-2791c8c88b73>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
JSM 2013 Keynote Speakers
1 May 2013 381 views No Comment
The keynote addresses are special. Each speaker was chosen specifically for his or her vast knowledge of statistics and dedicated work in the field. Here, we introduce these speakers.
ASA President’s Invited Address
Nate Silver
August 5, 4:00 p.m.
Nate Silver is founder of the award-winning political website FiveThirtyEight.com, where he publishes a running forecast of current elections and hot-button issues. Called a “number-crunching
prodigy” by New York Magazine, he first gained national attention during the 2008 presidential election when he correctly predicted the results of the primaries and the presidential winner in 49
states. Silver’s prediction of the 2012 presidential election in all 50 states, silencing the traditional political pundits, has made him the public face of statistical analysis, data-driven
journalism, and political forecasting.
Silver has appeared on national television programs ranging from MSNBC’s “Morning Joe” to Comedy Central’s “The Daily Show.” His New York Times bestseller, The Signal and the Noise: Why Most
Predictions Fail—But Some Don’t, takes the reader on a tour of predictive statistical modeling and analysis across a host of fields, making it essential reading for anyone interested in the power of
data-driven forecasting.
Before he came to politics, Silver established his credentials as an analyst of baseball statistics. He developed the widely acclaimed PECOTA system, which predicts player performance, career
development, and seasonal winners and losers. He is author of a series of books on baseball statistics, including Mind Game, Baseball Between the Numbers, and It Ain’t Over ’til It’s Over. He has
written for ESPN.com, Sports Illustrated, and The New York Times.
Silver has been honored with a series of accolades, from being named one of the World’s 100 Most Influential People by TIME in 2009 to one of Rolling Stone’s 100 Agents of Change. And
FiveThirtyEight.com won “best political coverage” in the 2008 Weblog Awards.
IMS Presidential Address
Hans Rudolf Kuensch
“Ars conjectandi: 300 Years Later”
August 5, 8:00 p.m.
Hans Kuensch was born in Zurich, Switzerland, and earned both his undergraduate degree and his PhD from ETH Zurich in 1975 and 1980, respectively. He was a researcher and postdoc in Japan from
1976–1977 and 1982–1983. In 1983, he took a position as professor in the department of mathematics at ETH Zurich that he still holds today. His work covers areas from probability theory and
theoretical statistics to applications in environmental models.
ASA Deming Lecture
Vijay Nair
August 6, 4:00 p.m.
Vijay Nair is the D.A. Darling Professor of Statistics and Professor of Industrial and Operations Engineering at the University of Michigan. He is a Fellow of the American Statistical Association,
American Association for the Advancement of Science, and Institute of Mathematics Statistics, as well as an elected member of the International Statistical Institute. His scientific interests are
broad and include methodology, theory, and applications. He has worked in engineering statistics, reliability and degradation modeling, network tomography, design and analysis of experiments
(including applications in behavioral intervention research), and quality improvement.
Wald Lectures I and II
Piet Groeneboom
Wald Lecture I: “Nonparametric Estimation Under Shape Constraints”
August 6, 4:00 p.m.
Wald Lecture II
August 7, 8:30 a.m.
Piet Groeneboom has been professor of statistics at Delft University since 1988, having previously been professor of statistics at the University of Amsterdam. He earned his PhD in mathematics in
1979 under the direction of J. Oosterhoff. He has been visiting professor at the University of Washington, Stanford University, and Université Paris VI and has done research in the areas of large
deviations, stochastic geometry, particle systems, inverse statistical problems, and statistical inference under order restrictions.
Groeneboom has been on the editorial board of the Annals of Statistics (three times) and is a fellow of the Institute of Mathematical Statistics and elected member of the International Statistical
Institute. He also received the Rollo Davidson Prize and is finishing a book to be published by Cambridge University Press on the topic of his Wald lectures.
ASA Presidential Address
Marie Davidian
“The International Year of Statistics: A Celebration and a Call to Action”
August 6, 8:00 p.m.
ASA President Marie Davidian is William Neal Reynolds Professor of Statistics at North Carolina State University and adjunct professor of biostatistics and bioinformatics at Duke University. She
earned her doctoral degree in statistics from The University of North Carolina at Chapel Hill.
Davidian is an ASA Fellow and former chair of the Committee on Nominations, Samuel S. Wilks Memorial Medal Committee, and Biometrics Section. She is a past president of the Eastern North American
Region of the International Biometric Society and current executive editor of the journal Biometrics. She is recipient of the George W. Snedecor and Florence Nightingale David awards, presented by
the Committee of Presidents of Statistical Societies.
Since 2004, Davidian has co-directed the joint NC State-Duke Clinical Research Institute Summer Institute for Training in Biostatistics, which is funded by the National Heart, Lung, and Blood
Institute and seeks to encourage U.S. undergraduates to pursue advanced training in biostatistics and statistics.
Rietz Lecture
Larry Wasserman
“Geometric and Topological Inference”
August 7, 10:30 a.m.
Larry Wasserman is a professor in the department of statistics and machine learning at Carnegie Mellon University. His research interests include nonparametric inference, machine learning,
statistical topology, and astrostatistics. He writes the wildly popular blog Normal Deviate. In his spare time, he enjoys mountain climbing, parachuting, and big game hunting.
Public Lecture to Commemorate the 300th Anniversary of Ars Conjectandi
David John Spiegelhalter
“From Gambling to Global Catastrophe: Metaphors and Images for Communicating Numerical Risks”
Wednesday, August 7, 2:00 p.m.
COPSS Fisher Lecture
Peter J. Bickel
“From Fisher to Big Data: Continuities and Discontinuities”
August 7, 4:00 p.m.
Peter Bickel earned his bachelor’s and master’s degrees in mathematics and his Phd in statistics at the University of California at Berkeley under the supervision of Erich Lehmann. He retired from
the State Department in 2006, but continues an active research program in network theory and bioinformatics.
Bickel has made wide-ranging contributions to statistical science. His research in the early period was mostly theoretical, including nonparametrics, sequential analysis, classical asymptotic theory,
robust statistics, higher-order asymptotics, and nonparametric function estimation. His applied work includes an often-cited 1975 Science paper, in which he, Eugene A. Hammel, and J.W. O’Connell gave
an explanation of an apparent gender bias in graduate admissions at UC Berkeley by relating it to Simpson’s paradox.
Bickel has received the Institute of Mathematical Statistics’ Wald and Rietz lectureships, the COPSS Presidents’ Award, and a MacArthur Fellowship. He was elected to the National Academy of Sciences,
the American Academy of Arts and Sciences, and the Royal Netherlands Academy of Arts and Sciences and earned an honorary doctoral degree from the Hebrew University of Jerusalem. Bickel contributed to
the profession and society more broadly as president of the Institute of Mathematical Statistics and Bernoulli Society and by serving on committees of the National Academies and other organizations.
IMS Medallion Lecture I
Gady Kozma
“Linearly Reinforced Random Walk”
August 4, 4:00 p.m.
IMS Medallion Lecture II
Jeremy Quastel
“The Kardar-Parisi-Zhang Equation and Universality Class”
August 5, 8:30 a.m.
A specialist in probability theory, stochastic processes, and partial differential equations, Jeremy Quastel has been at the University of Toronto since 1998. A native of Canada, he studied at McGill
University, then the Courant Institute at New York University, where he completed his PhD in 1990 under the direction of S.R.S. Varadhan. Quastel was a postdoctoral fellow at the Mathematical
Sciences Research Institute in Berkeley, then a faculty member at the University of California at Davis until he returned to Canada in 1998.
Quastel’s research is on the large-scale behavior of interacting particle systems and stochastic partial differential equations. He was a Sloan Fellow from 1996–1998 and an invited speaker at the
International Congress of Mathematicians in Hyderabad 2010. He gave the Current Developments in Mathematics 2011 and St. Flour 2012 lectures and was a plenary speaker at the International Congress of
Mathematical Physics in Aalborg 2012.
IMS Medallion Lecture III
Martin Wainwright
“Statistics Meets Computation: Efficiency Trade-Offs in High Dimensions”
August 5, 2:00 p.m.
Martin Wainwright joined the faculty at the University of California at Berkeley in 2004, and is currently a professor with a joint appointment between the department of statistics and department of
electrical engineering and computer sciences. He earned his bachelor’s degree in mathematics from the University of Waterloo, Canada, and his PhD degree in electrical engineering and computer science
from the Massachusetts Institute of Technology, for which he was awarded the George M. Sprowls Prize in 2002. He is an associate editor for the Annals of Statistics, Journal of Machine Learning
Research, and Information and Inference.
Wainwright is interested in large-scale statistical models and their applications to communication and coding, machine learning, and statistical signal and image processing. He received an NSF-CAREER
Award in 2006, an Alfred P. Sloan Foundation Research Fellowship in 2005, an Okawa Research Grant in Information and Telecommunications in 2005, IEEE Best Paper awards from the Signal Processing
Society in 2008 and Communications Society in 2010, the Joint Paper Award from IEEE Information Theory and Communication Societies in 2012, and several outstanding conference paper awards.
IMS Medallion Lecture IV
Lutz Duembgen
“Multiscale Methods and Shape Constraints”
August 6, 8:30 a.m.
Lutz Duembgen studied mathematics and biology/chemistry at the University of Heidelberg, where he joined Statlab and finished his PhD thesis about nonparametric change-point estimation in 1990 under
D.W. Müller.
From 1990–1992, Duembgen spent two years at the University of California at Berkeley as a research fellow of the Miller Institute for Basic Research in Science. He then returned to Heidelberg and,
after a short stay at the University of Bielefeld in 1993, started working on his habilitation thesis, which he finished in 1996. In 1997, he accepted a position as professor of stochastics at the
(Medical) University at Lübeck and joined the Mathematical Institute. In 2002, Duembgen started working at the University of Bern, Switzerland, as a professor of statistics within the Institute of
Mathematical Statistics and Actuarial Science and department of mathematics and statistics and continues there today.
IMS Medallion Lecture V
Peter Guttorp
“Pointing in New Directions”
August 6, 10:30 a.m.
Peter Guttorp is professor of statistics, guest professor at the Norwegian Computing Center, project leader for the Nordic Network on Statistical Approaches to Regional Climate Models for Adaptation,
co-director of the Research Network on Statistical Methods for Atmospheric and Ocean Sciences, adjunct professor of statistics at Simon Fraser University, and member of the interdisciplinary
faculties in quantitative ecology and resource management and Urban Design and Planning.
He earned a degree in journalism from the Stockholm School of Journalism in 1969; a BS in mathematics, mathematical statistics, and musicology from Lund University, Sweden, in 1974; a PhD in
statistics from the University of California at Berkeley in 1980; and a TechD hc from Lund University in 2009. He joined the University of Washington faculty in September 1980.
Guttorp’s research interests include uses of stochastic models in scientific applications in hydrology, atmospheric science, geophysics, environmental science, and hematology. He is co-editor of
Environmetrics. He is also former president of the International Environmetrics Society, a Fellow of the American Statistical Association, and an elected member of the International Statistical
Institute. From 2004–2005, he was the Environmental Research Professor of the Swedish Institute of Graduate Engineers.
IMS Medallion Lecture VI
Judea Pearl
“The Mathematics of Causal Inference”
August 6, 2:00 p.m.
Judea Pearl, professor of computer science at the University of California at Los Angeles, is known for his contributions to artificial intelligence and his theories for inference under uncertainty,
most notably the Bayesian network approach, which has influenced diverse fields such as statistics, philosophy, health, economics, social sciences, and cognitive sciences. A member of the National
Academy of Engineering and a founding Fellow of the American Association for Artificial Intelligence, Pearl has won numerous awards, including the prestigious Turing Award “for fundamental
contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.” Using some of the proceeds from the award, Pearl established the Causality in
Statistics Education Award, aimed at encouraging the teaching of basic causal inference in introductory statistics courses.
IMS Medallion Lecture VII
Ya’acov Ritov
“A Priori Analysis of Complex Models”
August 8, 8:30 a.m.
Ya’acov Ritov is a professor in the department of statistics of the Hebrew University of Jerusalem. He earned a BSc and MSc in electrical engineering from the Technion, Israel Institute of
Technology, and a PhD in statistics from the Hebrew University of Jerusalem. After a short post-doctorate period at the University of California at Berkeley, he was appointed in Jerusalem in 1984.
Leave your response!
Add your comment below, or trackback from your own site. You can also subscribe to these comments via RSS.
Be nice. Keep it clean. Stay on topic. No spam.
You can use these tags:
<a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <strike> <strong>
This is a Gravatar-enabled weblog. To get your own globally-recognized-avatar, please register at Gravatar. | {"url":"http://magazine.amstat.org/blog/2013/05/01/keynote-speakers/","timestamp":"2014-04-19T19:34:27Z","content_type":null,"content_length":"61375","record_id":"<urn:uuid:609df3a3-cac5-49bd-bf1d-3e90ce65b40d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thus far, scientific evidence is pointing to an actual Infinity existing
reply to post by dominicus
Yes I know what
Set theory
and still say that what you stated made no sense in the words that it was expressed.
I disagree that there are such thing...
There are relative and different size Infinities
Even as abstract concepts there is no logic in those definitions, infinite is a statement that qualifies something as not bounded by limits, it can not be relative nor have different sizes as it has
by definition no size. An
Infinite set
is by its narrower limitation of characteristics a distinct concept than the broader concept of infinite (not the same thing), even so the size of infinite is not changed.
The space from 1 to forever counted numbers, is different that between 0 and 1. It's been proven long time ago. If you got beef with it, go disprove Cantor or set theory.
This statement is valid, I have no beef with it, infinites are distinct in relation to the elements they may contain, but not in relation to their size (unless you limit it to a subsection of the
set, that was the point I was making in my first reply).
The microwave afterglo tests tell us enough and give us various glimpses into whether there is an Infinity, in that a certain range and set of the after glow, can be deduced to be background
constant that does not expand, or have a directional expansion which has already provide evedince of multi-verses. Go read this stuff up!!!
I do no understand the deep physics of the test but my affirmation remains valid. A finite and localized observer will never be able to validate the existence of infinite, he can theorize and
speculate but will be unable to ascertain the physical existence of infinite. I'm not defending that there is an impossibility of a physical infinite to exist (one concept where it may be prevalent
is on scale, size), just that we will never be able to prove one does. | {"url":"http://www.abovetopsecret.com/forum/thread914788/pg1","timestamp":"2014-04-19T22:25:37Z","content_type":null,"content_length":"72746","record_id":"<urn:uuid:81eff837-1e34-4653-87a4-a95e8ad773e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine the slope of the line shown whose equation is y=3/2x+1 - WyzAnt Answers
Determine the slope of the line shown whose equation is y=3/2x+1
Tutors, please
to answer this question.
Hello, the equation y = (3/2)x +1 is a line written in slope intercept form (y =mx + b). Here the slope (m) is equal to 3/2, and the intercept (b) is equal to 1.
hope that helps.
If the equation is in Slope-Intercept form, y=mx + b, then the slope = m and the y-intercept (the point where the line crosses the Y axis) is b (or the ordered pair (0,b) )
Your equation is already in slope-intercept form, so the slope is ... left for the student | {"url":"http://www.wyzant.com/resources/answers/1303/determine_the_slope_of_the_line_shown_whose_equation_is_y_3_2x_1","timestamp":"2014-04-16T09:02:06Z","content_type":null,"content_length":"45010","record_id":"<urn:uuid:c8ed90af-7336-4fda-bfe3-2f36d7fb0cca>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplify the following:
Re: Simplify the following:
27^n+2 - 6*3^3n+3/3^n9^n+2
That is it.
Re: Simplify the following:
Okay, so far I have this,where is the mistake?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
That's perfect no mistake!
Re: Simplify the following:
How about doing some factoring there?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
It says we should simplify it. But I don't know if it could be factorized. I tried doing it but the six has given me a tough time, it cannot be reduced to have 3 in order to have the same base as the
Re: Simplify the following:
How about simplifying 27^{3n+3} to start?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
= 3^3(n+2) = 3^(3n+6). You changed the exponent, please look at the original one above.
Last edited by EbenezerSon (2013-07-24 05:19:10)
Re: Simplify the following:
I was thinking of
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
At the back of the book the answer given was 21.
Re: Simplify the following:
That is incorrect. If post #102 is correct the answer I am getting is 7. Please look closely at post #102 and make sure I have the right problem as you see it in your text book.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
Yes, they are the ones in the book, the book could be wrong so please let proceed.
I have instances, I had my calculations correct while it had it wrong.
I will post a question I know I am correct while it has it wrong.
Last edited by EbenezerSon (2013-07-24 05:59:38)
Re: Simplify the following:
Answer is 7.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
But I don't seem to understand those methods
Re: Simplify the following:
They are based on the laws of exponents. As far as I can see that is a tedious problem. There maybe something simpler but I can not see it.
What step is a problem?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
This problem is from indices. So I had thought all the bases would be equal so I can take them off and simplify the exponent.
So I multiplied the six and the three which is eighteen and cannot be reduced to three, so that all the bases would be equall(to be three).
Re: Simplify the following:
You mean multiple 6 * 3^(3n+3) ?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
Yes to be > 18^(3n+3). But I see the eighteen cannot further be reduced to three, in order to have the same base with the others.
Re: Simplify the following:
That is incorrect. You can not say
6 * 3^(3n+3) = 18^(3n+3).
You can always test an idea by substituting some numbers for the variable. Try n = 1 and use a calculator.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
Because so far all the problems I solved have the same bases, which is easy for me to take them off and simplify the exponent.
So I thought I could apply that on this problem.
Re: Simplify the following:
That is why I turned them all into the same bases, that way you can cancel and multiply when needed.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
Then I need to learn the laws of exponents, or do indices also teaches that? If not then please could you assist me learn it?
Thanks for your assistance, God bless!
Re: Simplify the following:
Yes, we can over the laws of exponents. Try here first.
Please look at these pages, they will help a lot.
http://www.mathsisfun.com/algebra/varia … tiply.html
Do not worry if you can not absorb it all. It will come in time. Ask questions about anything you do not understand.
I am going to take a little break to do some chores be back later. Please look over those pages in the meantime.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
A^n = A*A*A*.....A*
I think it should be impossible in that regard, because it has raised to the n. Meaning n is dividing the A, like n/A.
Re: Simplify the following:
A^n means A * A * A ... n times.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Simplify the following:
I have learnt that 5^0 = 1. Can you explain to me why it is equal to one? | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=279211","timestamp":"2014-04-16T19:00:18Z","content_type":null,"content_length":"39134","record_id":"<urn:uuid:353a6e26-2a78-4576-a5fe-5b8ec2c344a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
controversy about a result in the wolrd3-03 model
Hi everybody
I have a controversy with someone about the world-3-03 as published in the Vensim documentation. When applying the extreme condition test, at a certain time, all people from cohort 0 to 14 years old
die in a semester (the time step), the maturation goes negative, which means that people from cohort 15 to 44 are getting younger!.
Joined two published models in Vensim, with both results in two tables in the population view.
One with the original formulation and the other with the changed formulation.
To force mortality to 2, in a time step one uses the reality checks tool.
If one forces the mortality to 1, by setting the proportion of population 0 to 14 dying in the time step to 0.5, the number of people maturing goes to 0, which is still impossible, about half of the
population from the beginning of the period maturing.
If one changes the formulation
( ( Population 0 To 14 ) )
* ( 1
- mortality 0 to 14 )
/ 15
( ( Population 0 To 14 ) )
* ( 1
- (mortality 0 to 14 * time step))
/ 15
The result resists the test.
The second formulation seems correct too because:
The population that matures in the time step is equal to the population at the beginning of the time step less the people that died during the time step divided by the number of time steps included
in the duration of the cohort.
People maturing during the time step = (population - (population * mortality * time step)) / (15/time step) = maturation 14 to 15 * time step
Maturation 14 to 15 = People maturing during the time step / time step =
(population - (population * mortality * time step)) / 15 = population * (1 – (mortality * time step)) / 15.
The formula is too dimensionaly consistent.
People maturing during the time step (person)= (population (person) - (population * mortality * time step) (person)) / (15 (year) / time step (year)) = maturation 14 to 15 * time step
To make the formula
( ( Population 0 To 14 ) )
* ( 1
- mortality 0 to 14 )
/ 15
consistent one is obliged to affect a 1/year dimension to the 1 value and a no dimension to the 15.
For what reasons? 1 is in fact dimensionless and 15 is the duration of the cohort in years.
Both models can be read with either Vensim or the Vensim player.
Who is right?
Best regards.
Jean-Jacques Laublé
(610.37 KiB) Downloaded 130 times
(609.82 KiB) Downloaded 130 times
Re: controversy about a result in the wolrd3-03 model
Hi JJ,
In noticed this quite recently as well. The formulation in the world 3 model is odd, and I have not dug into its genesis. It is quite possible that the mortality and cohort promotion logic was
conceptualized as year on year difference equations and never adjusted after moving to a smaller solution interval.
In any case your refinement is the correct one and, assuming a continuous time conceptualization, simply leaving the mortality term out of the equation all together also makes sense. The original
formulation does pass Vensim's unit checking - though for all the wrong reasons. The expression (1-mortality) resolves to units of 1/year which when multiplied by a population gives person/year then,
since the cohort sizes are simply numbers that person/year sticks in the final computation instead of person/year/year which is what it actually is. Another example showing that embedding constant in
equations is not a good idea.
Re: controversy about a result in the wolrd3-03 model
Hi Bob
Thank you for your answer.
About the unit checking, when one checks the units using the strictest unit checking feature Vensim detects 9 errors. 6 errors are due to values automatically assigned a no dimension when in fact
they have a defined dimension. The three other errors are the three maturation formulation. All errors are susceptible to be corrected with assigning a name and dimension to the constants. Doing this
does not change the results for 6 unit errors that are in fact no real formulation errors, but it corrects the flaw in the maturation formulation.
I discovered lately the strictest unit testing lately and use it now systematically.
The only values that I leave in equations are constant unchangeable and that have no dimensions, Vensim assigning automatically a no dimension to values in an equation when using the strictest unit
The world 3 model having matured for more than thirty years should not have that sort of bugs in it.
And it is particularly embarrassing that it is the first equation in the first view.
Somebody reviewing the model and finding such an error will automatically think that it is not the only one.
I just read David Lane’s address to the late SD conference with the different possible objectives for the SDS. He noticed that mathematic works are often blamed.
I think that before wanting to save the world, SD people should learn to produce high quality products that take advantage of its mathematical formalization. It is not the mathematics that are to be
blamed but its incorrect utilization. Without a high quality standard, there is no hope.
Best regards.
Re: controversy about a result in the wolrd3-03 model
Hi Bob
I did not quite understand what you mean by taking the mortality out of the equation ?
Does the equation ( Population 0 To 14 ) )
* ( 1- mortality 0 to 14 )
/ 15
becomes Population / 15 ?
If it is the case the maturation will no more become negative, but the equation is still wrong, because a proportion of the people dead in the period will mature to the second period.
With a mortality of 2 per year or 1 per semester, the population will become negative the maturation being positive when it should be equal to zero.
One must take away from the population susceptible to mature, the people that are dead, which necessitates the use of the mortality.
Best regards.
Re: controversy about a result in the wolrd3-03 model
Hi JJ,
When I say conceptualize in continuous time that means view the equations as a set of differential (or integral) equations, and not as a set of difference equations. In this frame the solution
interval (dt or time step) is 0, therefore any equation having dt*something can be simplified. Conversely, any equation using something/dt will simply not work.
To my mind the conceptual clarity arrived at by removing integration correction terms generally outweigh the numerical oddities that they are meant to prevent. Certainly is mortality is 2/year then
the appropriate solution interval is at most 1/8 and probably better 1/16 or 1/32.
The world 3 model is 30 years old (actually close to 40) - but it has not been under development for 30 years. Rather the original model is essentially unchanged (only a few minor adjustments were
made with each book). There are good points and bad points in that, but I can understand the motivation for doing it this way.
Re: controversy about a result in the wolrd3-03 model
Hi Bob.
Your reference to a differential approach, at least, brings light to the origin of the formulation as it politically justifies the approximation.
But I do not agree totally with your approach.
It is true, provided that the mortality is the same that the differential equation will be
(population / 15) * dt as the limit of the difference equation if the time step is decreasing to 0.
But the equation (population * (1 – (mortality * time step)) / 15) is right too even in the differential equation and has the advantage to be right too in the difference equation where the time step
is larger.
But if you stick to the extreme hypothesis that all people die in a time step, you cannot erase the mortality * dt because the mortality is equal to 1/dt and the term with mortality is no more
marginal compared with the other terms. In this case the error is increasing as the time step is decreasing.
I prefer a formulation that is always right, even if slightly more complex, whatever the time step and the formulation (difference or differential).
I do not totally agree when you write.
<Certainly if mortality is 2/year then the appropriate solution interval is at most 1/8 and probably better 1/16 or <1/32.
I think that if mortality is 2/year during a semester then the appropriate solution interval is at most the smaller possible taken into account the software used, with the original formulation,
because it is difficult to evaluate the effect of the integration error on the rest of the model, unless one has done the appropriate sensibility analysis. Why not just use the correct formulation
without having to bother about eventual marginal effects generated by the approximations?
I am not ready to sacrifice exactitude for a vague notion of conceptual clarity that is only understandable for people with notions of calculus. I do not think either that the formulation is odd
unless one is sure that what you name numerical oddities do not affect the objectives of the model.
The proposed formulation is approximately right with a very small time step, conceptually right with a differential equation, but neither approximately nor conceptually with bigger time steps as
those used in SD.
I respect your approach, but I am more interested with very pragmatic approaches, even if they look less conceptually interesting. Finally what counts is the usefulness of the model. Is the world3
model effectively used by people and implemented?
Best regards. | {"url":"http://www.systemdynamics.org/forum/viewtopic.php?f=13&t=237&p=1198","timestamp":"2014-04-16T19:03:58Z","content_type":null,"content_length":"28921","record_id":"<urn:uuid:59d21825-6556-4d81-99e7-74a046772bff>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hidden coinduction: behavioral correctness proofs for objects
, 1993
"... This is an introduction to the philosophy and use of OBJ, emphasizing its operational semantics, with aspects of its history and its logical semantics. Release 2 of OBJ3 is described in detail,
with many examples. OBJ is a wide spectrum first-order functional language that is rigorously based on ..."
Cited by 120 (29 self)
Add to MetaCart
This is an introduction to the philosophy and use of OBJ, emphasizing its operational semantics, with aspects of its history and its logical semantics. Release 2 of OBJ3 is described in detail, with
many examples. OBJ is a wide spectrum first-order functional language that is rigorously based on (order sorted) equational logic and parameterized programming, supporting a declarative style that
facilitates verification and allows OBJ to be used as a theorem prover.
- In Proceedings of Automated Software Engineering 2000 , 2000
"... Circular coinductive rewriting is a new method for proving behavioral properties, that combines behavioral rewriting with circular coinduction. This method is implemented in our new BOBJ
behavioral specification and computation system, which is used in examples throughout this paper. These examples ..."
Cited by 46 (11 self)
Add to MetaCart
Circular coinductive rewriting is a new method for proving behavioral properties, that combines behavioral rewriting with circular coinduction. This method is implemented in our new BOBJ behavioral
specification and computation system, which is used in examples throughout this paper. These examples demonstrate the surprising power of circular coinductive rewriting. The paper also sketches the
underlying hidden algebraic theory and briefly describes BOBJ and some of its algorithms.
- In Proceedings of DEXA’99 , 1999
"... . Ontologies allow the abstract conceptualisation of domains, but a given domain can be conceptualised through many different ontologies, which can be problematic when ontologies are used to
support knowledge sharing. We present a formal account of ontologies that is intended to support knowledg ..."
Cited by 28 (1 self)
Add to MetaCart
. Ontologies allow the abstract conceptualisation of domains, but a given domain can be conceptualised through many different ontologies, which can be problematic when ontologies are used to support
knowledge sharing. We present a formal account of ontologies that is intended to support knowledge sharing through precise characterisations of relationships such as compatibility and refinement. We
take an algebraic approach, in which ontologies are presented as logical theories. This allows us to characterise relations between ontologies as relations between their classes of models. A major
result is cocompleteness of specifications, which supports merging of ontologies across shared sub-ontologies. 1 Introduction Over the last decade ontologies --- best characterised as explicit
specifications of a conceptualisation of a domain [17] --- have become increasingly important in the design and development of knowledge based systems, and for knowledge representations generally.
, 2000
"... This paper describes the Tatami project at UCSD, which is developing a system to support distributed cooperative software development over the web, and in particular, the validation of
concurrent distributed software. The main components of our current prototype are a proof assistant, a generator fo ..."
Cited by 13 (8 self)
Add to MetaCart
This paper describes the Tatami project at UCSD, which is developing a system to support distributed cooperative software development over the web, and in particular, the validation of concurrent
distributed software. The main components of our current prototype are a proof assistant, a generator for documentation websites, a database, an equational proof engine, and a communication protocol
to support distributed cooperative work. We believe behavioral specification and verification are important for software development, and for this purpose we use first order hidden logic with
equational atoms. The paper also briefly describes some novel user interface design methods that have been developed and applied in the project
- Principles of Declarative Programming , 1998
"... : The benefits of the object, logic (or relational), functional, and constraint paradigms ..."
- Annals of Software Engineering , 2001
"... recent advances in web technology, interface design, and specification. Our effort to improve the usability of such systems has led us into algebraic semiotics, while our effort to develop
better formal methods for distributed concurrent systems has led us into hidden algebra and fuzzy logic. This p ..."
Cited by 7 (2 self)
Add to MetaCart
recent advances in web technology, interface design, and specification. Our effort to improve the usability of such systems has led us into algebraic semiotics, while our effort to develop better
formal methods for distributed concurrent systems has led us into hidden algebra and fuzzy logic. This paper discusses the Tatami system design, especially its software architecture, and its user
interface principles. New work in the latter area includes an extension of algebraic semiotics to dynamic multimedia interfaces, and integrating Gibsonian affordances with algebraic semiotics. 1
- OBJ/CAFEOBJ/MAUDE AT FORMAL METHODS '99 , 1999
"... ..."
- and Expert Systems Applications, 14th International Conference, DEXA 2003 , 2002
"... Any builder of an information system, whether a database or a knowledge based system, will start from some conceptualisation of the domain, which will embody a number of fundamental assumptions
about the domain. Often these underlying assumptions... ..."
Cited by 5 (0 self)
Add to MetaCart
Any builder of an information system, whether a database or a knowledge based system, will start from some conceptualisation of the domain, which will embody a number of fundamental assumptions about
the domain. Often these underlying assumptions...
"... We show that for any behavioral Sigma-specification B there is an ordinary algebraic specification ~ B over a larger signature, such that a model behaviorally satisfies B if and only if it
satisfies ~ B, where is the information hiding operator exporting only the Sigma-theorems of ~ B. The idea is t ..."
Add to MetaCart
We show that for any behavioral Sigma-specification B there is an ordinary algebraic specification ~ B over a larger signature, such that a model behaviorally satisfies B if and only if it satisfies
~ B, where is the information hiding operator exporting only the Sigma-theorems of ~ B. The idea is to add machinery for contexts and experiments (sorts, operations and equations), use it, and then
hide it. We develop a procedure, called unhiding, that takes a finite B and produces a finite ~ B. The practical aspect of this procedure is that one can use any standard equational or inductive
theorem prover to derive behavioral theorems, even if neither equational reasoning nor induction is sound for behavioral satisfaction. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1417356","timestamp":"2014-04-18T13:39:23Z","content_type":null,"content_length":"32007","record_id":"<urn:uuid:803c4c9c-7f50-4434-844b-8cc1ab201c7b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
I could use some help on this Calculus problem
August 30th 2007, 03:13 PM #1
I could use some help on this Calculus problem
The problem reads: A constant force with vector representation F=10i+18j-6k moves an object along a straight line from the point (2,3,0) to the point (4,9,15). Find the work done if the distance
is measured in meters and the magnitude of the force is measured in Newtons. From this I got the force by using sqrt(10^2+18^2+(-6)^2)=2*sqrt(115)*N Additionally I got the distance to be sqrt
((4-2)^2+(9-3)^2+(15-0)^2)=sqrt(265)*m And because J=m*N I got the answer (2*sqrt(115)*N)*(sqrt(265)*m)=10*sqrt(1219) J However, the answer book gives is 38 J so could someone tell me what I'm
doing wrong
Is the force parallel to the movement of the point?
The problem reads: A constant force with vector representation F=10i+18j-6k moves an object along a straight line from the point (2,3,0) to the point (4,9,15). Find the work done if the distance
is measured in meters and the magnitude of the force is measured in Newtons. From this I got the force by using sqrt(10^2+18^2+(-6)^2)=2*sqrt(115)*N Additionally I got the distance to be sqrt
((4-2)^2+(9-3)^2+(15-0)^2)=sqrt(265)*m And because J=m*N I got the answer (2*sqrt(115)*N)*(sqrt(265)*m)=10*sqrt(1219) J However, the answer book gives is 38 J so could someone tell me what I'm
doing wrong
$<br /> \bold{W}=\int \bold{F}.\bold{dx}<br />$
In this case we have constant force and a straight line so this reduces to:
$<br /> \bold{W}=\bold{F}.\bold{d}<br />$
where $\bold{d}=(4,9,15) - (2,3,0)$
August 30th 2007, 07:55 PM #2
MHF Contributor
Aug 2007
August 30th 2007, 11:49 PM #3
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/calculus/18240-i-could-use-some-help-calculus-problem.html","timestamp":"2014-04-18T23:36:12Z","content_type":null,"content_length":"35976","record_id":"<urn:uuid:778f53a1-fedf-4727-91eb-b0c13f416023>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - help! can someone differentiate e^(nx)
Originally posted by futz
The product rule applies fine for a constant term, since a constant is a perfectly good function. It does not apply the way he said though; his exponential relation is wrong.
It's mightily unnecessary as one term will automatically go to zero, and yes I missed his algebraic mistake[g)] | {"url":"http://www.physicsforums.com/showpost.php?p=105733&postcount=8","timestamp":"2014-04-17T21:36:07Z","content_type":null,"content_length":"7284","record_id":"<urn:uuid:3b575274-f3f1-4047-b23c-48fae1151fd0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lafayette, CA
Find a Lafayette, CA Calculus Tutor
...Teaching math and physics is exciting for me because I am passionate about these subjects and enjoy sharing that passion with my students. I find that many students shy away from the core
concepts in math and physics, preferring instead to learn only the specific problems they are assigned. This can result in the student becoming confused when confronted with a new problem.
25 Subjects: including calculus, physics, algebra 1, statistics
...My teaching methodology is to aid my students in developing an intuition that will allow them to easily tackle complex problems, understand underlying principles, and independently overcome
difficulties when faced with challenging material. Feel free to contact me for information about my availa...
15 Subjects: including calculus, Spanish, geometry, ESL/ESOL
...That said, I’ve been through the education system, and have seen its flaws, and places where it could work better. I personally am able to grasp concepts much easier when I know why I am being
taught something, and how it would be useful to me. Having a comfortable atmosphere while helping kids...
6 Subjects: including calculus, physics, algebra 1, algebra 2
...I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every
single one, which will dramatically increase their test scores! I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more.
59 Subjects: including calculus, chemistry, reading, physics
...I enjoy Jeopardy, math and logic puzzles, and reading. I look forward to helping people to learn new subjects and overcoming their fears regarding learning and testing. My background includes a
Bachelor of Science degree in Physics and a Masters in Business Administration, plus various courses in data processing, programming, technical training and electronics.
21 Subjects: including calculus, physics, geometry, algebra 1 | {"url":"http://www.purplemath.com/lafayette_ca_calculus_tutors.php","timestamp":"2014-04-17T13:34:47Z","content_type":null,"content_length":"24294","record_id":"<urn:uuid:89fe3b97-0afb-4f75-9148-096badd89058>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need Help with my ticket Program
05-01-2013, 07:26 PM
Need Help with my ticket Program
Alright so I am still kinda new to java
I am working on a lottery program that will ask the user how many tickets they would like (default being five) and each ticket has a random set of numbers plus a jackpot number:
this is the code have so far...
import javax.swing.JOptionPane;
import java.util.Arrays;
public class LuckyNumbers
public static void main(String[] args)
// number of lucky numbers
// number of elements for the lucky numbers
final int LUCKY_NUMBERS = 5;
int[] luckyNumbers = new int[LUCKY_NUMBERS];
// generate five random numbers
for (int i = 0; i < luckyNumbers.length; i++)
luckyNumbers[i] = (int) (Math.random() * 56) + 1;
Arrays.sort(luckyNumbers); // sort the elements
int goldenNumber = (int) (Math.random() * 46) + 1; // draw the golden number
// print the numbers
JOptionPane.showMessageDialog(null, "Your Lucky Numbers arrrrre....\n*drumroll*\n\n"
+ "Ticket 1:\n" + Arrays.toString(luckyNumbers) + " Golden Number:" + goldenNumber,
"WELCOME TO THE LOTTERY!",
thanks for any help you can give
05-01-2013, 08:16 PM
Re: Need Help with my ticket Program
Shouldn't those five numbers all be different? Note that the random number generator can generate the same numbers in a sequence.
kind regards,
05-01-2013, 08:25 PM
Re: Need Help with my ticket Program
yeah I forgot about that x.x i was having a bit of a problem with that also, due to not knowing how to keep it from generating the same numbers
05-01-2013, 09:25 PM
Re: Need Help with my ticket Program
The collections utility class has a shuffle( ... ) method; if you have a List (<--- a Collection) that contains the numbers 1 ... 56 you can shuffle it and take the first five numbers from it and
the sixth number can be the 'golden number'; if you do it right it'll be just a few lines of code ...
kind regards,
05-02-2013, 05:59 AM
Re: Need Help with my ticket Program | {"url":"http://www.java-forums.org/new-java/72236-need-help-my-ticket-program-print.html","timestamp":"2014-04-17T14:05:36Z","content_type":null,"content_length":"7977","record_id":"<urn:uuid:ef2024f4-27a8-46ac-bbe2-64c5903698da>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
cell seeding calculation - General Lab Techniques
Could you please help me in this calculation
I centifuged my cells and dissolved the pellet in 10 ml media.
In this 10 ml cell suspension I counted the cells and I have 0.5 mil cells/ml
I need 0.625 mil cells/ml in total volume 2,5 ml(in single well of 6 well plate)
Hope anyone here can help me with this calculation
Thank you very much in advance
Edited by harry348, 05 May 2011 - 07:03 AM. | {"url":"http://www.protocol-online.org/forums/topic/20913-cell-seeding-calculation/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024","timestamp":"2014-04-19T22:17:17Z","content_type":null,"content_length":"102400","record_id":"<urn:uuid:f4a8bf21-7a7a-4dbd-bf15-0a2ab6621267>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Set of efficient 3D intersection algorithms
up vote 28 down vote favorite
Anyone knows a source, website where I can get some good implementations of 3D intersection algorithms, like
• intersection of sphere and sphere
• sphere/ellipsoid
• sphere/cuboid
• ellipsoid/ellipsoid
• ellipsoid/cuboid
• cuboid/cuboid
• sphere/ray
• ellipsoid/ray
• cuboid/ray
• triangle/ray
• quad/ray
• triangle/triangle
• quad/quad
1 I bet some of the Quake source code would have something along these lines. – Rafe Kettler Jan 28 '11 at 17:33
Don't have a reference site, but you might want to add GJK to your list. Video describing GJK can be found here – Krypes Jan 28 '11 at 17:55
1 The ONLY and BEST source for such things is the Wild Magic Library by Dave Eberly geometrictools.com – Matthieu N. Jan 28 '11 at 19:41
add comment
6 Answers
active oldest votes
up vote 24 down vote It's a huge matrix of algorithms that calculate intersections between various types of objects. Excellent resource.
1 +1, good link. But notice he also mentions Real Time Collision Detection as a 'definitive source' on the subject. Depends on how much detail you want/need, I guess. –
James Jan 28 '11 at 18:02
Actually, that page links to several other things mentioned here, RTCD, Gems, etc. It's just a large maintained collection of references. – luke Jan 28 '11 at 18:03
add comment
Not really a website, but this book Real-Time Collision Detection is well worth it for what you are looking for.
up vote 6 down
It's a good book. The only problem is that it has so much math in it! – James McNellis Jan 28 '11 at 17:51
@James McNellis: Yes, but also code. :) – James Jan 28 '11 at 17:54
Right. I was going for the tongue-in-cheek "there's so much math in this math book!" type comment. I should probably avoid trying to be funny until after I've had a few cups of
coffee in the morning. – James McNellis Jan 28 '11 at 17:58
He wants to do intersections between ellipsoids. The general solution to that is a 4th order curve in R3 - it requires math :-) – phkahler Jan 28 '11 at 19:15
add comment
Graphics Gems is a good place to look for this type of thing.
up vote 1 down vote
add comment
You might want to put Eberly's Game Engine Design on your bookshelf. It has detailed algorithms and discussion for each of the intersections you've listed.
up vote 1 down vote
add comment
If you're doing raytracing, then asking at ompf.org and looking through the RTNews archives might help. In any case, it depends on what you're going to use these for.
up vote 0 down vote
add comment
The source code for the POVRay ray tracer has some implementations that may be of use.
up vote 0 down vote
add comment
Not the answer you're looking for? Browse other questions tagged c++ math graphics 3d intersection or ask your own question. | {"url":"http://stackoverflow.com/questions/4831216/set-of-efficient-3d-intersection-algorithms","timestamp":"2014-04-21T10:42:14Z","content_type":null,"content_length":"88399","record_id":"<urn:uuid:742668d4-e40e-4955-ad64-442d87f13e47>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proving |a+b| = |a|+|b|
June 5th 2010, 04:38 AM
Proving |a+b| = |a|+|b|
Hi guys, I'm doing a case by case proof of
$|a+b| \leq |a|+|b|$
from Spivak, and he says:
When $a \geq 0$ and $b \leq 0$, we must prove that:
$|a+b| \leq a-b$
I'm a bit stuck on his line of reasoning, could someone explain why we have to prove the above?
June 5th 2010, 05:05 AM
June 5th 2010, 05:28 AM
Typo, sorry. Fixed original post
June 5th 2010, 08:53 AM
If $a \geq 0$ then $|a| = a$
and if $b \leq 0$ then $|b| = -b$
Therefore when $a \geq 0$ and $b \leq 0$, $|a|+|b| = a-b$ | {"url":"http://mathhelpforum.com/algebra/147839-proving-b-b-print.html","timestamp":"2014-04-16T17:18:51Z","content_type":null,"content_length":"8443","record_id":"<urn:uuid:481b5ede-fbc1-4b61-8185-40d7f8294e99>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
You are looking at historical revision 26221 of this page. It may differ significantly from its current revision.
Directed graph based on adjacency intervals.
(require-extension interval-digraph)
The interval-digraph library is an implementation of a directed graph, where the nodes and edges may be stored as integer interval objects from the cis library.
The library defines a digraph "object" -- a procedure that takes a method name as a symbol, and returns the procedure that implements the respective operation.
Directed graph procedures
An empty digraph object can be created by procedure make-digraph
[procedure] make-digraph:: NAME LABEL -> SELECTOR
where NAME is the graph name (string or symbol), LABEL is an optional metadata object of an arbitrary type or #f.
The returned selector procedure can take one of the following arguments:
returns the graph name (string or symbol)
returns the graph metadata (arbitrary type)
returns a procedure with no arguments, which returns a list with the node indices of the graph
returns a procedure with no arguments, which returns a list with the node indexes of the graph, along with optional label
returns the node indices of the graph as a cis interval object
returns a procedure with no arguments, which returns a list with the edges of the graph
returns a procedure with no arguments, which returns a list with the edges of the graph and their labels
returns a procedure with no arguments, which returns the number of nodes in the graph
returns a procedure with no arguments, which returns the number of edges in the graph
returns a procedure LAMBDA N which returns a list with the outgoing edges of node N
returns a procedure LAMBDA N which returns a list with the successor nodes of node N
returns a procedure LAMBDA N which returns a cis interval object with the successor nodes of node N
returns a procedure LAMBDA I J which returns true if edge I -> J exists in the graph and false otherwise
returns a procedure LAMBDA N which returns true if node N exists in the graph and false otherwise
returns a procedure LAMBDA I which returns true if interval I exists in the graph and false otherwise
returns a procedure LAMBDA P I J which returns the property P of edge I -> J, if it exists, #f otherwise
returns a procedure without arguments, which returns a list with all edge property names
returns a procedure LAMBDA P I J which returns the property P of all edges defined on the intervals I, J, if it exists, #f otherwise
returns a procedure LAMBDA P I J which returns the prototype P of all edges defined on the intervals I, J, if it exists, #f otherwise; a prototype is a user-supplied procedure of the form LAMBDA
G I J which returns a property value for the edge I -> J
returns a procedure LAMBDA P N which returns the property P of node N, if it exists, #f otherwise
returns a procedure without arguments, which returns a list with all node property names
returns a procedure LAMBDA P I which returns the property P of node interval I, if it exists, #f otherwise
returns a procedure LAMBDA N which returns the label of node N if it exists, #f otherwise
returns an iterator procedure LAMBDA F which iterates over the nodes in the graph by invoking function F on the node index of each node
returns an iterator procedure LAMBDA F which iterates over the nodes in the graph by invoking function F on the node index and label of each node
returns an iterator procedure LAMBDA F which iterates over the nodes in the graph by invoking function F on the node indices of each edge
returns a procedure LAMBDA N [LABEL] which when given a node index N and optional label, returns a new graph containing the original graph plus the given node
returns a procedure LAMBDA I [LABEL] which when given a cis interval object I and optional label, returns a new graph containing the original graph plus the given node interval
returns a procedure LAMBDA E [LABEL] which when given edge E = (list I J) and optional label, returns a new graph containing the original graph plus the given edge
returns a procedure LAMBDA N LABEL which when given a node index N and label, returns a new graph with the labeled node
returns a procedure LAMBDA P N V which when given property name P, node index N and property value, returns a new graph with the property P set for node N
returns a procedure LAMBDA P I V which when given property name P, cis interval object I and property value, returns a new graph with the property P set for all nodes in the interval I
returns a procedure LAMBDA P I J V which when given property name P, node indices I,J and property value, returns a new graph with the property P set for edge I -> J
returns a procedure LAMBDA P I J V which when given property name P, cis interval objects I,J and property value, returns a new graph with the property P set for all defined edges on the
intervals I, J
returns a procedure LAMBDA P I J V which when given property name P, cis interval objects I,J and prototype procedure, returns a new graph with the prototype P set for all defined edges on the
intervals I, J; a prototype is a user-supplied procedure of the form LAMBDA G I J which returns a property value for the edge I -> J
[procedure] make-random-gnp-digraph :: NAME LABEL N P R S loops -> SELECTOR
Naive implementation of a random uniform graph: given number of nodes N, probability P, random number generator function R, and initial state S, samples node indices from a binomial distribution N,P,
and creates edges determined by the sample values. Argument loops specifies if a node can connect to itself in the graph.
[procedure] digraph-union:: GRAPH * GRAPH * MERGE-LABEL-FN -> SELECTOR
Union of directed graphs: given two digraph objects, returns a new digraph containing the union of nodes and edges from the given digraphs. Argument MERGE-LABEL-FN is a procedure that returns a node
label given a node index and the labels for that node from the two given digraphs.
[procedure] digraph-disjoint-union :: GRAPH * GRAPH -> SELECTOR
Disjoint union of directed graphs: given two digraph objects, returns a new digraph containing the disjoin union of nodes and edges from the given digraphs. The disjoint property is enforced by
reindexing all the nodes in the second digraph to have an index higher than the highest index in the first digraph.
[procedure] digraph-rename :: K * GRAPH -> SELECTOR
Given a digraph and a number K, returns a new digraph that has K added to all node indices and edges.
About this egg
Version history
Bug fix in foreach-edge-with-property
Bug fix to size message handler
Bug fixes and addtions to edge iterator interfaces
Removed methods pred, in-edges, roots
Added node-property-keys and edge-property-keys
Added edge-interval-prototype and edge-interval-prototype-set
Initial release
Copyright 2010-2012 Ivan Raikov and the Okinawa Institute of Science and Technology
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
A full copy of the GPL license can be found at | {"url":"http://wiki.call-cc.org/eggref/4/interval-digraph?action=show&rev=26221","timestamp":"2014-04-20T08:45:37Z","content_type":null,"content_length":"13410","record_id":"<urn:uuid:e04ef0cb-2773-4f5a-be6b-c5d3193df71f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
What defines an algorithm?
"It's like an alarm clock, WOO WOO" -Bubb Rubb
I think when referring to puzzle solving that an alg is a series of moves that accomplish a task that is either too slow or too difficult to do intuitively.
Building a cross intuitively is a piece of cake, memorizing algs would seem to be a waste of time and effort.
That's my 2¢
-mike grimsley
"It's like an alarm clock, WOO WOO" -Bubb Rubb
What about mirrors being different algorithims? I think they are.
Yes, if you add a twist then of course you get a different algorithm.
Btw, to solve the cube using only three algorithms is easy:
1. U
2. x
3. y
David J wrote:
There are only six corner orientations and three edge orientations which need solving.
If you want a minimum set four algs will suffice, though they may need to be done more than once:
Permute corners R U' L' U R' U' L (U)
Orient corners R U R' U R U2 R' (U2)
Orient edges r U r' U2 r U r'
Permute edges F2 U r U2 r' U F2
dblthnk84 wrote:
in a different thread, I saw someone state that four algorithms is the fewest needed to solve the cube.
For my example I can use the Fridrich 'T' (R B U'B'U B U B²R'B U B U'B') to solve the last layer corners. By using B' before I use this algorithm I can rotate corners, and then use the pattern as
stated above to permute the corners correctly. Would this count as two algorithms, or one?
The reason I am asking this question is because, in a different thread, I saw someone state that four algorithms is the fewest needed to solve the cube. I would think that the single twist would
count as a seperate algorithm, but would like others thoughts. If it does not count as a seperate algorithm then it is possible to solve the cube with only 3 algorithms, and I suspect that it could
be reduced down to 2. I need at least 3 twists to set up my second algorithm though to solve the cube currently, but I would not be supprised if that could be reduced.
dblthnk84 wrote:
I know that an algorithm is a sequence of twists. My question is does it count as a different algorithm if you perform an extra twist before you perform the algorithm?
There are different definitions of the term algorithm. It does not apply to a single turn. It applies to a series of moves, which repeated, produce the same effect.
A recipe is an algorithm, a method for solving the cube is also an algorithm, but the main way it is used in cubing a the mathematical definition, that is, it is recursive. Recursive means that that
after sufficient repititions of an algorithm you will return to the initital state.
For example begin with a solved cube and do R2 B2 R F2 R' B2 R F2 R three times and it returns to the solved state.
To answer your question, there are set up moves which make for different algorithms. For example R2 B2 R F2 R' B2 R F2 R solves one position and creates another. Adding a Back side turn before and
after: B R2 B2 R F2 R' B2 R F2 R B' solves and creates two other positions respectively.
Another way to look at your question:
Start with a solved cube, do R U R' U R U2 R' six times and it returns to solved position. Now add U2 at the end: do R U R' U R U2 R' U2 three times and it returns to the solved position.
David J | {"url":"http://www.twistypuzzles.com/forum/viewtopic.php?p=24267","timestamp":"2014-04-17T15:48:46Z","content_type":null,"content_length":"53798","record_id":"<urn:uuid:5a271953-c391-4903-9fc4-20cf7d263401>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
If points A and C both lie on the circle with center B and
Author Message
If points A and C both lie on the circle with center B and [#permalink] 09 Dec 2012, 00:46
5% (low)
Question Stats:
(00:00) correct
0% (00:00)
based on 0 sessions
At the end of the explanation for this question, they state what would happen if the information was different, just to explain for interest's sake.
Joined: 08 Dec
2012 But I cannot understand how you can know that if AC is 2 1/2 times larger that the triangle is isosceles.
Posts: 7 Read till the end:
Followers: 0 Image here: platinumgmat dot com/global/images/inline_questions/00024-1.gif
If points A and C both lie on the circle with center B and the measurement of angle ABC is not a multiple of 30, what is the ratio of the area of the circle centered at point B to the
area of triangle ABC?
A) 2π
B) 2π(AB)2/(BC)2
C) 4π
D) π(BC)2/.5(BC)(AB)
E) None of the Above
Begin by finding the area of the circle:
Areacircle = πr2
Areacircle = π(AB)2 = π(BC)2
In dealing with triangle ABC, BC = AB since both are radii. At this point, some students make a mistake and assume that AB is the height of the triangle and BC is the base of the
triangle (or vice versa). However, we cannot assume that BC is the base and AB is the height since we have not yet shown that ABC is a right triangle. You could only make BC the base
and AB the height if triangle ABC were a right triangle (in which case AB would be a perpendicular segment drawn from a vertex, A, to the side opposite that vertex, B).
By definition, the height of a triangle is the length of a segment drawn from a vertex perpendicular to the side opposite that vertex. A line that is perpendicular to the side opposite
a vertex will, by definition, form a 90 degree angle. Consequently, for line AB to be the height of triangle ABC, angle ABC must be a right angle (i.e., 90 degrees).
Since the question states that "the measurement of angle ABC is not a multiple of 30," angle ABC cannot be 30, 60, 90, 120, etc. Consequently, angle ABC is not a right angle and line
AB is not the height of triangle ABC.
Without the height, you cannot determine the area of the triangle. Without the area of the triangle, you do not have enough information to solve the problem. The correct answer is It
Cannot Be Determined.
Then they say this:
Note: If the question omitted the words "the measurement of angle ABC is not a multiple of 30" and instead said that the length of line AC is 2 1/2 times larger than the radius, you
would be dealing with a 45-45-90 right triangle with sides r, r, and r*2 1/2. In this instance with a right triangle, the area of the triangle would be (1/2)bh = (1/2)(r)(r) = .5r2 and
the ratio of the area of the circle centered at point B to the area of triangle ABC would be 2π.
Can anyone shed light on this?
Manhattan GMAT Discount Codes Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes
Re: AC is 2 1/2 times larger than the radius [#permalink] 11 Dec 2012, 06:09
jessello wrote:
At the end of the explanation for this question, they state what would happen if the information was different, just to explain for interest's sake.
But I cannot understand how you can know that if AC is 2 1/2 times larger that the triangle is isosceles.
Read till the end:
Image here: platinumgmat dot com/global/images/inline_questions/00024-1.gif
If points A and C both lie on the circle with center B and the measurement of angle ABC is not a multiple of 30, what is the ratio of the area of the circle centered at point B to the
area of triangle ABC?
jumsumtak A) 2π
B) 2π(AB)2/(BC)2
VP C) 4π
D) π(BC)2/.5(BC)(AB)
Joined: 23 Mar E) None of the Above
Then they say this:
Posts: 1048 Note: If the question omitted the words "the measurement of angle ABC is not a multiple of 30" and instead said that the length of line AC is 2 1/2 times larger than the radius, you
would be dealing with a 45-45-90 right triangle with sides r, r, and r*2 1/2. In this instance with a right triangle, the area of the triangle would be (1/2)bh = (1/2)(r)(r) = .5r2 and
Followers: 65 the ratio of the area of the circle centered at point B to the area of triangle ABC would be 2π.
Kudos [?]: 411 Can anyone shed light on this?
[0], given:
435 A couple of things:
1.) If you post your specific questions on the GMAT quant P.S forum, you will get a better response.
2.) There is an option to upload an image. Please use that instead of using URLs.
Answer to your question:
The triangle will always be isosceles because AB=BC=radius. (unless AC=AB=AC is given; then it will be an equilateral triangle)
Also, triangle will be a 45-45-90 when AC= (2r)^(1/2) and not two and a half times larger than radius. Try to apply the Pythagoras theorem on triangle ABC.
Does that help?
My Debrief | MBA Timeline - New! Stay on top of deadlines, receive recommendations for each stage, get reminders.
jessello Re: AC is 2 1/2 times larger than the radius [#permalink] 11 Dec 2012, 13:06
Intern Hi jumsumtak
Joined: 08 Dec Quote:
A couple of things:
Posts: 7 1.) If you post your specific questions on the GMAT quant P.S forum, you will get a better response.
2.) There is an option to upload an image. Please use that instead of using URLs.
Followers: 0
I could not post URLs or IMGs until I had posted 5 times. Not my fault.
jessello Re: AC is 2 1/2 times larger than the radius [#permalink] 11 Dec 2012, 13:08
Intern I'm talking about this statement:
Joined: 08 Dec Quote:
If the question omitted the words "the measurement of angle ABC is not a multiple of 30" and instead said that the length of line AC is 2 1/2 times larger than the radius, you would be
Posts: 7 dealing with a 45-45-90 right triangle
Followers: 0 How do they know that?
Re: AC is 2 1/2 times larger than the radius [#permalink] 12 Dec 2012, 02:59
jessello wrote:
I'm talking about this statement:
If the question omitted the words "the measurement of angle ABC is not a multiple of 30" and instead said that the length of line AC is 2 1/2 times larger than the radius, you would be
dealing with a 45-45-90 right triangle
How do they know that?
Joined: 23 Mar
2011 In general:
Posts: 1048 If you know the relation between 3 sides (AB:BC:AC::x:y:z) then you can calculate the angles of the triangle.
Followers: 65 2.)
Kudos [?]: 411 In this case: 2 sides are equal AB=BC=r and the length of the third side is given. Hence, you can calculate the angles of the triangle
[0], given:
435 3.)
Either you have copied the explanation incorrectly or the source is wrong.
As I explained earlier if AC= (2r)^(1/2) then it will be a 45-45-90 triangle.
we know, AB=BC=r and AC= (2r)^(1/2); apply AB^2+BC^2=AC^2 (Pythagoras theorem) and you will find that this equation holds true for triangle ABC. This in turn means the triangle is
right angled at angle ABC.
My Debrief | MBA Timeline - New! Stay on top of deadlines, receive recommendations for each stage, get reminders.
gmatclubot Re: AC is 2 1/2 times larger than the radius [#permalink] 12 Dec 2012, 02:59
Similar topics Author Replies Last post
The points A,B and C lie on a circle that has a radius 4. If positive soul 3 20 Jun 2006, 04:35
In the figure, point P and Q lie on the circle with center mbunny 11 03 Sep 2007, 08:07
Points A, B, and C lie on a circle of radius. What is the bmwhype2 5 19 Nov 2007, 05:42
9 Points A, B, and C lie on a circle of radius 1. What is the Economist 18 27 Sep 2009, 08:00
If points A, B, and C lie on a circle of radius 1, what is zaur2010 9 19 Mar 2011, 12:20 | {"url":"http://gmatclub.com/forum/if-points-a-and-c-both-lie-on-the-circle-with-center-b-and-143819.html","timestamp":"2014-04-16T20:13:31Z","content_type":null,"content_length":"171050","record_id":"<urn:uuid:3454b9fd-4ced-4fa5-81c5-2edb692490ec>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstract Algebra: Groups
March 4th 2008, 11:30 PM #1
Abstract Algebra: Groups
Hi all,
A general, and perhaps stupid, question: Can we have a group with one element? If so, what's an example?
Thanks all
EDIT: could a group with a single element and the identity function as the operation work?
Yes. I quote from Group (mathematics) - Wikipedia, the free encyclopedia:
"A trivial group is a group consisting of a single element. All such groups are isomorphic so one often speaks of the trivial group. The single element of the trivial group, variously labeled e,
1, or 0, is the identity element. The group operation is e + e = e."
See also Trivial Group -- from Wolfram MathWorld
March 5th 2008, 03:13 AM #2 | {"url":"http://mathhelpforum.com/advanced-algebra/30027-abstract-algebra-groups.html","timestamp":"2014-04-18T19:36:06Z","content_type":null,"content_length":"35607","record_id":"<urn:uuid:f0416dc9-c1be-41bd-ae67-e25386c5e933>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instantaneous Rate of Change - Problem 1
To estimate the instantaneous rate of change of an object, calculate the average rate of change over smaller and smaller time intervals. Recall that the average rate of change is the change in some
quantity divided by the change in time. Eventually, these values will get closer to some point, which you can call the estimated instantaneous rate of change. When the rate of change is positive,
that means that quantity is increasing. When the rate of change is negative, the quantity is decreasing.
We’re talking about instantaneous rate of change and I have a problem here. The amount A(t) in milligrams of pain reliever in a patients system after t minutes is given by this function. A(t) equals
8 times t times e to the (-t over 50). What I want to do first is complete this table of average rates of change.
When you look at this expression here, this is an average rate of change, 100 plus delta t, 100. I’m calculating the change in the amount of medication in the patient’s system over time. So let’s
start with the increments. Delta t, when delta t is 100, 100 plus delta t is 200. Remember t is in minutes, 200 minutes after the medication’s been taken.
I want to calculate A(200) minus A(100) and I want to divide that by delta t which is 100. That’s an average rate of change. I’ll do that on my calculator. I get -0.7896. I want to do the same thing
with smaller and smaller increments of time. So delta t equals 10, 100 plus delta t is 110, so I’m going to have A(110) minus A(100) over 10. Again using my calculator I get -1.0761. And I’m going to
keep doing this for smaller and smaller values and see if I get some sense of what value this number is approaching.
I’ll do it for delta t equals1, I get -1.0826, and I’ll do it again for 0.1, I get -1.0827. It looks like it’s getting pretty close to converging as my values of delta t are getting smaller and
smaller. So part b asks me to estimate the instantaneous rate of change of A(t), at t equals 100. That’s the instantaneous change of the amount of drug in this patient’s system.
Now, these values are getting successively closer and closer, to some magical value. And it looks like it's -1.083, and that’s my instantaneous rate. What are the units? Remember, the amount of drug
in the system was in milligrams and time was in minutes, so this is milligrams per minute.
That means that, at this point in time, the amount of drugs is actually decreasing at this rate. We represent this rate on the graph. If this is a graph of my function remember my function was 80
times e to the -t over 50. There’s a surge in the beginning and then at t equals 50, the amount of drugs starts to decrease. Here’s where we are right now. The rate of decrease, this rate actually
gives me the slope of the curve at this point. That’s a way of thinking about instantaneous rate of change. It’s the slope of the curve at that point. So the slope would be -1.083 milligrams per
average rate of change instantaneous rate of change rate of decrease change in quantity change in time limits quantity vs. time graph tangent line slope slope of a curve | {"url":"https://www.brightstorm.com/math/calculus/the-derivative/instantaneous-rate-of-change-problem-1/","timestamp":"2014-04-20T03:20:00Z","content_type":null,"content_length":"64399","record_id":"<urn:uuid:38b50251-dfcc-4dbc-ba2b-2df7e27361be>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Memorize Math and Physics Formulas
Edit Article
Memorization HelpMemorizing Math and Physics Formulas
Edited by Kd, Unksnogan, Maluniu, Flickety and 21 others
Have you ever stayed up all night just to try to memorize formulas for your math? Or do you memorize a set of formulas today and forget all about them tomorrow? It seems to me that these formulas
keep pulling you back to your old books instead of moving on.
Memorization Help
Memorizing Math and Physics Formulas
1. 1
Relax. Math and Physics problems are not meant to be studied under stress. Relax your mind. By doing this, you will be able to focus more on your task.
2. 2
Minimize your reference checks. Many people think that once they take a glance at a formula, it is in their mind, but when they wake up the next day, they are shocked to realize that the formula
leaked out during the night. This is why it is a good idea to practice solving a problem with the formula without looking it up. You must do this as many times as you can. Repetition leads to
3. 3
Analyze the Units. Put the raw units of each variable into the formula and see if you can get the units of the answer.
4. 4
Understand How the Formula is structured. You already have a decent gut feeling about the concept. Make sense of the formula. For instance, a = F / m. F is on the top of the fraction. That makes
sense, since if you exert more force on an object, it will speed up more quickly. Mass is on the bottom of the fraction, since more mass means more inertia, making the object more difficult to
accelerate. The opposite formula (a = m / F) does not make sense. Using this incorrect formula, a strong force (large number on the bottom of the fraction) would cause a smaller acceleration,
which does not make sense.
5. 5
Satisfaction. Do you ever study while you are hungry or thirsty? How does it feel? You always feel reluctant to focus because you are in a rush to go grab some pizza. If you start to feel hungry
or thirsty, quit studying those formulas and satisfy yourself with some food or drink.
6. 6
Take them with you. Find a small book and put down all those formulas. Keep the book at your back pocket and try to review them anytime you feel like you are missing something. This will bring
back the memories of what you have learned, making those jaw-breaking formulas stick into your mind forever.
• Write all the formulas on a sheet and stick it to the wall of your room so that whenever you look at them you will be able to remember what you have forgotten. This really works.
• Try to use a story, such as quadratic formula, (picture on top of page) for me is: A negative boy (-b) couldn't decide (+ or -) whether to go to a radical (the radical) party or be square (b
squared) and miss out on four awesome chicks (-4ac), the whole thing was over at 2 A.M. (over 2a)
• Try to create a game that involves memorizing formulas with your friends. This can automatically make those formulas stick into your mind because everyone wants to win and so do you. You can also
try a little rhyme or a song, if you like to sing.
• Some physics teachers do not require you to memorize formulas. Don't waste time memorizing formulas that will be provided on the test. Ask your teacher what formulas to memorize and which ones
will be on the test.
Article Info
Thanks to all authors for creating a page that has been read 122,443 times.
Was this article accurate? | {"url":"http://www.wikihow.com/Memorize-Math-and-Physics-Formulas","timestamp":"2014-04-19T23:02:56Z","content_type":null,"content_length":"69027","record_id":"<urn:uuid:7244468e-55eb-421e-b874-96db7476bc96>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edgewater, NJ Math Tutor
Find an Edgewater, NJ Math Tutor
...I study college level courses at home consistently and have been doing that for years for continuing education. I love to learn! I am a passionate teacher and will do more than my best to see
my students be the best at what they seek to learn.
81 Subjects: including algebra 1, algebra 2, probability, prealgebra
...My previous experiences with education have been through The Johns Hopkins University CTY Summer Program as a Resident Assistant and Teaching Assistant. I have also volunteered after-school in
an underprivileged middle school in Chester. I currently work with Veritas Prep, but have also taught through The Yale Academy, an organization, similar to Kaplan or Princeton Review.
26 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...I am also on the board of the North Jersey Princeton Alumni Association. I was a History major at Princeton. I took several classes in Anthropology and achieved at least a B in any
Anthropology class that I've taken.
34 Subjects: including algebra 2, algebra 1, prealgebra, SAT math
I am a highly motivated professional with 12 years experience in tutoring. I am a detail-oriented, efficient and organized individual with strong analytical and problem solving capabilities and
the ability to make well thought out decisions. As a graduate with a BS in Mathematics my goal is to help students exceed to their full potential in math and science.
13 Subjects: including calculus, geometry, tennis, differential equations
...I have worked with children, pre- teens, teenagers, and adults. Furthermore, I have an extensive amount of experience working with individuals who have ADD/ADHD, Asperger's, Autism Spectrum
Disorder, and Speech Apraxia. I am also TEFL certified, and have just recently returned back to New York from Brazil, where I taught English to adults for four months.
14 Subjects: including algebra 1, algebra 2, vocabulary, grammar
Related Edgewater, NJ Tutors
Edgewater, NJ Accounting Tutors
Edgewater, NJ ACT Tutors
Edgewater, NJ Algebra Tutors
Edgewater, NJ Algebra 2 Tutors
Edgewater, NJ Calculus Tutors
Edgewater, NJ Geometry Tutors
Edgewater, NJ Math Tutors
Edgewater, NJ Prealgebra Tutors
Edgewater, NJ Precalculus Tutors
Edgewater, NJ SAT Tutors
Edgewater, NJ SAT Math Tutors
Edgewater, NJ Science Tutors
Edgewater, NJ Statistics Tutors
Edgewater, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Edgewater_NJ_Math_tutors.php","timestamp":"2014-04-18T14:16:59Z","content_type":null,"content_length":"24080","record_id":"<urn:uuid:12b406f9-8751-4635-8e97-20aa73e2285c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum: Teacher2Teacher - Q&A #174
View entire discussion
[<<prev] [next>>]
From: Claudia (for Teacher2Teacher Service)
Date: May 02, 1998 at 21:28:44
Subject: Re: Helping my 7th grader understand math
Gosh, sounds like things are way out of hand. Your daughter obviously has
MAJOR math anxiety. Too much pressure put on a child at this age is very
critical to her success in future math courses. Math anxiety is worse than any
other "disease" I know because once the child buries herself in lack of self-
confidence, it takes LOTS of TLC and patience to undo the damage done.
Perhaps you or your husband have been in a tense situation (sports) whatever,
and suddenly all eyes are on you and you choke! Sounds like that is what your
poor daughter is doing. Trying to analyze it is probably only making things
If she can do the work on daily assignments, then she probably doesn't have a
learning disability, just math anxiety. This could ruin you daughter's math
life forever! How can you CALM the situation?
Sometimes children this age are just not mature enough for algebra. Maybe
things have been presented in a very abstract way and it doesn't make sense to
her, but instead of finding a different teaching approach, it sounds like the
situation has made her feel stupid and incompetent. She needs some real help,
some real understanding, and someone who can rebuild her mathematical
Post a public discussion message
Ask Teacher2Teacher a new question | {"url":"http://mathforum.org/t2t/message.taco?thread=174&message=2","timestamp":"2014-04-16T20:05:17Z","content_type":null,"content_length":"5550","record_id":"<urn:uuid:e5b85961-4637-41cc-bb4b-60023e4d820d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Purplemath Forums
Hello and sorry if I'm not posting in the right section, but I have the following problem:
If a and b are rational numbers and sqrt from a+b does Not equal 0, how do I demonstrate that sqrt from a-b does not equal 0?
Re: rational numbers
no, i'm sorry
i meant a+sqrt(b) !=0 and prove a-sqrt(b)!=0, where sqrt stands for square root and != for does not equal
i'm sorry for my terminology, but i'm not used to the english one | {"url":"http://www.purplemath.com/learning/search.php?author_id=33624&sr=posts","timestamp":"2014-04-18T14:02:36Z","content_type":null,"content_length":"14644","record_id":"<urn:uuid:d31eaa5c-4cdf-4ecc-b01e-9272e5ac0050>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alternate Formula for the Area of a Triangle
<iframe src="//www.googletagmanager.com/ns.html?id=GTM-WVB47G" height="0" width="0" style="display:none;visibility:hidden"></iframe>
Alternate Formula for the Area of a Triangle
Practice Alternate Formula for the Area of a Triangle
Best Score
Practice Now
• Read
An alternate formula for the area of a triangle
• Read
• Video
Shows an example of determining the area of a triangle by using the sine function.
• Video
Shows an example of determining the area of a triangle by using the sine function.
• Practice
Practice Alternate Formula for the Area of a Triangle questions
Please wait...
You need to be signed in to perform this action. Please sign-in and try again.
Please wait...
Oops, looks like cookies are disabled on your browser. Click
to see how to enable them.
Original text
Contribute a better translation | {"url":"http://www.ck12.org/trigonometry/Alternate-Formula-for-the-Area-of-a-Triangle/?by=ck12","timestamp":"2014-04-17T10:50:57Z","content_type":null,"content_length":"57873","record_id":"<urn:uuid:699577bf-c503-4711-9cd1-44caafe8e2b4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Hills, NY Geometry Tutor
Find a North Hills, NY Geometry Tutor
...I have an undergraduate degree in Mathematics and Master's and PhD degrees in Computer Science. I am a NJ-certified math teacher (K-12), scored a perfect 200 on the Praxis II Middle School
Mathematics test, and received an ETS Recognition of Excellence on the Mathematics Content Knowledge exam. ...
36 Subjects: including geometry, reading, ESL/ESOL, algebra 1
...I am very familiar with the new comon core standards that students are expected to demonstrate. In one or two sessions, I am able to assess each student's needs and tailor a lesson plan to
ensure that he or she thoroughly understands all the necessary material and is well-prepared for exams.For ...
29 Subjects: including geometry, reading, biology, piano
...My patient, polite and easy-going manner coupled with my ability to model various methods for understanding and “seeing” things, accentuates my success as a teacher and tutor. My teaching
strategies include giving a mixed review of problems. I also always have the student model and explain what they've learned, showing their process for deriving an answer.
16 Subjects: including geometry, chemistry, calculus, algebra 1
...I have taken Symbolic Logic, Mathematical Logic, Advanced Logic, Computability, and Modal Logic, each with excellent marks. While taking Symbolic Logic the professor requested to refer students
in the class to me for questions with the material, and after taking the class, asked me to serve as a...
32 Subjects: including geometry, calculus, physics, statistics
...I started with a major test prep company, and have experience in the following tests: SAT I (math, reading, and writing), ACT, GRE, GMAT, MCAT Verbal, LSAT, SSAT, SHSAT, ISEE, SSAT, and SAT
Subject Tests and AP Tests (Math level 1 and 2, World History, US History, and Government). I have tutored...
42 Subjects: including geometry, reading, English, biology | {"url":"http://www.purplemath.com/North_Hills_NY_Geometry_tutors.php","timestamp":"2014-04-18T08:39:34Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:0da3bb42-d2b0-4982-aa83-3cb5636baaa0>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Change in scalar upcasting rules for 1.6.x?
Travis Oliphant travis@continuum...
Mon Feb 13 23:01:53 CST 2012
On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
> On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant <travis@continuum.io> wrote:
> I disagree with your assessment of the subscript operator, but I'm sure we will have plenty of time to discuss that. I don't think it's correct to compare the corner cases of the fancy indexing and regular indexing to the corner cases of type coercion system. If you recall, I was quite nervous about all the changes you made to the coercion rules because I didn't believe you fully understood what had been done before and I knew there was not complete test coverage.
> It is true that both systems have emerged from a long history and could definitely use fresh perspectives which we all appreciate you and others bringing. It is also true that few are aware of the details of how things are actually implemented and that there are corner cases that are basically defined by the algorithm used (this is more true of the type-coercion system than fancy-indexing, however).
> I think it would have been wise to write those extensive tests prior to writing new code. I'm curious if what you were expecting for the output was derived from what earlier versions of NumPy produced. NumPy has never been in a state where you could just re-factor at will and assume that tests will catch all intended use cases. Numeric before it was not in that state either. This is a good goal, and we always welcome new tests. It just takes a lot of time and a lot of tedious work that the volunteer labor to this point have not had the time to do.
> Very few of us have ever been paid to work on NumPy directly and have often been trying to fit in improvements to the code base between other jobs we are supposed to be doing. Of course, you and I are hoping to change that this year and look forward to the code quality improving commensurately.
> Thanks for all you are doing. I also agree that Rolf and Charles have-been and are invaluable in the maintenance and progress of NumPy and SciPy. They deserve as much praise and kudos as anyone can give them.
> Well, the typecasting wasn't perfect and, as Mark points out, it wasn't commutative. The addition of float16 also complicated the picture, and user types is going to do more in that direction. And I don't see how a new developer should be responsible for tests enforcing old traditions, the original developers should be responsible for those. But history is history, it didn't happen that way, and here we are.
> That said, I think we need to show a little flexibility in the corner cases. And going forward I think that typecasting is going to need a rethink.
No argument on any of this. It's just that this needs to happen at NumPy 2.0, not in the NumPy 1.X series. I think requiring a re-compile is far-less onerous than changing the type-coercion subtly in a 1.5 to 1.6 release. That's my major point, and I'm surprised others are more cavalier about this.
New developers are awesome, and the life-blood of a project. But, you have to respect the history of a code-base and if you are re-factoring code that might create a change in corner-cases, then you are absolutely responsible for writing the tests if they aren't there already. That is a pretty simple rule.
If you are changing semantics and are not doing a new major version number that you can document the changes in, then any re-factor needs to have tests written *before* the re-factor to ensure behavior does not change. That might be annoying, for sure, and might make you curse the original author for not writing the tests you wish were already written --- but it doesn't change the fact that a released code has many, many tests already written for it in the way of applications and users. All of these are outside of the actual code-base, and may rely on behavior that you can't just change even if you think it needs to change. Bug-fixes are different, of course, but it can sometimes be difficult to discern what is a "bug" and what is just behavior that seems inappropriate.
Type-coercion, in particular, can be a difficult nut to crack because NumPy doesn't always control what happens and is trying to work-within Python's stunted type-system. I've often thought that it might be easier if NumPy were more tightly integrated into Python. For example, it would be great if NumPy's Int-scalar was the same thing as Python's int. Same for float and complex. It would also be nice if you could specify scalar literals with different precisions in Python directly. I've often wished that NumPy developers had more access to all the great language people who have spent their time on IronPython, Jython, and PyPy instead.
> Chuck
> On Feb 13, 2012, at 9:40 PM, Mark Wiebe wrote:
>> I believe the main lessons to draw from this are just how incredibly important a complete test suite and staying on top of code reviews are. I'm of the opinion that any explicit design choice of this nature should be reflected in the test suite, so that if someone changes it years later, they get immediate feedback that they're breaking something important. NumPy has gradually increased its test suite coverage, and when I dealt with the type promotion subsystem, I added fairly extensive tests:
>> https://github.com/numpy/numpy/blob/master/numpy/core/tests/test_numeric.py#L345
>> Another subsystem which is in a similar state as what the type promotion subsystem was, is the subscript operator and how regular/fancy indexing work. What this means is that any attempt to improve it that doesn't coincide with the original intent years ago can easily break things that were originally intended without them being caught by a test. I believe this subsystem needs improvement, and the transition to new/improved code will probably be trickier to manage than for the dtype promotion case.
>> Let's try to learn from the type promotion case as best we can, and use it to improve NumPy's process. I believe Charles and Ralph have been doing a great job of enforcing high standards in new NumPy code, and managing the release process in a way that has resulted in very few bugs and regressions in the release. Most of these quality standards are still informal, however, and it's probably a good idea to write them down in a canonical location. It will be especially helpful for newcomers, who can treat the standards as a checklist before submitting pull requests.
>> Thanks,
>> -Mark
>> On Mon, Feb 13, 2012 at 7:11 PM, Travis Oliphant <travis@continuum.io> wrote:
>> The problem is that these sorts of things take a while to emerge. The original system was more consistent than I think you give it credit. What you are seeing is that most people get NumPy from distributions and are relying on us to keep things consistent.
>> The scalar coercion rules were deterministic and based on the idea that a scalar does not determine the output dtype unless it is of a different kind. The new code changes that unfortunately.
>> Another thing I noticed is that I thought that int16 <op> scalar float would produce float32 originally. This seems to have changed, but I need to check on an older version of NumPy.
>> Changing the scalar coercion rules is an unfortunate substantial change in semantics and should not have happened in the 1.X series.
>> I understand you did not get a lot of feedback and spent a lot of time on the code which we all appreciate. I worked to stay true to the Numeric casting rules incorporating the changes to prevent scalar upcasting due to the absence of single precision Numeric literals in Python.
>> We will need to look in detail at what has changed. I will write a test to do that.
>> Thanks,
>> Travis
>> --
>> Travis Oliphant
>> (on a mobile)
>> 512-826-7480
>> On Feb 13, 2012, at 7:58 PM, Mark Wiebe <mwwiebe@gmail.com> wrote:
>>> On Mon, Feb 13, 2012 at 5:00 PM, Travis Oliphant <travis@continuum.io> wrote:
>>> Hmmm. This seems like a regression. The scalar casting API was fairly intentional.
>>> What is the reason for the change?
>>> In order to make 1.6 ABI-compatible with 1.5, I basically had to rewrite this subsystem. There were virtually no tests in the test suite specifying what the expected behavior should be, and there were clear inconsistencies where for example "a+b" could result in a different type than "b+a". I recall there being some bugs in the tracker related to this as well, but I don't remember those details.
>>> This change felt like an obvious extension of an existing behavior for eliminating overflow, where the promotion changed unsigned -> signed based on the value of the scalar. This change introduced minimal upcasting only in a set of cases where an overflow was guaranteed to happen without that upcasting.
>>> During the 1.6 beta period, I signaled that this subsystem had changed, as the bullet point starting "The ufunc uses a more consistent algorithm for loop selection.":
>>> http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055156.html
>>> The behavior Matthew has observed is a direct result of how I designed the minimization function mentioned in that bullet point, and the algorithm for it is documented in the 'Notes' section of the result_type page:
>>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.result_type.html
>>> Hopefully that explains it well enough. I made the change intentionally and carefully, tested its impact on SciPy and other projects, and advocated for it during the release cycle.
>>> Cheers,
>>> Mark
>>> --
>>> Travis Oliphant
>>> (on a mobile)
>>> 512-826-7480
>>> On Feb 13, 2012, at 6:25 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
>>> > Hi,
>>> >
>>> > I recently noticed a change in the upcasting rules in numpy 1.6.0 /
>>> > 1.6.1 and I just wanted to check it was intentional.
>>> >
>>> > For all versions of numpy I've tested, we have:
>>> >
>>> >>>> import numpy as np
>>> >>>> Adata = np.array([127], dtype=np.int8)
>>> >>>> Bdata = np.int16(127)
>>> >>>> (Adata + Bdata).dtype
>>> > dtype('int8')
>>> >
>>> > That is - adding an integer scalar of a larger dtype does not result
>>> > in upcasting of the output dtype, if the data in the scalar type fits
>>> > in the smaller.
>>> >
>>> > For numpy < 1.6.0 we have this:
>>> >
>>> >>>> Bdata = np.int16(128)
>>> >>>> (Adata + Bdata).dtype
>>> > dtype('int8')
>>> >
>>> > That is - even if the data in the scalar does not fit in the dtype of
>>> > the array to which it is being added, there is no upcasting.
>>> >
>>> > For numpy >= 1.6.0 we have this:
>>> >
>>> >>>> Bdata = np.int16(128)
>>> >>>> (Adata + Bdata).dtype
>>> > dtype('int16')
>>> >
>>> > There is upcasting...
>>> >
>>> > I can see why the numpy 1.6.0 way might be preferable but it is an API
>>> > change I suppose.
>>> >
>>> > Best,
>>> >
>>> > Matthew
>>> > _______________________________________________
>>> > NumPy-Discussion mailing list
>>> > NumPy-Discussion@scipy.org
>>> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20120213/6e7bc7d3/attachment-0001.html
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060404.html","timestamp":"2014-04-19T19:38:20Z","content_type":null,"content_length":"19545","record_id":"<urn:uuid:87760dbe-f0a1-433e-a577-c67a0b017656>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Who Invented The Number Zero?
Posted In: Science & Math.
Believe it or not, every number has a story behind its inception and zero, is not an exception to the rule despite essentially being the absence of a number. In the beginning, zero was not used very
often, if at all, and the funny thing about its creation is the fact that, on record, it was created three different times by three different sets of people. But let us start from the beginning on
who exactly created this absent number.
The Babylonian empire
The first recorded time the zero was “invented” was during the third century BC by the Babylonian empire. This would be the only time the zero would be used for a long, long time until the Mayans
came to pass. This would be one hundred years later when the Mayans in Central America created the zero for their own uses to make things easier for their people to discern certain inequities. Again
another one hundred years later, the zero would be “invented” again, and for the final time, in India, where it would finally catch on worldwide as a legitimate number endorsed by most civilizations
at the time across the world. By time it reached Europe in the 12^th century, most if not, all mathematicians utilized the zero.
Brahmagupta explicitly defined the number zero and how to use it.
For a more specific creator of this absence of a number, in India, it was made by Brahmagupta, a Hindu mathematician who first truly endorsed the number. Prior to its “official” creation and
implementation into the mainstream, no other countries used the number until it reached India’s subcontinent. Before the zero was used universally, the thing that mathematicians utilized was blank
spaces to decide how to calculate a problem as if there was nothing there.
The last number to be created
Another quick little fun fact about the creation of the zero is the fact that it was the last number, in recorded history, to be created. No new numbers have been created since, or at least no new
numbers that are actually used and utilized each and every day. It was also the only number to be placed before the beginning of the count.
Other math invention: Algebra, geometry, numbers and calculus. | {"url":"http://invention.yukozimo.com/who-invented-the-number-zero/","timestamp":"2014-04-21T10:30:45Z","content_type":null,"content_length":"17475","record_id":"<urn:uuid:6b4a6682-2d49-4bbb-a3e2-27f0baa04692>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
12.4 Differential Equations
Download PDFs
Additional Resources
12.4 Differential Equations
• A differential equation is an expression that relates quantities and their rates of change.
• The solution to a differential equation is not simply a number; it is a function.
With a solid mathematical tool, calculus, in hand, we can set out to try to understand the phenomena of the world mathematically. Let's start with a simple example. Imagine an object in free-fall. At
any given time during its fall, it will have some specific velocity, v. Furthermore, we intuitively know that the longer something falls, the faster it goes. This suggests that the velocity of the
object should be expressed as a function of elapsed time, t.
To write the specific expression that will tell us the object's velocity at any point in time, let's first assume that the object begins from a state of rest. This gives us an "initial condition," of
v(0) = 0, or "the velocity at time zero equals zero." The velocity of the object as it falls will then be due solely to the influence of gravity. If we multiply the time spent falling t by the
acceleration due to gravity g, which is the experimentally observed rate at which the velocity of a freely falling object changes, we can determine the speed at which our object is falling at any
point in time:
v(t) = gt
Notice here that what interests us is not a specific value for velocity or time, but rather the exact relationship between the two. In this example, we have a non-constant velocity. If we take the
derivative of this, we should get an expression that tells us how fast velocity is changing. Doing this, we get:
This is a very simple example of what is known as a differential equation. A differential equation is simply an equation that relates quantities with their rates of change. In this example, we see
that the amount by which v changes, dv, in some small amount of time, dt, is equal to a constant, g.
To solve this equation, we are looking for a function whose derivative is the constant g. Notice that solving a differential equation does not give us a simple number, as we would expect were we to
solve the equation 10 = 4x -2 for the variable x. Rather, our solution to a differential equation is a function v(t). This example is somewhat contrived because we already know that the answer will
be v(t) = g. After all, that's what we started with. But if we didn't already know, how could we figure it out?
There are a variety of methods that one can use to solve different types of differential equations. No one method can solve every differential equation, and there are many differential equations that
can't be solved at all. In the next example, we'll get a sense of the methods and thinking that go into solving differential equations.
• Exponential growth is a classic example of a real-world situation that lends itself to a solvable differential equation.
Let's look at another example, one that gives us an equation involving both a quantity and its derivative. Imagine a single bacterium surrounded by nutrients—perhaps it's in a bottle of milk.
Bacteria divide asexually by binary fusion, their population basically doubling at set intervals. The more bacteria there are, the more that are "born." This implies a rate of change, or growth, that
is not steady, as was the case in the previous example of the velocity of a falling object. Furthermore, the rate of increase in the bacteria population depends on how many there are to begin with.
If there are two bacteria initially, the first increase is by two, the second increase is by four, the third increase is by 8, etc.
Let's designate P(t) as the number of bacteria at any given time, t. The rate of change in this population is then
The a is just a constant that is related to the specifics of the situation—what type of bacteria, how long it takes them to reproduce, etc. In this situation, we have a rate of change that is
directly proportional to the quantity that is changing; in other words, we have an equation that relates a certain quantity to its derivative. This is a classic differential equation that describes
exponential growth.
We could use a process known as integration to solve this by separating the variables, putting the parts having to do with P on one side of the equation and the parts having to do with t on the other
side. Integration and differentiation are two of the most important concepts of calculus. Whereas differentiation seeks to explain rates of change, integration makes sense of the accumulation of an
infinite number of tiny changes. Integration is in a very real sense the "opposite" of differentiation, but it can be very complicated for anything but the simplest of equations. A faster way, for
our purposes, might be simply to try a few possible solutions and see if they work.
First let's try P(t) = at. According to our table from the previous section,
P(t) = at
Since this is true only for t = 1, let's try something else.
How about P(t) = sin at? cos(at) and we would have:
a cos at = a sin at
Again, this is true only sometimes, in much the same way that a stopped clock is right twice a day. We need something that is always true regardless of what value of t we consider. Let's try
something else.
How about P(t) = e^at? ae^at, which is just aP(t)! This gives us ae^at = ae^at, which is always true, no matter what t is. So the solution to our differential equation is P(t) = e^at.
In this example, we see again how the solution to a differential equation is a function, not a number. In our example here, this function describes how to find the population of bacteria at any point
in time, even though the rate of increase is changing. It's a nice, simple expression that encompasses the complexity of the situation under examination.
• Many differential equations are not solvable, but they can, upon analysis, yield information about the system they represent.
In addition to integration and the "guess and check" method we just used, there are other ways of solving differential equations (sometimes nicknamed "diff EQs"), and they generally fall into two
categories: exact and numerical methods. Exact methods yield exact solutions, as did the function in our example above. Numerical methods give approximations based on different algorithms. Often,
however, we can discover interesting behavior regarding our situation without having to solve any equation. We can look at its qualitative behavior via what is called a phase portrait, a picture that
shows a system's "phase space."
Phase space is handy because it provides a way to represent all the possible states of a system with one picture. It is a graph of the variables, such as position and velocity, that determine the
state of a system. We will talk about phase space in more depth in Unit 13. For our purposes here, it suffices to say that examining graphical representations of systems of differential equations can
yield a wealth of qualitative information about the system, such as whether or not it will display cyclical or synchronous behavior.
Now that we have an idea how to model certain real-life situations using equations that use both quantities and rates of change, we can tackle the issue of how synchronization arises in nature. We
are going to look at one of the most basic and accessible types of synchronization, that of cyclical behavior.
Next: 12.5 Cycles | {"url":"http://www.learner.org/courses/mathilluminated/units/12/textbook/04.php","timestamp":"2014-04-20T21:46:02Z","content_type":null,"content_length":"50880","record_id":"<urn:uuid:533fe24a-d023-4c7e-be27-98d139543dfc>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wisconsin Legislature: AB558-ASA1-AA4-AA2,2,16
2011 - 2012 LEGISLATURE
TO ASSEMBLY AMENDMENT 4,
TO 2011 ASSEMBLY BILL 558
March 13, 2012 - Offered by Representatives Krusick and Fields.
AB558-ASA1-AA4-AA2,1,10 74. Page 5, line 16
: after "type" insert "and updated with current information
as soon as the information becomes available and, in the case of scores on
standardized examinations, no later than 30 days after receiving scores from the
testing company".
AB558-ASA1-AA4-AA2,2,10 9
"6. Provide a list of the names of the members of the board of directors of the
private school or of the entity that oversees the private school.".
AB558-ASA1-AA4-AA2,2,12 1112. Page 9, line 7
: after "site" insert "as soon as the information becomes
available and no later than 30 days after receiving scores from the testing company".
AB558-ASA1-AA4-AA2,2,16 1313. Page 12, line 25
: after "type" insert "and updated with current information
as soon as the information becomes available and, in the case of scores on
standardized examinations, no later than 30 days after receiving scores from the
testing company".
AB558-ASA1-AA4-AA2,3,7 6
"6. Provide a list of the names of the members of the board of directors of the
private school or of the entity that oversees the private school.".
AB558-ASA1-AA4-AA2,3,9 821. Page 16, line 16
: after "site" insert "as soon as the information becomes
available and no later than 30 days after receiving scores from the testing company". | {"url":"http://docs.legis.wisconsin.gov/2011/related/amendments/ab558/aa2_aa4_asa1_ab558/_29","timestamp":"2014-04-18T15:47:11Z","content_type":null,"content_length":"28801","record_id":"<urn:uuid:43eeeced-524f-4245-b145-90d6dc67aadd>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] on Martin Davis's "bounds and Hilbert's 10th Problem"
Gabriel Stolzenberg gstolzen at math.bu.edu
Sat Apr 15 20:17:52 EDT 2006
This is in response to Martin Davis's apparent belief that I am
skeptical that classical mathematicians are interested in realistic
bounds. (Of course, for all I know, some are not. But that's not
the point.) I must have written something that encouraged this (to
put it kindly, as Martin does) "strange" idea. But I have no idea
what it was.
I begin with the relevant quote from Martin's message, followed
by my reply and then, for reference, the body of his message.
> This is in response to Gabriel's strange (to my mind) skepticism
> about the interest of mathematicians of the classical "mindset"
> about bounds.
Martin, I don't see that we're on different sides. The work you
describe, which seems terrific, is about realistic bounds for certain
important problems.
My skepticism is only about the idea that EVERY bound or algorithm
is "intrinsically interesting" to classical mathematicians (and hence,
like Skewes' number, worth getting), no matter how unrealistic it is,
no matter that we have no idea how to use it to get a realistic bound
or algorithm and no matter that we didn't learn anything of value by
constructing it.
Martin's message reads:
In my dissertation I proved that every r.e. set of natural numbers could
be defined by an expression of the form:
(Ey)(Ak < y)(E x_1, ... , x_n) [p(x,k,y,x_1, ... , x_n) = 0]
where p is a polynomial with integer coefficients. The proof was easy
using methods of G?del, and it followed from general considerations that
there was an absolute upper bound on n, easily seen to be in the
neighborhood of 50. Soon thereafter, Raphael Robinson published a very
intricate proof that n could be taken to be 4. Much later, using MRDP
Matiyaevich showed that n=2 works.
MRDP itself showed that the bounded universal quantifier could be removed
from this expression raising the question of a bound on n on this simple
polynomial equation. In a paper published in Acta Arithmetica, Julia
Robinson and Yuri Matiyasevich showed that n=13 works. Refining their
techniques, Yuri later improved the result to n=9.
I have lectured about these matters many times over the years, and have
never failed to find intense interest in the bounds.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-April/010418.html","timestamp":"2014-04-18T04:40:34Z","content_type":null,"content_length":"4938","record_id":"<urn:uuid:80b222d1-a30d-41fd-a0fa-af164ab53c1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asymptotes of Secant, Cosecant, and Cotangent - Problem 1
I want to talk about the asymptotes and x intercepts of this function y equals 5 secant 1/6 x. Now in order to analyze both the asymptotes in the intercepts I want to make a little substitution I’m
going to call this theta.
And so this function becomes 5 secant theta and of course 5 secant theta is the same as 5 over cosine theta. Now if you look at this function this is never going to equal 0 the only way it’s going to
equal is if the numerator equals 0 and the numerator is 5.
So this is never going to equal 0 and that means no x intercepts. Now what about vertical asymptotes, we will have vertical asymptotes when cosine theta equals 0 and we already know that that happens
at theta equals pi over 2 plus and pi.
Now theta was the substitution let’s put 1/6 x back in there. 1/6 x equals pi over 2 plus n pi. So to find what x has to be I multiply everything by 6 and I get x equals 3 pi plus 6 n pi. So that
means that the vertical asymptotes are x equals 3 pi, 9 pi, 15 pi and so on. -3 pi, -9 pi, -15 pi and so on.
secant x intercepts vertical asymptotes | {"url":"https://www.brightstorm.com/math/precalculus/trigonometric-functions/asymptotes-of-secant-cosecant-and-cotangent-problem-1/","timestamp":"2014-04-20T13:48:44Z","content_type":null,"content_length":"69605","record_id":"<urn:uuid:d3280f6a-de38-4a77-a4a0-88c81a5fb1af>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
Viviescas Ramírez, Carlos Leonardo: Quantum theory of amplifying random media
Quantum theory of amplifying random media
Duisburg-Essen (2004), 124 S.
Dissertation / Fach: Physik
Fakultät für Physik
Fakultät für Physik » Theoretische Physik
A quantum theory of lasing in random media is presented. The theory constitutes a generalization of the standard laser theory, accounting for lasing in resonators with spectrally overlapping modes due to large outcoupling losses, and incorporating in a natural fashion the statistical properties of chaotic modes when apply to lasers in random media or inside chaotic resonators. We study the photocount statistics of the radiation emitted from a chaotic laser resonator in the regime of single-mode lasing. The random spatial variations of the resonator eigenfunctions are incorporated in the theory, and showed to lead to strong mode-to-mode uctuations of the laser emission. The distribution of the mean photocount over an ensemble of modes changes qualitatively at the lasing transition, and displays up to three peaks above the lasing threshold. We then address the quantization of the electromagnetic field in weakly confining resonators using Feshbach's projection technique. We consider both inhomogeneous dielectric resonators with a scalar dielectric constant epsilon(r) and cavities defined by mirrors of arbitrary shape. The field is quantized in terms of a set of resonator and bath modes. We rigorously show that the field Hamiltonian reduces to the system-and-bath Hamiltonian of quantum optics. The field dynamics is investigated using the inputoutput theory of Gardiner and Collet. In the case of strong coupling to the external radiation field we find spectrally overlapping resonator modes. The mode dynamics is coupled due to the damping and noise in icted by the external radiation field. We derived Langevin equations and a master equation for the resonator modes. For linear optical systems, including gain/loss contributions, it is shown that the field dynamics is described by the system S matrix. For wave chaotic resonator the dynamics is determined by a non-Hermitian random matrix. After including an amplifying medium, we use the open-resonator dynamics to construct a quantum theory for lasing in random media. We investigate the emission spectrum of lasers in cavities with overlapping modes operating in the single-mode regime. The noise properties of such lasers are seen to differ from traditional lasers due to the presence of excess noise. Our theory not only accounts for the Petermann linewidth enhancement, but predicts deviations of the laser line from a Lorentzian shape. To conclude, the emission spectrum of random lasers is discussed.
Dieser Eintrag ist freigegeben. | {"url":"http://duepublico.uni-duisburg-essen.de/servlets/DozBibEntryServlet?mode=show&id=993","timestamp":"2014-04-17T19:11:53Z","content_type":null,"content_length":"19785","record_id":"<urn:uuid:e9a650db-3fba-4ade-8674-fd124c8b214f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
32-XX Several complex variables and analytic spaces {For infinite-dimensional holomorphy, see 46G20, 58B12}
32Pxx Non-Archimedean analysis (should also be assigned at least one other classification number from Section 32 describing the type of problem)
32P05 Non-Archimedean analysis (should also be assigned at least one other classification number from Section 32 describing the type of problem)
32P99 None of the above, but in this section | {"url":"http://ams.org/mathscinet/msc/msc2010.html?t=32Pxx","timestamp":"2014-04-17T16:08:10Z","content_type":null,"content_length":"11988","record_id":"<urn:uuid:bea6e531-bfb5-490a-9794-680da891aa38>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the strict transform a finite morphism?
up vote 3 down vote favorite
Woe is me! I'm again resorting to this forum to ask a silly question.
Here is the example I had in mind: observe the (complex) curve $y^3=x^2(x-1)$. In attempt to normalize this curve, I've begun by blowing it up once at the origin. Of the two affines resulting from
the blow up, the $x=ys$ affine is the one that will meet the strict transform. Indeed the strict transform will be $\mathbb{C}[x,y,s]/x=ys,y=s^2(x-1)$. I've always assumed the natural morphism from
the strict transform to the original curve is a finite one (otherwise using blow-ups to desingularize would be an odd concept!). This would imply that the above ring is integral over $\mathbb{C}[x,y]
/y^3-x^2(x-1)$. But I've been staring at this, and staring at this, and by the life of me I can't come up with a monic polynomial that $s=\frac{x}{y}$ would satisfy over this ring.
Is the strict transform a finite morphism?
Really? This perplexes me. This curve has only one tangent at the origin, and it is x=0. Indeed, the other affine would give (say v=1/s) y(v^2)=x-1 - which has no points above x=y=0. – James D.
Taylor Aug 28 '10 at 23:48
You are right: I read your equation as being $y^2=...$! Sorry for the confusion. It is a sign that I should not try to say anything else! – damiano Aug 28 '10 at 23:56
Sure, strict transforms are always proper by construction, hence finite when quasi-finite as for curves. But your chart is integral over its image in the base curve $C$, not the entirety of $C$
3 necessarily (unless that is its image). Your error is that you have got to remove the image of the points outside of your chart, which amounts to removing the point $(1,0)$. This is the unique
point where $C$ meets $x=1$, so invert $x-1$. There's an integral relation $s^3 = x/(x-1)$ over $C[1/(x-1)]$. – BCnrd Aug 29 '10 at 2:27
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged algebraic-curves or ask your own question. | {"url":"http://mathoverflow.net/questions/37004/is-the-strict-transform-a-finite-morphism?answertab=active","timestamp":"2014-04-20T11:19:54Z","content_type":null,"content_length":"49876","record_id":"<urn:uuid:55b426ed-0151-4d0a-b5b9-e22d3dbbc64a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
with high-order transformations
« journal navigation
Designs for optical cloaking with high-order transformations
Optics Express, Vol. 16, Issue 8, pp. 5444-5452 (2008)
Recent advances in metamaterial research have provided us a blueprint for realistic cloaking capabilities, and it is crucial to develop practical designs to convert concepts into real-life devices.
We present two structures for optical cloaking based on high-order transformations for TM and TE polarizations respectively. These designs are possible for visible and infrared wavelengths. This
critical development builds upon our previous work on nonmagnetic cloak designs and high-order transformations.
© 2008 Optical Society of America
1. Introduction
Cloaking (or being invisible) is a longtime dream that may date back to the very beginning of human civilization. In the last few years, this dream moves one step closer to reality, thanks to various
schemes proposed to control and manipulate electromagnetic waves in unprecedented ways [
1. G. W. Milton and N. A. P. Nicorovici, “On the cloaking effects associated with anomalous localized resonance,” Proc. R. Soc. London, Ser. A 462, 3027–3059 (2006). [CrossRef]
]. From among these methods, the transformation approach, which generalized a similar idea on cloaking of thermal conductivity [
7. A. Greenleaf, M. Lassas, and G. Uhlmann, “Anisotropic conductivities that cannot be detected by EIT,” Physiol. Meas. 24, 413–419 (2003). [CrossRef] [PubMed]
8. Y. Benveniste and T. Miloh, “Neutral inhomogeneities in conduction phenomena,” J. Mech. Phys. Solids 47, 1873–1892 (1999). [CrossRef]
], has generated enormous interest [
9. A. Hendi, J. Henn, and U. Leonhardt, “Ambiguities in the scattering tomography for central potentials,” Phys. Rev. Lett. 97, 073902 (2006). [CrossRef] [PubMed]
], partially because of its similarity to the mythological version of cloak: a closed surface is created which renders arbitrary objects within its interior invisible to detection. The constitutive
parameters of the cloak are determined by the specific form of the spatial transformation. They are usually anisotropic with gradient requirements that are only possible using artificially engineered
structures, such as the previously demonstrated microwave cloak [
13. D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr, and D. R. Smith, “Metamaterial electromagnetic cloak at microwave frequencies,” Science 314, 977–980 (2006).
[CrossRef] [PubMed]
2. Material properties in cylindrical cloaks
For a cloak in the cylindrical geometry, a coordinate transformation function on
′) from (
′) to (
) is used to compress the region
into a concentric shell of
, and the permittivity and permeability tensors required for an exact cloak can be determined as [
15. W. Cai, U. K. Chettiar, A. V. Kildishev, V. M. Shalaev, and G. W. Milton, “Nonmagnetic cloak with minimized scattering,” Appl. Phys. Lett. 91, 111105 (2007). [CrossRef]
For the standard states of incident polarization, the requirement in Eq. (
) can be relaxed such that only three of the six components are relevant. For example, for TE (TM) polarization, only
) enter into Maxwell’s equations. Moreover, the parameters can be further simplified to form reduced parameters which are more realistic for practical applications. Since the trajectory of the waves
is determined by the cross product components of the
tensors instead of the two tensors individually, the cloaking performance is sustained as long as
) are kept the same as those determined by values in Eq. (
). This technique results in a specific set of reduced parameters which allow for a permeability gradient along only the radial direction for the TE mode [
13. D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr, and D. R. Smith, “Metamaterial electromagnetic cloak at microwave frequencies,” Science 314, 977–980 (2006).
[CrossRef] [PubMed]
and can be purely non-magnetic for the TM mode [
14. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Optical cloaking with metamaterials,” Nat. Photonics 1, 224–227 (2007). [CrossRef]
3. Optical cloak with high-order transformations I: TM mode
First we focus on the non-magnetic cloak in the TM mode with parameters given in Eq. (
). In this case, the design of cloak is essentially to produce the required gradient in
using readily available materials. Apparently, a cloak cannot consist of only a single-constituent material, because a spatial variation in material properties is critical to building a cloak. To
start the design, we first examine the overall flexibility we can achieve in the effective permittivity of a general two-phase composite medium. When an external field interacts with a composite
consisting of two elements with permittivity of
respectively, minimal screening occurs when all internal boundaries between the two constituents are parallel to the electric field, and maximal screening happens when all boundaries aligned
perpendicular to the field. These two extremes are possible in an alternating layered structure, provided that the thickness of each layer is much less than the wavelength of incidence [
17. D. E. Aspnes, “Optical-Properties of Thin-Films,” Thin Solid Films 89, 249–262 (1982). [CrossRef]
]. In this case the two extreme values of the effective permittivity can be approximated as
and 1-
denote the volume fractions of components 1 and 2, and the subscripts ‖ and ⊥ indicate the cases with electric field polarized parallel and perpendicular to the interfaces of the layers,
respectively. Such layered structures have been studied extensively in recent years for various purposes, especially in sub-diffraction imaging for both the near field and the far zone [
The two extrema in Eq. (
) are called the Wiener bounds to permittivity, which set the absolute bound on all possible values of the effective permittivity of a two-phase composite [
25. D. E. Aspnes, “Bounds on Allowed Values of the Effective Dielectric Function of 2-Component Composites at Finite Frequencies,” Phys. Rev. B 25, 1358–1361 (1982). [CrossRef]
]. In realistic composites, more strict limits, for example those from the spectral representation developed by Bergman and Milton [
26. D. J. Bergman, “Exactly Solvable Microscopic Geometries and Rigorous Bounds for the Complex Dielectric-Constant of a 2-Component Composite-Material,” Phys. Rev. Lett. 44, 1285–1287 (1980).
27. G. W. Milton, “Bounds on the Complex Dielectric-Constant of a Composite-Material,” Appl. Phys. Lett. 37, 300–302 (1980). [CrossRef]
], might apply in addition to the Wiener bounds, but Eq. (
) nonetheless provides a straightforward way to evaluate the accessible permittivity in a composite with given constituent materials. The Wiener bounds can be illustrated on a complex
-plane with the real and imaginary parts of
being the
axis, respectively. In this plane, the low-screening bound in Eq. (
) corresponds to a straight line between
, and the high-screening bound in Eq. (
) defines an arc which is part of the circle determined by the three points
and the origin.
The required material properties for the cloak in Eq. (
) indicates that, for a non-magnetic cylindrical cloak with any transformation function,
varies from 0 at the inner boundary of the cloak (
) to 1 at the outer surface (
), while
is a function of
with varying positive value, except for the linear transformation case where ∂
′ is a constant. Now we can evaluate the possibility of fulfilling the required parameters in Eq. (
) based on alternating metal-dielectric slices whose properties are estimated by Eq. (
). With phase 1 being a metal (
<0) and phase 2 representing a dielectric (
>0), the desired material properties of the cloak are only possible when the slices are within the
plane of the cylindrical coordinates. In this case
correspond to
ε [‖]
ε [⊥]
in Eq. (
), respectively. This scenario is illustrated in
Fig. 1
. The thick solid and dashed lines represent the two Wiener bounds
ε [‖]
) and
ε [⊥]
) respectively. The constituent materials used for the calculation are silver and silica at a green light wavelength of 532 nm. The pair of points on the bounds with the same filling fraction are
connected with a straight line for clarification purposes. When
changes between 0 and 1, the value of
varies accordingly as shown by the arrow between the two thin dashed lines. Therefore, the construction of a non-magnetic cloak requires that the relationship between the two quantities
ε [‖]
ε [⊥]
(as functions of
) within the range shown in
Fig. 1
fits the material properties given in Eq. (
) for a particular transformation function
′). Another attractive feature of the proposed scheme is the minimal loss factor. As shown in
Fig. 1
, the loss feature described by the imaginary part of the effective permittivity is on the order of 0.01, much smaller than that of a pure metal or any resonant metal-dielectric structures. A
schematic of the proposed structure consisting of interlaced metal and dielectric slices is illustrated in
Fig. 2
Mathematically, for a preset operational wavelength we seek a transformation together with the cylindrical shape factor a/b that fulfills the following equation:
There does not exist an exact analytical solution to the equations above. However, we may use polynomial functions to approach a possible solution. More specifically, a quadratic function in the
following form
with |
b ^2
can serve as a good candidate for an approximate solution to Eq. (
Such a transformation automatically satisfies the boundary and monotonicity requirements in Eq. (
), and it is possible to fulfill Eq. (
) with minimal deviation when a proper shape factor is chosen. In
Table 1
we provide transformations, materials and geometries for non-magnetic cloaks designed for several important frequency lines across the visible range including 488 nm (Ar-ion laser), 532 nm (Nd:YAG
laser), 589.3 nm (sodium D-line), and 632.8 nm (He-Ne laser). In the calculations, the permittivity of silver is taken from well accepted experimental data [
28. P. B. Johnson and R. W. Christy, “Optical-Constants of Noble-Metals,” Phys. Rev. B 6, 4370–4379 (1972). [CrossRef]
], and the dielectric constant of silica is from the tabulated data in [
]. Note that the same design and transformation work for all similar cylindrical cloaks with the same shape factor
. When the approximate quadratic function is fixed for a given wavelength, the filling fraction function
) is determined by the following equation:
As an example, in
Fig. 3
we show the anisotropic material properties of a non-magnetic cloak corresponding to the
=532 nm case in
Table 1
. Our calculation shows that with the approximate quadratic transformation, the effective parameters
obtained with the Wiener bounds in Eq. (
) fit with the exact parameters required for this transformation by Eq. (
) remarkably well, with the average deviation of less than 0.5%.
Compared to the previously designed cloak in [
14. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Optical cloaking with metamaterials,” Nat. Photonics 1, 224–227 (2007). [CrossRef]
] which requires thin metal needles embedded in a dielectric host following a pre-designed distribution, the fabrication feasibility of the newly proposed design is obvious because such vertical
wall-like structures are compatible with mature fabrication techniques like direct deposition and direct etching.
4. Optical cloak with high-order transformations II: TE mode
In the next part of the paper, we focus on constructing a cylindrical cloak for TE mode working within the mid-infrared frequency range with a gradient in the magnetic permeability, as requested by
Eq. (
). An electromagnetic cloak operating at mid-infrared is of great military and civilian interests, because this wavelength range corresponds to the thermal radiation band from human bodies. For this
purpose there could be several different approaches which all involve silicon carbide, an important media for metamaterial research in mid-infrared. SiC is a polaritonic material with its phonon
resonance band falling into the spectral range centered at around 12.5 µm (800 cm
), which introduces a sharp Lorentz behavior in its electric permittivity. The dielectric function of SiC at mid-infrared is well described with the following model [
30. W. G. Spitzer, D. Kleinman, and D. Walsh, “Infrared Properties of Hexagonal Silicon Carbide,” Phys. Rev. 113, 127–132 (1959). [CrossRef]
31. D. Korobkin, Y. Urzhumov, and G. Shvets, “Enhanced near-field resolution in midinfrared using metamaterials,” J. Opt. Soc. Am. B 23, 468–478 (2006). [CrossRef]
∊ [∞]
cm ^-1
cm ^-1
cm ^-1
. On the high-frequency side, the dielectric function is strongly negative, which makes its optical response similar to that of metals and has been utilized in applications like a mid-infrared
superlens [
31. D. Korobkin, Y. Urzhumov, and G. Shvets, “Enhanced near-field resolution in midinfrared using metamaterials,” J. Opt. Soc. Am. B 23, 468–478 (2006). [CrossRef]
32. T. Taubner, D. Korobkin, Y. Urzhumov, G. Shvets, and R. Hillenbrand, “Near-field microscopy through a SiC superlens,” Science 313, 1595–1595 (2006). [CrossRef] [PubMed]
]. At frequencies lower than the resonance frequency, the permittivity can be strongly positive, which makes SiC an attractive candidate for producing high-permittivity Mie resonators at the
mid-infrared wavelength range [
33. J. A. Schuller, R. Zia, T. Taubner, and M. L. Brongersma, “Dielectric metamaterials based on electric and magnetic resonances of silicon carbide particles,” Phys. Rev. Lett. 99, 107401 (2007).
[CrossRef] [PubMed]
SiC structures can be used to build mid-infrared cloaking devices in different styles. For example, we may consider using the needle-based structure detailed in [
14. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Optical cloaking with metamaterials,” Nat. Photonics 1, 224–227 (2007). [CrossRef]
] for the TM mode, where needles made of a low-loss negative-
polaritonic material like SiC or TiO
are embedded in an IR transparent dielectric like ZnS. The non-magnetic cloak using alternating slices structure as proposed in this paper provides a more realistic design. With SiC as the negative-
material and BaF
as the positive-
slices with material properties given in [
], we can find the appropriate transformation function and shape factor that fulfills the material property requirements at a preset wavelength. The result for
=11.3 µm (CO
laser range) is shown in the last row of
Table 1
Now we consider a cylindrical cloak for the TE mode with the required material properties given in Eq. (
), which indicates that a gradient in the magnetic permeability along the radial direction is necessary. To be more specific,
varies from 0 at the inner boundary (
) to [∂
at the outer surface (
), while the required
changes accordingly following the function [∂
. The magnetic requirement may be accomplished using metal elements like split-ring resonators, coupled nanostrips or nanowires. However, such plasmonic structures inevitably exhibit a high loss,
which is detrimental to the cloaking performance. A SiC based structure provides an all-dielectric route to a magnetic cloak for the TE mode due to the Mie resonance in a subwavelength SiC unit.
Meta-magnetic responses and a negative index of refraction in structures made from high-permittivity materials have been studied extensively in recently years [
33. J. A. Schuller, R. Zia, T. Taubner, and M. L. Brongersma, “Dielectric metamaterials based on electric and magnetic resonances of silicon carbide particles,” Phys. Rev. Lett. 99, 107401 (2007).
[CrossRef] [PubMed]
]. Magnetic resonance in a rod-shaped high-permittivity particle can be excited by different polarizations of the external field with respect to the rod axis. When a strong magnetic resonance and an
effective permeability substantially distinct from 1 are desired, the rod should to be aligned parallel to the electric field to assure the maximum possible interaction between the rod and the
external field. In our conceptual design of a cylindrical cloak for the TE mode, the desired radial permeability has values of less than (but close to) 1, and resonance behavior in the effective
should be avoided for a minimal loss. Therefore, with the electrical field polarized along the
axis of the cylindrical system, we arrange the SiC rods along the
axis and form an array in the
plane. The proposed structure is depicted in
Fig. 4
, where arrays of SiC wires along the radial direction are placed between the two surfaces of the cylindrical cloak.
The effective permeability of the system can be estimated as follows using the approach in [
34. S. O’Brien and J. B. Pendry, “Photonic band-gap effects and magnetic activity in dielectric composites,” J. Phys. Condens. Matter. 14, 4035–4044 (2002). [CrossRef]
where h and φ represent the periodicities along the z and θ directions respectively, t denotes the radius of each wire, n=εSiC is the refractive index, k=2π/λ[0] denotes the wave vector, L1=hrφπ and
L2=(h+rφ)2 represent the two effective unit sizes based on area and perimeter estimations respectively.
a [0]
nJ [0]
J [1]
J [0]
J [1]
nJ [0]
H ^(1) [1]
H ^(1) [0]
J [1]
)] and
c [0]
J [0]
a [0] H ^(1) [0]
J [0]
) are the scattering coefficients, and the Bessel functions in the equation follow the standard notations. The permittivity along the
direction is well approximated using Maxwell-Garnett method [
34. S. O’Brien and J. B. Pendry, “Photonic band-gap effects and magnetic activity in dielectric composites,” J. Phys. Condens. Matter. 14, 4035–4044 (2002). [CrossRef]
]. In the design we choose the appropriate transformation, geometry and operational wavelength such that the calculated effective parameters
follow what is required by Eq. (
) with tolerable deviations. In
Fig. 5
we plot the required and the calculated
for a TE cloak at
=13.5 µm. The parameters used for this calculation are
=15 µm,
=1.2 µm,
=2.8 µm,
=10.6°, and the
coefficient in the quadratic transformation is 0.5
b ^2
. We observe very good agreement between the required values and the calculated ones based on analytical formulae, and the imaginary part in the effective permeability is less than 0.06. This
computation verifies the feasibility of the proposed cloaking system based on SiC wire arrays for the TE polarization.
5. Conclusions
In summary, using high-order transformations we proposed two novel designs of optical cloaks for TM and TE polarizations. This critical development builds upon our previous work on the design of a
non-magnetic cloak and the suggestion of using high-order transformations to produce more flexible cloaking systems. We should note that the effective material properties in the two designs presented
in this paper are evaluated using simple analytical models, and the bulk dielectric functions of the constituent materials are used in all calculations. In the real-life construction of such cloaking
systems, there would be expected deviations from the presented parameters in terms of particular values in the geometry and transformation functions. Nevertheless, this work provides realistic
structures and models which lead to a practical path towards realizing actual cloaking devices at optical wavelengths.
References and links
1. G. W. Milton and N. A. P. Nicorovici, “On the cloaking effects associated with anomalous localized resonance,” Proc. R. Soc. London, Ser. A 462, 3027–3059 (2006). [CrossRef]
2. N. A. P. Nicorovici, G. W. Milton, R. C. McPhedran, and L. C. Botten, “Quasistatic cloaking of two-dimensional polarizable discrete systems by anomalous resonance,” Opt. Express 15, 6314–6323
(2007). [CrossRef] [PubMed]
3. A. Alu and N. Engheta, “Achieving transparency with plasmonic and metamaterial coatings,” Phys. Rev. B 72, 016623 (2005). [CrossRef]
4. M. G. Silveirinha, A. Alu, and N. Engheta, “Parallel-plate metamaterials for cloaking structures,” Phys. Rev. B 75, 036603 (2007). [CrossRef]
5. D. A. B. Miller, “On perfect cloaking,” Opt. Express 14, 12457–12466 (2006). [CrossRef] [PubMed]
6. F. J. Garcia de Abajo, G. Gomez-Santos, L. A. Blanco, A. G. Borisov, and S. V. Shabanov, “Tunneling mechanism of light transmission through metallic films,” Phys. Rev. Lett. 95, 067403 (2005).
[CrossRef] [PubMed]
7. A. Greenleaf, M. Lassas, and G. Uhlmann, “Anisotropic conductivities that cannot be detected by EIT,” Physiol. Meas. 24, 413–419 (2003). [CrossRef] [PubMed]
8. Y. Benveniste and T. Miloh, “Neutral inhomogeneities in conduction phenomena,” J. Mech. Phys. Solids 47, 1873–1892 (1999). [CrossRef]
9. A. Hendi, J. Henn, and U. Leonhardt, “Ambiguities in the scattering tomography for central potentials,” Phys. Rev. Lett. 97, 073902 (2006). [CrossRef] [PubMed]
10. J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling electromagnetic fields,” Science 312, 1780–1782 (2006). [CrossRef] [PubMed]
11. U. Leonhardt, “Optical conformal mapping,” Science 312, 1777–1780 (2006). [CrossRef] [PubMed]
12. D. Schurig, J. B. Pendry, and D. R. Smith, “Calculation of material properties and ray tracing in transformation media,” Opt. Express 14, 9794–9804 (2006). [CrossRef] [PubMed]
13. D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr, and D. R. Smith, “Metamaterial electromagnetic cloak at microwave frequencies,” Science 314, 977–980 (2006).
[CrossRef] [PubMed]
14. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Optical cloaking with metamaterials,” Nat. Photonics 1, 224–227 (2007). [CrossRef]
15. W. Cai, U. K. Chettiar, A. V. Kildishev, V. M. Shalaev, and G. W. Milton, “Nonmagnetic cloak with minimized scattering,” Appl. Phys. Lett. 91, 111105 (2007). [CrossRef]
16. R. Weder, “A rigorous analysis of high-order electromagnetic invisibility cloaks,” J. Phys. A: Math. Theor. 41, 065207 (2008). [CrossRef]
17. D. E. Aspnes, “Optical-Properties of Thin-Films,” Thin Solid Films 89, 249–262 (1982). [CrossRef]
18. S. A. Ramakrishna, J. B. Pendry, M. C. K. Wiltshire, and W. J. Stewart, “Imaging the near field,” J. Mod. Opt. 50, 1419–1430 (2003).
19. D. Schurig and D. R. Smith, “Sub-diffraction imaging with compensating bilayers,” New J. Phys. 7, 162 (2005). [CrossRef]
20. P. A. Belov and Y. Hao, “Subwavelength imaging at optical frequencies using a transmission device formed by a periodic layered metal-dielectric structure operating in the canalization regime,”
Phys. Rev. B 73, 113110 (2006). [CrossRef]
21. S. M. Feng and J. M. Elson, “Diffraction-suppressed high-resolution imaging through metallodielectric nanofilms,” Opt. Express 14, 216–221 (2006). [CrossRef] [PubMed]
22. Z. Jacob, L. V. Alekseyev, and E. Narimanov, “Optical hyperlens: Far-field imaging beyond the diffraction limit,” Opt. Express 14, 8247–8256 (2006). [CrossRef] [PubMed]
23. A. Salandrino and N. Engheta, “Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations,” Phys. Rev. B 74, 075103 (2006). [CrossRef]
24. O. Wiener, “Die Theorie des Mischkorpers fur das Feld der stationaren Stromung,” Abh. Math.-Phys. Klasse Koniglich Sachsischen Des. Wiss. 32, 509–604 (1912).
25. D. E. Aspnes, “Bounds on Allowed Values of the Effective Dielectric Function of 2-Component Composites at Finite Frequencies,” Phys. Rev. B 25, 1358–1361 (1982). [CrossRef]
26. D. J. Bergman, “Exactly Solvable Microscopic Geometries and Rigorous Bounds for the Complex Dielectric-Constant of a 2-Component Composite-Material,” Phys. Rev. Lett. 44, 1285–1287 (1980).
27. G. W. Milton, “Bounds on the Complex Dielectric-Constant of a Composite-Material,” Appl. Phys. Lett. 37, 300–302 (1980). [CrossRef]
28. P. B. Johnson and R. W. Christy, “Optical-Constants of Noble-Metals,” Phys. Rev. B 6, 4370–4379 (1972). [CrossRef]
29. E. D. Palik, Handbook of Optical Constants of Solids (Academic Press, New York, 1997).
30. W. G. Spitzer, D. Kleinman, and D. Walsh, “Infrared Properties of Hexagonal Silicon Carbide,” Phys. Rev. 113, 127–132 (1959). [CrossRef]
31. D. Korobkin, Y. Urzhumov, and G. Shvets, “Enhanced near-field resolution in midinfrared using metamaterials,” J. Opt. Soc. Am. B 23, 468–478 (2006). [CrossRef]
32. T. Taubner, D. Korobkin, Y. Urzhumov, G. Shvets, and R. Hillenbrand, “Near-field microscopy through a SiC superlens,” Science 313, 1595–1595 (2006). [CrossRef] [PubMed]
33. J. A. Schuller, R. Zia, T. Taubner, and M. L. Brongersma, “Dielectric metamaterials based on electric and magnetic resonances of silicon carbide particles,” Phys. Rev. Lett. 99, 107401 (2007).
[CrossRef] [PubMed]
34. S. O’Brien and J. B. Pendry, “Photonic band-gap effects and magnetic activity in dielectric composites,” J. Phys. Condens. Matter. 14, 4035–4044 (2002). [CrossRef]
35. K. C. Huang, M. L. Povinelli, and J. D. Joannopoulos, “Negative effective permeability in polaritonic photonic crystals,” Appl. Phys. Lett. 85, 543–545 (2004). [CrossRef]
36. M. S. Wheeler, J. S. Aitchison, and M. Mojahedi, “Three-dimensional array of dielectric spheres with an isotropic negative permeability at infrared frequencies,” Phys. Rev. B 72, 193103 (2005).
37. L. Peng, L. X. Ran, H. S. Chen, H. F. Zhang, J. A. Kong, and T. M. Grzegorczyk, “Experimental observation of left-handed behavior in an array of standard dielectric resonators,” Phys. Rev. Lett.
98, 157403 (2007). [CrossRef] [PubMed]
OCIS Codes
(160.4760) Materials : Optical properties
(160.3918) Materials : Metamaterials
(230.3205) Optical devices : Invisibility cloaks
ToC Category:
Original Manuscript: February 11, 2008
Revised Manuscript: March 31, 2008
Manuscript Accepted: April 1, 2008
Published: April 3, 2008
Wenshan Cai, Uday K. Chettiar, Alexander V. Kildishev, and Vladimir M. Shalaev, "Designs for optical cloaking with high-order transformations," Opt. Express 16, 5444-5452 (2008)
Sort: Year | Journal | Reset
1. G. W. Milton, and N. A. P. Nicorovici, "On the cloaking effects associated with anomalous localized resonance," Proc. R. Soc. London, Ser. A 462, 3027-3059 (2006). [CrossRef]
2. N. A. P. Nicorovici, G. W. Milton, R. C. McPhedran, and L. C. Botten, "Quasistatic cloaking of two-dimensional polarizable discrete systems by anomalous resonance," Opt. Express 15, 6314-6323
(2007). [CrossRef] [PubMed]
3. A. Alu and N. Engheta, "Achieving transparency with plasmonic and metamaterial coatings," Phys. Rev. B 72, 016623 (2005). [CrossRef]
4. M. G. Silveirinha, A. Alu, and N. Engheta, "Parallel-plate metamaterials for cloaking structures," Phys. Rev. B 75, 036603 (2007). [CrossRef]
5. F. J. Garcia de Abajo, G. Gomez-Santos, L. A. Blanco, A. G. Borisov, and S. V. Shabanov, "Tunneling mechanism of light transmission through metallic films," Phys. Rev. Lett. 95, 067403 (2005).
[CrossRef] [PubMed]
6. A. Greenleaf, M. Lassas, and G. Uhlmann, "Anisotropic conductivities that cannot be detected by EIT," Physiol. Meas. 24, 413-419 (2003). [CrossRef] [PubMed]
7. Y. Benveniste and T. Miloh, "Neutral inhomogeneities in conduction phenomena," J. Mech. Phys. Solids 47, 1873-1892 (1999). [CrossRef]
8. A. Hendi, J. Henn, and U. Leonhardt, "Ambiguities in the scattering tomography for central potentials," Phys. Rev. Lett. 97, 073902 (2006). [CrossRef] [PubMed]
9. J. B. Pendry, D. Schurig, and D. R. Smith, "Controlling electromagnetic fields," Science 312, 1780-1782 (2006). [CrossRef] [PubMed]
10. U. Leonhardt, "Optical conformal mapping," Science 312, 1777-1780 (2006). [CrossRef] [PubMed]
11. D. Schurig, J. B. Pendry, and D. R. Smith, "Calculation of material properties and ray tracing in transformation media," Opt. Express 14, 9794-9804 (2006). [CrossRef] [PubMed]
12. D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr, and D. R. Smith, "Metamaterial electromagnetic cloak at microwave frequencies," Science 314, 977-980 (2006).
[CrossRef] [PubMed]
13. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, "Optical cloaking with metamaterials," Nat. Photonics 1, 224-227 (2007). [CrossRef]
14. W. Cai, U. K. Chettiar, A. V. Kildishev, V. M. Shalaev, and G. W. Milton, "Nonmagnetic cloak with minimized scattering," Appl. Phys. Lett. 91, 111105 (2007). [CrossRef]
15. R. Weder, "A rigorous analysis of high-order electromagnetic invisibility cloaks," J. Phys. A: Math. Theor. 41, 065207 (2008). [CrossRef]
16. D. E. Aspnes, "Optical-Properties of Thin-Films," Thin Solid Films 89, 249-262 (1982). [CrossRef]
17. S. A. Ramakrishna, J. B. Pendry, M. C. K. Wiltshire, and W. J. Stewart, "Imaging the near field," J. Mod. Opt. 50, 1419-1430 (2003).
18. D. Schurig, and D. R. Smith, "Sub-diffraction imaging with compensating bilayers," New J. Phys. 7, 162 (2005). [CrossRef]
19. P. A. Belov, and Y. Hao, "Subwavelength imaging at optical frequencies using a transmission device formed by a periodic layered metal-dielectric structure operating in the canalization regime,"
Phys. Rev. B 73, 113110 (2006). [CrossRef]
20. S. M. Feng, and J. M. Elson, "Diffraction-suppressed high-resolution imaging through metallodielectric nanofilms," Opt. Express 14, 216-221 (2006). [CrossRef] [PubMed]
21. Z. Jacob, L. V. Alekseyev, and E. Narimanov, "Optical hyperlens: Far-field imaging beyond the diffraction limit," Opt. Express 14, 8247-8256 (2006). [CrossRef] [PubMed]
22. A. Salandrino, and N. Engheta, "Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations," Phys. Rev. B 74, 075103 (2006). [CrossRef]
23. O. Wiener, "Die Theorie des Mischkorpers fur das Feld der stationaren Stromung," Abh. Math.-Phys. Klasse Koniglich Sachsischen Des. Wiss. 32, 509-604 (1912).
24. D. E. Aspnes, "Bounds on Allowed Values of the Effective Dielectric Function of 2-Component Composites at Finite Frequencies," Phys. Rev. B 25, 1358-1361 (1982). [CrossRef]
25. D. J. Bergman, "Exactly Solvable Microscopic Geometries and Rigorous Bounds for the Complex Dielectric-Constant of a 2-Component Composite-Material," Phys. Rev. Lett. 44, 1285-1287 (1980).
26. G. W. Milton, "Bounds on the Complex Dielectric-Constant of a Composite-Material," Appl. Phys. Lett. 37, 300-302 (1980). [CrossRef]
27. P. B. Johnson, and R. W. Christy, "Optical-Constants of Noble-Metals," Phys. Rev. B 6, 4370-4379 (1972). [CrossRef]
28. E. D. Palik, Handbook of Optical Constants of Solids (Academic Press, New York, 1997).
29. W. G. Spitzer, D. Kleinman, and D. Walsh, "Infrared Properties of Hexagonal Silicon Carbide," Phys. Rev. 113, 127-132 (1959). [CrossRef]
30. D. Korobkin, Y. Urzhumov, and G. Shvets, "Enhanced near-field resolution in midinfrared using metamaterials," J. Opt. Soc. Am. B 23, 468-478 (2006). [CrossRef]
31. T. Taubner, D. Korobkin, Y. Urzhumov, G. Shvets, and R. Hillenbrand, "Near-field microscopy through a SiC superlens," Science 313, 1595-1595 (2006). [CrossRef] [PubMed]
32. J. A. Schuller, R. Zia, T. Taubner, and M. L. Brongersma, "Dielectric metamaterials based on electric and magnetic resonances of silicon carbide particles," Phys. Rev. Lett. 99, 107401 (2007).
[CrossRef] [PubMed]
33. S. O'Brien and J. B. Pendry, "Photonic band-gap effects and magnetic activity in dielectric composites," J. Phys. Condens. Matter. 14, 4035-4044 (2002). [CrossRef]
34. K. C. Huang, M. L. Povinelli, and J. D. Joannopoulos, "Negative effective permeability in polaritonic photonic crystals," Appl. Phys. Lett. 85, 543-545 (2004). [CrossRef]
35. M. S. Wheeler, J. S. Aitchison, and M. Mojahedi, "Three-dimensional array of dielectric spheres with an isotropic negative permeability at infrared frequencies," Phys. Rev. B 72, 193103 (2005).
36. L. Peng, L. X. Ran, H. S. Chen, H. F. Zhang, J. A. Kong, and T. M. Grzegorczyk, "Experimental observation of left-handed behavior in an array of standard dielectric resonators," Phys. Rev.
Lett. 98, 157403 (2007). [CrossRef] [PubMed]
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-16-8-5444&id=157066","timestamp":"2014-04-23T15:59:26Z","content_type":null,"content_length":"233478","record_id":"<urn:uuid:9b15c97d-0363-40c4-a24b-6b800175feb8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Falling Bullets and the Density of Air
Matt discusses a Mythbusters episode where they drop a bullet and shoot one horizontally from a gun, and see which falls first. I want to take this up where he left it off
They find that to within their experimental error, the two bullets hit the ground at the same time. It doesn’t matter how fast you move in the x-direction; gravity still gives you the same
acceleration in the y-direction.
It turns out that with standard assumptions about air resistance, this is not true. What I hope to do here is, by assuming that bullets falling and shot horizontally fall at very nearly the same
rate, put a bound on the density of air.
First we need a model for air resistance. If the bullet is a non-rotating sphere and air is non-viscous, then the bullet slows down because air smashes into it. We can imagine the bullet leaving a
cylindrical trail behind it as it flies. Let’s say that as the bullet flies through its path, all the air it encounters gets brought from rest up to some set fraction $x$ of the bullet’s speed, and
that $x$ does not depend on the bullet’s speed $v$.
Then in one incremental unit of time $\Delta t$, the bullet intersects a mass of air that is the density of air times the volume of its path over that time, and the volume of the path is given by the
velocity of the bullet times is cross-sectional area. This is $\rho \pi r^2 v dt$ grams of air, where $\rho$ is the density of air, $v$ is the speed of the bullet, and $r$ is its radius.
The momentum imparted to the bullet is the same as that imparted to the air. The momentum imparted to the air is the mass of air encountered times the speed it’s being sped up to. This is $(\rho \pi
r^2 v dt) x*v = dp$.
The force on the bullet is the time derivative of its momentum, so $dp/dt = F = x \rho \pi r^2 v^2$. For the time being, set $k = x \rho \pi r^2$ so the force on the bullet is just $kv^2$.
Now let’s make some simplifying assumptions. We imagine that air resistance plays a minor role, because otherwise the Mythbusters would not have found so close to a tie as they did. This means the
bullets have very nearly the same y-height at any time, and that it’s the same as the normal, physics 1a parabola falling under gravity. Let the bullet fired out of the gun have the same horizontal
velocity throughout its flight. Also assume the bullet goes much faster in the horizontal than in the vertical direction. Using these assumptions, we’ll try to find the relative acceleration of the
two bullets, rather than finding both of their paths and then subtracting.
For the bullet falling straight down, all motion is in the y-direction, and the upward force is $kv_y^2$. For the bullet fired out of the gun, the total force is $k(v_x^2 + v_y^2)$, but only part of
this is in the y-direction, so the force in the y-direction is $k v_y\sqrt{v_x^2 + v_y^2}$. That can be simplified to $k v_y*v_x$ because velocity in the y-direction is small.
Because air resistance plays a minor role, assume the bullets fall for almost the same amount of time, $t_f = \sqrt{\frac{2g}{h}}$. Over that time, there is a difference in the force on the bullets
given by $F_{shot} - F_{drop} = k(v_xv_y - v_y^2) = kv_y(v_x - v_y) = (kv_x)v_y$. Notice that this is equivalent to assuming the dropped bullet doesn’t get slowed down at all. The shot bullet
experiences so much more air resistance that we are essentially assuming only air resistance on the shot bullet matters.
The bullets are in free fall, so $v_y = gt$ and the total distance between the two bullets when they fall comes from integrating their relative acceleration once for their relative velocity, and
again for their relative displacement. When the mass of the ball is $m$, we have
$v_{rel}(t) = \int_0^t accel_{rel}(t')dt' = \int_0^t kv_x v_y(t')/m dt' = \frac{k v_x}{m} \int_0^t gt' dt' = \frac{1}{2m}kv_x g t^2$
$y_{rel}(t) = \int_0^t v_{rel}(t') = \frac{1}{6m} k v_x g t^3$.
Plugging in $t_f$, the falling time, for $t$ gives the distance between the bullets as they land. Dividing by their speed at that time $gt_f$, gives the difference in their falling times. This comes
$\Delta t = \frac{x \rho \pi r^2 v_x h}{3mg}$
To check, the dimensions work out to time. The difference increases with the density of air, with the radius of the ball (for fixed mass), with the horizontal firing speed, with the height dropped,
and with the $x$ factor of air resistance. It decreases with increasing gravity and mass of ball (for fixed radius). All that seems plausible enough.
We can solve this for $\rho$, the density of air. Since $\pi = 3$ I can cancel those.
$\rho = \frac{\Delta t m g}{x r^2 v_x h}$.
Let’s make some guesses. They fired the bullet from about $1m$ off the floor. It traveled some $300m$ down the hall in the $1/3 s$ it had to fall, so $v_x = 1000m/s$. $g = 10 m/s^2$, $m = 100g$ and
$r = .01m$. $x = .5$.
That gives
$\rho = \Delta t \frac{20 kg}{s m^3}$.
The Mythbusters think $\Delta t < .1s$ (they said 39 milliseconds), which implies
$\rho < \frac{2 kg}{ m^3}$.
And it turns out the density of air is indeed less than this – about 1 kg/m^3.
I should admit, though, that the first time I crunched through these numbers, I got that the density of air should be less than $0.1 kg/m^3$. That was plugging in $5g$ for the bullet weight, thinking
of a $1cm^3$ bullet about as dense as a typical rock. But the bullet I hypothetically used had radius 1cm, not diameter, and also bullets are made of metals that are pretty dense. I googled "bullet
weight" and learned that google automatically interprets that to mean the weight of rugby player Tom James, nicknamed "The Bullet". (His weight was 15 st 8 lbs).
Incidentally, I think my calculation worked by accident. The aerodynamics of a bullet are probably much more complicated than I've assumed here, since the bullet spins and probably entrains air and
makes eddy currents and does all kinds of ungodly things. $v_x$ isn't constant, but according to my model it should drop by less than $50m/s$ in the time it takes the bullet to fall. I guessed kind
of wildly on things like the height of the drop and the factor by which air is sped up when the ball passes through. Also, the experiment has error in terms of simultaneous dropping and how level the
gun is. But things seem to have worked out to order of magnitude.
Ken Clark Says:
October 26, 2009 at 2:44 pm
three matters, one, the bullet is spinning on its axis of travel at about 1400 rpm, not sure that makes much difference, but the big one is the flow over the bullet is supersonic, also, the bullets
starting place while on an apparent plane with the landing location, was actually a little ways around a sphere, introducing even more error ;)
Search Job Says:
January 25, 2010 at 3:38 pm
I must say this is a great article i enjoyed reading it keep the good work | {"url":"http://arcsecond.wordpress.com/2009/10/19/falling-bullets-and-the-density-of-air/","timestamp":"2014-04-20T18:35:06Z","content_type":null,"content_length":"61378","record_id":"<urn:uuid:b1d06744-2d56-484b-abe9-3d957f3eee83>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of closed term
, and in other disciplines involving
formal languages
, including
mathematical logic
computer science
, a
free variable
is a
that specifies places in an
may take place. The idea is related to a
that will later be replaced by some
literal string
), or a
wildcard character
that stands for an unspecified symbol.
The variable x becomes a bound variable, for example, when we write
'For all x, (x + 1)^2 = x^2 + 2x + 1.'
'There exists x such that x^2 = 2.'
In either of these propositions, it does not matter logically whether we use x or some other letter. However, it could be confusing to use the same letter again elsewhere in some compound proposition
. That is, free variables become bound, and then in a sense retire from further work supporting the formation of formulae.
In computer programming, a free variable is a variable referred to in a function that is not a local variable or an argument of that function.
The term "dummy variable" is also sometimes used for a bound variable (more often in general mathematics than in computer science), but that creates an ambiguity with the definition of dummy
variables in regression analysis.
Before stating a precise definition of
free variable
bound variable
, we present some examples that perhaps make these two concepts clearer than the definition would:
In the expression
$sum_\left\{k=1\right\}^\left\{10\right\} f\left(k,n\right),$
n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called k on which it could depend.
In the expression
$int_0^infty x^\left\{y-1\right\} e^\left\{-x\right\},dx,$
y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.
In the expression
$lim_\left\{hrightarrow 0\right\}frac\left\{f\left(x+h\right)-f\left(x\right)\right\}\left\{h\right\},$
x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called h on which it could depend.
In the expression
$forall x exists y varphi\left(x,y,z\right),$
z is a free variable and x and y are bound variables; consequently the logical value of this expression depends on the value of z, but there is nothing called x or y on which it could depend.
Variable-binding operators
The following
$sum_\left\{xin S\right\}$
quadquad prod_{xin S} quadquad int_0^inftycdots,dx quadquad lim_{xto 0} quadquad forall x quadquad exists x quadquad psi x
are variable-binding operators. Each of them binds the variable x.
Formal explanation
Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science but in all cases they are purely
properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a
whose leaf nodes are variables, constants, function constants or predicate constants and whose non-leaf nodes are logical operators. Variable-binding operators are
logical operators
that occur in almost every formal language. Indeed languages which do not have them are either extremely inexpressive or extremely difficult to use. A binding operator Q takes two arguments: a
and an expression
, and when applied to its arguments produces a new expression Q(
). The meaning of binding operators is supplied by the
of the language and does not concern us here.
$forall x, \left(exists y, A\left(x\right) vee B\left(z\right)\right)$
Variable binding relates three things: a variable
, a location
for that variable in an expression and a non-leaf node
of the form Q(
). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node
To give an example from mathematics, consider an expression which defines a function
$\left(x_1, ldots , x_n\right) mapsto operatorname\left\{t\right\}$
where t is an expression. t may contain some, all or none of the x[1], ..., x[n] and it may contain other variables. In this case we say that function definition binds the variables x[1], ..., x[n].
In the lambda calculus, x is a bound variable in the term M = λ x . T, and a free variable of T. We say x is bound in M and free in T. If T contains a subterm λ x . U then x is rebound in this term.
This nested, inner binding of x is said to "shadow" the outer binding. Occurrences of x in U are free occurrences of the new x.
Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses.
Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially.
A closed term is one containing no free variables.
See also
A small part of this article was originally based on material from the Free On-line Dictionary of Computing and is used with permission under the GFDL. Most of what now appears here is the result of
later editing. | {"url":"http://www.reference.com/browse/closed+term","timestamp":"2014-04-20T07:01:26Z","content_type":null,"content_length":"86287","record_id":"<urn:uuid:7cd8438a-f45b-4f7e-b8b3-b5af8c931b33>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
he.lp please Jerome delivers newspapers. He earns $5 a week, plus $.20 for each paper he delivers. How many papers must he deliver to earn $25 a week?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Let the number of papers be x. The money he must earn by delivering papers=25-5=20 The amount he earns from delivering a paper=0.20 0.20x=20 Solve for x. Can u do?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50632d0be4b0da5168bd9b00","timestamp":"2014-04-18T21:17:29Z","content_type":null,"content_length":"27782","record_id":"<urn:uuid:af46bc39-fad3-40a3-a490-7ed6fd51d930>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Inclusive and exclusive definitions... again!
Replies: 8 Last Post: Apr 1, 2011 3:04 PM
Messages: [ Previous | Next ]
Re: Inclusive and exclusive definitions... again!
Posted: Mar 8, 2010 8:18 PM
*Due to trouble sending this dialogue around the email list, I'm posting it here in the Math Forum board*
From Walter Whiteley:
An interesting chart - and a topic worth continuing conversations.
I have a alternatives to how the classification is done - and therefore what is worth naming, and how the names are done.
Two perspectives lead to some different classes:
(a) if we classify quadrilaterals by symmetries, then some distinctions don't matter so much. On the the other hand, a kite (with a mirror through two vertices) can be non-convex. By the way, in
this classification, parallelogram is the class with half-turn symmetry.
(b) If we think about classifying on the sphere - where there is duality between angles and lengths, then some of the alternatives you have get 'paired up'. Interestingly, these pairings carry on
into the plane under polarity about a circle - between shapes with four vertices on the circle, and shapes with four edges tangent to the circle.
One version of this is linked at the Geometer Sketchpad Users Group site:
Note that (a) and even (b) actually work well with the names for triangles, and we don't really try to capture the comparable analysis for 5 or more sides. Also, in 3-space, with skew
quadrilaterals, there is a further set of connections. In the end - 3-D reasoning is a key goal, so I am happy to do a bit less in the plane if the larger vision opens up (see the link above).
These perspectives do come from some types of reasoning one wants to do - and I think naming is best developed to help cue some reasoning / connections etc. So classifying parallelograms by
half-turn symmetry, cues us to the fact that most proofs for parallelograms implicitly use this property - or would be easier if we do use this property. For example, when the proof uses a diagonal
and that cites ' congruent triangles' - the congruence actually is a half-turn symmetry! Much of the symmetry analysis becomes evident / even essential, when we observe which isometry is used for
the 'congruence'.
I have some other charts etc. for some of this. On of the criterion: How well does it generalize' is useful, as well as what reasoning / connections does it afford?
In terms of the dislike for 'diamond' - many people (including many students) would certainly consider a 'square' oriented with vertices up and down (45 degree angle to the 'standard' orientation'
as a diamond. There is a even a commercial in North America which plays on this for a cereal (Shreddies) has a square shape (see en.wikipedia.org/wiki/Shreddies).
Walter Whiteley
Date Subject Author
3/7/10 Inclusive and exclusive definitions... again! Allan Turton
3/8/10 Re: Inclusive and exclusive definitions... again! Allan Turton
3/8/10 Re: Inclusive and exclusive definitions... again! Allan Turton
3/8/10 Re: Inclusive and exclusive definitions... again! Allan Turton
3/8/10 Re: Inclusive and exclusive definitions... again! Allan Turton
3/8/10 Re: Inclusive and exclusive definitions... again! Allan Turton
11/8/10 Re: Inclusive and exclusive definitions... again! Michael de Villiers
11/20/10 Re: Inclusive and exclusive definitions... again! Michael de Villiers
4/1/11 Re: Inclusive and exclusive definitions... again! Michael de Villiers | {"url":"http://mathforum.org/kb/message.jspa?messageID=7005576","timestamp":"2014-04-17T16:50:58Z","content_type":null,"content_length":"28470","record_id":"<urn:uuid:da389ddf-1ec9-4072-8492-f674081b33b5>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive: Report 2003/155
A Formal Proof of Zhu's Signature Schemehuafei zhuAbstract: Following from the remarkable works of Cramer and Shoup \cite{CS}, three trapdoor hash signature variations have been presented in the
literature: the first variation was presented in CJE'01 by Zhu \cite{Zhu}, the second variation was presented in SCN'02 by Camenisch and Lysyanskaya \cite{CL} and the third variation was presented in
PKC'03 by Fischlin \cite{Fis}. All three mentioned trapdoor hash signature schemes have similar structure and the security of the last two modifications is rigorously proved. We point out that the
distribution of variables derived from Zhu's signing oracle is different from that generated by Zhu's signing algorithm since the signing oracle in Zhu's simulator is defined over $Z$, instead of
$Z_n$. Consequently the proof of security of Zhu's signature scheme should be studied more precisely. We also aware that the proof of Zhu's signature scheme is not a trivial work which is stated
below: \begin{itemize} \item the technique presented by Cramer and Shoup \cite{CS} cannot be applied directly to prove the security of Zhu's signature scheme since the structure of Cramer-Shoup's
trap-door hash scheme is double deck that is easy to simulate a signing query as the order of subgroup $G$ is a public parameter; \item the technique presented by Camenisch and Lysyanskaya \cite{CL}
cannot be applied directly since there are extra security parameters $l$ and $l_s$ guide the statistical closeness of the simulated distributions to the actual distribution; \item the technique
presented by Fischlin cannot be applied directly to Zhu's signature scheme as the security proof of Fischlin's signature relies on a set of pairs $(\alpha_i, \alpha_i \oplus H(m_i))$ while the
security proof of Zhu's signature should rely on a set of pairs $(\alpha_i, H(m_i))$. \end{itemize}
In this report, we provide an interesting random argument technique to show that Zhu's signature scheme immune to adaptive chosen-message attack under the assumptions of the strong RSA problem as
well as the existence of collision free hash functions.
Category / Keywords: Date: received 5 Aug 2003, last revised 8 Aug 2003Contact author: zhuhf at zju edu cnAvailable format(s): PDF | BibTeX Citation Version: 20030808:075709 (All versions of this
report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ] | {"url":"http://eprint.iacr.org/2003/155/20030808:075709","timestamp":"2014-04-21T15:23:53Z","content_type":null,"content_length":"3743","record_id":"<urn:uuid:41a29992-16fc-4069-abd8-e38b7ffc4abf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
electric field
electric field
is a concept from
. It has two sources, (1) electric charges and (2) the
magnetic field
. The relation of the electric field to its two sources is expressed by two of
Maxwell’s Equations
∇ • E = ρ / ε[o]
∇ x E = - ∂B/∂t
where where E is the Electric field, B is the Magnetic field, ρ is the position dependent density of electric charge, ε[o] is a constant called the permettivity of free space to be determined
experimentally, ∇ • V is the divergence of a vector field, and ∇ x V is the curl of a vector field.^1 The units of the electric field are force per unit charge, which in SI units is Newtons per
The first equation relates the divergence of the electric field to a distribution of charges. It is the integral formulation of the more familiar Gauss’ Law^2
∫[surf]E • da = Q[enc]/ ε[o].
This relates the component of the electric field perpendicular to the surface of a boundary added over the entire boundary in terms of the total charge contained within the boundary. In the case of a
point charge and a boundary of constant radius, we have the even more familiar Coulomb Force law:
E = 1/4πε[o] q* r^unit/r^2
where r is the distance from the source to the point at which the field is being evaluated and q is the charge.
For a distribution of point charges, the Coulomb forces are summed over all the charges:
E = 1/4πε[o] ∫ &rho*r^unit/r^2
where ρ is now the position dependent charge density. This is an equivalent statement to the first Maxwell Equation above.
The other Maxwell Equation above relates the curl of the electric field to the change in the flux of the magnetic field. It is the integral formulation of the more familiar Faraday’s Law,^2
∫ E • dl = - dΦ/dt, where Φ is the magnetic flux Φ = ∫[surf] B • da. This is why electric generators work.
If there are only stationary charges, then there are no currents and therefore no magnetic field, so the electric field is given by the first equation only. This situation is called electrostatics.
In this case, ∇ x E = 0 and the electric field is curl free, and can therefore be expressed in terms of a scalar potential E = -∇V. This potential is the more familiar voltage.
In the presence of non-stationary charges, then there will in general be a magnetic field and the electric field will be determined by both equations.
Analysis of Maxwell’s Equations show that the electric field is physically real, and that alternate electric and magnetic fields are responsible for electromagnetic radiation.
are quantities from
vector calculus
. The
is can be thought of as a measure of how closely a vector field radiates from a single point, and the
is in a way a measure of how much a vector field curls around a point. The character ∇ is a directional derivative operator, in symbols (i ∂/∂x, j ∂/∂y, k ∂/∂z), and is pronounced ‘
,’ or ‘
’ by some freaks.
^2These integral formulations are equivalent to the original formulations by the Divergence Theorem and Stokes’ Theorem, two results from vector calculus. For more information, see my w/u under
Maxwell’s Equations. | {"url":"http://everything2.com/title/electric+field","timestamp":"2014-04-18T17:21:36Z","content_type":null,"content_length":"24224","record_id":"<urn:uuid:8b6264a9-795a-473a-9b74-7efd14bcc180>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Water Flow Rates for Pipe Sizes with Excel Formulas, Using the Hazen Williams Formula
• Limitations on the Hazen Williams Formula for Water Flow Rate Calculations
The Hazen Williams formula is an empirical equation that can be used for turbulent flow of water at typical ambient temperatures. The turbulent flow requirement is not very limiting. Most
practical applications of water transport in pipes are in the turbulent flow regime. For a review of this topic see the article, 'Reynolds Number and Laminar & Turbulent Flow.' Strictly speaking,
the Hazen Williams formula applies to water at 60^oF, but it works quite well for a reasonable range of water temperatures above or below 60^oF. For fluids with viscosity different from water, or
for water temperatures far above or below 60^oF, the Darcy Weisbach Equation works better than the Hazen Williams Formula. Click on the following link for more details about the Darcy Weisbach
Following presentation and discussion of several forms of the Hazen Williams equation in the next couple of sections, a downloadable Excel spreadsheet template will be presented and discussed for
making Hazen Williams water flow rate calculations, using Excel formulas.
• Forms of the Hazen Williams Formula
There are several different forms of the Hazen Williams Formula in use for water flow rate calculations. It can be written in terms of water velocity or water flow rate, in terms of pressure drop
or head loss, and for several different sets of units. The traditional form of the Hazen Williams formula is:
U.S. units: V = 1.318 C R^0.633 S^0.54, where:
□ V = water flow velocity in ft/sec
□ C = Hazen Williams coefficient, dependent on the pipe material and pipe age
□ R = Hydraulic radius, ft (R = cross-sectional area/wetted perimeter)
□ S = slope of energy grade line = head loss/pipe length = h[L]/L, which is dimensionless
S.I. units: V = 0.85 C R^0.633 S^0.54, where:
□ V is in m/s and R is in meters
The Hazen Williams Formula is used primarily for pressure flow in pipes, for which the hydraulic radius is one fourth of the pipe diameter (R = D/4). Using this relationship and Q = V(πD^2/4),
for flow in a circular pipe, the Hazen Williams formula can be rewritten as shown in the next section.
• Water Flow Rates for Pipe Sizes over a Range of Diameters with the Hazen Williams Formula
For flow of water under pressure in a circular pipe, the Hazen Williams formula shown above can be rewritten into the following convenient form:
in U.S. units: Q = 193.7 C D^2.63 S^0.54, where:
□ Q = water flow rate in gal/min (gpm)
□ D = pipe diameter in ft
□ C and S are the same as above
in S.I. units: Q = 0.278 C D^2.63 S^0.54, where
□ Q is in m^3/s and D is in meters
The Hazen Williams formula can also be expressed in terms of the pressure difference (ΔP) instead of head loss (h[L]) across the pipe length, L, using ΔP = ρgh[L]:
In S.I. units, a convenient form of the equation is: Q = (3.763 x 10^-6) C D^2.63 (ΔP/L)^0.54, where
□ Q is water flow rate in m^3/hr,
□ D is pipe diameter in mm
□ L is pipe length in m,
□ ΔP is the pressure difference across pipe length, L, in kN/m^2
In U.S. units: Q = 0.442 C D^2.63 (ΔP/L)^0.54, where
□ Q is water flow rate in gpm,
□ D is pipe diameter in inches
□ L is pipe length in ft,
□ ΔP is the pressure difference across pipe length, L, in psi
This is a form of the Hazen Williams formula that is convenient to use for estimating water flow rates for pipe sizes and lengths in U.S. units, as illustrated in the section after next on the
second page.
The second page of this article has a table with values for the Hazen Williams coefficient, a table with example water flow rate calculations for several PVC pipe lengths and diameters, and a
link to download a spreadsheet template with Excel formulas to make the water flow rate calculations.
• Values for the Hazen Williams Coefficient
In order to use the Hazen Williams formula for water flow rate calculations, values of the Hazen Williams coefficient, C, are needed for the pipe material being used. Values of C are available
in many handbooks, textbooks, and on internet sites. C values typically used for some common pipe materials are shown in the table at the left.
Source:Toro Ag Irrigation (PDF)
• Example Calculation of Water Flow Rates for Pipe Sizes and Lengths
The table below was prepared using the equation: Q = 0.442 C D^2.63 (ΔP/L)^0.54, with units as given above, to calculate the water flow rates for PVC pipe with diameters from 1/2 inch to 6 inches
and length from 5 ft to 100 ft, all for a pressure difference of 20 psi across the particular length of pipe. The Hazen Williams coefficient was taken to be 150 per the table in the previous
...............................................WATER FLOW RATE IN GPM.............................................
...................... ..................................Pipe Diameter in Inches.............................................
length, ft........0.5.......0.75........1.........1.5.........2.........2.5.........3..........4..........5............6
The table shows a pattern that you should intuitively expect. For a given pressure difference driving the flow, the water flow rate increases as diameter increases for a given pipe length and the
water flow rate decreases as pipe length increases for a given pipe diameter. The equation above can be used to calculate water flow rates for pipe sizes and lengths with different pipe materials
and pressure driving forces, using the Hazen Williams equation as demonstrated in the table above.
• An Excel Template to Calculate Water Flow Rates for Pipe Sizes and lengths.
The spreadsheet template at the left has the Excel formulas built in to calculate water flow rates for different pipe sizes as illustrated in the previous section. This Excel spreadsheet
template that can be downloaded below, allows for input of the Hazen Williams coefficient value and the pressure drop across the length of pipe being considered. Also, the pipe diameters and
lengths can be changed from those currently in the spreadsheet, so the flow rate can be calculated for any combination of pipe diameter and length if the Hazen Williams coefficient is known and
the pressure drop across the pipe is known.
The example spreadsheet has U.S. units, but an S.I. version and a U.S. version are available for download.
Click here to download this spreadsheet template in U.S. units.
Click here to download this spreadsheet template in S.I. units.
• References
References for Further Information:
1. Bengtson, H., Fundamentals of Fluid Flow, An online continuing education course for PDH credit.
2. Munson, B. R., Young, D. F., & Okiishi, T. H., Fundamentals of Fluid Mechanics, 4th Ed., New York: John Wiley and Sons, Inc, 2002.
3. Liou, C.P., "Limitations and Proper Use of the Hazen-Williams Equation," Journal of Hydraulic Engineering, Vol. 124, No. 9, Sept. 1998, pp. 951-954.
Pipe Flow Calculations
Pipe flow calculations include using Reynolds number to find if the flow is laminar flow or turbulent flow. Frictional head loss can be found using the Darcy Weisbach equation and the friction
factor. The entrance length for fully developed flow can be found for turbulent flow and for laminar flow. | {"url":"http://www.brighthubengineering.com/hydraulics-civil-engineering/73748-excel-formulas-to-calculate-water-flow-rates-for-different-pipe-sizes/","timestamp":"2014-04-20T23:29:55Z","content_type":null,"content_length":"54058","record_id":"<urn:uuid:83a49249-c84b-44a2-877a-c2f9f7ca21ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: RE: Factor Analysis: which explained variance?
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: RE: Factor Analysis: which explained variance?
From "Verkuilen, Jay" <JVerkuilen@gc.cuny.edu>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject st: RE: RE: Factor Analysis: which explained variance?
Date Mon, 21 Dec 2009 13:53:50 -0500
Nick Cox wrote:
>P.P.S. the whole notion of variance is perhaps a little suspect when the
originals are indicator variables. <
@Nick: I don't know, you have variances, they're just functions of the mean (proportion)! However, there are covariances that aren't redundant.
@The original poster:
With four indicators, you really can only afford a one dimensional factor analysis. Anything higher dimension will be, essentially, unidentified, and thus even more indeterminate than usual for factor analysis. Three indicators is exactly identified. Four indicators with correlated factors that have two indicators per factor is also identified, but if the solution says that you have three and one you're really out of luck.
Without knowing the tetrachoric correlation matrix (these are indicators, i.e., binary, so polychoric is just tetrachoric anyhow) it's very hard to say on any statistical grounds.
Is there a theoretical reason to form a sum score from these indicators? For instance, do they operate like items on a quiz where you want to know the total score?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-12/msg00792.html","timestamp":"2014-04-16T10:46:28Z","content_type":null,"content_length":"7437","record_id":"<urn:uuid:0787346b-7335-4b4f-a9aa-9364ca9152c3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next: BUMPER Up: Element Dictionary Previous: ALPH
A map of Bx and By vs x and y.
Parallel capable? : yes
│ Parameter Name │ Units │ Type │ Default │ Description │
│ L │ │ double │ 0.0 │ length │
│ STRENGTH │ │ double │ 0.0 │ factor by which to multiply field │
│ ACCURACY │ │ double │ 0.0 │ integration accuracy │
│ METHOD │ │ STRING │ NULL │ integration method (runge-kutta, bulirsch-stoer, modified-midpoint, two-pass modified-midpoint, leap-frog, non-adaptive runge-kutta │
│ FILENAME │ │ STRING │ NULL │ name of file containing columns (x, y, Fx, Fy) giving normalized field (Fx, Fy) vs (x, y) │
│ FX │ │ STRING │ NULL │ rpn expression for Fx in terms of x and y │
│ FY │ │ STRING │ NULL │ rpn expression for Fy in terms of x and y │
│ GROUP │ │ string │ NULL │ Optionally used to assign an element to a group, with a user-defined name. Group names will appear in the parameter output file in the column │
│ │ │ │ │ ElementGroup │
This element simulates transport through a transverse magnetic field specified as a field map. It does this by simply integrating the Lorentz force equation in cartesian coordinates. It does not
incorporate changes in the design trajectory resulting from the fields. I.e., if you input a dipole field, it is interpreted as a steering element.
The field map file is an SDDS file with the following columns:
• x, y -- Transverse coordinates in meters (units should be ``m'').
• Fx, Fy -- Normalized field values (no units). The field is multiplied by the value of the STRENGTH parameter to convert it to a local bending radius. For example, if Fx=y and Fy=x, then STRENGTH
is the K1 quadrupole parameter.
• Bx, By -- Field values in Tesla (units should be ``T''). The field is still multiplied by the value of the STRENGTH parameter, which is dimensionless. Note: the default value of STRENGTH is 0, so
if you don't set it to something, you'll get no effect!
The field map file must contain a rectangular grid of points, equispaced (separately) in x and y. There should be no missing values in the grid (this is not checked by elegant). In addition, the x
values must vary fastest as the values are accessed in row order. To ensure that this is the case, use the following command on the field file:
sddssort fieldFile -column=y,incr -column=x,incr
Next: BUMPER Up: Element Dictionary Previous: ALPH Robert Soliday 2014-03-21 | {"url":"http://www.aps.anl.gov/Accelerator_Systems_Division/Accelerator_Operations_Physics/manuals/elegant_latest/node103.html","timestamp":"2014-04-18T17:01:50Z","content_type":null,"content_length":"6792","record_id":"<urn:uuid:75536427-e34c-460d-a475-f041b2eb1eb6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
11.1 In order to measure the price change in the Consumer Price Index (CPI) excluding any quality or quantity changes, the ABS uses a fixed basket of goods and services. However, as consumer
expenditure patterns change over time in a dynamic economy, the fixed basket used in the CPI runs the risk of becoming unrepresentative and can lead to bias. There are a number of different types of
bias that may affect price indexes, outlined in Chapter 4. The ABS applies significant effort to address these biases. Some aspects, such as quality change, have been addressed in Chapter 8. This
chapter includes the strategies the ABS uses to minimise the effect of substitution bias on the CPI and an estimation of one type of bias, the upper-level substitution basis.
11.2 The production of a price index by reference to a fixed basket of goods and services has several advantages. Firstly, the concept is easy to understand; price the same basket of goods and
services at two different periods, and compare the total price of the basket. Secondly, by fixing both the items within the basket and their quantities, the resulting values provide a measure of pure
price change that is free from compositional change. In application, this process is more complex than the basket analogy would suggest. In practice, the transactions occurring in the market place
are frequently changing. This observation reveals a dilemma, namely how can a price index use a fixed basket to measure pure price change and at the same time remain both contemporary and
representative of the market?
11.3 The ABS has a policy of continual assessment of the samples of consumer goods and services that it uses in the CPI. Essentially there are three levels of maintaining representation of an index:
(i) Sample maintenance - ongoing updating and replacement of specifications, respondents, and weights for the prices collected in the CPI, which ensures that the structure of respondent samples and
specifications remains relevant.
(ii) Sample review - a complete reassessment of the sample used to represent all products in the commodity classification; covering companies, products, pricing procedures and relative weights based
on consumer expenditure. The end product of the sample review may be a new or revised sample (respondents, specifications and collection methods), the confirmation of the existing sample or a change
to the index structure below the Expenditure Class (EC) level.
(iii) Index reviews - periodic (six-yearly) reviews of the overall index structure and the price collection methodology and updates to the weighting pattern based on Household Expenditure Survey
(HES) data.
11.4 Item substitution occurs when households react to changes in relative prices by choosing to reduce purchases of goods and services showing higher relative price change and instead buy more of
those showing lower relative price change.
11.5 Under such circumstances, a fixed-basket Laspeyres index will overstate the price change of the whole basket as it does not take account of changes in the substitutions that consumers make in
response to relative price changes. For example, if the price of beef were to increase more than the price of chicken, one would expect consumers to purchase more chicken and less beef. As a
fixed-base index would continue to price the original quantities of beef and chicken, the price change faced by consumers would be overstated.
11.6 Item substitution bias is due to changes in the pattern of household consumption which takes place over time as a result of both demand and supply changes. The longer the period between weight
revision periods, the more time there is for consumers to substitute towards or away from goods and services in reaction to relative price changes and as a result of changes in income. Similarly,
supply conditions (and therefore the availability, or otherwise, of certain goods and services) can change substantially over the period in which the weights are fixed.
11.7 Like most CPIs, the Australian CPI is calculated using a base-weighted modified Laspeyres index formula (known as Lowe index(footnote 1) ) which keeps quantities fixed between major revisions
but allows prices to vary. A Laspeyres (or in most cases a Laspeyres-type) index measures the change in the cost of purchasing the same basket of goods and services in the current period as was
purchased in a specified base period. The weights reflect expenditures from a historical period, the base period. See Chapter 4 for more detail.
11.8 There is a family of indexes called superlative indexes. Superlative indexes make use of both beginning-of-period and end-of-period information on both prices and quantities (expenditures),
thereby accounting for substitution across items. However, in order to construct a superlative index both price and quantity (expenditure) data are required for both periods under consideration.
11.9 Superlative indexes can only be produced retrospectively once the required weighting data is available. Given that current period expenditure data for households is not available on a
sufficiently timely basis (generally not available until 12 months after the reference period), a superlative formula cannot be used in the routine production of the CPI, which is why statistical
agencies rely on fixed baskets. Most, if not all, statistical agencies use a Laspeyres-type index. The requirement for end-of-period information in real time is the reason a superlative index is an
impractical option for statistical offices for the compilation of the CPI.
11.10 The ABS has constructed a retrospective superlative-type index to provide an estimation of potential item (upper level) substitution bias in the fixed-weight Australian CPI. While there are
five main sources of bias in CPIs (described further in chapter 4), this analysis focuses on one type only - upper level item substitution bias - and therefore the results in the analysis should not
be taken to equate to the total bias in the CPI, which will be the cumulative impact of all sources of bias. This analysis can only be conducted retrospectively when new HES data is available -
currently every six years.
11.11 Superlative indexes allow for substitution as they make use of weights for both the earlier and later periods under consideration (basically averaging across historical and current expenditures
to derive a ‘representative’ set of weights for the period) whereas the Laspeyres index uses only base period weights.
11.12 The estimate of upper level substitution bias has been made at relatively high levels of aggregation. The analysis is calculated based on the amount of consumer substitution between expenditure
classes as this is the lowest level for which reliable weighting information (from the HES) is available and this is the level at which the underlying quantity weights remain fixed between CPI
reviews. Thus, the analysis captures substitution from one expenditure class to another, e.g. from beef and veal to poultry, but not within a given expenditure class, e.g. from beef to veal. The
substitution within an expenditure class is called lower level substation bias which is minimised through regular sample maintenance, sample reviews and choice of index formulas.
11.13 Two superlative indexes have been constructed and linked together to form one continuous series. The first index was constructed on the 14th series CPI basis between the June quarter 2000 and
the June quarter 2005 and the second index was constructed on the 15th series CPI basis between the June quarter 2005 and the June quarter 2011.
11.14 Using the expenditure class at the weighted average of eight capital cities level, i) Laspeyres-type, ii) Paasche-type, and iii) superlative Fisher-type indexes have been calculated at the All
groups CPI level.(footnote 2) The indexes have all been calculated with the base period June quarter 2000 = 100.0. The Fisher index is regarded as the best practical approximation of a 'true' (or
'ideal') price index, being the geometric average of the Laspeyres and Paasche indexes.
11.15 The Laspeyres-type index is equivalent to the published All groups CPI re-referenced to the June quarter 2000. There may be some differences in the movements compared to the All groups CPI due
to rounding.
11.16 The Paasche and Fisher-type indexes were a retroactively modelled analytical series and are not replacing the published Australian Consumer Price Index which is designed to measure price
inflation for the household sector as a whole.
11.17 The Paasche-type and superlative Fisher-type indexes were constructed using the same structure as the All groups CPI as published at the time to allow for direct comparison. The indexes from
the June quarter 2000 to the June quarter 2005 were derived using the 14th series classification consisting of 88 expenditure classes. The index numbers from the June quarter 2005 to the June quarter
2011 were derived using the 15th series classification consisting of 90 expenditure classes.
11.18 Using these indexes, an estimate of upper level substitution bias in the CPI was obtained by subtracting the superlative (Fisher-type) index from the All groups CPI (Laspeyres-type) index. The
Fisher index is regarded as the best practical approximation of a 'true' (or 'ideal') price index, being the geometric average of the Laspeyres and Paasche indexes.
11.19 For the Paasche-type index, to estimate current period weights each quarter, the ABS applied a linear model between the re-weighting periods (June quarter 2000 - June quarter 2005 and June
quarter 2005 - June quarter 2011). In calculating the Paasche-type index the June quarter 2011 weight for the Fruit expenditure class was modified to adjust for the effect of cyclone Yasi.
11.20 The analysis found the total upper level substitution bias of the All groups CPI (as measured by the difference between the Laspeyres-type index and the Fisher-type index) was 3.6 percentage
points after 11 years due to the inability of the fixed-base index to take account of the item substitution effect. The All groups CPI, calculated using a fixed-weight direct Laspeyres-type index
increased by a total of 41.3% from June quarter 2000 to June quarter 2011. The retrospective superlative index, calculated using the Fisher-type index, increased by 37.7% over the same period.
11.21 To estimate the average annual upper level substitution bias, the indexes can be expressed as Compound Annual Growth Rates (CAGR).
= ((I[L,JQ11] / I[L,JQ00]) ^(1/11) - 1) * 100
= ((141.3/100.0) ^(1/11) - 1) * 100
= 3.19%
= ((I[F,JQ11] / I[F,JQ00]) ^(1/11) - 1) * 100
= ((137.7/100.0) ^(1/11) - 1) * 100
= 2.95%
11.22 The average annual upper level substitution bias was calculated as Laspeyres
- Fisher
= 3.19% - 2.95% = 0.24%. The CPI for the period June quarter 2000 to the June quarter 2011 was potentially upwardly biased by 0.24 of a percentage point per year on average due to the inability to
take account of the upper level item substitution effect. These results are consistent with studies by other national statistical agencies.
11.23 The results show that the longer the period between re-weights, the larger the potential upper level item substitution bias effect on the index. Table 11.1 illustrates that the average annual
substitution bias increases at a faster rate the longer the period between re-weights. The re-weighting periods in this analysis were June quarter 2000 and June quarter 2005.
11.1 Average Annual item substitution bias(a)
Time since re-weight Laspeyres[CAGR] - Fisher[CAGR]
(b)1 year 0.16
2 years 0.08
3 years 0.12
4 years 0.15
5 years 0.22
(c)6 years 0.25
Annual average between June quarter 2000 and June quarter 2011 0.24
(a) This takes the average of the average annual item substitution bias for the period June quarter 2000 - June quarter 2005 and the period June quarter 2005 - June quarter 2011.
(b) This figure includes the banana price increase in March 2006 which was a result of cyclone Larry.
(c) The six year average annual item substitution bias is only based on the index numbers for June quarter 2005 to June quarter 201.
11.24 The result for 1 year since re-weight was caused by the introduction of the GST and cyclone Larry and can be considered atypical. Excluding this, it can be seen that the average annual item
substitution bias increases over time and also increases at a faster rate, especially after the fourth year. This finding is consistent with the Statistics New Zealand (SNZ) analysis which showed
that item substitution bias is considerably greater when NZ CPI weights are updated at six-yearly rather than three-yearly intervals.
(footnote 3)
11.25 While there are five main sources of bias in CPIs, this analysis focuses on one type only - upper level item substitution bias - and therefore the results in the analysis should not be taken to
equate to the total bias in the CPI, which will be the cumulative impact of all sources of bias.
11.26 As different index number formulas produce different results, the ABS has to decide which formula to use. The usual way is to evaluate the performance of a formula against a set of desirable
mathematical properties or tests. This is called the axiomatic approach. This approach is certainly useful however a few practical issues need to be considered, such as: the relevance of the tests
for the application at hand; the importance of a particular test (some tests are more important than others); and even if an index formula fails a test, how close in practice will the index likely be
to the best measure?
11.27 The range of tests developed for index numbers has expanded over the years. Diewert (1992) describes twenty tests for weighted index formulas, and Diewert (1995) provides seventeen tests for
equally weighted (or elementary) index formulas, and attributes the tests to their authors. It is beyond the scope of this chapter to describe all the tests, but several important ones are outlined
below. Many of the tests apply to both types of formulas.
□ Time reversal. This test requires the index formula to produce consistent results whether it is calculated from period 0 to period 1 or from period 1 to period 0. More specifically, if the price
observations for period 0 and period 1 are changed around then the resulting price index should be the reciprocal of the original index.
□ Circularity (often called transitivity). This is a multiperiod test (essentially a test of chaining). It requires that the product of the price index obtained by going from period 0 to period 1
and from period 1 to 2 is the same as going directly from period 0 to period 2.
□ Permutation or price bouncing. This test requires that, if the order of the prices in either period 0 or period 1 (or both) is changed, but not the individual prices, the index number should not
change. This test is appropriate in situations where there is considerable volatility in prices; for example, due to seasonal factors or sales competition.
□ Commensurability. This test requires that if the units of measurement of the item are changed (e.g. from kilograms to tonnes), then the price index should not change.
11.28 The Fisher Ideal index formula passes the tests on time reversal, circularity and commensurability; whereas the Laspeyres and Paasche only pass the test of commensurability.
11.29 Regarding the three equally weighted price index formulas discussed in Chapter 4, the arithmetic mean of price relatives (APR) fails the first three tests, the relative of average prices (RAP)
fails the commensurability test, but the geometric mean (GM) approach passes all tests. Of Diewert's seventeen tests for elementary index formulas, the RAP passes fifteen tests and the GM sixteen
11.30 Although the equally weighted GM appears to have considerable appeal as an elementary index formula, there are some situations in which it produces an undesirable result. The GM cannot handle
zero prices which might occur, for example, if the government introduced a policy to subsidise fully a particular good or service. In addition, the GM may not produce acceptable movements when a
price falls sharply. For example, consider a price sample of two items, each selling for $10 in one period, with the price of one of the items falling to $1 in the second period. The GM produces an
index of 31.6 for the second period (assuming it was 100 in the first period), a fall of around 68%. Because the GM maintains equal expenditure shares in each period, it effectively gives a larger
weight to lower prices.
(footnote 4)
11.31 The GM formula has become more widely accepted in official circles for compiling consumer price indexes. For example, Canada switched to using GMs in the late 1980s; the United States
introduced the GM formula for items making up about 61% of its CPI in January 1999; and Australia began introducing the formula in the December quarter 1998. (However, where there is a likelihood of
zero occurring in the price sample the GM is inappropriate, and the ABS generally uses the RAP formula instead.) Furthermore, the GM formula is prescribed by the European Union for calculation of
price sample means in its Harmonised Indices of Consumer Prices (HICP).
11.32 There is another aspect to indexes that is worth considering, although it is not rated as a test in the literature. In most countries the CPI is produced at various levels of aggregation.
Typically there are three or more levels between the lowest published level, and the total of all goods and services. In practice, it is desirable that the same result is obtained whether the total
index is compiled directly from the lowest level or in a staged way using progressively higher levels of aggregation. Diewert (1978) shows that the fixed weighted Laspeyres and Paasche indexes may be
aggregated consistently, and the Fisher and Törnqvist indexes are (very) closely consistent.
(footnote 5)
1 Consumer Price Indices; An ILO Manual, by Ralph Turvey et al (ILO, Geneva 1989).
2 For a description of the indexes, refer to Chapter 4 Price Index Theory.
3 Consumers Price Index Retrospective Superlative Index, 2008 (Statistics New Zealand, 2008), available at (
4 The RAP and APR formulas both give an index of 55.0 in this case.
5 The aggregation property of the Laspeyres and Paasche indexes allows them to be broken down into points contributions which is very useful for analysing the relative significance of items in the
index, and their contributions to changes in the aggregate index. However, Diewert (2000) has a way to decompose superlative indexes.
This page last updated 19 December 2011 | {"url":"http://www.abs.gov.au/ausstats/abs@.nsf/Latestproducts/6461.0Main%20Features132011?opendocument&tabname=Summary&prodno=6461.0&issue=2011&num=&view=","timestamp":"2014-04-18T04:53:31Z","content_type":null,"content_length":"46713","record_id":"<urn:uuid:848cb8a1-a141-4279-8f86-dd1eddfec4b6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2002 [00353]
[Date Index] [Thread Index] [Author Index]
Re: Change of Variables
• To: mathgroup at smc.vnet.net
• Subject: [mg32512] Re: Change of Variables
• From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>
• Date: Fri, 25 Jan 2002 02:57:40 -0500 (EST)
• Organization: Universitaet Leipzig
• References: <a2onns$5v9$1@smc.vnet.net>
• Reply-to: kuska at informatik.uni-leipzig.de
• Sender: owner-wri-mathgroup at wolfram.com
what is the difference between Replace[] and ReplaceAll[] ?
"expr /. rules applies a rule or list of rules in an attempt to
transform \
each subpart of an expression expr"
"Replace[expr, rules] applies a rule or list of rules in an attempt to \
transform the entire expression expr. Replace[expr, rules, levelspec]
applies \
rules to parts of expr specified by levelspec."
test^2 /. v -> c*beta
2*Pi*c^2*h/lambda^5*(E^(h*c/(lambda*k*T)) - 1)^(-1) /.
T -> h*c/(lambda*k x)
(2*c^2*h*Pi)/((-1 + E^x)*lambda^5)
John S wrote:
> Hello,
> I would greatly appreciate any help with the following problem. I am trying
> to perform a change of variable in a function/definition so that I can
> integrate it. In particular, I want to take Planck's Radiation Law:
> planck=2*Pi*c^2*h/lambda^5*(E^(h*c/(lambda*k*T))-1)^(-1)
> and substitute x=h*c/(lambda*k*T) and integrate wrt lambda from 0 to
> infinity. I tried using replace, but that does not seem to try to
> manipulate the function in terms of x, but simply seek out the replacement,
> and if it exists, perform it.
> An even simpler example is the following:
> test=v/c
> Replace[test^2,v/c -> beta]
> does not yield beta^2, but rather v^2/c^2.
> Again, any and all help would be greatly appreciated. | {"url":"http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00353.html","timestamp":"2014-04-17T12:57:36Z","content_type":null,"content_length":"35687","record_id":"<urn:uuid:23124a08-5bfa-4b45-865a-8cce5175334e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
What was Euclid's contribution in the realm of geometry? - Yahoo Answers
What was Euclid's contribution in the realm of geometry?
• ✓
Follow publicly
• ✓
Follow privately
• Unfollow
Best Answer
Euclid , another name Euclid of Alexandria and also known as Father of Geometry. A Greek mathematician known for the major contribution on geometry. The whole new stream of geometry established by
him known as Euclidean Geometry. Basically the modern 2 dimension geometry (mean what u can draw on a paper) is actually adopted from the Euclidean Geometry. and also it is the basic building block
of the modern geometry.
All of his invents and developments , axioms etc. are written on his book "Euclid's Elements" that consisting 13 book. This is one of the most influential and successful textbook ever written.
He also works of Number Theory, Spherical Geometry, Conic Section etc.
Other Answers (3)
Rated Highest
• Everything.
There is an entire discipline in mathematics known as Euclidean Geometry.
Euclid's Elements, 13 books containing the principles of geometry and a good deal of algebra and number theory, is still being used today.
Next to the bible, Euclid's Elements is the most widely published book(s) in history !
Do a google search on "Euclid" to get a wealth of information.
• not much, unless you count "Elements"
• Plane geometry. (Geometry problems that can be done on paper).
Non-Euclidean Geometry involves the idea that the world is not flat, so the problems involve curves rather than flat planes. | {"url":"https://in.answers.yahoo.com/question/index?qid=20090601065733AAs53D9","timestamp":"2014-04-18T03:24:03Z","content_type":null,"content_length":"54191","record_id":"<urn:uuid:bd653289-8e9a-4d88-82df-7fe04c23c89e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
: Channel Analysis Methods
Anchor: #i1018552
Section 6: Channel Analysis Methods
Anchor: #i1018557
The depth and velocity of flow are necessary for the design and analysis of channel linings and highway drainage structures. The depth and velocity at which a given discharge flows in a channel of
known geometry, roughness, and slope can be determined through hydraulic analysis. The following two methods are commonly used in the hydraulic analysis of open channels:
Generally, the Slope Conveyance Method requires more judgment and assumptions than the Standard Step Method. In many situations, however, use of the Slope Conveyance Method is justified, as in the
following conditions:
Anchor: #i1018601
Slope Conveyance Method
The Slope Conveyance Method, or Slope Area Method, has the advantages of being a relatively simple, usually inexpensive and expedient procedure. However, due to the assumptions necessary for its use,
its reliability is often low. The results are highly sensitive to both the longitudinal slope and roughness coefficients that are subjectively assigned. This method is often sufficient for
determining tailwater (TW) depth at non-bridge class culvert outlets and storm drain outlets.
The procedure involves an iterative development of calculated discharges associated with assumed water surface elevations in a typical section. The series of assumed water surface elevations and
associated discharges comprise the stage-discharge relationship. When stream gauge information exists, a measured relationship (usually termed a “rating curve”) may be available.
You normally apply the Slope Conveyance Method to relatively small stream crossings or those in which no unusual flow characteristics are anticipated. The reliability of the results depends on
accuracy of the supporting data, appropriateness of the parameter assignments (n-values and longitudinal slopes), and your selection of the typical cross section.
If the crossing is a more important one, or if there are unusual flow characteristics, use some other procedure such as the Standard Step Backwater Method.
A channel cross section and associated roughness and slope data considered typical of the stream reach are required for this analysis. A typical section is one that represents the average
characteristics of the stream near the point of interest. While not absolutely necessary, this cross section should be located downstream from the proposed drainage facility site. The closer to the
proposed site a typical cross section is taken, the less error in the final water surface elevation.
You should locate a typical cross section for the analysis. If you cannot find such a cross section, then you should use a “control” cross section (also downstream). (Known hydraulic conditions, such
as sluice gates or weirs exist in a control cross section.) The depth of flow in a control cross section is controlled by a constriction of the channel, a damming effect across the channel, or
possibly an area with extreme roughness coefficients.
The cross section should be normal to the direction of stream flow under flood conditions.
After identifying the cross section, apply Manning’s roughness coefficients (n-values). (See Equation 6-3 and Chapter 6 for more information.) Divide the cross section with vertical boundaries at
significant changes in cross-section shape or at changes in vegetation cover and roughness components. (See Chapter 6 for suggestions on subdividing cross sections.)
Manning’s Equation for Uniform Flow (see Chapter 6 and Equation 6-3) is based on the slope of the energy grade line, which often corresponds to the average slope of the channel bed. However, some
reaches of stream may have an energy gradient quite different from the bed slope during flood flow.
Determine the average bed slope near the site. Usually, the least expensive and most expedient method of slope-determination is to survey and analyze the bed profile for some distance in a stream
reach. Alternately, you may use topographic maps, although they are usually less accurate.
Anchor: #i1018662
Slope Conveyance Procedure
The calculation of the stage-discharge relationship should proceed as described in this section. The Water Surface Elevation tables represent the progression of these calculations based on the cross
section shown in Figure 7-14. The result of this procedure is a stage-discharge curve, as shown in Figure 7-15. You can then use the design discharge or any other subject discharge as an argument to
estimate (usually done by interpolation) an associated water surface elevation.
Anchor: #LGNMJLGM
1. Select a trial starting depth and apply it to a plot of the cross section. Anchor: #HHNNMEHK
2. Compute the area and wetted perimeter weighted n-value (see Chapter 6) for each submerged subsection. Anchor: #OJKINHJK
3. Compute the subsection discharges with Manning’s Equation. Use the subsection values for roughness, area, wetted perimeter, and slope. (See Equation 7-1). The sum of all of the incremental
discharges represents the total discharge for each assumed water surface elevation. Note. Compute the average velocity for the section by substituting the total section area and total discharge
into the continuity equation. Anchor: #KTLFEHJN
4. Anchor: #FGONMNIE
Equation 7-4.
Anchor: #QIEEEFIE
5. Tabulate or plot the water surface elevation and resulting discharge (stage versus discharge). Anchor: #KJKHFEGE
6. Repeat the above steps with a new channel depth, or add a depth increment to the trial depth. The choice of elevation increment is somewhat subjective. However, if the increments are less than
about 0.25 ft. (0.075 m), considerable calculation is required. On the other hand, if the increments are greater than 1.5 ft. (0.5 m), the resulting stage-discharge relationship may not be
detailed enough for use in design. Anchor: #RNGNGIKE
7. Determine the depth for a given discharge by interpolation of the stage versus discharge table or plot.
The following x and y values apply to Figure 7‑14:
Anchor: #
i1006327X and
Y Values for
Figure 7‑14
│ 0 │ 79 │
│ 2 │ 75 │
│ 18 │ 72 │
│ 20 │ 65 │
│ 33 │ 65 │
│ 35 │ 70 │
│ 58 │ 75 │
│ 60 │ 79 │
Anchor: #i999533grtop
Figure 7-14. Slope Conveyance Cross Section
Anchor: #i1058504Water Surface Elevation of 66 ft.
│ Area (ft2) │ 0 │ 13.34 │ 0 │ 13.34 │
│ Wetted Perimeter (ft) │ 0 │ 15.12 │ 0 │ │
│ Hydraulic Radius (ft) │ │ 0.88 │ │ │
│ n │ 0.060 │ 0.035 │ 0.060 │ │
│ Q (cfs) │ │ 10.43 │ │ 10.43 │
│ Velocity (fps) │ │ 0.78 │ │ 0.78 │
Anchor: #i1006910Water Surface Elevation of 79 ft.
│ Area (ft2) │ 92.00 │ 226.00 │ 153.50 │ 471.5 │
│ Wetted Perimeter (ft) │ 20.75 │ 25.67 │ 28.01 │ │
│ Hydraulic Radius (ft) │ 4.43 │ 8.81 │ 5.48 │ │
│ n │ 0.060 │ 0.035 │ 0.060 │ │
│ Q (cfs) │ 122.98 │ 818.33 │ 236.34 │ 1177.66 │
│ Velocity (fps) │ 1.34 │ 3.62 │ 1.54 │ 2.50 │
Anchor: #i1000728grtop
Figure 7-15. Stage Discharge Curve for Slope Conveyance
Anchor: #i1018740
Standard Step Backwater Method
The Step Backwater Method, or Standard Step Method, uses the energy equation to “step” the stream water surface along a profile (usually in an upstream direction because most Texas streams exhibit
subcritical flow). This method is typically more expensive to complete but more reliable than the Slope-Conveyance Method.
The manual calculation process for the Standard Step Method is cumbersome and tedious. With accessibility to computers and the availability of numerous algorithms, you can accomplish the usual
channel analysis by Standard Step using suitable computer programs.
A stage-discharge relationship can be derived from the water surface profiles for each of several discharge rates.
Ensure that the particular application complies with the limitations of the program used.
Use the Standard Step Method for analysis in the following instances:
Anchor: #HKKFJMNH
• results from the Slope-Conveyance Method may not be accurate enough Anchor: #LMIMMNNN
• the drainage facility’s level of importance deserves a more sophisticated channel analysis Anchor: #SFMLGJNL
• the channel is highly irregular with numerous or significant variations of geometry, roughness characteristics, or stream confluences Anchor: #EFFIIMEF
• a controlling structure affects backwater.
This procedure applies to most open channel flow, including streams having an irregular channel with the cross section consisting of a main channel and separate overbank areas with individual
n-values. Use this method either for supercritical flow or for subcritical flow.
Anchor: #i1018812
Standard Step Data Requirements
At least four cross sections are required to complete this procedure, but you often need many more than three cross sections. The number and frequency of cross sections required is a direct function
of the irregularity of the stream reach. Generally speaking, the more irregular the reach, the more cross sections you may require. The cross sections should represent the reach between them. A
system of measurement or stationing between cross sections is also required. Evaluate roughness characteristics (n-values) and associated sub-section boundaries for all of the cross sections.
Unfortunately, the primary way to determine if you have sufficient cross sections is to evaluate the results of a first trial.
The selection of cross sections used in this method is critical. As the irregularities of a stream vary along a natural stream reach, accommodate the influence of the varying cross-sectional
geometry. Incorporate transitional cross sections into the series of cross sections making up the stream reach. While there is considerable flexibility in the procedure concerning the computed water
surface profile, you can use knowledge of any controlling water surface elevations.
Anchor: #i1018827
Standard Step Procedure
The Standard Step Method uses the Energy Balance Equation, Equation 6-11, which allows the water surface elevation at the upstream section (2) to be found from a known water surface elevation at the
downstream section (1). The following procedure assumes that cross sections, stationing, discharges, and n-values have already been established. Generally, for Texas, the assumption of subcritical
flow will be appropriate to start the process. Subsequent calculations will check this assumption.
Anchor: #LOKGHGKI
1. Select the discharge to be used. Determine a starting water surface elevation. For subcritical flow, begin at the most downstream cross section. Use one of the following methods to establish a
starting water surface elevation for the selected discharge: a measured elevation, the Slope-Conveyance Method to determine the stage for an appropriate discharge, or an existing (verified)
rating curve. Anchor: #HHPMHLKH
2. Referring to Figure 6-1 and Equation 6-11, consider the downstream water surface to be section 1 and calculate the following variables:
Anchor: #GIENJHLL
□ z[1] = flowline elevation at section 1 Anchor: #LRJIIGFM
□ y[1] = tailwater minus flowline elevation Anchor: #KJOEJKII
□ α = kinetic energy coefficient (For simple cases or where conveyance does not vary significantly, it may be possible to ignore this coefficient.)
Anchor: #JUGGELNL
3. From cross section 1, calculate the area, A[1]. Then use Equation 6-1 to calculate the velocity, v[1], for the velocity head at A[1]. The next station upstream is usually section 2. Assume a
depth y[2] at section 2, and use y[2] to calculate z[2] and A[2]. Calculate, also, the velocity head at A[2]. Anchor: #LPMJIKJJ
4. Calculate the friction slope (s[f]) between the two sections using Equation 7-5 and Equation 7-6:
Anchor: #KPFFENGH
Equation 7-5.
Anchor: #FNMNJGLN
Equation 7-6.
Anchor: #NJHKGFKF
5. Calculate the friction head losses (h[f]) between the two sections using
Anchor: #HGLIGHFI
Equation 7-7.
Anchor: #ELTLIEIL
6. Calculate the kinetic energy correction coefficients ([1] and [2]) using Equation 6-10. Anchor: #JGLFIMHK
7. Where appropriate, calculate expansion losses (h[e]) using Equation 7‑8 and contraction losses (h[c]) using Equation 7-9 (Other losses, such as bend losses, are often disregarded as an
unnecessary refinement.)
Anchor: #MHEMLEEI
Equation 7-8.
Anchor: #ULIKNMHM
Equation 7-9.
Anchor: #JFIJIJIE
8. Check the energy equation for balance using Equation 7-10 and Equation 7-11.�
Anchor: #MILNJIGF
Equation 7-10.
Anchor: #NVMIGJIJ
Equation 7-11.
The following considerations apply:
Anchor: #LLKEJJNF
□ if L=R within a reasonable tolerance, then the assumed depth at Section 1 is okay. This will be the calculated water surface depth at Section 1; proceed to Step (9) Anchor: #NELKHMKH
□ if L≠R, go back to Step (3) using a different assumed depth.
Anchor: #FGIJNENL
9. Determine the critical depth (d[c]) at the cross section and find the uniform depth (d[u]) by iteration. If, when running a supercritical profile, the results indicate that critical depth is
greater than uniform depth, then it is possible the profile at that cross section is supercritical. For subcritical flow, the process is similar but the calculations must begin at the upstream
section and proceed downstream. Anchor: #JMKFINFE
10. Assign the calculated depth from Step (8) as the downstream elevation (Section 1) and the next section upstream as Section 2, and repeat Steps (2) through (10). Anchor: #KNKKEHHE
11. Repeat these steps until all of the sections along the reach have been addressed.
Anchor: #i1019025
Profile Convergence
When you use the Standard Step Backwater Method and the starting water surface elevation is unknown or indefinite, you can use a computer to calculate several backwater profiles based on several
arbitrary starting elevations for the same discharge. If you plot these profiles ,as shown in Figure 7‑16, they will tend to converge to a common curve at some point upstream because each successive
calculation brings the water level nearer the uniform depth profile.�
Anchor: #i1000901grtop
Figure 7-16. Water Surface Profile Convergence
The purpose of plotting the curves and finding the convergence point is to determine where the proposed structure site is in reference to the convergence point. If the site is in the vicinity or
upstream of the convergence point, you have started the calculations far enough downstream to define a proper tailwater from an unknown starting elevation. Otherwise, you may have to begin the
calculations at a point further downstream by using additional cross sections. | {"url":"http://onlinemanuals.txdot.gov/txdotmanuals/hyd/channel_analysis_methods.htm","timestamp":"2014-04-16T16:01:31Z","content_type":null,"content_length":"202099","record_id":"<urn:uuid:8cc8dd1f-aa2b-40c9-969e-1d5b441d0176>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
If AD is 6 and ADC is a right angle, what is the area of
Author Message
Re: Area of triangular region [#permalink] 15 Dec 2013, 22:21
Expert's post
AccipiterQ wrote:
I thought if you dropped a line down from a triangle vertex and it formed a right angle on the opposite side then that line bisected the side? So in this case if you know what BD
VeritasPrepKarishma is then you know what DC is?
Veritas Prep GMAT To figure out whether it holds, why don't you try drawing some extreme figures, say, something like this:
Joined: 16 Oct 2010
Posts: 4192 Ques3.jpg [ 3.67 KiB | Viewed 290 times ]
Location: Pune, Will this be true in this case?
When will it be true? When the triangle is equilateral, sure. Also when the triangle is isosceles if the equal sides form the angle from which the altitude is dropped.
Followers: 897
Don't put your faith in the figure given. It may be just one of the many possibilities or may be somewhat misleading.
Kudos [?]: 3816 [0]
, given: 148 _________________
Veritas Prep | GMAT Instructor
My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 15 Dec 2013, 22:50
amitjash wrote:
If AD is 6 and ADC is a right angle, what is the area of triangular region ABC?
CrackVerbalGMAT (1) Angle ABD = 60°
Manager (2) AC = 12
Affiliations: Statement I is insufficient
The left triangle becomes a 90 30 60 triangle however we don't know anything about the right triangle
Joined: 03 Oct 2013
Statement II is insufficient
Posts: 124
The right triangle becomes a 90 30 60 triangle however we don't know anything about the left triangle
Location: India
Combining is sufficient
GMAT 1: 780 Q51 V46
We know both the triangles are 90 30 60 with same sides.
Followers: 11
Hence Answer is C
Kudos [?]: 46 [0],
given: 1 Pushpinder Gill
If you find our response valuable, please encourage us with Kudos!
Have questions about your candidature? Get a FREE profile evaluation from CrackVerbal experts!
Attend live online instructor-led GMAT classes by 99th p'cile instructors!
Re: Area of triangular region [#permalink] 24 Jan 2014, 21:10
Hi Bunuel,
Joined: 17 Jun 2013
I have a question.
Posts: 5
Can I not imagine that, in a right angled triangle if one of the side of a triangle bears x square root 3 as its length can I not imagine that it's a 30-60-90? If not why.
Schools: ISB '14
Please help.
Followers: 0 Thanks
Kudos [?]: 0 [0],
given: 29
Math Expert Re: Area of triangular region [#permalink] 25 Jan 2014, 02:39
Joined: 02 Sep 2009 Expert's post
Posts: 17381
Followers: 2886
Kudos [?]: 18476 [0
], given: 2361
Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 06 Feb 2014, 10:28
Bunuel wrote:
manimgoindowndown wrote:
Hey I had the same question as the last poster. How do we know that BD and DC are of the same length? or that angle BAC has been bisected?
Joined: 30 Jan 2014
They are not equal:
Posts: 1
Followers: 0
(from the first statement) and
Kudos [?]: 0 [0],
given: 4 AD=6*\sqrt{3}
(from the second statement).
I think answer is E, since it is not given that BDC points are collinear.
Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 07 Feb 2014, 04:27
Expert's post
virendrasd wrote:
Bunuel wrote:
manimgoindowndown wrote:
Hey I had the same question as the last poster. How do we know that BD and DC are of the same length? or that angle BAC has been bisected?
They are not equal:
(from the first statement) and
(from the second statement).
I think answer is E, since it is not given that BDC points are collinear.
That's not correct.
OG13, page 272:
A figure accompanying
a data sufficiency problem
will conform to the information given in the question but will not necessarily conform to the additional information given in statements (1) and (2).
Bunuel Lines shown as straight can be assumed to be straight and lines that appear jagged can also be assumed to be straight. You may assume that the positions of points, angles,
regions, and so forth exist in the order shown
Math Expert
and that angle measures are greater than zero degrees.
Joined: 02 Sep 2009
All figures lie in a plane unless otherwise indicated.
Posts: 17381
OG13, page 150:
Followers: 2886
Kudos [?]: 18476 [0
], given: 2361 A figure accompanying
a problem solving question
is intended to provide information useful in solving the problem. Figures are drawn as accurately as possible. Exceptions will be clearly noted.
Lines shown as straight are straight
, and lines that appear jagged are also straight.
The positions of points, angles, regions, etc., exist in the order shown
, and angle measures are greater than zero. All figures lie in a plane unless otherwise indicated.
Hope it helps.
NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!!
PLEASE READ AND FOLLOW: 11 Rules for Posting!!!
RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math
Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!;
COLLECTION OF QUESTIONS:
PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and
Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh
DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With
Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set.
What are GMAT Club Tests?
25 extra-hard Quant Tests
sanjoo Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 18 Mar 2014, 00:11
Senior Manager what if question wud have been like this..
Joined: 06 Aug 2011 If AD is 6, and ADC is a right angle, what is the area of
Posts: 402 equilateral triangle
Followers: 2 ABC?
Kudos [?]: 41 [0], _________________
given: 81
Bole So Nehal.. Sat Siri Akal.. Waheguru ji help me to get 700+ score !
Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 18 Mar 2014, 07:51
Expert's post
sanjoo wrote:
what if question wud have been like this..
Veritas Prep GMAT
Instructor If AD is 6, and ADC is a right angle, what is the area of equilateral triangle ABC?
Joined: 16 Oct 2010 Then statement 1 would have been irrelevant and statement 2 would have been incorrect. If side of an equilateral triangle is 12, the altitude would be
Posts: 4192 (\sqrt{3}/2)*12 = 6*\sqrt{3}
Location: Pune, But AD is given to be 6.
Followers: 897
Kudos [?]: 3816 [0] Veritas Prep | GMAT Instructor
, given: 148 My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
gmatclubot Re: If AD is 6 and ADC is a right angle, what is the area of [#permalink] 18 Mar 2014, 07:51
Similar topics Author Replies Last post
What is the area of a parallelogram with an angle 45 sgrover 2 23 Jun 2006, 15:56
1 If angle BAD is a right angle, what is the length of side gottabwise 5 23 Jan 2010, 15:43
If AD is 6, and ADC is a right angle, what is the area of sm0k3rz 2 01 Jun 2010, 04:22
4 If angle BAD is a right angle, what is the length of side chiragatara 11 30 Nov 2010, 10:09
1 If AD is 6, and ADC is a right angle, what is the area of MackyCee 7 23 Jul 2011, 02:04 | {"url":"http://gmatclub.com/forum/if-ad-is-6-and-adc-is-a-right-angle-what-is-the-area-of-101200-20.html","timestamp":"2014-04-23T19:08:32Z","content_type":null,"content_length":"179607","record_id":"<urn:uuid:02ce8e2c-8ef3-4dfe-bced-5f4db7807d60>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Hills, NY Geometry Tutor
Find a North Hills, NY Geometry Tutor
...I have an undergraduate degree in Mathematics and Master's and PhD degrees in Computer Science. I am a NJ-certified math teacher (K-12), scored a perfect 200 on the Praxis II Middle School
Mathematics test, and received an ETS Recognition of Excellence on the Mathematics Content Knowledge exam. ...
36 Subjects: including geometry, reading, ESL/ESOL, algebra 1
...I am very familiar with the new comon core standards that students are expected to demonstrate. In one or two sessions, I am able to assess each student's needs and tailor a lesson plan to
ensure that he or she thoroughly understands all the necessary material and is well-prepared for exams.For ...
29 Subjects: including geometry, reading, biology, piano
...My patient, polite and easy-going manner coupled with my ability to model various methods for understanding and “seeing” things, accentuates my success as a teacher and tutor. My teaching
strategies include giving a mixed review of problems. I also always have the student model and explain what they've learned, showing their process for deriving an answer.
16 Subjects: including geometry, chemistry, calculus, algebra 1
...I have taken Symbolic Logic, Mathematical Logic, Advanced Logic, Computability, and Modal Logic, each with excellent marks. While taking Symbolic Logic the professor requested to refer students
in the class to me for questions with the material, and after taking the class, asked me to serve as a...
32 Subjects: including geometry, calculus, physics, statistics
...I started with a major test prep company, and have experience in the following tests: SAT I (math, reading, and writing), ACT, GRE, GMAT, MCAT Verbal, LSAT, SSAT, SHSAT, ISEE, SSAT, and SAT
Subject Tests and AP Tests (Math level 1 and 2, World History, US History, and Government). I have tutored...
42 Subjects: including geometry, reading, English, biology | {"url":"http://www.purplemath.com/North_Hills_NY_Geometry_tutors.php","timestamp":"2014-04-18T08:39:34Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:0da3bb42-d2b0-4982-aa83-3cb5636baaa0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from Jon Harrop
Date: -- (:)
From: Jon Harrop <jon@j...>
Subject: Re: [Caml-list] Set union
On Friday 25 February 2005 10:56, Radu Grigore wrote:
> When all elements of a set are bigger than all elements of the other
> set the Set.union function seems to simply add the elements of the
> small set to the bigger set one at a time. So the complexity looks
> like O(m lg n) where m is the size of the smaller set. For other cases
> the process is a bit more complex: take the root of the short tree,
> split the large tree into smaller/larger elements based on that root,
> compute union of "small" trees, "compute union of "large" trees",
> merge them. If I'm not mistaken this is O(m lg n) too.
Yes, my guess was O(n ln(n+N)) because the last insertions from the smaller
set may be into a set with n+N elements. In contrast, the STL set_union
function is T(n+N), which explains why it is so slow.
Now, what about best case behaviour: In the case of the union of two equal
height, distinct sets, is OCaml's union T(1)?
Dr Jon D Harrop, Flying Frog Consultancy Ltd. | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2005/02/f50f17719f172da5096cdde268b2f88d.en.html","timestamp":"2014-04-20T03:46:14Z","content_type":null,"content_length":"7728","record_id":"<urn:uuid:386fac44-f041-4e85-95a2-ed1b808ef6d0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Induction and formulating equations
September 18th 2010, 11:32 PM #1
Sep 2010
Induction and formulating equations
While browsing, I came across equations for finding the sums of certain series's, the first is (n²+n)/2 for "1+2+3...+n", the second is (1/6)n(n+1)(2n+1) for "1²+2²+3²...n²".
I found out how to come up with the first equation through reasoning: n(n+1)/2 is basically the median times n, and since the difference in this series is 1, and the series begins with 1, the
median is the same as the mean, so the mean times the number of numbers in the series finds the total.
Now what I can't figure out is how to formulate the second equation, any ideas?
Why the 3x multiplier in 3(1²+2²+...+n²)?
The shaded area on the right is equal to the total area of the shaded squares on the left (small squares with the same type of shading from various big squares are collected together). Also, in
the picture on the right, the blank area to the left of the shaded figure is equal to 1*1 + 2*2 + 3*3 + 4*4 + 5*5: look at it bottom up. The same goes for the blank area right of the shaded
figure. Thus, the total area of the rectangle on the right is three times the total area of the squares on the left.
And you already know that $1 + 2 + 3 + \dots + n = \frac{1}{2}n(n+ 1)$.
So that means
$3(1^2 + 2^2 + 3^2 + \dots + n^2) = (1 + 2 + 3 + \dots + n)(2n + 1)$
$= \frac{1}{2}n(n + 1)(2n + 1)$.
$1^2 + 2^2 + 3^2 + \dots + n^2 = \frac{1}{6}n(n + 1)(2n + 1)$.
While browsing, I came across equations for finding the sums of certain series's, the first is (n²+n)/2 for "1+2+3...+n", the second is (1/6)n(n+1)(2n+1) for "1²+2²+3²...n²".
I found out how to come up with the first equation through reasoning: n(n+1)/2 is basically the median times n, and since the difference in this series is 1, and the series begins with 1, the
median is the same as the mean, so the mean times the number of numbers in the series finds the total.
Now what I can't figure out is how to formulate the second equation, any ideas?
You could also create some numerical patterns as follows...
$=1+(1+3)+(1+3+3+2)+(1+3+3+3+2+2+2)+(1+3+3+3+3+2+2+ 2+2+2+2)+....$
Two of these are very straightforward to evaluate.
1 appears n times.
3 appears in (n-1) brackets and the sum is a simple arithmetic series of (n-1) terms.
2 appears in (n-2) brackets and the sum is the sum of triangular numbers
whose sequence can be seen on the 3rd diagonal from the top of Pascal's triangle.
The sum of triangular numbers can be seen on the diagonal below that (the 4th from the top).
These sums are $\binom{n+2}{3}$
As there are (n-2) terms in our sum, the sum within brackets of the 2's is $\binom{n}{3}$
$\displaystyle\ 1^2+2^2+3^2+...+n^2=n+3\frac{(n-1)(n)}{2}+2\frac{n(n-1)(n-2)}{3!}$
$=\displaystyle\ n+3\frac{n(n-1)}{2}+2\frac{n\left(n^2-3n+2\right)}{6}$
$=\displaystyle\frac{n\left(2n^2+3n+1\right)}{6}=\f rac{n(2n+1)(n+1)}{6}$
Last edited by Archie Meade; September 22nd 2010 at 03:32 AM. Reason: added diagram
September 18th 2010, 11:49 PM #2
September 20th 2010, 08:17 AM #3
Sep 2010
September 20th 2010, 01:28 PM #4
MHF Contributor
Oct 2009
September 20th 2010, 05:28 PM #5
September 21st 2010, 04:10 AM #6
MHF Contributor
Dec 2009 | {"url":"http://mathhelpforum.com/algebra/156664-induction-formulating-equations.html","timestamp":"2014-04-16T19:13:31Z","content_type":null,"content_length":"52152","record_id":"<urn:uuid:4fe2dd24-310a-4423-8768-67cc0710681a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
solve after sum to product
July 20th 2008, 02:36 PM #1
Dec 2007
solve after sum to product
need values after using sum to product formula.
I used sum to product and get 2sin4xcosx=0
setting each factor to 0, 2sin4x=0 and cosx=0,
I have the answers in the book, but need to know how to get to the answers. x=0, pi/4, pi/2 etc. (adding [(npi)/4] each time.
Hello, dashreeve!
Are you really having trouble solving elementary trig equations?
Solve: . $2\sin4x\:=\:0\:\text{ and }\;\cos x\:=\:0$
$2\sin4x \:=\:0 \quad\Rightarrow\quad \sin4x \:=\:0 \quad\Rightarrow\quad 4x \:=\:0 + \pi n \quad\Rightarrow\quad x \:=\:\frac{\pi}{4}n \;\;{\color{blue}[1]}$
$\cos x \:=\:0 \quad\Rightarrow\quad x \:=\:\frac{\pi}{2} + \pi n \;\;{\color{blue}[2]}$
But all the solutions of [2] are included in [1].
Therefore, the solution is: . $x \:=\:\frac{\pi}{4}n\;\;\;\text{ for }n \in I$
thanks for the encouraging remarks! I'm having a lot of trouble understanding this stuff actually. And half way through the course now, and haven't seen the technique you use there (which is much
easier than what I've been doing). Did I miss the rule about sin4x=0 can move to 4x=0+pi(n)??? When was I supposed to learn that? I know my professor has us jumping around the book, but I can't
imagine I would forget that little rule..
Hello, dashreeve!
Suppose we have: . $2\sin 3x \:=\:1$
Then we have: . $\sin3x \:=\:\frac{1}{2}$
We know that: . $\sin\frac{\pi}{6} \:=\:\frac{1}{2}\:\text{ and }\:\sin\frac{5\pi}{6} \:=\:\frac{1}{2}$
So the angle is either $\frac{\pi}{6}\,\text{ or }\,\frac{5\pi}{6}$ . . . plus some multiple of $2\pi.$
So we have: . $3x \;=\;\begin{Bmatrix}\dfrac{\pi}{6} + 2\pi n \\ \\[-3mm]\dfrac{5\pi}{6} + 2\pi n \end{Bmatrix}$
Therefore: . $x \;=\;\begin{Bmatrix}\dfrac{\pi}{18} + \dfrac{2\pi}{3}n \\ \\[-3mm] \dfrac{5\pi}{18} + \dfrac{2\pi}{3}n \end{Bmatrix}$
I see. Thanks for your help on several questions. It's really teaching me a lot.
July 20th 2008, 03:10 PM #2
Super Member
May 2006
Lexington, MA (USA)
July 20th 2008, 03:21 PM #3
Dec 2007
July 21st 2008, 08:19 AM #4
Super Member
May 2006
Lexington, MA (USA)
July 23rd 2008, 08:25 AM #5
Dec 2007 | {"url":"http://mathhelpforum.com/trigonometry/44144-solve-after-sum-product.html","timestamp":"2014-04-17T18:28:46Z","content_type":null,"content_length":"44608","record_id":"<urn:uuid:b247c3ff-d6c7-4550-9dbb-0ab24b19334e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Current Electricity
Electric Current
The motion of charges constitue electric current. The electric current is defined as the ratio of flow of charges. If a charge 1 flow through any cross-section of the conductor in time than,
$\text{electric} \, \, current \, \, i = \dfrac{q}{t} \cdots \text{equation 1a}$
If small charge dq flows in time dt, then,
$\text{Current} \,\, = \dfrac{dq}{dt} \cdots \text{equation 1b}$
The unit of electric current in M.K.S. and S.I. system is coulomb/second or Ampere. Ampere is the fundamental unit in S.I. system. The electric current has direction, but it is a scalar quantity,
since it does not obey the laws of vector addition. Conventionally the direction of current is taken along the direction of flow of positive charges and opposite to the direction of flow of negative
The electric current in metals is due to flow of free electrons.
Current Density
The current density at any point inside a conductor is defined as a vector quantity whose magnitude is equal to current through infinitesimal area at that point, the area being normal to the
direction of flow of current and whose direction is along the direction of current at that point.
If $A_n$ is small area at point P, normal to current I, then:
$\text{Current Density} \, \, J = \dfrac{I}{A_n} \cdots \text{equation} \, \, 2a$
If the plane of the small area A is not normal to current, but makes an angle $\theta$ is not normal to current, but makes an angle $\theta$ with the normal to current, then:
$J = \dfrac{I}{A_n} = \dfrac{I}{A \, \, cos \theta} \cdots \text{equation} \, \, 2b$
The unit of current density is $\dfrac{amp}{m^2}$
From equation (2b) we have,
$I = J A cos \theta = \overrightarrow{J} \, \, \overrightarrow{A} \cdots \text{equation} \, \, 3$
Drift Velocity
When no potential difference is applied across a conductor, the free electrons are in thermal equilibrium with the rest of the conductor and are in random motion. That is the average velocity vector
of free electrons is zero and consequently this motion does not constitute a net transport of charge across any section of the conductor and hence there is no current in the conductor.
If a potential difference is applied across a conductor, the electrons gain some average velocity in the direction of positive potential. This average velocity is superimposed over the random
velocity of electrons and is called the drift velocity.
The speed of random motion is determined by temperature by,
$\dfrac{1}{2} m v^2 = \dfrac{3}{2} K T$
$v = \sqrt{ ( \dfrac{3 K T}{m} ) }$
Where k is Boltzmann’s constant = $1.38 \times 10^{-23}$ Joule/Kelvin.
Its order is $10^5$ m/sec, while the drift velocity $v_d$ is determined by potential difference (V) applied across the conductor. The order of drift velocity is $10^{-4} m / s$
Relation between drift velocity and potential difference: If V is the potential difference applied across a conductor of length l, then electric field strength $E = \dfrac{V}{l}$
Force of electron F = e E
If m is the mass of electron, acceleration produced,
$a = \dfrac{F}{m} = \dfrac{eE}{m} \cdots \text{Equation 1}$
Average random velocity of free electrons, u=0. If ‘v’ is the velocity just before the start of next collision the n,
$Drift \, \, Velocity \, \, v_d = \dfrac{u + v}{2} = \dfrac{0 + v}{2} = \dfrac{v}{2}$
If $\tau$ is time between successive collisions, or relaxation time, then $v = a \tau = 0 + a \tau = a \tau$
Using (1), we get,
$v = \dfrac{e E}{m} \tau$
$Drift \, \, velocity \, \, v_d = \dfrac{e E \tau}{2 m}$
Clearly drift velocity is directly proportional to the electric field strength E.
As $E = \dfrac{V}{i}$
$v_d = \dfrac{e ( \dfrac{V}{I} ) \tau}{2m} = \dfrac{e v \tau}{2 m l}$
The relaxation time changes with change of temperature. Actually it decreases with rise of temperature. Thus at a given temperature drift velocity $v_d$ is directly proportional to the potential
difference, inversely proportional to length and is independent of cross-sectional area.
Ohm’s Law and Electrical Resistance
When a potential difference is applied across a conductor, a current ‘I’ is set up in the conductor. According to Ohm’s law under given physical conditions e.g. at constant temperature and pressure,
the potential difference applied across a conductor is directly proportional to the current produced in it.
$V \propto I \, \, \, \, V = r I \cdots \text{equation} \, \, 1$
When the constant ‘R’ is called the electrical resistance of the given conductor.
Conductance: The reciprocal of resistance is called the conductance. It is denoted by K.
$K = \dfrac{1}{R}$
The unit of resistance R is volt/ampere = Ohm and that of conductance is mho or Siemen.
Physical Concept of Electrical Resistance: The free electrons of the conductor make collisions with themselves and imperfections of lattice. The electric current is opposed by these collisions. The
net hindrance offered by a conductor to the flow of current is called the electrical resistance of the conductor. Naturally the electrical resistance of a conductor depends upon the size, geometry,
temperature and internal structure of the conductor.
Ohmic and Non-ohmic conductors
If voltage-current graph of a conductor is a straight line, it is said to be ohmic conductor: The examples are metallic conductors Cu, Fe, tungsten etc, provided the current is not too high; because
when current becomes high, the temperature of conductors becomes sufficiently large to change the resistance of conductor; so that linearity between V and I breaks down.
If voltage current graph of a conductor is nonlinear, it is said to be non-ohmic conductor. The examples are the torch bulb, junction diode, thermistor etc.
In this case the resistance varies with voltage and is called the dynamic resistance. It if found by the formula $R = ( \dfrac{ \delta V}{\delta I} )$ near given voltage.
Resistivity and Conductivity
For a given conductor of uniform cross-section A and of length l, the electrical resistance R is directly proportional to length I and inversely proportional to cross-sectional area A.
$R \propto \dfrac{I}{A} \, \, \, or \, \, \, R = \dfrac{\rho l}{A} \cdots \text{equation} \, \, 1$
Where $\rho$ is a constant of proportionality called the specific resistance or resitivity of the metal of the conductor at given temperature.
From (1), $\rho = \dfrac{RA}{l} \cdots \text{equation} \, \, 2$
If l=1m, A = $1 m^2$ , then $\rho = R$
That is the specific resistance of the material of the conductor defined as the resistance offered by the conductor of 1 m length and 1 $m^2$ cross-sectional area, when current flow is normal to
The unit of resistivity is ohm X meter.
The reciprocal of resistivity is called the conductivity. The unit of conductivity is mho/meter.
Ohm’s law in alternative from may be expressed as $J = \sigma E \cdots equation \, \, 3$
Where J = Current density and E = electric field strength.
Recasting a wire of a given mass
Usally $R = \dfrac{\rho l}{A}$ i.e. resistance is proportional to length of the conductor. But if a wire of given mass is recasted to increase its length, then area of cross-section also decrease; so
this must be taken into account.
As, $R = \dfrac{\rho l}{A} \cdots equation \, \, 1$ and mass, M = volume X density = (Al d) = constant.
$A = \dfrac{m}{id} \cdots equation \, \, \, 2$
Substituting this (1), we get,
$R = \dfrac{\rho l}{ ( \dfrac{m}{ld} ) } = \dfrac{\rho d}{m} l^2$
As $\rho$ , d and m are constants.
$R' \propto l^2$
Thus if a wire of initial resistance R is stretched to make its length n-times, then the new resistance become $n^2 - times$ .
$R' = n^2 R \cdots equation \, \, 3$
But if radius is given, then from (2)
$l = \dfrac{m}{Ad}$
Substituting this in (1), we get
$R = \dfrac{\rho ( \dfrac{m}{Ad} )}{A} = \dfrac{\rho m}{A^2 d}$
As $\rho$ , m and d are constants,
$R \propto \dfrac{1}{A^2}$
As A = $r^2$ , $R = \dfrac{1}{1^4}$
Thus if a wire of initial resistance R is stretched to make its radius $\dfrac{1}{n}$ –time, then the new resistance becomes $n^4$ – times.
$R' = n^4 R$
Variation of Resistance with temperature
The resistance of a conductor varies with temperature figure represents the graph of variation of resistance of pure metal with temperature.
Mathematically the dependence of (R) on temperature (t) is expressed as:
$R_i = R_0 ( 1 + \alpha t + \beta t^2 ) \cdots equation \, \, 1$
Where $\alpha > \beta$ are temperature coefficents of resistance. Their values vary from metal to metal. If temperature t is not sufficiently large as in most practical cases, then equation (1) may
be expressed as:
$R_t = R_0 ( 1 + \alpha t )$
The constant $\alpha$ is called the temperature coefficient of resistance of the material $\alpha$ is positive for metals and negative for semi-conductors and electrolytes. If $R_1 \, \, and \, \,
R_2$ are the resistances of the same specimen at temperature $t_1 \, \, and \, \, t_2 \, \, _0 C$ , then
$R_2 = R_1 [ 1 + \alpha ( t_2 - t_1 ) ]$
It is a heat sensitive device made of a semi-conductor. The temperature coefficient of thermistor is negative but us unusually large.
The voltage current graph of thermistor is unusual as shown in figure.
Thermistor is used:
(i) In resistance thermistors to measure low temperature of the order of 10 K.
(ii) To safeguard electronic circuits against current jumps, because initially thermistor has high resistance when cold and its resistance decrease appreciably when it warms up.
Color code of carbon Resistances
It is indicate resistance and its percentage reliability. The color bands are formed from left to right. The first three bands give the value of resistance. The first and second band indicate the
first and second significant digit while the third band gives the number of zeros which follow the first two digits, often called multiplier. The fourth band represents its tolerance. Absence of any
fourth band means a tolerance of 20%.
│Memory Letter │Color │Band ‘1’│Band “2’│Band ‘3’ │Band ‘4’ │
│ │ │ │ │ │ │
│ │ │ │ │(multiplier)│ │
│B │Black │0 │0 │$10^0$ │Gold 5% │
│B │Brown │1 │1 │$10^1$ │Silver 10% │
│R │Red │2 │2 │$10^2$ │No Color 10%│
│O │Orange│3 │3 │$10^3$ │ │
│Y │Yellow│4 │4 │$10^4$ │ │
│G │Green │5 │5 │$10^5$ │ │
│B │Blue │6 │6 │$10^6$ │ │
│V │Violet│7 │7 │$10^7$ │ │
│G │Grey │8 │8 │$10^8$ │ │
│W │White │9 │9 │$10^9$ │ │
The memory letters may be remembered by the following sentence:
“B.B.ROY 9of) Great Britain (has) Very Good Wife.”
Sources of EMF
A source of emf is a device that drives charge carriers from one point to another. The sources of emf may be chemical, thermal, electromagnetic, piezoelectric etc. Some sources of emf are
thermocouple, piezoelectric crystal, photocell and electric cell. In a thermocouple heat energy is converted into electrical energy; in a photocell light energy is converted into electrical energy;
in an electric cell (usually called a cell), chemical energy is converted into electric energy.
These cells are divided into two categories:
(i) Primary cells: A primary cell consists of an electrolyte, two electrodes and a depolarizing agent. The Lechlanche cell, Daniel cell, dry cell are examples of primary cells. They are used where no
continuous current is required. They have high internal resistance and their material used up. Moreover they cannot be recharged.
(ii) Secondary cells: In secondary cell the chemical reaction is reversible; so they can be recharged. Its internal resistance is smaller than a primary cell. They used where continuous supply of
current is required.
When a secondary cell is giving current in a circuit; the chemical energy is convened into electrical energy and during the process its emf and density of electrolyte falls; while during charging
process; the electrical energy is convened into chemical energy and its emf and density of electrolyte increase and finally becomes steady.
Remarks: When a secondary cell is charged, its positive terminal is connected to positive terminal of the charging supply.
Electromotive Force and Potential Difference
The E.M.F of a source of potential difference is the potential difference between the terminals of the source when no current is drawn from it i.e. when the source is in the open (or infinite
resistance) circuit. Alternatively the e.m.f. of a cell is defined as the work done in moving per unit positive test charge in the entire closed circuit including the solution of the cell.
$E = \dfrac{W}{q_0}$
When the terminals of a cell are connected to an external resistance, the cell is said to be in closed circuit. The potential difference across the terminals of a cell in closed circuit is called the
potential difference across the external resistance. Alternatively the p.d. across the external circuit is the work done in carrying per unit positive test charge from one terminal to another in
external circuit.
$V = \dfrac{W_{ext}}{q_0}$
In general E > V : but E may be less than V, if a opposite current flows in the cell e.g. when the cell is being charged.
Internal Resistance of Cell (r): The internal resistance of a cell is the resistance offered by the solution of the cell between its electrodes. It is denoted by r.
The internal resistance of a cell:
(i) Varies directly as concentration of the solution of the cell.
(ii) Varies directly as the separation between electrodes i.e. length of solution between electrodes.
(iii) Varies inversely as the area of immersed electrodes.
(iv) is independent of the material of electrodes.
Terminal p.d. of a cell: If a cell of emf E internal resistance is giving current I in external resistance R, then its terminal p.d. V = E — Ir = IR and internal resistance of a cell
$r = ( \dfrac{E}{V} - I ) R$
Combination of Resistances
There are two arrangements for connecting a number of resistances.
1. Series combination : In this arrangement the resistances are connected end to end in succession (Fig.). In this combination,
(i) The current in each resistor is same.
(ii) The total p.d. V across the combination is equal to the sum of the p.d. across individual resistance.
$V = v_1 + V_2 + V_3$
(iii) The equivalent or effective resistance (r) of the combination is equal to the sum of individual resistance.
$R = R_1 + R_2 + R_3$
2. Parallel combination: In this arrangement one each end of each resistor is connected at one point and the other end of each to other point. Then these two points are connected across a source
p.d. v.
In this arrangement:
(i) The p.d. (v) across each resistor is the same.
(ii) The current is different in different resistances such that the total current flowing in the combination is shard by the individual resistances.
$I = i_i + i_2 + i_3$
(iii) The equivalent or effective resistance (R) of the combination is given by:
$\dfrac{1}{R} = \dfrac{1}{R_1} + \dfrac{1}{R_2} + \dfrac{1}{R_3}$
Or effective conductance K is the sum of inductance resistors.
$K = K_1 + K_2 + K_3$
Kirchhoff’s Laws
Kirchhoff in 1882 gave two laws which are used to solve the complicated problems.
1. First law: It states that “The algebraic sum of currents meeting at any junction is zero.”
$\sum I= 0$
Conventionally the currents towards the junction are taken as positive; while those directed away from the junction are taken as negative. Accordingly $i_1 \, \, and \, \, \, i_2$ are positive while
$i_3 \, \, \, and\, \, \, i_4$ are negative for junction O.
$i_1 + i_2 - i_3 - i_4 =0$
2. Second law (or Loop Law): It states that “The algebraic sum of potential differences across each element of a closed circuit (or loop) is zero.
$\sum V = 0$
Conventionally the potential fall is taken as negative; while potential rise is taken as negative. In a resistor the current flows from higher to lower potential; therefore potential difference
across a resistor is taken as negative if we proceed in the direction of current and is taken as positive if we proceed opposite to direction of current.
If there is source of emf E, there is rise of potential from negative to positive terminal and hence is taken positive; while there is fall of potential from positive to negative terminal and hence
is taken negative. For example we consider a simple circuit and proceed along path abcda. Let ‘I’ be the current in circuit. Then,
$- I R - I r + E = 0$
That is,
$E = I R + I r$
This law is based on conservation of energy.
Wheatstone’s Bridge
The Wheatstone’s bridge is shown in figure P, Q, R and S are four resistances, G is galvanometer and E is a battery. The Wheatstone’s bridge is said to be balanced when no current flows in
galvanometer i.e when potential of B = potential of D.
Condition of balance, $\dfrac{P}{Q} =\dfrac{R}{S}$
(i) The sensitivity of Wheatstone’s bridge is maximum when all the four resistances become equal.
$P = Q = R = S$
(ii) If battery and galvanometer are interchanged, the balanced position of bridge remains unchanged while its sensitivity changes.
(iii) When Wheatstone’s bridge is balanced, the resistance in arm BD may be ignored while calculating the equivalent resistance of bridge between A and C.
(iv) To calculate the resistance between terminals B and D, the resistance of G is counted. In this case P and R are in series, Q and S are in series, while all the three arms (arm containing P, R),
G, and (arm containing Q, G) are in parallel.
Combination of Cells
There are three possible arrangement of a number of cells.
1. Series Arrangement: In this arrangement the positive terminal of one cell is connected to negative terminal of the other in succession. Figure represents n cells, each of e.m.f. E and internal
resistance r connected in series and an external resistance R is connected across the combination
The net e.m.f = nE
Net internal resistance = nr
Total resistance of circuit = R + nr
$Current \, \, I = \dfrac{Net EMF}{Net \, \, Resistance}$$= \dfrac{nE}{R + nr}$
If R >> nr, then $I = n \dfrac{E}{R} = n time \, \, current \, \, due \, \, to \, \, one \, \, cell$
Obviously for maximum current, the cells should be connected in series, when net external resistance >> net internal resistance.
2. Parallel Arrangement: In this arrangement the positive terminals of all cells are connected to one point and negative terminals to the other point. Figure represents m cells, each of e.m.f E and
internal resistance r, connected in parallel and an external resistance R is connected across the combination.
Net e.m.f + E
Net internal resistance $R_{int} = \dfrac{r}{m}$
Total resistances of circuit = $R + \dfrac{r}{m}$
$Current I = \dfrac{E}{R} + ( \dfrac{r}{m} )$
Obviously for maximum current, the cells should be connected in parallel when net internal resistance >> net external resistance.
3. Mixed Grouping: In this arrangement the total number N of cells is divided into m groups in parallel and in each group n cells are connected in series. Fig. represents the mixed grouping of N = mn
cells, having m rows of cells connected in parallel, each row containing n cells in series. The e.m.f. of each cell is E and internal resistance of each cell is r. The combination is connected to
external resistance R.
Net e.m.f = nE
Net internal resistamce $r_{int} = \dfrac{nr}{m}$
Net resistance of circuit = $R + \dfrac{nr}{m}$
$Current \, \, = \dfrac{n E}{R + ( \dfrac{nr}{m} ) } = \dfrac{m n r}{m R + n r}$
For maximum current:
$R = \dfrac{n r}{m}$$r_{ext} = r_{int}$
Thus for maximum current, the cells should be connected in mixed grouping when external resistance R = net internal resistance.
$R_{ext} = R_{int}$
Cells of different emf’s in parallel: If two cells of different emfs $E_1$ and $E_2$ and of different internal resistances $r_1$ and $r_2$ are connected in parallel, then the net effective emf,
$E = \dfrac{ \dfrac{E_1}{r_1} + \dfrac{E_2}{r_2}}{ \dfrac{1}{r_1} + \dfrac{1}{r_2}}$ and net internal resistance.
$r_{int} = \dfrac{r_1 r_2}{r_1 + r_2}$
Potentiometer is an ideal device to measure the p.d. between two points. It consists of a long resistance wire AB of uniform cross-section with a steady direct current set up in it by means of a main
battery B. this maintains a uniform potential gradient along the length of the wire. If $V_{AB}$ is potential differences across wire AB and $r_{AB}$ is its total resistance, then potential gradient,
$K = \dfrac{V_{AB}}{R_{AB}} = I \rho = I \dfrac{R_{AB}}{L}$
Where $\rho= \dfrac{R_{AB}}{L}$ = resistance per unit length of potentiometer wire.
If E is the e.m.f of source balanced between points A and C, then e.m.f of source,
E = K X length of AC = Kl
The potentiometer is said to act as ideal voltmeter since it is based on cancellation of currents.
Related posts: | {"url":"http://www.sciencehq.com/physics/current-electricity.html","timestamp":"2014-04-16T10:32:06Z","content_type":null,"content_length":"58536","record_id":"<urn:uuid:f0b197a8-e444-4870-8f0b-792c4a0c2e22>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Satisfying Pictures
Definition of Satisfying
1. Adjective. Providing abundant nourishment. "Four square meals a day"
2. Adjective.
Providing freedom from worry.
Definition of Satisfying
1. Adjective. That satisfies, gratifies, pleases or comforts. ¹
2. Verb. (present participle of satisfy) ¹
¹ Source: wiktionary.com
Definition of Satisfying
1. satisfy [v] - See also: satisfy
Satisfying Pictures
Click the following link to bring up a new window with an automated collection of images related to the term: Satisfying Images
Lexicographical Neighbors of Satisfying
satisfactory satisfier satnavs
satisfiability satisfiers satori
satisfiable satisfies satoris
satisfice satisfieth satoyama
satisficed satisfy satpaevite
satisficer satisfying (current term) satphone
satisfices satisfyingly satphones
satisficing satisfyingness satrap
satisfie sative satrapal
satisfied satnav satrapate
Literary usage of Satisfying
Below you will find example usage of this term as found in modern and/or classical literature:
1. A Treatise on Conic Sections: Containing an Account of Some of the Most by George Salmon (1879)
"The properties of systems of curves, satisfying one condition less than is sufficient to ... Let /< be the number of conies satisfying four conditions, ..."
2. The Law of Contracts by Samuel Williston, Clarence Martin Lewis (1920)
"CHAPTER XIX SATISFACTION OF THE STATUTE BY ACCEPTANCE AND RECEIPT OR PART PAYMENT Methods of satisfying the statute 539 Satisfaction of Section 17 540 ..."
3. Higher Mathematics for Students of Chemistry and Physics: With Special by Joseph William Mellor (1902)
"Straight Lines satisfying Conditions. The reader should work through the following ... (14) which is an equation of a straight line satisfying the required ..."
4. Roughing It by Mark Twain (2001)
"... showed us more on his arms and face, and said he believed he had bullets enough in his body to make a satisfying A FOE. pig of lead. ..."
5. The Public Records of the Colony of Connecticut [1636-1776] by Connecticut, Connecticut General Assembly, Connecticut Council, Council of Safety (Conn.)., James Hammond Trumbull, Charles Jeremy
Hoadly (1881)
"Qd. as also the incident charges of such sales, and the monies arising on the sale aforesaid to dispose of for the satisfying such debts and charges, ..."
6. Abridgment of the Debates of Congress, from 1789 to 1856: From Gales and by United States Congress, Thomas Hart Benton (1868)
"During this extended course of time—embracing periods eminently favorable for satisfying all just jemands upon the Government, the claims embraced in this ..."
7. The Elements of Analytic Geometry by Percey Franklyn Smith, Arthur Sullivan Gale (1904)
"This discussion leads to the fundamental definition : The equation of the locus of a point satisfying a given condition is an equation in the variables x ..."
Other Resources Relating to: Satisfying | {"url":"http://lexic.us/definition-of/satisfying","timestamp":"2014-04-21T02:00:10Z","content_type":null,"content_length":"34431","record_id":"<urn:uuid:bac3a642-bf72-4712-8c63-b1619b42142f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
linear combinations
April 12th 2011, 12:01 PM
linear combinations
a) write b as a linear combination of the column vectors a1 and a2
b) use the result from a to determine solution of linear system Ax = b. does the system have any other solutions? explain.
c) write c as a linear combination of a1 and a2.
I got part a) I think
b = 2a1 +a2 since 2 times a1 plus one a2 equals the b matrix.
so then part b)
x would be
2 times A is b
c) is where I am stuck
I cant figure out a combination for
I feel I need to subtract a2 from a1, but then the bottom position wont be -2 if I subtract a1 by 2a2
April 12th 2011, 01:04 PM
solve the two equations:
x-2y = -2
x+2y = -3
then x will be what you multiply a1 by, and y will be what you multiply a2 by.
hint: they won't be integers.
April 12th 2011, 01:53 PM
Thanks. The TA for our class explained it the same way.
made a matrix and reduced it and got the solution. | {"url":"http://mathhelpforum.com/advanced-algebra/177655-linear-combinations-print.html","timestamp":"2014-04-17T13:29:43Z","content_type":null,"content_length":"4605","record_id":"<urn:uuid:7434fb55-2afa-4678-adb9-fe58c476bf39>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Level Curves
hi... I am new to this topic and frustrated.
I have a curve f(x,y)= -3y/(x^2 +y^2 + 1)
I was asked to draw a level curve of this and I'm not getting anywhere with it. If anyone has any pointers, or can help me with solving this question I would be gretfull. The only other thing this
question asks is to describe it at the orgin or at (0,3) ( which is steeper).
thanks for any help. | {"url":"http://www.physicsforums.com/showthread.php?t=370943","timestamp":"2014-04-20T16:05:19Z","content_type":null,"content_length":"26089","record_id":"<urn:uuid:281e1014-5982-4df2-95e8-94f4e7053984>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Domain representations of topological spaces
Results 1 - 10 of 24
- Logical Methods in Computer Science
"... Abstract. We say that a set is exhaustible if it admits algorithmic universal quantification for continuous predicates in finite time, and searchable if there is an algorithm that, given any
continuous predicate, either selects an element for which the predicate holds or else tells there is no examp ..."
Cited by 13 (12 self)
Add to MetaCart
Abstract. We say that a set is exhaustible if it admits algorithmic universal quantification for continuous predicates in finite time, and searchable if there is an algorithm that, given any
continuous predicate, either selects an element for which the predicate holds or else tells there is no example. The Cantor space of infinite sequences of binary digits is known to be searchable.
Searchable sets are exhaustible, and we show that the converse also holds for sets of hereditarily total elements in the hierarchy of continuous functionals; moreover, a selection functional can be
constructed uniformly from a quantification functional. We prove that searchable sets are closed under intersections with decidable sets, and under the formation of computable images and of finite
and countably infinite products. This is related to the fact, established here, that exhaustible sets are topologically compact. We obtain a complete description of exhaustible total sets by
developing a computational version of a topological Arzela–Ascoli type characterization of compact subsets of function spaces. We also show that, in the non-empty case, they are precisely the
computable images of the Cantor space. The emphasis of this paper is on the theory of exhaustible and searchable sets, but we also briefly sketch applications. 1.
, 1999
"... This paper gives effective domain representations of spaces H(X) of non-empty compact subsets of effective complete metric spaces X. The domain representation of H(X) is constructed from a
domain representation of X using the Plotkin power domain construction. As an application of the representation ..."
Cited by 13 (5 self)
Add to MetaCart
This paper gives effective domain representations of spaces H(X) of non-empty compact subsets of effective complete metric spaces X. The domain representation of H(X) is constructed from a domain
representation of X using the Plotkin power domain construction. As an application of the representation an effective version of a fundamental theorem on IFS (iterated function system) is shown.
, 2000
"... A partial spatial object is a partial map from space to data. Data types of partial spatial objects are modelled by topological algebras of partial maps and are the foundation for a high level
approach to volume graphics called constructive volume geometry (CVG), where space and data are subspaces o ..."
Cited by 11 (4 self)
Add to MetaCart
A partial spatial object is a partial map from space to data. Data types of partial spatial objects are modelled by topological algebras of partial maps and are the foundation for a high level
approach to volume graphics called constructive volume geometry (CVG), where space and data are subspaces of # dimensional Euclidean space. We investigate the computability of partial spatial object
data types, in general and in volume graphics, using the theory of effective domain representations for topological algebras. The basic mathematical problem considered is to classify which partial
functions between topological spaces can be represented by total continuous functions between given domain representations of the spaces. We prove theorems about partial functions on regular
Hausdorff spaces and their domain representations, and apply the results to partial spatial objects and CVG algebras.
, 2006
"... We introduce a notion of reducibility of representations of topological spaces and study some basic properties of this notion for domain representations. A representation reduces to another if
its representing map factors through the other representation. Reductions form a pre-order on representatio ..."
Cited by 8 (4 self)
Add to MetaCart
We introduce a notion of reducibility of representations of topological spaces and study some basic properties of this notion for domain representations. A representation reduces to another if its
representing map factors through the other representation. Reductions form a pre-order on representations. A spectrum is a class of representations divided by the equivalence relation induced by
reductions. We establish some basic properties of spectra, such as, non-triviality. Equivalent representations represent the same set of functions on the represented space. Within a class of
representations, a representation is universal if all representations in the class reduce to it. We show that notions of admissibility, considered both for domains and within Weihrauch’s TTE, are
universality concepts in the appropriate spectra. Viewing TTE representations as domain representations, the reduction notion here is a natural generalisation of the one from TTE. To illustrate the
framework, we consider some domain representations of real numbers and show that the usual interval domain representation, which is universal among dense representations, does not reduce to various
Cantor domain representations. On the other hand, however, we show that a substructure of the interval domain more suitable for efficient computation of operations is equivalent to the usual interval
domain with respect to reducibility. 1.
- the Journal of Logic and Computation , 2007
"... It is well known that to be able to represent continuous functions between domain representable spaces it is critical that the domain representations of the spaces we consider are dense. In this
article we show how to develop a representation theory over a category of domains with morphisms partial ..."
Cited by 7 (2 self)
Add to MetaCart
It is well known that to be able to represent continuous functions between domain representable spaces it is critical that the domain representations of the spaces we consider are dense. In this
article we show how to develop a representation theory over a category of domains with morphisms partial continuous functions. The raison d’être for introducing partial continuous functions is that
by passing to partial maps, we are free to consider totalities which are not dense. We show that the category of admissibly representable spaces with morphisms functions which are representable by a
partial continuous function is Cartesian closed. Finally, we consider the question of effectivity. Key words. Domain theory, domain representations, computability theory, computable analysis. 1
- Department of Mathematics, Uppsala University , 2005
"... In this paper we consider admissible domain representations of topological spaces. A domain representation D of a space X is λ-admissible if, in principle, all other λ-based domain
representations E of X can be reduced to D via a continuous function from E to D. We present a characterisation theorem ..."
Cited by 6 (1 self)
Add to MetaCart
In this paper we consider admissible domain representations of topological spaces. A domain representation D of a space X is λ-admissible if, in principle, all other λ-based domain representations E
of X can be reduced to D via a continuous function from E to D. We present a characterisation theorem of when a topological space has a λ-admissible and κ-based domain representation. We also prove
that there is a natural cartesian closed category of countably based and countably admissible domain representations. These results are generalisations of [Sch02]. 1
, 2003
"... It is shown that every compact metric space X is homeomorphically embedded in an !-algebraic domain D as the set of minimal limit elements. ..."
Cited by 4 (3 self)
Add to MetaCart
It is shown that every compact metric space X is homeomorphically embedded in an !-algebraic domain D as the set of minimal limit elements.
- Prospects for Hardware Foundations, Lecture Notes in Computer Science , 1998
"... We present a general theory for the computation of stream transformers of the form F: (R-- B)-- (T-- A), where time T and R, and data A and B, are discrete or continuous. We show how methods for
representing topological algebras by algebraic domains can be applied to transformations of continuous ..."
Cited by 3 (3 self)
Add to MetaCart
We present a general theory for the computation of stream transformers of the form F: (R-- B)-- (T-- A), where time T and R, and data A and B, are discrete or continuous. We show how methods for
representing topological algebras by algebraic domains can be applied to transformations of continuous streams. A stream transformer is continuous in the compact-open topology on continuous streams
if and only if it has a continuous lifting to a standard algebraic domain representation of such streams. We also examine the important problem of representing discontinuous streams, such as signals
T-- A, where time T is continuous and data A is discrete.
- Logic, Problem Solving, Programs, & Computers , 1992
"... on topological spaces via domain representations ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=785334","timestamp":"2014-04-21T16:11:25Z","content_type":null,"content_length":"35689","record_id":"<urn:uuid:73021a9c-d5f9-49f6-953a-05cd1a1b230a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US5191640 - Method for optimal discrete rendering of images
The present invention is a method for rendering two-dimensional continuous-tone images on bi-level digital output devices. Increasingly sophisticated and powerful computer resources are able to
manipulate a variety of generic data types. Text data and organized databases were the earliest of data types. Currently, graphics and image data are being created, transferred and manipulated in
general purpose computer systems. These new forms of data pose new problems for the computer systems designer. To the user, displaying an image on any of a wide variety of devices must be as
transparent as displaying ASCII text documents. Office video displays with differing gray level capacity, laser printers, and home dot-matrix printers, all of various resolutions and aspect ratios,
must render a given image in a similar way. To achieve this transparency of display, each output device should have a dedicated pre-processor that transforms generic digital image data to a form
tailored to the characteristics peculiar to that device.
The binary nature of a given output device, for example a laser printer or video display, creates particular problems in rendering continuous-tone images. Outside of photographic film and some
thermal sensitive materials, there does not exist a practical method of producing true continuous-tone hard copy. Computer hard copy devices are almost exclusively binary in nature. An output device
is binary if the lowest resolvable image portion (often called a pixel or bit) is either on or off, not some intermediate value in between. While the video displays associated with workstations and
terminals are certainly capable of true continuous-tone representations, they are often implemented with frame buffers that provide high spatial resolution rather than full gray-scale capability.
Digital half-toning techniques, implemented in an output pre-processor system, comprise any algorithmic process which creates the illusion of continuous-tone image from careful arrangement of binary
picture elements. Since most output devices are designed for display of binary dot-matrix text or graphics, digital half-toning provides the mechanism to display images on them as well.
An adequate digital half-toning technique must contend with two particular problems. Both problems stem from the interactions of local groups of displayed pixels in the final perception of the
rendered image. First, peculiarities in the output device may allow neighboring pixel intensities to affect other pixels. For example, a printer may allow the bleeding of dye or toner from one
localized pixel to another, altering the printed image. Second, the human eye itself tends to read groups of pixels together. This effect is often used to advantage to create the illusion of
continuous-tone color in video displays. However, the effects of low-level processing by the human eye and brain can alter the perception of a theoretically accurate display of an image, leading to
further undesired results.
A satisfactory method for rendering continuous-tone images onto any available binary display must take into account both display interactions and perceptual interactions in the processing of images.
Such a method should allow a variety of different display system characteristics to be used interchangeably, without great difficulty, in order to process an image for a given display. In addition,
the method should explicitly take into account the interactions of pixels on each other, both in the physical output of the display and in the perception by the human eye and brain. The method should
also permit the processing of images for a given display system in a minimum amount of time, and without requiring inordinate amounts of computational power.
The present invention provides a novel and efficient algorithm for converting a two-dimensional continuous-tone image into a bitmap suitable for printing or display from a bi-level digital output
device. The invention accounts for both the effects of the output device and the characteristics of eye-brain reception on local neighborhoods of pixels to provide an output image closely resembling
the original image in a reasonable amount of time.
The inventive procedure begins with an original two-dimensional continuous-tone monochrome image. The original image is first decomposed into a lower resolution bitmap by laying a coordinate grid
over the original image and assigning a grey-scale intensity value to each pixel of the grid, using standard sampling techniques. For convenience, the grid of pixels for the original image has the
same spacing and dimensions as that corresponding to the output device's images, although the present method may be extended to instances where this is not the case. The present invention converts
this original sampled image, being an array of multiple intensity values, into a printable bitmap, being an array of only on and off values. Throughout the description, printing is understood to
include actual printing from dot matrix, laser or other printers and also the displaying of images on any video display, and any other display technology.
First, a printer model applicable to the particular output device is specified. The printer model is an algorithm for describing how the printer takes a particular printable bitmap and produces a
given output. Often, various ink and paper interactions, such as non-linear or non-digital response of the imaging medium, dot spread or distortion, dot overlap, etc., prevent a rigorous one-to-one
correspondence between the binary bitmap input and the printed bitmap output, or rendering. The printer model captures the essential aspects of the peculiarities in the device's output, and may be
specified by printing and testing various binary test patterns. The printer model is a local model in the sense that the value of the printer model at a given point depends only upon a small number
of nearby pixels in the bitmap. The simple printer model used in the current description takes into account the intensities of the 8 nearest neighbors of a given pixel in calculating the central
pixel's output intensity.
Second, a perception model for describing how the eye and brain perceive a bi-level output image is provided. The eye and brain, in receiving a printed bitmap, tend to read each pixel as some
combination of nearby pixels. The perception model captures in a very approximate way the effects produced by this low-level processing by the eye and brain. A simple model used in the current
description computes the perceived intensity of each output pixel as a weighted average of the printed pixel intensities in the nine-by-nine neighborhood centered around that pixel, similar to the
operation of the printer model. The application of the perception model onto the rendered output of the printer model provides a perceived bitmap image, or perception.
Third, a comparison model provides a method of comparing the perception of the original image with the perception of the printed image. The comparison is local in a similar way as the printer and
perception models are local: it should be expressible as the sum of quantities, one quantity per pixel, with each quantity depending only on the values of the two images in some small neighborhood of
a particular pixel. For the present description, a relatively simple comparison model subtracts the intensity value of the pixel of the perceived printed image from that of the corresponding pixel of
the perceived original image and adds the absolute difference values for each pixel to arrive at an overall difference value for the two images.
The printer model describes how a given bitmap image emerges from a given output device: neighboring pixels may affect the physical intensity of a given pixel, so that the entire printed bitmap has
been altered neighborhood-by-neighborhood. The perception model characterizes how each pixel of the printed bitmap is seen by the eye and brain, given each pixel's immediate neighborhood. And the
comparison model describes how an individual might compare the perception of the printed image with the perception of the original. The three models, the printer, perception and comparison models,
both describe how a bitmap image is printed, perceived and compared with the original image, and provide procedural tools for altering the bitmap before printing to more closely match the original
The present invention uses these models to analyze alternative bitmaps of the original image, before any bitmap has been sent to an actual printer or other output device, to more faithfully reproduce
the original image. Simply comparing every permutation of the printable bitmap with the original sampled image would be a needlessly complex task. Instead, the present invention provides a dynamic
programming algorithm for simplifying this task and yielding extremely close bitmap renderings of original images in an acceptable length of time.
To simplify the problem of permuting a sampled image to form an acceptable printable bitmap, a series of parallel swaths through the image are used to perform only local permutations, one swath at a
time. A swath signifies here a linear path of adjacent bits or pixels in the bitmap, having a constant width of n bits. Each column of n bits in a swath is numbered starting with one and ending with
the length of the swath in pixels. Depending on the particular printer and perception models chosen, a certain number of columns of non-image bits will be added at the beginning and the end of the
swath as a border and are typically preset to a single value. For convenience, these non-image bits are preset and held at 0. To analyze and choose the proper settings for the bits within the current
swath, all the bits of the image outside the swath are held constant. Then, the next swath in turn is analyzed and altered, and so on until the entire image has been processed.
As a first step in the processing of an individual swath, the first column of n bits is chosen as the current column to analyze. A number k of consecutive columns of "look-ahead" bits after the first
column are also selected. An array of all possible configurations of the look-ahead bits will be attached in computer memory to the index for the current column of bits. In addition, a first and
second neighborhood of pixels are defined around the current column of bits. The size of these neighborhoods is provided by the printer and perception models. The perception model defines the
boundaries of the second neighborhood, being here a three-by-three square of bits around each bit in the current column. The first neighborhood is larger and accounts for printer effects at each
pixel in the second smaller neighborhood. Here, for example, each pixel in the second neighborhood is affected by pixels in a three-by-three neighborhood centered around it. Hence, the first printer
model neighborhood will comprise the area of the second perception model neighborhood plus a one pixel border around the second neighborhood's perimeter. Some portion of the bits of these two
neighborhoods will comprise the look-ahead bits, a large number will include those bits of the image held constant outside the current swath, and, especially for the first few and last few columns, a
certain number of bits may be non-image bits outside the actual sampled image which may be assigned the intensity value of zero.
For each possible configuration of the look-ahead bits, values for the current column of bits are selected which minimize the local difference between the printed image and the original image.
Several sub-steps accomplish this selection. Keeping the particular configuration of the look-ahead bits constant, the current column of bits is permuted one configuration at a time. For each
configuration of the current column, the printer model is applied to the first neighborhood around the first column, providing a close approximation to how the printer device would render the current
configuration of the bitmap for those pixels within the second neighborhood. Applying the perception model to the pixels in the second neighborhood (rendered by the printer model) yields a close
approximation to how the eye and brain would perceive the printed pixels in the column itself. Finally, the comparison model calculates the overall difference between the perceived values of the
current column and the perceived values of the original image, recording that combined difference value with the particular configuration of the current column. After all configurations of the column
bits have been made, the configuration which yields the least difference between printed image and original image is stored along with the particular configuration of look-ahead bits to which it
belongs. The same process is repeated to find the optimal current column of bits for each possible combination of its look-ahead bit set, keeping the rest of the bitmap constant.
The next step in the process assigns the succeeding column of bits in the swath to be the current column, and adjusts the columns of look-ahead bits accordingly. Again, for every configuration of
look-ahead bits, the values of bits for the new current column are permuted to find the optimal set of bits which minimize the combined difference value between the printed image and the original
image. One change is made, however, in that the combined difference value calculated at each column is cumulative, taking into account the difference values determined for the previous columns of
bits. This summing of difference values may be done because successive analysis steps for each current column are linked. A particular combination of permuted values of the current column combined
with its permuted set of look-ahead bits yields a single configuration of look-ahead bits for the previous "current" column, the one we just examined. Given this configuration of look-ahead bits, one
has already calculated which combination of the previous column's bits to choose to minimize the cumulative combined difference value. Working backwards, all the optimal choices for previous columns
may be looked up once we select a configuration for the current column. Since these values are set, as are the bits outside the swath, one can again calculate the combined difference value for all
columns up to and including the current column, for each configuration of the current column's values, given each configuration of that column's set of look-ahead bits.
The process of choosing optimal values for a current column for each configuration of its look-ahead columns, to minimize the cumulative combined difference value, is repeated for each column of bits
in turn, until the last column of image bits is reached. Since the look-ahead bits for the last column are set at one value (in other words, they are non-image bits which have only one
configuration), calculating the optimal values for the last column given its look-ahead bits yields only one set of optimal bit values. These values are the "best" values to choose for this last
column, given the behavior of the printer and reception by the eye and brain. The procedure then cascades backwards, since the last column of values combined with the next k-1 look-ahead values (all
zeros) determines the look-ahead values for the next-to-last column. One needs to only look up which set of values for the next-to-last column minimize the combined difference value, given the
established set of look-ahead values, and another column of values has been determined. Now the bits of the next-to-last and last columns and the next k-2 look-ahead columns provide the configuration
for the look-ahead bits of the next column working backwards. Again, given this selection of look-ahead bits, the optimal values for the column may be looked up from the previous calculations. The
procedure continues backwards until the first column is reached and all bits for all columns in the swath have been chosen. At this point, the combined difference value between the perceived printed
image and the perceived original image has been minimized for the current swath.
The procedure described for the first swath is repeated for all swaths in the image. Giving a time of calculation vs. quality of image trade-off, the overall process may be conducted several times
across the sampled image, until the printable bitmap relaxes into a bitmap providing a very close rendition of the original image. For instance, after the first swaths are laid out in a vertical path
and analyzed from left to right, another set of swaths laid out horizontally and analyzed from top to bottom may be processed, and so on. The marginal benefits gained from each pass of the processing
algorithm would typically decrease as the number of passes increases. The printable bitmap resulting from the present invention, once processed by the output device, should provide a faithful
duplication of the original image. The present invention, by including explicit printer and perception effects, and by employing a local configuration of bits to find local optimal choices given
future selections of the look-ahead bits, provides a technique for efficiently and successfully converting an original sampled bitmap into a bitmap suitable for a specific output device.
An appreciation of other aims and objectives of the present invention and a more complete and comprehensive understanding of this invention may be achieved by studying the following description of a
preferred embodiment and by referring to the accompanying drawings.
FIG. 1 is a schematic depiction of a bitmap, with a swath, current bit column and look-ahead bits displayed in accordance with the present invention.
FIG. 2A is a representation of a two-dimensional printer model.
FIG. 2B is a representation of a two-dimensional perception model.
FIG. 3 is a schematic depiction of a bitmap, showing a first and second neighborhood superimposed upon the bitmap.
FIG. 4 is a schematic depiction of a bitmap, revealing non-swath bit areas and non-image bit areas.
FIG. 5 is a schematic depiction of a bitmap, representing operation of the printer and perception models.
FIG. 6 is a schematic depiction of a bitmap, with a current bit column advanced one bit ahead, in accordance with the present invention.
To illustrate the present invention, it is helpful to first examine a related one-dimensional problem. In this related problem, a hypothetical one-dimensional continuous-tone image is converted to a
binary one-dimensional bitmap. To accomplish the conversion process, the printer, perception and comparison models must be specified.
For the sake of illustration, a simple printer model may be used. The model is shown in Table 1 below. The model represents the combined non-linear and non-digital effects of an actual output device,
due to imaging medium, dot spread or distortion, dot overlap, etc. To use the printer model, a computation means samples three contiguous bitmap bits (for example, "1-1-0") and uses the printer
look-up table given in Table 1 to calculate the actual intensity of center bit as displayed. The contiguous bitmap trio 1-1-0, according to Table 1, would result in the output of 0.8 for the center
bit (what should theoretically have been a "1" in intensity). The printer model used in Table 1 is relatively simple, being a (0.2,0.6,0.2) convolution kernel applied to each successive bitmap trio.
By applying the printer model to each bit of the bitmap, i.e., convolving it with its neighboring bits, the resultant printed bitmap (the rendering) can be calculated. The more accurate the printer
model in duplicating non-linear printer effects, the more accurate this calculation will be.
TABLE 1______________________________________Printer Model Printed CenterAdjacent Bitmap Bits Biti - 1 i i + 1 i______________________________________0 0 0 0.00 0 1 0.21 0 0 0.21 0 1 0.40 1 0 0.60 1 1 0.81 1 0 0.81 1 1 1.0______________________________________
Once the printer model has been applied to the initial bitmap, yielding the bitmap rendering, the perception model can be applied. The perception model, as discussed above, provides an approximate
version of the low-level processing of the human eye and brain. For the present example, a simple perception model uses the following computation. For each rendered pixel location, the pixel's value
multiplied by 2 is summed with the values of the two adjacent pixels. In essence, this calculation constitutes a low-pass filter with a (1,2,1) convolution kernel. After the perception model has been
applied to the rendered bitmap, the new perception of the bitmap can be compared with an original paradigm image, using a comparison model, to determine how far from ideal the perceived image is. The
comparison model used in the present description simply sums the absolute values of the pixel-by-pixel differences in the intensities of the two images.
The following Table 2 illustrates the operation of the three models on a trial bitmap. The trial bitmap is compared with a desired paradigm image, shown in the second column. To account for the edges
of the image (whether a one-dimensional image as in the present example, or the more usual two-dimensional image), calculations involving the borders of the bitmap may simply supply a 0 value for any
non-existent non-image pixels beyond the bitmap boundary. Referring to the first row of Table 2, the neighbors of the first bit of the trial bitmap (value=0) are 0 and 1. The 0 neighbor has been
arbitrarily supplied, since the bitmap ends at bit number 1. Application of the printer model to the first bit and its neighbors yields a rendered bit having a value of 0.2. The perception model is
next applied to the neighborhood around the rendered bit number one (the values are: 0.0 (supplied since the non-image bit is outside the bitmap boundary), 0.2, and 0.6). The perception model yields
a perceived value of 1.0 at bit number 1. Finally, the comparison model takes the absolute value of the difference between the perceived value of the trial bitmap bit (1.0) and the perceived value of
the original image (0.9) to yield a "badness" value for that bit of 0.1. The 0.9 value for the perceived value of the original image bit is found by applying the same perception model to the image
values found in column two. (For example, the calculation would be 1*(0.0)+2*(0.3)+1*(0.3)=0.9). "Badness" indicates the difference in perceived value of the printed trial bitmap pixel and the
sampled image pixel. Summing all the badness values for all the trial bitmap bits yields an overall badness value of 5.2.
TABLE 2__________________________________________________________________________Application of Models to Trial One-Dimensional Bitmap. Trial After After After Image Bitmap Bit Printer Bit Perception Comparison =Bit Value Value Neighborhood Model Neighborhood Model Model Badness__________________________________________________________________________1 0.3 0 0 0 1 0.2 0.0 0.2 0.6 1.0 0.12 0.3 1 0 1 0 0.6 0.2 0.6 0.2 1.6 0.33 0.4 0 1 0 0 0.2 0.6 0.2 0.0 1.0 0.44 0.3 0 0 0 0 0.0 0.2 0.0 0.2 0.4 1.05 0.4 0 0 0 1 0.2 0.0 0.2 0.8 1.2 0.46 0.5 1 0 1 1 0.8 0.2 0.8 1.0 2.8 0.87 0.6 1 1 1 1 1.0 0.8 1.0 0.8 3.6 1.38 0.6 1 1 1 0 0.8 1.0 0.8 0.2 2.8 0.59 0.5 0 1 0 0 0.2 0.8 0.2 0.0 1.2 0.4__________________________________________________________________________
Since the overall badness value of a particular trial bitmap may be calculated, the methods of the present invention enable the determination of the trial bitmap which minimizes this badness value.
The method relies on the locality of the individual models, i.e. that a calculation for a given pixel or bit only depends on some local neighborhood of pixels. For the particular models chosen in the
present example, the partial badness calculated over the first three bit locations does not depend at all upon the bitmap values for locations 6 through 9. The perceived value of last bit 3 depends
upon the rendered value of bit 4 through operation of the perception model, while the rendered value of bit 4 depends upon the trial bit 5, through operation of the printer model. In this way, the
perceived value of bit 3 is linked to the trial bitmap value of bit 5, but to no further bits. The localness of each model effectively cuts off, after a certain point, the dependency of the perceived
value of a given bit from the rest of the bit values.
The problem of finding the optimal one-dimensional trial bitmap can be expressed as a set of related subproblems, each of the form:
Subproblem i: Construct a table which calculates, for every possible setting of bits i+1, i+2, i+3, i+4, the minimum partial badness for cells 1 through i+2, along with the setting for bit i which
attains this minimum.
In this formalism, i represents the bit currently being considered and i+1, i+2, i+3, i+4 are "look-ahead bits". Since there are 2.sup.4 =16 possible combinations of the look-ahead bits, and 2
possible states for bit i, the table would have 32 combinations. Half of these combinations would be eliminated as not minimizing the partial badness for a particular look-ahead configuration.
The first subproblem 1 is easily solved by examining all 32 cases and retaining the 16 yielding a lower partial badness value. A portion of the table is shown below in Table 3:
TABLE 3______________________________________Configuration Table for Subproblem 1. BadnessBit Number Value for for Cells 12 3 4 5 Bit # 1 to 3______________________________________0 0 0 0 . . . . . .. . .1 0 0 0 0 0.81 0 0 0 1 3.0. . .______________________________________
For each configuration of look-ahead bits 2 through 5, two possible values for bit 1 exist, 0 and 1. For the particular look-ahead bit configuration 1-0-0-0, letting bit 1 equal 0 yields a partial
badness of 0.8, while a value of 1 yields a partial badness of 3 Hence, value 0 is retained as the optimal value for bit 1 given the particular configuration of look-ahead bits.
Successive subproblems are slightly different. Subproblem 2 tabulates the ways to minimize the partial badness of cells 1 through 4 for each setting of bits 3, 4, 5 and 6, to find the optimal values
for bit number 2. Table 4 illustrates a partially filled-in table of these calculations.
______________________________________Configuration Table for Subproblem 2. BadnessBit Number Value for for Cells 13 4 5 6 Bit # 2 to 4______________________________________0 0 0 0 . . . . . .. . .1 0 0 0 ? ?. . .______________________________________
For each configuration of the look-ahead bits 3 through 6, alternate values of bit 2 are attempted. Once a particular value for bit 2 is selected, bits 2 through 5 provide a complete set of
look-ahead bits for bit 1, as used in the preceding sub-problem 1. Thus, the results of the preceding calculations can be consulted to recall the optimal value of bit 1 given the particular values of
its look-ahead bits, thereby setting the values for bits 1 through 6. Using these values and the printer and perception models, the overall badness for bits 1 through 4 can be calculated for both
values of bit 2, allowing an optimal value of bit 2 to be selected.
The solution of the remaining sub-problems proceeds in the same way, selecting alternative values for the current bit for a particular configuration of look-ahead bits. In each successive
sub-problem, the value of the current bit, combined with a subset of its look-ahead bits, provides a configuration of look-ahead bits for the preceding sub-problem. Hence, the table constructed from
the preceding sub-problem can be consulted, the configuration of look-ahead bits used as an index, and the optimal value of the current-minus-1 bit determined. Then, the value of the current-minus-1
bit, the current bit and a portion of the current bit's look-ahead bits provide a configuration of look-ahead bits for the current-minus-2 subproblem, and so on. In this fashion, alternately
selecting the value of an ith bit, and a particular configuration of the ith bit's look-ahead bits, completely determines the values of bits 1 through i+4. From these values and the printer and
perception models, the minimum badness for bits 1 through i+2 can be determined. Finally, solving for the optimal value of the final bit in the last sub-problem determines the minimum badness for the
whole bitmap. The final sub-problem constructs a table with only one row, since all the look-ahead bits fall outside the image and are all set to zero (in other words, there is only one possible
configuration of look-ahead bits).
Having completed the last sub-problem, the optimal settings for all the bits of the bitmap may be easily determined by tracing backwards through the solutions to the subproblems. The last table
yields the setting for the last bit, the last bit and the next-to-last table yield the optimal setting for the next-to-last bit, and so on, ending finally with the first table and all previously
determined bits, which combined yield the optimal value for bit 1.
The method of solution demonstrated for the one-dimensional image case may be adapted to find near-optimal solutions for the two-dimensional case. Because of the extra dimension, the table-building
scheme described above can no longer be used to find an exact solution. However, the techniques provide an efficient and practical method for calculating near-optimal two-dimensional solutions.
The present invention adapts the one-dimensional method to optimize a given image over one narrow swath of pixels at a time. This simplifies the problem of permuting a sampled image to form an
acceptable printable bitmap. A swath 14 signifies here a linear path of adjacent bits or pixels in the bitmap 12, having a constant width of n bits. In the present example, n is equal to 2, as shown
in FIG. 1. Each column of n bits in a swath is numbered starting with one and ending with the length of the swath in bits. As discussed above, depending on the particular printer and perception
models chosen, a certain number of columns of non-image bits will be added at the each end of the swath as a border and are typically preset to a single value. For the particular models used in the
present discussion (having three-by-three neighborhoods around each pixel) two columns of non-image bits are required at the beginning of the swath; four columns are needed at the end. For
convenience, these non-image bits are preset and held at 0. To analyze and choose the proper settings for the bits within the current swath, all the bits of the image outside the swath are held
constant. Then, the next swath in turn is analyzed and altered, and so on, until the entire image has been processed.
In adapting the printer and perception models discussed above to the two-dimensional case, they can simply become two-dimensional square convolutions around a particular bit or pixel. For instance,
the printer model 17a can convolve the three-by-three neighborhood around a pixel using the same kernel as before (i.e. the center pixel value is multiplied by 0.6 and added to its eight nearest
neighbor's values each multiplied by 0.2, as shown in FIG. 2A). In the same way, the perception model 17b convolves the three-by-three neighborhood around a pixel using the (1,2,1) kernel, as shown
in FIG. 2B. The comparison model can remain the same, merely adding the absolute difference between the perceived values for individual printed bits and the perception of an ideal image.
As a first step in the processing of an individual ith swath 14i of a sampled image 12, the first column 16a of n bits is chosen as the current column to analyze. A number k of consecutive columns of
"look-ahead" bits after the first column are also selected. In the present example, k is equal to 4. An array of all possible configurations of the look-ahead bits will be attached in computer memory
to the index for the current column of bits. In addition, a first and second neighborhood of pixels are defined around the current column of bits. The size of these neighborhoods is provided by the
printer and perception models 17a and 17b. The perception model defines the boundaries of the second neighborhood 18a, shown in FIG. 3, being here a three-by-three square of bits around each bit in
the current column and a certain number of the look-ahead columns of bits. In the present case, two further columns beyond the current column are included in the second neighborhood. Those bits
within the second neighborhood and within the swath will be used in the comparison step to determine the minimum badness of the printed and perceived image.
The first neighborhood 20a is larger and accounts for printer effects at each pixel in the second smaller neighborhood. Here, for example, each pixel in the second neighborhood is affected by pixels
in a three-by-three neighborhood centered around it. Hence, the first printer model neighborhood 20a will comprise the area of the second perception model neighborhood 18a plus a one pixel border
around the second neighborhood's perimeter. Some portion of the bits of these two neighborhoods will comprise the look-ahead bits (area 22a, shown in FIG. 4), a large number will include those bits
of the image held constant outside the current swath (areas 24a and 24b), and, especially for the first few and last few columns, a certain number of bits may be non-image bits outside the actual
sampled image which may be assigned the intensity value of zero (area 26).
FIG. 5 illustrates the two-dimensional operation of the printer and perception models. To find the rendered value of bit B, the printer model 17a convolution values shown in FIG. 2A are applied to
the three-by-three neighborhood 28 around B, neighborhood 28 lying within the first neighborhood 20. Then, to find the perceived value of A, the perception model 17b convolution values shown in FIG.
2B are applied to the rendered values of the three-by-three neighborhood 30 around pixel A. All pixels B in neighborhood 30 would have similar neighborhoods 28 around them to apply the printer model
to, and all pixels A within the swath of interest would have similar neighborhoods 30 around them. The set of all neighborhoods 30 for all pixels A in a given set of columns in the current swath
constitute the second neighborhood 18a; the set of all neighborhoods 28 around all values B in the second neighborhood 18a constitute the first neighborhood 20a.
For each possible configuration of the look-ahead bits 22a, values for the current column 16a of bits are selected which minimize the local difference between the printed image and the original
image. Several sub-steps accomplish this selection. First, keeping the particular configuration of the k columns of look-ahead bits constant, the current column of bits 16a is permuted one
configuration at a time. Next, for each configuration of the current column 16a, the printer model 17a is applied to the first neighborhood 20a around the first column 16a, providing a close
approximation to how the printer device would render the current configuration of the bitmap for those pixels within the second neighborhood 18a. Then, applying the perception model 17b to the pixels
in the second neighborhood 18a (rendered by the printer model) yields a close approximation to how the eye and brain would perceive the printed pixels in the column itself. Finally, the comparison
model calculates the overall difference between the perceived values of the current column (and a given number of look-ahead columns within the second neighborhood 18 a, here chosen to be two) and
the perceived values of the original image, recording that combined difference value with the particular configuration of the current column 16a. After all configurations of the column bits 16a have
been made, the configuration which yields the least difference between printed image and original image is stored along with the particular configuration of look-ahead bits to which it belongs. The
same process is repeated to find the optimal current column of bits 16a for each possible combination of its look-ahead bit set 22a, keeping the rest of the bitmap 12 constant.
The next step in the process assigns the succeeding column of bits 16b in swath 14i to be the current column, and adjusts the k columns 22b of look-ahead bits accordingly, as shown in FIG. 6. Again,
for every configuration of look-ahead bits 22b, the values of bits for the new current column 16b are permuted to find the optimal set of bits which minimize the combined difference value between the
printed image and the original image. First neighborhood 20b and second neighborhood 18b are also adjusted accordingly. One change is made, however, in that the combined difference value calculated
at each column 16b is cumulative, taking into account the difference values determined for the previous columns of bits (16a, etc.). This summing of difference values may be done because successive
analysis steps for each current column are linked. A particular combination of permuted values of the current column 16b combined with its permuted set of look-ahead bits 22b yields a single
configuration of look-ahead bits 22a for the previous "current" column 16a, the one just examined. Knowing this configuration of look-ahead bits, the combination of the previous column's bits 16a has
already been calculated to know which to choose to minimize the cumulative combined difference value. Working backwards, all the optimal choices for previous columns may be looked up once a
configuration for the current column is selected. Since these values are set, as are the bits outside the swath, the combined difference value for all columns up to and including the current column
can be calculated, for each configuration of the current column's values, given each configuration of that column's set of look-ahead bits.
The process of choosing optimal values for a current column for each configuration of its look-ahead columns, to minimize the cumulative combined difference value, is repeated for each column of bits
in turn, until the last column of image bits is reached. Since the look-ahead bits for the last column are set at one value (in other words, they have only one configuration, being all non-image bits
set at zero), calculating the optimal values for the last column given its look-ahead bits yields only one set of optimal bit values. These values are the "best" values to choose for this last
column, given the behavior of the printer and reception by the eye and brain. The procedure then cascades backwards, since the last column of values combined with the next k-1 look-ahead values (all
zeros) determines the look-ahead values for the next-to-last column. As described above, one can refer to the already calculated set of minimal values for the next-to-last column, given an
established set of look-ahead values, to determine another column of bit values. Then, the bits of the next-to-last and last columns and the k-2 look-ahead columns provide the configuration for the
look-ahead bits of the next column working backwards. Again, given the particular selection of look-ahead bits, the optimal values for the column may be looked up from the previous calculations. The
procedure continues backwards until the first column is reached and all bits for all columns in the swath have been chosen. At this point, the combined difference value between the perceived printed
image and the perceived original image has been minimized for the current swath.
The procedure described for the first swath is repeated for all swaths 14i in the image. Giving a time of calculation vs. quality of image trade-off, the overall process may be conducted several
times across the sampled image, until the printable bitmap relaxes into a bitmap providing a very close rendition of the original image. For instance, after the first swaths are laid out in a
vertical path and analyzed from left to right, another set of swaths laid out horizontally and analyzed from top to bottom may be processed, and so on. The marginal benefits gained from each pass of
the processing algorithm would typically decrease as the number of passes increases. The printable bitmap resulting from the present invention, once processed by the output device, should provide a
faithful duplication of the original image. The present invention, by including explicit printer and perception effects, and by employing a local configuration of bits to find local optimal choices
given future selections of the look-ahead bits, provides a technique for efficiently and successfully converting an original sampled image into a bitmap suitable for a chosen output device.
The methods and apparatus for optimal discrete rendering of images disclosed are applicable to any image rendering application. In particular, although the present embodiment has been described with
reference to continuous-tone monochrome images, the same inventive procedures may be readily applied to color image applications. Although the present invention has been described in detail with
reference to a particular preferred embodiment, persons possessing ordinary skill in the art to which this invention pertains will appreciate that various modifications and enhancements may be made
without departing from the spirit and scope of the claims that follow.
LIST OF REFERENCE NUMERALS FIG. 1
10: Schematic Bitmap Diagram
12: Sampled image
14i: Current ith Bitmap Swath
16: Current Column
FIGS. 2a and 2b
17a: Printer Model
17b: Perception Model
FIG. 3
18: Second Neighborhood
20: First Neighborhood
22: Look-Ahead Bits
FIG. 4
24: Non-Swath Image Bits Kept Constant
26: Non-Image Bits Held at Zero
FIG. 5
28: Printer Model Neighborhood
30: Perception Model Neighborhood | {"url":"http://www.google.co.uk/patents/US5191640","timestamp":"2014-04-21T00:03:25Z","content_type":null,"content_length":"108609","record_id":"<urn:uuid:263d64c1-1b6d-4ac0-a9fa-8a9098d9d865>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Black hole growth paradox
Let's say I have two black holes with a combined mass of 1000 solar masses orbiting (really fast) at 3500 km separation, and for the sake of simplicity, that their event horizons remain roughly
spherical. I thus have two separate BHs and two horizons.
I see your intent here, but it won't work as you've described it. Your intent appears to be to have the horizons separated by a very small distance, but there is no way to have such a configuration
be stable. As I posted earlier in this thread, there are no stable orbits inside r = 6M, which is three times the horizon radius; so even a very small object can't orbit the BH just outside its
Furthermore, another BH of the same mass is not "a very small object", and you can't treat it as one. You simply can't construct a stable scenario with two BHs orbiting each other this way. Gravity
in GR is nonlinear, so you can't just superpose two individual BH solutions and get another solution. That's not to say that it's impossible for two BHs to orbit each other, just that it's not as
simple as just having them orbit each other like two billiard balls.
In what follows, I'm going to pretend for the sake of argument that we *can* construct a stable system with two BHs orbiting each other fairly closely (but not as close as you've said). "Stable" here
means the BHs stay in their mutual orbit for a long enough time compared to whatever experiments we are going to run; but it's important to note that such a system, even if it can be constructed,
will *not* stay stable indefinitely. The two BHs will gradually spiral into each other because the system as a whole will be emitting gravitational waves and therefore losing energy, just as a binary
pulsar system does (this has been confirmed by observation):
In view of the above, please bear in mind that everything I'm saying is only heuristic; I am not working from an actual known solution of the GR equations. So this is really just handwaving--educated
handwaving, I hope, but still handwaving. The strict answer would simply be that the scenario you have tried to construct is not valid; but I know that's not very satisfying, so I'm trying to do more
than that, with caveats as above.
Now add some infalling mass that goes into orbit around either the BHs or their combined center of gravity (at their LaGrange points). At some time, the orbiting mass + all (or portions of) the
orbiting BHs reaches critical density around the common center of gravity.
Yes, but that doesn't mean an event horizon instantaneously forms there. An event horizon is a globally defined surface: it's the boundary of a region of spacetime (a "black hole") that can't send
light signals to infinity. If two black holes are merging, then there is really only one event horizon; it just has two "branches" in the past instead of a single one. But since the final
configuration is a single black hole, there is only one region of spacetime as a whole that can't send light signals to infinity; again, that region just has two "branches" instead of one, so if you
drew a spacetime diagram, for instance, with time vertical and spatial dimensions horizontal, the black hole region would look like a pair of trousers, so to speak, instead of a cylinder.
Also, the event horizon doesn't "jump" from one radius to another; it moves smoothly between them. Consider a simpler case for a bit: a single black hole that gains mass from a spherically symmetric,
thin shell of infalling matter. The mass of the BH plus the shell is larger than the mass of the BH by itself, so what happens when the shell reaches the new, larger horizon radius due to the
combined mass (which is slightly larger than the original horizon radius)? When the shell reaches that point, the new event horizon with a larger radius must be formed, right?
Yes, it is, but now consider a light ray that is moving outward, just outside the original horizon radius, in such a way that it just happens to hit the infalling shell at exactly the instant that
the shell reaches the new (larger) horizon radius. That light ray will be trapped: it will stay at the new horizon radius forever (because that's what the horizon *is*, locally--it's a surface where
outgoing light rays are trapped at the same radius forever). But that also means that an event just inside the path of that light ray, even though it is outside the original horizon radius, can't
send light signals to infinity, so it must be part of the global black hole region.
In other words, globally, the event horizon expands smoothly from the original radius to the new radius as the infalling shell approaches the new radius; at the instant the shell hits the new radius,
the event horizon has just reached that new radius as well. That means that we can't know exactly where the event horizon is without knowing the entire future history of the spacetime--for example,
if we ourselves were hovering just outside the original BH, before the infalling shell of matter came in, we could find ourselves stuck inside the new BH without realizing it, if we didn't know the
shell was falling in, and if we were inside the new horizon radius, because the boundary of the global region that can send light signals to infinity could pass by us, moving outward, *before* we saw
the infalling shell. There is no way to tell, locally, that you can no longer send light signals to infinity from your current location.
This is kind of long-winded, but the point is that the event horizon is not a "thing" that you can keep track of just by looking at local phenomena. It's a globally defined boundary, and you can be
misled if you try to think of it as a local thing.
At first, this new even horizon is likely to pierce the horizons of the original BHs
No. What will happen is that the event horizons of the two original BHs will start expanding, *before* the new matter has accumulated; they will probably merge with each other even before all the new
matter has fallen in, and then the single combined EH will continue to expand until all of the accumulating matter has fallen inside the new horizon radius due to the final total mass present. After
everything is all done, there will be a single BH, and a single event horizon.
I realize this is not easy to visualize, and there are a lot more complications that I haven't even gone into: the infalling matter is likely to emit X rays, and as the horizons merge, gravitational
waves will be emitted. There are lots of efforts ongoing to numerically simulate black hole mergers to learn more details.
If you want to try to get a handle on how black holes gain mass, I would back away from the complicated scenario you've proposed, and start with the simpler case I gave above: a single, non-rotating,
spherically symmetric BH that gains mass from a thin, spherically symmetric shell of infalling matter. Understanding that scenario will give a good baseline to go on to more complicated ones. | {"url":"http://www.physicsforums.com/showthread.php?p=4213938","timestamp":"2014-04-19T12:35:13Z","content_type":null,"content_length":"117100","record_id":"<urn:uuid:23a719a2-1ab8-47ea-99f3-005cdbf6aa6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contents - Previous - Next
This is the old United Nations University website. Visit the new site at http://unu.edu
Moisture fluxes at land surfaces
Components of the Energy Budget
The nature of the land surface affects conditions in the lower layers of the atmosphere through its influence on the value of net radiation and through its effects on the transfer of momentum,
moisture, and sensible heat between the air and the ground. These transfers are linked to one another and operate within the constraint of the energy balance at the surface. The energy budget
equation can be written as
in which energy fluxes from the atmosphere to the land surface are taken as positive. R[n], is the net radiative flux density at the particular surface (water, soil, snow, or vegetal canopy); H is
the specific flux of sensible heat into the atmosphere; L[e] is the latent heat of vaporization and E the rate of evaporation; L[p] is the thermal conversion factor for the fixation of carbon dioxide
and F[p] is the specific flux of carbon dioxide; C is the specific energy flux into the land surface; A[h] is the energy advection into the surface layer and W is the energy stored in the surface
On the space and time scales of general circulation models (100-500 km and 10-15 days), four of the seven terms in equation (9) can be assumed to be negligible compared to the remaining three terms.
The advection term Ah may be significant in the case of rainfall on a snow surface or snowfall on a warm lake but is likely to be negligible otherwise for any area of appreciable extent. Table I
shows representative values of the other terms of equation (9) for vegetation of about I m height under cloudless summer conditions in middle latitudes (Thom 1975).
TABLE 1. Typical energy budget over vegetation (Wm^-2)
│ Time │ │ │ │ G │ │
│ Near sunrise │ 0 │ -8 │ + 3 │ - 5 │ + 10 │
│ Noon │ 500 │ 461 │ + 12 │ +25 │ + 2 │
│ Near sunset │ 0 │ +3 │ +2 │ +5 │ - 10 │
│ Midnight │ - 50 │ - 20 │ - 3 │ - 25 │ - 2 │
If we average over a day to eliminate the diurnal variation, then the daily contribution of the energy flux into the canopy and of the rate of change of energy storage in the canopy and in the soil
becomes negligible, so that the daily energy balance can be written
The term arising from carbon dioxide fixation may sometimes be of the order of 5% of global radiation but is more usually less than 1%. In practice this photosynthetic term is neglected except in
direct studies of carbon dioxide exchange. The energy budget of equation (9) therefore reduces for the space and time scales of interest in GCM modelling to
The ratio of sensible heat flux to the vertical flux of latent heat (the Bowen ratio) is used to characterize the partition of the available energy between heat and moisture transfer to the
atmosphere. Thus
where B is the Bowen ratio.
The net radiation available at the land surface depends on the nature of that surface. The net radiation is given by
where the first and second terms on the right-hand side of the equation represent available short-wave and long-wave radiation respectively. Rs is global short-wave radiation downwards on the land
surface; a is the surface albedo or ratio of reflected to incident short-wave radiation; e is the long-wave emissivity of the surface; R. is the downward long-wave radiation; To is the surface
temperature and sigma the StefanBoltzmann constant. The value of the albedo is strongly influenced by the nature of the surface cover: a varies from 0.1 for tropical rain forest to 0.8 for snow at
high latitudes. In contrast, the emissivity does not vary widely, being between 0.9 and 1.0 for most natural surfaces.
Components of the Water Budget
Water balances can be attempted on a variety of space scales and a variety of time scales. In the coupling of general circulation models with land surface processes, the values for any individual
grid square of rainfall and snowmelt and potential evaporation may be considered as given. Hydrologic modelling is required to provide the corresponding values of infiltration and percolation,
surface and subsurface runoff, changes in soil moisture storage, and actual evapotranspiration. For the time scales of interest in climate modelling, changes in surface storage for snow-free areas
are negligible compared with subsurface storage.
For any defined area we can then write
where [a](t) is the direct storm runoff, which is too rapid to be available for evaporation; and Y[b](t) is the subsequent baseflow, which is affected by evaporation. In any part of the area in which
the rate of precipitation P(t) is less than the rate of potential infiltration F[p](t) at the surface (or the rate of potential percolation of near-surface soil layer), the direct runoff is assumed
to be zero so that we have
where F(t) is the rate of actual infiltration (or percolation). If, on the contrary, the rate of precipitation is greater than the rate of potential infiltration F[p](t) then the local rate of direct
runoff will be given by
and the rate of actual infiltration by
In either of the two cases
which represents the assumption of negligible change in surface storage, and also
which is the equation for soil moisture accounting.
The parametrization of potential infiltration starts from the physical equation for vertical flow in an unsaturated soil. For this situation, equation (8) above becomes
where w(t) is the vertical velocity of unsaturated flow, K(c) is the unsaturated hydraulic conductivity, p(c) is the soil moisture pressure (which is negative in the unsatu rated zone), gama is the
weight density of water, and z is the elevation. When combined with the continuity equation
equation (19) gives
which is known as Richards equation.
For dry soils the second term on the right-hand side of equation (21) is much smaller than the other two terms during the initial period of high rate infiltration. If this term is neglected, it can
be shown that for surface ponding of a semi-infinite soil column the cumulative infiltration is given by
where A is a parameter (often referred to as the sorptivity) that depends on the initial soil moisture content and on the soil properties and B is a parameter that depends on the hydraulic
conductivity. If the second term on the right-hand side of equation (21) is not neglected, a solution can be found by successive perturbations (Philip 1957, 1969). Brutsaert (1977) showed that a good
approximation to the series solution is given by
where b[0] is a parameter that depends on the pore-size distribution and that is of the order of b [0] = 2/3 for most field soils.
One could attempt to derive the values of A and B either by parametrization from the microscale of physical hydrology or by calibration on the basis of catchment records on as large a scale as
possible. On the scale appropriate to general circulation modelling the effect of the approximations in equation (22) or equation (23) are negligible in relation to the uncertainty of the spatial
variability in the physical parameters at the microscale. A possible approach at the catchment scale might be to use the fact that simplified solutions of equation (21) for an initial moisture
content c0 uniformly distributed in a semi-infinite column indicate that sorptivity is proportional to (c[sat] - c[0]) for a number of varying soil moisture relationships. Accordingly, one might
attempt to examine whether at the large scale the derived sorptivity (A) was approximately proportional to the field moisture deficit and whether the constant of proportionality (B) could be related
to major soil types.
Potential and Actual Evaporation
The problem of estimating actual evaporation when the moisture flux is soil controlled rather than atmosphere controlled remains a central problem in catchment hydrology. The combination approach
pioneered by Penman (1948, 1963) represents a useful means of estimating evaporation independently rather than treating it as a residual in a waterbalance equation. In its original form the Penman
estimate for potential evaporation is
where the first term on the right-hand side represents the equilibrium rate of evaporation very far downstream from the leading edge of a wet surface and E[A] represents the drying power of the air,
which decreases downstream of the leading edge. The drying power E[A] is a function of the wind speed and the vapour pressure deficit at the air temperature.
Even when vegetation is well supplied with water the vegetal surface will not be wet except after precipitation. The rate of evaporation can be related to the gradient of specific humidity between
the saturated air in the sub-stomata! cavities and the atmosphere above the canopy by
where rst is the bulk stomata! resistance to the transfer of vapour from inside the stomata to the leaf surface and r[a] the aerodynamic resistance to the movement of vapour through the canopy air.
Using the relationship between specific humidity (q) and vapour pressure (e)
we can derive the Penman-Monteith equation for a vegetated surface
which for r[st] = 0 is equivalent in form to equation (24).
It remains to consider the key question of the reduction of potential to actual evapotranspiration for the large space scales and long time scales appropriate to climate modelling. Long-term average
relationships have been suggested on the basis of observed records, conceptual modelling at catchment scale, and parametrization of the equations of physical hydrology derived for conditions at a
point. The empirical approach can be illustrated by the three examples of the methods proposed by Turc, by Pike, and by Budyko for estimating the annual evaporation from the annual potential
evaporation and the annual precipitation.
Turc (1954, 1955) assumed that there would be a limiting rate of evaporation as annual precipitation increased and, on the basis of records of 250 catchments in different climatic regimes, proposed
the formula
where E, E[0], and P are, respectively, the annual values of actual evapotranspiration, maximum possible evapotranspiration (based on a cubic relationship with mean annual temperature), and
precipitation. Pike (1964) replaced the estimate of limiting evaporation by the Penman estimate of open-water evaporation and found that replacing 0.9 by 1.0 gave better results for Malawi.
Budyko (1948, 1971) found that data from the water balance of a number of catchments were intermediate between the exponential relationship proposed by Schreiber in 1904 and the hyperbolic tangent
relationship proposed by Ol'dekop in 1911. Accordingly, he proposed the geometric mean of the two relationships. Thus
This relationship was checked first for 29 European rivers (Budyko 1951) and then for 1,200 regions for which precipitation and runoff data were available (Budyko and Zubenok 1961).
The Turc and Pike equations can be directly compared with the Budyko equation by assuming that for moist conditions over a large area
and writing all three equations in the form of E/E[0], as a function of P/E[0] or P/R[n], which is the reciprocal of the Budyko index of dryness. This enables us to compare all three equations on
figure 4, where the evaporation efficiency is plotted against the ratio of precipitation to maximum possible evaporation. The characterization of the biomes (desert, steppe, forest, tundra) in figure
4 by the value of P/E[0] (= L[e]P/R[n]) is due to Budyko (1971).
Conceptual catchment models formulated at a catchment scale are essentially methods of water storage accounting and a key element in their operation is soil moisture accounting. The simplest form of
soil moisture accounting is a single storage element and the simplest relationship between actual evaporation and soil moisture storage is one that is linear below a threshold value (Thornthwaite and
Mather 1955; Budyko 1955, 1956). Hydrological catchment models use multi-layer soil moisture accounting and allow for varying values for potential infiltration and potential evaporation in the
catchment area (Fleming 1975). Conceptual models have also been developed to represent the soil-plant-atmosphere system (Halldin 1979; Fritschen 1982), but a review of these approaches is outside the
scope of the present paper.
The main attempt to parametrize from physical hydrology to catchment scale and to GCM scale has been made by Eagleson (Eagleson 1978; Milly and Eagleson 1982a, 1982b; Andreou and Eagleson 1982). This
research work is reviewed briefly under "Macro-Hydrology" below. In another approach to macroscale formulation, Bouchet (1963) considered regional evaporation under conditions of a fixed energy
budget and suggested that the actual evaporation and the apparent potential evaporation based on actual atmospheric conditions would be complementary to one another. Morton (1965, 1975) and Brutsaert
and Stricker (1979) have applied this concept to the estimation of regional evapotranspiration.
Contents - Previous - Next | {"url":"http://archive.unu.edu/unupress/unupbooks/80635e/80635E0i.htm","timestamp":"2014-04-16T16:14:44Z","content_type":null,"content_length":"23506","record_id":"<urn:uuid:8ca6e450-0eba-41dd-ace8-b44a35100ad1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Determine magnitude and direction of resulting vector
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I GOT THIS, I JUST ACED A TEST OVER THIS.. ONE SECOND
Best Response
You've already chosen the best response.
okay, first can you give me what the problem says so i can depict the picture a little better?
Best Response
You've already chosen the best response.
OKAY :D
Best Response
You've already chosen the best response.
thats the picture given !
Best Response
You've already chosen the best response.
but here ill rewrite the question
Best Response
You've already chosen the best response.
A camera is suspended by two wires over a football field to get shots of the action form above. At one point, the camera is closer to the left side of the field. The tension in the wire on the
left is 1500 N, and the tension in the wire on the right is 800 N. The angle between the two wires is 130 degrees. Determine the approximate magnitude and direction of the resultant force.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
That says 50 degrees, i accidentally typed N
Best Response
You've already chosen the best response.
|dw:1361220666415:dw| the only thing i dont understand is the 50 degrees you just mentioned.. was that given? how did you get that?
Best Response
You've already chosen the best response.
That is the picture given
Best Response
You've already chosen the best response.
so the 800N is going directly East and the 1500N is going that 50deg direction?
Best Response
You've already chosen the best response.
Yup !
Best Response
You've already chosen the best response.
okay so that means to get the component form you put <1500cos50, 1500sin50> and add that with <800, 0> **the sine is zero because it is on the x-axis.** and what do you get?
Best Response
You've already chosen the best response.
Why do you put <1500cos50, 1500sin50>
Best Response
You've already chosen the best response.
that should give you <1764, 1149> you have to sqaure them and set them equal to the resulant vector so R (being the resultant vector)....|dw:1361221683792:dw|
Best Response
You've already chosen the best response.
that is the formula you use in order to put the two vectors in component form. then once you do that, you have to take the square root of the numbers you get squared.
Best Response
You've already chosen the best response.
But its not a right triangle :S
Best Response
You've already chosen the best response.
you're doing the Pythagorean theorem right
Best Response
You've already chosen the best response.
it doesnt have to be a right triangle
Best Response
You've already chosen the best response.
but i was using the distance formula
Best Response
You've already chosen the best response.
If you dont understand it, i recommend quickly looking over this site (only should take 5 minutes) and then come back and try to see if you understand anything more and if you have any questions
i am happy to help! :) http://hotmath.com/hotmath_help/topics/magnitude-and-direction-of-vectors.html
Best Response
You've already chosen the best response.
@ihatealgebrasomuch okay then what do i do with the R i get 2105.207
Best Response
You've already chosen the best response.
i dont get it
Best Response
You've already chosen the best response.
i dont get how you got thoses number from 1500N and 800 N
Best Response
You've already chosen the best response.
okay, that is the magnitude of the resultant vector, not all you have to do if find the direction which is tan^-1 (sin/cos) or in this case 1149/1764 which ends up being about 33 degrees
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5121cbcbe4b06821731d2c88","timestamp":"2014-04-18T10:57:24Z","content_type":null,"content_length":"132919","record_id":"<urn:uuid:e4d475cc-6c35-473b-adb5-e9771a96bffd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
BMI Calculator
Height feet inches
Weight pounds Result
Height centimeters
Weight kilograms BMI = 22.96 kg/m^2 (Normal)
• Normal BMI range: 18.5kg/m^2 - 25 kg/m^2
• Normal BMI weight range for the height: 128.9lbs - 174.2 lbs
• Ponderal Index: 12.91 kg/m^3
The Body Mass Index (BMI) Calculator can be used to calculate your BMI value and weight status while taking your age into consideration. Use the "metric units" tab if you are more comfortable with
the international standard metric units. The referenced weight range and calculation formula is listed below.
Your BMI is a measurement of your body weight based on your height and weight. Although your BMI does not actually "measure" your percentage of body fat, it is a useful tool to estimate a healthy
body weight based on your height. Due to its ease of measurement and calculation, it is the most widely used diagnostic indicator to identify a person's optimal weight depending on his height. Your
BMI "number" will inform you if you are underweight, of normal weight, overweight, or obese. However, due to the wide variety of body types, the distribution of muscle and bone mass, etc., it is not
appropriate to use this as the only or final indication for diagnosis.
Body Mass Index Formula
The formulas to calculate BMI based on two of the most commonly used unit systems:
BMI = weight(kg)/height^2(m^2) (Metric Units)
BMI = 703·weight(lb)/height^2(in^2) (U.S. Units)
BMI Table for Adults
This is the World Health Organization's (WHO) recommended body weight based on BMI values for adults. It is used for both men and women, age 18 or older.
Category BMI range - kg/m^2
Severely underweight < 16.5
Underweight 16.5 - 18.5
Normal 18.5 - 25
Overweight 25 - 30
Obese Class I 30 - 35
Obese Class II 35 - 40
Obese Class III > 40
BMI Chart for Adults
This is a graph of BMI categories based on the World Health Organization data. The dashed lines represent subdivisions within a major categorization.
BMI Table for Children and Teens, Age 2-20
The Centers for Disease Control and Prevention (CDC) recommends BMI categorization for children and teens between age 2 and 20.
Category Percentile Range
Underweight <5%
Healthy weight 5% - 85%
At risk of overweight 85% - 95%
Overweight >95%
BMI Chart for Children and Teens, Age 2-20
The Centers for Disease Control and Prevention (CDC) BMI-for-age percentiles growth charts.
chart for boys chart for girls
Ponderal Index
Similar to BMI, the Ponderal Index (PI) is a measure of leanness of a person. It was also called Rohrer's index. Comparatively, the Ponderal Index is more even for different stature. Therefore, it
was commonly used in pediatrics.
PI = weight(kg)/height^3(m^3) (Metric Units) | {"url":"http://www.calculator.net/bmi-calculator.html","timestamp":"2014-04-20T09:15:17Z","content_type":null,"content_length":"13288","record_id":"<urn:uuid:582a64e2-1421-492f-bcde-fa0914b9c15b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hello, I'm Kevin Houston, a mathematician in the School of Mathematics at the University of Leeds.
My research has traditionally been in Singularity Theory, a long and venerable subject dating back to Newton. Recently I have been working in Discrete Differential Geometry, in particular recent
ideas surrounding generalizations of the discrete Laplacian.
For example, any scanned shape will have noise due to scanner inaccuracy. Can we smooth away the noise? The picture on the right shows a scanned object that has been smoothed using a diffusion-type
flow. The top two show the original scan and the bottom two show the smoothed. The colours represent the mean curvature. As you can see from the colours the smoothed version has lot, but not all, of
the noise removed. The problem is to remove the other noise. First we need to decide what we mean by noise...
If you fancy doing a PhD in this area, then please contact me: k.houston(at)leeds.ac.uk
Mathematical Thinking
My main teaching interest is encouraging (read forcing) students to think, hence the title of my best-selling book, How To Think Like a Mathematician. You might be interested in this taster for it: a
free booklet called 10 Ways To Think Like a Mathematician. | {"url":"http://www.kevinhouston.net/","timestamp":"2014-04-20T08:17:02Z","content_type":null,"content_length":"3430","record_id":"<urn:uuid:cc3b3211-5d00-42aa-97ad-36bbbf26d226>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question about how completely a rectifiable arc can fill a non-empty compact continuum that is the closure of its interior
up vote 2 down vote favorite
Let P be the Euclidean plane and let C be a compact and convex subset of P whose interior is non-empty. Does there always exist a strictly increasing sequence of positive real numbers l(1),l(2),...,l
(n),... as well as a strictly decreasing sequence of positive real numbers e(1),e(2),...,e(n),... converging to zero such that, for each positive integer i, there is a subset s(i) of C which is a
rectifiable arc whose length does not exceed l(i) and whose distance from each point of C is not greater than e(i)? If the answer to this question is "Yes", must the infinite sequence l(1),l(2),...,l
(n),... always be unbounded? .........It looks to me intuitively that the answers to these questions should be "Yes" and that the proof should be easy. But all my attempted proofs seem to have gaps,
so maybe I am missing something. I specialized the problem here to make it easier to "visualize". P could be a higher dimensional Euclidean space and C does not have to be convex.
add comment
1 Answer
active oldest votes
No, the lengths must go to infinity.
If $s$ is a path of length $l$ in the plane and $\varepsilon>0$, then the area of the $\varepsilon$-neighborhood of $s$ is no greater than $20\varepsilon(l+\varepsilon)$. Indeed, $s$
can be divided into at most $l\varepsilon^{-1}+1$ subintervals of length at most $\varepsilon$. Pick a point on each subinterval and consider balls of radius $2\varepsilon$ centered at
up vote 7 these points. These balls cover the $\varepsilon$-neighborhood of $s$ and the sum of their areas is at most $4\pi\varepsilon^2(l\varepsilon^{-1}+1)=4\pi\varepsilon(l+\varepsilon)<20\
down vote varepsilon(l+\varepsilon)$.
Since your $C$ has nonempty interior, it has positive area $A$. Since $e(i)$-neighborhood of $s(i)$ covers $C$, the above inequality implies that $20 e(i)(l(i)+e(i))\ge A>0$. Since $e
(i)\to 0$, it follows that $l(i)\to\infty$.
Sergei, thanks for a very informative answer. Actually, your "No" should be a "Yes" since I asked whether the sequence of lengths necessarily had to be unbounded. – Garabed Gulbenkian
Jun 14 '13 at 18:46
add comment
Not the answer you're looking for? Browse other questions tagged gt.geometric-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/133755/a-question-about-how-completely-a-rectifiable-arc-can-fill-a-non-empty-compact-c","timestamp":"2014-04-16T22:45:09Z","content_type":null,"content_length":"51864","record_id":"<urn:uuid:46350c6d-b110-4027-8482-0ca73deb9be5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Hotdeck imputation
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Hotdeck imputation
From "Daniel Waxman" <dan@amplecat.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: Hotdeck imputation
Date Sat, 11 Jun 2005 17:18:13 -0400
I need to do a relatively simple imputation, but am having trouble following
the examples given.
Here is the situation:
Dataset ~ 10,000 obs (non-weighted, 1 obs/subject)
Variable to be imputed:
EKG_abnormal --binary(yes/no), missing at random < 5% of observations.
Potential predictors with which to impute:
At least five, some binary (e.g. chestpain yes/no, first_cat (1-5), etc.)
some which are continuous but can be made categorical (e.g. age ==> age_cat)
Primary outcome being studied: Death yes/no
The questions:
(1) Should I use the outcome variable (death) as one of imputation
variables? Should I use many imputation variables since I can (large
(2) Most important: Can somebody give an example for the correct way to
issue the commands?
If I do the following:
. hotdeck ekg_abnormal using imp, by(agecat first_cat) store
keep(merge_variable) impute(5)
Then I end up with 5 files, imp1 imp2 imp3 imp4 imp5
Eventually I want to end up with imputed values for ekg_abnormal that I can
use the main logistic regression equation of interest. Not sure where the
options infile(), command(logit) fit into things.
Any help would be greatly appreciated.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2005-06/msg00318.html","timestamp":"2014-04-17T18:52:17Z","content_type":null,"content_length":"6000","record_id":"<urn:uuid:2359c345-169b-4289-ae93-49397bde2cfa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
You can't prove this title wasn't an attempt to illustrate Godel
June 29, 2005 2:59 AM Subscribe
See also
Rebecca Goldstein's new book on Godel
. Haven't read it, and found her fiction to be pretty lame, but apparently Godel is the new black.
posted by foxy_hedgehog at 3:35 AM on June 29, 2005
I just recently read
Godel, Escher, Bach: An Eternal Golden Braid
. It's all about relating the self-referentialness of all three of their lives' works to how the human conciousness runs. It's terribly interesting, but a real hard read, I did a 20 page paper on it
last semester. I got a B- (even though she called my paper well written and intelligent).
posted by Mach5 at 4:58 AM on June 29, 2005
Achilles: What, they haven't mentioned
Gödel, Escher, Bach: An Eternal Golden Braid?
Tortoise: Well, they did say the new black. Hofstadter is rather old school.
Achilles: Do tell. Old school or not, it remains an excellent treatise on the subject, and it keeps up in work.
Tortoise: Heavens, yes, and there's no arguing that. Still, though, it's to be expected. "The world will little note, nor long remember, what we say here," after all.
Achilles: Drat. You're right again, Mr. T.
Tortoise: ?. On preview, it looks like we've be trumped.
posted by eriko at 5:01 AM on June 29, 2005
Well ... Mach 5 has to be faster than either a Tortoise or Achilles.
posted by TimothyMason at 5:15 AM on June 29, 2005
Hofstadter often gets the blame for the wild misrepresentations of Godel's incompleteness theorm, but unfairly so in my opinion.
Godel's theorem says no more and no less than what it says. What it says is very counter-intuitive. That it is very counter-intuitive may be important...or it may not be important. You can be
rigorous about it. If so, then you're doing math and if it's important, it's important in the
realm of mathematics
. Alternatively, you can be philosophical about it in the hand-waving sense. If so, then you're not doing math. That
mean that it's necessarily not in some way important outside of math. It well may be. But the math alone doesn't get you that far. It's a mistake to claim that it does.
By nature, I'm the type to think it's important because I'm the type to be immensely fascinated by the very notion of "intuitive" and "counter-intuitive",
in the context of math. No doubt because I'm very intuitive. And my intuition tells me that the ways in which our intuition leads us to deeply counter-intuitive results is meaningful. But then, I
would think that, wouldn't I?
Maybe someone will come along who will rigourously connect Godel's proof to something of wider philosophical import. Lots of people have tried, are trying, and will try because, by gosh, their
intuition says that there's something there. But Godel's incompleteness theorem is very, very specific. It very well may have no relevance to anything beyond the branch of mathematics within which it
And this may be perverse, but over my lifetime I've come to strongly believe that you can't productively search "intuition space" without having a strong grasp, and constantly keeping in mind, what
is known and not known rigourously. Because I'm by nature a hand-waver, this was a lesson hard won.
Douglas Hofstadter is also deeply intuitive, ruminating, "philosophical". GEB spoke extremely deeply to me for this reason. And he had to write that book because it was a labor of love and it was a
core expression of his being. But I can't help but think that had someone else written it, he, too, would be annoyed at the popularization of Godel incompleteness brought by that book and the
misrepresentations and misunderstandings it spawned.
posted by Ethereal Bligh at 5:19 AM on June 29, 2005
For anyone else who was confused, it took a
wikipedia entry
to help me decipher what this Godel thing is that everyone keeps talking about.
(Please don't let me be the only one who was confused, I'm really quite smart, really ...)
posted by forforf at 5:29 AM on June 29, 2005 [1 favorite]
forforf, that's not the relevant theorem(s). This
, is.
posted by Gyan at 5:40 AM on June 29, 2005
tortois and the hare
. I have much respect for the
Eternal Golden Braid
, but still, it's
tortois and the hare
. Isn't it? I was there for my childhood. You can't fool me twice. Or something. And who keeps putting an s on math, for that matter?
posted by nervousfritz at 5:50 AM on June 29, 2005
Good lord. I thought the ontological proof was stupid back when I was fourteen and I haven't had any reason to change my mind since. Gödel took it seriously?! What thin partitions sense from
crackpottery divide.
posted by languagehat at 6:08 AM on June 29, 2005
Thankyouthankyouthankyou Torkel Franzen (and Gyan for the great links!). Intellectual overenthusiasm mixed with hubris and laziness of thought frustrates me more than I can say, and one of the most
egregious and common examples is the willy-nilly invocation of Gödel's theorem where it clearly doesn't belong. I've heard it mentioned more than once in my
classes. Kills me.
Zeno's paradox
, so far as I know, involved Achilles.
Lewis Carroll's version
, though, was all about "A Kill-Ease".
This is good.
On preview: eriko is clearly my Achilles.
posted by dilettanti at 6:14 AM on June 29, 2005
I like Goedel because he was a Platonist. Not enough of those.
posted by mokujin at 6:17 AM on June 29, 2005
And who keeps putting an s on math, for that matter?
, "It is often abbreviated maths in Commonwealth English and math in American English." There is a world outside your borders, you know.
posted by salmacis at 6:17 AM on June 29, 2005
: "I like Goedel because he was a Platonist. Not enough of those."
Stewart Shapiro, in his book,
Philosophy of Mathematics
, says that most mathematicians are closet platonists.
posted by Gyan at 6:21 AM on June 29, 2005
Do people study econs as well as maths in England?
posted by driveler at 6:22 AM on June 29, 2005
Probably not, since I have no idea what "econs" is.
posted by salmacis at 6:59 AM on June 29, 2005
"Stewart Shapiro, in his book, Philosophy of Mathematics, says that most mathematicians are closet platonists."
Someone else that I've read, but can't recall their name, said the same thing. Rubens? Of course, "platonists" in the idealism with regard to mathematics sense. Not real platonists.
Anyway, contemporary mathematicians have to ride the razor's edge I alluded to in my comment. They probably wouldn't be mathematicians if they didn't
that mathematics is in some sense a description of reality; but at the rigorous level they are formalists because they've learned (as a group) that they
be formalists lest their idealism lead them astray. History has proven this. So they're curious mixtures of both intuition and very narrow, rigorous discipline.
This has a bearing on what's happening when we talk about what Godel's theorem "means".
posted by Ethereal Bligh at 6:59 AM on June 29, 2005
This sentence confuses "means" with "use".
(Sorry, EB, but you've triggered the self referential part of my brain.)
Why is it that this noun phrase doesn't mean what this noun phrase does?
Does this sentence remind you of quonsar?
This sentence, although not a question, neverless ends in a question mark?
This sentence no verb, or relevance.
posted by eriko at 7:20 AM on June 29, 2005
Mr Bluesky's take: Tea + G5 + 7:30 - Full 8 Hrs + Godel's Theorem =
Will return after tea has taken hold of my senses.
posted by Mr Bluesky at 7:28 AM on June 29, 2005
Oh, and by the way languagehat, I'm glad you said it (about Anselm's proof). I wanted to, but didn't, because I'm probably being enough of a smartypants already. But Anselm's proof annoys the hell
out of me. I've known smart people who take it seriously. Okay, worse, I've known smart people who are normally very careful thinkers who take it seriously. Or at least more seriously than it
deserves. It's ugly and sloppy and even Leibniz and Godel can't clean it up. Why would they try? That's a good question.
And really, really smart people who are careful thinkers take Newcomb's paradox seriously, too. It is this sort of problem that leaves me feeling as if I'm way smarter than some really smart
people...or way dumber. Some kinds of things which appear to be deep mysteries or paradoxes to others almost always look to me like an obvious category error of some sort. I see
positive property
in Godel's ontological "proof" and I see a placeholder for meaning, but no meaning. I see
in Newcomb's paradox and I ask "what the hell is
On Preview:
"Sorry, EB, but you've triggered the self referential part of my brain"
Exactly which part would that be?
posted by Ethereal Bligh at 7:28 AM on June 29, 2005
It appears Godel's theorems are abused so often because they are so useful and novel as ideas.
And it does have implications for human consciousness, since humans do work with math(s).
I envy anyone reading GEB for the first time. You can only do that once....er.
posted by Smedleyman at 7:35 AM on June 29, 2005
But Goedel was a vocal Platonist who believed that concepts and ideas have some kind of objective reality independent of the human mind. And that is probably why he liked the ontological proof.
posted by mokujin at 7:47 AM on June 29, 2005
In this theory we can also talk about the language and theorems of T itself, through a coding or "Gödel numbering". [...] Without going into any details regarding how such a correspondence can be
This is where I lose a lot of interest. Maybe it's just because I am an applied mathematician. It's not clear to me that such a correspondence can actually be established outside of theoretical
hand-waving, so it's never been entirely clear to me that I should really care about the incompleteness theorem.
Trying to formalize theorem provers is very non-trivial. Most non-trivial proofs seem to require a lot of language that it is very difficult to formalize. So I don't know. But I'll have to read up on
him one day, as it's probably just my own ignorance.
posted by teece at 7:50 AM on June 29, 2005
Eriko wins!
Great tortoise/hare conversation.
Hofstadter would be proud.
Torkel Franzen wins also with the book title employing the word "incomplete."
Self reference can be such fun.
posted by nofundy at 7:54 AM on June 29, 2005
"And it does have implications for human consciousness..."
"...since humans do work with math(s)."
Well, no. That doesn't follow.
On Preview:
"And that is probably why he liked the ontological proof."
Maybe, but he shouldn't have. Plato himself could not make platonism completely rational. There's a reason why he has Socrates retreat to myth and oracles.
On Second Preview:
"It's not clear to me that such a correspondence can actually be established outside of theoretical hand-waving".
I'm not a mathematician, applied or otherwise. But it's my understanding that what is most astonishing and elegant about Godel's proof is that he accomplishes exactly that. That he established
incompleteness was flashy. The means with which he did so was
posted by Ethereal Bligh at 7:58 AM on June 29, 2005
Interesting, I had no idea Torkel had a book coming out. He's working at the same department I did before I left the university 5 weeks ago. In fact, he recently moved into my old office...
I think I have to get the book - I have tried to discuss Gödels theorems and different misconceptions with people several times.
posted by rpn at 8:14 AM on June 29, 2005
But it's my understanding that what is most astonishing and elegant about Godel's proof is that he accomplishes exactly that. That he established incompleteness was flashy. The means with which he
did so was useful.
Ah then, I will definitely have to head to the library and wade into the rarified heights of such abstraction, then. Thanks.
posted by teece at 8:16 AM on June 29, 2005
When studying it in college, I remember the theorem only referring to "completeness" and not "consitency". This Franzen fellow refers to all of the theorems as determining "consistency" in a system.
I've never heard that term in mathematics before. Is that the same as completeness?
posted by destro at 8:16 AM on June 29, 2005
It appears Godel's theorems are abused so often because they are so useful and novel as ideas.
Indeed. And one of the worst abuses (because it comes from someone I admire who should really know better) is
P.S.: Love the title.
I'd like to make my last sentence self-referential, but couldn't - oh wait.
posted by spazzm at 8:37 AM on June 29, 2005
Excellent post. I've seen Godel's theorem(s) invoked rather haphazardly in the blue to support any number of claims and it's gotten under my skin. Glad to hear I'm not the only one bugged by this. I
even vaguely recall its use to "attack" any meaningful distinction between, e.g., noetic and empirical approaches to knowledge. I'd provide the set of all sets containing the aforementioned posts,
but it's lunchtime. Also, if you take the ontological proof seriously, I have a perfect island I'd like to sell you. Guanillo Charter Tours, all of that.
posted by joe lisboa at 9:19 AM on June 29, 2005
When studying it in college, I remember the theorem only referring to "completeness" and not "consitency". This Franzen fellow refers to all of the theorems as determining "consistency" in a system.
I've never heard that term in mathematics before. Is that the same as completeness?
the incompleteness result can be expressed as "any formal system of sufficient power as to describe arithmetic is either incomplete or inconsistent".
inconsistent means that the system proves some statements to be both true and false. incomplete means that there are some statements which cannot be proven to be either true or false.
the original (and only?) proof was formulated entirely in terms of rock solid number theory and first order logic. no flakey hand waving at all.
posted by paradroid at 9:19 AM on June 29, 2005
Paraconsistent logics are the future, people. Well, they're fun anyway.
posted by sonofsamiam at 9:32 AM on June 29, 2005
franzen's arguments are incomplete and inconsistent.
posted by 3.2.3 at 9:54 AM on June 29, 2005
Intellectual overenthusiasm mixed with hubris and laziness of thought frustrates me more than I can say, and one of the most egregious and common examples is the willy-nilly invocation of Gödel's
theorem where it clearly doesn't belong.
Hear hear. Along with quantum mechanics and the second law of thermodynamics. Maybe someone will take on those in the same vein.
posted by DevilsAdvocate at 10:13 AM on June 29, 2005
Assertion: Godel's Incompleteness theorem has been superseeded by Turing's Halting problem, which is much easier to understand.
posted by delmoi at 1:17 PM on June 29, 2005
Good summation of Godel, platonism and some other stuff in Edge's
with Rebecca Goldstein.
Delmoi, inasmuch as the halting problem represents incompleteness, it's just a symbolic representation of the concept. Boiling water getting colder at room temperature is easy to envisage but that
doesn't make thermodynamics any less relevent.
posted by Sparx at 2:37 PM on June 29, 2005
"It is often abbreviated maths in Commonwealth English and math in American English."
I should have guessed as much. Those English bastards are always changing the spelling of everything. Next it will be "mauths" They love putting extra 'u's in the way, almost as much as they love
their damn tea.
posted by nervousfritz at 3:06 PM on June 29, 2005
Godel's theorems are pure mathematical masturbation.
I dont believe in any natural numbers bigger than A^A^A^A^A or less than -A^A^A^A^A where A is the number of sub-atomic particles in the known universe. I define my arithmetic system accordingly. If
for some reason, I need to believe in a natural number of greater magnitude than these, let me know and I will redefine my system to accommodate.
Bingo. My belief system is totally consistent and complete and Godel and all his disciples and pretenders disappear in a puff of logic.
posted by DirtyCreature at 4:36 PM on June 29, 2005
Godel's Incompleteness theorem has been superseeded by Turing's Halting problem, which is much easier to understand.
Possibly, and it's kind of funny to watch these pedantic math types correct the supposedly egregious popular misunderstandings of Godel when I suspect that many of them are not familiar with the
halting problem, which does say that there are true things that can't be proved. Well, computed.
posted by transona5 at 6:44 PM on June 29, 2005
If for some reason, I need to believe in a natural number of greater magnitude than these, let me know and I will redefine my system to accommodate.
What if one could conceivably need an indefinite -- perhaps ongoing -- number of such extensions?
posted by weston at 7:25 PM on June 29, 2005
What if one could conceivably need an indefinite -- perhaps ongoing -- number of such extensions?
LOL. I knew I shouldn't have put that caveat in. Ok ok - I'll let you extend my system A^A^A^A^A times but that's IT.
But first I'd like to know just ONE possible reason why you would ever need to extend my system.
posted by DirtyCreature at 10:26 PM on June 29, 2005
To calculate the (A^A^A^A^A + 1)-th digit of pi, perhaps?
Because, simply, your number system makes no sense? I mean, I'm pretty sure you're joking, but still. While every practical use of numbers will fall well shy of that (including, if I'm correct,
calculating the entropy of the whole universe, which should be of the order of A! - an indescribably huge number, which is still smaller than A
), that doesn't make it right. For example, in your universe, you can't add any two number or multiply any two numbers arbitrarily - which for any reasonable requirement of a number system seems to
be a bit of a failiure.
posted by vernondalhart at 11:03 PM on June 29, 2005
writes "And one of the worst abuses (because it comes from someone I admire who should really know better) is this."
Agreed. Great tiles, but his explorations of consciousness are so much hand waving. "Quantum effects in microtubules" just takes a
that's become a
and turns it back into a
. Feh.
posted by orthogonality at 11:21 PM on June 29, 2005
Of course we're familiar with the halting problem. I knew of the halting problem long before Godel's theorem. Anyway, it too only proves what it proves and nothing more or less. It's no more
explicitly philosophically far-reaching than Godel's theorem.
Yeah. Ugh. Penrose.
posted by Ethereal Bligh at 12:17 AM on June 30, 2005
To calculate the (A^A^A^A^A + 1)-th digit of pi, perhaps?
For what purpose?? For mathematical masturbation? Sure - go nuts. But they got to the moon with 15 decimal places of pi. A little less than A^A^A^A^A no?
Because, simply, your number system makes no sense? I mean, I'm pretty sure you're joking
It makes complete sense. Tell me the two biggest numbers you will ever multiply by each other and I'll build a number system which satisfies your needs way more than adequately. Ok certain incredibly
large unfathomable numbers in the system can't be multiplied or added together - so? Its just another axiomatic system.
Not pretty? Not elegant? Trust me, for all intents and purposes my system will function as sexily as normal number theory for any practical problem that can ever be considered. Sure the concept of
the countably infinite is elegant but as we've seen, it gets you in a lot of strife and is completely unnecessary.
My point is it's incredibly easy to prove consistency and be complete at the same time as long as your belief system stays finite - no matter how unbelievably incredibly unthinkably large you choose
your finite universe to be. Anything more is masturbatory and problematic.
posted by DirtyCreature at 3:43 AM on June 30, 2005
Actually, your system will be incredibly complicated:
'Normal' number systems allow any two numbers to be multiplied and added. Any number can be divided by any other number.
Exception: You can't divide by zero.
Your system would be pretty similar, but your list of exceptions would be very, very long (infinite if you include real number, as a matter of fact) since you have to make an exception for any
addition or multiplication that exceeds the bounds of your system.
Why can't you simply disallow any operation that produces a result larger than the bound? Because then you must define "larger than the bound", which is a number outside the bound. Catch-22, amigo.
But I assume you're joking, of course.
posted by spazzm at 4:07 AM on June 30, 2005
(infinite if you include real number, as a matter of fact)
The reals are defined separately and aren't a problem for Godel. To know where I'm coming from requires an understanding of the machinery of the Godel proof. It's the natural numbers (1,2,3,4,.....)
that are the problem.
Because then you must define "larger than the bound", which is a number outside the bound. Catch-22, amigo.
No thats not how you do it. Define two sets - the touchables and the untouchables. Any two numbers in the touchables can be added or multiplied, the results of which may be an element of either set.
There is no multiplication or addition defined on elements of the untouchables. The "bound" as you describe it is just the sum of the sizes of the two sets.
posted by DirtyCreature at 4:26 AM on June 30, 2005
Any two numbers in the touchables can be added or multiplied, the results of which may be an element of either set.
Ahh, so you're inconsistent, thus, you are immune to Gödel's discovery.
Why your system is inconsistent: Operations give different answers at different scales. If we let "N" equal the largest possible positive integer, we find that N!=N and (N-1)!=N, and (N-2)!=N, but 3!
=6, and (3-1)!=2, and (3-1)!=1.
By the way, does the set of all non-self-inclusive sets include itself in your system?
posted by eriko at 5:20 AM on June 30, 2005
I bow before your superiour intellect. Clearly, your new system of natural positive numbers smaller than or equal to X has no flaws. And pay no heed to eriko (that snarky cad!), he's just jealous
because he didn't think of it first.
Incidentially, here's a
related research project.
Se also
here.posted by spazzm at 6:55 AM on June 30, 2005
" "...since humans do work with math(s)."
Well, no. That doesn't follow. "
Ok, humans don't work with math. Fine.
This isn't a post.
posted by Smedleyman at 1:14 PM on June 30, 2005
"Ok, humans don't work with math. Fine."
I said the argument was flawed--that the conclusion was not warranted from the premises. Furthermore, the conclusion of the argument
"humans work with math". Therefore, your concession is itself a
non sequitor
And you're right: that isn't a post. It's a comment.
posted by Ethereal Bligh at 1:51 PM on June 30, 2005
I bow to your perceptiveness at being able to recognize someone with a little more training in this area than some from just a few sentences.
Factorial isn't a fundamental operator of arithmetic but ok if you want factorials, let them be defined only on the touchables and we will extend the untouchacles to include the factorials of all
numbers in the touchables.
posted by DirtyCreature at 2:33 PM on June 30, 2005
Constructivism is a (somewhat) legitimate school of thought in mathematics.
posted by Ethereal Bligh at 2:47 PM on June 30, 2005
we will extend the untouchables to include the factorials of all numbers in the touchables.
What the heck is the use of a number system that does not define operations on 1 (factorial(1)), 2 (factorial(2)), 6 (factorial(3)) or 24 (factorial(4))?
What you're proposing regarding disallowing numbers larger than a certain limit is what digital computers do - if you keep adding one to an integer you'll end up with a negative number after a while,
because addition is undefined beyond a certain limit.
While this is useful, it doesn't disprove Godel.
To put it bluntly: Your system may be consistent, but it is incomplete.
It's incomplete because there are true statements that it cannot prove.
For example:
If the limit (the largest number defined in the "touchables" or "untouchables") of your system is X, you cannot prove or disprove (for example) "Y is prime", where Y > X.
You may hold the opinion that no-one would ever "need" a number as large as Y, but that's not the point. Completeness, as defined by Godel, is not subject to "need".
posted by spazzm at 3:45 PM on June 30, 2005
While this is useful, it doesn't disprove Godel.
I'm not trying to disprove Godel's theorems. I'm trying to show they are masturbatory and aren't really very useful or applicable.
It's incomplete because there are true statements that it cannot prove.
This is not the definition of completeness. Completeness is determined with respect to expressible statements within the language of the system.
"Y is prime", where Y > X.
"Y is prime" is not expressible in my system because Y is not even defined. I don't believe in Y. It's completely useless to anyone except masturbators.
Trust me. My system is complete and consistent. For those who want to learn more, I suggest looking up Wikipedia as a starting point.
posted by DirtyCreature at 5:06 PM on June 30, 2005
This is not the definition of completeness. Completeness is determined with respect to expressible statements within the language of the system.
Nice try. Godel defined completeness only for systems
strong enough to define the natural numbers
. Your system is not strong enough to define the natural numbers - therefore, it can never be complete in the Godelian sense.
You seem to try to get around this by redefining the meaning of "natural number" by claiming that there is a largest possible natural number. Unfortunately (?), there is no largest natural number.
There are systems that are both complete and consistent (Peano arithmetic comes to mind), but they cannot define the natural numbers.
I don't believe in [a very large prime] Y.
There's some pretty compelling evidence that there are
infinitely many primes
. This means that no matter how large X (the largest number in your system) is, there will always be a prime Y such that Y > X. Whether you believe in it or not.
Trust me. My system is complete and consistent
I'll trust you once you explain how a number system that does not allow operations including the numbers 1, 2, 6 or 24 is 'complete' in the sense Godel uses. For some reason you seem to be avoiding
the question.
posted by spazzm at 5:51 PM on June 30, 2005
There are systems that are both complete and consistent (Peano arithmetic comes to mind), but they cannot define the natural numbers.
Um - that should be "cannot define multiplication and addition on all natural numbers".
posted by spazzm at 6:11 PM on June 30, 2005
In all honesty, DirtyCreature, your system would work for everyday life just fine. I can't think of a single number that one could ever need for any use in every day life, or in any science other
than mathematics that would be larger than you upper bound. I'll grant you that.
However, it doesn't work for mathematics. While you might be able to produce some version of the Reals out of it, your upper bound on the integers would also provide an upper bound on the reals, and
hence it would also provide a lower bound in size of integers too - which would yield major problems with notions of convergence and continuity. Although in that sense, you couldn't really do
calculus in such a system, and hence basically any science that would ever use it would be out.
You could try get around this by just using the techniques of calculus and call the rest 'mathematical masturbation' but you'd be grossly deluding yourself. The reason that these things work
implicitly assumes a multitude of facts about the number system that they are based on - and there's no way around that. A major part of the 19th and 20th centuries work in mathematics was coming to
grips with all this. The world you live in is built on such masturbation, I'm afraid.
posted by vernondalhart at 8:06 PM on June 30, 2005
Sorry, that should be
... and hence it would also provide a lower bound in size of
real numbers
too ...
posted by vernondalhart at 8:07 PM on June 30, 2005
Again, some mathematicians take mathematical constructivism seriously. Google it. I'm not paying much attention to DirtyCreature's comments, or this argument, and skimming leads me to view him
unfavorably--but, even so, it is simply wrong to dismiss out-of-hand (what appears to me) to be his underlying mathematical philosophy.
posted by Ethereal Bligh at 8:14 PM on June 30, 2005
I can't think of a single number that one could ever need for any use in every day life, or in any science other than mathematics that would be larger than you upper bound.
Thank you.
However, it doesn't work for mathematics.
As defined by masturbators, agreed. (Don't take offence by the term "masturbators". The analogy is that masturbating might be fun but it doesn't produce anything much useful and can lead to
significant embarassment.)
While you might be able to produce some version of the Reals out of it, your upper bound on the integers would also provide an upper bound on the reals
Sorry no, this is wrong. I know this seems a strange result but an axiomatic system defining all the real numbers as we know and love them can be consistent and complete. This is confusing given that
the reals contain the natural numbers but nevertheless true. It's just the natural numbers that are the problem. (Should be somewhere in Wikipedia)
skimming leads me to view him unfavorably--but, even so, it is simply wrong to dismiss out-of-hand ... his underlying mathematical philosophy
Ok ok I'm a complete, consistent asswipe. I can live with that ;)
posted by DirtyCreature at 9:19 PM on June 30, 2005
Taski's theorem on real closed fields (See Model Theory)
which proves an axiomatization of the reals exists which is complete.
I'm sure this level of detail is going way too far for many here but I'm a sucker for punishment.
posted by DirtyCreature at 10:39 PM on June 30, 2005
« Older Ballistic Defecation is just what it sounds like.... | Physics, bikinis and bubbles.... Newer »
This thread has been archived and is closed to new comments | {"url":"http://www.metafilter.com/43130/You-cant-prove-this-title-wasnt-an-attempt-to-illustrate-Godel","timestamp":"2014-04-17T08:40:05Z","content_type":null,"content_length":"72249","record_id":"<urn:uuid:8a043bc6-7b78-4b33-841d-7f2d82570844>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
October 3 2005 Vol. 10, No. 40
THE MATH FORUM @ DREXEL INTERNET NEWS
Teacher2Teacher FAQ: Metric Week
MathSongs | Developer's Guide to Excelets
TEACHER2TEACHER FAQ: METRIC WEEK
As students start to understand the structure of the base 10
number system, it is important to provide opportunities for
them to apply what they have learned.
During the week of October 10 (i.e., 10/10) classrooms can
participate in "Metric Week" by investigating measurement,
measurement systems, place value, and decimals using
real-world contexts.
If you are looking for some ways to celebrate math in your
math class, introduce or reinforce the metric system, or if
you would like to suggest some school-wide thematic
activities, you are certain to find ideas on this page to
help you design a Metric Week that students will enjoy.
Vicki Young, Motlow State Community College in Lynchburg,
Tennessee, uses math songs and poems to motivate her
mathematics students. Young writes, "My students get used to
their crazy singing teacher and sometimes I can even get them
to sing along! If you have RealPlayer on your computer, you
should be able to hear me sing an a cappella version of each
DEVELOPER'S GUIDE TO EXCELETS
Scott Sinex, Prince George's Community College, Largo,
Maryland, provides information on creating Excelets. Excelets
are interactive Excel spreadsheets or simulations of
mathematical models. The user changes a variable and the
spreadsheet changes in numerical, graphical, and/or even
symbolic form (equations).
Examples include:
- interactive features tour
- flipping pennies
- fractions
- radioactive decay
- derivatives
- ideal gas law
Sinex also provides the guide "Using Excel for Handling,
Graphing, and Analyzing Scientific Data: A Resource for
Science and Mathematics Students."
Download the PDF here:
CHECK OUT OUR WEB SITE:
The Math Forum @ Drexel http://mathforum.org/
Ask Dr. Math http://mathforum.org/dr.math/
Problems of the Week http://mathforum.org/pow/
Mathematics Library http://mathforum.org/library/
Math Tools http://mathforum.org/mathtools/
Teacher2Teacher http://mathforum.org/t2t/
Discussion Groups http://mathforum.org/kb/
Join the Math Forum http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
Donations http://deptapp.drexel.edu/ia/GOL/giftsonline1_MF.asp
Ask Dr. Math Books http://mathforum.org/pubs/dr.mathbooks.html
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \ | {"url":"http://mathforum.org/electronic.newsletter/mf.intnews10.40.html","timestamp":"2014-04-17T06:03:23Z","content_type":null,"content_length":"7138","record_id":"<urn:uuid:11cff4c2-210e-4cfd-a31b-8e22ffdddb80>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Total no. of Triangle,Square and Rectangle
hi jacks,
I suggest you start with squares; that looks easiest to me.
You can have squares that are 1 by 1; 2 by 2; 3 by 3; and 4 by 4.
EDIT: You should also count the squares that are at 45 degrees to the grid.
There's an easy pattern to this, so finding that is a good beginning to the harder rectangle case.
Last edited by bob bundy (2013-09-30 07:13:07)
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://mathisfunforum.com/viewtopic.php?pid=285821","timestamp":"2014-04-17T09:34:43Z","content_type":null,"content_length":"12338","record_id":"<urn:uuid:f39e3220-f7e3-4b34-a695-3a74b6a59ccd>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inequality problem
May 22nd 2009, 09:49 AM #1
Inequality problem
Find the set of values of x for which:
$\frac{x}{x-3} > \frac{1}{x-2}$.
I rearranged it to form:
$\frac{x^2 - 3x + 3}{(x-3)(x-2)} > 0$
Therefore critical points of the denominator are x = 2, 3.
It says that the numerator is always positive, could you explain why exactly this is, or is this just something that you "know".
I am not sure then how to work out a set of values for x, they have x<2 and x>3 in the book.
Thanks in advance for the help
Find the set of values of x for which:
$\frac{x}{x-3} > \frac{1}{x-2}$.
I rearranged it to form:
$\frac{x^2 - 3x + 3}{(x-3)(x-2)} > 0$
Therefore critical points of the denominator are x = 2, 3.
It says that the numerator is always positive, could you explain why exactly this is, or is this just something that you "know".
The roots of the numerator are complex numbers.
The value of the numerator at x=0 is 3. So it is always positive.
Thanks for the reply Plato. Do you have any idea how they worked out their set of values?
Find the set of values of x for which:
$\frac{x}{x-3} > \frac{1}{x-2}$.
I rearranged it to form:
$\frac{x^2 - 3x + 3}{(x-3)(x-2)} > 0$
Therefore critical points of the denominator are x = 2, 3.
It says that the numerator is always positive, could you explain why exactly this is, or is this just something that you "know".
If the discriminant of a quadratic $ax^2+bx+c$ is negative, then it keeps a constant sign, the same as a.
So here, since a=1, and the discriminant is $9-12=-3$, the above is always positive.
You can prove it by taking the derivative. For this particular polynomial.
I am not sure then how to work out a set of values for x, they have x<2 and x>3 in the book.
Thanks in advance for the help
Then, for the quotient to be positive, you need the denominator to be positive.
The product of two terms is positive if and only if both terms have the same sign.
That is to say [x-2>0 and x-3>0] or [x-2<0 and x-3<0]
From the first one, you have [x>2 and x>3], which is [x>3]
From the second one, you have [x<2 and x<3], which is [x<2]
Does it look clear to you ?
That makes perfect sense thank you, if the discriminant is negative - ie two complex roots - then the quadratic keeps the sign of the $x^2$ constant.
Never had it explained like this before, but seems much easier.
Then, for the quotient to be positive, you need the denominator to be positive.
The product of two terms is positive if and only if both terms have the same sign.
That is to say [x-2>0 and x-3>0] or [x-2<0 and x-3<0]
From the first one, you have [x>2 and x>3], which is [x>3]
From the second one, you have [x<2 and x<3], which is [x<2]
Does it look clear to you ?
Yep that's perfectly clear thank you.
Sorry just one more question, if the original question had something that was less than zero, would you do the opposite of the above, make one negative and one positive?
Not sure if anyone mentioned this but, you can complete the square for the numerator to prove that it is always positive cause i dont understand the discriminant thingy always.
Since $[x-\frac{3}{2}]^2\geq0$ for all values of x
Therefore, $[x-\frac{3}{2}]^2+\frac{3}{4}>0$ for all values of x
May 22nd 2009, 10:08 AM #2
May 22nd 2009, 10:14 AM #3
May 22nd 2009, 10:27 AM #4
May 22nd 2009, 03:05 PM #5
May 22nd 2009, 03:07 PM #6
May 22nd 2009, 10:04 PM #7 | {"url":"http://mathhelpforum.com/algebra/90080-inequality-problem.html","timestamp":"2014-04-17T06:01:52Z","content_type":null,"content_length":"54854","record_id":"<urn:uuid:2e78a043-53ff-4015-9c1c-d86aa2ee8f38>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
General Application of Boyle's Law
The formula means that if you have a quantity of gas with pressure P1 and volume V1, and you compress it, or let it expand, and afterwards the pressure and volume are P2 and V2, then
P1V1 = P2V2
you know the volume before and after removing the block. What can you say about the pressure, and how is this related to the mass of the block and the piston. | {"url":"http://www.physicsforums.com/showthread.php?t=383343","timestamp":"2014-04-21T07:23:41Z","content_type":null,"content_length":"22325","record_id":"<urn:uuid:793580c0-518a-4258-a560-87f42fcc6896>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factorials of Negative Integers
Date: 06/27/2002 at 02:21:04
From: Steven
Subject: factorial
Is it possible to compute the factorial of a negative number, e.g.,
(-4)! = ?
Date: 06/28/2002 at 05:33:31
From: Doctor Floor
Subject: Re: factorial
Hi, Steven,
Thanks for your question.
Starting from the explanation for 0! in the Dr. Math FAQ,
0! = 1
if we try to extend this to -1, we find that (-1)! = 1/0 which means
that (-1!) is undefined. Extending further to (-2)! etc. is thus
impossible as well.
The same FAQ mentions the Gamma function, which generalizes the idea
of factorial to non-integer numbers. Note that this function also
yields undefined values for negative integer factorials.
If you have more questions, just write back.
Best regards,
- Doctor Floor, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/60851.html","timestamp":"2014-04-21T07:50:00Z","content_type":null,"content_length":"5851","record_id":"<urn:uuid:0b727240-dd6a-4753-b547-8042b1885923>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric Series - Problem 2
Summing a series, so behind me I have a series consisting of three terms and some dot, dot, dot which means that this is ongoing. So what this tells me is that I am summing a infinite series, there
is no end, there is no specific sum this many terms. We don't have a formula for summing a infinite arithmetic sequence, so I'm hoping that this is a geometric series that I am dealing with.
So if geometric series we are going from one term to the next by multiplying by a consistent rate.
So what I see is some denominators for our second and third term but not our first, so I'm going to write in the denominator just to make my life a little bit easier to see if I can get any
consistencies in this case. So I look at my denominators I'm going from 1 to 4 to 16 which tells me that I have to be multiplying by 4 in the denominator each time to go from one term to the next. So
I know my rate is going to be something over 4.
Similarly I'm going from 5 to -15 to 45 in the numerator which tells me I'm multiplying by 3, but there is this sign change, I'm going from positive to negative and back to positive, the only way to
do that is if we have a negative involved in there as well, so I know that my rate then has to be -3 over 4.
You can always check 5 times -3/4 is -15 over 4 times -3/4 is 45 over 16, so I found the rate for this geometric infinite series.
We now need to find the sum. We have a formula for the infinite sum; s is equal to a1 over 1 minus r, now all we have to do is plug in our information. We know that a1 is 5, that's easy enough and we
know that our rate is -3/4 so this just becomes 1 minus -3/4. 1 minus -3/4 just becomes positive so this just becomes plus, so this becomes 5 over 1 plus 3/4, four-fourths plus 3/4 is just 7/4
dividing by a fraction just flip and multiply, so this is 5 times 4 over 7 which just leaves us with 20 over 7.
So finding a infinite sum, make sure it's a geometric series because in order for it to have an infinite sum, it has to be geometric, find your rate and then just plug it into your equation.
geometric series sum of a geometric sequence rate infinite sum | {"url":"https://www.brightstorm.com/math/algebra-2/sequences-and-series/geometric-series-problem-2/","timestamp":"2014-04-19T14:56:57Z","content_type":null,"content_length":"56937","record_id":"<urn:uuid:3f731cc6-157c-4e85-94b5-5fc0e302fcc5>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Interesting Interpolation Problem
[Date Index] [Thread Index] [Author Index]
An Interesting Interpolation Problem
I need your help once again. Any feedback would be much appreciated.
I'm really going to test my powers of explanation so here we go......
I have a two-dimensional matrix. I would like to create an
interpolating object of this thing, but I do not wish to use the
standard interpolating procedure built-in to Mathematica as it would
obscure the "threshold" nature of this matrix. Let me explain.
Consider the following 3 X 3 matrix in MatrixForm(I'm using letters
just for the sake of discussion, numbers will actually be used):
a 0 0
b c 0
d e f
Let's call this mat, so mat[[1,1]] = a, mat[[1,2]] = 0, etc.
Imagine that I create an interpolating object of mat (matinterp
ListInterpolation[mat,InterpolationOrder->1], I just use linear
interpolation for the sake of discussion, I don't think the type of
algorithm available in Mathematica will solve my problem).
If I put matinterp[1,1], I get "a". This is correct.
If I put matinterp[1,1.01], the algorithm will use information at
surrounding values, such as "a","0","b","c", to approximate
I DON'T WANT IT TO DO THIS!!!
When I put in matinterp[1,1.01], I wish it to return 0. This matrix
represents threshold behavior, I want there to be sharp
discontinuities, yet the interpolating object smoothes these out.
This is a tricky problem to say the least. I was thinking of pulling
out the points which lie on the "threshold curve." This would be (the
form is {x coord, y coord, value} ):
Imaging that I plotted the INTERPOLATED "threshold" in xy space. Then,
(x,y) combinations ABOVE the "threshold" would receive a value of
zero. (x,y) combinations BELOW or ON the "threshold" would receive
some kind of value. The trick is, however, that for (x,y)
combinations BELOW or ON the "threshold" any interpolating which
needed to be done (i.e. the point may lie off of the grid) can only
use information for points below the threshold---the algorithm cannot
use the 0 values above the threshold. I think this is the hard part.
I realize this is a little complex, but any help would be much
Chris Farr | {"url":"http://forums.wolfram.com/mathgroup/archive/1998/Apr/msg00214.html","timestamp":"2014-04-19T14:38:10Z","content_type":null,"content_length":"36325","record_id":"<urn:uuid:0bec1a0e-d850-44cc-8abe-c01456f2e7bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
iterative combinations
06-01-2003 #1
Registered User
Join Date
Jun 2003
iterative combinations
I wrote the following code in order to compute combinations of n per k.Though it works right for number n between 1-30 i can't make it work for larger numbers.Does anybody have any idea on how to
improve it?
#include <iostream>
#include <cmath>
using namespace std;
int main() {
int n,c,j,k;
cout<<"Give n:";cin>>n;
cout<<"Give k:";cin>>k;
/* i goes through all n-bit numbers */
for (int i=0; i<(1<<n); i++) {
/* masking the j'th bit as j goes through all the bits,
* count the number of 1 bits. this is called finding
* a population count.
for (j=0,c=0; j<32; j++) if (i & (1<<j)) c++;
/* if that number is equal to k, print the combination... */
if (c == k) {
/* by again going through all the bits indices,
* printing only the ones with 1-bits
for (j=0;j<32; j++) if (i & (1<<j))
cout<<j<<' ';
for (int i=0; i<(1<<n); i++)
Thats equivalent to saying 2^n. I know off the top of my head that for a signed integer (normal 32 bit int), anything above n=30 will cause wraparound error (for unsigned 32 bit int, n=31 is the
The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop.
You might want to precalculate 1 << n, but that's not that important.
The overflow issue is more important, but that's always a problem with combinations.
Another is that the next largest integer type (64-bit) doesn't have the same name on all compilers. On Borland and MS it's called __int64 while on standards compliant compilers (e.g. gcc) it's
called long long.
I usually use boost for that and use the typedef boost::int64_t (or something like that).
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
You aren't finding all combinations here, but all permutations of 32 bit strings having k ones. The easyest way is to just use next_permutation
int main(int argc, char *argv[]) {
typedef std::vector<bool> bvec;
typedef bvec::size_type sz_t;
sz_t n=(argc>1)?atoi(argv[1]):4;
sz_t k=(argc>2)?atoi(argv[2]):2;
if(k>n) {
std::cerr << n << " bit number cannot have " << k << " ones";
std::cerr << std::endl;
return 2;
bvec v;
do {
for(sz_t i=0;i<v.size();++i) std::cout << ((v[i])?'1':'0');
std::cout << std::endl;
} while(std::next_permutation(v.begin(),v.end()));
return 0;
06-01-2003 #2
06-02-2003 #3
06-02-2003 #4
Registered User
Join Date
Jan 2003 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/40154-iterative-combinations.html","timestamp":"2014-04-20T04:10:59Z","content_type":null,"content_length":"52310","record_id":"<urn:uuid:0db4a39d-50cc-4345-91b5-0dba1b571327>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Numbers and Computer Methods for Factorization, 2nd Ed
Results 1 - 10 of 18
"... The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance,
because the security of some of these cryptosystems, such as the Rivest-Shamir-Adelman (RSA) system, depends o ..."
Cited by 41 (17 self)
Add to MetaCart
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because
the security of some of these cryptosystems, such as the Rivest-Shamir-Adelman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer
factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the
availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM), and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their
parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the
problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of
the 617-decimal digit Fermat number F11 = 2211 + 1 which was accomplished using ECM.
- In Proceedings of the Tenth IEEE International Symposium on High Performance Computer Architecture , 2004
"... Using alternative cache indexing/hashing functions is a popular technique to reduce conflict misses by achieving a more uniform cache access distribution across the sets in the cache. Although
various alternative hashing functions have been demonstrated to eliminate the worst case conflict behavior, ..."
Cited by 24 (3 self)
Add to MetaCart
Using alternative cache indexing/hashing functions is a popular technique to reduce conflict misses by achieving a more uniform cache access distribution across the sets in the cache. Although
various alternative hashing functions have been demonstrated to eliminate the worst case conflict behavior, no study has really analyzed the pathological behavior of such hashing functions that often
result in performance slowdown. In this paper, we present an in-depth analysis of the pathological behavior of cache hashing functions. Based on the analysis, we propose two new hashing functions:
prime modulo and prime displacement that are resistant to pathological behavior and yet are able to eliminate the worst case conflict behavior in the L2 cache. We show that these two schemes can be
implemented in fast hardware using a set of narrow add operations, with negligible fragmentation in the L2 cache. We evaluate the schemes on 23 memory intensive applications. For applications that
have non-uniform cache accesses, both prime modulo and prime displacement hashing achieve an average speedup of 1.27 compared to traditional hashing, without slowing down any of the 23 benchmarks. We
also evaluate using multiple prime displacement hashing functions in conjunction with a skewed associative L2 cache. The skewed associative cache achieves a better average speedup at the cost of some
pathological behavior that slows down four applications by up to 7%. 1.
- In Proc. of COCOON 2000 , 2000
"... Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed
difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In ..."
Cited by 20 (1 self)
Add to MetaCart
Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed
difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In recent years the limits of the best integer factorisation algorithms have been extended
greatly, due in part to Moore’s law and in part to algorithmic improvements. It is now routine to factor 100-decimal digit numbers, and feasible to factor numbers of 155 decimal digits (512 bits). We
outline several integer factorisation algorithms, consider their suitability for implementation on parallel machines, and give examples of their current capabilities. In particular, we consider the
problem of parallel solution of the large, sparse linear systems which arise with the MPQS and NFS methods. 1
, 2002
"... Numbers of the form (6m + 1)(12m + 1)(18m + 1) where all three factors are simultaneously prime are the best known examples of Carmichael numbers. In this paper we tabulate the counts of such
numbers up to 10 for each n 42. We also derive a function for estimating these counts that is remarkably ..."
Cited by 3 (0 self)
Add to MetaCart
Numbers of the form (6m + 1)(12m + 1)(18m + 1) where all three factors are simultaneously prime are the best known examples of Carmichael numbers. In this paper we tabulate the counts of such numbers
up to 10 for each n 42. We also derive a function for estimating these counts that is remarkably accurate.
, 2001
"... Mathematicians have been attempting to find better and faster ways to factor composite numbers since the beginning of time. Initially this involved dividing a number by larger and larger primes
until you had the factorization. This trial division was not improved upon until Fermat applied the ..."
Cited by 1 (0 self)
Add to MetaCart
Mathematicians have been attempting to find better and faster ways to factor composite numbers since the beginning of time. Initially this involved dividing a number by larger and larger primes until
you had the factorization. This trial division was not improved upon until Fermat applied the
, 2002
"... 3 Finite Fields In computational number theory and cryptographic applications, we often have to work over finite fields. A finite field F is a finite set with operations "+ " and "\Theta " which
satisfy the usual associative, commutative and distributive laws: ..."
Add to MetaCart
3 Finite Fields In computational number theory and cryptographic applications, we often have to work over finite fields. A finite field F is a finite set with operations "+ " and "\
Theta " which satisfy the usual associative, commutative and distributive laws:
, 1996
"... This Report updates the tables of factorizations of a n \Sigma 1 for 13 a ! 100, previously published as CWI Report NM-R9212 (June 1992) and updated in CWI Report NM-R9419 (September 1994). A
total of 760 new entries in the tables are given here. The factorizations are now complete for n ! 67, an ..."
Add to MetaCart
This Report updates the tables of factorizations of a n \Sigma 1 for 13 a ! 100, previously published as CWI Report NM-R9212 (June 1992) and updated in CWI Report NM-R9419 (September 1994). A total
of 760 new entries in the tables are given here. The factorizations are now complete for n ! 67, and there are no composite cofactors smaller than 10 94 . 1991 Mathematics Subject Classification.
Primary 11A25; Secondary 11-04 Key words and phrases. Factor tables, ECM, MPQS, SNFS To appear as Report NM-R96??, Centrum voor Wiskunde en Informatica, Amsterdam, March 1996. Copyright c fl 1996,
the authors. Only the front matter is given here. For the tables, see rpb134u2.txt . rpb134u2 typeset using L a T E X 1 Introduction For many years there has been an interest in the prime factors of
numbers of the form a n \Sigma 1, where a is a small integer (the base) and n is a positive exponent. Such numbers often arise. For example, if a is prime then there is a finite field F with a n ...
"... This paper presents two algorithms that, given an n-bit positive integer m 2 1 + 8Z that is not a square, nd an element of Z=m that is a nonsquare or a nonzero non-unit. Under a standard
conjecture, the rst algorithm takes time O(n(lg n) 3 lg lg n). Under a new but plausible conjecture, the sec ..."
Add to MetaCart
This paper presents two algorithms that, given an n-bit positive integer m 2 1 + 8Z that is not a square, nd an element of Z=m that is a nonsquare or a nonzero non-unit. Under a standard conjecture,
the rst algorithm takes time O(n(lg n) 3 lg lg n). Under a new but plausible conjecture, the second algorithm takes expected time O(n).
, 47
"... Let n > 2 be a positive integer and let denote Euler's totient function. De ne (n) = (n) and (n)) for all integers k 2. De ne the arithmetic function S by S(n) = (n) + (n) + 1, where (n) = 2. We
say n is a perfect totient number if S(n) = n. We give a list of known perfect ..."
Add to MetaCart
Let n > 2 be a positive integer and let denote Euler's totient function. De ne (n) = (n) and (n)) for all integers k 2. De ne the arithmetic function S by S(n) = (n) + (n) + 1, where (n) = 2. We say
n is a perfect totient number if S(n) = n. We give a list of known perfect totient numbers, and we give sucient conditions for the existence of further perfect totient numbers.
"... This Report updates the tables of factorizations of a 1 for 13 a < 100, previously published as CWI Report NM-R9212 (June 1992) and updated in CWI Report NM-R9419 (September 1994). A total of
760 new entries in the tables are given here. The factorizations are now complete for n < 67, and th ..."
Add to MetaCart
This Report updates the tables of factorizations of a 1 for 13 a < 100, previously published as CWI Report NM-R9212 (June 1992) and updated in CWI Report NM-R9419 (September 1994). A total of 760 new
entries in the tables are given here. The factorizations are now complete for n < 67, and there are no composite cofactors smaller than 10 . | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1871676","timestamp":"2014-04-16T22:00:40Z","content_type":null,"content_length":"35383","record_id":"<urn:uuid:160cf048-3662-47db-ab79-d6f4cf2260e2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
July 6th, 2012
The Problem with Inductive Arguments
by Max Andrews
All induction problems may be phrased in a way that depicts a sequence of predictions. Inductive problems will contain a previous indicator or explanans for the explanandum. For instance, Carl
Hempel’s example of Jones’ infection:
Where j is Jones, p is the probability, Sj is Jones’ infection, Pj is he being treated with penicillin, and Rj is his recovery. If the probability of observing R at any time given the past
observations of S&P[1]… S&P[2] … S&P[n] (the probability of the set meeting R is m) where R was close to 1 then a predictive explanans (the S&P[n] ) can be made for future instances of m using an
inductive-statistical explanation. For if the probability m(S&P[n ]| S&P[1]… S&P[2] …) is a computable function, the range of data is finite then a posterior predication M can be made from m. M can
be legitimately referred to as a universal predictor in cases of m. This is where Hempel rejects the requirement of maximal specificity (RMS), contra Rudolph Carnap, in which the RMS is a maxim for
inductive logic that states that this is a necessary condition for rationality of any given knowledge situation K. Let K represent the set of data known in m. According to Hempel we cannot have all
the material for K.
read more »
Posted in Logic | No Comments » | {"url":"http://sententias.org/2012/07/06/","timestamp":"2014-04-19T11:57:12Z","content_type":null,"content_length":"33297","record_id":"<urn:uuid:1c45835c-34e1-46e0-8cca-6825643b3fd9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tangent to the curve.
August 27th 2007, 10:30 AM #1
Aug 2007
Tangent to the curve.
Ok, I have a question here, however the problem i have is more simple than the actual question itself, it's a basic math lack of understanding on my part.
Consider the curve defined by:
$x^2 + 4xy + y^2 = 6$
Find an expression for $\frac{dy}{dx}$ in terms of x and y, and hence give the equation of the tangent to the curve at the point (x,y) = (1,1)
So, to begin, I make:
$x^2 + 4xy + y^2 = 0$
then differentiate with respect to X:
giving $\frac{dy}{dx} = 2x+4y$
Next, differentiate with respect to Y:
giving $\frac{dx}{dy} = 4x + 2y$
Now, I know that the next step is $\frac{dy}{dx} = -\frac{2x+4y}{4x+2y}$
But I don't understand where the - comes from to make the fraction negative. I know that $\frac{dx}{dy}$ needs to be inverted to make it into $\frac{dy}{dx}$ and so moving to the bottom ofthe
fraction, but where does the minus sign come from?
Could someone please explain that before we move onto the next step? Thanks
Last edited by Alias_NeO; August 27th 2007 at 10:31 AM. Reason: Math code error.
Why bother so much?
$<br /> x^2 + 4xy(x) + y^2(x) = 6<br />$
and differentiate for x:
$<br /> 2x+4y(x)+4xy'(x) + 2y(x)y'(x) =0<br />$
so at points where y(x) is defined,
$<br /> y'(x) =(-2x-4y(x))/(4x+2y(x))<br />$
and use $(x,y(x))=(1,1)$ to find $y'(1)=\ldots...$
I do it the other way because it is the way i learned and I can't understand the way you gave, can you help fcomplete my method? I have a lot to learn in the next two days and learning a new
method from scratch isn't the best way to get that done.
But I don't understand where the - comes from to make the fraction negative.
Call the given function $F(x,y)=0$. Assuming y=y(x), differentiate using the chain rule:
$<br /> \frac{dF}{dx}\frac{dx}{dx} +\frac{dF}{dy}\frac{dy}{dx}=0<br />$
and since $\frac{dx}{dx}=1$, solve the last one for $\frac{dy}{dx}$ to obtain:
$<br /> \frac{dy}{dx}=\frac{-\frac{dF}{dx}}{\frac{dF}{dy}}<br />$.
This is your method, isn't it?
I'm sorry, it's this general sort of maths notation that causes me problems. Bearwith my while i try write all you have just said out again using the full numerical functions.
I can't do general equiations/maths. Need to see the figures and values.
Sorry to be a nusiance, could you please tell me what each of the terms refers to, i can't quite work it out, also what does "y =y(x)" mean?
I gathered that:
$F(x,y) = x^2+4xy+y^2$
I get lost after that
Not looking good for me.
y=y(x) means that the y in the equation F(x,y)=0 is a function of x.
Anyhow, you just need the formula $<br /> \frac{dy}{dx}=\frac{-\frac{dF}{dx}}{\frac{dF}{dy}}<br />$.
Ok, so when i have a fraction of a negative number over a positive one, and i take the negatiev sign out, do i invert all signs on the top?
so $+\frac{-1}{+2}$ to $-\frac{+1}{+2}$
Would doing the following be the same and correct?
$+\frac{-x-1}{+2}$ to $-\frac{+x+1}{+2}$
Shown the signs to explain what I mean, is this the correct manipulation?
Yes it is.
Work hard to overcome those algebra shortcomings!
Next step.
Tell me about it, but right now i just need to able to do the papers, starting wednesday i have 3 maths papers, if i don't pass them all i fail my year, i resit the yearagain losing another year
of my life, i end up being a year behind all my friends and it will cost me an extra £3000 so on the whole, i really don't want to fail these.
That being said, I don't have any more maths exams after this year.
Ok, so now I got that part, next is the next step.
"Give the equation of the tangent to the curve at the point (x,y) = (1,1)
How do i sole this last step.
The solution I am given is:
$y = -(x-1) + 1 = 2-x$
How do i find this? Thanks
you already know the gradient of the curve at T(1, 1):
$\frac{dy}{dx} = -\frac{2x+4y}{4x+2y}$ . Plug in the given values:
$y'(1,1) = -\frac{2\cdot 1+4\cdot 1}{4\cdot 1+2\cdot 1} = -\frac{6}{6} = -1$
You are looking for the equation of a straight line passing through T (the tangent point) with a slope m = -1. Use the point slope-formula of a straight line:
If a line passes through the point $P_1(x_1, y_1)$ with the slope m then the equation of the line is:
$\frac{y-y_1}{x-x_1}=m$ . Plug in all values you know (coordinates of T, slope m):
$\frac{y-1}{x-1}=-1~\Longrightarrow~y-1=-1 \cdot (x-1) ~\Longrightarrow~ y=-(x-1)+1~\Longrightarrow~ y=-x+2$
I've attached a diagram of the curve and the tangent.
Thank you for the excellent explanation.
August 27th 2007, 10:44 AM #2
August 27th 2007, 10:54 AM #3
Aug 2007
August 27th 2007, 11:06 AM #4
August 27th 2007, 11:09 AM #5
Aug 2007
August 27th 2007, 11:31 AM #6
Aug 2007
August 27th 2007, 11:36 AM #7
August 27th 2007, 11:49 AM #8
Aug 2007
August 27th 2007, 12:01 PM #9
August 27th 2007, 12:06 PM #10
Aug 2007
August 27th 2007, 09:40 PM #11
August 28th 2007, 02:17 AM #12
Aug 2007 | {"url":"http://mathhelpforum.com/calculus/18107-tangent-curve.html","timestamp":"2014-04-20T17:47:25Z","content_type":null,"content_length":"68573","record_id":"<urn:uuid:9b7b0255-162a-4b15-96d3-f794f8209c7a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics Action Prealgebra Problem Solve
Similar Searches: mathematics, mathematics action: prealgebra problem solve, developmental mathematics edition 4, mathematic, apply mathematics for business, 4th edition,, mathematics for elementary
teacher, elementary lgebra 4 th ed tussy, mathematics ninth edition, finite mathematics 7th ed, seymour lipschutz, mathematics for elementary teacher bennett, elementary mathematics, precalculus:
problem orient approach, 6th edition, precalculus: problem orient approach, 6th edition ebook, mathematics teacher edition, problem solve approach mathematics, autocad 2011: problem solve approach,
mathematical reason for elementary teachers, 5th edition long & de temple, contemporary mathematics context teacher edition, and rick billstein
We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals.
As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you’ll use. And if you ever run into trouble, our top-notch U.S. based
Customer Service team is ready to help by email, chat or phone.
For all your procrastinators, the Semester Guarantee program lasts through January 11, 2012, so get going!
*It can take up to 24 hours for the extension to appear in your account. **BookRenter reserves the right to terminate this promotion at any time.
With Standard Shipping for the continental U.S., you'll receive your order in 3-7 business days.
Need it faster? Our shipping page details our Express & Express Plus options.
Shipping for rental returns is free. Simply print your prepaid shipping label available from the returns page under My Account. For more information see the How to Return page.
Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students
the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same
people, only our corporate name has changed. | {"url":"http://www.bookrenter.com/mathematics-action-prealgebra-problem-solve/search--p3","timestamp":"2014-04-18T13:29:43Z","content_type":null,"content_length":"45772","record_id":"<urn:uuid:a5ee6bae-89ff-4959-b7b7-57486af8f8f4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
If you were given a large data set such as the sales over the last... - (165296) | Transtutors
If you were given a large data set such as the sales over the last year of our top 1,000 customers, what might you be able to do with this data? What might be the benefits of describing the data?
Posted On: May 03 2012 12:01 AM
Tags: Statistics, Operational Research, Decision Making, University
Solution to be delivered in 24 hours
after verification
Solution to "statistic"
Related Questions in Decision Making
“2” Operational Research experts Online
Ask Your Question Now
Copy and paste your question here...
Have Files to Attach?
Questions Asked
Questions Answered
Topics covered in Statistics | {"url":"http://www.transtutors.com/questions/statistic-165296.htm","timestamp":"2014-04-18T18:11:30Z","content_type":null,"content_length":"58266","record_id":"<urn:uuid:1a71e456-3a0b-4a25-aba8-037e69956302>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factoring PolynomialsAlgebraLAB: Lessons
Example Group #1
Factoring out the
Greatest Common Factor
(GCF) is perhaps the most used type of factoring because it occurs as part of the process of factoring other types of products. Before you can
trinomials, for example, you should check for any GCF.
#1: Factor the following problem completely
□ Look for the greatest factor common to every term
□ Factor out the GCF by dividing it into each term
What is your answer?
#2: Factor the following problem completely ^
□ In this problem, the greatest common factor includes both numbers and variables. First we need to factor out the greatest number that will divide into both 15 and 9. In this case, it will be a
3. Next we need to factor out the smallest power of the variable x that can be seen in the problem. In this case, it will be ^. Our GCF is
□ Factor out the GCF by dividing it into each term.
What is your answer?
#3: Factor the following problem completely
□ Find the greatest factor common to every term. Since the last term, 24, does not contain any variables, no variables are “common” in this problem and only the -6 may be factored out.
□ Factor out the GCF by dividing it into each term
What is your answer?
Example Group #2
Oftentimes when there is no
common to all terms of a
there will be factors common to some of the terms. A second technique of factoring called
is illustrated in the following examples.
#4: Factor the following problem completely
□ Factor out 3a from the first 2 terms and 4 from the last 2 terms.
□ Notice that the terms inside each set of parentheses are the same. Those terms have now become the GCF. The answer may be checked by multiplying the factored form back out to see if you get the
original polynomial.
What is your answer?
#5: Factor the following problem completely
□ Factor out 2a from the first 2 terms and -5 from the last 2 terms. Be careful about signs!
□ The terms inside each set of parentheses are the same. Those terms have now become the GCF.
What is your answer?
Example Group #3
difference in two perfect squares
by definition states that there must be two terms, the sign between the two terms is a minus sign, and each of the two terms contain perfect squares. The answer after factoring the difference in two
squares includes two binomials. One of the binomials contains the sum of two terms and the other contains the difference of two terms. In general, we say
#6: Factor the following problem completely
What is your answer?
#7: Factor the following problem completely
□ Factor out the GCF of
□ Now factor the difference in two squares. The square root of is []and the square root of ^is ^. Make certain that you check to be sure that neither factor will factor again. What is the final
What is your answer?
#8: Factor the following problem completely
□ Check to be sure that neither factor will factor again. The term
What is your answer?
Example Group #4
Factoring the
sum or difference in two perfect cubes
is our next technique. As with squares, the difference in two cubes means that there will be two terms and each will contain perfect cubes and the sign between the two terms will be negative. The
sum of two cubes would, of course, contain a plus sign between the two perfect
terms. The follow formulas are helpful for factoring cubes:
Notice that the sum and the difference are exactly the same except for the signs in the factors. Many students have found the acronym
extremely helpful for remembering the arrangement of the signs.
S represents the fact that the sign between the two terms in the binomial portion of the answer will always be the same as the sign in the given problem.
O implies that the sign between the first two terms of the trinomial portion of the answer will be the opposite of the sign in the problem.
AP states that the sign between the final two terms in the trinomial will be always positive.
#9: Factor the following problem completely ^
□ This is a difference in two cubes, so begin with two sets of parentheses.
□ In the first set, there will be a binomial containing the cube root of each term. In this problem, x and 3.
□ In the second set there will be a trinomial. The first term of the trinomial is the square of the first term in the binomial.
□ The last term is the square of the last term in the binomial.
□ The middle term is the product of the two terms in the binomial.
□ You will be finished when you insert the appropriate sign between each of the terms.
What is your answer?
#10: Factor the following problem completely
□ This is a sum of two cubes, so begin with two sets of parentheses.
□ In the first set, there will be a binomial containing the cube root of each term. In this problem,
□ In the second set there will be a trinomial. The first term of the trinomial is the square of the first term in the binomial.
□ The last term is the square of the last term in the binomial.
□ The middle term is the product of the two terms in the binomial.
□ You will be finished when you insert the appropriate sign between each of the terms.
What is your answer?
#11: Factor the following problem completely ^
□ Now finish the problem by factoring the difference of the two perfect cubes.
What is your answer?
#12: Factor the following problem completely
□ Now finish by factoring the sum of the two perfect cubes.
What is your answer?
Example Group #5
factoring a trinomial
, examine the
to be sure that terms are arranged in descending order. Most of the time trinomials
to two binomials in
#13: Factor the following problem completely. ^
□ The three terms are arranged in descending order. There is not a GCF. Therefore the factoring process is begun by opening two sets of parentheses.
□ Place the factors for the first term of the trinomial in the front of each set of parentheses.
□ Then, because the sign of the last term is positive, factor the last term of the trinomial to factors that multiply to give 12 and add to give 7.
□ Finally, because the sign of the last term is positive, the sign of the 4 and the sign of the 3 will each have the same sign. Because the sign of the 7 is positive, the sign of the 4 and the
sign of the 3 will each be a positive sign. Check the answer using multiplication.
What is your answer?
#14: Factor the following problem completely. ^
□ This example is very similar to Example #1 above. So we begin by opening two sets of parentheses and placing the factors for the first term in the front of each set of parentheses.
□ The difference lies in the signs. The sign of the 12 is still positive, so the sign of the 4 and the 3 will again be the same. However, since the sign of the 7 is negative, the sign of the 4
and the 3 will each be negative.
What is your answer?
#15: Factor the following problem completely. ^
□ The terms are arranged in descending order, and there is no GCF. So again begin by opening two sets of parentheses and placing factors for the first term in the front of each set of
□ Choose numbers that multiply to give 12. Since the 12 is negative, find two numbers that subtract to give 4.
□ Since the sign of the 12 is negative, one factor of the answer will be positive and the other will be negative. Since the sign of the middle term of the trinomial is negative, the larger of the
two factors used in the answer will have that sign. Therefore, the 6 will be negative and the 2 will be positive.
What is your answer?
Example Group #6
general trinomial
is one whose first term has a
that can not be factored out as a GCF. The method of trial and error will be used to mentally determine the factors that satisfy the trinomial. We will show you the steps to
each of the following general trinomials completely.
#16: Factor the following problem completely. ^
□ Factor out the GCF.
□ In factoring the general trinomial, begin with the factors of 12. These include the following: 1, 12, 2, 6, 3, 4. As a general rule, the set of factors closest together on a number line should
be tried first as possible factors for the trinomial.
□ The only factors of the last term of the trinomial are 1 and 3, so there are not other choices to try. Because the last term is negative the signs of the factors 1 and 3 must be opposite.
□ This is the first trial. The answer must be checked by multiplication, as follows:
□ The factorization of the trinomial is almost correct. However, the sign of the middle term is incorrect. That means that the signs of the two factors should be switched.
What is your answer?
#17: Factor the following problem completely. ^
□ Factor out the negative sign first. Doing so will change all the signs of the trinomial.
□ Now factor the trinomial. Factors of the first term include 1, 4, 2. Factors of the last term include 1, 6, 2, 3. The sign of the 6 is negative, so the signs in the two factors must be
□ Consider 2 and 2 as factors of 4, and 3 and 2 as factors of 6.
□ Such choices are not good, because it causes the second factor to contain a GCF and that should be avoided. A second attempt must be made, since checking the factors will fail as follows:
□ Try another combination.
□ Check the factorization.
□ Trials would continue by perhaps trying to switch the 3 and 2; however, that would cause a GCF in the first set of parentheses. That should be avoided, so the next idea would be to use 6 and 1
instead of 3 and 2.
What is your answer? | {"url":"http://algebralab.org/lessons/lesson.aspx?file=Algebra_factoring.xml","timestamp":"2014-04-16T04:11:21Z","content_type":null,"content_length":"69950","record_id":"<urn:uuid:d9c837dd-0a7a-404b-9fd7-dfe961008fe1>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Additive and Multiplicative Rules for Probability ( Read ) | Probability
Do you remember Alicia from the Use Tree Diagrams to List All Possible Outcomes Concept? Take a look at the tree diagram of outfits.
Alicia is going to sing for the Talent Show. She is very excited and has selected a wonderful song to sing. She has been practicing with her singing teacher for weeks and is feeling very confident
about her ability to do a wonderful job.
Her performance outfit is another matter. Alicia has selected a few different skirts and a few different shirts and shoes to wear. Here are her options for shirts
Striped shirt
Solid shirt
Here are her options for skirts.
Blue skirt
Red skirt
Brown skirt
Here are her options for shoes
Dance shoes
Black dress shoes
Here is a tree diagram of Alicia's outfits.
What is the probability of Alicia wearing her striped shirt, blue skirt and dance shoes?
To figure this out you will need to know how to use a tree diagram to calculate a specific probability. Pay attention and you will learn what you need to know in this Concept.
Previously we worked on tree diagrams. We have seen how tree diagrams can be very helpful when looking for a sample space. Tree diagrams can also be helpful when finding probability.
Finding the probability of an event is a matter of finding the ratio of favorable outcomes to total outcomes. For example, the sample space for a single coin flip has two outcomes: heads and tails.
So the probability of getting heads on any single coin flip is:
$P (\text{heads}) = \frac{favorable \ outcomes}{total \ outcomes} =\frac{1}{2}$
You can see that the sample space is represented by a number in the total outcomes. For example, if you had a spinner with four colors, the colors by name would be the sample space and the number
four would be the total possible outcomes.
What about if we flipped a coin more than one time?
To find the probability of a single outcome for more than one coin flip, use a tree diagram to find all possible outcomes in the sample space.
Then count the number of favorable outcomes within that sample space to find the probability.
For example, to find the probability of tossing a single coin twice and getting heads both times, make a tree diagram to find all possible outcomes.
The diagram shows there are 8 total outcomes and they are paired with first toss option and second toss option.
Then pick out the favorable outcome–in this case, the outcome “heads-heads” is shown in red. You could have selected any of the favorable outcomes for the probability to be accurate.
Now write the ratio of favorable outcomes to total outcomes in the sample space.
$P (\text{heads-heads}) = \frac{favorable \ outcomes}{total \ outcomes}=\frac{1}{4}$
You can see that since 1 of 4 outcomes is a favorable outcome, the probability of the coin landing on heads 2 times in a row is $\frac{1}{4}$
Let’s look at another scenario.
What is the probability of flipping a coin two times and getting two matching results–that is, either two heads or two tails?
First, let’s create a tree diagram to see our options.
Once again, just pick out the favorable outcomes on the same tree diagram. They are shown in red.
You can see that 2 of 4 total outcomes match.
$P (2 \ \text{heads or 2 tails}) = \frac{favorable \ outcomes}{total \ outcomes}=\frac{2}{4}=\frac{1}{2}$
You can see that the probability of flipping two heads or two tails is 1:2.
Try a few on your own.
Example A
Look at the tree diagram above. What is the probability of it being heads and then tails?
Solution: $\frac{1}{4}$
Example B
What is the probability of tails then heads?
Solution: $\frac{1}{4}$
Example C
What is Example A and B as a percent?
Solution: $25%$
Here is the original problem once again.
Alicia is going to sing for the Talent Show. She is very excited and has selected a wonderful song to sing. She has been practicing with her singing teacher for weeks and is feeling very confident
about her ability to do a wonderful job.
Her performance outfit is another matter. Alicia has selected a few different skirts and a few different shirts and shoes to wear. Here are her options for shirts
Striped shirt
Solid shirt
Here are her options for skirts.
Blue skirt
Red skirt
Brown skirt
Here are her options for shoes
Dance shoes
Black dress shoes
Here is a tree diagram of Alicia's outfits.
What is the probability of Alicia wearing her striped shirt, blue skirt and dance shoes?
There are twelve possible outcomes, but only one is the striped shirt, blue skirt and dance shoes.
Our answer is $\frac{1}{12}$
Tree Diagram
a visual way of showing all of the possible outcomes of an experiment. Called a tree diagram because each option is drawn as a branch of a tree.
Sample Space
the possible outcomes in an experiment.
Favorable Outcome
the outcome that you are looking for in an experiment.
Total Outcome
the number of options in the sample space.
Guided Practice
Here is one for you to try on your own.
What is the probability of a win-win-win?
There are eight possible outcomes for the teams.
There is one option for a win-win-win in all three games.
The probability is $\frac{1}{8}$
Video Review
- This is a James Sousa video on probability.
Directions: Answer each question. Use tree diagrams when necessary.
1. What is the probability that the arrow of the spinner will land on red on a single spin?
2. If the spinner is spun two times in a row, what is the probability that the arrow will land on red both times?
3. If the spinner is spun two times in a row, what is the probability that the spinner will land on the same color twice?
4. If the spinner is spun two times in a row, what is the probability that the arrow will land on red at least one time?
5. If the spinner is spun two times in a row, what is the probability that the spinner will land on a different color both times?
6. If the spinner is spun two times in a row, what is the probability that the arrow will land on blue or green at least one time?
7. Two cards, the Ace and King of hearts, are taken from a deck, shuffled, and placed face down. What is the probability that a single card chosen at random will be an Ace?
8. If one card is chosen from the 2-card stack above, then returned to the stack and a second card is chosen, what is the probability that both cards will be Kings?
9. If one card is chosen from the 2-card stack above, then returned to the stack and a second card is chosen, what is the probability that both cards will match?
10. If one card is chosen from the 2-card stack above, then returned to the stack and a second card is chosen, what is the probability that both cards NOT match?
Directions: Look at the tree diagram and figure out each probability as a ratio.
11. What is the probability of the option tile, steel and granite?
12. What is the probability of either tile, steel, granite or tile, granite, white?
13. What is the probability of Formica being in the option?
14. What is the probability of tile and Formica being in the option?
15. What is the probability of white and Formica being in the option? | {"url":"http://www.ck12.org/concept/Additive-and-Multiplicative-Rules-for-Probability-Grade-7/?eid=None&ref=None","timestamp":"2014-04-18T01:50:51Z","content_type":null,"content_length":"114278","record_id":"<urn:uuid:e87e7805-ea2d-4bc1-99b6-d1b9ae305d17>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Programming Praxis - Amazon Interview Question
Programming Praxis – Amazon Interview Question
In today’s Programming Praxis exercise, our goal is to find the 100 points closest to (0, 0) out of a list of 1000000 random points in O(n). Let’s get started, shall we?
Some imports:
import Control.Applicative
import qualified Data.IntMap as I
import System.Random
The obvious solution is to simply sort the list and take the first 100 elements. Sorting, however, usually takes O(n log n), which is not allowed. Fortunately, by using the square of the distance
rather than the distance we not only save a million square root operations, but it also makes the value we’re sorting by an integer, which allows us to use an IntMap that has O(1) insertion and thus
O(n) sorting.
closest :: Int -> [(Int, Int)] -> [(Int, Int)]
closest n = take n . concat . I.elems . I.fromListWith (flip (++)) .
map (\(x,y) -> (x*x+y*y, [(x,y)]))
To test, we need a million random points.
points :: IO [(Int, Int)]
points = take 1000000 <$> liftA2 zip (randomRs (-1000, 1000) <$> newStdGen)
(randomRs (-1000, 1000) <$> newStdGen)
Finally we run the algorithm.
main :: IO ()
main = print . closest 100 =<< points
To see whether our algorithm is truly linear, let’s look at some timings:
1 million: 2.9 s
2 million: 5.7 s
4 million: 11.6 s
8 million: 23.8 s
Looks fairly linear to me.
Tags: amazon, bonsai, closest, code, Haskell, interview, kata, points, praxis, programming, question | {"url":"http://bonsaicode.wordpress.com/2012/11/27/programming-praxis-amazon-interview-question/","timestamp":"2014-04-21T10:35:37Z","content_type":null,"content_length":"50822","record_id":"<urn:uuid:801ce3f9-03dc-43db-90f6-9be7b9886593>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
Driving a car
September 5th 2013, 08:34 AM
Driving a car
Hey people! I got a problem here I'm stuck on:
A car is driving at night along a level, curved road. It starts in the origin, the equation of the road is y = x^2, and the car's x-coordinate is an increasing function of time. There is a
signpost located at (2,3.75).
a) What is the position of the car when its headlight illuminates the signpost? Do you have any implicit physical assumptions in your solution?
b) What is the shortest distance between the signpost and the car?
c) Let dx/dt=v[x] and dy/dt=v[y]. The car's velocity is then[v[x],v[y]]. How are v[x] and v[y] related?
I got some ideas for how I am to solve it, but I do not know where to begin. Please help me :)
September 5th 2013, 09:21 AM
Re: Driving a car
I'll help you begin. In order for the headligts to shine on the sign the slope of the road dy/dx must equal the slope of a line conecting the car to the sign. So given that the sign is at
coordinates (A,B), the car is at (x,y), and y = x^2, you have:
$\frac {dy}{dx} = 2x = \frac {B-y}{A-x} = \frac {B-x^2}{A-x}$
Now you can solve for x, then y.
As for part (b) the car is at its closest approach when the slope of the line connecting the car to the sign is the negative inverse of the slope of the road:
$\frac {B-y}{A-x} = \frac {-1}{2x}$
For (c) consider the chain rule.
September 8th 2013, 09:21 AM
Re: Driving a car
Thank you so much for help! I managed to do a) with no problems (I didn't actually need help on that :P ), but I still can't seem to get b) right. And c) I don't have a clue for how I am to
Can you help me a bit more in the right direction? That would be great! :D
September 9th 2013, 05:11 AM
Re: Driving a car
For (b), do you understand how I got: $\frac {B-y}{A-x} = \frac {-1}{2x}$? Now replace y with x^2, solve for x, then y=x^2 gives you the position of the car on the road.
For (c) - are you familiar with the chain rule for derivatives? If you have dx/dt and dy/dt, and from the curve of the road you know the value for dy/dx, the chain rule lets you do this:
$\frac {dx}{dt} = \frac {dx}{dy}\ \times \ \frac{dy}{dt} = \frac {dy/dt}{dy/dx}$
Hope this helps.
September 9th 2013, 10:43 AM
Re: Driving a car
No, I'm not quite sure how you come up with the "= -1/2x"-part :P
But when solving for x, I end up with a third order polynominal, and get three values for x. How can I explain which one to choose?
So the chain rule for derivatives gives the relation between the velocity for x and y?
Sorry for asking so much. Hehe, I just want to understand everything I do! And thanks for your help :D
September 9th 2013, 11:32 AM
Re: Driving a car
The slope of the parabola is dy/dx=2x, so the negative inverse of that is -(1/2x), and that's the slope of the line connecting the car to the sign post at the closest point of approach. And yes,
you do get a cubic, because there are three points on the car's path when the sign is at right angle's to the car's trajectory. Of course only one of these points is the point of closest
approach. In the attached figure note that two of the solutions are with negative values of x, and the third is for positive x and hence the correct answer, though it's a little hard to see
because it almost overlaps the car's path.
The chain rule allows you to relate Vx to Vy. Since Vx = dx/dt and Vy = dy/dt, and slope is dy/dx, then Vy divided by the slope of the car's path equals Vx.
Attachment 29144
September 9th 2013, 11:50 AM
Re: Driving a car
Thank you so much! :D | {"url":"http://mathhelpforum.com/calculus/221685-driving-car-print.html","timestamp":"2014-04-19T22:07:31Z","content_type":null,"content_length":"11900","record_id":"<urn:uuid:bd096b03-80af-4cb4-84b8-6f7c7d0e8925>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reoccurring Error in RLC Series Circuit Problems.
Hi All,
In doing some practice RLC series circuit question, I have been obtaining incorrect values for active and reactive power. The magnitudes I obtain are correct however the sign of the value is not. I
have attributed this to obtaining an incorrect argument for the absolute power, or computing phasors incorrectly. I have attached an image of the reasoning behind my problem solving however I must be
fundamentally wrong somewhere. | {"url":"http://www.physicsforums.com/showthread.php?t=721115","timestamp":"2014-04-18T21:21:38Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:512c18a7-0671-445c-b580-c24b8d301982>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
can any1 help me in physics please
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e917b9f0b8bfb9f23e5ccc6","timestamp":"2014-04-19T15:18:55Z","content_type":null,"content_length":"39586","record_id":"<urn:uuid:125ef93f-b09e-4712-b708-17d832ea756b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frank Morgan's Math Chat - WHY IS THE MATHEMATICIAN SO MESSY?
May 6, 1999
VIENNESE COMPANY TO REUSE 1972 CALENDAR IN 2000. After our last column about 2000 having the same calendar as 1972, 1944, and 1916, J. Ernst Oberklammer wrote that "a Viennese electricity-supply
company will be using the calendar of 1972 in the year 2000 in its computer system because many of their hardware components have to be replaced within the next ten years anyway (and it would be too
costly to replace them now, and again in a few years, or so)."
OLD CHALLENGE. A physicist and a mathematician can clean a house in 6 hours; an engineer and the mathematician in 3 hours; and the physicist and the engineer in 1 hour and 12 minutes. How long would
it take the physicist alone?
ANSWER (Derek Smith). Three hours. In six hours, P and M can clean 1 house, E and M can clean 2 houses, P and E can clean 5 houses, so 2P = (P + M) - (E + M) + (P + E) can clean 1-2+5 = 4 houses.
Hence one physicist can clean 2 houses in six hours, or 1 house in three hours.
This is faster than the physicist and mathematician working together! The mathematician must be slowing things down. Readers supply a number of different interpretations:
"With horror, we note that, as given, the mathematician's rate is actually negative; he/she must be making a mess rather than cleaning. A calumny!" -Elliot Kearsley
"El matemático es una carga." Juan José López Ordóñez, University of Granada, Spain ["Carga" is a wonderful Spanish word meaning "burden" or "millstone around their necks."]
"I have known some mathematicians like that." -Joe Shipman
"Since my wife has been telling me for many years that I actually hinder rather than help cleaning the house, I believe that my solution is correct." -Michael Marcotty
"So, why is the room most in need of cleaning in our school the Physics lab?" -Richard Ritter
I think that the best explanation comes from Al Zimmermann:
"Clearly the mathematician's conversation is so incredibly engaging as to distract any coworkers from the task at hand."
NEW CHALLENGE (John Snygg). A mathematician, lost in the desert, hears a single toot from a train due west. He knows that the train goes at constant speed in a straight line, but he forgets in which
direction. What direction should he walk?
Send answers, comments, and new questions by email to Frank.Morgan@williams.edu, to be eligible for Flatland and other book awards. Winning answers will appear in the next Math Chat. Math Chat
appears on the first and third Thursdays of each month. Prof. Morgan's homepage is at www.williams.edu/Mathematics/fmorgan.
Copyright 1999, Frank Morgan. | {"url":"http://www.maa.org/frank-morgans-math-chat-why-is-the-mathematician-so-messy","timestamp":"2014-04-18T05:52:14Z","content_type":null,"content_length":"90957","record_id":"<urn:uuid:d2564127-28ce-442d-8f48-031b0c5f3a68>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
I received the following email about Introducing Monte Carlo Methods with R a few days ago: Hallo Dr. Robert, I am studying your fine book for myself. There´s a little problem in examples 7.17 and
8.1: in the R code a function “gu” is used and a reference given to ex. 5.17, but I cann´t
The function pimax from our package mcsm is used in to reproduce Figure 5.11 of our book Introducing Monte Carlo Methods with R. (The name comes from using the Pima Indian R benchmark as the
reference dataset.) I got this email from Josué I ran the ‘pimax’ example from the mcsm manual, and it gave
Difficulty with mcsm?
An email from Keith I got this morning: Professor Robert, I have loaded the mcsm package to windows. The following messages appear in the R console: trying URL 'http://cran.stat.ucla.edu/bin/windows/
contrib/2.9/mcsm_1.0.zip' Content type 'application/zip' length 193590 bytes (189 Kb) opened URL downloaded 189 Kb package 'mcsm' successfully unpacked and MD5 sums checked But when I use the
How to use mcsm
Within the past two days, I received this email Dear Prof.Robert I have just bought your recent book on Introducing Monte Carlo Methods with R. Although I have checked your web page for the R
programs (bits of the code in the book, codes for generating the figures and tec – not the package available | {"url":"http://www.r-bloggers.com/tag/mcsm/","timestamp":"2014-04-19T19:40:39Z","content_type":null,"content_length":"30402","record_id":"<urn:uuid:fe1217ce-b9fd-4c0d-9856-33e66b55bbe5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
can someone explain to me..
March 3rd 2010, 06:48 PM #1
Feb 2010
can someone explain to me..
why in this derivative the tan(x) doesn't distribute to the sec(x)
f(x)= sec(x)tan(x)
= sec^2(x) (sec(x)) + (sec(x)tan(x))(tan(x))
= sec^3(x) + tan^2(x) sec(x)
maybe I'm just missing an easy rule, I'm not sure though
Do you mean, why is it not
Yes, that's an equivalent expression
You're thinking too hard. It's simply the distributive property (not) at play.
(a * b) * c = a * (b * c) = a * b * c
In this case sec x * tan x * tan x just ends up being tan^2(x) * sec(x).
Hope that helped!
Ok but why is the sec(x) not multiplied by the tan(x).. It looks as if the tan(x) is only distributing to the tan(x) within the parenthesis instead of both sec(x) and tan(x)
You could plug in some values to prove that it doesn't work. Try pi/6.
( sec(pi/6) * tan(pi/6) ) * tan (pi/6) = (2/3) *(sqrt(3)/3) = 2 sqrt(3) / 9
which does not equal (sec(pi/6) *(tan(pi/6) ) * (tan^2(pi/6)) = (2/3)*(1/3)
= 2/9
It looks like you have
$f(x) = sec(x)tan(x)$
$f'(x) = sec^2(x) \cdot [sec(x) + sec(x)tan(x)] \cdot tan(x)$
This is not the derivative! Gotta watch the distributions, Product Rule : $\frac{d}{dx}(f(x) \cdot g(x)) = f'(x)g(x) + g'(x)f(x)$
$f(x) = sec(x)tan(x)$
$f'(x) = [sec(x)]' \cdot tan(x) + [tan(x)]' \cdot sec(x)$
$f'(x) = [sec(x)tan(x)] \cdot tan(x) + [sec^2(x)] \cdot sec(x)$
$f'(x) = sec(x)tan^2(x) + sec^3(x)$
Does this help?
It looks like you have
$f(x) = sec(x)tan(x)$
$f'(x) = sec^2(x) \cdot [sec(x) + sec(x)tan(x)] \cdot tan(x)$
This is not the derivative! Gotta watch the distributions, Product Rule : $\frac{d}{dx}(f(x) \cdot g(x)) = f'(x)g(x) + g'(x)f(x)$
$f(x) = sec(x)tan(x)$
$f'(x) = [sec(x)]' \cdot tan(x) + [tan(x)]' \cdot sec(x)$
$f'(x) = [sec(x)tan(x)] \cdot tan(x) + [sec^2(x)] \cdot sec(x)$
$f'(x) = sec(x)tan^2(x) + sec^3(x)$
Does this help?
Yes but why in the third step is the tan(x) only distributed to the tan(x) giving you tan^2(x). Why is it not distributed to the sec(x) also?
But it is!
You can also think of it like this:
Say we have $A B B$, would you agree that
$A B B = A B^2$
This is the same for $tan(x)$, you can just think of the product as being of 3 parts, where two pieces are the same.
We can write it like this too if it helps;
$sec(x) tan(x) tan(x) = sec(x) [tan(x)]^2 = sec(x) tan^2(x)$
EDIT : Just hit me you might be talking about the tan not distributing to the $sec^3x$ term on the right. And that is just due to the nature of the product rule.
Just focus on $\frac{d}{dx}(f(x) \cdot g(x)) = f'(x)g(x) + g'(x)f(x)$, where for this problem; $f(x) = sec(x)$ and $g(x) = tan(x)$.
Just plug away and take derivatives where appropriate.
Using the product rule of differentiation,
$f'(x)=sec(x)\frac{d}{dx}tan(x)+tan(x)\frac{d}{dx}s ec(x)$
This is
which has sec(x) as a factor, so it may also be written
When you asked, why is
not equal to
it's because you've brought in an extra tan(x) that does not belong
March 3rd 2010, 07:02 PM #2
MHF Contributor
Dec 2009
March 3rd 2010, 07:33 PM #3
Feb 2010
March 3rd 2010, 07:55 PM #4
Feb 2010
March 3rd 2010, 08:00 PM #5
Feb 2010
March 3rd 2010, 08:09 PM #6
Feb 2010
March 3rd 2010, 08:23 PM #7
Mar 2009
March 3rd 2010, 09:03 PM #8
Feb 2010
March 3rd 2010, 09:09 PM #9
Mar 2009
March 4th 2010, 01:22 AM #10
MHF Contributor
Dec 2009 | {"url":"http://mathhelpforum.com/calculus/131944-can-someone-explain-me.html","timestamp":"2014-04-18T06:55:12Z","content_type":null,"content_length":"64610","record_id":"<urn:uuid:c5798764-52ed-4dcb-882b-bdc5c2813b3e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coalition structure generation over graphs
Voice, Thomas, Polukarov, Maria and Jennings, Nicholas R. (2012) Coalition structure generation over graphs. Journal of Artificial Intelligence Research, 45, 165-196. (doi:10.1613/jair.3715).
PDF - Pre print
Download (463Kb)
PDF - Publishers print
Restricted to internal admin
Download (463Kb) | Request a copy
We give the analysis of the computational complexity of coalition structure generation over graphs. Given an undirected graph G = (N,E) and a valuation function v : P(N) → R over the subsets of
nodes, the problem is to find a partition of N into connected subsets, that maximises the sum of the components values. This problem is generally NP complete; in particular, it is hard for a defined
class of valuation functions which are independent of disconnected members that is, two nodes have no effect on each others marginal con- tribution to their vertex separator. Nonetheless, for all
such functions we provide bounds on the complexity of coalition structure generation over general and minor free graphs. Our proof is constructive and yields algorithms for solving corresponding
instances of the problem. Furthermore, we derive linear time bounds for graphs of bounded treewidth. However, as we show, the problem remains NP complete for planar graphs, and hence, for any K_k
minor free graphs where k ≥ 5. Moreover, a 3-SAT problem with m clauses can be represented by a coalition structure generation problem over a planar graph with O(m^2) nodes. Importantly, our hardness
result holds for a particular subclass of valuation functions, termed edge sum, where the value of each subset of nodes is simply determined by the sum of given weights of the edges in the induced
Actions (login required) | {"url":"http://eprints.soton.ac.uk/342780/","timestamp":"2014-04-16T04:54:19Z","content_type":null,"content_length":"26357","record_id":"<urn:uuid:63cf603f-ba20-4e84-87a0-569eb7f0206f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of range error
In the
military science
circular error probable (CEP)
circular error probability
is an intuitive measure of a weapon system's
. It is defined as a circle, centered about the mean, whose boundary is expected to include 50% of the population within it.
The original concept of CEP was based on a Circular Bivariate Normal distribution (CBN) with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with
this distribution behavior tend to cluster around the aim point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. That is, if CEP is n meters,
50% of rounds land within n meters of the target, 43% between n and 2n, and 7 % between 2n and 3n meters, and the proportion of rounds that land farther than three times the CEP from the target is
less than 0.2%.
This distribution behavior is often not met. Precision-guided munitions generally have more 'close misses' and so are not normally distributed. Munitions may also have larger standard deviation of
range errors than the standard deviation of azimuth (defelection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target, that is, the mean vector will
not be (0,0). This is referred to as bias.
In order to apply the CEP concept in these conditions, we can define CEP as the square root of the mean error squared (MSE). The MSE will be the sum of the variance of the range error plus the
variance of the azimuth error plus the covariance of the range error with the azimuth error plus the square of the bias. Thus the MSE results from pooling all these sources of error, geometrically
corresponding to radius of a circle within which 50 % of rounds will land.
Conversion between CEP, RMS, 2DRMS, and R95
While 50% is a very common definition for CEP, the circle dimension can be defined for percentages. Approximate formulas are available to convert the distributions along the two axes into the
equivalent circle radius for the specified percentage.
Accuracy Measure Probability (%)
RMS (Root Mean Square) 63 to 68
CEP (Circular Error Probability) 50
2DRMS (Twice the Distance Root Mean Square) 95 to 98
R95 (95% Radius) 95
From/To CEP RMS R95 2DRMS
CEP - 1.2 2.1 2.4
RMS 0.83 - 1.7 2.0
R95 0.48 0.59 - 1.2
2DRMS 0.42 0.5 0.83 - | {"url":"http://www.reference.com/browse/range%20error","timestamp":"2014-04-19T02:07:11Z","content_type":null,"content_length":"75910","record_id":"<urn:uuid:e2878b23-4b9e-42db-9841-bc76c5cb690c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
EC Summer School 1 - 5 September 1997
Organisers: Christopher Bishop (Aston) and Joe Whittaker (Lancaster)
PROBABILISTIC GRAPHICAL MODELS
PROVISIONAL PROGRAMME
Sunday, Aug 31
18:00 - 19:00 Wine reception and registration
Monday, Sep 1
08:30 - 09:00 Registration
09:00 - 10:30 Tutorial
Joe Whittaker (Lancaster) Graphical models
10:30 - 11:00 Coffee-Break
11:00 - 12:30 Tutorial
David Heckerman (Microsoft) Directed Acyclic Graphs: Representation and Learning
14:30 - 16:00 Presentations. Chair: Geoffrey Hinton
David Cox (Oxford) tba
Robert Cowell (City) Learning with Dirichlet Mixtures in Bayesian Networks
Wally Gilks (Cambridge) Simulation Methodology for Dynamic Models
16:00 - 16:30 Tea-Break
16:30 - 18:00 Presentations. Chair: Michael Titterington
Ed George (Texas) Empirical Bayes Covariance Selection
Stefano Monti and Greg Cooper (Pittsburg) Learning Bayesian Networks from Data Containing both Continuous and Discrete Variables
Bert Kappen (Nijmegen) A Polynomial Time Algorithm for Boltzmann Machine Learning
18:00 - 19:00 Wine reception
Tuesday, Sep 2
09:00 - 10:30 Tutorial
Christopher Bishop (Aston) Latent variables
10:30 - 11:00 Coffee-Break
11:00 - 12:30 Tutorial
Steffen Lauritzen (Aalborg) Message passing on graphs
14:30 - 16:00 Presentations. Chair: Phil Dawid
Mauro Piccioni (Aquila) The stochastic IPF algorithm and some of its properties.
Prakash Shenoy (Kansas)
Some Improvements to the Shenoy-Shafer Architecture for Computing Marginals
Larry Saul (ATT) tba
16:00 - 16:30 Tea-Break
16:30 - 18:00 Presentations. Chair: Steffen Lauritzen
Mike Titterington (Glasgow) Hyperparameter Estimation And Related Topics
Michael Kearns (AT&T) Polynomial-time Algorithms for Learning Hidden Structure in Two-layer Noisy-OR Networks
Jim Smith (Warwick) Geometrical aspects of missing data in graphical models
18:00 - 19:00 Wine reception
Wednesday, Sep 3
09:00 - 10:30 Tutorial
Mike Jordan (MIT) Approximate inference via variational techniques
10:30 - 11:00 Coffee-Break
11:00 - 12:30 Tutorial
Geoffrey Hinton (Toronto) Learning Intractable Graphical Models
Afternoon free for sightseeing
Thursday, Sep 4
09:00 - 10:30 Tutorial
Phil Dawid (London) Conditional Independence for Statistics and AI
10:30 - 11:00 Coffee-Break
11:00 - 12:30 Tutorial
Stuart Russell (Berkeley) Learning: temporal processes and structure
14:30 - 16:00 Presentations. Chair: Michael Jordan
Thomas Richardson (Washington) Ancestral Graphs: Representing Markov Properties of Acyclic Directed Graphs under Marginalization and Conditionalization
Milan Studeny (Praha) On separation criterion for chain graphs
Michael Perlman (Washington) Alternative Markov Properties for Chain Graphs
16:00 - 16:30 Tea-Break
16:30 - 18:00 Presentations. Chair: Michael Perlman
Paolo Giudici (Pavia) Markov chain Monte Carlo decomposable graphical gaussian model determination
Alberto Roverato (Modena) Asymptotic Analysis of Graphical Gaussian Models: an Isserlis Matrix Based Approach
Tommi Jaakkola (MIT) Bayesian Estimation in the Presence of Missing Values
18:00 - 19:30 Reception at the Cambridge University Press bookshop
Friday, Sep 5
09:00 - 10:30 Tutorial
David MacKay (Cambridge) Introduction to Monte Carlo Methods
10:30 - 11:00 Coffee-Break
11:00 - 12:30 Tutorial
Judea Pearl (LA) Causality
14:30 - 16:00 Presentations. Chair: David Cox
Guido Consonni (Pavia) Priors for quantitative learning in probabilistic expert systems
Sebastian Seung (Lucent)Learning the Relationships Between Parts of an Object
Ross Shachter (Stanford) Causal Models: What does one need to know?
16:00 - 16:30 Tea-Break
16:30 - 18:00 Tutorial
David Spiegelhalter (MRC, Cambridge) Bayesian Graphical Modelling
19:00 Conference dinner (Sydney Sussex College)
Joe Whittaker (Lancaster)
Graphical Models
Interpreting inter-relationships between many variables simultaneously is a complex task that confronts the research worker in many areas of applied statistics. Specific problems are to determine
which variables interact, and how strongly, and to decide if the data can be condensed without loss of information.
Graphical modelling is a powerful technique for simplifying and describing multivariate interaction and association. The theoretical basis to the technique is the concept of conditional independence
and the prime theoretical and practical tool is the conditional independence graph. The lecture gives an elementary account, with some practical examples, of the graphical modelling approach to
multivariate data analysis, and is based on the recent book of Whittaker (1990) Graphical Models in Applied Multivariate Statistics.
The Markov properties of the independence graph, model fitting by maximum likelihood is discussed, the technique is related to log-linear models for contingency tables, to covariance selection models
for correlation matrices, to recursive models for path analysis and to the mixed interaction model.
David Heckerman (Microsoft)
Directed Acyclic Graphs: Representation and Learning
I will describe DAG models and discuss methods for learning the parameters and structure of such models from data. I will mention constraint-based methods for learning, but concentrate on the
Bayesian approach. Topics will include criteria for model selection, techniques for assigning priors, methods for handling missing data and latent variables, and search methods. At least one
real-world application will be presented.
Robert Cowell (City)
Learning with Dirichlet Mixtures in Bayesian Networks
The general framework for sequential learning on a Bayesian network using data was presented by Spiegelhalter and Lauritzen, who introduced the simplifying assumptions of local and global
independence of the parameters underlying the distribution over the network. However, when learning with incomplete data, global and local independencies are in general lost. Approximations may then
be made to restore these desirable independence properties, so that the computations become tractable.
Here we investigate approximate sequential learning methods -- which relax the traditional re-imposition of local and global independencies after processing each incomplete case -- with incomplete
data in a simple network, comparing their predictive performance with that obtained by Gibbs Sampling.
Wally Gilks (Cambridge) and Carlo Berzuini (Pavia)
Simulation Methodology for Dynamic Models
In situations where real-time, sequential, updating of predictive distributions is required, the usual Bayesian technology using Markov chain Monte Carlo (MCMC) is often too slow. Such situations
arise in a clinical setting where patients are being continuously monitored, and where predictions of adverse events are required. Non-biomedical situations include tracking of military targets and
financial time-series prediction. Similar problems arise in predictive model selection. We present a new technique for combining MCMC and importance resampling, where significant gains in
computational speed can be achieved with little loss in precision. We demonstrate the methodology on artificial data from a hidden Markov model.
Stefano Monti and Greg Cooper (Pittsburg)
Learning Bayesian Networks from Data Containing both Continuous and Discrete Variables
In this paper we illustrate two different methodologies for learning Bayesian networks from complete datasets containing both continuous and discrete variables. The two methodologies differ in the
way of handling continuous data when learning the Bayesian network structure. The first methodology uses discretized data to learn the Bayesian network structure, and the original non-discretized
data for the parameterization of the learned structure. The second methodology uses non-discretized data both to learn the Bayesian network structure and its parameterization. For the direct handling
of continuous data, we propose the use of artificial neural networks as probability estimators, to be used as an integral part of the scoring metric defined to search the space of Bayesian network
We report experimental results aimed at comparing the two methodologies. These results provide evidence that learning with discretized data presents advantages both in terms of efficiency and in
terms of accuracy of the learned models over the alternative approach of using non-discretized data.
Christopher Bishop (Aston)
Latent Variables
A powerful approach to probabilistic modelling involves the introduction of additional latent, or hidden variables. The distribution of observed variables is then representated in terms of the
marginalisation of the joint distribution over latent and observed variables. Familiar examples include mixture distributions (involving a discrete latent variable) and factor analysis (involving
continuous latent variables). The structure of such probabilistic models can be made particularly transparent using a diagrammatic representation in terms of a directed acyclic graph.
In this tutorial I will provide an overview of latent variable models for representing continuous variables. I will introduce the EM (expectation-maximization) algorithm which provides a general
technique for estimating the parameters of such models through maximum likelihood.
I will also show how a specific form of linear latent variable model can be used to provide a probabilistic formulation of the well-known technique of principal components analysis (PCA). This leads
naturally to mixtures, and hierarchical mixtures, of probabilistic PCA models, with applications in areas such as data compression, pattern recognition and data visualization.
C M Bishop (1995) Neural Networks for Pattern Recognition, Oxford University Press.
C M Bishop, M Svensen and C K I Williams (1996) GTM: The Generative Topographic Mapping, accepted for publication in Neural Computation. Available from http://www.ncrg.aston.ac.uk/
G J McLachlan and T Krishnan (1997) The EM Algorithm and Extensions, Wiley.
M E Tipping and C M Bishop (1997) Mixtures of Principal Component Analysers, technical report NCRG/97/003, submitted to Neural Computation. Available from http://www.ncrg.aston.ac.uk/
M E Tipping and C M Bishop (1997) A Hierarchical Latent Variable Model for Data Visualization, technical report NCRG/96/028. Available from http://www.ncrg.aston.ac.uk/
Paolo Guidici (Pavia) and Peter Green (Bristol)
Markov chain Monte Carlo decomposable graphical gaussian model determination
We propose a methodology for Bayesian model determination in undirected decomposable graphical gaussian models. To achieve this aim we consider, for each given graph, a hyper inverse Wishart prior on
the covariance matrix. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative
structures for the hyperparameters of the latter, and their consequences for the model.
Model determination is then carried out implementing a reversible jump MCMC sampler. In particular, the dimension changing move we propose concerns adding or dropping an edge from the graph. We
characterise the set of such moves which lead to a decomposable graph. We then consider appropriate random walk proposals for the within-model moves. The main advantages of our sampler are: a)
simplicity of the simulations; b) locality of most of the computations; c) extendability to hierarchical priors. These allow our proposed sampler to be suited for the analysis of complex structures.
{Keywords}: Bayesian Model Selection; Hyper Markov Laws; Junction Tree; Inverse Wishart Distribution; Reversible Jump MCMC.
Tommi Jaakkola (MIT)
Bayesian Estimation in the Presence of Missing Values
Bayesian parameter estimation has a number of advantages over simple maximum likelihood estimation. In the presence of missing values, however, the cost of performing Bayesian estimation often
considerably exceeds that of maximum likelihood methods. Moreover, in a maximum likelihood setting we have the EM algorithm that naturally handles missing values, while in the Bayesian context we
often need (possibly sophisticated) sampling methods. I will present a generally applicable deterministic algorithm for Bayesian parameter estimation in graphical models. The algorithm has an EM
style inner loop for optimizing the posterior over the parameters (and its dual distribution) in the context of each observation. The resulting posteriors can be represented locally while they are
optimized jointly in a KL-divergence sense. Overall the sequential algorithm is no more costly than performing ordinary maximum likelihood estimation.
David MacKay (Cambridge)
Introduction to Monte Carlo Methods
I will introduce a sequence of Monte Carlo methods: importance sampling, rejection sampling, the Metropolis method, and Gibbs sampling. For each method, we will discuss whether the method is expected
to be useful for high--dimensional problems.
The terminology of Markov chain Monte Carlo methods will be reviewed.
Some advanced Monte Carlo methods will be presented, including methods for reducing random walk behaviour.
Mauro Piccioni
The stochastic IPF algorithm and some of its properties.
For the Bayesian analysis of hierarchical models of contingency tables which are not decomposable it is interesting to compute expected values with respect to a Dirichlet distribution on the table
conditioning to zero the interactions prescribed by the model. A convenient implementation of the Gibbs sampler is possible, which exploits the independence between the marginals of any maximal
component and the interactions depending on its complement. The resulting algorithm is similar to the IPF with the difference that each marginal which is imposed in a step is drawn at random from a
compatible Dirichlet distribution, rather than being obtained from a table of data. Some properties of the algorithm are easily established; however, the main problem of evaluating its speed of
convergence seems to be open.
Michael D. Perlman (Washington)
Alternative Markov Properties for Chain Graphs
Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UGs) include models for
spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models
for expert systems and Bayesian belief networks.
Lauritzen, Wermuth, and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies
and which include both UGs and ADGs as special cases. For multivariate normal distributions, Cox and Wermuth introduced both multivariate regression and block regression models for chain graphs. In
this paper an alternative Markov property (AMP) for chain graphs is introduced and shown to be the Markov property satisfied by a Cox-Wermuth multivariate regression model, a multinormal
block-recursive linear structural equations model. This model can be decomposed into a collection of conditional normal models, each of which combines the features of multivariate linear regression
models and covariance selection models, facilitating the estimation of its parameters.
In the general case, necessary and sufficient conditions are given for the equivalence of the LWF and AMP Markov properties of a chain graph, for the AMP Markov equivalence of two chain graphs, for
the AMP Markov equivalence of a chain graph to some ADG or decomposable UG, and for other equivalences. A new pathwise separation criterion for chain graphs is presented, called p-separation, that is
equivalent to the AMP global Markov property. In some ways, the AMP Markov property and p-separation criterion for chain graphs are more direct extensions of the classical Markov property and Pearl's
d-separation criterion for ADGs than are the LWF Markov property and the Bouckaert-Studeny c-separation criterion for chain graphs.
(This research has been conducted jointly with Steen A. Andersson and David Madigan.)
Alberto Roverato (Modena) and Joe Whittaker
Asymptotic Analysis of Graphical Gaussian Models: an Isserlis Matrix Based Approach
The Isserlis matrix, which was introduced by Isserlis (1918) who computed the variance matrix of the sample variances and covariances in the normal case, plays an important role in an asymptotic
analysis of graphical Gaussian models. Both the Bayesian and frequentist approaches to inference require the computation of the Isserlis matrix, Iss$(\cdot)$, of a variance matrix $\Sigma$ with a
given zero pattern in the inverse.
We study the properties of the Isserlis matrix of the completion, $\Sigma$, of a positive definite matrix and propose an edge set indexing notation which highlights the symmetry existing between $\
Sigma$ and Iss$(\Sigma)$. In this way well known properties of $\Sigma$ can be exploited to give an easy proof of certain asymptotic results for graphical Gaussian models, as well as to extend such
results to the non-decomposable case.
Ross Shachter (Stanford)
Causal Models: What does one need to know?
Causality has been a controversial modeling distinction for most of this century. In some areas of computer science and the social sciences it has been treated as an intuitive primitive explanation
for all probabilistic dependence or relevance. On the other hand, in the statistics and systems analysis communities it has been tabooed as too subjective and poorly defined, and association has been
the acceptable model for relevance. In modeling decision processes and managing intelligent systems, the associational model is insufficient to predict the effects of actions. We have been working to
develop a practical framework that we believe successfully bridges these two extreme viewpoints.
As more formal models of causality have been proposed, a new controversy has arisen, namely, how complex must a causal model be? Can a simple causal model be sufficient for some purposes and
inadequate for others? Is it always necessary to model "counterfactual" propositions? When it is possible to learn causal models from data? We will try to address these questions as we present our
framework for building and applying causal models.
A. P. Dawid (University College, London)
Conditional Independence for Statistics and AI
The axiomatic theory of Conditional Independence provides a general language for formulating and determining questions relating to the intuitive idea of relevance, in a wide variety of different
contexts. This tutorial describes the basic theory and various interesting models of it, with special emphasis on its use in conjunction with modular graphical representations of problems in
Probability, Statistics and Expert Systems.
Stuart Russell (Berkeley)
Learning: temporal processes and structure
The first part of the talk will describe methods for representing temporal processes as graphical models, and will show how such representations can be learned from traces of system behaviour. Such
models can be shown to have advantages over two other standard representations for temporal processes, namely Hidden Markov Models and Kalman filters. Temporal processes raise special problems for
inference, and new algorithms will be described that partially solve these problems.
The second part of the talk describes work by Nir Friedman on learning structure in graphical models. It is shown that EM can be generalized to allow structural as well as parameter updates. This
allows a much more efficient search of the hypothesis space and also simplifies the calculation of Bayesian and MDL scores in the structure learning process.
Thomas Richardson (Department of Statistics, University of Washington)
Ancestral Graphs: Representing Markov Properties of Acyclic Directed Graphs under Marginalization and Conditionalization
It is natural to consider the class of acyclic directed graph (DAG) models under marginalization, representing latent variables or "correlated errors", and under conditionalization, representing
selection bias.
A graphical object, called a mixed ancestral graph (MAG) will be presented which represents the conditional dependencies in the observed margin. Further, in the Gaussian case, it is possible to
construct a model which parameterizes all distributions represented by the models within a particular Markov Equivalence class of marginalized DAG models. This parameterization forms the basis for
scoring based search algorithms.
A further graphical object, called a partial ancestral graph, will be presented, which represents structural features that are invariant across a Markov equivalence class of marginalized and
conditionalized DAGs. Alternatively, a PAG can be viewed as representing a Markov equivalence class of MAGs.
There exist algorithms for constructing a partial ancestral graph from a marginalized and conditionalized DAG or from an oracle for conditional independence facts.
P Shenoy (Kansas)
Some Improvements to the Shenoy-Shafer Architecture for Computing Marginals
In this talk we introduce three improvements to the Shenoy-Shafer architecture for computing marginals using local computation. Although the architecture is valid more generally, we will describe our
improvements for the case of Bayesian networks.
The traditional Shenoy-Shafer architecture has three phases. Phase one is the construction of a join tree. Phase two is the computation of messages between adjacent nodes in the join tree. And Phase
three is the computation of marginals for the desired nodes in the join tree.
The first improvement is the concept of a binary join tree. A binary join tree is a join tree such that no node has more than three neighbors. Binary join trees ensure that all combinations
(multiplications of tables for the case of probabilities) are done on a binary basis.
The second improvement is the introduction of a new phase called the transfer of valuations phase. This phase is subsequent to the join tree construction phase and prior to the message passing phase.
In this phase, some functions (tables for the case of probabilities) are transferred from the non-clique nodes to the clique nodes of the join tree. The transfer of valuations phase reduces the
amount of calculations needed for computation of marginal probabilities.
The third modification is the introduction of a new rule for computing marginals. The usual rule for computing the marginal for a node is to combine all messages the node receives from its neighbors
with the function associated with the node. If the node has a neighbor that is its superset, then the marginal is simply the combination of the two messages exchanged between the node and the
neighbor (one in each direction). Thus the marginal can be computed using only one combination of tables as opposed to several as per the old rule.
1. Shenoy, P. P. and G. Shafer (1990), "Axioms for probability and belief-function propagation," in Shachter, R. D., T. S. Levitt, J. F. Lemmer and L. N. Kanal (eds.), Uncertainty in Artificial
Intelligence, 4, 169-198, North-Holland, Amsterdam.
2. Shenoy, P. P. (1997), "Binary join trees for computing marginals in the Shenoy-Shafer architecture," International Journal of Approximate Reasoning, 17(2-3), 239-263.
3. Schmidt, T. and P. P. Shenoy (1997), "Some Improvements to the Shenoy-Shafer and Hugin architectures for computing marginals," Working Paper, University of Kansas, Lawrence, KS.
Milan Studeny
On separation criterion for chain graphs.
Chain graphs introduced in middle eighties provide a quite wide class of graphical models of probabilistic conditional indepence structures. They generalize both undirected graph models and directed
acyclic graph models. One of possible ways how to introduce the class of Markovian distribution with respect to a chain graph is a separation criterion for reading conditional independence statements
from the chain graph. It generalizes the well-known d-separation criterion for directed acyclic graphs and therefore it is also named the c-separation (chain separation) criterion.
In the talk the c-separation criterion will be formulated and explained by means of a few illustrative examples. It will be compared with the d-separation criterion and its equivalence with the
original moralization criterion will be shown. Its role in the proof of existence of a perfect Markovian distribution for every chain graph will be shortly mentioned. The end of the talk will be
devoted to interesting open questions concerning chain graphs.
Michael Kearns (AT&T)
Polynomial-time Algorithms for Learning Hidden Structure in Two-layer Noisy-OR Networks
I will describe work in progress (joint with Yishay Mansour of Tel-Aviv University) on inferring the structure of two-layer Bayesian networks in which the observable output units have noisy-OR CPT's
over their unobservable parents. Such networks have been the subject of both applied and theoretical study in recent years. I will discuss conditions under which the hidden causal relationships
between the inputs and outputs can be recovered *exactly* from sample data, and along the way make some combinatorial and algebraic observations that may prove useful in more general settings.
J.Q. Smith and R. Settimi (Warwick)
Geometrical aspects of missing data in graphical models
The estimation of probabilities on a Bayesian network is now well developed for complete data sets or for data collected on ancestor sets (eg Speigelhalter et al, 1993). The problem of how to
estimate probabilities when data is missing is somewhat more problematical. Numerical methods have been devised by a number of authors (Cowell, 1997, Ramoni & Sebastiani, 1997 a,b,c,d) and various
encouraging results have been proved for the case when data is missing at random. However major problems still exist when data is collected systematically only on certain joint margins (or indeed
when this is the case except for certain unusual observations). Some of the problems of lack of learning potential were illustrated numerically in Speigelhalter & Cowell (1992).
In this presentation we will address some of the structural issues associated with observing only certain marginal tables when a model is known to exhibit certain conditional independence structures.
We note that a collection of conditional independence statements for discrete probability models is equivalent to demanding that joint probabilities on the vector of discrete random variables lie in
the solution space of high dimensional sets of simultaneous quadric equations. Typically these equations have multiple solutions and so even with perfect information on probability margins the model
is often unidentifiable. In such instances numerical approaches will at best provide very uncertain estimates of probabilities and at worst be suseptible to giving misleading results.
We will begin by outlining the times when it is known that the graphical model is identifiable. We will then present examples which are is in a certain sense generic and which give rise to very
difficult forms of identifiability. Some new results on the nature of these structural ambiguities will be reported. These will appear in more detail in Settimi & Smith (1997). We discuss some of the
ways in which expedient prior information can make unidentifiable systems identifiable. We will argue that information on selected probabilities is often of far more value than additional conditional
independence statements. We will also report some new results about how Grobner Bases can be constructed on a case by case basis to check the nature and extent of identifiability as it occurs
(Riccomagno et al, 1997).
Geoffrey Hinton (Toronto) Learning Intractable Graphical Models
I will describe some learning rules for densely connected graphical models in which it is intractable to compute the full posterior probability distribution over all possible configurations of the
hidden variables. To perform learning, it is necessary to approximate the posterior distribution. The first learning rule is for models called "Boltzmann machines" that have undirected connections
and binary variables. The approximation to the posterior is performed using Gibbs sampling and the inaccuracy of the approximation causes serious problems. The other learning rules are for models
with directed acyclic connections and discrete or piecewise linear variables. In these models the learning can work well even if the approximation to the posterior distribution over hidden states is
Mike Jordan (MIT)
Approximate inference via variational techniques
For many graphical models of practical interest, exact inferential calculations are intractable and approximations must be developed. In this tutorial I describe the principles behind the use of
variational methods for approximate inference. These methods provide bounds (upper and lower) on probabilities on graphs. They are complementary to the exact techniques in the sense that they tend to
be more accurate for dense networks than for sparse networks; moreover, they can readily be combined with exact techniques. I will describe the application of variational ideas in a number of
(This is joint work with Zoubin Ghahramani, Tommi Jaakkola, and Lawrence Saul).
Sebastian Seung (Lucent))
Learning the Relationships Between Parts of an Object
In the syntactic approach to pattern recognition, a pattern is first decomposed into its parts, yielding a structural description. This structural description is checked against a set of "syntactic"
rules that govern how parts are combined to form a whole. The syntactic approach is intuitively appealing, and is related to current psychological models of object recognition.
I will discuss the problem of implementing the syntactic approach with graphical models. The discussion will focus on a very simple model with two layers of variables, visible and hidden. The
connections between hidden and visible variables define a pattern description "language," while the lateral connections between hidden variables specify a "grammar."
Learning algorithms for both types of connections will be illustrated by modeling images of handwritten digits. The hidden-visible connections learn stroke-like features, while the lateral
connections learn constraints on the combination of these features.
Because the hidden variables are continuous, they are well-suited for representing analog variations in a pattern. However, they are also constrained to be nonnegative, giving them a quasi-binary
I will conclude by linking this work to recent brain models that rely on continuous attractor dynamics.
David Spiegelhalter (MRC, Cambridge)
Bayesian Graphical Modelling
This tutorial will be based on three interconnected topics.
Graphical models: the use of directed and undirected conditional independence graphs to express the qualitative assumptions underlying a complex model.
Bayesian inference: the attachment to the links of the graph of conditional or prior distributions for all quantities, leading to a full probability model for both observables and parameters.
Markov chain Monte Carlo methods: conditional on observed data, the simulation of the posterior distributions for quantities of interest using Markov chain Monte Carlo methods, with Gibbs sampling as
the simplest procedure.
These three themes are brought together in the BUGS software.
Mike Titterington (Glasgow)
Hyperparameter Estimation And Related Topics
The talk will consider a framework that includes the estimation of hyperparameters in Bayesian analysis. The same framework covers a number of incomplete-data problems such as standard mixture
models, a wide variety of latent-structure models, hidden Markov chain models that are often applied in speech processing, and hidden Markov random field models popularised in recent approaches to
the statistical modelling of noisy images and related to the analysis of Boltzmann machines. The relationships among the models will be established, as will be the feasibility of applying empirical
Bayes and fully Bayesian approaches to the problem of estimating unknown quantities. Difficulties arise, especially in the context of hidden Markov random fields, and ways of trying to overcome the
difficulties will be described. In the context of the empirical Bayes methods, in particular, the use of devices such as mean-field approximations will be discussed.
Guido Consonni (Pavia)
Priors for quantitative learning in probabilistic expert systems
We consider a Directed Acyclic Graph with discrete nodes and address the issue of learning about conditional probabilities, which are regarded as uncertain quantities. Usually both global
independence and local independence are assumed in the corresponding prior. We suggest to relax the latter using two distributions, namely a hierarchical partition prior and a hierarchical logit
prior. Each prior is a discrete mixture with respect to several alternative dependence structures of the probabilities of each node conditional on parents' configurations. An application to binary
data under a complete sampling scheme is presented and directions on how to extend the methodology to incomplete data scheme is discussed. | {"url":"http://www.newton.ac.uk/programmes/NNM/nnmec_prog.html","timestamp":"2014-04-20T03:18:52Z","content_type":null,"content_length":"40220","record_id":"<urn:uuid:c8003e63-d385-48c2-9c2e-93747dd3ea0c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here a Tau, there a Tau… Plotting Quantile Regressions
I’ve ended up digging into quantile regression a bit lately (see this excellent gentle introduction to quantile regression
for ecologists [pdf] for what it is and some great reasons why to use it -see also here and here). In R this is done via the quantreg package, which is pretty nice, and has some great plotting
diagnostics, etc. But what it doesn’t have out of the box is a way to simply plot your data, and then overlay quantile regression lines at different levels of tau.
The documentation has a nice example of how to do it, but it’s long tedious code. And I had to quickly whip up a few plots for different models.
So, meh, I took the tedious code and wrapped it into a quickie function. Which I dorp here for your delectation. Unless you have some better fancier way to do it (which I’d love to see – especially
for ggplot….)
Here’s the function:
quantRegLines <- function(rq_obj, lincol="red", ...){
#get the taus
taus <- rq_obj$tau
#get x
x <- rq_obj$x[,2] #assumes no intercept
xx <- seq(min(x, na.rm=T),max(x, na.rm=T),1)
#calculate y over all taus
f <- coef(rq_obj)
yy <- cbind(1,xx)%*%f
if(length(lincol)==1) lincol=rep(lincol, length(taus))
#plot all lines
for(i in 1:length(taus)){
lines(xx,yy[,i], col=lincol[i], ...)
And an example use.
taus <- c(.05,.1,.25,.75,.9,.95)
plot(income,foodexp,xlab="Household Income",
ylab="Food Expenditure",
pch=19, col=alpha("black", 0.5))
rq_fit <- rq((foodexp)~(income),tau=taus)
Oh, and I set it up to make pretty colors in plots, too.
plot(income, foodexp, xlab = "Household Income",
ylab = "Food Expenditure",
pch = 19, col = alpha("black", 0.5))
quantRegLines(rq_fit, rainbow(6))
legend(4000, 1000, taus, rainbow(6), title = "Tau")
All of this is in a repo over at github (natch), so, fork and play.
One thought on “Here a Tau, there a Tau… Plotting Quantile Regressions” | {"url":"http://www.imachordata.com/here-a-tau-there-a-tau-plotting-quantile-regressions/","timestamp":"2014-04-18T20:42:33Z","content_type":null,"content_length":"13266","record_id":"<urn:uuid:a82d6aac-9bf2-4afa-b62d-026cb1cd137f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Date Speaker Title (click titles for abstract)
Signed tree associahedra
Vincent Pilaud An associahedron is a polytope whose vertices correspond to the triangulations of a convex polygon and whose edges correspond to flips between them. Loday gave a particularly
14 Apr. (É. elegant realization of the associahedron which has been generalized in two directions: on the one hand by Hohlweg and Lange to obtain multiple realizations of the associahedron
2014 Polytechnique) parametrized by a sequence of signs, and on the other hand by Postnikov to obtain a realization of the graph associahedra of Carr and Devadoss. The goal of this talk is to
unify and extend these two constructions to signed tree associahedra. We will also present the rich combinatorial and geometric properties of the resulting polytopes. The talk
is based on arXiv:1309.5222.
Intervals of the Tamari lattice
7 Apr. Viviane Pons
2014 (U. Vienna) We present the Tamari lattice on binary trees and more specifically, the questions related to the intervals of the lattice. The purpose of the talk is to define a new
combinatorial object called Tamari interval-posets which can be used to deal with those questions and give some enumeration results.
Non-commutative algebras and non-commutative Gröbner bases of homogeneous ideals
Any finitely generated non-commutative algebra is the quotient of a (non-commutative) polynomial ring by an ideal. In the recent years, I have been interested by three such
Nantel (1) When the ideal is Symmetric polynomials in n (non-commutative) variables
31 Mar. Bergeron (2) When the ideal is Quasisymmetric functions in n variables
2014 (York U.) (3) Fomin-Kirrilov algebra
The main question in the three cases is: Is the algebra finite dimensional? Very little is know in each case. [Mike, last week, also presented such an algebra] It would be very
interesting to understand the Gröbner basis of the defining ideal in each case. Since the relations are homogeneous, the Gröbner basis is homogeneous as well. I will present
the algebras above. Do a brief survey of Gröbner basis algorithm. Show that in the homogeneous case the algorithm can be modified to give the answer degree by degree. This
allows us to have partial answer for the algebras above, and hopefully, understand them better.
Non-commutative Gröbner bases and self avoiding walks
I will demonstrate a method of computing self avoiding walks using algebra by realizing paths in an n-dimensional lattice as the monomials of a 2n-variable non-commutative
algebra. The quotient by the two sided ideal of monomials representing paths that start and end at the same point is an algebra whose graded dimensions are the number of self
24 Mar. Mike Zabrocki avoiding walks that end at a fixed point.
2014 (York U.) This is work in progress, but we found some surprising initial results. It turns out that the non-commutative Gröbner basis of the ideal is the set of monomials representing
self-avoiding polygons plus those paths representing a step forward and back. In addition, the algebra seems to agree with factorization algorithms that appear in the self
avoiding walk literature. I will try to demonstrate how these calculations can be done using a package in GAP for computing non-commutative Gröbner bases called "GBNP".
This is joint work with Andrew Rechnitzer
Parallelogram Polyominoes and (Surprise!)-- The Diagonal Harmonics
A recent paper of Dukes and Le Borgne studied two statistics on parallelogram polyominoes-- two nonintersecting paths, each composed of north and east steps and bounded by a
17 Mar. Angela Hicks rectangular m x n bounding box. Conjecturing that q,t-counting the polyominoes by the two statistics resulted in polynomials that were symmetric in q and t as well as m and n,
2014 (Stanford U.) they called the statistics "area"' and "bounce," in reference to the historic statistics on parking functions. This talk will discuss a following joint paper (with the original
two authors, Aval, and D'Adderio) which introduced a third statistic, "dinv," on polyominoes and demonstrated the conjectured symmetries. In a surprising twist, the proof
illuminates a direct link from polyominoes to parking functions and the famous space of diagonal harmonics.
The order of birational rowmotion (joint work with Tom Roby)
For any finite poset P, rowmotion is a certain permutation of the set of order ideals of P. Studied by various authors (sometimes under different names and in different
guises), this permutation has proven to have interesting and nontrivial properties -- e. g., its order is p + q when P is the product [p] x [q] of two chains. In very recent
10 Mar. Darij Grinberg work (inspired by discussions with Berenstein), Einstein and Propp describe a way to generalize rowmotion: first to the piecewise-linear setting of order polytopes, then via
2014 (MIT) detropicalization to the birational setting.
In the latter setting, "birational rowmotion" is a birational self-equivalence of a certain algebraic variety, and no longer has finite order in the general case. Yet we were
able to compute its order for several classes of posets, including the product [p] x [q] of two chains (here the order is the same as in the case of ordinary rowmotion, that
is, p + q), some triangle-shaped posets and graded forests. Our methods are partly based on those used by Alexandre Volkov to resolve the type AA (rectangular) Zamolodchikov
Periodicity Conjecture, and the well-behavedness of birational rowmotion seems to be related to the combinatorics of root lattices.
The ring of graphs and the chromatic polynomial
3 Mar. Marcelo Aguiar Certain basic operations among graphs resemble addition and multiplication of ordinary numbers. Formally, the species of graphs is a ring in an appropriate category. We explain
2014 (Cornell U.) this fact and employ it to obtain a novel understanding and a wide generalization of the chromatic polynomial (and the corresponding symmetric function of Stanley) in terms of
ring theory. The talk is based on joint work with Swapneel Mahajan and Jacob White.
Dyck path triangulations and extendability
We introduce the Dyck path triangulation of the cartesian product of two simplices. The maximal simplices of this triangulation are given by Dyck paths, and its construction
24 Feb. Cesar Ceballos partially generalizes to produce triangulations using rational Dyck paths. Our study of the Dyck path triangulation is motivated by extendability problems of partial
2014 (York U.) triangulations of products of two simplices. We show that whenever m≥k>n, any triangulation of Δ[m-1]^k-1 x Δ[n-1] extends to a unique triangulation of Δ[m-1] x Δ[n-1] .
Moreover, with an explicit construction, we prove that the bound k>n is optimal. We also exhibit interesting interpretations of our results in the language of tropical oriented
matroids, which are analogous to classical results in oriented matroid theory.
This is joint work with Arnau Padrol and Camilo Sarmiento.
17 Feb.
2014 Family Day (University is closed)
Hessenberg-Stirling Matrices and divisible closure of the algebra of weighted Isobaric Polynomials
Aura Conci, Huilan Li, T. MacHenry, Geanina Tudose
In a paper called "Reflections on Symmetric Polynomials and Arithmetic Functions", Geanina Tudose and I exhibited an embedding of the Weighted Isobaric Polynomials (WIPs) in
Trueman their injective hull, that is, we adjoined all of the rational roots with respect to the convolution product of these polynomials to these polynomials. The WIP polynomials are
10 Feb. Machenry just the Schur Hook Polynomials written on the elementary symmetric polynomial basis, and include the Generalized Fibonacci Polynomials (GFP) and the Generalized Lucas
2014 (York U.) Polynomial (GLP). Because the Group of Multiplicative Arithmetic Functions can be faithfully represented using GFPs, and the Group of Additive Arithmetic Functions can be
faithfully represented using GLPs, these groups of Arithmetic functions inherit this embedding, that is, are also explicitly embedded in their divisible closures, this time
with respect to the Dirichlet product. In a recently published paper Huilan Li and I used Hessenberg Matrices to represent the GFP and GLP. In a current paper, Aura Conci and I
have used Hessenberg matrices and some functions related to the Stirling Numbers of the first and second kind to give matrix representations for these embeddings. The
Hessenberg matrices are especially suitable for such embeddings because of their computability properties, and the nice relation between their Determinants and Permanents,
Permanents being of importance in particle physics.
Strong persistence property of square-free monomial ideals
We say that an ideal I of a commutative Noetherian ring A has the strong persistence property if (I^{k+1}:I)=I^k for each k≥1. This concept was introduced by J. Herzog and A.A.
Qureshi in their work Persistence and stability properties of powers of ideals, where they present an equivalence in terms of associated primes and show that each polymatroidal
27 Jan. Jonathan ideal satisfies this property. Nonetheless, it had been studied among others by S. Morey, J. Matínez-Bernal, R.H. Villarreal in their work Associated primes of powers of edges
2014 Toledo ideals. The authors show that edge ideals of graphs have this property as a step towards obtaining other results, so we began to study the case of square-free monomial ideal.
(Cinvestav) The results presented in this talk were obtained recently in my PhD. We will begin by showing some general results about this property, like how to study it through components
of the minimal set of generators considering as a clutter, we will also show that for every square-free monomial ideal I holds (I^2:I)=I. After that, we will give examples and
classes of monomial ideals which have the strong persistence property, like cases of path ideals and edge ideals of weighted graphs. Finally we will talk about target problem
which we are working on.
Rational Catalan Numbers and Rational Associahedra
For each rational number x outisde the interval [-1,0] I will define a positive integer Cat(x) called the "rational Catalan number". The classical Catalan number corresponds to
25 Nov. Drew Armstrong x=n and the Fuss-Catalan number corresponds to x=n/((k-1)n+1). These numbers satisfy the symmetry Cat(x)=Cat(-x-1), which implies that Cat(1/(x-1))=Cat(x/(1-x)). I will call
2013 (U. of Miami) this common value the "derived Catalan number" Cat'(x):=Cat(1/(x-1))=Cat(x/(1-x)), and it follows that Cat'(x)=Cat'(1/x). Rational Catalan numbers are categorified by various
generalizations of traditional Catalan structures. In particular, I will describe joint work with B. Rhoades and N. Williams in which we define a "rational associahedron". This
is a pure simplicial complex with Cat(x) many maximal faces. It is not a polytope but it is homotopy equivalent to a wedge of Cat'(x) many spheres. We conjectured that the
equality Cat'(x)=Cat'(1/x) is represented by Alexander duality of rational associahedra. This conjecture was recently proved by B. Rhoades.
18 Nov. Nathan I will talk about two combinatorial miracles relating purely poset-theoretic objects with purely Coxeter-theoretic objects. The first miracle is that there are the same number
2013 Williams of linear extensions of the root poset as reduced words of the longest element (occasionally), while the second is that there are the same number of order ideals in the root
(LaCIM, UQAM) poset as certain group elements (usually). I will conjecturally place these miracles on remarkably similar footing and examine the generality at which we should expect such
statements to be true.
Laura Trying to prove stability
11 Nov. Colmenarejo
2013 (U. de We define the plethysm of two Schur symmetric functions as a new operation, which is more complicated and interesting than the Kronecker product. We will discuss this and other
Sevilla) technics (like FI-modules and Vertex operators) which can be used to study stability problems that appear in different contexts.
Applications of abacus diagrams: Simultaneous core partitions, alcoves, and a major statistic
4 Nov. Hanusa A t-core partition is a partition whose Young diagram has no hooks of length t. Partitions that are both s-core and t-core for integers s and t are called simultaneous core
2013 (Queens partitions. We will discuss the applications of simultaneous core partitions--we visit with lattice paths, alcoves in a hyperplane arrangement, and a "major index" statistic
College) that recovers a q-analog for Catalan numbers. This is joint work with Brant Jones and Drew Armstrong.
Bott-Samelson varieties, subword complexes and brick polytopes
The Bott-Samelson varieties are a resolution of singularities for Schubert varieties. Intuitively, Bott-Samelson varieties factor G/B into a product of ℂℙ^1's via a map into G/
28 Oct. Laura Escobar B. These varieties are mostly studied in the case in which the map into G/B is birational, however in this talk we will study fibers of this map when it is not birational. We
2013 (Cornell U.) will see that in some cases this fiber is a toric variety. In order to do so we will translate this problem into a purely combinatorial one in terms of subword complexes. These
simplicial complexes, defined by Knutson and Miller, encode a lot of information about reduced words in a Coxeter system. Pilaud and Stump realized certain subword complexes as
the boundary of a polytope which generalizes the brick polytope defined by Pilaud and Santos. For a nice family of words, the brick polytope is generalized associahedron. These
stories connect in a nice way: for certain words a fiber of the Bott-Samelson map is the toric variety of the Brick polytope.
The Zero-Divisor Graphs of Semigroups, Rings, and Group Rings
21 Oct. Farid We associate some graphs to a ring R and we investigate the interplay between the ring-theoretic properties of R and the graph-theoretic properties of the graphs associated to
2013 Aliniaeifard R. The Zero-divisor graph of a semigroup S is a graph with non-zero zero-divisors of S as vertex set and distinct vertices x and y are adjacent if xy = 0 or yx = 0. We
(York U.) investigate diameter, girth, and Isomorphism Problem for zero-divisor graphs of rings. Also, we show that the set of ideals of R is a semigroup. So we can define a zero-divisor
graph for the set of ideals of R. At the end we investigate the genus of these graphs.
14 Oct.
2013 Thanksgiving | {"url":"http://garsia.math.yorku.ca/seminar/algebra.html","timestamp":"2014-04-19T11:58:21Z","content_type":null,"content_length":"27610","record_id":"<urn:uuid:dadd6425-ed71-488b-a994-61354eda2f12>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reoccurring Error in RLC Series Circuit Problems.
Hi All,
In doing some practice RLC series circuit question, I have been obtaining incorrect values for active and reactive power. The magnitudes I obtain are correct however the sign of the value is not. I
have attributed this to obtaining an incorrect argument for the absolute power, or computing phasors incorrectly. I have attached an image of the reasoning behind my problem solving however I must be
fundamentally wrong somewhere. | {"url":"http://www.physicsforums.com/showthread.php?t=721115","timestamp":"2014-04-18T21:21:38Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:512c18a7-0671-445c-b580-c24b8d301982>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gamma spaces and monoidal categories II
up vote 4 down vote favorite
This question is kind of a follow-up of this one.
Suppose I have a topological category $\mathcal{C}$ (objects and morphisms topological spaces, source and target map continuous, etc.) together with a continuous tensor product $\otimes \colon \
mathcal{C} \times \mathcal{C} \to \mathcal{C}$, such that it is strict monoidal and symmetric, but there is no unit object (I have some kind of homotopy unit, in the cases I am interested in, but I
don't know how to build it into the category).
What is the structure of the classifying space $B\mathcal{C}$? Does the Gamma-space construction of Segal still work and give me some kind of homotopy associative $H$-space?
homotopy-theory at.algebraic-topology ct.category-theory symmetric-monoidal-catego
add comment
2 Answers
active oldest votes
I think you do not need to use the Gamma space construction of Segal here. You just need to note that the classyfing space functor from topological categories into toopological spaces
preserves products (given you work with compactly generated spaces).
This implies that in your situation the classifying spaces $|C|$ inhertits a strict multiplication, i.e. it is a topological monoid (possibly without unit). The homotopy unit on the
topoloigal category will then lead to a homotopy unit for your monoid.
up vote 6
down vote If you now want to group complete you form $\Omega B |C|$. Note here that $B|C|$ can be formed by taking the fat geometric realization of the simplicial space $N|C|_n := |C|^{n-1}$. For
this you don't need degeneracies, i.e. units.
Or do I misunderstand something?
You are right. But this way, I only see the $H$-space structure of the geometric realization. In a Gamma-space I also have control over the higher homotopies. For example, I can deloop
a Gamma space. The question is, whether I still get a Gamma space, if I drop the assumption about units. – Ulrich Pennig Mar 1 '12 at 9:26
You don't see higher homotopies because they are not there. You can deloop topological monoids without units, as a described above. If you insist on getting a Gamma space I don't know
from the top of my head. I have to think about it. – Thomas Nikolaus Mar 1 '12 at 9:30
There are higher homotopies still; they come from the symmetric structure on the classifying space, and are required in order to allow iterated delooping. – Tyler Lawson Mar 1 '12 at
I understood that Ulrich assumed that the initial category was strict symmetric!? If this is not the case you are of course right (and in fact it almost never happens that its strict
symmetric). – Thomas Nikolaus Mar 1 '12 at 15:36
@Tyler: So, iterated deloopings still work in the non-unital case? – Ulrich Pennig Mar 1 '12 at 21:15
add comment
You can construct a $\Gamma_{epi}$-category from your category (where $\Gamma_{epi}$ denotes the category of finite pointed sets with epimorphisms). You cannot extend it to a $\
Gamma$-category if your symmetric monoidal category doesn't have a unit (if you could, this would give you a unit). When you apply nerve you get a $\Gamma_{epi}$-space and this is probably
the same as a non unital $E_{\infty}$-space (i.e. a space over the operad $E_{\infty}$ where you forgot about the $0$-th space).
up vote 1
down vote However there is a result of Lurie in Higher Algebra that says that homotopy units on non unital $E_{\infty}$-spaces can be strictified so that the result is a unital $E_{\infty}$-space.
You seem to have a homotopy unit in your category so this result might be relevant.
Thank you. That makes sense. I also wondered about the following thing: I could just add a dummy object 1 to the category with mor(1,x) = empty if x is not 1 and mor(1,1) = id_1. Is there
any problem with extending the monoidal structure in such a way that 1 becomes a unit? If this works: What is the relation between the delooping above and the delooping of the
"unification"? – Ulrich Pennig Mar 2 '12 at 20:18
What you do is adding a disjoint unit or equivalently writing you non unital monoid as the augmentation kernel of an augmented monoid. I can tell you what happens for commutative algebras
in the category of chain complexes. I don't know how relevant it is to this situation. So if A is an augmented algebra and I is its augmentation ideal, then the iterated bar $B^n(A)$ is
equivalent to $k\oplus B^n(I)[n]$. You can find this result in a paper of Po Hu. – Geoffroy Horel Mar 3 '12 at 2:31
add comment
Not the answer you're looking for? Browse other questions tagged homotopy-theory at.algebraic-topology ct.category-theory symmetric-monoidal-catego or ask your own question. | {"url":"http://mathoverflow.net/questions/89935/gamma-spaces-and-monoidal-categories-ii","timestamp":"2014-04-19T22:31:58Z","content_type":null,"content_length":"64316","record_id":"<urn:uuid:0989b0ae-b7a7-4908-9316-03fe202962d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |