Hardware Verification
- Journal of the ACM, 1991
Cited by 37 (5 self)
A logic simulator can prove the correctness of a digital circuit if it can be shown that only circuits fulfilling the system specification will produce a particular response to a sequence of
simulation commands. This style of verification has advantages over other proof methods in being readily automated and requiring less attention on the part of the user to the low-level details of the
design. It has advantages over other approaches to simulation in providing more reliable results, often at a comparable cost.
- Journal of Automated Reasoning, 1985
Cited by 14 (4 self)
Computer programs may be regarded as formal mathematical objects whose properties are subject to mathematical proof. Program verification is the use of formal, mathematical techniques to debug
software and software specifications. 1. Code Verification How are the properties of computer programs proved? We discuss three approaches in this article: inductive invariants, functional semantics,
and explicit semantics. Because the first approach has received by far the most attention, it has produced the most impressive results to date. However, the field is now moving away from the
inductive invariant approach. 1.1. Inductive Assertions The so-called Floyd-Hoare inductive assertion method of program verification [25, 33] has its roots in the classic Goldstine and von Neumann
reports [53] and handles the usual kind of programming language, of which FORTRAN is perhaps the best example. In this style of verification, the specifier "annotates " certain points in
the program with mathematical assertions that are supposed to describe relations that hold between the program variables and the initial input values each time "control " reaches the
annotated point. Among these assertions are some that characterize acceptable input and the desired output. By exploring all possible paths from one assertion to the next and analyzing the effects of
intervening program statements it is possible to reduce the correctness of the program to the problem of proving certain derived formulas called verification conditions. Below we illustrate the idea
with a simple program for computing the factorial of its integer input N. [Flowchart omitted: the annotated program starts with input N, initializes A := 1, loops until N = 0, and stops with answer A.]
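The annotated-factorial idea can be made concrete with runtime assertions standing in for the Floyd-Hoare annotations (a sketch added here, not the article's original flowchart; the invariant A * K! = N! plays the role of the loop assertion):

```python
import math

def factorial_annotated(N):
    # Input assertion: characterizes acceptable input.
    assert isinstance(N, int) and N >= 0
    A, K = 1, N
    while K != 0:
        # Loop assertion ("annotation"): relates the program variables
        # A and K to the initial input value N at this control point.
        assert A * math.factorial(K) == math.factorial(N)
        A, K = A * K, K - 1
    # Output assertion: characterizes the desired output.
    assert A == math.factorial(N)
    return A
```

Proving that every path between one assertion and the next preserves them is exactly what generates the derived verification conditions the article describes.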
Re: RDF Semantics: RDFS entailment lemma
From: <herman.ter.horst@philips.com>
Date: Mon, 14 Apr 2003 17:49:06 +0200
To: Pat Hayes <phayes@ai.uwf.edu>, www-rdf-comments@w3.org
Message-ID:
>Herman, greetings. Your example (below) has sparked quite a lot of
>work. I think I now understand what is going on in it and how to
>handle it, so this is an attempt to summarize and explain.
>The key pattern is the following combination:
>type subproperty p .
>p domain d .
>Together, these support an inference path from any type assertion to
>another type assertion:
>a type b
>a p b (using rdfs6)
>a type d (using rdfs2)
>and hence, since a could be anything, support an entailment
>b subClass d
>which however is not inferrable at present. The 'natural'
>corresponding inference rule would be the one I mentioned:
>if [a type b .] entails [a type c .], then infer [b subclass c .]
>which is not a closure rule. Including this rule would be
>semantically elegant but computationally ugly, so I've been trying to
>find a way to avoid it.
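The inference path quoted above can be reproduced mechanically. The following is an illustrative fixpoint over the two rules the thread relies on (rule names as used in the thread; this sketch is not the W3C closure rule set, and is added here for illustration):

```python
def rdfs_closure(triples):
    """Saturate a set of (subject, predicate, object) triples under the
    subPropertyOf rule and the rdfs:domain rule (rdfs2)."""
    triples = set(triples)
    while True:
        new = set()
        for (s, p, o) in triples:
            for (p1, rel, q) in triples:
                # subPropertyOf rule: p subPropertyOf q and s p o  =>  s q o
                if rel == "rdfs:subPropertyOf" and p1 == p:
                    new.add((s, q, o))
                # domain rule (rdfs2): p domain d and s p o  =>  s rdf:type d
                if rel == "rdfs:domain" and p1 == p:
                    new.add((s, "rdf:type", q))
        if new <= triples:
            return triples
        triples |= new

graph = {
    ("rdf:type", "rdfs:subPropertyOf", "p"),
    ("p", "rdfs:domain", "d"),
    ("a", "rdf:type", "b"),
}
closed = rdfs_closure(graph)
assert ("a", "p", "b") in closed          # via the subPropertyOf rule
assert ("a", "rdf:type", "d") in closed   # via the domain rule
```

This reproduces the path "a type b", then "a p b", then "a type d" for the one concrete subject a; the entailment "b subClass d", which quantifies over all possible subjects, is precisely what the closure rules cannot reach.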
>(BTW, I now think I was wrong to worry that there was a corresponding
>case for subPropertyOf, as the inference path for the corresponding
>antecedent subproof would have to go from a P b to a Q b for any a
>and b, and I cannot see any such pathway which does not already
>involve an assertion of subPropertyOf, since there is no property
>corresponding to binary predication in the way that rdf:type does to
>unary predication.)
>The entailment you noted actually follows from a further observation, viz
>a type Resource
>is always true for any a; so using the above reasoning gives
>Resource subClass d
>as an entailment of the two initial triples alone, with no further
>assumptions. And it in fact follows by the kind of semantic analysis
>that you performed, more directly. Now, this in turn can only be
>interpreted as saying that the class extension of d is the same as
>rdfs:Resource, an equation I had not previously thought possible to
>express in RDF (and which means I have to rewrite rule rdfs7, see
>However, this (unexpectedly strong) conclusion does mean, I think,
>that this entire phenomenon can be captured by a single special rule,
>rdf:type rdfs:subPropertyOf xxx .
>xxx rdfs:domain yyy .
>rdfs:Resource rdfs:subClassOf yyy .
>and a modification to rdfs7a, viz:
>xxx rdf:type rdfs:Class .
>rdfs:Resource rdfs:subClassOf yyy .
>xxx rdfs:subClassOf yyy .
>(which covers the previous version since
>rdfs:Resource rdfs:subClassOf rdfs:Resource .
>follows trivially by rdfs 7b)
>The conclusion in your example then follows directly, since of course
>all classes are subclasses of Resource.
I agree that my two examples follow from the new rule rdfs12, and that
this rule is valid.
And I agree with Graham Klyne that the new rule rdfs7a follows
from the old rule rdfs7a, so can be replaced by it.
>But your example has some
>other consequences: in fact, it entails that Resource is a subClass
>of Class, ie that everything is a class.
How do you obtain this?
>In order to prove the closure lemma, I need to somehow show that this
>is the *only* way that the above entailment rule could possibly be
>The only way I can see how to do this at present is by an
>exhaustive analysis of the rule base, but I bet there is some elegant
>way to do it which I don't have time to think of.
>The general pragmatic conclusion seems to be that it is definitely
>not a good idea to try to say things about superproperties of
>rdf:type, for sure :-) I propose to add the following paragraph as a
>'warning' and also a brief commentary on this new rule:
>The rule rdfs11 is a technicality, required in order to ensure the
>truth of the following lemma. It is unlikely to be used in practice,
>and will normally only produce redundant inference paths for some
>items in the closure. In general, the property rdf:type is best
>considered to be part of the logical machinery; as this rule
>illustrates, imposing gratuitous conditions on rdf:type can produce
>unexpected entailments. Similar strange conclusions can arise from
>asserting that rdfs:Resource is a subclass of another class, for
>example, or asserting unintuitive properties of rdfs:Class.
I'm not sure whether these last two paragraphs are justified.
Couldn't you also say that the new rule rdfs12 shows that the
rdfs does not enable one to make the domain of rdf:type
or any of its superproperties any smaller than it is
(i.e., rdfs:Resource) by adding other domain statements?
>Any comments?
>>It seems that the RDFS entailment lemma as currently stated
>>in the RDF Semantics document (last call or editor's version)
>>is not entirely correct.
>>Consider the RDF graph G:
>> x rdf:type rdfs:Class .
>> rdf:type rdfs:domain y .
>>This RDF graph rdfs-entails the triple
>> x rdfs:subClassOf y .
>>( Proof: let I be an arbitrary rdfs interpretation of G.
>>Clearly I(x) and I(y) are in IC. Suppose z in ICEXT(I(x)),
>>so <z,I(x)> in IEXT(I(rdf:type)). The second triple shows
>>that <I(rdf:type),I(y)> in IEXT(I(rdfs:domain)).
>>With the semantic condition on rdfs:domain it follows
>>that z in ICEXT(I(y)), so that <I(x),I(y)> in
>>IEXT(I(rdfs:subClassOf)). )
>>However, this triple is not in the rdfs closure of G,
>>unless x = y.
>>(Proof: this closure contains the subClassOf statements
>> x rdfs:subClassOf x .
>> y rdfs:subClassOf y .
>>but no other subClassOf statements involving x or y.)
>>This example could be used as another closure rule ("rdfs11"),
>>but then the RDFS entailment lemma would still be false.
>>Namely, a slightly more complicated proof shows that
>>the graph H:
>> x rdf:type rdfs:Class .
>> rdf:type rdfs:subPropertyOf p .
>> p rdfs:domain y .
>>rdfs-entails the triple
>> x rdfs:subClassOf y .,
>>but that this triple is not in the (extended definition of the) rdfs closure of H.
>>I found these examples in an attempt to become completely convinced
>>of the truth of the rdfs entailment lemma.
>>In this attempt I did become convinced of the "soundness" part of
>>the lemma. For the "completeness" part of the lemma, it would perhaps
>>be simpler, and still very useful, to restrict the lemma to
>>"well-behaved" RDF graphs, which might be defined as RDF graphs which
>>do not make (RDF) statements about built-in (rdf or rdfs) vocabulary
>>in addition to the statements given by the axiomatic triples.
>>Herman ter Horst
>IHMC                  (850) 434-8903 or (650) 494-3973 home
>40 South Alcaniz St.  (850) 202-4416 office
>Pensacola, FL 32501   (850) 202-4440 fax
>                      (850) 291-0667 cell
>phayes@ai.uwf.edu     http://www.coginst.uwf.edu/~phayes
>s.pam@ai.uwf.edu (for spam)
Received on Monday, 14 April 2003 11:51:07 GMT
Alamo, CA Statistics Tutor
Find an Alamo, CA Statistics Tutor
...My professor noticed that I was helping students in his class and I was doing extremely well in his class so he recommended me to the tutoring center. I have enjoyed tutoring ever since. After
taking chemistry I began tutoring it also and with these two subjects as well as working in the stockroom I was able to pay for most of my education.
19 Subjects: including statistics, chemistry, physics, calculus
...I tutored pre-calculus as an on-call tutor for Diablo Valley Junior College for three years. I taught pre-calculus sections as a TA at UC Santa Cruz for two years. I have taken classes in
teaching literacy at Mills College.
15 Subjects: including statistics, reading, calculus, writing
...I take a personal interest in all of my students to make sure they understand the material and get the grades they are hoping to achieve. I have found that the best way to tutor students is to
be as open with them as possible. I try to befriend each of my students because I find that when you are friends it is easier to talk openly and figure out where the student is having issues.
27 Subjects: including statistics, chemistry, calculus, physics
...I have tutored hundreds of students. Although many came to me afraid of math, most of them left with a well-earned sense of confidence and mastery. You will too.
14 Subjects: including statistics, geometry, ASVAB, algebra 1
...I look forward to talking with parents about their concerns for their students. We CAN make a difference! The study of psychology at all levels, from high school through undergraduate, Master's and Doctoral levels, including statistics, experimental design and methodology. As a doctoral student ...
20 Subjects: including statistics, calculus, geometry, biology
canonical height on an elliptic curve
Let $E/\mathbb{Q}$ be an elliptic curve. It is often useful to have a notion of height of a point, in order to talk about the arithmetic complexity of a point $P$ in $E(\mathbb{Q})$. For this, one
defines height functions. For example, in $\mathbb{Q}$ one can define a height by $H(p/q)=\max(|p|,|q|)$, where the fraction $p/q$ is written in lowest terms.
Following the example of $\mathbb{Q}$, one may define a height on $E/\mathbb{Q}$ by
$h_{x}(P)=\begin{cases}\log H(x(P))&\text{if }P\neq O\\ 0&\text{if }P=O.\end{cases}$
In fact, given any even function $f:E(\mathbb{Q})\to\mathbb{R}$ on $E(\mathbb{Q})$ (i.e., $f(P)=f(-P)$ for any $P\in E(\mathbb{Q})$) one can define a corresponding height $h_{f}$.
However, one can refine this definition so that the height function satisfies some very nice properties (see below).
Let $E$ be an elliptic curve defined over $\mathbb{Q}$. The canonical height (or Néron-Tate height) on $E/\mathbb{Q}$, denoted by $\hat{h}$, is the real-valued function on $E(\mathbb{Q})$ defined by:
$\hat{h}(P)=\frac{1}{\deg f}\lim_{{N\to\infty}}\frac{h_{f}([2^{N}]P)}{4^{N}}$
for any even function $f:E(\mathbb{Q})\to\mathbb{R}$.
The fact that the definition does not depend on the choice of even function $f$ is due to J. Tate. In particular, one can simply choose $f$ to be the $x$-function, whose degree is $2$. The canonical
height satisfies the following properties:
Let $E/\mathbb{Q}$ and let $\hat{h}$ be the canonical height on $E$. Then:
1.

The height $\hat{h}$ satisfies the parallelogram law:

$\hat{h}(P+Q)+\hat{h}(P-Q)=2\hat{h}(P)+2\hat{h}(Q)$

for all $P,Q\in E(\overline{\mathbb{Q}})$.
2.

For all $m\in\mathbb{Z}$ and all $P\in E(\overline{\mathbb{Q}})$:

$\hat{h}([m]P)=m^{2}\hat{h}(P).$
3.
The height $\hat{h}$ is even and the pairing:
$\langle\cdot,\cdot\rangle:E(\overline{\mathbb{Q}})\times E(\overline{\mathbb{Q}})\to\mathbb{R},\quad\langle P,Q\rangle=\hat{h}(P+Q)-\hat{h}(P)-\hat{h}(Q)$
is bilinear (usually called the Néron-Tate pairing on $E/\mathbb{Q}$).
4.
For all $P\in E(\overline{\mathbb{Q}})$ one has $\hat{h}(P)\geq 0$ and $\hat{h}(P)=0$ if and only if $P$ is a torsion point.
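The limit definition can be illustrated numerically. In the sketch below, the curve $y^{2}=x^{3}-2$, the point of infinite order $P=(3,5)$, and the choice $f=x$ (with $\deg f=2$, as noted above) are made for illustration and are not part of the entry; exact rational arithmetic keeps the doubled points exact while the estimates $h_{x}([2^{N}]P)/(2\cdot 4^{N})$ converge:

```python
from fractions import Fraction
from math import log

A, B = 0, -2  # illustrative curve y^2 = x^3 + A*x + B, i.e. y^2 = x^3 - 2

def double(pt):
    # Duplication formula on y^2 = x^3 + A*x + B (valid when y != 0).
    x, y = pt
    lam = (3 * x * x + A) / (2 * y)
    x3 = lam * lam - 2 * x
    return (x3, lam * (x - x3) - y)

def naive_height(q):
    # h(p/q) = log max(|p|, |q|) for p/q in lowest terms.
    return log(max(abs(q.numerator), abs(q.denominator)))

P = (Fraction(3), Fraction(5))  # a point of infinite order on this curve
estimates = []
Q = P
for N in range(7):
    # (1/deg f) * h_f([2^N]P) / 4^N with f = x, deg f = 2.
    estimates.append(naive_height(Q[0]) / (2 * 4**N))
    Q = double(Q)
# Successive estimates converge geometrically to the canonical height of P.
```

The geometric convergence reflects the fact that the naive height satisfies $h_{x}([2]P)=4h_{x}(P)+O(1)$, so each doubling step changes the estimate by $O(4^{-N})$.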
See also: HeightFunction, RegulatorOfAnEllipticCurve
How do I use the quotient rule on this function: (4x − 2) / (x^2 + 1)?
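A worked application (added here for completeness; it is not part of the original thread): with u = 4x − 2 and v = x^2 + 1, the quotient rule (u/v)' = (u'v − uv')/v^2 gives f'(x) = [4(x^2 + 1) − (4x − 2)(2x)] / (x^2 + 1)^2 = (−4x^2 + 4x + 4)/(x^2 + 1)^2. A quick numerical sanity check of that result:

```python
# Check the quotient-rule result f'(x) = (-4x^2 + 4x + 4)/(x^2 + 1)^2
# against a central-difference approximation of f(x) = (4x - 2)/(x^2 + 1).

def f(x):
    return (4*x - 2) / (x**2 + 1)

def fprime(x):
    # From the quotient rule: (u'v - uv')/v^2 with u = 4x - 2, v = x^2 + 1.
    return (-4*x**2 + 4*x + 4) / (x**2 + 1)**2

h = 1e-6
for x in (-2.0, 0.0, 0.5, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2*h)
    assert abs(numeric - fprime(x)) < 1e-6
```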
what is a semi-vertical angle
October 14th 2007, 10:15 AM
what is a semi-vertical angle
i only want to know what is a semi-vertical angle of 45 degree. can anyone draw a graph of it for me
October 14th 2007, 10:21 AM
do you know what the line y = x looks like? the acute angle that line makes with the positive x-axis is a semi-vertical angle
this is my 44th post!!! :):)
October 14th 2007, 11:07 AM
the question of my book said:
a right circular conical vessel with the semi-vertical angle of 45 degree is placed on a horizontal table with its apex upwards. water is poured into the vessel through a small hole in the apex,
at a uniform rate of 12 m^3/s
........ another question said:
water is flowing at the rate of 10 cm^3/s into a leaking inverted circular cone of semi-vertical angle of 30 degree ............
what does that semi-vertical angle mean?
October 15th 2007, 12:37 AM
semi-vertical angle
October 15th 2007, 01:47 AM
As usual, if I don't understand a term, I look it up in the internet.
Google gave me a website that says,
About the word 'vertical'
'Vertical' has come to mean 'upright', or the opposite of horizontal. But here, it has more to do with the word 'vertex'. Vertical angles are called that because they share a common vertex.
So the vertical angle of a right circular cone is the apex angle of its cross-section along the height.
Semi-vertical angle then is half of that vertical anglpe.
In your posted drawings, the one defined at the cone to the right is the semi-vertical angle.
So, a semi-vertical angle of 45 degrees has 2(45) = 90 degrees for its full vertical angle.
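In related-rates problems like those quoted above, the semi-vertical angle t is what links the water's radius to its depth: in an apex-down cone, r = h·tan(t), so V = (π/3)h³·tan²(t) and dV/dt = π·h²·tan²(t)·dh/dt. A small sketch (the depth of 2 m is an illustrative number, not from the thread):

```python
import math

def dh_dt(dV_dt, h, semi_vertical_angle):
    # Water in an apex-down cone of semi-vertical angle t has radius
    # r = h * tan(t), so V = (pi/3) * h**3 * tan(t)**2 and
    # dV/dt = pi * h**2 * tan(t)**2 * dh/dt.  Solve for dh/dt.
    tan2 = math.tan(semi_vertical_angle) ** 2
    return dV_dt / (math.pi * h**2 * tan2)

# With a 45-degree semi-vertical angle (tan 45 = 1), volume flowing in
# at 12 m^3/s, and water depth h = 2 m (depth chosen for illustration):
rate = dh_dt(12.0, 2.0, math.radians(45))
```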
Explanation 1
Line 0: KINETICS [ number ] [ description ]
KINETICS is the keyword for the data block.
number --Positive number to designate the following set of kinetic reactions. A range of numbers may also be given in the form m-n , where m and n are positive integers, m is less than n , and the
two numbers are separated by a hyphen without intervening spaces. Default is 1.
description --Optional comment that describes the kinetic reactions.
rate name --Name of a rate expression. The rate name and its associated rate expression must be defined within a RATES data block, either in the default database file or in the current or previous
simulations of the run. The name must be spelled identically to the name used in RATES input (except for case).
Line 2: -formula list of formula, [ stoichiometric coefficient ]
By default, the rate name is assumed to be the name of a phase that has been defined in a PHASES data block and the formula for that phase is then used for the stoichiometry of the reaction (for
example, calcite in case "b" above). However, kinetic reactions are not restricted to mineral phases, any set of elements produced or consumed by the kinetic reaction (relative to the aqueous phase)
can be specified through a list of doublets formula and stoichiometric coefficient (lines 2a and 2c). Optionally, formula or -f[ ormula].
formula --Chemical formula or the name of a phase to be added by the kinetic reaction. If a chemical formula is used, it must begin with a capital letter and contain element symbols and
stoichiometric coefficients (line 2a). A phase name may be entered independent of case. Each formula must be a charge-balanced combination of elements. (An exception may be for defining exchangers or
surfaces related to kinetic reactants).
stoichiometric coefficient --Defines the mole transfer coefficient for formula per mole of reaction progress (evaluated by the rate expression in RATES). The product of the coefficient times the
moles of reaction progress gives the mole transfer for formula relative to the aqueous solution; a negative stoichiometric coefficient and a positive value for reaction progress gives a negative mole
transfer, which removes reactants from the aqueous solution. In line 2a, each mole of reaction dissolves 1.0 mole of FeS[ 2] and 0.001 moles of FeAs[ 2] into the aqueous solution; in line 2c, each
mole of reaction (as calculated by the rate expression) adds 0.5 mole of CH[ 2] O and 0.05 mole of NH[ 3] to the aqueous solution to simulate the degradation of nitrogen-containing organic matter.
Default is 1.0.
moles --Current moles of reactant. As reactions occur, the moles will increase or decrease. Default is equal to initial moles if initial moles is defined, or 1.0 mol if initial moles is not defined.
Optionally, m or -m.
initial moles --Initial moles of reactant. This identifier is useful if the rate of reaction is dependent on grain size. Formulations for this dependency often include the ratio of the amount of
reactant remaining to the amount of reactant initially present. The quantity initial moles does not change as the kinetic reactions proceed. Frequently, the quantity initial moles is equal to moles
at the beginning of a kinetic reaction. Default is equal to moles if moles is defined, or 1.0 if moles is not defined. Optionally, m0 or -m0
Line 5: -parms list of parameters
list of parameters --A list of numbers may be entered that can be used in the rate expressions, for example constants, exponents, or half saturation constants. In the rate expression defined with the
RATES keyword, these numbers are available to the Basic interpreter in the array PARM ; PARM(1) is the first number entered, PARM(2) the second, and so on. Optionally, parms, -p[ arms], parameters,
or -p[ arameters].
tolerance --Tolerance for integration procedure (moles). For each integration time interval, the difference between the fifth-order and the fourth-order integrals of the rate expression must be less
than this tolerance or the time interval is automatically reduced. The value of tolerance is related to the concentration differences that are considered significant for the elements in the reaction.
Smaller concentration differences that are considered significant require smaller tolerances. Numerical accuracy of the kinetic integration can be tested by decreasing the tolerance to determine if
results change significantly. Default is 1e-8. Optionally, tol or -t[ ol].
Line 7: -steps list of time steps
list of time steps --Time steps over which to integrate the rate expressions (seconds). The -steps identifier is used only during batch-reaction calculations; it is not needed for transport
calculations. By default, the list of time steps are considered to be independent times all starting from zero. The example data block would produce results after 100, 200, and 300 seconds of
reaction. However, the INCREMENTAL_REACTIONS keyword can be used to make the time steps incremental so that the results of the previous time step are the starting point of the new time step. For
incremental time steps, the example data block would produce results after 100, 300, and 600 seconds. Default is 1.0 second. Optionally, steps or -s[ teps].
Line 8: -step_divide step_divide
step_divide --If step_divide is greater than 1.0, the first time interval of each integration is set to time step / step_divide ; at least two time intervals must be integrated to reach the total
time of time step --0 to time step / step_divide and time step / step_divide to time step . If step_divide is less than 1.0, then step_divide is the maximum moles of reaction that can be added during
a kinetic integration subinterval. Frequently reaction rates are fast initially, thus requiring small time intervals to produce an accurate integration of the rate expressions. The Runge-Kutta method
will adapt to these fast rates when the integration fails the -tolerance criterion, but it may require several reductions in the length of the initial time interval for the integration to meet the
criterion; step_divide > 1 can be used to make the initial time interval of each integration sufficiently small to satisfy the criterion, which may speed the overall calculation time. However, the
smaller time interval will apply to all integrations throughout the simulation, even if reaction rates are slow later in the simulation. Using an appropriate step_divide < 1 can also cause
sufficiently small initial time intervals when rates are fast, but will not require small time intervals later in the simulation if rates are slow; however, the appropriate value for step_divide < 1
is not easily known and usually must be found by trial and error. The default maximal reaction is 0.1 moles during a time subinterval. Normally, -step_divide is not used unless run times are long and
it is apparent that each integration requires several time intervals. The status line, which is printed to the screen, notes the number of integration intervals that fail the -tolerance criterion as
"bad" and the number of integration intervals that pass the criterion as "OK". Optionally, step_divide or -step_[ divide].
Line 9: -runge_kutta ( 1, 2, 3, or 6)
( 1, 2, 3, or 6)--Designates the preferred number of time subintervals to use when integrating rates and is related to the order of the integration method. A value of 6 specifies that a 5th order
embedded Runge-Kutta method, which requires 6 intermediate rate evaluations, will be used for all integrations. For values of 1, 2, or 3, the program will try to limit the rate evaluations to this
number. If the -tolerance criterion is not satisfied among the evaluations or over the full integration interval, the method will automatically revert to the Runge-Kutta method of order 5. A value of
6 will exclusively use the 5th order method. Values of 1 or 2 are mainly expedient when it is known that the rate is nearly constant in time. Default is 3. Optionally, rk, -r[ k], runge_kutta, or -r[ unge_kutta].
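Assembling the identifiers described above, a KINETICS data block might look like the following. This is a hypothetical fragment: the rate name Pyrite, its amounts, and the -parms values are illustrative and assume a matching rate expression defined under the RATES keyword.

```text
KINETICS 1  Example kinetic reactant
    Pyrite
        -formula   FeS2  1.0   FeAs2  0.001
        -m         5e-4
        -m0        5e-4
        -parms     6.0  0.11
        -tol       1e-8
    -steps       100 200 300
    -step_divide 100
    -runge_kutta 3
```

Without INCREMENTAL_REACTIONS, this -steps list reports results after 100, 200, and 300 seconds; with INCREMENTAL_REACTIONS, the same list would instead report results after 100, 300, and 600 seconds, as noted above.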
PBS KIDS Giveaway: Win a Cat in the Hat Prize Pack (5 Winners) - Mom it Forward
Can you believe that March 2 is Dr. Seuss' 108th birthday? In celebration of his birthday, PBS stations nationwide will feature THE CAT-IN-THE-HAT-A-THON, a two-hour marathon of THE CAT IN THE HAT
KNOWS A LOT ABOUT THAT! (check local listings). Kids will also be able to engage with the Cat and friends through games and video content online and on mobile, and parents will be able to enjoy a
new Birthday Party Builder Tool on the PBS KIDS Shop website.
THE CAT IN THE HAT-A-THON will feature two brand new episodes, “Seasons — Spring and Summer/Fall and Winter,” which takes the Cat, Nick and Sally on a journey through the four seasons, and “When I
Grow Up/Doing It Differently,” in which Nick and Sally explore what it means to grow up and learn that trying a different approach can sometimes be the best way to solve a problem. The marathon will
also include encore presentations of “Hooray for Hair/Ice Is Nice” and “Chasing Rainbows/Follow the Prints.”
THE CAT IN THE HAT KNOWS A LOT ABOUT THAT! has ranked among the top ten programs for children ages 2 to 5 since it premiered on PBS KIDS in September 2010. The series is a key part of PBS KIDS’
commitment to helping kids build critical STEM — science, technology, engineering and math — skills through engaging content across platforms.
Mark your calendar on Thursday, March 1 from 9-10 p.m. ET and join us as we chat about exploration with PBS KIDS!
The Prizes
This week's giveaway is all about exploration and The Cat and the Hat. Five lucky winners will receive a $25 Gift Card to the PBS KIDS Shop, 1 THE CAT IN THE HAT KNOWS A LOT ABOUT THAT! DVD courtesy
of NCircle Entertainment, and 1 book from The Cat in the Hat’s Learning Library book series published by Random House Children’s Books.
Entry Requirements
For a chance to enter and win, please complete the following requirements and leave a separate comment for each on this post, including links to your original tweets.
1. Follow @PBSKIDS on Twitter.
2. Like the PBS KIDS Facebook page.
3. Answer the following question in the comment section below: Why do you love The Cat and the Hat?
4. Tweet the following:
Optional Entries
The following are extra entries that are completely optional and will earn you one extra entry for each completed item. Leave a separate comment on this post for each completed optional entry.
Terms and Conditions
Winners will be selected randomly through http://random.org. No purchase necessary to enter. Giveaway ends at 11:59 p.m. ET Friday, March 2. See all terms and conditions here. This giveaway is
available to U.S. residents only.
You might also like...
244 Responses to “PBS KIDS Giveaway: Win a Cat in the Hat Prize Pack (5 Winners)”
1. I follow @PBSKids on Twitter.
2. I liked PBS Kids on Facebook.
3. I love the Cat in the Hat because it reminds me of my childhood and my son loves it!
6. I follow @momitforward on Twitter.
7. I liked Mom It Forward on Facebook.
8. The Cat and the Hat is fun and entertain my boys.
9. I love that my kids can enjoy the same The Cat in the Hat that I did . The stories are still as entertaining as I remember.
12. i like the pbskids facebook page
melissa pruitt
13. i follow pbskids on twitter
14. i like momitforward on facebook
15. I love the variety of topics, the creative use of language and the focus on science!
16. The makes my curious little scientists think. He inspires their creativity and drives them to explore their world.
17. I love how the mom always lets them go with the,cat! Lol I also love the little buckle up banter…we do the same thing in the car and my kids,always giggle!
18. I love Dr. Seuss and what he does for our children's ability to learn to read. I think I did everything above
20. I love Dr. Suess books because I like to Rhyme!
21. Follow @PBSKIDS on Twitter. @purplelover04
23. it his fun and makes everyone so happy
28. I followed pbskids on twitter @Geekdad248
29. I LIKED pbs kids on facebook
30. Cat in the Hat is an educational show that I feel comfortable letting my kids watch and they enjoy watching it!
33. Followed @momitforward on Twitter: @Geekdad248
34. I follow @PBSKIDS on Twitter.
35. I liked PBS Kids on Facebook.
36. I love that the Cat and the Hat is educational and that my son enjoys having me read the books to him.
39. I follow @momitforward on Twitter. (Mito_Mom)
40. I like Mom It Forward on Facebook
41. The way he inspires creativity
42. I Follow @PBSKIDS on Twitter.(Kellydsaver)
43. I Like the PBS KIDS Facebook page(Kelly D Saver)
46. I Follow @momitforward on Twitter.(Kellydsaver)
47. I Like Mom It Forward on Facebook(Kelly D Saver)
48. I Follow @PBSKIDS on Twitter as @happymomc
49. I Like the PBS KIDS Facebook page.
as happy momc
50. I love The Cat and the Hat because it is educational
53. I Follow @momitforward on Twitter as @happymomc
54. I Like Mom It Forward on Facebook as happy momc
55. Follow @PBSKIDS on Twitter. @mellanhead
56. like pbs kids on facebook (jeannine drenchek-scavo)
57. I love the ryhmes, easy for kids learning to read
60. follow you on twitter @mellanhead
61. like you on facebook (jeannine drenchek-scavo)
62. I Follow @PBSKIDS on Twitter.
63. I Like the PBS KIDS Facebook page.
64. Oh the Cat In The Hat always brings back such fun memories. I just loved reading them to my son…and now my grandson! They always make you smile! The best!
67. I follow you on twitter
68. I follow PBS Kids on twitter. @khmorgan_00
69. I like PBS Kids on facebook. Kristie Newton
70. I enjoy the simple rhyming while reading The Cat in the Hat to the kids and the kids love watching the show on PBS and learning something new each time
73. I follow Mom it Forward on twitter. @khmorgan_00
74. I like Mom it Forward on facebook. Kristie Newton
76. aLL 4 OF MY KIDS LEARNED TO READ AND RHYME FROM CAT IN THE HAT
78. i follow mom it foward on twitter and liked on facebook
79. I love The Cat and The Hat and anything to do with Dr. Seuss, my kids are so excited for the Dr. Seuss marathon tomorrow!
82. I follow PKB kids on Twitter
83. I like PBS kids on facebook
84. Follow @PBSKIDS on Twitter.
85. Like the PBS KIDS Facebook page
86. I love cat in the Hat because it makes me laugh and is just a fun story!
89. Follow @PBSKIDS on Twitter.
90. Follow @momitforward on Twitter.
91. Like Mom It Forward on Facebook
95. dr suess is amazing and funny timeless lessons
97. I follow pbskids on twitter
98. I love The Cat and the Hat because he’s a fun character & the story is so easy & enjoyable for the kids to read.
100. I like the PBS KIDS fb page
102. The Cat in the Hat loves to explore. I love that my kids look at the world the same way!
103. I love Cat in the Hat because he was created by Dr. Seuss and I love Dr. Seuss!
106. sponsor’s fb liker
amramazon280 at yahoo dot com
107. sponsor’s twitter follower
amramazon280 at yahoo dot com
108. your fb liker
amramazon280 at yahoo dot com
109. your twitter follower
amramazon280 at yahoo dot com
113. I love everything about The Cat In The Hat – the rhyming, quotes, illustration, etc. I now have my son loving anything to do with Dr. Seuss, period.
{P.S. I already follow on twitter & like on facebook…}
114. my kids love The Cat In The Hat because of his jokes
Thank you for hosting this giveaway
pumuckler {at} gmail {dot} com
115. I like PBS KIDS on facebook (Louis H Uffmire)
pumuckler {at} gmail {dot} com
116. following @pbskids on twitter @left_the_stars
pumuckler {at} gmail {dot} com
117. pumuckler {at} gmail {dot} com
118. pumuckler {at} gmail {dot} com
119. I like Mom It Forward on facebook (Louis H Uffmire)
pumuckler {at} gmail {dot} com
120. following you on twitter @left_the_stars
pumuckler {at} gmail {dot} com
121. Follow @PBSKIDS on Twitter: @lic_gilda.
122. I like PBS Kids on facebook: Bella Campos
123. 1) Followed PBS Kids on twitter
124. 2) liked PBS kids facebook ages ago
125. 3) I like the cat in the hat because its such an easy read for kids — such a classic too! Love Dr. Seuss
126. Follow @PBSKIDS on Twitter.
129. Like the PBS KIDS Facebook page.
130. 6) followed MomItForward on Twitter
131. 7) liked Momitforward on FB long time ago
132. I love this because it’s great and you don’t have to be neat, his character is generally messy…
133. Why do you love The Cat and the Hat? My sons think is fun.
136. Follow @momitforward on Twitter: @lic_gilda.
137. Like Mom It Forward on Facebook: Bella Campos.
138. Following PBSKIDS on Twitter
139. Liked PBS KIDS on Facebook.
140. Honestly? I never liked the Cat in the Hat. He was always messing things up! But now that I’m a grown up can see that he was trying to help. His help just didn’t conform to the existing
expectations! Neither does my DH’s or my 18 mo old’s help. I guess the Cat is growing on me and helping me to accept housekeeping help however it comes.
142. Tweeted the JYLmomIF version of the twitter party w/modifications (west coast time). Does that count? I’m worried about flooding my followers cause I don’t usually do parties…
144. The Cat in the Hat is so fun! I love reading to my daughter as much as she enjoys hearing it =)
149. Like mom it foreword on Fb
150. follow mom it forward on twitter
151. I follow @PBSKids on Twitter.
152. I like PBS Kids on Facebook
153. I love the Cat in the Hat because it really is educational for my daughter. She learns a lot from the books and the show.
154. The Cat in the Hat is a wonderful book and an even better PSS Kids show. It’s been so much fun watching it with my kiddo!
155. I follow @PBSKIDS on Twitter
157. I love PBS Kids Facebook page
158. Following @MomIfForward on Twitter
159. Like MomItForward on Facebook
160. It’s fun and creative and bright and funny.
161. I love The Cat and the Hat because my kids really relate to him now, and he brings back awesome memories for me. Plus we learn so much (yea, me too)
163. I follow @MomitForward on Twitter
164. I like Mom it Forward on Facebook.
165. Follow @PBSKIDS on Twitter.
166. Like the PBS KIDS Facebook page.
167. I like the cat because he is alway willing to teach other new information.
170. I’m following @PBSKids on twitter @brendakae42
171. Follow @momitforward on Twitter.
172. Like Mom It Forward on Facebook:
173. I like PBS Facebook page.
174. I love the Cat in the hat cuz it’s just such a classic. 3 generations in my family.
178. I love the Cat in the hat because he’s a cat and it’s so cool kids everywhere today are experiencing him like I did when I was kid! Also love that he is on PBSkids.
179. Had an awesome time at the twitter party!!! I retweeted alot and think I am following everyone as well
180. not sure how to link back to any of it though
182. I liked MomItForward on FB!
183. My boys (and Mommy) love the Cat in the Hat because of the rhyming and funny literature and the art/illustrations work!
184. I subscribed to Mom It Forward weekly newsletter!
185. Follow @PBSKIDS on Twitter.
186. Like the PBS KIDS Facebook page.
Teh Doll
187. i love cat in the hat cause his sings such great songs and make my kids super happy..
188. tweeted
189. Like you on facebook
Teh Doll
190. I Follow @PBSKIDS on Twitter (@lake12)
191. I Like the PBS KIDS Facebook page
192. Because it is such a Classic!! I remember reading it when I was little! And I love reading it to my son now!!
194. I follow @PBSKIDS on Twitter.as @mngirlinssp
195. Follow @momitforward on Twitter (@lake12)
196. I Like the PBS KIDS Facebook page as rebecca shockley
197. I love Cat in the Hat because he tells such a great story with lot’s of very cool words!
200. I Follow @momitforward on Twitteras @mngirlinssp
201. I like you on FB as rebecca shockley
202. .Followrd @PBSKIDS on Twitter.
203. Follow @PBSKIDS on Twitter @eaglesforjack eaglesforjack@gmail.com
204. .Liked the PBS KIDS Facebook page
205. Like the PBS KIDS Facebook page (mrstinareynolds) eaglesforjack@gmail.com
206. I love the cat in the hat because of the creativity and imagination. eaglesforjack@gmail.com
208. my grandchilden love Cat in the Hat
214. I Follow @PBSKIDS on Twitter.
215. I Like the PBS KIDS Facebook page.
216. I like Cat in the Hat because I grew up with it, as well as all of the suess classics.
217. I tweeted both tweets on twitter.
218. I Follow @momitforward on Twitter.
219. I Like Mom It Forward on Facebook:
220. I love the cat in the hat because I grew up reading the same books that i am now reading to my two sons!!
222. I Followed @momitforward on Twitter.
223. liked PBS Kids on facebook
224. I Liked Mom It Forward on Facebook.
225. I follow @PBSKIDS on Twitter
226. I Follow @PBSKIDS on Twitter
227. I like Cat In The Hat because Dr. Suess was one of the first books I ever remember being able to read on my own.
228. I Like the PBS KIDS Facebook page
229. I like Cat in the Hat because of it’s creativity!
232. I Follow @momitforward on Twitter
233. I Like Mom It Forward on Facebook
234. i follow pbskids on twitter- @denimorse
236. i like the cat in the hat because he makes my daughter smile
239. i follow momitforward on twitter- @denimorse
240. i like momitforward on fb
241. love hIs talking in rhyme
243. We love the Cat in the Hat at our house because he keeps both my two year old and me entertained!
244. Followed @PBSKIDS on Twitter. | {"url":"http://momitforward.com/pbs-kids-giveaway-win-cat-hat-prize-pack-5-winners","timestamp":"2014-04-19T17:07:26Z","content_type":null,"content_length":"244989","record_id":"<urn:uuid:f429da7b-df30-40db-ba64-e33dd0515d1f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficiency of Batch v. Fly as it Relates to Mash Tun Design
If the trapped sugars (in solution) inside the grains were able to be mechanically extracted, one would achieve better efficiency. That's assuming there is a significant amount of available sugars to extract, which I believe to be true.
I don't think that there are actually a lot of sugars trapped inside the grain. When I say inside the grain, I mean inside the grits. The mash gives enough time for those sugars to leach into the mash liquid, a fact that can be verified by testing the mash gravity. Once that is done, it is all about washing the sugar-rich wort off the outside of the pieces that make up the spent grain. There might be some diffusion happening from inside the grits into the sparge water, but I deem that effect to be minimal. If it were true that a significant amount of sugar had to be extracted from the grits during the sparge, the rest time during batch sparges would matter.
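The reasoning above lends itself to a quick back-of-the-envelope calculation. The sketch below follows the poster's own assumptions (all sugar fully dissolved and mixed before each runoff, no diffusion out of the grits); the function name and example volumes are made up for illustration:

```python
def batch_sparge_recovery(mash_water, absorbed, sparge_volumes):
    """Fraction of dissolved sugar collected over first runnings plus a
    series of batch sparges, assuming perfect mixing before each runoff.

    mash_water:     total water in the mash (any volume unit)
    absorbed:       water retained by the grain bed after draining
    sparge_volumes: sparge-water additions, one per batch
    """
    sugar = 1.0        # normalised: all sugar starts in solution
    collected = 0.0
    liquid = mash_water
    for addition in [0.0] + list(sparge_volumes):
        liquid += addition                      # stir in the sparge water
        drained = liquid - absorbed             # all liquid except what the grain holds
        collected += sugar * drained / liquid   # sugar leaves with the runoff
        sugar *= absorbed / liquid              # sugar left behind in the grain bed
        liquid = absorbed
    return collected

# e.g. 5 gal of mash water, 1 gal absorbed by the grain, two 2 gal batch sparges
print(round(batch_sparge_recovery(5.0, 1.0, [2.0, 2.0]), 3))  # → 0.978
```

Under this model the recovery depends only on volume ratios, which is consistent with the claim that extra rest time during a batch sparge shouldn't matter.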
Agitation during the mash helps by providing some mechanical force that distributes enzymes, product, and substrate. It helps during lautering because it breaks up channels that may have formed. | {"url":"https://www.homebrewersassociation.org/forum/index.php?topic=829.0;prev_next=prev","timestamp":"2014-04-17T07:40:35Z","content_type":null,"content_length":"71443","record_id":"<urn:uuid:d09a45d7-c02e-4d94-9185-7cbfd0e63c44>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Camp Springs, MD Math Tutor
Find a Camp Springs, MD Math Tutor
...Linear Algebra is more advanced, but can be approached easily if you understand the basics of matrices. I achieved an A in Linear Algebra when I took it in college. I am a recent graduate from
the University of Connecticut.
32 Subjects: including logic, algebra 1, algebra 2, calculus
...My math tutoring methods involve explanations via diagrams, extensive practice problems, and taking tests under timed conditions. As a reader, writer, and public speaker, I have a very strong
understanding of vocabulary. This is evidenced by my scores of 4 on the AP English Literature Exam, 780 on the Critical Reading section of the SAT, and 800 on the Writing Section.
16 Subjects: including algebra 1, prealgebra, reading, English
...Wegman's in Woodmore, MD; I have extensive experience in this particular subject. My undergraduate major was statistics. I have taken various graduate level courses in Survey Methodology as
15 Subjects: including geometry, calculus, discrete math, differential equations
...My lessons are fun and creative but most importantly, I try to show how Mathematics is relevant. I believe in teaching the content as well as instilling confidence so students can become
successful, life-long learners. I am a regional instructor for Texas Instruments and have expertise in the technology used in your student's classroom.
14 Subjects: including SAT math, chess, trigonometry, precalculus
...I am equipped to tutor in all elementary and middle school subjects, as well as English, Writing, Social Studies, History, and Theatre Arts on the high school level. References are available
upon request. I am excited to work with you and be a part of your educational journey to success!
23 Subjects: including algebra 1, prealgebra, English, reading
{"url":"http://www.purplemath.com/Camp_Springs_MD_Math_tutors.php","timestamp":"2014-04-20T08:44:29Z","content_type":null,"content_length":"24189","record_id":"<urn:uuid:4927e9f5-dc5a-4705-837e-1cbe9c4727bf>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
The case of the red-haired kids
When going through data - whether from Google or CERN - understanding how and how not to look at the numbers can make a large difference.
Seriously, shame on me for not noticing the release of a product named Correlate until December 2012. Correlate by Google was released in May last year and is a tool to see how two different search
trends have panned out over a period of time. But instead of letting you pick out searches and compare them, Correlate saves a bit of time: you choose one trend, and it automatically picks out
trends similar to the one you've got your eye on.
For instance, I used the “Draw” option and drew a straight, gently climbing line from September 19, 2004, to July 24, 2011 (both randomly selected). Next, I chose “India” as the source of search
queries for this line to be compared with, and hit “Correlate”. Voila! Google threw up 10 search trends that varied over time just as my line had.
Since I’ve picked only India, the space from which the queries originate remains fixed, making this a temporal trend - a time-based one. If I’d fixed the time - like a particular day, something short
enough to not produce strong variations - then it’d have been a spatial trend, something plottable on a map.
Now, there were a lot of numbers on the results page. The 10 trends displayed in fact were ranked according to a particular number “r” displayed against them. The highest ranked result, “free english
songs”, had r = 0.7962. The lowest ranked result, “to 3gp converter”, had r = 0.7653.
And as I moused over the chart itself, I saw two numbers, one each against the two trends being tracked. For example, on March 1, 2009, the “Drawn Series” line had a number +0.701, and the “free
english songs” line had a number -0.008, against it.
What do these numbers mean?
This is what I want to really discuss because they have strong implications on how lay people interpret data that appears in the context of some scientific text, like a published paper. Each of these
numbers is associated with a particular behaviour of some trend at a specific point. So, instead of looking at it as numbers and shapes on a piece of paper, look at it for what it represents and
you’ll see so many possibilities coming to life.
The numbers against the trends, +0.701 for “Drawn Series” (my line) and -0.008 for “free english songs” in March ‘09, are the deviations. The deviation is a lovely metric because it sort of presents
the local picture in comparison to the global picture, and this perspective is made possible by the simple technique used to evaluate it.
Consider my line. Each of the points on the line has a certain value. Use this information to find their average value. Now, the deviation is how much a point’s value is away from the average value.
It’s like if 11 red-haired kids were made to stand in a line ordered according to the redness of their hair. If the “average” colour around was a perfect orange, then the kid with the “reddest” hair
and the kid with the palest-red hair will be the most deviating. Kids with some semblance of orange in their hair-colour will be progressively less deviating until they’re past the perfect
“orangeness”, and the kid with perfectly-orange hair will be completely non-deviating.
So, on March 1, 2009, “Drawn Series” was higher than its average value by 0.701 and “free english songs” was lower than its average value by 0.008. Now, if you’re wondering what the units are to
measure these numbers: Deviations are dimensionless fractions - which means they’re just numbers whose highness or lowness are indications of intensity.
And what’re they fractions of? The value being measured along the trend being tracked.
Now, enter standard deviation. Remember how you found the average value of a point on my line? Well, the standard deviation is the typical size of a deviation (strictly, the root-mean-square of the deviations, not their plain average). It’s like saying the children
fitting a particular demographic are, for instance, 25 per cent smarter on average than other normal kids: the standard deviation is 25 per cent and the individual deviations are similar percentages
of the “smartness” being measured.
So, right now, if you took the bigger picture, you’d see the chart, the standard deviation (the individual deviations if you chose to mouse-over), the average, and that number “r”. The average will
indicate the characteristic behaviour of the trend - let’s call it “orange” - the standard deviation will indicate how far off on average a point’s behaviour will be deviating in comparison to
“orange” - say, “barely orange”, “bloody”, etc. - and the individual deviations will show how “orange” each point really is.
At this point I must mention that I conveniently oversimplified the example of the red-haired kids to avoid a specific problem. This problem has been quite single-handedly responsible for the
news-media wrongly interpreting results from the LHC/CERN on the Higgs search.
In the case of the kids, we assumed that, going down the line, each kid’s hair would get progressively darker. What I left out was how much darker the hair would get with each step.
Let’s look at two different scenarios.
Scenario 1: The hair gets darker by a fixed amount each step.
Let’s say the first kid’s got hair that’s 1 units of orange, the fifth kid’s got 5 units, and the 11th kid’s got 11 units. This way, the average “amount of orange” in the lineup is going to be 6
units. The deviation on either side of kid #6 is going to increase/decrease in steps of 1. In fact, from the first to the last, it’s going to be 5, 4, 3, 2, 1, 0, 1, 2, 3, 4, and 5. Straight down and
then straight up.
Scenario 2: The hair gets darker slowly and then rapidly, also from 1 to 11 units.
In this case, the average is not going to be 6 units. Let’s say the “orangeness” this time is 1, 1.5, 2, 2.5, 3, 3.5, 4, 5.5, 7.5, 9.75, and 11 per kid, which brings the average to ~4.6591 units. In
turn, the deviations are 3.6591, 3.1591, 2.6591, 2.1591, 1.6591, 1.1591, 0.6591, 0.8409, 2.8409, 5.0909, and 6.3409. In other words, slowly down and then quickly up.
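Both scenarios are easy to check numerically. A short sketch in plain Python, using only the numbers already given above (the function name is mine):

```python
def deviations(values):
    """Return (average, absolute deviations of each value from the average)."""
    avg = sum(values) / len(values)
    return avg, [abs(v - avg) for v in values]

# Scenario 1: orangeness climbs by a fixed step, 1 through 11 units
avg1, dev1 = deviations(list(range(1, 12)))
print(avg1, dev1)  # average 6.0, symmetric deviations: 5, 4, 3, 2, 1, 0, 1, 2, 3, 4, 5

# Scenario 2: slowly, then rapidly
orange = [1, 1.5, 2, 2.5, 3, 3.5, 4, 5.5, 7.5, 9.75, 11]
avg2, dev2 = deviations(orange)
print(round(avg2, 4))  # → 4.6591: the average shifts toward the crowded left side
```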
In the second scenario, we saw how the average got shifted to the left. This is because there were more less-orange kids than more-orange ones. What’s more important is that it didn’t matter if the
kids on the right had more more-orange hair than before. That they were fewer in number shifted the weight of the argument away from them!
In much the same way, looking for the Higgs boson from a chart that shows different peaks (number of signature decay events) at different points (energy levels), with taller but fewer peaks to one
side and shorter but many more peaks to the other, can be confusing. While more decays could’ve occurred at discrete energy levels, the Higgs boson is more likely (note: not definitely) to be found
within the energy-level where decays occur more frequently (in the chart below, decays are seen to occur more frequently at 118-126 GeV/c2 than at 128-138 GeV/c2 or 110-117 GeV/c2).
If there’s a tall peak where a Higgs isn’t likely to occur, then that’s an outlier, a weirdo who doesn’t fit into the data. It’s probably called an outlier because its deviation from the average
could be well outside the permissible deviation from the average.
This also means it’s necessary to pick the average from the right area to identify the right outliers. In the case of the Higgs, if its associated energy-level (mass) is calculated as being an
average of all the energy levels at which a decay occurs, then freak occurrences and statistical noise are going to interfere with the calculation. But knowing that some masses of the particle have
been eliminated, we can constrain the data to between two energy levels, and then go after the average.
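The idea of constraining the data before averaging can be sketched as a weighted mean over a window. The event counts below are invented for illustration and are not LHC data:

```python
def weighted_mean(pairs):
    """Mean of the values, weighted by their counts."""
    total = sum(count for _, count in pairs)
    return sum(value * count for value, count in pairs) / total

# (energy level in GeV/c2, number of signature decays): illustrative numbers only
events = [(112, 2), (121, 6), (124, 10), (126, 9), (138, 7)]

print(round(weighted_mean(events), 2))  # → 126.18, pulled up by the lone tall outlier
window = [(e, n) for e, n in events if 118 <= e <= 128]
print(round(weighted_mean(window), 2))  # → 124.0, where the decays actually bunch up
```

Constraining the window throws out the freak occurrences and noise, so the average lands where the bulk of the events sits.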
So, when an uninformed journalist looks at the data, the taller peaks can catch the eye, even run away with the ball. But look out for the more closely occurring bunches - that’s where all the action is.
If you notice, you’ll also see that there are no events at some energy levels. This is where you should remember that uncertainty cuts both ways. When you’re looking at a peak and thinking “This
can’t be it; there’s some frequency of decays to the bottom, too”, you’re acknowledging some uncertainty in your perspective. Why not acknowledge some uncertainty when you’re noticing absent data, too?
While there’s a peak at 126 GeV/c2, the Higgs weighs between 124-125 GeV/c2. We know this now, so when we look at the chart, we know we were right in having been uncertain about the mass of the Higgs
being 126 GeV/c2. Similarly, why not say “There’s no decays at 113 GeV/c2, but let me be uncertain and say there could’ve been a decay there that’s escaped this measurement”?
Maybe this idea’s better illustrated with this chart.
There’s a noticeable gap between 123 and 125 GeV/c2. Just looking at this chart and you’re going to think that with peaks on either side of this valley, the Higgs isn’t going to be here… but that’s
just where it is! So, make sure you address uncertainty when you’re determining presences as well as absences.
So, now, we’re finally ready to address “r”, the Pearson covariance coefficient. It’s got a formula, and I think you should see it. It’s pretty neat.
$$ r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2}\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}} $$
The equation says "Let's see what your Pearson covariance, "r", is by seeing how much all of your variations are deviant keeping in mind both your standard deviations."
The numerator is what’s called the covariance, and the denominator is basically the product of the standard deviations. X-bar, which is X with a bar atop, is the average value of X - my line - and
the same goes for Y-bar, corresponding to Y - “mobile games”. Individual points on the lines are denoted with the subscript “i”, so the points would be X1, X2, X3, ..., and Y1, Y2, Y3, …”n” in the
formula is the size of the sample - the number of days over which we’re comparing the two trends.
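Written out in code, the formula is only a few lines. A minimal sketch with no libraries assumed (the function name is mine):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    covariance = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sx = sum((x - x_bar) ** 2 for x in xs) ** 0.5
    sy = sum((y - y_bar) ** 2 for y in ys) ** 0.5
    return covariance / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # ≈ 1.0: perfectly tied, climbing together
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # ≈ -1.0: one climbs while the other descends
```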
The Pearson covariance coefficient is not called the Pearson deviation coefficient, etc., because it normalises the graph’s covariance. Simply put, covariance is a measure of how much the two trends
vary together. The normalised coefficient runs from -1 to +1: a magnitude of 0 means one trend’s variation has nothing to do with the other’s, and a magnitude of 1 means one trend’s variation is
inescapably tied to the variation of the other’s. If the coefficient is positive, it means that if one trend climbs, the other would climb, too. If it is negative, then one
trend’s climbing would mean the other’s descending (in the chart below, between Oct ’09 and Jan ’10, there’s a dip: even during the dive-down, the blue line is on an increasing note – here, the local
covariance will be negative).
Apart from being a conveniently defined number, covariance also records a trend’s linearity. In statistics, linearity is a notion that stands by its name: like a straight line, the rise or fall of a
trend is uniform. If you divided up the line into thousands of tiny bits and called each one on the right the “cause” and the one on the left the “effect”, then you’d see that linearity means each
effect for each cause is either an increase or a decrease by the same amount.
Just like that, if the covariance is a lower positive number, it means one trend’s growth is also the other trend’s growth, and in equal measure. If the covariance is a larger positive number, you’d
have something like the butterfly effect: one trend moves up by an inch, the other shoots up by a mile. This you’ll notice is a break from linearity. So if you plotted the covariance at each point in
a chart as a chart by itself, one look will tell you how the relationship between the two trends varies over time (or space). | {"url":"http://m.thehindu.com/opinion/blogs/blogs-the-copernican/article4256300.ece/?secid=12476","timestamp":"2014-04-19T09:26:48Z","content_type":null,"content_length":"39092","record_id":"<urn:uuid:d887ec74-49d1-431f-bb93-530e2c64e250>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Internet Differential Equations Activities
Chemical Kinetics
Table of Contents:
Chemical Kinetics - Introduction
Chemical kinetics, a topic in several chemistry courses, illustrates the connection between mathematics and chemistry. Chemical kinetics deals with chemistry experiments and interprets them in terms
of a mathematical model. The experiments are performed on chemical reactions as they proceed with time. The models are differential equations for the rates at which reactants are consumed and products
are produced. By combining models with experiments, chemists are able to understand how chemical reactions take place at the molecular level.
Basic Ideas of Chemical Kinetics
Stoichiometry in chemical reactions and relation to differential rate laws and integrated rate laws.
Definition of the rate of a chemical reaction.
Extent of reaction, extent variable: x.
Define rate law, rate constant and order of a reaction.
Temperature and reaction rates: Arrhenius Equation and activation energy.
Differential versus Integrated rate laws.
Empirical Rate Equations: FIRST ORDER chemical kinetics equations and solutions.
Empirical Rate Equations: SECOND ORDER chemical kinetics equations and solutions.
Composite Reactions or Reaction Mechanisms
Advanced Ideas in Chemical Kinetics
Oscillating Chemical Reactions:
History of oscillating reactions.
Example chemical oscillator: Lotka-Volterra.
Example chemical oscillator: Brusselator.
Example chemical oscillator: Oregonator.
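To get a feel for why such mechanisms oscillate, here is a minimal sketch that integrates the Lotka-Volterra rate equations, dX/dt = k1*A*X - k2*X*Y and dY/dt = k2*X*Y - k3*Y with [A] held constant, using a fixed-step classical fourth-order Runge-Kutta scheme. The rate constants and starting concentrations are arbitrary illustrative values:

```python
def lotka_volterra(x, y, k1=1.0, k2=1.0, k3=1.0, a=1.0, dt=0.001, steps=20000):
    """Integrate the Lotka-Volterra rate equations
        dX/dt = k1*A*X - k2*X*Y
        dY/dt = k2*X*Y - k3*Y
    with [A] held constant, using classical fixed-step RK4."""
    def rates(x, y):
        return k1 * a * x - k2 * x * y, k2 * x * y - k3 * y

    trajectory = [(x, y)]
    for _ in range(steps):
        kx1, ky1 = rates(x, y)
        kx2, ky2 = rates(x + 0.5 * dt * kx1, y + 0.5 * dt * ky1)
        kx3, ky3 = rates(x + 0.5 * dt * kx2, y + 0.5 * dt * ky2)
        kx4, ky4 = rates(x + dt * kx3, y + dt * ky3)
        x += dt * (kx1 + 2 * kx2 + 2 * kx3 + kx4) / 6.0
        y += dt * (ky1 + 2 * ky2 + 2 * ky3 + ky4) / 6.0
        trajectory.append((x, y))
    return trajectory

# With these constants the steady state is X = Y = 1; starting away from it,
# the intermediate concentrations cycle around it instead of settling down.
traj = lotka_volterra(1.5, 1.0)
xs = [point[0] for point in traj]
print(min(xs) < 1.0 < max(xs))  # → True: [X] oscillates about the steady state
```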
Thermogravimetric Analysis
TGA refers to the process of monitoring the mass of a sample while it is heated. The rising temperature causes chemical reactions to occur and may result in loss of mass. Background for TGA and
illustrative examples are found by clicking here .
Glossary of Terms
• Stoichiometry determines the molar ratios of reactants and products in an overall chemical reaction. We express the stoichiometry as a balanced chemical equation. For kinetics it is convenient to
write this as products minus reactants: n[p]P + n[q]Q - n[a]A - n[b]B (instead of the conventional equation n[a]A + n[b]B ---> n[p]P + n[q]Q). This indicates that n[a] and n[b] moles of reactants
A and B, resp., produce n[p] and n[q] moles of products P and Q.
• The rate of a chemical reaction is defined in such a way that it is independent of which reactant or product is monitored. We define the rate, v, of a reaction to be v = (1/n[g]) d[G]/dt where n
[g] is the signed (positive for products, negative for reactants) stoichiometric coefficient of species G in the reaction. Namely, v = (-1/n[a]) d[A]/dt = (1/n[p]) d[P]/dt, etc.
• It is convenient to refer to the extent of reaction. As the reactants are consumed and the products are produced, their concentrations change. If the initial concentrations of A, B, P and Q are
[A], [B], [P] and [Q], resp., then the extent of reaction is defined: x = -([A]-[A][0])/n[a] = -([B] - [B][0])/n[b] = ([P]-[P][0])/n[p] = ([Q]-[Q][0])/n[q]. Alternately, each species
concentration is a function of the extent of reaction: [A] = [A][0] - n[a]x, etc.
• Many reactions follow elementary differential rate laws such as v = k f([A], [B], ...) where f([A], [B], ...) is a function of the concentrations of reactants and products. That is, the rate
varies as the concentrations change. A proportionality constant, k, is called the rate constant of the reaction.
• When the rate law has the special form of a product (or quotient) of powers, f([A], [B], ...) = [A]^a [B]^b [P]^p [Q]^q then a is the order of the reaction with respect to A, b is the order
w.r.t. B, etc. Note that order may be positive, negative, integer, or non-integer. Further, the sum a + b + p + q is the overall order of the reaction rate law.
• NOTE: there is no necessary relation between orders and stoichiometric coefficients. That is, a might differ from n[a].
• Reaction rate constants are usually temperature dependent; the rate of a reaction usually increases as the temperature rises. The temperature dependence often follows Arrhenius' equation: k(T) =
A exp(-Ea/RT) where T is the absolute temperature, R the universal gas constant, Ea is the activation energy (specific to each reaction), and A is the "pre-exponential" or "frequency" or
"entropy" factor.
• One objective of chemical kinetics is to solve the differential rate law d[G]/dt = k f([A], [B], ...), and thereby express each species concentration as a function of time: [G](t). Since solution
requires integration, we call it the integrated rate law.
• A reaction mechanism is a set of steps at the molecular level. Each step involves combinations or re-arrangements of individual molecular species. The steps in combination describe the path or
route that reactant molecules follow to reach the product molecules. The result of all steps is to produce the overall balanced stoichiometric chemical equation for reactants producing products.
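To make the differential-versus-integrated distinction concrete for a first-order reaction (d[A]/dt = -k[A], whose integrated rate law is [A](t) = [A]0 exp(-kt)), here is a minimal sketch that checks the analytic solution against a crude Euler integration, with k supplied by Arrhenius' equation. The pre-exponential factor and activation energy are illustrative values, not data for any particular reaction:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Arrhenius equation: k(T) = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

def first_order_euler(conc0, k, t, steps=100000):
    """Crude Euler integration of the differential rate law d[A]/dt = -k[A]."""
    conc, dt = conc0, t / steps
    for _ in range(steps):
        conc += -k * conc * dt
    return conc

# Illustrative values only: A = 1e13 s^-1, Ea = 100 kJ/mol, room temperature
k = arrhenius(A=1.0e13, Ea=1.0e5, T=298.15)
t_half = math.log(2) / k                 # first-order half-life
numeric = first_order_euler(1.0, k, t_half)
analytic = 1.0 * math.exp(-k * t_half)   # integrated rate law: exactly 0.5
print(round(numeric, 4), round(analytic, 4))  # → 0.5 0.5
```

After one half-life both routes leave half the initial concentration, which is the defining property of first-order kinetics: the half-life is independent of [A]0.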
Extensive collection of physical chemistry problem solutions using MathCad can be found at the scicomp site.
You are invited to send your comments about these chemical kinetics pages to Ron Poshusta <poshustr@mail.wsu.edu>.
• Physical Chemistry by K.J. Laidler & J.H. Meiser [Houghton Mifflin Co., 1995] Chapters 9 and 10.
• Physical Chemistry by P. Atkins [W.H. Freeman & Co., 1994] Chapters 25 and 26.
• More references will be found under subtopics in chemical kinetics. | {"url":"http://www.idea.wsu.edu/ChemKinetics/","timestamp":"2014-04-17T13:26:23Z","content_type":null,"content_length":"13589","record_id":"<urn:uuid:b350f70e-f43d-4fda-a159-70753bacb34c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learning and extracting finite state automata with second-order recurrent neural networks
Results 1 - 10 of 150
, 1994
"... We propose a new Mgorithm which allows for the identification of any stochastic deterministic regular language as well as the determination of the probabilities of the strings in the language.
The algorithm builds the prefix tree acceptor from the sample set and merges systematically equivaJent stat ..."
Cited by 137 (13 self)
Add to MetaCart
We propose a new algorithm which allows for the identification of any stochastic deterministic regular language as well as the determination of the probabilities of the strings in the language. The
algorithm builds the prefix tree acceptor from the sample set and merges systematically equivalent states. Experimentally, it proves very fast and the time needed grows only linearly with the size
of the sample set.
- Cognitive Science , 1999
"... Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing
such structures. We present a connectionist model of human performance in processing recursive language st ..."
Cited by 126 (19 self)
Add to MetaCart
Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such
structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative
performance profile of the model matches human behavior, both on the relative difficulty of center-embedding and crossdependency, and between the processing of these complex recursive structures and
right-branching recursive constructions. We analyze how these differences in performance are reflected in the internal representations of the model by performing discriminant analyses on these
representations both before and after training. Furthermore, we show how a network trained to process recursive structures can also generate such structures in a probabilistic fashion. This work
suggests a novel explanation of people’s limited recursive performance, without assuming the existence of a mentally represented competence grammar allowing unbounded recursion. I.
- IEEE TRANSACTIONS ON NEURAL NETWORKS , 1998
"... A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor
structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive ..."
Cited by 117 (46 self)
Add to MetaCart
A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor
structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured
information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this
paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem
of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden statespace representation. We introduce a graphical
formalism for r...
, 1999
"... Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three dimensional structure, as well as its function.
Presently, the best predictors are based on machine learning approaches, in particular neural network archite ..."
Cited by 116 (22 self)
Add to MetaCart
Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three dimensional structure, as well as its function.
Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids, centered at the
prediction site. Although a fixed small window avoids overfitting problems, it does not permit to capture variable long-ranged information. Results: We introduce a family of novel architectures which
can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream
and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the
input and output levels. While our system currently achieves an overall performance close to 76% correct prediction---at least comparable to the best existing systems---the main emphasis here is on
the development of new algorithmic ideas. Availability: The executable program for predicting protein secondary structure is available from the authors free of charge. Contact: pfbaldi@ics.uci.edu,
gpollast@ics.uci.edu, brunak@cbs.dtu.dk, paolo@dsi.unifi.it. 1
- ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS , 1995
"... We introduce a recurrent architecture having a modular structure and we formulate a training procedure based on the EM algorithm. The resulting model has similarities to hidden Markov models,
but supports recurrent networks processing style and allows to exploit the supervised learning paradigm ..."
Cited by 108 (15 self)
Add to MetaCart
We introduce a recurrent architecture having a modular structure and we formulate a training procedure based on the EM algorithm. The resulting model has similarities to hidden Markov models, but
supports recurrent networks processing style and allows to exploit the supervised learning paradigm while using maximum likelihood estimation.
- IEEE Transactions on Neural Networks , 1996
"... We consider problems of sequence processing and propose a solution based on a discrete state model in order to represent past context. We introduce a recurrent connectionist architecture having a
modular structure that associates a subnetwork to each state. The model has a statistical interpretation ..."
Cited by 98 (12 self)
Add to MetaCart
We consider problems of sequence processing and propose a solution based on a discrete state model in order to represent past context. We introduce a recurrent connectionist architecture having a
modular structure that associates a subnetwork to each state. The model has a statistical interpretation we call Input/Output Hidden Markov Model (IOHMM). It can be trained by the EM or GEM
algorithms, considering state trajectories as missing data, which decouples temporal credit assignment and actual parameter estimation. The model presents similarities to hidden Markov models (HMMs),
but allows us to map input sequences to output sequences, using the same processing style as recurrent neural networks. IOHMMs are trained using a more discriminant learning paradigm than HMMs,
while potentially taking advantage of the EM algorithm. We demonstrate that IOHMMs are well suited for solving grammatical inference problems on a benchmark problem. Experimental results are
presented for the seven Tomita grammars, showing that these adaptive models can attain excellent generalization.
- THEORETICAL COMPUTER SCIENCE , 1994
"... We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time,
corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to ha ..."
Cited by 87 (8 self)
Add to MetaCart
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to
an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their
capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work
[20].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can
compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard
problems, as the equality ...
- Journal of the ACM , 1996
"... Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating
performance can be attributed to the instability of the internal representation of the learned DFA states. The use o ..."
Cited by 70 (16 self)
Add to MetaCart
Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can
be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this
instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal
DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural
network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive
algorithm genera...
, 1996
"... To Mom, Dad, and Susan, for their support and encouragement. ..."
, 1996
"... The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of
information between symbolic and connectionist knowledge representations. The focus of this paper is on t ..."
Cited by 61 (15 self)
Add to MetaCart
The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of
information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time
recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic
finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a
training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the
consistent DFAs the model which best approximates the learned regular grammar. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=157883","timestamp":"2014-04-23T22:35:30Z","content_type":null,"content_length":"39168","record_id":"<urn:uuid:9ccae823-c09a-48a1-8bb3-e22abf6ddcac>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
about the rate distortion function
I am having problems with the rate distortion function implemented in MATLAB. I'm receiving an error message of
Error in ==> blahut at 9
m=sum(p.*Y); %mathematical expectation of Y
Error in ==> Rate_distort at 18
[R,SNR] = blahut(p,nl,Y');
here is the blahut mfile
function [R,SNR] = blahut(p,nl,Y)
%This function computes one point of H(D) for given lambda
%D is distortion in dB
%p is probability distribution of input sequence
%Y is approximating set
m=sum(p.*Y); %mathematical expectation of Y
sigma2=sum(p.*(Y-m).^2);%variation of Y
ry=p; %initialization of distribution r(y)
while flag==1
v=ry*A'; %sum_y r(y)2^(lambda d(x,y))
P=p./v;%p(x)/sum_y r(y)2^(lambda d(x,y))
c=P*A; %c(y)
ry=ry.*c; %modified distribution
%%%%%%%%Computing thresholds%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%Stopping rule%%%%%%%%%%%%%%%%%%%%%
if error < 10^(-3)
d=P*D*ry'; %distortion
rate=-nl*d-log2(v)*p'-TL; %rate;
SNR=10*log10(sigma2/d);%distortion in dB
Here is my test file with components
N takes in values from 10:10:50
P is the probability distribution in the form of the binomial distribution
theta takes in value in the interval [0 1]
also theta is specified for theta = i/n with i = 0,1,....,n
%% Values for n
for jdx=1:length(theta)
for kdx=1:length(nVec)
%% specifications for rate distortion
nl = 0.1;
Y = [0 1; 1 0];
%% Blahut algorithm for rate distortion calculation
[R,SNR] = blahut(p,nl,Y');
When I type in my distortion matrix for Y, this is where I'm encountering the error
Error in ==> blahut at 9
m=sum(p.*Y); %mathematical expectation of Y
I also need to know how to plot the Distortion vs R(D).
Please help with any tips. Thank you very much.
2 Comments
There would be an error message just before that. For example it might say something like "matrix dimensions must agree". We need to see that message.
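For reference, here is a minimal NumPy sketch of the Blahut-Arimoto iteration for one rate-distortion point (my own re-implementation, not the poster's code; the variable names and the binary-Hamming example are illustrative). It also makes the shape constraint behind the reported error explicit: the source distribution and the distortion matrix must agree in the source dimension before element-wise products make sense. Sweeping the slope parameter `s` and collecting the `(D, R)` pairs gives exactly the points needed to plot distortion versus R(D).

```python
import numpy as np

def blahut_arimoto(p, d, s, iters=500):
    """Compute one (distortion, rate) point for slope parameter s.

    p : (n,) source distribution
    d : (n, m) distortion matrix d(x, y)
    Returns (D, R) with R in bits.
    """
    n, m = d.shape
    # This is the shape constraint behind the reported error: p must have
    # one probability per row of d before element-wise products make sense.
    assert p.shape == (n,), "p and d must agree in the source dimension"
    q = np.full(m, 1.0 / m)                   # output marginal q(y)
    A = np.exp(-s * d)                        # exp(-s * d(x, y))
    Q = None
    for _ in range(iters):
        w = A * q                             # unnormalized conditional
        Q = w / w.sum(axis=1, keepdims=True)  # conditional q(y|x)
        q = p @ Q                             # re-estimate marginal q(y)
    D = float(np.sum(p[:, None] * Q * d))                # expected distortion
    R = float(np.sum(p[:, None] * Q * np.log2(Q / q)))   # mutual information
    return D, R

# Binary uniform source with Hamming distortion; theory says R(D) = 1 - H2(D).
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
D, R = blahut_arimoto(p, d, s=2.0)

# To plot the curve, sweep s over a range, collect (D, R), and plot R against D.
```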
No products are associated with this question. | {"url":"http://mathworks.com/matlabcentral/answers/75861-about-the-rate-distortion-function","timestamp":"2014-04-16T10:12:36Z","content_type":null,"content_length":"25782","record_id":"<urn:uuid:f9ae4b19-e8e3-46ec-a461-9de7c6a4d404>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Serious is Unequal Power Distribution Between Speakers In Series? [Archive] - TeamTalk
10-10-2013, 06:38 PM
I've had a discussion with a JL Audio tech today. I want to run 4 speakers off of one MHD750/1. That would require wiring each pair of speakers in series and then wiring both pairs in parallel,
resulting in a 4-ohm load. The tech advised against this. Although he said it has been done, this kind of set-up will result in an unbalanced distribution of power between the speakers which are
wired in series. How serious is this issue? Generally, I believe most of us would like to know that each speaker is getting the same amount of power. Any thoughts on this? | {"url":"http://www.mastercraft.com/teamtalk/archive/index.php/t-57792.html","timestamp":"2014-04-20T01:23:23Z","content_type":null,"content_length":"12483","record_id":"<urn:uuid:bec47e85-b498-42b8-a765-d82b73391690>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
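How unequal the split gets is easy to quantify: speakers in series carry the same current, so each one dissipates power in proportion to its impedance (P_i = I^2 * R_i). A hedged Python sketch with made-up numbers, assuming two nominally identical 4-ohm drivers whose actual impedances differ by about 10% each way:

```python
def series_power_split(voltage, r1, r2):
    """Power dissipated in each of two series resistances (speaker impedances)."""
    current = voltage / (r1 + r2)        # series elements share one current
    return current**2 * r1, current**2 * r2

# Hypothetical: 40 V across a series pair of "4-ohm" drivers measuring 3.6 and 4.4 ohms.
p1, p2 = series_power_split(voltage=40.0, r1=3.6, r2=4.4)
ratio = p2 / p1                          # equals r2 / r1, here about 1.22
```

So with a 10% impedance mismatch each way, one driver sees roughly 22% more power than the other; with well-matched drivers the split is essentially even.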
Random Number Creator
Random Number Creator is a Windows-based application designed to quickly generate thousands of random numbers in seconds. The Random Number Creator program allows users to add a prefix and suffix
to all the numbers generated, so the generated random numbers can be positive or negative. Random numbers can be edited and copied to the clipboard for pasting into other applications. Random Number
Generator can print all random numbers or save the numbers as a file. Random Number Generator will generate up to 5000 numbers at a time.
Random Number Creator is a software package that features rich editing capabilities with clipboard support, undo/redo, and sorting. A Random Number Generator is a function object that can be used to
generate a random sequence of integers. That is: if f is a Random Number Generator and N is a positive integer, then f(N) will return an integer less than N and greater than or equal to 0. You can
also generate the list in sequence. Digits can extend up to 18. You can create a random list, sequential list, or constant-value list, each with a length of at most 18 digits.
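The "function object" described above can be sketched in Python (an illustration of the concept only, not the product's actual implementation; the prefix/suffix decoration is likewise just an example):

```python
import random

class RandomNumberGenerator:
    """A function object f such that f(N) returns an integer in [0, N)."""

    def __init__(self, seed=None):
        self._rng = random.Random(seed)   # reproducible sequence if seeded

    def __call__(self, n):
        if n <= 0:
            raise ValueError("N must be a positive integer")
        return self._rng.randrange(n)

f = RandomNumberGenerator(seed=42)
values = [f(100) for _ in range(5)]        # five integers in [0, 100)
decorated = [f"ID-{v}-A" for v in values]  # optional prefix/suffix, as above
```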
See also: Random number, Random Number Creator, random numbers, digits, list, floating-point, integers, gaussian, Clipboard, Text, random list generator, sequence | {"url":"http://www.downloadery.com/software/random-number-creator-34506.htm","timestamp":"2014-04-21T04:34:15Z","content_type":null,"content_length":"10137","record_id":"<urn:uuid:d4d7bf34-e164-40b3-9478-a0f4b2aa8e49>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: what to use when reshape is not good anymore?please some help
Replies: 2 Last Post: Sep 8, 2013 6:15 PM
Curious Re: what to use when reshape is not good anymore?please some help
Posted: Sep 6, 2013 10:13 AM
Posts: 1,923
Registered: 12/6/04
"Marinela Finta" wrote in message <l0bqi5$1th$1@newscl01ah.mathworks.com>...
> I am just starting to learn matlab and I would appreciate very much if you could help me...
> I am stuck with calculating the returns between some hours and for each day..
> For some date was going ok the following method:
> (now the price is between that hours)
> n=length(price);
> % The number if days
> %(79 is the number of observations within one day)
> ndays = n/79;
> price_d = reshape(price,79,ndays);
> %I take returns for each day
> returns_d = log(price_d(2:79,:))- log(price_d(1:78,:));
> However now I have another data set where the number of observations is no longer the same for each day. So in one day I have 79, in another 30 or 75 observations within a day. Therefore
I CANNOT use RESHAPE anymore.. :(
I'm not sure why you can't use RESHAPE.
What happens if you make 79 a variable and then
change it to 30, 75, or whatever based on how you
know that the number of observations change.
(Same for 78 in your above code.)
Also, it seems to me that you should do some error checking
to ensure that ndays is an integer (due to floating point arithmetic).
> How should I do in order to have sorted the observations(prices) according to each day? So to have similar thing as before: row with the prices and column with prices corresponding
to each day..
> thank you for your time..
> Marinela
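As an illustration of the suggestion above, here is a hedged NumPy sketch (Python rather than MATLAB, with made-up prices) of handling days with unequal observation counts: instead of reshaping into a rectangular matrix, split the price vector at the known day boundaries and compute within-day log returns per segment.

```python
import numpy as np

# Hypothetical intraday prices: day 1 has 4 observations, day 2 has 3.
price = np.array([100.0, 101.0, 100.5, 102.0,   # day 1
                  99.0, 99.5, 100.2])           # day 2
obs_per_day = [4, 3]

# np.split needs the cumulative offsets of the day boundaries.
days = np.split(price, np.cumsum(obs_per_day)[:-1])

# Within-day log returns: one fewer return than observations in each day.
returns_d = [np.diff(np.log(day)) for day in days]
```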
Date Subject Author
9/6/13 what to use when reshape is not good anymore?please some help Marinela Finta
9/6/13 Re: what to use when reshape is not good anymore?please some help Curious
9/8/13 Re: what to use when reshape is not good anymore?please some help Marinela Finta | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2597635&messageID=9255212","timestamp":"2014-04-16T10:28:19Z","content_type":null,"content_length":"19931","record_id":"<urn:uuid:41017870-9bfd-41d7-b6bb-3a922aa3dbf6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
Injective maps $\mathbb{R}^{n} \to \mathbb{R}^{m}$
Let $f: \mathbb{R}^{n} \to \mathbb{R}^{m}$ be an injection for $n>m$. Can $f$ be continuous? Why?
I got this question in mind when I was trying to find a continuous map from $\mathbb{R}^{2}$ to $\mathbb{R}$.
If $f$ is injective and continuous from $\mathbb{R}^n$ to $\mathbb{R}^m$ where $n>m$ then $f$ restricts to a continuous bijection from $S^{n-1}$, the unit sphere in $\mathbb{R}^n$, to a compact
subset $K$ of $\mathbb{R}^m$. Thus you can embed $S^{n-1}$, and a fortiori $S^m$, in $\mathbb{R}^m$.
But there are homological obstructions to embedding $S^m$ in $\mathbb{R}^m$. Using the arguments of this excellent paper
Albrecht Dold, A simple proof of the Jordan-Alexander complement theorem, Amer. Math. Monthly 100 (1993), 856-85.
(essentially a cunning use of the Mayer-Vietoris theorem) it would entail the homology of the space $\mathbb{R}^m-K$ being nonzero in negative dimension, which is absurd.
Added As I replied in haste I forgot the sledgehammer that cracks this little nut, namely Alexander duality.
Added later In fact this result also follows from Brouwer's Invariance of Domain. This is Theorem 2B.3 on page 172 of Hatcher's book. This implies that if one has an embedding from $\mathbb{R}^n$ to
itself, then its image is open. One gets such an embedding by composing your putative embedding with the natural embedding of $\mathbb{R}^m$ in $\mathbb{R}^n$. Adapting the proof, gives a swift proof
that the answer of your original question is no.
If you have a continuous injection from $\mathbb{R}^n$ to $\mathbb{R}^m$, with $n>m$ then you have an embedding of $S^{n-1}$ into a nontrivial hyperplane in $\mathbb{R}^n$. By the Jordan-Brouwer
separation theorem the image $K$ of this embedding separates $\mathbb{R}^n$ but it's easy to see that since the image is in a hyperplane any two points of the complement of $K$ can be connected by a
path (exercise for reader :-)).
Alternatively, you may use the Borsuk-Ulam antipodal theorem, in order to prove that such a map cannot be one-to-one.
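A sketch of how the Borsuk-Ulam argument runs (my reconstruction; it is not spelled out in the original answer):

```latex
\text{Suppose } f:\mathbb{R}^n\to\mathbb{R}^m \text{ is continuous with } n>m.
\text{ Since } m\le n-1, \text{ compose the restriction } f|_{S^{n-1}}
\text{ with the inclusion } \mathbb{R}^m\hookrightarrow\mathbb{R}^{n-1}
\text{ to obtain a continuous map } g:S^{n-1}\to\mathbb{R}^{n-1}.
\text{ By Borsuk-Ulam there is } x\in S^{n-1} \text{ with } g(x)=g(-x),
\text{ i.e. } f(x)=f(-x) \text{ although } x\neq -x,
\text{ so } f \text{ is not injective.}
```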
More generally, you might ask whether there is a continuous injective map from a space of dimension $n$ to a space of dimension $m$, where $n > m$. The subject of dimension theory investigates this
problem. (Hurewicz & Wallman wrote a classic, though hard to find nowadays, book on the subject; perhaps an expert could recommend a more modern reference?) Here is a representative theorem:
If $f: X \to Y$ is a continuous surjection between separable compact metric spaces, where $X$ has dimension $n$ and $Y$ has dimension $m$, then there is a point $y \in Y$ whose preimage contains at
least $n - m + 1$ points.
This applies to your question, as Robin Chapman explained, by restricting your map to a nice compact subspace of dimension $n$, for instance the unit closed ball.
In any case, my point is that the techniques above for Euclidean space and for spheres have appropriate generalizations to separable metric spaces.
As to the "why?", I recommend tracking down a copy of Hurewicz & Wallman from a library. It's very pleasant reading and assumes only a brief encounter with point set topology.
I taught an introductory topology course this last autumn where I covered this theorem from an elementary point of view. The argument just uses the Brouwer fixed point theorem (which itself has a
proof via Stokes' theorem which is readily accessible to students with several variable calculus) plus elementary point set topology. In particular no homology or Jordan--Brouwer separation theorems
are used. The treatment was based upon that of Hurewicz & Wallman but was also inspired by Larry Guth's ICM-2010 presentation in Hyderabad. | {"url":"http://mathoverflow.net/questions/34232/injective-maps-mathbbrn-to-mathbbrm?sort=newest","timestamp":"2014-04-20T05:56:44Z","content_type":null,"content_length":"72363","record_id":"<urn:uuid:90b3e6d3-5793-444d-8b51-83fbef96804c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amplitude and Period of Sine and Cosine Functions
The amplitude of $y = a\sin(bx)$ or $y = a\cos(bx)$ is
Amplitude = | a |
Let b be a real number. The period of $y = a\sin(bx)$ or $y = a\cos(bx)$ is $2\pi/|b|$.
Find the period and amplitude of [the function was given as an image in the original page]
Compare the functions
The amplitude of the function is 5/2.
The period of the function is [the value was given as an image in the original page] | {"url":"http://hotmath.com/hotmath_help/topics/amplitude-and-period-of-sine-and-cosine-functions.html","timestamp":"2014-04-16T10:10:11Z","content_type":null,"content_length":"4667","record_id":"<urn:uuid:9fcaa89b-4339-433d-bc83-c53ec92cb02a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
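A worked example in the same style (my own numbers, since the page's own function was given as an image):

```latex
\text{For } y = 3\sin(4x):\qquad
\text{Amplitude} = |a| = |3| = 3, \qquad
\text{Period} = \frac{2\pi}{|b|} = \frac{2\pi}{4} = \frac{\pi}{2}.
```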
Group cohomology of an abelian group with nontrivial action
How do I compute the group cohomology $H^2(G,A)$ if G is a finite abelian group acting nontrivially on a finite abelian group A?
Type it into a computer. Seriously. Magma will definitely do it. – Kevin Buzzard May 30 '11 at 19:44
GAP too........ – Fernando Muro Sep 14 '11 at 12:10
3 Answers
If $G$ is any group and $A$ is any $G$-module, then $H^2(G,A)$ can be identified with the set of the equivalence classes of extensions $$1\to A\to H\to G\to 1$$
such that the action of $G$ on $A$ is the given action. Two extensions $H_1,H_2$ are said to be equivalent if there is an isomorphism $H_1\to H_2$ that makes the extension exact
sequences commute. See K. Brown, Group cohomology, chapter 4.
You can compute it using the Bar resolution, see [Weibel, H-book].
I know exactly how it is related to extensions and how to compute it explicitly if the actions is trivial (using Kunneth formula to translate the problem to cyclic groups). I was just
trying to find out if there is some relatively easy procedure that gives you an explicit answer for nontrivial actions (perhaps by somehow using the long exact sequence to translate the
problem to that of trivial actions). – Mitja May 30 '11 at 21:06
I don't think it's a good idea to use the bar resolution here since it's way too large. You'll get a projective resolution of minimal complexity by tensoring the periodic resolutions of
the cyclic summands of $G$. – Ralph May 31 '11 at 4:14
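For reference, the periodic resolutions mentioned in the comment above give closed forms for cyclic groups (standard facts, stated here for convenience; $g$ generates $G=\mathbb{Z}/n$, acting possibly non-trivially on $A$, and $N = 1+g+\cdots+g^{n-1}$ is the norm element):

```latex
H^{2k}(\mathbb{Z}/n,\, A) \;\cong\; A^{G}/NA \quad (k \ge 1), \qquad
H^{2k+1}(\mathbb{Z}/n,\, A) \;\cong\; \ker(N\colon A\to A)\,/\,(g-1)A \quad (k \ge 0).
```

In particular $H^2(\mathbb{Z}/n, A) \cong A^G/NA$ even when the action is non-trivial.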
One can do the calculation using the Kunneth theorem and the cohomology of cyclic groups.
See eqn J18 and appendix J.6 and J.7 in a physics paper http://arxiv.org/pdf/1106.4772v2
I don't think this works so easily with non-trivial coefficients. – Fernando Muro Sep 14 '11 at 12:12
Dear Fernando: Eqn. J60 - eqn. J70 in the above paper give some explicit results for non-trivial coefficients, for some simple Abelian groups. But do you suggest that I cannot use Kunneth
theorem for non-trivial coefficients? – Xiao-Gang Wen Sep 14 '11 at 16:02
I agree with Fernando. At least the derivation of (J54) is doubtful: It's based on (J43), but in (J43) one has $H^i(G_1;M)\otimes_M H^{n-i}(G_2;M)$ while by setting $G_1 := Z_2^T, G_2 :=
Z_n$, (J54) reads: $H^i(G_1;Z_T) \otimes_Z H^{d-i}(Z_n;Z)$, i.e. in both components of the tensor product in (J43) the coefficients are equal, while in (J54) they differ! Also be aware of
the wikipedia references for Kuenneth-formulars: They require trivial coefficients! (otherwise wikipedia had to use (co)homology with local coefficients, what they don't). – Ralph Sep 14
'11 at 20:58
Dear Ralph: Thanks for the comments. I agree with you that eqn. J43 from a webpage is intended for trivial coefficients. But if the Kuenneth formula only depends on the cohomological structure
algebraically, should it also apply to non-trivial coefficients, provided that the group action "splits" in some way? Here $Z_T$ is the same as $Z$. Just that $G_1$ has a non-trivial
action on $Z_T$. In fact, $G1\times G_2$ acts "naturally" on $H^i(G_1,Z_T)\otimes_Z H^{d−i}(G2,Z)$, Let $a\in H^i(G_1,Z_T)$ and $b\in H^{d−i}(G_2,Z)$. We have a group action $(g1,g2) \cdot
(a\otimes b)=(g1\cdot a)\otimes b$. – Xiao-Gang Wen Sep 14 '11 at 22:31
In the case that I am interested in, the action does not split in any way at all. To make things more precise, in my case $G$ is a transitive abelian subgroup of $S_n$ acting on $A=(Q/Z)\
times\ldots \times (Q/Z)$ by permuting factors (in the concrete problem I actually have the multiplicative group of complex numbers without $0$ insead of $Q/Z$). – Mitja Sep 15 '11 at
Robert Grossman, Editor
Frontiers in Applied Mathematics 5
Here is a monograph that describes current research efforts in the application of symbolic computation to several areas, including dynamical systems, differential geometry, Lie algebras, numerical
analysis, fluid dynamics, perturbation theory, control theory, and mechanics. The chapters, which illustrate how symbolic computations can be used to study various mathematical structures, are
outgrowths of the invited talks that were presented at the NASA-Ames Workshop on The Use of Symbolic Methods to Solve Algebraic and Geometric Problems Arising in Engineering. More than 100 people
participated in the two-day conference, which took place in January 1987 at the NASA-Ames Research Center in Moffett Field, California.
The field of symbolic computation is becoming increasingly important in science, engineering, and mathematics. The availability of powerful computer algebra systems on workstations has made symbolic
computation an important tool for many researchers.
Computer Algebra and Operators; The Dynamicist's Workbench: I, Automatic Preparation of Numerical Experiments; Symbolic Computations in Differential Geometry Applied to Nonlinear Control Systems;
Vector Fields and Nilpotent Lie Algebras; FIDIL: A Language for Scientific
Programming; Perturbation Methods and Computer Algebra; Multibody Simulation in an Object Oriented Programming Environment.
1989 / x + 186 pages / Softcover / ISBN-13: 978-0-898712-39-1 / ISBN-10: 0-89871-239-4 /
List Price $59.00 / SIAM Member Price $41.30 / Order Code FR05 | {"url":"http://www.ec-securehost.com/SIAM/FR05.html","timestamp":"2014-04-20T06:27:49Z","content_type":null,"content_length":"5526","record_id":"<urn:uuid:9f8babaa-451a-46ba-8947-c69ab955451f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
∠LPM and ∠PMN, find the value of x
| {"url":"http://openstudy.com/updates/504a5a64e4b059e709a3e099","timestamp":"2014-04-20T03:40:30Z","content_type":null,"content_length":"47773","record_id":"<urn:uuid:b0a02b61-e5f1-46db-9c1b-79b32c48408a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Title page for ETD etd-08312001-162742
The k-sample data setting is one of the most common data
settings used today. The null hypothesis that is most generally
of interest for these methods is that the k-samples have the
same location. Currently there are several procedures available
for the individual who has data of this type. The most often used
method is commonly called the ANOVA F-test. This test assumes
that all of the underlying distributions are normal, with equal
variances. Thus the only allowable difference in the
distributions is a possible shift, under the alternative
hypothesis. Under the null hypothesis, it is assumed that all k
distributions are identical, not just equally located.
Current nonparametric methods for the k-sample setting require
a variety of restrictions on the distribution of the data. The
most commonly used method is that due to Kruskal and Wallis (1952). The
method, commonly called the Kruskal-Wallis test, does not assume
that the data come from normal populations, though they must
still be continuous, but maintains the requirement that the
populations must be identical under the null, and may differ only
by a possible shift under the alternative.
In this work a new procedure is developed which is exactly
distribution free when the distributions are equivalent and
continuous under the null hypothesis, and simulations are used to
study the properties of the test when the distributions are
continuous and have the same medians under the null. The power of
the statistic under alternatives is also studied. The test bears
a resemblance to the two sample sign type tests, which will be
pointed out as the development is shown. | {"url":"http://scholar.lib.vt.edu/theses/available/etd-08312001-162742/","timestamp":"2014-04-20T15:54:06Z","content_type":null,"content_length":"10021","record_id":"<urn:uuid:15fccff2-c960-4c18-b5d6-cc140de65a7a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tree Everlasting
First thing you do to make a quilt using this method, is to plan out your quilt...the length and width you want it to be, including overhang on sides and end, and pillow tuck, if you want those. Now,
based on those measurements, divide each by 3"...that will tell you how many rows across and half squares down you will need. If it doesn't come out to a whole number, round UP to the next whole unit.
EXAMPLE: You want a pretty standard full sized Quilt, measuring approx 81" x 88" ...divide each measurement by 3" = 27 by 29.333, or twenty seven strips across the bed, and 29.333 units
lengthwise...round UP the 29.333 to 30. Drawing this out on graph paper is a good idea! Now you know that you need thirty half squares in each half square row, separated by long rows of solid fabric.
Multiply the number of half square rows you will have by 30 in each row, and that gives you the total number of half squares you need to make....now... | {"url":"http://www.quilterscache.com/T/TreeEverlastingBlock.html","timestamp":"2014-04-16T22:12:22Z","content_type":null,"content_length":"6180","record_id":"<urn:uuid:b2a8b985-ac37-42cb-a093-4c261591d5c1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
programming with "enum"
07-16-2005 #1
Registered User
Join Date
Jun 2005
Hey... I've been working on this program using the enumeration type. When I execute what I've written, it returns a number instead of the type of triangle it is. If you could look at the code and
help me out I'd appreciate it!
#include <iostream>
using namespace std;

enum triangleType {scalene, isosceles, equilateral, noTriangle};

triangleType triangleShape(double, double, double);

int main()
{
    double firstSide, secondSide, thirdSide;

    cout << "Enter the lengths of the three sides of a triangle." << endl;
    cin >> firstSide >> secondSide >> thirdSide;
    cout << endl;

    cout << "The shape of the triangle is: "
         << triangleShape(firstSide, secondSide, thirdSide) << endl;

    return 0;
}

triangleType triangleShape(double firstSide, double secondSide, double thirdSide)
{
    triangleType type;

    if (firstSide + secondSide <= thirdSide || firstSide + thirdSide <= secondSide ||
        secondSide + thirdSide <= firstSide)
        type = noTriangle;

    if (firstSide + secondSide > thirdSide && firstSide + thirdSide > secondSide &&
        secondSide + thirdSide > firstSide)
    {
        if (firstSide != secondSide && firstSide != thirdSide && secondSide != thirdSide)
            type = scalene;
        else if (firstSide == secondSide || firstSide == thirdSide || secondSide == thirdSide)
            type = isosceles;
        else
            // Note: this branch is unreachable as written -- an equilateral
            // triangle also passes the isosceles test above.
            type = equilateral;
    }

    return type;
}
Enums are enumerated constants. They represent numbers. The first entry in an enum statement represents 0, the second 1, and so on.
You can of course change this:
enum triangles { equilateral=10, scalene=20 }; // etc
If this isn't what you meant, well, sorry.
Good class architecture is not like a Swiss Army Knife; it should be more like a well balanced throwing knife.
- Mike McShaffry
No, that's not what I meant but thanks for trying. In my function I'm trying to return the type of triangle after the user has input the lengths of the three sides of a triangle. But when I run
the program, it's giving me those values: 0, 1, 2, etc. Anyone got some ideas for me?!?!
When the user inputs the # sides for a scalene triangle, what number is output?
well yeah.. it is supposed to return a number..
enum triangleType {scalene, isosceles, equilateral, noTriangle};
//scalene = 0;
// isosceles = 1;
// equilateral = 2;
//noTriangle = 3;
and the function is going to return one of those values.
if you want it return a string like "Equilateral" make a string function
There's your answer - the same as my reply
There's your answer - the same as my reply
sorry i didn't see your reply.. you must have posted it seconds before mine..
Well you replied to my reply saying thanks for trying. Never mind about it, at least you won't have problems with it again
Damn, sorry, I thought you were the one who started the thread lol. Yeah I posted it about the same time
well yeah.. it is supposed to return a number..
enum triangleType {scalene, isosceles, equilateral, noTriangle};
//scalene = 0;
// isosceles = 1;
// equilateral = 2;
//noTriangle = 3;
and the function is going to return one of those values.
if you want it return a string like "Equilateral" make a string function
But for an enumeration type, aren't "scalene, isosceles, etc" variables, not strings? How would I set that up if making it return a string is the only way I can do it?
first start off with making the function return a string:

#include <iostream>
#include <string>
using namespace std;

string Triangle(double side1, double side2, double side3)
{
    // Here check the sides like you did in the other function
    // and return the right string -- for example, "Equilateral"
    // for an equilateral triangle.
}

int main()
{
    return 0;
}
Well, you could do something like this just after the enum:
std::string triangleName[] = {"scalene", "isosceles", "equilateral", "noTriangle"};
After which triangleName[isosceles] would be the string "isosceles".
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
The point of my program though is on using the enumeration type. Shouldn't there be another way for me to return what I want it to?
string trianglePrint (triangleType t) {
    switch (t) {
        case scalene: return "scalene";
        case isosceles: return "isosceles";
        case equilateral: return "equilateral";
        case noTriangle: return "No Triangle";
        default: return "Error";
    }
}
Or you can directly print it...
void trianglePrint (triangleType t) {
    switch (t) {
        case scalene: cout << "scalene"; break;
        case isosceles: cout << "isosceles"; break;
        case equilateral: cout << "equilateral"; break;
        case noTriangle: cout << "No Triangle"; break;
        default: cout << "Error";
    }
}
I'm supposed to be able to return the shape of the triangle from the triangleShape function.
You're not making much sense here. Do you want to return a string indicating which shape it is (the examples fellow repliers have posted above) or, like, a vertex for use in '3d space'.
| {"url":"http://cboard.cprogramming.com/cplusplus-programming/67724-programming-enum.html","timestamp":"2014-04-18T03:39:45Z","content_type":null,"content_length":"95421","record_id":"<urn:uuid:94de38c9-dac7-4463-99a3-599a41872268>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
To mike: Spanish
Number of results: 10,357
Josh and Mike live 13 miles apart. Yesterday Josh started to ride his bicycle toward Mike's house. When they met, Josh had ridden for twice the length of time as Mike and at 4/5 of Mike's rate. How
many miles had Mike ridden when they met? (a) 4 (b) 5 (c) 6 (d) 7 (e) 8
Sunday, January 25, 2009 at 11:02pm by Josh
What does it mean when someone says, "How is Mike? Is he behaving himself?" It means that Mike sometimes does not behave as well as might be expected. The person asking wants to know if Mike has been being good.
Monday, February 19, 2007 at 4:41pm by Bob
To mike: Spanish
Rather than asking for tests, which we do not do, why not post the things with which you are having difficulty? THEN we can HELP you, but we do not DO the work for you. No effort on your part gets no
effort on ours! Sra (who would like to help you with Spanish)
Tuesday, May 3, 2011 at 11:43am by SraJMcGin
Star and her brother, Mike, produce roughly the same levels of male hormones and female hormones. Given this information, it is likely that: (a) Star and Mike have not yet reached puberty; (b) Star has reached puberty but Mike has not; (c) Mike has reached puberty but Star has not...
Tuesday, February 5, 2013 at 7:31pm by Tammy
Philadelphia Flyers player Mike Richards stands on the frictionless ice next to the goal. Neither Mike nor the goal is anchored to the ice. Frustrated after a recent playoff loss, 102.5 kg Mike
suddenly pushes on the goal, sending it across the ice at 2.80 m/s. Mike recoils in ...
Sunday, April 29, 2012 at 10:40pm by Sarah
To mike: Spanish
I've deleted both of your posts asking for answers to Spanish tests. Jiskha tutors do not supply answers. Period. If you'd like to post a few questions AND your ideas about the answers, we'll be glad
to help you.
Tuesday, May 3, 2011 at 11:43am by Ms. Sue
grade 9 algerbra HELP!!!! PLZ!!!
betty is now three times as old as mike, but in five years she will be only twice as old as mike. how old is mike right now? i don't get what you're supposed to do
Monday, March 9, 2009 at 7:25pm by emily
Mike’s father is twice as old as he is. Seven years ago, the father was 7 years less than thrice as old as Mike was. Find Mike’s present age.
Tuesday, January 21, 2014 at 7:37am by Bebang Biskobengboom
Mike’s father is twice as old as he is. Seven years ago, the father was 7 years less than thrice as old as Mike was. Find Mike’s present age.
Tuesday, January 21, 2014 at 10:03am by Beth
English expression
In Western culture, a man is introduced to a woman and a young person is introduced to an elderly person. (Are these expressions grammatical?) e.g. Mina, this is Mike. Mike, this is Mina. In these words, who is being introduced to whom? Is Mina being introduced to Mike? Or is ...
Thursday, March 6, 2008 at 3:32pm by John
please check my answer thanks :) There has been a judgment entered against Mike in the Northern District Court of New York. What statement below must be true for the judgment to be against Mike
personally? A. He was a resident of New York B. Mike lives in the northern district...
Friday, November 30, 2007 at 7:25am by Lucie
mike knits scarves and sells them for $36 at the street fair every other week. it costs him about $19 for the yarn for each scarf and $8 for the other decorations he uses on them. mike estimates that he makes
about 75% profit on each scarf he sells. is mike's estimate correct?
Sunday, April 15, 2012 at 10:48pm by lauren
PSAT - Math
Hello, I am reviewing my psat and looking at the questions I got wrong. I can't understand how to do this problem. Mike and paul left their houses at the same time for a fitness run to the park. Mike
ran at an average speed of 7 miles per hour. Paul ran at an average speed of ...
Saturday, December 26, 2009 at 9:51pm by Tiger
Mike would like to plant some trees so he needs to use a portion of his existing yard. The perimeter of this rectangular portion needs to be 14 yards and the diagonal is 5 yards. Can you help Mike
determine the length and width of this new portion? Mike requested that you show...
Sunday, January 16, 2011 at 11:13pm by ella
1. Jane and Mike are the same in score. Jane and Mike are the same in a score. Jane and Mike are the same in the score. (For example, Jane got 90 points. Mike also got 90 points.) Which expression is
correct? 2. How does the baby look? It looks sleepy. (Is the question or ...
Sunday, March 8, 2009 at 3:20pm by John
Mike is correct on both counts! (zapatos = shoes & zapatería = shoe shop) (el suéter = masculine singular like este) Sra
Tuesday, December 7, 2010 at 5:18pm by SraJMcGin
English expression
Thank you for using the Jiskha Homework Help Forum. When you say "Mina, this is Mike," Mike is being introduced to Mina.
Thursday, March 6, 2008 at 3:32pm by SraJMcGin
physical science
Mike jogs east at a rate of 4 mph. The wind is blowing west at 1 mph. What is Mike's velocity?
Tuesday, January 8, 2013 at 5:13pm by alex
Mike can run the mile in 6 minutes, and Dan can run the mile in 9 minutes. If Mike gives Dan a head start of 1 minute, how far from the start will Mike pass Dan? How long does it take?
Wednesday, September 17, 2008 at 9:26pm by kelli
4th Grade Math
Mike drank more than half the juice in his glass. What fraction of the juice could Mike have drunk?
Monday, March 17, 2014 at 5:07pm by Kianie
Mike drank more than half the juice in his glass. What fraction of the juice could Mike have drunk?
Monday, March 17, 2014 at 4:42pm by Kianie
Mike makes the following table of the distances he travels during the first day of the trip. A. Suppose Mike continues riding at this rate. Write an equation for the distance Mike travels after t
hours. B. Sketch a graph of the equation. How did you choose the range of values ...
Monday, August 29, 2011 at 8:39pm by david
1. Jane and Mike are the same in score. Jane and Mike are the same in a score. Jane and Mike are the same in the score. (For example, Jane got 90 points. Mike also got 90 points.) Better phrasing is
this: "Jane and Mike have the same score." Which expression is correct? 2. How...
Sunday, March 8, 2009 at 3:20pm by Writeacher
Pre Algebra
Mike spent $40 at the music store. He bought a cd for $20 and some cassettes for $4 each. How many cassettes did mike buy?
Thursday, February 14, 2013 at 11:11pm by Makayla
Thanks to Mike, look at that 2nd one again. Are you saying the "imagen" is called? = llamada or "Cristo" is called = llamado. Sra
Thursday, May 14, 2009 at 5:09pm by SraJMcGin
mike can run the mile in 6 minutes and dan can run the mile in 9 minutes. If mike gives dan a head start of 1 minute, how far from the start will mike pass dan?
Thursday, September 4, 2008 at 7:55pm by jackie
solving algebraic equation
For college orientation week, Mike sold 200 shirts. Sweatshirts were priced at $50.00 each and t-shirts at $20.00 each. Mike received a total of $6250.00 for the shirts. How many of each type of shirt did Mike sell for college orientation week? please help and show an equation to solve
Wednesday, February 8, 2012 at 2:00pm by tanya
Mike pulls a 4.5 kg sled across level snow with a force of 250 N along a rope that is 35.0° above the horizontal. If the sled moves a distance of 63.8 m, how much work does Mike do?
Monday, January 23, 2012 at 6:29pm by Mac
Mike pulls a 4.5 kg sled across level snow with a force of 250 N along a rope that is 35.0° above the horizontal. If the sled moves a distance of 63.8 m, how much work does Mike do?
Tuesday, January 24, 2012 at 8:24am by Jacqueline
Mike can assemble a computer table in 40 minutes and I can do it in 60 minutes. If Mike works for 20 minutes before I join him, how long will it take us to finish?
Monday, May 24, 2010 at 10:01pm by TinaG
English Grammar
e.g. Nice to meet you, Mike. 1. Stress Nice. Nice is a content word, so nice is stressed. 2. Put the stress on meet. 3. Stress Mike. 4. Mark accent on the first e in 'meet' 5. Put the stress on the
first e in 'meet'. 6. Mike is a proper noun, so you should stress Mike. Mark ...
Wednesday, March 12, 2008 at 4:58pm by John
Contract Law
Mary offered to sell Mike several pieces of rare Chinese art at a very good price because they were duplicates in her own collection. Mike could not accept the offer at that time, but he did give
Mary $500 in return for her promise to keep her offer open for three (3) weeks. ...
Thursday, March 1, 2012 at 2:17pm by Anonymous
Also another sentence was Bart, Mike, and Jim went bowling. I thought the simple subject might be Bart, Mike and Jim...but not sure
Friday, June 10, 2011 at 4:31pm by Amanda
List their ages at the two different times.
NOW: Mike's age --- x ; father's age --- 2x
7 YEARS AGO: Mike --- x-7 ; father --- 2x-7
It said: "(7 yrs ago) the father was 7 years less than thrice as old as Mike was"
---> 2x-7 = 3(x-7) - 7
2x - 7 = 3x - 21 - 7
-x = -21
x = 21
Mike...
Tuesday, January 21, 2014 at 7:37am by Reiny
Pick a Card. Mike and Dave play the following card game. Mike picks a card from the deck. If he selects a heart, Dave gives him $5; if not, he gives Dave $2. Determine Mike's expectation. Determine
Dave's expectation. Can you help me with this? I have tried to do it and I came ...
Friday, July 13, 2012 at 5:31pm by Adie
Mike drove to a friend's house at a rate of 40 mph. He returned by the same route at 45 mph. The driving time round trip was 4 hours. Write and solve an equation to find the distance Mike traveled.
Tuesday, August 13, 2013 at 2:26pm by Molly
The perimeter of this rectangular portion needs to be 14 yards and the diagonal is 5 yards. Can you help Mike determine the length and width of this new portion? Mike requested that you show him how
you arrive at the answers.
Sunday, July 11, 2010 at 8:52pm by JT
mike has an average of 89 on the five tests he has taken so far this semester. what grade must mike earn on the next test to raise his average grade exactly to 90?
Tuesday, May 29, 2012 at 5:38pm by keke
Mike and John ordered 2 pizzas. Mike ate 1/2 of his pizza and John ate 1/2 of his. How much did they eat altogether?
Wednesday, January 9, 2013 at 6:28am by Mike
Mike pulls a 4.5 kg sled across level snow with a force of 250 N along a rope that is 35.0° above the horizontal. If the sled moves a distance of 63.8 m, how much work does Mike do? please show steps
Monday, January 23, 2012 at 1:23pm by Zach G
1. What did Mike need at the store? He needed some things. 2. Who went shopping together? Mike and Ann did. 3. What did Mike and Ann do? They went shopping together. 4. What did they look at? They
looked at the yellow bananas, the green peas, and the orange carrots. 5. What is...
Sunday, October 19, 2008 at 4:57pm by John
1. Jane has got 90 points. Mike has got 90 points as well. So Jane is as smart as Mike. They have the same degree of score. 2. Jane has lifted 20 kg weights. Mike has lifted 20 kg weights. Jane is as
strong as Mike. So they have the same degree of strength. 3. Jane has just ...
Friday, March 6, 2009 at 6:44am by John
Joe is sledding down a snow hill when he collides with Mike half way down the hill. Joe and the sled have a mass of 65 kg and their velocity before the collision was 12 m/s. If Mike has a mass of 55
kg, what velocity would Joe, Mike, and the sled have after the collision?
Sunday, January 16, 2011 at 4:19pm by Sara
Math 6
I am stuck. Mike's truck engine holds 1 1/4 gallons of oil. If there are 3 quarts of oil in the engine now, how many more quarts of oil does Mike need to add to fill the engine to capacity? Having trouble on do I subtract or?? what do I do?
what do I do?
Sunday, January 27, 2013 at 11:21pm by Moby
Calculus 2
Calculus 2. Tom and Mike have a bet as to who will do the most work today. Mike has to compress a coil 200 feet. It takes Mike 250 lbs to compress the coil 10 feet. Tom needs to pump water through
the top of a cylindrical tank sitting on the ground. The tank is half full, has ...
Thursday, September 27, 2012 at 8:34pm by Laura
I presume you mean 'apostrophe' The apostrophe here will show possession. The car belongs to Mike thus Mike's car needs a new muffler and new brakes.
Wednesday, November 10, 2010 at 8:12am by Dr Russ
Mike = 2/6 for red car Adam = 1/6 for red car, because Mike has already chosen one. The probability of all/both events occurring is found by multiplying the probabilities of the individual events.
Tuesday, August 23, 2011 at 8:00pm by PsyDAG
grade 9 algerbra HELP!!!! PLZ!!!
NOW: Mike --- x ; Betty --- 3x
IN 5 YEARS: Mike --- x+5 ; Betty --- 3x+5
but it said: 3x+5 = 2(x+5)
solve for x
Monday, March 9, 2009 at 7:25pm by Reiny
intermediate algebra
need the equation for this problem. Mike invested $706 for one year. He invested part of it at 5% and the rest at 3% at the end of the year he earned $28.00 in interest. How much did mike invest at
each rate of interest
Thursday, January 19, 2012 at 11:51pm by jackie
Tim started out traveling at 60 km/h. Two hours later, Mike left from the same point. He drove along the rode at 80 km/h. How many hours does Mike have to drive to catch up with Tim?
Thursday, January 21, 2010 at 9:09pm by rebecca
Select the set of equations that represents the following situation: Mike invested $706 for one year. He invested part of it at 5% and the rest at 3%. At the end of the year he earned $28.00 in
interest. How much did Mike invest at each rate of interest?
Saturday, November 13, 2010 at 12:34pm by running wild
intermediate algebra
Select the set of equations that represents the following situation: Mike invested $706 for one year. He invested part of it at 5% and the rest at 3%. At the end of the year he earned $28.00 in
interest. How much did Mike invest at each rate of interest?
Monday, February 21, 2011 at 12:54pm by joel
Personal Finance
Mike Welch purchased 5,000 shares of Grass Roots stock for $82 per share and paid a commission of 1% on the purchase price. The current value of the stock is $96 per share. Mike received no dividends
last year.
Thursday, April 21, 2011 at 11:40am by Kelsey
english- please help me edit
Aside from Prufrock, Mike from the movie “Swingers” seemed to go through a similar problem in the beginning, but has a different and much happier ending. Mike was proven to have a much more
successful relationship in the past, and after he broke up with the woman that he ...
Thursday, May 1, 2008 at 4:01pm by Dina
Math help?
I am stuck. Mike's truck engine holds 1 1/4 gallons of oil. If there are 3 quarts of oil in the engine now, how many more quarts of oil does Mike need to add to fill the engine to capacity? Having
trouble on do I subtract or?? what do I do
Sunday, January 27, 2013 at 11:53pm by Moby
Mike had some pieces of wood to build a doghouse. He cut each piece in fourths. Then he had 24 pieces of wood. How many whole pieces of wood did Mike have to start with?
Friday, April 5, 2013 at 6:06pm by Kristy
Mike, Liz, and Danna are friends. They each measured the length of the sidewalk in front of their school for a project. Which friend do you think measured incorrectly? Mike measured 280 inches. Liz measured 24 ft. Danna measured 8 yards.
Wednesday, June 5, 2013 at 7:27pm by ttuuffyy
All languages are unique. As Spanish is a Romance language, familiarity with Latin or Italian will make Spanish easier to learn. In many parts of the world, Spanish is a very useful second language
because of so many native Spanish-speakers close by.
Friday, October 18, 2013 at 11:39am by Ms. Sue
college math
It takes Mike 8 minutes to walk to the store from his house. The return trip takes 13 minutes because he walks 1 mph slower due to the bags he is carrying. How fast did Mike walk on his way to the store? Carry out to one decimal place. Thanks in advance!
Friday, April 5, 2013 at 2:34pm by Lorie
Math alg2!
1.)Mike is 5 years more than twice as old as Tom. The sum of their ages is 65. How old is Mike? 2.)Mrs. Computer is 3 times older than her daughter, Mousy. The sum of their ages is 52. How old is
Thursday, December 13, 2012 at 8:14pm by tyneisha
Mike throws a ball upward & toward the east at a 20.0 angle with a speed of 51.0 . Nancy drives east past Mike at 15.0 at the instant he releases the ball. What is the ball's initial angle in Nancy's
reference frame?
Tuesday, May 29, 2012 at 12:08am by keren
English expression
What Number is Mike? He is Number 71. What number is Mike? He is number 71. Which one is correct? Do I have to capitalize number?
Friday, March 14, 2008 at 3:50pm by John
Mike bought 252 bricks to build a wall in his yard. Each row has 14 bricks. Each bricks is 3 inches tall. How many inches tall will be wall be if Mike uses every brick?
Thursday, October 20, 2011 at 5:04pm by Mia
Jill is 2 years older than Mike. in 5 years, 5 times Mike's age will be 4 times Jill's age. how old is Mike? Write an equation that models how old in years each of you will be, when your ages add up
to 150 years old. For example, if x = your age and the eldest person was a ...
Monday, January 21, 2013 at 1:30pm by Anonymous
Intermediate Accounting
E6-5 Mike Finley wishes to become a millionaire. His money market fund has a balance of $92,296 and has a guaranteed interest rate of 10%. How many years must Mike leave that balance in the fund in
order to get his desired $1,000,000? I am getting $9,229.60 and know this is not...
Sunday, January 30, 2011 at 1:25pm by Lori
1. Mike doesn't often watch TV. 2. Mike often doesn't watch TV. -------------------- Which one is grammatical? Are both OK? Which one is commonly used?
If Mike spends 1/6 of his study time on his science test, and his study time is 2/3 of all of his time, then you divide 1/6 by 2/3 to find that Mike spent (1/6)*(3/2)=3/12=1/4 of his time on Tuesday
studying for the science test.
Saturday, February 22, 2014 at 11:31am by Adam
english- please help me edit
Aside from their similar problems with women, they also seemed to have a similar outlook on life. Like before, Prufrock never really lived his life to the fullest. He would procrastinate if he had to
take a risk, like when he had to admit his feelings towards the woman that he...
Thursday, May 1, 2008 at 4:01pm by Dina
I'm required to look up an article in a Spanish magazine that has to do with events that took place in Spanish hometowns. The magazine has to be in Spanish and it has to deal with event(s) that
occurred in a Spanish hometown. I've looked everywhere on Google and can't seem to ...
Thursday, October 16, 2008 at 8:16pm by Chloe
Jay is twice as old as Sandy and 2 years younger than Mike. Mike is 7 years older than Sandy. How old is Jay?
Wednesday, March 6, 2013 at 10:51am by tina
Spanish 1B - I am an honor student but with spanish for me I can memorize for the tests but I am not really undestanding it. I am great in science and math - and i got an A barely in spanish but it
gets harder. The hardest part for me is that every other class comes easy. I ...
Tuesday, October 23, 2007 at 12:50pm by sam
in a week Mike ran 8 km farther than Bill, while Pete ran 1 km less than 3 times as far as Bill. If Pete ran 15 km farther than Mike, how many kilometers did Bill run?
Tuesday, December 30, 2008 at 10:11pm by ???E
criminal procedure
officer Mike and Gail, while patrolling an area known to them as a high crime area, observed 2 men get out of a car after seeing the patrol car. the passenger ran from the scene while the driver
began walking away from the car with his hands tucked inside the front of his ...
Tuesday, March 29, 2011 at 4:32pm by loulou
Spanish-one word
Can someone please tell me where I can find how to say "are" in Spanish. It's not in my Spanish book or Spanish dictionary we have to have for class. Thanks
Sunday, April 26, 2009 at 8:03pm by Mike
math (interests & percentss)
i need help with these problems below ! mike deposited $500 for 9 months at 8%, compounded quarterly. a. how many times was interest added to mike's account? b. what percent interest was added each
time? c. what was the balance in mike's account at the end of 9 months? sara ...
Thursday, November 18, 2010 at 4:38pm by bree
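For Mike's deposit, a quick check in Python, assuming ordinary compound interest with the quarterly rate equal to one fourth of the 8% annual rate:

```python
# Mike deposits $500 for 9 months at 8% annual, compounded quarterly.
principal = 500.0
annual_rate = 0.08
times_added = 9 // 3              # 9 months = 3 quarters, so interest is added 3 times
rate_each_time = annual_rate / 4  # 0.02, i.e. 2% added each time
balance = principal * (1 + rate_each_time) ** times_added
print(times_added, rate_each_time, round(balance, 2))   # 3 0.02 530.6
```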
john's high score on the asteroid game was 326,700. mike's high score was 418,200. rebecca just played the game and her high score was halfway between john's and mike's. what was rebecca's score.
Thursday, December 8, 2011 at 7:28pm by joe
john's high score on the asteroid game was 326,700. mike's high score was 418,200. rebecca just played the game and her high score was halfway between john's and mike's. what was rebecca's score?
Tuesday, September 20, 2011 at 9:03pm by joe
John's high score on the Asteroid Game was 326,700. Mike's high score was 418,200. Rebecca just played the game and her high score was halfway between John's and Mike's. What was Rebecca's score?
Monday, September 17, 2012 at 6:34pm by Brandon
English expression
Who is Mi-na's older brother? He is Min-su.Min-su is. Who is Mi-na's new friend? He is Mike. Mike is. What is Mi-na's pet? A dog is. It is a doge. She is a dog. ---------------- Are all the answers
to the questions all correct? Which ones are commonly used?
Monday, March 24, 2008 at 9:06pm by John
Bill -- x, Mike -- x+15, Pete -- 3x - 1. 3x - 1 = x + 15, 2x = 16, x = 8. Bill ran 8 miles. Check: Bill -- 8 miles, Mike -- 23 mi, Pete -- 23 mi. Yeah!
Tuesday, October 18, 2011 at 11:58pm by Reiny
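Reiny's setup above can be re-run mechanically in plain Python to confirm the algebra:

```python
# Bill = x, Mike = x + 15, Pete = 3x - 1; Mike and Pete ran the same distance.
# x + 15 = 3x - 1  =>  2x = 16  =>  x = 8
x = 16 // 2
bill, mike, pete = x, x + 15, 3 * x - 1
assert mike == pete          # both 23 miles, so the answer is consistent
print(bill, mike, pete)      # 8 23 23
```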
Mike said, last week I ran 15 miles farther than Bill. Pete said Last week I ran one mile less than 3 times as far as Bill. If Mike and Pete ran the same distance, how far did Bill run?
Tuesday, October 18, 2011 at 11:58pm by Anonymous
This is what my paper said (or close enough): Mike is measuring his baseball bat with a 1/8 inch ruler. What is the precision of a measurement using this ruler? ______ (i answered 1/8 inch, is it
correct?) What is the greatest possible error?_________ (this is the hardest one ...
Tuesday, November 29, 2011 at 5:53pm by Anonymous
In a roller blade race grant is 5 meters ahead of cathy. mike is 7 meters behind cathy. joey is 4 meters ahead of Mike. How many meters is grant ahead?
Wednesday, October 20, 2010 at 1:44pm by I. Ranson
mike spends 2/3 of his time studying. on tuesday he spends 1/6 of his time studying for his science test. what is the value of the combined time Mike studied for his science test?
Saturday, February 22, 2014 at 11:31am by Pearl
Spanish Literature
SraJMcGin: I would like to ask a couple of questions so I can try to understand my assignment better. I am in a Spanish Literature course in College and would like to know some questions that I need
answers to: 1. Which type of Spanish(1. Latin American Spanish, 2. Mexican ...
Saturday, July 27, 2013 at 6:22pm by Joy
English Grammar
e.g. Mike, this is Adams. In this sentence, Mike and Adams are content words, so they are stressed. What about 'is'? 'Is' is a verb. Verbs are content words, which are stressed. I know that a verb is
stressed in a sentence. Do we have to stress 'is' in e.g.?
Thursday, March 13, 2008 at 7:11am by John
English- please help me edit
Like Prufrock, Mike in the movie “Swingers”<~~movie titles are not in quotation marks; see my remark in the first paragraph seemed to go through a similar problem in the beginning, but he experiences
a different and much happier ending. Mike had a much more successful ...
Thursday, May 1, 2008 at 4:01pm by Writeacher
English Grammar
John, this cannot be a complete sentence unless you delete the "e.g." What is the entire context of this phrasing? (Context = the sentences around this) If you delete the "e.g." -- Mike = noun,
person being spoken to this = pronoun referring to Adams, subject of sentence is = ...
Thursday, March 13, 2008 at 7:11am by Writeacher
1. in a basketball game, mike scored 20% of the team's points. if mike scored 22 points, how many points did the team score as a whole? - is the answer 110 pts 2. sports authority was having a sale.
all their merchandise was 20% off the original price. the original price of a ...
Monday, May 31, 2010 at 7:18pm by Sara!
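For the first question, the check is a single scaling step, using the poster's own numbers:

```python
# If Mike's 22 points were 20% of the team's total, scale 22 up to 100%.
team_total = 22 * 100 / 20
print(team_total)   # 110.0, so the proposed answer of 110 points is correct
```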
English expression
1. My name is Mike. This is my family: My grandmother, my father and my sister. 2. My name is Mike. This is my family: My grandmother, my father, my sister and me. Which expression is correct? Are both
Tuesday, April 1, 2008 at 9:10pm by John
I Need to write a letter to a girl in spanish for my spanish class. how do i start? how do i say, hi i really like you?
Monday, February 2, 2009 at 3:52pm by Spencer
So, my spanish name for spanish class is Catalina. To show affection, someone would say, "Catalinaito???"
Thursday, September 27, 2012 at 10:49pm by SoccerStar
Hello I was wondering if a fluent spanish speaker could look over my essay for spanish?
Wednesday, January 12, 2011 at 12:55am by Julia
spanish PLEASE HELP
I need help to figure out Direct object pronouns that are written in spanish to put them back into spanish how do I do this?
Wednesday, October 24, 2007 at 12:06am by Mandy
Need an answer in Spanish to a spanish question! Que vas a hacer despues de las clases hoy?
Thursday, October 15, 2009 at 1:50pm by Kyle
I have to do a Spanish News report and weather report. I have No idea how to write these things in spanish, I need help. How do you write a news report in very simple spanish words? (This is my first
year in Spanish!) HELP!! Thanks, Taylor
Monday, March 3, 2008 at 12:55pm by Taylor
spanish SRA MCGUINN
Fortunately in English can you please tell me the spanish translation, also the spanish word afortunadamente what is the English translation
Sunday, January 4, 2009 at 7:24pm by sam
I do not know Spanish well enough to find any errors, and am extremely impressed with what you wrote. Our Spanish expert SraJMcGin will probably comment later
Sunday, December 18, 2011 at 6:16pm by drwls
For my Spanish class I need to memorize a poem in Spanish that uses mostly subjunctive. It should be 1/4 to 1/2 a page long. I have tried searching for famous Spanish poets and looking through their
poems to find subjunctive and have come up empty handed. Can anyone help me?
Tuesday, May 13, 2008 at 9:08pm by Torey
Not to worry! It will come back to you! La muchacha se llama Julia. Spanish II is the most important class! You will get to review Spanish I and all the tiny problems that cause difficulty in 4th and
5th years! In Spanish II you will get essentially all the grammar! Sra
Wednesday, September 7, 2011 at 8:03pm by SraJMcGin
Hello Everybody,
I am new to this forum. As a kid I had maths phobia, but with practice and time my phobia came to an end. All I did was practice.
Re: Introduction
Hi cool_jessica,
Welcome to the forum!
Character is who you are when no one is looking.
Re: Introduction
Welcome Jessica,
Good point you make about practice!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Introduction
Hi cool_jessica;
All I did was practice.
I started doing one hour of math every hour. Then 30 minutes every 30 minutes, then 15 minutes every 15 minutes, then...
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Introduction
hi cool_jessica
Welcome to the forum!
May others learn from your example; thanks.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Introduction
Thank you all for welcoming me.
Maths is a subject that needs practice but if you take it as a burden, you will never come out of the fear of numbers. Just make sure that you practice maths daily but too much of practice will also
not be a good thing
All the best
Re: Introduction
I could not agree more or less. I have been seriously thinking of only putting in .999999... times as much effort as I did in the past. This should allow me some time to relax.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Introduction
As far as I know, and from my own experience, you should keep practicing mathematics to make the subject simpler. With practice you will learn some techniques and even your speed of calculation will improve. Practice means continuous practice; you cannot solve one problem, then go for a walk, and then come back after an hour, because that will be a really time-consuming and tough process. Make a time table and try to solve as many problems as possible in that time. I assure you, you will definitely start solving sums at a much higher speed.
Re: Introduction
Hello cool_jessica. Nice to meet you.
Re: Introduction
Hehe, i'm a new user in this forum too, nice to meet you, what do you like or dislike? i like playing games-for-kids, i like sleeping and watching tv, i dislike training. and you?
Re: Introduction
Hi jennylee1203,
I like watching TV, Games and browsing the net.
Welcome to the forum!
Character is who you are when no one is looking.
Re: Introduction
Hi jennylee1203;
I like training, I hate sleeping and watching tv.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Real Member
Re: Introduction
Hi jennylee1203
Welcome to the forum!
I like sleeping and doing math. I hate Terrans.
Last edited by anonimnystefy (2012-09-28 16:45:33)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Super Member
Re: Introduction
Hi. Forgot to say hello before.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Full Member
Re: Introduction
Well, I haven't been on for about a week, but would like to say...
Welcome to the forum!...
to both cool_jessica and jennylee1203.
As for the practice part of it, I'd agree for the most part but not completely.
As for what I like, I love working on logic (strictly), math, science, computers, philosophy, and video games. Unfortunately I do not work on them all the time, every day, but by far, those are the
things I do the most work on (or arguably play if it comes to video games...). What I work on the most is probably logic, and the least is probably science, but again, by far, I spend the most time
on all of these things. To clear things up, I'm kind of saying work in place of what I like, because what I like almost always requires a lot of work, and it's easier to explain that way.
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Re: Introduction
In my opinion, math is magic. i like math but i don't like doing math exercises because they're hard for me, so i'd better do some math games for kids more than math practice exercises.
Last edited by jennylee1203 (2012-10-22 19:27:57)
Full Member
Re: Introduction
Hmm, well I can almost assure you math isn't magic, but it is great nonetheless. As for math challenges, I honestly don't do them much myself because I'm busy studying...newer material, so I continue to learn and understand. Though, once I get far enough, I'll probably start doing more exercises. Don't get me wrong, though: exercises can definitely help. They can be good for challenging yourself, which can help your understanding of stuff, finding new and better ways of doing things, speed, etc. Of course, if there is something you don't know, that's what things like this website are here for.
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Patent application title: CAPACITIVE MATRIX CONVERTERS
A direct current (DC) to DC converter, including: input ports for receiving an input DC voltage; output ports for outputting an output DC voltage; a first matrix of capacitors and switches; a second
matrix of capacitors and switches; and a control circuit, coupled to the switches of the first and second matrices, configured to repetitively: (i) configure the first matrix to a charge
configuration and couple the first matrix to the input ports while configuring the second matrix to a discharge configuration and coupling the second matrix to the output ports; (ii) maintain the
charge and discharge configurations for a first period of time; (iii) configure the second matrix to the charge configuration and couple the second matrix to the input ports while configuring the
first matrix to the discharge configuration and couple the first matrix to the output ports; and (iv) maintain the charge and discharge configurations for a second period of time; (a) wherein the
charge configuration and the discharge configurations of each matrix out of the first and second matrices differ from each other by a replacement of serial connections of capacitors of the matrix to
parallel connections of capacitors of the matrix; (b) wherein the charge configuration and a discharge configuration of each of the first and second matrices are responsive to required conversion
ratio between the input DC voltage and the output DC voltage; and (c) each matrix of the first and second matrices comprises at least four capacitors.
A direct current (DC) to DC converter, comprising: input ports for receiving an input DC voltage; output ports for outputting an output DC voltage; a first matrix of capacitors and switches; and a
control circuit, coupled to the switches of the first matrix, configured to repetitively: configure the first matrix to a charge configuration and couple the first matrix to the input ports;
maintain the charge configuration for a first period of time; configure the first matrix to the discharge configuration and couple the first matrix to the output ports; and maintain the discharge
configurations for a second period of time; wherein the charge configuration and the discharge configurations of the first matrix differ from each other by a replacement of serial connections of
capacitors of the matrix to parallel connections of capacitors of the matrix; wherein the charge configuration and a discharge configuration the first matrix are responsive to required conversion
ratio between the input DC voltage and the output DC voltage; and wherein the first matrix comprises at least four capacitors.
The DC to DC converter according to claim 1, further comprising a second matrix of capacitors and switches; wherein the control circuit is coupled to the switches of the first and second matrices, and configured to repetitively: configure the first matrix to the charge configuration and couple the first matrix to the input ports while configuring the second matrix to the discharge
configuration and coupling the second matrix to the output ports; maintain the charge and discharge configurations for the first period of time; configure the second matrix to the charge
configuration and couple the second matrix to the input ports while configuring the first matrix to the discharge configuration and couple the first matrix to the output ports; and maintain the
charge and discharge configurations for the second period of time; wherein the charge configuration and the discharge configurations of the second matrix differ from each other by a replacement of
serial connections of capacitors of the matrix to parallel connections of capacitors of the matrix; wherein the charge configuration and the discharge configuration of the second matrix are
responsive to required conversion ratio between the input DC voltage and the output DC voltage; and the second matrix comprises at least four capacitors.
The DC to DC converter according to claim 2, wherein the first and second matrices are identical to each other.
The DC to DC converter according to claim 2, wherein all the capacitors of the first and second matrices have a same capacitance.
The DC to DC converter according to claim 2, wherein each of the first and second matrices forms a non-rectangular matrix of capacitors.
The DC to DC converter according to claim 2, wherein each of the first and second matrices forms an arbitrarily shaped matrix of capacitors.
The DC to DC converter according to claim 2, wherein the charging configuration comprises multiple branches that are coupled in parallel to each other and wherein the discharging configuration
comprises serially coupled elements; wherein each branch comprises at least one serially coupled capacitor; wherein each element comprises at least one parallel coupled capacitor; wherein a number
of branches equals a number of elements; wherein each branch has an associated element that comprises a same number of capacitors.
The DC to DC converter according to claim 7, wherein at least two branches differ by a number of capacitors of the branches.
The DC to DC converter according to claim 7, wherein at least one branch comprises a single capacitor and at least one other branch comprises multiple capacitors.
The DC to DC converter according to claim 2, wherein the first matrix comprises at least three switches per capacitor.
The DC to DC converter according to claim 2, wherein the first matrix comprises at least four switches per capacitor.
The DC to DC converter according to claim 2, wherein the first matrix comprises at least five switches per capacitor.
The DC to DC converter according to claim 2, wherein the first matrix comprises multiple modular cells; each modular cell comprises a capacitor, five switches and six ports.
The DC to DC converter according to claim 2, wherein the first matrix comprises multiple modular cells; wherein each modular cell comprises: a first switch, coupled between a first port and a sixth
port; a second switch, coupled between the sixth port and a second port; a capacitor, coupled between the second port and a third switch; said third switch is coupled between the capacitor and a
fifth port; a fourth switch coupled between the capacitor and a third port; and a fifth switch coupled between the third port and a fourth port.
The DC to DC converter according to claim 2, wherein the control unit is configured to repetitively configure the first matrix, maintain, configure the second matrix and maintain at a rate that
exceeds 2 megahertz.
The DC to DC converter according to claim 2, wherein the control unit is arranged to determine the charge configuration and the discharge configuration of each of the first and second matrices based
on the required conversion ratio between the input DC voltage and the output DC voltage.
A method for direct current (DC) to DC conversion, the method comprises: repeating the stages of: configuring a first matrix of switches and capacitors to a charge configuration; coupling the first
matrix to input ports of a DC to DC converter for receiving an input DC voltage; configuring a second matrix of switches and capacitors to a discharge configuration; coupling the second matrix to
output ports of the DC to DC converter for outputting an output DC voltage; charging the first matrix and discharging the second matrix during a first period of time; configuring the second matrix to
the charge configuration; coupling the second matrix to the input ports of the DC to DC converter for receiving the input DC voltage; configuring the first matrix to the discharge configuration;
coupling the first matrix to the output ports of the DC to DC converter; charging the second matrix and discharging the first matrix during a second period of time; wherein the charge configuration
and the discharge configurations of each matrix out of the first and second matrices differ from each other by a replacement of serial connections of capacitors of the matrix to parallel connections
of capacitors of the matrix; wherein the charge configuration and a discharge configuration of each of the first and second matrices are responsive to required conversion ratio between the input DC
voltage and the output DC voltage; and wherein each matrix of the first and second matrices comprises at least four capacitors.
The method according to claim 17, wherein all the capacitors of the first and second matrices have a same capacitance; wherein each matrix of the first and second matrices comprise at least four
capacitors; and wherein each of the first and second matrices forms a non-rectangular matrix of capacitors.
The method according to claim 17, wherein the charging configuration comprises multiple branches that are coupled in parallel to each other and wherein the discharging configuration comprises
serially coupled elements; wherein each branch comprises at least one serially coupled capacitor; wherein each element comprises at least one parallel coupled capacitor; wherein a number of
branches equals a number of elements; wherein each branch has an associated element that comprises a same number of capacitors.
The method according to claim 17, wherein the first matrix comprises multiple modular cells; wherein each modular cell comprises: a first switch, coupled between a first port and a sixth port; a
second switch, coupled between the sixth port and a second port; a capacitor, coupled between the second port and a third switch; said third switch is coupled between the capacitor and a fifth port;
a fourth switch coupled between the capacitor and a third port; and a fifth switch coupled between the third port and a fourth port.
RELATED APPLICATIONS [0001]
This patent application claims the benefit of U.S. provisional patent Ser. No. 61/255,568, filing date Jul. 15 2010 which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION [0002]
The following prior art references provide a brief review of the known art and are incorporated herein by reference:
[1] J. D. Cockcroft and E. T. Walton, "Production of High Velocity Positive Ions," Proc. Roy. Soc., A, vol. 136, pp. 619-630, 1932
[2] J. Dickson, "On-Chip High-Voltage Generation in NMOS Integrated Circuits Using an Improved Voltage Multiplier Technique," IEEE J. Solid-State Circuits, vol. 11, no. 6, pp. 374-378, June 1976
[3] D. Maximovic and S. Dhar, "Switched-Capacitor DC-DC Converters for Low-Power on-Chip Applications", IEEE 30th Annual Power Electronics Specialists Conference, 1999. PESC 99, August 1999.
[4] J. Han, A. V, Jouanne and G. C. Temes, "A New Approach to Reducing Output Ripple in Switched-Capacitor-Based Step-Down DC-DC Converters" IEEE Trans. on Power Elect., Vol. 21, No. 6, November
[5] C. Chang, and M. Knights, "Interleaving Technique in Distributed Power-Conversion Systems", IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., Vol. 42, no. 5, pp. 245-251, May 1995.
[6] S. Ozeri, D. Shmilovitz, S. Singer, and L. M. Salamero, "The Mathematical Foundation of Distributed Interleaved Systems," IEEE Trans. On Circuits and System-I, Vol. 54, No. 2, February 2007.
[7] G. Zhu, A. Ioinovici, "Switched-Capacitor Power Supplies: DC Voltage Ratio, Efficiency, Ripple, Regulation", IEEE International Symposium on Circuits and Systems, ISCAS 96', Atlanta, May 1996.
[8] B. Axelrod, Y. Berkovich, A. Ioinovici, "Switched-Capacitors/Switched-Inductor Structures for Getting Transformerless Hybrid DC-DC PWM Converters.", IEEE Trans. On Circuits and Systems-I, Vol.
55, No. 2, March 2008.
[9] Y. K. Ramadass and A. P. Chandrakasan, "Voltage Scalable Switched Capacitor DC-DC Converter for Ultra-Low-Power On-Chip Applications", Power Electronics Specialists Conference, 2007, PESC 2007,
June 2007.
[10] S. V. Cheong, H. Chung, A. Ioinovici, "Inductorless DC-to-DC Converter with High Power Density", IEEE Trans. On Industrial Elect. , Vol. 41, No. 2, April 1994.
[11] Jae-Yaul Lee, Sung-Eun Kim, Seong-kyung Kim, Jin-Kyung Kim, Sunyoung Lim, Hoi-Jun Yoo, "A Regulated Charge Pump With Small Ripple Voltage and Fast Start Up", IEEE J. of Solid-State Circuits,
Vol. 41, No. 2. pp. 425-432, February 2009.
[12] A. Cabrini, A. Fantini, G. Torelli, "High-Efficiency Regulator for On-Chip Charge Pump Voltage Elevarors", Electronics Letters, Vol. 42, No. 17, pp. 972-973, August 2006.
[13] S. Singer, "Transformer Description of a Family of Switched Systems", IEE Proc. Vol. 129, No.5, October 1982.
[14] Y. Beck and S. Singer, "Capacitive Matrix Converters", 11th IEEE Workshop on Control and Modeling for Power Electronics, COMPEL 2008, 18-20 Aug. 2008.
[15] Cormen, Leiserson, Rivest, and Stein "Introduction to Algorithms", Chapter 16 "Greedy Algorithms", 2001.
[16] S. Singer, "Gyrators Application in Power Processing Circuits", IEEE Trans. on Industrial Electronics, Vol. IE-34, No. 3, pp. 313-318, August 1987.
[17] M. Ehsani and M. O. Bilgic, "Power Converters as Natural Gyrators," IEEE Trans. on Circuits and Systems, Vol. 40, No. 12, pp. 946-949, December 1993.
[18] R. Erickson, "Dc-Dc Power Converters," article in Wiley Encyclopedia of Electrical and Electronics Engineering, vol. 5, pp. 53-63, 1999.
[19] A. Cid-Pastor, L. M. Salamero, C. Alonso, G. Schweitz and R. Leyva, "DC Power Gyrator versus DC Power Transformer for Impedance Matching of a PV Array", EPE-PEMC 2006. 12th International Power
Electronics and Motion Control Conference, 2006.
[20] H. L. Alder, "Partition Identities--From Euler to the Present", The American Mathematical Monthly, Vol. 76, No. 7, pp. 733-746, August-September 1969.
[21] S. Singer, "Loss Free Gyrator Realization.", IEEE Trans. Circuits Syst., Vol. 35, no. 1, pp. 26-34, January 1988.
[22] M. D. Seeman and S. R. Sanders, "Analysis and Optimization of Switched-Capacitor DC-DC Converters", IEEE Trans. on Power Electronics. Vol. 23, No. 2, pp. 841-851. March, 2008.
[23] R. W. Erickson, "Fundamentals of Power Electronics", pp. 94-104, Chapman & Hall, 1997.
Semiconductor integrated-circuit development and manufacturing processes have advanced considerably during the last decade in transistor density per unit area, manufacturing yield, power consumption, and more. Large-scale distribution of power processing into cellular units, each handling a few dozen milliwatts (mW), enables cellular units implemented with simple techniques such as the charge pump (see references [1] and [2]). This topology uses only switching elements such as transistors and diodes, with capacitors as the reactive components.
This topology is adequate for very low-power conversion. Its inherent disadvantage is EMC pollution: the current pulses are very narrow because no inductance participates in the processing to limit the rate of change of the current.
The building-block components of the charge pump converter can be realized with conventional semiconductor integrated technology. Large-scale distribution across thousands of cellular converters brings the processing power per cell down to milliwatts, which in turn allows small switching elements and small capacitors on the order of 50-100 pF. Such small capacitors can easily be implemented in the silicon itself (see reference [3]).
The poor current shapes at the source and at the load are treated with interleaving, reducing the output ripple by distributing the converter into interleaved micro-converters (see references [4]-[6]). Other work deals with performance parameters of switched-capacitor converters, such as the DC input-to-output voltage ratio (see references [7] and [8]).
SUMMARY OF THE INVENTION [0030]
A direct current (DC) to DC converter is provided. According to an embodiment of the invention the DC to DC converter includes: input ports for receiving an input DC voltage; output ports for
outputting an output DC voltage; a first matrix of capacitors and switches; and a control circuit, coupled to the switches of the first matrix, configured to repetitively: (i) configure the first
matrix to a charge configuration and couple the first matrix to the input ports; maintain the charge configuration for a first period of time; (ii) configure the first matrix to the discharge
configuration and couple the first matrix to the output ports; and (iii) maintain the discharge configurations for a second period of time. Wherein the charge configuration and the discharge
configurations of the first matrix differ from each other by a replacement of serial connections of capacitors of the matrix to parallel connections of capacitors of the matrix. Wherein the charge
configuration and a discharge configuration the first matrix are responsive to required conversion ratio between the input DC voltage and the output DC voltage; and wherein the first matrix comprises
at least four capacitors.
The DC to DC converter may include a second matrix of capacitors and switches; wherein the control circuit is coupled to the switches of the first and second matrices, and configured to
repetitively: configure the first matrix to the charge configuration and couple the first matrix to the input ports while configuring the second matrix to the discharge configuration and coupling the
second matrix to the output ports; maintain the charge and discharge configurations for the first period of time; configure the second matrix to the charge configuration and couple the second matrix
to the input ports while configuring the first matrix to the discharge configuration and couple the first matrix to the output ports; and maintain the charge and discharge configurations for the
second period of time; wherein the charge configuration and the discharge configurations of the second matrix differ from each other by a replacement of serial connections of capacitors of the matrix
to parallel connections of capacitors of the matrix; wherein the charge configuration and the discharge configuration of the second matrix are responsive to required conversion ratio between the
input DC voltage and the output DC voltage; and wherein the second matrix comprises at least four capacitors.
The first and second matrices may be identical to each other.
All the capacitors of the first and second matrices may have a same capacitance.
Each of the first and second matrices may form a non-rectangular matrix of capacitors.
Each of the first and second matrices may form an arbitrarily shaped matrix of capacitors.
The charging configuration may include multiple branches that are connected in parallel to each other. The discharging configuration may include serially connected elements. Each branch may include
at least one serially connected capacitor. Each element may include at least one parallel connected capacitor. The number of branches may equal the number of elements. Each branch may have an associated element that includes the same number of capacitors.
At least two branches may differ by a number of capacitors of the branches.
At least one branch may include a single capacitor and at least one other branch may include multiple capacitors.
The first matrix and the second matrix may include at least three, at least four, at least five or even more switches per capacitor.
The first matrix and the second matrix may include multiple modular cells. Each modular cell may include a capacitor, five switches and six ports.
The modular cell may include: a first switch, connected between a first port and a sixth port; a second switch, connected between the sixth port and a second port; a capacitor, connected between the
second port and a third switch; said third switch is connected between the capacitor and a fifth port; a fourth switch connected between the capacitor and a third port; and a fifth switch connected
between the third port and a fourth port.
The control unit may be configured to repetitively configure the first matrix, maintain, configure the second matrix and maintain at a rate that exceeds 1 megahertz.
The control unit may be arranged to determine the charge configuration and the discharge configuration of each of the first and second matrices based on the required conversion ratio between the
input DC voltage and the output DC voltage.
A method for direct current (DC) to DC conversion is provided. The method may include repeating the stages of: (i) configuring a first matrix of switches and capacitors to a charge configuration;
(ii) coupling the first matrix to input ports of a DC to DC converter for receiving an input DC voltage; (iii) configuring a second matrix of switches and capacitors to a discharge configuration;
(iv) coupling the second matrix to output ports of the DC to DC converter for outputting an output DC voltage; (v) charging the first matrix and discharging the second matrix during a first period of
time; (vi) configuring the second matrix to the charge configuration; (vii) coupling the second matrix to the input ports of the DC to DC converter for receiving the input DC voltage; (viii)
configuring the first matrix to the discharge configuration; (ix) coupling the first matrix to the output ports of the DC to DC converter; and (x) charging the second matrix and discharging the
first matrix during a second period of time. The charge configuration and the discharge configurations of each matrix out of the first and second matrices differ from each other by a replacement of
serial connections of capacitors of the matrix to parallel connections of capacitors of the matrix. The charge configuration and a discharge configuration of each of the first and second matrices are
responsive to a required conversion ratio between the input DC voltage and the output DC voltage.
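The repetitive interleaving of stages (i)-(x) can be sketched as a small generator. This is a Python illustration only; the state labels are ours and not part of the claims:

```python
from itertools import islice

def interleaved_phases():
    """Yield (first_matrix, second_matrix) states for the repeating
    schedule: while one matrix charges on the input ports, the other
    discharges on the output ports, and then the roles swap."""
    while True:
        yield ("charge@input", "discharge@output")   # first period of time
        yield ("discharge@output", "charge@input")   # second period of time

schedule = list(islice(interleaved_phases(), 4))
assert schedule[0] == ("charge@input", "discharge@output")
assert schedule[1] == ("discharge@output", "charge@input")
```

Because one matrix is always coupled to the output ports, the load is sourced continuously, which is the purpose of the two-matrix arrangement.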
BRIEF DESCRIPTION OF THE PRESENT INVENTION [0045]
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like
or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
FIGS. 1A, 1B, 2A, 2B, 3 illustrate various circuits, charging and discharging configurations;
FIG. 4 illustrates relationships between number of capacitors, number of divisors and voltage ratio possibilities;
FIGS. 5A, 5B, 6A, 6B, 7A, 7B, 7C, 7D, 8A, 8B, 20, 21, 22, 24, 25, 26 and 27 illustrate various circuits, charging and discharging configurations according to various embodiments of the invention;
FIGS. 9A, 9B, 10, 11A, 11B, 11C, 12, 13, 14, 15, 16, 17, 18, 19 and 23 illustrate various simulation results and circuit models according to various embodiments of the invention;
FIGS. 28-31 illustrate a modular cell and its environment according to various embodiments of the invention; and
FIG. 32 illustrates a method according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE DRAWINGS [0052]
Because the illustrated embodiments of the present invention may, for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained to any greater extent than considered necessary for the understanding and appreciation of the underlying concepts of the present invention, and in order not to obfuscate or distract from its teachings.
Multiple topologies of switched-capacitor converters are provided, taking into account integration considerations and multiple DC voltage ratios: (i) an ideal continuous capacitor, (ii) a matrix of capacitors, and (iii) a General Transposed Series-Parallel (GTSP) configuration.
The GTSP configuration is based on parallel branches of series capacitors in the charging state and series elements of parallel capacitors in the discharging state. This topology is suitable for fine tuning of the DC voltage ratios. The GTSP configuration can achieve required conversion ratios with fewer components than rectangular arrays.
The Ideal Continuous Capacitor [0055]
First, a theoretical capacitor is considered. This capacitor is defined as one that can change its capacitance continuously to any desired value. The capacitance change does not require any additional energy and theoretically is done instantaneously, with Δt→0. Next, the capacitor is connected in the circuit described in FIG. 1a, which illustrates a continuously changed capacitance acting as an ideal transformer.
The following table illustrates the connectivity between the elements of circuit 8 of FIG. 1a:
TABLE-US-00001
Element             First end                        Second end                       Control signal
Voltage supply 10   First end of S1                  Second end of C(t) and load      --
                    (first input port 10(1))         (second input port 10(2))
S1 11               Voltage supply                   First end of S2 and C(t)         x
S2 12               Second end of S1 and first       First end of load                x̄
                    end of C(t)
C(t) 13             Second end of S1 and first       Second end of voltage supply     --
                    end of S2                        and load
Load 15             Second end of S2                 Second end of C(t) and           --
                    (first output port 15(1))        voltage supply
                                                     (second output port 15(2))
In order to realize a two-port element characterized by a transformer, the circuit 8 described in FIG. 1a is presented. The states of the switches S1 11 and S2 12 are denoted by the complementary Boolean variables x and x̄. Therefore, the circuit has two states.
The first is the charging state, in which x=1: S1 11 is closed and S2 12 is open. In this state the capacitor C(t) 13 has a capacitance of C1 and is charged to the source voltage of V_in.
The second state is the discharging state, in which x=0: S1 11 opens and S2 12 closes. In this state the control 14 changes the capacitance of the capacitor 13 to any value C2 (the control is ideal, with no energy consumption, and the transition is immediate, with Δt→0 sec). Moreover, the switching frequency is high enough that the ripple of the output voltage over Load 15 can be neglected.
The above-mentioned yields power conservation from input to output, therefore:

    Q_in = Q_out
    V_in^2 C_1 = V_out^2 C_2   (1)
Consequently the transmission from input to output is:

    V_out / V_in = sqrt(C_1 / C_2) = n   (2)
The same can be formulated for the input and output currents. The ideal transformer equation can be written as:

    [ V_out ]   [ n    0  ] [ V_in ]
    [ I_out ] = [ 0   1/n ] [ I_in ]     where n = sqrt(C_1 / C_2)   (3)
Theoretically, in this "ideal" Direct-Coupled Transformer (DCT) (see reference [9]), any output voltage or current can be achieved, for a given input voltage or current respectively, by controlling the capacitance in each of the states (the charging and discharging states).
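The relations (1)-(3) are easy to check numerically. The sketch below (Python, with an illustrative function name of our own) computes the transformer ratio n from the two capacitances and verifies the energy balance:

```python
import math

def dct_ratio(c1, c2):
    """Transformer ratio n = V_out/V_in of the ideal continuous
    capacitor of FIG. 1a: charged at capacitance c1, discharged at
    capacitance c2, with the stored energy conserved (equation (1))."""
    return math.sqrt(c1 / c2)

# Quadrupling the charging capacitance relative to the discharging one
# doubles the voltage: n = sqrt(4e-6 / 1e-6) = 2.
assert math.isclose(dct_ratio(4e-6, 1e-6), 2.0)

# Energy balance of equation (1): V_in^2 * C1 == V_out^2 * C2.
v_in, c1, c2 = 1.0, 4e-6, 1e-6
v_out = dct_ratio(c1, c2) * v_in
assert math.isclose(v_in ** 2 * c1, v_out ** 2 * c2)
```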
Dual Capacitor Configuration [0064]
The ideal continuous capacitor in the configuration of FIG. 1a is connected to the input at the charging state and to the output at the discharging state. Thus, there is discontinuity in the input
and output voltages and currents. To avoid this discontinuity, capacitors should be connected at the input and output terminals of the circuit. When integration is considered, such capacitors will
have to be connected externally. To eliminate these external capacitors, the following dual capacitor configuration is suggested.
The following table illustrates the connectivity between the elements of circuit 9 of FIG. 1b:
TABLE-US-00002
Element             First end                        Second end                       Control signal
Voltage supply 10   First end of S1 and S3           Second end of CA(t), CB(t)       --
                                                     and load
S1 11               First end of voltage supply      First end of S2 and CA(t)        x
                    and S3
S2 12               Second end of S1 and first       First end of load and            x̄
                    end of CA(t)                     second end of S4
CA(t) 14            First end of S2 and second       Second end of voltage supply,    --
                    end of S1                        CB(t) and load
Load 15             Second end of S2 and S4          Second end of CA(t), CB(t)       --
                                                     and voltage supply
S3 15               First end of voltage supply      First end of S4 and CB(t)        x̄
                    and S1
S4 17               Second end of S3 and first       First end of load and            x
                    end of CB(t)                     second end of S2
CB(t) 18            Second end of S3 and first       Second end of voltage supply,    --
                    end of S4                        CA(t) and load
This configuration includes two ideal continuous capacitors CA(t) 14 and CB(t) 18. These capacitors are complementary: when CA(t) 14 has a value of C1, CB(t) 18 has the value of C2, and when CA(t) has a value of C2, CB(t) has the value of C1. The circuit 9 has two operating states. In the first state x=1 and S1 11 and S4 17 are closed. In this state, CA(t) 14 has a capacitance of C1 and is connected to the source 10; therefore it is charged to the voltage of the source, V_in. CB(t) 18 has a capacitance of C2, is connected to the load 15 and is charged to the voltage of V_out. The second state occurs when x=0. In this state S1 11 and S4 17 are opened and S2 12 and S3 15 are closed. Thus, the same capacitances and voltages are connected to the source and the load, except for the fact that the two capacitors switch their positions: CA(t) 14 is now connected to the output while CB(t) 18 is connected to the input. In the steady state (when the frequency is high enough) the transmission between the input and the output follows equations (1) to (3).
In general the dual capacitor configuration of continuous capacitors, as presented here, can be a basic cell for applicable configurations based on discrete capacitors, with a fixed capacitance of C.
This idea is presented in the following sections.
From Basic Series-Parallel to Matrix of Capacitors
In typical power electronics DC/DC applications the input and output voltages are known, or at least require a discrete number of conversion ratios. A simple way to achieve this is by using the known series-parallel topology. This topology implements a step-up or a step-down converter by connecting all capacitors in the charging state in parallel to the input source; in the discharging phase, the capacitors are connected in series with each other to the output. This simple operation and efficient utilization of capacitors make it favorable for low-voltage designs. The number of identical capacitors is determined by the input to output voltage ratio and the resolution of the required output voltage.
First, a two-capacitor array Series-Parallel topology is considered. These capacitors can either be charged from the source in series and discharged to the load in parallel, or be charged in parallel and discharged in series, as depicted in FIG. 2a and FIG. 2b. Two capacitors C26 and C27 are connected to an array of switches S1-S5 21-25 and are controlled by a control unit (not shown).
The connectivity of various components of the circuits of FIG. 2a is illustrated in the following table:
TABLE-US-00003
Element             First end                        Second end                       Control signal
Voltage supply 10   First end of S1                  Second end of C26, first end     --
                                                     of S5 and second end of S4
S1 21               Voltage supply                   First end of S2, C26 and S3      x
S2 22               Second end of S1, first end      First end of load                x̄
                    of C26 and S3
S3 23               First end of S2 and C26,         First end of C27 and S4          x
                    second end of S1
S4 24               First end of S3 and C27          Second end of C26, first end     x̄
                                                     of S5
S5 25               Second end of C26 and S4         Second end of C27 and load       x
C26                 First end of S2 and S3,          Second end of voltage supply     --
                    second end of S1                 and S4, first end of S5
C27                 Second end of S3 and first       Second end of voltage supply     --
                    end of S4                        and S5
Load 15             Second end of S2                 Second end of C27 and S5         --
The connectivity of various components of the circuits of FIG. 2b is illustrated in the following table:
TABLE-US-00004
Element             First end                        Second end                       Control signal
Voltage supply 10   First end of S2                  Second end of C26, S4 and        --
                                                     first end of S5
S1 21               Second end of S2 and S3,         First end of load                x̄
                    first end of C27 and S4
S2 22               Supply voltage                   Second end of S3, first end      x
                                                     of C27, S1 and S4
S3 23               First end of C26                 First end of C27, S1 and S4,     x̄
                                                     second end of S2
S4 24               First end of C27 and S1,         Second end of C26, first end     x
                    second end of S3                 of S5
S5 25               Second end of C26 and S4         Second end of C27 and load       x̄
C26                 First end of S3                  Second end of voltage supply     --
                                                     and S4, first end of S5
C27                 Second end of S2 and S3,         Second end of load and S5        --
                    and first end of S1 and S4
Load 15             Second end of S1                 Second end of C27 and S5         --
The input and output voltages are discontinuous in the presented topologies. Input and output external capacitors can be added to eliminate this discontinuity. Otherwise, each capacitor can be
constructed as a dual capacitor configuration (see reference [14]).
In the case described in FIG. 2a, the energy conservation can be written as:

    V_in^2 (2C) = V_out^2 (C/2)
    V_out / V_in = 2   (1)

It is noted that in the case shown in FIG. 2b the voltage ratio is 0.5.
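The same energy balance extends to n identical capacitors switched between an all-parallel and an all-series connection. The following Python sketch (the function name is ours) reproduces both the FIG. 2a and FIG. 2b ratios:

```python
import math

def sp_ratio(c, n, charge_parallel=True):
    """Voltage ratio of n identical capacitors of value c switched
    between all-parallel and all-series connections, from the energy
    balance V_in^2 * C_charge = V_out^2 * C_discharge."""
    c_par, c_ser = n * c, c / n                 # effective capacitances
    c_chg, c_dis = (c_par, c_ser) if charge_parallel else (c_ser, c_par)
    return math.sqrt(c_chg / c_dis)

# Two capacitors: charge in parallel (2C), discharge in series (C/2).
assert math.isclose(sp_ratio(1e-6, 2, charge_parallel=True), 2.0)   # FIG. 2a
assert math.isclose(sp_ratio(1e-6, 2, charge_parallel=False), 0.5)  # FIG. 2b
```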
This simple Series-Parallel example of two capacitors can be generalized to an l-by-m matrix of capacitors, as shown in FIG. 3.
FIG. 3 illustrates two configurations of a matrix of switches and capacitors. The left side of FIG. 3 illustrates a charging configuration 33 in which m branches 35(1)-35(m) are connected in parallel to each other, between two ports 37 and 38, and in parallel to the voltage supply 10. Each of these branches is illustrated as including l serially connected capacitors (C 30).
The right side of FIG. 3 illustrates a discharging configuration 34 in which m elements 36(1)-36(m) are connected in a sequential manner to each other, between the two ports 37 and 38. Each of these elements is illustrated as including l capacitors (C 30) that are connected in parallel to each other.
The matrix of capacitors includes internal switches; with a suitable control of the switches the circuit will have two transposed phases. In this case the transfer ratio M is given by:

    M = m / l   (2)

In (2), l is the number of series capacitors in the charging state and m is the number of parallel branches (m will be the number of series elements in the discharge state of the circuit).
The general capacitor matrix, as shown in FIG. 3, introduces versatility in the transmissions between input and output. For example, 12 capacitors can be organized as 1 over 12 to give a 12 or 1/12 transmission (depending on the constellation at the charging and discharging states, as mentioned above), as 2 over 6 for a 3 or 1/3 transmission, as 3 over 4 for a 1.33 or 0.75 transmission, etc. An increase in the number of capacitors results in a large variety of transmissions.
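The attainable ratios of a rectangular matrix are simply the divisor pairs of the capacitor count, which can be enumerated directly (an illustrative Python sketch using exact rational arithmetic; the function name is ours):

```python
from fractions import Fraction

def matrix_ratios(total):
    """All transfer ratios M = m/l attainable by arranging `total`
    identical capacitors into an l-by-m rectangular matrix
    (equation (2)): l series capacitors per branch, m parallel branches."""
    return sorted(Fraction(total // l, l)
                  for l in range(1, total + 1) if total % l == 0)

# 12 capacitors: the 1x12, 2x6, 3x4, 4x3, 6x2 and 12x1 arrangements.
ratios = matrix_ratios(12)
assert Fraction(12) in ratios and Fraction(1, 12) in ratios
assert Fraction(4, 3) in ratios and Fraction(3, 4) in ratios
assert len(ratios) == 6
```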
The complexity of this topology lies in the large number of capacitors and the number of switches required for controlling the switching of the requested topology between the charging and discharging states.
The number of DC voltage ratio possibilities for a given matrix of capacitors equals the number of divisors that can be found for the integer number of capacitors.
The sum-of-positive-divisors function σ_α(n) is defined as the sum of the α-th powers of the positive divisors of n, namely:

    σ_α(n) = Σ_{d|n} d^α   (3)

where n is an integer representing the number of capacitors in a matrix.
The divisor function σ_0(n) counts the number of divisors of an integer n (see reference [15]). For example, the divisor function of 1 is σ_0(1)=1, the divisor function of 6 is σ_0(6)=4 (the divisors 1, 2, 3, 6), and so on.
In the matrix topology, where a matrix is assembled of L capacitors, the number of voltage ratio possibilities is the sum of all divisor functions σ_0(n) for n equal to or less than L. Namely:

    σ(L) = Σ_{n=1}^{L} σ_0(n) = Σ_{n=1}^{L} Σ_{d|n} d^0   (4)
The divisor function of matrices composed of up to 100 capacitors is shown in FIG. 4a (illustrated by curve 41), and the function that represents the number of voltage ratio possibilities, equation (4), is shown in FIG. 4b (illustrated by curve 42).
The divisor function changes rapidly, as expected from divisors of integers. The voltage ratio possibilities function of FIG. 4b is a monotonically increasing function. For a matrix of 100 capacitors there are up to 482 possibilities of voltage ratios. The actual number is lower, due to the fact that the same voltage ratio can be attained in various configurations. Therefore, the function in FIG. 4b is an upper bound on these possibilities.
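The 482 figure can be reproduced by summing the divisor function as in equation (4). A brute-force Python sketch (function names are ours):

```python
def sigma0(n):
    """Divisor function sigma_0(n): the number of positive divisors of n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def ratio_possibilities(L):
    """Upper bound of equation (4) on the number of DC voltage ratios
    offered by up to L capacitors arranged as rectangular matrices."""
    return sum(sigma0(n) for n in range(1, L + 1))

assert sigma0(1) == 1 and sigma0(6) == 4   # divisors of 6: 1, 2, 3, 6
assert ratio_possibilities(100) == 482     # the figure quoted for FIG. 4b
```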
Accuracy and Fine Tuning [0088]
It was shown that the matrix topology has the possibility for versatile voltage ratios. This versatility yields a topology with better accuracy in comparison with other conventional topologies. Accuracy gives the ability to get precisely, or at least very close, to any desired fixed voltage ratio value. If the number of capacitors is not limited, theoretically the accuracy can reach any desired value. A theoretical test case for comparing the various topologies presented in this paper is a case where a difficult DC/DC voltage ratio is required, such as 2.2. In this case, a matrix of m=22 and l=10 (a total of 220 capacitors) is the solution. This value is theoretically achieved with 100% accuracy.
The case becomes more complicated if regulation is required. In this case the matrix needs to change to one that suits the new value. This characteristic is called Fine Tuning. The matrix configuration has the ability for fine tuning by changing the size of the matrix. In the above example, if for regulation purposes the 2.2 voltage ratio needs to change to 2.25, the required matrix will be m=45 and l=20 (a total of 900 capacitors). These numbers are obviously impractical for implementation, but this example emphasizes the presented idea of accuracy and fine tuning in matrix based topologies.
Partial Arbitrary Matrices [0090]
The matrix topology requires a large number of capacitors and switches; therefore it is too complex for implementation and integration of converters, being more theoretical than practical. Further study of the matrix topology leads to the conclusion that it is a singular case of a more general and more flexible topology.
The topology presented next is the Partial Arbitrary Matrix Topology (PAMT). This topology is based on a concept in which, for achieving a certain input to output voltage ratio, partial matrices and arbitrary rectangular-based arrangements of equal-valued capacitors can be used.
As shown in FIG. 5a, a matrix is charged in the charging state, and at the discharging state the topology is transposed, as mentioned above. The left side of FIG. 5a illustrates a charging configuration 51 in which m branches 52(1)-52(m) are connected in parallel to each other. Each of these branches is illustrated as including l serially connected capacitors. The right side of FIG. 5a illustrates a discharging configuration 53 in which m elements 54(1)-54(m) are connected in a sequential manner to each other. Each of these elements is illustrated as including l capacitors that are connected in parallel to each other.
The same idea can be implemented with non-rectangular matrices. Namely, any discrete number of equal-valued capacitors can be used to achieve any desired DC/DC voltage ratio. In FIG. 5b
the capacitors are connected in an arbitrary shape. The left side of FIG. 5b illustrates a charging configuration 55 while the right side of FIG. 5b illustrates a discharging configuration 58.
The charging configuration 55 includes branches 55(1), 56(1)-56(J) and 55(2) that are coupled in parallel to each other. Branch 55(2) has a single capacitor, branch 55(1) has a first number of
serially connected capacitors and each of branches 56(1)-56(J) has a second number of serially connected capacitors, whereas the first number is between one and the second number.
The discharging configuration 58 includes elements 58(1), 59(1)-59(J) and 58(2) that are coupled in a serial manner to each other. Element 58(2) has a single capacitor, element 58(1) has a first
number of parallel connected capacitors and each of elements 59(1)-59(J) has a second number of parallel connected capacitors, whereas the first number is between one and the second number.
For conservation of charge and for eliminating any unacceptable electrical connections (such as the parallel connection of two capacitors charged to different voltages), four conventions should be followed:
(A) At the charging state, when the capacitors are connected to the DC source, the capacitors should be connected as a shunt connection of branches. Each branch consists of any number of series capacitors.
(B) The discharging state is a dual connection to the charging state. That is, the capacitors should be connected as series elements. Each element consists of a number of capacitors in parallel.
(C) The number of parallel branches at the charging state is equal to the number of series elements in the discharging state.
(D) The number of capacitors in parallel in an element at the discharging state is equal to the number of series capacitors in the corresponding branch at the charging state.
It is noted that such a transformation from input to output of partial and arbitrary shaped matrices from series to parallel, and vice versa, may be achieved by transferring a given topology through
a gyrator (see: references [16]-[19]).
The DC/DC Input/Output Voltage Ratio
FIGS. 6a and 6b illustrate a charging configuration and a discharging configuration of a PAMT. It is noted that the number of capacitors in each branch of FIG. 6a and each element of FIG. 6b is denoted by N with a complementary index (j) that indicates the branch number. This number is an integer and may vary from 1 to L, where L is any finite integer.
The charging configuration 63 includes n branches 60(1)-60(n) that are coupled in parallel to each other. Each branch includes one or more serially connected capacitors. The j'th branch includes Nj
serially connected capacitors.
The discharging configuration 64 includes n elements 62(1)-62(n) that are serially connected to each other. Each element includes one or more parallel connected capacitors. The j'th element includes Nj parallel connected capacitors.
At the charging state of FIG. 6a, each capacitor at the j'th branch is charged to a voltage V_cj of:

    V_cj = (1/N_j) V_in   (5)
The same expression can be written for each capacitor in any given branch, taking into consideration the number of capacitors in that branch.
Consequently, at the discharging state, the equally charged series capacitors in a branch are connected in parallel. Therefore, the voltage of that series element of paralleled capacitors is equal to
the voltage of a single capacitor at the charging state, as in (5). Additionally, the output voltage at the discharging state is a sum of all voltages of all series elements in FIG. 6b. The output
voltage can be written as:
    V_out = V_in (1/N_1) + V_in (1/N_2) + . . . + V_in (1/N_n)   (6)

where each of N_1, . . . , N_n is an integer that can obtain any value in the range of 1 to L.
Then, for a given number of capacitors in a PAMT configuration, the DC input/output voltage ratio is:

    M = V_out / V_in = Σ_{j=1}^{n} 1/N_j   (7)
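Equation (7) can be evaluated exactly with rational arithmetic. A short Python sketch (the function name is ours; the branch lists correspond to configurations shown in the figures):

```python
from fractions import Fraction

def pamt_ratio(branches):
    """DC conversion ratio of a PAMT configuration per equation (7):
    `branches` lists N_j, the number of series capacitors in each
    parallel branch of the charging state."""
    return sum(Fraction(1, n) for n in branches)

assert pamt_ratio([1, 10]) == Fraction(11, 10)   # 1.1:  N1=1, N2=10
assert pamt_ratio([1, 1, 5]) == Fraction(11, 5)  # 2.2:  N1=1, N2=1, N3=5
assert pamt_ratio([1, 1, 4]) == Fraction(9, 4)   # 2.25: N1=1, N2=1, N4=4
```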
FIG. 8a illustrates a charging configuration in which two branches 81(1)-81(2) are connected in parallel to the supply voltage 10. The numbers of the serially connected capacitors of these branches are N1=1 and N2=10, although N2 may differ from 10. FIG. 8a also illustrates a discharging configuration in which two elements 72(1)-72(2) are connected in a serial manner to the supply voltage. The numbers of the parallel connected capacitors of these elements are N1=1 and N2=10, although N2 may differ from 10.
Accuracy and Fine Tuning in PAMT
Various ratios can be provided in various manners, including but not limited to bypassing or disconnecting capacitors.
FIG. 7a illustrates a charging configuration in which three branches 70(1)-70(3) are connected in parallel to the supply voltage while another (optional) branch 70(4) is disconnected. The numbers of
the serially connected capacitors of these branches are N1=1, N2=1 and N3=5. FIG. 7c illustrates a discharging configuration in which three elements 71(1)-71(3) are connected in a serial manner to
the supply voltage while another (optional) branch 70(4) (not shown in FIG. 7c) is bypassed. The numbers of the parallel connected capacitors of these elements are N1=1, N2=1 and N3=5, although other
values of N1-N4 may be provided. These configurations provide a ratio of 2.2.
FIG. 7b illustrates a charging configuration in which three branches 70(1), 70(2) and 70(4) are connected in parallel to the supply voltage while another (optional) branch 70(3) is disconnected. The
numbers of the serially connected capacitors of these branches are N1=1, N2=1 and N4=4. FIG. 7d illustrates a discharging configuration in which three elements 71(1), 71(2) and 71(4) are connected in a serial manner to the supply voltage while another (optional) element 71(3) (not shown in FIG. 7d) is bypassed. The numbers of the parallel connected capacitors of these elements are N1=1, N2=1 and
N4=4, although other values of N1, N2 and N4 may be provided. These configurations provide a ratio of 2.25.
The arbitrary (or rather non-rectangular) matrices provide a ratio of either one of 2.2 and 2.25 with only seven capacitors. The arbitrary (or rather non-rectangular) matrices provide both ratios, 2.2 and 2.25, with only 11 capacitors. It is noted that a 2.2 ratio may require a rectangular matrix that includes 220 capacitors.
The configurations illustrated above allow fine tuning of the ratio.
The same fine tuning example requires here an addition of four capacitors. Namely, eleven capacitors are required (compared with 990 in the rectangular matrix topology) for achieving the fixed value
accuracy and the Fine Tuning capability of the mentioned example.
In general, multiple branches and/or elements can be bypassed or disconnected.
The General Transposed Series-Parallel Topology (GTSP)
The rectangular matrix topology requires a large number of capacitors and switches. PAMT requires far fewer capacitors, but when regulation and fine tuning are required, the number of capacitors can rise. Studying the rectangular matrix topology and PAMT further leads to the conclusion that a more general and more flexible topology can be derived.
The GTSP topology presented next is a solution for achieving an integrated converter with a very large number of input/output DC voltage conversion ratios and the ability to multiply and divide by integers, as well as to perform fractional multiplications. Moreover, this topology has Fine Tuning capabilities for regulation purposes with a minimal addition of capacitors. The topology is based on a concept in which, for achieving a certain input to output voltage ratio, partial matrices and arbitrary rectangular-based arrangements of equal-valued capacitors can be used exactly as was explained for the PAMT. The only difference is that the arbitrary matrices are constructed out of a bank of capacitors. Each capacitor can be connected individually in any column.
Referring to the 2.2 voltage ratio example, the GTSP topology, shown in FIG. 8b, may be the same as the PAMT of FIG. 7a and FIG. 7c but includes bypass and/or disconnect capabilities of portions of
elements and/or branches.
FIG. 8b illustrates a charging configuration 89 in which three branches 87(1)-87(3) are connected in parallel to the supply voltage 10 and a single capacitor C89 of the third branch 87(3) can be
bypassed. The numbers of the serially connected capacitors of these branches are N1=1, N2=1 and N3=5 (including C89).
FIG. 8b also illustrates a discharging configuration 88 in which three elements 88(1)-88(3) are connected in a serial manner to the load 15 while capacitor C89 is disconnected. The numbers of the
parallel connected capacitors of these elements are N1=1, N2=1 and N3=5 (including C89).
It is noted that other values of N1-N3 may be provided, that the number of branches and elements may differ from three, and that other capacitors can be bypassed (in addition to C89 or instead of it).
The columns 87(1) and 87(2) that include one capacitor (N1=1 and N2=1) may yield the integer multiplication. The column 87(3) that has more than one capacitor yields the fractional multiplication. Therefore, a combination of the two provides any required voltage ratio according to equation (7).
The PAMT topology may differ from the GTSP topology by the number of capacitors required for Fine Tuning.
A 2.2 voltage ratio can be changed to a 2.25 ratio by bypassing (or not) C89. Therefore, for the assumed example, only seven capacitors are needed for the accuracy and fine tuning.
The ability to describe the topology as a connection between discrete capacitors yields a topology with a very large amount of voltage ratios with a reduced number of capacitors. This leads to
accuracy and Fine Tuning capabilities that are only known in inductance based converters.
Accordingly--instead of bypassing or disconnecting entire branches (or elements) only a portion of a branch (or a portion of an element) can be bypassed or disconnected. The bypass and disconnection
may be implemented by switches.
The DC/DC Input/Output Voltage Ratio Possibilities
The configuration of FIGS. 6a-6b may also be viewed as a case of GTSP. It is versatile and yields a large number of input/output DC voltage ratios. Note that for any number L of capacitors, it is
possible by controlled switching to use any number less or equal to L capacitors.
Examining the number of DC voltage ratio possibilities for a given number of capacitors shows a resemblance to partition functions in number theory. A partition function p(n) gives the number of partitions of a number n as a sum of positive integers, regardless of order (see reference [20]). For example, the partition function of 1 is p(1)=1, the partition function of 2 is p(2)=2, and p(3)=3; p(4)=5; p(5)=7; p(6)=11; . . . ; p(10)=42; and so on.
In the case of a GTSP that is constructed out of K capacitors, the number of voltage ratio possibilities is the sum of all partition functions p(n) for n equal to or less than K. Namely:

    P_K = Σ_{n=1}^{K} p(n)   (8)
The partition functions p(n) can themselves be calculated by using the greedy algorithm (see reference [15]), or from the generating function based on the reciprocal of Euler's function (see reference [20]):

    Σ_{n=0}^{∞} p(n) x^n = Π_{k=1}^{∞} 1/(1 - x^k)   (9)
In FIG. 9a the number of possible DC/DC voltage ratios is presented with respect to the number of capacitors. It can be seen that with 20 capacitors the number of possibilities reaches a vast number of over 2,800 ratios (curve 91 of FIG. 9a). Checking the actual voltage ratios reveals that there are ratios that repeat themselves in various constellations of capacitors. After sorting and eliminating repeating ratios, curve 92 in FIG. 9a shows the number of different possible ratios. At 20 capacitors there are still over 1000 possibilities. Moreover, these reported repetitions might be useful in the future when transferring from one ratio to another. In this case, for energy considerations and optimization of the control, it might be useful to have a few configurations to choose from.
FIG. 9b shows normalized results for the case in which the number of possibilities are divided by the number of capacitors, the results show that even when normalized, the graph is still
exponentially increased. Curve 93 illustrates the number or ratios while curve 94 illustrates the number of different ratios. Therefore, adding capacitors to the system increases the number of ratio
possibilities in an exponential manner.
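The ratio counting described above can be reproduced directly. Assuming, as in the GTSP output relation (27), that a branch configuration (N_1, …, N_n) yields the transfer ratio Σ_j 1/N_j, the following sketch enumerates all partitions of every n ≤ K and counts total versus distinct ratios (function names are illustrative):

```python
from fractions import Fraction

def gen_partitions(n, max_part=None):
    """Yield every partition of n as a tuple of parts, largest first."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(max_part, 0, -1):
        for rest in gen_partitions(n - first, first):
            yield (first,) + rest

def gtsp_ratio_counts(K):
    """Total configurations (the sum of partition counts, eq. (8)) and
    the set of distinct ratios sum(1/N_j) reachable with up to K capacitors."""
    ratios = []
    for n in range(1, K + 1):
        for part in gen_partitions(n):
            ratios.append(sum(Fraction(1, N) for N in part))
    return len(ratios), set(ratios)

total, distinct = gtsp_ratio_counts(4)
# the partition (2, 2) repeats the ratio 1 already produced by (1,)
```

Using exact `Fraction` arithmetic avoids falsely merging nearby ratios; already at K = 4 one configuration repeats an existing ratio, illustrating the repetitions discussed above.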
An important characteristic of GTSP is the ability to fine tune the voltage ratio. FIG. 10 shows a histogram 100 that sorts the total different voltage ratios. It can be seen that only one configuration can make a voltage ratio of 20, but there are 184 configurations that give various voltage ratios between 0 and 1, 171 configurations that give various voltage ratios between 1 and 2, and so on: the number drops in a logarithmic manner in the higher regions. This means that after reaching the rough voltage ratio, fine tuning can be established.
Accuracy and Fine Tuning Test Case
As a test case a 1.75 DC/DC voltage ratio is examined. An input voltage with up to ±15% deviation around a nominal value is considered. For an appropriately regulated output voltage, the converter needs to provide DC/DC voltage ratios from 1.5 to 1.9. The sum of partitions and the possible voltage ratios in the above-mentioned interval, calculated for 10, 15 and 20 capacitors, are presented in FIG. 11.
The results show that for 10 capacitors, as shown in FIG. 11a, there are 8 different DC/DC voltage ratios in the interval of 1.5 to 1.9. The average voltage difference between two values is 0.0476.
In the case of 15 capacitors (FIG. 11b), there are 23 different ratios in the mentioned interval with an average difference of 0.0182 between two values. In the last case of 20 capacitors (FIG. 11c), there are 70 different ratios with an average difference of 0.0058 between two values.
For practical cases of switched mode power supplies, it is sufficient that the ripple Δv is:
Δv / V_out < 1%    (10)
It seems that the "fine tuning" of the transfer ratio need not be more precise than the required ripple; therefore a 20 capacitor converter seems to be the appropriate topology for achieving accuracy. For more stringent operating conditions, the addition of capacitors should be considered. Note that each extra capacitor adds an exponential number of possibilities, as described in (8) and FIG. 7.
The Effect of Dispersion of the Capacitance Value Parameter on the Switching Frequency and Converter Losses
The analysis in the previous sections takes into consideration ideal components. Namely, all capacitors have the same capacitance value and the switches are ideal.
In manufacturing nominally identical capacitors, due to practical manufacturing methods, each capacitor's value can be considered to be C+ΔC. Moreover, ΔC is distributed normally; namely, the normalized variable ΔC/C can be written as:
ΔC/C ~ N(0, σ²)    (11)
where ΔC/C ≪ 1 and σ² is the variance, which depends on the accuracy of the manufacturing process.
The first branch of capacitors in FIG. 6a consists of N1 capacitors connected in series. The inverse total capacitance of that branch is:
1/C_T1 = 1/C + 1/C + … + 1/C = N1/C    (12)
Subsequently, the total capacitance of the GTSP configuration observed by the source is:
C_Tot = C Σ_{j=1}^{n} 1/N_j    (13)
At the end of the charging state the voltage across each capacitor in the first branch is:
V_C1 = (V_in/N1) [1 − ΔC/C]    (14)
Therefore, the voltage across any capacitor in any branch j in the
GTSP topology is:
V_Cj = (V_in/N_j) [1 − ΔC/C]    (15)
The output voltage drop at the beginning of the discharge state for capacitors in the first branch is:
V_in/N1 = V_out1    (16)
Therefore, the energy loss in each capacitor with capacitance of C+ΔC at the first branch due to charging is:
Δw = ½ (C + ΔC) [ ( (V_in/N1) [1 − ΔC/C] )² − (V_in/N1)² ]    (17)
The mean is given by:
E(Δw) = ½ C (V_in/N1)² σ²    (18)
The total energy loss of all capacitors with capacitance of C+ΔC due to discharging at the first row is:
Δw_Tot1 = ½ C (V_in²/N1) σ²    (19)
In the same manner, the energy loss of all capacitors with capacitance of C+ΔC due to discharging at all rows is:
Δw_Tot = ½ C V_in² σ² Σ_{j=1}^{n} 1/N_j    (20)
The absorbed power when the switching frequency is f_switch is:
P_mismatch = Δw_Tot · f_switch = ½ C V_in² σ² f_switch Σ_{j=1}^{n} 1/N_j    (21)
After a short period, the capacitors reach an average voltage Vj (this process consists of an energy loss of Δw), then the discharged state starts. In this state the capacitors of each branch are
connected in parallel and all branches are connected now in series. The total output capacitance is:
1/C_Tot = 1/(N1·C) + 1/(N2·C) + … + 1/(Nn·C) = (1/C) Σ_{j=1}^{n} 1/N_j    (22)
Under the assumption that the voltage is constant, namely, that the voltage drops very slightly, the current is then constant and is:
I_dis = P_load / (V_in Σ_{j=1}^{n} 1/N_j)    (23)
The voltage drop is then:
ΔV_p = I_dis T_switch / C_Tot = (P/V_out) T_switch (1/C) Σ_{j=1}^{n} 1/N_j    (24)
From (24) it is evident that:
ΔV_p/V_out = (P/(V_out² f)) (1/C) Σ_{j=1}^{n} 1/N_j    (25)
Now the frequency is:
f_switch = P_load / (C V_out² (ΔV_p/V_out)) · Σ_{j=1}^{n} 1/N_j    (26)
The output voltage in the GTSP topology is:
V_out = V_in Σ_{j=1}^{n} 1/N_j    (27)
Substituting (27) into (26) yields:
ΔP_mismatch/P_load = ½ (V_in/V_out)² σ² [Σ_{j=1}^{n} 1/N_j]² (V_out/ΔV_p)    (28)
From (28) it is evident that large parameter dispersion yields higher losses. Therefore, for integration it is crucial to lower this dispersion by an accurate production process. Since ΔV_p/V_out implies losses as well, an optimum should be sought.
It should be mentioned that when real capacitors are concerned, the analysis will include energy losses due to the fact that the capacitors are not equally valued. This energy loss will be absorbed in the capacitors' parasitic resistances, which were neglected in order to simplify the analysis.
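The mismatch power of (21) is straightforward to evaluate numerically. The snippet below uses illustrative component values and branch sizes that are not taken from the text:

```python
def mismatch_power(C, V_in, sigma, f_switch, branches):
    """Average power absorbed due to capacitance dispersion, eq. (21):
    P_mismatch = 0.5 * C * V_in^2 * sigma^2 * f_switch * sum_j 1/N_j."""
    return 0.5 * C * V_in**2 * sigma**2 * f_switch * sum(1.0 / N for N in branches)

# hypothetical example: C = 1 uF, V_in = 5 V, 5% dispersion, f = 1 MHz,
# branches N = (1, 2, 5), a configuration with ratio 1.7
P = mismatch_power(1e-6, 5.0, 0.05, 1e6, (1, 2, 5))
```

Because the loss scales with σ², halving the manufacturing tolerance cuts this loss term by a factor of four, which quantifies the "accurate production process" argument above.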
Small Ripple Analysis
The theory presented above assumes that the switched capacitor converter is ideal; namely, the presentation was based on the assumption that the output power equals the input power, i.e., maximum efficiency. Therefore, the converter is described as a two-port Direct Coupled Transformer (DCT) and defined by the chain matrix:
[T] = [A B; C D] = [m 0; 0 m⁻¹]    (29)
When the system is not ideal, losses are introduced and the ideal system becomes semi-ideal (see: reference [21]).
The switching of capacitors results in high peak current pulses at the input of the circuit. To eliminate these current spikes, a small series resistance is introduced at the input of the circuit. The current is now restrained and the input current perturbation is denoted as δi_in. Furthermore, the output voltage perturbation (ripple) is denoted by δv_out; see FIG. 12.
The model of FIG. 12 includes a voltage supply 10 having one end connected to an input port of an ideal DCT 122 and another end connected to a first end of δi
121. The other end of δi
121 is connected to another input port of the ideal DCT 122. An output port of the ideal DCT 122 is connected to one end of δv
123 while another end of δv
123 is connected to one end of load 15. The other end of load 15 is connected to another output port of the ideal DCT 122.
The input current and output voltage equations are then:
i_in(t) = i*_in + δi_in(t),  v_out(t) = v*_out + δv_out(t)    (30)
where i*_in and v*_out are the rated input current and output voltage, respectively.
For maintaining Semi-Ideal conditions, the output voltage perturbation δv_out and the input current perturbation δi_in should comply with:
δi_in/i*_in = α(t),  δv_out/v*_out = β(t)    (31)
For realization of a Semi-Ideal system the following should be satisfied:
|α(t)|<<1 and |β(t)|<<1 (32)
In the case of (30), the ratio equation becomes:
(i*_in + δi_in(t))/i_out = (v*_out + δv_out(t))/v_in    (33)
The transfer ratio of the ideal DCT is:
m = i*_in/i_out = v*_out/v_in    (34)
Then from (33), (34) and (31) we derive:
(i*_in + δi_in(t))/i_out = m[1 + α(t)],  (v*_out + δv_out(t))/v_in = m[1 + β(t)]    (35)
Thus, the chain matrix is now:
[T] = [m[1 + α(t)] 0; 0 {m[1 + β(t)]}⁻¹]    (36)
The [T] matrix of the approximated DCT can be split into two matrices as follows:
[T] = [M] + [A(t)],  [M] = [m 0; 0 m⁻¹],  [A] = [mα(t) 0; 0 [mβ(t)]⁻¹]    (37)
In (37), [M] is the matrix of an ideal DCT with the ideal transfer ratio m. The [A] matrix represents the variation of the input current and output voltage from their rated values, resulting from the imbalanced input/output power of the Semi-Ideal system. If one of the perturbations (input current or output voltage) in the [A] matrix is significantly small, the corresponding coefficient in the matrix tends to zero. When α and β tend to zero, the [T] matrix tends to [M] (an ideal DCT).
Losses Due to Voltage Ripple and Efficiency Considerations
The topologies presented here are loss free (in principle, assuming ideal capacitors and switching elements) as frequency tends to infinity. In practice the switching process involves losses, which depend on frequency. In the low frequency range this dependence is inverse to the frequency (see: references [11] and [22]):
Δv = i_out / (2 C_T f)    (38)
where C_T is the equivalent capacitance measured from the output and f is the switching frequency. The denominator is multiplied by two because the topology is implemented by dual circuits, as mentioned in reference [14]. This voltage ripple implies losses due to the imbalance between the capacitors' voltage and the voltage source that powers the circuit. From (38) it is clear that the voltage ripple is reduced as frequency increases. This means that the frequency dependence is minor at higher frequencies, as mentioned in reference [22]. Conduction losses are dominant at higher frequencies. These losses are due to the On resistance of the switches and the parasitic resistance of the capacitors. Increasing the switching frequency further results in an increased influence of losses that are linearly proportional to the frequency, due to the charge/discharge processes in the semiconductor junctions (see: reference [23]). It would be reasonable to operate the converter at a mid-range frequency, in which the ripple is negligible and the impact of the high frequency losses is still small, so our loss calculation is focused on this range.
In this case the dominant loss mechanism is the switch resistance, and as a first approximation only this effect is considered.
In FIG. 13 a GTSP with arbitrary voltage is considered. In this topology the switch and capacitor parasitic resistances are combined and are denoted as δR. Note that FIG. 13a describes the charging state and FIG. 13b the discharging state of the configuration as shown.
The left side of FIG. 13 (denoted (a)) illustrates a charging configuration 134 in which three branches 130(1)-130(3) are connected in parallel to the supply voltage 10. The numbers of the serially connected capacitors (and switches and capacitor parasitic resistances δR) of these branches are N1, N2 and N3.
The right side of FIG. 13 (denoted (b)) illustrates a discharging configuration 135 in which three elements 131(1)-131(3) are connected in a serial manner to the load 15. The numbers of the parallel connected capacitors (and switches and capacitor parasitic resistances δR) of these branches are N1, N2 and N3.
The power loss in the parasitic resistances at the discharging state is:
ΔP = N1[(i_out/N1)² δR] + N2[(i_out/N2)² δR] + … + Nn[(i_out/Nn)² δR] = [Σ_{i=1}^{n} 1/N_i] i_out² δR    (39)
It is interesting to mention that the sigma section in the result of (39) is the DC/DC voltage ratio of the GTSP configuration as denoted by (7). Therefore, denoting this ratio by M, (39) can be written as:
ΔP = M i_out² δR    (40)
In FIG. 14 the GTSP with voltage ratio of 2.2 is reconsidered as an example. Note that in the charging state 136 (left side of FIG. 14), the resistance in the integer multiplication branches is δR, and in the N3 branch of the fractional multiplication, the resistance is the net resistance of all five capacitors. Therefore the resistance in this branch equals 5δR. In the discharging state 137 (right side of FIG. 14), the upper resistance is the net resistance of the N1 and N2 resistances, namely, 2δR. Each of the paralleled five capacitors of the N3 row has a series resistance of δR, as shown.
In the charging configuration 136 three branches 140(1)-140(3) are connected in parallel to the supply voltage 10. The numbers of the serially connected capacitors (and switches and capacitor parasitic resistances δR) of these branches are N1=1, N2=1 and N3=5.
In the discharging configuration 137 three elements 141(1)-141(3) are connected in a serial manner to the load 15. The numbers of the parallel connected capacitors (and switches and capacitor parasitic resistances δR) of these branches are N1=1, N2=1 and N3=5.
The power loss in the parasitic resistances at the discharging state is:
ΔP = 5[(i_out/5)² δR] + i_out² · 2δR = i_out² (55/25) δR = 2.2 i_out² δR    (41)
As expected from (40) the constant 2.2 is the DC/DC voltage ratio of the converter. It can be shown that this result is valid for any GTSP topology. The power loss in the switches is a function of
the voltage transfer ratio, the On resistance of the switches and the load resistance as in (40).
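The identity between the branch-by-branch sum of (39) and the closed form (40) can be checked numerically; the current and resistance values below are illustrative only:

```python
def discharge_loss_direct(i_out, dR, branches):
    """Eq. (39): each branch of N parallel capacitors carries i_out/N
    per capacitor, each through a parasitic resistance dR."""
    return sum(N * (i_out / N) ** 2 * dR for N in branches)

def discharge_loss_ratio(i_out, dR, branches):
    """Eq. (40): loss = M * i_out^2 * dR, with M = sum(1/N) the
    DC/DC transfer ratio of the configuration."""
    M = sum(1.0 / N for N in branches)
    return M * i_out**2 * dR

# the 2.2-ratio example of FIG. 14: discharge branches (1, 1, 5)
a = discharge_loss_direct(2.0, 0.1, (1, 1, 5))
b = discharge_loss_ratio(2.0, 0.1, (1, 1, 5))
```

Both routes give the same number for the (1, 1, 5) configuration, mirroring the factor 2.2 obtained in (41).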
Subsequently, the ratio between the power losses on the parasitic and switches resistors and the output power (assuming constant power on the load) can be written as:
ΔP/P_load = i_out² M δR / P_load = M (δR/R_Load)    (42)
and the efficiency is therefore:
η = 1 − ΔP/P_load    (43)
In a practical case, the converter is applied for power processing. If constant output power is considered, (42) shows that as the voltage transfer ratio M increases, the square of the output current decreases. Thus, in total the losses decrease with the increase in M.
Equation (42) indicates that the conduction losses are a function of the transfer ratio M, the load resistance and the parasitic resistance δR only. Therefore, for a simple conventional ratio (such as 1.5), for which 3 capacitors are required, and for a more complex transfer ratio (such as 1.35), for which a larger number of capacitors is required, the variations in power losses would be very modest.
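Equations (42)-(43) reduce to a one-line efficiency estimate. The sketch below uses an assumed load resistance, since R_Load is not specified in the text:

```python
def efficiency(M, dR, R_load):
    """Eta from eqs. (42)-(43): eta = 1 - M * (dR / R_load)."""
    return 1.0 - M * (dR / R_load)

# assumed values: ratio M = 2.2, switch resistance 100 mOhm, 10 Ohm load
eta = efficiency(2.2, 0.1, 10.0)
```

The estimate depends only on the ratio δR/R_Load scaled by M, which is why the 1.5-ratio and 1.35-ratio cases above differ so little.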
It should be noted that this analysis does not consider housekeeping losses and second order phenomena that can lower the efficiency.
GTSP vs. Matrix Topology
Matrix topology is based on rectangular matrices of equally valued capacitors, as mentioned above. In Matrix topology the voltages and currents are equally divided between the capacitors. Thus, this topology makes improved usage of the converter's volume. However, this advantage comes with complexity in the number of switches and the control scheme. This topology has the disadvantage of requiring a large number of capacitors at non-conventional DC/DC voltage ratios (such as 2.2). At the same voltage ratios the GTSP topology can be realized with a considerably reduced number of capacitors. This is an advantage when integration and losses are considered, since the reduction of capacitors yields a proportional reduction in the number of switches. Therefore, the converter can be implemented in a smaller space and switching losses are reduced.
The unequal voltages and currents on the capacitors in the GTSP topology have an effect on the design. This is because the equally valued capacitors must be designed such that each capacitor is capable of withstanding the highest stress.
Simulation Results
As a simulation example, circuits with 2.2 and 2.25 DC/DC voltage ratios were tested. The number of capacitors in these GTSP configurations is 7 for the 2.2 ratio and 6 for the 2.25 voltage ratio, as shown in FIG. 8b.
The 2.2 DC/DC voltage ratio is achieved by connecting one capacitor in each of the N1 and N2 columns. This section is the rough integer multiplication of 2. The last column, N3, consists of 4 capacitors for fine tuning of 0.25 multiplication and 5 capacitors for 0.2 multiplication. The simulation results are shown in FIG. 15 (curve 150) and FIG. 16 (curve 160).
The simulation was performed with dual circuits that work in complementary phases for the elimination of large external capacitors (see: reference [14]). The input voltage is 5V with capacitors of 50 pF. The simulation results show that the expected 11.25V for the 2.25 voltage ratio and 10.95V for the 2.2 voltage ratio are obtained. Next the output voltage ripple was simulated for various frequencies; the results are shown in FIG. 17.
In FIG. 17 the GTSP topology is constant (a 2.2 voltage ratio); the capacitors and load are also constant. The changing variable is the circuit frequency. The results show that for 2.5 MHz (curve 171) the voltage ripple is 92 mV while the ripple calculated by (38) is 99 mV on a 1 kΩ load. At 5 MHz (curve 172) the simulation shows 44 mV and at 10 MHz (curve 173), 28 mV, while the calculated ripples are 49 mV and 25 mV respectively.
Comparison of the simulated results (curve 181) to the calculated results (curve 182) of (38) for a 2.2 voltage ratio configuration yields the graphs in FIG. 18. The results are shown for a frequency range of 1 MHz to 10 MHz and show good agreement with the theory.
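The inverse-frequency behavior of (38) that FIG. 18 confirms can be sketched as follows; the output current and equivalent capacitance below are assumed values, not figures from the text:

```python
def ripple(i_out, C_T, f):
    """Eq. (38): delta_v = i_out / (2 * C_T * f)."""
    return i_out / (2.0 * C_T * f)

# assumed operating point: 11 mA output current, C_T = 22 nF
r1 = ripple(0.011, 2.2e-8, 2.5e6)
r2 = ripple(0.011, 2.2e-8, 5.0e6)   # doubling f halves the ripple
```

This 1/f scaling matches the trend of the simulated points (92 mV, 44 mV, 28 mV at 2.5, 5 and 10 MHz) reported above.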
In FIG. 19 the efficiency is calculated for a 2.2 DC/DC voltage ratio with on resistance of 10 mΩ (curve 191), 100 mΩ (curve 192) and 500 mΩ (curve 193) for the switches.
The results are consistent with equations (42) and (43) and show efficiencies over 95%. These results are only a first approximation, since other loss mechanisms can lower the efficiency, as mentioned above.
FIGS. 20, 21 and 24-26 illustrate various configurations of a circuit 200 that includes three capacitors C1-C3 211-213 and ten switches 201-210.
The first end of voltage supply 10 is connected to a first end of switch D 201. The other end of switch D 201 is connected to a first end of capacitor C1 211, a first end of switch a1 203 and a first end of switch D' 202. The second end of voltage supply 10, capacitor C3 213, switch b1 210 and load 15 are connected to each other. The second end of switch a1 203 is connected to a first end of switch 1A 206 and switch 4A 209. The second end of C1 211 is connected to a first end of switch 3A 205 and a first end of switch 2A 204. The second end of switch 2A 204 is connected to a second end of switch 1A 206 and a first end of capacitor C2 212. The second end of C2 212 is connected to a first end of switch 6A 207 and a first end of switch 5A 208. The second end of switch 5A 208 is connected to a second end of switch 4A 209 and a first end of capacitor C3 213.
In FIG. 22 a charge configuration 242 (switches D 201, 2A 204 and 5A 208 are closed) and a discharge configuration 243 (switches D' 202, a1 203, 1A 206, 4A 209, 3A 205, 6A 207 and b1 210 are closed)
that provide a ratio of 1/3 are shown.
FIG. 23 illustrates the relationship between the input voltage (curve 255) and the output voltage (curve 254).
In FIG. 24 a charge configuration 264 (switches D 201, 1A 206, a1 203, 6A 207 and 3A 205 are closed) and a discharge configuration 265 (switches D' 202, 2A 204, 6A 207 and b1 210 are closed) that
provide a ratio of 1/2 are shown.
In FIG. 25 a charge configuration 266 (switches D 201, a1 203, 1A 206, 6A 207, b1 210 and 4A 209 are closed) and a discharge configuration 267 (switches D' 202, a1 203, 1A 206 and 5A 208 are closed) that provide a ratio of 1/2 are shown.
In FIG. 26 a charge configuration 268 (switches D 201, a1 203, 4A 209, 3A 205 and b1 210 are closed) and a discharge configuration 269 (switches D' 202, 3A 205, 6A 207 and 5A 208 are closed) that
provide a ratio of 1/2 are shown.
FIG. 21 illustrates a circuit 210 that has another set of three capacitors 231, 232 and 233 and ten switches 221-230 that are coupled in parallel to the three capacitors 211-213 and the ten switches 201-210 illustrated in FIG. 20. The three capacitors 231, 232 and 233 and ten switches 221-230 are coupled to each other in the same manner in which the three capacitors 211-213 and the ten switches 201-210 are coupled to each other.
FIG. 27 illustrates two matrixes 277 and 278 that are connected in parallel between load 15 and voltage supply 10. First matrix 277 includes 4 capacitors 277(1)-277(4) and 20 switches 276(1)-276(20).
Second matrix 278 includes 4 capacitors 278(1)-278(4) and 20 switches 279(1)-279(20).
The connection between the voltage ripple ΔV and output voltage Vout, the frequency f, load resistance Rload and capacitance C is given by:
ΔV/V_out = 1/(2 N C f R_Load)
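A quick sketch of the stated ripple relation, with assumed component values, shows how a design can be checked against the 1% criterion of (10):

```python
def ripple_fraction(N, C, f, R_load):
    """Normalized ripple dV/Vout = 1 / (2 * N * C * f * R_load)."""
    return 1.0 / (2.0 * N * C * f * R_load)

# assumed design point: N = 4, C = 1 uF, f = 1 MHz, R_load = 100 Ohm
dv = ripple_fraction(4, 1e-6, 1e6, 100.0)   # approximately 0.00125, i.e. 0.125%
```

Any of the four parameters can be traded against the others; for instance, halving C can be compensated by doubling f at the cost of higher switching losses, per the discussion around (38).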
A DC to DC converter may include multiple modular cells such as the modular cell 400 of FIG. 28.
The modular cell 400 includes: a first switch s1 411, coupled between a first port 401 and a sixth port 406; a second switch s2 412, coupled between the sixth port 406 and a second port 402; a
capacitor 420, coupled between the second port 402 and a third switch s3 413; said third switch s3 413 is coupled between the capacitor 420 and a fifth port 405; a fourth switch s4 414 coupled
between the capacitor 420 and a third port 403; and a fifth switch s5 415 coupled between the third port 403 and a fourth port 404.
FIG. 29 illustrates a circuit 429 that includes a modular cell 400 that is connected (charged) to a supply voltage (Vcc) whereas conductor 291 shorts ports 401 and 406, and conductor 292 shorts ports
404 and 403. The output ports of circuit 429 are denoted 406' (connected to port 406), 404' (connected to port 404) and 405.
FIG. 30 illustrates a circuit 430 that includes a modular cell 400 that is connected (charged) to the ground, whereas conductor 291 shorts ports 401 and 406, and conductor 292 shorts ports 405, 404 and 403. The output ports of circuit 430 are denoted 401' (connected to port 401), 403' (connected to port 403) and 402.
FIG. 31 illustrates three circuits 444, 446 and 448. The first circuit 444 equals circuit 429 of FIG. 29. The second circuit 448 equals circuit 430 of FIG. 30. A modular cell 446 is connected between circuits 444 and 448: (i) ports 401-403 of modular cell 446 are connected to ports 406', 405 and 404' of first circuit 444, and (ii) ports 404-406 of modular cell 446 are connected to ports 403', 402 and 401' of second circuit 448.
FIG. 32 illustrates a method 500 according to an embodiment of the invention.
Method 500 starts by stage 502 of configuring a first matrix of switches and capacitors to a charge configuration, stage 504 of coupling the first matrix to input ports of a DC to DC converter for
receiving an input DC voltage, stage 506 of configuring a second matrix of switches and capacitors to a discharge configuration, stage 508 of coupling the second matrix to output ports of the DC to
DC converter for outputting an output DC voltage. Wherein the charge configuration and the discharge configurations of each matrix out of the first and second matrices differ from each other by a
replacement of serial connections of capacitors of the matrix to parallel connections of capacitors of the matrix. Wherein the charge configuration and a discharge configuration of each of the first
and second matrices are responsive to required conversion ratio between the input DC voltage and the output DC voltage; and wherein each matrix of the first and second matrices comprises at least
four capacitors.
Stages 502, 504, 506 and 508 are followed by stage 510 of charging the first matrix and discharging the second matrix during a first period of time.
Stage 510 is followed by stage 512 of configuring the second matrix to the charge configuration, stage 514 of coupling the second matrix to the input ports of the DC to DC converter for receiving the input DC voltage, stage 516 of configuring the first matrix to the discharge configuration, and stage 518 of coupling the first matrix to the output ports of the DC to DC converter.
Stages 512, 514, 516 and 518 are followed by stage 520 of charging the second matrix and discharging the first matrix during a second period of time.
Those of skill in the art will appreciate that various equivalents can be provided without departing from the spirit of the invention.
Patent applications by Ramot At Tel Aviv University Ltd.
A 12-ounce bag of birdseed costs $3.12. A 16-ounce bag of birdseed costs $3.84.Which is the better deal? How much money per ounce would you save by buying that size bag instead of the other?
Well, consider this: if a 12-ounce bag costs $3.12, dividing gives the price per ounce: 3.12/12 = $0.26. For the 16-ounce bag at $3.84, dividing gives 3.84/16 = $0.24. If you get the 16-ounce bag, you are paying $0.24 per ounce, which means you save 2 cents per ounce. Multiply by 16 and you get $0.32 saved on the whole bag.
thxx :)
the answer is 2 cents right
yeah 2 cents per ounce.
IM CONFUSED LOL HOLD UPP.what is the better deal? How much money per ounce would you save by buying that size bag instead of the other?
Ok look. If you buy the 16 ounce package, you have a smaller price per ounce when compared to the 12 ounce package. So you want to buy the 16 ounce package because you will pay 24 cents per
ounce, rather than 26 cents. The difference is 2 cents.
SO... what is the better deal? the 16 ounce package. How much money per ounce would you save by buying that size bag instead of the other? 2 cents per ounce
Yes that is right :)
you're welcome ;)
i got 2 go so i will see u 2morrow i think
Mathematics in Action : Algebraic, Graphical, and Trigonometric Problem Solving 2nd Edition | 9780321149206 | eCampus.com
New Models for Two Real Scalar Fields and Their Kink-Like Solutions
Advances in High Energy Physics
Volume 2013 (2013), Article ID 183295, 9 pages
Research Article
^1Departamento de Matematica Aplicada and IUFFyM, Universidad de Salamanca, 37007 Salamanca, Spain
^2Instituto de Física, Universidade de São Paulo, 05314-970 São Paulo, SP, Brazil
^3Departamento de Física, Universidade Federal da Paraíba, 58051-970 João Pessoa, PB, Brazil
^4Departamento de Física, Universidade Federal de Campina Grande, 58109-970 Campina Grande, PB, Brazil
^5Departamento de Física Fundamental and IUFFyM, Universidad de Salamanca, 37007 Salamanca, Spain
Received 10 June 2013; Accepted 7 August 2013
Academic Editor: Chao-Qiang Geng
Copyright © 2013 A. Alonso-Izquierdo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
We study the presence of kinks in models described by two real scalar fields in bidimensional spacetime. We generate new two-field models, constructed from distinct but important one-field models,
and we solve them with techniques that we introduce in the current work. We illustrate the results with several examples of current interest to high energy physics.
1. Introduction
The presence of kinks and solitons in models described by real scalar fields is of direct interest to high energy physics [1, 2] and other areas of nonlinear science [3, 4]. To mention specific
studies, in high energy physics kinks appear in very interesting systems introduced, for instance, in [5, 6]. In condensed matter one can investigate domain walls in magnetic systems [7, 8], and
nonlinear excitations in Bose-Einstein condensates [9, 10], to quote just a few examples.
In this work we focus on one-field and two-field models in bidimensional spacetime. Two very interesting models described by a single real scalar field are the sine-Gordon and φ⁴ models, the latter engendering spontaneous symmetry breaking. The φ⁴ model is described by a fourth-order polynomial potential and supports kink-like solutions, whereas the sine-Gordon model is characterized by a nonpolynomial potential and supports not only solitons but also multisoliton and breather solutions. Fluctuations around the solitons and kinks, however, are governed by reflectionless Hamiltonians of a general family known from supersymmetric quantum mechanics [11]. Moreover, a rich family of nonpolynomial models with spontaneous symmetry breaking was proposed in [12]. The main feature of the kinks arising in this family is that the Hamiltonians governing the kink small fluctuations cover many of the remaining transparent SUSY Hamiltonians; see also [13].
We start with these one-field models, which are described by polynomial and nonpolynomial potentials, and we then move on to the two-field models constructed from the previous ones. Our aim is to identify kink solutions in these new models, which in general is a very difficult endeavor, as Rajaraman [14] notices: “This already brings us to the stage where no general methods are available for obtaining all localized static solutions (kinks), given the field equations. However, some solutions, but by no means all, can be obtained for a class of such Lagrangians using a little trial and error.” In this work we develop a technique which generates two-component kink solutions for two-field models in a straightforward way while avoiding the use of the trial and error method mentioned by Rajaraman. We mention, however, that there exist two-scalar-field models [15] and even three-scalar-field models [16] such that all the kink solutions can be found due to the complete integrability of the analogue mechanical problem.
For simplicity, we use natural units and then redefine fields and coordinates such that fields, space, and time are all dimensionless. The study starts in Section 2, and the one-field and two-field models are then used in Section 3 to generate new models, described by two fields. In that section we deal with polynomial potentials, and so, to enlarge the scope of the work, in Section 4 we introduce another family of models, containing a nonpolynomial function of the field. We end the work in Section 5, where we add some comments and conclusions.
2. Generalities
Let us first consider one-field models. We take the Lagrange density in the following form: Here we deal with topological solutions, so we write the potential in the following form: where , and
stands for the derivative with respect to ; that is, . The equation of motion for static field configuration is given by Here we are using , and so forth. The energy density for static solutions can
be written as for smooth superpotentials. We note that the energy is minimized to the value for field configurations that obey the first-order equation This is the Bogomol’nyi bound, and we can
easily see that solutions to (6) also solve the equation of motion (3). The field configurations that solve the first-order equation are named Bogomol’nyi–Prasad–Sommerfield (BPS) states [17, 18]. We note that since the potential does not see the sign of the superpotential derivative, there are in fact two first-order equations, one for each sign choice. This is related to the spatial reflection symmetry, which provides us with the kink/antikink solutions.
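The display equations of this section were lost in extraction. For reference, the standard first-order formalism the text describes — written here with an assumed superpotential W(φ), so the conventions are ours and need not match the paper's symbols exactly — reads:

```latex
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - V(\phi),
\qquad
V(\phi) = \tfrac{1}{2}\,W_\phi^{2},
\quad W_\phi \equiv \frac{dW}{d\phi}.

% Static equation of motion and energy for \phi = \phi(x):
\frac{d^{2}\phi}{dx^{2}} = \frac{dV}{d\phi},
\qquad
E = \int_{-\infty}^{\infty}\! dx \;\Big[\tfrac{1}{2}\Big(\frac{d\phi}{dx}\Big)^{2} + V(\phi)\Big].

% Bogomol'nyi rearrangement:
E = \int_{-\infty}^{\infty}\! dx \;\tfrac{1}{2}\Big(\frac{d\phi}{dx} \mp W_\phi\Big)^{2}
    \;\pm\; \int_{-\infty}^{\infty}\! dx \; W_\phi\,\frac{d\phi}{dx}
  \;\ge\; \big|\,W(\phi(\infty)) - W(\phi(-\infty))\,\big|,

% saturated by solutions of the first-order (BPS) equation
\frac{d\phi}{dx} = \pm\, W_\phi .
```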
Two important models in the previous class are the φ⁴ model and the sine-Gordon model. These models have explicit solutions: a kink for the φ⁴ model, and a soliton for the sine-Gordon model, where an integer identifies one among the infinitely many topological sectors of the sine-Gordon model.
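The explicit expressions for these two models were also lost in extraction; in the usual dimensionless conventions (our assumption, since the original formulas are missing) they take the standard forms:

```latex
% \phi^4 model:
W(\phi) = \phi - \tfrac{1}{3}\phi^{3},
\qquad
V(\phi) = \tfrac{1}{2}\,(1 - \phi^{2})^{2},
\qquad
\phi(x) = \tanh(x - x_0).

% sine-Gordon model (up to sign/normalization conventions):
V(\phi) = \tfrac{1}{2}\,\sin^{2}\phi,
\qquad
\phi(x) = 2\arctan\!\big(e^{\,x - x_0}\big) + n\pi,
\quad n \in \mathbb{Z},
```

with the integer n indexing the topological sector of the sine-Gordon model.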
Let us now consider two-field models. We start with the Lagrange density For static configurations, the equations of motion become We suppose that the potential is given in terms of the
superpotential by where and . Notice that the critical points of the superpotential provide us with the set of vacua for the field theory model. The energy density has the form The minimum energy
solutions comply with leading us to the BPS energy for smooth superpotentials. In terms of the superpotential, the equations of motion for static fields are written as which are solved by the
first-order equations (16), for , as we require in this work. Solutions to these first-order equations are BPS states, which solve the equations of motion. The sectors where the potential has BPS
states are named BPS sectors.
As an example, let us consider the model characterized by the superpotential (19), which has been studied by Shifman et al. in the context of supersymmetric Wess-Zumino models with two chiral superfields [19, 20]. In the purely bosonic framework the presence of domain walls and their stability have been analyzed in [21–23], while in [24, 25] the complete structure of this type of solution is given for two critical values of the coupling between the two scalar fields, by exploiting the integrability of the analogue mechanical system associated with this model. This well-known model will be used in the following sections to illustrate the applicability of the novel procedure introduced in this paper, which allows the identification of kink-like solutions in new field theories. The first-order ODEs (16) in this case take the form (20), and the potential can be seen as an extension of the φ⁴ model to the case of two fields. Here we consider the coupling to be real and positive. The vacuum set comprises four elements. Associated with the superpotential (19) we can find five BPS sectors (here we do not distinguish between kinks and antikinks) by analyzing the first-order ODEs (20). Indeed this model is a very special case, because an integrating factor can be calculated for the orbit equation extracted from (20). The kink trajectories are given by (23). For a range of the orbit parameter, formula (23) describes kinks connecting one pair of vacua, while a limiting value yields kinks linking the other pairs of points. By construction all the kinks living in a specific topological sector are energy degenerate.
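The superpotential of this example was lost in extraction; in the literature on the Shifman-Voloshin/BNRT model it is usually taken as W = φ − φ³/3 − rφχ², and under that assumption the two-component kink quoted above can be checked numerically:

```python
import numpy as np

# Assumed BNRT-type superpotential W = phi - phi^3/3 - r*phi*chi^2 (an assumption,
# since the paper's own expression was lost). Its first-order (BPS) equations are
#   dphi/dx = 1 - phi^2 - r*chi^2,   dchi/dx = -2*r*phi*chi,
# and for 0 < r < 1/2 a known two-component kink orbit is
#   phi(x) = tanh(2*r*x),  chi(x) = sqrt(1/r - 2) / cosh(2*r*x).
r = 0.25
x = np.linspace(-5, 5, 2001)
phi = np.tanh(2 * r * x)
chi = np.sqrt(1 / r - 2) / np.cosh(2 * r * x)

# Compare numerical derivatives against the right-hand sides of the BPS equations.
dphi = np.gradient(phi, x)
dchi = np.gradient(chi, x)
res_phi = np.max(np.abs(dphi - (1 - phi**2 - r * chi**2)))
res_chi = np.max(np.abs(dchi - (-2 * r * phi * chi)))
print(res_phi, res_chi)  # both residuals are small (finite-difference error only)
```

The residuals vanish up to finite-difference error, consistent with the claim that the two-component configuration saturates the first-order equations.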
In particular the and members of (23) correspond, respectively, to the one-component kink and the two-component kink lying in the topological sector connecting the minima and ; see Figure 1.
As an illustrative example we introduce a kink solution lying in the topological sectors joining the corresponding pairs of vacua; see Figure 2.
We remark that different superpotentials can generate the same potential. Indeed in this model superpotentials other than (19) have been identified for several particular values of the coupling constant. This fact provides us with new degenerate BPS and non-BPS solutions in the corresponding topological sector; see [24].
3. New Models
To generate new two-field models and the accompanying static solutions, we proceed as follows. We start from the one-field model, with the superpotential written in the form This gives the
first-order equation and for the and sine-Gordon models we have and , respectively.
Now, to introduce two-field models, we get inspiration from the previous model, given by (19) and we propose the following superpotential: This generates the field potential term The critical points
of the superpotential, determined by and , provide us with the vacua of the model: where , , are the roots of . Therefore this kind of model involves vacua assuming that . The static solutions are
obtained from the first-order equations We can manipulate these equations to get Choosing as the static solution of a one-field model, such that we can rewrite (33) as where The function is a
particular solution of the linear ODE (35), so we can write the general solution in the form with being an integration constant. Plugging (37) into (30) we get a one-parameter family of potentials
which has a nontrivial two-component kink solution, whose orbit emerges from the use of (32), (34), and (37). Notice that strictly speaking the ODE (35) must be verified only on the kink orbit; in
the rest of the internal plane a natural extension of the potential is considered.
We note that the first-order equations (32) support the orbit , providing us with a second kink solution for our model. In this case the static solutions connecting neighboring minima located at the -axis are obtained from (39). This expression coincides with the previous one only for a particular value of the integration constant. The general expression gives rise to a family of two-component field theory models, which admits a two-component kink solution whose first component coincides with the kink associated with the one-field model.
The key step of the procedure appears in (34), and it is inspired by [26]. It works nicely for a variety of choices of , and the corresponding models include polynomial and nonpolynomial functions.
3.1. A First Example
To illustrate the previous procedure with concrete examples, let us start considering where , , and are real parameters and we obviously assume that and . We use (37) together with (34) and (36) to
obtain where This leads to the field potential term Here we have the static solution extracted from (34) for our choice of in (40); it reads and from (38) and (41) we obtain Dilatations and
translations in the internal space allow us to relocate two vacua placed in the -axis at the points , such that without loss of generality we can assume that and . If we restrict ourselves to
potentials (43) with a quartic algebraic expression in the fields and we must impose the conditions and , or equivalently . In this case we get the family of potentials where , comprises four
elements provided that . The two-component kinks whose kink orbit is given by connect the points and . The expression (46) can be written as where . A reparametrization of the spatial variable allows
us to identify the present example with the potential introduced in the previous section. In this sense if we choose we get the one-component topological kink solutions (24). For any other choice of
the constant the solution (47) plays the role of the two-component kink (25). The comparison is straightforward when the constant in (46) is unity; see Figure 1. This works as a test for the procedure
introduced in this work in a well-known two-field theory model.
If we consider the special case in (40), such that , the use of (37) leads to the function which generates the field potential Because we are interested in quartic potentials in this section, we set
. The previous formula becomes whose zeroes are located at , , . Equation (38) leads to the kink orbits which connect the points with ; see Figure 3. The kink solutions are whose energy is .
As previously mentioned it is easy to identify the one-component kink linking the vacua for this model. We have whose energy is .
Notice that if we consider in (52), we recover the solutions (26) of the test model introduced in the previous section.
The above illustration shows that the procedure works nicely. Thus, below we introduce new families of models using suitable choices of the parameters.
3.1.1. A Family of Models
In the previous section we restricted ourselves to quartic potentials. Here we will introduce expressions of higher degree. Let us choose , , and in (41) and (42) with a positive integer. Moreover, we redefine the coupling constant as . Here we have which determines the potentials. These potentials involve two distinct behaviors depending on the parity of the integer: in the even case there are six degenerate minima at the points , , while in the odd case there are four degenerate minima. For all models in the above family, we can find the static solutions which connect the minima and by means of the orbit (38), given by the algebraic curve
see Figure 4. These solutions carry the energy
Now for the orbit in the BPS sectors connecting neighbor minima at the -axis, the solutions are obtained from (39) with given by (41), which may be solved case by case. For , we have the solution ,
and for we get the implicit expression and so on for other values of . It is remarkable that we can obtain the explicit expression of the two-component kinks (56) for any value of but not for the
one-component kinks.
3.1.2. Another Family of Models
Here we take , , , and integer and we redefine the coupling constant as . From (41) we have which generate a family of models whose potentials have up to minima: two minima at , one at , and up to
minima (for ) coming from the condition .
For all models of this family, we get from (38), (44), and (40) in the sector connecting the minima and the static solutions whose energy is given by ; see Figure 5.
4. Nonpolynomial Models
Let us now move on to the case of nonpolynomial potentials. Here we consider such that the field potential is which is a periodic function in the variable as illustrated in Figure 6. The set of
zeroes is given by , where where are the roots of the function , and . In this case, the static solutions are obtained from the first-order equations
We can manipulate these equations to get
Again, choosing satisfying (34) we can rewrite the above equation as where The general solution is given by Then, choosing we have the constraint for the field .
Also, the first-order equations (65) support the orbit , . In this case, the static solutions , connecting neighboring minima located at the -axis, are obtained from the equation Let us now illustrate the above procedure with an example. We start using where , , are real parameters. By employing (69) we obtain Again we restrict ourselves to cases where the exponents in the previous expression are even integers, such that we impose that with , for instance, by choosing . In this case The field potential term is obtained by plugging (74) into (63). In spite of the complexity of this expression, the kink solution is given by expression (44) and, from (70), as follows: For example for we get whose orbit is displayed in Figure 6 for the case .
5. Final Comments
In this work we proposed a new method to construct and solve models described by two real scalar fields. The procedure is simple, inspired by the approach introduced in [26], and it works for the
construction of polynomial and nonpolynomial models.
To illustrate the procedure, we studied several examples, which show how efficiently the method constructs new two-field models with nontrivial two-component kink solutions. We note that the method starts with the superpotential, so all the models we construct lead to first-order differential equations, which solve the equations of motion. In this sense, all the solutions we found are BPS states, and they are classically or linearly stable, as proved before in [27].
A relevant feature of the procedure is that it differs from the deformation procedure involving two-field models, and it is very simple to apply in investigations based on two real scalar fields. An issue which deserves further examination concerns the extension of the method to three or more real scalar fields. This is under investigation, and we hope to report the new results in a separate work.
The authors would like to thank CAPES, CNPq, and FAPESP for partial financial support.
1. A. Vilenkin and E. P. S. Shellard, Cosmic Strings and Other Topological Defects, Cambridge University Press, Cambridge, UK, 1994.
2. N. Manton and P. Sutcliffe, Topological Solitons, Cambridge University Press, Cambridge, UK, 2004.
3. G. B. Whitham, Linear and Nonlinear Waves, Wiley, New York, NY, USA, 1974.
4. D. Walgraef, Spatio-Temporal Pattern Formation, Springer, Berlin, Germany, 1981.
5. G. Basar and G. V. Dunne, “Self-consistent crystalline condensate in chiral Gross-Neveu and Bogoliubov-de Gennes systems,” Physical Review Letters, vol. 100, no. 20, Article ID 200404, 2008.
6. A. Alonso-Izquierdo, M. A. G. Leon, and J. M. Guilarte, “Kinks in a nonlinear massive sigma model,” Physical Review Letters, vol. 101, no. 13, Article ID 131602, 2008.
7. P. O. Jubert, R. Allenspach, and A. Bischof, “Magnetic domain walls in constrained geometries,” Physical Review B, vol. 69, no. 22, Article ID 220410(R), 2004.
8. A. Vanhaverbeke, A. Bischof, and R. Allenspach, “Control of domain wall polarity by current pulses,” Physical Review Letters, vol. 101, no. 10, Article ID 107202, 2008.
9. J. Belmonte-Beitia, V. M. Perez-Garcia, V. Vekslerchik, and V. V. Konotop, “Localized nonlinear waves in systems with time- and space-modulated nonlinearities,” Physical Review Letters, vol. 100, no. 16, Article ID 164102, 2008.
10. A. T. Avelar, D. Bazeia, and W. B. Cardoso, “Solitons with cubic and quintic nonlinearities modulated in space and time,” Physical Review E, vol. 79, no. 2, Article ID 025602(R), 2009.
11. J. Casahorrán, “Quantum-mechanical tunneling: differential operators, zeta-functions and determinants,” Fortschritte der Physik, vol. 50, no. 3-4, pp. 405–424, 2002.
12. M. Bordag and A. Yurov, “Spontaneous symmetry breaking and reflectionless scattering data,” Physical Review D, vol. 67, no. 2, Article ID 025003, 2003.
13. A. Alonso-Izquierdo and J. M. Guilarte, “On a family of (1+1)-dimensional scalar field theory models: kinks, stability, one-loop mass shifts,” Annals of Physics, vol. 327, no. 9, pp. 2251–2274, 2012.
14. R. Rajaraman, Solitons and Instantons: An Introduction to Solitons and Instantons in Quantum Field Theory, North-Holland, Amsterdam, The Netherlands, 1987.
15. A. Alonso-Izquierdo and J. M. Guilarte, “Generalized MSTB models: structure and kink varieties,” Physica D, vol. 237, no. 24, pp. 3263–3291, 2008.
16. A. Alonso-Izquierdo and J. M. Guilarte, “Composite solitary waves in three-component scalar field theory: three-body low-energy scattering,” Physica D, vol. 220, no. 1, pp. 31–53, 2006.
17. M. K. Prasad and C. M. Sommerfield, “Exact classical solution for the 't Hooft monopole and the Julia-Zee dyon,” Physical Review Letters, vol. 35, no. 12, pp. 760–762, 1975.
18. E. B. Bogomol'nyi, “The stability of classical solutions,” Soviet Journal of Nuclear Physics, vol. 24, no. 4, pp. 449–454, 1976.
19. B. Chibisov and M. Shifman, “BPS-saturated walls in supersymmetric theories,” Physical Review D, vol. 56, no. 12, pp. 7990–8013, 1997; Erratum: Physical Review D, vol. 58, Article ID 109901, 1998.
20. M. A. Shifman and M. B. Voloshin, “Degenerate domain wall solutions in supersymmetric theories,” Physical Review D, vol. 57, pp. 2590–2598, 1998.
21. D. Bazeia, M. J. dos Santos, and R. F. Ribeiro, “Solitons in systems of coupled scalar fields,” Physics Letters A, vol. 208, no. 1-2, pp. 84–88, 1995.
22. D. Bazeia, J. R. S. Nascimento, R. F. Ribeiro, and D. Toledo, “Soliton stability in systems of two real scalar fields,” Journal of Physics A, vol. 30, no. 23, pp. 8157–8166, 1997.
23. D. Bazeia and F. A. Brito, “Bags, junctions, and networks of BPS and non-BPS defects,” Physical Review D, vol. 61, no. 10, Article ID 105019, 2000.
24. A. Alonso-Izquierdo, M. A. G. Leon, and J. M. Guilarte, “The kink variety in systems of two coupled scalar fields in two space-time dimensions,” Physical Review D, vol. 65, Article ID 085012, 2002.
25. A. Alonso-Izquierdo, M. A. G. Leon, J. M. Guilarte, and M. D. Mayado, “Adiabatic motion of two-component BPS kinks,” Physical Review D, vol. 66, no. 10, Article ID 105022, 2002.
26. D. Bazeia, A. Das, L. Losano, and M. J. dos Santos, “Traveling wave solutions of nonlinear partial differential equations,” Applied Mathematics Letters, vol. 23, no. 6, pp. 681–686, 2010.
27. D. Bazeia and M. M. Santos, “Classical stability of solitons in systems of coupled scalar fields,” Physics Letters A, vol. 217, no. 1, pp. 28–30, 1996.
IMA Newsletter #364
Tristram Bogart (University of Washington) IMA postdoc seminar: The small Chvátal rank of an integer matrix
Abstract: Given an integer matrix A and an integer vector b, we define the small Chvátal rank (SCR) of the system Ax <= b to be the least number of iterations of an iterated Hilbert basis construction needed to obtain all facet normals of the integer hull (that is, the convex hull of the set of integer solutions). Our procedure is a variation of the Chvátal-Gomory procedure to compute integer hulls of polyhedra. The key difference is that our procedure ignores the right-hand side vector b and uses only the matrix A.
We prove that the SCR of Ax <= b is bounded above by the Chvátal rank of Ax <= b and is hence finite. To justify the adjective "small", we show that when n=2, SCR is at most one while the Chvátal rank can be arbitrarily high. For a family of examples from combinatorial optimization, we prove that the SCR is one or two while the Chvátal rank is known to be roughly log(n).
We next relate SCR to the notion of supernormality of a vector configuration (specifically, the rows of A). Supernormality is a generalization of unimodularity, and we exhibit an infinite family of vector configurations arising from odd cycles that are supernormal but not unimodular. This answers a question of Hosten, Maclagan and Sturmfels.
Lastly, we provide lower bounds on SCR. We prove that when n >= 3, SCR can be arbitrarily high and exponentially large in the input size. We also prove that for polytopes contained in the d-dimensional unit cube, the SCR can be at least d/2 - o(d), which is of the same order as the known lower bounds on the Chvátal rank for such systems. Our methods thus provide an alternate way to compute lower bounds on the Chvátal rank.
This project is joint work with Rekha Thomas.
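For readers unfamiliar with the Chvátal–Gomory procedure the abstract builds on, here is a toy one-step illustration (our own example, not the SCR/Hilbert-basis construction itself):

```python
from math import floor, gcd

# One Chvátal–Gomory rounding step on a toy polytope (illustration only, not the
# SCR procedure). For P = {x, y >= 0, 2x + 2y <= 3}, the inequality 2x + 2y <= 3
# is valid for P; dividing by gcd(2, 2) = 2 and rounding the right-hand side down
# gives the cut x + y <= floor(3/2) = 1, valid for every integer point of P.
a, b, rhs = 2, 2, 3
g = gcd(a, b)
cut = (a // g, b // g, floor(rhs / g))
print(cut)  # (1, 1, 1), i.e. the cut x + y <= 1

# Check the cut against all integer points of P in a small box.
pts = [(x, y) for x in range(4) for y in range(4)
       if x >= 0 and y >= 0 and a * x + b * y <= rhs]
assert all(cut[0] * x + cut[1] * y <= cut[2] for x, y in pts)
```

The cut x + y <= 1 is exactly a facet of the integer hull here, which is the kind of inequality the iterated procedures in the abstract aim to generate.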
Kenneth R. Driessel (Iowa State University) Real algebraic geometry tutorial: Quotients of polynomial rings, Hermite's quadratic form and root counting
Abstract: I shall mainly follow the material in the section "Zero-dimensional systems" in the book by Basu, Pollack and Roy.
Stephen E. Fienberg (Carnegie-Mellon University) Algebraic geometry and applications seminar: Statistical formulation of issues associated with multi-way contingency
tables and the links to algebraic geometry
Abstract: Many statistical problems arising in the context of multi-dimensional tables of non-negative counts (known as contingency tables) have natural representations in algebraic and polyhedral
geometry. I will introduce some of these problems in the context of actual examples of large sparse tables and talk about how we have treated them and why. For example, our work on bounds for
contingency table entries has been motivated by problems arising in the context of the protection of confidential statistical data; results on decompositions related to graphical model representations
have explicit algebraic geometry formulations. Similarly, results on the existence of maximum likelihood estimates for log-linear models are tied to polyhedral representations. It turns out that
there are close linkages that I will describe.
Serkan Hosten (San Francisco State University) Algebraic geometry and applications seminar: An introduction to algebraic statistics
Abstract: This will be a gentle introduction to the applications of algebraic geometry to statistics. The main goal of the talk is to present statistical models, i.e. sets of probability
distributions (defined parametrically most of the time), as algebraic varieties. I will give examples where defining equations of such statistical model varieties have been successfully computed:
various graphical models and models for DNA sequence evolution. I will also talk about the algebraic degree of maximum likelihood estimation with old and new examples.
Evelyne Hubert (Institut National de Recherche en Informatique et en Automatique) Algebraic geometry and applications seminar: Rational and algebraic invariants of a group action
Abstract: We consider a rational group action on the affine space and propose a construction of a finite set of rational invariants and a simple algorithm to rewrite any rational invariant in terms
of those generators. The construction can be extended to provide algebraic foundations to Cartan's moving frame method, as revised in [Fels & Olver 1999]. This is joint work with Irina Kogan, North
Carolina State University.
Janet Pavelich Keel (Lockheed Martin Missiles and Space Company, Inc.) IMA/MCIM Industrial problems seminar: Data fusion in a UAV surveillance system
Abstract: This talk will be about a recent Lockheed Martin project, the implementation of a data fusion method in the surveillance system of an Unmanned Aerial Vehicle (UAV). In this context data
fusion is the problem of sequentially estimating the state of a dynamic system - ships at sea - given a sequence of noisy and incomplete measurements from the sensor suite on the UAV: radar, an
Electro-Optical/Infrared (EO/IR) camera, and an Automatic Identification System (AIS) receiver. We will present the basic algorithms used in this project, and will then discuss their limitations and
possible improvements.
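Sequential estimation of this kind is typically handled with a Kalman filter; the following 1-D sketch is a generic illustration (the model, constants, and noise levels are our assumptions, not the system described in the talk):

```python
import random

# Minimal 1-D Kalman filter: sequentially estimate a ship's (here static) position
# from noisy measurements. Generic illustration, not the Lockheed Martin system;
# q (process noise) and r (measurement noise) are assumed values.
def kalman_1d(measurements, q=1e-3, r=0.5):
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update: correct with the measurement residual
        p *= (1 - k)
        estimates.append(x)
    return estimates

random.seed(0)
truth = 10.0
zs = [truth + random.gauss(0, 0.7) for _ in range(200)]
est = kalman_1d(zs)
print(round(est[-1], 2))  # converges close to the true position 10.0
```

In the actual surveillance problem the state is multidimensional (position, velocity) and the fusion step combines radar, EO/IR, and AIS measurements with different noise models, but the predict/update cycle is the same.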
Michael Kerber (Universität Kaiserslautern) IMA postdoc seminar: Counting rational curves in P^2 using tropical geometry
Abstract: Tropical geometry is a rapidly developing field of algebraic geometry with important applications in quite distinct areas of pure and applied mathematics. In this talk, we give a short
introduction to the theory of tropical curves and discuss its applications to the enumerative geometry of plane curves, with a focus on computing the Kontsevich number N[d] of rational plane curves
of fixed degree d interpolating an appropriate number of given points in general position.
Hannah Markwig (University of Minnesota Twin Cities) IMA postdoc seminar: Counting rational curves in P^2 using Kontsevich's formula
Abstract: The numbers N[d] of rational plane curves has for a long time only been known for small d. In 1990, Kontsevich came up with a recursive formula. In this talk, we will give some basic ideas
and an example how Kontsevich's formula can be derived.
Peter J. Olver (University of Minnesota Twin Cities) Algebraic geometry and applications seminar: Moving frames in classical invariant theory and computer vision
Abstract: Classical invariant theory was inspired by the basic problems of equivalence and symmetry of polynomials (or forms) under the projective group. In this talk, I will explain how a powerful
new approach to the Cartan method of moving frames can be applied to classify algebraic and differential invariants for very general group actions, leading, among many other applications, to new
solutions to the equivalence and symmetry problems arising in both invariant theory, differential geometry, and object recognition in computer vision.
Philipp Rostalski (Eidgenössische TH Zürich-Hönggerberg) IMA postdoc seminar: Characterization and computation of real-radical ideals using semidefinite programming techniques
Abstract: In this talk I will discuss a method (joint work with M. Laurent and J.-B. Lasserre) for computing all real points on a zero-dimensional semi-algebraic set described by polynomial
equalities and inequalities as well as some "nice" polynomial generators for the corresponding vanishing ideal, namely border resp. Gröbner basis for the real radical ideal. In contrast to exact
computational algebraic methods, the method we propose uses numerical linear algebra and semidefinite optimization techniques to compute approximate solutions and generator polynomials. The method is
real-algebraic in nature and prevents the computation of any complex solution. The proposed method fits well into a relatively new branch of mathematics called "Numerical Polynomial Algebra."
Energy Efficient Design for Two-Way AF Relay Networks
International Journal of Antennas and Propagation
Volume 2014 (2014), Article ID 292087, 6 pages
Research Article
^1Institute of Embedded Software and Systems, Beijing University of Technology, Beijing 100022, China
^2Department of Vehicle Information Systems, North Information Control Group Co., Ltd., Nanjing 211153, China
^3Department of Physical, No. 1 High School of Shunde, Foshan 528300, China
Received 21 August 2013; Revised 9 December 2013; Accepted 26 December 2013; Published 28 January 2014
Academic Editor: Athanasios Panagopoulos
Copyright © 2014 Yong Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Conventional designs on two-way relay networks mainly focus on the spectral efficiency (SE) rather than energy efficiency (EE). In this paper, we consider a system where two source nodes communicate
with each other via an amplify-and-forward (AF) relay node and study the power allocation schemes to maximize EE while ensuring a certain data rate. We propose an optimal energy-efficient power
allocation algorithm based on iterative search technique. In addition, a closed-form suboptimal solution is derived with reduced complexity and negligible performance degradation. Numerical results
show that the proposed schemes can achieve considerable EE improvement compared with conventional designs.
1. Introduction
The unprecedented growth of wireless networks has resulted in a rapid rise of energy consumption and led to an emerging trend of green radio [1]. Energy-efficient system design for green radio is gaining considerable attention from both industry and academia for its positive impact on the environment. The energy efficiency (EE) [2], which is widely defined as the system throughput per unit energy, has become one of the critical performance metrics for future communication systems.
In the green communication systems, several advanced wireless communication techniques, such as cooperative relay transmissions [3, 4] and small cells [5, 6], have been adopted to provide significant
capacity improvements and reduce energy consumption. Compared with direct transmission, relay technique is essential to provide a more reliable transmission due to the smaller path loss attenuation
of shorter hops. The two-way relay system was introduced in [7] for two source nodes which want to exchange information with each other via a relay.
Most previous works on two-way relay networks mainly focused on relay selection or power allocation from the perspective of spectral efficiency (SE) [8, 9]. In [8], the smaller of the received signal-to-noise ratios (SNRs) of the two transceivers was maximized through joint relay selection and power allocation. The achievable data rate region for the two source nodes was obtained in [9], and considerable SE enhancement was achieved compared with a one-way relay network. However, the EE is a strictly decreasing function of SE when only transmit power is considered [1]. After taking into account the circuit power used for electronic devices and signal processing, the relation between SE and EE becomes more complicated. This indicates that maximizing SE may not maximize EE in a practical two-way relay system.
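The effect of circuit power on the SE–EE relation can be seen in a tiny numerical sketch (all constants below are illustrative assumptions, not values from the paper):

```python
import math

# Why circuit power changes the SE-EE tradeoff (generic single-link illustration;
# bandwidth B, channel gain g, noise N0 and circuit power Pc are assumed values).
B, g, N0, Pc = 1.0, 1.0, 1.0, 0.1

def ee(p):
    """Bits per joule: rate B*log2(1 + g*p/N0) over total power p + Pc."""
    return B * math.log2(1 + g * p / N0) / (p + Pc)

# With Pc = 0, EE decreases monotonically in p; with Pc > 0 it first rises,
# peaks, then falls, so pushing SE up (large p) does not maximize EE.
powers = [0.01 * k for k in range(1, 1000)]
best_p = max(powers, key=ee)
print(round(best_p, 2), round(ee(best_p), 3))
```

The interior maximum is what makes the EE objective quasi-concave in the transmit power, which is the structural property the optimization in this paper exploits.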
Recently, some works have investigated energy-related designs in two-way relay networks [10–15]. An efficient power allocation scheme to minimize the total transmit power consumption was presented in [10] for machine-to-machine networks, where the source node communicates with its corresponding destination node via one or more relays. The authors in [12] proposed a transmission scheme to minimize the energy consumed per transmitted bit for both one-way and two-way relay networks by joint relay selection and power allocation. Reference [13] investigated a joint relay selection and power allocation scheme to minimize overall transmit power while ensuring a certain data rate. However, the circuit power was ignored in all of them. In a practical system, not only the transmit power but also the circuit power contributes to the total power consumption. Therefore, minimizing the transmit power may not necessarily lead to a high EE [14]. In [15], after taking a realistic power consumption model into consideration, an analytical framework was developed for the total energy consumption of an AF multihop network satisfying an average bit error rate requirement. Based on this framework, the impact of the relay's location and of the energy resource allocation between the relay and the source was also evaluated. The EE of two-way and one-way relay systems was analyzed in [11], where a search technique was used to find the optimal solution, which has high complexity and incurs additional energy.
In this paper, we propose an optimal energy-efficient power allocation algorithm for two-way relay networks that ensures a certain throughput requirement. Using the results derived in [14], we first study the transmit power minimization problem for a fixed data rate, analyze the relation between SE and EE, and then apply this solution to formulate an equivalent optimization problem with a single variable. To solve this problem, a bisection search technique is used. However, the optimal power allocation with an iterative search method has high computational complexity. In order to reduce the complexity, a closed-form suboptimal power allocation algorithm is derived. The proposed algorithm not only eliminates the complexity of the bisection search but also achieves near-optimal EE performance in two-way relay networks.
The rest of the paper is organized as follows. Section 2 introduces a two-way relay network model and formulates the corresponding power allocation problem. In Section 3, both the optimal and
suboptimal power allocation schemes are proposed. Simulation results are given in Section 4. Section 5 concludes this paper.
2. System Model and Problem Formulation
2.1. System Model
Consider a relay network consisting of two source nodes and exchanging information with each other at the same transmission rate via a half-duplex AF relay . All the nodes are equipped with a single
antenna. Assume that the signal is severely attenuated between two source nodes because of the high shadowing caused by obstacles or long distance, and thus the direct link transmission is not
considered here.
The channels among three nodes are assumed as block fading channels and the perfect channel state information (CSI) is available at each node. Denote by and the channel coefficients from source nodes
and to relay node , respectively. To exchange information from both of two source nodes, and must transmit in two phases. In the multiple access phase, both of the two source nodes and simultaneously
transmit their respective unit-energy symbols and to the relay node . Let and be the transmit power of source nodes and , respectively. Thus the received signal can be expressed as where is the noise
at relay node . All the noise terms in this paper are assumed to be independent Gaussian variables with zero mean and a variance of .
In the broadcast phase, transmits the received signal to and with an amplification factor . The received signals and at both of the two source nodes can be written as where is the additive noise at
the source node and is the transmit power of relay node . Since each source node receives a copy of its own transmitted signal as interference, the partner's message can be decoded after
self-interference cancelation (SIC).
2.2. Problem Formulation
Define parameters and as the instantaneous channel gain-to-noise ratio (CNR); then the received SNR at two source nodes can be given as
The instantaneous transmission data rates of nodes to can be obtained from the Shannon capacity formula; they are, respectively, where is the transmission bandwidth. Hence, the overall data rate can be derived as
According to [16], the circuit power consumption is incurred by signal processing and active electronic devices in the network, and it can be modeled as a linear function of the transmission rate where is the static circuit power and is the dynamic circuit power per unit data rate. Thus the total power consumption is where is a constant related to the efficiency of the power amplifier.
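As a numerical illustration of the power model just described (total power equals amplifier-scaled transmit power plus static and rate-dependent circuit power), here is a minimal sketch. The parameter names and default values are assumptions for illustration only, since the paper's own symbols and settings were lost in extraction:

```python
# Minimal sketch of the linear power-consumption model described above.
# Parameter names and default values are illustrative assumptions; the
# paper's own symbols did not survive extraction.

def total_power(p_transmit, rate, p_static=0.1, beta=1e-4, eta=0.35):
    """Total power: amplifier-scaled transmit power plus circuit power,
    where circuit power is linear in the data rate:
    P_total = p_transmit / eta + p_static + beta * rate."""
    return p_transmit / eta + p_static + beta * rate

# Example: 1 W transmit power at a 200 kbps data rate.
p = total_power(1.0, 200e3)
```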
In order to guarantee the quality of service (QoS), the transmission rate should be restricted by where is the minimum data rate requirement related to the type of traffic.
Since the total transmit power at node is limited, we have where is the maximum allowable transmit power.
The objective of this paper is to find the optimal power allocation for each node to maximize the EE of the two-way relay network while ensuring a certain data rate. Hence, the optimization problem can be formulated as follows:
We assume that can be achieved under constraint (10d). If not, no feasible solution of problem exists, and the scheduler may have to decrease to make the problem feasible.
3. Energy-Efficient Design
In this section, we aim to find the optimal transmit powers , , and to maximize the EE while satisfying a certain data rate requirement. Before solving problem (10a)–(10d), we introduce two auxiliary optimization problems below. Define the transmit power minimization (MinP) problem under the required minimum data rate constraint as
Using a similar approach to that in [14], we obtain the optimal solution to problem :
where .
The other conventional optimization problem is the data rate maximization () problem subject to the total transmit power constraint. It can be described as follows:
According to (12a), (12b), and (12c), it can be seen that both the overall data rate and the transmit power at node are strictly increasing functions of . Thus we can obtain the maximum achievable data rate where and is the inverse function of .
Since we assume that is achievable under the constraints (10b) and (10d), we have . The equivalent optimization problem of can be reformulated as
where .
According to [16], we can prove that the system EE is a quasiconcave function of .
3.1. Optimal Energy-Efficient Power Allocation
Since the EE is a quasiconcave function of , an optimal solution that maximizes EE without any constraint always exists. Then is an increasing function of when and is decreasing when . The optimal
solution for problem is given as It is noticeable that (16) is equivalent to
What remains for problem is to determine , which can be done by a bisection search. We list the optimal power allocation method in Algorithm 1. In each iteration, the search region is divided into two parts. To update the search region, we determine which part contains by evaluating the first derivative of . Note that the sign of can be determined by calculating the value of , where is an infinitely small positive constant.
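A minimal sketch of this kind of bisection search on a quasiconcave EE function is shown below. The EE expression used here is a generic stand-in (log-rate over linear total power), not the paper's own formula, which was lost in extraction:

```python
# Sketch of the bisection search described above for maximizing a
# quasiconcave energy-efficiency function EE(p). The EE expression below
# is a stand-in; the paper's own formula did not survive extraction.
import math

def bisect_max(ee, lo, hi, tol=1e-9):
    """Locate the maximizer of a quasiconcave function by bisecting on
    the sign of a numerical first derivative."""
    eps = 1e-7  # small step used to estimate the derivative sign
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        slope = ee(mid + eps) - ee(mid)
        if slope > 0:   # EE still increasing: maximizer lies to the right
            lo = mid
        else:           # EE decreasing: maximizer lies to the left
            hi = mid
    return 0.5 * (lo + hi)

# Toy EE: throughput log2(1 + g*p) per unit total power (p/eta + p_c).
g, eta, p_c = 10.0, 0.35, 0.5
ee = lambda p: math.log2(1.0 + g * p) / (p / eta + p_c)
p_star = bisect_max(ee, 1e-6, 10.0)
```

Each iteration halves the search interval, so the interval shrinks below tol after roughly log2((hi - lo)/tol) iterations, which matches the iteration count discussed in Section 3.2.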
3.2. Suboptimal Energy-Efficient Power Allocation
Although the optimal power allocation solution is desirable, the price to pay is the computational complexity of the iterative search. It takes at most iterations to converge, which incurs additional energy consumption, where denotes the smallest integer not less than . Thus, this straightforward approach is clearly impractical, and we need a low-complexity approach to solve this problem.
Consider the following approximation for large : the EE of problem can then be written as where and . It can easily be proved that is also a quasiconcave function of , so the unique solution achieving the maximum without any constraint always exists. Differentiating (19) with respect to and setting the derivative to zero, we obtain where
Since is a positive variable, equation (20) can be written as After some mathematical manipulation, we can find the root of (22), which is given as where is the base of the natural logarithm and denotes the real branch of the Lambert W function [17]. Thus the suboptimal power allocation can be obtained by substituting into (17), (12a), (12b), and (12c). The suboptimal transmit power at node is
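The closed-form solution relies on the real (principal) branch of the Lambert W function, defined by W(x) e^{W(x)} = x. A minimal Newton-iteration sketch is given below; in practice a library routine such as scipy.special.lambertw would normally be used instead:

```python
# Minimal sketch of the real (principal) branch W0 of the Lambert W
# function used in the closed-form suboptimal solution: W(x) solves
# w * exp(w) = x. For illustration only; prefer a library implementation
# (e.g. scipy.special.lambertw) in real use.
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 for x >= 0, via Newton's method."""
    w = math.log(1.0 + x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# W(1) is the omega constant, about 0.567143...
omega = lambert_w(1.0)
```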
4. Numerical Results
In this section, we provide simulations to evaluate the EEs of our proposed schemes and validate the previous analysis. The channel gains and are assumed to be independent block Rayleigh fading with
the same average CNR. Simulation parameters are set as follows: kHz, W, W, W, , kbps, and W/kbps.
The system EE and the corresponding data rate versus the average CNR are evaluated in Figures 1 and 2 under different strategies. It can be seen that both the EE and the data rate increase with the average CNR. This is because the energy-efficient design tends to use less transmit power as the CNR increases. Figure 1 shows that the proposed schemes outperform the conventional schemes and that the suboptimal scheme achieves near-optimal performance. The performance gap between the proposed and conventional schemes widens as the CNR increases. In the low-CNR region the transmit power dominates the total power consumption, while in the high-CNR region the circuit power dominates for the energy-efficient design. Figure 2 indicates that, for specific channel realizations with bad quality, the energy-efficient design may have to operate exactly at the maximum transmit power to meet the minimum rate requirement. It can also be seen that the proposed schemes have lower SE than the MaxR scheme but achieve higher EE with less power consumption, at the cost of some SE loss, when the CNR is large enough. This indicates that our proposed schemes achieve a better tradeoff between EE and SE.
5. Conclusion
In this paper, we studied the energy-efficient design of two-way relay networks under the constraints of transmit power and a minimum rate requirement. We proposed an optimal power allocation scheme based on an iterative search method. In order to reduce the computational complexity, we also derived a closed-form suboptimal algorithm that performs close to the optimal scheme. Compared with traditional SE-oriented power allocation, the proposed schemes provide a significant improvement in terms of EE.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work is partially supported by funding from the Beijing Natural Science Foundation (no. 4122009).
References
1. Y. Chen, S. Zhang, S. Xu, and G. Y. Li, “Fundamental trade-offs on green wireless networks,” IEEE Communications Magazine, vol. 49, no. 6, pp. 30–37, 2011.
2. D. Feng, C. Jiang, G. Lim, L. J. Cimini, Jr., G. Feng, and G. Y. Li, “A survey of energy-efficient wireless communications,” IEEE Communications Surveys and Tutorials, vol. 15, no. 1, pp. 167–178, 2013.
3. V. K. Sakarellos, C. I. Kourogiorgas, D. Skraparlis, A. D. Panagopoulos, and J. D. Kanellopoulos, “End-to-end performance analysis of millimeter wave triple-hop backhaul transmission systems,” Wireless Personal Communications, vol. 71, no. 4, pp. 2725–2740, 2013.
4. V. K. Sakarellos, D. Skraparlis, A. D. Panagopoulos, and J. D. Kanellopoulos, “Cooperative diversity performance in millimeter wave radio systems,” IEEE Transactions on Communications, vol. 60, no. 12, pp. 3641–3649, 2012.
5. H. Wang, J. Jiang, J. Li, M. Ahmed, and M. Peng, “High energy efficient heterogeneous networks: cooperative and cognitive techniques,” International Journal of Antennas and Propagation, vol. 2013, Article ID 231794, 7 pages, 2013.
6. V. K. Sakarellos, D. Skraparlis, and A. D. Panagopoulos, “Cooperation within the Small Cell: the indoor, correlated shadowing case,” Physical Communication, vol. 9, pp. 16–22, 2013.
7. P. Larsson, N. Johansson, and K.-E. Sunell, “Coded bi-directional relaying,” in Proceedings of the IEEE 63rd Vehicular Technology Conference (VTC '06), pp. 851–855, May 2006.
8. S. Talwar, Y. Jing, and S. Shahbazpanahi, “Joint relay selection and power allocation for two-way relay networks,” IEEE Signal Processing Letters, vol. 18, no. 2, pp. 91–94, 2011.
9. P. Popovski and H. Yomo, “Physical network coding in two-way wireless relay channels,” in Proceedings of the IEEE International Conference on Communications (ICC '07), pp. 707–712, Glasgow, Scotland, June 2007.
10. G. A. Elkheir, A. S. Lioumpas, and A. Alexiou, “Energy efficient AF relaying under error performance constraints with application to M2M networks,” in Proceedings of the IEEE 22nd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '11), pp. 56–60, September 2011.
11. C. Sun and C. Yang, “Energy efficiency analysis of one-way and two-way relay systems,” EURASIP Journal on Wireless Communications and Networking, vol. 2012, article 46, 2012.
12. R. Huang, C. Feng, T. Zhang, and W. Wang, “Energy-efficient relay selection and power allocation scheme in AF relay networks with bidirectional asymmetric traffic,” in Proceedings of the 14th International Symposium on Wireless Personal Multimedia Communications (WPMC '11), pp. 7–11, October 2011.
13. M. Xu, Y. Wang, G. Li, and W. Lin, “On the energy-efficient power allocation for amplify-and-forward two-way relay networks,” in Proceedings of the IEEE 74th Vehicular Technology Conference (VTC Fall '11), pp. 1–5, September 2011.
14. C. Sun and C. Yang, “Is two-way relay more energy efficient?” in Proceedings of the 54th Annual IEEE Global Telecommunications Conference (GLOBECOM '11), pp. 1–6, December 2011.
15. O. Waqar, M. A. Imran, M. Dianati, and R. Tafazolli, “Energy consumption analysis and optimization of BER-constrained amplify-and-forward relay networks,” IEEE Transactions on Vehicular Technology, 2013.
16. C. Xiong, G. Y. Li, S. Zhang, Y. Chen, and S. Xu, “Energy- and spectral-efficiency tradeoff in downlink OFDMA networks,” IEEE Transactions on Wireless Communications, vol. 10, no. 11, pp. 3874–3886, 2011.
17. R. M. Corless, G. H. Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth, “On the Lambert W function,” Advances in Computational Mathematics, vol. 5, no. 4, pp. 329–359, 1996.
Homework Help
Posted by Tiffany on Thursday, August 23, 2007 at 6:46pm.
I need help with solving this equation
and the steps on how to do it?
• Math - ~christina~, Thursday, August 23, 2007 at 7:03pm
well for this question the whole goal is to get all the x's on one side and all the numbers on the other side.
1st combine all like terms on the same sides
-11x-28 = 17x-14
then go and bring the x's over to one side and the numbers over to the other side(subtract or add)
-28x= 14
then divide to get x
x= 14/(-28)
so what do you get?
• Math - DrBob222, Thursday, August 23, 2007 at 7:09pm
Add like terms. For example, on the left
5X + 22X - 38X = -11X
on the right -17 -9 + 12 = -14
-11X-9-19 = 17X-14
Combine the -9 and -19
-11X - 28 = 17X - 14
Now add 28 to both sides
-11X - 28 + 28 = 17X - 14 + 28
-11X = 17X + 14
Subtract 17X from both sides
-28X = 14
Multiply both sides by -1
28X = -14
Divide both sides by 28
X = -1/2
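As a quick mechanical check of the answer (not part of the original thread), the combined form of the equation from the replies, -11x - 28 = 17x - 14, can be verified in exact arithmetic:

```python
# Exact-arithmetic check of the answer worked out above, using the
# combined form of the equation from the thread: -11x - 28 = 17x - 14.
from fractions import Fraction

x = Fraction(-1, 2)      # the answer both helpers arrive at
left = -11 * x - 28
right = 17 * x - 14      # both sides should agree at x = -1/2
```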
Dracut Math Tutor
Find a Dracut Math Tutor
...Almost all Western authors draw references from the Bible to enrich their writing. I've spent many hours daily in Bible studies for over ten years and have learned many different methods for
doing so. I've also studied under many of the most respected Bible scholars.
55 Subjects: including trigonometry, ACT Math, reading, algebra 1
...I recently (03/2013) passed the Massachusetts Test for Educator Licensing (MTEL) subject 09 test (which covers the standard math curriculum from grades 8 - 12) with the maximum scores in each
category. I have a Ph.D. in physical chemistry, and taught general and physical chemistry at the college...
12 Subjects: including linear algebra, algebra 1, algebra 2, calculus
...I am also an expert in time management and study skills, which are an essential part of scholastic success. I am patient, enthusiastic about learning, and will work very hard with you to achieve your academic goals. —Joanna
I have three years' experience tutoring high school students in biology.
10 Subjects: including algebra 1, algebra 2, biology, chemistry
...I am a certified math teacher with many years teaching experience who will help your child catch up and become proficient at fulfilling the requirements of elementary school math. I am very
hands-on, and give students lots of problems to solve for the practice they need to really master the topi...
9 Subjects: including prealgebra, algebra 1, algebra 2, geometry
I can help you excel in your math or physical science course. I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and
non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in...
16 Subjects: including algebra 1, geometry, precalculus, trigonometry
Math Help
find the largest number that will divide 398,436,542 leaving remainders 7,11,15 respectively
I will use the following fact.

FACT: If a number m leaves remainder r when divided by n, then n divides m - r.

Let's call the largest number that does the job n. So we want n | 398 - 7, n | 436 - 11, and n | 542 - 15. In other words, we want the largest n that divides 391, 425, and 527. The largest number dividing all three of 391, 425, and 527 is their HCF. Thus n = hcf(391, 425, 527) = 17.
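The argument above translates directly into a short computation; a sketch:

```python
# The fact used above: if m leaves remainder r when divided by n, then n
# divides m - r. So the answer is gcd(398 - 7, 436 - 11, 542 - 15).
from math import gcd

n = gcd(gcd(398 - 7, 436 - 11), 542 - 15)   # gcd(391, 425, 527)

# Sanity check: dividing by n reproduces the required remainders.
remainders = [m % n for m in (398, 436, 542)]
```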
There are Infinitely Many Primes (Erdős)
Problem: Prove there are infinitely many primes
Solution: Denote by $\pi(n)$ the number of primes less than or equal to $n$. We will give a lower bound on $\pi(n)$ which increases without bound as $n \to \infty$.
Note that every number $n$ can be factored as the product of a square free number $r$ (a number which no square divides) and a square $s^2$. In particular, to find $s^2$ recognize that 1 is a square
dividing $n$, and there are finitely many squares dividing $n$. So there must be a largest one, and then $r = n/s^2$. We will give a bound on the number of such products $rs^2$ which are less than or
equal to $n$.
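This decomposition is easy to compute directly; a naive sketch (not part of the original post):

```python
# A direct (if naive) computation of the decomposition n = r * s^2 used
# in the proof: s^2 is the largest square dividing n, and r = n / s^2 is
# square-free.
def squarefree_decomposition(n):
    s = 1
    for k in range(1, int(n ** 0.5) + 1):
        if n % (k * k) == 0:
            s = k           # keep the largest k with k^2 | n
    return n // (s * s), s  # (square-free part r, s)

r, s = squarefree_decomposition(72)   # 72 = 2 * 6^2
```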
If we want to count the number of square-free numbers less than $n$, we can observe that each square free number is a product of distinct primes, and so (as in our false proof that there are finitely
many primes) each square-free number corresponds to a subset of primes. At worst, we can allow these primes to be as large as $n$ (for example, if $n$ itself is prime), so there are no more than $2^
{\pi(n)}$ such subsets.
Similarly, there are at most $\sqrt{n}$ square numbers less than $n$, since if $x > \sqrt{n}$ then $x^2 > n$.
At worst the two numbers $r, s^2$ will be unrelated, so the total number of factorizations $rs^2$ is at most the product $2^{\pi(n)}\sqrt{n}$. In other words,
$2^{\pi(n)}\sqrt{n} \geq n$
The rest is algebra: divide by $\sqrt{n}$ and take base-2 logarithms to see that $\pi(n) \geq \frac{1}{2} \log_2(n)$. Since $\log_2(n)$ is unbounded as $n$ grows, so must $\pi(n)$ be. Hence, there are infinitely many primes.
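The bound can be checked numerically with a sieve (the logarithm in the proof is base 2, since the subset count is $2^{\pi(n)}$); a quick sketch:

```python
# Numerical check of the bound pi(n) >= (1/2) * log2(n) from the proof.
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def pi(n):
    return len(primes_up_to(n))

# (n, pi(n), lower bound) for a few values of n.
checks = [(n, pi(n), 0.5 * math.log2(n)) for n in (10, 100, 1000, 10000)]
```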
Discussion: This is a classic analytical argument originally discovered by Paul Erdős. One of the classical ways to investigate the properties of prime numbers is to try to estimate $\pi(n)$. In
fact, much of the work of the famous number theorists of the past involved giving good approximations of $\pi(n)$ in terms of logarithms. This usually involved finding good upper bounds and lower
bounds and limits. Erdős’s proof is entirely in this spirit, although there are much closer and more accurate lower and upper bounds. In this proof we include a lot of values which are not actually
valid factorizations (many larger choices of $r, s^2$ will have their product larger than $n$). But for the purposes of proving there are infinitely many primes, this bound is about as elegant as one
can find.
6 thoughts on “There are Infinitely Many Primes (Erdős)”
1. Why is n a lower bound for that product?
□ Each number from 1 to $n$ has a unique factorization into a product of that form with each factor less than or equal to $n$, and we counted all possible such products where the two factors
are less than or equal to $n$. It’s a lower bound because we actually counted too many such products. For example, if $n = 9$, we counted the product $7*2^2$, even though this does not
correspond to the factorization of any number less than or equal to $n$.
☆ Ah, thanks. I missed that we were counting factorizations of numbers <= n and not just of n.
2. I must be missing something.
r=n/s^2, where r is square-free does not seem to hold for n=4. Either 4=4/1^2, or 2=4/sqrt(2)^2. But I assume r, n, and s are positive integers, and n>=2. Or perhaps 1 is considered to be a
square only when required?
Still, I don’t see that this affects the result.
□ I left out 1=4/2^2, but 1 is also a square.
□ For the purposes of making these results more elegant, 1 is the exception and is not considered to be a square. The definition would not make sense otherwise: 1 is a square that divides all
numbers, so there would be no square-free numbers. | {"url":"http://jeremykun.com/2012/11/10/there-are-infinitely-many-primes-erdos/","timestamp":"2014-04-16T04:20:47Z","content_type":null,"content_length":"84154","record_id":"<urn:uuid:376e1720-15d3-4a0a-bf8d-cd3f7792cb6f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Growth of Meromorphic Solutions for Linear Difference Equations
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 619296, 6 pages
Research Article
^1School of Mathematical Sciences, South China Normal University, Guangzhou 510631, China
^2Department of Mathematics, College of Natural Sciences, Pusan National University, Pusan 609-735, Republic of Korea
Received 9 June 2013; Accepted 19 August 2013
Academic Editor: Norio Yoshida
Copyright © 2013 Zong-Xuan Chen and Kwang Ho Shon. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We mainly study the growth of meromorphic solutions of the linear difference equations and where are polynomials such that , and give the weakest condition to guarantee that the orders of all transcendental meromorphic solutions of the above equations are greater than or equal to 1.
1. Introduction and Results
Consider growth of meromorphic solutions of the following linear difference equations:
where are polynomials such that .
Recently, several papers (including [1–8]) have been published regarding growth of the solutions of (1) and (2). We recall the following results. Ishizaki and Yanagihara proved the following theorem.
Theorem A (see [5]). Let be a transcendental entire solution of where are polynomials, , , and of order . Then one has where a rational number is a slope of the Newton polygon for (3) and is a
constant. In particular, one has .
Remark 1. In [5], Ishizaki and Yanagihara give an example. The difference equation that is,
admits an entire solution of order .
In [5], Ishizaki and Yanagihara do not give a concrete solution of order . In fact, we assert that (5) has no entire solution of order . Suppose, contrary to the assertion, that is an entire solution of order of . Set . Then is an entire function of order . Substituting into , we obtain
Using the same method as in Case 1 of the proof of Theorem 4, we find that the order of is greater than or equal to 1, a contradiction.
Thus, determining whether (1) ((2), or (3)) has a transcendental meromorphic solution of order < 1 becomes a significant problem.
Chiang and Feng proved the following theorem.
Theorem B (see [3]). Let be polynomials such that there exists an integer so that
holds. Suppose that is a meromorphic solution of (1). Then, one has .
In this paper, we use the basic notions of Nevanlinna’s theory (see [9, 10]). In addition, we use the notations to denote the order of growth of a meromorphic function and to denote the exponent of
convergence of zeros of .
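The notation itself was lost in extraction. For reference, the standard Nevanlinna-theory definitions of the order of growth and the exponent of convergence of zeros (conventionally written $\sigma(f)$ and $\lambda(f)$; the paper's own symbols may differ) are:

```latex
% Standard Nevanlinna-theory definitions; the symbols \sigma(f) and
% \lambda(f) are conventional choices, not necessarily the paper's own.
\sigma(f) = \limsup_{r \to \infty} \frac{\log T(r, f)}{\log r},
\qquad
\lambda(f) = \limsup_{r \to \infty} \frac{\log N\!\left(r, \tfrac{1}{f}\right)}{\log r},
```

where $T(r, f)$ is the Nevanlinna characteristic of $f$ and $N(r, 1/f)$ is the integrated counting function of its zeros.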
Remark 2. Comparing Theorem A with Theorem B, we see that since (3) can be rewritten as (1), Theorem A shows that, under general case, (1) may have transcendental meromorphic solution with . In
Theorem B, the condition (7) guarantees that all meromorphic solutions of (1) satisfy .
The condition (7) of Theorem B was weakened in [2], where the following results were proved.
Theorem C (see [2]). Let be polynomials such that and Then every finite-order meromorphic solution of (1) satisfies , assumes every nonzero value infinitely often, and .
Theorem D (see [2]). Let be polynomials such that and (8). Then every finite-order transcendental meromorphic solution of (2) satisfies and .
Theorem E (see [2]). Let be polynomials such that . Suppose that is a meromorphic solution with infinitely many poles of (1) (or (2)). Then .
From we see that the sum of coefficients of , which is equal to , does not satisfy the condition (8), but all transcendental entire solutions of have order .
Thus, a natural question to ask is whether the condition (8) can be weakened.
In this note, we consider this question, weaken the condition (8) further, and prove the following results.
Theorem 3. Let be polynomials such that and satisfy Then every finite-order transcendental meromorphic solution of (1) satisfies , assumes every nonzero value infinitely often, and .
Theorem 4. Let be polynomials such that . Then every finite-order transcendental meromorphic solution of (2) satisfies .
Remark 5. For the homogeneous equation (1), Theorems B, C, and 3 show that the condition (9) is weaker than (7) and (8). For the nonhomogeneous equation (2), Theorem 4 shows that the condition (9) can be omitted. However, under the condition (8), (1) has no nonzero rational solution, while under the condition (9), (1) may have a nonzero rational solution. For example, has a rational solution . This shows that Theorem C cannot be replaced by Theorem 3 completely.
Example 6. The equation has a solution ; here satisfies for any nonzero finite value , and has no zero. This shows that in Theorem 3, the condition cannot be omitted.
Example 7. The equation
has a solution which satisfies . This shows that in Theorem 4 a solution of (2) does not satisfy for a nonzero constant .
By Theorem 3, we can obtain the following corollary.
Corollary 8. Let be polynomials such that . If (1) has a transcendental meromorphic solution with , then
Consider the growth of the second-order linear difference equation
where is a meromorphic function. Since (14) is closely related to the difference Riccati equation
we see that (14) is an important linear difference equation (see [4]).
Ishizaki [4] proved the following result.
Theorem F (see [4]). Suppose that is a rational function in (14) and has no transcendental meromorphic solutions of order less than . Further, one assumes that (14) possesses a rational solution.
Then, every transcendental meromorphic solution of (14) has order of at least .
In this note, we improve this result: we omit the condition of Theorem F that “(14) possesses a rational solution” and prove the same conclusion.
Theorem 9. Let be a rational function. Then every transcendental meromorphic solution of (14) has order of at least .
Further, If , where and are nonconstant polynomials such that , then (14) has no nonzero rational solution.
For the linear difference equation with transcendental coefficients, one has
Chiang and Feng proved the following result.
Theorem G (see [3]). Let be entire functions such that there exists an integer , such that
If is a meromorphic solution of (16), then one has .
Laine and Yang [6] proved that if are entire functions of finite order such that, among those having the maximal order , exactly one has its type strictly greater than the others, then every meromorphic solution of (16) satisfies .
Remark 10. If are meromorphic functions satisfying (17), then Theorem G does not hold. For example,
has a solution , in which .
This example shows that for the linear difference equation with meromorphic coefficients, the condition (17) can not guarantee that every transcendental meromorphic solution of (16) satisfies .
Thus, a natural question to ask is what conditions will guarantee that every transcendental meromorphic solution of (16) satisfies .
We answer this question and prove the following result.
Theorem 11. Let be meromorphic functions such that there exists an integer , such that If is a meromorphic solution of (16), then one has .
2. Proofs of Theorems
We need the following lemmas and a remark to prove Theorems 3, 4, 9, and 11.
Remark 12. Following Hayman [11, pp. 75-76], we define an -set to be a countable union of open discs not containing the origin and subtending angles at the origin whose sum is finite. If is an -set,
then the set of for which the circle meets has finite logarithmic measure, and for almost all real the intersection of with the ray is bounded.
Lemma 13 (see [12]). Let be a function transcendental and meromorphic in the plane of order less than 1. Let . Then there exists an -set such that uniformly in for . Further, may be chosen so that
for large not in the function has no zeros or poles in .
Lemma 14 (see [6, 13]). Let be a nonconstant finite-order meromorphic solution of where is a difference polynomial in . If for a meromorphic function satisfying , then
Lemma 15 (see [3, 13]). Given two distinct complex constants , let be a meromorphic function of finite order . Then, for each , one has
Proof of Theorem 4. Suppose that is a transcendental meromorphic solution of (2) with . We divide this proof into the following two cases.
Case 1. Suppose that has only finitely many poles. Now we suppose that . By Lemma 13, there exists an -set such that where satisfy Set . By Remark 12, is of finite logarithmic measure. Substituting
(24) into (2), we obtain, as in , that is, Thus, since has only finitely many poles, we deduce that when ,
This contradicts the fact that is transcendental. Hence .
Case 2. Suppose that has infinitely many poles. Thus, by Theorem D, we see that .
Finally, we prove that . By (2), and we set Thus, By Lemma 14, we have so that
Hence .
Thus, Theorem 4 is proved.
Proof of Theorem 3. Suppose that is a transcendental meromorphic solution of (1) with and that is a constant. Set . Then, .
Substituting into (1), we obtain
Since , we see that (33) satisfies the conditions of Theorem 4. Thus, we deduce that .
Finally, we prove that assumes every nonzero value infinitely often and that . Set Thus, since and (9), we have By Lemma 14 and (35), we have so that Hence . Theorem 3 is thus proved.
Proof of Theorem 9. Suppose that is a transcendental meromorphic solution of (14). We rewrite (14) as If , then by (38), we obtain
We affirm that . In fact, if , then has infinitely many zeros, or infinitely many poles. If has infinitely many zeros, then by (39), we see that if is a zero of , then , are also zeros of . Thus, .
If has infinitely many poles, then by using the same method, we can obtain .
Now we suppose that . Set , where and are nonzero polynomials. By (38), we have Since
by Theorem 3, we see that .
Further, if and are nonconstant polynomials such that , then (41) satisfies the condition of Theorem C. Thus, we see that (14) has no rational solution. Thus, Theorem 9 is proved.
Proof of Theorem 11. Clearly, (16) has no nonzero rational solution.
Now suppose that is a transcendental meromorphic solution of (16) with . By (16), we obtain Set Thus, we have By Lemma 15, we see that for given , Thus, by (42), (45), and (46), we have By (43), we
see that for given above, Since , we see that there is a sequence satisfying Thus, by (47)–(49), we obtain If we combine this with , it follows that , so that . Thus, Theorem 11 is proved.
Zong-Xuan Chen was supported by the National Natural Science Foundation of China (no. 11171119). Kwang Ho Shon was supported by Basic Science Research Program through the National Research Foundation
of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2013R1A1A2008978). | {"url":"http://www.hindawi.com/journals/aaa/2013/619296/","timestamp":"2014-04-18T12:50:20Z","content_type":null,"content_length":"426878","record_id":"<urn:uuid:f420bab5-5531-4263-a69f-abb970cb5ee8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Island, TX Calculus Tutor
Find a Island, TX Calculus Tutor
...The Socratic Method is a "guided question" methodology that allows students to see how a problem is solved rather than just mindlessly applying a formula without a conceptual understanding of
the problem. A lot of students have difficulties with math and the math found in other subjects such as physics. Math is an interesting subject with a myriad of techniques for finding an answer.
29 Subjects: including calculus, chemistry, writing, English
I am currently CRLA certified at level 3. I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for
groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including calculus, statistics, algebra 2, algebra 1
...More advanced topics such as vectors, polar coordinates, parametric equations, matrix algebra, conic sections, sequences and series, and mathematical induction can be covered. I can reteach
lessons, help with homework, or guide you through a more rigorous treatment of these topics. As needed, we can reinforce prerequisite topics from algebra and pre-algebra.
30 Subjects: including calculus, physics, statistics, geometry
...I also helped out students that were struggling at the higher levels. Every spring, the TAKS test (STAAR) test was administered. All teachers including me made sure that our lessons matched
the objectives of the TAKS test so that our students passed.
34 Subjects: including calculus, chemistry, reading, English
...Motion of bodies in space requires modeling of the physics in first- and second-order Diff Eqs and numerical solution and simulation. These equations have been both full and partial
differentials. I am a semi-retired aerospace engineer.
10 Subjects: including calculus, computer science, differential equations, computer programming
Related Island, TX Tutors
Island, TX Accounting Tutors
Island, TX ACT Tutors
Island, TX Algebra Tutors
Island, TX Algebra 2 Tutors
Island, TX Calculus Tutors
Island, TX Geometry Tutors
Island, TX Math Tutors
Island, TX Prealgebra Tutors
Island, TX Precalculus Tutors
Island, TX SAT Tutors
Island, TX SAT Math Tutors
Island, TX Science Tutors
Island, TX Statistics Tutors
Island, TX Trigonometry Tutors
Nearby Cities With calculus Tutor
Aldine, TX calculus Tutors
Alta Loma, TX calculus Tutors
Astrodome, TX calculus Tutors
Beach, TX calculus Tutors
Clear Lake City, TX calculus Tutors
Cloverleaf, TX calculus Tutors
Crystal Beach, TX calculus Tutors
Galveston, TX calculus Tutors
Houston Heights, TX calculus Tutors
San Leon, TX calculus Tutors
Sienna Plantation, TX calculus Tutors
Texas City calculus Tutors
Tiki Island, TX calculus Tutors
Virginia Point, TX calculus Tutors
West Galveston, TX calculus Tutors | {"url":"http://www.purplemath.com/Island_TX_calculus_tutors.php","timestamp":"2014-04-17T10:47:07Z","content_type":null,"content_length":"24182","record_id":"<urn:uuid:bf9c13b2-cf0f-4ed7-ab68-af10e6e4ef7c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
perplexus.info :: Paradoxes : Marbles Bonanza
You have an empty container, and an infinite number of marbles, each numbered with an integer from 1 to infinity.
At the start of the minute, you put marbles 1 - 10 into the container, then remove one of the marbles and throw it away. You do this again after 30 seconds, then again in 15 seconds, and again in 7.5
seconds. You continuously repeat this process, each time after half as long an interval as the time before, until the minute is over.
Since this means that you repeated the process an infinite number of times, you have "processed" all your marbles.
How many marbles are in the container at the end of the minute if for every repetition (numbered N)
A. You remove the marble numbered (10 * N)
B. You remove the marble numbered (N) | {"url":"http://perplexus.info/show.php?pid=1341&cid=7795","timestamp":"2014-04-19T19:46:37Z","content_type":null,"content_length":"13819","record_id":"<urn:uuid:9d36c6ff-d205-4b5e-a487-9f40b2f595cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
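A finite simulation cannot settle the infinite case, but it does show why the two removal rules diverge. The Python sketch below (written for this note, not part of the original puzzle) runs the process for a finite number of repetitions; the key observation is that under rule B every marble k has been thrown away by repetition k, while under rule A no marble whose number is not a multiple of 10 is ever removed.

```python
def simulate(steps, remove_rule):
    """Run the add-10-remove-1 process for a finite number of repetitions."""
    container = set()
    for n in range(1, steps + 1):
        container.update(range(10 * n - 9, 10 * n + 1))  # add marbles 10n-9 .. 10n
        container.discard(remove_rule(n))                # then remove one marble
    return container

after_a = simulate(1000, lambda n: 10 * n)  # rule A: remove marble 10*N
after_b = simulate(1000, lambda n: n)       # rule B: remove marble N

print(len(after_a), min(after_a))  # 9000 1    -- marble 1 is never removed
print(len(after_b), min(after_b))  # 9000 1001 -- marbles 1..1000 are all gone
```

After any finite number of steps the two containers hold the same count, which is exactly why the limit feels paradoxical: under rule A the survivors form a fixed, ever-growing set (so infinitely many marbles remain at the end of the minute), while under rule B the surviving set keeps shifting upward and its limit is empty.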
Farmers Branch, TX Precalculus Tutor
Find a Farmers Branch, TX Precalculus Tutor
...I have taught physics at the high school level and recently obtained my Master's degree from the University of Texas at Dallas where I served as a teaching assistant for intro level Physics
labs. In my daily life, I am constantly exposed to the wonders of science and enjoy nothing more than bein...
5 Subjects: including precalculus, physics, calculus, algebra 2
...Many of my regular students tell me I explain things very differently from the way their professors do. I've come to realize this is a huge compliment because it means I can reach them on a
level that their professors can't. While I'm new to this website, I'm not new to tutoring.
41 Subjects: including precalculus, chemistry, French, calculus
...For Hubble observation statistics I used various Excel features such as writing cell formulas, generating plots, and doing import from/export to text files. I still use the formula capability
often. I do not have practice with pivot tables, but do have good knowledge of general techniques.
15 Subjects: including precalculus, chemistry, physics, calculus
I graduated from Southern Methodist University in 2010, where I majored in Chemistry. I also took a handful of higher-level math classes. I am working on my master's degree right now.
19 Subjects: including precalculus, chemistry, physics, geometry
...As such, I am able to help my students build the necessary bridges from where they are to where they need to go academically. Since joining WyzAnt late last year I have clocked over 200 hours
of tutoring with a rating of 4.78 out of 5. This demonstrates my success with my students.
82 Subjects: including precalculus, English, chemistry, calculus
Related Farmers Branch, TX Tutors
Farmers Branch, TX Accounting Tutors
Farmers Branch, TX ACT Tutors
Farmers Branch, TX Algebra Tutors
Farmers Branch, TX Algebra 2 Tutors
Farmers Branch, TX Calculus Tutors
Farmers Branch, TX Geometry Tutors
Farmers Branch, TX Math Tutors
Farmers Branch, TX Prealgebra Tutors
Farmers Branch, TX Precalculus Tutors
Farmers Branch, TX SAT Tutors
Farmers Branch, TX SAT Math Tutors
Farmers Branch, TX Science Tutors
Farmers Branch, TX Statistics Tutors
Farmers Branch, TX Trigonometry Tutors
Nearby Cities With precalculus Tutor
Addison, TX precalculus Tutors
Balch Springs, TX precalculus Tutors
Bedford, TX precalculus Tutors
Carrollton, TX precalculus Tutors
Coppell precalculus Tutors
Euless precalculus Tutors
Flower Mound precalculus Tutors
Grapevine, TX precalculus Tutors
Highland Park, TX precalculus Tutors
Hurst, TX precalculus Tutors
Irving, TX precalculus Tutors
Parker, TX precalculus Tutors
Richardson precalculus Tutors
The Colony precalculus Tutors
University Park, TX precalculus Tutors | {"url":"http://www.purplemath.com/Farmers_Branch_TX_precalculus_tutors.php","timestamp":"2014-04-16T22:12:15Z","content_type":null,"content_length":"24370","record_id":"<urn:uuid:71d7fc3e-85b6-42df-b330-aa3eb6c990e7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Integers
• Why is the x-coordinate in ordered pairs always first?
Many things in mathematics and in everyday life are done by agreement so that we will all understand one another. If we didn't have an agreement for the order of the x- and y-coordinates in
ordered pairs, some people would read them with the x first and others would read them with the y first. This would cause confusion about how to graph the point. We agree to do other things in
math as well. For example, we agree that the x-coordinates to the right of the y-axis are positive and the x-coordinates to the left of the y-axis are negative. In everyday life, we agree to
drive on the right side of the road in the United States. However, in Australia they agree to drive on the left side of the road. Knowing these agreements can be very helpful!
• Why should I bother learning this?
The ability to locate points on the coordinate plane was one of the major mathematics achievements of the twentieth century. Coordinate geometry linked algebra and geometry together and opened
the doors to many other achievements in both mathematics and science, such as the study of calculus. Your students will need to be able to graph relationships as they proceed through school.
Coordinate graphs are commonly used by newspapers, television, medical research, and the business world to show relationships between two variables, such as the production levels versus costs in
a business, exposure to tobacco smoke versus cancer rates, amount of money earned versus education level, student-to-teacher ratio versus the quality of education, and the height of a person
versus age. Have your students look for graphs which show relationships in newspapers and magazines.
• Are all graphs of equations going to be straight lines?
Have students examine the equation x^2 = y. Have them make a table of values for x and y and then graph them. They will see that the graph is not a straight line. Tell them that many of the
equations they will study when they are older are not linear equations. Stress that the equations they will study this year are linear and will help them understand future equations in the years to come.
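The table-of-values exercise for x^2 = y can be sketched in a few lines of Python (a hypothetical illustration, not part of the original lesson). It builds the table and then checks that the slope between consecutive points keeps changing, which is what tells students the graph cannot be a straight line.

```python
# Table of values for the equation x^2 = y.
xs = range(-3, 4)
table = [(x, x * x) for x in xs]
print(table)  # [(-3, 9), (-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4), (3, 9)]

# For a straight line the slope between consecutive points would be constant;
# here it changes at every step, so the graph of x^2 = y is not a line.
slopes = [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(table, table[1:])]
print(slopes)  # [-5.0, -3.0, -1.0, 1.0, 3.0, 5.0]
```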
Ordered Pairs
Graphing Linear Equations | {"url":"http://www.eduplace.com/math/mathsteps/5/c/5.graph.ask.html","timestamp":"2014-04-18T10:46:48Z","content_type":null,"content_length":"7507","record_id":"<urn:uuid:06b2c304-505b-4a61-8e31-893bf485156e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please help me with my accounting homework.
March 28th 2008, 11:33 AM #1
Junior Member
Oct 2006
Please help me with my accounting homework.
I'm having a hard time with my accounting homework, I appreciate any help.
this is what my homework says
Astro Co. sold 20,750 units of its only product and incurred a $62,500 loss (ignoring taxes) for the current year 2008. During a planning session for year 2009’s activities, the production
manager notes that variable costs can be reduced 50% by installing a machine that automates several operations. To obtain these savings, the company must increase its annual fixed costs by
$220,000. The maximum output capacity of the company is 41,500 units per year.
then I'm asked a series of questions. I've answered most of them, except these two
Compute the sales level required in both dollars and units to earn $126,350 of after-tax income in 2009 with the machine installed and no change in unit sales price. Assume that the income tax
rate is 30%.
I'm told to use these equations
dollar sales at target after-tax income = (fixed costs + pretax income) / contribution margin ratio
and unit sales at target after-tax income = (fixed costs + pretax income) / contribution margin per unit.
I was able to figure out the contribution margin (it equals 180500), fixed costs are 490000, and pretax income is 180500, but I don't know anything about the contribution margin ratio or the contribution
margin per unit, since I haven't been able to figure out how many units should be sold in order to make $126,350
thank you for any help.
forgot to paste this data concerning that problem
ASTRO COMPANY
Contribution Margin Income Statement
For Year Ended December 31, 2008
sales 1,037,500
variable costs 830,000
contribution margin 207,500
fixed costs 270,000
net loss 62,500
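Here is one way the numbers in the post appear to work out, sketched in Python. The per-unit figures are derived from the 2008 statement (price = sales / units, variable cost per unit halved by the machine, fixed costs raised by $220,000), which is an assumption you should verify against your textbook's method before submitting.

```python
units_2008 = 20_750
sales_2008 = 1_037_500
variable_2008 = 830_000

price = sales_2008 / units_2008                      # 50.00 per unit
var_per_unit_2009 = (variable_2008 / units_2008) / 2 # halved by the machine: 20.00
cm_per_unit = price - var_per_unit_2009              # contribution margin per unit: 30.00
cm_ratio = cm_per_unit / price                       # contribution margin ratio: 0.60

fixed_2009 = 270_000 + 220_000                       # new fixed costs: 490,000
pretax_income = 126_350 / (1 - 0.30)                 # 180,500 pretax for 126,350 after tax

dollar_sales = (fixed_2009 + pretax_income) / cm_ratio    # required dollar sales
unit_sales = (fixed_2009 + pretax_income) / cm_per_unit   # required unit sales

print(round(dollar_sales), round(unit_sales))  # 1117500 22350
```

Note that 22,350 units is comfortably under the 41,500-unit capacity stated in the problem.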
March 28th 2008, 11:36 AM #2
Junior Member
Oct 2006 | {"url":"http://mathhelpforum.com/algebra/32351-please-help-me-my-accounting-homework.html","timestamp":"2014-04-16T19:09:18Z","content_type":null,"content_length":"32504","record_id":"<urn:uuid:6d34d05e-a9b2-44f6-8c60-c0e41ca7450f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
eliminate parameter in parametric equation
so the question is:
eliminate the parameter in the pair of parametric equations
x = h + a secθ
y = k + b tanθ
to find the corresponding rectangular equation.
I'm taking an online calculus course and I can't find an explanation of how to do this anywhere... I've been trying but it just doesn't make sense to me yet. Any help? | {"url":"http://www.physicsforums.com/showthread.php?t=185476","timestamp":"2014-04-20T01:01:10Z","content_type":null,"content_length":"43840","record_id":"<urn:uuid:8c1696d6-12d9-452d-8e79-587f3f23df86>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
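For what it's worth, the usual approach to the question above is to solve each equation for its trig function, sec θ = (x - h)/a and tan θ = (y - k)/b, then apply the identity sec²θ - tan²θ = 1 to get ((x - h)/a)² - ((y - k)/b)² = 1, a hyperbola centered at (h, k) (assuming a and b are nonzero). A quick numeric check in Python that points generated by the parametric form satisfy this rectangular equation:

```python
import math

# Arbitrary sample constants (hypothetical values, just for the check).
h, k, a, b = 2.0, -1.0, 3.0, 4.0

checks = []
for t in [0.3, 1.0, 2.5, 4.0]:
    x = h + a / math.cos(t)   # sec(t) = 1/cos(t)
    y = k + b * math.tan(t)
    # Left-hand side of the rectangular equation obtained by eliminating t:
    checks.append(((x - h) / a) ** 2 - ((y - k) / b) ** 2)

print([round(v, 9) for v in checks])  # [1.0, 1.0, 1.0, 1.0]
```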
Math Help
June 11th 2009, 05:49 PM #1
Feb 2009
Log Issue
Hi, I am having a hard time solving for t using logarithms.
Equation is :
40000 + 2000t = 38500 (1.05)^t
Any help would be appreciated.
To solve this algebraically, you have to use the Lambert W function
Lambert W function - Wikipedia, the free encyclopedia
Here are some other examples where the Lambert W function is needed.
Try searching Math Forum for other threads where the Lambert W function is needed.
Hope this helps. Have fun exploring the world of math.
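If the Lambert W machinery is more than you need, a numeric root-finder pins down t directly. The sketch below uses plain bisection over [0, 20], where the sign of f changes exactly once (note the equation also has a second root for negative t, which this interval deliberately ignores).

```python
def f(t):
    """Difference between the two sides: 40000 + 2000t - 38500 * 1.05**t."""
    return 40000 + 2000 * t - 38500 * 1.05 ** t

lo, hi = 0.0, 20.0   # f(0) = +1500 and f(20) < 0, so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:   # root is in the left half
        hi = mid
    else:                     # root is in the right half
        lo = mid

root = (lo + hi) / 2
print(round(root, 2))  # 6.72
```

Sixty halvings of a width-20 interval already reach machine precision, so the answer is t ≈ 6.72 for the positive root.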
June 11th 2009, 08:00 PM #2
June 11th 2009, 08:09 PM #3
Feb 2009 | {"url":"http://mathhelpforum.com/algebra/92595-log-issue.html","timestamp":"2014-04-18T11:26:53Z","content_type":null,"content_length":"33457","record_id":"<urn:uuid:e450ec87-1abd-4245-b3cc-61ac2bfb6674>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Improved Bounds on
Journal of Discrete Mathematics
Volume 2013 (2013), Article ID 628952, 7 pages
Research Article
Improved Bounds on
Department of Mathematics, Technical University of Gabrovo, 5300 Gabrovo, Bulgaria
Received 8 November 2012; Accepted 3 February 2013
Academic Editor: Aziz Moukrim
Copyright © 2013 Rumen Daskalov and Elena Metodieva. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Linked References
1. R. C. Bose, "Mathematical theory of the symmetrical factorial design," Sankhyā, vol. 8, pp. 107–166, 1947.
2. A. Barlotti, Some Topics in Finite Geometrical Structures, Institute of Statistics Mimeo Series no. 439, University of Carolina, 1965.
3. J. W. P. Hirschfeld, Projective Geometries over Finite Fields, Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, New York, NY, USA, 2nd edition, 1998.
4. J. W. P. Hirschfeld and L. Storme, "The packing problem in statistics, coding theory and finite projective spaces: update 2001," in Finite Geometries, vol. 3 of Developments in Mathematics, pp. 201–246, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2001.
5. R. Daskalov, "On the existence and the nonexistence of some (k, r)-arcs in PG(2, 17)," in Proceedings of the 9th International Workshop on Algebraic and Combinatorial Coding Theory, pp. 95–100, Kranevo, Bulgaria, June 2004.
6. R. Daskalov and E. Metodieva, "New (k, r)-arcs in PG(2, 17) and the related optimal linear codes," Mathematica Balkanica. New Series, vol. 18, no. 1-2, pp. 121–127, 2004.
7. M. Braun, A. Kohnert, and A. Wassermann, "Construction of (n, r)-arcs in PG(2, q)," Innovations in Incidence Geometry, vol. 1, pp. 133–141, 2005.
8. S. Ball and J. W. P. Hirschfeld, "Bounds on (n, r)-arcs and their application to linear codes," Finite Fields and Their Applications, vol. 11, no. 3, pp. 326–336, 2005.
9. A. Kohnert, "Arcs in the projective planes," online tables, http://www.algorithm.uni-bayreuth.de/en/research/Coding_Theory/PG_arc_table/index.html.
10. R. Daskalov and E. Metodieva, "New (n, r)-arcs in PG(2, 17), PG(2, 19) and PG(2, 23)," Problemy Peredachi Informatsii, vol. 47, no. 3, pp. 3–9, 2011; English translation: Problems of Information Transmission, vol. 47, no. 3, pp. 217–223, 2011.
11. S. Ball, "Three-dimensional linear codes," online table, http://www-ma4.upc.edu/~simeon/.
12. H. A. Barker, "Sum and product tables for Galois fields," International Journal of Mathematical Education in Science and Technology, vol. 17, no. 4, pp. 473–485, 1986.
13. R. Daskalov and E. Metodieva, "New large arcs in PG(2, 25) and PG(2, 27)," in Proceedings of the 13th International Workshop on Algebraic and Combinatorial Coding Theory, pp. 130–135, Pomorie, Bulgaria, June 2012.
14. R. Hill, "Optimal linear codes," in Cryptography and Coding, II (Cirencester, 1989), vol. 33, pp. 75–104, Oxford University Press, New York, NY, USA, 1992.
15. S. Ball, "Multiple blocking sets and arcs in finite planes," Journal of the London Mathematical Society, vol. 54, no. 3, pp. 581–593, 1996.
16. R. Daskalov, "On the maximum size of some (k, r)-arcs in PG(2, q)," Discrete Mathematics, vol. 308, no. 4, pp. 565–570, 2008.
17. J. R. M. Mason, "On the maximum sizes of certain (k, n)-arcs in finite projective geometries," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 91, no. 2, pp. 153–169, 1982.
18. A. Barlotti, "Sui {k; n}-archi di un piano lineare finito," Bollettino della Unione Matematica Italiana, vol. 11, pp. 553–556, 1956.
19. S. Ball, "On nuclei and blocking sets in Desarguesian spaces," Journal of Combinatorial Theory. Series A, vol. 85, no. 2, pp. 232–236, 1999. | {"url":"http://www.hindawi.com/journals/jdm/2013/628952/ref/","timestamp":"2014-04-17T07:31:16Z","content_type":null,"content_length":"40630","record_id":"<urn:uuid:259574f9-f481-475e-b2ad-798e69581acd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Tie Break Rules [Archive] - TennisForum.com
Chris 84
Jan 31st, 2007, 10:30 PM
As there are no properly written, definitive tie break rules within TT, I have discussed the matter with SloKid and have come up with what I hope to be adopted as the official TB rules. Obviously,
anyone is perfectly welcome to give their opinions on what is proposed, and we will consider any criticism or potential improvements that TT players give us.
Proposed TB rules:
1/ There will be at least 2 tie break matches per round of TT in which the competitors predict the exact score of the match in progress. The purpose of the TB matches is to produce a winner when the
scores are tied.
2/ TB1 is always the most important TB match, and TB2, TB3, etc are only ever used where TB1 does not produce a winner.
3/ The method of determining a winner based on the TB match is as follows:
- where player A has the correct winner of the match and player B does not, player A wins.
-where both players have the correct winner of the match, but player A has the correct set ratio and player B does not, then player A wins.
-where both players have the correct winner of the match and the correct set ratio, the player who is closest to the actual score wins.
eg, where a player has the score exactly right, they will always win, unless the other player also has the score exactly right. Predicting 2 sets exactly right is the next best thing to do, followed
by predicting 2 sets right in the wrong order, followed by predicting 1 set exactly right, followed by predicting 1 set in the wrong order. If none of this occurs, then the player who is closest to
the actual score without correctly guessing any sets is the winner (eg Player A picks 6-3 6-3, player B picks 6-4 6-4, actual score is 7-6 7-6, then player B wins)
-in cases where players do guess a set correctly, then the player who is closest to the actual score should win (eg Player A picks 63 46 64, player B picks 63 46 62, actual score is 62 64, then
player B wins as he was closer to the actual score in the match)
-in cases where the match finishes eg 6-1 6-1, and player A has picked 7-5 6-3 and player B picks 6-4 6-4, player B wins the match. Both players have given the loser 8 games, but because player B
gave the winner the correct number of games, he wins.
-where both players have the incorrect winner of the match, the player who gives the most sets to the winning wta player will be the winner. Where both players give the same number of sets to the
winning player, the player who gives the winning wta player the most number of games shall win.
4/ Where none of the above produces a winner, use TB2 and repeat the above process.
*TB rule changes*
- Set Ratios DO NOT count as points in themselves any more, but will still be used when there are less than 8 matches as the primary tie break method.
(eg there are 5 matches, and Chris 84 and meelis both get 4 correct winners each. The score is then 4-4, but we then compare correct SRs, and whoever has more correct wins)
The secondary TB method is then TB1.
Where there are 8 or more matches, SRs are not needed and the primary TB method remains TB1.
- where players get the wrong winner in TB1, it is no longer the case that the person who gives more games to the loser will win. However, if Player A gives a set to the winner, and Player B does
not, then Player A wins)
If neither player has given a set to the winner, then move on to TB2 as the means of separating the players. If all TB matches are looked at and fail to produce a winner, the old method of TB will be
used where the person who gives the winner the most games will win.
-There will now be 5 TB matches. | {"url":"http://www.tennisforum.com/archive/index.php?t-283918.html","timestamp":"2014-04-17T14:41:05Z","content_type":null,"content_length":"23922","record_id":"<urn:uuid:9db5a7e7-2f43-4c93-973e-716712e8729b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
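For what it's worth, the core of rule 3 can be expressed as a small comparison function. The Python sketch below is my own simplified reading of the proposed rules: it handles the tiers "correct winner", then "correct set ratio", then total game distance as the closeness measure, and it only compares predictions with the same number of sets as the actual score. The finer tiers (exact sets versus sets in the wrong order, and the wrong-winner cases) are omitted.

```python
def sets_won(score):
    """score is a list of (player1_games, player2_games) tuples, one per set."""
    return (sum(a > b for a, b in score), sum(b > a for a, b in score))

def key(pred, actual):
    p1_wins_pred = sets_won(pred)[0] > sets_won(pred)[1]
    p1_wins_act = sets_won(actual)[0] > sets_won(actual)[1]
    correct_winner = p1_wins_pred == p1_wins_act
    correct_ratio = correct_winner and sets_won(pred) == sets_won(actual)
    # Closeness: total game distance across sets (lower is better).
    dist = sum(abs(pa - aa) + abs(pb - ab)
               for (pa, pb), (aa, ab) in zip(pred, actual))
    return (correct_winner, correct_ratio, -dist)

def tb_winner(pred_a, pred_b, actual):
    ka, kb = key(pred_a, actual), key(pred_b, actual)
    if ka == kb:
        return None              # still tied: move on to the next TB match
    return 'A' if ka > kb else 'B'

# Worked example from rule 3: actual 7-6 7-6, A picks 6-3 6-3, B picks 6-4 6-4.
print(tb_winner([(6, 3), (6, 3)], [(6, 4), (6, 4)], [(7, 6), (7, 6)]))  # B
# Actual 6-1 6-1, A picks 7-5 6-3, B picks 6-4 6-4: B gave the winner the
# right game counts and wins.
print(tb_winner([(7, 5), (6, 3)], [(6, 4), (6, 4)], [(6, 1), (6, 1)]))  # B
```

Both printed answers match the worked examples in rule 3 above, which suggests a plain game-distance metric captures the intended notion of "closest to the actual score" in the same-set-ratio cases.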
Sizing Conductors, Part XXI
The rating of the overcurrent device must be considered when sizing a conductor. In accordance with 240.4 in the National Electrical Code (NEC), conductors (other than flexible cords, flexible cables
and fixture wires) shall be protected against overcurrent in accordance with their ampacities specified in 310.15, unless otherwise permitted or required in 240.4(A) through (G).
The rules in 240.4(A) through (G) are alternatives. They pertain to power-loss hazards, overcurrent devices rated 800 amperes (A) or less, overcurrent devices rated over 800A, tap conductors,
transformer secondary conductors, and overcurrent protection for specific conductor applications. Another alternative provision pertains to small conductors (and is sometimes referred to as the
small-conductor rule). Unless specifically permitted in 240.4(E) or (G), the overcurrent protection shall not exceed the requirements of (D)(1) through (D)(7) after any correction factors for ambient
temperature and number of conductors have been applied [240.4(D)]. Conductor sizes covered by this section include 18 through 10 AWG copper and 12 through 10 AWG aluminum and copper-clad aluminum.
While this small-conductor rule has been a requirement for more than 30 years, it has not always been in Article 240. From the 1978 edition of the NEC to the 1996 edition, this rule was a footnote of
Table 310.16. Now, the footnote for 14–10 AWG conductors refers the reader to 240.4(D) for overcurrent protection limitations.
Prior to the 2008 edition, this provision only pertained to 14, 12 and 10 AWG conductors. Size 18 and 16 AWG copper conductors were added to the 2008 edition of the NEC. The overcurrent device shall
not exceed 7A for an 18 AWG copper conductor and 10A for a 16 AWG copper conductor. Besides the provisions for the overcurrent device’s maximum ampacity, 18 and 16 AWG conductors have additional
stipulations. The first stipulation for 18 AWG conductors states that continuous loads shall not exceed 5.6A. Likewise, the first stipulation for 16 AWG conductors states that continuous loads shall
not exceed 8A. This is equivalent to the branch-circuit conductor requirement in 210.19(A)(1) for continuous loads because continuous loads are multiplied by 125 percent. A continuous load of 5.6A
multiplied by 125 percent is 7A (5.6 × 125% = 7), and a continuous load of 8A multiplied by 125 percent is 10A (8 × 125% = 10).
The second stipulation pertains to overcurrent protection. In accordance with 240.4(D)(1)(2), overcurrent protection shall be provided by one of the following: 1. branch-circuit-rated circuit
breakers listed and marked for use with 18 AWG copper wire; 2. branch-circuit-rated fuses listed and marked for use with 18 AWG copper wire; or 3. Class CC, Class J, or Class T fuses. In accordance
with 240.4(D)(2)(2), overcurrent protection shall be provided by one of the following: 1. branch-circuit-rated circuit breakers listed and marked for use with 16 AWG copper wire; 2.
branch-circuit-rated fuses listed and marked for use with 16 AWG copper wire; or 3. Class CC, Class J, or Class T fuses. There are installations where the overcurrent protection could be more than 7A
for an 18 AWG copper conductor and 10A for a 16 AWG copper conductor. One example is fixture wires tapped to branch-circuit conductors. In accordance with 240.5(B)(2), 18 AWG fixture wires are
permitted on 20A circuits as long as the run length is not more than 50 feet. Likewise, 16 AWG fixture wires are permitted on 20A circuits as long as the run length is not more than 100 feet.
It is important to keep the small-conductor rule in mind when sizing 14, 12 and 10 AWG conductors because the maximum rating for the overcurrent device may be less than the maximum ampacity of the
conductor. For example, what size branch-circuit overcurrent protection is required for 12 AWG THHN copper conductors under the following conditions? The load will be 20A, noncontinuous. This
120-volt (V) branch circuit will consist of one ungrounded conductor, one grounded conductor and one equipment grounding conductor. These branch-circuit conductors will be in a raceway. The voltage
drop will not exceed the NEC recommendation. All of the terminations will be rated 75°C. The maximum ambient temperature will be 30°C. Because there are only two current-carrying conductors and the
ambient temperature will not be above 30°C, it is not necessary to apply correction and adjustment factors. Although the THHN conductors are rated 90°C, the allowable ampacity shall not exceed the
75°C column because of the termination provision in 110.14(C)(1)(a). The ampacity of a 12 AWG conductor, from the 75°C column of Table 310.15(B)(16), is 25A. Therefore, the maximum ampacity for these
conductors in this installation is 25A. In accordance with 240.4(D)(5), the maximum overcurrent protection for 12 AWG copper conductors is 20A. Although these conductors have an allowable ampacity of
25A, and 25A is a standard rating for an overcurrent device, the maximum overcurrent protection for the conductors in this installation is 20A (see Figure 1).
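A minimal Python sketch (mine, not the article's) of this first example; the dictionary values are the Table 310.15(B)(16) 75°C ampacities and the 240.4(D) small-conductor limits cited above:

```python
# 12 AWG THHN copper, 20A noncontinuous load, two current-carrying
# conductors, 30°C ambient, so no correction or adjustment applies.
ampacity_75c = {"14": 20, "12": 25, "10": 35}         # copper, amperes
small_conductor_cap = {"14": 15, "12": 20, "10": 30}  # 240.4(D) maximums

size = "12"
max_ocpd = min(ampacity_75c[size], small_conductor_cap[size])
print(max_ocpd)  # 20 -- the small-conductor rule governs, not the 25A ampacity
```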
In accordance with 240.4(D), the maximum overcurrent protection is after the application of any correction and adjustment factors. For example, what size branch-circuit overcurrent protection is
required for 14 AWG THHN copper conductors under the following conditions? This circuit will be a single-phase, 240V branch circuit with a noncontinuous 14A load. The voltage drop in this branch
circuit will not exceed the recommendation in 210.19(A)(1) Informational Note No. 4. These branch-circuit conductors will be in a raceway. There will be four current-carrying conductors and an
equipment grounding conductor in this raceway. The terminations on both ends are rated at least 75°C. The maximum ambient temperature will be 35°C. All of this branch circuit will be installed in a
dry location.
Since the load is not continuous, it is not necessary to multiply the load by 125 percent. The ampacity of a 14 AWG conductor, from the 90°C column of Table 310.15(B)(16), is 25A. The Table 310.15(B)
(2)(a) correction factor, in the 90°C column, for an ambient temperature of 35°C is 0.96. The Table 310.15(B)(3)(a) adjustment factor for four current-carrying conductors in the raceway is 80 percent
(or 0.80). After derating for ambient temperature and adjacent current-carrying conductors, this conductor has a maximum ampacity of 19A (25 × 0.96 × 0.80 = 19.2, rounded down to 19). In accordance with 240.4
(D)(3), the maximum overcurrent protection for 14 AWG copper conductors is 15A. Although these conductors have an allowable ampacity of 19A after the application of correction and adjustment factors,
the rating of the overcurrent device shall not exceed 15A (see Figure 2).
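The derating arithmetic in this second example can be sketched in Python (my sketch, using the table values stated above):

```python
# 14 AWG THHN, 35°C ambient, four current-carrying conductors in one
# raceway. Derating starts from the 90°C column because the conductors
# are THHN, but 240.4(D)(3) still caps the overcurrent device at 15A.
ampacity_90c = 25        # 14 AWG, 90°C column of Table 310.15(B)(16)
temp_correction = 0.96   # 35°C ambient, Table 310.15(B)(2)(a)
fill_adjustment = 0.80   # four current-carrying conductors, Table 310.15(B)(3)(a)

derated = ampacity_90c * temp_correction * fill_adjustment
print(round(derated, 2))      # 19.2
print(min(int(derated), 15))  # 15 -- 240.4(D)(3) governs the OCPD rating
```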
After applying correction and adjustment factors to 14, 12 or 10 AWG conductors, the ampere rating of the overcurrent device may not be above the maximum rating specified in 240.4(D). For example,
what size THHN copper conductors are required to supply a branch circuit under the following conditions? The load will be a 24A, continuous load. The voltage drop in this branch circuit will not
exceed the recommendation in 210.19(A)(1) Informational Note No. 4. These branch-circuit conductors will be in a raceway. There will be a total of seven current-carrying conductors and an equipment
grounding conductor in this raceway. The terminations on both ends are rated at least 75°C. The maximum ambient temperature will be 35°C.
Because of 210.19(A)(1), multiply the continuous load by 125 percent. The minimum ampacity after multiplying by 125 percent is 30A (24 × 125% = 30). Because of the termination provision in 110.14(C)
(1)(a), select a conductor from the 75°C column of Table 310.15(B)(16). Because the load is continuous, the minimum size conductor is 10 AWG copper.
Now ensure these conductors will work with the required overcurrent protection.
In accordance with 210.20(A), the overcurrent protection for a branch circuit supplying a continuous load shall be at least 125 percent of the continuous load. Since the branch-circuit overcurrent
protection must be at least 30A (24 × 125% = 30), and 10 AWG THHN conductors have a rating of 35A in the 75°C column of Table 310.15(B)(16), 10 AWG THHN conductors will work with a 30A breaker or
fuse. Now see if a 10 AWG THHN conductor can carry 24A after applying the correction and adjustment factors. The Table 310.15(B)(2)(a) correction factor, in the 90°C column, for an ambient
temperature of 35°C, is 0.96. The adjustment factor for seven current-carrying conductors is 70 percent (or 0.70). Because the conductors are THHN, it is permitted to use the 90°C column (40 × 0.96 × 0.70 = 26.88, rounded to 27). After applying correction and adjustment factors, this conductor has an ampacity of 27A. Although 125 percent of the continuous load is 30A, the conductors are only required to have a rating of
the actual load of 24A. Because this installation meets the conditions in 240.4(B), it is permissible to round up to the next standard size overcurrent device rating above 27A. In this installation,
the minimum size THHN copper conductors are 10 AWG (see Figure 3).
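The checks in this third example can be sketched the same way (again my own summary of the article's numbers, not part of the original):

```python
# 24A continuous load, 10 AWG THHN, seven current-carrying conductors,
# 35°C ambient.
load = 24
min_ocpd = load * 1.25      # 210.20(A): at least 125% of continuous load -> 30A

ampacity_75c = 35           # 10 AWG, 75°C column (termination limit)
derated = 40 * 0.96 * 0.70  # 90°C ampacity x temp correction x 7-conductor adjustment

print(min_ocpd, round(derated, 2))  # 30.0 26.88
# 35A >= 30A, and the derated ~27A still covers the actual 24A load, so
# 10 AWG works; 240.4(B) permits the next standard OCPD above 27A, i.e. 30A.
```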
Next month’s column will continue the discussion of sizing conductors.
Need help getting started..
October 27th 2010, 12:37 PM
Need help getting started..
A sled weighing 196 N is being pulled across a horizontal surface at a constant velocity. So it's in equilibrium, right?
The pulling force (tension force?) has a magnitude of 80 N and is directed at an angle of 30 degrees above the horizontal. Determine the coefficient of kinetic friction.
I am stuck on drawing the FBD. I suck at those. I get them confused.
Need help on getting started, thanks.
So a 30-degree angle from 0 degrees, and the force of 80 N is perpendicular to that, so at 120 degrees?? Where do I put the Ff to figure out the μ?
October 27th 2010, 01:45 PM
A sled weighing 196 N is being pulled across a horizontal surface at a constant velocity. So it's in equilibrium, right?
The pulling force (tension force?) has a magnitude of 80 N and is directed at an angle of 30 degrees above the horizontal. Determine the coefficient of kinetic friction.
I am stuck on drawing the FBD. I suck at those. I get them confused.
Need help on getting started, thanks.
So a 30-degree angle from 0 degrees, and the force of 80 N is perpendicular to that, so at 120 degrees?? Where do I put the Ff to figure out the μ?
Here is the free body diagram
Attachment 19497
$\displaystyle \sum F_x=80\cos(30^{\circ})-\mu_k\eta=ma_x=0$
$\displaystyle \sum F_y=\eta+80\sin(30^{\circ})-196=ma_y=0$
October 27th 2010, 02:01 PM
Ok, so do I solve for Fy and sub that into Fx? What letter do you give the 80 N: just P for pull, or T for tension?
I solve the Fy equation for Fn and plug that into Fn in the Fx equation?
Thanks for the diagram I was off somewhat for sure.
October 27th 2010, 02:11 PM
Solve for eta $(\eta)$, the normal force, in the 2nd equation.
Sub this into the first equation and solve for the coefficient of kinetic friction $\mu_k$
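For anyone checking their work, here is a short Python sketch (mine, not from the thread) that plugs the numbers into the two equilibrium equations above:

```python
import math

W = 196.0                 # sled weight, N
F = 80.0                  # pulling force, N
theta = math.radians(30)  # angle above horizontal

# sum Fy = eta + F*sin(theta) - W = 0   ->  normal force eta
# sum Fx = F*cos(theta) - mu_k*eta = 0  ->  coefficient mu_k
eta = W - F * math.sin(theta)
mu_k = F * math.cos(theta) / eta
print(round(eta), round(mu_k, 3))  # 156 0.444
```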
October 27th 2010, 03:14 PM
OK that is what I meant, I typed it wrong. Thank you. I will study that.
Physicists Discover Geometry Underlying Particle Physics
from the amplituhedron-is-the-word-of-the-day dept.
New submitter Lee_Dailey sends this news from Quanta Magazine:
"Physicists have discovered a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components
of reality. 'This is completely new and very much simpler than anything that has been done before,' said Andrew Hodges, a mathematical physicist at Oxford University who has been following the work.
The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body
of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the
volume of the corresponding jewel-like "amplituhedron," which yields an equivalent one-term expression."
• 42 (Score:5, Funny)
by syntheticmemory (1232092) on Wednesday September 18, 2013 @02:15PM (#44885971)
Almost there....
□ Re:42 (Score:5, Insightful)
by RDW (41497) on Wednesday September 18, 2013 @02:33PM (#44886159)
"They also claim to have found a "master amplituhedron" with infinitely many faces in infinitely many dimensions which should now be as important as the circle in two dimensions. ;-) Its
volume counts the "total amplitude" (?) of all processes; faces of this master jewel harbor the amplitudes for processes with finite collections of particles."
http://motls.blogspot.co.uk/2013/09/amplituhedron-wonderful-pr-on-new.html [blogspot.co.uk]
No idea what that means, but doesn't it sound cool?
• Bejeweled... (Score:5, Funny)
by Anonymous Coward on Wednesday September 18, 2013 @02:20PM (#44886029)
Is secretly a complex distributed particle physics computation!
□ by Teresita (982888) <`badinage1' `at' `netzero dot net'> on Wednesday September 18, 2013 @02:42PM (#44886273) Homepage
...formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like "amplituhedron"...
LaForge: "Captain, the amplituhedron flux is below seventy percent, we risk a core breach!"
Picard: "Initiate technobabbatron purge! Engage!"
☆ by gstoddart (321705) on Wednesday September 18, 2013 @03:03PM (#44886475) Homepage
Troi: Captain, I can 'feel' the amplituhedron.
Data: It's become sentient
Q: Foolish humans ... you could never hope to understand this.
Wesley: Oh sure, I made one in science class last week.
ALL: Wesley, STFU.
○ by Alsee (515537) on Wednesday September 18, 2013 @04:20PM (#44887437) Homepage
Kirk: My god Spock, it's an ampi
Take your TNG and get off my lawn, ya damn kids!
• hmmm.... (Score:4, Interesting)
by P-niiice (1703362) on Wednesday September 18, 2013 @02:21PM (#44886035)
Isn't this similar to the geometric structure that the 'surfing physicist' came up with - the one that predicts a bunch of undiscovered particles? Or is this completely different?
□ Re:hmmm.... (Score:5, Informative)
by quantumghost (1052586) on Wednesday September 18, 2013 @02:26PM (#44886093) Journal
Had the same thought. His name is Garrett Lisi [ted.com]
☆ Re:hmmm.... (Score:5, Informative)
by AliasMarlowe (1042386) on Wednesday September 18, 2013 @03:10PM (#44886551) Journal
Lisi's E_8 conjecture [wikipedia.org] is somewhat more complicated than this one. For a start, the geometry of the E_8 group is richer than that of a mere amplituhedron. Others may note
that Lisi's conjecture also includes gravitation in its unification, while TFA appears to be only about particle families.
○ Re:hmmm.... (Score:5, Funny)
by ColdWetDog (752185) on Wednesday September 18, 2013 @03:20PM (#44886683) Homepage
"mere amplituhedron"?
Are you allowed to say that?
■ by Zero__Kelvin (151819)
How else is he going to sound brilliant while still having no idea what he is talking about?
○ by KonoWatakushi (910213)
Unified theories are more attractive, but every new way of looking at physics (that accurately models reality) is one more potential avenue of insight into the fundamental nature of
our universe. This is definitely an exciting discovery, though I do not share their enthusiasm for boiling all of reality down to particle interactions with geometry, rather than
The Copenhagen interpretation of QM is a disgrace, and any self-respecting scientist should be ashamed to support a theory that hides reality…
□ by interval1066 (668936)
This isn't a particle so much as a methodology; physicists have discovered that certain particles fit together in a certain way. Apparently before this it was a huge clusterfuck. It's like the Mandelbrot set; it's not a physical "thing", but it's damn useful. To physicists only, I think, but we'll see.
☆ Re:hmmm.... (Score:5, Interesting)
by Dishevel (1105119) on Wednesday September 18, 2013 @03:25PM (#44886755)
This isn't a particle so much as methodology
The important bit here is why? Why does this methodology work so well. Is it because that deep down on a very fundamental level this "Geometry" is hard coded in the way the universe
works? If so. What does this tell us about how things really work?
○ Re:hmmm.... (Score:5, Interesting)
by St.Creed (853824) on Wednesday September 18, 2013 @03:55PM (#44887099)
This isn't a particle so much as methodology
The important bit here is why? Why does this methodology work so well. Is it because that deep down on a very fundamental level this "Geometry" is hard coded in the way the universe
works? If so. What does this tell us about how things really work?
That's a pretty good question. I've been wondering about that too, given the convergence between our definitions of entropy and Kolmogorov complexity, which describes how much
information is encoded in a signal (also tied in with Shannon's law). It hits directly into the heart of the question: what is information and how does it relate to reality? At a
basic level, our universe may be comprised of "information", or rather: a signal on top of noise.
This new discovery seems to suggest that at the most basic level, particles can be described as a mathematical function on top of some sort of "white noise" as well. I wonder how long
it will take to converge the two ideas. If ever.
In any case, exciting times are ahead for so-called computer scientists that deal with things like geometric algorithms. I predict a hot demand for top mathematicians in that field to
arise very soon.
Anyway, exciting times to be a theoretical physicist! Everyone expecting breakthroughs coming from the LHC and the experimental boys and girls, and now suddenly, out of left field the
theoretical physicists come back with a big right hook out of nowhere :)
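A toy illustration (my own sketch, not the poster's) of the Shannon-entropy side of the comment above: entropy measures, in bits per symbol, how much "signal" a symbol stream carries.

```python
import math
from collections import Counter

def shannon_entropy(data: str) -> float:
    # H = -sum p_i * log2(p_i) over the symbol frequencies
    n = len(data)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(data).values())

print(round(shannon_entropy("aaaa"), 3))  # 0.0 -- fully predictable stream
print(round(shannon_entropy("abcd"), 3))  # 2.0 -- maximal for four symbols
```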
○ by globaljustin (574257)
this "Geometry" is hard coded in the way the universe works? If so. What does this tell us about how things really work?
right...good question
the 'Geometry' to which you refer is an expression of relationships
it tells us how matter and energy relate...which is how physicists say 'how things really work'
imagine a simple ratio: x/y
as x increases, y increases...that can be mapped on a graph...
now throw in every fundamental particle relationship we've observed, graph it, and it comes up with this geometric figure…
• d20? (Score:5, Funny)
by space_jake (687452) on Wednesday September 18, 2013 @02:21PM (#44886037)
Roll for initiative...
□ Re:d20? (Score:5, Funny)
by Alsee (515537) on Wednesday September 18, 2013 @04:26PM (#44887505) Homepage
You are entangled with the Schrodragon.
You both win and lose initiative.
• so... (Score:4, Funny)
by BenSchuarmer (922752) on Wednesday September 18, 2013 @02:22PM (#44886061)
God is playing dice with the universe
□ by Errol backfiring (1280012)
Of course not. It's "the Lady".
• Hold up. (Score:5, Interesting)
by girlintraining (1395911) on Wednesday September 18, 2013 @02:25PM (#44886077)
Guys, we've been down this road about a million times in physics. Just because a mathematical model simplifies certain calculations, does not mean that the actual underlying physical geometry
matches the theoretical model. Mathematicians have been adding extra dimensions to equations and finding they simplify things for years. It doesn't mean we live in a 27 dimension manifold. All
direct observations to date point to a 3D universe.
□ Re: (Score:3, Insightful)
by benjfowler (239527)
To elaborate, models are only as good as their power to explain and predict. So if those models improve (explain/predict more, get simpler) over time, so much the better.
□ by gstoddart (321705) on Wednesday September 18, 2013 @02:30PM (#44886131) Homepage
It doesn't mean we live in a 27 dimension manifold.
Doesn't mean we don't. ;-)
All direct observations to date point to a 3D universe.
Ummm ... hang on a second. Won't any direct observation we make as 3D critters point to a 3D universe? Isn't that sort of inherent to us being only able to perceive 3D?
I'm not sure how we'd do any direct observations in any other dimensions. (Honestly, not a flame, I'm genuinely puzzled by how we could see anything else and every now and then something like
this hurts my head)
☆ Re:Hold up. (Score:4, Funny)
by Anonymous Coward on Wednesday September 18, 2013 @02:51PM (#44886357)
I'm not sure how we'd do any direct observations in any other dimensions. (Honestly, not a flame, I'm genuinely puzzled by how we could see anything else and every now and then something
like this hurts my head)
First, we assume a spherical cow; now that we have a more efficient source of steak and cheese, we get to the real work. The real work involves creating an infinitely large, perfectly flat
mirror. Since we don't know of any way to push or pull something into dimensions that we cannot directly observe, we anchor the infinite mirror to the earth (or a designated
extraplanetary observatory) and wait. The odds that a 14-dimensional object/creature/other would not accidentally bump into an infinite functionally 2 dimensional surface approach zero as
your timescale expands. Therefore, we just wait until the mirror rotates in a way we cannot intuitively describe and effectively ceases to exist in our 3 dimensional space (or drags the
earth with it into some other 3 dimensional subset of realities).
Unless some of the dimensions are curved, then you need a hypercubic pig.
☆ Re: (Score:2, Insightful)
by Anonymous Coward
This is basically what particle colliders do. Imagine that We lived in a 2D universe like a sheet of paper. The particle collider smashes atoms and we observe the splash it makes. From
the splashes around the collision, we see that things seem to have appeared out of nowhere, but if We assume that there is actually a 3rd dimension, we can perceive that the particles/
energy didnt just appear, but traveled on an unseen dimension. That is what a particle collider does, if You can wrap Your head around it, but in our 4D length/width/height/moment range of observation.
○ by s.petry (762400)
From the splashes around the collision, we see that things seem to have appeared out of nowhere, but if We assume that there is actually a 3rd dimension, we can perceive that the
particles/energy didnt just appear, but traveled on an unseen dimension. That is what a particle collider does, if You can wrap Your head around it, but in our 4D length/width/height/
moment range of observation.
You could take the more rational approach and believe that we simply lack the technology to detect and measure what really happened. Naw, you would rather claim that the particle
visited an invisible magical world! Was it Charon pulling the particle across the river Styx for a visit perhaps?
Wholly fuck we never left the dark ages did we?
■ by mooingyak (720677)
You could take the more rational approach and believe that we simply lack the technology to detect and measure what really happened. Naw, you would rather claim that the particle
visited an invisible magical world!
I'm not saying he's necessarily right, but if a particle moves along an unseen dimension, its movements are likely still predictable if you've got the mathematical chops. If
you're at a point where you can accurately predict something, that's what I'd call a good start.
But hey if you'd rather just throw your hands in the air and say fuck it I don't know, go for it.
☆ Re:Hold up. (Score:4, Interesting)
by Anonymous Coward on Wednesday September 18, 2013 @03:11PM (#44886561)
You know how neutrinos have this tendency to change flavors as they pass through time (i.e. neutrino oscillation)? One nifty way of viewing it is that they're 4D objects simply with a
spin in the fourth dimension. If you're into the physics, you'll note the same sort of calculations are used in the Pontecorvo–Maki–Nakagawa–Sakata matrix as are used by game engines when
calculating the 2D representations of 3D virtual objects: You just then need to do basic matrix transformations to derive the result.
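A hedged sketch of the parent's point in Python: a two-flavor oscillation toy model where the mixing really is just a rotation matrix, and the flavor-change probability falls out of basic matrix arithmetic. The numbers (theta, dm2, L, E) are illustrative, not measured values, and this simplifies the full PMNS treatment to two flavors:

```python
import numpy as np

theta = 0.6  # mixing angle in radians (illustrative)
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # 2-flavor rotation

dm2, L, E = 2.5e-3, 500.0, 1.0   # eV^2, km, GeV (illustrative)
phase = 1.27 * dm2 * L / E       # standard oscillation phase

# Mass eigenstates pick up different phases; rotating back to the flavor
# basis gives the transition amplitude.
amp = U[1, 0] * U[0, 0] + U[1, 1] * U[0, 1] * np.exp(-2j * phase)
p_trans = abs(amp) ** 2

# Closed form for comparison: sin^2(2*theta) * sin^2(phase)
p_closed = np.sin(2 * theta) ** 2 * np.sin(phase) ** 2
print(round(p_trans, 3), round(p_closed, 3))
```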
☆ by ubermiester (883599) * on Wednesday September 18, 2013 @04:24PM (#44887477)
Check out Richard Feynman's lecture regarding space-time and his analogy of bugs on a sphere. If you tell them that the rule for making a square is to go N units in one direction, then
turn 90 degrees and repeat until you complete the square, they would find that they cannot actually make a square. This leads them to conclude that there is "something wrong" with their geometry…
The point is that while the underlying nature of their universe as a sphere is unavailable to them because they cannot escape it to see the bigger picture, they can still infer that
because Euclid's rules of geometry don't work there must be something going on that they can't see. Moreover, they should be able to guess that there is curvature - without knowing for
sure - because of exactly how the rules break down.
This is essentially what people talk about when they refer to the difference between larger objects like clumps of atoms and smaller ones like electrons and quarks. For some reason our 3D
(technically it's 4D according to Einstein) universe only behaves "normally" until we start measuring it at a small scale. Then we start seeing where our rules about the behavior of
"observable" objects - i.e., the stuff we can perceive with our senses - break down and are replaced by the true nature of the subatomic universe. In other words, when we look at quarks
do stuff, we can no longer make the square.
Constructs like the one described above are the result of us trying to get our little bug heads around the way in which our every day rules break down when really tiny things are
involved. It's a way for the bugs to correct Euclid to account for the spherical nature of things.
☆ by Dunbal (464142) *
Considering that we have evolved all these different sensory organs to help us survive, I'm sure that if perceiving a 4th dimension granted any biological advantage at all, we would be
able to perceive it. Sorry to be anthropic about it but my field is biology not physics, lol.
□ Re: (Score:3, Insightful)
by Anonymous Coward
Just because a mathematical model simplifies certain calculations, does not mean that the actual underlying physical geometry matches the theoretical model.
That's not really a problem if all you want to do is simplify the mathematics. Besides which, that was pretty much the reason that early astronomers weren't branded as heretics; they just
said that a heliocentric model made the calculations easier, and that they weren't suggesting that they reflected reality (although they did).
All direct observations to date point to a 3D universe.
Well no shit Sherlock. It's rather hard to observe dimensions that your eyes can't see and your mind can't design instruments to detect. Oh... and, you know, time?
*sigh* With your t…
☆ Re:Hold up. (Score:5, Insightful)
by Your.Master (1088569) on Wednesday September 18, 2013 @03:58PM (#44887133)
and that they weren't suggesting that they reflected reality (although they did).
What's interesting there is we say it reflects reality because it makes the calculations easier. Other than the math and mental models being easier to grasp, there really is no good
reason to say the earth goes around the sun* rather than the sun going around the Earth. We just all decided that the calculations being easier trumps the very intuitive model that the
sun circles the Earth. You can construct a perfectly rational model of the Universe from the non-inertial frame of reference that holds the Earth as stationary. It's just full of
epicycles etc..
It's a fairly rare achievement for mass society to replace the naively simpler model of the stationary Earth.
*for the sake of argument, lets not get into them both orbiting a common barycentre; the argument extends to that as well anyway.
○ by markjhood2003 (779923)
What's interesting there is we say it reflects reality because it makes the calculations easier.
That really is the most interesting thing in this discussion. Essentially we are making a leap of faith, that simpler models are more likely to be true as long as they continue to
support the data and allow us to make predictions. But it is at root an aesthetic judgement: beauty is truth, and truth is beautiful. It is the essence of rationality.
It's cool to see how Feynman's diagrams may be like the epicycles of the earth-centered view of the universe: they can be made to work as long as you keep refining…
□ by Anon-Admin (443764)
But we live in a 4D universe, or I do. I don't know about you.
☆ by GodfatherofSoul (174979) on Wednesday September 18, 2013 @02:49PM (#44886333)
Wait a second...yeah me too
□ by LaminatorX (410794)
Well, locality violations/exceptions are one thing that we've observed which might be construed as an indicator of additional dimensions, i.e. the events might local on an axis we cant see.
□ by sandytaru (1158959) on Wednesday September 18, 2013 @02:35PM (#44886187) Journal
It seems like their math is like good code. You can get a program to do the same thing in 10 lines what someone else tried to do in 1,000 lines. They're both describing the same basic
function, but one is doing it via a brute force in a roundabout way and the other is doing it much more directly.
Then again, mathematicians tend to be a bit crazy. I remember reading one bio-mathematics person determining that bees do their little waggle dances in nine dimensions projected onto two, and
I thought she was insane.
☆ by fahrbot-bot (874524) on Wednesday September 18, 2013 @03:03PM (#44886485)
I remember reading one bio-mathematics person determining that bees do their little waggle dances in nine dimensions projected onto two, and I thought she was insane.
Not insane, just high.
○ by Nadaka (224565)
its all that special smoke they use to sedate the bees.
□ by ultranova (717540)
Mathematicians have been adding extra dimensions to equations and finding they simplify things for years. It doesn't mean we live in a 27 dimension manifold. All direct observations to
date point to a 3D universe.
What observations would those be? If assuming 27 dimensions gets the same results as assuming 3 dimensions, then you can't tell which one the universe is through observation. And if 27
dimensions is a simpler model, then Occam's razor suggests we should indeed consider our home to be a 27D manifold…
□ by interval1066 (668936)
Correction; 4D, and we have direct observational evidence that the universe is infact a larger reality known as "spacetime".
□ by Sponge Bath (413667) on Wednesday September 18, 2013 @03:09PM (#44886539)
All direct observations to date point to a 3D universe.
Ignignokt: You and your third dimension.
Frylock: What about it?
Ignignokt: Oh, nothing, it's cute. We have five.
Err: Thousand.
Ignignokt: Yes, five thousand.
Err: Don't question it.
Frylock: Oh, yeah? Well, I only see two.
Ignignokt: Well, that sounds like a personal problem.
□ Re:Hold up. (Score:5, Insightful)
by Anubis IV (1279820) on Wednesday September 18, 2013 @03:10PM (#44886547)
IANAPOM (I am not a physicist or mathematician), but from what I could gather from the article, it sounds like this isn't a new model that approximates the old, more complicated one, but
rather a massive simplification of the existing one that produces provably identical results in all cases. To drastically oversimplify using my extremely limited understanding while putting
it in terms I can wrap my brain around, it sounds like when you first learn about the arithmetic series in calculus (e.g. the summation of i from 0 to n). At first, the only way you can
approach it is by actually adding 0 + 1 + ... + (n-1) + n, but eventually you learn that you can skip that whole process if i starts at 0 and use n*(n+1)/2 to reach the result with far less
work, and then you're shown how to derive that formula yourself.
It sounds like something similar here. They previously had to calculate the results of every single Feynman diagram and then sum them together to reach a final result, which would involve
billions upon billions of calculations for even a very simple particle interaction. Now, however, rather than having to calculate all of the component parts and summing them, they've derived
a formula that produces the same answers with far less work.
Again, I may be way off, but that's the takeaway I had from the article.
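The parent's arithmetic-series analogy, spelled out in a couple of lines of Python (my sketch):

```python
# Brute force vs. closed form for 0 + 1 + ... + n, as in the analogy above.
n = 1000
brute = sum(range(n + 1))   # add every term, one by one
closed = n * (n + 1) // 2   # single-step formula
print(brute, closed)        # 500500 500500
```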
☆ by rasmusbr (2186518) on Wednesday September 18, 2013 @03:36PM (#44886895)
Feynman diagrams are based on the idea that there is framework of time and space, more specifically basically the same time and space that we perceive in everyday life.
This new model apparently takes a simpler view of the problem by not caring about time and space. I suppose you could say that time and space could be viewed as emergent properties of
this geometric object that they have come up with / discovered.
• question: (Score:2)
by etash (1907284)
does the simplification it mentions mean that simulations will be way faster? does it in any way affect n-body problem simulations?
□ by khellendros1984 (792761)
My impression after reading the article is that this allows for easier predictions of the outcomes of particle interactions, like you might show with Feynman diagrams [wikipedia.org]
(particle decay, collisions that produce different particles, etc). Basically, the kinds of things that we'd study in a particle accelerator (so, quantum interactions, rather than classical
□ by arisvega (1414195)
does the simplification that it mentions, mean that simulations will be way faster? does it in any way affect the n-body problem simulations ?
An awesome question. And, basically, an awesome idea. I would think that if you can set up a numeric experiment that virtually represents fundamental particles and their interactions, and you
already know more or less the trajectories in some n-dimensional space (through this new discovery), then you can probably greatly optimize your algorithms since you will a priori know
whereabouts to look for solutions: you would not need to sweep everything.
Or, you can accept this manifold as truth, and further constr…
• the wall of fundamental laws (Score:5, Funny)
by Max_W (812974) on Wednesday September 18, 2013 @02:34PM (#44886171)
I have an impression that the wall of fundamental laws has been reached and further research on particles is useless. This is it. No way further. The impasse.
□ by meta-monkey (321000)
Well, if this concept pans out, we'd be able to calculate all kinds of particle interactions we'd never be able to observe otherwise because those interaction would just be different facets
of The One True Gem. Who knows what kind of amazing things we'd find a facet or two over from our current understanding?
☆ Re:the wall of fundamental laws (Score:5, Funny)
by gstoddart (321705) on Wednesday September 18, 2013 @02:44PM (#44886297) Homepage
Well, if this concept pans out, we'd be able to calculate all kinds of particle interactions we'd never be able to observe otherwise because those interaction would just be different
facets of The One True Gem
Crap, so the "Time Cube" guy was right all along? ;-)
○ Re:the wall of fundamental laws (Score:5, Funny)
by meta-monkey (321000) on Wednesday September 18, 2013 @03:02PM (#44886461) Journal
Given how many insane conspiracy theories are lately turning out to be not completely insane, I'm just waiting for Congress to rip off their masks and reveal their true identities:
Lizard Men from the Hollow Earth.
□ by Plazmid (1132467)
Well, we still don't have a good theory of quantum gravity.
☆ by fahrbot-bot (874524)
Well, we still don't have a good theory of quantum gravity.
QG = x + y (for sufficiently appropriate values of x and y)
• Relevance of theory to the real world is unknown (Score:3, Informative)
by Anonymous Coward on Wednesday September 18, 2013 @02:37PM (#44886205)
Since the N=4 supersymmetric Yang-Mills theory is a toy theory that does not describe the real world, the relevance of this theory to the real world is currently unknown, but it provides
promising directions for research into theories about the real world.
□ by mrsquid0 (1335303)
If one assumes that Special Relativity and Quantum Mechanics are correct, and there is no observational evidence that they are not, then Yang-Mills theory, or something very much like it, is inevitable. It arises from the need for conservation of the various charges of each force.
☆ Re:Relevance of theory to the real world is unknow (Score:5, Informative)
by Guy Harris (3803) <guy@alum.mit.edu> on Wednesday September 18, 2013 @04:00PM (#44887167)
If one assumes that Special Relativity and Quantum Mechanics are correct, and there is no observational evidence that they are not, then Yang-Mills theory, or something very much like it, is inevitable. It arises from the need for conservation of the various charges of each force.
A Yang-Mills theory [wikipedia.org], based on {pick-your-favorite-group}, may be inevitable. Whether it would be the N=4 supersymmetric Yang-Mills theory [wikipedia.org] is another
matter; it won't be [arstechnica.com].
• Anathem (Score:3)
by meta-monkey (321000) on Wednesday September 18, 2013 @02:38PM (#44886219) Journal
The whole, "our understanding is a dim view of a more perfect geometry" thing gave me a very Neal Stephenson Anathem shiver.
• by n1ywb (555767) on Wednesday September 18, 2013 @02:39PM (#44886235) Homepage Journal
Time is an illusion. Lunchtime doubly so.
• Oblig (Score:5, Interesting)
by gmuslera (3436) on Wednesday September 18, 2013 @02:42PM (#44886269) Homepage Journal
xkcd's Purity [xkcd.com]. On the other hand, I can't get out of my head that Kepler [wikipedia.org] originally tried to match the orbits of the 6 planets known at the time with the shapes of the platonic solids, and this could face the same risk.
• by FilmedInNoir (1392323) on Wednesday September 18, 2013 @02:42PM (#44886271)
I know some of you are thinking this, but it's not, ok.
It's not some complicated mess of geometrical shapes to describe the universe in kaleidoscopic glory as envisioned by a lunatic with a Spirograph.
• It's a time cube (Score:4, Informative)
by Russ1642 (1087959) on Wednesday September 18, 2013 @02:44PM (#44886301)
EARTH HAS 4 CORNER
SIMULTANEOUS 4-DAY
TIME CUBE
WITHIN SINGLE ROTATION.
4 CORNER DAYS PROVES 1
DAY 1 GOD IS TAUGHT EVIL.
IGNORANCE OF TIMECUBE4
SIMPLE MATH IS RETARDATION
AND EVIL EDUCATION DAMNATION.
CUBELESS AMERICANS DESERVE -
AND SHALL BE EXTERMINATED
• Just as alchemy eventually led to chemistry, the mystics win again. The logic in theology is that God, by definition, would be the ultimate craftsman. That means no errors, no waste, and no undue use of effort or energy.
So just how would God make a creation? Obviously endless universes could be set in motion by a science that resembles computer programs. Yes, humanity is nothing but the gorilla with a sledgehammer playing whack-a-mole on a monitor.
• Breakthrough or bullshit? (Score:4, Interesting)
by Animats (122034) on Wednesday September 18, 2013 @03:02PM (#44886457) Homepage
This is either a major breakthrough or utter bullshit. It's too early to tell which. If it's real, it's a Nobel Prize in physics.
The publisher, the Simons Foundation, is a project of a rich weirdo from Texas.
• space & time as emergent properties (Score:5, Interesting)
by kipsate (314423) on Wednesday September 18, 2013 @03:35PM (#44886881)
One of the things the article says is that space and time may not be fundamental properties of nature, but properties that emerge from (i.e., are the result of) a more fundamental reality.
Warning: IANAP. But with some axioms, it is possible to reach the same conclusion.
Imagine a simple experiment with an electron source and a detector. An electron is emitted in the direction of a detector. The experiment is set up such that while travelling towards the
detector, the electron does not interact. More precisely, in between the emitter and the detector, the electron does not exchange any energy. Then, the electron hits the detector and becomes
detected (interaction two).
Has the electron physically travelled in the space between the electron source and the detector? May it be assumed that in between the interaction with the emitter and its subsequent interaction
with the detector the electron is physically present?
Obviously, it is impossible to establish that the electron is present between the emitter and the detector without actually interacting with the electron. It is therefore observed that any assumption about the physical presence of the electron between the source and the detector cannot be experimentally verified. More generally, it is observed that the assumption of the physical presence of any elementary particle between two interactions cannot be falsified.
Equally impossible to falsify is the assumption that in between the emitter and the detector, the electron in the experiment was not physically present. This assumption implies that (in the
reference frame of the observer) the electron disappeared at the emitter and reappeared at the detector, and did not take up any physical space at any time in between. In between interactions,
the representation of the electron disappeared and became unobservable. For as far as an observer can tell, the electron disappeared from the universe completely in between interactions.
Since, obviously, properties of the electron are preserved in between interactions, the electron must still somehow be represented – i.e., the representation of the electron has clearly not disappeared from the universe.
The notion of an “observable universe” is therefore introduced to distinguish between interactions, which can be observed, and the part of the universe theorized here, which is apparently capable of at least holding a representation of an elementary particle and which cannot be observed.
Observable universe: The part of the universe in which an interaction manifests itself.
Let us formulate the following two axioms:
Axiom 1: An interaction is instantaneous, i.e., it lasts for an infinitely small amount of time.
Axiom 2: An elementary particle only exists in the observable universe at the moment of its interaction.
Notice that axiom 1 and 2 are unfalsifiable. Consider the reverse of axiom 2:
Reverse of Axiom 2: An elementary particle physically exists in the observable universe in the time that passes (in the reference frame of an observer) between two interactions.
This axiom is equally unfalsifiable, since physical presence of an elementary particle can only be proven by interacting with it. The reverse of axiom 1, which would postulate that an interaction
lasts a non-zero amount of time, is equally unfalsifiable.
Elementary particles have no internal structure and are considered point particles. In other words, an elementary particle does not take up any physical space. If we assume that everything in the observable universe consists of elementary particles, then it follows that all particles that exist in the universe do not take up any space. The aggregate volume of all elementary particles is zero.
Combined, axioms 1 and 2 state that in between two interactions, an elementary particle is not present in the observable universe. A particle only manifests itself in the observable universe at the moment of an interaction.
• simply nonsense (Score:5, Informative)
by Browzer (17971) on Wednesday September 18, 2013 @03:36PM (#44886893)
The Slashdot headline, not the physics.
http://www.math.columbia.edu/~woit/wordpress/ [columbia.edu]
• TL;DR (Score:5, Funny)
by CanHasDIY (1672858) on Wednesday September 18, 2013 @03:37PM (#44886913) Homepage Journal
My question is - does this get humanity any closer to the point at which I can build my own interstellar spacecraft? If not... why should I care?
• Feynman Diagrams (Score:5, Insightful)
by PPH (736903) on Wednesday September 18, 2013 @03:37PM (#44886915)
This doesn't necessarily invalidate Feynman's approach. His problem was that he assumed a limitless supply of graduate students to calculate the various reaction path probabilities.
• by Charliemopps (1157495) on Wednesday September 18, 2013 @03:38PM (#44886921)
The biggest problem with particle physics is that we call them particles when they clearly are not.
• No unitarity - probabilities not adding up to 1? (Score:4, Insightful)
by acid_andy (534219) on Wednesday September 18, 2013 @05:50PM (#44888371)
The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity.
...And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one.
I'm probably being very naive attempting to understand this article, which has probably already been massively dumbed down, but how can the probabilities of all possible outcomes of an interaction not add up to one? Surely they add up to one by definition, otherwise they are not probabilities? For example, outcome X having a probability of 1/3 means that, on average, you can multiply the number of times you observe the interaction by 1/3 and get the expected number of times you would see outcome X. If the probabilities in your statistical trials didn't add up to 1, wouldn't that mean that adding up the numbers of individual outcomes observed would give a number bigger (or smaller) than the total number of interactions observed? Obviously it cannot mean that, as that fails basic arithmetic.
I can imagine tossing a fair coin - heads has probability 0.5, tails 0.5, total 1. So now how about a 3-sided coin without unitarity? Let's say the probability of heads is still 0.5, tails 0.5, but it has a third side, bodies, that also has probability 0.5 of occurring. That sounds mathematically impossible. It could be a mind-reading coin, where you pick heads and find that it then occurs on half your coin tosses. Later you pick tails, and that occurs on half your coin tosses, but when you pick bodies, that also occurs on half of those coin tosses. OK, I give up! Can anyone who really understands unitarity enlighten me, please? Is this anything like the uncertainty principle?
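A quick numerical sketch of the questioner's point (this is just classical counting, not quantum mechanics): relative frequencies computed as count/total sum to 1 by construction. Unitarity is the analogous normalization statement for quantum amplitudes; in the amplituhedron picture it is not assumed at the outset but is supposed to re-emerge from the geometry.

```python
import random

random.seed(0)

# Toss an ordinary two-sided coin many times.
tosses = [random.choice(["heads", "tails"]) for _ in range(10_000)]

# Relative frequency of each outcome: count / total.
freq = {side: tosses.count(side) / len(tosses) for side in ("heads", "tails")}

# The frequencies sum to 1 -- not because of physics, but because every toss
# lands in exactly one bucket. This is why "probabilities not adding to 1"
# cannot describe observed frequencies of mutually exclusive outcomes.
print(freq, sum(freq.values()))
```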
Amherst College
Due dates
Written problem sets are normally due at the beginning of class on Wednesdays. We will have intermittent supplemental and/or extra credit online problems (via MasteringPhysics: see General
Information for registration information). Late problem sets can be turned in outside my office (118 Merrill) subject to a 20% penalty per day.
Translation table
Best practices in problem solving
My prescription for minimizing sloppy errors when doing problems:
1. Draw a large diagram. Don't worry about trees: in the long run you'll save paper by doing the problem correctly the first time.
2. Define a unique variable for everything in the problem. Label as many parts of the diagram as possible.
3. Write down the value of every variable you're given. But keep using the variable – not its numerical value – anyway. Keep a mental inventory of which variables are known and unknown.
4. Based on the relevant physics, write down a system of equations using variables.
1. Make sure all the relevant information in the problem appears in an equation somewhere.
2. Check if # equations = # unknowns.
1. If so, the physics is done: everything else is just algebra. If not, it's possible that one of the unknowns is irrelevant and will cancel out in the end: proceed with caution.
2. Sketch out how you will solve the system of equations. A little forethought will prevent you from endless loops of substituting one equation into another and back again.
1. In a homework problem you'll probably have to work through to the bitter end.
2. On an exam, I would give you 80-90% credit for reaching this point. You may want to skip actually doing the algebra: come back to it later if you have time.
3. If you're just studying, stop here. We all know you could do the algebra if necessary, but it's probably not going to add to your understanding of the physics.
5. Solve for whatever the problem asks about (which your book calls the "target variable") algebraically using variables. You will end up with a more-or-less succinct expression with which you can:
1. Check units. You don't even need to put numbers in: just make sure the units of the final expression match the units of the target variable. If not, you probably made an algebra mistake.
You can check units of intermediate results to find the problem.
2. Plug in numbers. This is the first time you ever use the value of a variable.
3. Check that the result is reasonable. You probably have an idea of how big the answer should be. If you're way off, 9 times out of 10 you just keyed in a value incorrectly.
This procedure does not guarantee success, but it makes it much more likely that you will translate an understanding of the physics into a correct final answer with minimal pounding on the calculator.
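The unit check in step 5.1 can even be mechanized. The sketch below is mine, not part of the course materials: a unit is represented as a tuple of exponents of (metre, second, kilogram), products add exponents, quotients subtract them, and the final expression's unit tuple must match that of the target variable.

```python
# A unit is a tuple of exponents of the base units (metre, second, kilogram).
METRE = (1, 0, 0)
SECOND = (0, 1, 0)
DIMENSIONLESS = (0, 0, 0)

def mul(u, v):
    """Units of a product: exponents add."""
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    """Units of a quotient: exponents subtract."""
    return tuple(a - b for a, b in zip(u, v))

# Example: d = (1/2) * a * t**2, where a is an acceleration and t a time.
accel = div(METRE, mul(SECOND, SECOND))                  # m / s^2
d = mul(DIMENSIONLESS, mul(accel, mul(SECOND, SECOND)))  # (1/2) has no units

# The result should have units of length; if not, the algebra is suspect.
print(d == METRE)  # True
```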
Making up lost points on Problem Sets
People who missed points on problem sets can do extra credit to make up some of the lost opportunity, subject to the caveats:
1. Out of concern for the graders, makeup problems will be in MasteringPhysics only.
2. You cannot make up more points than you actually missed.
3. This is not supposed to be an easy process, so the MasteringPhysics problems will be relatively difficult.
4. You can make up points for Problem Set n by doing MasteringPhysics problems from Makeup Assignment n on our course MasteringPhysics site.
5. In this instance I am not using MasteringPhysics for its tutorial properties, so I will not allow you unlimited attempts to answer the questions and there will not usually be hints. Check the
grading details for the particular makeup assignment.
Display Ratio Analysis Report
Ratio analysis is a powerful tool for financial analysis. A meaningful analysis of a financial statement is made possible by the use of ratios.
Ratios are a set of figures compared with another set. The comparison gives an understanding of the financial position of a business unit. There are a number of ratios which can be computed from a
single set of financial statements. The ratios to be computed depend on the purpose for which these ratios are required. A single ratio may sometimes give some information, but to make a
comprehensive analysis, a set of inter-related ratios are required to be analysed.
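For instance, two widely used inter-related ratios can be computed from the same set of balance-sheet figures. The numbers below are made up for illustration and are not Tally output:

```python
# Illustrative balance-sheet figures (fictional).
current_assets = 450_000
current_liabilities = 300_000
total_debt = 250_000
shareholders_equity = 500_000

# Two inter-related ratios computed from one set of figures.
current_ratio = current_assets / current_liabilities    # liquidity
debt_equity_ratio = total_debt / shareholders_equity    # leverage

print(f"Current ratio: {current_ratio:.2f}")            # 1.50
print(f"Debt-equity ratio: {debt_equity_ratio:.2f}")    # 0.50
```

Neither figure means much on its own; read together, they start to describe the financial position of the business unit.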
To view the Ratio Analysis
Go to Gateway of Tally > Ratio Analysis
The Ratio Analysis screen is displayed as shown.
The screen is divided into two parts:
• Principal Groups
• Principal Ratios
The Principal Groups are the key figures that give perspective to the ratios.
Principal Ratios relate two pieces of financial data to obtain a comparison that is meaningful.
Principal Groups and key figures
Payment Performance of Sundry Debtors
Arithmetic Ring
The digits 1, 2, 3, 4, 5, 6, 7 and 8 are placed in the ring below.
With the exception of 6 and 7, no two adjacent numbers are consecutive.
Show how it is possible to arrange the digits 1 to 8 in the ring so that no two adjacent numbers are consecutive.
Here is one solution.
Are there any more solutions?
What if you had to arrange the numbers 1 to 12 in a 4 by 4 ring?
How many numbers would there be in an n by n ring?
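Since the extension questions invite experimentation, here is a small brute-force search (my sketch, not part of the original puzzle). It fixes 1 in the first position so that rotations of the same ring are not counted twice; reflections are still counted separately.

```python
from itertools import permutations

def valid_ring(ring):
    """True if no two cyclically adjacent entries differ by exactly 1."""
    n = len(ring)
    return all(abs(ring[i] - ring[(i + 1) % n]) != 1 for i in range(n))

# Fix 1 in the first position to avoid counting rotations of the same ring.
solutions = [(1,) + rest
             for rest in permutations(range(2, 9))
             if valid_ring((1,) + rest)]

print(len(solutions), "arrangements up to rotation; example:", solutions[0])
```

For example, the even-then-odd ordering 2, 4, 6, 8, 1, 3, 5, 7 around the ring is one valid solution, since every pair of neighbours differs by at least 2.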
Problem ID: 117 (May 2003) Difficulty: 1 Star
Of 26,492 births, 869 males and 642 females were born with birth defects. What is the ratio of males to females born with birth defects? What is the rate of babies born with birth defects? Would the ratio be like, for every 1 female there are, say, 1.3 males, or how does that work?
I agree with your answer of about 1.3 for the first part: males with defects to females with defects would be 869:642. For the second part, try adding the males + females with defects and dividing by the total number of births. OK?
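For anyone checking the arithmetic, here it is spelled out (the values come from the question; the rounding is mine):

```python
total_births = 26_492
males = 869    # males born with defects
females = 642  # females born with defects

# Part 1: ratio of males to females born with defects, 869:642.
ratio = males / females            # about 1.35 males per female

# Part 2: rate of babies born with defects = defects / total births.
rate = (males + females) / total_births   # about 0.057

print(f"ratio M:F is about {ratio:.2f} : 1")
print(f"rate is about {rate:.4f} ({rate * 1000:.0f} per 1,000 births)")
```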
discrete topology, product topology
December 6th 2008, 10:03 AM #1
My friend and I are still stuck on:
For each $n \in \omega$, let $X_n$ be the set $\{0, 1\}$, and let $\tau_n$ be the discrete topology on $X_n$. For each of the following subsets of $\prod_{n \in \omega} X_n$, say whether it is
open or closed (or neither or both) in the product topology.
(a) $\{f \in \prod_{n \in \omega} X_n | f(10)=0 \}$
(b) $\{f \in \prod_{n \in \omega} X_n | \text{ }\exists n \in \omega \text{ }f(n)=0 \}$
(c) $\{f \in \prod_{n \in \omega} X_n | \text{ }\forall n \in \omega \text{ }f(n)=0 \Rightarrow f(n+1)=1 \}$
(d) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|=5 \}$
(e) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|\leq5 \}$
Recall that $\omega=\mathbb{N} \cup \{ 0 \}$
If it helps you can think of $\prod_{n\in \omega} X_n$ as $\prod_{n=0}^{\infty} X_n$. We define $\prod_{n=0}^{\infty}X_n$ to be the set of all functions $f: \omega \to \{ 0 , 1\}$ that satisfy $f(n) \in \{ 0 , 1\}$.
There's a nice graphical representation of the product topology on $Y^X$ (i.e. the product of the space $Y$, $|X|$ times). Namely, if we draw $X$ as an " $x-$axis" and $Y$ as a " $y-$axis", then elements in $Y^X$ are "graphs of functions" in the $X-Y$ "plane". An open nbhd of an element $f$ is the set of all functions $g$ whose graphs are close to the graph of $f$ at finitely many points. We get different nbhds by varying the closeness to $f$ and/or the finite set of points.
In our case the product space is $2^\omega=2^\mathbb{N}$, whose "plane" looks like two copies of the naturals $\mathbb{N}$. In other words, if you were to imagine this as a 'subset' of $\mathbb
{R}^2$, it's just the set $\{(n,i) : n \in \mathbb{N}, i \in \{0,1\}\}$.
So remember that a basic open set in the infinite product topology just has all but finitely many factors equal to the whole space, with the rest open. Since the individual factors are discrete, you only need to check that all but finitely many factors are the whole space.
e.g. in (a) the 10th coordinate has a specific value, but all other coordinates can be whatever, so this is certainly open.
More accurately, those sets form a base for the product topology. It is not true that every open set is of that form. For a set U to be open in the product topology, it is necessary that every
point of U should contain a basic neighbourhood that is contained in U.
For example. take set (b). Let $B = \{f \in \prod_{n \in \omega} X_n | \;\exists n \in \omega \; f(n)=0 \}$. If $f\in B$ then there exists m such that f(m)=0. Then the set $\{g \in \prod_{n \in \
omega} X_n |\; g(m)=0\}$ is an open neighbourhood of f contained in B. Therefore B is open.
It's usually more difficult to check when a set is closed. You have to look at its complement and decide whether that is open. Sometimes this is straightforward. For example, the complement of
set (a) is the set of all f such that f(10)=1. That is open, so set (a) is closed as well as open.
For a slightly less easy example, look at set (c). Let $C = \{f \in \prod_{n \in \omega} X_n | \text{ }\forall n \in \omega \text{ }f(n)=0 \Rightarrow f(n+1)=1 \}$. If $f \notin C$ then there exists m such that f(m)=f(m+1)=0. Then $\{g \in \prod_{n \in \omega} X_n |\; g(m)=g(m+1)=0\}$ is an open neighbourhood of f containing no points of C. Therefore the complement of C is open and so C is closed.
Thank you so much!
(d) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|=5 \}$
(e) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|\leq5 \}$
So, now for parts (d) and (e), it is tricky to see whether these will be open, closed, both, or neither by looking at the complements and arguing similarly. Is (d) closed? Because the complement of (d) would be open (at least I think). And then part (e) is very similar to part (d), but it looks like this set is open. So should (e) be open? This is a hard question that we are struggling with. Thanks for your help. This is much more understandable now.
(d) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|=5 \}$
(e) $\{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|\leq5 \}$
So, now for parts (d.) and (e.) , these are tricky to see if these will be open, closed, both, or neither by looking at the complements and arguing similarly. Is (d.) closed? Because the
complement of (d.) would be open (at least I am thinking). And then part (e.) is very similar to part (d.) but it looks like this set is open. So should (e.) be open?
Let $D = \{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|=5 \}$, $E = \{f \in \prod_{n \in \omega} X_n | \text{ }|\{ n \in \omega | f(n)=0 \}|\leq5 \}$. If $f \notin E$ then there must be (at least) six values of n for which f(n)=0. The set of all elements of the product space that take the value 0 at those six points is an open neighbourhood of f that is disjoint from E. So E is closed.
But the set D is different. For each integer m>4, define $f_m$ by $f_m(1) = f_m(2) = f_m(3) = f_m(4) = f_m(m) = 0$, and $f_m(n) = 1$ for all other values of n. Then $f_m\in D$, but $\lim_{m\to\infty}f_m = g$, where g(n) = 0 for n=1,2,3,4, and g(n) = 1 for all other values of n. So $g \notin D$. That shows that D is not closed.
I'll leave you to think about whether D and E are open or not.
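The counterexample for D can be checked mechanically on finite truncations (a sketch in code; coordinates are indexed from 1 as in the post, and convergence in the product topology means agreement with g on any fixed finite set of coordinates for all sufficiently large m):

```python
def f(m, n):
    """Opalg's f_m evaluated at coordinate n: zeros at 1, 2, 3, 4 and at m."""
    return 0 if n in (1, 2, 3, 4, m) else 1

def g(n):
    """The limit point: zeros at coordinates 1, 2, 3, 4 only."""
    return 0 if n in (1, 2, 3, 4) else 1

N = 50  # inspect coordinates 1..N

# Each f_m with m > 4 has exactly five zeros, so f_m lies in the set D.
assert all(sum(f(m, n) == 0 for n in range(1, N + 1)) == 5
           for m in range(5, N + 1))

# Convergence: on coordinates 1..k, f_m agrees with g whenever m > k.
k = 10
assert all(f(m, n) == g(n)
           for m in range(k + 1, N + 1) for n in range(1, k + 1))

# But g has only four zeros, so g is not in D -- hence D is not closed.
assert sum(g(n) == 0 for n in range(1, N + 1)) == 4
print("all checks passed")
```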
So, now how does one show that $D$ and $E$ are open or not open? I never had a course in graduate level (or upper level) topology/set theory, but I think this is a very interesting problem.
given that f is a linear function, f(4)=-5 and f(0)=3, write the equation that defines f
March 24th 2010, 05:09 PM #1
given that f is a linear function , f (4)=-5 and f (0)=3, write the equation that defines f
I can find the average rate of change, and it is -2, but I cannot find the y-intercept...
Any linear function can be written as f(x)= mx+ b. Knowing that f(4)= -5 tells you that 4m+ b= -5 and knowing that f(0)= 3 tells you that 0(m)+ b= 3. Solve those two equations for m and b.
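Carrying the two equations through to numbers (a quick check, not part of the thread):

```python
# f(x) = m*x + b, with f(0) = 3 and f(4) = -5.
b = 3                 # from 0*m + b = 3
m = (-5 - b) / 4      # from 4*m + b = -5  =>  m = -2

f = lambda x: m * x + b
print(f"m = {m:g}, b = {b}")                      # m = -2, b = 3
print(f"check: f(4) = {f(4):g}, f(0) = {f(0):g}") # f(4) = -5, f(0) = 3
```

So the defining equation is f(x) = -2x + 3, consistent with the average rate of change of -2 already found.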
Attleboro Precalculus Tutor
Find an Attleboro Precalculus Tutor
...It will cover highlights of each of the 66 books of the Bible. It will include a close examination of why the ransom was necessary as well as the many prophecies beginning with the first
prophecy found in Genesis and down through Revelations. I have spent over a decade in study and helped others with studying the Bible.
38 Subjects: including precalculus, reading, algebra 1, English
I recently completed my undergraduate studies in pure mathematics at Brown University. I am available as a tutor for pre-algebra, algebra I, algebra II, geometry, trigonometry, pre-calculus,
calculus I, II, and III, SAT preparation, and various other standardized test preparations. I have extensiv...
22 Subjects: including precalculus, reading, Spanish, calculus
...As part of graduate coursework I assistant taught math courses, including linear algebra. I worked in an undergraduate tutorial office for one year as a tutor for subjects including linear
algebra. I was enrolled for two years in a math graduate program in logic.
29 Subjects: including precalculus, reading, calculus, English
...I can teach the basics of grammar, spelling, and punctuation for the lower levels (K-5), and essay writing, critical analysis, and critical essays of the classics for upper level grades. Before
I began a family I was in the actuarial field. I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students.
25 Subjects: including precalculus, English, reading, calculus
...I tremendously enjoy sharing my love of math with others and consistently seek to impart an integrated understanding of the material rather than simply helping students memorize formulas and
procedures. As a tutor I am patient and supportive and delight in seeing my students succeed. If it sounds to you like I can be helpful I hope you will consider giving me a chance.
14 Subjects: including precalculus, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Attleboro_Precalculus_tutors.php","timestamp":"2014-04-19T20:05:06Z","content_type":null,"content_length":"24323","record_id":"<urn:uuid:6e3bb07f-2093-4efe-a619-cd40c3a19799>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Polarization Basis for Polarimetric Phased Array Weather Radar: Theory and Polarimetric Variables Measurement
International Journal of Antennas and Propagation
Volume 2012 (2012), Article ID 193913, 15 pages
Research Article
School of Electronic Science and Engineering, National University of Defense Technology, De-Ya Road 46, Kai-Fu District, Hunan Province, Changsha 410073, China
Received 22 August 2012; Accepted 15 October 2012
Academic Editor: Francisco Falcone
Copyright © 2012 Jian Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
A novel scheme is developed for mitigating measurement biases in agile-beam polarimetric phased array weather radar. Based on the orthogonal Huygens source dual-polarized element model, a
polarization measurement basis for planar polarimetric phased array radar (PPAR) is proposed. The proposed polarization basis is orthogonal to itself after a 90° rotation along the array’s broadside
and can well measure the characteristics of dual-polarized element. With polarimetric measurements being undertaken in this polarization basis, the measurement biases caused by the unsymmetrical
projections of dual-polarized element’s fields onto the local horizontal and vertical directions of radiated beam can be mitigated. Polarimetric variables for precipitation estimation and
classification are derived from the scattering covariance matrix in horizontal and vertical polarization basis. In addition, the estimates of these parameters based on the time series data acquired
with the new polarization basis are also investigated. Finally, autocorrelation methods for both the alternate transmission and simultaneous reception mode and the simultaneous transmission and
simultaneous reception mode are developed.
1. Introduction
The theory and application of polarimetric radar for weather sensing have been developed for decades [1, 2]. Theoretically, any orthogonal polarization basis (e.g., horizontal and vertical linear,
right hand and left hand circular, and and slant linear) can be used for fully polarimetric measurement. However, depending on the shape, size, and mean canting angle of hydrometeors, the linear
(horizontal and vertical) polarization basis is the most common choice for current polarimetric weather radars with mechanically steered antenna, since it offers simpler, more direct, and accurate
quantitative measurements of precipitation [3, 4]. Polarimetric variables (e.g., the differential reflectivity Z_DR, the total differential phase Φ_DP, the copolar cross-correlation coefficient ρ_hv, and the linear depolarization ratio LDR) for precipitation estimation and classification have also been derived from the scattering covariance matrix in the H–V polarization basis [1–3]. The radar data acquired with other orthogonal polarization states need to be transformed to the H–V polarization basis for the estimates of these parameters [5, 6].
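For reference, the standard definitions of these variables in the weather-radar literature, written in terms of the copolar and cross-polar elements S_hh, S_vv, S_vh of the backscattering matrix (angle brackets denote time averaging; exact conventions vary slightly between authors):

```latex
Z_{DR} = 10\log_{10}\frac{\langle |S_{hh}|^{2}\rangle}{\langle |S_{vv}|^{2}\rangle},
\qquad
\rho_{hv} = \frac{\bigl|\langle S_{vv} S_{hh}^{*}\rangle\bigr|}
                 {\sqrt{\langle |S_{hh}|^{2}\rangle \,\langle |S_{vv}|^{2}\rangle}},
\qquad
LDR = 10\log_{10}\frac{\langle |S_{vh}|^{2}\rangle}{\langle |S_{hh}|^{2}\rangle}
```

The total differential phase Φ_DP is the accumulated phase difference between the H and V copolar returns, combining the backscatter differential phase with the two-way propagation differential phase.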
Basically, there are two popular measurement modes for implementing the linear polarization basis. One is the simultaneous transmission and simultaneous reception (STSR) mode, while the other is the
alternative transmission and simultaneous reception (ATSR) mode. Unless some orthogonal signal coding methods are used [7, 8], only the copolar elements of the scattering covariance matrix can be measured in the STSR mode. But there are benefits such as reduced statistical fluctuations, no need to account for Doppler shifts, and simplification of hardware, which have made the STSR mode a common choice for recent polarimetric weather radars [3].
Polarimetric variables account for the differences between the H- and V-polarized fields that are reflected by or propagate through the hydrometeors. Since the shape of most hydrometeors is nearly spherical,
the differences are usually very small. Accurate measurement of polarimetric variables is required to provide reliable information. There are many factors of biases in polarimetric measurement. One
of the most critical factors comes from the nonideal polarization characteristics of an antenna, such as the mismatching of copular radiation and the limited isolation of cross-polar radiation. These
effects were first examined in [9] for the ATSR mode. The requirements on the cross-polar isolation of antennas were investigated in [10], for both the ATSR and the STSR modes. More recent efforts
reported in [11, 12] have considered the different cross-polar patterns of an antenna and given out compact forms of biases induced by the cross-polar radiation patterns and the unmatched copolar
In recent years, phased array radar technology has attracted wide attention in the weather radar community [13–15]. Phased array antennas have become an inevitable choice for next-generation multifunction weather radar, due to their faster scan speed, simultaneous multiple beams, and adaptive beam forming. However, accurate polarimetric measurement with phased array antennas is still
challenging [16]. The polarization characteristics of the dual-polarized elements in a phased array vary with the beam-steering angle, which has been modeled theoretically using orthogonal current sheets [17] and crossed infinitesimal dipoles [16, 18]. The cross-polar radiation of a phased array antenna has been derived theoretically in [18] and validated with measurements in [
19]. As pointed out in [18], when the beam is directed away from the broadside of a planar phased array antenna, the misprojections of the dual-polarized radiation fields onto the local horizontal and vertical directions of the radiated beam would cause undesirable measurement biases, which can be much larger than the intrinsic values of the polarimetric variables. The solutions for correcting these biases have been discussed in [
20], and two schemes by simultaneously adjusting the amplitude and phase of dual-polarized elements have been proposed. In order to achieve azimuth scan-invariant and high-accuracy weather
measurements, a cylindrical configuration for polarimetric phased array radar has been proposed in [21]. Based on the interleaving sparse array (ISA) concept, an economical way of making projection
corrections for planar phased array radar operated in alternate transmit-alternate receive (ATAR) mode has been proposed in [22]. The correction matrix approach proposed in [16, 18] has been
generalized in [23–25] for the calibration of practical polarimetric phased array systems, where the imbalances and cross-couplings in transmitter and receiver (T/R) modules, the cross-coupling
between antenna elements, and the polarization characteristics of practical antenna elements are included.
The existing schemes for mitigating measurement biases aim at mimicking the polarimetric state of a mechanically steered beam in the conventional H-V polarization basis by simultaneously adjusting the feeding voltages of the dual-polarized elements [18, 20]. However, accurately adjusting the transmitting power requires a highly linear power amplifier in the T/R module, which is difficult to implement and causes very low power efficiency. In contrast, a novel polarization basis for PPAR is proposed in this paper to fulfill polarimetric measurement. The proposed polarization basis has the advantage of rotational symmetry about the array's broadside: the radiated fields of the dual-polarized element can be projected symmetrically onto the new polarization basis throughout the whole scan volume. Thereby, the synthesized beam patterns can easily be matched in all beam-steering directions, and adjustments to the amplitudes and phases of the feeding voltages of the dual-polarized elements can be avoided. However, as the polarimetric variables for precipitation estimation and classification are derived from the scattering covariance matrix in the conventional H-V polarization basis, in order to retrieve these parameters from the time series data acquired in the new polarization basis, autocorrelation methods for both the ATSR and the STSR modes are also investigated.
The rest of this paper is organized as follows. In Section 2, a new polarization measurement basis for planar phased array antennas is introduced based on the orthogonal Huygens source dual-polarized element model, and the benefits of this new basis for polarimetric measurement are analyzed. The methods of retrieving the scattering covariance matrix and polarimetric variables from the time series data in the proposed polarization basis are investigated in Sections 3 and 4: Section 3 focuses on the ATSR mode, while Section 4 focuses on the STSR mode. Conclusions and perspectives are presented in Section 5.
2. Theory Development
As shown in Figure 1, a planar phased array antenna is located in the , plane of the coordinate system. Each dual-linear-polarized radiation element in the array has two ports, named port 1 and port 2, respectively. From the polarimetric measurement viewpoint, the purpose of this array design is to radiate orthogonally polarized fields throughout a scan volume, one from each port. In practice, however, the orthogonality of the radiation fields excited by the two ports cannot always be maintained through the whole scan volume. In order to measure this nonorthogonality and undertake
polarimetric measurement, a predefined orthogonal polarization basis (i.e., two unit polarized vectors which are orthogonal to each other) needs to be selected in advance. Polarimetric radars often
use the horizontal ( direction for taking the , plane as ground plane) and so-called “vertical” (negative direction which is only vertical at 0 elevation angle) polarization states for atmospheric
observations. This polarization basis works well for radars with mechanically steered antennas, since the beam is always at the boresight, where the copolar patterns can easily be designed to match well and the cross-polar patterns are commonly at a sufficiently low level. However, in the case of planar dual-polarized phased array antennas, most dual-linear-polarized elements have rotationally symmetric radiation characteristics with respect to the two feeding ports (i.e., the antenna radiates the same field when excited from port 1 as when excited from port 2, except that the field is rotated about the broadside direction). The H-V polarization basis cannot capture this symmetry, since the horizontal or vertical direction is not orthogonal to itself after a rotation about the array's broadside (i.e., the normal to
the array face). Consequently, when beams are steered away from the broadside, asymmetric projections of the electric fields excited by the two ports onto the local horizontal and vertical directions of the radiated beam are induced. The measurement biases caused by these misprojections can be much larger than the intrinsic values of the polarimetric variables [18]. On the other hand, if polarimetric measurements can be undertaken in a new polarization basis which is rotationally symmetric about the array's broadside and which captures the polarization characteristics of the dual-polarized element well, then these measurement biases can be mitigated. Thus, the question is whether a new polarization basis can be defined from an "ideal" dual-polarized element, one which radiates orthogonal fields throughout the whole scan volume when excited from its two ports.
2.1. Orthogonal Huygens Source Element Model
Theoretically, a Huygens source can be modeled by a pair of crossed infinitesimal electrical and magnetic current sources (dipoles), which are taken to be at the same location and with the same
radiation intensity [26, 27]. A dual-polarized element comprising two Huygens sources is located at the origin, with the PPAR array face in the , plane, as shown in Figure 2. One Huygens source is composed of a -directed electrical current source () and a -directed magnetic current source (), while the other is composed of a -directed electrical current source () and a negative -directed magnetic current source (). The length of these current sources is infinitesimal compared with the radar wavelength. The two Huygens sources are considered orthogonal since they are related by a 90° rotation about the broadside axis.
The radiation fields of electrical and magnetic current sources can be solved with the aid of auxiliary potential functions [28, Ch. 3]. The electric fields radiated by the two Huygens sources are given by (1) and (2), where η is the intrinsic impedance of the radiation medium, equal to the square root of the ratio of its permeability to its permittivity. The radiated fields of the electrical and magnetic current sources in one Huygens source should have the same magnitude and phase [26, 27]; thereby, the magnetic current amplitude must equal η times the electrical current amplitude in each source.
From (1) and (2), it can be verified that the scalar product of the two radiated fields vanishes, as stated in (3), where "·" denotes the scalar product. Equation (3) holds for any beam direction (θ, φ): the electric fields radiated by the orthogonal Huygens sources are orthogonal in any beam direction. Therefore, the orthogonal Huygens source can be regarded as an "ideal" dual-polarized element for polarimetric measurement.
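As a numerical illustration of this orthogonality, the sketch below evaluates the (θ, φ) far-field components of the two crossed-dipole Huygens sources over a grid of beam directions and checks that their scalar product vanishes everywhere. The field expressions follow the standard Huygens-source model with a (1 + cos θ)/2 element factor; the normalization is an assumption, not the paper's exact equations (1) and (2).

```python
import numpy as np

# Far-field (E_theta, E_phi) components of the two Huygens-source fields,
# each scaled by the rotationally symmetric (1 + cos(theta))/2 element factor.
def huygens_fields(theta, phi):
    pattern = 0.5 * (1.0 + np.cos(theta))
    e1 = pattern * np.array([np.cos(phi), -np.sin(phi)])   # source 1
    e2 = pattern * np.array([np.sin(phi),  np.cos(phi)])   # source 2
    return e1, e2

# Check orthogonality (scalar product = 0) over a grid of beam directions.
max_dot = 0.0
for theta in np.linspace(0.0, np.pi / 2, 19):
    for phi in np.linspace(0.0, 2 * np.pi, 37):
        e1, e2 = huygens_fields(theta, phi)
        max_dot = max(max_dot, abs(np.dot(e1, e2)))
print(max_dot)   # stays at numerical zero for every direction
```

The two fields also keep equal magnitudes in every direction, which is the symmetry the proposed basis exploits.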
In order to classify different hydrometeor species, polarimetric weather radar often uses the horizontally () and so-called "vertically" () polarized directions as the orthogonal polarization measurement basis. Substituting for and for in (1) and (2), respectively, the radiation fields of the orthogonal Huygens sources in the H-V polarization basis are represented as where is the electric field intensity transmitted along the direction of the array broadside (). The subscript (1 or 2) denotes the th Huygens source.
2.2. New Orthogonal Polarization Basis
As and are orthogonal at any far-field position, two new unit polarization vectors and can be defined as where is the normalization coefficient. and are also orthogonal and compose a new polarization basis. We denote this as the new polarization basis, in contrast with the conventional H-V polarization basis. The relation between the new and the conventional unit polarization vectors can be expressed via a transform matrix, where is the polarization basis rotation matrix for a planar array located in the , plane. It is orthogonal: its inverse equals its transpose.
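The rotation-matrix property is easy to check numerically. The sketch below assumes the basis rotation matrix takes the usual plane-rotation form (an assumption; the paper's matrix also depends on the beam direction) and verifies that its inverse is its transpose, so changing bases never amplifies noise.

```python
import numpy as np

# Assumed plane-rotation form of the polarization-basis rotation matrix.
def rotation_matrix(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

R = rotation_matrix(np.deg2rad(37.0))

# Orthogonality: R @ R.T should be the identity, so the inverse transform
# is simply the transpose.
err = np.max(np.abs(R @ R.T - np.eye(2)))
print(err)
```

The same check applies at every beam direction, since a rotation matrix is orthogonal for any angle.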
2.3. Backscattering Matrix
Substituting (6) into (1) and (7) into (2), respectively, the electric fields radiated by the two orthogonal Huygens sources in the new basis follow, where and denote the two orthogonally polarized incident field intensities at the hydrometeors. The relationship between the incident field intensities and the exciting currents can be expressed in matrix form, as in (13); the normalized copolar field radiation pattern of the Huygens source in the new polarization basis appears there, and the cross-polar pattern of the "ideal" Huygens source in the new polarization basis is always zero.
The field backscattered by the hydrometeors can be expressed in terms of the intrinsic backscatter matrix of the hydrometeors in the new polarization basis. Since the scattering medium is reciprocal, the off-diagonal terms in (15) are equal. The transform between this matrix and the intrinsic backscatter matrix in the H-V polarization basis () can be expressed as
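The change of basis for the backscatter matrix is a 2x2 congruence transform. The sketch below assumes a plane-rotation form for the basis rotation matrix (an assumption standing in for the paper's direction-dependent matrix) and checks that reciprocity, i.e., equal off-diagonal terms, survives the transform.

```python
import numpy as np

def rotation_matrix(phi):
    # assumed plane-rotation form of the basis rotation matrix
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

# A symmetric (reciprocal) backscatter matrix in the H-V basis;
# the numbers are illustrative only.
S_hv = np.array([[1.00 + 0.10j, 0.02 + 0.01j],
                 [0.02 + 0.01j, 0.95 + 0.12j]])

R = rotation_matrix(np.deg2rad(25.0))
S_new = R @ S_hv @ R.T      # congruence transform into the new basis

# Reciprocity (equal off-diagonal terms) is preserved by the change of basis.
sym_err = abs(S_new[0, 1] - S_new[1, 0])
print(sym_err)
```

Because (R S Rᵀ)ᵀ = R Sᵀ Rᵀ, a symmetric scattering matrix stays symmetric in any rotated basis.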
The scattered field of one polarization is completely received by one Huygens source, as they share the same polarization state, while the orthogonally polarized scattered field is completely received by the other. So the electric fields received by the two Huygens sources can be expressed as
2.4. Propagation and Cross-Polarization Effects
The discussion so far has deliberately ignored propagation effects. In practice, when electromagnetic waves propagate through precipitation, there can be differential attenuation and differential phase shifts between the two orthogonally polarized waves. Both the attenuation and the phase shifts along the propagation path affect the received fields, so corrections for these factors are necessary [1–4]. Accounting for the propagation effects, (17) should be generalized as in (18), where the transmission matrix in the new polarization basis appears. The relation between it and the transmission matrix in the H-V polarization basis () follows. Since most hydrometeors have an axis of symmetry near vertical, H- or V-linearly polarized waves practically remain in the same pure polarization state as they propagate through precipitation, and there is no depolarization between the H and V waves [3]. The transmission matrix accounting for the extra phase shift and attenuation induced by the hydrometeors is a diagonal matrix, given in (21); it indicates that the H and V components of the wave propagate independently. For weather radar operating at long wavelengths (e.g., 10 cm), attenuation is rather small and usually negligible [3].
Consequently, (21) simplifies to a form in which the two exponents are the "perturbation" components of the free-space propagation constant (i.e., ) due to the presence of the medium. One of the most important polarimetric variables for rainfall estimation is the specific differential phase K_DP [1], defined as half the range derivative of the differential propagation phase. The two-way differential propagation phase Φ_DP denotes the two-way propagation phase difference between the H- and V-polarized waves along the propagation path.
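The K_DP/Φ_DP relationship can be sketched as a cumulative range integral, Φ_DP(r) = 2 ∫₀ʳ K_DP(s) ds; the rain-cell profile and values below are illustrative only.

```python
import numpy as np

# Two-way differential propagation phase from a specific differential
# phase profile.  A 10 km "rain cell" with constant K_DP is assumed.
r = np.linspace(0.0, 50.0, 501)                      # range gates [km]
k_dp = np.where((r > 10) & (r < 20), 1.5, 0.0)       # K_DP [deg/km] inside the cell
phi_dp = 2.0 * np.cumsum(k_dp) * (r[1] - r[0])       # accumulated two-way phase [deg]
print(phi_dp[-1])   # total two-way phase behind the cell
```

The accumulated phase is monotone in range, which is why Φ_DP is robust to partial beam blockage while K_DP localizes the rain.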
Substituting (16) and (20) into (18) and using the equations and , the electric fields received by two Huygens sources are where denotes the combined backscattering and transmission matrix in
polarization basis [18]. It links the forward- and back-propagating dual-polarized electric fields in polarization basis at the location of the antenna, and it is given by The off-diagonal terms of
are equal (). Equation (23) is very important, since it shows theoretically that it is not necessary to remove the propagation effects before polarization basis transformation [6]. So the transform
from to the combined backscattering and transmission matrix in polarization basis () is
2.5. Receiving Voltage Equation
Electric fields received by an antenna are transformed by receivers into baseband voltages. If the relative gains and the phase difference of dual polarization receivers are calibrated, baseband
voltages received and transformed by the two Huygens sources can be represented as in (26), where the two terms are the received baseband voltages of the two polarized signals, respectively, and the normalized copolar power radiation pattern of the Huygens source antenna in the new polarization basis also appears. It accounts for the antenna's gain decrease when the beam direction moves away from the array's broadside. Furthermore, (26) indicates that the scattering matrix can be fully recovered without any bias if the voltages can be accurately measured; the transform, given in (27), is straightforward. However, the returned signal from a radar resolution volume filled with hydrometeors fluctuates with time, which can be described in terms of the Doppler velocity spectrum of the scatterers [1, 2]. Equation (27) is valid only if one can carry out simultaneous measurements of the four time series. When the radar system operates in the ATSR mode, the two voltage pairs are measured at alternate pulse repetition intervals, and the
coupling between the Doppler and the polarimetric effect prevents directly using (27) to retrieve . This coupling issue will be discussed and resolved in Section 3. In contrast, the Doppler and
polarimetric effects are not coupled in the STSR mode. This mode is based on the approximation that the intrinsic scattering matrix is diagonal, which is valid for most meteorological conditions in H and V linear polarization observations. However, when STSR mode measurements are undertaken in the proposed linear polarization basis, the off-diagonal terms of the combined matrix have nonzero values in most observing directions even if the intrinsic matrix is diagonal, and the four terms cannot be measured separately. Only under the assumption that the intrinsic matrix is diagonal can the diagonal terms be recovered by postprocessing the received signals of the two polarizations. This issue will be discussed in Section 4.
2.6. Practical Antenna Element Consideration
Although the orthogonal Huygens source is a theoretical dual-polarized element model, the design and realization of electrically small antennas with radiation characteristics similar to a Huygens source have been studied recently [27, 29]. If this kind of antenna is used as the radiation element of a planar PPAR, polarimetric measurements can be undertaken in the proposed polarization basis without obvious measurement biases.
For some popular antenna elements, such as dual-polarized microstrip antennas [30], the proposed orthogonal polarization basis is also suitable. As mentioned above, when a polarimetric planar phased array composed of dual-polarized elements is designed to radiate orthogonal polarizations throughout a scan volume, the radiation characteristics of the antenna element are often symmetrical with respect to the two feeding ports. In order to measure the orthogonality of the polarizations from the two ports, the logical choice of orthogonal polarization basis for describing the copolar and cross-polar radiation patterns is one which is orthogonal to itself after a rotation about the array broadside. The proposed polarization basis belongs to this case, while the conventional H-V polarization basis does not. The matching of the copolar radiation characteristics (i.e., the copolar radiation patterns should have the same main-beam shape, and the orthogonal copolar radiation fields excited by each port with equal power should have the same intensity) is very important for highly accurate polarimetric measurement. However, when the H-V polarization basis is used for a planar phased array, in order to keep the transmitted H and V fields the same as the fields from a mechanically steered beam, the voltage (power) fed to each port needs to be adjusted as a function of beam direction [20, 24, 25]. If the antenna element of a planar phased array has rotationally symmetric radiation characteristics, the copolar radiation patterns in the proposed polarization basis can match well, which makes the calibration process much easier.
With the electric and magnetic currents composing the orthogonal Huygens sources placed along the -, -axes, the analogous polarization basis for a planar array located in the , plane can be represented accordingly, with the corresponding polarization basis rotation matrix. The two new unit vectors are exactly Ludwig's third definition of linear reference and cross-polarization, proposed in Ludwig's classical paper for describing the polarization purity of antenna patterns [31]. Since this definition was formulated mathematically in 1973, its usage has become commonplace in the antenna measurement community for measuring and describing the copolar and cross-polar patterns of linearly polarized antennas [32]. When this kind of predefined polarization basis is used as the orthogonal transmitting/receiving polarization states of a planar PPAR, the radiation characteristics of the dual-polarized element can be well described, and the measurement biases caused by the misprojection of the copolar and cross-polar fields onto the local horizontal and vertical directions of the radiated beam [18] can be mitigated. If mutual couplings between elements in a phased array are modeled with active (embedded) element patterns [33], the relative cross-polar level of the synthesized beam is just the relative cross-polar level of the active element patterns in the same beam direction, which is the primary reason for the polarization measurement errors of the antenna.
As described in [20], in order to produce a desired or polarized beam independent of direction, the transmitting fields of each polarization by dual-polarized element need to be adjusted
simultaneously. Since each copolar radiation field in the new polarization basis can be considered to correspond to only one of the element's feeding ports, the whole issue of nonideal polarization reduces to the cross-polarized radiation fields. If the cross-polar pattern's maximum level over the whole scan volume is low enough to produce negligible measurement biases, each polarization can be synthesized individually, which is another benefit of the proposed polarization basis. The cross-polarized fields of a conventional dual-polarized element (e.g., a dual-polarized
microstrip antenna) in the new polarization basis can reach its maximum in the diagonal far-field planes. If these fields are strong enough and need to be taken into account in polarization beam
synthesis, then the ISA technique [22] can be used here to cancel the cross-polarization of the main beam since each polarization is individually controlled.
The proposed polarization basis provides a new option for accurate polarimetric measurement with planar phased arrays. The estimation methods for the polarimetric variables under the new polarization measurement are discussed in the next two sections.
3. Polarimetric Variables Measurement in ATSR Mode
The methods of retrieving the scattering covariance matrix and polarimetric variables from the new polarization measurements in the ATSR mode are presented in this section. The definitions of the polarimetric variables for meteorological research and some approximations of the backscatter and propagation characteristics of hydrometeors are reviewed first. Then two ways of measuring the polarimetric variables are discussed.
3.1. Scattering Covariance Matrix and Polarimetric Variables
Since meteorological targets are composed of an ensemble of reshuffling hydrometeors that are located randomly in space, the total return from a radar resolution volume fluctuates with time [6]. The
mean scattering matrix is no longer sufficient to completely characterize this scattering medium; the scattering covariance matrix is therefore necessary. The scattering covariance matrix [1, 2, 34] is defined in (29), where the entries () are the terms of the intrinsic backscatter matrix in the H-V polarization basis, and the angle brackets denote ensemble averaging.
Most terms of the covariance matrix have been used, by themselves or in combination with others, to infer properties of the scattering hydrometeors. The popular polarimetric variables derived from the covariance matrix include the differential reflectivity, the linear depolarization ratio, and the copolar cross-correlation coefficient. The definitions of these polarimetric variables are given in (30).
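A numerical sketch of the standard moment-based definitions, applied to a synthetic ensemble of scattering amplitudes; the scalings below are illustrative, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic ensemble: S_vv is a scaled, slightly decorrelated copy of S_hh.
s_hh = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
s_vv = 0.8 * (0.99 * s_hh + np.sqrt(1 - 0.99**2) * noise)

# Standard moment definitions of differential reflectivity and the
# copolar cross-correlation coefficient.
z_dr = 10 * np.log10(np.mean(np.abs(s_hh)**2) / np.mean(np.abs(s_vv)**2))
rho_hv = (np.abs(np.mean(s_vv * np.conj(s_hh)))
          / np.sqrt(np.mean(np.abs(s_hh)**2) * np.mean(np.abs(s_vv)**2)))
print(z_dr, rho_hv)   # Z_DR near 10*log10(1/0.64) ~ 1.94 dB, rho_hv near 0.99
```

In practice these moments are estimated from finite pulse trains, so their statistical errors depend on the number of independent samples.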
As Rayleigh-Gans scattering theory is valid for hydrometeors at long wavelengths (e.g., 10 cm), the differences in the backscattering phase angles of most hydrometeors are usually very small [35]. These phase differences are negligible, and the off-diagonal terms of (29) are considered to be real numbers. When propagation effects are taken into account, the combined terms need to be substituted for the intrinsic ones in (30). Attenuation along the propagation path in most precipitation media is rather small and can also be neglected at long wavelengths [3]; thereby, the power-based variables are not changed by propagation. Due to the differential propagation phase shift of the H- and V-polarized waves, the copolar cross-correlation coefficient with propagation effects acquires the two-way differential propagation phase, which has been introduced before.
Due to the relative motion and random wobbling of hydrometeors in the radar resolution volume, there are Doppler phase shift and spectral broadening in the correlation of different time sampled
echoes [1, 2]. Since the relative motions and positions of scatterers are independent of their sizes and shapes at low elevation angles, various correlation coefficients between components of either
polarization at one sample time and components at another sample time can be expressed as a product of coefficients due to Doppler spread and due to polarization effects [36]. For radar with
alternating transmit polarization, the two copolar returns are measured at alternate pulse repetition intervals. The copolar cross-correlation coefficient can be expressed in terms of the pulse repetition time, an integer lag index, and the correlation coefficient at that lag due to the Doppler spread. The mean Doppler phase shift corresponds to the mean velocity () of the hydrometeors in the radar resolution volume.
Since most weather echoes have Gaussian-shaped correlation coefficients (Doppler spectra), the magnitude of the correlation coefficient at each lag satisfies a Gaussian decay law. There may also be some differences in the Doppler phase/spectrum of the various polarization correlations at the same lag [35], which are ignored in this paper.
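The Gaussian correlation model referred to above is commonly written |ρ(mTs)| = exp(−8π²σᵥ²(mTs)²/λ²), a standard weather-radar result; the parameter values below are illustrative.

```python
import numpy as np

wavelength = 0.10      # 10 cm radar [m]
sigma_v = 2.0          # Doppler spectrum width [m/s]
ts = 1e-3              # pulse repetition time [s]

# Correlation-coefficient magnitude at lag m*Ts for a Gaussian spectrum.
def rho_mag(m):
    return np.exp(-8 * np.pi**2 * sigma_v**2 * (m * ts)**2 / wavelength**2)

lags = np.array([rho_mag(m) for m in range(4)])
print(lags)   # decays monotonically from 1.0
```

Because the exponent grows as m², the lag-2 coefficient equals the lag-1 coefficient raised to the fourth power, a relation exploited by spectrum-width estimators.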
3.2. Polarimetric Variables Measurement by Estimates of Instantaneous Scattering Matrices
Consider the ATSR mode and two consecutive times defined with indices and , where is an integer implicitly multiplying the pulse repetition time . At time , a pure polarized field is transmitted
followed by a pure polarized field at time . From (26), the received baseband voltage alternates as follows.
At time At time Substituting , , and into (34) and (35), these voltages can be expressed as follows.
At time At time where the constants and are , . and are the diagonal and off-diagonal terms of and have a relation of . The time series , , , and can be directly calculated from (36), (37), (38), and
(39), respectively. Since these time series are not measured at the same time, they cannot make up the instantaneous combined backscattering and transmission matrix in the new polarization basis.
Therefore, (27) cannot be used directly.
The question here is whether an instantaneous scattering matrix can be estimated from such alternate time series. This problem has been discussed in [6], and we recapitulate it here for completeness. If
the pulse repetition time is within half of the correlation threshold time as defined in [6], the instantaneous combined backscattering and transmission matrix can be constructed accurately and has a
form as where and are interpolated copolar and cross-polar terms at time . By invoking reciprocity for the case of backscatter, the cross-polar term can be estimated immediately as The estimation
problem is to determine the amplitude and phase of . The amplitude of can be estimated by a simple interpolation as The phase of can be estimated as where is the total copolar differential phase of
and polarization states at time , and it can be estimated as in (44). Equation (44) is just the th estimate according to the well-known estimator for the total mean copolar differential phase (), as given in [6],
where are the two estimates of cross-copolar covariance at lag 1.
Assuming that the relative phase differences between the dual-polarized transmitting and receiving channels of the radar are calibrated (or accounted for), the instantaneous combined backscattering and transmission matrix can then be constructed. This matrix can be transformed by (27) to the H-V polarization basis, and the polarimetric variables for precipitation estimation can then be calculated.
3.3. Polarimetric Variables Measurement by Estimates of Powers and Correlations
In this subsection, we will demonstrate how polarimetric variables can be calculated by operating on estimates of powers and correlations of the and polarization echoes. Assuming that an alternate
sequence of pulses is transmitted and received, according to (36)–(39), the power estimates from the even sequence (subscript ) and the corresponding correlation estimate follow, as do the power estimates from the odd sequence (subscript ) and its correlation estimate, together with the two cross-correlations of the even and odd sequences. The subscripts in (53) denote the correlation of the received even voltage sequence of one polarization with the received odd voltage sequence of the other, where the sequence corresponding to the subscript before the comma is always conjugated and leads in time in the correlation.
With the same notation, other cross-correlations (i.e., , , , , , and ) can be similarly defined. As indicated before, is the normalized copolar power radiation pattern of Huygens source antenna, and
() is the electric field transmitted along array broadside. For practical antenna elements, the measured (or computed) copolar active power radiation pattern in polarization basis should be
substituted for in (48) and (54). Assuming that the relative gains and the phase differences of two receivers are calibrated and removed from baseband voltages, the magnitude and phase imbalances of
and need to be measured by some calibration processing. Then the following powers and correlations can be derived, where (61) and (62) include the approximation that the Doppler effects and the polarization effects can be separated at low elevation angles. Other cross-correlations can also be calculated. With the reciprocity condition (), the following equations can be derived. If there is enough power in the cross-polar receiving channel, (64) can be used to estimate the Doppler spread and the mean Doppler velocity: both the magnitude of the correlation coefficient and the mean Doppler phase shift can be estimated from it, with a velocity ambiguity corresponding to the sampling time .
Another method for estimating these quantities is to use the correlation of either the odd or the even copolar sequence at lag . For the copolar even sequence, we have the correlation given below. For the Gaussian-shaped Doppler spectrum, the spectrum width and the mean Doppler phase can then be estimated, the latter with an ambiguity corresponding to the sampling at .
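A sketch of this pulse-pair style estimation, assuming the Gaussian spectrum model above and a lag of 2Ts (the copolar lag available with alternating transmission). The simulation is a simplified stand-in for a full time-series simulator, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

wavelength, ts, n = 0.10, 1e-3, 100_000
v_true, sigma_true = 8.0, 1.5          # mean velocity and spectrum width [m/s]

# Each velocity draw contributes one phasor to the lag-2Ts correlation.
v = rng.normal(v_true, sigma_true, n)
tau = 2 * ts
r2 = np.mean(np.exp(-1j * 4 * np.pi * v * tau / wavelength))

# Pulse-pair estimates: mean velocity from the argument, spectrum width
# from the magnitude assuming a Gaussian spectrum.
v_est = -wavelength * np.angle(r2) / (4 * np.pi * tau)
sigma_est = wavelength / (2 * np.sqrt(2) * np.pi * tau) * np.sqrt(np.log(1 / np.abs(r2)))
print(v_est, sigma_est)
```

Note the Nyquist (ambiguity) velocity at lag 2Ts is λ/(8Ts), half that of single-interval sampling, which is the price of the alternating transmission scheme.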
Furthermore, we will demonstrate how polarimetric variables defined in (30) can be estimated from the powers and correlations of (55)–(62). Two types of precipitation media are considered. One is a
medium that has off-diagonal backscattering terms, while the other does not.
3.3.1. Nondiagonal Backscattering Matrix
The transform has been given in (27); we rewrite it here with expansions.
With the approximation that the Doppler effects and polarization effects are separated, the four covariance elements can be calculated from the powers and correlations defined in (55)–(62) as follows, where the reciprocity of the scattering matrix () is used several times in these equations. The estimates of the correlation coefficient () and the mean Doppler phase shift () have been given before. When the attenuation along the propagation path is ignored, the polarimetric variables can be computed as shown; (74) can be used to estimate the differential phase terms. With the approximation that all elements of the backscattering matrix have the same phase angles, the remaining two variables can be estimated as given.
3.3.2. Diagonal Backscattering Matrix
When observing at low elevation angles, the off-diagonal terms of the intrinsic backscattering matrix are very small (and can be ignored) for most meteorological conditions in H and V linear polarization observations. Although the methods presented in the last subsection are also suitable for this case, some simpler but more effective methods for estimating the polarimetric variables are presented here. Since the off-diagonal terms are considered to be zero (), the received baseband voltages at the two alternate times follow, and the copolar terms can be solved from (77) and (78) and from (79) and (80). Precipitation observation is often carried out at low elevation angles, where the relevant angle approaches 90°; thereby, one coefficient in (81) and (84) is very small. As it appears in the denominators of (82) and (83), only (81) and (84) can be used here to avoid large computational errors.
From the power estimates of (81) and (84), the covariance and can be derived as Then can be estimated by substituting (85) and (86) into (72).
The two cross-correlations of (81) and (84) can be calculated as follows. As indicated in [37], the argument of (87) equals the sum of the differential phase and the Doppler shift, whereas the argument of (88) combines them differently, so the differential phase can be estimated from the two arguments. The magnitudes of (89) and (90) are equal in the mean, but both would be estimated and averaged in practice to reduce errors. The correlation magnitude can then be estimated, and the final variable computed by substituting (85), (86), and (91) into (76).
4. Polarimetric Variables Measurement in STSR Mode
The STSR mode is based on the approximation that the propagation and backscattering matrices in the H-V polarization basis are diagonal. In this mode, radars with mechanically steered beams simultaneously transmit and receive both horizontal and vertical polarization states, and only the reflectivity, differential reflectivity, differential phase, and copolar cross-correlation coefficient can be measured. However, when the new polarization basis is used, the propagation and backscattering matrices are not diagonal, and the four terms cannot be measured separately in STSR mode. Assuming the same time index () and with , the simultaneously received baseband voltages in STSR mode can be derived from (26), and the two copolar terms can be directly solved from the linear equations (92) and (93). So the simultaneous time series of the two copolar terms can be solved upon each return, and conventional polarimetric variable estimation methods for the STSR mode can be used [2, Ch. 6].
The more efficient procedure is using the following powers and correlations of received baseband voltages: | {"url":"http://www.hindawi.com/journals/ijap/2012/193913/","timestamp":"2014-04-16T12:15:07Z","content_type":null,"content_length":"1046771","record_id":"<urn:uuid:344760a6-ec66-45bb-9c68-29145306b43a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
A little physics in your bio
For Today: Prediction and Testing (expectation values and measurements)
• Discussion: Prediction vs. reality
• Ponderable, coin flips: prediction and measurement.
• Discussion: Sample populations, RMS deviations and standard deviations, and central limit theorem (law of large numbers)
• There is a nice simulation of these ideas, in terms of dice rolling. Try this by varying the number of dice and the number of rolls. Note that you can reset the applet by changing the number of
dice, and if you select 5 rolls, each consecutive roll rolls 5 more times. Thus, you can see the distribution build up.
• Discussion: measurements and sample populations to random walk
• Discussion and lab: Video microscopy, calibration, and observation of Brownian motion
• Discussion and lab: Video microscopy, calibration, and observation of protists
• Lab: Conversion and importing videos to imageJ
• Lab: Analyze videos with imageJ
• Discussion: kinematics and forces
• Tangible, kinematics of cellular motion: prediction and measurement.
• Tangible, kinematics of motion: laser trap.
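The dice-rolling idea above can also be reproduced offline. The sketch below is my own minimal illustration (not the applet linked from the course page), showing how the sample mean and standard deviation of dice sums settle toward the values the central limit theorem predicts:

```python
import random
import statistics

def roll_sums(num_dice, num_rolls):
    """Sum of `num_dice` fair six-sided dice, repeated `num_rolls` times."""
    return [sum(random.randint(1, 6) for _ in range(num_dice))
            for _ in range(num_rolls)]

samples = roll_sums(5, 10_000)
mean = statistics.mean(samples)   # expected value is 5 * 3.5 = 17.5
sd = statistics.stdev(samples)    # expected value is sqrt(5 * 35/12), about 3.8
print(mean, sd)
```

Increasing `num_rolls` tightens both estimates, which is exactly the law-of-large-numbers behavior discussed above.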
Additional resources and readings
This material was prepared for the NSF REU Site: Molecular Biology, Bioinformatics and Biomaterials
Double & Float primitive types
September 8th, 2011, 10:44 PM
Double & Float primitive types
Hi guys,
I have a quick question: What is the range the variable types "double" and "float" can hold, in terms of powers of 2, that is, 2 to the power of x? Also the cardinality would be greatly appreciated.
September 8th, 2011, 10:58 PM
Re: Double & Float primitive types
Java floats: Wikipedia - IEEE 754-2008 single precision floating point numbers
Java doubles: Wikipedia - IEEE 754-2008 double precision floating point numbers
General IEEE 754-2008 specifications: Wikipedia - IEEE 754-2008
September 9th, 2011, 12:03 PM
Re: Double & Float primitive types
September 9th, 2011, 04:20 PM
Re: Double & Float primitive types
These are all good links, but they don't answer my question. I need to know the binary representation of a double type number in memory. For example, an integer value fills its 4 bytes from the last bit to the first with 1s and 0s until the number is described. But what happens to float and double numbers? This question goes hand in hand with their range in
terms of [ -2^x , 2^x + Y] or something similar
September 9th, 2011, 04:27 PM
Re: Double & Float primitive types
Huh? How did the links not answer that? And why do you think you need to know this?
September 9th, 2011, 04:39 PM
Re: Double & Float primitive types
The binary representation of Java floats (or single precision floating point numbers) and Java doubles (or double precision floating point numbers) is very clearly presented in the first two
Wikipedia links I posted. The two links also provide very clear formulas/methods for converting between the computer binary representation and a human readable format.
IEEE 754-2008 - Wikipedia, the free encyclopedia provides a table of how many binary digits of precision you can expect from both of those two data types (float = binary32, double = binary64), as
well as the range for the exponents.
September 9th, 2011, 05:03 PM
Re: Double & Float primitive types
This sure does smell like a homework assignment.
September 11th, 2011, 02:52 PM
Re: Double & Float primitive types
Actually no, it's not for homework. Yes, I read the links, which have a very pretty diagram showing the binary representation. I just needed a layman's terms explanation of how it works, but thanks for
judging <3.
If you still are feeling friendly could you answer how to properly subtract doubles so that instead of 0.129999999999999 I get 0.13 when subtracting 20.37 from 20.50. Both values are stored in a
double type instance variable.
-Tried BigDecimal and it just makes it worse. Result: same with more digits.
keeping it friendly,
September 11th, 2011, 02:58 PM
Re: Double & Float primitive types
You can't subtract doubles with BigDecimal (another edit: yes you can, but not the way you think you can). You can subtract BigDecimals with BigDecimals - that'll work. If you're constructing
BigDecimals with a double value, your BigDecimal will already be 'wrong' - garbage in, garbage out and all that.
edit: the API doc for the BigDecimal(double) constructor explains this very clearly I've just noticed.
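To make that point concrete, here's a small sketch (class and variable names are just illustrative) contrasting plain double subtraction, a BigDecimal built from doubles, and a BigDecimal built from the String values:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Plain double arithmetic: 20.37 has no exact binary representation,
        // so the difference is a long string of digits rather than 0.13.
        double d = 20.50 - 20.37;
        System.out.println(d);

        // BigDecimal(double): the binary error is already baked into the inputs,
        // so the result is "wrong" with even more digits (garbage in, garbage out).
        BigDecimal fromDouble = new BigDecimal(20.50).subtract(new BigDecimal(20.37));
        System.out.println(fromDouble);

        // BigDecimal(String): exact decimal inputs, exact result.
        BigDecimal fromString = new BigDecimal("20.50").subtract(new BigDecimal("20.37"));
        System.out.println(fromString); // prints 0.13
    }
}
```

Only the String-constructed version matches what you wrote on paper, which is why the BigDecimal(double) constructor's API doc warns against it.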
September 11th, 2011, 06:48 PM
Re: Double & Float primitive types
0.13 cannot be represented exactly in IEEE 754-2008 binary notation (for any finite mantissa bit length). This is one of the major implications of the way floating point numbers are represented, and
why they are not used for exact calculations (such as calculations involving money).
For an online IEEE 754 converter:
IEEE 754 Converter
September 12th, 2011, 07:31 AM
Re: Double & Float primitive types
Google "what every computer scientist should know about floating-point arithmetic" for a popular article that explains what you aren't understanding.
Richard Elwes
4th September, 2012
In a paper with the unassuming title of Inter-universal Teichmuller theory IV: log-volume computations and set-theoretic foundations [pdf], Shinichi Mochizuki has released a purported proof of the
ABC conjecture. This would be huge news if correct, as this single conjecture is known to imply all sorts of exciting facts about the world of numbers. Proposed by Joseph Oesterlé and David Masser in
1985, its most famous consequence is Fermat’s Last Theorem… but of course that has already succumbed to other methods. So, to give a flavour of its power, I’ll discuss another: Pillai’s conjecture.
This starts with the observation that the numbers 8 & 9 are rather unusual. They are neighbours which are both powers of other positive whole numbers: 8=2^3 and 9=3^2. In 1844, Eugène Catalan
conjectured that this is the only instance of two powers sitting next to each other, a delightful and surprising fact which was eventually proved by Preda Mihăilescu in 2002.
However, a host of related questions remain unanswered. What about powers which are two apart, as 25 (=5^2) and 27 (=3^3) are? Or three apart, as happens for 125 (=5^3) and 128 (=2^7)?
In 1936 Subbayya Pillai conjectured that for every whole number k there are only ever finitely many pairs of powers exactly k apart. But so far the only case in which this is known is k=1, i.e. Catalan's
original 8 & 9.
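Pillai's conjecture is easy to probe numerically. The brute-force sketch below is my own illustration (not from the post): it lists perfect powers up to a bound and finds pairs a fixed distance k apart.

```python
def perfect_powers(limit):
    """All perfect powers m^e with m >= 2 and e >= 2, up to `limit`."""
    powers = set()
    m = 2
    while m * m <= limit:
        p = m * m
        while p <= limit:
            powers.add(p)
            p *= m
        m += 1
    return powers

def pairs_apart(k, limit=10**6):
    """Pairs of perfect powers differing by exactly k, up to `limit`."""
    powers = perfect_powers(limit)
    return sorted((p, p + k) for p in powers if p + k in powers)

print(pairs_apart(1))  # [(8, 9)] -- by Mihailescu's theorem this is all of them
print(pairs_apart(2))  # includes (25, 27)
print(pairs_apart(3))  # includes (125, 128)
```

Of course a search like this can only ever find pairs, never prove their absence, which is exactly why the ABC conjecture would be such a powerful tool here.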
A proof of the ABC conjecture would confirm Pillai’s conjecture for all the remaining values of k at a stroke… and a great deal else besides. So watch closely as the world’s number theorists now
descend on Mochizuki’s paper!
Categories: Maths, Number theory | Permalink
More Optimization and Modeling
November 15th 2006, 12:25 PM
More Optimization and Modeling
A rectangular beam is cut from a cylindrical log of radius 30 cm. The strength of a beam of width w and height h is proportional to $wh^2$. Find the width and height of the beam of maximum strength.
I'm not sure what to do to set this up... After the setup, It'll be easy.. I'll just take the derivative of the strength equation to maximize it...
All I have so far is $s = kwh^2$. I don't know how to write any other equations to help with writing one variable in terms of another for substitution.
I had one of those light bulbs pop up in my head while I was standing by the microwave waiting for my hot pockets to heat up :)
Here's my work... please check to see if i thought of the correct thing.
since r = 30, then d = 60.
the diameter is also a diagonal of the rectangle so:
$60^2 = h^2 + w^2$
Solve for h.
$h = \sqrt{3600-w^2}$
Now, Since the strength function is $s = kwh^2$, I can substitute.
$s = kw(\sqrt{3600-w^2})^2$
$s = 3600kw - kw^3$
Now I will take the derivative and set it equal to 0 to find where the strength is maximized:
$s' = 3600k - 3kw^2 = 0$
$w = \sqrt{1200}$
Now the 2nd derivative test to be certain it is a maximum.
$s'' = -6kw$
$s''(\sqrt{1200}) = -6k(\sqrt{1200})$ <---- negative, so it's a max.
Plug it back into the diagonal equation to get the height.
$60^2 = h^2 + (\sqrt{1200})^2$
$3600 = h^2 + 1200$
$2400 = h^2$
$h = \sqrt{2400}$
The dimensions should be $\sqrt{1200} \times \sqrt{2400}$
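As a symbolic cross-check of the algebra (using sympy, an outside tool that isn't required for the problem):

```python
import sympy as sp

w = sp.symbols('w', positive=True)
k = sp.symbols('k', positive=True)

# Strength with h eliminated via the diagonal constraint h^2 = 3600 - w^2
s = k * w * (3600 - w**2)

# Critical point of s(w): should be w = sqrt(1200) = 20*sqrt(3)
(w_star,) = sp.solve(sp.diff(s, w), w)
assert sp.simplify(w_star - sp.sqrt(1200)) == 0

# Second derivative is negative there, confirming a maximum
assert sp.diff(s, w, 2).subs(w, w_star).is_negative

h_star = sp.sqrt(3600 - w_star**2)
print(w_star, h_star)  # 20*sqrt(3) 20*sqrt(6)
```

Same answer as the hand computation, so the setup with the diagonal constraint looks right.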
How's this?
Nozzle Operating Instructions
Nozzle 3.7 New Version (3.7.0.5)
INSTRUCTION MANUAL
A DeLaval Nozzle Analysis Program for Microsoft Windows
By AeroRocket
Quasi-one-dimensional flow: A=A(x), p=p(x), r=r(x), T=T(x), M=M(x)
See the new Atlas 5 Purchase
RD-180 Rocket Motor Analysis
Copyright © 1999-2012 John Cipolla/AeroRocket
Nozzle 2-D Plume Instructions Turbulent Free Jet Instructions
Nozzle 3.7 is a one-dimensional and two-dimensional, compressible flow computer program for the analysis of converging-diverging nozzles. Nozzle models inviscid, adiabatic and hence isentropic flow
of a calorically perfect gas through variable-area ducts. Nozzle internal flow may be entirely subsonic, entirely supersonic or a combination of subsonic and supersonic including shock waves in the
diverging part of the nozzle. Shock waves are clearly identified as vertical red lines on all plots. The cross-sectional shape in the axial direction of the nozzle is specified by selecting from five
standard nozzle types or by defining nozzle geometry using the Free-Form nozzle geometry method. Nozzle plots color contours of pressure ratio, temperature ratio, density ratio, and Mach number and
has a slider bar that displays real-time values of all nozzle flow properties. New in this version is the ability to determine shock-angle, jet-angle (plume-angle) and Mach number for axisymmetric
and two-dimensional nozzles in the region near the lip for underexpanded and overexpanded flow. The plume analysis capability has been greatly enhanced in the new version of Nozzle 3.7.0.4.
NOZZLE INSTRUCTIONS
Summary of Features
1) Specify nozzle geometry as either Parabolic, Conical, Bell, Imported, or Free-Form. Free-Form nozzle shapes may use up to 30 points to define nozzle geometry.
2) Standard and Import nozzle shapes may have up to 1000 axial points defining the cross-sectional area distribution of the nozzle.
3) Select either the classical isentropic and normal-shock relations method or the MacCormack forward-predictor backward-corrector finite difference method to determine characteristics of nozzle
internal flow.
4) Locate internal shock waves quickly using the slider bar that displays nozzle property verses axial location in real time.
5) Determine Mach number (V/c), pressure ratio (P/P0), density ratio (R/R0) and temperature ratio (T/T0) at each axial location in the nozzle.
6) Determine shock wave location, Mach number before the shock wave, Mach number after the shock wave and nozzle area at the shock wave location.
7) Specify fluid properties for a large number of inert gases, liquid fluid rocket propellants and solid fuel rocket propellants or specify your own.
8) Specify the units of analysis as MKS (meter-newton-seconds), CGS (centimeter-newton-seconds), IPS (inch-pound-seconds) and FPS (foot-pound-seconds).
9) Enlarge all plots for easy data reduction and output all data and plots to a color printer.
10) Print Detailed Results to a printer and Save Data File As for displaying X, R coordinates and flow properties in CSV format for spreadsheet display and review.
11) Fast solution - most analyses completed in less than 15 seconds.
12) Generate color contour plots for Mach number (Mn), Pressure ratio (P/P0), Temperature ratio (T/T0), and density ratio (R/R0).
13) Determine exterior flow properties in the nozzle-lip region for underexpanded nozzles and overexpanded nozzles.
14) Added a hybrid rocket motor propellant having the following fuel and oxidizer to the list of combustion gases: 85% Nitrous Oxide, 15% HTPB.
15) Made the SSME example (shown below) the start-up data for Nozzle program analyses. Data easily cleared for new data entries.
16) Two-dimensional plume analysis using the method of characteristics for underexpanded (Patm < Pexit) flow.
17) Nozzle_Examples.zip in the Nozzle directory includes 34 nozzle examples used for validation purposes.
18) Design Conditions routine for those who wish to quickly design subsonic/supersonic wind tunnels or efficient every-day nozzles
19) Added Turbulent Circular and Turbulent 2-D Free Jet analysis capability based on theory from Viscous Fluid Flow by Frank M. White
20) NEW! Display Conical nozzle geometry in Computer Aided Design (CAD) formatted LINES and CIRCLES for generating imported shapes using the Imported shape option command. Accessed by clicking File
then CAD Input For Conical Shapes then SHOW NOZZLE CAD. Use these LINE and CIRCLE values in any CAD program to generate the text file required to generate a Nozzle 3.7 import geometry file.
21) NEW! For overexpanded nozzles, the value for pressure ratio (Pa/P3), Mach number (M3) and oblique shock diameter at point-2 are inserted into the 2-D Plume Analysis overexpanded nozzles. For
underexpanded nozzles the value for pressure ratio (Pa/Pe), exit Mach number (Me) and nozzle exit diameter (De) are inserted and plume results automatically displayed from nozzle exit to several
diameters downstream as in previous versions of Nozzle.
Propellant Gases Available
│ │
│ Inert Gases │
│Dry Air │Hydrogen │Helium │Water Vapor │Argon │Carbon Dioxide │
│Carbon Monoxide │Nitrogen │Oxygen │Nitrogen Monoxide │Nitrous Oxide │Chlorine │
│Methane │ │ │ │ │ │
│ │
│ Liquid Fuel Propellant Gases │
│Oxygen, 75% Ethyl Alcohol(1.43) │Oxygen, Hydrazine(.09) │Oxygen, Hydrogen(4.02) │
│Oxygen, RP-1(2.56) │Oxygen, UDMH(1.65) │Fluorine, Hydrazine(2.3) │
│Fluorine, Hydrogen(7.60) │Nitrogen Tetroxide, Hydrazine(1.34) │Nitrogen Tetroxide, 50% UDMH, 50% Hydrazine(2.0) │
│Nitric Acid, RP-1(4.8) │Nitric Acid, 50% UDMH, 50% Hydrazine(2.20) │ │
│ │
│ Solid Fuel Propellant Gases │
│Ammonium Nitrate, 11% Binder, 4-20% Mg │Ammonium Perchlorate, 18% Binder, 4-20% Al │Ammonium Perchlorate, 12% Binder, 4-20% Al │
│ │
│ Hybrid Rocket Motor Propellant Gases │
│85% Nitrous Oxide, 15% HTPB │ │ │
│ │
│ User-Defined Gases │
│Specify custom or user-defined gases by inserting Ratio of specific heats (Cp/Cv) and Gas constant (R=Cp-Cv) in the nozzle data entry section. │
As the exit back pressure, Pe is reduced below Po, flow through the nozzle begins. If Pe is only slightly less than Po, the flow throughout the nozzle is subsonic and the pressure profile along the
axis would be like curve A in Figure 1. Reducing Pe increases the mass flow rate through the nozzle. As the flow rate increases, the pressure at the throat decreases until it reaches the critical
pressure as indicated by curve B (PCRIT-1). The exit pressure Pe which exactly corresponds to sonic conditions at the throat can be easily determined from isentropic flow relations. The flow is
subsonic everywhere in the nozzle except at the throat, and mass flow is the maximum possible for the given nozzle and the reservoir conditions.
Suppose the exit pressure is now reduced to a value corresponding to curve F (PCRIT-3) in Figure 1 where no shocks are present in the nozzle. The exit pressure at F is such that the entire expansion
is isentropic and the flow is supersonic in the diverging portion of the nozzle. The value for pressure is simply obtained from the isentropic relationships for Mach number, pressure, temperature and
density and represents an optimal nozzle design. The pressure within the nozzle exit cannot be reduced further and when the external pressure is reduced to G the fluid leaving the nozzle changes its
pressure through a complicated flow pattern outside the nozzle. Thus, the curves B and F represent the two limiting cases of exit pressure for isentropic flow in such a nozzle. For exit pressures
below that at B, a shock wave forms within the diverging part of the nozzle, changing the flow from supersonic to subsonic and compressing the gas exactly enough to match the nozzle exit conditions.
Because of the entropy rise across the shock, the overall flow through the nozzle is not isentropic, although the flow on either side of the shock can still be considered isentropic. The lower limit
for this kind of flow pattern is given by a shock occurring exactly at the exit of the nozzle as indicated by curve D (PCRIT-2). The flow conditions for exit pressures between curves B and D may be
computed with aid of the isentropic relationships and normal shock analysis. At still lower exit pressures the flow adjusts itself through a series of two-dimensional or three-dimensional shock waves
and the average exhaust velocity is generally still supersonic.
The designer must chose an appropriate condition from the previous possibilities for his particular application. When the flow leaves the nozzle at supersonic speeds and its pressure exactly equals
the surroundings (curve F) , the nozzle is called correctly expanded (PCRIT-3). If the exit area of the nozzle is less than the correctly expanded value for a given back pressure, the nozzle is
underexpanded and the fluid leaving the nozzle has a pressure greater than the surroundings (curve G). On the other hand, if the exit area of the nozzle is too large, shock waves form within or just
outside the nozzle and the flow is called overexpanded. The particular mode of operation of any nozzle can be quickly checked by first establishing the limiting pressure curves B and D and comparing
them with the specified exit pressure. REFERENCE: Pages 299 and 300 Fluid Flow, Sabersky and Acosta.
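The limiting curves above follow from standard one-dimensional gas dynamics. As a hedged illustration (plain SI-unit helper functions of my own, not part of the Nozzle program), the sonic pressure ratio corresponding to curve B and the choked mass flow per unit throat area can be computed as:

```python
import math

def critical_pressure_ratio(gamma):
    """Throat (sonic) static pressure over stagnation pressure, p*/p0."""
    return (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

def choked_mass_flux(p0, T0, gamma, R):
    """Choked mass flow per unit throat area, mdot/A*, in SI units."""
    return p0 * math.sqrt(gamma / (R * T0)) * \
        (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

print(critical_pressure_ratio(1.4))             # ~0.5283 for air
print(choked_mass_flux(101325.0, 288.0, 1.4, 287.0))  # ~241 kg/(s*m^2), sea-level air
```

Once the throat chokes, the mass flow rate depends only on the reservoir conditions and throat area, which is why reducing the back pressure below curve B no longer increases the flow rate.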
Figure 1 and Figure 2: Flow conditions depending on Pressure Ratio (Pe/Pc)
-------------------------- SHAPES Nozzle 3.7 CAN MODEL --------------------------
Figure 3: Nozzle 3.7 determines isentropic internal flow properties for axisymmetric nozzles where the cross-sectional area distribution is directly specified by throat diameter, exit diameter and nozzle program axial shape selections. Exit area, Ae, for the circular cross-section case is Ae=π/4*De^2. Using this relationship the exit diameter, De, for the circular cross-section case is De=sqr(4/π*Ae), where Ae is nozzle exit area. A similar relationship determines throat diameter for the circular nozzle case to be Dt=sqr(4/π*At). In practice De and Dt are specified directly without having to apply these simple relationships. Compressible fluid flow properties only vary along the x-axis of the one-dimensional nozzle.
Figure 4: Nozzle 3.7 determines isentropic internal flow properties for rectangular nozzles where the cross-sectional area distribution is specified by the equivalent circular area defined by throat diameter, exit diameter and nozzle program axial shape selections. Exit area, Ae, for the rectangular cross-section case is Ae=Wz*Hy=π/4*De^2. Using this relationship the equivalent exit diameter, De, for the rectangular cross-section case is De=sqr(4/π*Wz*Hy), where Wz is nozzle exit width in the z-direction and Hy is nozzle exit height in the y-direction. A similar relationship determines throat diameter for the rectangular nozzle case to be Dt=sqr(4/π*Wz*Hy), where Wz is nozzle throat width in the z-direction and Hy is nozzle throat height in the y-direction. Compressible fluid flow properties only vary along the x-axis of the one-dimensional nozzle.
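The diameter relationships in the captions above are one-liners; a minimal sketch (helper names are my own, not from the program):

```python
import math

def equivalent_diameter_circular(area):
    """Diameter of a circle with the given cross-sectional area: sqrt(4*A/pi)."""
    return math.sqrt(4.0 * area / math.pi)

def equivalent_diameter_rectangular(width, height):
    """Equivalent circular diameter of a width x height rectangular section."""
    return math.sqrt(4.0 * width * height / math.pi)

print(equivalent_diameter_circular(math.pi / 4.0))  # 1.0
```

As the captions note, for circular nozzles De and Dt are normally entered directly; the rectangular case is where the equivalent-diameter conversion actually gets used.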
B) STEP-BY-STEP NOZZLE ANALYSIS EXAMPLE (REFER TO FIG 5 and FIG 6)
1) Using the Units pull-down menu check Length(inch), Pressure(lb/in^2) to specify the units of the analysis.
2) Select Bell nozzle in the Nozzle Shapes section. The data entries for a Bell nozzle having a bell shape will appear. Please see notes below.
3) Enter an entrance temperature of 5400 degrees R.
4) Enter an entrance pressure of 3000 psia.
5) Enter an Atmospheric pressure of .0017 psia to simulate vacuum conditions in space. Please see Note-5 for other options, for example the optimal design condition where no shocks are present.
6) Using the Gases pull-down menu select OXYGEN, HYDROGEN as the working fluid in the nozzle. By selecting OXYGEN, HYDROGEN the value for the ratio of specific heats (g) and the Gas Constant (Rgas)
are automatically specified in the appropriate spaces, in this case having units in^2/sec^2/°R.
7) Enter total nozzle length as 127 inches (The converging section is 6 inches long and the diverging section is 121 inches long).
8) Enter throat diameter as 10.3 inches.
9) Enter the throat location from the origin as 6 inches.
10) Enter the upstream throat radius as 7.725 inches (1.5 * Rthroat).
11) Enter the downstream throat radius as 1.967 inches (0.382 * Rthroat) .
12) Enter the entry angle of parabolic section as 32 degrees.
13) Enter the exit diameter as 90.7 inches.
14) Enter the total number of grids along the nozzle axis as 1000 points. A maximum of 1000 nozzle X,Y coordinates may be defined.
15) In the Solve Flow section select the Classical gasdynamics method option.
16) Click the SOLVE NOZZLE FLOW command button to determine nozzle flow characteristics using the method specified in step (15).
SPECIAL NOTE ON UNITS: Nozzle 3.7 design Units should be specified before dimensional data and fluid properties data are entered. Please see the nozzle design example in the section labeled,
STEP-BY-STEP NOZZLE ANALYSIS EXAMPLE. In this step-by-step example of the SSME, Units are specified at the beginning of each Project not in the middle or end of each Project. If however, the user
decides to “short circuit” the normal input procedure here is what happens if Units are changed "after" entering nozzle dimensional and fluid properties data. First, Entrance temperature and Entrance
pressure are reinitialized to 5400 degrees Rankin and 1000 psig or their values in the specified units. The value for Atmospheric (back) pressure remains unchanged from the value entered by the user
because it is not considered a basic property for the isentropic variations in the nozzle from chamber to nozzle exit. Then, Ratio of specific heats and Gas Constant are converted to their proper
values in the specified units. The remaining nozzle dimensional values are unchanged because Nozzle 3.7 assumes the user knows which units are required. To properly operate Nozzle 3.7 the user needs
to know that changing Units is considered a basic operational change that reinitializes the nozzle fluid dynamic analysis similar to choosing a new nozzle shape.
Figure 5: Nozzle Dimension Locations
NOZZLE RESULTS (REFER TO FIGURE 7)
17) Use the slider-bar to see real-time results for Nozzle radius [Y(X)], Nozzle cross-sectional area [A(X)], Mach number [Mn], Pressure ratio [P/Po], Temperature ratio [T/To] and Density ratio [R/Ro].
18) After selecting the desired plot variable option-button, enlarge the plot by clicking the ENLARGE PLOT command button. The plots can be printed from the enlarged plot screen.
19) Nozzle results may be sent directly to a printer in text form by clicking File and then Print Detailed Results.
20) Click the Show results option to display the Results section.
The results of the analysis are:
a) Mach number (Mn) at exit is 5.182
b) Pressure ratio (P/P0) at exit is .0007
c) Temperature ratio at exit is .2227
d) Thrust produced is 466,151.266 pounds.
e) Mass flow rate through nozzle is 1024.329 pounds per second.
f) Thrust coefficient (CF) is 1.865
For comparison, the published SSME values are:
Mass flow rate: 1035 pounds per second (1.0% difference)
Vacuum thrust: 470,000 pounds (0.80% difference)
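The exit Mach number in results like these follows from the isentropic area–Mach relation. The sketch below is not Nozzle's internal code, just a hedged illustration of that relation with a bisection solve for the supersonic branch; the printed check uses the textbook case γ = 1.4 rather than the O2/H2 mixture above:

```python
import math

def area_ratio(M, gamma):
    """A/A* from the isentropic area-Mach relation, valid for M > 0."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def supersonic_mach(ar, gamma, tol=1e-10):
    """Supersonic root of area_ratio(M) = ar, by bisection on M in (1, 50)."""
    lo, hi = 1.0 + 1e-9, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:   # area ratio increases with M for M > 1
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Classic textbook check: A/A* = 1.6875 at M = 2 for gamma = 1.4
print(supersonic_mach(1.6875, 1.4))  # ~2.0
```

Feeding in the SSME area ratio (De/Dt = 90.7/10.3, so A/A* ≈ 77.5) with the appropriate combustion-gas γ would reproduce an exit Mach number in the neighborhood of the 5.182 reported above.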
Note-1: The Bell nozzle shape uses a parabolic curve approximation from the throat to the nozzle exit. For an approximate G.V.R. Rao Bell nozzle configuration the contour immediately upstream of the
throat is a circular arc with radius 1.5*Rthroat. The divergent part of the nozzle immediately downstream of the throat is made up of a circular section with a radius of 0.382*Rthroat and then a
parabola to the exit of the nozzle.
Note-2: If Free-Form Shape is selected in step (2) the Imported and Graphical Shapes entry box appears. Enter all required data and then bring up the Free-Form screen by double-clicking on the DEFINE
FREE-FORM NOZZLE SHAPE icon. Generate the nozzle shape by dragging up to 30 points into position on the screen and then return to the main screen.
Note-3: In step (2) Import a list of X,Y nozzle coordinates by clicking File and then Import Nozzle Shape. The data must be in the following format: First line: [Total number of nozzle X,Y
coordinates]. Second and subsequent lines: [Point number], [X nozzle location], [Y nozzle location] and have a .TXT file delimiter. A maximum of 1000 nozzle X,Y coordinates may be defined. Use the
AeroGRID option of AeroRocketCAD to generate the fluid boundary of a nozzle by moving several points into place and by specifying the number of intermediate grids. See the simple nozzle example in
the AeroGRID section.
Note-4: In step (15) the selection of the MacCormack finite difference method will allow Nozzle to use the forward-predictor backward-corrector finite difference CFD method to compute nozzle flow.
This option computes curve F (PCRIT-3) which is the optimum design condition when no shocks are present in the nozzle (isentropic) and the flow is entirely supersonic in the diverging part of the
nozzle. For optimum nozzle expansion the nozzle exit pressure, P2 is equal to the external pressure, Patm. Rocket nozzles are normally designed using the PCRIT-3 flow expansion condition for optimal
performance. This method is only accurate if the residuals are reduced to at least 1.0E-6. In practice the number of nozzle points is usually less than 50, the CFL should be about 0.80, the starting
Mach number should be around 0.001 and finally the total number of iterations should be at least 750 and sometimes up to 2000.
Note-5: To compute an optimal nozzle design when no shocks are present, and if the Classical gasdynamics method is selected, insert 0.0 for the Atmospheric (back) Pressure. SOLVE the flow and the value
for PCRIT-3 and therefore atmospheric pressure is automatically determined and displayed in the Results section and reflected in the input data section. To compute the case where the flow is sonic (M
=1) at the throat and subsonic everywhere else (PCRIT-1) insert a value for Exit Pressure just slightly smaller than the Entrance Pressure. SOLVE the flow and an estimate for PCRIT-1 appears in the
status bar at the bottom of the screen. Insert this estimate for PCRIT-1 into the value for Exit Pressure and SOLVE again. The new value for the Exit Pressure in the Results section is the new value
for PCRIT-1.
Note-6: Please remember to "Click" back using the Return icon. Using the [X] box will kill the results and delete the modifications or may hang the application.
Figure 6: Input data for ideal expansion, no shock in nozzle.
Figure 7: Results for ideal expansion, no shock in nozzle.
Figure 8: Mach number contour plot for ideal expansion, no shocks in nozzle.
Figure 9: Results where back pressure is 100 psig causing a shock in the diverging part of the nozzle. This figure is not part of the SSME example.
This part of the description is to illustrate the Free-Form screen and is not part of the SSME nozzle example.
Figure 10: Free-Form screen for generating nozzle geometry and is not part of the SSME nozzle example.
OVEREXPANDED (Pa > Pe) AND UNDEREXPANDED (Pa < Pe) EXTERNAL NOZZLE FLOW
Nozzle uses two-dimensional oblique shock and Prandtl-Meyer expansion theory to predict shock-angles (b, b2), jet-angle (q) and Mach number for underexpanded and overexpanded flow. If the nozzle is
axisymmetric the solution is valid in the immediate vicinity of the nozzle-exit. Further than a half diameter from the nozzle-exit, axisymmetric expansions and shocks are not accurately defined by
two-dimensional oblique shock and Prandtl-Meyer theory. For this reason the external nozzle analysis is accurate for two-dimensional flow at large distances from the nozzle exit and a good
approximation for axisymmetric flow within a half diameter from the nozzle exit. A nozzle is underexpanded when Pa/Pc < Pe/Pc and is characterized as a nozzle that experiences a series of external
Prandtl-Meyer expansion waves starting from the lip of the nozzle. Similarly, a nozzle is overexpanded when Pa/Pc > Pe/Pc and is characterized as a nozzle that experiences a series of oblique shocks
starting from the lip of the nozzle. In this analysis, Pa/Pc is the ratio of the atmospheric (Pa) or back pressure to the chamber pressure (Pc) and Pe/Pc is the ratio of the nozzle exit pressure (Pe)
to the chamber pressure (Pc). Variables with subscript (c) are related to the entrance of the nozzle and variables with subscript (jet) for underexpanded flow and (2), (3) for overexpanded flow are
related to the exterior region of the nozzle after the nozzle-exit. Finally, variables with subscript (a) are related to the atmospheric pressure or back pressure of the environment.
For underexpanded flow, Nozzle determines flow properties using the Pa/Pc and Pe/Pc pressure ratio criteria. Then, Nozzle uses Me the exit Mach number to determine the Prandtl-Meyer function in
region (e) of the flow. Mjet is computed using the isentropic expansion equation by assuming Pjet = Pa. Then, using Mjet the Prandtl-Meyer function is determined in region (jet) of the flow. Finally,
the outer boundary or jet-angle is determined using the relationship, Theta(q) = n(Mjet) - n(Me). Theta (q) is defined as the angle the jet boundary makes with the horizontal axis of the nozzle-lip.
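The jet-angle relationship above can be sketched numerically. This is an illustration of the standard Prandtl–Meyer function, not the program's source (function names are mine); determining Mjet from Pjet = Pa via the isentropic relation is assumed to have been done separately:

```python
import math

def prandtl_meyer(M, gamma):
    """Prandtl-Meyer function nu(M) in degrees, for M >= 1."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt(M * M - 1.0)
    nu = a * math.atan(b / a) - math.atan(b)
    return math.degrees(nu)

def jet_angle(Me, Mjet, gamma):
    """Underexpanded jet boundary angle: theta = nu(Mjet) - nu(Me), degrees."""
    return prandtl_meyer(Mjet, gamma) - prandtl_meyer(Me, gamma)

# Textbook check: nu(2.0) for gamma = 1.4 is about 26.38 degrees
print(prandtl_meyer(2.0, 1.4))
```

When Mjet = Me (i.e. Pe already equals Pa), the expansion vanishes and the jet boundary leaves the lip at 0 degrees, matching the properly expanded case described below.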
For overexpanded flow, Nozzle determines flow properties using the Pa/Pc and Pe/Pc pressure ratio criteria. Then, Nozzle computes the pressure ratio across the initial and reflected oblique shocks to
compute Mach number in region 2 and 3 and oblique shock angles (b, b2) using two-dimensional gas dynamics. Figure-11 and Figure-12 define the variables used for overexpanded flow and underexpanded
flow, respectively. Figure-13 illustrates the SSME nozzle having an atmospheric pressure of 14.7 psia and the resulting overexpanded external flow with shock and jet boundary. Finally, Figure-14
illustrates the SSME nozzle having an atmospheric pressure of 2.0692 psia with a slightly underexpanded external flow and a jet boundary set at almost 0.0 degrees. This condition can be understood to
mean optimal expansion when no shocks are present and the nozzle is exhausting directly into the atmosphere.
Figure 11: Overexpanded nozzle (Pa/Pc > Pe/Pc)
Figure 12: Underexpanded nozzle (Pa/Pc < Pe/Pc)
Figure 13: SSME overexpanded nozzle where Patm/Pc > Pe/Pc and external lip shocks and reflected shocks occur.
Figure 14: SSME properly expanded nozzle (Slightly Underexpanded) where Patm/Pc < Pe/Pc.
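The overexpanded side can be sketched the same way. The fragment below is an illustration, not Nozzle's source: it solves the θ-β-M relation for the weak (attached) shock angle by bisection and applies the normal-shock relation to the normal Mach component. The test case mirrors the Example-3 data quoted later in this document (M1 = 2.44, 7° deflection).

```python
import math

def theta_from_beta(M1, beta, gamma=1.4):
    """Flow deflection angle (rad) from the theta-beta-M relation."""
    num = 2.0 / math.tan(beta) * (M1 ** 2 * math.sin(beta) ** 2 - 1.0)
    den = M1 ** 2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

def weak_shock_beta(M1, theta, gamma=1.4):
    """Weak-solution shock angle (rad): scan up from the Mach angle until the
    deflection exceeds theta, then bisect.  Assumes an attached shock exists."""
    lo = math.asin(1.0 / M1) + 1e-6
    hi = lo
    while theta_from_beta(M1, hi, gamma) < theta:
        hi += 0.01
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if theta_from_beta(M1, mid, gamma) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mach_behind_shock(M1, theta, gamma=1.4):
    """Downstream Mach number via the normal-shock relation on M1*sin(beta)."""
    beta = weak_shock_beta(M1, theta, gamma)
    mn1 = M1 * math.sin(beta)
    mn2 = math.sqrt((1.0 + 0.5 * (gamma - 1.0) * mn1 ** 2)
                    / (gamma * mn1 ** 2 - 0.5 * (gamma - 1.0)))
    return mn2 / math.sin(beta - theta), math.degrees(beta)

M2, beta_deg = mach_behind_shock(2.44, math.radians(7.0))
```

With M1 = 2.44 and a 7° deflection this returns β ≈ 29.8° and M2 ≈ 2.15, in line with the β = 29.9 deg and M2 = 2.14 quoted for the Example-3 test case.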
2-D PLUME ANALYSIS BY THE METHOD OF CHARACTERISTICS
The Nozzle routine that models the external jet plume assumes the exit flow is supersonic, two-dimensional and underexpanded. The user may run the plume analysis at any time by clicking FILE and then clicking
2-D Plume Analysis. The plume analysis is used by first inserting the ratio of ambient pressure to exit pressure (Pa/Pe), exit-plane Mach number, ratio of specific heats and nozzle exit diameter. If
results for the flow on the main screen are for an underexpanded nozzle, the value for pressure ratio (Pa/Pe), exit Mach number (Me) and nozzle exit diameter (De) are inserted and plume results
automatically displayed from nozzle exit to several diameters downstream. If results for the flow on the main screen are for an
overexpanded nozzle, the value for pressure ratio (Pa/P3), Mach number (M3) and oblique shock diameter at point-2 are inserted into the plume analysis and results automatically displayed starting at
the end of the external oblique shock wave pattern as illustrated below. The user must first click SOLVE and click Show Contours before performing a 2-D Plume Analysis. Alternatively, the user can
override the automatic input and manually insert another value for Pa/Pe. The other 2-D plume input variables, Nozzle exit plane Mach number (Me), Specific heat ratio (Gamma) and Nozzle exit diameter
(De) may be entered manually and the external 2-D plume computed for underexpanded flow. The user may step through the flow field by clicking the LOCATION button to display the physical location in
the plume and the properties at that location. Finally, the screen may be enlarged and screen results printed by the click of a button. In these illustrations, "e" represents the correctly expanded
value at the exit-plane of the converging-diverging nozzle and "a" represents the atmospheric or back pressure of the environment, which is 14.7 psia at sea level.
The following steps generate solutions for overexpanded and underexpanded flows.
STEP-1: Perform fluid flow analysis on main screen and click Show Contours then click Plot external pattern and contours
to generate the oblique shock wave geometry for overexpanded flow or the expansion wave geometry for underexpanded flow.
Please refer to Figure-15, below
Figure-15: Test case from Gas Dynamics: Theory and Applications, Example-3 on page 97 to 99 for an overexpanded nozzle.
Example-3 predicts that, for a nozzle defined by M[1] = 2.44, Pb/Pc = 0.1 and A[e]/A[t] = 2.5, β = 29.9 deg, θ = -7 deg and M[2] = 2.14.
Also, this example displays the CAD definitions for a conical nozzle design composed of two LINEs and two CIRCLEs.
STEP-2: Click File then click 2-D Plume Analysis to generate underexpanded nozzle properties from nozzle-exit to several
diameters downstream. OR generate overexpanded nozzle properties from point-2 on the main screen plot of the oblique shock
wave region to several diameters downstream. Please refer to Figure-16, below.
Figure 16: Two-dimensional plume analysis for underexpanded flow where Patm < Pexit. For overexpanded flow, Nozzle 3.7 starts the 2-D plume
Method of Characteristics analysis at Point-2 of the reflected oblique shock wave in Region-2.
For some users, the standard Nozzle program features are too complex for routine nozzle design. The new Design Conditions routine is for those who need nozzle
designs where the diverging nozzle flow is entirely subsonic or supersonic, including the exit jet where Pexit = Patm. Results from Design Conditions are suitable for designing subsonic or supersonic
wind tunnels or for designing efficient nozzles where no shocks are present in the diverging part of the nozzle or in the jet exhaust. Please see Figure-1 and Figure-2 where design condition refers
to flows that leave the nozzle at supersonic velocity and whose exit pressure equals the surroundings (curve F). The nozzle is called correctly expanded (PCRIT-3) for a supersonic design condition
nozzle. Simply specify exit Mach number (Me) or Pressure Ratio (Pc/Pe) for either subsonic (PCRIT-1) or supersonic (PCRIT-3) exit flow as depicted by Curve-B or Curve-F in Figure-1 and Figure-2. For
Curve-B and Curve-F the area ratio (Ae/At) exactly equals the critical ratio (Ae/Astar) for subsonic and supersonic correctly expanded flow. The throat velocity becomes sonic (M = 1), mass flux
reaches a maximum and the exit pressure (Pexit) exactly equals the atmospheric pressure (Patm), or in other words Pexit = Patm. The applicable nozzle equations for design condition nozzles are
displayed in the Basic Equations menu. For a complete understanding of the technical aspects of nozzle design please refer to Fluid Mechanics by Frank M. White, pages 513 to 547 in the chapter
Compressible Flow. Quickly plot color contours and flow properties versus axial location for Mach number (Mn), Pressure Ratio (Pc/P), Temperature Ratio (Tc/T) and Density Ratio (Rc/R). Please be
aware these ratios are the inverse of the flow properties computed in the main nozzle analysis.
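The ratios the routine plots follow directly from the isentropic relations and the area-Mach relation. The following Python sketch is illustrative only, not the program's source:

```python
import math

def area_ratio(M, gamma=1.4):
    """A/A* from the isentropic area-Mach relation."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def stagnation_ratios(M, gamma=1.4):
    """Return (Pc/P, Tc/T, Rc/R) -- the inverse ratios the routine plots."""
    TcT = 1.0 + 0.5 * (gamma - 1.0) * M * M
    PcP = TcT ** (gamma / (gamma - 1.0))
    RcR = TcT ** (1.0 / (gamma - 1.0))
    return PcP, TcT, RcR

PcP, TcT, RcR = stagnation_ratios(2.0)   # Pc/P ~ 7.82, Tc/T = 1.80, Rc/R ~ 4.35
ar = area_ratio(2.0)                     # Ae/Astar = 1.6875 exactly
```

For a correctly expanded supersonic design at Me = 2.0, Ae/A* is exactly 1.6875. As a further sanity check against the Nozzle 3.6.3 notes below, Me = 20 gives an area ratio of roughly 15,400, consistent with the "over 15,000" figure quoted there.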
Figure 17: Main screen for the Design Conditions routine showing properly designed nozzle results.
Figure 18: Main screen for the Design Conditions routine showing color contours of Mn versus axial location.
Figure 19: Main screen for the Design Conditions routine showing Mn versus axial location.
Figure 20: Main screen for the Design Conditions routine showing page one of three pages of nozzle equations.
TURBULENT CIRCULAR AND 2-D FREE JET ANALYSIS
Steady-State Viscous Boundary Layer Analysis of a Free Jet issuing from an orifice: The sketch on the left shows a typical subsonic streamline pattern for circular and two-dimensional, turbulent free
jets. For free jets at some distance downstream of the beginning of the jet or wake, the viscous boundary layer approximations apply and the velocity profiles become nearly similar in shape when
normalized by local velocity and jet width. Similarity holds well for jets and wakes to determine the velocity profile along the axis of the jet. Please see Viscous Fluid Flow by Frank M. White,
starting on page 505 for a complete derivation of the boundary layer approximation for the circular and two-dimensional free jet flow used in this analysis.
The turbulent free jet geometry is completely defined by specifying Nozzle-exit radius (b1), Jet Computational length (Lmax) and Jet starting (centerline) velocity (Umax). Results include velocity
profile plots at each of five predetermined axial locations as a percentage of Lmax, velocity contour plots having a maximum of 256 plot levels and velocity vector plots. In addition, this routine
includes the ability to save the five-station velocity profile results in CSV format for spreadsheet applications. Finally, all Free Jet analysis screens may be sent to the printer.
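The similarity behavior described above can be sketched with a simple self-similar model. This is an illustration, not Nozzle's implementation: the constants below (centerline-decay constant B ≈ 5.8 and half-width spreading rate S ≈ 0.094 for a round turbulent jet) are commonly quoted textbook values, not necessarily the ones White derives, and the Gaussian profile is an approximation.

```python
import math

# Assumed textbook constants for a round turbulent jet (see lead-in note).
B, S = 5.8, 0.094

def centerline_velocity(x, d, Ue):
    """u_c(x) ~ B * Ue * d / x far downstream (x >> d)."""
    return B * Ue * d / x

def half_width(x):
    """Jet half-width grows linearly with axial distance: b_half(x) ~ S * x."""
    return S * x

def velocity(x, r, d, Ue):
    """Gaussian similarity profile normalized by the local centerline value."""
    uc = centerline_velocity(x, d, Ue)
    b = half_width(x)
    return uc * math.exp(-math.log(2.0) * (r / b) ** 2)
```

By construction the velocity at r = b_half is half the local centerline value, and the centerline velocity decays as 1/x, which is the similarity behavior the five-station profile plots display.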
Figure 21: Typical free jet streamline pattern with definitions.
Figure 22: Turbulent Circular Free Jet velocity profiles at each of five predetermined axial locations as a percentage of Lmax.
Figure 23: Turbulent Circular Free Jet velocity contour plot having a maximum of 256 plot levels. Line illustrates jet boundary.
Figure 24: Turbulent Circular Free Jet velocity vector plot. Line illustrates jet boundary.
ATLAS RD-180 ROCKET MOTOR ANALYSIS
The RD-180 rocket engine is a Russian designed and built dual-combustion chamber, dual-nozzle rocket engine used to provide first-stage power for the US built Atlas 5 launch vehicle. The two
combustion chambers of the RD-180 share a single turbopump fueled by a mixture of RP-1 (kerosene) and LOX (Liquid oxygen) that uses an extremely efficient, high-pressure staged combustion cycle. The
RD-180 rocket engine operates at an oxidizer-to-fuel mixture ratio (O/F) of 2.72 and, like its predecessor the RD-170, employs an oxygen-rich pre-burner. The thermodynamics of the staged combustion
cycle allows the efficient oxygen-rich pre-burner to provide a higher than usual thrust-to-weight ratio. However, to achieve this greater efficiency the rocket motor must tolerate the high-pressure,
high-temperature gaseous oxygen that must be cycled through the engine.
The following Nozzle 3.7 input variables are used to model the RD-180 rocket motor. Chamber pressure for the RD-180 rocket motor is 26.7 MPa, nozzle exit diameter is 1.4 m, nozzle area ratio (Ae/At)
is 36.87 which establishes the throat diameter to be 0.2306 meters, atmospheric pressure is 0.10135293 MPa. Entrance temperature, Ratio of specific heats and Gas constant are defined by specifying
RP-1 and LOX as propellant and oxidizer for propulsion. The remaining input dimensions are used to define an approximate Bell nozzle geometry for the rocket motor. Nozzle 3.7 determines thrust for a
single nozzle of the RD-180 which represents half of the total thrust generated. For the two nozzle RD-180 rocket motor total thrust is 3.79 MN at sea level and 4.1 MN in vacuum. Results for this
analysis are tabulated in Table-1 below which shows that Nozzle 3.7 is capable of predicting thrust for the RD-180 rocket motor within 1.5% of measured results.
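The sea-level entries in Table-1 can be cross-checked with a few lines of arithmetic. This is an illustrative check only; the constant is the exact lbf-to-newton definition.

```python
# Cross-check of Table-1: lbf -> MN conversion and percent difference.
LBF_TO_N = 4.4482216152605   # newtons per pound-force, exact by definition

def lbf_to_MN(lbf):
    return lbf * LBF_TO_N / 1.0e6

predicted = lbf_to_MN(851421)   # Nozzle 3.7 thrust, sea level
measured  = lbf_to_MN(860568)   # RD-180 test thrust, sea level
diff_pct  = 100.0 * (measured - predicted) / measured
```

This reproduces the 3.79 MN and 3.83 MN entries and the roughly 1.1% sea-level thrust difference listed in the table.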
Atlas 5-400 launch vehicle (left) and the RD-180 rocket motor (right).
Figure 25: Input data for sea level RD-180 Nozzle 3.7 analysis used to populate the results listed in Table-1, below.
Figure 26: RD-180 analysis displaying Isp, CF, V[exhaust] and external shock pattern compared to engine test at sea level.
The inserted exhaust plume of an RD-180 rocket motor is not part of Nozzle 3.7 output but the color contour plot is actual output.
│Table-1, RD-180 │Sea Level (Patm = 101,353 N/m^2) │Vacuum (Patm = 5 N/m^2 ) │
│Analysis Results├─────────────────────┬───────────┼────────────────┬────────┤
│ │ Thrust, lbf (MN) │ Isp, sec │Thrust, lbf (MN)│Isp, sec│
│Nozzle 3.7.0.x │ 851,421 (3.79) │ 302 │ 921,567 (4.15) │ 327 │
│RD-180 Test │ 860,568 (3.83) │ 311 │ 933,400 (4.10) │ 338 │
│Difference │ 1.1% │ 2.9% │ 1.3% │ 3.3% │
Nozzle Minimum System Requirements
(1) Screen resolution: 800 X 600
(2) System: Windows 98, XP, Vista, Windows 7 (32 bit and 64 bit), NT or Mac with emulation
(3) Processor Speed: Pentium 3 or 4
(4) Memory: 64 MB RAM
(5) English (United States) Language
(6) 256 colors
Please note this web page requires your browser to have
Symbol fonts to properly display Greek letters (α, μ, π, ∂ and ω)
ADDITIONAL REQUIREMENT: Input data for all AeroRocket programs must use a period (.) and not a comma (,) and the computer must be set to the English (United States) language. For example, gas
constant should be written as Rgas = 355.4 (J / kg*K = m^2 / sec^2*K) and not Rgas = 355,4. The English (United States) language is set in the Control Panel by clicking Date, Time, Language and
Regional Options then Regional and Language Options and finally by selecting English (United States). If periods are not used in all inputs and outputs the results will not be correct.
Nozzle 2.7 and Nozzle 2.8 Features
1) Nozzle outputs nozzle shapes in X,Y format. First, the user must run the program or click Plot Shape to generate the points describing the nozzle. The user may output X,Y nozzle coordinates and
all axially varying nozzle parameters using the Save Data File As command. The data file created using Save Data File As has the .CSV extension to distinguish it from the imported shape file that has
the .TXT extension.
2) Nozzle shows up on the Status Bar. The program may be minimized, maximized or terminated using the window controls.
3) Nozzle can model ultra-small nozzle shapes. Very small nozzles use scientific notation while larger (Greater than .001 diameter) nozzles use standard output format.
4) Mass flow rate in kg/sec or lbm/sec added to the output.
Nozzle 2.9 Features
1) Fixed a few minor problems involving display of very small nozzle dimensions and output results.
Nozzle 3.0 Features
1) Fixed error in the computation of thrust and mass flow rate.
2) Fixed a few spelling errors.
Nozzle 3.1 Features
1) Fixed error in the computation of thrust and mass flow rate.
Nozzle 3.2 Error Fix
1) Fixed an error in Nozzle that manifested itself when analyzing nozzles with area ratios (Ae/At) greater than 60. Specifically, for larger area ratios and for pressure ratios (Pe/Pc) sufficient for
a shock to form in the diverging portion of the nozzle, Nozzle would incorrectly determine that the flow was sonic at the throat and subsonic everywhere else in the nozzle. The problem was that the
constant PCRIT was mis-dimensioned as a string variable when it should have been defined as a single-precision variable. This was an intermittent precision problem because sometimes the results were
correct and sometimes incorrect depending on the value of the pressure ratio and area ratio.
Nozzle 3.3 Features and Error Fixes
1) Added color contour plots for Mach number (Mn), Pressure ratio (P/P0), Temperature ratio (T/T0), and density ratio (R/R0).
2) Fixed a few FREE-FORM screen nozzle geometry errors. Nozzle would occasionally fail to analyze some FREE-FORM nozzle geometries when the ratio of specific heats was less than 1.4.
3) Cleaned up a few presentation errors and enhanced results display.
Nozzle 3.4 Features
1) Added the ability to specify the upstream radius and downstream radius on either side of the throat for Conical and Bell nozzle shapes.
Nozzle 3.5 Features (12/14/03)
1) Added the ability to interrupt the nozzle analysis or to Stop the nozzle analysis.
2) Improved the initial slope of the parabolic portion of the Bell nozzle shape.
Nozzle 3.6.2 Features and Error Fix (06/02/04)
1) Added the ability to determine underexpanded and overexpanded external flow in the vicinity of the nozzle-lip region.
2) Fixed confusion concerning Nozzle exit (back) pressure and Atmospheric pressure (Patm). These two quantities should always be identical, but confusion about these entries caused thrust to be
computed incorrectly. Now, the user enters only the Atmospheric (back) pressure. Previously, this entry did not accept pressures less than the optimal design condition (Pdesign) where no shocks are
present in the nozzle and the flow exhausts directly into the atmosphere. However, to allow for underexpanded and overexpanded nozzles this constraint needed to be lifted. Now, a small non-zero value
may be specified for the atmospheric pressure (Patm) corresponding to near-vacuum conditions.
Nozzle 3.6.3 Error Fix (02/22/05)
1) Under certain conditions when the maximum velocity exceeded Mach 7 in the diverging part of the nozzle, Nozzle would erroneously insert a shock wave. This condition has been fixed by increasing
the upper limit of the maximum exit velocity (Me) to Mach 20 which increases the upper limit of the area ratio (Ae/At) to over 15,000.
2) Under certain conditions when the user decided to Cancel a nozzle geometry Import, Nozzle would repeatedly provide an error message. The user would have to perform a CTRL-ALT-DEL to exit the program.
Nozzle 3.6.4 Features and Error Fix (07/25/05)
1) When displaying external flow contour plots for overexpanded nozzles the value of Mjet was inadvertently displaying the normal component of Mach number across the oblique shock emanating from the
nozzle lip. Instead, the total Mach number in the jet region behind the oblique shock wave should have been displayed.
2) Added a hybrid rocket motor propellant having the following fuel and oxidizer to the list of combustion gases: 85% Nitrous Oxide, 15% HTPB.
Nozzle 3.6.5, 3.6.6 Feature (10/31/05)
1) Added a plume analysis for supersonic, two-dimensional and underexpanded nozzles.
Nozzle 3.6.7 Features and Error Fix (11/26/06)
1) Included Nozzle_Examples.zip in the Nozzle directory which includes 34 nozzle examples used for validation purposes.
2) The gas Nitrogen Dioxide in the Gases pull-down menu should be labeled Nitrous Oxide (N2O). (Fixed)
3) Program terminated if attempting to read a misspelled or non-existent file using the Open Project command. (Fixed)
Nozzle 3.7.0.1 Features (12/12/08)
1) Added a Design Conditions routine for those who wish to quickly design subsonic or supersonic wind tunnels or more efficient every-day nozzles when no shocks are present in the diverging part of
the nozzle or in the exhaust jet. Design Conditions quickly plots color contours and flow properties versus axial location for Mach number (Mn), Pressure Ratio (Pc/P), Temperature Ratio (Tc/T) and
density Ratio (Rc/R). No changes were made to the main nozzle analysis.
Nozzle 3.7.0.2 Features (09/14/09)
1) Added Turbulent Circular and Turbulent 2-D Free Jet analysis capability to Nozzle 3.7 based on the theory presented in Viscous Fluid Flow by Frank M. White, starting on page 505.
2) For Nozzle 3.7, fixed all input data text boxes for 32 bit and 64 bit Windows Vista. When operating earlier versions of Nozzle 3.7 in Windows Vista the input data text boxes failed to show their
borders making it difficult to separate each input data field from adjacent input data fields.
Nozzle 3.7.0.3 Features (01/12/10)
1) In the Design Conditions routine increased the number of input digits from 3 to 6 digits after the decimal point. Nothing else has been modified.
Nozzle 3.7.0.4 Features (08/21/10)
1) This version displays Conical nozzle geometry in Computer Aided Design (CAD) formatted LINES and CIRCLES for generating imported shapes using the Imported shape option command. Accessed by
clicking File then CAD Input For Conical Shapes then SHOW NOZZLE CAD.
2) For overexpanded nozzles, the value for pressure ratio (Pa/P3), Mach number (M3) and oblique shock diameter at point-2 are inserted into the 2-D Plume Analysis and the Method of Characteristics
results are automatically displayed starting at the end of the external oblique shock wave pattern as illustrated in Figure-16. Previously, the plume analysis did not compute the expansion wave
pattern for overexpanded nozzles. For underexpanded nozzles the value for pressure ratio (Pa/Pe), exit Mach number (Me) and nozzle exit diameter (De) are inserted and plume results automatically
displayed from nozzle exit to several diameters downstream as in previous versions of Nozzle. Fixed a minor error in the computation of jet Mach number for overexpanded nozzles.
Nozzle 3.7.0.5 Features (11/28/11) NEW!
1) This version displays specific impulse (Isp) in place of Thrust Coefficient (CF) in the Results section. Exhaust Velocity (V[exhaust]) is also displayed, as illustrated in Figure-27, in length
units (meters/sec in this case) selected by the user.
For more information about Nozzle 3.7 please contact AeroRocket at aerocfd@aerorocket.com.
Generating Test Data: Part 1 - Generating Random Integers and Floats
Posted Monday, March 26, 2012 1:57 PM
Matt Miller (#4) (3/26/2012)
sknox (3/26/2012): For testing purposes (both scientific and software) pseudo-random numbers are preferable to truly random numbers*, because you want to see how the system responds to
the entire range of possible inputs. A truly random number source cannot be trusted to give you a representative sample.
* This is, of course, assuming that the pseudo-random number generator produces uniformly-distributed data. More on that in a bit.
That's a good point to bring up. A random distribution will create a uniform distribution across a range of data, but cannot on its own replicate any non-uniform data patterns. So if you're looking to find out if there's a normal distribution in your data (or any number of other patterns across the set), using random data may not be a good option.
This would be one of those big caveats in the "why would you need random data". The random set will allow you to test for behavior of a variety of inputs at the detail level, but won't help with testing the set as a whole.
Hmmmm... the constraints on range and domain aren't enough to satisfy this problem? Such constraints could actually form a "bell curve" (or whatever) using a CASE statement to
"weight" the outcome of the constrained random generator.
--Jeff Moden"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."
"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T."--22 Aug 2013
Helpful Links:
How to post code problems
How to post performance problems
Posted Monday, March 26, 2012 2:22 PM
Here is an alternate method that I use to generate the pseudo random numbers. The basic method is to take the right 7 bytes from the NEWID function and convert that to a BIGINT before applying the MODULUS operator. No need for the ABS function, since 7 bytes can only produce a positive BIGINT number.
if object_id('tempdb..#t','U') is not null begin drop table #t end

-- Generate 20,000,000 rows
select top 20000000
    NUMBER = identity(int,1,1)
into
    #t
from
    (select top 4473 * from master.dbo.syscolumns) a
cross join
    (select top 4473 * from master.dbo.syscolumns) b

-- Show distribution of rowcount around average of 40000
select
    RandomNo,
    Rows = count(*)
from
    (
    select
        -- Right 7 bytes of the NEWID as a positive BIGINT, modulus 500
        -- (expression reconstructed from the description above)
        RandomNo = convert(bigint, substring(convert(varbinary(16), newid()), 10, 7)) % 500
    from
        #t aa
    ) a
group by
    RandomNo
order by
    RandomNo

RandomNo             Rows
-------------------- -----------
(500 row(s) affected)
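The uniformity claim is easy to sanity-check outside SQL Server. The sketch below is only a Python analog of the T-SQL test above, not the article's code: random 56-bit integers stand in for the right 7 bytes of a NEWID, and 1,000,000 draws stand in for the 20,000,000 rows.

```python
import random
from collections import Counter

random.seed(20120326)  # arbitrary fixed seed for repeatability

N_ROWS, N_BUCKETS = 1_000_000, 500
counts = Counter(random.getrandbits(56) % N_BUCKETS for _ in range(N_ROWS))

average = N_ROWS / N_BUCKETS   # 2000 expected per bucket
spread = max(counts.values()) - min(counts.values())
```

Every bucket lands within about ten percent of the 2000-row average, mirroring the tight distribution around 40,000 in the T-SQL output.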
Posted Monday, March 26, 2012 3:05 PM
Jeff Moden (3/26/2012)
Matt Miller (#4) (3/26/2012)
sknox (3/26/2012)For testing purposes (both scientific and software) pseudo-random numbers are preferable to truly random numbers*, because you want to see how the system responds to
the entire range of possible inputs. A truly random number source cannot be trusted to give you a representative sample.
* This is, of course, assuming that the pseudo-random number generator produces uniformly-distributed data. More on that in a bit.
That's a good point to bring up. A random distribution will create a uniform distribution across a range of data, but cannot on its own replicate any non-uniform data patterns. So if you're looking to find out if there's a normal distribution in your data (or any number of other patterns across the set), using random data may not be a good option.
This would be one of those big caveats in the "why would you need random data". The random set will allow you to test for behavior of a variety of inputs at the detail level, but won't help with testing the set as a whole.
Hmmmm... the constraints on range and domain aren't enough to satisfy this problem? Such constraints could actually form a "bell curve" (or whatever) using a CASE statement to
"weight" the outcome of the constrained random generator.
That's kind of what I meant by the "on its own" comment. You can use the random data generator to pull in representative data in all allowed ranges, but you would need to play with
the frequency or weight based on how far away from the mean you happen to be. Assuming you have some knowledge of your data, you can shape your test data to match, using the random
set as a base.
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my
emergency again?
Posted Monday, March 26, 2012 3:07 PM
Posted Monday, March 26, 2012 3:29 PM
That's a good point to bring up. A random distribution will create a uniform distribution across a range of data
I heartily agree. It's been a long time since I studied statistical distributions but a basic understanding of them is hugely useful. It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever.
A common phenomenon is where a column might validly accept one of say 30 integers. The vast majority of the time people will record one of five values and the frequency of recording the others tapers off, with a few being used exceedingly rarely. If you were testing things like index cardinality and column statistics generation, I wonder whether you'd get more representative testing results if your test data could mimic the distribution of what you expected to occur in production.
One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important.
Bertrand Russell
Posted Monday, March 26, 2012 4:46 PM
Michael Valentine Jones (3/26/2012)
Here is an alternate method that I use to generate the pseudo random numbers. The basic method is to take the right 7 bytes from the NEWID function and convert that to a BIGINT before applying the MODULUS operator. No need for the ABS function, since 7 bytes can only produce a positive BIGINT number.
[code snipped]
Like I said in the article, the conversion to VARBINARY will slow things down and to no good end if you don't really need BIGINT for the random integer. If you really want BIGINT
capability (and I realize that wasn't one of your goals in your example), I believe you'd also have to convert the whole NEWID() to VARBINARY.
I also thought you were involved in some testing that showed the use of the square root of the final number of desired rows as a TOP for the self joined table in the Cross Join
really wasn't worth it.
The main point that I'm trying to make is that if it's too complicated, folks won't use it.
Posted Monday, March 26, 2012 4:48 PM
Matt Miller (#4) (3/26/2012)
That's kind of what I meant by the "on its own" comment. You can use the random data generator to pull in representative data in all allowed ranges, but you would need to play with the frequency or weight based on how far away from the mean you happen to be. Assuming you have some knowledge of your data, you can shape your test data to match, using the random set as a base.
Ah... understood. Thanks, Matt.
Posted Monday, March 26, 2012 4:51 PM
GPO (3/26/2012)
It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever. ... I wonder whether you'd get more representative testing results if your test data could mimic the distribution of what you expected to occur in production.
Hmmmm... maybe there needs to be a Part 4 to this series.
Posted Monday, March 26, 2012 4:54 PM
WayneS (3/26/2012):
Excellent article Jeff. Nice coincidence today... I went to the site to find how you did this, and here's the article explaining it all. Thanks for taking the time for this really great article that explains the how and why.

I know I said it before, but thank you for the time you spent helping with the review.
Posted Monday, March 26, 2012 6:43 PM
GPO said:
It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever.

The approach requires running the numbers from the uniform distribution through the inverse of the new distribution's cumulative probability function. This is not for the faint of heart. I've done it before (not in SQL) for a Weibull distribution.

This article shows how it can be done for a Gaussian distribution:
http://murison.alpheratz.net/Maple/GaussianDistribution/GaussianDistribution.pdf
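The inverse-transform idea described above can be sketched quickly outside of SQL. This is a minimal, illustrative Python version (not T-SQL, and not from the thread): uniform random numbers are pushed through an inverse cumulative distribution function to get exponentially and normally distributed test data.

```python
import math
import random
from statistics import NormalDist

def inverse_transform_samples(inv_cdf, n, rng):
    """Draw n samples by pushing uniform(0,1) draws through an inverse CDF."""
    return [inv_cdf(rng.random()) for _ in range(n)]

rng = random.Random(42)  # fixed seed so the sketch is repeatable

# Exponential decay with rate 1: the inverse CDF is -ln(1 - u).
expo = inverse_transform_samples(lambda u: -math.log(1.0 - u), 50_000, rng)

# Gaussian with mean 100, sd 15, using the standard library's inverse CDF.
gauss = inverse_transform_samples(NormalDist(100, 15).inv_cdf, 50_000, rng)

mean_expo = sum(expo) / len(expo)     # should be close to 1
mean_gauss = sum(gauss) / len(gauss)  # should be close to 100
```

With real test data you would apply the same idea to whatever skewed distribution you expect in production (Weibull, exponential, and so on), then load the generated values into the table under test.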
My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
Splitting strings based on patterns can be fast!
Irvine, CA Algebra 2 Tutor
Find an Irvine, CA Algebra 2 Tutor
I started out tutoring back in 2007 with a company known as SCORE! Educational Center. After quitting due to my move to Arizona for school, I started tutoring some students around campus.
27 Subjects: including algebra 2, English, grammar, geometry
...I'd like to make sure there is a confidence built in the student. I've worked with elementary, middle school, high school and college students. My main duties are to help students achieve a specific level of understanding and become more responsible. I am a Biology/Pharmaceutical Science major at UCI and have been a tutor in biology.
26 Subjects: including algebra 2, reading, calculus, trigonometry
...I am a member of the mathematics and physics fellowship at my university and am an active member with the youth in my community. I have tutoring experience in all of my listed subject areas. I
always try to begin teaching through exposure.
14 Subjects: including algebra 2, calculus, physics, algebra 1
...It brings me great pleasure to know that I have helped other students. I have recently finished my 3rd year of college, in which I was able to tutor my classmates and friends. My expertise and knowledge are in math, physics, and of course chemistry. My goal is to make sure that every student understands the lesson being taught.
13 Subjects: including algebra 2, chemistry, physics, geometry
...I have a PhD from the University of Wisconsin - Madison, an MBA and a MS in Mathematics. It is not important. What is important is that I am able to explain tough-to-understand topics in an easy-to-understand manner.
48 Subjects: including algebra 2, physics, geometry, statistics
Related Irvine, CA Tutors
Irvine, CA Accounting Tutors
Irvine, CA ACT Tutors
Irvine, CA Algebra Tutors
Irvine, CA Algebra 2 Tutors
Irvine, CA Calculus Tutors
Irvine, CA Geometry Tutors
Irvine, CA Math Tutors
Irvine, CA Prealgebra Tutors
Irvine, CA Precalculus Tutors
Irvine, CA SAT Tutors
Irvine, CA SAT Math Tutors
Irvine, CA Science Tutors
Irvine, CA Statistics Tutors
Irvine, CA Trigonometry Tutors
Optimization Problem
April 8th 2008, 04:31 PM #1
The question is:
A gable window has the form of a rectangle topped by an equilateral triangle, the sides of which are equal to the width of the rectangle. Find the maximum area of the window if the perimeter is 600 cm.
What I have done so far is that I have solved for y, where $y = 300 - \frac{3}{2}x$. I have also solved the triangle on top of the rectangle, where $h = x\sqrt{3}$ (that is, $x$ times $\sqrt{3}$).
I am just not sure what to do from there.
Thank you in advance!
$600=3x+2y$... and you have that $A=xy+x^2$... now solve for a variable, input it into the area equation, differentiate, and find the max.
ok, so i differentiated and i got $x=\frac{600}{3+\sqrt{3}}$ which equals 126.8. So i subbed that into the total area equation, which is $A=300x - 3x^2 + \frac{\sqrt{3}\,x^2}{x}$.
I get $A(126.8) = 20884.7\ \text{cm}^2$ but the answer is $2.1\ \text{m}^2$
You must have made a mistake
$600=3x+2y$... so $y=\frac{-3(x-200)}{2}$... inputting that into your area equation you get $A=x\cdot\bigg(\frac{-3(x-200)}{2}\bigg)$... differentiating we get $A'=300-3x$, which equals zero at $x=100$... now to check if it's a max we find the second derivative $A''=-3$... which is always negative... therefore $x=100$ is a max... substitute it into your perimeter equation and get $y$.
I'm sorry, but I have been stuck on the same question for almost 2 hrs now. Can you please expand the differentiation?
Of course you can, this is the point of the site, my friend. $A=x\cdot\frac{-3(x-200)}{2}=x\cdot\frac{-3x+600}{2}=\frac{-3x^2}{2}+300x$... so using basic rules of differentiation we get that $A'=2\cdot\frac{-3x^{2-1}}{2}+300x^{1-1}=-3x+300x^0=-3x+300=300-3x=3(100-x)$
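For anyone checking the thread numerically (my own sketch, not posted by anyone above): if the equilateral triangle's area $\frac{\sqrt{3}}{4}x^2$ is included along with the rectangle, a brute-force scan of the constrained area function lands on roughly $2.1\ \text{m}^2$, which matches the book answer quoted earlier.

```python
import math

# Perimeter constraint: 600 = 3x + 2y, so y = (600 - 3x) / 2 (in cm).
# Total area = rectangle + equilateral triangle of side x:
#   A(x) = x * y + (sqrt(3) / 4) * x**2
def area(x):
    y = (600 - 3 * x) / 2
    return x * y + (math.sqrt(3) / 4) * x ** 2

# Grid scan over the feasible range (y >= 0 requires x <= 200).
xs = [i / 100 for i in range(1, 20000)]
best_x = max(xs, key=area)
best_area_m2 = area(best_x) / 10_000  # 1 m^2 = 10,000 cm^2
```

The maximum falls near $x \approx 140.6$ cm, so the 2.1 m² answer appears to come from counting the triangle's area as well.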
Submitted by Anonymous on July 12, 2011.
The article states:
"The body exerts a huge gravitational pull, and to escape its clutches you'd need to accelerate away from it very hard indeed. Now if the body is so dense that escape becomes impossible for everything including light, then the resulting object is called a black hole."
The speed of light is a universal constant, and it does not accelerate. But according to this article, acceleration is needed to escape the body's gravitational pull. Am I missing something here?
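One way to make the commenter's point concrete (my own illustration, not from the article): light never accelerates, so the black-hole condition is usually phrased as the radius at which the Newtonian escape speed sqrt(2*G*M/r) would reach c. A rough sketch of that Schwarzschild radius, r_s = 2*G*M/c**2:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius at which the classical escape speed sqrt(2*G*M/r) equals c."""
    return 2 * G * mass_kg / c ** 2

r_earth = schwarzschild_radius(5.972e24)  # Earth's mass: about 9 mm
r_sun = schwarzschild_radius(1.989e30)    # Sun's mass: about 3 km
```

Escape in the everyday sense still involves acceleration against gravity; the escape-velocity statement is a convenient energy argument, and a full treatment of light near a black hole needs general relativity rather than this Newtonian shortcut.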
All four kids have the same birthday
by Sharon Hill • 7 Comments
Family have four children with same birthday at odds of more than 133,000 to one – Telegraph.
Parents Emily Scrugham and Peter Dunn were amazed when their newborn son arrived on January 12, the exact same date as his three older brothers and sisters.
The couple from Cleator Moor, Cumbria, beat odds of a staggering 1/133225 in producing a quadruple birthday for their four children.
The fortuitous date was not planned as the twins were taken in an emergency operation and the other two did not have this as their due date. They were either early or late. Of course the twins were
born on the same day so that makes only THREE birthdates. I’m thinking that reduces the odds a bit. Sure, it’s weird, but I suspect there is something about April as an amorous month for this couple
which lends itself slightly towards January births. Anyway, I think it’s kind of neat (maybe not for the kids though).
7 comments for “All four kids have the same birthday”
My brother, his son and my younger son also have the same birthday. It is in early September, so they all have had years when the first day of school coincided with their birthdays. Weird, but it
The odds are a bit off — given 4 random people, the odds that all 4 have the same birthday is (approximately) 1 out of 48 million (the odds given are the odds that 3 people have the same
birthday, not 4). In any case, that’s also a bit misleading, because those are the odds of 3 specific people. The odds that there exists a family with 3 (or in this example, 4) children have the
same birthday are significantly better. The larger your data set, the greater the odds.
Yes. I thought the odds looked a bit on the low side but wasn’t sure how to work it out.
Just a note on due dates. As I understood (being father of 1 plus 2×1, the idea of a due date is a bit misleading. The gestation period is not a fixed number of days, but an approximation with a
reasonable margin of error being plus or minus two weeks from conception. Would this not make the odds hard to work out?
Four words they’ll never hear their parents say in all seriousness “When’s your birthday, again?”
Even if the odds of this happening were 48 million to one, then given the number of families there are in the world, the chances that you would find such a family are close to a certainty. Not
It wouldn’t be a certainty, but it’s not surprising either. I’m not as good at math as I used to be — its been decades since I’ve had to know this stuff.
I (vaguely) remember a math seminar I attended in college. The speaker had done research on gambling, and his main observance was that in reality streaks (ie. coincidences) happen FAR more
often than people expect them to. Pretty much, the whole gambling industry depends on our false expectations. The same concept, though, applies to other forms of coincidences, such as people
sharing the same birthday. The reason why they happen is that the world is big and you have a wide selection of data to find those coincidences in. To the family where it occurs, of course
it’s amazing and completely surprising. But that such a family exists is less so.
My girlfriend and her sister have the same birthday, March 15th at one year apart. And my girlfriend's two daughters also have the same birthday, Dec 10th about five years apart.
Design and optimization of LC oscillators
- Operations Research, 2005
Cited by 27 (7 self)
informs ® doi 10.1287/opre.1050.0254 © 2005 INFORMS This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized geometric
program (GGP), which can be transformed to a convex optimization problem and then very efficiently solved. We start with a basic gate scaling problem, with delay modeled as a simple
resistor-capacitor (RC) time constant, and then add various layers of complexity and modeling accuracy, such as accounting for differing signal fall and rise times, and the effects of signal
transition times. We then consider more complex formulations such as robust design over corners, multimode design, statistical design, and problems in which threshold and power supply voltage are
also variables to be chosen. Finally, we look at the detailed design of gates and interconnect wires, again using a formulation that is compatible with GP or GGP.
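As a toy illustration of why the RC gate-scaling problem is GP-friendly (my own sketch with made-up coefficients, not the paper's formulation or a real GP solver): scaling a gate by a factor x lowers its drive resistance like 1/x but raises the capacitive load it presents like x, so a single stage's delay is the posynomial a/x + b*x, whose minimum has the closed form x* = sqrt(a/b).

```python
import math

# Illustrative posynomial delay for one scaled gate: a/x models the gate
# driving a fixed load, b*x models the load the gate presents upstream.
a, b = 4.0, 1.0  # made-up coefficients

def stage_delay(x):
    return a / x + b * x  # posynomial in the scale factor x > 0

analytic_opt = math.sqrt(a / b)  # stationary point of a/x + b*x

# Crude grid minimization to confirm the closed form.
xs = [i / 1000 for i in range(1, 10001)]
numeric_opt = min(xs, key=stage_delay)
```

Real GP sizing couples thousands of such posynomial delay terms along timing paths and hands the log-transformed (convex) problem to an interior-point solver; the point here is only that each term has the monotone-tradeoff shape that makes that transformation work.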
- IEEE/ACM ICCAD, 2004
Cited by 19 (9 self)
In this paper we propose a RObust Analog Design tool (ROAD) for post-tuning analog/RF circuits. Starting from an initial design derived from hand analysis or analog circuit synthesis based on
simplified models, ROAD extracts accurate posynomial performance models via transistor-level simulation and optimizes the circuit by geometric programming. Importantly, ROAD sets up all design
constraints to include large-scale process variations to facilitate the tradeoff between yield and performance. A novel convex formulation of the robust design problem is utilized to improve the
optimization efficiency and to produce a solution that is superior to other local tuning methods. In addition, a novel projection-based approach for posynomial fitting is used to facilitate scaling
to large problem sizes. A new implicit power iteration algorithm is proposed to find the optimal projection space and extract the posynomial coefficients with robust convergence. The efficacy of ROAD
is demonstrated on several circuit examples. 1.
Cited by 1 (1 self)
In this paper we give a brief overview of a heuristic method for approximately solving a statistical digital circuit sizing problem, by reducing it to a related deterministic sizing problem that
includes extra margins in each of the gate delays to account for the variation. Since the method is based on solving a deterministic sizing problem, it readily handles large-scale problems. Numerical
experiments show that the resulting designs are often substantially better than one in which the variation in delay is ignored, and often quite close to the global optimum. Moreover, the designs seem
to be good despite the simplicity of the statistical model (which ignores gate distribution shape, correlations, and so on). We illustrate the method on a 32-bit Ladner-Fischer adder, with a simple
resistor-capacitor (RC) delay model, and a Pelgrom model of delay variation.
, 2006
Small models that represent the overall functioning of portions of large or complex circuits can be generated by algorithms and used for system design and verification.
, 2008
Design of modern mixed signal integrated circuits is becoming increasingly difficult. Continued MOSFET scaling is approaching the global power dissipation limits while increasing transistor
variability, thus requiring careful allocation of power and area resources to achieve increasingly aggressive performance specifications. In this tightly constrained environment, the traditional iterative system-to-circuit redesign loop is becoming inefficient. With complex system architectures and circuit specifications approaching the technological limits of the process employed, the
designers have less room to margin for the overhead of strict system and circuit design interdependencies. Severely constrained modern mixed IC design can take many iterations to converge in such a
design flow. This is an expensive and time consuming process. The situation is particularly acute in high-speed links. As an important building block of many systems (high speed I/O, on-chip
communication,...) power efficiency and area footprint are of utmost importance. Design of these systems is challenging in both system and circuit domain. On one hand system architectures are
Mixed-variable integrating factor
$(y^{4}-4xy)\,dx+(2xy^{3}-3x^{2})\,dy=0$: prove that this equation has an $xy$-dependent integrating factor.
Show that there exist constants $m, n$ that verify $\displaystyle\frac{\partial M}{\partial y}-\frac{\partial N}{\partial x}=m\frac{N}{x}-n\frac{M}{y}.$
Why? If the left side equals zero then the equation is exact (we have a potential function) and could be solved without multiplying by an integrating factor. What does the right side say? What does this equation say about the integrating factor?
The condition I gave you gives you the integrating factor: if those constants exist, then the integrating factor is $u(x,y)=x^my^n,$ but in your question you were asked to show that your equation has an integrating factor which depends on $xy,$ so that will force $m=n$ in order to have $u(x,y)=h(xy).$
How did you get this formula? So if $u(x,y)=xy$ then $m=n=1$: $4y^3-4x-(2y^3-6x)=m\frac{2xy^3-3x^2}{x}-n\frac{y^4-4xy}{y}$. What do I do now?
$4y^3-4x-(2y^3-6x)=n(2xy^4-3x^2y-y^4x-4x^2y)$ $\frac{4y^3-4x-(2y^3-6x)}{(2xy^4-3x^2y-y^4x-4x^2y)}=n$. I can't get a number out of it.
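For what it's worth (my own check, not from the thread): matching the $y^3$ and $x$ coefficients in the condition above gives $m=2,\ n=2$, so $u(x,y)=x^2y^2=(xy)^2$, which indeed depends only on $xy$. A quick numerical exactness check in Python:

```python
# Verify that u*M dx + u*N dy is exact for u = (x*y)**2, i.e. that
# d(u*M)/dy == d(u*N)/dx, using central finite differences.
def M(x, y):
    return y ** 4 - 4 * x * y

def N(x, y):
    return 2 * x * y ** 3 - 3 * x ** 2

def u(x, y):
    return (x * y) ** 2

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

x0, y0 = 1.3, 0.7
lhs = d_dy(lambda x, y: u(x, y) * M(x, y), x0, y0)
rhs = d_dx(lambda x, y: u(x, y) * N(x, y), x0, y0)

# The condition M_y - N_x = m*N/x - n*M/y also holds with m = n = 2:
cond_lhs = (4 * y0 ** 3 - 4 * x0) - (2 * y0 ** 3 - 6 * x0)
cond_rhs = 2 * N(x0, y0) / x0 - 2 * M(x0, y0) / y0
```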
Homework Help
Posted by tracy on Sunday, June 23, 2013 at 2:08pm.
Consider the following ANOVA experiments. (Give your answers correct to two decimal places.)
(a) Determine the critical region and critical value that are used in the classical approach for testing the null hypothesis Ho: μ1 = μ2 = μ3 = μ4, with n = 19 and α = 0.01.
(b) Determine the critical region and critical value that are used in the classical approach for testing the null hypothesis Ho: μ1 = μ2 = μ3 = μ4 = μ5, with n = 17 and α = 0.05.
(c) Determine the critical region and critical value that are used in the classical approach for testing the null hypothesis Ho: μ1 = μ2 = μ3, with n = 19 and α = 0.05.
• statistics - tracy, Sunday, June 23, 2013 at 2:40pm
These are the answers I got and they were wrong: F ≥ 3.20 for part (a), and for part (b) I got F ≥ 3.41. But they are both wrong; I don't think I am setting them up right. Any help?
• statistics - MathGuru, Sunday, June 23, 2013 at 7:00pm
You will need to determine "degrees of freedom between" and "degrees of freedom within" before checking an ANOVA table using alpha level.
To calculate df between (using a generic example with k = 3 levels):
k - 1 = 3 - 1 = 2
Note: k = number of levels.
To calculate df within (with N = 15 total values):
N - k = 15 - 3 = 12
Note: N = total number of values in all levels.
Let's do part a) and see if you can work out the rest.
a) You have 4 levels. Your sample size is 19. Your alpha level is 0.01.
df between = k - 1 = 4 - 1 = 3
df within = N - k = 19 - 4 = 15
Checking the table at the 0.01 alpha level with the above degrees of freedom, I see a critical value of 5.42.
I'll let you try the rest.
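That table lookup can also be reproduced in code. The sketch below (my own, standard library only) evaluates the F distribution's CDF through the regularized incomplete beta function and bisects for the upper-tail critical value; with df = (3, 15) and alpha = 0.01 it lands on the 5.42 quoted above, and it gives the (b) and (c) values as well.

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12, tiny=1e-300):
    """Continued fraction for the incomplete beta function (Lentz's method)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < tiny:
        d = tiny
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        delta = 1.0
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < tiny:
                d = tiny
            c = 1.0 + aa / c
            if abs(c) < tiny:
                c = tiny
            d = 1.0 / d
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_cdf(x, df1, df2):
    """CDF of the F distribution with (df1, df2) degrees of freedom."""
    return reg_inc_beta(df1 / 2.0, df2 / 2.0, df1 * x / (df1 * x + df2))

def f_critical(alpha, df1, df2):
    """Upper-tail critical value: x such that P(F > x) = alpha (bisection)."""
    lo, hi = 0.0, 1000.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if f_cdf(mid, df1, df2) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

crit_a = f_critical(0.01, 3, 15)  # part (a): k = 4, N = 19 -> df (3, 15)
crit_b = f_critical(0.05, 4, 12)  # part (b): k = 5, N = 17 -> df (4, 12)
crit_c = f_critical(0.05, 2, 16)  # part (c): k = 3, N = 19 -> df (2, 16)
```

In practice one would simply call something like scipy.stats.f.ppf(1 - alpha, df1, df2); the hand-rolled version is only meant to show what the table encodes.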
• statistics - tracy12, Sunday, June 23, 2013 at 9:24pm
Can you give me a site for the correct values? I must not be looking at the same one, or I am looking at the wrong ones, because I cannot get the numbers to match the one you did. I do appreciate all your help.
• statistics - tracy, Monday, June 24, 2013 at 7:12pm
So this is what I came up with on (b): 6.93, and (c): 6.48, and those were wrong. I think I am missing them at the beginning when k-1=?-1 and n-k=?-?. Which graph or table are you getting these from? I figured out the other one to come up with the answers but I must have missed the first part.
[Numpy-discussion] inversion of large matrices
David Warde-Farley dwf@cs.toronto....
Tue Aug 31 13:02:45 CDT 2010
On 2010-08-30, at 10:36 PM, Charles R Harris wrote:
> I don't see what the connection with the determinant is. The log determinant will be calculated using the ordinary LU decomposition as that works for more general matrices.
I think he means that if he needs both the determinant and to solve the system, it might be more efficient to do the SVD, obtain the determinant from the diagonal values, and obtain the solution by multiplying by U D^-1 V^T?
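To make the LU-versus-SVD point concrete, here is a small pure-Python sketch (my own, not from the thread; in practice one would use numpy.linalg.slogdet): Gaussian elimination with partial pivoting produces the determinant, and hence a log-determinant, as the signed product of the pivots, as a by-product of the same factorization used to solve the system.

```python
import math

def lu_logabsdet(A):
    """Gaussian elimination with partial pivoting; returns (sign, log|det|)."""
    n = len(A)
    U = [row[:] for row in A]  # work on a copy
    sign = 1.0
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            sign = -sign
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    logabs = sum(math.log(abs(U[i][i])) for i in range(n))
    sign *= math.prod(1.0 if U[i][i] > 0 else -1.0 for i in range(n))
    return sign, logabs

A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
sign, logabs = lu_logabsdet(A)
det = sign * math.exp(logabs)  # should match the direct 3x3 determinant, 44
```

The SVD route would instead sum the logs of the singular values (losing the sign), which is why it only pays off when the decomposition is wanted for other reasons as well.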
Using Supercomputers to Determine Bridge Loads
by Kornel Kerenyi, Tanju Sofu, and Junke Guo
FHWA and the Argonne National Laboratory are using multidimensional programs to study hydrodynamic forces on flooded bridge decks.
Computer modeling can enable engineers to design bridges that are sturdier against floodwaters. This section of I-80 in Iowa was inundated after heavy rains in June 2008, perhaps relieving pressure
on the Cedar River bridges in the background.
Bridges are a vital component of the Nation's transportation network. Evaluating their stability and structural response after flooding is critical to highway safety. When a bridge crossing a
waterway is partially or entirely submerged during a flood, the water can exert significant loading on the deck and threaten its integrity. Being able to estimate hydrodynamic loading accurately can
help designers and transportation agencies build better, stronger bridges.
Accurately modeling turbulence and sediment transport is essential for estimating the impact of scour on bridges. Researchers typically address problems in science and engineering through two
complementary approaches: experimental and analytical (or theoretical). In many applications, such as the fluid mechanics of streams and the impacts on bridges, the governing equations are nonlinear
and, except in special circumstances, analytical solutions are not available. In addition, fluid mechanics applications often are multidimensional in nature and time-dependent, which further
complicates attempts to understand and model real-life turbulence and scour conditions.
Pursuing a third approach, researchers at the Federal Highway Administration (FHWA) and the new Transportation Research and Analysis Computing Center (TRACC) in West Chicago, IL, a partnership of the
U.S. Department of Transportation (USDOT) and the Argonne National Laboratory, are using supercomputers with multidimensional hydraulics programs to closely mimic real-life conditions. The
researchers verify the computers' output using flume models at FHWA's Turner-Fairbank Highway Research Center (TFHRC) in McLean, VA, indicating that computers might play an even more valuable role in
bridge hydraulics and safety.
"As computer technology improves and the ability to accurately model turbulence and sediment transport becomes more robust, transportation agencies will benefit in both the planning and design of
bridges through improved hydraulic and scour estimation," says Kevin Flora, hydraulics branch chief for structure maintenance and investigation at the California Department of Transportation
The Third Approach
Effective use of prototypic experiments is a key approach to understanding real-life phenomena. For many fluid dynamics applications, such as those associated with bridge hydraulics, full-scale tests
are not possible. Researchers use smaller scale, and perhaps simplified, representations of the physical configuration, and they extrapolate the results to apply to actual conditions. Some
uncertainty remains in this extrapolation, however, related to the use of simplified experiments to predict the behavior of complex physical systems.
Computational fluid dynamics (CFD) attempts to address these issues and complement the experimental and analytical approaches through numerical solutions. CFD is a branch of fluid mechanics that uses
numerical methods and algorithms to analyze and solve problems involving fluid flows, such as water. Researchers use computers to perform the millions of calculations necessary to simulate the
interaction of fluids with the complex surfaces involved in engineering. CFD enables scientists and engineers to perform numerical simulations in the laboratory and significantly reduce the amount of
experimentation and overall cost. CFD is a highly interdisciplinary research area at the interface of physics, applied mathematics, and computer science.
CFD calculations are numerical solutions to the underlying equations that represent the flow of fluids. As a result, a computational grid (a mesh of cells that discretizes the flow domain) can approximate the true geometry of a given physical condition. In the CFD approach, researchers can include actual observed boundary conditions that might be impossible to represent in a laboratory
experiment and perform parametric studies on material properties and physical conditions that might be expensive or time-consuming to perform experimentally.
Although CFD can complement laboratory experiments and theoretical approaches, generating accurate numerical solutions requires establishing fine-mesh computational grids to represent the actual
geometry of the problem. Also, in time-dependent cases, the accuracy and numerical stability of the solutions often require small time steps to capture the influence of temporal variations in the
flow field.
Computing solutions on these grids can require many calculations, so the use of computers is essential. Development of powerful supercomputers, including massively parallel computers (which contain
hundreds of processors sharing a common hardware platform), and adaptation of the CFD techniques for use with these supercomputers in recent years is enabling engineers and scientists to perform
these complex CFD calculations.
"By comparing the computed results to some experimental information or to analytical solutions under simplified conditions, analysts can verify and validate the numerical approach," explains TRACC
Director David P. Weber. "With verified and validated results, analysts can then feel much more confident in performing calculations on more complex representations of the true physical conditions.
Thus, the three approaches of experiment, analysis, and computation are complementary."
Establishing TRACC
The Argonne National Laboratory is leading the initiative to establish a high-performance computing center, TRACC, to pursue transportation research and development (R&D) programs. Argonne analysts
provide technical support to researchers at TFHRC on CFD simulations.
This photo shows an experimental model of a typical U.S. highway bridge deck partially inundated in a tilting flume at FHWA's J. Sterling Jones Hydraulics Research Laboratory.
The overall objective of this effort is to establish validated computational practices to address the transportation community's research needs in bridge hydraulics. Traditionally, the bridge
hydraulics work relies on scaled experiments to provide measurements for flow field, which is the velocity and turbulence of a fluid as functions of position and time. Now, however, parallel
computers and commercially available software provide an opportunity to shift the focus of these evaluations to the CFD domain. After being validated using the data from a limited set of experiments,
high-fidelity CFD simulations can be used to expand the range and scope of parametric studies. The CFD simulations also can be used to predict the effects of scaling by studying differences between
the reduced-scale experiments and full-scale bridges.
Most recently, reduced-scale experiments conducted at TFHRC's J. Sterling Jones Hydraulics Research Laboratory established the foundations of a CFD-based simulation methodology. Researchers at
Argonne and TFHRC worked together to study CFD techniques for simulating open-channel flow around inundated bridges.
"The use of supercomputers and CFD code can potentially overcome the inherent problems of modeling sediment in the laboratory," says Flora. "Once physically based models of sediment transport and
scour are developed, computer simulations will provide engineers with new insights into better design and mitigate for scour at structures."
Research Considerations
The Argonne-TFHRC research included analysis of lift forces (FL), or simply lift, which are produced perpendicular to the flow of a fluid; drag forces (FD), which are exerted on objects in the path of
the fluid; and moments (M), the tendency to cause rotation about a point or axis, on inundated bridge decks under various flow configurations. The two research teams investigated the applicability of
commercial CFD software for predicting the flow field and evaluating lift, drag, and moments. They compared the results with laboratory data obtained using an ultraprecise force balance (a
mounting device that measures hydrodynamic forces on scale-model bridge decks) to measure lift, drag, and moments for various inundation ratios and flow rates.
This definition sketch shows a cross-section of the model six-girder bridge deck tested during the study. It defines the direction of the hydrodynamic forces and flow acting on the flooded bridge
deck model. The sketch also shows the point of application of forces, that is, where the moments act. The three boxes below the bridge deck define the Froude number, Reynolds number, and the
inundation ratio used in the design charts.
Source: FHWA
They also used Particle Image Velocimetry (PIV) to study flow fields around submerged model bridge decks. PIV is a noninvasive measurement technique to visualize flow distributions. The technique
involves adding microscopically small, highly reflective particles to the flow and using a laser to illuminate a thin layer of the flow, so only the particles in that light sheet reflect the laser's
light. Using cameras pointed at different angles toward the light sheet, researchers capture images and then use an algorithm based on statistical probability to determine the speed and direction of the
moving particles. PIV-related tasks included the following:
• Assessing applicable phenomena for open-channel flow and identifying appropriate models — free surface, two-dimensional (2-D) versus three-dimensional (3-D) to include the effects of channel
wall, appropriate flow profile at the inlet, and roughness at the bottom surface
• Studying the sensitivity of the CFD solutions to the grid structure and mesh density (number of computational cells per unit area)
• Identifying the most suitable turbulence model in terms of accuracy and computational requirements
The researchers also adopted numerical simulations using two turbulence models, the high-Reynolds-number k-ε model and large eddy simulation (LES), to resolve unsteady turbulent flow. Turbulent
flow is an irregular condition in which the flow particles show a random variation with time and space coordinates. The k-ε model simulates transport of both the turbulent kinetic energy (k)
and the turbulent energy dissipation rate (ε). The rationale for using LES is to simulate the larger and more easily resolvable scales of the motions while accepting that LES will not represent
the smaller scales accurately.
The researchers examined the agreement between experimental data and the results of CFD simulations for a typical U.S. highway girder deck at Froude numbers ranging from 0.12 to 0.40, which
characterize the open-channel flow. The Froude number is a dimensionless ratio comparing inertial and gravitational forces: if the Froude number is less than 1, the flow is subcritical, and if it is
greater than 1, the flow is supercritical. The TRACC researchers used STAR-CD (software for modeling fluid dynamics), while the TFHRC team used CFD software from Fluent, Inc., in their simulations.
The researchers integrated the forces over the surface of the bridge deck along the flow direction and perpendicular to it, obtaining the drag and lift, respectively, together with the moments. When
calculating the lift over the bridge deck, however, they excluded the component of buoyancy, because they were interested only in hydrodynamic loading, not static loading.
Advances in CFD research provide the basis for modeling the multiphase nature of open-channel flow that consists of air and water separated by a free surface. In multiphase flows, each phase has
individually defined physical properties and flow fields. In both the STAR-CD and Fluent approaches, the researchers had three multiphase flow models available. The first was the volume of fluid
(VOF) model, which can model two or more nonmixable fluids by solving a single set of continuum equations and tracking the volume fraction of each of the component fluids throughout the domain.
Applications of the VOF model include stratified flows, free-surface flows, filling, sloshing, the motion of large bubbles in a liquid, the motion of liquid after a dam break, the prediction of jet
breakup (surface tension), and the steady or transient tracking of any liquid-gas interface.
In the VOF model, in addition to the conservation equations for mass and momentum, one has to introduce the volume fraction of each phase in each computational cell. In each control volume, the volume
fractions of all phases sum to unity. The fields for all flow variables are shared by the phases, as they represent volume-averaged values. Thus, the variables and properties in any given cell are
representative of one of the phases, or a mixture of phases, depending upon the volume fraction values.
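The volume-fraction bookkeeping described above can be illustrated with a tiny sketch. The numbers are illustrative only; in a real VOF solver this averaging happens per cell inside the flow solver.

```python
def cell_average(volume_fractions, phase_values):
    # Volume-averaged property in one cell: sum of (fraction * phase value).
    # The fractions of all phases in a control volume must sum to unity.
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "fractions must sum to unity"
    return sum(a * v for a, v in zip(volume_fractions, phase_values))

# A cell cut by the free surface: 70% water (1000 kg/m^3), 30% air (1.2 kg/m^3).
rho_cell = cell_average([0.7, 0.3], [1000.0, 1.2])   # ~700.36 kg/m^3
```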
Second was the mixture model, in which the flow consists of a continuous phase and one or more dispersed phases. Although it is computationally efficient, this Lagrangian approach is suitable only
for dispersed flows and cannot be used to accurately resolve the stratified nature of free-surface flows. The third was the Eulerian model, which is a multiphase model in which the phases are treated
as interpenetrating continua coexisting in the flow domain. The Eulerian model provides a general framework for both dispersed and stratified multiphase flows. However, the free surface calculated
with this approach is usually less sharp in comparison with the VOF method, making the latter the preferred model. Therefore, the researchers are testing the VOF computational method with both
STAR-CD and Fluent to capture the impact of free surface reasonably accurately and efficiently.
This photo shows a submerged six-girder bridge deck model using PIV, a nonintrusive method to measure flow velocities, to visualize the flow field around the deck. A powerful laser light illuminates
PIV tracer particles, and high-speed cameras track their displacements, which then are converted into velocities.
Experimental Setup at TFHRC
The TFHRC researchers conducted the laboratory experiments using a 12.8-meter (42-foot)-long, 0.4-meter (1.3-foot)-wide, and 0.5-meter (1.6-foot)-high Plexiglas® rectangular flume. They set the flume
horizontally and controlled the depth of flow with an automatic adjustable tailgate located at the downstream end. The team used a 0.054-cubic-meter (2.0-cubic-feet)-per-second pumping system to
supply the flow. They measured the water surface level using ultrasonic sensors at two cross sections along the flume. An electromagnetic flow meter measured the discharge. They used an Acoustic
Doppler Velocimeter probe to measure the velocity distribution (speed of the water) of the flow.
The researchers constructed a six-girder bridge deck shape, which is typical of U.S. highways, using yellow Plexiglas and adopting a geometric reduction scale of 1:40 based on the available depths,
maximum discharges (flow rates), and inundations (submergence levels). This scaled bridge deck is well suited to producing values ranging from low to high flows, all at subcritical Froude numbers
(less than 1) in the upstream flow.
To measure forces induced by the bridge deck model in the flow direction (drag) and simultaneously perpendicular to the flow direction (lift) using electric strain gauges, the research team
constructed an ultraprecise force balance, which can capture even small forces. They mounted the bridge deck model between brackets attached to a measuring platform. To test different submergence
ratios, they mounted the platform and model flexibly to allow vertical positioning of the bridge deck. The flexibility was important for performing testing at various submergence levels of the bridge
deck. They conducted the experiments for water approach velocities ranging from 0.25 to 0.50 meter (0.8 foot to 1.6 feet) per second. They kept the flow depth for all experiments constant at 0.25 meter
(0.8 foot). They varied the Froude number within a range of 0.16 to 0.32 and the submergence of the bridge deck from slight submergence of the girders to complete overtopping of the bridge deck
(inundation ratio h* = 0.29 to 3.2).
In addition to the force balance experiments, the researchers used the PIV technique to visualize and measure flow fields for the submerged bridge deck model. The PIV technique is an optical flow
diagnostic based on the refraction and scattering of light in nonhomogeneous media. The fluid motion is made visible by tracking the locations of small tracer particles at two instants in time. The
researchers then use the particle displacement as a function of time to infer the velocity field. They analyzed the data in a format that could be compared with the CFD modeling.
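The displacement-to-velocity step is simple enough to sketch. This is a schematic illustration of the inference only, not the statistical correlation algorithm actual PIV software uses.

```python
def piv_velocity(pos_t0, pos_t1, dt):
    # Velocity vector inferred from a tracer particle's positions at two instants.
    return tuple((b - a) / dt for a, b in zip(pos_t0, pos_t1))

# A particle that moves 0.5 mm streamwise between frames 1 ms apart
# implies a local streamwise velocity of 0.5 m/s.
vx, vy = piv_velocity((0.0, 0.0), (0.0005, 0.0), 0.001)
```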
TRACC Computing Facility
TRACC is a general purpose advanced computing and visualization facility available to the transportation community for a broad spectrum of applications. Discussions among staff at USDOT's Research
and Innovative Technology Administration and FHWA identified specific initial applications and technologies for assigning the highest priority for research and development (R&D) and user support.
The TRACC components include high-performance computing, visualization, and networking systems. To take advantage of Argonne's extensive experience in acquiring and operating similar facilities,
TRACC acquired the system components and then set them up in dedicated facilities at the DuPage National Technology Park near the DuPage Airport in Illinois.
The TRACC computational cluster is a 512-core, customized LS-1 system from Linux Networx that comprises 128 computational nodes, each with two dual-core Advanced Micro Devices, Inc. Opteron™ 2216
central processing units and 4 gigabytes of random access memory; a DataDirect™ Networks storage system consisting of 240 terabytes of shared RAID (Redundant Array of Independent Disks) storage that
is expandable to 750 terabytes; a high-bandwidth, low-latency InfiniBand network for internode computations; and a high-bandwidth Gigabit Ethernet management network.
TRACC also provides scientific visualization capabilities through the National Center for Supercomputing Applications' Technology, Research, Education, and Commercialization Center at the same
location. TRACC meets the needs for visualization of multidimensional data via a high-performance graphics cluster linked with a 15-panel liquid crystal display (LCD) tiled display and a portal
optimized for visual simulation and high-speed broadband connectivity.
Test Results
The researchers conducted the CFD simulations using models of an 8.0-meter (26.2-foot)-long, 0.34-meter (1.1-foot)-wide, 0.5-meter (1.6-foot)-high rectangular channel. The study simulated both 2-D
and 3-D models. The researchers placed the bridge deck 4.4 meters (14.4 feet) downstream of the inlet of the channel to obtain a fully developed flow upstream. They situated the outlet far away from
the deck to avoid the influence of the outlet boundary on the flow field around it. The team conducted the simulations using Fluent and STAR-CD for approach velocities ranging from 0.25 meter (0.8
foot) to 0.50 meter (1.6 feet) per second. The Froude number varied within the range of 0.16 to 0.32, and the team ran the simulations for various inundation ratios. The researchers modeled the flow
domain using either tetrahedral cells (in Fluent) or hexahedral cells (in STAR-CD), both with gradually refined mesh structure in the vicinity of the bridge deck. (The gradual refinement of mesh in
the vicinity of the deck reduced computational time. If the researchers had refined the mesh in the complete domain, it would have increased the number of cells, which would require more
computational time.)
The TRACC parallel computing system, shown here, provided researchers the visualization capabilities necessary for their experiments. The TRACC computational cluster consists of a 512-core,
customized LS-1 system from Linux Networx.
The team presented the experimental and simulation results in dimensionless form for various Froude numbers (which characterize the open-channel flow) and the inundation ratio (which characterizes
the flooding height). They presented the measured and simulated forces and moments in terms of dimensionless coefficients as drag, lift, and moment coefficients. They performed the Fluent simulations
for 200 seconds using LES as well as the k-ε turbulence model. Due to fluctuating values of CD (drag force coefficient), CL (lift force coefficient), and CM (moment coefficient) with time,
they averaged the final values from 50 to 200 seconds. They checked the velocity distribution along the depth of flow at various locations upstream of the bridge deck, and profiles clearly showed
that fully developed flow occurs upstream of the bridge deck.
│ Defining the Terms │
│ │
│ As customary, the measured and calculated drag, lift, and moments often are expressed in terms of dimensionless drag, lift, and moment coefficients. Depending on the value of upstream water │
│ level, hu, the drag coefficient, CD, is defined as │
│ │
│ the lift coefficient CL is defined as │
│ │
│ and the moment coefficient CM is defined as │
│ │
│ All variables used in the equations above are defined in the following table: │
│ Drag Force │ FD │
│ Lift Force │ FL │
│ Moment │ M │
│ Drag Force Coefficient │ CD │
│ Lift Force Coefficient │ CL │
│ Moment Coefficient │ CM │
│ Density of Fluid │ ρ │
│ Upstream Velocity │ V │
│ Gravitational Acceleration │ g │
│ Upstream Flow Depth │ hu │
│ Froude Number │ Fr │
│ Inundation Depth │ hb │
│ Inundation Ratio │ h* │
│ Deck Width │ W │
│ Deck Height │ S │
│ Deck Length │ L │
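The coefficient equations referenced in the sidebar did not survive extraction, and the exact reference lengths in the FHWA definitions depend on the upstream water level hu. As a generic sketch only, force and moment coefficients of this kind are typically nondimensionalized by dynamic pressure and a reference area; the variable names and the reference-area choice below are our assumptions, not the study's exact definitions.

```python
def drag_coefficient(F_D, rho, V, ref_area):
    # C_D = F_D / (0.5 * rho * V^2 * A)
    return F_D / (0.5 * rho * V**2 * ref_area)

def lift_coefficient(F_L, rho, V, ref_area):
    # C_L = F_L / (0.5 * rho * V^2 * A)
    return F_L / (0.5 * rho * V**2 * ref_area)

def moment_coefficient(M, rho, V, ref_area, ref_length):
    # C_M = M / (0.5 * rho * V^2 * A * L): moments carry an extra reference length
    return M / (0.5 * rho * V**2 * ref_area * ref_length)

# Hypothetical numbers: water, 0.5 m/s approach flow, 0.02 m^2 reference area.
cd = drag_coefficient(2.0, 1000.0, 0.5, 0.02)   # -> 0.8
```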
The experimental data can be compared with the Fluent and STAR-CD simulation results for the drag coefficient as a function of dimensionless inundation ratio, h*, and Froude number, Fr, which
characterizes the flow rate. The simulated drag coefficient using Fluent increases as the inundation ratio increases from 1.0 to 1.5, and then tends to level off for the inundated bridge deck. In
other words, increasing the water level over the submerged bridge causes CD to reach its maximum and then stay constant, similar to what the researchers observed in the experiments. Although the
simulated data using STAR-CD are slightly above those observed from Fluent, they are generally compatible with each other. In practice, an empirical equation can be used to estimate CD, including the
effects of the inundation ratio and Froude number.
This computer visualization shows the meshed geometry of the 3-D bridge deck for the CFD model in Fluent. The bridge deck is in yellow/brown. The portion of the flume wall behind the deck is in dark
blue, and the portion below it is in light blue. Both portions are cross-hatched in black lines to indicate the triangular mesh placed in the flow domain, with the lines and triangles denser for the
more detailed flow field investigation close to the bridge deck.
The researchers also looked at the variation of lift coefficient values, CL, versus inundation ratio using Fluent and STAR-CD simulation results along with the experimental data. As in the
experiments, the component of buoyancy is excluded from the lift forces calculated with the CFD models. Both the simulated and experimental lift coefficients come out negative, indicating that
the net hydrodynamic force is directed downward. The lift coefficient decreases (that is, it gets more negative) as the inundation ratio decreases, until h* = 1.0. The maximum (negative) value
of CL is observed between h* = 0.5 and 1.0. As with the drag coefficient, when inundation is very large, the effect of the free surface diminishes and CL tends to a constant. This means that when the
bridge deck is immersed well below the water level, the net lift force on the bridge deck does not change.
The graph shows experimentally determined and computer simulated drag coefficients versus inundation ratio for the six-girder bridge deck model. One blue line roughly tracks with the Froude Number
0.22, Reynolds Number 20292 data points (hollow triangles), and the other roughly tracks with the Froude Number 0.32 and Reynolds Number 28965 data points (black triangles). The plot shows that the
drag coefficient is constant for higher inundation ratios. Source: FHWA.
The researchers also compared the moment results of the drag and lift forces for the simulation data and experimental data. Again, the simulated results from Fluent and STAR-CD were close to the
experimental results.
The research team conducted the PIV experiments in a special flume using transparent Plexiglas models. The PIV flow field analysis was necessary to calibrate the CFD models. The researchers observed
strong agreement between computer simulation and PIV experiments; comparison of the contracted flow fields under the bridge deck especially showed excellent conformity.
The graph shows experimentally determined and computer simulated lift coefficients versus inundation ratio for the six-girder bridge deck model. The lift coefficient is negative for all tested
inundation ratios. This observation corresponds to a down-pull force. Source: FHWA.
Significant Findings
The k-ε turbulence model and LES, together with the VOF method, faithfully simulate the flow past the bridge deck in an open channel. The predictions of drag, lift, and moment coefficients
from the numerical modeling show a trend similar to the flume experimental results. Future bridge designers can use the numerical CFD model without much effort to obtain the coefficients on a
bridge deck in an open channel for the various flow conditions encountered in practice. The Argonne and FHWA researchers are undertaking additional simulations for other shapes of bridge decks to
predict the drag, lift, and moments with the help of CFD.
The graph shows experimentally determined and computer simulated moment coefficients versus inundation ratio for the six-girder bridge deck model. The moment coefficients have a maximum when the
girders of the bridge deck are inundated. Source: FHWA.
"The validation of the computer modeling gives confidence in the results from computer program analysis of complex hydraulics structures without using physical models," says Michael Fazio, deputy
director of research and innovation at the Utah Department of Transportation. "This is a great advancement for hydraulics research."
FHWA's future strategic plan for hydraulics R&D proposes to move away gradually from physical experiments and use more CFD modeling to develop design guidance for practitioners. This successful
collaboration between Argonne and FHWA's hydraulics R&D program is the first step toward that vision.
"The use of CFD software on high-speed computers will undoubtedly play an increasingly important role in the future for understanding scour potential at bridges," says Caltrans's Flora. "Rapid
development of various bridge scenarios will be much easier through numeric simulation on supercomputers than through physical modeling in the lab."
Shown here are the velocity contour plots for the six-girder bridge deck using Fluent CFD code (top) and PIV technology (bottom). The red area below the bridge deck indicates higher flow velocities,
green represents intermediate velocities, and blue indicates areas with negative velocities. The blue region between the girders shows negative velocities where the water is trapped under the bridge
deck and the water is flowing backward. The higher velocities are mostly below the bridge deck due to the higher turbulence level, while the regions away from the deck show intermediate velocities
due to lower turbulence levels. The negative velocities indicate that the water is flowing in a circular motion creating underpressure between the girders. The underpressure system creates turning
moments that are dangerous to the bridge deck. Source: Argonne-TRACC, FHWA.
Kornel Kerenyi is a hydraulic research program manager in FHWA's Office of Infrastructure R&D. He coordinates FHWA's hydraulic and hydrology research activities with State and local agencies,
academia, and various partners and customers, and he manages the Hydraulics Laboratory. He was previously a research engineer for GKY & Associates, Inc. and supervised the support staff in the data
collection and analysis for this study. Kerenyi holds a doctorate in fluid mechanics and hydraulic steel structures from the Vienna University of Technology in Austria.
Tanju Sofu manages the Engineering Simulation and Safety Analysis section in the Engineering Analysis Department at Argonne. He specializes in large-scale computational physics and fluid dynamics
simulations on high-performance computing platforms and has extensive experience with a wide range of engineering systems analyses involving multidimensional, multiscale, multiphysics phenomena. Sofu
holds a doctorate from The University of Tennessee Knoxville.
Junke Guo is an assistant professor in the department of Civil Engineering at the Peter Kiewit Institute at the University of Nebraska-Omaha. He received his doctorate in fluid mechanics and
hydraulics from Colorado State University. His research interests include CFD, turbulent boundary layer flows, open-channel turbulence and sediment transport, and environmental fluid mechanics. His
current research emphasizes CFD applications in transportation-related flows.
For more information, contact Kornel Kerenyi at 202-493-3142, kornel.kerenyi@dot.gov; Tanju Sofu at 630-262-9673, tsofu@anl.gov; or Junke Guo at 402-554-3873, junkeguo@mail.unomaha.edu.
Mathematical methods for valuation and risk assessment of investment projects and real options
Cisneros-Molina, Myriam (2006) Mathematical methods for valuation and risk assessment of investment projects and real options. PhD thesis, University of Oxford.
In this thesis, we study the problems of risk measurement, valuation and hedging of financial positions in incomplete markets, where an insufficient number of assets is available for investment (real
options). We work closely with three measures of risk: Worst-Case Scenario (WCS) (the supremum of expected values over a set of given probability measures), Value-at-Risk (VaR) and Average
Value-at-Risk (AVaR), and analyse the problem of hedging derivative securities depending on a non-traded asset, defined in terms of the risk measures via their acceptance sets. The hedging problem
associated with VaR is the problem of minimising the expected shortfall. For WCS, the hedging problem turns out to be a robust version of minimising the expected shortfall; and as AVaR can be seen as
a particular case of WCS, its hedging problem is also related to the minimisation of expected shortfall.
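On a finite sample of losses, VaR and AVaR have simple empirical counterparts that make the relationship between them concrete. The sketch below uses one common discrete convention for the quantile (conventions vary, and the thesis defines the measures more generally via acceptance sets); a sample-based WCS would additionally take the supremum of such estimates over a set of candidate probability models.

```python
def value_at_risk(losses, alpha):
    # Empirical VaR at level alpha: a loss threshold exceeded with probability <= alpha.
    ordered = sorted(losses)                       # ascending
    k = int((1 - alpha) * len(ordered))
    return ordered[min(k, len(ordered) - 1)]

def average_value_at_risk(losses, alpha):
    # Empirical AVaR (expected shortfall): mean of the worst alpha-fraction of losses.
    ordered = sorted(losses, reverse=True)         # worst first
    n_tail = max(1, int(alpha * len(ordered)))
    return sum(ordered[:n_tail]) / n_tail

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
var_10 = value_at_risk(losses, 0.10)               # 10
avar_20 = average_value_at_risk(losses, 0.20)      # (10 + 9) / 2 = 9.5
```

Note that AVaR at a given level is never smaller than VaR at the same level: averaging over the whole tail cannot come out below the tail's threshold.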
Under some sufficient conditions, we solve explicitly the minimal expected shortfall problem in a discrete-time setting of two assets driven by correlated binomial models.
In the continuous-time case, we analyse the problem of measuring risk by WCS, VaR and AVaR on positions modelled as Markov diffusion processes and develop some results on transformations of Markov
processes to apply to the risk measurement of derivative securities. In all cases, we characterise the risk of a position as the solution of a partial differential equation of second order with
boundary conditions. In relation to the valuation and hedging of derivative securities, and in the search for explicit solutions, we analyse a variant of the robust version of the expected shortfall
hedging problem. Instead of taking the loss function
Attleboro Precalculus Tutor
Find an Attleboro Precalculus Tutor
...It will cover highlights of each of the 66 books of the Bible. It will include a close examination of why the ransom was necessary as well as the many prophecies beginning with the first
prophecy found in Genesis and down through Revelations. I have spent over a decade in study and helped others with studying the Bible.
38 Subjects: including precalculus, reading, algebra 1, English
I recently completed my undergraduate studies in pure mathematics at Brown University. I am available as a tutor for pre-algebra, algebra I, algebra II, geometry, trigonometry, pre-calculus,
calculus I, II, and III, SAT preparation, and various other standardized test preparations. I have extensiv...
22 Subjects: including precalculus, reading, Spanish, calculus
...As part of graduate coursework I assistant taught math courses, including linear algebra. I worked in an undergraduate tutorial office for one year as a tutor for subjects including linear
algebra. I was enrolled for two years in a math graduate program in logic.
29 Subjects: including precalculus, reading, calculus, English
...I can teach the basics of grammar, spelling, and punctuation for the lower levels (K-5), and essay writing, critical analysis, and critical essays of the classics for upper level grades. Before
I began a family I was in the actuarial field. I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students.
25 Subjects: including precalculus, English, reading, calculus
...I tremendously enjoy sharing my love of math with others and consistently seek to impart an integrated understanding of the material rather than simply helping students memorize formulas and
procedures. As a tutor I am patient and supportive and delight in seeing my students succeed. If it sounds to you like I can be helpful I hope you will consider giving me a chance.
14 Subjects: including precalculus, calculus, geometry, algebra 1
The FFT function returns a result equal to the complex, discrete Fourier transform of Array . The result of this function is a single- or double-precision complex array.
The discrete Fourier transform, F(u), of an N-element, one-dimensional function, f(x), is defined as:

F(u) = (1/N) Σ_{x=0}^{N-1} f(x) exp(-j 2π u x / N)

And the inverse transform (Direction > 0) is defined as:

f(x) = Σ_{u=0}^{N-1} F(u) exp(j 2π u x / N)
The result returned by FFT is a complex array that has the same dimensions as the input array. The output array is ordered in the same manner as almost all discrete Fourier transforms. Element 0
contains the zero-frequency component, F(0). F(1) contains the smallest nonzero positive frequency, which is equal to 1/(Ni Ti), where Ni and Ti are the number of elements and the sampling interval
of the ith dimension, respectively. F(2) corresponds to a frequency of 2/(Ni Ti). Negative frequencies are stored in the reverse order of positive frequencies, ranging from the highest to lowest
negative frequencies (see storage scheme below).
NOTE: The FFT can be performed on functions of up to eight (8) dimensions in size. If a function has n dimensions, IDL performs a transform in each dimension separately, starting with the first
dimension and progressing sequentially to dimension n. For example, if the function has two dimensions, IDL first does the FFT row by row, and then column by column.
For an even number of points in the ith dimension, the storage scheme of returned complex values is as follows:

│ F(0) │ F(1) │ ... │ F(N/2 - 1) │ F(N/2) │ F(-(N/2 - 1)) │ ... │ F(-1) │
│ Real, Imag │ Real, Imag │ ... │ Real, Imag │ Real, Imag │ Real, Imag │ ... │ Real, Imag │
For an odd number of points in the ith dimension, the storage scheme of returned complex values is as follows:

│ F(0) │ F(1) │ ... │ F((N-1)/2) │ F(-(N-1)/2) │ ... │ F(-1) │
│ Real, Imag │ Real, Imag │ ... │ Real, Imag │ Real, Imag │ ... │ Real, Imag │
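The ordering and normalization conventions above can be checked against a direct (slow) evaluation of the transform. The sketch below is plain Python rather than IDL and mirrors FFT's conventions: the forward transform (direction = -1) carries the 1/N factor, and element 0 of the output is the zero-frequency component.

```python
import cmath

def dft(x, direction=-1):
    # Direct O(N^2) evaluation of the discrete Fourier transform with
    # IDL-style conventions: the forward transform (direction < 0) applies 1/N.
    n = len(x)
    sign = -1 if direction < 0 else 1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * u * k / n)
               for k in range(n))
           for u in range(n)]
    if direction < 0:
        out = [v / n for v in out]
    return out

# A constant input concentrates everything in element 0 (zero frequency):
spec = dft([1.0, 1.0, 1.0, 1.0])     # spec[0] ~ 1+0j, the rest ~ 0
```

Applying dft with direction=1 to a forward result recovers the original signal, matching the forward/inverse pair defined earlier.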
The array to which the Fast Fourier Transform should be applied. If Array is not of complex type, it is converted to complex type. The dimensions of the result are identical to those of Array . The
size of each dimension may be any integer value and does not necessarily have to be an integer power of 2, although powers of 2 are certainly the most efficient.
Direction is a scalar indicating the direction of the transform, which is negative by convention for the forward transform, and positive for the inverse transform. If Direction is not specified, the
forward transform is performed.
A normalization factor of 1/ N , where N is the number of points, is applied during the forward transform.
NOTE: When transforming from a real vector to complex and back, it is slightly faster to set Direction to 1 in the real to complex FFT.
Note also that the value of Direction is ignored if the INVERSE keyword is set.
Set this keyword to a value other than zero to force the computation to be done in double-precision arithmetic, and to give a result of double-precision complex type. If DOUBLE is set equal to zero,
computation is done in single-precision arithmetic and the result is single-precision complex. If DOUBLE is not specified, the data type of the result will match the data type of Array .
Set this keyword to perform an inverse transform. Setting this keyword is equivalent to setting the Direction argument to a positive value. Note, however, that setting INVERSE results in an inverse
transform even if Direction is specified as negative.
Running Time
For a one-dimensional FFT, running time is roughly proportional to the total number of points in Array times the sum of its prime factors. Let N be the total number of elements in Array, and
decompose N into its prime factors:

N = f1 · f2 · ... · fm

Running time is then roughly proportional to:

N · (T[f1] + T[f2] + ... + T[fm])

where T[f] is the cost associated with a prime factor f, and T[3] ≈ 4T[2]. For example, the running time of a 263 point FFT is approximately 10 times longer than that of a 264 point FFT, even though
there are fewer points. The sum of the prime factors of 263 is 264 (1 + 263), while the sum of the prime factors of 264 is 20 (2 + 2 + 2 + 3 + 11).
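The rule of thumb above is easy to check numerically. This is a sketch of the heuristic only, not of FFT's actual internals.

```python
def prime_factors(n):
    # Prime factorization by trial division (adequate for small n).
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def fft_cost_estimate(n):
    # Rule of thumb from the text: cost ~ N times the sum of N's prime factors.
    return n * sum(prime_factors(n))

# 263 is prime, so under this estimate the 263-point transform is roughly an
# order of magnitude costlier than the 264-point one.
ratio = fft_cost_estimate(263) / fft_cost_estimate(264)
```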
Display the log of the power spectrum of a 100-element index array by entering:
PLOT, /YLOG, ABS(FFT(FINDGEN(100), -1))
As a more complex example, display the power spectrum of a 100-element vector sampled at a rate of 0.1 seconds per point. Show the 0 frequency component at the center of the plot and label the
abscissa with frequency:
N = 100 ; Define the number of points.
T = 0.1 ; Define the interval.
N21 = N/2 + 1 ; Midpoint+1 is the most negative frequency subscript.
F = INDGEN(N) ; The array of subscripts.
F[N21] = N21 -N + FINDGEN(N21-2) ; Insert negative frequencies in elements F(N/2 +1), ..., F(N-1).
F = F/(N*T) ; Convert subscripts to frequencies.
PLOT, /YLOG, SHIFT(F, -N21), SHIFT(ABS(FFT(F, -1)), -N21)
; Shift so that the most negative frequency is plotted first.
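For readers without IDL, the same frequency-axis construction can be sketched in plain Python; the variable names mirror the IDL snippet above.

```python
N, T = 100, 0.1              # number of points and sampling interval (seconds)
N21 = N // 2 + 1             # midpoint + 1: first negative-frequency subscript

f = list(range(N))           # subscripts 0 .. N-1
for i in range(N21, N):
    f[i] = i - N             # replace the upper half with negative subscripts
freq = [k / (N * T) for k in f]   # subscripts -> frequencies in Hz

# freq[0] = 0.0 (the zero-frequency component), freq[1] = 0.1 Hz (the smallest
# positive frequency), and freq[N21] is the most negative frequency.
```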
Cryptoanalysis is a synonym for codebreaking and refers to the practice of analyzing ciphertext with the intent of breaking it or "cracking the code". Cryptoanalysis is the study of ways that can be
used to obtain plaintext information from encrypted information without using the intended method such as using a key to decrypt the information.
Cryptoanalysis attempts to attack weaknesses in the methods used to encrypt code or the methods used to generate keys.
There are several cryptoanalysis methods including:
1. Linear cryptoanalysis is a known-plaintext attack which uses linear approximations to describe the behavior of the block cipher. If enough plaintext and matching ciphertext pairs are obtained, information
about the key can eventually be discovered. It has been used successfully against FEAL and DES.
2. Differential cryptoanalysis is a chosen-plaintext attack that depends on an analysis of the differences between two related plaintexts as they are encrypted using the
same key. By carefully analyzing the data the probability of possible keys being used to encrypt the data can be calculated and the correct key can be eventually identified. | {"url":"http://www.comptechdoc.org/independent/security/terms/cryptoanalysis.html","timestamp":"2014-04-21T14:39:43Z","content_type":null,"content_length":"3754","record_id":"<urn:uuid:def02584-a9d5-453c-99a7-23a3f9b033c3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
The disturbance matrix and covariance
October 14th 2009, 07:49 AM #1
Oct 2009
This may be a silly question, but I would really appreciate a serious answer.
The disturbance matrix can be constructed by multiplying the disturbance vector with its transpose. The diagonal then contains variances, the off-diagonal elements are covariances between pairs
of observations (right?).
My question is: Why do we get covariances this way (when the equation is really more demanding)? Is it because the expected value of the disturbance term is 0, so that it already contains
deviance scores (where the expected value is subtracted)?
I am inclined to think so, but I don’t like the fact that the expected values that are subtracted are based on all the observations, whereas the “covariances” in the matrix just concern single
pairs of them. Am I missing something? Is it ok to base the expected values used in the covariance equation on more observations than are included in the calculation of specific entries?
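The construction under discussion can be checked numerically: average the outer products ee' over many draws and, precisely because E[e] = 0, the result converges to the covariance matrix with no extra mean-subtraction step. A sketch (NumPy; the particular covariance matrix is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.8],
                     [0.8, 1.0]])   # hypothetical Var/Cov of a 2-element disturbance

# Draw many disturbance vectors with zero mean and the covariance above.
eps = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=200_000)

# Average of the outer products e e' -- no means are subtracted anywhere.
outer_avg = eps.T @ eps / len(eps)

print(outer_avg)  # diagonal ~ variances, off-diagonal ~ covariance, because E[e] = 0
```

If E[e] were not zero, the same average would instead converge to Cov(e) plus the outer product of the mean with itself, which is exactly why the zero-mean assumption matters here.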
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-statistics/108000-disturbance-matrix-covariance.html","timestamp":"2014-04-19T00:05:32Z","content_type":null,"content_length":"29780","record_id":"<urn:uuid:b76a456f-0464-44ae-a0b7-a39ddd5769c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fayville Calculus Tutor
Find a Fayville Calculus Tutor
...I've used it to solve all sorts of numerical problems. I've been tutoring in linear algebra and recently helped a student at Columbia learn this subject. As a tutor, I've taught math topics
such as differential equations, algebra, calculus, vector calculus, precalculus, trigonometry, probability, and statistics.
47 Subjects: including calculus, reading, chemistry, geometry
...I have taken my MTELs and passed the 09Mathematics. I am certified to teach high school mathematics. Linear Algebra deals with vector spaces including eigenvectors, linear transformations,
matrices and their identities, and solving systems of linear equations using row reduction, Gauss-Jordan elimination and using the inverse matrix to solve systems of equations.
38 Subjects: including calculus, reading, algebra 1, English
...For fun we talk about gambling odds and the probability of winning the lottery. I have been a high school Math teacher for the past 17 years and have tutored a number of students in preparation
for the ACT, SAT, SSAT & GRE exams. I generally use the AP Collegeboard test prep books.
29 Subjects: including calculus, geometry, GRE, algebra 1
...Classes I have tutored: MIT 18.06 - Linear Algebra University of Phoenix MTH360 - Linear Algebra I aced a course in electromagnetism at MIT and passed the Fundamentals of Engineering exam,
which tests basic understanding of RLC circuits. I also have lots of hands-on experience with circuits; I h...
8 Subjects: including calculus, physics, SAT math, differential equations
...Since then I have been a nanny and a tutor and a cheerleading coach, while also starting a family. As far as my tutoring background, I started in high school when I spent my study halls
tutoring student peers that needed the extra help. Then spent after school volunteering at an elementary school to help out children that were falling behind class.
17 Subjects: including calculus, geometry, statistics, linear algebra | {"url":"http://www.purplemath.com/fayville_ma_calculus_tutors.php","timestamp":"2014-04-18T14:06:42Z","content_type":null,"content_length":"24121","record_id":"<urn:uuid:d408673e-b253-4310-820c-0c660d1faf86>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
Degrees of freedom (statistics)
From Wikipedia, the free encyclopedia
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.^1
The number of independent ways by which a dynamic system can move, without violating any constraint imposed on it, is called the number of degrees of freedom. In other words, the degrees of freedom can be defined as
the minimum number of independent coordinates that can specify the position of the system completely.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the
degrees of freedom. In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as
intermediate steps in the estimation of the parameter itself (i.e., the sample variance has N-1 degrees of freedom, since it is computed from N random scores minus the only 1 parameter estimated as
intermediate step, which is the sample mean).^2
Mathematically, degrees of freedom is the number of dimensions of the domain of a random vector, or essentially the number of 'free' components (how many components need to be known before the vector
is fully determined).
The term is most often used in the context of linear models (linear regression, analysis of variance), where certain random vectors are constrained to lie in linear subspaces, and the number of
degrees of freedom is the dimension of the subspace. The degrees of freedom are also commonly associated with the squared lengths (or "sum of squares" of the coordinates) of such vectors, and the
parameters of chi-squared and other distributions that arise in associated statistical testing problems.
While introductory textbooks may introduce degrees of freedom as distribution parameters or through hypothesis testing, it is the underlying geometry that defines degrees of freedom, and is critical
to a proper understanding of the concept. Walker (1940)^3 has stated this succinctly as "the number of observations minus the number of necessary relations among these observations."
In equations, the typical symbol for degrees of freedom is $\nu$ (lowercase Greek letter nu). In text and tables, the abbreviation "d.f." is commonly used. R.A. Fisher used n to symbolize degrees of
freedom (writing n′ for sample size) but modern usage typically reserves n for sample size.
A common way to think of degrees of freedom is as the number of independent pieces of information available to estimate another piece of information. More concretely, the number of degrees of freedom
is the number of independent observations in a sample of data that are available to estimate a parameter of the population from which that sample is drawn. For example, if we have two observations,
when calculating the mean we have two independent observations; however, when calculating the variance, we have only one independent observation, since the two observations are equally distant from
the mean.
In fitting statistical models to data, the vectors of residuals are constrained to lie in a space of smaller dimension than the number of components in the vector. That smaller dimension is the
number of degrees of freedom for error.
Linear regression
Perhaps the simplest example is this. Suppose
$X_1,\dots,X_n$
are random variables each with expected value μ, and let
$\overline{X}_n={X_1+\cdots+X_n \over n}$
be the "sample mean." Then the quantities
$X_i-\overline{X}_n$
are residuals that may be considered estimates of the errors X[i] − μ. The sum of the residuals (unlike the sum of the errors) is necessarily 0. If one knows the values of any n − 1 of the residuals,
one can thus find the last one. That means they are constrained to lie in a space of dimension n − 1. One says that "there are n − 1 degrees of freedom for errors."
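The constraint described here is visible in a few lines; a sketch (NumPy, with arbitrary made-up data):

```python
import numpy as np

x = np.array([3.1, 4.7, 2.2, 5.9, 4.1])   # n = 5 arbitrary observations
resid = x - x.mean()                       # the residuals X_i - X-bar

print(resid.sum())         # 0 up to rounding: the single linear constraint
print(-resid[:-1].sum())   # knowing any n-1 residuals determines the last one...
print(resid[-1])           # ...which is why only n-1 of them are "free"
```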
An only slightly less simple example is that of least squares estimation of a and b in the model
$Y_i=a+bx_i+\varepsilon_i\text{ for } i=1,\dots,n$
where x[i] are given, but ε[i] and hence Y[i] are random. Let $\widehat{a}$ and $\widehat{b}$ be the least-squares estimates of a and b. Then the residuals
$e_i=y_i-(\widehat{a}+\widehat{b}x_i)\,$
are constrained to lie within the space defined by the two equations
$e_1+\cdots+e_n=0,\,$
$x_1 e_1+\cdots+x_n e_n=0.\,$
One says that there are n − 2 degrees of freedom for error.
Note about notation: the capital letter Y is used in specifying the model, while lower-case y in the definition of the residuals; that is because the former are hypothesized random variables and the
latter are actual data.
We can generalise this to multiple regression involving p parameters and covariates (e.g. p − 1 predictors and one mean), in which case the cost in degrees of freedom of the fit is p.
Degrees of freedom of a random vector
Geometrically, the degrees of freedom can be interpreted as the dimension of certain vector subspaces. As a starting point, suppose that we have a sample of n independent normally distributed
observations, $X_1,\dots,X_n$. This can be represented as an n-dimensional random vector:
$\begin{pmatrix} X_1\\ \vdots \\ X_n \end{pmatrix}.$
Since this random vector can lie anywhere in n-dimensional space, it has n degrees of freedom.
Now, let $\bar X$ be the sample mean. The random vector can be decomposed as the sum of the sample mean plus a vector of residuals:
$\begin{pmatrix} X_1\\ \vdots \\ X_n \end{pmatrix} = \bar X \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} + \begin{pmatrix} X_1-\bar{X} \\ \vdots \\ X_n-\bar{X} \end{pmatrix}.$
The first vector on the right-hand side is constrained to be a multiple of the vector of 1's, and the only free quantity is $\bar X$. It therefore has 1 degree of freedom.
The second vector is constrained by the relation $\sum_{i=1}^n (X_i-\bar X)=0$. The first n − 1 components of this vector can be anything. However, once you know the first n − 1 components, the
constraint tells you the value of the nth component. Therefore, this vector has n − 1 degrees of freedom.
Mathematically, the first vector is the orthogonal, or least-squares, projection of the data vector onto the subspace spanned by the vector of 1's. The 1 degree of freedom is the dimension of this
subspace. The second residual vector is the least-squares projection onto the (n − 1)-dimensional orthogonal complement of this subspace, and has n − 1 degrees of freedom.
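The projection language can be made concrete; a sketch (NumPy, arbitrary data) verifying that the two components are orthogonal and that the residual part carries the single sum-to-zero constraint:

```python
import numpy as np

x = np.array([2.0, 5.0, 1.0, 4.0, 8.0])
mean_part = np.full_like(x, x.mean())   # projection onto span{(1, ..., 1)}: 1 df
resid_part = x - mean_part              # projection onto the orthogonal complement: n-1 df

print(mean_part + resid_part)   # recovers x exactly
print(mean_part @ resid_part)   # 0: the two pieces are orthogonal
print(resid_part.sum())         # 0: the one constraint on the residual vector
```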
In statistical testing applications, often one isn't directly interested in the component vectors, but rather in their squared lengths. In the example above, the residual sum-of-squares is
$\sum_{i=1}^n (X_i - \bar{X})^2 = \begin{Vmatrix} X_1-\bar{X} \\ \vdots \\ X_n-\bar{X} \end{Vmatrix}^2.$
If the data points $X_i$ are normally distributed with mean 0 and variance $\sigma^2$, then the residual sum of squares has a scaled chi-squared distribution (scaled by the factor $\sigma^2$), with n
− 1 degrees of freedom. The degrees-of-freedom, here a parameter of the distribution, can still be interpreted as the dimension of an underlying vector subspace.
Likewise, the one-sample t-test statistic,
$\frac{ \sqrt{n} (\bar{X}-\mu_0) }{ \sqrt{\sum\limits_{i=1}^n (X_i-\bar{X})^2 / (n-1)} }$
follows a Student's t distribution with n − 1 degrees of freedom when the hypothesized mean $\mu_0$ is correct. Again, the degrees-of-freedom arises from the residual vector in the denominator.
Degrees of freedom in linear models
The demonstration of the t and chi-squared distributions for one-sample problems above is the simplest example where degrees-of-freedom arise. However, similar geometry and vector decompositions
underlie much of the theory of linear models, including linear regression and analysis of variance. An explicit example based on comparison of three means is presented here; the geometry of linear
models is discussed in more complete detail by Christensen (2002).^4
Suppose independent observations are made for three populations, $X_1,\ldots,X_n$, $Y_1,\ldots,Y_n$ and $Z_1,\ldots,Z_n$. The restriction to three groups and equal sample sizes simplifies notation,
but the ideas are easily generalized.
The observations can be decomposed as
\begin{align} X_i &= \bar{M} + (\bar{X}-\bar{M}) + (X_i-\bar{X})\\ Y_i &= \bar{M} + (\bar{Y}-\bar{M}) + (Y_i-\bar{Y})\\ Z_i &= \bar{M} + (\bar{Z}-\bar{M}) + (Z_i-\bar{Z}) \end{align}
where $\bar{X}, \bar{Y}, \bar{Z}$ are the means of the individual samples, and $\bar{M}=(\bar{X}+\bar{Y}+\bar{Z})/3$ is the mean of all 3n observations. In vector notation this decomposition can be
written as
$\begin{pmatrix} X_1 \\ \vdots \\ X_n \\ Y_1 \\ \vdots \\ Y_n \\ Z_1 \\ \vdots \\ Z_n \end{pmatrix} = \bar{M} \begin{pmatrix}1 \\ \vdots \\ 1 \\ 1 \\ \vdots \\ 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} + \begin{pmatrix}\bar{X}-\bar{M}\\ \vdots \\ \bar{X}-\bar{M} \\ \bar{Y}-\bar{M}\\ \vdots \\ \bar{Y}-\bar{M} \\ \bar{Z}-\bar{M}\\ \vdots \\ \bar{Z}-\bar{M} \end{pmatrix} + \begin{pmatrix} X_1-\bar{X} \\ \vdots \\ X_n-\bar{X} \\ Y_1-\bar{Y} \\ \vdots \\ Y_n-\bar{Y} \\ Z_1-\bar{Z} \\ \vdots \\ Z_n-\bar{Z} \end{pmatrix}.$
The observation vector, on the left-hand side, has 3n degrees of freedom. On the right-hand side, the first vector has one degree of freedom (or dimension) for the overall mean. The second vector
depends on three random variables, $\bar{X}-\bar{M}$, $\bar{Y}-\bar{M}$ and $\overline{Z}-\overline{M}$. However, these must sum to 0 and so are constrained; the vector therefore must lie in a
2-dimensional subspace, and has 2 degrees of freedom. The remaining 3n − 3 degrees of freedom are in the residual vector (made up of n − 1 degrees of freedom within each of the populations).
Sum of squares and degrees of freedom
In statistical testing problems, one usually isn't interested in the component vectors themselves, but rather in their squared lengths, or Sum of Squares. The degrees of freedom associated with a
sum-of-squares is the degrees-of-freedom of the corresponding component vectors.
The three-population example above is an example of one-way Analysis of Variance. The model, or treatment, sum-of-squares is the squared length of the second vector,
$SSTr = n(\bar{X}-\bar{M})^2 + n(\bar{Y}-\bar{M})^2 + n(\bar{Z}-\bar{M})^2$
with 2 degrees of freedom. The residual, or error, sum-of-squares is
$SSE = \sum_{i=1}^n (X_i-\bar{X})^2 + \sum_{i=1}^n (Y_i-\bar{Y})^2 + \sum_{i=1}^n (Z_i-\bar{Z})^2$
with 3(n-1) degrees of freedom. Of course, introductory books on ANOVA usually state formulae without showing the vectors, but it is this underlying geometry that gives rise to SS formulae, and shows
how to unambiguously determine the degrees of freedom in any given situation.
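These SS formulas can be exercised on toy data; a sketch (NumPy; the group means and sample size are made up) confirming the decomposition SSTotal = SSTr + SSE, with degrees of freedom 3n − 1 = 2 + 3(n − 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
X, Y, Z = (rng.normal(loc=m, size=n) for m in (0.0, 0.5, 1.0))  # three hypothetical samples

M = np.concatenate([X, Y, Z]).mean()                        # grand mean of all 3n values
sstr = n * sum((g.mean() - M) ** 2 for g in (X, Y, Z))      # treatment SS: 2 df
sse = sum(((g - g.mean()) ** 2).sum() for g in (X, Y, Z))   # error SS: 3(n-1) df
sstot = sum(((g - M) ** 2).sum() for g in (X, Y, Z))        # total SS: 3n-1 df

print(sstr + sse)   # equals sstot up to rounding
print(sstot)
```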
Under the null hypothesis of no difference between population means (and assuming that standard ANOVA regularity assumptions are satisfied) the sums of squares have scaled chi-squared distributions,
with the corresponding degrees of freedom. The F-test statistic is the ratio, after scaling by the degrees of freedom. If there is no difference between population means this ratio follows an F
distribution with 2 and 3n − 3 degrees of freedom.
In some complicated settings, such as unbalanced split-plot designs, the sums-of-squares no longer have scaled chi-squared distributions. Comparison of sum-of-squares with degrees-of-freedom is no
longer meaningful, and software may report certain fractional 'degrees of freedom' in these cases. Such numbers have no genuine degrees-of-freedom interpretation, but are simply providing an
approximate chi-squared distribution for the corresponding sum-of-squares. The details of such approximations are beyond the scope of this page.
Degrees of freedom parameters in probability distributions
Several commonly encountered statistical distributions (Student's t, Chi-Squared, F) have parameters that are commonly referred to as degrees of freedom. This terminology simply reflects that in many
applications where these distributions occur, the parameter corresponds to the degrees of freedom of an underlying random vector, as in the preceding ANOVA example. Another simple example is: if
$X_i;i=1,\ldots,n$ are independent normal $(\mu,\sigma^2)$ random variables, the statistic
$\frac{ \sum\limits_{i=1}^n (X_i - \bar{X})^2 }{\sigma^2}$
follows a chi-squared distribution with n−1 degrees of freedom. Here, the degrees of freedom arises from the residual sum-of-squares in the numerator, and in turn the n−1 degrees of freedom of the
underlying residual vector $\{X_i-\bar{X}\}$.
In the application of these distributions to linear models, the degrees of freedom parameters can take only integer values. The underlying families of distributions allow fractional values for the
degrees-of-freedom parameters, which can arise in more sophisticated uses. One set of examples is problems where chi-squared approximations based on effective degrees of freedom are used. In other
applications, such as modelling heavy-tailed data, a t or F distribution may be used as an empirical model. In these cases, there is no particular degrees of freedom interpretation to the
distribution parameters, even though the terminology may continue to be used.
Effective degrees of freedom
Many regression methods, including ridge regression, linear smoothers and smoothing splines are not based on ordinary least squares projections, but rather on regularized (generalized and/or
penalized) least-squares, and so degrees of freedom defined in terms of dimensionality is generally not useful for these procedures. However, these procedures are still linear in the observations,
and the fitted values of the regression can be expressed in the form
$\hat{y} = Hy,\,$
where $\hat{y}$ is the vector of fitted values at each of the original covariate values from the fitted model, y is the original vector of responses, and H is the hat matrix or, more generally,
smoother matrix.
For statistical inference, sums-of-squares can still be formed: the model sum-of-squares is $||Hy||^2$; the residual sum-of-squares is $||y-Hy||^2$. However, because H does not correspond to an
ordinary least-squares fit (i.e. is not an orthogonal projection), these sums-of-squares no longer have (scaled, non-central) chi-squared distributions, and dimensionally defined degrees-of-freedom
are not useful.
The effective degrees of freedom of the fit can be defined in various ways to implement goodness-of-fit tests, cross-validation and other inferential procedures. Here one can distinguish between
regression effective degrees of freedom and residual effective degrees of freedom.
Regression effective degrees of freedom
Regarding the former, appropriate definitions can include the trace of the hat matrix,^5 tr(H), the trace of the quadratic form of the hat matrix, tr(H'H), the form tr(2H – H H'), or the
Satterthwaite approximation, tr(H'H)^2/tr(H'HH'H). In the case of linear regression, the hat matrix H is X(X 'X)^−1X ', and all these definitions reduce to the usual degrees of freedom. Notice that
$\mathrm{tr}(H) = \sum_i h_{ii} = \sum_i \frac{\partial\hat{y}_i}{\partial y_i},$
i.e., the regression (not residual) degrees of freedom in linear models are "the sum of the sensitivities of the fitted values with respect to the observed response values".^6
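For ordinary least squares this trace really does reduce to the parameter count; a sketch (NumPy; the design matrix is random and p = 3 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + 2 covariates

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix of the OLS fit

print(np.trace(H))        # = p = 3: the regression degrees of freedom
print(np.trace(H.T @ H))  # also 3: H is an orthogonal projection, so H'H = H
```

For a regularized smoother, H is no longer a projection and the two traces diverge, which is exactly when the alternative definitions above start to differ.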
Residual effective degrees of freedom
There are corresponding definitions of residual effective degrees-of-freedom (redf), with H replaced by I − H. For example, if the goal is to estimate error variance, the redf would be defined as tr
((I − H)'(I − H)), and the unbiased estimate is (with $\hat{r}=y-Hy$),
$\hat\sigma^2 = \frac{ \|\hat{r}\|^2}{ \hbox{tr}\left( (I-H)'(I-H) \right) },$
$\hat\sigma^2 = \frac{ \|\hat{r}\|^2}{ n - \mathrm{tr}( 2 H - H H' ) } = \frac{ \|\hat{r}\|^2}{ n - 2 \, \mathrm{tr}(H) + \mathrm{tr}(H H') } \approx \frac{ \|\hat{r}\|^2}{ n - 1.25 \, \mathrm{tr}(H) + 0.5 }.$
The last approximation above^8 reduces the computational cost from O(n^2) to only O(n). In general the numerator would be the objective function being minimized; e.g., if the hat matrix includes an
observation covariance matrix, Σ, then $\|\hat{r}\|^2$ becomes $\hat{r}'\Sigma^{-1}\hat{r}$.
Note that unlike in the original case, non-integer degrees of freedom are allowed, though the value must usually still be constrained between 0 and n.
Consider, as an example, the k-nearest neighbour smoother, which is the average of the k nearest measured values to the given point. Then, at each of the n measured points, the weight of the original
value on the linear combination that makes up the predicted value is just 1/k. Thus, the trace of the hat matrix is n/k. Thus the smooth costs n/k effective degrees of freedom.
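The n/k claim follows because every point is among its own k nearest neighbours, so each diagonal entry of the smoother matrix is 1/k; a sketch (NumPy; the sample points are random, ties broken arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, size=30))
n, k = len(x), 5

H = np.zeros((n, n))
for i in range(n):
    nearest = np.argsort(np.abs(x - x[i]))[:k]  # the k nearest points, including i itself
    H[i, nearest] = 1.0 / k                     # k-NN averaging weights

print(np.trace(H))  # n/k = 6.0 effective degrees of freedom
```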
As another example, consider the existence of nearly duplicated observations. Naive application of classical formula, n − p, would lead to over-estimation of the residuals degree of freedom, as if
each observation were independent. More realistically, though, the hat matrix H = X(X ' Σ^−1 X)^−1X ' Σ^−1 would involve an observation covariance matrix Σ indicating the non-zero correlation among
observations. The more general formulation of effective degree of freedom would result in a more realistic estimate for, e.g., the error variance σ^2.
Other formulations
Similar concepts are the equivalent degrees of freedom in non-parametric regression,^10 the degree of freedom of signal in atmospheric studies,^11^12 and the non-integer degree of freedom in geodesy.
The residual sum-of-squares $||y-Hy||^2$ has a generalized chi-squared distribution, and the theory associated with this distribution^15 provides an alternative route to the answers provided above.
See also
- Eisenhauer, J.G. (2008) "Degrees of Freedom". Teaching Statistics, 30(3), 75–78
External links | {"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Degrees_of_freedom_(statistics)","timestamp":"2014-04-19T23:15:58Z","content_type":null,"content_length":"152725","record_id":"<urn:uuid:29b71b88-2042-4d95-ab0d-4af33cb574fd>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perplexing Mensuration Question - sphere in a cone
September 26th 2010, 12:33 AM #1
Hello everyone.
This maths question is somewhat beyond my ken. One part of it talks about similarity while the other talks about mensuration. Still, I could not find an interrelationship between the clues given.
Mensuration Question | Flickr
What I know intuitively is that there is a Pythagoras' Theorem involved in finding the radius of the sphere. However, VD, which is 8cm, is solely a part of the hypotenuse. If I have the value of
the hypotenuse, then I can use the Pythagoras' Theorem to find the radius of the cone.
Also, since triangle VDO is similar to triangle VCA, then it would be $VD=VC$, $VO=VA$ and $OD=CA$.
Can anyone give me hints? Help is greatly appreciated!
Here is a hint for you!
Let the radius of the sphere be r.
Since VD = 8cm, by the pythagoras theorem, VO is $\sqrt{64+r^2}$. Since OC is the radius, it has also length r.
Now VC = VO + OC = 12 cm. Substituting, you'll get $r + \sqrt{64 + r^2} = 12$
Move the r over, square both sides, and solve accordingly.
I did this quickly, so you may want to check the working for the equations.
Hello Gusbob!
Thank you for your reply.
So here's how it would be done:
$r + \sqrt{64 + r^2} = 12$
$r + 64 + r^2 = 144$
$r + r^2 = 80$
Oh no... it seems that I have to solve two unknowns. The answer may be wrong.
You made a mistake squaring: you need to square $r + \sqrt{64+r^2}$ as a whole, or move the r over with the 12.
Here is how I'll do it.
$r + \sqrt{64 + r^2} = 12$
$\sqrt{64+r^2} = 12 -r$
$64 + r^2 = 144 - 24r + r^2$
Sorry ... but what you've done is nearly a crime.
As Gusbob had told you, you have to move the linear r to the RHS first:
$\sqrt{64+r^2} = 12 - r$
Now square both sides:
$64 + r^2 = 144 - 24r + r^2$
Now solve for r.
Oh, I see. It seems that I had flopped my algebra.
$64 + r^2 = 144 - 24r + r^2$
$64-144 + r^2 - r^2 +24r = 0$
$-80 +24r = 0$
$24r = 80$
$r = 3\tfrac{1}{3}$
Now that I had found the radius of the sphere, I must find the volume of it:
$\dfrac{4}{3}\pi r^3$
$= \dfrac{4}{3}\pi \left(\dfrac{10}{3}\right)^3$
$= \dfrac{4}{3}\pi \dfrac{100}{9}$
$= \dfrac{400}{27}\pi$
Can anyone help me?
Oh, I see. It seems that I had flopped my algebra.
Now that I had found the radius of the sphere, I must find the volume of it:
$\dfrac{4}{3}\pi r^3$
$= \dfrac{4}{3}\pi \left(\dfrac{10}{3}\right)^3$
$= \dfrac{4}{3}\pi \dfrac{100}{9}$<--- You have squared the value of the radius
The correct calculation:
$= \dfrac{4}{3}\pi \cdot \dfrac{1000}{27} = \dfrac{4000}{81} \pi$
Last edited by earboth; September 28th 2010 at 03:41 AM. Reason: typo
I am very sorry, earboth.
So now I would have to find the radius of the cone.
I discern that triangle VDO is similar to triangle VCA.
$\dfrac{CA}{3\left(\frac{1}{3}\right)} = \dfrac{12}{8}$ (corr. sides of similar triangles)
CA = 5cm
Volume of the cone is $\dfrac{1}{3}\pi r^2h$
= $\dfrac{1}{3}\pi 5^2(12)$
$= 100\pi$
So the volume of cone that is not occupied by the sphere is:
$100\pi - \dfrac{4000}{81} \pi$
Am I on the right track?
I am very sorry, earboth. <--- why?
So now I would have to find the radius of the cone.
I discern that triangle VDO is similar to triangle VCA.
$\dfrac{CA}{3\left(\frac{1}{3}\right)}$$= \dfrac{12}{8}$ (corr. sides of similar triangles)
CA = 5cm <<<<<<
Volume of the cone is $\dfrac{1}{3}\pi r^2h$
= $\dfrac{1}{3}\pi5^2(12)$
$= 100\pi$<<<<<<
So the volume of cone that is not occupied by the sphere is:
$100\pi - \dfrac{4000}{81} \pi$
Am I on the right track?
If you simplify your result you'll see that the sphere nearly takes half of the volume of the cone.
(Btw: I've corrected your LaTeX)
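The thread's end-to-end arithmetic can be double-checked in a few lines of plain Python:

```python
import math

r = 10 / 3                                    # sphere radius from r + sqrt(64 + r^2) = 12
assert abs(r + math.sqrt(64 + r**2) - 12) < 1e-12

v_sphere = (4 / 3) * math.pi * r**3           # = 4000*pi/81
v_cone = (1 / 3) * math.pi * 5**2 * 12        # radius CA = 5 cm, height VC = 12 cm => 100*pi

print(v_sphere / v_cone)  # about 0.494: the sphere takes nearly half the cone, as noted above
```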
September 29th 2010, 01:46 AM #9 | {"url":"http://mathhelpforum.com/geometry/157452-perplexing-mensuration-question-sphere-cone.html","timestamp":"2014-04-21T04:03:29Z","content_type":null,"content_length":"66503","record_id":"<urn:uuid:28c7d26e-c38e-49c3-b634-e7a665d2427d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Exponential Functions: Introductory Concepts
Graphing Exponential Functions: Intro (page 1 of 4)
Sections: Introductory concepts, Step-by-step graphing instructions, Worked examples
Graphing exponential functions is similar to the graphing you have done before. However, by the nature of exponential functions, their points tend either to be very close to one fixed value or else
to be too large to be conveniently graphed. There will generally be only a few points that are "reasonable" to use for drawing your picture; picking these sensible points will require that you have a
good grasp of the general behavior of an exponential, so you can "fill in the gaps", so to speak.
Remember that the basic property of exponentials is that they change by a given proportion over a set interval. For instance, a medical isotope that decays to half the previous amount every twenty
minutes and a bacteria culture that triples every day each exhibits exponential behavior, because, in a given set amount of time (twenty minutes and one day, respectively), the quantity has changed
by a constant proportion (one-half as much and three times as much, respectively).
You can see this behavior in any basic exponential function, so we'll use y = 2^x as representative of the entire class of functions:
On the left-hand side of the x-axis, the graph appears to be on the x-axis. But the x-axis represents y = 0. Can you ever turn "2" into "0" by raising it to a power? Of course not. And a positive "2"
cannot turn into a negative number by raising it to a power, so the line, despite its appearance, never goes below the x-axis into negative y-values; the graph of y = 2^x is always actually above the
x-axis, even if only by a vanishingly-small amount.
So why does it look like it is right on the axis? Remember what negative exponents do: they tell you to flip the base to the other side of the fraction line. So if x = –4, the exponential function
above would give us 2^–4, which is 2^4 = 16 and then flipped underneath to be 1/16, which is fairly small. By nature of exponentials, every time we go back (to the left) by 1 on the x-axis, the
line is only half as high above the x-axis as it had been for the previous x-value. That is, while y = 1/16 for x = –4, the line will be only half as high, at y = 1/32, for x = –5. So, while
the line never actually touches or crosses the x-axis, it sure gets darned close! This is why, practically speaking, the left-hand side of a basic exponential tends to be drawn right along the axis.
If you zoom in close enough on the graph, you will eventually be able to see that the graph is really above the x-axis, but it's close enough to make no difference, at least as far as graphing is
If you are using TABLE or some similar feature of your graphing calculator to find plot points for your graph, you should be aware that your calculator will return a y-value of "0" for
strongly-negative x-values. Your calculator can carry only so many decimal places, and eventually it just gives up and says "Hey, zero is close enough":
But you should not forget that this is just a mark of the limitations of the technology. Like I frequently tell my students: "Student smart; calculator stupid". You need to remember that, no matter
what the calculator says, the graph is still above the x-axis; the y-values are still positive, though very, very, very small.
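The same "zero is close enough" behaviour shows up in any floating-point system, not just graphing calculators; a quick sketch in plain Python (double precision, so it holds out far longer than a calculator before finally underflowing):

```python
for x in (-4, -10, -50, -1000, -1100):
    print(x, 2.0 ** x)
# -4     0.0625
# -10    0.0009765625
# -50    ~8.88e-16   (tiny, but still strictly positive)
# -1000  ~9.33e-302
# -1100  0.0         (double-precision underflow: the machine's "close enough to zero")
```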
Let's look again at the graph of y = 2^x:
You can see that, on the right-hand side of the x-axis, the graph shoots up through the roof. This is again because of the doubling behavior of the exponential. Once the functions starts visibly
growing, it keeps on doubling, so it gets very large, very fast.
You will not generally be plotting many points on the left-hand side of the graph, because the y-values get so close to zero as to make the plot-points indistinguishable from the x-axis. And you will
not generally be plotting many point on the right-hand side of the graph, because the y-values get way too big. This is why I've gone on at length (above) about the general shape and behavior of an
exponential: You will need this knowledge to help you with the graphing, so make sure you have a fairly good grasp of it.
Cite this article as: Stapel, Elizabeth. "Graphing Exponential Functions: Intro." Purplemath. Available from
http://www.purplemath.com/modules/graphexp.htm. Accessed | {"url":"http://www.purplemaths.com/modules/graphexp.htm","timestamp":"2014-04-19T22:26:21Z","content_type":null,"content_length":"32552","record_id":"<urn:uuid:074a06da-7d0c-44f9-8ca9-6441788fb8e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
probability in combination
December 2nd 2012, 08:57 AM #1
Dec 2012
Albany, Oregon
probability in combination
I am writing a how-to style hair book and wish to brag about how many styles the book will help the readers create. I have a spin wheel inside the book. In case I cannot upload a photo, I will describe
it. 4 wheels stacked up on each other with an arrow on top. Each wheel spins separately. The largest has 36 different braid types. The next has 12 braid shapes. The next has 9 accents to be used.
And the final has 2 ways to do the hair...up or down.
[I could pretend this is something else and say I have 36 differently shaped objects and I want to change each one to 12 possible colors also each of those possibilities in 9 different sizes
while factoring in 2 different possible materials (wood and plastic)]
Can some willing brilliant person do this for me and tell me how many possible styles my 'Braider Creater' would create?
December 2nd 2012, 09:34 AM #2
Dec 2012
Re: probability in combination
36 x 12 x 9 x 2 = 7,776 different combinations
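The reply applies the multiplication principle for independent choices. A short sketch (wheel sizes taken from the question) confirms the count both by multiplying and by brute-force enumeration:

```python
import math
from itertools import product

braids, shapes, accents, positions = 36, 12, 9, 2

# Multiplication principle: independent choices multiply.
total = math.prod([braids, shapes, accents, positions])
print(total)  # 7776

# Sanity check: enumerate every (braid, shape, accent, up/down) combination.
combos = list(product(range(braids), range(shapes), range(accents), range(positions)))
assert len(combos) == total
```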
note Anonymous Monk
You write: If you take a definition from a random hit for "modulo mathematical definition" taken from the WWW, for instance, you'll find the definition:

Two numbers a and b are said to be equal or congruent modulo N iff N|(a-b), i.e. iff their difference is exactly divisible by N. Usually (and on this page) a, b are nonnegative and N a positive integer. We write a = b (mod N).

Note that the difference has to be divisible by N.

Yes, and the difference between no two different integers is divisible by zero. Therefore all integers are in different congruence classes mod 0, which is another way of saying that x mod 0 = x, as the original poster claimed. Although of course it's a matter of definition whether (1) an integer is then congruent to itself mod 0 and (2) the mod "operator" is actually that closely related to the mod "arithmetic" that it returns the name of the congruence class under arithmetic mod that number. % is probably better called "remainder" than mod, anyhow, and again it makes sense for x % 0 to be x when you talk about remainders. No matter how often you subtract zero, x remains... (but that's the same argument others have made).
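The remainder-flavored convention is easy to state as code. Python's built-in % raises ZeroDivisionError for a zero modulus, so this hypothetical wrapper (mine, not a standard function) adds the convention argued above:

```python
def remainder(x, n):
    """x mod n, extended by the convention that x mod 0 = x.

    Rationale from the discussion: no matter how often you subtract
    zero from x, x remains -- so the "remainder" after dividing by 0 is x.
    """
    return x if n == 0 else x % n

print(remainder(17, 5))  # 2
print(remainder(17, 0))  # 17
```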
SafeCopy extends the parsing and serialization capabilities of Data.Serialize to include nested version control. Nested version control means that you can change the definition and binary format of a
type nested deep within other types without problems.
The packages safecopy and acid-state go hand-in-hand. You’ll never lose your data with acid-state but any changes in your data schema can easily make it unreadable until those schema changes are
reverted. SafeCopy picks up the ball from here and delivers it securely over the goal line.
Let’s consider a simple example where we migrate from one type to another.
-- We thought Int would be enough but doesn't seem to be the case.
-- To be safe, let's use Integers instead.
data OldType = OldType Int deriving (Show)
data NewType = NewType Integer deriving (Show)
-- Notice that putCopy/getCopy are pretty much identical in both instances.
instance SafeCopy OldType where
    putCopy (OldType n) = contain $ safePut n
    getCopy = contain $ OldType <$> safeGet

-- A key feature here is that we don't mention the type of the previous version.
instance SafeCopy NewType where
    version = 2
    kind = extension
    putCopy (NewType n) = contain $ safePut n
    getCopy = contain $ NewType <$> safeGet

instance Migrate NewType where
    type MigrateFrom NewType = OldType
    migrate (OldType n) = NewType (fromIntegral n)
-- note: b should be (Either String b)
-- keeping just b because it is easier to read
safeCoerce :: (SafeCopy a, SafeCopy b) => a -> b
safeCoerce a = let serialized = runPut (safePut a)
               in runGet safeGet serialized
The above code allows us to parse old serialization in the following way:
> safeCoerce (OldType 1337) :: NewType
NewType 1337
> safeCoerce (NewType 1337) :: NewType
NewType 1337
Migrations are not limited to a single step. You can build up long chains of migrations and SafeCopy will dutifully do the migrations for you.
data X1 = X1 Word8 deriving (Show)
data X2 = X2 Word16 deriving (Show)
data X3 = X3 Word32 deriving (Show)
instance SafeCopy X1 where
    putCopy (X1 n) = contain (safePut n); getCopy = contain $ X1 <$> safeGet

instance SafeCopy X2 where
    version = 2
    kind = extension
    putCopy (X2 n) = contain (safePut n); getCopy = contain $ X2 <$> safeGet

instance SafeCopy X3 where
    version = 3
    kind = extension
    putCopy (X3 n) = contain (safePut n); getCopy = contain $ X3 <$> safeGet

instance Migrate X2 where
    type MigrateFrom X2 = X1
    migrate (X1 n) = X2 (fromIntegral n)

instance Migrate X3 where
    type MigrateFrom X3 = X2
    migrate (X2 n) = X3 (fromIntegral n)
We now have a chain of extensions: X1 -> X2 -> X3.
> safeCoerce (X1 42) :: X2
X2 42
> safeCoerce (X1 42) :: X3
X3 42
> safeCoerce (X2 42) :: X3
X3 42
> safeCoerce (X3 42) :: X3
X3 42
Types can only extend a single type but each type can have multiple extensions. Consider the following code:
data OldType = OldType Int deriving (Show)
data LeftBranch = LeftBranch String deriving (Show)
data RightBranch = RightBranch Integer deriving (Show)
instance SafeCopy OldType where
    putCopy (OldType n) = contain $ safePut n
    getCopy = contain $ OldType <$> safeGet

instance SafeCopy LeftBranch where
    version = 2
    kind = extension
    putCopy (LeftBranch str) = contain $ safePut str
    getCopy = contain $ LeftBranch <$> safeGet

-- Notice how both LeftBranch and RightBranch have the same version.
-- This would result in a run-time error if we marked either one as
-- the extension of the other.
instance SafeCopy RightBranch where
    version = 2
    kind = extension
    putCopy (RightBranch n) = contain $ safePut n
    getCopy = contain $ RightBranch <$> safeGet

instance Migrate LeftBranch where
    type MigrateFrom LeftBranch = OldType
    migrate (OldType n) = LeftBranch (show n)

instance Migrate RightBranch where
    type MigrateFrom RightBranch = OldType
    migrate (OldType n) = RightBranch (fromIntegral n)
The following coercions are possible:
> safeCoerce (OldType 5) :: LeftBranch
LeftBranch "5"
> safeCoerce (OldType 5) :: RightBranch
RightBranch 5
"My mathematics tools help me make life better for my community, they help me make wise choices as a consumer and they give me a valuable skill that all employers want."
Read More
"While education in any field is a worthwhile pursuit, an education in mathematics is one that is used as a basis for many academic career choices."
Read More
"I thoroughly enjoy the career I’ve chosen, and I have no question that I wouldn’t be here if I had not started my training with a degree in mathematics. The analytical problem-solving skills one
develops working through a mathematics curriculum are highly valuable and transferable to any future aspiration."
Read More
"Three points . . . first, mathematics is everywhere . . . from steam irons to nuclear power plants to elevators. Second, mathematics is hard work. Third, mathematicians are normal people. So, if you
like numbers, computers, and all that stuff, go for it! It will be worth the effort."
Read More
"With a bachelor’s in mathematics you can always build upon that knowledge in whatever field you may pursue!"
Read More
Go4Expert - algorithm dealing with GCD_EuclidRec and GCD_Simple
dodo rawash 2Oct2010 19:34
algorithm dealing with GCD_EuclidRec and GCD_Simple
I am a student who is taking a computer science course in college and have been assigned to write an algorithm for a C++ program. I have no previous programming experience and I am really behind in this class. It seems fairly simple and anyone on here could probably do it with their eyes closed. I have to write an algorithm dealing with GCD_EuclidRec and GCD_Simple (see below) but do not know where to go from there. As you can see it's basic fundamentals of C++ but I have no clue what I am doing, any help would be greatly appreciated.
Write the following program and run it.
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int GCD_Euclid(int x, int y);
int GCD_EuclidRec(int x, int y);
int GCD_Simple(int x, int y);
bool isPrime(int x);
bool isprime_Rec(int x, int z = 2);

using namespace std;

int main()
{
    srand((unsigned) time(NULL));
    int x, y;
    for (int rep = 1; rep <= 5; rep++) {
        x = rand() % 100;
        y = rand() % 100;
        cout << "GCD of " << x << " and " << y << " is " << GCD_Euclid(x, y) << endl;
        cout << "GCD of " << x << " and " << y << " is " << GCD_EuclidRec(x, y) << endl;
        cout << "GCD of " << x << " and " << y << " is " << GCD_Simple(x, y) << endl;
        if (isPrime(x))
            cout << "the number " << x << " is prime" << endl;
        else
            cout << "the number " << x << " isn't prime" << endl;
        if (isprime_Rec(x))
            cout << "the number " << x << " is prime" << endl;
        else
            cout << "the number " << x << " isn't prime" << endl;
        cout << endl;
    }
    return 0;
}

// Iterative Euclidean algorithm: repeatedly replace (x, y) with (y, x % y).
int GCD_Euclid(int x, int y)
{
    while (y != 0) {
        int r = x % y;
        x = y;
        y = r;
    }
    return x;
}

// Placeholder stubs to be implemented for the assignment:
int GCD_EuclidRec(int x, int y)
{
    return rand();
}

int GCD_Simple(int x, int y)
{
    return rand();
}

bool isPrime(int x)
{
    return rand();
}

bool isprime_Rec(int x, int z)
{
    return rand();
}
Implement the two functions GCD_EuclidRec and GCD_Simple.
Run the program and obtain the output.
Implement the two functions isPrime and isprime_Rec.
Run the program and obtain the output.
Tasks (what to hand in):
Hand in the functions GCD_EuclidRec and GCD_Simple.
Hand in the functions isPrime and isprime_Rec.
Hand in the output of step 3 and 5.
How many steps are needed to find the GCD using the simple algorithm if the value of the first number is q and the second number is d?
If at least one of the two numbers is prime, what is the GCD of the two numbers? Why?
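Not the course's official solution, but the two ideas the stubs suggest can be sketched in a few lines (Python here, names mine): Euclid's algorithm written recursively, and a "simple" countdown search whose step count depends directly on the input sizes q and d asked about above:

```python
def gcd_euclid_rec(x, y):
    # Recursive Euclid: gcd(x, 0) = x, otherwise gcd(y, x mod y).
    return x if y == 0 else gcd_euclid_rec(y, x % y)

def gcd_simple(x, y):
    # Naive search: try every candidate from min(x, y) down to 2.
    # Worst case this takes about min(q, d) steps for inputs q and d.
    for d in range(min(x, y), 1, -1):
        if x % d == 0 and y % d == 0:
            return d
    return 1

print(gcd_euclid_rec(48, 36))  # 12
print(gcd_simple(48, 36))      # 12
```

The contrast between the two is the point of the step-count question: the countdown search is linear in the smaller input, while Euclid's method shrinks the operands much faster.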
Richard Brent
(Emeritus) Professor Richard Brent
Students and supervisors
Software (fast arithmetic in GF(2)[x], random number generators etc)
Primitive trinomial search (search for degree 57885161 was started on 9 February 2013 and finished on 13 May 2013)
Algorithms for Minimization without Derivatives (reprinted by Dover, January 2002)
Postal Address
Centre for Mathematics and its Applications, Mathematical Sciences Institute
Australian National University, Canberra, ACT 0200, Australia.
(61-2) 6125 3873
(61-2) 6125 5549
Help! A remainder is chasing me - Page 3 - New Logic/Math Puzzles
In the spirit of ATL, I present to you my method, in C#, for calculating the answer by brute force, without taking into account LCM's. I'm curious how much smaller this could be done.
int GetNumber() {
int i=1, n=1; bool r=false;
for (;!r;){if(i%n==(n-1)){n=n%11+1;r=n==11;}else{i++;n=1;}}return i;}
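For comparison, the search can be collapsed to arithmetic. If I'm reading the loop right, it finds the smallest i with i mod n = n - 1 for every n from 1 to 10, i.e. i is one less than a common multiple of 1..10; this sketch (variable names mine) computes that directly:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

L = reduce(lcm, range(1, 11))  # lcm(1..10) = 2520
answer = L - 1
print(answer)  # 2519

# Same condition the C# loop brute-forces:
assert all(answer % n == n - 1 for n in range(1, 11))
```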
Edited by Duh Puck, 15 February 2008 - 06:51 PM.
Course Equivalency Tool (Montana University System Common Course Changes)
PHSX 103IN THE PHYSICS OF HOW THINGS WORK
PHSX 200 RESEARCH PROGRAMS IN PHYSICS
PHSX 201IN PHYSICS BY INQUIRY
PHSX 205 COLLEGE PHYSICS I
PHSX 207 COLLEGE PHYSICS II
PHSX 220 PHYSICS I
PHSX 222 PHYSICS II
PHSX 224 PHYSICS III
PHSX 240 HONORS GENERAL AND MODERN PHYSICS I
PHSX 242 HONORS GENERAL AND MODERN PHYSICS II
PHSX 253 PHYSICS OF PHOTOGRAPHY
PHSX 261 LABORATORY ELECTRONICS I
PHSX 262 LABORATORY ELECTRONICS II
PHSX 290R UNDERGRADUATE RESEARCH
PHSX 291 SPECIAL TOPICS
PHSX 292 INDEPENDENT STUDY
PHSX 301 INTRODUCTION TO THEORETICAL PHYSICS
PHSX 305RN THE ART AND SCIENCE OF HOLOGRAPHY
PHSX 320 CLASSICAL MECHANICS
PHSX 331 METHODS OF COMPUTATIONAL PHYSICS
PHSX 343 MODERN PHYSICS
PHSX 401 PHYSICS BY INQUIRY I
PHSX 402 PHYSICS BY INQUIRY II
PHSX 403 PHYSICS BY INQUIRY III
PHSX 405 SPECIAL RELATIVITY ONLINE
PHSX 423 ELECTRICITY AND MAGNETISM I
PHSX 425 ELECTRICITY AND MAGNETISM II
PHSX 427 ADVANCED OPTICS
PHSX 435 ASTROPHYSICS
PHSX 437 LASER APPLICATIONS
PHSX 441 SOLID STATE PHYSICS
PHSX 442 NOVEL MATERIALS FOR PHYSICS AND ENGINEERING
PHSX 444 ADVANCED PHYSICAL LAB
PHSX 451 ELEMENTARY PARTICLE PHYSICS
PHSX 461 QUANTUM MECHANICS I
PHSX 462 QUANTUM MECHANICS II
PHSX 490R UNDERGRADUATE RESEARCH
PHSX 491 SPECIAL TOPICS
PHSX 492 INDEPENDENT STUDY
PHSX 494 SEMINAR
PHSX 499 SENIOR CAPSTONE SEMINAR
PHSX 501 ADVANCED CLASSICAL MECHANICS
PHSX 506 QUANTUM MECHANICS I
PHSX 507 QUANTUM MECHANICS II
PHSX 511 ASTRONOMY FOR TEACHERS
PHSX 512 GENERAL RELATIVITY ONLINE
PHSX 513 QUANTUM MECHANICS ONLINE
PHSX 515 ADVANCED TOPICS IN PHYSICS
PHSX 516 EXPERIMENTAL PHYSICS
PHSX 519 ELECTROMAGNETIC THEORY I
PHSX 520 ELECTROMAGNETIC THEORY II
PHSX 523 GENERAL RELATIVITY I
PHSX 524 GENERAL RELATIVITY II
PHSX 531 NONLINEAR OPTICS & LASER SPECTROSCOPY
PHSX 535 STATISTICAL MECHANICS
PHSX 544 CONDENSED MATTER PHYSICS I
PHSX 545 CONDENSED MATTER PHYSICS II
PHSX 555 QUANTUM FIELD THEORY
PHSX 560 ASTROPHYSICS
PHSX 561 MODERN PHYSICS FOR TEACHERS: PARTICLES AND WAVES
PHSX 565 ASTROPHYSICAL PLASMA PHYSICS
PHSX 566 MATHEMATICAL PHYSICS I
PHSX 567 MATHEMATICAL PHYSICS II
PHSX 582 ASTROBIOLOGY FOR TEACHERS
PHSX 583 THE INVISIBLE UNIVERSE ONLINE: THE SEARCH FOR ASTRONOMICAL ORIGINS
PHSX 589 GRADUATE CONSULTATION
PHSX 590 MASTER'S THESIS
PHSX 591 SPECIAL TOPICS
PHSX 592 INDEPENDENT STUDY
PHSX 594 SEMINAR
PHSX 689 DOCTORAL READING & RESEARCH
PHSX 690 DOCTORAL THESIS
PHSX 103IN THE PHYSICS OF HOW THINGS WORK
F 3 cr. LEC 3
PREREQUISITE: High School Algebra.
-- A practical approach to a broad array of fundamental topics in physics for non-science majors taught by analyzing things that are used and observed in everyday life. Classroom demonstrations will
provide the opportunity for in-class analysis, discussions, and hand-on activities. Physics principles will be used to scrutinize issues such as energy and recycling from economic and environmental
perspectives. The latest technology in transportation, electronics, and energy production will be analyzed. The connection between basic research in physics and modern technology will be examined.
Students will not receive credit if they have passed PHSX 205, PHSX 220, or PHSX 240.
PHSX 200 RESEARCH PROGRAMS IN PHYSICS
F 1 cr. LEC 1
-- An introduction to some of the exciting ideas, developments, problems, and experiments of modern day physics.
PHSX 201IN PHYSICS BY INQUIRY
F,S 3 cr. LAB 3
-- An in depth exploration of basic physics principles. Scientific model building and proportional reasoning skills will be developed in the context of properties of matter, observational astronomy,
and DC electric circuits. For pre-service elementary teachers.
PHSX 205 COLLEGE PHYSICS I
F,S,Su 4 cr. LEC 3 LAB 1
PREREQUISITE: High school trigonometry or M 151Q.
-- First semester of sequence. Topics include kinematics and dynamics of linear and rotational motion; work and energy; impulse and momentum; and fluids. Students will not receive credit if they have
passed PHSX 220 or PHSX 240.
PHSX 207 COLLEGE PHYSICS II
F,S,Su 4 cr. LEC 3 LAB 1
PREREQUISITE: PHSX 205 or PHSX 220.
-- Second semester of sequence. Topics include simple harmonic motion; electric forces and fields; dc electric circuits; magnetic forces and fields; and magnetic induction and motors. Students will
not receive credit if they have passed PHSX 222 or PHSX 242.
PHSX 220 PHYSICS I
F,S 4 cr. LEC 3 LAB 1
COREQUISITE: M 171Q or M 181Q
-- First semester of a three-semester sequence primarily for engineering and physical science students. Covers topics in mechanics (such as motion, Newton's laws, conservation laws, work, energy,
systems of particles, and rotational motion) and in mechanical waves (such as oscillations, wave motion, sound, and superposition).
PHSX 222 PHYSICS II
F,S 4 cr. LEC 3 LAB 1
PREREQUISITE: PHSX 220 or PHSX 240; M 171Q or M 181Q
COREQUISITE: M 172Q or M 182Q
-- Covers topics in electricity and magnetism (such as Coulomb's law, Gauss' law, electric fields, electric potential, dc circuits, magnetic fields, Faraday's law, ac circuits, and Maxwell's
equations) and optics (such as light, geometrical optics, and physical optics).
PHSX 224 PHYSICS III
F 4 cr. LEC 3 LAB 1
PREREQUISITE: PHSX 222 or PHSX 242; M 172Q or M 182Q
-- Covers topics in thermodynamics (such as temperature, heat, laws of thermodynamics, and the kinetic theory of gases) and modern physics (such as relativity; models of the atom; quantum mechanics;
and atomic, molecular, solid state, nuclear, and particle physics).
PHSX 240 HONORS GENERAL AND MODERN PHYSICS I
F 4 cr. LEC 3 LAB 1
COREQUISITE: M 171Q or M 181Q.
-- The honors section of PHSX 220. The concepts are discussed in more depth and the range of applications is greater.
PHSX 242 HONORS GENERAL AND MODERN PHYSICS II
S 4 cr. LEC 3 LAB 1
PREREQUISITE: PHSX 220 or PHSX 240; M 171Q or M 181Q.
COREQUISITE: M 172Q or M 182Q.
-- The honors section of PHSX 222. The concepts are discussed in more depth and the range of applications is greater.
PHSX 253 PHYSICS OF PHOTOGRAPHY
F 2 cr. LEC 2
PREREQUISITE: High school algebra.
-- Improvement of photographic skills through an understanding of the basic principles of photography. The nature of light and color and the physical principles involved in the operation of a camera
will be presented. Unusual effects and recent developments will be discussed. Numerous demonstrations, photographs, and slides will be used to illustrate the principles.
PHSX 261 LABORATORY ELECTRONICS I
F 2 cr. LEC 1 LAB 1
PREREQUISITE: PHSX 222 or PHSX 242.
-- Laboratory electronic measurements and analysis, and design of basic linear circuits.
PHSX 262 LABORATORY ELECTRONICS II
S 2 cr. LEC 1 LAB 1
PREREQUISITE: PHSX 261.
-- Analysis and design of basic digital circuits and advanced laboratory electronic measurements.
PHSX 290R UNDERGRADUATE RESEARCH
F,S,Su 1 - 6 cr. RCT
PREREQUISITE: Consent of instructor and approval of department head.
-- Directed undergraduate research. Course will address responsible conduct of research.
PHSX 291 SPECIAL TOPICS
On Demand 1 - 4 cr. Maximum 12 cr.
PREREQUISITE: None required but some may be determined necessary by each offering department.
-- Courses not required in any curriculum for which there is a particular one time need, or given on a trial basis to determine acceptability and demand before requesting a regular course number.
PHSX 292 INDEPENDENT STUDY
On Demand 1-3 cr. IND Maximum 6 cr.
PREREQUISITE: Consent of instructor and approval of department head.
-- Directed study on an individual basis.
PHSX 301 INTRODUCTION TO THEORETICAL PHYSICS
S 3 cr. LEC 3
PREREQUISITE: M 273Q or M 283Q; PHSX 222 or PHSX 242.
COREQUISITE: M 274 or M 284.
-- Mathematical methods essential to the practice of theoretical physics, such as matrices, vector calculus, differential equations, complex variables, and Fourier series, with applications to
examples from mechanics and electromagnetism.
PHSX 305RN ART AND SCIENCE OF HOLOGRAPHY
S 3 cr. LEC 2 LAB 1
PREREQUISITE: Junior standing. M 151Q or equivalent M Placement Test.
-- Beginner's course on creating holograms. Pictorial and geometric interpretations of lasers, interference, coherence, film, and holography enable students with limited science and M backgrounds to
create their own holographic masterpieces. Lab techniques and documenting the creative process are emphasized.
PHSX 320 CLASSICAL MECHANICS
F 4 cr. LEC 4
PREREQUISITE: PHSX 224, PHSX 301.
-- Principles of Newtonian, Lagrangian, and Hamiltonian mechanics including single particle motion, systems of particles, rigid body motion, moving coordinate systems, and small oscillations.
PHSX 331 METHODS OF COMPUTATIONAL PHYSICS
F 1 cr. LEC 1
PREREQUISITE: PHSX 301.
-- Introduction to the use of computational methods in physics. Emphasis will be placed on common methods of casting problems into forms amenable to numerical solution and for displaying numerical
PHSX 343 MODERN PHYSICS
F 3 cr. LEC 3
PREREQUISITE: PHSX 224, PHSX 301, and M 284 or M 274.
-- Waves in classical physics and quantum mechanics: complex representation, amplitude mechanics, and interference; Special relativity: postulates, Lorentz transformations, applications in nuclear
and particle physics; Quantum mechanics: interpretation of key experiments, Schrodinger equation, particles in potentials, spin, the atom; Introduction to nuclear and particle physics.
PHSX 401 PHYSICS BY INQUIRY I
Su 3 cr. LAB 3.
PREREQUISITE: Teacher Certification.
-- An in-depth and hands-on exploration of basic physics principles. Scientific model building and proportional reasoning skills will be developed in the context of dc electronics, one and two
dimensional kinematics, and dynamics. For middle school and high school science teachers.
PHSX 402 PHYSICS BY INQUIRY II
Su 3 cr. LAB 3.
PREREQUISITE: PHSX 401.
-- An in-depth and hands-on exploration of basic physics principles. Scientific model building and proportional reasoning skills will be developed in the context of light, color, geometrical optics,
heat, and temperature. For middle school and high school teachers.
PHSX 403 PHYSICS BY INQUIRY III
Su 3 cr. LAB 3
PREREQUISITE: Science Teacher Certification.
COREQUISITE: PHSX 401.
--PHSX 403 is a continuation of the PHSX 401 experience, but it may also be taken concurrently with PHSX 401. The course will begin with a careful investigation of geometrical optics, leading to an
understanding of pinhole cameras, lenses, and prisms. This will be followed by an exploration of magnetic interactions and magnetic materials.
PHSX 405 SPECIAL RELATIVITY ONLINE
S alternate years, to be offered odd years 3 cr. RCT 3
PREREQUISITE: PHSX 222, M 172 or M 182, Bachelor's degree, and one year teaching experience.
-- This online course addresses the question: In what ways does nature behave differently at high relative speeds than at low speeds? Designed for practicing high school physics teachers. Assignments
and discussions use electronic computer conferencing and interactive visual software.
PHSX 423 ELECTRICITY AND MAGNETISM I
S 3 cr. LEC 3
PREREQUISITE: PHSX 343 or M 348.
-- Electrostatic fields, dielectric materials, magnetic fields, magnetic materials, and Maxwell's equations.
PHSX 425 ELECTRICITY AND MAGNETISM II
F 3 cr. LEC 3
PREREQUISITE: PHSX 423.
-- Propagation of electromagnetic waves, radiation, and general wave phenomena.
PHSX 427 ADVANCED OPTICS
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 224 and M 274 or M 284.
-- Emphasis is on new developments in optics triggered by the laser. Provides a good foundation in wave optics, nonlinear optics, integrated optics, and spectroscopy.
PHSX 435 ASTROPHYSICS
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 343.
COREQUISITE: PHSX 423.
-- A survey covering basic problems in modern astrophysics such as stellar structure and evolution, solar physics, compact objects, quasars, and cosmology.
PHSX 437 LASER APPLICATIONS
S alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 222.
-- A survey of laser types and properties and applications for scientists and engineers who wish to use lasers in research or technology. Many demonstrations will be used to illustrate the
PHSX 441 SOLID STATE PHYSICS
F alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 224.
-- A treatment of the classification and electronic structure of solids. Properties of conductors, superconductors, insulators, and semiconductors will be discussed. This course is strongly
recommended for students intending to study physics in graduate school.
PHSX 442 NOVEL MATERIALS FOR PHYSICS & ENGINEERING
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: Knowledge of introductory solid state physics; PHSX 441 or consent of instructor.
-- Provides basic physical knowledge of advanced natural/artificial materials; ferroelectrics, superconductors, nanotubes, super lattices, photonics materials, materials with giant magnetoresistance
and negative susceptibilities, molecular magnets, and biomaterials.
PHSX 444 ADVANCED PHYSICS LAB
F,S 4 cr. LAB 4 Maximum 8 cr
PREREQUISITE: PHSX 343.
COREQUISITE: PHSX 461.
-- Introduction to methods, instrumentation, and data acquisition techniques used in modern physics research. Different experiments are offered in the two semesters. For students desiring a strong
experimental exposure, taking both courses is recommended. Experiments in the fall semester are typically in the optical area and include interferometers, fiber optics, spectral measurement,
polarization, and laser optics. Experiments in spring semester are typically in solid state physics and particle spectroscopy.
S 3 cr. LEC 3
PREREQUISITE: PHSX 320.
-- Statistical physics and thermodynamics and their applications to physical phenomena. This course is strongly recommended for students intending to study physics in graduate school.
PHSX 451 ELEMENTARY PARTICLE PHYSICS
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 343
-- A survey of elementary particle physics, beginning with an historical viewpoint and leading up to today's remarkably successful "Standard Model" of quarks, leptons, and gauge bosons.
PHSX 461 QUANTUM MECHANICS I
F 3 cr. LEC 3
PREREQUISITE: PHSX 320.
-- Operators, eigenvalues, and correspondence with observables. Solutions to the Schrodinger equation: one dimensional problems, bound and unbound states, harmonic oscillator, and angular momentum.
PHSX 462 QUANTUM MECHANICS II
S 3 cr. LEC 3
PREREQUISITE: PHSX 461.
-- Three-dimensional problems, hydrogen atom, matrix mechanics, spin, perturbation theory, and applications to atomic, molecular, nuclear, and particle physics.
F,S,Su 1 - 3 cr. IND May be repeated. Max 6 cr.
PREREQUISITE: Junior standing and signed consent of instructor/ research advisor and academic advisor.
-- Directed undergraduate research/creative activity, which may culminate in a research paper, journal article, or undergraduate thesis. Course will address responsible conduct of research.
PHSX 491 SPECIAL TOPICS
On Demand 1 - 4 cr. Maximum 12 cr.
PREREQUISITE: Course prerequisites as determined for each offering.
-- Courses not required in any curriculum for which there is a particular one-time need, or given on a trial basis to determine acceptability and demand before requesting a regular course number.
PHSX 492 INDEPENDENT STUDY
On Demand 1 - 3 cr. IND Maximum 6 cr.
PREREQUISITE: Junior standing, consent of instructor and approval of department head.
-- Directed study on an individual basis.
PHSX 494 SEMINAR/WORKSHOP
On Demand 1 cr. SEM 1 Maximum 4 cr
PREREQUISITE: Junior standing and as determined for each offering.
-- Topics offered at the upper division level which are not covered in regular courses. Students participate in preparing and presenting discussion material.
PHSX 499 SENIOR CAPSTONE SEMINAR
S 1 cr SEM 1
PREREQUISITE: Senior standing, completion of a senior project, and 2 credits of PHSX 490R.
-- Senior capstone course. Participation in this course requires the completion of a senior project that integrates the student's knowledge and skills acquired during the undergraduate curriculum.
Students will be required to complete: i) an APS-style abstract, ii) an APS-style 10-minute oral presentation, iii) a poster session, and iv) a written research report, based on their research/
creative activity.
PHSX 501 ADVANCED CLASSICAL MECHANICS
F 3 cr. LEC 3
PREREQUISITE: PHSX 320.
-- Lagrangian and Hamiltonian dynamics. Small oscillations. Rigid-body motion. An introduction to continuum mechanics.
PHSX 506 QUANTUM MECHANICS I
F 3 cr. LEC 3
PREREQUISITE: PHSX 462.
-- Ket space and matrix representations. Quantum dynamics and invariance. Path integral methods. Rotations and angular momentum theory. Translation, reflection, and inversion symmetries. Conservation
principles and degeneracy.
PHSX 507 QUANTUM MECHANICS II
S 3 cr. LEC 3
PREREQUISITE: PHSX 506.
-- Time-independent and time-dependent perturbations. Identical particles and permutation symmetry. Scattering theory. Applications of quantum mechanics.
PHSX 511 ASTRONOMY FOR TEACHERS
F,S 3 cr. RCT 3
PREREQUISITE: PHSX 207 or PHSX 222 or PHSX 242, and secondary certification in teaching and two years of teaching experience.
-- This is an online, distance education course primarily intended for science educators. Topics include: the laws of gravity and orbital dynamics, a survey of the solar system, stars and stellar
evolution, galaxies, and Big Bang cosmology.
PHSX 512 GENERAL RELATIVITY ONLINE
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 222 or PHSX 242; M 182, PHSX 405 and Bachelor's degree and one year teaching experience.
-- This online course addresses the theory of general relativity, which underlies our understanding of gravity and the large-scale structure of the cosmos. Designed for practicing high school physics
teachers. Assignments and discussions use electronic computer conferencing and simulation software.
PHSX 513 QUANTUM MECHANICS ONLINE
F alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 222 or PHSX 242; M 182, EDSD 366 and Bachelor's degree and one year teaching experience.
-- This online course addresses the key ideas behind quantum mechanical observations and devices, including the fundamental behavior of electrons and photons. Designed for practicing high school
physics teachers. Assignments and discussions use electronic computer conferencing and simulation software.
S 3 cr. LEC 3
PREREQUISITE: EDSD 366 or EDCI 325, professional teaching certification, Bachelor's degree and at least one year K-12 teaching experience, and a background knowledge of astronomy at the level of ASTR
110 (or its equivalent).
-- Establishing a Virtual Presence in the Solar System has been developed and tested as an Internet-delivered course for off-campus students. Its audience consists of practicing elementary and
secondary teachers who have experience in teaching general science but have little, if any, formal course work in astronomy. Its goal is to help graduate-level teachers learn solar system astronomy
concepts to integrate the new National Science Education Standards and NASA resources into existing instructional strategies. Course participants learn advanced solar system concepts, utilize
WWW-resources, communicate with research scientists using the Internet, analyze digital images using image processing software, and organize materials for use in K-12 classroom environments.
PHSX 515 ADVANCED TOPICS IN PHYSICS
On Demand 3 cr. LEC 3 Maximum 6 cr.
PREREQUISITE: Graduate standing.
-- Topics in astrophysics, condensed matter physics, optics, mathematical physics, or particle physics are presented as needed to supplement the curriculum.
PHSX 516 EXPERIMENTAL PHYSICS
F,S 3 cr. LAB 3 Maximum 6 cr.
PREREQUISITE: PHSX 261, PHSX 423, and PHSX 461.
-- Experiments chosen from laser optics and atomic, solid-state, and nuclear physics are carried out in depth to introduce the graduate student to methods, instrumentation, and data acquisition
techniques useful for experimental thesis projects.
PHSX 519 ELECTROMAGNETIC THEORY I
S 3 cr. LEC 3
PREREQUISITE: PHSX 425.
-- Electro- and magnetostatics, conservation laws and covariance of Maxwell's equations, and dynamics of relativistic particles and fields.
PHSX 520 ELECTROMAGNETIC THEORY II
F 3 cr. LEC 3
PREREQUISITE: PHSX 519.
-- Radiation by moving charges. Electromagnetic waves in condensed matter and plasma.
PHSX 523 GENERAL RELATIVITY I
F alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 519.
-- Tensor calculus, differential geometry, and an introduction to Einstein's theory of gravity. The Schwarzschild solution and black hole physics.
PHSX 524 GENERAL RELATIVITY II
S alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 523.
-- Advanced topics in gravitation theory such as singularities, cosmological models, and gravitational waves.
PHSX 531 NONLINEAR OPTICS & LASER SPECTROSCOPY
F alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 507.
-- Two-level atoms in laser fields and applications to nonlinear optics such as photon echoes, second harmonic generation, and stimulated Raman scattering. Atomic and molecular energy level
structure, linear and nonlinear spectroscopy, and applications to gaseous and solid state laser materials.
S 3 cr. LEC 3
PREREQUISITE: PHSX 446.
-- Basic concepts of equilibrium statistical mechanics, with application to classical and quantum systems, will be presented, as well as theories of phase transitions in fluid, magnetic, and other systems.
PHSX 544 CONDENSED MATTER PHYSICS I
F alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 446, PHSX 507.
-- Crystal structure and the reciprocal lattice. Quantum theory of electrons and phonons.
PHSX 545 CONDENSED MATTER PHYSICS II
S alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 544.
-- Applications to the transport, optical, dielectric, and magnetic properties of metals, semiconductors, and insulators.
PHSX 555 QUANTUM FIELD THEORY
S alternate years, to be offered odd years 3 cr. LEC 3
PREREQUISITE: PHSX 507.
-- Techniques of canonical and path integral quantization of fields; renormalization theory. Quantum electrodynamics; gauge theories of the fundamental interactions.
PHSX 560 ASTROPHYSICS
F alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 425, PHSX 462, PHSX 446, and PHYS 435.
-- The purpose of this course is to prepare graduate students for thesis-level research in astrophysics, solar physics or related fields. Topics covered include: fluid mechanics, hydrodynamics,
plasma physics, radiation processes and stability of equilibrium states.
PHSX 561 MODERN PHYSICS FOR TEACHERS: PARTICLES AND WAVES
Su 3 cr. LAB 3
PREREQUISITE: Secondary teaching certificate; 2 years teaching experience. PHSX 224, PHSX 401, and PHSX 580 (Advanced Physics by Inquiry.)
-- Students in this capstone course will discuss, perform, and analyze several experiments that demonstrate the particle and wave behaviors of light and electrons. Students will develop methods and
models for teaching these concepts of modern physics to high school students.
F alternate years, to be offered odd years 3 cr. LEC 3
COREQUISITE: PHSX 520.
-- An introduction to the physics of fluids and plasma relevant to astrophysical plasmas such as the solar corona. Topics covered include: magnetostatics, one-fluid (MHD) and two-fluid approaches,
linear waves and instabilities, shocks, transonic flows and collisional effects.
PHSX 566 MATHEMATICAL PHYSICS I
F 3 cr. LEC 3
PREREQUISITE: M 349, M 472, PHSX 320.
-- Mathematical methods which find application in physics: differential equations, contour integration, special functions, integral transforms, boundary value problems, and Green's functions.
PHSX 567 MATHEMATICAL PHYSICS II
S alternate years, to be offered even years 3 cr. LEC 3
PREREQUISITE: PHSX 566.
-- Theory of computational techniques, and applications such as numerical integration, differential equations, Monte Carlo methods, and fast Fourier transforms.
PHSX 582 ASTROBIOLOGY FOR TEACHERS
F,S 3 cr. Online Lec 3
PREREQUISITE: ASTR 371, PHSX 511, or equivalent; PHSX 205, PHSX 220, PHSX 224, or equivalent; BIOB 375 or equivalent; EDSD 366 or equivalent; and Bachelor's degree and minimum of one year of
full-time teaching experience at the secondary level or above.
-- Astrobiology is the study of the origin, evolution, distribution, and destiny of life in the universe. It defines itself as an interdisciplinary science at the intersection of physics, astronomy,
biology, geology, and mathematics, to discover where and under what conditions life can arise and exist in the Universe. The course topics will cover the discovery of planetary systems around other
stars, the nature of habitable zones around distant stars, the existence of life in extreme environments. These concepts will serve as a foundation to study possible extraterrestrial ecosystems on
planets and moons like Mars and Europa.
F,S 3 cr. Online Lec 3
PREREQUISITE: ASTR 371, PHSX 511, or equivalent; PHSX 205, PHSX 220, PHSX 224, or equivalent; EDSD 366 or equivalent; and Bachelor's degree and minimum of one year of full-time teaching experience at the secondary level or above.
-- This course covers the long chain of events from the birth of the universe in the Big Bang, through the formation of galaxies, stars, and planets by focusing on the scientific questions,
technological challenges, and space missions pursuing the search for origins in alignment with the goals and emphasis of the National Science Education Standards.
F,S,Su 3 cr. TUT
PREREQUISITE: Master's standing and approval of the Dean of Graduate Studies.
-- This course may be used only by students who have completed all of their coursework (and thesis, if on a thesis plan) but who need additional faculty or staff time or help.
PHSX 590 MASTER'S THESIS
F,S,Su 1 - 10 cr. IND Maximum credits unlimited.
PREREQUISITE: Master's standing.
PHSX 591 SPECIAL TOPICS
On Demand 1 - 4 cr. Maximum 12 cr.
PREREQUISITE: Upper division courses and others as determined for each offering.
-- Courses not required in any curriculum for which there is a particular one time need, or given on a trial basis to determine acceptability and demand before requesting a regular course number.
PHSX 592 INDEPENDENT STUDY
On Demand 1 - 3 cr. IND Maximum 6 cr.
PREREQUISITE: Graduate standing, consent of instructor, approval of department head and Dean of Graduate Studies.
-- Directed research and study on an individual basis.
PHSX 594 SEMINAR
On Demand 1 cr. SEM Maximum 8 cr.
PREREQUISITE: Graduate standing or seniors by petition. Course prerequisites as determined for each offering.
-- Topics offered at the graduate level which are not covered in regular courses. Students participate in preparing and presenting discussion material.
PHSX 689 DOCTORAL READING & RESEARCH
On Demand 3 - 5 cr. IND Maximum 15 cr.
PREREQUISITE: Doctoral standing.
-- This course may be used by doctoral students who are reading research publications in the field in preparation for beginning doctoral thesis research.
PHSX 690 DOCTORAL THESIS
F,S,Su 1-10 cr. IND Maximum credits unlimited.
PREREQUISITE: Doctoral standing.
(__)^2= (csc x-1)(csc x+1) solve for the blank
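For the record, the product collapses via a Pythagorean identity:

```latex
(\csc x - 1)(\csc x + 1) = \csc^2 x - 1 = \cot^2 x
\quad\Longrightarrow\quad \text{the blank is } \cot x .
```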
Precision based floating-point
Mario F.
Well... I couldn't think of something else better to do with my time. So I decided to build a class that wraps around the concept of floating-point numbers with an emphasis on precision at the
expense of accuracy.
#include <string>

const int STRDOUBLE_MAX_PRECISION = 15;

class strdouble {
public:
    strdouble(): integer(0), frac(0), exp(0), prec(1) {}
    explicit strdouble(const std::string&);
    strdouble& operator=(const std::string&);
    strdouble& operator++();
    strdouble& operator--();
    strdouble operator++(int);
    strdouble operator--(int);
    friend bool operator==(const strdouble&, const strdouble&);
    friend bool operator<(const strdouble&, const strdouble&);
    friend bool operator>(const strdouble&, const strdouble&);
    double value() const;

private:
    double integer; /// integer portion of fractional number
    double frac;    /// normalized fractional portion of fractional number
    int exp;        /// base 10 exponent of fractional portion
    int prec;       /// overall precision (significant digits)
};
The basic idea is to receive a well-formatted string and decompose it into the integer and fractional parts. The fractional part is in fact an integral value too. exp is a base 10 exponent to be applied to it, and is expected to always be less than 0.
I was planning to also define a constructor accepting a double.
integer + frac * pow(10, exp) will reconstruct the floating-point number.
I have all of the above operator overloads already defined, plus some more. Other overloads would eventually be defined of course, namely the arithmetic operators. Also, some member-function replacements for the arithmetic operators would be supplied. These would differ in the sense that the user could specify the precision of the result through either truncation or rounding.
Anyways... I don't think this approach is the best. Each instance is too big, and the class will grow to become slow and clumsy. Every operation is subject to conversions...
The question is... do you think this has some use? Or is it best if perhaps I concentrate more on how doubles are stored in memory and work from there, in an attempt to create a precision-based floating-point type?
Mario F.
I take it, this is a useless class :)
Not useless, but overkill when you can just use a double data type to do the same thing with less code and less overhead.
Mario F.
My problem with that was having the class instantiate with a number that was probably not what the user expected.
double bar = 1 / 3;
CFixedDouble foo(bar, precision);
I don't think that example best proves the point you were going to make. The division will always happen before the double is initialized, and while you could force the double to some artificial
precision from the beginning it's okay to use double's full range; that's what it's there for. If bar were smaller in bytes if it had 0.3334 instead of 0.3 repeating, you'd have a stronger point
I'd think, but it's not.
Volume Visualization in Industrial Computational Fluid Dynamics
Computer Graphics & Geometry
A. Gudzovsky**, A. Aksenov**, S. Klimenko***
**Institute for Computer Aided Design (ICAD) of the Russian Academy of Sciences,
E-mail: FlowVision@glas.apc.org
***Institute of System Programming (ISP) of the Russian Academy of Sciences
E-mail: klimenko@mx.ihep.su
Abstract: Approaches to building an interactive system for volume visualization and analysis of results data in industrial computational fluid dynamics are discussed. The use of simplified laws governing image formation, in place of the laws of physical radiative transfer in the analysed domain, is proposed. A new numerical method for visualization of 3D scalar fields, convenient for application in computer-aided engineering systems, is advanced.
Key words: scientific visualization, computational fluid dynamics, refraction.
Advances in computers and numerical methods have led to an increase in the complexity of problems that can be solved by numerical simulation. At present one can speak of the formation of industrial computational fluid dynamics (ICFD): the branch of science and technology dealing with numerical simulation of flows in real-life technical installations. ICFD takes into account the complex shape of installations and the aggregate of fundamental physical processes. One of the major goals of ICFD is to test a technical installation ahead of its appearance in reality.
A considerable number of ICFD software packages are used for computer-aided engineering (CAE) of technical installations that interact with gas or fluid flows. The software is generally created for workstations and includes an advanced interface, tools for specifying geometry and other input data, and tools for analysis of results. Examples of industrial applications of CAE-ICFD systems include road vehicles, boilers of thermal power stations, sewage treatment, ventilated production areas (including cleanrooms), and wind loads on constructions.
A peculiarity of the present time is the advance of powerful and comparatively inexpensive personal computers (PCs). This allows CAE-ICFD systems to be implemented on the PC platform. The deepest gap between the capabilities of PCs and workstations is in the area of visualization, which makes the problem of developing efficient methods of 3D data visualization especially pressing.
It is obvious that no universal method exists for exhaustively representing all possible flows. Only a system of visualization tools with a wide spectrum of capabilities can answer a user's different questions about the processes in a real-life technical installation.
A visualization system for ICFD should include different methods. It must include conventional methods for imaging vector (e.g. velocity, vorticity) and scalar (e.g. temperature, pressure) fields: drawing of vectors at an ordered set of points (e.g. at the centres of the calculation cells or on any user-defined grid); trajectories of particles (strips) moving over the velocity field; imaging of scalar isosurfaces; and others.
But practice shows the need for new visualization tools suitable for analysis of the complex flows in real-life technical installations modeled by ICFD codes. Volume visualization is an effective but insufficiently developed analysis technique in ICFD. The objective of this paper is to discuss approaches to building an interactive system for volume visualization and analysis of results data in ICFD.
1. Basic principles
The visualization system may be produced for different purposes and accordingly based on different principles. Therefore it is important to declare from the beginning the foundations of our approach.
a) Volume visualization is a method of 3D-to-2D transformation
The essence of volume visualization is converting a 3D information field into a 2D image. On the one hand, compression of the information is useful for achieving the possibility of analysing the 3D data distribution as a whole. On the other hand, the compression of the information creates integral images suitable for quick qualitative comparison of various design options of the investigated technical installation.
b) The visualized volume contains gas with optical properties specified by the user
We assume that the considered domain contains a gas with variable optical properties. The 3D-to-2D transformation is governed by the laws of radiation transfer in the gas. In volume rendering the domain is projected onto the screen in such a way that the analysed function f is transformed into a distribution of light intensity on the view plane. This transformation uses some weight function given by the relation between f and the optical properties of the medium. The user should be able to analyse the 3D distribution of f by controlling the relations between the optical properties of the medium and the function f.
c) The volume visualization must be understandable
ICFD visualization has many approaches and methods in common with other areas of scientific visualization. At the same time there are some features of ICFD problems that are often not completely understood by specialists in visualization and computer graphics. It is then not a surprise that interesting methods for flow visualization are often developed by ICFD specialists [Levi90].
We underline that the tools have to present the analysed data in a form obvious to the user. The user should use visualization tools for analysing the functioning of the technical installation instead of deciphering the images. It means that the user must be able to explain any peculiarity of the image in terms of peculiarities of the fluid flow.
In many cases the difficulties in analysis of the numerical results are associated not with poor computer graphics, but with the lack of appropriate visual images. For instance, the difficulties of
visualization of tensor fields are caused by the absence of intuitively clear visual images for tensor characteristics of fluid flow.
d) The volume visualization must be interactive
We are interested in the visualization methods used at the analytical stage of user interaction with the CAE system. This stage assumes the use of fast methods for drawing the image on a display. At this stage of work the visualization and data analysis system has to minimise the time required for the user to understand the structure and major characteristic features of the flow. The system must be interactive to provide a continuous process of analysis. Practice shows that the time for building an image should be no longer than 5 - 10 seconds.
We will not discuss here the problems of the final stage of work: report visualization. At this stage a clear understanding of the flow structure already exists, and only the most beautiful presentation of a number of "show" illustrations is needed. It is evident that in this case the time required for making an image is not critical, while the attractiveness of the picture and its ability to make a strong impression are the most important.
2. Peculiarities of the initial data
To begin, let us formulate the features of ICFD numerical results that have to be represented on the display.
We already mentioned the first feature, namely the complex geometry of industrial objects. Besides, the geometry of the calculation domain for the majority of ICFD problems is characterised by scales that differ greatly in size. The ratio of the typical size of the whole domain to that of a small element defining the flow structure or characteristics may reach hundreds.
Scientific visualization for fundamental research, in contrast to visualization systems for applied research, deals with purified phenomena taking place in domains with the simplest geometry. Therefore various techniques, including exotic ones, are acceptable in visualization of such phenomena. Most specialists in the area of visualization [Wijk93, Bril94] use quite trivial flows (compared with flows typical for ICFD) as examples of application of the developed methods. The question of the applicability of the offered methods for solving real-life applied problems is still open. Practice shows that, as a rule, the most valuable thing in any idea is its realisation.
We proceed from the fact that the analysed scalar and vector fields are three-dimensional and linked to the complex geometry of a technical installation. Therefore the visualization system should include advanced tools for presentation of the complex geometry of the technical installation.
The second feature specific to ICFD is associated with the origin of the visualized data from numerical simulation. The data quantity and structure are determined by the ICFD method used, rather than by the requirements of the visualization method. For instance, the simulation may be produced with calculation grids of different types: orthogonal Cartesian grids, multiblock body-fitted grids, nonorthogonal grids, locally refined grids. Analysis of published ICFD papers shows that the number of cells in a typical grid is of the order 10^4 to 10^5. The use of grids with more than 10^6 cells requires today's supercomputers and is therefore rare. The typically used size of a calculation grid grows quite slowly: roughly one order of magnitude every 10 years. So the development of visualization methods that use grids with more than 10^7 cells [Wood93] is not seen as a present-day problem of ICFD.
It is important not to be overdiligent in trying to obtain a high-quality image with resolution higher than that given by the ICFD numerical simulation. A certain threat is associated with the use of operations of filtration, averaging, and other actions that improve the image. The major purpose of scientific visualization is seen in accurate representation of the obtained information instead of drawing a beautiful picture around the results data.
It is quite natural to treat a grid cell as a voxel [Elv92] in the visualization of 3D data coupled with a grid. But in finite volume methods the values in a cell are averaged over the cell volume. So to restore the value of a parameter at an arbitrary point of a cell it is necessary to reconstruct the function using its mean values within the cells. This procedure cannot always be reduced to simple trilinear interpolation. Ignoring this fact can cause distortion of the simulation results.
3. Physical basics of volume visualization
Human beings distinguish objects in the surrounding space due to the interaction of these objects with light: objects radiate, absorb, reflect, scatter, or refract light. The same physical principles as in reality are used for creating an image on the screen of a monitor. Examples of images built by using different optical processes can be found in the works devoted to volume visualization [Kru91, Kauf94, Hav94, Max95].
So, the most common computer presentation of objects as a set of surfaces (surface visualization) is based on the properties of reflection and refraction of incident light. The process of reflection is typical of surfaces of rigid and liquid phases, not of a gas volume. Therefore we shall not consider reflection below.
Let us then assess the role of light scattering in volume visualization. Consider the radiative transfer in a domain containing a radiative, absorptive, and scattering medium. By definition, the spectral intensity of light I_λ along a ray satisfies the radiative transfer equation

dI_λ/ds = q_λ − κ_λ I_λ + (σ_λ/4π) ∫_{4π} I_λ(Ω′) p(Ω′, Ω) dΩ′ ,   (1)

where s is the length along the ray, q_λ is the radiance of the medium, κ_λ its absorptivity, σ_λ its scattering coefficient, and p(Ω′, Ω) the scattering phase function.
Solving the integro-differential equation (1) in the common case of a multiple-scattering medium can be done only by an iterative procedure. So it requires considerable processor time, analogous to rendering a scene with multiple scattering of light from surfaces by the ray tracing method. Moreover, taking the scattering into account can only fog (in a literal sense) the image, making it as non-ideal as a poor-quality experiment. An illustrative example is given in [Max95].
Note the relations between experimental and computer visualization. After obtaining the numerical data one can simulate the image as it would be seen in an experiment. As computer technology progresses, our capabilities in this area will only increase, to the extent that it will become possible to reproduce a badly done experiment, with spots of light, scattering, diffraction, and other phenomena which hide the information sought in the experiment.
A race after realistic presentation in this direction looks doubtful. The aim of ICFD visualization is to give a method of cognition, not to hide the truth by means of reproduction of a non-ideal experiment. Let us give an analogy. There is some interest in the task of building on the display a realistic image of a screen covered by a thick layer of dust. But it seems that there would be quite few volunteers willing to work in conditions where such an "antireflecting coating" is automatically placed on the image.
As mentioned above, the image building must be quick enough. So image drawing based on volume scattering is unnecessary at the analytical stage of work with a CAE-ICFD system. Therefore, in the further discussion we will restrict ourselves to the ray casting method for building an image.
Let us discuss further, in more detail, the peculiarities of building images of a volume distribution f by the radiation and absorption based method (RABM) and the refraction based method (RBM).
The losses of information at the projection of a 3D field onto a 2D plane, inherent in volume visualisation, are inevitable. It is therefore worthwhile to construct images not in accordance with exact physical laws but governed by some simplified laws. These simplified laws must reflect the essence of the real physics, produce an image clear to the user, and lead to simpler mathematics to provide quicker calculation. Examples for absorption (radiation) and refraction are presented below.
3.1 Absorption and radiation
Consider the case when the image is formed by a radiative and absorptive medium. Let the z axis go along the line of sight, with z=0 the image plane. As follows from (1), the intensity of light I(x,y) on the image plane (x,y) is

I(x,y) = I₀ exp(−t_λ(0)) + ∫₀^∞ q(x,y,z) exp(−t_λ(z)) dz ,   (2)

where I₀ is the intensity of the background light and t_λ(z) = ∫_z^∞ κ(x,y,z′) dz′ is the optical thickness of the medium between depth z and the far boundary of the domain.
In the method of ray casting, the brightness of a pixel at point (x,y) on the screen is proportional to I(x,y). In the common case the value of I is defined by the relation between f and the optical properties of the medium, κ and q. Let us consider some particular cases.
In the case of a nonradiative medium (as for an X-ray image), q=0 and the illumination I(x,y) depends on the distribution of the absorptivity κ along the z axis. For example, at κ = Cf a part of the region with higher integral optical thickness

t_λ(0) = ∫₀^∞ κ(x,y,z) dz   (3)

will look darker on the screen. If the integral optical thickness is small, t_λ(0) << 1 (i.e. t_λ(0) < 0.3), then

I(x,y) ≈ I₀ (1 − t_λ(0)) ,   (4)

I₀ being the intensity of the background light. In the contrary case of a radiative but nonabsorptive medium, κ = 0, the illumination I(x,y) depends on the distribution of the radiance q along the z axis:

I(x,y) = I₀ + ∫₀^∞ q(x,y,z) dz .   (5)

For example, at q = Cf a part of the region with a higher value of the integral of f along the line of sight will look brighter.
The obvious disadvantage of using (2) is the necessity of computing the exponent, which is processor-time consuming. At the same time it should be remembered that the relationship between f and κ, q is controlled by the user and is essentially artificial. In this case it is not clear for what purpose comprehensive modeling of the absorption (radiation) process is required.
For the above reason the following simplification of (1) may be proposed. Define the relation between f and κ so that t_λ(0) does not exceed 0.3, and an arbitrary relation between f and q. Then, combining (4) and (5), we get the simplified form of (2):

I(x,y) = I₀ (1 − t_λ(0)) + ∫₀^∞ q(x,y,z) dz ,   (6)

where I₀ is the intensity of the background light.
So the volume is actually projected onto the screen in such a way that the distribution f(x,y,z) is integrated along the line of sight (z axis) using some weight function given by the relation between f and the optical properties of the medium, κ and q. The user can view the distribution of f in the view plane (x,y) as a variety of color and brightness in the picture. Moreover, he can view the distribution of f along the line of sight (z axis) too if he changes the relation between the optical properties of the medium and the function f (or, for example, the gradient of f). An analogy is pertinent to X-ray imaging, where for selection of the objects of interest a special substance increasing absorption is injected into the object.
Additional possibilities for drawing information-saturated pictures arise when a color instead of a monochrome palette is used. This can be done by calculating the distribution I(x,y) for 2-3 wavelengths (colors) with various dependencies of κ and q on f. By further mixing of colors one can obtain a picture reflecting simultaneously several features of the distribution of f in the volume. Anyway, due to the uncertain character of this process it looks better suited for the final (report) stage of visualization. At the analytical stage it is better to use monochrome images [Wood93].
3.2. Refraction
Consider now a domain containing an only refractive (not radiative and absorptive) medium. The processes of radiative transfer in refractive and in radiative-absorptive gases are principally different. In a radiative and absorptive gas the path of light is a straight line, but the light intensity changes along the line in accordance with (1). When a set of parallel light beams enters the domain, each beam goes through the domain parallel to the others, so they do not intersect. In contrast to this, the radiative transfer in a refractive medium is not described by an equation like (1). In a refractive gas the light intensity does not change along the path, but the path becomes twisted rather than a straight line. As a consequence, a situation when several paths pass through one point may take place.
The refractivity of the medium is defined by the index of refraction n. In the common case the equation of the path of a light beam is

d(n k)/ds = ∇n ,   (7)

where k and s are the unit tangent vector to the beam and the arc length along it.
The problem is simplified in the case when the refraction is small and the variation of n along the z axis is negligible, i.e. n=n(x,y). Such conditions are realised in a set of aerodynamic experiments, for example, the boundary layer on a heated plane, the flow around a 2D wing profile, etc. Methods of visualization based on refraction (Schlieren and shadow methods, interferometry [Gold83]) are among the most powerful instruments for the study of high-speed flows of gas and density-inhomogeneous media in experimental fluid dynamics. Assume x(z), y(z) are the equations of a light beam; then

d²x/dz² = (1/n) ∂n/∂x ,   d²y/dz² = (1/n) ∂n/∂y .   (8)
As follows from (8), the essence of refraction is the sensitivity of the light path to the gradient of n in the direction perpendicular to the light path. The image of a domain containing a medium with a nonuniform distribution of n deforms for this reason. Note that the beam of light deviates in the direction of increasing n.
Modification of the optical laws of refraction has to be such that the outlined feature remains but the mathematical form simplifies. In particular:
• the image deformation should be proportional to the gradient of n in the plane perpendicular to the view direction;
• the variation of n along the z axis and the respective image deformation should be ignored.
We propose the following form of the modified optical laws of refraction satisfying these conditions. The equations for the path of the light beam for an arbitrary distribution of n in the domain are

d²x/dz² = ∂n/∂x ,   d²y/dz² = ∂n/∂y .   (9)
4. Method of test grid
Both RABM and a computer version of the refraction-based methods popular in experimental fluid dynamics (Schlieren and shadow methods, interferometry) produce a 'dense' image. It means that all pixels of the image are informatively significant. So if the user wants to combine two images simultaneously (for example, distributions of temperature and the absolute value of velocity), he has to make one of them transparent. Moreover, the studied technical installation must be shown too, and the best form of its presentation is the semitransparent technique. As a result the picture becomes considerably overloaded and hard for comprehension.
As a rule one of the two combined images is more significant, and the other is needed as a background. So it is important to be able to construct not a 'dense' but a 'slight' image for use as a background for another, more important image.
In this connection we propose a computer version of the experimental test-grid method, which was proposed in [Boya88] for studying surface waves caused by perturbation of a fluid in a tank.
In the test-grid method the flux of light entering the studied domain with refractive gas has the form of a rectangular grid consisting of a set of lines. For instance, the grid may be formed as black lines
on a white background. We will see an undistorted grid with rectangular cells if the index of refraction n in the domain does not change in the (x,y) plane. The image of the grid will be deformed if a
gradient of n in the (x,y) plane exists in the volume. This is similar to the deformation of the tiles on the bottom of a swimming pool as seen through a water surface covered with waves.
The image may be built in the following way. A set of points lying at the intersections of the grid lines serve as the sources of light beams passing through the domain. The trajectory of the light is
described by (9). The points where these beams cross the view plane are connected in the appropriate order, which gives the transformed image of the grid. By giving the user the possibility to define the
relation between the analyzed function f and the index of refraction n, the size of the grid, and the limits of the region in which the medium refracts light, we obtain an instrument for analyzing the
distribution of f in the direction perpendicular to the line of sight.
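The rendering loop just described can be sketched numerically. This is not the paper's exact formulation (equation (9) is not reproduced above); it simply displaces each grid vertex proportionally to the in-plane gradient of n, in the direction of increasing n, and the index field n(x,y), the grid size, and the sensitivity constant k are all invented for illustration:

```python
# Sketch of the test-grid idea: each grid vertex is displaced
# proportionally to the in-plane gradient of the refraction index
# n(x, y); connecting the displaced vertices along the original grid
# lines would give the deformed grid image.
import math

def n(x, y):
    # Hypothetical smooth index field: one Gaussian "dense" blob.
    return 1.0 + 0.05 * math.exp(-(x * x + y * y))

def grad_n(x, y, h=1e-5):
    # Central finite differences for the in-plane gradient of n.
    gx = (n(x + h, y) - n(x - h, y)) / (2 * h)
    gy = (n(x, y + h) - n(x, y - h)) / (2 * h)
    return gx, gy

def deform_grid(xs, ys, k=10.0):
    # Displace every vertex by k * grad(n); k stands in for the
    # user-chosen sensitivity relating f, n and the path length.
    out = []
    for y in ys:
        row = []
        for x in xs:
            gx, gy = grad_n(x, y)
            row.append((x + k * gx, y + k * gy))
        out.append(row)
    return out

xs = [i * 0.5 - 2.0 for i in range(9)]   # grid vertex x coordinates
ys = [j * 0.5 - 2.0 for j in range(9)]   # grid vertex y coordinates
grid = deform_grid(xs, ys)
# Vertices near the blob are pulled toward increasing n; vertices far
# from it are almost unmoved, so only the perturbed region deforms.
```

Only the grid vertices are traced, which is what makes the method fast compared with per-pixel rendering.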
Among the advantages of the test-grid method are the following:
• high speed of image drawing, since we calculate the path of light emitted only from the points at the vertices of the initial grid, not from all pixels;
• compatibility with RABM;
• the monochrome image of a test grid is easily separated from the other objects present in the picture.
Below we give two examples of test-grid images for various distributions of a scalar function f. Because this publication requires black-and-white pictures, we present examples with a simple geometry of
the calculation domain and simply shaped objects in it. The first example, presented in Figure 1, is the visualization of a model distribution consisting of two "dense" spheres with "atmospheres" whose
density decreases to zero at some radius. Part a) of Figure 1 is rendered by RABM, part b) by the test-grid method, and part c) is the combination of parts a) and b).
The second example belongs to cleanroom aerodynamics. The cleanroom studied is presented in Figure 2a). Clean air flows into the room from the ceiling toward the grating and is then removed through an
outlet on the side below the grating. There is a table with a heat and contamination source on it; this source models a fire of a specific type. The jet of hot gas produces a torus-like circulation flow
in the room. The general structure of the velocity field is shown in Figure 2b). Both RABM and the RBM (test-grid method) are used for the simultaneous presentation of the contamination and temperature
fields in the cleanroom in Figure 3.
These examples only illustrate applications of the proposed method and are not meant to suggest that it is a means of solving all visualization problems arising in ICFD. In our view, offering the user a
wide spectrum of possibilities for forming images in each particular case is more important than searching for a panacea - an evidently nonexistent universal method for representing the essence of all
possible flows and answering all of the user's questions.
Figure 1. Model distribution of a scalar function in the form of two "dense" spheres with "atmospheres" whose density decreases to zero at some radius: a) rendering by RABM; b) rendering by the
test-grid method; c) combination of a) and b)
Figure 2. Fire and air flow in a cleanroom: a) general view of the cleanroom; b) velocity vectors and flow structure in two perpendicular planes (central and along the wall) passing through the source;
arrows show the direction of air flow
Figure 3. Fire and air flow in the cleanroom: a) volume visualization of the contamination field by RABM; b) the same, together with the presentation of the temperature field by the test-grid method
This work was performed with the support of RFBR grants #95-01-00815 (AVG and AAA) and #96-01-01273 (SVK).
[Boya88] V.I.Boyarintsev, A.K.Lednev, V.A.Frost, "Motion of a cylinder under the surface of a liquid", Preprint #322, Institute of Problems of Mechanics AS USSR, Moscow, 1988, 39 p. (in Russian).
[Bril94] M.Bril, H.Hagen, H.-Chr. Rodrian, W.Djatschin, S.Klimenko, "Streamball Techniques for Flow Visualization", Proc. of Visualization'94, pp. 225-231.
[Gold83] R.J.Goldstein, "Optical Systems for Flow Measurements: Shadowgraph, Schlieren and Interferometric Techniques," Fluid Flow Measurements, ed. by R.J.Goldstein, Hemisphere Publishing Corp.,
1983, pp. 377-421.
[Elvi92] T.T.Elvins, "A Survey of Algorithms for Volume Visualization," Computer Graphics, Vol.26, No.3, Aug. 1992, pp.194-201.
[Have94] G.Havener, L.A.Yates, "Visualizing the Flow with CFI," Aerospace America, June 1994, pp.24-27,43.
[Kauf94] A.Kaufman, K.H.Hohne, W.Kruger, L.Rosenblum, P.Schroder, "Research Issues in Volume Visualization," IEEE Computer Graphics & Applications, Vol.14, No.2, March 1994, pp.63-67.
[Krue91] W.Krueger, "The Application of Transport Theory to Visualization of 3-D Scalar Data Fields," Computers in Physics, Jul/Aug 1991, pp.397-406.
[Levi90] Yu.Levi, D.Degani, A.Zeginer, "The Use of Spirality of Flow for Graphic Visualization of Vortex Flows", AIAA J., 1990, No.8, pp. 1347-1352 (Aerospace Technics, 1991, N10, pp. 36-44 (in Russian)).
[Max95] N.Max, "Optical Models for Volume Rendering," in Visualization in Scientific Computing, 1995, eds. by M.Gobel, H.Muller & B.Urban, Springer-Verlag, pp. 35-40.
[Wijk93] J.J. van Wijk, "Flow Visualization with Surface Particles," IEEE Computer Graphics & Applications, Vol.13, No.4, July 1993, pp.18-24.
[Wood93] P.R.Woodward, "Interactive Scientific Visualization of Fluid Flow," Computer, Vol.26, No.10, Oct. 1993, pp.13-25.
Computer Graphics & Geometry | {"url":"http://www.cgg-journal.com/1999-3/02/Cg_97h.htm","timestamp":"2014-04-21T07:19:17Z","content_type":null,"content_length":"32047","record_id":"<urn:uuid:1ea67862-88b6-4aa2-9d46-21b420513fe7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radiant power spectrum from electroluminescence spectrum
Hello everyone!
I am having a bit of a problem understanding a paper. Usually the authors use the electroluminescent spectrum to describe an emitter, but in this paper the radiant power is constantly cited.
What is the relation of these two?
Usually the EL spectrum is a.u. vs wavelength and radiant power is (in this paper!) W/(sr m^2) (usually called radiance).
Can these two quantities be converted (as the time information is not known)?
best regards, | {"url":"http://www.physicsforums.com/showthread.php?s=ecbe8148c9fa80ba6095032b3075a37f&p=4357975","timestamp":"2014-04-19T02:23:35Z","content_type":null,"content_length":"20163","record_id":"<urn:uuid:fa58776f-2ec7-452a-a0c9-18aa93548a37>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
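One way to think about the question above: an arbitrary-unit EL spectrum has the same shape as the absolute spectral radiance, differing only by an unknown overall scale factor; without at least one absolute measurement (or the detector calibration and integration time), the conversion is impossible. The sketch below assumes such an absolute number is available — the band-integrated radiance — and rescales a made-up a.u. spectrum accordingly. All values are invented for illustration:

```python
# Sketch: an arbitrary-unit EL spectrum has the same *shape* as the
# spectral radiance L(lambda) in W / (sr m^2 nm); one known absolute
# quantity (here the band-integrated radiance) fixes the scale.

wavelengths = [500 + 5 * i for i in range(21)]            # nm, 500..600
counts = [max(0.0, 1.0 - ((w - 550) / 40.0) ** 2)         # a.u. shape
          for w in wavelengths]

def integrate(ys, xs):
    # Trapezoidal rule over the sampled spectrum.
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

total_radiance = 2.5        # W / (sr m^2), assumed measured separately
scale = total_radiance / integrate(counts, wavelengths)
spectral_radiance = [scale * c for c in counts]           # W/(sr m^2 nm)

# Sanity check: the rescaled spectrum integrates back to the measured
# band radiance.
recovered = integrate(spectral_radiance, wavelengths)
```

Without `total_radiance` (or equivalent calibration data) the scale factor is undetermined, which is why the a.u. spectrum alone cannot be converted.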
So will students still be able to earn two high school credits for Algebra (algebra split over two years)? Or must all students take Algebra in one year?
Jennifer Griffin
Pine Plains Central School District
Sent from my iPhone
At this point there is no change; passing any one Math Regents is the requirement. Again, that is at this point in time.
Kate Martin-Bridge
HS Representative
Sent from my iPhone
Thanks for passing this on. So now I pose the question to the group, how will graduation requirements change for mathematics given the changes to the curriculum? Has anyone heard anything?
Leslie Tanner
HS Math Teacher
>>> Liz Waite < > 02/05/13 6:46 AM >>>
Looks like we have some new info on engage ny...."a story of functions"
Elizabeth Waite
AMTNYS Coordinator of Reps | {"url":"http://mathforum.org/kb/servlet/JiveServlet/download/671-2433339-8255954-803399/att1.html","timestamp":"2014-04-16T22:12:22Z","content_type":null,"content_length":"2788","record_id":"<urn:uuid:1a14d7bb-2b2e-42bc-a03d-4d2199e535c5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
13.0 Solving an Equation in One Variable
Suppose we have an equation of the form f(x) = 0, where f is some, perhaps ugly, standard differentiable function on some interval. For example, f(x) might be sin(x) - 0.2, or exp(sin(x)) - csc(x),
or anything else you might want to investigate.
We address the question: how can we find a solution, that is, a value of x for which the statement f(x) = 0 is true, to within the accuracy of our computations?
We will explore four methods, which we describe here in one sentence each; we then examine them in further detail one at a time.
The first, which is called Newton's method, or the Newton-Raphson method, involves guessing an answer, x[0], then assuming that that answer is incorrect, solving the equation that the linear
approximation to f at x[0] is 0. This is a linear equation whose solution is easy. If we name the solution point to it x[1], we can repeat this step, that is, solve the equation that states that the
linear approximation to f at x[1] is 0, to find x[2], and so on.
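The Newton iteration just described can be sketched as follows; the test function, starting guess, and tolerance below are arbitrary choices, not part of the text:

```python
# Newton-Raphson sketch for f(x) = 0: repeatedly solve the linear
# approximation f(x_k) + f'(x_k)(x - x_k) = 0 for the next guess.
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)      # root of the tangent line at x
    return x

# Example function from the text: f(x) = sin(x) - 0.2.
root = newton(lambda x: math.sin(x) - 0.2, math.cos, 0.5)
```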
We also consider a slightly different method which we will call "Poor Man's Newton" in which instead of evaluating the derivatives needed to form the linear approximation by formal differentiation,
we approximate them numerically. Its only virtue is that you need not differentiate f in order to apply it.
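The same iteration with the derivative estimated numerically might look like this (the step size h and the test function are arbitrary):

```python
# "Poor Man's Newton": the same iteration, but the derivative is
# replaced by a central finite-difference estimate, so f never has to
# be differentiated by hand.
import math

def poor_mans_newton(f, x0, h=1e-6, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        slope = (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative
        x = x - fx / slope
    return x

root = poor_mans_newton(lambda x: math.sin(x) - 0.2, 0.5)
```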
The third method involves guessing two points x[1] and x[2], finding f(x[1]) and f(x[2]) and creating a new guess at the point where the straight line through these crosses the x axis.
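This secant idea can be sketched as (test function and starting points are arbitrary):

```python
# Secant method sketch: draw the line through (x1, f(x1)) and
# (x2, f(x2)) and take its x-intercept as the new guess.
import math

def secant(f, x1, x2, tol=1e-12, max_iter=60):
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        if abs(f2) < tol:
            return x2
        # New guess is where the secant line crosses the x axis.
        x1, x2, f1 = x2, x2 - f2 * (x2 - x1) / (f2 - f1), f2
        f2 = f(x2)
    return x2

root = secant(lambda x: math.sin(x) - 0.2, 0.0, 1.0)
```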
The final method, sometimes called "divide and conquer", involves starting with two points x[1] and x[2] at which f takes on values having opposite signs. Then we can evaluate f half way between them
and find an interval half the size of x[2] - x[1] in which again f takes on values having opposite signs. Repeating this step will home in on a solution, so long as f is continuous, that is, has no jumps.
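The halving step can be sketched as follows (again with an arbitrary test function and tolerance):

```python
# "Divide and conquer" (bisection) sketch: keep halving an interval on
# which f changes sign; each step gains one binary digit of accuracy.
import math

def bisect(f, lo, hi, tol=1e-12):
    flo, fhi = f(lo), f(hi)
    assert flo * fhi < 0, "need a sign change to start"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        fmid = f(mid)
        if flo * fmid <= 0:       # sign change in the left half
            hi, fhi = mid, fmid
        else:                     # sign change in the right half
            lo, flo = mid, fmid
    return (lo + hi) / 2.0

root = bisect(lambda x: math.sin(x) - 0.2, 0.0, 1.0)
```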
Not all equations have solutions so all of these methods must fail for some equations. The first three can produce sequences of guesses, x[1], x[2], ... which flail about and do not converge.
It may be impossible to get started in the last method, since you may not be able to find x[1] and x[2] at which f has opposite signs. It is a slow and steady method, like the tortoise's racing plan,
but it must win once started and improves its accuracy by a factor of two on each iteration. | {"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter13/section00.html","timestamp":"2014-04-20T01:19:26Z","content_type":null,"content_length":"4434","record_id":"<urn:uuid:79380bb7-680b-4722-99dd-8d0ca4d52510>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unbounded functional
March 28th 2011, 11:12 PM #1
Sep 2009
Dear Colleagues,
Could you please help me to solve the following problem:
The space $C^{1}[a,b]$ is the subspace of $C[a,b]$ consisting of all continuously differentiable functions. Let $f$ be the functional defined on $C^{1}[a,b]$ by $f(x)=x'(c)$, $c=(a+b)/2$, where
$x\in C^{1}[a,b]$. Prove that $f$ is not bounded.
Dear Colleagues,
Could you please help me to solve the following problem:
The space $C^{1}[a,b]$ is the subspace of $C[a,b]$ consisting of all continuously differentiable functions. Let $f$ be the functional defined on $C^{1}[a,b]$ by $f(x)=x'(c)$, $c=(a+b)/2$, where
$x\in C^{1}[a,b]$. Prove that $f$ is not bounded.
I think you can do this on your own. Think about it: what if, for every $\varepsilon>0$, you created a function $f_\varepsilon\in C^1[a,b]$ of bounded norm such that $\displaystyle f'_\varepsilon\left(\frac{a+b}{2}\right)=\frac{1}{\varepsilon}$?
Thank you very much.
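Following the hint, one standard construction (assuming the norm on $C^1[a,b]$ is the sup norm inherited from $C[a,b]$, as the problem statement suggests) can be sketched as:

```latex
% Take, for c = (a+b)/2 and n = 1, 2, \dots,
x_n(t) = \frac{1}{n}\sin\!\bigl(n^{2}(t-c)\bigr).
% Then, in the sup norm of C[a,b],
\|x_n\|_{\infty} \le \frac{1}{n} \longrightarrow 0,
% while
f(x_n) = x_n'(c) = n\cos 0 = n \longrightarrow \infty,
% so no constant M can satisfy |f(x)| \le M\,\|x\| for all x,
% i.e. f is unbounded.
```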
Optimizing shallow-water sound-speed estimation using parameter resolution bounds
ASA 128th Meeting - Austin, Texas - 1994 Nov 28 .. Dec 02
2aAO5. Optimizing shallow-water sound-speed estimation using parameter resolution bounds.
Nicholas C. Makris
Naval Res. Lab., Washington D.C. 20375
A technique is under development to optimize experimental design for estimation of 3-D sound-speed structure by inversion of acoustic data. The motivation is to take advantage of a priori knowledge
of invariant environmental parameters to estimate the minimum number of sensors necessary and their optimal deployment geometry for a well constrained inversion. First, static environmental
information, such as bathymetry, geoacoustic parameters of the sediment, and mean sound-speed structure of the water column, is input to an appropriate range-dependent acoustic model for a given
sensor deployment geometry. Next, a theoretical lower bound on estimation error for the water-column sound-speed structure is obtained via the Cramer--Rao bound. The deployment geometry is then
perturbed until the estimation error is within acceptable bounds for oceanographic and acoustic modeling. However, the choice of sound-speed parametrization can also severely affect the accuracy of
an inversion. For example, an empirical orthogonal function (EOF) representation typically has higher resolution for fewer parameters than a discrete cell representation. But this is at the cost of
more limiting assumptions. These issues are addressed by computing the theoretical lower bound on estimation error for discrete cell, EOF, and Fourier internal wave representations of 3-D sound-speed structure. | {"url":"http://www.auditory.org/asamtgs/asa94aus/2aAO/2aAO5.html","timestamp":"2014-04-16T10:13:52Z","content_type":null,"content_length":"2037","record_id":"<urn:uuid:2ec7b6a5-9551-42e9-851b-2b1d5215fc32>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
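The Cramér–Rao computation described in the abstract above can be illustrated for a linearized Gaussian data model d = Jm + noise, where the Fisher information is F = JᵀC⁻¹J and the variance of any unbiased estimate of the i-th parameter is bounded below by (F⁻¹)ᵢᵢ. Everything below — the sensitivity matrix, noise level, and the pretense that the first three columns form an "EOF-like" basis — is invented purely to show the parametrization trade-off, not taken from the paper:

```python
# Cramer-Rao bound sketch for a linearized Gaussian model d = J m + n,
# with i.i.d. noise of standard deviation sigma (so C = sigma^2 I).
import numpy as np

rng = np.random.default_rng(0)
n_data = 40
sigma = 0.1

J_cells = rng.standard_normal((n_data, 12))  # "discrete cell" basis
J_eof = J_cells[:, :3]                       # "EOF-like": 3 parameters

def crb_diag(J, sigma):
    # Fisher information F = J^T C^-1 J; CRB is the diagonal of F^-1.
    fisher = J.T @ J / sigma**2
    return np.diag(np.linalg.inv(fisher))

crb_cells = crb_diag(J_cells, sigma)
crb_eof = crb_diag(J_eof, sigma)
# Fewer parameters give a better-conditioned Fisher matrix, so the
# bound on each retained parameter can only shrink -- at the cost of
# the more limiting assumptions the abstract mentions.
```

Perturbing the sensor geometry would change J and hence the bound, which is the design loop the abstract describes.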
Calculating "o'clock" position
November 2nd 2008, 01:52 AM #1
Nov 2008
Hi all,
My maths is very rusty, so apologies for any bad terminology.
I'm working on some programming which involves describing places on a roughly circular map (if you're interested...) relative to the centre of it. I would like this description to read along the
lines of "3 o'clock, 34% from centre" - this is to complement a function to search the map, and show the results in a way which can be understood by people with vision difficulties.
The map is an image file, and hence has co-ordinates starting from 0,0 in the top left. I've stored some meta data of the map which I feel will be useful - the co-ordinates of the middle of the
circle, and the radius of it (large enough to encapsulate the whole map).
By subtracting the co-ordinates of a search result from the co-ordinates of the middle of the circle, I'm left with a set of co-ordinates relative to the middle of the circle - as if the circle
was at 0,0 of a 2 axis graph which extends into the positive and negative.
This is where I'm stumped. I need to figure out what clock position (e.g. "3 o'clock") the search result is at, and how far from the centre it is - both of these will no doubt require the radius.
Any ideas or formulas to share?
Many thanks
For the sake of testing, here are the numbers:
Midpoint of map: 650, 635
Radius of circle: 715
Sample search result: 116, 824 (raw image co-ordinates) or -534, 189 (relative to midpoint of map)
Hi all,
My maths is very rusty, so apologies for any bad terminology.
I'm working on some programming which involves describing places on a roughly circular map (if you're interested...) relative to the centre of it. I would like this description to read along the
lines of "3 o'clock, 34% from centre" - this is to complement a function to search the map, and show the results in a way which can be understood by people with vision difficulties.
The map is an image file, and hence has co-ordinates starting from 0,0 in the top left. I've stored some meta data of the map which I feel will be useful - the co-ordinates of the middle of the
circle, and the radius of it (large enough to encapsulate the whole map).
By subtracting the co-ordinates of a search result from the co-ordinates of the middle of the circle, I'm left with a set of co-ordinates relative to the middle of the circle - as if the circle
was at 0,0 of a 2 axis graph which extends into the positive and negative.
This is where I'm stumped. I need to figure out what clock position (e.g. "3 o'clock") the search result is at, and how far from the centre it is - both of these will no doubt require the radius.
Any ideas or formulas to share?
Many thanks
For the sake of testing, here are the numbers:
Midpoint of map: 650, 635
Radius of circle: 715
Sample search result: 116, 824 (raw image co-ordinates) or -534, 189 (relative to midpoint of map)
That's really a nice map!
1. If the search result has the coordinates $R(x_R\ ,\ y_R)$ then the distance to the midpoint is calculated by:
$d=\sqrt{(x_R - 650)^2+(y_R - 635)^2}$
Since you only can use integer numbers the value of d must be truncated. (Depending on the programming language you use the command should be something like trunc(d) or floor(d) or ...)
2. With the coordinates of R you can calculate the angle between the positive x-axis starting at the midpoint:
With this value you can determine the value of $\alpha$.
You have to consider 2 cases:
1. $y_R < 635 ~\implies~ 0^\circ \leq \alpha < 180^\circ$
2. $y_R \geq 635 ~\implies~ 180^\circ \leq \alpha < 360^\circ$
The 3 o'clock position must be located in the sector $75^\circ \leq \alpha < 105^\circ$
Hello, Kefka!
I hope I understood the problem . . .
* * *
* | * P
* Q+ - - - o (x,y)
* | * *
| * r
* |θ* *
- - * - - - - o - - - - * - -
* (xc,yc) *
* | *
* | *
* | *
* * *
The center of the circle is $O(x_c,y_c)$
A given point is $P(x,y)$
The radius is: . $r \:=\:\sqrt{(x-x_c)^2 + (y-y_c)^2}$
If $R$ is the fixed radius of the base circle,
. . . the "percentage from the center" is: . $\frac{r}{R} \times 100$ percent.
Let $\theta$ be the clockwise angle between the y-axis and $OP$.
We see that: . $\begin{array}{ccc} PQ &=&r\sin\theta \\ OQ &=& r\cos\theta \end{array}\quad\Rightarrow\quad \begin{array}{cccc} x &=& x_c + r\sin\theta &{\color{blue}[1]}\\ y &=& y_c + r\cos\theta & {\color{blue}[2]}\end{array}$
. . $\begin{array}{ccccc}{\color{blue}[1]}\text{ becomes:}& r\sin\theta &=&x-x_c & {\color{blue}[3]}\end{array}$
. . $\begin{array}{ccccc}{\color{blue}[2]}\text{ becomes:} & r\cos\theta &=& y-y_c & {\color{blue}[4]}\end{array}$
Divide [3] by [4]: . $\frac{r\sin\theta}{r\cos\theta} \:=\:\frac{x-x_c}{y-y_c}$
. . Hence: . $\tan\theta \:=\:\frac{x-x_c}{y-y_c}\quad\Rightarrow\quad \theta \;=\;\tan^{-1}\left(\frac{x-x_c}{y-y_c}\right)$
Then the clock-reading is: . $\frac{\theta}{2\pi}\times 12 \:=\:\frac{6\theta}{\pi}$ o'clock (with $\theta$ in radians).
Thanks for the help
Have got the distance from centre working with the following code - since (Xc, Yc) is always (0, 0) - the middle of the circle - the formula for that was very simple.... Here it is in code:
[PHP]sqrt( pow( $relx, 2 ) + pow( $rely, 2 ) ); //calculates distance in pixels.[/PHP]
It was then a simple matter of turning it into a % using the radius of the city circle. It's a little bit off since the city isn't a perfect circle, but I'm sure it's close enough.
Now to try figure out the o'clock bits... I've always been much better at reading code than formulae... :P
UPDATE: After some prodding and fiddling and a more code-oriented discussion found at distance and angle between 2 xy coordinates [Archive] - WebDeveloper.com I've got the angle working... It
seems to work! I think always having a (0, 0) point simplifies things a lot. Here's the code to get the angle...
[PHP]$angle = acos( -$rely / $dist ) / (M_PI * 2) * 360; //Calculate angle
$angle = $rely < 0 ? (180-$angle) + 180 : $angle; //Correct for negative quadrants[/PHP]
The second line is an if statement (in shorthand) saying "if the Y co-ordinate of the result is negative, then correct by doing (180-$angle) + 180, otherwise, leave it as is."
Now I'm just trying to figure out the converting to o'clock part... Soroban's formula doesn't seem to work...
e.g. using an angle of 90 (3 o'clock), we have (6 * 90) / Pi... which gives us 172 o'clock.
UPDATE 2: Oops, just figured out I can just divide the angle by 30, with rounding, and it's fine!
Oops, there's some kind of error still lurking... Hmm.
Last edited by Kefka; November 2nd 2008 at 07:25 PM.
Solved at last!
Ok, the error which was lurking turned out to be me calculating the relative Y co-ordinate wrong - it was the inverse of what it should be... so, here's the code in full...
[PHP]$relx = $resultx - $middlex; //Calculate X of result relative to middle of map
$rely = $middley - $resulty; //Calculate Y of result relative to middle of map
$dist = round( sqrt( pow( $relx, 2 ) + pow( $rely, 2 ))); //Calculate distance
$dist2 = $dist > $city_radius ? 100 : round(($dist / $meta[$name[0]]['r']) * 100); //Convert distance to %, correcting if over 100
$angle = acos( $rely / $dist ) / ( M_PI * 2 ) * 360; //Calculate angle
$angle = $relx < 0 ? (180 - $angle) + 180 : $angle; //Correct for negative quadrants
$clock = round($angle / 30); //Convert to o'clock
$clock = $clock == 0 ? 12 : $clock; //Correct if rounded to 0 o'clock
echo "Match found at ".$clock." o'clock, ".$dist2."% out from middle.";[/PHP]
Huzzah! Thanks muchly for the help
Last edited by Kefka; November 2nd 2008 at 08:12 PM.
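For comparison, the whole computation can be done without explicit quadrant case-work by using atan2. This is a sketch in Python rather than the thread's PHP, with the same screen-coordinate convention (image y grows downward) and the thread's sample numbers:

```python
# atan2(relx, rely) measures the clockwise angle from "12 o'clock"
# (straight up) in screen coordinates, handling all four quadrants.
import math

def clock_position(px, py, cx, cy, radius):
    relx, rely = px - cx, cy - py         # flip y: image y grows down
    dist = math.hypot(relx, rely)
    angle = math.degrees(math.atan2(relx, rely)) % 360.0
    clock = round(angle / 30.0) or 12     # 0 o'clock reads as 12
    percent = min(100, round(100.0 * dist / radius))
    return clock, percent

# The thread's sample point: map centre (650, 635), radius 715,
# search result at raw image coordinates (116, 824).
clock, pct = clock_position(116, 824, 650, 635, 715)
```

The modulo handles negative angles, and the single `% 360.0` replaces the two-branch quadrant correction in the PHP version.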
During middle school, Baekhyun was said to really take good care of his seatmate. One time, his seatmate forgot to bring his pencil, so he started worrying. Baekhyun gave his own pencil and told him,
“you can use this” so Baekhyun’s seatmate was really touched. But then right before the guy could express his thanks, Baekhyun opened his mouth to say, “since you’re using it, take my notes for me.” | {"url":"http://jongin-my-bummie.tumblr.com/","timestamp":"2014-04-18T03:14:42Z","content_type":null,"content_length":"42616","record_id":"<urn:uuid:939ec824-7f14-4cc0-9818-2eb52b305b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |