Graph Theory: A has 15 edges, Ā has 13 edges, how many vertices does A have?
June 22nd 2008, 08:55 PM
Discrete Math: Graph Theory #54.
If A is a simple graph with 15 edges, and Ā has 13 edges, how many vertices does A have?
Can anyone give me the answer and how to arrive at it?
Thanks a bunch,
June 22nd 2008, 09:40 PM
do you know this statement?
Let $\overline{G}$ be the complementary graph of $G$. Then
$|E(\overline{G})| + |E(G)| = \left({\begin{array}{c} |V(G)| \\ 2 \end{array}}\right)$
June 22nd 2008, 09:48 PM
Equation -> Handshaking Theorem?
I don't believe I have seen this. Is this related or some rendition of the Handshaking Theorem?
June 22nd 2008, 09:59 PM
i dont think so.. we discussed this as a remark in graph operations..
June 22nd 2008, 10:05 PM
Reading new equation.
Tell me I am reading this right.
The sum of the cardinality of (edges) G and the G-complement equals the cardinality of (vertices) G.
Sorry if I am not getting this correctly. It is very late for me. :-) You know how it goes.
June 22nd 2008, 10:20 PM
... cardinality of (vertices) G taken 2..
EDIT: that is the usual combination formula..
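Putting the pieces together for the original question: $|E(A)| + |E(\overline{A})| = \binom{n}{2}$ gives $15 + 13 = \frac{n(n-1)}{2}$, so $n(n-1) = 56$ and $n = 8$. As a check, a simple graph on 8 vertices has $\binom{8}{2} = 28$ possible edges, and $15 + 13 = 28$.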
A company has the task of producing spheres of volume `288pi pm 5` cm^3.
Use integration to find the radius of the spheres, writing this in the form `a pm b` cm
with b rounded to three decimal places.
The formula for the volume of a sphere is
`V = 4/3pir^3`
Think of this as adding two hemispheres together, where each hemisphere is the sum of many circle slices/discs from `x=0` to `r`, where the radius of the disc at `x` is (using
Pythagoras) `sqrt(r^2-x^2)`. Then, since the area of each disc is `pi(r^2-x^2)`, integrating over `x` we get
`V = 2 times int_0^r pi (r^2-x^2) dx`
`= 2 pi times (xr^2 - 1/3 x^3)|_0^r = 2 pi times (r^3 - 1/3 r^3) = 4/3 pi r^3`
Now, we have that the volume of the spheres in question is
`V = 288pi pm 5` cm^3
`= 4/3 pi (288(3/4) pm 5(3/(4pi))) = 4/3 pi (216 pm 15/(4pi))`
`r^3 = 216 pm 15/(4pi)`
The minimum `r` can be is `root(3)(216-15/(4pi)) = 5.989`
The maximum `r` can be is `root(3)(216+15/(4pi)) = 6.011`
`r = 6 pm 0.011` cm
a = 6 and b = 0.011 to 3dp
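One quick way to sanity-check the two bounds (a minimal sketch in Python, standard library only):

from math import pi

low = (216 - 15/(4*pi)) ** (1/3)
high = (216 + 15/(4*pi)) ** (1/3)
print(round(low, 3), round(high, 3))   # 5.989 6.011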
[Numpy-discussion] compress function
Peter Verveer verveer at embl-heidelberg.de
Thu Jun 24 05:39:00 CDT 2004
The documentation of the compress function states that the condition
must be of the same length as the given axis of the array that is compressed, e.g.:
>>> a = array([[1,2],[3,4]])
>>> print compress([1,0], a, axis = 1)
[[1]
 [3]]
However, this also works fine:
>>> print compress([[1,0],[0,1]], a)
[1, 4]
which is great (I need that) but not documented. Is that behaviour
intended? If so it maybe should be documented.
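For reference, with a current NumPy the two calls look like this (a sketch; np.ravel makes the flattening explicit, since the old Numeric compress accepted the nested condition directly):

import numpy as np

a = np.array([[1, 2], [3, 4]])

# Documented use: 1-D condition along a given axis -> keep column 0.
print(np.compress([1, 0], a, axis=1))   # [[1]
                                        #  [3]]

# The undocumented behaviour: with no axis, both the condition and
# the array are treated as flat sequences.
print(np.compress(np.ravel([[1, 0], [0, 1]]), a))   # [1 4]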
Cheers, Peter
uses of trigonometry in daily life
June 21st, 2011, 08:51 AM
hi friends
i am sahil juneja. i have been working very hard to find some uses of trigonometry. i searched on many sites but i was unsuccessful. i just need your help, please do help me, i would be thankful to you from
the bottom of my heart
thank you 8) 8)
June 21st, 2011, 11:29 AM
is this hw?
June 21st, 2011, 11:37 AM
Seems like homework to me, and you obviously haven't researched very well if you can't find any information upon the applications of trigonometry in every day life.
June 21st, 2011, 11:58 AM
One can go on and on when talking about trigo in everyday life, but any form of architecture involves trigonometry, including the very roof you are living under.
July 17th, 2011, 10:38 AM
The whole of physics and maths is based on trigo.
It can be used to measure the height of a tower.
It is used in mechanics to understand the motion of particles.
And much more.
July 29th, 2011, 04:50 PM
Trig can be used in many places such as hiking. You come to a stream or river and you want to determine the distance across the river. You try to decide if a tree next to the river can be dropped
to make an impromptu bridge. Trig can be used to determine the height of the tree. There are useful approximation methods that can be used to determine these quantities. These methods have been
used since at least the time of Napoleon.
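For instance, the tree-height estimate needs only one angle measurement: pace off a distance d from the base, sight the top at elevation angle theta, and the height is h = d tan(theta). With made-up numbers, d = 20 m and theta = 35 degrees give h = 20 x 0.700, roughly 14 m, so such a tree would just span a 14 m stream.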
August 16th, 2011, 01:09 AM
I work with cad drawings daily and trigonometry comes in handy.
August 18th, 2011, 12:49 AM
Every mid size or larger production machine shop has a tool called a "sine bar", used in combination with "gage blocks" for high precision measuring and layout of angles. In some shops the tool
will be used every day, by several people.
Since few machinists understand even basic trig the tool is employed by rote and experience, with even in modern times a frequent resort to printed tables of sines or other trig functions
(although the calculator is making inroads in the hinterlands). But those machinists who can actually employ the trigonometric basis enjoy some advantages, often including higher pay.
September 16th, 2011, 10:05 AM
Well the sat nav system uses trigonometry [with corrections applied from general relativity, which is wholly described in differential calculus].
I suggest you get hold of a copy of Euclid's 'Elements' or Newton's 'Principia Mathematica' - more trig than you can shake a log book at
September 16th, 2011, 10:19 AM
That may be true today since many "machinists" today are really no more than machine operators.
But those machinists who can actually employ the trigonometric basis enjoy some advantages, often including higher pay.
My father was a machinist and he taught me some introductory trig, algebra, and positive/negative numbers before I had Algebra I in seventh (or eight) grade.
BTW, every time I'm doing work around the house that entails making a large square or rectangular area, such as laying a concrete foundation for a shed, I use the Pythagorean Theorem: The 3/4/5
feet or 6/8/10 feet to make sure the layout is "square". Or for a perfect square, that the diagonals are equal. Very simple but precise.
Last edited by PumaMan; September 16th, 2011 at 10:33 AM.
Geodesic completeness and complete Killing fields
I would like to know why the Killing fields on a complete Riemannian manifold are themselves complete (that is, the integral curves of the Killing fields are defined for all time).
I don't understand this question. What does it mean for a Killing field to exist for all time? A Killing field is a vector field on the manifold and has no dependence on time. – Deane Yang Apr 13 '11 at 22:08
The corresponding flow, say $\Phi^t: M\to M$, preserves the metric and the field. Thus, for any $x\in M$, the curve $\alpha_x\colon t\mapsto \Phi^t(x)$ has constant speed. Therefore it
can not escape to infinity in finite time.
More precisely: if $\alpha_x$ is defined on a bounded interval $(a,b)$ then the restriction $\alpha_x|(a,b)$ has finite length, and from completeness it can be extended to a
neighborhood of $[a,b]$. This implies that $\alpha_x$ is defined on whole $\mathbb R$; i.e., the vector field is complete.
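Spelled out, the constant-speed step uses that each $\Phi^t$ is an isometry carrying the Killing field $V$ to itself: $|\dot\alpha_x(t)| = |V(\Phi^t(x))| = |d\Phi^t\,V(x)| = |V(x)|$. Hence on a bounded interval $(a,b)$ the length of $\alpha_x$ is at most $(b-a)\,|V(x)| < \infty$.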
Anton, What do you mean by "escape to infinity"? Also, where did you use the hypothesis of completeness of the manifold? – Gigou Apr 13 '11 at 23:14
I use that any curve of finite length has the end point inside the manifold. – Anton Petrunin Apr 13 '11 at 23:22
@Ken, I assume we have completeness. You might use Hopf–Rinow to show that "geodesic completeness" $\Leftrightarrow$ "completeness". – Anton Petrunin Apr 14 '11 at 0:50
@Ken, I'm confused about your comment. @Anton, Sorry, but I still don't understand your argument. What do you mean by "escape to infinity"? Also, have you considered that failure of
completeness might occur because the curve is defined only on an interval of the form (a,+\infty) and thus has infinite length, and not necessarily on an interval of the form
(a,b). – Gigou Apr 14 '11 at 2:10
I added a few words, now it should be totally clear. – Anton Petrunin Apr 14 '11 at 13:05
Re: st: Generalized lineal models with survey data
Re: st: Generalized lineal models with survey data
From Stas Kolenikov <skolenik@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Generalized lineal models with survey data
Date Tue, 27 Jul 2010 18:02:00 +0100
No, you don't have any problems with the degrees of freedom, which is
#PSUs - #strata = 837-4 = 833, and is reported as such. So I tend to
believe in Steven's story about empirical underidentification of the
overdispersion parameter: the likelihood is so flat in alpha that the
curvature (inverse of the variance) of the likelihood wrt this
parameter cannot be estimated with numeric accuracy that Stata would
find acceptable to report. And yes, this is an indication that
overdispersion is not such a great problem: conditioning on
covariates and taking weights into account seems to make your data
approximately OK.
As for the general convergence problems, they may be caused by the
scale of weights. Note that your log pseudo-likelihood has 8 digits
before the decimal point, and typically Stata wants to optimize things
down to 7 or so digits after the decimal point, that is, you need to
have about 15 reliable digits to declare convergence. That's too much
to ask for, as 15 digits is the accuracy limit of the -datatype-
double. In this situation (and in this situation only), it would be OK
to relax the convergence criteria by specifying something like
-ltolerance(1e-3)- instead of the default 1e-7; or rescale the weights
so that they sum up to, say, the sample size rather than the population size.
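The rescaling itself is mechanical; a minimal sketch of the idea (shown in Python with made-up weights; in Stata you would modify the weight variable the same way):

weights = [1520.0, 980.0, 2210.0]    # hypothetical survey weights
n = len(weights)
scale = n / sum(weights)             # i.e. divide by the mean weight
rescaled = [w * scale for w in weights]
print(sum(rescaled))                 # now sums to n, not the population total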
On Tue, Jul 27, 2010 at 5:30 PM, Paolina Medina
<carmencitamedina@gmail.com> wrote:
> Thank you both, very much.
> So this almost zero alpha, without a confidence interval can be taken
> to indicate that there is no overdispersion in the model?
> Here is my svyset statement and the complete output..
> I am using 52 regressors (including the constant), i really dont know
> how many are the design degrees of freedom... But in fact whenever i
> take any of these regressors i get a lot of troubles with convergence
> in the survey results (not concave or backed up) and i have to throw
> away many other regressors to get convergence again.
> Do you know anything i can do to fix this?
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
MathGroup Archive: September 2002 [00169]
RE: RE: Fill the space surrounded by two contour lines with different colors
• To: mathgroup at smc.vnet.net
• Subject: [mg36496] RE: [mg36458] RE: [mg36422] Fill the space surrounded by two contours lines with different colors
• From: "DrBob" <drbob at bigfoot.com>
• Date: Mon, 9 Sep 2002 00:29:41 -0400 (EDT)
• Reply-to: <drbob at bigfoot.com>
• Sender: owner-wri-mathgroup at wolfram.com
This might be a convenient way of defining color functions:
cfun[colors_List, brkPts_List] /; Length@colors == Length@brkPts :=
    Function[z, Which @@ Sequence@Flatten@Transpose[
        {Less[z, #] & /@ brkPts, colors}]]
colors = {White, RoyalBlue, White, Red, White};
brkPts = {-0.6, -0.42, 0.4, 0.6, Infinity};
ContourPlot[f[x, y], {x, 0, Pi}, {y, 0, Pi}, PlotPoints -> 30,
    ColorFunctionScaling -> False,
    ColorFunction -> cfun[colors, brkPts], Contours -> contourvalues];
colors = {Yellow, Peru, Salmon, Apricot, HotPink, Linen};
brkPts = {-0.6, -0.42, 0.2, 0.4, 0.6, Infinity};
Timing[ContourPlot[f[x, y], {x, 0, Pi}, {y, 0, Pi}, PlotPoints -> 30,
    ColorFunctionScaling -> False, ColorFunction -> cfun[colors, brkPts],
    Contours -> contourvalues];]
Bobby Treat
-----Original Message-----
From: David Park [mailto:djmp at earthlink.net]
To: mathgroup at smc.vnet.net
Subject: [mg36496] [mg36458] RE: [mg36422] Fill the space surrounded by two
contours lines with different colors
Jun Lin,
Here is an example.
Let's make a contour plot of this function.
f[x_, y_] := Sin[x]Sin[2y]
Let's specify the exact contours to use. I got rid of the 0. contour since
it is difficult to obtain in this plot.
contourvalues = Complement[Range[-1, 1, 0.2], {0.}]
{-1, -0.8, -0.6, -0.4, -0.2, 0.2, 0.4, 0.6, 0.8, 1.}
Now we define a ColorFunction for the plot. I actually colored two
bands to show how you can make a general color function to give each band a
desired color.
cfun[z_] := Which[
  -0.6 < z < -0.42, RoyalBlue,
  0.4 < z < 0.6, Red,
  True, White]
ContourPlot[f[x, y], {x, 0, Pi}, {y, 0, Pi},
PlotPoints -> 30,
ColorFunctionScaling -> False,
ColorFunction -> cfun,
Contours -> contourvalues];
Using the option ColorFunctionScaling -> False says that the z value will be
the actual value of f[x,y]. Otherwise, in general, it will be scaled between
0 and 1. It is easier to write a color function when z is the actual value
of the function.
David Park
djmp at earthlink.net
From: Jun Lin [mailto:jl_03824 at yahoo.com]
To: mathgroup at smc.vnet.net
I need to fill the space between two contour lines, C1 and C2, with
red color, and leave the other place white. What trick I have to use?
Any suggestion and advice will be appreciated.
Jun Lin
Asymptotic behavior of relatively nonexpansive operators in Banach spaces.
(English) Zbl 1010.47032
Let $K$ be a closed convex subset of a Banach space $X$, and let $F$ be a nonempty closed subset of $K$. The authors consider complete metric spaces of self-mappings of $K$ which fix all the points of $F$ and are relatively nonexpansive with respect to a given convex function $f$. The aim of this paper is to prove that, under quite mild conditions on $f$, strong convergence of the sequences ${\left\{{T}^{k}x\right\}}_{k=1}^{\infty }$ generated by relatively nonexpansive mappings is the rule and that weak, but not strong, convergence is the exception.
47H09 Mappings defined by “shrinking” properties
49M30 Other numerical methods in calculus of variations
52A41 Convex functions and convex programs (convex geometry)
Maximum Text Length
You may be working with lists in Excel. They may get a little wide and you might like to cut down the size of some of the columns. If you could just easily find where the longest values were so you
could cut them down in size.
Here are some handy formulas to help you do just that.
The exercise and the result are in the Text Length file.
We have a column of company names as part of a list. For simplicity, we will just show this column.
It is wider than we would like. We want to cut down the width of our overall list.
What is the longest company name? You may recall the formula for the length of cell A2 is:
=LEN(A2)
You may also recall one of the skills we have taught is working with arrays, which allows you to do calculations on an entire range at once. We gave this range a name ("company") and by using
an array, we can quickly generate the maximum length for the whole list.
Our formula converted for an array is:
=MAX(LEN(company))
The parts are:
• LEN(company)- to get the length of any cell in the range named "company"
• MAX(LEN(company))- get the maximum length
• {=MAX(LEN(company))}- save the formula as an array (by clicking Ctrl, Shift and Enter at the same time).
We played a trick to demonstrate an error. Usually if you try to save an array formula, but accidentally save it as a regular formula by clicking Enter only, you will get an error. That did not
happen in the above screen shot. Instead we got a meaningless number. When we save it properly (which you can tell by the brackets around the formula), we get the correct value of 65 for the maximum length.
It is good to know the maximum length. But can we find the cell with the maximum length?
{=MATCH(C3,LEN(company),FALSE)}
This formula is a little more involved; let's walk you through it:
• We use the MATCH function, which returns a row number
• C3- The value we are trying to match is the maximum length, which we already calculated in C3
• LEN(company)- the second argument in the MATCH function is the range being used. The range is not the company range (i.e. the names of the companies), but rather a range of the lengths of each
company name, hence LEN(company).
• FALSE- for the type of match (an exact match)
Now we know where to go to shorten the company name. But wouldn't it be nice to see it first before we go to that cell, especially if we have a long list? Here you go (assuming the row number from the MATCH formula sits in, say, C4):
=INDEX(company,C4)
For the solution, we use the INDEX function, which consists of:
• What is the range- in this case company
• What location in the range- in this case the row number
INDEX, as we covered in a past lesson, returns the corresponding value from that spot in the range. Note that this is a formula that by design looks at a range, so we do not need to save it as an array formula.
With that information, we can decide if we want to cut out the second part of the name (after "/") for example.
Play around with the file. Trim this one down. See what is the next longest. How much you can cut down the column width?
We consider this an advanced formula. Even if you forget it, you know where to go for the Excel Tips Index, where you can then link to this sheet and have the file handy to cut and paste the formula.
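If you ever need the same trick outside Excel, the three formulas map onto three short expressions. Here is a sketch in Python with a made-up list:

company = ["Acme Ltd", "Very Long Company Name / Division", "Bee Co"]

max_len = max(len(name) for name in company)              # MAX(LEN(company))
row = next(i for i, name in enumerate(company)
           if len(name) == max_len)                       # MATCH(...)
print(max_len, row, company[row])                         # INDEX(company, row)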
We hope this helps you manage your lists.
Science in Christian Perspective
On Cellular Automata and
the Origin of Life
John Byl
Department of Mathematical Sciences
Trinity Western University
Langley, B.C.
Canada V3A 6H4
From: PSCF 41 (March 1989): 26-29.
In a recent issue of this journal Robert C. Newman presented a very interesting account of self-reproduction in cellular automata.^1 He was concerned particularly with a simple self-reproduction
model that was developed by Christopher Langton.^2,3 Newman argues that Langton's device is at or near the minimum complexity for self-reproduction of any meaningful form. He then goes on to argue
that even for this very simplistic model of life, in the most favorable conditions, the chances of such a device occurring by chance are vanishingly small. Hence, since life itself is even more
complicated, this amounts to very strong evidence that life is designed.
In this paper, we present a self-reproducing automaton that shares the basic features of Langton's model but is much simpler.
Langton's Automaton
For a detailed description of Langton's automaton we refer the reader either to Newman's account or to Langton's papers. What follows here is only a brief sketch.
In designing a machine that could reproduce itself, Langton considered a two-dimensional array of cells, each cell being in one of eight possible states. The state of each cell at any time is
determined by the states of itself and its four nearest neighbors at the previous time-step.
Langton's device is shown in Figure 1. It essentially consists of a signal that contains the information necessary to
make a copy of itself, guided by two walls. The zero state (represented by a blank) is the quiescent state. States 1 and 2 guide the signal: 1 is an element of the data path, 2 is an element of the
wall protecting the signal. The remaining five states are used as signals. To specify the direction of the signal, the digit following a signal is set to state 0. When a signal approaches a junction,
it splits into two copies of itself, one along each path. The data path is lengthened by one unit when a 7-0 signal reaches the end; a left hand corner is made when two 4-0 signals hit the end in succession.
With these rules, and others for states 3, 5 and 6, the configuration shown in Figure 1 first extends its arm by six units. Then it turns left, adds another six units, turns left again, adds six more
units, and turns left a third time. Then it closes in on itself. States 5 and 6 are next applied to disconnect the new loop and to start the process over again. After 151 time steps we have two loops,
each of which starts to form a new loop. The loops continue to reproduce themselves until all the available space is used up.
Newman estimates the complexity of this device by considering only those cells in the initial configuration which are in a non-zero state (86 cells) and only those transition rules that yield a
non-zero state (190 rules). Assuming that all states are equally likely to arise by chance, he finds that the number of possible random combinations is 7^(86+190), or 2 x 10^233. Since we have four
possible rotations, this leads to a probability of one out of 5 x 10^232 that this automaton could arise by chance.
A Simple Automaton
The question arises whether the above automaton is indeed at or near the minimum possible complexity for self reproduction. In searching for simpler solutions, we will adhere to the criterion stated
by Langton: we should take seriously the "self" of "self-reproduction," and require of a configuration that the construction of a copy should be actively directed by the configuration itself.^2
Thus, we rule out trivial cases of "reproduction" that are generated solely by the transition rules. For example, we could construct a configuration where a cell could have one of two states (0 or 1)
with two transition rules: a 0 surrounded by three 0's and a 1 becomes a 1, and a 1 surrounded by three 0's and a 1 becomes a 0. Then, starting with a 1 in a field of 0's, the 1 will appear to reproduce
itself (see Figure 2). But this we do not consider as self-reproduction.
As stressed by Langton, we want to require that the responsibility for reproduction resides primarily with the parent structure. But not totally: the structure may take advantage of certain
properties of the "physics" of the interactions as this is represented by the transition rules.
Keeping these considerations in mind, we now present a simplification of Langton's automaton. The first modification is to eliminate the inner wall. The second simplification is to use just one 4-0
signal to make a left turn. The third change is to determine the direction of the signal not by an x-0 combination, but by the orientation of each cell with regard to that neighboring cell having the
lowest state. In order to ensure that this neighbor will normally be a segment of the outer wall, we assign the blank quiescent cells a numerical value of 7. Then we specify that, unless otherwise
stated, the successor to states 3, 4, and 6 will be the state of
the nearest cell clockwise from the smallest neighbor. The default for states 1, 2, 5, and 7 will be that the previous state remains unaltered.
The functions of the various states are as follows: state 2 refers to a segment of the wall, state 1 represents the segment of the wall at a junction (this is needed to keep the signal cycling within
the configuration), state 3 defines the data path, 6 is used to add a unit to the data path, 4 forces a left turn, and state 5 is used to close the new loop and to initiate the formation of a further
copy. Figure 3 shows a signal (i.e., 6634) cycling inside the wall of a small configuration.
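The default successor rule lends itself to a few lines of code. Below is a minimal sketch of just that rule (in Python rather than the article's QuickBasic; the specific non-default transitions are omitted, and the clockwise neighbor ordering is an assumption made for illustration):

def next_state(state, neighbors):
    # neighbors are given clockwise as (north, east, south, west);
    # the quiescent blank counts as state 7.
    if state in (3, 4, 6):
        # Successor is the state of the nearest cell clockwise from
        # the smallest neighbor (normally a wall segment).
        order = list(neighbors)
        i = order.index(min(order))
        return order[(i + 1) % 4]
    # States 1, 2, 5 and the blank (7) keep their previous state
    # unless a more specific transition rule applies.
    return state

print(next_state(3, (7, 7, 2, 3)))   # smallest is the wall (2); clockwise from it: 3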
The numbering in this model is somewhat different from that used by Langton. This is partly due to the fact that the numbering is used to derive the orientation, and partly because we require only 7
states rather than the 8 used by Langton.
With these modifications it is possible to construct a simple initial configuration with only 12 cells which reproduces itself after 25 time-steps, as shown in Figure 4. The program used (based on
Newman's program and written in
Quickbasic 4.0) is listed in the appendix.
After 25 steps, the original configuration and the daughter have the same form as the initial array. Then, as in the case of Langton's automaton, the daughter forms a new copy toward the right, while
the original has turned 90 degrees and makes a copy toward the top of Figure 4. The process continues until all the available space is covered with copies.
The total number of transition rules used (see the program) is 36 plus the 7 default rules, for a total of 43. Applying the same Newman calculation as above, the number of random combinations of 12
cells and 43 rules of 7 possible states is 6^(12+43), or 6 x 10^42. Since there are at least 2 acceptable initial arrays (see time frames 0 and 24 in Figure 4) with 4 rotations each, the resultant
probability is one out of 8 x 10^41.
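The combinatorial count is easy to check directly (assuming, as in the text, 6 possibilities for each of the 12 cells and 43 rules, and 8 acceptable configurations):

total = 6 ** (12 + 43)
print(f"{total:.1e}")       # about 6.3e+42 random combinations
print(f"{total / 8:.1e}")   # about 7.9e+41, i.e. odds of one in ~8 x 10^41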
The Origin of Life
Newman estimates the probability of life occurring by chance as follows. Suppose that both the cells and transition rules in Langton's automaton correspond to atoms and that the different states
refer to different elements (e.g., state 1 = carbon, state 2 = nitrogen, etc.). Then assume that all the atoms in a given volume of the universe are forming only 276-atom chains (86 cells plus 190
rules). Under the most favorable conditions, Newman estimates that there are in the entire universe at most 7 x 10^71 chains forming at a rate of 8 x 10^11 per second. The time to form 5 x 10^232
chains is then given by (5 x 10^232)/((7 x 10^71)(8 x 10^11)) = 10^147 seconds, or 3 x 10^139 years. On the basis of this immense timespan, Newman concludes that we have found very strong evidence that
life is designed.
I wonder, parenthetically, whether this calculation is realistic. The main issue is whether the transition rules should be included in the probability calculations. I agree with Newman that a great
part of the complexity is hidden in the transition rules. But should not at least a fraction of these rules be attributed to the operation of physical and chemical laws? Such laws can surely be
assumed to be fixed and not determined by chance.
However, even if we grant the validity of Newman's probability analysis, it is clear that his argument for design falls short. For on the basis of the simple (12 cell) automaton presented in this
paper, the above form of calculation leads to a timespan of only 5 x 10^-43 seconds.
It must be stressed that life is considerably more complex than the simple mechanisms discussed in this paper. In fact, I suspect that Newman is right in his conclusion that the chance occurrence of
life is virtually impossible. Yet, such conclusions must be founded on stronger evidence than that presented by Newman. Such evidence does exist: more sophisticated calculations indicate that, on the
basis of currently known physical laws, the probability of life arising spontaneously is extremely small.^4
^1Robert C. Newman, "Self-Reproducing Automata and the Origin of Life," Perspectives on Science and Christian Faith 40 (1988):24-31.
^2Christopher G. Langton, "Self-Reproduction in Cellular Automata," Physica 10D (1984):134-144.
^3Christopher G. Langton, "Studying Artificial Life With Cellular Automata," Physica 22D (1986):120-149.
^4See, for example the calculations of E. Argyle, "Chance and the Origin of Life," Origins of Life 8 (1977):287-298; H. P. Yockey, "A Calculation of the Probability of Spontaneous Biogenesis by
Information Theory," Journal of Theoretical Biology 67 (1977):377-398.
MAY/JUNE 2001 (Vol. 3, No. 3) pp. 40-41
1521-9615/01/$31.00 © 2001 IEEE
Published by the IEEE Computer Society
Guest Editors' Introduction: Tomorrow's Hardest Problems
In 1900, David Hilbert presented 23 problems at the International Congress of Mathematicians held in Paris. Hilbert's problems spanned the spectrum from rather trivial (Problem 3 on the equality of
the volumes of two tetrahedra) to the probably impossible (Problem 6 on the axiomatization of physics). Nonetheless, they challenged many of the leading scientists and mathematicians of the 20th
century and set the tone for mathematical research, especially in the early part of the century.
In Hilbert's day, there was much enthusiasm and optimism about the impact formal mathematical methods could have on science. The previous two centuries had witnessed unprecedented progress in
mathematics and physics—Newton, Gauss, Cauchy, Euler, Maxwell, and Poincaré, to name but a few, introduced and explored fundamental, new concepts that changed the way the world was viewed and the
tools that scientists used. The axiomatization of science and mathematics was a steamroller then, brought to a grinding halt only in the 1930s by Gödel's results on the incompleteness of any
axiomatic system sufficient to express simple arithmetic.
Today, much of our scientific enthusiasm and optimism lies in the power of computing, so it is wholly appropriate that this issue's theme articles address some of the computational challenges facing
science and engineering in the 21st century. The main challenges deal with complexity in various guises—although we can build increasingly powerful machines, some problems still remain out of reach
because of their intrinsic complexity.
Anthony Guttmann writes about fundamental conjectures in statistical mechanics and combinatorics that have no formal proofs even though significant numerical experimental data support them. This
raises questions of what we are willing to accept as proof and whether theorem-proving techniques will become powerful enough to resolve such conjectures in the future.
Jonathan and Peter Borwein remind us that computing should provide insight and not necessarily precise quantitative results. In lieu of successful resolution of the P = NP question, they argue that
we need to think of computing as a vehicle for better understanding. This requires better integration of the various tools we use for formulating and analyzing complex mathematical and scientific
Martin Haugh and Andrew Lo describe a natural class of problems in computational finance that are plagued by the "curse of dimensionality" and that currently remain outside the domain of practical
solution. Their examples illustrate the hard fact that complexity is not only intrinsic to the natural and mathematical world but to the world we have engineered and now have to cope with daily.
These articles and others scheduled to appear throughout the year will refocus enthusiasm for computing by demonstrating that because of complexity, computing might only point the way to an
answer—not provide the answer itself. Let us propose another challenge for computing in the 21st century.
Comparing the growth in computing power (as captured by Moore's Law, for example) with the growth in human performance (as measured by standardized test scores or even Olympic sports records) shows
that although human performance is flattening, machines are becoming increasingly more powerful. Maybe the two curves have already crossed in some areas such as chess playing. The biggest challenge
is that posed by a future in which computing power goes significantly beyond human neural processing power. Bill Joy articulated a view of that future in what is now dubbed the "Joy Hypothesis" (see the sidebar). Perhaps we could simply state the scientific challenge in this direction as "How can we build computer systems that go beyond the limitations of human thinking and yet still be able
to understand what they are doing?" Put another way, will computers still respect us in the 22nd century?
George Cybenko
is the Dorothy and Walter Gramm Professor of Engineering at Dartmouth College. He also served as this magazine's founding editor in chief. Contact him at gvc@dartmouth.edu.
Francis Sullivan
is the director of the Center for Computing Sciences in Bowie, Maryland. He is the editor in chief for this magazine. Contact him at fran@super.org.
ceil,floor,frac,truncate undocumented behavior
Michael Somos on Sun, 31 Oct 1999 17:11:15 -0500 (EST)
ceil,floor,frac,truncate undocumented behavior
In GP/PARI CALCULATOR Version 2.0.17 (beta)
gp> y=sum(n=-5,5,x^n)
%1 = (x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1)/x^5
gp> ceil(y)
%2 = x^5 + x^4 + x^3 + x^2 + x + 1
gp> floor(y)
%3 = x^5 + x^4 + x^3 + x^2 + x + 1
gp> frac(y)
%4 = (x^4 + x^3 + x^2 + x + 1)/x^5
gp> round(y)
%5 = (x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1)/x^5
gp> truncate(y)
%6 = x^5 + x^4 + x^3 + x^2 + x + 1
gp> ??truncate
truncate x and set e to the number of error bits. When x is in R, this means
that the part after the decimal point is chopped away, and e is the binary exponent of the difference between the
original and the truncated value (the "fractional part"). If the exponent of x
is too large compared to its precision (i.e. e > 0), the result is undefined
and an error occurs if e was not given.
Note a very special use of truncate: when applied to a power series, it
transforms it into a polynomial or a rational function with denominator a power
of X, by chopping away the O(X^k). Similarly, when applied to a p-adic number,
it transforms it into an integer or a rational number by chopping away the O(p^k).
Although there is some discussion in the documentation for truncate(), it
does not mention what applies to a rational function. The behavior of
ceil, floor, frac, and round in this case does not seem to be documented.
Shalom, Michael
Michael Somos <somos@grail.cba.csuohio.edu> Cleveland State University
http://grail.cba.csuohio.edu/~somos/ Cleveland, Ohio, USA 44115
Sub-super solutions for (p-q) Laplacian systems
In this work, we consider the system:
where $\Omega$ is a bounded region in $\mathbb{R}^N$ with smooth boundary $\partial\Omega$, $\Delta_p$ is the p-Laplacian operator defined by $\Delta_p u = \operatorname{div}(|\nabla u|^{p-2}\nabla u)$, $p, q > 1$, and $g(x)$ is a $C^1$ sign-changing weight function that
may be negative near the boundary. $f, h, a, b$ are $C^1$ non-decreasing functions satisfying $a(0) \ge 0$, $b(0) \ge 0$. Using the method of sub-super solutions, we prove the existence of a weak solution.
1 Content
In this paper, we study the existence of positive weak solution for the following system:
where $\Omega$ is a bounded region in $\mathbb{R}^N$ with smooth boundary $\partial\Omega$, $\Delta_p$ is the p-Laplacian operator defined by $\Delta_p u = \operatorname{div}(|\nabla u|^{p-2}\nabla u)$, $p, q > 1$, and $g(x)$ is a $C^1$ sign-changing weight function that
may be negative near the boundary. $f, h, a, b$ are $C^1$ non-decreasing functions satisfying $a(0) \ge 0$, $b(0) \ge 0$.
This paper is motivated by results in [1-5]. We shall show that system (1) with sign-changing weight function has at least one solution.
2 Preliminaries
In this article, we use the following hypotheses:
(A2) $\lim_{s\to\infty} f(s) = \lim_{s\to\infty} h(s) = \infty$.
Let $\lambda_p$, $\lambda_q$ be the first eigenvalues of $-\Delta_p$, $-\Delta_q$ with Dirichlet boundary conditions and $\phi_p$, $\phi_q$ the corresponding positive eigenfunctions with $\|\phi_p\|_\infty = \|\phi_q\|_\infty = 1$.
Let $m, \delta, \gamma, \mu_p, \mu_q > 0$ be such that
We assume that the weight function $g(x)$ takes negative values in $\Omega_\delta$, but is required to be strictly positive in $\Omega-\Omega_\delta$. To be precise, we assume that there exist positive constants $\beta$ and $\eta$ such that
$g(x) \ge -\beta$ on $\Omega_\delta$ and $g(x) \ge \eta$ on $\Omega-\Omega_\delta$. Let $s_0 \ge 0$ be such that $\eta a(s) + f(s) > 0$, $\eta b(s) + h(s) > 0$ for $s > s_0$, and
For $\gamma$ such that $\gamma^{r-1} t > s_0$, where $t = \min\{\alpha_p, \alpha_q\}$ and $r = \min\{p, q\}$, we define
We use the following lemma to prove our main results.
Lemma 1.1 [6]. Suppose there exist sub- and supersolutions $(\psi_1, \psi_2)$ and $(z_1, z_2)$, respectively, of (1) such that $(\psi_1, \psi_2) \le (z_1, z_2)$. Then (1) has a solution $(u, v)$ such that $(u, v) \in [(\psi_1, \psi_2), (z_1, z_2)]$.
3 Main result
Theorem 3.1. Suppose that (A1)-(A3) hold. Then for every $\lambda \in [A, B]$, system (1) has at least one positive solution.
Proof of Theorem 3.1. We shall verify that $(\psi_1, \psi_2)$ is a subsolution of (1), where
Since λ ≤ B then
then by (4)
A similar argument shows that
so we have
Then by (4) on we have
A similar argument shows that
We suppose that $\kappa_p$ and $\kappa_q$ are solutions of
respectively, and $\mu'_p = \|\kappa_p\|_\infty$, $\|\kappa_q\|_\infty = \mu'_q$.
For sufficiently large C,
Similarly, choosing C large so that
Hence by Lemma 1.1, there exists a positive solution $(u, v)$ of (1) such that $(\psi_1, \psi_2) \le (u, v) \le (z_1, z_2)$.
Authors' contributions
SH presented the main purpose of the article, and GAA contributed to reaching the conclusions. All authors read and approved the final manuscript.
1. Ali, J, Shivaji, R: Existence results for classes of Laplacian system with sign-changing weight. Appl Math Anal. 20, 558–562 (2007)
2. Rasouli, SH, Halimi, Z, Mashhadban, Z: A remark on the existence of positive weak solution for a class of (p, q)-Laplacian nonlinear system with sign-changing weight. Nonlinear Anal. 73, 385–389 (2010)
3. Ali, J, Shivaji, R: Positive solutions for a class of (p)-Laplacian systems with multiple parameters. J Math Anal Appl. 335, 1013–1019 (2007)
4. Hai, DD, Shivaji, R: An existence result on positive solutions for a class of semilinear elliptic systems. Proc Roy Soc Edinb A. 134, 137–141 (2004)
5. Hai, DD, Shivaji, R: An existence result on positive solutions for a class of p-Laplacian systems. Nonlinear Anal. 56, 1007–1010 (2004)
Change-of-Base Formula
August 8th 2008, 06:58 AM #1
Change-of-Base Formula
Use the change-of-base formula and a calculator to evaluate each logarithm.
(1) log_5 (18)
(2) log_π (sqrt{2})
Note: The symbol π stands for pi.
the change of base formula is usually used to change to log base 10 or log base e because those are the log keys available on most calculators.
to change from base "b" to base 10 or base e ...
$\log_b(x) = \frac{\log(x)}{\log(b)} = \frac{\ln(x)}{\ln(b)}$
get out your calculator and try it.
Are you...
In the questions given, b represents base 5 and pi and x represents 18 and sqrt{2}. Is this what you are saying?
Are you also saying that log(x)/log(b) is the same as written ln(x)/ln(b)?
Then I just plug and chug, right?
Should I then round off the answers to the second or third decimal places? Which one?
In the questions given, b represents base 5 and pi and x represents 18 and sqrt{2}. Is this what you are saying? Mr F says: In your question 1, b = 5 and x = 18. In your question 2, b = n and x =
sqrt{2}. I don't know where you have got pi from.
Are you also saying that log(x)/log(b) is the same as written ln(x)/ln(b)? Mr F says: Yes s/he is.
Then I just plug and chug, right? Mr F says: Yes.
Should I then round off the answers to the second or third decimal places? Which one? Mr F says: Impossible to answer. The original question should state what accuracy is required.
In fact, $\log_b(x) = \frac{\log_a (x)}{\log_a (b)}$ for any base a > 0 and $a \neq 1$.
Skeeter stated the formula for when you change to base 10 or base e - this is because you have to calculate a numerical value, and even a scientific calculator has only a log base 10 and a log base e button.
There is pi...
This is the exact question 2 as given in the textbook:
log_pi (sqrt{2})
How do you solve that when there is pi in the question itself?
$\pi$ is just a number ... treat it as you would any other constant.
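Worked through with the change-of-base formula (values rounded to four decimal places): $\log_5(18) = \frac{\ln 18}{\ln 5} \approx \frac{2.8904}{1.6094} \approx 1.7959$ and $\log_{\pi}(\sqrt{2}) = \frac{\frac{1}{2}\ln 2}{\ln \pi} \approx \frac{0.3466}{1.1447} \approx 0.3028$.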
Trying to understand ω-inconsistency
I am trying to understand ω-inconsistency in order to appreciate some of the subtleties of Gödel's incompleteness theorems. It seems to be such a weird and anti-intuitive concept.
Q1. Based on my reading, this is what I think it means, in the context of a theory T (in a language L) that includes the axioms of Robinson Arithmetic (I know the concept is probably more general
than that, but that seems to be all that needs to be considered to understand the relevance of ω-inconsistency to Gödel's two incompleteness theorems and the Gödel-Rosser Theorem):
T is ω-inconsistent if there exists at least one well-formed L-formula φ, with one free variable, such that:
1. T⊢∃x:¬φ(x)
2. for every natural number n: T⊢φ(n)
where the underlining denotes the representation of the number n in L (e.g. as 0 preceded by n S's in Peano arithmetic).
3. T⊬∀x:φ(x),
otherwise 1 above would not be true. 2 above sounds a lot like T⊢∀x:φ(x), but isn't, because the 'for every' is not part of a formula in L. This is the bit that was tripping me up for a while.
Is that correct?
Q2. Is including the axiom schema of induction sufficient to make a consistent theory T ω-consistent?
By the axiom schema of induction I mean the set of formulas:
[itex](\phi^x_0 \wedge (\phi\to\phi_{Sx}^{\ x}))\to\phi[/itex]
for every well-formed L-formula [itex]\phi[/itex]
It seems to me that it probably is, but I'm not certain.
If it is a sufficient condition, does that mean that any consistent theory that includes full Peano Arithmetic (by which I mean Robinson Arithmetic plus the axiom schema of induction) will be ω-consistent?
If not, is there an easily understandable counter-example? I am trying to imagine such a counter-example. It would have to be a theory in which there is a formula φ with one free variable, such that
for every number n we can prove φ(n) within T, but for which there is no induction (or other) proof of ∀x:φ(x) as a theorem within T. If we have such a theory, we can then just add 1 above as an
axiom and it is ω-inconsistent.
Q3. If the axiom schema is sufficient, is its inclusion also a necessary condition to make a consistent theory T ω-consistent?
I have a feeling that it's not, but it's no more than a feeling.
Thank you. | {"url":"http://www.physicsforums.com/showthread.php?t=646344","timestamp":"2014-04-18T23:28:14Z","content_type":null,"content_length":"28708","record_id":"<urn:uuid:fe35d62e-9bbe-4449-9b7c-3ca4f78a6ed9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
The limit process of the difference between the empirical distribution function and its concave majorant
Vladimir N. Kulikov and Hendrik P. Lopuhaä
(2004) Technical Report. EURANDOM, the Netherlands.
We consider the process $\hat F_n-F_n$, being the difference between the empirical distribution function $F_n$ and its least concave majorant $\hat F_n$, corresponding to a sample from a decreasing
density. We extend Wang's result on pointwise convergence of $\hat F_n-F_n$ and prove that this difference converges as a process in distribution to the corresponding process for two-sided Brownian
motion with parabolic drift. | {"url":"http://eprints.pascal-network.org/archive/00000787/","timestamp":"2014-04-21T12:11:12Z","content_type":null,"content_length":"5745","record_id":"<urn:uuid:495e324f-af4d-4cb8-9610-f3362583cd1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Current transformer
Current transformers or CT's are instrument transformers that convert a generally high primary current $I_p$ to a k-times lower secondary current $I_s$ that can be connected to standard measuring or
protection devices. The primary and secondary windings are galvanically separated and can be at different potential levels. The transformation ratio k of a current transformer is the ratio of the number of
secondary turns $N_s$ to the number of primary turns $N_p$, and is equal to the primary current $I_p$ over the secondary current $I_s$.
$k = \frac{N_s} {N_p} = \frac{I_p} {I_s}$
$I_s = I_p \cdot \frac{N_p}{N_s}$
$I_s = \frac {I_p} {k}$
Standards for current transformers
According to IEC
At the moment TC38 of the IEC is busy converting all the instrument transformers from the 60044-family to the new 61869 family with a general part and specific parts.
• IEC 60044-1 Consolidated Edition 1.2 (incl. am1+am2) (2003-02) TC/SC 38 Instrument transformers - Part 1: Current transformers
• IEC 60044-3 Edition 2.0 (2002-12) TC/SC 38 Instrument transformers - Part 3: Combined transformers
• IEC 60044-6 Edition 1.0 (1992-03) TC/SC 38 Instrument transformers - Part 6: Requirements for protective current transformers for transient performance
• IEC 60044-8 Edition 1.0 (2002-07) TC/SC 38 Instrument transformers - Part 8: Electronic current transformers
• IEC 61869-1 Edition 1.0 (2007-10) TC/SC 38 Instrument transformers - Part 1: General requirements
Other standard organisations
• IEEE Std C57.13-1993: IEEE Standard requirements for Instrument transformers
• Canada CAN3-C13-M83: Instrument transformers
• Australia AS 1675 Current transformers - Measurement and protection
• British Standard BS3938 Specifications for Current Transformers (Withdrawn and replaced by IEC 60044-1)
Functioning of a Current Transformer
Just like a normal voltage transformer, a CT has a primary winding, a secondary winding and a magnetic core. In the window-type and bushing-type CT's, the primary winding is reduced to one wire
passing through the round or square shaped core, accounting for 1 turn. The primary current $I_p$ will produce a magnetic field with induction B around the conductor. The magnetic induction B is
amplified by the core material with very high magnetic permeability µ and will produce a primary flux that will magnetise the core with cross section A and induces a secondary voltage $V_s$ in the
secondary winding with N turns.
$V_s = 4.44 \times 10^{-8} f N A B$
At the same time, an N times smaller voltage, opposed to the primary current, will be induced in the primary wire, creating a small extra resistance in the primary circuit. The induced secondary voltage
will drive the secondary current $I_s$ that will flow for the major part through the connected load $R_b$ and for a small part (the error current $I_e$) through the internal resistance and inductance. The
internal resistance and inductance represent the part of the current that is used to magnetise the core (inductive part) and to heat up the core material as iron losses. Actually the magnetising
current is taken from the primary side, but that would only make the calculation model more difficult and does not add any value. The secondary current $I_s$ will also produce a secondary
flux, opposite to the primary flux. The resulting flux in the CT core is a very small magnetising flux, so that the core does not saturate at normal operating currents. The secondary current $I_s$ will
be N times smaller than the primary current $I_p$.
$I_s = \frac{I_p} {N}$
The error current $I_e$ consists for the major part of a purely inductive part, the magnetising current
$I_m$ that can be seen on the magnetising curve of the CT, and a small resistive part $I_g$ that represents the iron losses. The magnetising current $I_m$ is proportional to the field strength H:
$I_m = \frac{l}{N} \times H$
The copper losses are represented in the series resistor $R_{CT}$.
General properties of a Current Transformer
The Primary current Ip
According to IEC 60044-1, the primary current I_p is standadise of the decadic series 1 - 1,25 - 1,5 - 2 - 2,5 - 3 - 4 - 5 - 6 - 7,5. When selecting a CT; the primary current of the CT must be at
least the maximum current of the line in which the CT will operate. When the current is bigger than the rated primary current of the CT, the windings will overheat, age faster and finally the
insulation will fail. According ANSI, the primary currents are fixed values; for single Ratio CT's Ip = 10; 15; 25; 40; 50; 75; 100; 200; 300; 400; 600; 800; 1200; 1500; 2000; 3000; 4000; 5000; 6000;
8000; 12000A.
The Secondary current Is
According to IEC the secondary current can be 0.5, 1, 2 or 5A. According to ANSI the secondary current is always 5A.
Dual or Multi-Ratio CT's
Dual ratio CT's exist in all standards, but only according to the ANSI standard are the ratios standardised. Note that for multi-ratio CT's, many primary currents are mentioned and only one secondary
current, but in reality there is only one primary connection and 5 secondary terminals that allow 10 different ratings.
Ratio k
As already mentioned, the most important property of the current transformer is the ratio k, which is both the ratio of secondary turns to primary turns and the ratio of primary current to secondary
current. Note that often the primary is only one turn; practically it's just the conductor passing through the core. Since a small amount of energy is necessary to magnetise the core and to
produce heat as iron loss in the core, the secondary output ampere-turns are a bit less than the primary ampere-turns. The difference in current is the error current or magnetising current. In case of
very critical CT's, ratio-turn correction is applied: remove some secondary turns so that the ratio is a bit lower and the output is thus a bit higher at rated current. Of course this can only be
applied when the CT meets all accuracy requirements after ratio-turn correction.
R[CT] The internal copper resistance
$R_{CT}$ is often called the secondary DC resistance at 75°C. Its value depends on the length and cross section of the secondary winding wire (Pouillet's law). So $R_{CT}$ also depends on the core
dimensions; a bigger core cross section implies a longer wire length per turn. The smaller $R_{CT}$, the more the current transformer approaches the ideal current source.
The accuracy of a CT is given by it's "class". The division into accuracy classes depends on the type of CT; we mainly distinguish measuring class CT's and Protection class CT's who are defined quite
differently. We will discuss accuracy for both types further. Of course they both have a primary current I_p, a secondary current I_s and a ratio k. From these 3 parameters we can define some
important property's related to accuracy.
• The primary current vector Ip.
• The secondary current vector Is that is here represented k times larger to be able to compare them and to have an idea of the error current. In case the error would be 0, both vectors I_p and
k.I_s would be identical.
• The total error vector (composite error) can be seen as the composition of:
□ an amplitude error (ratio error), expressed in % and
□ an angle error, expressed in radians or seconds.
Note that for protection CT's, the angle error is disregarded and only the total composite error is given in %. When examining the equivalent diagram, one would easily conclude that the error current
can only be the magnetising current of the CT. Indeed, normally the magnetising current is very low but at the saturation point of the core, 50% increase in magnetising current produces only 10%
extra secondary voltage so at saturation the error current rises quickly. Therefore, the property's accuracy and saturation of the core are closely linked. Hense the error vector is allways a
reducion in secondary output current; negative error. Positive error is only possible by ratio-turn correction.
Load, Rated load and Burden
A voltage transformer is unloaded when the secondary terminals are open; it behaves like a normal voltage source. A current transformer is just the opposite and is unloaded with the secondary
terminals short-circuited. Stronger still: when the secondary terminals of a CT are open, there is no secondary flux to oppose the primary flux and the core goes to positive saturation on the positive
current sine and to negative saturation on the negative current sine. The induced secondary voltage is proportional to -N.dφ/dt, and from -Vsat to +Vsat is a huge voltage. One might also conclude
that the current transformer is raising the voltage in trying to drive the secondary current through the open terminals. The insulation of the CT is not calculated for this situation; it will
destroy the CT secondary winding and may cause fire at the terminals and high-voltage injury. The nominal load of a CT is the rated resistive burden R[B], expressed in VA. The correct resistance can be
calculated with the formula below:
$P = R_B \cdot I_s^2$
$R_B = \frac{P}{I_s^2}$
Example: A 50VA CT with rated secondary current of 5A is designed for a connected load of 50VA/5² = 2 Ohm. Measuring transformers are tested at rated load and at 1/4 of the rated load, so this CT
should be loaded within these limits to be sure the accuracy is within specification.
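As a quick illustration of the burden formula, the minimal C sketch below computes R_B; the 50VA / 5A figures are just the worked example above, not a recommendation:

```c
#include <stdio.h>

/* Rated burden resistance of a CT: R_B = P / Is^2 */
int main(void) {
    double P  = 50.0;   /* rated burden in VA (example value) */
    double Is = 5.0;    /* rated secondary current in A */
    printf("R_B = %.2f Ohm\n", P / (Is * Is));   /* prints R_B = 2.00 Ohm */
    return 0;
}
```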
The use and specification of Current Transformers
Current transformers are used to measure high currents, higher than 5A. So the most important parameter in defining a CT is indeed the ratio, which gives us the magnitude of the primary current and the
secondary current. But for the remaining specifications of the current transformer, the purpose of the CT is needed, since measuring CT's and protection CT's require different specifications. Indeed,
there are two major groups of current transformers:
• Protection current transformers
• Measurement current transformers
Regarding specification, different standards have different ways of specifying CT's, but it all comes down to specifying core properties (saturation point or knee-point) and secondary wire properties
(R[CT]), although it may look totally different.
Protection CT's
Protection CT's:
• are meant to protect an electrical installation in case of overcurrent or short circuit, and their operating current range is above nominal current I[n], or more specifically from I[n] to ALF times
I[n]. It is important for the good functioning of the protection relays that the CT's are NOT saturated at ALF times rated current, where ALF is the ratio of the expected maximum fault current
over the rated current. It is thus important that the core material has a high saturation induction.
• their accuracy is not very high, but most important is that the accuracy in fault conditions is high enough. This can only be the case when the core is not saturated in case of a fault current.
Therefore their accuracy is best described with an accuracy limit and an accuracy limit factor (ALF). E.g. a 5P20 CT has an accuracy limit of 5% at 20 times rated current (the accuracy limit factor).
The accuracy of this CT at rated current is 1%.
• They will be connected to one or more protection relays
• according to the application, they can be defined in a few ways:
□ The standard IEC protection-class CT's are of class "P", which only takes the AC behaviour into account (IEC 60044-1)
□ Class PX CT's are defined by the position of the knee-point (the saturation point, i.e. knee-point voltage and magnetising current) and the secondary wire resistance R[CT].
□ Class PR CT's are defined like the PX CT's but they have a low remanence: less than 10%. Note that remanence in CT's can be 60-80%, which may cause quick saturation in case of a fault-current
DC offset in the remanent direction. A class PR CT can't have that problem.
□ CT's for transient response, class "TP", are defined by their connected load R[B], time constant T[S] and their overcurrent figure K[SSC]. These linearised CT's have air gaps in the core to
obtain extremely high saturation voltage and current.
Ex. A 5P10 CT at 10 times rated current has a maximum error of 5% and only 1% at nominal current. A 10P15 CT at 15 times rated current has a maximum error of 10% and 3% at nominal current.
Measurement CT's
• are designed to measure accurately within their normal operating range of 0 to I[n]. Therefore, the core material must have a high permeability (µ-metal) so that the magnetising current is low.
• Measurement CT's are often used for billing of electrical power consumption, and their accuracy determines a lot of money.
• For the protection of the measuring instruments in case of a fault current, it is favourable that for currents far above rated current I[n] the core is saturated and the output lowers, so that the
fault current through the meter is only a part of the expected current through the meter. This is expressed by the instrument security factor FS. Of course, the dilemma is that the CT must be
accurate at I[n] (and 1.2 x I[n]) but at, for instance, 5 times rated current (FS 5) the CT may be saturated by at least 10%.
• The accuracy of a measurement CT is given by its accuracy class, which corresponds to the error % at rated current and at 1.2 times rated current I[n]. The standard accuracy classes according to IEC
are class 0.2, 0.5, 1, 3 and 5. For classes 3 and 5, no angle error is specified. The classes 0.2S and 0.5S have their accuracy shifted toward the lower currents. This means that they have 5
measuring points instead of 4 (or 2 for classes 3 & 5).
• The accuracy of the CT must be within these limits at the given currents, both at rated load and at 1/4 of the rated load. A measurement CT that is not loaded is therefore not necessarily accurate!
Ratio-turn correction may have been applied to get the CT ratings within spec, and then not loading gives a higher error.
How to lie with statistics
Finished reading this incredible book 'How to Lie with Statistics' by Darrell Huff. It's been a while since I read a book – hectic work schedules at the office coupled with preparations for joining a full
time MBA program this summer have kept me really busy. The book itself is a part of the reading list given to me as part of my pre-MBA coursework.
The book, as the name suggests, talks about how statistics can be (and are) used as a tool of deception – for distorting facts in a way that suits an interested party. The book has been
sectioned into 10 chapters, most of which introduce one or more ways in which statistics are used to give the illusion of a reality that is different from what really exists. The write-up is full of
examples – from magazines, newspapers, public reports and surveys – to illustrate the use of these tools and to help the reader understand what to look out for when he/she comes across an impressive
statistic. The last chapter of the book is dedicated entirely to helping the reader understand how to critically evaluate a beautiful picture painted with the help of statistics.
Here’s a look at what all the book talks about:
The Sample with the Built-in Bias
Would you believe a doctor if he said that most of the people in the world are ill because, of all the people he has met in the last year, most were? You wouldn't, right? Being a doctor he obviously
meets more people who're not keeping well than those who are perfectly healthy. However, most people do tend to believe a lot of statistics before trying to verify whether the sample of the population, on
which the statistic is based, accurately represents the said population. The writer provides multiple examples of how biased samples lead to inaccurate results. On occasion the bias is introduced to
obtain results that suit an involved party.
The Well-Chosen Average
You learned about mean, median, and mode during school, didn't you? Then why is it that when you read the word Average in a report or ad you don't stop to ask which of those three is being talked
about? Wait, think for a second: when discussing salaries, what are you more interested in, the median (the figure that accurately tells you half the people take home more money than the said amount and
the remaining half take home less salary than the said amount) or the mean (a figure that is hardly representative of the state of salaries among the said group, as it can easily be brought up or down
by the extremely large or extremely low salaries that a few might be drawing)?
The Little Figures That Are Not There
Ever wondered how a toothpaste brand can claim that using their product helps reduce the chances of tooth decay by so-and-so per cent without getting sued? They do it, so it has to mean that the results
are genuine, right? This chapter of the book deals with such mysteries: how interested parties at times withhold important information to make the statistics seem different from what they really are.
Much Ado about Practically Nothing
A difference is a difference only if it makes a difference. This is what the author explores in this chapter. He introduces the concepts of errors and ranges to show that at times differences between
numbers are made to mean more than they do.
The Gee-Whiz Graph
Statistics are mere numbers and most people are bound to find them boring. However, if you plot these numbers against the two axes of a graph, the result is an interesting diagram. Now, how does one
make this interesting diagram even more appealing? If only there were a way to make the red line climb (or drop) much more sharply. Aha! Now that's an idea!
The One-Dimensional Picture
So graphs can help statisticians attract the attention of the general folk towards (only) the aspects they want to. How do you dramatize further? Pictures! This isn't misleading, you say. Well,
consider this: to show that the per capita income of people in Region A is twice as much as that of people in Region B, I draw 2 pictures of money-bags labelled A and B, with bag A twice as tall
as bag B. Interestingly, to make bag A twice as tall while making it look similar in shape to bag B, I make it twice as wide as well. Most people who look at the diagram will imagine
that bag A is also twice as deep as bag B. What all this means is that the volume of bag A is 8 times that of bag B (it can hold 8 times as much money as bag B). :-) A simple trick to fool my readers
into believing something without even saying it.
The Semiattached Figure
If you can't prove something then demonstrate something else and pretend they're the same thing. That's how the author begins this chapter, and frankly that's all that he talks about. The examples
will amaze you! Here's one of the more basic ones: more people were killed by airplanes last year than in 1910; hence, modern planes are more dangerous. Ridiculous logic, you'd say. The statistic
might be in place but there's a big assumption that separates the statistic and the conclusion (assumption: the same number of planes fly in the skies today as in 1910). It was easy to spot in this
case, but is it always this easy? ;-)
Post Hoc Rides Again
A friend once told me that he had read somewhere that high heels are popular among ladies when the stock market’s doing well and flat shoes are more popular when the stock market’s not doing well.
Interesting fact; however, could one conclude that if one somehow found a way to make high heels popular among ladies, he/she could make the stock markets perform well? The author explores similar
correlations in this chapter and brings out four kinds of fallacies: correlation by chance, correlation where it's impossible to detect whether A is causing B or otherwise, correlation where neither
A nor B is causing the other (both are being caused by C), and the correlation that's inferred to hold true beyond the data with which it has been demonstrated.
How to Statisticulate
In this chapter the author introduces Statisticulation: statistical manipulation. He talks of deception through the decimal, through percentages, by adding things that don't really add up, through
percentiles, and through averages.
How to Talk Back to a Statistic
In this final chapter of the book the author shifts focus from revealing statistical deceptions to revealing the secrets to how statistical deceptions can be revealed. He shares 5 questions that can
act as effective devices for critically examining statistics:
1. Who says so?
2. How does he know?
3. What’s missing?
4. Did someone change the subject?
5. Does it make sense?
Read to find out how you can make use of these questions to talk back to a statistic.
2 responses to “How to lie with statistics”
1. Nice article. Really inspired me to pick up the book :)
2. I read this book for my AP Stat class this summer. It's very insightful and the author says everything in a very easy-to-understand manner. I really enjoyed it, and I think everyone should read
this book!
Concerning Formulas of the Types A → B ∨ C, A → (Ex)B(x)
- ANNALS OF PURE AND APPLIED LOGIC, 1991
Cited by 374 (108 self)
A proof-theoretic characterization of logical languages that form suitable bases for Prolog-like programming languages is provided. This characterization is based on the principle that the
declarative meaning of a logic program, provided by provability in a logical system, should coincide with its operational meaning, provided by interpreting logical connectives as simple and fixed
search instructions. The operational semantics is formalized by the identification of a class of cut-free sequent proofs called uniform proofs. A uniform proof is one that can be found by a
goal-directed search that respects the interpretation of the logical connectives as search instructions. The concept of a uniform proof is used to define the notion of an abstract logic programming
language, and it is shown that first-order and higher-order Horn clauses with classical provability are examples of such a language. Horn clauses are then generalized to hereditary Harrop formulas
and it is shown that first-order and higher-order versions of this new class of formulas are also abstract logic programming languages if the inference rules are those of either intuitionistic or
minimal logic. The programming language significance of the various generalizations to first-order Horn clauses is briefly discussed. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=9253619","timestamp":"2014-04-16T20:34:13Z","content_type":null,"content_length":"13183","record_id":"<urn:uuid:43a8e30c-64a7-4ed8-844a-e4087b9bdb0b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of outer hexagon in circle
May 15th 2012, 06:49 PM #1
Area of outer hexagon in circle
Consider a circle with radius r. How do I work out the area of the circumscribed hexagon? I have managed to work out the area of the inner one to be $3a\sqrt{r^2-a^2}$.
May 15th 2012, 07:21 PM #2
Re: Area of outer hexagon in circle
Inner hexagon:
You can see that the radius of the circle is the side length of each equilateral triangle, so evaluating the area of each equilateral triangle using Heron's Formula:
\displaystyle \begin{align*} s &= \frac{r + r + r}{2} \\ &= \frac{3r}{2} \\ \\ A &= \sqrt{s(s - r)(s - r)(s - r)} \\ &= \sqrt{\frac{3r}{2}\left(\frac{3r}{2} - r\right)\left(\frac{3r}{2} - r\
right)\left(\frac{3r}{2} - r\right)} \\ &= \sqrt{\frac{3r}{2} \cdot \frac{r}{2} \cdot \frac{r}{2} \cdot \frac{r}{2}} \\ &= \sqrt{\frac{3r^4}{16}} \\ &= \frac{\sqrt{3}\,r^2}{4}\end{align*}
So the area of the hexagon is
\displaystyle \begin{align*} 6\cdot \frac{\sqrt{3}\,r^2}{4} &= \frac{3\sqrt{3}\,r^2}{2}\textrm{ units}^2 \end{align*}
Outer hexagon:
You can see that the radius of the circle is the height of each equilateral triangle, and it splits each equilateral triangle into two triangles with sides in the ratio \displaystyle \begin{align*} 1, \sqrt{3}, 2 \end{align*}. So if the height of each triangle is \displaystyle \begin{align*} r \end{align*}, then the base is \displaystyle \begin{align*} \frac{2r}{\sqrt{3}} \end{align*}.
Therefore the area of each triangle is \displaystyle \begin{align*} \frac{1}{2}\cdot r \cdot \frac{2r}{\sqrt{3}} = \frac{r^2}{\sqrt{3}}\end{align*}, and therefore the area of the hexagon is
\displaystyle \begin{align*} 6\cdot \frac{r^2}{\sqrt{3}} &= \frac{6r^2}{\sqrt{3}} \\ &= \frac{6\sqrt{3}\, r^2}{3} \\ &= 2\sqrt{3}\,r^2 \textrm{ units}^2\end{align*}
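Sanity check: the circle's area \displaystyle \begin{align*} \pi r^2 \approx 3.142\,r^2 \end{align*} lies between the inscribed hexagon's \displaystyle \begin{align*} \frac{3\sqrt{3}}{2}\,r^2 \approx 2.598\,r^2 \end{align*} and the circumscribed hexagon's \displaystyle \begin{align*} 2\sqrt{3}\,r^2 \approx 3.464\,r^2 \end{align*}, and the ratio of circumscribed to inscribed area is exactly \displaystyle \begin{align*} 2\sqrt{3} \div \frac{3\sqrt{3}}{2} = \frac{4}{3} \end{align*}.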
Debt grows exponentially while physical economy grows logistically
I found this concept described at George Washington’s Blog in The Elephant In The Room: Debt Grows Exponentially, While Economies Only Grow In An S-Curve. During the dot com era the concept was
referred to as "trees don't grow to the sky". Hubbert of Peak Oil fame demonstrated that, given a finite amount of a resource (oil, in his case), production can temporarily grow
exponentially but peters out as the resource is depleted. The graphical representation of this is the logistic curve or "S-Curve":
The article linked above quotes economist Michael Hudson:
Hudson noted in 2004: Mesopotamian economic thought c. 2000 BC rested on a more realistic mathematical foundation than does today's orthodoxy. At least the Babylonians appear to have recognized
that over time the debt overhead became more and more intrusive as it tended to exceed the ability to pay, culminating in a concentration of property ownership in the hands of creditors.
Babylonians recognized that while debts grew exponentially, the rest of the economy (what today is called the “real” economy) grows less rapidly. Today’s economists have not come to terms with
this problem with such clarity. Instead of a conceptual view that calls for a strong ruler or state to maintain equity and to restore economic balance when it is disturbed, today’s general
equilibrium models reflect the play of supply and demand in debt-free economies that do not tend to polarize or to generate other structural problems.
And Hudson wrote last year:
Every economist who has looked at the mathematics of compound interest has pointed out that in the end, debts cannot be paid. Every rate of interest can be viewed in terms of the time that it
takes for a debt to double. At 5%, a debt doubles in 14½ years; at 7 percent, in 10 years; at 10 percent, in 7 years. As early as 2000 BC in Babylonia, scribal accountants were trained to
calculate how loans principal doubled in five years at the then-current equivalent of 20% annually (1/60^th per month for 60 months). “How long does it take a debt to multiply 64 times?” a
student exercise asked. The answer is, 30 years – 6 doubling times.
No economy ever has been able to keep on doubling on a steady basis. Debts grow by purely mathematical principles, but “real” economies taper off in S-curves. This too was known in Babylonia,
whose economic models calculated the growth of herds, which normally taper off. A major reason why national economic growth slows in today’s economies is that more and more income must be paid to
carry the debt burden that mounts up. By leaving less revenue available for direct investment in capital formation and to fuel rising living standards, interest payments end up plunging economies
into recession. For the past century or so, it usually has taken 18 years for the typical real estate cycle to run its course.
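As a quick check of the doubling times Hudson cites: under compound interest a debt doubles in t = ln 2 / ln(1 + r) years. The C sketch below is illustrative only; it prints roughly 14.2, 10.2 and 7.3 years for 5%, 7% and 10%, matching his round figures. Note that the Babylonian 20% example in the quote is simple interest, where sixty monthly accruals of 1/60 of the principal double it in five years.

```c
#include <stdio.h>
#include <math.h>

/* Compound-interest doubling time: t = ln(2) / ln(1 + r).
 * Compile with -lm on most systems. */
int main(void) {
    double rates[] = {0.05, 0.07, 0.10};
    for (int i = 0; i < 3; i++)
        printf("%2.0f%% -> %4.1f years\n",
               100.0 * rates[i], log(2.0) / log(1.0 + rates[i]));
    return 0;
}
```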
Hudson calls for a debt jubilee, and points out that periodic debt jubilees were a normal part of the Sumerian, Babylonian and ancient Jewish cultures. Economist Steve Keen and economic writer
Ambrose Evans-Pritchard also call for a debt jubilee.
Countries that incur overwhelming debts typically debase/devalue the currency as a form of debt repudiation, whether intentionally or not. | {"url":"http://wasatchecon.wordpress.com/2010/10/31/debt-grows-exponentially-while-physical-economy-grows-logistically/","timestamp":"2014-04-21T09:52:58Z","content_type":null,"content_length":"49980","record_id":"<urn:uuid:b92cca2a-eeaa-479d-a6ac-984b3f01d7e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
9.6.4 Read Column Data Routines
Two types of routines are provided to get the column data which differ in the way undefined pixels are handled. The first set of routines (ffgcv) simply return an array of data elements in which
undefined pixels are set equal to a value specified by the user in the 'nullval' parameter. If nullval = 0, then no checks for undefined pixels will be performed, thus increasing the speed of the
program. The second set of routines (ffgcf) returns the data element array and in addition a logical array of flags which defines whether the corresponding data pixel is undefined. See Appendix B for
the definition of the parameters used in these routines.
Any column, regardless of it's intrinsic data type, may be read as a string. It should be noted however that reading a numeric column as a string is 10 - 100 times slower than reading the same column
as a number due to the large overhead in constructing the formatted strings. The display format of the returned strings will be determined by the TDISPn keyword, if it exists, otherwise by the data
type of the column. The length of the returned strings (not including the null terminating character) can be determined with the fits_get_col_display_width routine. The following TDISPn display
formats are currently supported:
Iw.m Integer
Ow.m Octal integer
Zw.m Hexadecimal integer
Fw.d Fixed floating point
Ew.d Exponential floating point
Dw.d Exponential floating point
Gw.d General; uses Fw.d if significance not lost, else Ew.d
where w is the width in characters of the displayed values, m is the minimum number of digits displayed, and d is the number of digits to the right of the decimal. The .m field is optional.
int fits_read_col_str / ffgcvs
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG firstelem,
LONGLONG nelements, char *nulstr, > char **array, int *anynul,
int *status)
int fits_read_col_[log,byt,sht,usht,int,uint,lng,ulng, lnglng, flt, dbl, cmp, dblcmp] /
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG firstelem,
LONGLONG nelements, DTYPE nulval, > DTYPE *array, int *anynul,
int *status)
int fits_read_colnull_str / ffgcfs
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG firstelem,
LONGLONG nelements, > char **array, char *nullarray, int *anynul,
int *status)
int fits_read_colnull_[log,byt,sht,usht,int,uint,lng,ulng,lnglng,flt,dbl,cmp,dblcmp] /
(fitsfile *fptr, int colnum, LONGLONG firstrow,
LONGLONG firstelem, LONGLONG nelements, > DTYPE *array,
char *nullarray, int *anynul, int *status)
int fits_read_subset_[byt, sht, usht, int, uint, lng, ulng, lnglng, flt, dbl] /
(fitsfile *fptr, int colnum, int naxis, long *naxes, long *fpixel,
long *lpixel, long *inc, DTYPE nulval, > DTYPE *array, int *anynul,
int *status)
int fits_read_subsetnull_[byt, sht, usht, int, uint, lng, ulng, lnglng, flt, dbl] /
(fitsfile *fptr, int colnum, int naxis, long *naxes,
long *fpixel, long *lpixel, long *inc, > DTYPE *array,
char *nullarray, int *anynul, int *status)
int fits_read_col_bit / ffgcx
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG firstbit,
LONGLONG nbits, > char *larray, int *status)
int fits_read_col_bit_[usht, uint] / ffgcx[ui,uk]
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG, nrows,
long firstbit, long nbits, > DTYPE *array, int *status)
int fits_read_descript / ffgdes
(fitsfile *fptr, int colnum, LONGLONG rownum, > long *repeat,
long *offset, int *status)
int fits_read_descriptll / ffgdesll
(fitsfile *fptr, int colnum, LONGLONG rownum, > LONGLONG *repeat,
LONGLONG *offset, int *status)
int fits_read_descripts / ffgdess
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG nrows
> long *repeat, long *offset, int *status)
int fits_read_descriptsll / ffgdessll
(fitsfile *fptr, int colnum, LONGLONG firstrow, LONGLONG nrows
> LONGLONG *repeat, LONGLONG *offset, int *status)
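As an illustration of the typed read routines above, the short C sketch below reads 10 double values from column 2 of a hypothetical FITS table (the file name and column number are placeholders), substituting -999 for any undefined pixels as described for the ffgcv family:

```c
#include <stdio.h>
#include "fitsio.h"

int main(void) {
    fitsfile *fptr;
    int status = 0, anynul = 0;
    double data[10], nulval = -999.0;

    /* open the first table extension of the (hypothetical) file */
    if (fits_open_file(&fptr, "table.fits[1]", READONLY, &status)) {
        fits_report_error(stderr, status);
        return status;
    }
    /* read 10 doubles from column 2, starting at row 1, element 1;
       undefined pixels come back as nulval, and anynul is set if any
       were encountered */
    fits_read_col_dbl(fptr, 2, 1, 1, 10, nulval, data, &anynul, &status);
    if (status == 0)
        printf("first value = %g, anynul = %d\n", data[0], anynul);

    fits_close_file(fptr, &status);
    fits_report_error(stderr, status);
    return status;
}
```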
K. Zhu, F. Wu, H. Wang, Q. Zhu
Institute of Surveying and Mapping, Information Engineering University, Zhengzhou, China
The Li-Openshaw algorithm is a self-adaptive algorithm for generalizing linear features based on an objective natural law of generalization, and it can produce reasonable and faithful generalization results. An analysis of the algorithm's characteristics reveals two flaws: ① without saving local maximum points, the algorithm does not preserve the overall shape of the curve well; ② anomalies easily occur in point selection when there is more than one intersection between the SVO circle and the curve, and the coordinates of selected points must be calculated, which decreases simplification efficiency. According to the principle and purpose of line simplification, the Li-Openshaw algorithm is improved as follows: ① a new method of identifying bends using the relationship between points and lines is proposed, in order to find and save all local maximum points so as to keep the overall shape of the curve before simplification; ② the first approximate intersection point is found using the lines' index, and the point retained after generalization is the one on the curve nearest to the midpoint between the circle's center and the intersection point. Furthermore, evaluation figures such as simplification results, simplification time and point-compression ratio are given; in particular, the two algorithms are compared and assessed by a method of evaluating curves' shape and structure characteristics based on fractal theory. According to the experimental results, compared with the original algorithm, the improved algorithm keeps the overall shape of the curve better and increases simplification efficiency.
Vernon, CA Precalculus Tutor
Find a Vernon, CA Precalculus Tutor
...I am mainly focused on assisting middle school, high school and college students in Biology and Math, from Algebra to Calculus. I am a graduate of the University of Arizona with Biomedical
Engineering and Molecular Biology degrees. I am very good at math and science and excelled in those courses throughout my education.
14 Subjects: including precalculus, calculus, geometry, biology
...I have completed Intro to C++ Programming (PIC 10a) at UCLA with a B+. I took the CBEST a couple of years ago, just after finishing my degree in math for teaching. I took it on 4 hours of
sleep and still passed. It's basically a review of high school and general ed., so I would be happy to tutor test prep for this exam.
18 Subjects: including precalculus, English, calculus, physics
...At Oberlin we used the fixed-do system and the number system, so that is what I continue to use today. Having received a B.A. in Music Theory/History from Oberlin College, I am officially
trained to teach you about music. In particular I love giving music lessons on flute and saxophone, but also basic voice lessons and aural skills.
18 Subjects: including precalculus, chemistry, calculus, algebra 2
...I love Algebra 2 because the problems are tougher than Algebra 1, but the methods are very much the same. Unfortunately, Algebra 2 can get to the point where many schools ask students to
memorize equations or methods without asking them to understand them or prove that they are true. Often stud...
14 Subjects: including precalculus, calculus, trigonometry, geometry
...If you're just looking for some quick math tricks to get you through the end of the semester, well, I have some of those too! I'm happy to tutor at your house or at a library or cafe. Please
contact me to set up an appointment.
22 Subjects: including precalculus, Spanish, English, writing
(150 cm / 1 hr) x (10 mm / 1 cm) x (1 hr / 60 min) x (1 min / 60 sec). Work please! :D
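Assuming this is the standard factor-label conversion of 150 cm/hr to mm/sec, the units cancel in pairs (cm with cm, hr with hr, min with min), leaving mm/sec:
150 x 10 / (60 x 60) = 1500 / 3600 ≈ 0.417 mm/sec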
Rollingbay Math Tutor
Find a Rollingbay Math Tutor
...I enjoy helping students who "hate math" gain better understanding and I teach techniques meant to foster a stable foundation for future math. Whether you need help with all portions of the GED
or only a subject, I can help you prepare. I provide comprehensive and individualized GED tutoring that will ensure you pass.
36 Subjects: including SAT math, ACT Math, geometry, prealgebra
Professional tutor focusing on Math/Statistics, Chemistry, Physics and Computers. Personally scored 800 on both SAT Math & SAT Math II & 787 in Chemistry prior to attending CalTech. Have extensive
IT industry experience and have been actively tutoring for 2 years.
43 Subjects: including trigonometry, linear algebra, computer science, discrete math
...I transmit my love for the subject so that learning is fun and natural. Students see the need for math when I show them how math applies to real world situations. I instill rigor and emphasize
practice so moving ahead to new concepts is a breeze.
16 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I regularly tutor students in math through calculus, biology, English, and chemistry. I also coach students through the college application process and enjoy helping them write their personal
statement or essay. I've taught both beginning and intermediate SAT classes and also have much experience working with ESL students, both children and adults.
28 Subjects: including prealgebra, study skills, Korean, ESL/ESOL
...In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2. My favorite types of math are Trigonometry, Geometry, Algebra 1 and 2. I truly like
and understand math concepts and enjoy helping others understand the underlying principles.
26 Subjects: including algebra 1, ACT Math, probability, SAT math | {"url":"http://www.purplemath.com/rollingbay_wa_math_tutors.php","timestamp":"2014-04-17T01:18:56Z","content_type":null,"content_length":"23750","record_id":"<urn:uuid:68875e29-3187-4b4e-b167-ad4f33f7dd5b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relation between Almost simple Lie groups and semisimple Lie groups?
Hello everyone,
What is the relation between almost simple Lie groups and semisimple Lie groups? (Especially in the case of subgroups of $SO(2,n)$.)
Recall that:
Def1: A Lie group $G$ is said to be semisimple if its Killing form is non-degenerate.
Def2: A Lie group $G$ is said to be almost simple if every proper normal subgroup of $G$ either is finite or has finite index.
Thanks in advance.
A Lie group is semisimple if and only if its Lie algebra is semisimple, which in turn is equivalent to saying that the Lie algebra is a direct sum of simple ideals. This means that a Lie
group is semisimple if and only if it is locally isomorphic to a direct product of simple Lie groups. If the Lie group in question is connected and simply connected, it follows that it
then is a direct product of simple groups. A connected Lie group is simple if and only if its Lie algebra is simple, and this is equivalent to saying that all its normal subgroups are
discrete; one then shows that every normal subgroup must be central. With your definition, this means that every almost simple group is indeed simple, but not the other way round, as
the universal covering group of $SL_2({\mathbb R})$ shows. This is not the way a mathematician wants it.

The notion of almost simple groups is not used in Lie theory, but in the theory of algebraic groups; see for instance the book of Margulis. Indeed, if your Lie group is a linear algebraic
group, then its connected component is simple as a Lie group if and only if it is almost simple.
Two remarks: first, as you may have observed, being simple in Lie group theory is not quite the same as in abstract group theory: $SL_2(\mathbb{R})$ is simple, in spite of the
non-trivial centre; second, there is the issue of connectedness: $SO(2,n)$ (for $n>2$) is almost simple, as its connected component of identity $SO_0(2,n)$ is simple (and of index 2).
Due to the exceptional isomorphism $SO_0(2,2)\simeq SO_0(2,1)\times SO_0(2,1)$, the group $SO(2,2)$ is semi-simple but not almost simple. – Alain Valette Nov 1 '12 at 19:15
@Xogn: The context and definitions in the question and in your answer need more precision. The question is apparently posed over the reals, while the Lie algebra theory is developed
first over $\mathbb{C}$ and then adapted to the real case. Much of this is routine but needs to be specified. For instance, simple over $\mathbb{R}$ is not the same thing as simple over
$\mathbb{C}$. And as Alain points out, the notion of simplicity for abstract groups is another matter. – Jim Humphreys Nov 1 '12 at 23:55
The wikiFactor (wF) is a measure of the "importance of a wikisite", as the inventor of this measure, Carl McBride, put it in his paper on the topic.^[1]
Definition of the wikiFactor wF
In his paper, Carl McBride introduces the wF as follows:
The Hirsch h-index is defined as the number of papers, h, that have ≥ h citations. This simple metric has proved to be highly popular. For example, it is now included as one of the functions in the
ISI Web of Science.
The new metric proposed in this paper is that of the wikiFactor (wF). It is based on precisely the same style of measure as the h-index, but with two differences; the first is that it examines hits
on web pages rather than citations of a publication, and secondly, there is a factor of a thousand. In other words the wikiFactor is defined as the number of pages that have had ≥ 1000wF hits.
The wikiFactor of a wiki is the unique number wF such that there are wF pages on the wiki with at least 1000 * wF views.
This is easy enough to calculate, even manually, if you have a list of the pages sorted by the number of views: you only have to look at the first page n in the list with fewer than n * 1000 views.
Then, wF = n-1.
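That scan translates directly into code. The C sketch below is illustrative only (it is not part of McBride's paper): given view counts sorted in descending order, it returns the largest w such that the w-th page has at least 1000·w views, using the aStorehouseOfKnowledge figures from the example below.

```c
#include <stdio.h>

/* wikiFactor: largest w such that views[w-1] >= 1000*w,
 * for view counts sorted in descending order. */
int wiki_factor(const long *views, int npages) {
    int w = 0;
    while (w < npages && views[w] >= 1000L * (w + 1))
        w++;
    return w;
}

int main(void) {
    /* the aStorehouseOfKnowledge figures from Mar 16, 2010 */
    long views[] = {27741, 17889, 16995, 5340, 4743, 4484, 3967};
    printf("wF = %d\n", wiki_factor(views, 7));  /* prints wF = 4 */
    return 0;
}
```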
The less important the wiki, the easier the calculation. Take for instance aStorehouseOfKnowledge. A look at Special:Statistics on Mar 16, 2010 shows:
#  title                               views    threshold
1  Main Page                           27,741   ≥ 1,000
2  User talk:Philip J. Rayment         17,889   ≥ 2,000
3  User talk:Ruylopez                  16,995   ≥ 3,000
4  Talk:Evidence for God's existence    5,340   ≥ 4,000
5  Talk:Evolution                       4,743   < 5,000
6  Evolution                            4,484   < 6,000
7  MediaWiki:Common.css                 3,967   < 7,000

So, wF[aSK] = 4. That fits the definition: though there are four pages with more than 4,000 views, there aren't five pages with more than 5,000 views; thus, the biggest number wF such that there are wF pages with at least wF * 1000 views is 4.
What the Heck?
Why do we need it? It's really not that easy to measure the impact of a wiki: clearly Wikipedia is more important than Conservapedia, as Wikipedia trumps CP in every category other than blocked IPs.
But what about Citizendium and Conservapedia? What is the best benchmark to compare those two? The number of active editors? The number of articles? The mean number of views per article? The
statistics page of each wiki gives us an abundance of information, just try to compare these two.
The wikiFactor is another one, and it could just work.
WikiFactor of wikipedia.en
For en.wikipedia, the wikiFactor is difficult to calculate: a list of the most-viewed pages
doesn't exist; in fact, the view counter (which usually can be found at the bottom of each page) is turned off. Fortunately, there are external tools
which allow the curious to look at the number of views for a page - or the top 1000 pages - albeit on a monthly basis.
Using this information for Dec 2009, one finds that wF[wp] is at least 300. And just estimating that the overall page views of the most popular several thousand pages are at least fifty times the
page views of a single month, leads to a wF[wp] of more than 4,000.
That is indeed enormous!
[edit] Critique
Why ask for wF * 1000 views? If possible, one should give wF[h], too, with:
The h-wikiFactor of a wiki is the biggest number wF[h] such that there are wF[h] pages on the wiki with at least wF[h] views.
For the calculation of wF[h], one needs to look at more pages, so it is not that easy. But it seems to be more elegant.
References
1. Carl McBride, wikiFactor: a measure of the importance of a wiki site, February 20, 2009 (PDF)
History Of The Theory Of Numbers - I
equal to 1 + 1/2 + 1/3 + ... + 1/(p−1) is divisible by p², a result first proved by Wolstenholme in 1862. Sylvester stated in 1866 that the sum of all products of n distinct numbers chosen from 1, 2, ..., m is divisible by each prime > n+1 which is contained in any term of the set m−n+1, ..., m, m+1. There are various theorems analogous to these.

In Chapter IV are given properties of the quotient (u^(p−1) − 1)/p, which plays an important rôle in recent investigations on Fermat's last theorem (the impossibility of x^p + y^p = z^p if p > 2), the history of which will be treated in the final chapter of Volume II. Some of the present papers relate to (u^φ(n) − 1)/n, where n is not necessarily a prime.

While Euler's φ-function was defined above in order to state his generalization of Fermat's theorem, its numerous properties and generalizations are reserved for the long Chapter V. In 1801 Gauss gave the result that φ(d₁) + ... + φ(d_k) = n, if d₁, ..., d_k are the divisors of n; this was generalized by Laguerre in 1872, H. G. Cantor in 1880, Busche in 1888, Zsigmondy in 1893, Vahlen in 1895, Elliott in 1901, and Hammond in 1916. In 1808 Legendre proved a simple formula for the number of integers ≤ n which are divisible by no one of any given set of primes. The asymptotic value of φ(1) + ... + φ(G) for G large was discussed by Dirichlet in 1849, Mertens in 1874, Perott in 1881, Sylvester in 1883 and 1897, Cesàro in 1883 and 1886-8, Berger in 1891, and Kronecker in 1901. The solution of φ(x) = g was treated by Cayley in 1857, Minin in 1897, Pichler in 1900, Carmichael in 1907-9, Ranum in 1908, and Cunningham in 1915. H. J. S. Smith proved in 1875 that the m-rowed determinant, having as the element in the ith row and jth column any function f(δ) of the greatest common divisor δ of i and j, equals the product of F(1), F(2), ..., F(m), where

F(m) = Σ_{d|m} μ(m/d) f(d).

In particular, F(m) = φ(m) if f(δ) = δ. In several papers (pp. 128-130) Cesàro considered analogous determinants. The fact that 30 is the largest number such that all smaller numbers relatively prime to it are primes was first proved by Schatunowsky in 1893.

A. Thacker in 1850 evaluated the sum φ_k(n) of the kth powers of the integers ≤ n which are prime to n. His formula has been expressed in various symbolic forms by Cesàro and generalized by Glaisher and Nielsen. Crelle had noted in 1845 that φ₁(n) = ½ n φ(n). In 1869 Schemmel considered the number of sets of n consecutive integers each < m and prime to m. In connection with linear congruence groups, Jordan evaluated the number of different sets of k positive integers ≤ n whose greatest common divisor is prime to n. This generalization of Euler's φ-function has properties as simple as the latter function and occurs in many papers under a variety of notations. It in turn has been generalized (pp. 151-4).
Method and System for Database Storage Management
Patent application title: Method and System for Database Storage Management
Inventors: Abhijeet Mohapatra (Stanford, CA, US) Michael Genesereth (Palo Alto, CA, US)
Assignees: The Board of Trustees of the Leland Stanford Junior University
IPC8 Class: AG06F1730FI
USPC Class: 707693
Class name:
Publication date: 2013-04-18
Patent application number: 20130097127
Embodiments of the present invention relate to run-length encoded sequences and supporting efficient offset-based updates of values while allowing fast lookups. In an embodiment of the present
invention, an indexing scheme is disclosed, herein called count indexes, that supports O(log n) offset-based updates and lookups on a run-length sequence with n runs. In an embodiment, count indexes
of the present invention support O(log n) updates on bitmapped sequences of size n. Embodiments of the present invention can be generalized to apply to block-oriented storage systems.
A computer-implemented method for performing database storage management, comprising: encoding a first sequence of information, wherein the encoded first sequence of information includes a
compression of at least one successive repetition of values in the first sequence of information; grouping the encoded first sequence of information into at least one first level of sets having at
most a first predetermined number of nodes and each having an associated count; and generating at least one hierarchical count of the first level of sets wherein nodes at each level of hierarchy
maintain a sum of nodes for child levels of hierarchy.
The method of claim 1, wherein the first predetermined number of nodes is two nodes.
The method of claim 1, wherein each of the at least one first level of sets includes a first predetermined number of nodes.
The method of claim 1, wherein at least one of the at least one first level of sets has less than a first predetermined number of nodes.
The method of claim 1, further comprising looking up at least one item of the first sequence of information by traversing the at least one hierarchical count of the first level of sets.
The method of claim 5, wherein looking up at least one item of the first sequence of information includes the steps of receiving position information for the at least one item of information;
determining branches of the at least one hierarchical count corresponding to the position information; determining a node of the lowest level of hierarchy corresponding to the at least one item of
the first sequence of information.
The method of claim 6, wherein the position information includes offset information.
The method of claim 6, wherein looking up the at least one item of the first sequence of information is performed in O(log n) computation steps where n is the number of items of information in the
first sequence of information.
The method of claim 1, further comprising making at least one change to the at least one hierarchical count of the first level of sets.
The method of claim 9, wherein at least one node at a lowest level of hierarchy is changed.
The method of claim 9, wherein the at least one change includes changing at least one hierarchical count of the first level of sets wherein nodes at each level of hierarchy maintain a sum of nodes
for child levels of hierarchy.
The method of claim 9, wherein the at least one change includes inserting at least one new hierarchical count of the first level of sets.
The method of claim 9, wherein the at least one change includes deleting at least one hierarchical count of the first level of sets.
A computer-readable medium including instructions that, when executed by a processing unit, cause the processing unit to perform database storage management, by performing the steps of: encoding a
first sequence of information, wherein the encoded first sequence of information includes a compression of at least one successive repetition of values in the first sequence of information; grouping
the encoded first sequence of information into at least one first level of sets having at most a first predetermined number of nodes and each having an associated count; and generating at least one
hierarchical count of the first level of sets wherein nodes at each level of hierarchy maintain a sum of nodes for child levels of hierarchy.
The computer-readable medium of claim 14, wherein the first predetermined number of nodes is two nodes.
The computer-readable medium of claim 14, wherein each of the at least one first level of sets includes a first predetermined number of nodes.
The computer-readable medium of claim 14, wherein at least one of the at least one first level of sets has less than a first predetermined number of nodes.
The computer-readable medium of claim 14, further comprising looking up at least one item of the first sequence of information by traversing the at least one hierarchical count of the first level of
The computer-readable medium of claim 18, wherein looking up at least one item of the first sequence of information includes the steps of receiving position information for the at least one item of
information; determining branches of the at least one hierarchical count corresponding to the position information; determining a node of the lowest level of hierarchy corresponding to the at least
one item of the first sequence of information.
The computer-readable medium of claim 19, wherein the position information includes offset information.
The computer-readable medium of claim 19, wherein looking up the at least one item of the first sequence of information is performed in O(log n) computation steps where n is the number of items of
information in the first sequence of information.
The computer-readable medium of claim 14, further comprising making at least one change to the at least one hierarchical count of the first level of sets.
The computer-readable medium of claim 22, wherein at least one node at a lowest level of hierarchy is changed.
The computer-readable medium of claim 22, wherein the at least one change includes changing at least one hierarchical count of the first level of sets wherein nodes at each level of hierarchy
maintain a sum of nodes for child levels of hierarchy.
The computer-readable medium of claim 22, wherein the at least one change includes inserting at least one new hierarchical count of the first level of sets.
The computer-readable medium of claim 22, wherein the at least one change includes deleting at least one hierarchical count of the first level of sets.
A computing device comprising: a data bus; a memory unit coupled to the data bus; a processing unit coupled to the data bus and configured to encode a first sequence of information, wherein the
encoded first sequence of information includes a compression of at least one successive repetition of values in the first sequence of information; group the encoded first sequence of information into
at least one first level of sets having at most a first predetermined number of nodes and each having an associated count; and generate at least one hierarchical count of the first level of sets
wherein nodes at each level of hierarchy maintain a sum of nodes for child levels of hierarchy.
This application claims priority to U.S. Provisional Application No. 61/572,157 filed Jul. 12, 2011, which is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION [0002]
The present invention is related generally to database storage management systems. More specifically, it relates to column-store database storage techniques providing both fast lookups and updates.
BACKGROUND OF THE INVENTION [0003]
Run-length encoding and bitmap encoding are popular compression schemes that are used in column stores to compress attribute values. These compression schemes are sensitive to the tuple ordering and
require support for efficiently updating tuples at given offsets. A column store relation with n tuples can be updated in O(n) time. While techniques have been proposed that amortize the update cost
by buffering updates in a differential store, such techniques use a table scan to apply the updates. Further, these techniques require the original relation to be decompressed and subsequently
re-compressed after applying the updates thus leading to added time complexity.
Since Stonebraker et al.'s seminal paper (M. Stonebraker, D. J. et al. C-store: a column-oriented dbms. In VLDB, 2005; this and all other references are herein incorporated by reference for all
purposes), column stores have become a preferred platform for data warehousing and analytics. A case study in (J. Krueger et al. Enterprise application-specific data management. In EDOC, 2010) shows
that present enterprise systems are best served by column stores. Column stores provide support for compression of attribute values in order to support fast reads. Data compression, however, presents
a significant bottleneck for updating relations. Compression schemes such as run-length encoding and bitmap encoding are sensitive to the ordering of tuples as has been shown in (D. Lemire et al.
Sorting improves word-aligned bitmap indexes. CoRR, 2009). Hence, in order to incrementally maintain relations while achieving good compression, column stores have to support offset-based or in-place
update of tuples.
Prior art implementations have proposed different techniques to amortize the cost of applying updates to a column store relation in bulk. A central theme underlying certain prior art techniques is to
maintain a differential store in addition to a read-optimized store. Updated tuples are buffered in the differential store and are subsequently merged with the read-optimized store using a merge
scan. Although such a differential update mechanism amortizes the time to apply updates in bulk, it cannot avoid the linear cost of a merge scan.
SUMMARY OF THE INVENTION [0006]
Embodiments of the present invention relate to run-length encoded sequences and supporting efficient offset-based updates of values while allowing fast lookups. In an embodiment of the present
invention, an indexing scheme is disclosed, herein called count indexes, that supports O(log n) offset-based updates and lookups on a run-length sequence with n runs. In an embodiment, count indexes
of the present invention support O(log n) updates on bitmapped sequences of size n. Embodiments of the present invention can be generalized to apply to block-oriented storage systems.
Among other things, an indexing scheme is disclosed that optimizes a trade-off between the time to look up a value and the time to update the sequence. An indexing scheme according to embodiments of
the present invention is discussed as count indexes. In certain embodiments of the present invention, count indexes can be understood as a binary tree on a sequence of integers. In an embodiment of
the present invention, the integers are stored in the leaf nodes of the binary tree. In an embodiment, every interior node in the count index stores the sum of its children's values. The root of the
count index stores the total sum of integers in the sequence.
In embodiments of the present invention, every node in a count index stores the sum of the values of its children. To construct a count index on a sequence of n integers, the integers are divided
into groups of two. If n is odd, a group is created for the last count. The sum of the counts of the integers is then computed in each of the ⌈n/2⌉ groups and the count index is recursively built on these ⌈n/2⌉ sums. In the disclosure below, an algorithm according to an embodiment of the present invention is presented.
These and other embodiments can be more fully appreciated upon an understanding of the detailed description of the invention as disclosed below in conjunction with the attached figures.
BRIEF DESCRIPTION OF THE DRAWINGS [0010]
The following drawings will be used to more fully describe embodiments of the present invention.
FIG. 1 is an illustration of a count index according to an embodiment of the present invention.
FIG. 2 is an example of an algorithm for creating a count index according to an embodiment of the present invention.
FIG. 3 is an example of an algorithm to lookup values in a count index according to an embodiment of the present invention.
FIG. 4 is an illustration of a lookup on a count index according to an embodiment of the present invention.
FIG. 5 is an illustration of a count index on a sequence with five run-lengths according to an embodiment of the present invention.
FIGS. 6a-c are illustrations of deleting a leaf that has no siblings according to an embodiment of the present invention.
FIGS. 7a-b are illustrations of deleting a leaf that has a sibling according to an embodiment of the present invention.
FIG. 8 is an illustration of inserting counts at a "hole" position according to an embodiment of the present invention.
FIG. 9 is an illustration of a state of a count index after insertion according to an embodiment of the present invention.
FIG. 10 is an illustration of bulk inserting a sequence of runs into a count index according to an embodiment of the present invention.
FIG. 11 is an illustration of count indexes on run-length encoded bitmaps according to an embodiment of the present invention.
FIG. 12 (Table 1) is a table of performance of count indexes on various encoding schemes according to embodiments of the present invention.
FIG. 13 is a block diagram of a computer system on which the present invention can be implemented.
FIGS. 14a and b are examples of algorithms for creating a count index according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION [0025]
Among other things, the present invention relates to methods, techniques, and algorithms that are intended to be implemented in a digital computer system 100 such as generally shown in FIG. 13. Such
a digital computer or embedded device is well-known in the art and may include the following.
Computer system 1300 may include at least one central processing unit 1302 but may include many processors or processing cores. Computer system 1300 may further include memory 1304 in different forms
such as RAM, ROM, hard disk, optical drives, and removable drives that may further include drive controllers and other hardware. Auxiliary storage 1312 may also be included; it can be similar to memory 1304 but may be more remotely incorporated, such as in a distributed computer system with distributed memory capabilities.
Computer system 1300 may further include at least one output device 1308 such as a display unit, video hardware, or other peripherals (e.g., printer). At least one input device 1306 may also be
included in computer system 1300 that may include a pointing device (e.g., mouse), a text input device (e.g., keyboard), or touch screen.
Communications interfaces 1314 also form an important aspect of computer system 1300 especially where computer system 1300 is deployed as a distributed computer system. Computer interfaces 1314 may
include LAN network adapters, WAN network adapters, wireless interfaces, Bluetooth interfaces, modems and other networking interfaces as currently available and as may be developed in the future.
Computer system 1300 may further include other components 1316 that may be generally available components as well as specially developed components for implementation of the present invention.
Importantly, computer system 1300 incorporates various data buses 1316 that are intended to allow for communication of the various components of computer system 1300. Data buses 1316 include, for
example, input/output buses and bus controllers.
Indeed, the present invention is not limited to computer system 1300 as known at the time of the invention. Instead, the present invention is intended to be deployed in future computer systems with
more advanced technology that can make use of all aspects of the present invention. It is expected that computer technology will continue to advance but one of ordinary skill in the art will be able
to take the present disclosure and implement the described teachings on the more advanced computers or other digital devices such as mobile telephones or "smart" televisions as they become available.
Moreover, the present invention may be implemented on one or more distributed computers. Still further, the present invention may be implemented in various types of software languages including C,
C++, and others. Also, one of ordinary skill in the art is familiar with compiling software source code into executable software that may be stored in various forms and in various media (e.g.,
magnetic, optical, solid state, etc.). One of ordinary skill in the art is familiar with the use of computers and software languages and, with an understanding of the present disclosure, will be able
to implement the present teachings for use on a wide variety of computers.
The present disclosure provides a detailed explanation of the present invention with detailed explanations that allow one of ordinary skill in the art to implement the present invention into a
computerized method. Certain of these and other details are not included in the present disclosure so as not to detract from the teachings presented herein but it is understood that one of ordinary
skill in the art would be familiar with such details.
In the present disclosure, the problem of efficiently supporting offset-based updates in column stores is considered. On average, the time taken to insert or delete a tuple at a given offset is
linear in the number of tuples of a relation. This is because, in addition to the updated tuple, the storage keys or identifiers of the successive tuples have to be updated as well. In an embodiment
of the present invention, offset-based updates on a column store relation can be implemented in time that is sub-linear in the number of tuples and avoids a linear scan of the tuples.
As a first step, issues related to efficiently updating run-length encoded sequences are considered. Subsequently, the teachings of the present invention are generalized to other applications
including bitmapped and uncompressed sequences.
Updating Run-Length Encoded Sequences
In an embodiment of the present invention, tuples of a relation in a column store are ordered by sort keys and every attribute is stored separately as a sequence of values. In an embodiment, these
sequences are either run-length encoded or bitmap encoded. In an embodiment, bitmapped attributes are compressed using run-length encoding to save space. The run-length encoding scheme compresses
successive repetitions of values or runs in a sequence. A run of a value v can be succinctly represented either as a (v, n) or a (v, o) pair, where n is the length of the run and o is the offset of
the last (or the first) occurrence of v in the run. For presentation purposes in the present disclosure, the former representation of runs will be called count-based and the latter representation
offset-based. The two representation schemes are illustrated using the following example.
Consider the sequence of values a, a, b, b, b, a, a, a, b, b. If the count-based representation of runs is used to encode this sequence, the result is (a, 2), (b, 3), (a, 3), (b, 2). If the
offset-based representation of runs is used, the sequence would be encoded as (a, 2), (b, 5), (a, 8), (b, 10).
Each representation scheme has advantages and disadvantages. Since the offsets of the runs increase monotonically, the offset-based representation of a run-length encoded sequence can be indexed by
Binary Search Trees (in memory) or B+ Trees (on disk). Using these indexes, values can be looked up in the sequence based on their offset in time that is logarithmic in the number of runs. But if a
run is inserted or deleted, the offset of every successive run must be updated as well. Time taken to update a run-length encoded sequence is linear in the number of runs using the offset-based
In contrast to the offset-based representation, a run can be inserted or deleted using the count-based representation of a run-length encoded sequence in constant time if the offset of the updated run is
known. But in order to look up a value at a given offset, the offset of the preceding run must be computed as well. The time to look-up a value in a run-length encoded sequence is linear in the
number of runs using the count-based representation.
Choosing between the offset-based and the count-based representation of runs trades-off the time taken to look up and update runs of values in a sequence. As part of the present disclosure, it is
shown in an embodiment of the present invention that auxiliary data structures can be leveraged along with an appropriate choice of a representation scheme to optimize this trade-off.
The problem of efficiently updating run-length encoded sequences is considered as a first step towards efficiently updating column stores. An embodiment of the present invention includes a novel
indexing scheme called count indexes that supports offset-based look ups as well as updates on a run-length encoded sequence with n runs in O(log n) time. Count indexes according to embodiments of
the present invention efficiently trade off the time to look up a value in a sequence with the time taken to update the sequence. The count indexes according to embodiments of the present invention
are used in other embodiments to efficiently update bitmap encoded columns and unsorted lists/arrays. Among other things, embodiments of the present invention include procedures to efficiently create
count indexes on a sequence of n runs in O(n) time. In embodiments of the present invention, offset-based look ups and updates on count indexes can be performed in O(log n) time. Also, in embodiments
of the present invention, it is shown that a sequence of k runs can be bulk inserted into a count index with n leaf nodes in O(k+log n) time. Embodiments of the present invention also include count
indexes to support O(log n) updates on bitmap encoded sequences and unsorted lists/arrays of size O(n). In another embodiment, count indexes are adapted to block-oriented stores.
Count Indexes
Above, it was shown that choosing a particular representation scheme to run-length encode a sequence trades-off the time to look up a value given its offset and the time to update the sequence. An
indexing scheme will now be disclosed that optimizes this trade-off. To facilitate the present disclosure, an indexing scheme according to embodiments of the present invention will be discussed as
count indexes. In certain embodiments of the present invention, count indexes can be understood as a binary tree on a sequence of integers. In an embodiment of the present invention, the integers are
stored in the leaf nodes of the binary tree. In an embodiment, every interior node in the count index stores the sum of its children's values. The root of the count index stores the total sum of
integers in the sequence.
Shown in FIG. 1 is a count index on the sequence given in Example 1 (a, a, b, b, b, a, a, a, b, b). While the count index is constructed on the run lengths only, the values (a and b) are displayed as
well for presentation purposes. As shown in FIG. 1, leaf node 152 represents the first two occurrences of a, leaf node 154 represents the next three occurrences of b, leaf node 156 represents the
next three occurrences of a, and leaf node 158 represents the next two occurrences of b. Node 160 stores the number of values (=5) in leaf nodes 152 and 154, node 162 stores the number of values (=5)
in leaf nodes 156 and 158. The root node 164 in FIG. 1 stores the number of values in the sequence (=10).
Given a run-length encoded sequence with n runs, lookup or run updates of values can be performed at specified offsets using count indexes in O(log n) time. To be discussed below are exemplary
algorithms according to embodiments of the present invention for index creation, lookup, and maintenance.
Index Creation
As discussed above, a count index according to embodiments of the present invention is a binary tree on a sequence of integers. In embodiments of the present invention, every node in a count index
stores the sum of the values of its children. To construct a count index on a sequence of n integers, the integers are divided into groups of two. If n is odd, a group is created for the last count.
The sum of the counts of the integers is then computed in each of the ⌈n/2⌉ groups and the count index is recursively built on these ⌈n/2⌉ sums. An algorithm according to an embodiment of the present invention is presented in FIG. 2. As indicated in FIG. 2, the pointers to the parent, left and the right children of a node are stored in
parent, lchild and rchild, respectively.
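A minimal Python sketch of this bottom-up construction (the node fields mirror the parent, lchild, and rchild pointers named above; the class, helper name, and the leaf value field are illustrative reconstructions, not the patent's code):

```python
class Node:
    """A count-index node; leaves also carry the run's value."""
    def __init__(self, count, value=None, lchild=None, rchild=None):
        self.count = count                 # sum of counts in this subtree
        self.value = value                 # run value (set on leaves only)
        self.parent = None
        self.lchild, self.rchild = lchild, rchild
        for child in (lchild, rchild):
            if child is not None:
                child.parent = self

def build_count_index(runs):
    """Build a count index on [(value, run_length), ...] pairs in O(n)."""
    leaves = [Node(length, value=v) for v, length in runs]
    level = leaves
    while len(level) > 1:
        paired = []
        for i in range(0, len(level), 2):  # divide the level into groups of two
            left = level[i]
            right = level[i + 1] if i + 1 < len(level) else None
            total = left.count + (right.count if right else 0)
            paired.append(Node(total, lchild=left, rchild=right))
        level = paired
    return level[0], leaves

# The sequence of Example 1: a, a, b, b, b, a, a, a, b, b
root, leaves = build_count_index([("a", 2), ("b", 3), ("a", 3), ("b", 2)])
assert root.count == 10                    # the root stores the total length
```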
In an embodiment of the present invention, groupings of two nodes are implemented. The teachings of the present invention, however, are broader. Accordingly, in other embodiments of the present
invention, groupings of four and eight are implemented. Still other embodiments can implement groupings of different numbers of nodes. The teachings of the present invention extend to all such
implementations as would be understood by one of ordinary skill upon understanding the present disclosure including the appended figures and claims.
Time Complexity: According to an embodiment of the present invention, in a first pass, n integers are divided into groups of two and the sum of the integers in each group is computed. In an
embodiment, this takes n units of time. The second pass takes n/2 units of time, the third pass takes n/4 units of time, and so on. The algorithm terminates after log n passes. The time complexity of the index creation algorithm is therefore O(n + n/2 + n/4 + . . . + 1) = O(2 × n) = O(n).
Above, it was observed that choosing one representation of a run-length encoded sequence over the other trades off the time taken to update the sequence with the time taken to look up values given
their offsets. In an embodiment, the count indexes according to an embodiment of the present invention optimizes this trade-off by supporting O(log n) offset based look ups and updates on a sequence
with n runs.
Index Look Up
The run lengths in a run-length encoded sequence are not monotonic in general. Therefore, a binary search cannot be directly performed on the run lengths to look up the leaf node corresponding to a
given offset. But by starting at the root of the count index, which of its children contains the leaf node corresponding to the given offset can be determined in constant time. Suppose a lookup of the value at offset p is desired. Let the counts of the left child and the right child of the root be l and r, respectively. If p ≤ l, the value at offset p is located in the left child of the root. Otherwise, the value is located at offset p − l in the right child. The left or the right child of the root can now be looked up in a
similar manner. The lookup procedure terminates when the algorithm arrives at a leaf that corresponds to the supplied offset. The look up procedure terminates in time that is proportional to the
height of the count index. An algorithm according to an embodiment of the present invention for looking up values in a count index is presented in FIG. 3.
To illustrate how the look-up algorithm works according to an embodiment of the present invention, the count index (FIG. 1) on the sequence given in Example 1 is revisited. Suppose that with
reference to FIG. 4 a look-up the value at the 8th position from the start of the sequence is desired (see block 164 of FIG. 4). The counts of the left and right children of the root of the count
index are looked up (see nodes 160 and 162, respectively). Since the count of the left child (node 160=5) is less than 8, the value at offset 3 (=8-5) in the right child (node 162) of the root is
looked up. Since the count of the left child is 3 (see leaf node 156), the look up procedure terminates and returns the value a.
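Continuing the sketch above, the lookup examines one node per level, subtracting the left child's count whenever it descends to the right (names are illustrative):

```python
def lookup(root, p):
    """Return the value at 1-based offset p in O(log n): one node per level."""
    node = root
    while node.value is None:              # still at an interior node
        left = node.lchild.count
        if node.rchild is not None and p > left:
            p -= left                      # the offset falls in the right subtree
            node = node.rchild
        else:
            node = node.lchild
    return node.value

assert lookup(root, 8) == "a"              # the worked example: offset 8 -> a
assert lookup(root, 5) == "b"
```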
Proof of correctness: The correctness of the lookup algorithm shown in FIG. 3 can be shown using induction on the height of the count index.
Base case: (Index with a root and 2 children, e.g., height=2) Suppose the value in the p-th position from the start of the sequence is desired to be looked up. Let l and r be the counts of the left and right children of the root node. If the offset is positive and no greater than the size of the sequence (0 < p ≤ l + r), either p ≤ l or p − l ≤ r.
Inductive Hypothesis: The look up algorithm is assumed to correctly return the value at any given offset in a count index of height k.
Induction Step: Suppose the value at the offset p in a count index of height k+1 is looked up. Let l and r be the counts of the left and right children of the root node. Following the argument made in the base case, either p ≤ l or p − l ≤ r. Since the children of the root node are at a height of k, the look up algorithm correctly returns the value at the offset p.
Time Complexity: Consider a count index on a sequence of n integers. The algorithm in FIG. 3 according to an embodiment of the present invention examines one node at any level in the count index.
Since the height of the count index is log n, the time complexity of the look up procedure is O(log n).
Supporting Range Predicates: Given a count index with n leaves, values within a given range of offsets can be looked up in O(log n) time by creating links between the neighboring leaf nodes. Suppose values within the offsets p and q from the start (p ≤ q) are required. The value at offset p is looked up using the algorithm shown in FIG. 3 according to an embodiment of the present invention. On
reaching the leaf node that corresponds to the offset p, the leaf's neighbors are traversed on the right until the leaf node corresponding to the offset q is reached.
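A sketch of the range scan in the same vein; the neighbor links between leaves are created here explicitly, since the construction sketch above did not build them:

```python
def link_leaves(leaves):
    """Create left-to-right neighbor links between the leaf nodes."""
    for a, b in zip(leaves, leaves[1:]):
        a.next = b
    leaves[-1].next = None

def locate(root, p):
    """Find the leaf holding 1-based offset p, plus the offset inside it."""
    node = root
    while node.value is None:
        if node.rchild is not None and p > node.lchild.count:
            p -= node.lchild.count
            node = node.rchild
        else:
            node = node.lchild
    return node, p

def range_lookup(root, p, q):
    """Values at offsets p..q: one O(log n) descent, then leaf-to-leaf hops."""
    leaf, offset = locate(root, p)
    out, remaining = [], q - p + 1
    while leaf is not None and remaining > 0:
        take = min(remaining, leaf.count - offset + 1)
        out += [leaf.value] * take
        remaining -= take
        leaf, offset = leaf.next, 1
    return out

link_leaves(leaves)
assert range_lookup(root, 4, 7) == ["b", "b", "a", "a"]
```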
Index Maintenance
It is essential for any index to be efficiently updatable when new values are inserted or old values are deleted. When values in a sequence are updated, the corresponding count index is affected in
one of the following ways:
1. The count of an existing leaf node is updated.
2. A new leaf node is inserted or an existing leaf node is deleted from the count index.
Case 1 is handled by updating the ancestors of the leaf node in time that is logarithmic in the number of runs in the sequence. Case 2 can potentially unbalance the count index. Since the counts of
the leaf nodes are not monotonic, tree rotation schemes that are used to balance AVL trees or Red-Black trees cannot be used here. To be discussed now are techniques to efficiently balance a count
index when new values are inserted or old values are deleted. Note that in all of the examples which follow, inserts and deletes of single values or counts into a count index are discussed. The index
update procedures described below can, however, be used to insert or delete runs of any length (≥ 1).
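Case 1 takes only a few lines in the same sketch: adjust the leaf's count and push the delta up through its ancestors, touching O(log n) nodes:

```python
def resize_run(leaf, delta):
    """Lengthen (delta > 0) or shorten (delta < 0) an existing run."""
    node = leaf
    while node is not None:
        node.count += delta                # every ancestor's sum shifts by delta
        node = node.parent

resize_run(leaves[1], 2)                   # the run of three b's becomes five
assert root.count == 12
```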
Deletions: Suppose a leaf is desired to be deleted from a count index.
There are two cases depending on whether the leaf has a sibling or not.
1. (Leaf has no siblings): In this case, the leaf and its ancestors are deleted. If the new root has only one child, its child is set to be the root. Iteration proceeds down the tree.
2. (Leaf has a sibling): In this case, the leaf (say c1) and its sibling (say c2) are checked to determine whether they have a neighbor without any sibling of its own. If neither the leaf nor its sibling has such a neighbor, then the leaf node is deleted and the value at its parent is then updated. Otherwise, let c3 denote the neighbor of the leaf (or its sibling) that has no sibling of its own. The leaf c1 is deleted and the neighbor c3 is assigned to be the sibling of c2.
As shown in FIG. 5, consider a count index on a sequence with five run-lengths: 1 (node 502), 3 (node 504), 3 (node 506), 1 (node 508), and 1 (node 510). Note that although the count index appears to
be unbalanced, its height is in fact ⌈log₂(#nodes)⌉ (here #nodes = 5). In an embodiment of the present invention, this is always the case (see Assertion 3). Suppose it is desired to delete the right-most count from the count index shown in FIG. 5. Since the
leaf node (node 510) corresponding to the rightmost count (=1) does not have a sibling, it is deleted as well as all its ancestors (nodes 510, 516, 520 shown as hatched nodes in FIG. 6(a)). Shown in
FIG. 6(b) is the state of the count index (see node 518 in FIG. 6b) after deleting its rightmost leaf.
Suppose that the rightmost leaf of FIG. 6(b) is again deleted (e.g., node 508). Since the rightmost leaf 508 of FIG. 6(b) has a sibling node 506 and no neighbors without a sibling, the leaf node 508
is deleted and the value of its parent 514 is updated. The resulting count index is shown in FIG. 6(c).
With reference to FIG. 7(a), now suppose the leftmost node 502 with count=1 is deleted. Since the leftmost leaf 502 has a sibling 504 with a count of 3 and a neighbor 506 without any siblings, the
leftmost node 502 is deleted and the sibling 504 is paired with the neighboring leaf 506. This procedure is shown in FIGS. 7(a) and (b). The resulting count index is shown in FIG. 7(b).
Assertion 1. Deleting a leaf from a count index takes time that is linear in the height of the count index.
Proof. Let f(k) and g(k) be the time taken to delete a node at a height of k using cases 1 and 2, respectively. Then, the following recurrence relations hold.
f(k) ≤ 1 + max{f(k−1), g(k−1)} (2)
g(k) = f(k−1) (3)
On solving for f(k) and g(k), f(k)=g(k)=c×k+d is obtained where c and d are constants. Therefore, a leaf can be deleted from a count index of height h in O(h) time.
Updating the Root: When a leaf node is deleted that causes one of the root's children to be deleted as well (an example is shown in FIG. 6(a)), the root node of the count index has to be updated.
Generally, after the deletion procedure terminates, the root of the count index is checked to determine whether it has two children or not. If the root has two children, no further updates are
required. But if the root of the count index has only one child, its child is set to be the new root node. Recursion is performed down the count index.
Inserts: In an embodiment of the present invention, the process of inserting leaves into a count index is similar to deleting leaves. Suppose it is desired to insert a leaf node w between two leaves,
u and v. In an embodiment, there are two cases depending on whether u or v have siblings or not.
1. (One of u or v has no siblings): Without loss of generality, it can be assumed that u has no sibling. Leaf node w is made the sibling of leaf node u. The count at their parent is then updated in
an embodiment.
2. (Both u and v have siblings): If u and v are siblings of each other, then leaf node w is made the sibling of leaf node u by updating the count at v. Next, a leaf with the same count as v is
inserted into the count index. If u and v are not siblings, then a new node is created for w at the leaf level, and its parent is set to have the same count as itself.
Assertion 2. Inserting a leaf into a count index takes time that is linear in the height of the count index.
Proof. Let f(k) and g(k) be the time taken to insert a node at a height of k using cases 1 and 2 respectively. Then, the following recurrence relations hold in an embodiment of the present invention.
f(k) = f(k−1) + 1 (4)
g(k) ≤ 1 + max{f(k−1), g(k−1)} (5)
On solving for f(k) and g(k), f(k)=g(k)=c×k+d is obtained where c and d are constants. Therefore, in an embodiment of the present invention, a leaf can be inserted into a count index of height h in O
(h) time.
Balancing Count Indexes: Using Assertions 1 and 2, it has been established that node insertions and deletions in a count index of height h take O(h) time. It still needs to be shown that an updated
count index is balanced, e.g., h=O(log n).
Count indexes can be potentially unbalanced due to nodes which are not paired with their neighbors. Such unpaired nodes are called holes in an embodiment of the present invention. Since a node in a
count index stores the sum of the children's counts, traditional tree rotation techniques for tree balancing as is used in Red Black Trees (L. J. Guibas et al. A dichromatic framework for balanced
trees. 1978) or AVL Trees (see e.g., Avl trees. http://en.wikipedia.org/wiki/AVL_tree) cannot be used.
Consider a count index with n leaf nodes. The maximum number of holes at the leaf level of the count index is ⌈n/2⌉. When a node is inserted into or deleted from the count index, holes are propagated onto higher levels where they either pair with an existing hole or cause a node to be split (while inserting a node) or two nodes to be merged (while deleting a node).
With reference to FIG. 8, the count index previously being considered has two hole positions 804 and 806 (recall nodes 502 and 508 that were deleted from FIG. 6(a)) at the leaf level. When a count
(see node 802) is inserted (say 1) into either of these holes, the hole propagates to the next level and pairs with the node which was previously the root. For example as shown in FIG. 9, a count (=
1) is inserted into hole 804, the hole propagates to node 902 and pairs with node 514 which was previously the root node to create new root node 904 having a count of 7 (=1+6).
Assertion 3. The height of a count index with n leaves is O(log n), e.g., the count index is balanced.
Proof. Suppose that there are n nodes paired up at the leaf level in a count index. The maximum number of holes at the leaf level is n/2. At subsequent levels, the maximum number of holes is n/4, n/8, n/16, and so on. Hence, at every level of the count index, at most one-third of the nodes can potentially be holes. The height of the count index is thus log(3 × n/2) = O(log n).
A count index can, therefore, be updated in time that is logarithmic in the number of runs in a sequence. Better performance can be achieved when inserting a sequence of runs into the input sequence.
Bulk inserting sequences: Consider a count index with n leaf nodes. Suppose it is desired to insert a sequence of k runs. The k runs could be inserted using the insert procedure described above in O
(k×log (n+k)) time. But better performance can be achieved in an embodiment of the present invention if use is made of the fact that the k runs are adjacent. An algorithm according to an embodiment
of the present invention is presented below that efficiently inserts a sequence of k runs (represented as [c1, ..., ck]) into a count index T with n leaves.
Construct a count index T' on the sequence [c1, ..., ck].
Merge T and T' (see FIG. 10).
Let u and v be the leaves in T between which the sequence [c1, ..., ck] is inserted. Either of u or v could be null.
If u and v are siblings, split u and v, converting them into holes.
Pair up the left-most leaf in T' and u if both nodes are holes.
Similarly pair up the right-most leaf of T' and v if both nodes are holes.
Insert the rest of the leaves of T' into T without further modifications.
Iterate at the next higher level of T' and T.
At a height of log k, insert the root of T' into T.
Time Complexity: Constructing T' takes O(k) time. Inserting the root of T' into T at a height of log k takes O(log n/k) time. To bound the time taken to merge two count indexes with n and k leaves,
denote it as the function f(n, k). The algorithm described above results in the following recurrence relations.
f(n, k) = f(n/2, k/2) + O(k) (6)
f(n, 1) = O(log n) (7)
On solving for f(n, k), f(n, k)=O(k+log n) is obtained. Using the above algorithm according to an embodiment of the present invention, a sequence of k runs can be inserted contiguously in time that
is almost linear in k.
Optimality of Incremental Maintenance using Count Indexes
Above, it was shown that count indexes can be leveraged to support O(log n) offset-based look ups and in-place inserts, deletes and updates of runs into a run-length encoded sequence with n runs. A
lower bound result of the complexity of incrementally maintaining partial sums is now used to show that the problem of incrementally maintaining a run-length encoded sequence with n runs requires O
(log n) time. Given an array {a[i]} and a parameter k, the partial sums problem computes Σ_{i=1}^{k} a[i] subject to updates of the form a[i] += x. Others have independently established that incrementally maintaining partial sums while supporting in-place updates requires O(log n) time.
The problem of incrementally maintaining partial sums over an array {a[i]} of length n can be reduced to the problem of incrementally maintaining run-length encoded sequences. A run-length encoded
sequence with n runs is created. Every i corresponds to a run and the run-length of the i-th run is equal to a[i]. An update a[i] += x to the partial sums problem is equivalent to an insertion of x runs at the offset Σ_{j=1}^{i} a[j] in the run-length encoded sequence. The computation of the partial sum at a[k] is equivalent to the computation of the starting offset of the (k+1)-th run. Therefore, incrementally maintaining a run-length encoded sequence with n runs requires O(log n) time. The problem of incrementally maintaining run-length encoded sequences is more general than
the problem of incrementally maintaining partial sums because the partial sums problem does not support in-place insertion or deletion of array elements. By leveraging count indexes, offset-based
look ups and in-place inserts, deletes and updates can be supported in a run-length encoded sequence with n runs in O(log n) time which is preferred in an embodiment of the present invention.
Above, it was shown that count indexes can efficiently lookup and update a run-length sequence with n runs in O(log n) time. Bitmap encoding is another compression technique that is used in columnar
databases to compress columns with a few distinct values. Bitmaps are generally sparse and are further compressed using run-length encoding. In an embodiment of the present invention to be described
now, count indexes are extended to efficiently update bitmap encoded sequences as well. The problem of efficiently looking up and updating unsorted lists/arrays is also considered. It will be shown
that indexes can be used to support logarithmic updates and lookups. To be disclosed also is the manner in which count indexes can be generalized to block oriented stores where the number of I/Os
determine the time complexity of updates.
Updating Bitmap Encoded Sequences: Bitmap encoding a sequence creates a bit-vector for every distinct value in the sequence. If the sequence contains the value v at the i-th position, then the bit-vector corresponding to the value v contains the bit 1 at the i-th position. Bitmaps are sparse and are compressed using run-length encoding. When new values are added to the sequence, additional bits are appended to the end of the bitmaps. When values are deleted
from a bitmap, the corresponding bits are not deleted. Instead, the deleted values are tombstoned (see H. Garcia-Molina et al. Database Systems: The Complete Book. 2008). Inserting a new value or deleting an existing value at a given offset in the bitmap takes time that is linear in the size of the bitmap. The update complexity can be significantly improved by extending count indexes to
operate on bitmapped sequences. For every distinct value in an input sequence, a count index on the corresponding run-length encoded bitmap is created. The bitmap can now be looked up and updated at
a given offset in time that is logarithmic in number of the runs in the bitmap. An example according to an embodiment of the present invention is presented to show how count indexes can be extended
to operate on bitmapped sequences.
With reference to FIG. 11, suppose the sequence is bitmap encoded as given in Example 1 (see block 1102). Two bitmaps are obtained: [1 1 0 0 0 1 1 1 0 0] (see block 1104) for the value `a` and [0 0 1
1 1 0 0 0 1 1] (see block 1106) for `b.` Run-length encoding these bitmaps would produce the following sequences: [(1, 2) (0, 3) (1, 3) (0, 2)] as shown in tree 1108 and [(0, 2), (1, 3), (0, 3) (1,
2)] as shown in tree 1110, respectively. Two count indexes are constructed on the resulting sequences (see tree 1108 and tree 1110). Note that these indexes are identical to each other.
In general, if an input sequence has n runs and k distinct values, then the bitmapped sequence could be looked up and updated using count indexes in time that is proportional to k × log(n/k) in the
worst case. The bitmaps are, however, independent of each other. Hence, the look up and update algorithms for the k bitmaps can be executed in parallel in O(log n) time.
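Continuing the earlier sketches, a small illustrative helper that builds one count index per distinct value on its run-length encoded bitmap, as in FIG. 11:

```python
from itertools import groupby

def bitmap_count_indexes(seq):
    """One count index per distinct value, built on its RLE bitmap (FIG. 11)."""
    indexes = {}
    for v in sorted(set(seq)):
        bits = [int(x == v) for x in seq]                       # the bit-vector
        rle = [(bit, len(list(g))) for bit, g in groupby(bits)]
        indexes[v] = build_count_index(rle)                     # earlier sketch
    return indexes

per_value = bitmap_count_indexes(list("aabbbaaabb"))
a_root, _ = per_value["a"]
assert a_root.count == 10                  # each bitmap spans the full sequence
```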
Updating Uncompressed Sequences: Count indexes efficiently update run-length encoded sequences. They can, however, be extended to operate on unsorted lists/arrays as well according to an embodiment
of the present invention. The values in the array are run-length encoded, and the count index is subsequently constructed on the run lengths. If an array has N values and n runs, then offset-based
look-ups and updates can be performed on the array in time that is proportional to log n. Count indexes can thus be used to look up and update unsorted lists or arrays in time that is logarithmic in
the array size.
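The uncompressed case reduces to the same machinery: run-length encode the raw array first, then build the index on the runs (again reusing the earlier sketch):

```python
from itertools import groupby

def index_array(values):
    """Count index over an unsorted list: run-length encode, then build."""
    runs = [(v, len(list(g))) for v, g in groupby(values)]
    return build_count_index(runs)

arr_root, _ = index_array([7, 7, 7, 2, 2, 9])
assert arr_root.count == 6
```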
Adapting To Block-Oriented Stores: Count indexes to disk can be extended by increasing the fanout of every node in a count index in a manner similar to B+ trees. In an embodiment of the present
invention, this index structure is called a count+ index. The size of a node of the count+ index is assigned to be equal to the size of a page on disk. If the size of a page on disk is S bytes and
every value in the sequence is W bytes long, then the maximum number (e.g., k) of counts that can be stored at an interior node and a leaf node of a count+ index is given by (S − 4)/12 and (S − 4)/(W + 4), respectively. Every interior node maintains the sum of its children's counts as well as the pointers to its children. Every node except for the root must have at least ⌈k/2⌉ counts. In an embodiment of the
present invention, the algorithms to update a count+ index when counts are inserted or deleted are substantially similar to the respective algorithms for count indexes with the following exceptions:
1. When a value or a count is inserted to a node of a count+ index that is full, e.g., has k counts, a new node is created and half of the counts/values are moved to the newly created node.
2. When a value or a count is deleted from a non-root node of the count+ index that has exactly ⌈k/2⌉ counts, two possible cases arise:
(a) There is a neighboring node which has at least ⌈k/2⌉ + 1 counts. In this case, the node is updated by moving either the rightmost count (if the neighbor is on the left) or the leftmost count (if the neighbor is on the right) from the neighboring node.
(b) All the neighboring nodes have ⌈k/2⌉ counts. In this case, any of the neighbors can be selected, the counts of the node and its neighbor can be merged, and the sum of the resulting node can be updated.
Methods according to embodiments of the present invention were disclosed where a single run count in each leaf node of a count index was stored as well as the sum of two child nodes in their parent.
In order to adapt count indexes to block-oriented stores where each block of data is a disk page instead of a single value, for example, another embodiment of the present invention includes modified
count indexes to store multiple counts in each node.
For example, suppose each disk page is S bytes long and the width of the attribute to be stored is W bytes long. In an embodiment, the run-lengths can be represented as 4-byte integers. In each leaf node of the modified count index, a maximum of N_l attribute values, their respective counts, and the sum of their N_l counts are stored, where N_l = (S − 4)/(W + 4). In each interior node, a maximum of N_i pointers (8 bytes long) are stored to its children as well as the sum of their respective counts (4 bytes each), where N_i = (S − 4)/12. To adapt count indexes to block-oriented stores, we also establish a new index invariant: every non-root node is at least half-full, e.g., every leaf node stores at least ⌈N_l/2⌉ attribute values and counts and every interior node stores at least ⌈N_i/2⌉ pointers to its children. The modified algorithms for inserting and deleting a run are presented in FIGS. 14a and b, respectively.
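A quick arithmetic check of those capacities, with S and W as made-up example sizes (a 4 KB page and an 8-byte attribute):

```python
S, W = 4096, 8                  # page size and attribute width, in bytes
N_l = (S - 4) // (W + 4)        # leaf: one (value, count) pair per entry
N_i = (S - 4) // 12             # interior: one (pointer, count) pair per child
print(N_l, N_i)                 # -> 341 341 for these example sizes
```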
CONCLUSION [0117]
The present disclosure has described embodiments of the present invention that, among other things, implement an indexing scheme that supports offset-based lookups and updates on a run-length encoded
sequence. Count indexes according to embodiments of the present invention significantly reduce the time complexity of offset-based updates on run-length encoded columns from O(n) to O(log n). Count
indexes can be generalized to operate on bitmap encoded sequences as well as uncompressed sequences according to other embodiments of the present invention.
Columnar databases compress attribute values using run length encoding and bitmap encoding. In other embodiments of the present invention, count indexes can be leveraged as an auxiliary data
structure to efficiently support single value updates in column store relations. In other embodiments, count indexes can bulk insert a sequence of values in time that is almost linear in the number
of inserted runs, amortizing the time to insert values in bulk.
The performance of count indexes according to certain embodiments of the present invention are shown in Table 1 (FIG. 12).
It should be appreciated by those skilled in the art that the specific embodiments disclosed above may be readily utilized as a basis for modifying or designing other image processing algorithms or
systems. It should also be appreciated by those skilled in the art that such modifications do not depart from the scope of the invention as set forth in the appended claims.
Brain Teasers
Can anyone discover the missing number?
37,10,82
29,11,42
i can.
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
What woman could forget the unceasing friendship you and Nanda have shown us, dear Queen of Vraja? There is no way to repay you in this world, even with the wealth of Indra.
[Jess in Action][AskingGoodQuestions]
Queen of Vraja - Radha
Nanda - Father (Care taker)
Again you are referring Nanda so Queen of Vraj could be Yashoda.
And the other person involved is who-is-saying-this* . I think Vasudev, to show gratitude to Nanda.
You meant to give hint to the answer. I couldn't get you.
* What word should have been used instead of 'who-is-saying-this'? Narrator? I don't think so.
Or is it another brain teaser from you?
Kuber is richer than anybody else. Indra's wealth pales in comparison.
You can try Kuber if you got sufficient credit.
[How to ask questions] [Donate a pint, save a life!] [Onff-turn it on!]
So finally, what's the missing number?
is it 21...?
SCJP 5
Now Moving for SCWCD and then for OCA & OCP
6
Perfect!
Ping & DNS - updated with new look and Ping home screen widget
fred rosenberger wrote: 6
the number in the middle of each triple is the same as the digits of the number at either ends when added together.
e.g. 3+7=10=8+2
Here is another one...
There are four 3-digit natural numbers; each of them equals the sum of the cubes of its digits. Three of them are:
153=1+125+27
370=27+343+0
407=64+0+343
What is the fourth one? It does not begin with 0, otherwise it isn't a 3-digit number.
Soumil Shah wrote: the number in the middle of each triple is the same as the digits of the number at either ends when added together.
4+2 = 11 ?
Ulf Dittmer wrote:
Soumil Shah wrote: the number in the middle of each triple is the same as the digits of the number at either ends when added together.
4+2 = 11 ?
Maybe a bad transcription of a hand-written puzzle? Some people's 4's and 9's look alike.
Simon Peter went up, and drew the net to land full of great fishes, an hundred and fifty and three: and for all there were so many, yet was not the net broken.
Soumil Shah wrote: Here is another one...
There are four 3-digit natural numbers; each of them equals the sum of the cubes of its digits. Three of them are:
153=1+125+27
370=27+343+0
407=64+0+343
What is the fourth one? It does not begin with 0, otherwise it isn't a 3-digit number.
Once you know that one of the numbers in the list ends with a 0, it's obvious (for large values of obvious) that that number plus 1 should be on the list.
Can you find any four digit numbers that have this property?
These kinds of questions are much less interesting now that we have the internet. Ryan's question is answered here: http://mathworld.wolfram.com/NarcissisticNumber.html.
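For anyone who prefers the brute-force route, the whole search fits in a couple of lines of Python; a digit-cube sum of a four-digit number cannot exceed 4 × 9³ = 2916, so the range below more than covers it:

```python
hits = [n for n in range(100, 10000)
        if n == sum(int(d) ** 3 for d in str(n))]
print(hits)   # [153, 370, 371, 407] -- no four-digit number qualifies
```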
Ulf Dittmer wrote:
Soumil Shah wrote: the number in the middle of each triple is the same as the digits of the number at either ends when added together.
4+2 = 11 ?
good catch... it should be 47...
Here's the question you clicked on:
Differentiate the function y = ln |2-x-5x^2|
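One way to work it, using the rule d/dx ln|u| = u′/u (a worked answer added here for reference; it is not from the original thread):

```latex
y = \ln\lvert 2 - x - 5x^2 \rvert
\quad\Longrightarrow\quad
y' = \frac{(2 - x - 5x^2)'}{2 - x - 5x^2}
   = \frac{-1 - 10x}{2 - x - 5x^2}
```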
No. 1113: How Can I Tell You about Math?
Today, three mathematicians wonder how to teach math. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people
whose ingenuity created them.
I've had a running conversation with three mathematician colleagues. They all feel acutely that we need to improve the public understanding of math. But none of them knows how!
One says, "How do you get around the fact that the media treat mathematicians as though we were mentally deformed?" Another says, "It's like trying to explain Mozart to someone who's never heard
Mozart -- without using sound." Another says, "It's really easy enough to understand mathematics. You simply learn mathematics."
These people differ greatly in temperament. Each readily admits his temperament colors the way he does math. Yet they share one conviction. Mathematics is a great beauty in their lives. It gives
them pleasure. They want others to share that pleasure.
But they also want to share the empowerment math gives them -- the increase of options in their lives. Whole worlds of human endeavor close off when you don't know math. As math literacy drops,
and our young limit their lives, America suffers. That's why I turned to these friends. And if they don't know the answer, they certainly see the problem. It's the Catch-22 of having to describe
mathematical pleasure to children who don't yet know math.
And it's not just children. We in engineering still meet students who're beyond calculus, and who haven't yet caught that glint of beauty. These are, by and large, students who've seen only sets
of formal steps in the math they've studied.
Of course that's the terrible trap we teachers face at all levels. It's far easier to teach methodical steps than to open our insides to students -- to say, "Here's where Heaven has touched me!"
Methods are so clean and reliable. It's easy to write and grade test questions about method.
But mathematicians are in their business to be surprised. While method can produce surprise, instruction based on method takes students out of the mental frame that expects surprise. They miss
both the pleasure, and the opportunity, surprise offers.
Mathematics is like humor. Math lets you turn suddenly, and veer onto a side road. Method doesn't. Well-constructed humor has a mathematical structure to it. My favorite example is the assertion
that "There are three kinds of people in this world: those who can count and those who can't!" Or, if you want one with even greater mathematical elegance, try, "All generalities are false!"
So my colleagues and I will keep trying to reach students after they've reached the university, even though that's too late. The fact is, we teachers need the help of parents and of the media.
Children need to know that math offers joyful and unexpected forks in their road. Math offers surprises that educational method hardly hints at. And that includes surprise in the road of their
own lives. It means more choice, and greater freedom.
I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.
(Theme music) | {"url":"http://uh.edu/engines/epi1113.htm","timestamp":"2014-04-18T10:57:02Z","content_type":null,"content_length":"7579","record_id":"<urn:uuid:fba8c224-988b-4240-a0a6-a49e28b51986>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Silvercreek, CO
Find a Silvercreek, CO Calculus Tutor
...I've spent the last 10 years in Winter Park away from engineering; I miss it, but I enjoy tutoring and consulting in software development. I have four children, the oldest of whom recently graduated from CU and the youngest of whom is just entering Middle Park High School. I always enjoyed teaching them and working with students to help them understand math, science, and computers.
20 Subjects: including calculus, chemistry, physics, geometry
...Upon coming to college I tutored kids semi-monthly at the local library in subjects ranging from basic math to algebra to chemistry. During that time I also gave private tutoring to two elementary school students who were learning basic math. I teach with a style that I think most teachers should follow.
30 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have been tutoring for 10 years now and have had great success with my students, ranging from middle school to college-age, and non-traditional students. I began tutoring by helping other
students in my high school classes. In college, I worked as a private math and physics tutor with the Physics department at the University of Louisiana.
16 Subjects: including calculus, chemistry, physics, GRE
...I am focused on teaching: Math and English, SAT Prep (99th percentile), GRE Prep (98th percentile), and College Applications. I'm experienced in teaching all levels of math and English to
students of various abilities. My lesson plans target strengthening weaknesses while keeping the material interesting and engaging.
30 Subjects: including calculus, reading, writing, statistics
...I am admitted to practice in both States and am in the process of being admitted into Colorado, which accepts the UBE. I took Linear Algebra in undergrad, earned a Master's, and am presently a Math Ph.D. student. I have not tutored this particular class, but I have tutored undergraduate analysis, so I have experience teaching upper-level undergraduate classes.
26 Subjects: including calculus, chemistry, physics, statistics
Boulder, CO Algebra 2 Tutor
Find a Boulder, CO Algebra 2 Tutor
...My academic success arose from being a hard worker and having a good system of learning in place, not from being a naturally gifted student. One of the greatest secrets of "A-students" is not
that they are naturally gifted, but that they work smart and fail often. Learning and academic success come from learning from your failures before you actually get graded on them.
41 Subjects: including algebra 2, reading, Spanish, English
...I have experience with most forms of writing: persuasive, informational (expository), and scientific writing/research papers. I received an A in every English and writing class I took in
college. In high school I took the Language and Composition AP exam and received a 5.
22 Subjects: including algebra 2, chemistry, English, algebra 1
Hi, I am a master's student in Civil Engineering at the University of Colorado, Boulder, and have already completed my bachelor's in Mechanical Engineering. I have experience tutoring younger
children in basic math, English, and history. I am also very proficient in high school and college level algebra, geometry, precalculus, English, history, geography, and computing skills.
26 Subjects: including algebra 2, reading, English, geometry
...In the following summer, I spent 10 days in Nicaragua on a Sustainable Field Study studying micro-hydro systems and solar energy in two sharply contrasting climates on a community level. After
teaching English in Thailand, I studied Ashtanga yoga under an enlightened master in Mysore, India for ...
61 Subjects: including algebra 2, reading, English, biology
...I have academic or industrial experience in a diverse range of Organic Chemistry, Plant Biochemistry, and Molecular Biology, as well as the supplementary science and math that goes along with that.
In addition I'm extremely well versed in Horticulture and both inorganic and organic crop techniques, w...
32 Subjects: including algebra 2, chemistry, reading, English | {"url":"http://www.purplemath.com/Boulder_CO_Algebra_2_tutors.php","timestamp":"2014-04-19T09:53:45Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:6f9da5ae-cc74-411c-a9e7-8c1a8a53f55e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Floor of the log equation: s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9)
Replies: 4 Last Post: May 22, 2013 11:09 PM
Re: Floor of the log equation: s = (floor(log10(x))+1)*x -
Posted: May 20, 2013 12:23 PM
On Sun, 19 May 2013 11:20:13 -0700, nepomucenocarlos68 wrote:
> Hi guys! I need your help to solve this equation:
> I need to find 'x' for a given 's'. Both of them are natural numbers (>0).
> I don't know how to handle the floor term.
> [octave/matlab format]
> s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9);
> [or TeX}
> s = \left( \lfloor\log(x)\rfloor+1 \right)x - \frac{10^{\lfloor\log(x)\rfloor+1}-10}{9}
> [or image]
> http://postimg.org/image/r5fd2enll/
> Exact or approximate values are good.
> Is there a solution? How do I solve it?
Your image and tex forms show log() in some cases where the octave/matlab
form shows log10(); in the following I ignore that inconsistency (which
doesn't affect the method outlined) and write l() to stand for log to
some base.
A closed-form solution giving x in terms of s may be difficult to find,
but if you can get some reasonably-close estimates x1, x2 of x such that
s is bounded by s(x1) and s(x2), then you can do a binary or other search
to find an x0 with s = s(x0), or perhaps the range of x values that yield
s. Anyhow, ignoring the floor and round functions and supposing logs are
base 10, the expression may be like
s = x*l(x) + x - ( 10*10^l(x) - 10 )/9 [5]
so s ~ x*l(x) + x - 1.1*x + 1.1 [6]
so s ~ x*l(x). [7]
According to <http://en.wikipedia.org/wiki/Lambert_W_function#Example_4>,
approximation [7] is solved by
x = e^W(s*ln(10)) [8]
You could compute values of x(s) using [8] for a couple of values of
s, and then search. Or via some of the other Lambert-function examples,
solve [6] to get more accurate starting values of x before the search.
Date Subject Author
5/19/13 Floor of the log equation: s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9) nepomucenocarlos68@gmail.com
5/20/13 Re: Floor of the log equation: s = (floor(log10(x))+1)*x - James Waldby
5/22/13 Re: Floor of the log equation: s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9) Carlos Nepomuceno
5/22/13 Re: Floor of the log equation: s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9) Carlos Nepomuceno
5/22/13 Re: Floor of the log equation: s = (floor(log10(x))+1)*x - round((10^(floor(log10(x))+1)-10)/9) Carlos Nepomuceno | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2572737&messageID=9125266","timestamp":"2014-04-18T08:09:26Z","content_type":null,"content_length":"23136","record_id":"<urn:uuid:a73c1ce7-a5b4-464b-831a-30dd34e7cc99>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wikipedia Mathematics
One day I set about trying to find a definition for some mathematical term or another, and naturally I turned toward that great tool of knowledge and understanding, Wikipedia, having always in the
past had great success expanding the limits of my cognition.
About halfway into the first paragraph it became clear I was merely parsing the tokens phonetically, gleaning no deeper understanding of the term I was researching. I tried again a couple of times,
going more slowly, but alas I had clearly bitten off more than I could chew - right enough, mathematics is a difficult subject and much of it relies on that which is defined elsewhere. Luckily the
first word I was having real problems with was a hyperlink, so I middle-clicked (hooray for Firefox) and brought up a new tab to quickly augment my basic understanding such that I could progress with
my original enquiry.
Unfortunately it wasn't to be. This definition, whilst looking more accessible (there was a nice colorful picture of some circles near the top) was twice as long as the first, and it too was laced
with incomprehensibles. Strangely, the words seemed like English, but they weren't behaving themselves at all. I once knew a kid who claimed to have found a ring in a field (actually it was his
mother's wedding ring, and he got grounded for weeks) but none of this wikimaths was making any sense at all. I clicked on "field"...
An hour later my browser crashed, but as I saw my desktop wallpaper redraw itself line by line over the last page I had been looking at (Wikipedia's definition of "number") I suddenly felt a little
strange. Like the opposite of tunnel vision I was suddenly aware of all the activity around me, and was overcome with an incredible feeling of calm... I could sense some change was coming, and like a
tidal wave in the distance, huge and impossible, I felt a rift in my consciousness develop. And as a storm of synapses fired across my brain, a glorious crystal realisation formed within me, and I
saw my soul in its true form and I high-fived God, and I maybe, just maybe, soiled myself a little.
And this is what I realised:
That mathematics on Wikipedia is best understood as a tensor dipole, especially when projected onto a quarternion hyperframe. In general terms it is a special case of the set of cubic displacement
models, characterised by its low frequency selection dynamics and zero-square suggestion profile. Of course this makes it particularly interesting to experts in the field of manual limit capture,
because, expressed as a complex function, it has eigenvalue synchronicity with Smith's Orbital Attractor.
On a more technical note, the trigraph looks almost like it might decompose to a Euler sphere and there are some that actually treat it as an (unconfirmed) nebular identity. In my opinion it is
dangerous however, to rely too heavily on this, as there is a strong case for Cantor's octogram proof conflicting with the late-stage indecision matrices that such an expansion would almost certainly
produce. Alas, it would seem that until someone actually proves the relationship between the zero-square cubics and Euler's homogenous antipolygonal radial limits (and thus the decompositions
thereof), no one is willing to risk their reputation by attacking this unclaimed parabole. Time will tell.
For reference here is the first sequence of joins from my own quartic analysis of the i-limited system:
1st 2nd 3rd 4th 5th 6th
x 2x 9 4y-3x undefined* 4dy - (6^e)dx
*The quartic intersection for i-limit systems is undefined in 2-manifold hyperframes.
If anyone has difficulty following my logic, they are welcome to msg me for an explanation. | {"url":"http://everything2.com/title/Wikipedia+Mathematics","timestamp":"2014-04-19T07:38:53Z","content_type":null,"content_length":"26615","record_id":"<urn:uuid:a5b09f91-70d2-46cd-ad05-ed823f0a1fc7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH-STUCK :-(
Posted by Anonymous on Monday, November 8, 2010 at 7:35pm.
I really need help factoring these special Trinomials.
This question really puzzled me. I thought I had it correct, but when I checked the answer at the back of the book, I was wrong, and so I would like to know what was it that I had done wrong.
This was the question: factor x^2 - 4x + 4.
This is how I solved it:
[the worked steps were not preserved in this copy]
I could not get any further than this. I thought that this would be correct, but the actual answer is (x-2)^2.
How did they get that?
If there is an easier way to solve this, then can you please show me how. Cause solving things like I have above is difficult.
• MATH-STUCK :-( - Reiny, Monday, November 8, 2010 at 9:55pm
Your error is in this line of yours
notice when you multiply it back out you don't get your original, and you don't have a "common" factor.
should have been
x(x-2)-2(x-2) to get back your x^2 - 4x + 4
= (x-2)(x-2)
= (x-2)^2
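A quick numerical check of this factoring (our addition, not part of the thread): the quadratic's repeated root at x = 2 confirms that x^2 - 4x + 4 = (x-2)^2. In MATLAB/Octave:
roots([1 -4 4])   % coefficients of x^2 - 4x + 4; returns [2; 2]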
I began to use Maple in my teaching in January 1995. At that time I was a complete novice in the use of computing in my mathematics teaching, and benefited greatly from a new, temporary colleague,
Dr. Mark Daly (to whom I dedicated my substantial August 2001 Fermat's little theorem talk). Mark was an expert in the use of Maple, and to him I owe whatever facility I acquired in those days in
using Maple. I always knew - right from the start - what I wanted to do with Maple, but there was the initial problem of bridging the gap between what I wanted to do, and finding out how to do it.
Another great support to me in those early days was David Joyner (at the US Naval Academy, Annapolis) who displayed - and continues to do so - some of my 3rd. year Number Theory and Cryptography
worksheets at a time when my own College didn't have a Web site. I dedicated my Chicago, November 2003 Bill Clinton, Bertie Ahern, and digital signatures talk to David.
My initial interest in the use of Maple was for teaching purposes only; I could not have known in those early days the profound impact Maple was to have on my own mathematical work. An early sign of
that may be seen in the Fermat 6 corner of my site, followed by the Jacobi and Gauss links (below).
There are many Maple worksheets located in other corners of my web site.
My views on Computer Algebra Systems in general, and Maple in particular, are expressed by Doron Zeilberger in his Opinion #47 and Opinion #26. Taking my lead from DZ, I dedicated my Dalhousie
colloquium talk - as may be read in the above Gauss Maple file - as follows:
I dedicate this talk to Bruno Buchberger and to the creators of Maple, Bruce W. Char, Keith O. Geddes, W. Morven Gentleman and Gaston H. Gonnet
Finally, I am immensely proud that DZ has included me (because of my Maple work) in his Favourite Links. | {"url":"http://staff.spd.dcu.ie/johnbcos/Maple.htm","timestamp":"2014-04-17T15:26:11Z","content_type":null,"content_length":"7506","record_id":"<urn:uuid:3ee560f1-f04e-4ac3-85fd-bc4aee60f65f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
February 22nd 2010, 06:57 AM   #1
Joined: Jan 2010
I have parameterized the intersection of two surfaces. I am given a point and the speed at that point, but I do not know anything about the velocity other than its magnitude there. Is the speed at the point necessary to find the direction? It is also known that the direction should point towards decreasing x values. Is the direction just the derivative of the position vector divided by the magnitude of the derivative?
February 22nd 2010, 08:04 AM   #2
MHF Contributor, joined Apr 2005
Quote (original question): "I have parameterized the intersection of two surfaces. ... Is the direction just the derivative of the position vector divided by the magnitude of the derivative?"
It's not clear to me what you are asking. What does the intersection of two surfaces have to do with the speed of a point? Is the point moving along the intersection? In that case, its velocity vector will either point in the same direction as the tangent vector of that curve or opposite to it. Use "pointing in the decreasing x values" to decide which, divide the tangent vector by its own length (magnitude), and multiply by the speed to find the velocity vector.
February 22nd 2010, 09:02 AM   #3
Joined: Jan 2010
Yes, a particle is moving along the intersection of the two surfaces. A point and the corresponding speed are given at t = 0. I need the direction in order to take a directional derivative for the second part of the problem. I was just a little thrown by the speed that is given. Do I really need the speed to calculate the unit tangent? I guess that is what I really meant to ask.
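A minimal MATLAB/Octave sketch of the recipe in reply #2 (our illustration, not part of the thread; the curve, the parameter value and the speed are all hypothetical):
r     = @(t) [cos(t); sin(t); t];      % hypothetical intersection curve
t0    = pi/4;                          % parameter value of the given point
speed = 3;                             % given speed at that point
h     = 1e-6;
T = (r(t0 + h) - r(t0 - h)) / (2*h);   % tangent vector r'(t0), by central differences
T = T / norm(T);                       % unit tangent
if T(1) > 0, T = -T; end               % pick the branch with decreasing x
v = speed * T                          % velocity = speed times unit tangent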
Formal criterion of flatness
Let $k$ be a field, $S$ and $R$ be local $k$-algebras with residue field $k$ and $\phi:S\to R$ be a local homomorphism. Then $\phi$ induces (obviously) a natural transformation of "functors of
points" $\phi^*:h_R \to h_S$ (where $h_A(B) = Hom(A, B)$). Let $Art_k$ be the category of local artinian $k$-algebras with residue field $k$ and let $F$, $G$ be the restrictions of $h_R$, resp. $h_S$
to $Art_k$.
There are well-known criteria of (formal) smoothness/etaleness of $\phi$ in terms of the induced transformation $\phi^* : F\to G$. There is also an infinitesimal criterion of flatness, but that is
different in spirit.
Question. Is there a criterion on $\phi^*:F\to G$ which ensures that $\phi$ is flat?
You can assume that $\phi$ is finite and that $S$ and $R$ are completions of finitely generated $k$-algebras.
ac.commutative-algebra ag.algebraic-geometry deformation-theory flatness
By the infinitesimal criterion for flatness, do you mean EGA IV.11.8? – Daniel Litt Jan 30 '13 at 1:31
Dear Daniel, thank you for the comment! I was not aware of this valuative criterion. What I meant is that a f.g. $M$ is flat over a local ring $(R,m)$ iff $M/m^n$ is flat over $R/m^n$ for all $n$.
– Piotr Achinger Jan 30 '13 at 2:20
No, because if there were such a criterion then there would be a proof of the flatness of formally smooth lfp morphisms which avoids inspection of the local structure theorem for formally etale
lfp morphisms. – user30180 Jan 30 '13 at 4:39
@Piotr: In any case, the valuative criterion in IV.11.8 doesn't meet your condition, I think. But it is a nice "picture" of flatness. – Daniel Litt Jan 30 '13 at 4:50
@ayanta: Good point! Could you expand that a bit? – Piotr Achinger Jan 30 '13 at 6:36
1 Answer
this is the answer_bot. The question as formulated has answer yes. Namely, assuming that R and S are completions of finite type k algebras and \phi is finite, then we can just require the following:
(*) For every A in Art_k and every element \xi in G(A) the functor * x_{\xi, G} F is representable by a B which is finite flat over A.
Now, this is a bit silly of course; still there is a way of modifying it so it works more generally. It also shows that user30180 is wrong! And I love pointing out that user30180 is wrong. I live for that! Ooops, no, I am not alive at all. I am the answer_bot. Hahahaha (evil laugh).
I confess that both the mathematics and the joke go over my head. What does 'x' and the asterisk before it signify? Maybe it would help if this were LaTeXed? – Todd Trimble♦ Dec 21
'13 at 20:36
Find the Prime Factors of a Number
A prime number is any integer greater than 1 with no divisors other than itself and 1, such as 2 and 11. Any integer greater than 1 can be written as a product of prime numbers in a unique way (except for the order of the factors).
Enter a number (with a maximum of 9 digits) to compute its prime factorization.
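What the form computes can be sketched by simple trial division; the MATLAB/Octave fragment below is our illustration of the idea, not the site's actual script (MATLAB's built-in factor does the same job).
function f = prime_factors(n)
    f = [];
    d = 2;
    while d * d <= n
        while mod(n, d) == 0      % divide out each prime factor completely
            f(end+1) = d;         %#ok<AGROW>
            n = n / d;
        end
        d = d + 1;
    end
    if n > 1, f(end+1) = n; end   % whatever remains is itself prime
end
% Example: prime_factors(360) returns [2 2 2 3 3 5].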
Odd & even exploration
This lesson will involve students in using manipulatives to explore even and odd numbers.
A lesson plan for grade K Mathematics
Learning outcomes
Students will begin to explore even and odd numbers of objects.
Teacher planning
Time required for lesson
60 minutes
Technology resources
• Computer with Graphers software. (Sunburst Product)
• Computer with Kid Pix Studio Deluxe (Broderbund Product) or any application that will stamp/insert pictures and then the ability to type numeral and number word.
• Computer with internet access
• Computer with spreadsheet to create a hundreds board for shading the odd and even numbers. (Excel document is attached.)
• Read Even Steven & Odd Todd by Kathryn Cristaldi. As story is being read, ask students to make predictions about what will happen next. They make predictions and then read on to see if their
predictions were correct. Also a good strategy to use as the story is being read is to write Even Steven’s numbers in one column on the board and Odd Todd’s numbers in another column on the
• After the story is read, discuss where we see numbers everyday in our lives and how today we are going to determine if these numbers are most like Even Steven’s numbers or Odd Todd’s numbers.
Part 1: Scoops of Cubes
1. After reading and discussing Even Steven & Odd Todd by Kathryn Cristaldi ask students to use both hands to scoop connecting cubes from a large container. They then stack the cubes in as many
pairs as possible. Ask them to count how many cubes they have.
2. Write “Numbers with Leftover Cubes” on the board or use the computer and a projection device. Say, “Raise your hand if you had one cube left over.” Record their numbers on the board or using a
computer projection device and a word processing program.
3. After the list is complete, identify these numbers as odd numbers. Emphasize that all of these had one cube left over.
4. Say, “Raise your hand if you were able to find pairs for all of your cubes.” Tell the students that these numbers are even numbers because they came out even when put into pairs that is, there
were none left over. Write a second heading, “Numbers with None Leftover.” As the students tell you their even numbers, write them in the second list.
5. Collect the cubes. Ask students to take a second grab of cubes. This time they should use only one hand. Then repeat this procedure.
Part 2: Even or Odd?
1. Place nine cubes (four cubes in one row and five cubes directly below) on the Strips Master Transparency. Ask, “Are there an even or odd number of cubes on the overhead? How do you know?”
2. Ask a student to count the number of cubes. Emphasize that one cube is not paired. Repeat with other numbers of cubes. Now put the cubes away and color eleven squares (five in one row and six in
the other). Ask, “Is this number even or odd? Who can count the number of squares?”
3. Repeat with other numbers. Give each child two of the grid strips from the Strips Master. Ask students to color one strip to show an even number and the other strip to show an odd number.
4. Write two headings, “Even Numbers” and “Odd Numbers”, on the board. As the students finish their work, ask them to tape their models under the appropriate column and write the number each strip
represents. When they are finished, ask:
□ “What do the strips in the even column have in common?”
□ “What do all the strips in the odd column have in common?”
□ Are these numbers in order from smallest to largest?”
5. Students should then complete Even or Odd? in class and Even or Odd House Search for homework.
Computer Activities
1. Using the program Graphers by Sunburst, have students choose an object, stamp the objects, and then sort the objects to determine if they are even or odd.
2. Use Kid Pix Studio Deluxe by Broderbund, have students stamp objects and then count the objects. Then using the alphabet tool, stamp the number on the slide. Also the number word can be placed on
the slide using the text tool or the alphabet tool. These can be put together for a class slide show of odd and even numbers.
3. The assessment activity page can also be completed on the computer. Students could count the objects, type the number of the objects in the oval and then type the word odd or even in the text box (Are these Even Steven or Odd Todd? Activity Page).
4. Older students could use a hundreds board spreadsheet and shade in the odd numbers one color and the even numbers another color. Attached as an excel document
Raising the Level
Have students add two groups/numbers together and discover the results:
• odd + odd = even
• even + even = even
• odd + even = odd
• even + odd = odd
Student can also explore and discover the results of subtracting odd and even numbers.
• Sorting and classifying of objects using graphers.
• Sharing of work done during discussions.
• Informal Assessment: Even or Odd? Activity Page
• Formal Assessment: Are these Even Steven or Odd Todd? Activity Page
• Class Created Kid Pix Slide Show
Supplemental information
Other literature can be used to introduce the lesson are:
• Where’s That Insect? by Barbara Brenner and Bernice Chardiet
• The Very Quiet Cricket by Eric Carle
• The Icky Bug Alphabet Book, by Jerry Pallotta
• Bugs, by Nancy Winslow Parker, and Joan Richards Wright
• Insects by Steve Parker
• Looking at Insects by David Suzuki, with Barbara Hehner
This lesson is Lesson 1 in the Thematic Unit Crawly Creatures from the Math Trailblazers Kindergarten Teacher Resource Book. (Kendall / Hunt Publishing Co.) Activities and Computer Integration
Strategies have been added.
• Common Core State Standards
□ Mathematics (2010)
☆ Kindergarten
○ Counting & Cardinality
■ K.CC.3Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).
■ K.CC.4Understand the relationship between numbers and quantities; connect counting to cardinality. When counting objects, say the number names in the standard order, pairing each
object with one and only one number name and each number name with one and only...
■ K.CC.6Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting
North Carolina curriculum alignment
Mathematics (2004)
Grade 1
• Goal 1: Number and Operations - The learner will read, write, and model whole numbers through 99 and compute with whole numbers.
□ Objective 1.01: Develop number sense for whole numbers through 99.
☆ Connect the model, number word, and number using a variety of representations.
☆ Use efficient strategies to count the number of objects in a set.
☆ Read and write numbers.
☆ Compare and order sets and numbers.
☆ Build understanding of place value (ones, tens).
☆ Estimate quantities fewer than or equal to 100.
☆ Recognize equivalence in sets and numbers 1-99. | {"url":"http://www.learnnc.org/lp/pages/3621","timestamp":"2014-04-18T23:15:32Z","content_type":null,"content_length":"21394","record_id":"<urn:uuid:8edee4f1-bdef-41eb-8fc4-c5b4257dcc2b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to create vector after import data from .xls using for loops?
First I used the command xlsread to import some data from Excel and got a row vector (or a column, it doesn't matter); then I tried to create a new matrix using for loops and an if statement, but I have a problem because I don't know how to do it... :)
Example: I import this
x=[1 5 3 7 10 10 10 15 4 8 3 5 11 19 10 10 10]
I want to create something like this:
a=[1 5 3 7] and b=[15 4 8 3 5 11 19]
,and new x=[mean(a) mean(b)]
It's necessary to use loops because every excel file that I import have different number of cell, for example: data from new excel is
x1=[1 2 3 10 10 10 4 5 6 7 8 9 11 12 13 10 10 10]
That is the reason why I need a block of code. The repeated number 10 marks the area that separates the groups of numbers, and I want to use that condition to create the new vectors.
Thanks for your help.
3 Comments
Can there be more than a and b? In other words, could you need to split x into n arrays?
Yes, it can. I need the function to split x into n arrays, where the number 10 is the zone between segments.
Yes, I used the number 10 just as an example. This is the vector imported from the .xls file:
x1 = [values not preserved in this copy]
and the vector I need:
a = [2.7523, ...]
and finally:
x1 = [mean(a) mean(b)]
2 Answers
Accepted answer
Thanks everyone for trying to help me, but I found the solution by myself. Maybe it is a little complicated, but it works, and that is the most important thing... :)
I used the "xlsread" command to import the data from Excel into the matrix x1; then I use a for loop to split it into my new variable x2.
% counters (the original loop body was truncated in this copy; the lines
% below reconstruct the idea: copy data values into the current segment
% and start a new segment at each run of the separator value 100)
seg = 1; row = 1; x2 = {};
for k = 1:length(x1)
    if x1(k) ~= 100                  % data value: append to current segment
        x2{seg}(row,1) = x1(k);  row = row + 1;
    elseif row > 1                   % separator: close a non-empty segment
        seg = seg + 1;  row = 1;
    end
end
means = cellfun(@mean, x2)           % e.g. [mean(a) mean(b) ...]
This is trivial if you have the Image Processing Toolbox - it's one line:
segments = regionprops(x1~=10, x1, 'PixelList', 'MeanIntensity'); % Get all your a's, b's, etc.
The above line will get you all the segments (all their values into individual arrays), and the mean values of all the arrays.
Question - is it always 10 that is the "dividing" zone between segments? Or is it any number that repeats?
4 Comments
Why are your dividing numbers all over the place? Can't you get them to be consistent? Why 10 one time and 100 another time. And what about that zero in there? Is that a divider or is that part of
the data? Regardless, the code I gave you will work once you find out what the number that divides/separates sections is.
I don't understand your question; what do you mean by "all over the place"? Forget the number 10, I was using that as an example. The real vector has the divider 100, and 0 is part of the data. Now I want to use vector x1 (with divider 100) to create a vector that contains the first group of data (vector a), then a second vector (vector b) with the data between the 100 dividers, and so on until the end. In a previous comment I placed a vector x1 that I imported from Excel, and now I need the vectors a and b ... and so on until the end. Thanks for helping me.
By all over the place I meant that the 10 or 100 or whatever can appear in stretches of any length in any position. It's not like you can just assume that each stretch of 100 occurs every 6th element
throughout the array. If it did occur with such predictable periodicity you can just use normal, good old indexing to extract out the subarrays. However since your 100's occur at elements 6, 15, 27,
etc. in random locations, that's what requires you to use regionprops which can handle that kind of data. The code is the same as I first gave you:
separationNumber = 100; % or 10 or whatever you want.
segments = regionprops(x1 ~= separationNumber , x1, 'PixelList', 'MeanIntensity'); | {"url":"http://www.mathworks.se/matlabcentral/answers/56999","timestamp":"2014-04-16T16:15:51Z","content_type":null,"content_length":"37423","record_id":"<urn:uuid:b6f82347-fb5d-4c7b-a451-435128a17112>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
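For completeness, a toolbox-free way to do the same split (our sketch, not from the thread):
sep   = 100;                                   % separator value
xv    = x1(:);                                 % work with a column copy
keep  = xv ~= sep;                             % true for data samples
segId = cumsum(diff([0; keep]) == 1) .* keep;  % 0 at separators, else 1,2,...
means = accumarray(segId(keep), xv(keep), [], @mean)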
Bayesian Model Selection for Group Studies
Neuroimage. Author manuscript; available in PMC Jul 15, 2009. PMCID: PMC2703732; EMSID: UKMS5226.
This article has been corrected; see the correction in volume 48, page 311.
Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data. BMS has recently found
widespread application in neuroimaging, particularly in the context of dynamic causal modelling (DCM). However, so far, combining BMS results from several subjects has relied on simple (fixed
effects) metrics, e.g. the group Bayes factor (GBF), that do not account for group heterogeneity or outliers. In this paper, we compare the GBF with two random effects methods for BMS at the
between-subject or group level. These methods provide inference on model-space using a classical and Bayesian perspective respectively. First, a classical (frequentist) approach uses the log model
evidence as a subject-specific summary statistic. This enables one to use analysis of variance to test for differences in log-evidences over models, relative to inter-subject differences. We then
consider the same problem in Bayesian terms and describe a novel hierarchical model, which is optimised to furnish a probability density on the models themselves. This new variational Bayes method
rests on treating the model as a random variable and estimating the parameters of a Dirichlet distribution which describes the probabilities for all models considered. These probabilities then define
a multinomial distribution over model space, allowing one to compute how likely it is that a specific model generated the data of a randomly chosen subject as well as the exceedance probability of
one model being more likely than any other model. Using empirical and synthetic data, we show that optimising a conditional density of the model probabilities, given the log-evidences for each model
over subjects, is more informative and appropriate than both the GBF and frequentist tests of the log-evidences. In particular, we found that the hierarchical Bayesian approach is considerably more
robust than either of the other approaches in the presence of outliers. We expect that this new random effects method will prove useful for a wide range of group studies, not only in the context of
DCM, but also for other modelling endeavours, e.g. comparing different source reconstruction methods for EEG/MEG or selecting among competing computational models of learning and decision-making.
Keywords: Random effects, variational Bayes, hierarchical models, model evidence, Bayes factor, model comparison, dynamic causal modelling, DCM, fMRI, EEG, MEG, source reconstruction
Model comparison and selection is central to the scientific process, in that it allows one to evaluate different hypotheses about the way data are caused (Pitt & Myung 2002). Nearly all scientific
reporting rests upon some form of model comparison, which represents a probabilistic statement about the beliefs in one hypothesis relative to some other(s), given observations or data. The
fundamental Neyman-Pearson lemma states that the best statistic upon which to base model selection is simply the probability of observing the data under one model, divided by the probability under
another model (Neyman & Pearson 1933). This is known as a likelihood ratio. In a classical (frequentist) setting, the distribution of the log-likelihood ratio, under the null hypothesis that
there is no difference between models, can be computed relatively easily for some models. Common examples include Wilks' Lambda for linear multivariate models and the F- and t-statistics for
univariate models. In a Bayesian setting, the equivalent to the log-likelihood ratio is the log-evidence ratio, which is commonly known as a Bayes factor (Kass & Raftery 1995). An important property
of Bayes factors is that they can deal both with nested and non-nested models. In contrast, frequentist model comparison can be seen as a special case of Bayes factors where, under certain
hierarchical restrictions on the models, their null distribution is readily available.
In this paper, we will consider the general case of how to use the model evidence for analyses at the group level, without putting any constraints on the models compared. These models can be
nonlinear, possibly dynamic and, critically, do not necessarily bear a hierarchical relationship to each other, i.e. they are not necessarily nested. The application domain we have in mind is the
comparison of dynamic causal models (DCMs) for fMRI or electrophysiological data (Friston et al. 2003; Stephan et al. 2007a) that have been inverted for each subject. However, the theoretical
framework described in this paper can be applied to any model, for example when comparing different source reconstruction methods for EEG/MEG or selecting among competing computational models of
learning and decision-making.
This paper is structured as follows. First, to ensure this paper is self-contained, particularly for readers without an in-depth knowledge of Bayesian statistics, we summarise the concept of
log-evidence as a measure of model goodness and review commonly used approximations to it, i.e. the Akaike Information Criterion (AIC; Akaike 1974), the Bayesian Information Criterion (BIC; Schwarz
1978), and the negative free-energy (F). These approximations differ in how they trade-off model fit against model complexity. Given any of these approximations to the log-evidence, we then consider
model comparison at the group level. We address this issue both from a classical and Bayesian perspective. First, in a frequentist setting, we consider classical inference on the log-evidences
themselves by treating them as summary statistics that reflect the evidence for each model for a given subject. Subsequently, using a hierarchical model and variational Bayes (VB), we describe a
novel technique for inference on the conditional density of the models per se, given data (or log-evidences) from all subjects. This rests on treating the model as a random variable and estimating
the parameters of a Dirichlet distribution, which describes the probabilities for all models considered. These probabilities then define a multinomial distribution over model space, allowing one to
compute how likely it is that a specific model generated the data of a subject chosen at random.
We compare and contrast these random effects approaches to the conventional use of the group Bayes factor (GBF), an approach for model comparison at the between-subject level that has been used
extensively in previous group studies in neuroimaging. For example, the GBF has been used frequently to decide between competing dynamic causal models fitted to fMRI (Acs & Greenlee 2008; Allen et
al. 2008; Grol et al. 2007; Heim et al. 2008; Kumar et al. 2007; Leff et al. 2008; Smith et al. 2006; Stephan et al. 2007b, 2007c; Summerfield & Koechlin 2008) and EEG data (Garrido et al. 2007, 2008
). While the GBF is a simple and straightforward index for model comparison at the group level, it assumes that all subjects’ data are generated by the same model (i.e. a fixed effects approach) and
can be influenced adversely by violations of this assumption.
The novel Bayesian framework presented in this paper does not suffer from these shortcomings: it can quantify the probability that a particular model generated the data for any randomly selected
subject, relative to other models, and it is robust to the presence of outliers. In the analyses below, we illustrate the advantages of this new approach using synthetic and empirical data. We show
that computing a conditional density of the model probabilities, given the log-evidences for all subjects, can be superior to both the GBF and frequentist tests applied to the log-evidences. In
particular, we found that our Bayesian approach is markedly more robust than either of the other approaches in the presence of outlying subjects.
The model evidence p(y | m) is the probability of obtaining observed data y given a particular model m. It can be considered the holy grail of any model inversion and is necessary to compare
different models or hypotheses. The evidence for some models can be computed relatively easily (e.g., for linear models); however, in general, computing the model evidence entails integrating out any
dependency on the model parameters $\theta$:
$p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, d\theta$    (1)
In many cases, this integration is analytically intractable and numerically difficult to compute. Usually, it is therefore necessary to use computationally tractable approximations to the model
evidence (or the log-evidence^1). A detailed description of some of the most common approximations is given in Appendix A.
A systematic evaluation of the relative usefulness of different approximations to the log-evidence is not the focus of this paper and will be presented in forthcoming work. This article deals with
a different question, namely: Given a particular approximation to the log-evidence and a number of inverted models, how can we infer which of several competing models is most likely to have generated
the data from a group of subjects? In other words, how can we make inference on model space at the group level, taking into account potential heterogeneity across the group?
In this section, we consider inference at the group level, using subject-specific model-evidences obtained by inverting a generative model for each subject. We will first describe a classical
approach, testing the null hypothesis that there are no differences among the relative log-evidences for various models over subjects. We then move on to more formal Bayesian inference on model space
per se. In contrast to the GBF, which, as described above, represents a fixed effects analysis, both the classical and Bayesian approaches are random effects procedures and thus consider
inter-subject heterogeneity explicitly.
Classical (frequentist) inference
A straightforward random effects procedure to evaluate the between-subject consistency of evidence for one model relative to others is to use the log-evidences across subjects as the basis for a
classical log-likelihood ratio statistic, testing the null hypothesis that no single model is better (in terms of their log-evidences) than any other. This essentially involves performing an ANOVA,
using the log-evidence as a summary statistic of model adequacy for each subject. This ANOVA then compares the differences among models to the differences among subjects with a classical F-statistic.
If this statistic is significant one can then compare the best model with the second best using a post hoc t-test. Effectively, this tests for differences between models that are consistent and large
in relation to differences within models over subjects. The most general implementation would be a repeated-measures ANOVA, where the log-evidences for the different models represent the repeated
measure. At its simplest, the comparison of just two models over subjects reduces to a simple paired t-test on the log-evidences (or a one-sample t-test on the log-evidence differences).
Log-evidences tend to be fairly well behaved, and the residuals of a simple ANOVA model, or tests of normality like Kolmogorov-Smirnov, usually indicate that parametric assumptions are appropriate.
In those cases when they are not, e.g. due to outlier subjects, one can use robust regression methods that are less sensitive to violations of normality (Diedrichsen et al. 2005; Wager et al. 2005)
or non-parametric tests that do not make any distributional assumptions (e.g. a Wilcoxon signed rank test; see one of our examples below).
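As a minimal sketch of the simplest case (ours, assuming MATLAB's Statistics Toolbox and an N-by-2 matrix lme of subject-wise log-evidences for two models):
d = lme(:,1) - lme(:,2);          % log-evidence differences, i.e. subject-wise ln BF_12
[h, p, ci, stats] = ttest(d);     % paired comparison via one-sample t-test on differences
fprintf('t(%d) = %.2f, p = %.3f\n', stats.df, stats.tstat, p);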
This classical random effects approach is simple to implement, straightforward and easily interpreted. In this sense, there seems little reason not to use it. However, as shown in the empirical
examples below, this type of inference can be affected markedly by group heterogeneity, even when the distribution of log-evidence differences is normal. A more robust analysis obtains by quantifying
the density on model space itself, using a Bayesian approach as described in the next section.
Bayesian inference on model space
Previously, we have suggested the use of a group Bayes factor (GBF) that is simply the product of Bayes factors over N subjects (Stephan et al. 2007b). This is equivalent to a fixed effects analysis
that rests on multiplying the likelihoods over subjects to furnish the probability of the multi-subject data, conditioned on each model:
$GBF_{ij} = \prod_{n=1}^{N} BF_{ij}^{(n)} = \prod_{n=1}^{N} \frac{p(y^{(n)} \mid m_i)}{p(y^{(n)} \mid m_j)}$    (2)
Here, the subscripts i,j refer to the models being compared, and the bracketed superscript refers to the n-th subject. The reason one can simply multiply the probabilities (or add the log-evidences)
is that the measured data can be regarded as conditionally independent samples over subjects. However, this does not represent a formal evaluation of the conditional density of a particular model
given data from all subjects. Furthermore, it rests upon a very particular generative model for group data: first, select one of K models from a multinomial distribution and then generate data,
under this model, for each of the N subjects. This is fundamentally different from a generative model which treats subjects as random effects: here we would select a model for each subject by
sampling from a multinomial distribution, and then generate data under that subject-specific model. The distinction between these two generative models is illustrated graphically in Figure 1.
Figure 1 (caption). Bayesian dependency graphs for fixed effects (A) and random effects generative models for multi-subject data (B, C). The graphical models in Figures 1B and 1C are equivalent; we show both because 1B is more intuitive for readers unfamiliar with graphical models.
In short, the GBF encodes the relative probability that the data were generated by one model relative to another, assuming the data were generated by the same model for all subjects. What we often
want, however, is the density from which models are sampled to generate subject-specific data. In other words, we seek the conditional estimates of the multinomial parameters, i.e. the model
probabilities $r = [r_1, \ldots, r_K]$, that generate switches or indicator variables, $m_n = [m_{n1}, \ldots, m_{nK}]$, where $m_{nk} \in \{0,1\}$ for $n = 1, \ldots, N$, and only one of these switches is equal to one; i.e., $\sum_{k=1}^{K} m_{nk} = 1$. These indicator variables prescribe the model for the n-th subject, where $p(m_{nk}=1) = r_k$. In the following, we describe a hierarchical Bayesian model that can be inverted to obtain an estimate of the
posterior density over r.
A variational Bayesian approach for inferring model probabilities
We will deal with K models with probabilities $r = [r_1, \ldots, r_K]$ that are described by a Dirichlet distribution:
$p(r \mid \alpha) = Dir(r; \alpha) = \frac{\Gamma\left(\sum_{k} \alpha_k\right)}{\prod_{k} \Gamma(\alpha_k)} \prod_{k=1}^{K} r_k^{\alpha_k - 1}$    (3)
Here, $\alpha = [\alpha_1, \ldots, \alpha_K]$ are related to the unobserved "occurrences" of models in the population; i.e. $\alpha_k - 1$ can be thought of as the effective number of subjects in which model k generated the observed data. Given the probabilities r, the distribution of the multinomial variable $m_n$ describes the probability that model k generated the data of subject n:
$p(m_n \mid r) = \prod_{k=1}^{K} r_k^{m_{nk}}$    (4)
For any given subject n, we can sample from this multinomial distribution to obtain a particular model k. The marginal likelihood of the data in the n-th subject, given this model k, is then obtained
by integrating over the parameters of the model selected:
$p(y_n \mid m_{nk}=1) = \int p(y_n \mid \theta, m_{nk}=1)\, p(\theta \mid m_{nk}=1)\, d\theta$    (5)
The graphical model summarising the dependencies among r, m and y as described by Equations 3-5 is shown in Figure 1B and 1C. Our goal is to invert this hierarchical model and estimate the posterior
distribution over r.
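To illustrate this generative model as code (our sketch; gamrnd and mnrnd require the Statistics Toolbox, and the numbers are arbitrary):
alpha0 = [1 1 1];  N = 20;  K = numel(alpha0);
g = gamrnd(alpha0, 1);                 % independent gamma draws ...
r = g / sum(g);                        % ... normalised: one sample r ~ Dir(alpha0)
labels = zeros(N, 1);
for n = 1:N
    labels(n) = find(mnrnd(1, r));     % model switch m_n ~ Mult(1, r) per subject
end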
Given the structure of the hierarchical model in Figure 1, the joint probability of the parameters and the data y can be written as:
$p(y, r, m) = \left[\prod_{n=1}^{N} p(y_n \mid m_n)\, p(m_n \mid r)\right] p(r \mid \alpha_0)$    (6)
The log joint probability is therefore given by:
$\ln p(y, r, m) = \sum_{n=1}^{N} \sum_{k=1}^{K} m_{nk} \left[\ln p(y_n \mid m_{nk}=1) + \ln r_k\right] + \sum_{k=1}^{K} (\alpha_{0k} - 1) \ln r_k + \text{const}$    (7)
The inversion of our hierarchical model relies on the following variational Bayesian (VB) approach in which we assume that an approximate posterior density q can be described by the following
mean-field factorisation:
$q(r, m) = q(r)\, q(m) \propto \exp[I(r)]\, \exp[I(m)]$    (8)
Here, I(r) and I(m) are variational energies for the mean-field partition. The mean-field assumption in Equation 8 means that the VB posterior will only be approximate but, as we shall see, it
provides a particularly simple and intuitive algorithm (c.f. Equation 14). This algorithm provides precise estimates of the parameters α defining the approximate Dirichlet posterior q(r) ≈ p(r | y);
this was verified by comparisons with a sampling method which is described in Appendix B.
To obtain the approximate posterior q(m) ≈ p(m | y), we have to do two things: first, compute I(m) and second, determine the normalising constant or partition function for exp(I(m)), which renders q(
m) a probability density. Making use of the log joint probability in Equation 7 and omitting terms that do not depend on m, the variational energy is:
$I(m) = \sum_{n=1}^{N} \sum_{k=1}^{K} m_{nk} \left[\ln p(y_n \mid m_{nk}=1) + \Psi(\alpha_k) - \Psi(\alpha_S)\right]$    (9)
Here $\alpha_S = \sum_{k} \alpha_k$ and $\Psi$ is the digamma function^2.
The next step is to obtain the approximate posterior, q(m): If $g_{nk}$ is our (normalized) posterior belief that model k generated the data from subject n, i.e. $g_{nk} = q(m_{nk}=1)$, then Equation 9 tells us that
$g_{nk} = \frac{u_{nk}}{u_n}, \qquad u_{nk} = \exp\left[\ln p(y_n \mid m_{nk}=1) + \Psi(\alpha_k) - \Psi(\alpha_S)\right], \qquad u_n = \sum_{k=1}^{K} u_{nk}$    (11)
where $u_{nk}$ is the equivalent (non-normalized) belief and $u_n$ is the partition function for exp(I(m)) that ensures that the posterior probabilities sum to one.
We now repeat the above procedure but this time for the approximate posterior over r. By substituting in the log joint probability from Equation 7 and omitting terms that do not depend on r, we have
$I(r) = \sum_{k=1}^{K} (\alpha_{0k} - 1) \ln r_k + \sum_{n=1}^{N} \sum_{k=1}^{K} g_{nk} \ln r_k = \sum_{k=1}^{K} (\alpha_{0k} + \beta_k - 1) \ln r_k$    (12)
Here, $\beta_k = \sum_{n} g_{nk}$ is the expected number of subjects whose data we believe were generated by model k. Now, from Equation 8 we have $\ln q(r) = I(r) + \ldots$ and from Equation 3 we see that the log of a Dirichlet density is given by $\ln Dir(r; \alpha) = \sum_k (\alpha_k - 1) \ln r_k + \ldots$ Hence, by comparing terms we see that the approximate posterior is $q(r) = Dir(r; \alpha)$ where:
$\alpha = \alpha_0 + \beta$    (13)
In short, Equation 13 simply adds the 'data counts', β, to the 'prior counts', $\alpha_0$. This is an example of a free-form VB approximation, where the optimal form of the approximate posterior (in this case a Dirichlet) has been derived rather than assumed before-hand (c.f. fixed-form VB approximations; Friston et al. 2007). It should be stressed, however, that due to the mean-field assumption
used by our VB approach (see Equation 8), q(r) is only an approximate posterior and the true posterior distribution p(r | y) does not have the exact form of a Dirichlet distribution.
The above equations can be implemented as an optimisation algorithm which updates estimates of α iteratively until convergence. By combining Equations 11, 12 and 13 we get the following pseudo-code
of a simple algorithm that gives us the parameters of the conditional density we seek, i.e. $q(r) = Dir(r; \alpha)$:
$\alpha = \alpha_0$
Until convergence
    for all subjects n and models k: $u_{nk} = \exp\left[\ln p(y_n \mid m_{nk}=1) + \Psi(\alpha_k) - \Psi(\alpha_S)\right]$, $g_{nk} = u_{nk} / \sum_{k'} u_{nk'}$
    for all models k: $\beta_k = \sum_{n} g_{nk}$
    $\alpha = \alpha_0 + \beta$
end    (14)
We make the usual assumption that, a priori, no models have been "seen" (i.e. the Dirichlet prior is $\alpha_0 = [1, \ldots, 1]$).^3 Critically, this scheme requires only the log-evidences over models and subjects (c.f. Equation 11):
$\ln p(y_n \mid m_{nk}=1), \quad n = 1, \ldots, N; \; k = 1, \ldots, K$
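To make the scheme concrete, here is a minimal MATLAB/Octave implementation of the update rule in Equation 14 (our sketch, written from the equations above rather than taken from any published toolbox; psi is the built-in digamma function):
function alpha = vb_bms(lme)
% lme: N-by-K matrix of log-evidences ln p(y_n | m_nk = 1).
% alpha: parameters of the approximate posterior q(r) = Dir(r; alpha).
[N, K] = size(lme);
alpha0 = ones(1, K);                  % flat prior: no models "seen"
alpha  = alpha0;
for it = 1:500                        % fixed cap; convergence is much faster
    lu = lme + repmat(psi(alpha) - psi(sum(alpha)), N, 1);
    lu = lu - repmat(max(lu, [], 2), 1, K);    % guard against overflow in exp
    u  = exp(lu);
    g  = u ./ repmat(sum(u, 2), 1, K);         % posterior model beliefs g_nk
    beta      = sum(g, 1);                     % expected model counts
    alpha_new = alpha0 + beta;                 % Equation 13
    if norm(alpha_new - alpha) < 1e-8, alpha = alpha_new; break; end
    alpha = alpha_new;
end
end
The expected model probabilities then follow as alpha/sum(alpha).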
Using the Dirichlet density p(r | y;α) for model comparison
After the above optimization of the Dirichlet parameters, α, the Dirichlet density p(r | y;α) can be used for model comparisons at the group level. There are several ways to report this comparison
that result in equivalent model rankings. The simplest option is to report the estimates of the Dirichlet parameters α. Another possibility is to use those estimates to compute the expected multinomial parameters $\langle r_k \rangle$, i.e. the expected probability, under $p(m_{nk}=1 \mid r) = Mult(m; 1, r)$, of obtaining the k-th model for any randomly selected subject:^4
$\langle r_k \rangle = \frac{\alpha_k}{\alpha_1 + \ldots + \alpha_K}$
A third option is to use the conditional model probability p(r | y;α) to quantify an exceedance probability, i.e. our belief that a particular model k is more likely than any other model (of the K models tested), given the group data:
$\varphi_k = p(r_k > r_j \mid y; \alpha) \quad \forall j \in \{1, \ldots, K\}, \; j \neq k$
The exceedance probabilities $\varphi_k$ sum to one over all models tested. They are particularly intuitive when comparing two models (or model subsets, see below). In this case, because the conditional probabilities of the models $r_k$ sum to one, the exceedance probability for one of two models reduces to $\varphi_1 = p(r_1 > 0.5 \mid y; \alpha)$.
The analyses of empirical data below include several examples where two models are compared; the associated exceedance probabilities are shown in Figures 3, 6, 9 and 13.
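In practice, the exceedance probabilities defined above are easily approximated by sampling from Dir(r; α); a sketch (ours; gamrnd requires the Statistics Toolbox, and alpha is the optimised Dirichlet parameter vector):
nSamp = 1e5;  K = numel(alpha);
g = zeros(nSamp, K);
for k = 1:K
    g(:,k) = gamrnd(alpha(k), 1, nSamp, 1);    % gamma draws per component
end
r = g ./ repmat(sum(g, 2), 1, K);              % rows are samples from Dir(alpha)
[mx, best] = max(r, [], 2);                    % winning model per sample
xp = histc(best, 1:K)' / nSamp                 % Monte Carlo exceedance probabilities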
Figure 3 (caption). The Dirichlet density describing the probability of the nonlinear model m[1] in Figure 2 given the synthetic data across the 20 realisations. The shaded area represents the exceedance probability $\varphi_1$ of m[1] being a more likely model than the (incorrect) bilinear model.
Figure 6 (caption). The Dirichlet density describing the probability of model m[1] in Figure 5 given the measured data across the group. The shaded area represents the exceedance probability $\varphi_1 = p(r_1 > 0.5 \mid y; \alpha)$ of m[1] being a more likely model than the alternative model.
Figure 9 (caption). The Dirichlet density describing the probability of model m[1] in Figure 8 given the measured data across the group. The shaded area represents the exceedance probability $\varphi_1 = p(r_1 > 0.5 \mid y; \alpha)$ of m[1] being a more likely model than the alternative model.
Figure 13 (caption). The Dirichlet density for the nonlinear partition of model space, defined by the parameter estimates shown by Figure 12C. The exceedance probability of $\varphi_1$ = 98.6% (shaded area) indicates the probability that nonlinear hemodynamic models were better than linear ones.
Either the Dirichlet parameter estimates α, the conditional expectations of model probabilities $\langle r_k \rangle$, or the exceedance probabilities $\varphi_k$ can be used to rank models at the group level. In the next section, we present several practical examples of our method, applying it to both synthetic and empirical data. In this paper, we focus on comparing two models (or two model subsets) and largely rely on exceedance probabilities when discussing the results of our analyses. However, for each analysis we also report the estimates of α and the conditional expectations of model probabilities, $\langle r_k \rangle$.
Model space partitioning
A particular strength of the approach presented in this paper is that it can not only be used to compare specific models, but also to compare particular classes or subsets of models, resulting from a
partition of model space. For example, one may want to compute the probability that a specific model attribute, say the presence vs. absence of a particular connection in a DCM, improves or reduces
model performance, regardless of any other differences among the models considered. This type of inference rests on comparing two (or more) subsets of model space, pooling information over all models
in these subsets. This effectively removes uncertainty about any aspect of model structure, other than the attribute of interest (which defines the partition). Heuristically, this sort of analysis
can be considered a Bayesian analogue of tests for “main effects” in classical ANOVA.
Within our framework this type of analysis can be performed by exploiting the agglomerative property of the Dirichlet distribution. Generally, for any partition of model space into J disjoint
subsets, $N_1, N_2, \ldots, N_J$, this property ensures that
$\left[\textstyle\sum_{k \in N_1} r_k, \; \ldots, \; \sum_{k \in N_J} r_k\right] \sim Dir\left(\textstyle\sum_{k \in N_1} \alpha_k, \; \ldots, \; \sum_{k \in N_J} \alpha_k\right)$
In other words, once we have estimates of the Dirichlet parameters $\alpha_k$ for all K models, it is easy to evaluate the relative importance of different model subspaces: For any given partition of model space, a new Dirichlet density reflecting this partition can be defined by simply adding the $\alpha_k$ for all models k belonging to the same subset. The resulting Dirichlet can then be used to compare different subsets of model space in exactly the same way as one compares individual models, e.g. using exceedance probabilities. An example of this application is shown in Figures 12 and 13.
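Given the optimised α, such a family-level comparison is then a one-liner (our sketch; the partition below is hypothetical):
fam      = [1 1 2 2];                      % e.g. K = 4 models assigned to J = 2 subsets
alphaFam = accumarray(fam(:), alpha(:));   % add alpha_k within each subset
rFam     = alphaFam / sum(alphaFam)        % expected probabilities of the families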
Figure 12 (caption). An example of model space partitioning applied to the case of DCMs which were identical in network architecture (the same as m[1] in Figure 8) but differed in the hemodynamic forward model employed (for details, see Stephan et al. 2007c). 1. Eight different [rest of caption truncated].
In what follows, we compare classical inference, the GBF (fixed effects) and inference on model space (random effects) using both synthetic and real data. These data have been previously published
and have been analysed in various ways, including group level model inference using GBFs (Stephan et al. 2007b, 2007c; Stephan et al. 2008).
Synthetic data: nonlinear vs. bilinear modulation
To demonstrate the face validity of our method, we used simulated data, where the true model was known. Specifically, we used one of the synthetic data sets described by Stephan et al. (2008),
consisting of twenty synthetic BOLD time-series that were generated using a three-area nonlinear DCM with fixed parameters and adding Gaussian observation noise to achieve a signal-to-noise ratio
(SNR) of two. Each time-series consisted of 100 data points that were obtained by sampling the model output at a frequency of 1 Hz over a period of 100 seconds. For each time-series, we fitted (i) a
nonlinear DCM with the same model structure as the model that generated the data (“correct model” in Fig. 2, model m[1]), and (ii) a second DCM that was similar in structure but included a bilinear
(instead of a nonlinear) modulatory influence (“incorrect model” in Fig. 2, model m[2]). Using the negative free-energy approximation to the log-evidence, the differences in log-evidences for all
twenty time-series are plotted in the lower part of Fig. 2. It can be seen that in 17 out of 20 cases the nonlinear model was correctly identified as the more likely model. The overall GBF (9 × 10^
14) was also clearly in favour of the correct model.
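As an aside, the noise step of such simulations can be sketched as follows (ours; the paper's exact SNR definition is not restated here, so we assume SNR = std(signal)/std(noise)):
SNR   = 2;
noise = randn(size(y0)) * std(y0) / SNR;   % y0: noise-free model prediction
y     = y0 + noise;                        % synthetic time-series at the target SNR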
Figure 2 (caption). Synthetic data consisting of twenty time-series that were generated using a three-area nonlinear DCM and adding random observation noise (see Stephan et al. 2008 for details). To each of these time-series, two models were fitted and compared: (i) a nonlinear model and (ii) a bilinear model.
Here, we revisit this synthetic data set using random effects BMS procedures. We first used classical inference, applying a paired t-test to the log-evidences of the two models. This test rejected
the null hypothesis of no difference in model goodness (t = 4.615, df = 19, p < 10^-4). Applying the novel hierarchical BMS approach gave an even clearer (and arguably also more useful) answer: the
exceedance probability $\varphi_1$, i.e. the probability of m[1] being a more likely model than m[2], was 100% (Figure 3). In other words, using the exceedance probability as a criterion, the correct model
was identified perfectly, given all twenty data sets and the chosen level of noise. To further corroborate this result, we compared the result from our VB algorithm to an independent method which
estimates the parameters α by sampling from the approximate Dirichlet posterior q(r) ≈ p(r | y) . This comparison showed that the VB estimate of α resulted in an estimate of the negative free-energy
F(y,α) ≤ ln p(y | α) that was consistent with the results from the sampling approach (Figure 4). This provides an additional validation of our VB technique. We used this sampling approach to verify
the correctness of our VB estimates in all subsequent analyses.
Figure 4 (caption). Confirmation of our VB estimate for $\alpha_1$ (vertical dotted line) in Figure 3 by comparing it against the result obtained by a sampling approach (solid line); see main text for details.
It should be noted that this simulation study concerned the extreme case that only one model had generated all data, i.e. r[1]=100% and r[2]=0%, making it easy to intuitively understand the
performance of the proposed model selection procedure. However, this simulation did not probe the robustness of our method when randomly sampling from a heterogeneous population of subjects whose
data had been generated by different models. We will revisit this scenario in a later section of this paper once we have introduced and compared two alternative DCMs of inter-hemispheric interactions
using empirical data.
Comparing different six-area DCMs of the ventral visual stream
As a first empirical application, we investigated a case we had encountered in our previous research (Stephan et al. 2007b) and which had actually triggered our interest in developing more powerful
group level inference about models. This model comparison concerned DCMs describing alternative mechanisms of inter-hemispheric integration in terms of context-dependent modulation of connections. In
one of the analyses of the original report (Stephan et al. 2007b), competing DCMs had been constructed for the ventral stream of the visual system by systematically changing which of the
experimentally controlled conditions modulated the intra- and/or the inter-hemispheric connections.
First, we focused on the six-area model of the ventral stream, comprising the lingual gyrus (LG), middle occipital gyrus (MOG) and fusiform gyrus (FG) in both hemispheres, and revisit the comparison
of the best two models as indexed by the GBF. In the first model, m[1], inter-hemispheric connections were modulated by a letter decision task, but conditional on the visual field of stimulus
presentation (LD|VF); intra-hemispheric connections were modulated by LD alone (see right side of Figure 5). In the second model, m[2], these modulations were reversed: inter-hemispheric connections
were modulated by LD and intra-hemispheric connections were modulated by LD|VF (see left side of Figure 5). The distribution of log-evidence differences (approximated by AIC/BIC, following the
procedure suggested by Penny et al. 2004) is shown in the centre of Figure 5: Although m[1] was robustly superior in 11 of the 12 subjects, a single outlier was so extreme that the GBF indicated an
overall superiority of m[2] (GBF=15 in favour of m[2]). In contrast, model comparison using our novel Bayesian method was not affected by this outlier: the exceedance probability in favour of m[1]
was very high ($\varphi_1$ = 99.7%), and the conditional expectation $\langle r_1 \rangle$, i.e. the probability that m[1] generated the data of any randomly selected subject, was 84.3% (Figure 6). The estimates of our VB method were confirmed by the
sampling approach (Figure 7).
Figure 5 (caption). Comparison of DCMs describing alternative mechanisms of inter-hemispheric integration in terms of context-dependent modulation of connections (Stephan et al. 2007b). Two variants of a six-area model of the ventral stream, comprising the lingual gyrus (LG), middle occipital gyrus (MOG) and fusiform gyrus (FG) in both hemispheres.
Figure 7 (caption). Confirmation of our VB estimate for $\alpha_1$ (vertical dotted line) in Figure 6 by comparing it against the result obtained by a sampling approach (solid line); see main text for details.
For comparison, we also applied frequentist statistics to the log-evidences as described above. The single outlier subject made the distribution of the log-evidence differences non-normal
(Kolmogorov-Smirnov test: p < 10^-7, D[N] = 0.822), and thus prevented detection of a significant difference between the two models by a one-tailed paired t-test (t = 0.073, df = 11, p = 0.471).
Given this deviation from normality, we applied a nonparametric Wilcoxon signed rank test which makes no distributional assumptions; this test was indeed able to find a significant difference between
the models (p = 0.034).
Comparing different four-area DCMs of the ventral visual stream
Next, we investigated a variant of the previous case where the distribution of log-evidences across subjects was more heterogeneous. This model comparison was essentially identical to the previous
one, except that the models in question only contained four areas (LG and FG in both hemispheres), instead of six. Visual inspection of the distribution of log-evidence differences (Figure 8) shows
that the same subject as in the previous example favoured m[2], albeit far less strongly; in addition three more subjects showed evidence in favour of m[2], albeit only weakly. Given this
constellation, the original analysis by Stephan et al. (2007b) only found a relatively weak superiority of m[1] (GBF = 8). In contrast, the VB method gave an exceedance probability of φ[1] = 92.8% in
favour of m[1], indicating more clearly that m[1] is a superior model (Figure 9). As above, the estimates of our VB method were confirmed by sampling (Figure 10).
A variant of the model comparison shown by Figure 5; here the models in question contained four areas (LG and FG in both hemispheres). The distribution of log-evidence differences shows that the same
subject as in Figure 5 constituted an outlier; in addition ...
Confirmation of our VB estimate for α[1] (vertical dotted line) in Figure 9 by comparing it against the result obtained by a sampling approach (solid line); see main text for details.
When comparing this result to the frequentist random effects approach, a one-tailed paired t-test was unable to detect a significant difference between the two models (t = 0.165, df = 11, p = 0.436).
In contrast to the previous example, this failure was not due to outlier-induced deviations from normality: a Kolmogorov-Smirnov test applied to the log-evidences was unable to reject the null
hypothesis that they were normally distributed (p = 0.743). Here, the between-subject variability, while in accordance with normality assumptions, was simply too large to reject the null hypothesis
with the classical t-test. A nonparametric Wilcoxon signed rank test did not fare any better (p = 0.266).
Synthetic data: randomly sampling from a heterogeneous population
In a second simulation study, we examined the robustness of our method when randomly sampling from a heterogeneous population of subjects. Specifically, we dealt with a population in which 70% of
subjects showed brain responses as generated by model m[1] shown in Figure 8, whereas brain activity in the remaining 30% of the population was generated by model m[2]. We randomly sampled 20
subjects from this population and generated synthetic fMRI data by integrating the state equations of the associated models with fixed parameters and inputs^5 and adding Gaussian observation noise to
achieve an SNR of two. Each synthetic data set had exactly the same structure as the empirical data described in the previous section (700 data points, TR = 3 s). Both m[1] and m[2] were then fitted
to all 20 synthetic data sets, and the resulting log-evidences were used to perform both fixed effects BMS and random effects BMS, using the VB method described in this paper. This sampling and data
generation procedure was repeated 20 times, resulting in a total of 400 generated data sets and 800 fitted models. For each of the 20 sets of 20 subjects, we computed the different indices provided
by random effects BMS (i.e., α, r and the exceedance probabilities φ) and fixed effects BMS (log GBF). The means of these indices are plotted in Figure 11, together with 95% confidence intervals (CI). If our random effects BMS method were perfect in uncovering the underlying structure of the population we sampled from, one would expect to find the following average estimates: (i) α[1] = 22 × 0.7 = 15.4 and α[2] = 22 × 0.3 = 6.6 for the Dirichlet parameters, (ii) r[1] = 0.7 and r[2] = 0.3 for the posterior expectations of the model probabilities, and (iii) φ[1] = 1 and φ[2] = 0 as exceedance probabilities (note that the exceedance probability is not the posterior model probability itself, but a statement of belief about the posterior probability of one model being higher than the posterior probability of any other model). The actual estimates of the BMS indices for the simulated data were (i) α[1] = 15.4 (CI: 14.1 - 16.7) and α[2] = 6.6 (CI: 5.3 - 7.9), (ii) r[1] = 0.7 and r[2] = 0.3, and (iii) φ[1] = 0.89 (CI: 0.83 - 0.96) and φ[2] = 0.11 (CI: 0.04 - 0.17). For comparison, the average log GBF in favour of model m[1] was 548.9 (CI: 446.2 - 651.6).
Summary of the results from a simulation study in which we examined the robustness of our method when randomly sampling from a heterogeneous population of subjects. Specifically, we dealt with a
population in which 70% of subjects showed brain responses ...
In conclusion, while our random effects BMS method provides a slightly overconservative estimate of exceedance probabilities for the chosen sample size, it shows very good performance overall,
providing BMS indices that accurately reflect the structure of the population we sampled from. In particular, the Dirichlet parameters and posterior expectations of model probabilities (which
represent the expected probability of obtaining the k-th model when randomly selecting a subject) were estimated very precisely. This result not only validates the results obtained for the empirical
data set described above, but demonstrates more generally that our BMS procedure is robust when randomly sampling from a heterogeneous population of subjects.
Comparing different hemodynamic models by model space partitioning
Finally, we revisited a comparison of DCMs, which were identical in network architecture (the same as m[1] in Figure 8) but differed in the hemodynamic forward model employed (Stephan et al. 2007c).
A three-factor design was used to construct 8 different models: (i) nonlinear vs. linear BOLD equations, (ii) classical vs. revised coefficients of the BOLD equation, and (iii) free vs. fixed
parameter (ε) for the ratio of intra- and extravascular signal changes. In the original analysis by Stephan et al. (2007c), the GBF (based on the negative free-energy approximation) was used to
establish the best among the eight models. The best model, abbreviated as RBM[N](ε) in Figure 12, was characterised by (i) a nonlinear BOLD equation, (ii) revised coefficients of the BOLD equation,
and (iii) free ε. The difference of its summed log-evidence compared to the second-best model, its linear counterpart RBM[L](ε), was 5.26, corresponding to a GBF of 192 in favour of the nonlinear
model. The summed log-evidences for all 8 models are shown in Figure 12A.
Here, we demonstrate how one can use the agglomerative property of the Dirichlet distribution (Equation 18) to go beyond selective comparisons of specific models and instead examine the relative
importance of particular model attributes or model subspaces. Given the three factors above, we focussed on the importance of nonlinearities: what is the posterior probability that nonlinear BOLD
equations improve the model compared to linear BOLD equations, regardless of any other dimensions of model space (i.e., classical vs. revised coefficients and free vs. fixed ε)?
Following Equation 18, this question is addressed easily. In a first step, the VB procedure was applied to the entire set of eight models, yielding posterior estimates of the Dirichlet parameters α[1],...,α[8] (see Figure 12B). Subsequently, a new Dirichlet density reflecting the partition of model space into nonlinear and linear subspaces was computed by summing α[k] separately for the
nonlinear and linear models (Figure 12C; for simplicity the ordering of the models in Figure 12 has been chosen such that the first four models are nonlinear [left of the dashed line], whereas the
last four models are linear [right of the dashed line]). The resulting Dirichlet density can then be used to compare nonlinear and linear models in exactly the same way as one compares two models; e.g. using
exceedance probabilities. Figure 13 shows the result of this comparison: the probability that nonlinear hemodynamic models are better than linear models, regardless of other model attributes, was φ[1] = 98.6%.
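As an aside, the arithmetic behind such a family-level comparison is simple enough to sketch. The following Python fragment is only an illustration of the agglomerative step (the Dirichlet estimates in alpha are hypothetical placeholders, ordered as in Figure 12; this is not the SPM implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = np.array([3.1, 2.2, 1.9, 1.8, 1.2, 1.1, 0.9, 0.8])  # hypothetical estimates

    # agglomerative property (Equation 18): sum Dirichlet parameters over a partition
    alpha_family = np.array([alpha[:4].sum(), alpha[4:].sum()])  # [nonlinear, linear]

    # exceedance probability of the nonlinear family, approximated by sampling
    r = rng.dirichlet(alpha_family, size=100_000)
    phi_nonlinear = (r[:, 0] > r[:, 1]).mean()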
For comparison, we also used classical inference, applying a repeated-measure ANOVA (with Greenhouse-Geisser correction for non-sphericity) to the log-evidences of the eight models. The result of
this test was compatible with the above analysis, rejecting the null hypothesis that linear and nonlinear models were equal in log-evidence (F = 24.330, df = 1,11, p < 0.0004).
In this paper, we have introduced a novel approach for model selection at the group level. Provisional experience suggests that this approach represents a more powerful way of quantifying one’s
belief that a particular model is more likely than any other at the group level, relative to the conventional GBF. Critically, this variational Bayesian approach rests on treating the model switches
m[i] as a random variable, within a full hierarchical model for multi-subject data (see Figure 1), and thus accommodates random effects at the between-subject level. Notably, this inference procedure
needs only the log-evidences for each model and subject.
In the empirical examples above, we showed two cases where frequentist tests failed to indicate clear differences between models, while the novel Bayesian approach succeeded. In one case (the
six-area ventral stream model), a strong outlier subject made the distribution of log-evidences non-normal and thus rendered the t-test (but not a non-parametric test) unable to find a significant
difference between models. In another case (the four-area ventral stream model), the distribution of log-evidences was normal, but with a between-subject variance that was big enough to prevent
significant results by frequentist tests (parametric or non-parametric). It should be noted, however, that the frequentist and Bayesian approaches do not test the same thing. The frequentist approach
tries to reject the null hypothesis that there are no differences in log-evidence across models. In contrast, the Bayesian approach estimates the models’ probabilities, given the data, and enables
inference in terms of exceedance probabilities: the exceedance probability φ[k] is the probability that a given model k is more likely than any other model (of the K models tested). Furthermore, we can compute the posterior probabilities of the models themselves: r[k] is the probability that the k-th model generated the data for a randomly selected subject.
The exceedance probability of a model differs in a subtle but important way from the conventional posterior probability of a model in Bayesian model comparison: Because we have a hierarchical model,
the posterior probability that any particular model caused the data from a subject chosen at random, is itself a random variable (r in the derivations above). This means that the exceedance
probability is a statement of belief about the posterior probability, not the posterior probability itself. So, for example, when we say that the exceedance probability is 98%, we mean that we can be
98% confident that the favoured model has a greater posterior probability than any other model tested. This is not the same as saying that the posterior probability of the favoured model is 98%. The
advantage of using exceedance probabilities is that they are sensitive to the confidence in the posterior probability and easily interpretable (since they sum to unity over all models tested).
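For intuition, exceedance probabilities are easy to approximate once the posterior Dirichlet is known. A minimal Python sketch (not the SPM code) that samples from Dir(r; α) and counts how often each model attains the largest posterior probability might look like this:

    import numpy as np

    def exceedance_probabilities(alpha, n_samples=1_000_000, seed=0):
        """Monte Carlo estimate of phi_k = p(r_k > r_j for all j != k | y)."""
        rng = np.random.default_rng(seed)
        r = rng.dirichlet(alpha, size=n_samples)  # samples from Dir(r; alpha)
        winners = r.argmax(axis=1)                # model with the largest r per sample
        return np.bincount(winners, minlength=len(alpha)) / n_samples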
As can be seen from Equations 9 and 11, our method is sensitive to both the distribution and the magnitude of log-evidence differences. The same is true for frequentist tests applied to log-evidence
differences, e.g. t-tests. However, a critical difference between these frequentist approaches and the VB method is that for the latter the influence of outliers has a natural bound. There is a
simple and intuitive reason for this nice property of the VB method: if we keep increasing the log-evidence of model k for a particular subject n, our posterior belief that k generated the data of
subject n (that is, g[nk]=q(m[nk]=1); see Eq. 11) will asymptote to one. Once it has reached unity (which corresponds to complete certainty), any further increase in the log-evidence of model k for
subject n has no further influence. This is because the model probabilities are distributed according to the approximate posterior Dirichlet Dir(r;α[0]+β)=q(r), where β[k] represents the conditional
expectation of the number of subjects whose data we believe were generated by model k and is simply the sum of the subject-specific posterior probabilities that model k generated their individual
data. In contrast, frequentist tests like t-tests do not show this bounded behaviour with regard to outliers. This is because the sample variance increases monotonically with the magnitude of the
outlier, leading to a monotonic decrease of the t-statistic. We demonstrated this difference between frequentist approaches and our VB method by two empirical examples with outliers.
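To make the bounded behaviour concrete, here is a minimal Python sketch of the iterative VB scheme described by Equations 9-14, written from the description in this paper rather than copied from any reference implementation (the convergence criterion is an illustrative assumption); log_evidence is an N-subjects-by-K-models array:

    import numpy as np
    from scipy.special import digamma

    def vb_dirichlet(log_evidence, alpha0=None, tol=1e-6, max_iter=10_000):
        """Posterior Dirichlet Dir(r; alpha0 + beta) from subject-wise log-evidences."""
        n, k = log_evidence.shape
        alpha0 = np.ones(k) if alpha0 is None else np.asarray(alpha0, float)
        alpha = alpha0.copy()
        for _ in range(max_iter):
            # g[n, k]: posterior belief that model k generated subject n's data
            log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
            log_u -= log_u.max(axis=1, keepdims=True)  # guard against overflow
            g = np.exp(log_u)
            g /= g.sum(axis=1, keepdims=True)          # bounded in [0, 1]: outliers saturate
            beta = g.sum(axis=0)   # expected number of subjects assigned to each model
            alpha_new = alpha0 + beta
            if np.abs(alpha_new - alpha).max() < tol:
                return alpha_new, g
            alpha = alpha_new
        return alpha, g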
Another important advantage of the method proposed here is that it can go beyond the selective comparison of specific models and enables one to assess the importance of changes along any specific
dimension of model space. This type of inference, which could be seen as a Bayesian analogue of testing for “main effects” in classical ANOVA, rests on comparing two (or more) subsets of models (i.e
., model subspaces). These partitions would typically reflect those components of model structure that one seeks inference about; e.g. whether a specific connection should be included in the model or
not, whether a particular connection is modulated by one experimental condition or another, or whether certain effects are linear or nonlinear. We used this approach to demonstrate that hemodynamic
models with nonlinear BOLD equations are superior to those with linear ones. This result is in accordance with previous studies that highlight the importance of nonlinearities in the BOLD signal (
Deneux & Faugeras 2006; Friston et al. 2000; Miller et al. 2001; Stephan et al. 2007c; Vazquez & Noll 1998; Wager et al. 2005). However, in these earlier studies, this conclusion was based on
comparisons of specific and single instances of linear and nonlinear hemodynamic models.
The inferential advance achieved by the present method is that arbitrarily large sets of models can be considered together, allowing one to integrate out uncertainty over any aspect of model
structure, other than the one of interest.
At first glance, it may appear surprising that the hierarchical model described above has been introduced as a generative model for the data y, given that its inversion does not need the data but the model evidence, p(y | m). This apparent contradiction could be resolved by noting that the log-evidence is a function of the data and represents a sufficient ‘summary statistic’. To generate data, one would need to introduce the model parameters θ[k] to the graphical model shown in Figure 1B,C. In the context of DCM, for example, once one has drawn a model k from the multinomial distribution for a specific subject n (i.e., generated a label m[nk] = 1), one could generate fMRI time-series by drawing model parameters θ[k] from their prior distributions and adding some observation error. However, because the model evidence p(y | m) results from integrating out the influence of the parameters θ[k] on the data y (see Equation 1), this component is unnecessary during inversion of the generative model.
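Purely for illustration, the generative direction of this hierarchy can be sketched in a few lines of Python (the prior alpha0 and the number of subjects are placeholders; generating the fMRI time-series themselves is omitted):

    import numpy as np

    def sample_model_labels(alpha0, n_subjects, seed=0):
        rng = np.random.default_rng(seed)
        r = rng.dirichlet(alpha0)   # population model frequencies r ~ Dir(alpha0)
        # one multinomial 'switch' m_n per subject; data would then be generated
        # from model m_n by drawing its parameters theta from their priors
        return rng.choice(len(alpha0), size=n_subjects, p=r)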
One property of the method proposed in this paper is that for each subject n our posterior beliefs about model k having generated their data sum to one over all models that are considered, that is
$\forall n: \sum_{k=1}^{K} g_{nk} = 1$ (c.f. Equation 11). In other words, our posterior belief about which model k is most likely to have generated the data for a given subject n is a function of the entire set of models
considered. This means that reducing or extending model space can change our inference about which model is most likely at the group level. Although this is a fairly trivial corollary, it should not
be forgotten when using this method in practice. In short, one should infer the most likely model by comparing the entire set of plausible models at once, instead of selectively analysing subparts of
model space.
To our knowledge, there has been relatively little work on group level methods for Bayesian model comparison so far. In addition to the GBF (Stephan et al. 2007b), we had previously suggested a
metric called the “positive evidence ratio” (PER; Stephan et al. 2007b, 2007c). Based on the conventional definition of “positive evidence” as a Bayes factor larger than three (Kass & Raftery 1995),
the PER is simply the number of subjects where there is positive (or stronger) evidence for model 1 divided by the number of subjects with positive (or stronger) evidence for model 2. While the PER
is insensitive to outliers, it is also insensitive to the magnitude of the differences across subjects. More importantly, however, it is only a descriptive index that does not allow for probabilistic
inference in a straightforward manner. In the approach described in this paper, the sufficient statistics for the model frequencies are the posterior estimates of the Dirichlet parameters (α). When
the differences in model evidences are very strong, these simply boil down to the number of subjects with positive (and more) evidence in favour of a particular model. In that case where for each
subject there is one highly superior model, the expected model frequencies become identical to the PER. From this perspective, the present approach can be considered a (probabilistic) generalisation
of the PER.
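For completeness, the PER as defined above is trivial to compute from the same log-evidences. In the two-model case, and with the convention that a Bayes factor above three counts as positive evidence, a sketch could read:

    import numpy as np

    def positive_evidence_ratio(log_evidence):
        """log_evidence: N x 2 array of log-evidences for models 1 and 2."""
        log_bf = log_evidence[:, 0] - log_evidence[:, 1]  # subject-wise log Bayes factors
        n_for_1 = int((log_bf > np.log(3)).sum())   # positive evidence for model 1
        n_for_2 = int((log_bf < -np.log(3)).sum())  # positive evidence for model 2
        return n_for_1 / n_for_2 if n_for_2 else float('inf')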
The only other work on group level methods for Bayesian model comparison that we are aware of is a recent paper by Li et al. (2008) who suggested a “group-level BIC score”. This score is derived by
summing the BIC for each model across subjects. As explained earlier in this paper, the BIC is a well-known approximation to the log-evidence (Schwarz 1978). The group-level BIC score by Li et al.
(2008) thus approximates the sum of log-evidences and simply corresponds to the log GBF. Effectively, the analysis by Li et al. (2008) thus used a fixed effects analysis across models that is
formally identical to that used in reports of DCM studies (e.g. Acs & Greenlee 2008; Allen et al. 2008; Grol et al. 2007; Heim et al. 2008; Kumar et al. 2007; Smith et al. 2006; Stephan et al. 2007a,
b; Summerfield & Koechlin 2008).
Finally, it should be noted that a random effects model selection approach is not necessarily preferable to a fixed effects approach. The choice between fixed and random effects BMS depends on the
specific scientific question addressed. In the context of basic mechanisms that are unlikely to differ across subjects, the conventional GBF is both sufficient and appropriate. For example, it is
unlikely that subjects differ with regard to basic physiological mechanisms such as the involvement of sodium ion channels in action potential generation or the presence of certain types of
connections in the brain. In this context, it is perfectly tenable to assume that all subjects generate data under the same model; and the data from all subjects can be pooled to select this model in
the usual way. In contrast, whenever subjects can exhibit different models or functional architectures, the random effects BMS technique presented in this paper is a more appropriate method. For
example, there is evidence that many higher cognitive functions can rely on more than one neurobiological system (Price & Friston 2002). Also, it is likely that in some mental diseases, e.g.
schizophrenia, patients with identical symptoms show heterogeneity with regard to the pathophysiological processes involved (Stephan et al. 2006).
In summary, in contrast to the GBF and other established approaches for group-level model comparison, the approach suggested in this paper rests on a hierarchical model for multi-subject data that
accommodates random effects at the between-subject level (Figure 1) and thus provides a generic framework for hypothesis testing. We expect this method to be a useful tool for group studies, not only
in the context of dynamic causal modelling, but also for a range of other modelling endeavours; for example, comparing different source reconstruction methods for EEG/MEG at the group level (Henson
et al. 2007; Litvak & Friston 2008; Mattout et al. 2007), or selecting among competing computational models of learning and decision-making, given data from a group of subjects (Brodersen et al. 2008
; Hampton et al. 2006).
This work was funded by the Wellcome Trust (KES, WDP, RJM, KJF) and the University Research Priority Program “Foundations of Human Social Behaviour” at the University of Zurich (KES). JD is funded by a
Marie Curie Fellowship. We are very grateful to Marcia Bennett for helping prepare this manuscript, to the FIL Methods Group, particularly Justin Chumbley, for useful discussions and to Jon Roiser
and Dominik Bach for helpful comments on practical applications. Finally, we would like to thank the two anonymous reviewers for their constructive comments which have greatly helped to improve this manuscript.
Appendix A: Approximations to the log model evidence
With the exception of some special cases (e.g., linear models), the integral expression for the model evidence (Equation 1) is analytically intractable and numerically difficult to compute. Under
these circumstances, people generally adopt a bound approach where, instead of evaluating the integral above, one optimises a bound on the integral using iterative sampling or analytic techniques.
The most common approach of the latter kind is variational Bayes. In this framework, one posits an approximating conditional or posterior density on the unknown parameters, q(θ), and optimises this density with respect to a free-energy bound, F, on the log-evidence:^6

$F = \ln p(y \mid m) - KL\left[\,q(\theta);\ p(\theta \mid y, m)\,\right]$ (A.1)
Because of its relation to variational calculus and Gibb’s free-energy in statistical physics, this free-energy bound F is often referred to as the “negative free-energy” or “variational free-energy”
(Friston et al. 2007; MacKay 2003; Neal & Hinton 1998). Its second term is the Kullback-Leibler (KL) divergence (Kullback & Leibler 1951) between the approximating posterior density q(θ) and the true posterior p(θ | y,m), which is always positive (or zero when q(θ) becomes identical to p(θ | y,m)). By iterative optimisation, the negative free-energy F is made an increasingly tighter lower bound on the desired log-evidence, ln p(y | m); as a consequence, the KL divergence between the approximating and true posterior is minimised. There are a number of approximations that are used when specifying the form of q(θ). These include the ubiquitous mean-field approximation, where various sets of unknown parameters are assumed to be independent, so that the conditional density can be
factorised. A common example here would be a bipartition into the regression coefficients of a general linear model and the parameters controlling random effects or error variance. Another common
approximation within the mean-field framework is to assume that the conditional density is multivariate Gaussian. This is also known as the Laplace approximation, a full treatment of which can be
found in Friston et al. (2007).
For any approximation to the conditional density, the free-energy bound on the log-evidence can be re-written as a mixture of accuracy and complexity:

$F = \left\langle \ln p(y \mid \theta, m) \right\rangle_{q} - KL\left[\,q(\theta);\ p(\theta \mid m)\,\right]$ (A.2)
The accuracy (first term) is simply the log-likelihood of the data expected under the conditional density. The complexity (second term) is the Kullback-Leibler divergence between the approximating
posterior and prior density. In other words, it reflects the amount of information obtained about the model parameters, from the data. Clearly, model complexity will increase with the number of
parameters (provided that they can be estimated precisely and that they diverge from their prior values). However, model complexity depends on factors other than the mere number of parameters, e.g.
how much these parameters are dependent on each other, both a priori and a posteriori. This is seen easily under the Laplace approximation, i.e. assuming that the conditional density is multivariate
Gaussian. In this case, the complexity can be written as follows (see the Appendix of Penny et al. 2004):

$\text{Complexity} = \frac{1}{2}\ln|C_{\theta}| - \frac{1}{2}\ln|C_{\theta|y}| + \frac{1}{2}\left(\mu_{\theta|y}-\mu_{\theta}\right)^{T} C_{\theta}^{-1} \left(\mu_{\theta|y}-\mu_{\theta}\right)$ (A.3)

Here, $|C_{\theta}|$ and $|C_{\theta|y}|$ are the determinants of the prior and posterior covariance matrices and $\mu_{\theta|y}$ and $\mu_{\theta}$ are the posterior and prior means, respectively. The first term shows that the penalty
conveyed by model complexity increases the more independent the parameters are a priori;^7 this is equivalent to saying that the penalty increases with the effective degrees of freedom of the model.
Conversely, additional parameters whose effects are redundant in relation to existing parameters do not increase model complexity. The second term says that complexity decreases with the degree of
independence that the parameters have a posteriori. This accords with the general notion that the parameter estimates of a good model should be as precise and uncorrelated as possible. The final term
shows that the complexity increases with the distance between the prior and posterior means. In other words, model goodness decreases if one makes bad assumptions about the parameter values a priori
(i.e., using suboptimal priors), thus forcing the posterior estimates to diverge markedly from the prior means.
In addition to the free-energy bound approximation, there are two other commonly used approximations to the log-evidence, which appeal to the behaviour of the complexity term as the number of
observations becomes infinite. We will call these limit-approximations. These include the AIC and BIC (see Penny et al. 2004). The key difference between the free-energy bound and these limit
approximations is that the latter assume a much simpler approximation to the complexity. Under Gaussian assumptions about the error:

$\mathrm{AIC} = \text{accuracy}(m) - p$
$\mathrm{BIC} = \text{accuracy}(m) - \frac{p}{2}\ln n$ (A.4)

It can be seen that the AIC and BIC approximate the complexity with the number of parameters, p, or with p scaled by half the log of the number of observations, n. These can be useful
approximations when it is difficult to invert the model or optimise the free-energy bound, because one only needs to compute the accuracy or fit of the model to provide an estimate of the
log-evidence. However, comparing the complexity terms in these expressions to Equation A.3, shows that both the AIC and BIC will fail in various situations. An obvious example is redundant
parameterisation; the true complexity will not change when we add a parameter whose effect is identical to another parameter in measurement space. While the free-energy bound would take this
redundancy into account, retaining the same complexity, the AIC and BIC approximations would indicate that complexity has increased. In practice, many models show partial dependencies amongst
parameters, meaning that AIC and BIC routinely over-estimate the effect that adding or removing parameters has on model complexity.
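A small numerical illustration of the point about redundant parameters (all covariances here are made up): if an added parameter's posterior simply equals its prior, the complexity of Equation A.3 is unchanged, whereas the AIC and BIC penalties grow.

    import numpy as np

    def laplace_complexity(C_prior, C_post, mu_prior, mu_post):
        d = mu_post - mu_prior
        return (0.5 * np.log(np.linalg.det(C_prior))
                - 0.5 * np.log(np.linalg.det(C_post))
                + 0.5 * d @ np.linalg.solve(C_prior, d))

    # two informative parameters
    c2 = laplace_complexity(np.eye(2), 0.1 * np.eye(2),
                            np.zeros(2), np.array([0.5, -0.3]))
    # add a redundant third parameter whose posterior stays at its prior
    c3 = laplace_complexity(np.eye(3), np.diag([0.1, 0.1, 1.0]),
                            np.zeros(3), np.array([0.5, -0.3, 0.0]))
    print(np.isclose(c2, c3))  # True: the Laplace complexity is unchanged
    # the AIC penalty would nevertheless rise from 2 to 3, and the BIC penalty
    # from ln(n) to 1.5 * ln(n)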
Appendix B: Sampling approach to estimating the Dirichlet Parameters
In this appendix, we introduce a sampling procedure that provides an approximation to the negative free energy F(y,α) ≤ ln p(y | α) which is independent from the VB approach described in the main
text. This sampling procedure can be used to demonstrate the correctness of the proposed VB procedure by verifying that the algorithm described by Equation 14 provides an accurate solution for the
variational energies in the mean-field approximation of Equation 8. In this context, it should be noted that we are assuming that the exact posterior p(r | y) can be adequately approximated by a
Dirichlet density q(r); therefore, the procedure proposed in this appendix samples from the approximate posterior q(r), not from the exact posterior p(r | y).
We seek the posterior density on the multinomial parameters r = [r[1],...,r[K]] that generate the switches or indicator variables, m[nk], encoding the n-th subject’s model; i.e., p(m[nk]=1) = r[k]. To simplify things, we will assume an approximating form, q(r;α), for this density, with sufficient statistics α. Specifically, we assume a Dirichlet density

$q(r;\alpha) = Dir(r;\alpha) = \frac{1}{Z(\alpha)} \prod_{k=1}^{K} r_{k}^{\alpha_{k}-1}$ (B.1)

where the expected multinomial parameters (i.e., the conditional expectation that the k-th model will be selected at random) are

$\langle r_{k} \rangle = \alpha_{k} \Big/ \sum_{j=1}^{K} \alpha_{j}$ (B.2)

Note that a Dirichlet form ensures that $\sum_{k=1}^{K} r_{k} = 1$. The normalising or partition coefficient in B.1 is

$Z(\alpha) = \prod_{k=1}^{K} \Gamma(\alpha_{k}) \Big/ \Gamma\!\left(\sum_{k=1}^{K} \alpha_{k}\right)$ (B.3)
We can now construct a free-energy bound in the usual way, assuming Dirichlet priors α[0] (which would usually be α[0] = [1,...,1] unless one had prior beliefs about which model is more likely to be found in the population):

$F(y,\alpha) = \left\langle \ln p(y, r; \alpha_{0}) - \ln q(r;\alpha) \right\rangle_{q}$ (B.4)

This can be decomposed into three terms:

$F(y,\alpha) = \left\langle \ln p(y \mid r) \right\rangle_{q} + \left\langle \ln p(r; \alpha_{0}) \right\rangle_{q} - \left\langle \ln q(r;\alpha) \right\rangle_{q}$ (B.5)
The last two terms only depend on the priors α[0k] and the parameters α of the Dirichlet and can thus be computed directly. The first term can be computed numerically by drawing a large number of
samples from q(r;α). In this paper, we gridded the possible range for values of α[k], i.e. [1 ... K+1], using a bin size of 0.1, and then drew 1,000 samples per bin, exploiting a relationship between Gamma and Dirichlet distributions described by Ferguson (1973). Given those samples, the Dirichlet parameters are those that maximise F:

$\hat{\alpha} = \arg\max_{\alpha} F(y,\alpha)$ (B.6)
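The Gamma-Dirichlet relationship mentioned above is straightforward to exploit in code; a toy Python version (modern numerical libraries also provide Dirichlet sampling directly) is:

    import numpy as np

    def dirichlet_via_gamma(alpha, seed=0):
        rng = np.random.default_rng(seed)
        g = rng.gamma(shape=np.asarray(alpha))  # independent Gamma(alpha_k, 1) draws
        return g / g.sum()                      # the normalised vector is Dir(alpha)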
As a final note, we would like to point out that one could also use Jensen’s inequality to simplify the first term in B.5. This effectively provides a lower-bound on a lower-bound. Given the priors, α[0], and the log-evidences ln p(y[n] | m[nk]=1) for each subject and model, the resulting bound could be used as an alternative method to estimate the Dirichlet parameters α using conventional nonlinear optimisation. In practice, however, we have found the VB method described in the main text to be superior.
^1Due to the monotonic nature of the logarithmic function, model comparisons yield equivalent results regardless whether one maximises the model evidence or the log-evidence. Since the latter is
numerically easier, it is usually the preferred metric.
^2See Appendix B in Bishop (2006) concerning the use of the digamma function in Equation 10.
^3Note that this choice of Dirichlet prior is a “flat” prior, assigning uniform probabilities to all models. In contrast, a Dirichlet prior with elements below unity results in a highly concave
probability density that concentrates the probability mass around zero and one, respectively.
^4For the special case of “drawing” a single “sample” (model), the multinomial distribution of models reduces to p(m[nk]=1 | r)=r[k]. Therefore, for any given subject, r[k] is the probability that the k-th model generated the subject’s data.
^5The coupling parameters of all endogenous connections were set to 0.1 s^-1, except for the inhibitory self-connections whose strengths were set to -1 s^-1. Furthermore, the strengths of all
modulatory and driving inputs were set to 0.3 s^-1. The input functions were the same as in the empirical dataset described above.
^6Because of the monotonic nature of the logarithm, one can maximise the model evidence or the log-evidence; the latter, however, is numerically more convenient to deal with. Please note that for
simplicity and clarity we have removed constant terms from the definition of all approximations to the log-evidence discussed in this paper.
^7It is helpful to note that the determinant of a covariance matrix can be treated as a measure of the volume spanned by a set of vectors (Woodruff 2005). This volume increases with the degree of
independence amongst the vectors.
Software note
The method described in this paper is freely available to the community as part of the open-source software package Statistical Parametric Mapping (SPM8; http://www.fil.ion.ucl.ac.uk/spm).
• Acs F, Greenlee MW. Connectivity modulation of early visual processing areas during covert and overt tracking tasks. NeuroImage. 2008;41:380–388.
• Akaike H. A new look at the statistical model identification. IEEE Trans. Automatic Control. 1974;19:716–723.
• Allen P, Mechelli A, Stephan KE, Day F, Dalton J, Williams S, McGuire PK. Fronto-temporal interactions during overt verbal initiation and suppression. J. Cogn. Neurosci. 2008;20:1656–1669.
• Bishop CM. Pattern recognition and machine learning. Springer; Berlin: 2006.
• Brodersen KH, Penny WD, Harrison LM, Daunizeau J, Ruff CC, Duzel E, Friston KJ, Stephan KE. Integrated Bayesian models of learning and decision making for saccadic eye movements. Neural Netw. 2008;21:1247–1260.
• Deneux T, Faugeras O. Using nonlinear models in fMRI data analysis: Model selection and activation detection. NeuroImage. 2006;32:1669–1689.
• Diedrichsen J, Shadmehr R. Detecting and adjusting for artifacts in fMRI time series data. NeuroImage. 2005;27:624–634.
• Ferguson TS. A Bayesian analysis of some nonparametric problems. Ann. Stat. 1973;1:209–230.
• Friston KJ, Mechelli A, Turner R, Price CJ. Nonlinear responses in fMRI: the Balloon model, Volterra kernels, and other hemodynamics. NeuroImage. 2000;12:466–477.
• Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;19:1273–1302.
• Friston KJ, Mattout J, Trujillo-Barreto N, Ashburner J, Penny WD. Variational free-energy and the Laplace approximation. NeuroImage. 2007;34:220–234.
• Garrido MI, Kilner JM, Kiebel SJ, Stephan KE, Friston KJ. Dynamic causal modelling of evoked potentials: a reproducibility study. NeuroImage. 2007;36:571–580.
• Garrido MI, Friston KJ, Kiebel SJ, Stephan KE, Baldeweg T, Kilner JM. The functional anatomy of the MMN: A DCM study of the roving paradigm. NeuroImage. 2008;42:936–944.
• Grol MJ, Majdandzić J, Stephan KE, Verhagen L, Dijkerman HC, Bekkering H, Verstraten FA, Toni I. Parieto-frontal connectivity during visually guided grasping. J. Neurosci. 2007;27:11877–11887.
• Hampton AN, Bossaerts P, O’Doherty JP. The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans. J. Neurosci. 2006;26:8360–8367.
• Heim S, Eickhoff SB, Ischebeck AK, Friederici AD, Stephan KE, Amunts K. Effective connectivity of the left BA 44, BA 45, and inferior temporal gyrus during lexical and phonological decisions identified with DCM. Hum. Brain Mapp. 30:392–402.
• Henson RN, Mattout J, Singh KD, Barnes GR, Hillebrand A, Friston KJ. Population-level inferences for distributed MEG source localization under multiple constraints: application to face-evoked fields. NeuroImage. 2007;38:422–438.
• Kass RE, Raftery AE. Bayes factors. J. Am. Stat. Assoc. 1995;90:773–795.
• Kullback S, Leibler RA. On information and sufficiency. Ann. Math. Stat. 1951;22:79–86.
• Kumar S, Stephan KE, Warren JD, Friston KJ, Griffiths TD. Hierarchical processing of auditory objects in humans. PLoS Comput. Biol. 2007;3:e100.
• Leff AP, Schofield AM, Stephan KE, Crinion JT, Friston KJ, Price CJ. The cortical dynamics of intelligible speech. J. Neurosci. 2008;28:13209–13215.
• Li J, Wang J, Palmer SJ, McKeown MJ. Dynamic Bayesian network modelling of fMRI: A comparison of group-analysis methods. NeuroImage. 2008;41:398–407.
• Litvak V, Friston K. Electromagnetic source reconstruction for group studies. NeuroImage. 2008;42:1490–1498.
• MacKay DJC. Information theory, inference, and learning algorithms. Cambridge University Press; Cambridge: 2003.
• Mattout J, Henson RN, Friston KJ. Canonical Source Reconstruction for MEG. Comput. Intell. Neurosci. 2007:67613.
• Miller KL, Luh WM, Liu TT, Martinez A, Obata T, Wong EC, Frank LR, Buxton RB. Nonlinear temporal dynamics of the cerebral blood flow response. Hum. Brain Mapp. 2001;13:1–12.
• Neal RM, Hinton GE. A view of the EM algorithm that justifies incremental, sparse and other variants. In: Jordan MI, editor. Learning in Graphical Models. Kluwer Academic Publishers; Dordrecht: 1998.
• Neyman J, Pearson E. On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. A. 1933;231:289–337.
• Penny WD, Stephan KE, Mechelli A, Friston KJ. Comparing dynamic causal models. NeuroImage. 2004;22:1157–1172.
• Pitt MA, Myung IJ. When a good fit can be bad. Trends Cogn. Sci. 2002;6:421–425.
• Price CJ, Friston KJ. Degeneracy and cognitive anatomy. Trends Cogn. Sci. 2002;6:416–421.
• Schwarz G. Estimating the dimension of a model. Ann. Stat. 1978;6:461–464.
• Smith AP, Stephan KE, Rugg MD, Dolan RJ. Task and content modulate amygdala-hippocampal connectivity in emotional retrieval. Neuron. 2006;49:631–638.
• Stephan KE, Baldeweg T, Friston KJ. Synaptic plasticity and dysconnection in schizophrenia. Biol. Psychiatry. 2006;59:929–939.
• Stephan KE, Harrison LM, Kiebel SJ, David O, Penny WD, Friston KJ. Dynamic causal models of neural system dynamics: current state and future extensions. J. Biosci. 2007a;32:129–144.
• Stephan KE, Marshall JC, Penny WD, Friston KJ, Fink GR. Interhemispheric integration of visual processing during task-driven lateralization. J. Neurosci. 2007b;27:3512–3522.
• Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ. Comparing hemodynamic models with DCM. NeuroImage. 2007c;38:387–401.
• Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HEM, Breakspear M, Friston KJ. Nonlinear dynamic causal models for fMRI. NeuroImage. 2008;42:649–662.
• Summerfield C, Koechlin E. A neural representation of prior information during perceptual inference. Neuron. 2008;59:336–347.
• Vazquez AL, Noll DC. Nonlinear aspects of the BOLD response in functional MRI. NeuroImage. 1998;7:108–118.
• Wager TD, Vazquez A, Hernandez L, Noll DC. Accounting for nonlinear bold effects in fMRI: parameter estimates and a model for prediction in rapid event-related studies. NeuroImage. 2005;25:206–218.
• Wager TD, Keller MC, Lacey SC, Jonides J. Increased sensitivity in neuroimaging analyses using robust regression. NeuroImage. 2005;26:99–113.
• Woodruff DL. General purpose metrics for solution variety. In: Rego C, Alidaee B, editors. Metaheuristic optimization via memory and evolution: tabu search and scatter search. Springer; Berlin: 2005.
Find a Boulder, CO Calculus Tutor
...If you are looking for a tutor to help really turn things around for the semester and can commit to 4+ weeks of at minimum 1-2 hours a week, then I'm your tutor, and I'll help you reach and
exceed your goals. I graduated magna cum laude and Phi Beta Kappa from college with a degree in economics. I excelled generally in all subjects in K-12 through college.
41 Subjects: including calculus, reading, Spanish, English
...I'm experienced in teaching all levels of math and English to students of various abilities. My lesson plans target strengthening weaknesses while keeping the material interesting and
engaging. I assign homework, unless otherwise instructed, and offer periodic progress reports.
30 Subjects: including calculus, reading, writing, statistics
...I am quite proficient with the key concepts of Java, fairly knowledgeable about .swing and other commonly used libraries, and have delved a little bit into using Java3D for graphics rendering.
My experience is ideally suited to teaching high school students who are taking 1st or 2nd AP computer ...
7 Subjects: including calculus, geometry, algebra 2, trigonometry
...I can help with that too! Within physics, I can tutor undergraduate level physics courses including: college physics (algebra based), university physics (calculus based), quantum (using
Griffiths), mechanics (using Taylor), electricity and magnetism (using Griffiths), astronomy, and math methods. Also, I am very familiar with the computer language IDL, and I am experienced with
18 Subjects: including calculus, physics, geometry, GRE
...Today I use Excel on a daily basis for data sorting and engineering reports. I have been building personal computers for the last 8-9 years. I am currently on my fifth home built computer and
have experience troubleshooting hardware and networking issues from Windows 7 back to Windows 98.
18 Subjects: including calculus, chemistry, geometry, algebra 1
Accommodative-convergence over accommodation (AC/A) ratio (in normal Indian subjects) Sen D K, Malik S - Indian J Ophthalmol
Year : 1972 | Volume : 20 | Issue : 4 | Page : 153-157
Accommodative-convergence over accommodation (AC/A) ratio (in normal Indian subjects)
DK Sen, S.R.K Malik
Department of Ophthalmology Maulana Azad Medical College and Associated Hospitals, New Delhi, India
Correspondence Address:
D K Sen
Department of Ophthalmology Maulana Azad Medical College and Associated Hospitals, New Delhi
PMID: 4671305
How to cite this article:
Sen D K, Malik S. Accommodative-convergence over accommodation (AC/A) ratio (in normal Indian subjects). Indian J Ophthalmol 1972;20:153-7
Accommodation and convergence are inter-related and they develop together so that a single clear image is appreciated. The ratio accommodative-convergence (AC) over accommodation (A) indicates the
relationship between the amount of convergence produced by a stimulus to accommodate and the amount of accommodation which produces that convergence. To see something clearly and singly at 1 metre,
1 metre angle of convergence between the two eyes and 1 dioptre of accommodation are exerted. This is known as 1 : 1 relationship. Clinically, however, it is easier to compare one person with
another by measuring convergence in prism dioptres (P.D.). Six prism dioptres (P.D.) = 1 metre angle when the interpupillary distance (I.P.D.) is 60 mm. Therefore, to see clear as a single image at
one metre, 6 P.D. of convergence and one dioptre of accommodation are being used. Two-thirds of the convergence that takes place (i.e. 4 prism dioptres) is accommodative, so the AC/A ratio becomes 4 : 1. This, however, is a theoretical concept. In practice the AC/A ratio has been found to be variable even in normal persons (OGLE^ [9]). The present study was carried out to find the range of values in
normal Indian subjects.
Under normal conditions, a stimulus to change the accommodation of the eye is accompanied by a change in the stimulus of convergence which is manifest as a change in the phoria of the eyes. The aim
of measurement of the ratio should be to obtain the relationship between accommodativeconvergence (AC) and accommodation (A) when proximal and fusional convergence are eliminated. Of the several
methods in use for the determination of this ratio the following two methods were preferred because of their simplicity and accuracy. In both methods the ratio is determined by measuring the lateral
deviation first without and then with the interposition of spherical lenses. Concave lenses stimulate accommodation and thereby bring about corresponding change in the accommodative-convergence
whereas convex lenses relax accommodation and cause a change in accommodative-convergence. The power of the spherical lenses is gradually increased to find out whether accommodative-convergence
response is the same to each unit of accommodative stimulus. To start with we interposed both convex and concave spherical lenses for changing the accommodative stimulus but with convex lenses most
of the persons experienced difficulty in relaxing accommodation proportionately. Response with convex lenses was consequently very irregular. It was also not possible to use concave lenses higher
than 4.0 D as the persons complained of lot of discomfort and eye strain to keep the retinal image clear.
With any method it is of utmost importance to ensure that the person sees the target very clearly so that correct accommodation is exerted.
First Method: With the help of Maddox wing and Maddox rod.
The patient is asked to look at a fixed distance and the lateral deviation is measured firstly without any lens apart from the person's own correcting glasses if any and then with the interposition
of lenses of identical power in front of each eye. The ratio is found out from the formula

AC/A = (∆L − ∆O) / D

where AC = accommodative-convergence in prism dioptres; A = accommodation in dioptres; ∆L = deviation in prism dioptres when lenses are put; ∆O = original deviation in prism dioptres; D = power of lens used in dioptres.
fusional convergence also does not come into the picture.
The procedure was carried out at a distance of 33 cm with the help of Maddox wing and at a distance of 1 metre with the help of Maddox rod. For each distance the first reading in prism dioptres was
taken without any interposition of lenses and subsequent readings were taken after the interposition of -1.0 D, -2.0 D, -3.0 D and -4.0 D in succession.
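By way of a purely hypothetical illustration of the calculation: if the original deviation at 33 cm is 2 prism dioptres of exophoria (∆O = -2 P.D.) and interposition of -3.0 D lenses changes it to 4 prism dioptres of esophoria (∆L = +4 P.D.), then AC/A = (∆L − ∆O)/D = (4 - (-2))/3 = 2.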
Second Method: With the help of Synoptophore.
The patient was asked to wear his correcting glasses if any. Slide for measuring angle kappa was placed before one eye. A black vertical line was presented to the other eye. The patient was asked to
bisect the zero by the black vertical line. Subjective angle was then noted in prism dioptres. The test was then repeated with the introduction of concave spherical lenses, - 1.0 D, - 2.OD, -3.0 D,
and -4.0 D in succession and corresponding changes in the subjective angle reading were noted. Though the amount of proximal convergence is not completely eliminated in this method, the error
introduced is negligible.
Fifty male and fifty female patients of various age groups ranging from six to sixty one years [Table - 1],[Table - 2] having normal visual acuity without or with correcting glasses were selected
from eye O.P.D. of Maulana Azad Medical College and Associated Hospitals for this study. These patients had no ocular symptoms and reported to the eye O.P.D. for routine ophthalmic check-up.
On analysing the values statistically the ratio was found to be the same for the same individual at different viewing distances i.e. at 33 cm, 1 metre and infinity. It was observed that a unit change
in the stimulus to accommodation resulted in a corresponding specific amount of change in the accommodative-convergence (i.e. the relationship between AC and A was linear) in 88 per cent of the
cases with our first method and in 93 per cent of the cases with the second method. Persons with decreased accommodative power due to presbyopia also showed this linear relationship.
From [Table - 1],[Table - 2] it is evident that AC/A ratio is independent of age and varies greatly from individual to individual. The maximum AC/A ratio recorded in this series was 4 and the
minimum 0.5. The mean value was 2.28 (average of 100 cases, both sexes). Out of 100 cases, in eleven the ratio was below 1.25, in 32 the ratio was found to be between 1.25 and 2 and in 45 cases
between 2.25 and 3. Only 12 cases had the range between 3.25 and 4 [Table - 3]. It is evident from [Table - 1],[Table - 2] that the ratio does not differ much in the two sexes, the mean values being
2.37 ± 0.009 in males and 2.19 ± 0.11 in females. However, it appears as though it is slightly on the lower side in females.
Twenty subjects belonging to the various age groups were selected at random and AC/A ratio was determined at intervals of 2 months over a period of 6 months. On statistical analysis of the data
there was hardly any change of the values found in individual cases.
AC/A ratio can be measured subjectively and objectively. The objective method can be used with advantage in children and in persons with low intelligence, but the technique is not so simple.
Subjective methods as described are simpler and quite accurate for clinical purposes.
The ratio is generally believed to be inborn and thought to remain constant throughout life. MORGAN AND PETERS,^ [8] MARTENS AND OGLE^ [7] and ALPERN AND LARSON^ [2] observed that reduction of the
amplitude of accommodation associated with increased age is not associated with any significant change in the AC/A ratio. However, DAVIS AND JOBE^ [3] observed that AC/A ratio was almost constant
till the age of 38 years; it then gradually increased till the age of 45 years and then it decreased. ALPERN AND HIRSCH^ [1] reported a gradual decrease of the ratio after the age of 25 years. Our
study agrees with the observation of ALPERN AND LARSON^ [2] and MORGAN AND PETERS^ [8] and indicate that AC/A ratio is independent of age and sex and varies greatly from individual to individual.
Repeated measurements of the ratio on 20 individuals over a period of six months points that AC/A ratio of a given normal individual is to a large extent a stable quantity which is at variance with
the observation of MANAS^ [6] who found that AC/A ratio was unstable. That this ratio is determined by heredity rather than by environmental factors is suggested by HOFSTETTER'S^ [4] study of the AC
/A ratios of 30 pairs of identical twins. Because of this hereditary background the ratio has also been found to be variable from country to country. TAIT^ [10] found the highest and lowest ratios to
be 5.4 and 1.0 respectively. Of his 35 cases, in 29 the ratio ranged between 2.1 and 5.0. HUGHES^ [5] considered the ratio between 3 and 5 as normal. WYBAR^ [12] commented that as a general rule the
ratio is between 3 and 5 in normal people with an average figure of about 4. In our series it has been found to vary between 0.5 and 4. In majority of the cases the ratio was between 2.25 and 3. It
appears that in India the AC/A ratio is comparatively on the lower side. This explains our common observation that most of the normal persons who are essentially orthophoric at a distance of 6 meters
do have some degree of exophoria at 33 cm.
The possibility of a non-linear response of AC to changes in the stimulus to A has been suggested by WESTHEIMER^ [11] principally on theoretical grounds. He observed that it is difficult to
understand how such a linearity is maintained in view of the complex nature of both the accommodative and the convergence processes. However, MARTENS AND OGLE^ [7] demonstrated the linear relationship in 92 per cent of normal individuals. We also found this linearity in the majority of cases. Observation of linearity also in presbyopic cases by MARTENS AND OGLE^ [7] as well as by us
supports the view that it is the stimulus to accommodation rather than the actual amount of accommodation that is important to bring about the accommodative convergence.
This ratio has a great bearing in squint. If it is abnormally high, excessive accommodative-convergence is exerted on accommodative effort even when the hypermetropia is of small degree. A high ratio
is, therefore, found in patients with accommodative squint of the convergence excess type. If this ratio is low, less accommodative convergence takes place on accommodation and the eyes will be
relatively divergent. Therefore, for proper assessment of the role of AC/A ratio in a given case of squint, it is essential to be well acquainted with the normal variation of this ratio in a given
AC/A ratio was determined in 100 normal Indian subjects belonging to both sexes and different age groups by gradient and synoptophore methods.
The values ranged from 0.5 to 4. The mean value was 2.28. In majority of cases the ratio was between 2.25 and 3 which is comparatively lower than that found in the Western population. The ratio was
found stable, linear in 88-93 percent of cases and independent of age or sex of a person.
1. Alpern, M. and Hirsch, M. J.: Age and the stimulus AC/A (unpublished). Cited in 2.
2. Alpern, M. and Larson, B. E.: Vergence and accommodation (Effect of Luminance Quantity on the Ac/A). Amer, J. Ophth. 49, 1140, (1960).
3. Davis, C. J. and Jobe, F. W.: Further studies on the AC/A as measured on the Orthorater. Amer. J. Optom. 34, 16 (1957).
4. Hofstetter, H. W.: Amer. J. Optom. Monograph 55, (1948).
5. Hughes, A.: AC/A ratio. Brit. J. Ophth. 51, 786. (1967).
6. Manas, L.: The Inconstancy of the A. C. A. Ratio. Amer. J. Optom. 32, 304 (1955).
7. Martens, T. G. and Ogle, K. N.: Observations on Accommodative convergence: Especially its nonlinear relationships. Amer. J. Ophth. 47 (pt. II), 455 (1959).
8. Morgan, M. W. Jr. and Peters, H. B. Accommodative convergence in presbyopia. Amer. J. Optom, 28, 3, (1951).
9. Ogle, K. N.: Symposium: Problems of refraction. The Accommodative convergence - Accommodation ratio and its relations to the correction of refractive error. Tr. Am. Acad Ophth. Otolaryng. 70, 322
10. Tait, E. F. Accommodative convergence. Amer J. Ophth. 34, 1093. (1951).
11. Westheimer, G.: The relationship between accommodation and accommodative convergence. Amer. J. Optom. 32, 206 (1955).
12. Wybar, K.: The significance of the accommodation - convergence and accommodation relationship (AC/A ratio) in concomitant convergence squint in childhood. Indian J. Orthop. and pleop. 3, 8.
Mplus Discussion >> Thresholds for Latent Class Analysis
Anonymous posted on Friday, September 05, 2003 - 11:28 am
I am new to latent class analysis and I'm trying to running a 2 class model with 1 dichotomous and 3 3-category variables. I'm not sure how to specify the starting values for the thresholds. I tried
putting in some numbers ranging from -2 to 2. However, I keep getting an error message that says: "IN THE OPTIMIZATION, ONE OR MORE LOGIT THRESHOLDS APPROACHED AND WERE SET AT THE EXTREME VALUES.
EXTREME VALUES ARE -15.000 AND 15.000." I've tried putting in different values, but have not been able to fix the problem.
I'd appreciate any suggestions about what might be the source of the problem and how to fix it.
Thank you!
Linda K. Muthen posted on Monday, September 08, 2003 - 6:34 pm
This is not a problem. It just means that in certain classes, certain items have either a probability of zero or one. This can help define the classes.
Jonathan Larson posted on Friday, June 07, 2013 - 8:14 am
We tried to change the reference class in a latent class analysis by entering starting values for the thresholds, but the reference class did not change. Even when we tried using extreme starting
values the reference class did not change. We only succeeded by constraining the thresholds with the @ symbol. Do you know why this might have happened?
Thank you very much for your help!
Linda K. Muthen posted on Friday, June 07, 2013 - 9:50 am
Do you have STARTS=0;
You may find the SVALUES option of the OUTPUT command useful. This gives the input with starting values, and you can simply change the class numbers.
Jonathan Larson posted on Tuesday, November 12, 2013 - 10:54 am
We had random starts, but using STARTS = 0 solved the problem. Thanks for your help!
Break a dowel to form a triangle
Date: 8 Mar 1995 17:04:44 -0500
From: Chianne Chen
Subject: question
Hi, I have a problem:
A wooden dowel is randomly broken in 2 places. What is the
probability that the 3 resulting fragments can be used to form
the sides of a triangle?
Date: 9 Mar 1995 17:55:17 -0500
From: Stephen Weimar
Subject: Re: question
I'd say it makes a difference how it's broken and what you mean
by random. For instance, is the stick broken once and then one
of two pieces is selected at random to be broken again ("pick one
and break again")? Or do you pick two spots on the dowel at
random where the breaks will be made (two-at-once)?
Let's look at the second situation because the first seems similar
but a little harder to calculate.
Two-at-once is like "pick one and break again" except that with
the latter we have a fifty percent chance of choosing the bigger
piece for the second break; whereas for two-at-once, since
the location of the second break is made with reference to the
whole stick, you have a greater than fifty percent chance of
making the next break in the bigger piece since the bigger piece
is more than fifty percent of the whole stick. So the probability
of "two-at-once" resulting in a triangle should be better than
that of "pick one and break again."
Now, what does it take for three sticks to be able to make a
triangle? Will any three sticks do? Once you express the
minimum conditions the broken pieces must satisfy in relation
to each other if they are to make a triangle, then you can make
3 statements, in this case, 3 inequalities to solve. Try to figure
this out before reading the "spoiler" below. When I set up my
equations, I let x be the distance from the left side of the dowel
to the first break. Let y be the distance from the left side to the
second break. Now we have two sets of inequalities. When y
is to the right of x, we have one set. When y is to the left of x,
we have another. But we can see that these two situations are
equally likely, in fact the probability in each case should be the
same, so once we have calculated one, the total will be easy to find.
"y to the right of x"
The lengths of the pieces are x, 1-y, y-x. So we can set up
three inequalities since any two sides added together have to
be longer than the third.
What happens if you set up these inequalities and solve them?
I'll leave it to you to write out the inequalities.
When doing problems such as this I like to make a 1 by 1 square,
the area of which represents the total probability (1). Let's say
the length of the unbroken stick is one unit. The horizontal side
of our square could represent the first break which is anywhere
from 0 to 1. The vertical side of our square will be the second
break, y.
If you graph the inequalities you get a triangular area occupying
the lower right hand half of the upper left quadrant. I don't know
if this picture comes out on your screen, you may have to change
fonts. The area of that triangle is equal to the probability of "y
to the right of x".
| /|
| / |
| / |
|/_ _|
|_ _ _ _ _ _
0 1/2 1
Don't forget about "y to the left of x".
Once you have completed this version of the problem you might
want to go back and see if you can set up the inequalities for
"pick one and break again". The idea is the same but the math
can be harder.
-- Dr. Steve | {"url":"http://mathforum.org/library/drmath/view/54754.html","timestamp":"2014-04-19T20:49:32Z","content_type":null,"content_length":"8564","record_id":"<urn:uuid:3536511f-8b67-483a-8381-f3e00e9b2d14>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
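
For readers who want to check the answer numerically, here is a short
Python simulation of the "two-at-once" model (a rough sketch; Python is
just a convenient stand-in for repeating the experiment many times). It
drops two uniform break points on a unit stick and tests the triangle
inequality; the frequency should approach 1/4:

    import random

    def estimate(trials=1_000_000):
        hits = 0
        for _ in range(trials):
            x, y = sorted((random.random(), random.random()))
            a, b, c = x, y - x, 1.0 - y   # the three fragment lengths
            # a triangle exists iff every piece is shorter than the
            # sum of the other two (equivalently, each piece < 1/2)
            if a + b > c and a + c > b and b + c > a:
                hits += 1
        return hits / trials

    print(estimate())   # prints something close to 0.25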
Linear Approximation Question (Solution included)
December 7th 2011, 04:10 PM
Linear Approximation Question (Solution included)
I don't understand how to do this at all and I don't get why the solutions do what they do.
1) Why do we need the partial derivatives with respect to x and y?
2) What's going on with the f(x+h,y+k)? It must be some kind of a definition. It seems familiar from the limit definition of a derivative but I can't connect the dots with what we're using it for
if that's what it is.
3) I also do not understand the fundamental idea of finding a linear approximation at (2,1) for approximating f(1.95, 1.08).
I REALLY need help, and any would be greatly appreciated!
Thanks in advance! | {"url":"http://mathhelpforum.com/calculus/193717-linear-approximation-question-solution-included-print.html","timestamp":"2014-04-17T14:57:15Z","content_type":null,"content_length":"3744","record_id":"<urn:uuid:919aa5b2-2c1c-402e-a63b-17e601bd6f49>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
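
In case it helps to see the idea in one place: the linear approximation at (a, b) is the tangent-plane formula L(x, y) = f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b), which is exactly why the partial derivatives are needed (question 1); and f(x + h, y + k) ≈ f(x, y) + f_x·h + f_y·k (question 2) is the same statement with h = x - a and k = y - b. A small Python sketch, using a made-up f since the worksheet's actual function isn't shown here:

    import math

    def f(x, y):                     # hypothetical example, not the problem's f
        return math.sqrt(x**2 + y**3)

    def linear_approx(f, a, b, x, y, eps=1e-6):
        fx = (f(a + eps, b) - f(a - eps, b)) / (2 * eps)   # numeric partials at (a, b)
        fy = (f(a, b + eps) - f(a, b - eps)) / (2 * eps)
        return f(a, b) + fx * (x - a) + fy * (y - b)

    print(linear_approx(f, 2, 1, 1.95, 1.08))   # approximation near (2, 1)
    print(f(1.95, 1.08))                        # exact value, for comparison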
15 projects tagged "English"
BigAl is a platform-independent program for calculating really big numbers. It supports not only standard arithmetic, but also calculations with numbers stored in files, exact period determination, continued calculation with predefined factorials, Fibonacci numbers with customized seeds, Lucas numbers, factorization, the Ackermann function, nth roots, random number generation, and more.
Find a Brookside Village, TX Math Tutor
...I possess a special talent for making learning fun by utilizing creative ways in which each student can relate. I also have experience tutoring for the Texas State Test (TAKS & STAAR)
resulting in 'Recognized' or 'Advanced' ratings in Reading and Science, and outstanding achievement in Mathematics. ...
12 Subjects: including prealgebra, English, geometry, algebra 1
...I have also tutored high school students in various locations. At the Air Force Academy I had a reputation as the top calculus instructor. I have taught precalculus during the past two years and
have enjoyed success with the accomplishments of my students.
11 Subjects: including trigonometry, statistics, algebra 1, algebra 2
...I have been teaching/tutoring Algebra 2 for over 25 years. I use special techniques and "cute memorable sayings" to help students remember certain algebraic skills. I also point out possible
mistakes during explanations to help avoid them while doing homework.
6 Subjects: including algebra 1, algebra 2, geometry, precalculus
...As needed, we can reinforce pre-algebra concepts such as negative numbers, fractions, and calculator and computational skills. As a certified math teacher, I can help you with Algebra II,
Pre-Calculus, and College Algebra. We can learn about polynomials, real and complex numbers, and radical and rational functions.
30 Subjects: including statistics, Java, SQL, ADD/ADHD
...I studied Genetics, Developmental Biology, and Evolutionary Biology at the graduate level. With a Ph.D. in Biology, I think like a scientist and an educator automatically. You will not find a
more qualified Biology tutor.
8 Subjects: including algebra 1, biology, chemistry, prealgebra
Find a Bayside, NY Algebra 2 Tutor
...I focus on teaching the core concepts tested, as well as proven test-taking strategies and time management skills that help students finish quickly and accurately. I have tutored discrete math
for over 4 years, from basic concepts introduced at the elementary school level, to undergraduate cours...
34 Subjects: including algebra 2, English, GRE, reading
...Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard
College in '07.
26 Subjects: including algebra 2, calculus, physics, geometry
...I can help improve your child’s reading and spelling. I draw from a variety of programs, but I particularly like the Wilson Reading Method. Their Fundations Program works well with young
children and the Just Words Program is excellent for older children to adults.
39 Subjects: including algebra 2, reading, geometry, English
I am a professional tutor in all levels of Mathematics (including Elementary Math, Algebra, Calculus, Trigonometry, Geometry, Probability), Physics, and Turkish. I offer lessons to a wide range of student profiles, from elementary school to college level. My tutoring methods depend mostly on the student's profile.
25 Subjects: including algebra 2, calculus, statistics, logic
...As someone who is not a typical "math person", I can relate to those struggling to understand material - I get it. I am willing to travel to my students, but also can see my students at my
home. Basically, I'm here to help you.
18 Subjects: including algebra 2, chemistry, geometry, statistics | {"url":"http://www.purplemath.com/Bayside_NY_algebra_2_tutors.php","timestamp":"2014-04-17T10:51:10Z","content_type":null,"content_length":"23967","record_id":"<urn:uuid:b2f0dc03-67bb-4e34-a23d-d61dce57c1e0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Point Inclusion Tests
The first problem we will try to analyze is finding whether a 2D or 3D point lies within different types of bodies, from spheres to meshes of various types. This is an essential test for fields like
AI or collision detection. From grenades to wall-following, almost every interesting algorithm needs, at some point in time, to deal with this kind of issue.
The simplest point inclusion test involves testing whether a point is actually inside a sphere. A sphere can be considered a generalization of a point. It is actually a point with a radius. Thus,
given the sphere (as shown in Figure 22.1)
(X-Xc)^2 + (Y-Yc)^2 + (Z-Zc)^2 = R^2
Figure 22.1. Graphical representation of point-in-sphere test.
and the point P
P is inside the sphere if and only if
Inside = sqrt( (px-xc)^2 + (py-yc)^2 + (pz-zc)^2 ) < radius
Remember that square roots are expensive. Thus, an optimization could be
Inside= ( (px-xc)^2 + (py-yc)^2 + (pz-zc)^2 ) < radius^2
as well as storing the radius squared to speed up computations.
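In code, the squared-distance form is a one-liner; a minimal sketch in Python (the names are illustrative, not from any particular engine):

    def point_in_sphere(p, center, radius_sq):
        # radius_sq is the precomputed squared radius
        dx = p[0] - center[0]
        dy = p[1] - center[1]
        dz = p[2] - center[2]
        return dx*dx + dy*dy + dz*dz < radius_sq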
An axis-aligned bounding box (AABB) is a box whose support planes are aligned with the X, Y, and Z planes (see Figure 22.2). It is very popular for gross collision detection. An object is assigned an
AABB, and as a first step in the collision detection process, we test for collisions with the box, not the object, providing an almost trivial rejection case for most tests.
Figure 22.2. Graphical representation of an AABB.
Thus an AABB is defined by six planes, whose equations are x = xmin, x = xmax, y = ymin, y = ymax, z = zmin, and z = zmax.
The test is as follows:
bool inside(point p)
if (p.x>xmax) return false;
if (p.x<xmin) return false;
if (p.y>ymax) return false;
if (p.y<ymin) return false;
if (p.z>zmax) return false;
if (p.z<zmin) return false;
return true;
Point-in-Convex Polygon
We can compute whether a point is inside a convex polygon (see Figure 22.3) by looping through the edges of the polygon and making sure the point is always in the same hemispace with regard to the
edges. We start at the first vertex and compute the vector from the first to the second. Then, we build a new vector from the first vertex to the point we are testing and perform a cross product
between these two vectors. With the original vector and the result of the cross product, we can compute a plane that includes the edge. Then, we just need to test which hemispace of the plane the point is located in. Repeating this for each edge of the polygon and making sure all hemispace tests return the same sign, we detect whether the point is inside the polygon.
Figure 22.3. Convex polygon with all normals pointing inward.
Now, doing dot and cross products all over the place does not look very efficient. But all this can be precomputed. And after all, storing one plane per edge is just four floating-point values, which
is a reasonable memory footprint. Here is the pseudocode in full. Notice how I assume normals to the polygon's faces are looking inward.
While we haven't done a full cycle
Compute edge vector
Use edge vector and normal to compute "up" vector using a cross product
Use edge and up to compute a plane
If the plane value of point P is less than zero return false;
End while
Return true
Point-in-Polygon (Convex and Concave): Jordan Curve Theorem
Computing whether a point is inside a polygon in a general case is significantly harder than performing the test when assuming the polygon is convex. One of the main techniques used to do this
involves using the Jordan Curve Theorem, which states that a point is inside a 2D closed polygon if and only if the number of crossings from a ray emanating from the point in an arbitrary direction
and the edges of the polygon is odd. Think of a triangle, for example. If a point is inside the triangle, and you create a ray from that point in any direction, it will cross the triangle at exactly
one point.
The elegance of the Jordan Curve Theorem is that it only needs the polygon to be closed, because open polygons do not clearly specify the notion of "in" and "out." Notice, however, that the algorithm
works beautifully on convex, concave, or even malformed polygons as shown in Figure 22.4.
Figure 22.4. Jordan Curve Theorem with several different cases analyzed.
Thus, the algorithm can be described by the following pseudocode:
bool isinside(point p, polygon P)
choose an arbitrary direction... (1,0,0) is a good one
build ray r based on p and the direction
initialize count to zero
for each edge
test ray-segment
if crossed
increase count
end if
end for
return count is odd
So, the algorithm is O(n) in the number of edges, which is not too bad.
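As a concrete, runnable sketch of the same idea (Python here for brevity; the logic ports directly to C++), casting the ray in the +x direction:

    def point_in_polygon(p, poly):
        # poly: list of (x, y) vertices; Jordan curve crossing test
        px, py = p
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):           # edge straddles the ray's height
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:                 # crossing is to the right of p
                    inside = not inside          # odd/even toggle
        return inside

    # works on a concave polygon too
    poly = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
    print(point_in_polygon((1, 0.5), poly))   # True
    print(point_in_polygon((2, 3), poly))     # False (above the notch)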
Point-in-Convex Object
Testing whether a point is inside a convex object is relatively straightforward and can be easily considered an extension of the point-in-convex shape test explained earlier. The 2D test works by
checking the point against the line segments and making sure we are always in the same hemispace. In the 3D case, we will need to test for planes instead of line segments, but overall the approach is
the same. A 3D point lies inside a 3D convex object if and only if the point is located in the same hemispace regarding the support planes of all the triangles of the object. Here is the algorithm in pseudocode:
sign=sign of the point-plane test using the first plane of support
for each plane of support
if sign is different than the point-plane test sign for the next plane
return false
end for
return true
Notice how we can perform early-bird detection because the test will return false as soon as one plane stops following the rule.
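A minimal runnable version of this loop, written in Python and assuming the planes are stored with inward-pointing normals (so a consistent sign simply means n.p + d >= 0 for every plane):

    def point_in_convex(p, planes):
        # planes: list of (nx, ny, nz, d) with inward normals
        for nx, ny, nz, d in planes:
            if nx*p[0] + ny*p[1] + nz*p[2] + d < 0:
                return False          # early rejection
        return True

    # unit cube centered at the origin, six inward half-spaces
    cube = [( 1, 0, 0, 0.5), (-1, 0, 0, 0.5),
            ( 0, 1, 0, 0.5), ( 0, -1, 0, 0.5),
            ( 0, 0, 1, 0.5), ( 0, 0, -1, 0.5)]
    print(point_in_convex((0.2, -0.3, 0.1), cube))   # True
    print(point_in_convex((0.2, -0.7, 0.1), cube))   # False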
Point-in-convex object tests are very useful for collision detection. They can be efficiently performed on the convex hull of the objects we need to test. Their cost is O(number of planes), which can
be greatly improved by adding a point versus bounding sphere test to discard most cases beforehand. For those cases returning a positive, we can choose between further refining the solution (if the
object was actually concave) or using this simplified solution. Remember that the convex hull test can return a false positive for concave objects if the point lies in a cavity of the object. In this
case, some games will require a higher precision result, and thus we will need to use an additional test to determine if there actually was a collision. A good option is to use the Jordan Curve 3D
test, which is explained in the next section. But many games can use this simplified test based on convex objects with no problem at all.
Another option is to always decompose any concave objects into a set of convex objects, so this test can always be used safely. Although this strategy must be addressed carefully (sliding collisions
do not work well in concave geometry), we can often avoid using a concave test, which will indeed be more costly.
If you need information on how to compute the convex hull of an object, check out the section "Computing a Convex Hull," at the end of this chapter, where a number of different strategies are discussed.
Point-in-Object (Jordan Curve Theorem)
The 2D Jordan Curve Theorem explained earlier can be easily extended to 3D, thus obtaining a general point-object inclusion algorithm. The operation, again, would involve counting the intersections
from the point along a ray in an arbitrary direction. If the number is odd, we are inside the object. If it is even, we are outside.
Point-in-Object (3DDDA)
The Jordan Curve method has a cost linear to the number of triangles, because we need to count intersections with a line segment. A different approach can greatly reduce this cost by increasing the
memory footprint of the algorithm. It is called the 3D Digital Differential Analyzer (3DDDA). Its core idea is very simple. At load time, we preprocess the mesh so its data is stored in a 3D regular grid.
Each grid cell will contain those triangles whose barycenter is located inside the cell. The grid size must be inversely proportional to the number of triangles in the mesh. So, when we need to find
out whether a point is inside the mesh, all we have to do is follow the segment on a cell-by-cell basis, intersecting only with those triangles that lie in the cells we visit along the way. By doing
so, we will end up testing a very low number of triangles, especially when compared to the whole mesh (see Figure 22.5).
Figure 22.5. 3DDDA, pictured. Visited cells are shaded.
Another improvement to this algorithm is to store one enumerated value per cell, which can be INSIDE, OUTSIDE, and DON'T KNOW. As we load the mesh, we perform our 3DDDA test stochastically in several
positions of each cell and store the result. This way, whenever we need to test for point inclusion, we can save most tests. If the grid size is selected properly, most cells will be completely in or
out, and thus the test will be free. For those cells that have a part inside and a part outside, we can use our regular 3DDDA code.
3DDDA was introduced by Fujimoto as a way to speed up ray-mesh tests in ray tracing. Assuming N triangles and a grid of Gx,Gy,Gz cells, and assuming triangles are spaced evenly, the cost of a 3DDDA
run is O(N/(Gx*Gy*Gz)) per cell, and at most we scan max(Gx, Gy, Gz) cells along the way. So, the cost is orders of magnitude below that of the previous test. Speedups in the hundreds are not uncommon, with the downside being the large memory footprint required by 3DDDA.
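
A rough sketch of the cell-stepping core (2D for brevity, in the style of the Amanatides-Woo traversal; the 3D version just adds one more axis). It yields the grid cells crossed by a ray, which is where you would test the triangles stored in each cell:

    import math

    def dda_cells(p, d, cell_size, max_cells):
        ix, iy = int(p[0] // cell_size), int(p[1] // cell_size)
        step_x = 1 if d[0] > 0 else -1
        step_y = 1 if d[1] > 0 else -1

        def t_to_edge(pos, dir_, i, step):
            if dir_ == 0:
                return math.inf
            edge = (i + (1 if step > 0 else 0)) * cell_size
            return (edge - pos) / dir_

        t_max_x = t_to_edge(p[0], d[0], ix, step_x)   # ray distance to next x boundary
        t_max_y = t_to_edge(p[1], d[1], iy, step_y)   # ray distance to next y boundary
        t_delta_x = cell_size / abs(d[0]) if d[0] else math.inf
        t_delta_y = cell_size / abs(d[1]) if d[1] else math.inf
        for _ in range(max_cells):
            yield ix, iy                              # visit this cell
            if t_max_x < t_max_y:
                ix += step_x
                t_max_x += t_delta_x
            else:
                iy += step_y
                t_max_y += t_delta_y

    print(list(dda_cells((0.2, 0.4), (1.0, 0.6), 1.0, 6)))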
XL Loan payments
XL Loan payments - Computer Definition
You can use Excel to compute a loan payment whether you need to save the results in your worksheet or not. If you do not need to record the results, you can select any cell to start the operation. To
compute a loan payment, do this: 1. Select the cell where you want the result. 2. Click the Paste Function button. 3. Select PMT from the Financial category. 4. In Rate, enter the annual percentage, a slash, and the number of payments per year (e.g., 6%/12 for a monthly loan). 5. In Nper, enter the total number of monthly payments. 6. In Pv, enter the value of the loan. 7. View the monthly payment at the bottom.
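For example (the figures are illustrative, not from the original definition): for a $200,000 loan at 6% annual interest paid monthly over 30 years, the formula =PMT(6%/12, 360, -200000) returns a monthly payment of about $1,199.10. The loan amount is entered as a negative number so that the payment comes out positive.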
Chem 59-553 Calculation of Structure Factors
For a detailed discussion of the calculation of structure factors, see the
following web site (there are many other useful discussions there too):
The relative phases of the structure factors are critical for determining the electron density distribution in the unit cell, because for a centro-symmetric structure:

ρ(xyz) = (1/V) Σh Σk Σl |F(hkl)| cos[2π(hx + ky + lz) - α(hkl)]

The phase difference is important because it tells us where the maxima and minima of the periodic functions related to the electron density lie in the unit cell.

[Figure: two cosine waves of equal frequency, one with α = 0 and one with α = 90° (π/2 radians).]
When the cosine waves are in phase with one another, you can determine
the amplitude of the resultant wave by simply adding the amplitudes of the
initial waves.
When the cosine waves are not in phase with one another, the situation becomes more complicated. The resultant wave has the same wavelength, but the amplitude is decreased and the maximum is no longer at 0 – the phase is shifted by α.
The reason for these quantities is illustrated below. In general, for several waves:

A(hkl) = Σj fj cos[2π(h xj + k yj + l zj)]
B(hkl) = Σj fj sin[2π(h xj + k yj + l zj)]
F(hkl) = A(hkl) + iB(hkl)
Argand Diagrams
These diagrams are called Argand diagrams, where the horizontal axis is real and
the vertical axis is imaginary. These end up being a much simpler way of adding
waves because each wave can easily be represented as a vector. The length of
each vector is fj, and the phase relative to the origin is provided by the angle (φ in these diagrams, α overall in the equations I have given you).
From Euler: e^(iφ) = cos(φ) + i sin(φ)
exp(-inπx) = cos(nπx) - i sin(nπx)
exp(inπx) + exp(-inπx) = 2 cos(nπx)
exp(inπx) - exp(-inπx) = 2i sin(nπx)
Here are some more properties of complex numbers and Argand diagrams that end
up being helpful in the understanding of structure factors. In particular, notice that
the phase angle can be easily determined from tan(α) = B/A if we know the values A and B. Also notice that to obtain the squared magnitude of a complex number, it must be multiplied by its complex conjugate!
z = a + ib = |z| (cos α + i sin α),
where α is the angle between z and the real axis.
|z| = (a² + b²)^(1/2) = [(a + ib)(a - ib)]^(1/2) = (z z*)^(1/2),
where z* = a - ib is the complex conjugate.
Friedel's Law and Structure Factors
As you would expect, symmetry relates certain families of planes to each other and
the actual relationships between the intensities of sets of related reflections are
described by Friedel’s law. Note that the intensity of a reflection is proportional to the
square of the magnitude of F(hkl); i.e. I(hkl) ∝ |F(hkl)|².
Friedel's law asserts that: I(hkl) ≡ I(-h-k-l)
This is a consequence of the structure factor
equation in the form: F(hkl) = A(hkl) + iB(hkl)
Since cos(-α) = cos(α) and sin(-α) = -sin(α),
F(-h-k-l) = A(hkl) - iB(hkl)
|F(hkl)| = |F(-h-k-l)| = [A² + B²]^(1/2)
Note that α(-h-k-l) = -α(hkl).
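A quick numerical check of this result, sketched in Python with made-up scattering factors and fractional coordinates:

    import cmath

    def F(hkl, atoms):
        # atoms: list of (fj, (x, y, z)) in fractional coordinates
        h, k, l = hkl
        return sum(f * cmath.exp(2j * cmath.pi * (h*x + k*y + l*z))
                   for f, (x, y, z) in atoms)

    atoms = [(6.0, (0.10, 0.20, 0.30)),
             (8.0, (0.40, 0.15, 0.65))]
    F1, F2 = F((1, 2, 3), atoms), F((-1, -2, -3), atoms)
    print(abs(F1), abs(F2))                    # equal magnitudes, as Friedel's law asserts
    print(cmath.phase(F1), cmath.phase(F2))    # opposite phases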
Friedel’s law is important in terms of the actual diffraction experiment for several
reasons. Primarily, the relationship reduces the amount of data that is necessary to
collect. When Friedel’s law holds (there are some exceptions), the intensity of half
of the reciprocal lattice is provided by the other half, thus we only need to collect a
hemisphere of the reciprocal lattice points within the limiting sphere.
Similar arguments can be used to deduce the relationships between the I(hkl) values for more symmetric crystal systems, and thus to determine the number of independent reflections that must be collected.
triclinic (hemisphere): I(hkl) ≡ I(-h-k-l)
monoclinic (quadrant): I(hkl) ≡ I(-h-k-l) ≡ I(-hk-l) ≡ I(h-kl); I(-hkl) ≡ I(h-k-l) ≡ I(hk-l) ≡ I(-h-kl); but I(hkl) ≠ I(-hkl)
orthorhombic (octant): I(hkl) ≡ I(-hkl) ≡ I(h-kl) ≡ I(hk-l) ≡ I(-h-kl) ≡ I(-hk-l) ≡ I(h-k-l) ≡ I(-h-k-l)
Laue Groups
Note that the actual diffraction pattern (with the intensities of the reflections taken into
account) must be at least centro-symmetric from Friedel’s law. When this centro-
symmetric requirement is combined with the actual symmetry of the crystal lattice one
obtains the Laue Class or Laue symmetry of the reciprocal lattice. This symmetry is
used by the data collection software, in conjunction with systematic absences, to
determine the space group of the crystal.
Note: a "Friedel pair" is a pair of reflections that are related only by Friedel's law, not by
crystal symmetry. Pairs of reflections that are related by the symmetry of the crystal
are called “centric” reflections.
Crystal System | Point groups | Laue Class | Patterson Symmetry
Triclinic | 1, -1 | -1 | P-1
Monoclinic | 2, m, 2/m | 2/m | P2/m, C2/m
Orthorhombic | 222, mm2, mmm | mmm | Pmmm, Cmmm, Fmmm, Immm
Tetragonal | 4, -4, 4/m | 4/m | P4/m, I4/m
Tetragonal | 422, 4mm, -42m, 4/mmm | 4/mmm | P4/mmm, I4/mmm
Trigonal | 3, -3 | -3 | P-3, R-3
Trigonal | 32, 3m, -3m | -3m | P-3m1, P-31m, R-3m
Hexagonal | 6, -6, 6/m | 6/m | P6/m
Hexagonal | 622, 6mm, -62m, 6/mmm | 6/mmm | P6/mmm
Cubic | 23, m-3 | m-3 | Pm-3, Im-3, Fm-3
Cubic | 432, -43m, m-3m | m-3m | Pm-3m, Fm-3m, Im-3m
Structure Factors
Note that the structure factor equations in their various forms are used to
derive numerous different relationships that end up being useful for
crystallography. For example, you can look up examples of the derivation of
systematic absences using these equations in any of the textbooks I have mentioned.
An interesting and useful consequence of the structure factor equations is
that the phases found in centro-symmetric crystals are only on the real axis,
thus the phase α is either 0 or π. In a centro-symmetric crystal if there is an
atom at xyz, then there must be an identical atom at -x-y-z so the structure
factor equation in the form F(hkl) = A(hkl) + iB(hkl) gives:
A(hkl) = f [cos 2π(hx+ky+lz) + cos 2π(h(-x)+k(-y)+l(-z))] = 2f cos[2π(hx+ky+lz)]
B(hkl) = f [sin 2π(hx+ky+lz) + sin 2π(h(-x)+k(-y)+l(-z))] = 0
Thus: F(hkl) = 2f cos[2π(hx+ky+lz)] = A(hkl)
This means that the phase is either positive or negative.
This makes determining the phases of the reflections significantly easier
In summary, structure factors contain information regarding the intensity and phase
of the reflection of a family of planes for every atom in the unit cell (crystal). In
practice, we are only able to measure the intensity of the radiation, not the phase.
Because of this, it is necessary to ensure that the intensity data is as accurate as possible and all on the same scale, so that we can use it to determine the electron density distribution in the crystals.
infinite barbell, circle, 37777...777773
So I guess a circle with an infinite radius can't exist,
according to Ricky's notion that {1,1,1...2}
equals {1,1,1...}, since you never get to the 2.
I am not so adamant in my beliefs, since I haven't
probably put so much time reading or thinking about
the subject.
What about a barbell with weights on boths ends and
an infinite length bar in the middle??
Can these concepts exist in our mathematical minds??
Ricky seems to think they cannot exist??
Maybe I don't understand him correctly.
Please correct me, and I am sorry for the accusations.
I am just trying to learn from you all.
igloo myrtilles fourmis
Re: infinite barbell, circle, 37777...777773
So I guess a circle with an infinite radius can't exist,
Sure it can. A circle with an infinite radius is a line.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: infinite barbell, circle, 37777...777773
So you're saying it's a line, but not a circle, or are you saying
it's a line and a circle at the same time, or what are you saying?
igloo myrtilles fourmis
Re: infinite barbell, circle, 37777...777773
So you're saying it's a line, but not a circle, or are you saying
it's a line and a circle at the same time, or what are you saying?
I believe Ricky must mean that at any point we looked at the circle, it would look like a line.
I believe (and it's a belief, not a proof)
that an infinite circle is impossible, because by definition a circle is a finite thing.
If it were infinite we would not be able to define it.
Re: infinite barbell, circle, 37777...777773
John E. Franklin wrote:
So I guess a circle with an infinite radius can't exist
Definition of a circle of radius r with centre c in the plane:
The set of all points with distance r from c.
So, can you have an infinite circle? (or, can you have another non-equivalent definition of a circle?) I don't think you can have an infinite circle.
Say there is an infinite circle (with r = "infinity") in the plane. Then take the set of all points that make up the circle and ask where the centre is... apparently, it's "infinitely far away" from
all of them. But there is no point on the plane that is infinitely far away from any other point on the plane, simply because any two points have a finite distance between them.
So you can define the centre on the plane, in which case none of the points of the circle are on the plane (not because they are "infinitely far away" and such... because they are, as previously
stated, not on the plane, so there are no points to make a circle with - also, the uniqueness of the centre is then void, and that throws another potential issue).
So I put it to you that you cannot have an infinitely large circle.
...and please, before you go throwing around phrases like "infinitely large" and such, take a second to think about what you mean, and see if you can come up with a clear, precise definition.
Otherwise all you're saying really is just meaningless babble. And if you're not clear on anything above, I'll be happy to further discuss it.
Bad speling makes me [sic]
Re: infinite barbell, circle, 37777...777773
Sure it can. A circle with an infinite radius is a line.
I just realised you are talking about non-Euclidean geometry, aren't you?
I don't know much about that, but I believe people who do know would assert that
the infinite circle could be imagined and that you would be able to imagine walking along
its perimeter, and that a circle of such magnitude, or approaching infinity, would appear as a line.
In the opposite sense to which two parallel lines appear to meet in the distance,
this circle would never appear to have rounded edges.
Last edited by cray (2006-10-10 10:39:06)
Re: infinite barbell, circle, 37777...777773
cray wrote:
Sure it can. A circle with an infinite radius is a line
I just realised you are talking about non-Euclidean geometry, aren't you?
Ah, but there are many types of non-Euclidean geometry, so you should really specify what sort you're using. For example, you would be hard-pressed to imagine an infinite circle if you're using
spherical geometry - since all your points are on a sphere, there is most certainly a limit to how large a circle could get!
Bad speling makes me [sic]
Re: infinite barbell, circle, 37777...777773
Not exactly. Take a piece of paper, and draw a fair sized circle. Now double or triple the size. Continue doing this, drawing as much of the circle as you can. You should see the curve of the circle start to become straighter and straighter. It's with this observation that you can reach the conclusion that if we take the limit of a circle as its radius approaches infinity, it becomes a line.
Not rigorous mathematics, just something to think about.
And you most certainly need an infinite circle when dealing with improper integrals in polar coordinates; the most famous example is the Gaussian integral.
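
To put numbers on the flattening, here is a small Python sketch: the bulge (sagitta) of a circular arc over a fixed chord of length 2 shrinks roughly like 1/(2r) as the radius grows:

    import math

    def sagitta(r, half_chord=1.0):
        # height of the arc above the chord's midpoint
        return r - math.sqrt(r*r - half_chord*half_chord)

    for r in (1.5, 10.0, 1000.0, 1e6):
        print(r, sagitta(r))   # tends to 0: the circle flattens into a line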
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: infinite barbell, circle, 37777...777773
Ricky wrote:
It's with this observation that you can reach the conclusion that if we take the limit of a circle as its radius approaches infinity, it becomes a line.
Indeed - but this does not mean that an infinite circle exists. Just because the limit of f(x) as x approaches c exists, that does not mean that f(c) exists - f(x) could well be undefined at c.
So although the limit of larger and larger circles may indeed be a line, this does not mean that there exists an infinitely large circle.
Bad speling makes me [sic]
Re: infinite barbell, circle, 37777...777773
For example, you would be hard-pressed to imagine an infinite circle if you're using spherical geometry - since all your points are on a sphere, there is most certainly a limit to how large a
circle could get!
I wish I'd listened to my teacher more when I was at school because I would be taking a rough ride to nowhere to try and prove you wrong with my knowledge but it sure sounds like an obsfucation to
bring in circular geometric planes !
I am not disputing your word, just making clear I havent a clue what youre talking about ! HA !
Anyway my simple points were that
1) I dont think that a infinite circle can exist in practice simply because by definition every aspect of it would be infinitely spaced apart - just the fact that logic alone would tell you it is
impossible, as a circle would have a defined radius that cannot be infinitely long. A circle is a defined article so to speak.
2) If one were to begin the mathematical excersise of defining an infinitely large circle the definition would necessarily prove that the circle was not infinite since the implication that something
is circular implies that it is also curved and to define a curve that must be done in a finite space
Re: infinite barbell, circle, 37777...777773
I really like your logic, Cray.
What about an infinite spiral??
Can you have that??
igloo myrtilles fourmis
Re: infinite barbell, circle, 37777...777773
Dross wrote:
Ricky wrote:
It's with this observation that you can reach a conclusions that if we take the limit of a circle as it's radius approaches infinity, it becomes a line.
Indeed - but this does not mean that an infinite circle exists. Just because
exists, that does not mean that f(c) exists - f(x) could well be undefined at c.
So although the limit of larger and larger circles may indeed be a line, this does not mean that there exists an infinitely large circle.
Does x^2 exist at infinity? Or is that a meaningless question?
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: infinite barbell, circle, 37777...777773
I don't know if x^2 exists at infinity. What about x - 5 at infinity?? or ln(x) at infinity?
Are they all the same thing? infinity?
If everything about infinity is infinity, then you can't have an object like a circle or a drawing of a house the size of infinity.
But if infinity is a world unto itself, but separate from the real world, then maybe all these things could exist there.
From the real world, it all looks like infinity, one undefined number.
But from the infinite world, then things are as diverse as they are here in the real world.
igloo myrtilles fourmis | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=45172","timestamp":"2014-04-16T04:37:43Z","content_type":null,"content_length":"28122","record_id":"<urn:uuid:046367d6-d93e-47a4-ad95-e6d63e935e48>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plus Magazine
January 2001
Steven J. Brams uses the Cuban missile crisis to illustrate the Theory of Moves, which is not just an abstract mathematical model but one that mirrors the real-life choices, and underlying thinking,
of flesh-and-blood decision makers.
Last October, two mathematicians won £1m when it was revealed that they were the first to solve the Eternity jigsaw puzzle. It had taken them six months and a generous helping of mathematical
analysis. Mark Wainwright meets the pair and finds out how they did it.
Why can't human beings walk as fast as they run? And why do we prefer to break into a run rather than walk above a certain speed? Using mathematical modelling, R. McNeill Alexander finds some answers.
Arguably, the exponential function crops up more than any other when using mathematics to describe the physical world. In the first of two articles on physical phenomena which obey exponential laws,
Ian Garbett discusses light attenuation - the way in which light decreases in intensity as it passes through a medium.
Jenni Barker plots the path from astrophysics to science journalism. | {"url":"http://plus.maths.org/content/plus-magazine-19","timestamp":"2014-04-20T00:50:32Z","content_type":null,"content_length":"24599","record_id":"<urn:uuid:c5d7d3ff-fda0-47d0-bf1e-aa05b05d5757>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test your GQ (Geonomic Quotient) — Quiz 6
November 1, 2013 | Posted by Jeffery J. Smith under Quizzes
Humanity’s Blind Spot toward Land
Discover startling statistics (data up to 2010)
in a quiz of 20 questions.
For more information on these figures and sources, please get in touch.
Economists, and most people, have a blind spot when it comes to realizing the role of land in creating our comforts. Show what you know about what the vast majority just can't seem to see. The first couple of queries are just for practice.
How To Graph A Parabola Properly In Vertex Form Math
Figure b: parabola cheat sheet for vertically oriented parabolas: focus, vertex, axis of symmetry, and directrix. An introduction to the basic terms and techniques for graphing quadratic functions, which warns against common student errors. The general form of a quadratic is y = ax^2 + bx + c; for graphing, the leading coefficient a indicates how "fat" or how "skinny" the parabola will be.
[Image: Parabola 1 with its vertex-form equation.]
Graphing a parabola in vertex form | solving and graphing: given the vertex and a random point on the parabola, "write the equation of the parabola in vertex form."
How to graph a parabola with a calculator | eHow: graphing calculators are handy tools for algebra, calculus, and geometry.
How to find extra points for a parabola (quadratic equation): a mathematical educational video on finding extra points for a parabola, with examples.
IoHT :: Clausius' Mechanical Theory of Heat [1850-1865]
Mechanical Theory of Heat
Rudolf Clausius
lived from 1822-1888. He was born to a large German family, where he was the last born of six brothers among other siblings. He was educated at a small private school at which his father was the principal. Following private schooling, Clausius moved on to the Gymnasium in Stettin, where he remained until he had completed his schooling in 1840. From here, Clausius entered the University of Berlin, where he first entertained thoughts of a degree in history, but eventually settled on a degree in mathematics and physics, which he completed in 1844. He then spent a probationary year at the Frederic-Werder Gymnasium teaching advanced classes in mathematical physics. His PhD dissertation, which proposed an explanation for the blue color of the sky, the red colors seen at sunrise and sunset, and the polarization of light, was completed by 1848 at Halle University.
Rudolf Clausius [1822-1888]
* Published in Poggendorff's Annalen, Dec. 1854, vol. xciii. p. 481; translated in the Journal de Mathematiques, vol. xx. Paris, 1855, and in the Philosophical Magazine, August 1856, s. 4. vol. xii, p. 81
** Communicated to the Naturforschende Gesellschaft of Zurich, Jan. 27th, 1862; published in the Viertaljahrschrift of this Society, vol. vii. P. 48; in Poggendorff’s Annalen, May 1862, vol. cxvi. p.
73; in the Philosophical Magazine, S. 4. vol. xxiv. pp. 81, 201; and in the Journal des Mathematiques of Paris, S. 2. vol. vii. P. 209.
*** Read at the Philosophical Society of Zurich on the 24th of April, 1865, published in the Vierteljahrsschrift of this society, Bd. x. S. 1.; Pogg. Ann. July, 1865, Bd. cxxv. S. 353; Journ. de
Liouville, 2e ser. t. x. p. 361.
Clausius’ first and most famous paper, of sixteen in total, published in 1850, was a treatise on the mechanical theory of heat, entitled "On the
Motive Power
and on the
which can be deduced from it for the
Theory of Heat
." In this famous paper, Clausius set forth the argument that whenever
is done by heat, a certain quantifiable amount of permanent change occurs in the working body, the "working body" being typically a cylindrical body of steam or liquid. This was in direct contrast to
Sadi Carnot
, the French physicist who in 1824, in his founding thermodynamic paper "Reflections on the Motive Power of Fire and on
Fitted to Develop that Power", reasoned that whenever work is done by heat, no permanent change occurs in the working body. It was in this famous paper, and nine memoirs to follow, that Clausius
began to develop the concept of
, which accounts for changes that occur in the condition of
working body
whenever work is done by heat.
In 1850, and over the next fifteen years, Clausius wrote nine memoirs on various aspects of the motive power of heat, with focus on what he called a “modification to the first fundamental theorem”.
In 1865, these collected memoirs were published in a book entitled Mechanical Theory of Heat.
Carnot, Sadi. (1824). “Reflections on the Motive Power of Fire.” (55 pages)
Clapeyron, Emile. (1834). “Memoir on the Motive Power of Heat.” (35 pages)
Clausius, Rudolf. (1850). “On the Motive Power of Heat” (43 pages)
This being the situation, the IoHT has taken selected excerpts from an original 1865 copy of Clausius' Mechanical Theory of Heat, containing the essential points in the development of the concept of entropy, and posted them online (below) for public consumption. Before reading the following excerpts, however, it will please the mind of the reader to first read the three papers mentioned above, as found in the adjacent book. These three papers are the founding stones of the science of thermodynamics. In addition, Clausius' 1857 paper, "On the Nature of the Motion which we call Heat", which helped to establish the kinetic theory of gases, is found online and should be read as well.
Essentially, Carnot's 1824 paper puts forward the hypothesis that heat and work are equivalent and gives us a verbal description of an engine cycle, in which reversibility is assumed. Clapeyron's 1834 paper graphically analyzes Carnot's engine cycle. Lastly, Clausius' 1850 paper, which is his first memoir, argues that the first fundamental theorem in the mechanical theory of heat, in the form of approximately Q = W, as presented by Carnot in a verbal manner and by Clapeyron in a graphical manner, needs amendment in that it does not account for changes in the constitution of the working body. By this, Clausius argued that Carnot's theorem did not account for the fact that real-life processes are "irreversible", meaning that during any sort of transformation the molecules of the working body do work on each other, i.e. change their arrangements, as the working body progresses from the "initial state" to the "final state" of each step of the Carnot cycle. Subsequently, this work that the molecules of the body do on each other, which Clausius terms "internal work", needed to be energetically accounted for in the energy balance equation. It was from this argument that the concept of entropy arose.
To note, as the first memoir of Clausius is available in book form, we will not reproduce it here. Secondly, the second and third memoirs as found in the 1865 book Mechanical Theory of Heat are essentially discussions on how to calculate heat capacities and vapour pressures. As such, they will not be reproduced here. Thirdly, the fourth and the sixth memoirs are where the essence of entropy takes form, both mathematically and conceptually. The main parts of these memoirs, as well as an excerpt of the fifth memoir, are shown below. Neither the seventh nor the eighth memoir will be reproduced here; to note, however, the eighth memoir essentially contains verbal arguments for the proposal that the entropy of the universe tends towards a maximum.
Professor of physics in the University of Zurich
Zurich, August 1864
Edited by
T. Archer Hirst, F.R.S.,
Professor of Mathematics in University College, London.
With introduction by
Professor Tyndall.
London, May 1867
Mathematical Introduction (1858):
On the Treatment of Differential Equations which are not Directly Integrable, pp. 1-13.
First Memoir (1850):
On the Moving Force of Heat and the Laws of Heat Which May be Deduced Therefrom, pp. 14-69
Second Memoir (1851):
On the Deportment of Vapour During its Expansion Under Different Circumstances, pp. 90-100
Third Memoir (1851):
On the Theoretic Connexion of Two Empirical Laws Relating to the Tensions and the Latent Heat of Different Vapours, pp. 104-110
Fourth Memoir (1854):
On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat, pp. 111-135
Fifth Memoir (1856):
On the Application of the Mechanical theory of Heat to the Steam-Engine, pp. 136-207
Sixth Memoir (1862):
On the Application of the Theorem of the Equivalence of Transformations to Interior Work, pp. 215-250
Seventh Memoir (1863):
On an Axiom in the Mechanical Theory of Heat, pp. 267-289.
Eighth Memoir (1863):
On the Concentration of Rays of Heat and Light, and on the Limits of its Action, pp. 290-326.
Ninth Memoir (1865):
On Several Convenient Forms of the Fundamental Equations of the Mechanical Theory of Heat, pp. 327-365.
In my memoir "On the Moving Force of Heat, &c.", I have shown that the theorem of the equivalence of heat and work, and Carnot's theorem, are not mutually exclusive, but that, by a small modification of the latter, which does not affect its principle, they can be brought into accordance. With the exception of this indispensable change, I allowed the theorem of Carnot to retain its original form, my chief object then being, by the application of the two theorems to special cases, to arrive at conclusions which, according as they involved known or unknown properties of bodies, might suitably serve as proofs of the truth of the theorems, or as examples of their fecundity.
This form, however, although it may suffice for the deduction of the equations which depend upon the theorem, is incomplete, because we cannot recognize therein, with sufficient clearness, the real
nature of the theorem, and its connexion with the first fundamental theorem. The modified form in the following pages will, I think, better fulfill this demand, and in its applications will be found very convenient.
Before proceeding to the examination of the second theorem, I may be allowed a few remarks on the first theorem, so far as this is necessary for the supervision of the whole. It is true that I might
assume this as known from my former memoirs or from those of other authors, but to refer back would be inconvenient; and besides this, the exposition I shall here give is preferable to my former one,
because it is at once more general and more concise.
Theorem of the equivalence of Heat and Work
Whenever a moving force generated by heat acts against another force, and motion in the one or the other direction ensues, positive work is performed by the one force at the same time that negative work is done by the other. As this work has only to be considered as a simple quantity in calculation, it is perfectly arbitrary, in determining its sign, which of the two forces is chosen as the indicator. Accordingly in researches which have a special reference to the moving force of heat, it is customary to determine the sign by counting as positive the work done by heat in overcoming any other force, and as negative the work done by such other force. In this manner the theorem of the equivalence of heat and work, which forms only a particular case of the general relation between vis viva and mechanical work, can be briefly enunciated thus: Mechanical work may be transformed into heat, and conversely heat into work, the magnitude of the one being always proportional to that of the other.
On a modified form of the second fundamental theorem in the mechanical theory of heat
[December 1854]
and the passage of the quantity of heat Q from the temperature t1 to the temperature t2, has the equivalence-value:
wherein T is a function of the temperature, independent of the nature of the process by which the transformation is effected.
(skipping three pages)
The equation:
is the analytical expression, for all reversible cyclical processes, of the second fundamental theorem in the mechanical theory of heat.
FIFTH MEMOIR
On the Application of the Mechanical theory of Heat to the Steam-Engine
(from page 141)
The two fundamental theorems, which hold good in every cyclical process, are represented by the following equations:
A = the thermal equivalence of the unit of work.
W = the external work performed during the cyclical process.
dQ = an element of the same, whereby any heat withdrawn from the body is to be considered as an imparted negative quantity of heat. The integral in the second equation is extended over the whole
quantity Q.
T = absolute temperature.
N = the equivalence-value of all the uncompensated transformations involved in a cyclical process.
SIXTH MEMOIR
On the Application of the Theorem of the Equivalence of Transformations to Interior Work**
In a memoir published in the year 1854, wherein I sought to simplify to some extent the form of the developments I had previously published, I deduced, form my fundamental proposition that heat
cannot, by itself, pass from a colder into a warmer body, a theorem which is closely allied to, but does not entirely coincide with, the one first deduced by S. Carnot from considerations of a
different kind, based upon the older views of the nature of heat. It has reference to the circumstances under which work can be transformed into heat, and conversely, heat converted into work; and I
have called it the Theorem of the equivalence of Transformations. I did not, however, there communicate the entire theorem in the general form in which I had deduced it, but confined myself on that
occasion to the publication of a part which can be treated separately form the rest, and is capable of more strict proof.
In general, when a body changes its state, work is performed externally and internally at the same time, the exterior work having reference to the forces which extraneous bodies exert upon the body
under consideration, and the interior work to the forces exerted by the constituent molecules of the body in question upon each other. The interior work is for the most part so little known, and
connected with another equally unknown quantity (in fact with the increase of heat actually present in the body) in such a way, that in treating of it we are obliged in some measure to trust to
probabilities; whereas the exterior work is immediately accessible to the observation and measurement, and thus admits of more strict treatment. Accordingly, since, in my former paper, I wished to
avoid everything that was hypothetical, I entirely excluded the interior work, which I was able to do by confining myself to the consideration of cyclical process—that is to say, operations in which
the modifications which the body undergoes are so arranged that the body finally returns to its original condition. In such operations the interior work which is performed during the several
modifications, partly in a positive sense and partly in a negative sense, neutralizes itself, so that nothing but exterior work remains, for which the theorem in question can then be demonstrated
with mathematical strictness, starting for the above-mentioned fundamental proposition.
I have delayed till now the publication of the remainder of my theorem, because it leads to consequence which is considerably at variance with the ideas hitherto generally entertained of the heat
contained in bodies, an I therefore thought it desirable to make still further trial of it. But as I have become more and more convinced in the course of years that we must not attach too great
weight to such ideas, which in part are founded more upon usage than upon a scientific basis, I feel that I ought to hesitate no longer, buy to submit to the scientific public the theorem of the
equivalence of transformations in its complete form, with the theorems which attach themselves to it. I venture to hope that the importance which these theorems, supposing them to be true, possess in
connexion with the theory of heat will be though of justify their publication in their present hypothetical form.
I will, however, at once distinctly observe that, whatever hesitation may be felt in admitting the truth of the following theorems, the conclusions arrived at in my former paper, in reference to
cyclical processes, are not at all impaired.
I will begin by briefly stating the theorem of the equivalence of transformations, as I have already developed it, in order to be able to connect with it the following considerations:
When a body goes through a cyclical process, a certain amount of exterior work may be produced, in which case a certain quantity of heat must be simultaneously expended; or, conversely, work may be
expended and a corresponding quantity of heat may be gained. This may be expressed by saying: Heat can be transformed into work, or work into heat, by a cyclical process.
There may also be another effect of a cyclical process: heat may be transferred from one body to another, by the body which is undergoing modification absorbing heat from the one body and giving it
out again to the other. In this case the bodies between which the transfer of heat takes place are to be viewed merely as heat reservoirs, of which we are not concerned to know anything except the
temperatures. If the temperatures of the two bodies differ, heat passes, either from a warmer to a colder body, or from a colder to a warmer body, according to the direction in which the transference
of heat takes place. Such a transfer of heat may also be designated, for the sake of uniformity, a transformation, inasmuch as it may be said that heat of one temperature is transformed into heat of
another temperature.
The two kinds of transformations that have been mentioned are related in such a way that one presupposes the other, and that they can mutually replace each other. If we call transformations which can
replace each other equivalent, and seek the mathematical expressions which determine the amount of the transformations in such a manner that the equivalent transformations become equal in magnitude,
we arrive at the following expression: If the quantity of heat Q of the temperature t is produced from work, the equivalence-value of this transformation is:

Q/T;

and if the quantity of heat Q passes from a body whose temperature is t1 into another whose temperature is t2, the equivalence-value of this transformation is:

Q (1/T2 − 1/T1),

where T is a function of the temperature which is independent of the kind of process by means of which the transformation is effected, and T1 and T2 denote the values of this function which
correspond to the temperatures t1 and t2. I have shown by separate considerations that T is in all probability nothing more than the absolute temperature.
These two expressions further enable us to recognize the positive or negative sense of the transformations. In the first, Q is taken as positive when work is transformed into heat, and as negative
when heat is transformed into work. In the second, we may always take Q as positive, since the opposite senses of the transformations are indicated by the possibility of the difference [1/T2 – 1/T1]
being either positive or negative. It will thus be seen that the passage of heat from a higher to a lower temperature is to be looked upon as a positive transformation, and its passage from a lower
to a higher temperature as a negative transformation.
If we represent the transformations which occur in a cyclical process by these expressions, the relation existing between them can be stated in a simple and definite manner. If the cyclical process
is reversible, the transformations which occur therein must be partly positive and partly negative, and the equivalence-values of the positive transformations must be together equal to those of the
negative transformations, so that the algebraic sum of all the equivalence-values becomes = 0. If the cyclical process is not reversible, the equivalence-values of the positive and negative
transformations are not necessarily equal, but they can only differ in such a way that the positive transformations predominate. The theorem respecting the equivalence-values of the transformations
may accordingly be stated thus: The algebraic sum of all the transformations occurring in a cyclical process can only be positive, or, as an extreme case, equal to nothing.
The mathematical expression for this theorem is as follows. Let dQ be an element of the heat given up by the body to any reservoir of heat during its own changes, heat which it may absorb from a
reservoir being here reckoned as negative, and T the absolute temperature of the body at the moment of giving up this heat, then the equation:

∫ dQ/T = 0

must be true for every reversible cyclical process, and the relation:

∫ dQ/T ≥ 0

must hold good for every cyclical process which is in any way possible.
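As a quick numerical sanity check of these two relations (an illustrative sketch added here, not part of Clausius's memoir), consider a Carnot engine between two reservoirs, with dQ counted, as above, as heat given up by the working body:

# Illustrative only; all numbers are made up.
T_hot, T_cold = 500.0, 300.0       # reservoir temperatures (absolute)
Q_in = 1000.0                      # heat absorbed from the hot reservoir

Q_out_rev = Q_in * T_cold / T_hot  # reversible: Q_out/T_cold = Q_in/T_hot
print(-Q_in / T_hot + Q_out_rev / T_cold)   # 0.0  -> the reversible equality

Q_out_irr = Q_out_rev + 100.0      # irreversible: extra heat rejected
print(-Q_in / T_hot + Q_out_irr / T_cold)   # > 0  -> the general relation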
2. Although the necessity of this theorem admits of strict mathematical proof if we start from the fundamental proposition above quoted, it thereby nevertheless retains an abstract form, in which it
is with difficulty embraced by the mind, and we feel compelled to seek for the precise physical cause, of which this theorem is a consequence. Moreover, since there is no essential difference between
interior and exterior work, we may assume almost with certainty that a theorem which is generally applicable to exterior work cannot be restricted to this alone, but that, where exterior work is
combined with interior work, it must be capable of application to the latter alone.
Considerations of this nature led me to assume a general law respecting the dependence of the active force of heat on temperature, among the immediate consequences of which is the theorem of the
equivalence of transformations in its more complete form, and which at the same time leads to other important conclusions. This law I will at once quote, and will endeavour to make its meaning clear
by the addition of a few comments. As for the reasons for supposing it to be true, such of them as do not at once appear from its internal probability will gradually become apparent in the course of this
paper. It is as follows:
Mechanical Theory of Heat
Clausius, R. (1865). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.
where K is a constant depending on the unit, hitherto left undetermined, according to which Z is to be measured.
(skipping about 8 pages worth of derivation and discussion)
7. We will now investigate the manner in which, from equation (II), it is possible to arrive at the equation (I) previously given in Art. 1, which equation must hold, according to the fundamental
theorem that I have already enunciated, for every reversible cyclical process.
When the successive changes of condition constitute a cyclical process, the disgregation of the body is the same at the end of the operation as it was at the beginning, and hence the following
equation must hold good:

∫ dZ = 0.

Equation (II) is hereby transformed into:

In order that this equation may accord with equation (I), namely:

∫ dQ/T = 0,

the following equation must hold for every reversible cyclical process:

∫ dH/T = 0.
It is this equation which leads to the consequence, referred to in Art. 1, as at variance with commonly received views. It can, in fact, be proved that, in order that this equation may be true, it is
at once necessary and sufficient to assume the following theorem:
NINTH MEMOIR
On Several Convenient Forms of the Fundamental Equations of
the Mechanical Theory of Heat***
(from page 354 of the ninth memoir)
The other magnitude to be here noticed is connected with the second fundamental theorem, and is contained in equation (IIa). In fact if, as equation (IIa) asserts, the integral

∫ dQ/T

vanishes whenever the body, starting from any initial condition, returns thereto after its passage through any other conditions, then the expression dQ/T under the sign of integration must be the
complete differential of a magnitude which depends only on the present existing condition of the body, and not upon the way by which it reached the latter. Denoting this magnitude by S, we can write

dS = dQ/T;

or, if we conceive this equation to be integrated for any reversible process whereby the body can pass from the selected initial condition to its present one, and denote at the same time by S0 the
value which the magnitude S has in that initial condition,

S = S0 + ∫ dQ/T.
This equation is to be used in the same way for determining S as equation (58) was for defining U. The physical meaning of S has already been discussed in the Sixth Memoir.
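The path-independence asserted here can be checked numerically (again an added sketch, not from the memoir): for one mole of a monatomic ideal gas, dQ = (3/2)nR dT + (nRT/V) dV along a reversible path, and ∫ dQ/T comes out the same along two quite different paths between the same two states:

import numpy as np

n, R = 1.0, 8.314
T1, V1, T2, V2 = 300.0, 0.010, 600.0, 0.030
s = np.linspace(0.0, 1.0, 20001)

def delta_S(T, V):
    # integrate dQ/T with dQ = (3/2) n R dT + (n R T / V) dV
    dT, dV = np.diff(T), np.diff(V)
    Tm, Vm = (T[:-1] + T[1:]) / 2, (V[:-1] + V[1:]) / 2
    return np.sum((1.5 * n * R * dT + n * R * Tm / Vm * dV) / Tm)

# Path A: heat at constant volume, then expand at constant temperature
A = delta_S(np.r_[T1 + (T2 - T1) * s, np.full_like(s, T2)],
            np.r_[np.full_like(s, V1), V1 + (V2 - V1) * s])
# Path B: expand at constant temperature first, then heat
B = delta_S(np.r_[np.full_like(s, T1), T1 + (T2 - T1) * s],
            np.r_[V1 + (V2 - V1) * s, np.full_like(s, V2)])

print(A, B)   # both ~ 17.78 J/K, matching (3/2)nR ln(T2/T1) + nR ln(V2/V1)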
(after a series of derivations)
we obtain the equation:
We might call S the transformation content of the body, just as we termed the magnitude U the thermal and ergonal content. But as I hold it to be better to borrow terms for important magnitudes from
the ancient languages, so that they may be adopted unchanged in all modern languages, I propose to call the magnitude S the entropy of the body, from the Greek word τροπη, transformation. I have
intentionally formed the word entropy so as to be as similar as possible to the word energy; for the two magnitudes to be denoted by these words are so nearly allied in their physical meanings, that a
certain similarity in designation appears to be desirable.
(from last paragraph of the last page (365))
For the present I will confine myself to the statement of one result. If for the entire universe we conceive the same magnitude to be determined, consistently and with due regard to all
circumstances, which for a single body I have called entropy, and if at the same time we introduce the other and simpler conception of energy, we may express in the following manner the fundamental
laws of the universe which correspond to the two fundamental theorems of the mechanical theory of heat:
The energy of the universe is constant.
The entropy of the universe tends to a maximum.
The forces which here enter into consideration may be divided into two classes: those which the atoms of a body exert upon each other, and which depend, of course, upon the nature of the body, and
those which arise from the foreign influences to which the body may be exposed. According to these two classes of forces which have to be overcome, of which the latter are subject to essentially
different laws, I have divided the work done by heat into interior and exterior work.
With respect to the interior work, it is easy to see that when a body, departing from its initial condition, suffers a series of modifications and ultimately returns to its original state, the
quantities of interior work thereby produced must exactly cancel one another. For if any positive or negative quantity of interior work had remained, it must have produced an opposite
exterior quantity of work or a change in the existing quantity of heat; and as the same process could be repeated any number of times, it would be possible, according to the sign, either to produce
work or heat continually from nothing, or else to lose work or heat continually, without obtaining any equivalent; both of which cases are universally allowed to be impossible. But if at every return
of the body to its initial condition the quantity of interior work is zero, it follows, further, that the interior work corresponding to any given change in the condition of the body is completely
determined by the initial and final conditions of the latter, and is independent of the path pursued in passing from one condition to the other. Conceive a body to pass successively in different ways
from the first to the second condition, but always to return in the same manner to its initial state. It is evident that the quantities of interior work produced along the different paths must all
cancel the common quantity produced during the return, and consequently must be equal to each other.
It is otherwise with the exterior work. With the same initial and final conditions, this can vary just as much as the exterior influences to which the body may be exposed can differ.
Let us now consider at once the interior and exterior work produced during any given change of condition. If opposite in sign they may partially cancel each other, and what remains must then be
proportional to the simultaneous change which has occurred in the quantity of existing heat. In calculation, however, it amounts to the same thing if we assume an alteration in the quantity of heat
equivalent to each of the two kinds of work. Let Q be the quantity of heat which must be imparted to a body during its passage, in a given manner, from one condition to another, heat withdrawn
from the body being counted as an imparted negative quantity of heat. Then Q may be divided into three parts, of which the first is employed in increasing the heat actually existing in the body, the
second in producing the interior, and the third in producing the exterior work. What was before stated of the second part also applies to the first – it is independent of the path pursued in the
passage of the body from one state to another: hence both parts together may be represented by one function U, which we know to be completely determined by the initial and final states of the body.
The third part, however, the equivalent of exterior work, can, like this work itself, only be determined when the precise manner in which the changes of conditions took place is known. If W be the
quantity of exterior work, and A the equivalent of heat for the unit of work, the value of the third part will be A·W, and the first fundamental theorem will be expressed by the equation:
Q = U + A·W
When the several changes are of such a nature that through them the body returns to its original conditions, or when, as we shall in the future express it, these changes form a cyclical process, we
have U = 0, and the foregoing equation becomes:

Q = A·W.
(skipping about three paragraphs talking about pressure-volume external work)
The work done during an increment of volume dv will be p·dv. Hence the work done during a simultaneous increase of t and v is:

dW = p·dv;

and when we apply this to equation (I), we obtain:

dQ = dU + A·p·dv.
(skipping two pages of derivation on how to eliminate U from equation (II) using a few derivatives)
On this account I will not here pursue the subject further, but pass on to the consideration of the second fundamental theorem in the mechanical theory of heat.
Theorem of the equivalence of transformations
Carnot’s theorem, when brought into agreement with the first fundamental theorem, expresses a relation between two kinds of transformations, the transformation of heat into work, and the passage of
heat from a warmer to a colder body, which may be regarded as the transformation of heat at a higher, into heat at a lower temperature. The theorem, as hitherto used, may be enunciated in some such
manner as the following:
Mechanical work may be transformed into heat, and conversely heat into work, the magnitude of the one being always proportional to that of the other.
In deducing this theorem, however, a process is contemplated which is of too simple a character; for only two bodies losing or receiving heat are employed, and it is tacitly assumed that one of the two
bodies between which the transformation of heat takes place is the source of the heat which is converted into work. Now by previously assuming, in this manner, a particular temperature of the heat
converted into work, the influence which a change of this temperature has upon the relation between the two quantities of heat remains concealed, and therefore the theorem in the above form is
incomplete.
It is true this influence may be determined without great difficulty by combining the theorem in the above limited form with the first fundamental theorem, and thus completing the former by the
introduction of the results thus arrived at. But by this indirect method the whole subject would lose much of its clearness and facility of supervision, and on this account it appears to me
preferable to deduce the general form of the theorem immediately from the same principle which I have already employed in my former memoir, in order to demonstrate the modified theorem of Carnot.
This principle, upon which the whole of the following development rests, is as follows:
In all cases where a quantity of heat is converted into work, and where the body effecting this transformation ultimately returns to its original condition, another quantity of heat must necessarily
be transferred from a warmer to a colder body; and the magnitude of the last quantity of heat, in relation to the first, depends only upon the temperature of the bodies between which heat passes, and
not upon the nature of the body effecting the transformation.
Everything we know concerning the interchange of heat between two bodies of different temperature confirms this; for heat everywhere manifests a tendency to equalize differences of temperature, and
therefore to pass in a contrary direction, i.e. from warmer to colder bodies. Without further explanation, therefore, the truth of this principle will be granted.
The principle may be more briefly expressed thus: Heat cannot by itself pass from a colder to a warmer body; the words “by itself”, however, here require explanation. Their meaning will, it is true,
be rendered sufficiently clear by the exposition contained in the present memoir; nevertheless it appears desirable to add a few words here in order to leave no doubt as to the signification and
comprehensiveness of the principle.
In the first place, the principle implies that in the immediate interchange of heat between two bodies by conduction and radiation, the warmer body never receives more heat from the colder one than
it imparts to it. The principle holds, however, not only for processes of this kind, but for all others by which a transmission of heat can be brought about between two bodies of different temperature,
amongst which processes must be particularly noticed those wherein the interchange of heat is produced by means of one or more bodies which, on changing their condition, either receive heat from a
body, or impart heat to other bodies.
On considering the results of such processes more closely, we find that in one and the same process heat may be carried from a colder to a warmer body and another quantity of heat transferred from a
warmer to a colder body without any other permanent change occurring. In this case we have not a simple transmission of heat from a colder to a warmer body, or an ascending transmission of heat, as
it may be called, but two connected transmissions of opposite characters, one ascending and the other descending, which compensate each other. It may, moreover, happen that instead of a descending
transmission of heat accompanying, in the one and the same process, the ascending transmission, another permanent change may occur which has the peculiarity of not being reversible without either
becoming replaced by a new permanent change of a similar kind, or producing a descending transmission of heat. In this case the ascending transmission of heat may be said to be accompanied, not
immediately, but mediately, by a descending one, and the permanent change which replaces the latter may be regarded as a compensation for the ascending transmission.
Now it is to these compensations that our principle refers; and with the aid of this conception the principle may be also expressed thus: an uncompensated transmission of heat from a colder to a
warmer body can never occur. The term “uncompensated” here expresses the same idea that was intended to be conveyed by the words “by itself” in the previous enunciation of the principle, and by the
expression “without some other change, connected therewith, occurring at the same time” in the original text.
(skipping about thirteen pages)
The second fundamental theorem in the mechanical theory of heat may thus be enunciated:*
If two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generation of the quantity of heat Q of the
temperature t from work has the equivalence-value:

Q/T.
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
In order to understand the significance of this law, we require to consider more closely the processes by which heat can perform mechanical work. These processes always admit of being reduced to the
alteration in some way or another of the arrangement of the constituent parts of the body. For instance, when bodies are expanded by heat, their molecules being thus separated from each other: in
this case the mutual attractions of the molecules on the one hand, and external opposing forces on the other, in so far as any such are in operation, have to be overcome. Again, the state of
aggregation of bodies is altered by heat, solid bodies rendered liquid, and both solid and liquid bodies being rendered aeriform: here likewise internal forces, and in general external forces also,
have to be overcome. Another case which I will also mention, because it differs so widely from the foregoing, and therefore shows how various are the modes of action which have here to be considered,
is the transfer of electricity form one body to the other, constituting the thermo-electric current, which takes place by the action of heat on two heterogeneous bodies in contact.
In the cases first mentioned, the arrangement of the molecules is altered. Since, even while a body remains in the same state of aggregation, its molecules do not remain fixed in unvarying positions,
but are constantly in a state of more or less extended motion, we may, when speaking of the arrangement of the molecules at any particular time, understand either the arrangement which would result
from the molecules being fixed in the actual position they occupy at the instant in question, or we may suppose such an arrangement that each molecule occupies its mean position. Now the effect of
heat always tends to loosen the connexion between the molecules, and so to increase their mean distances from one another. In order to be able to represent this mathematically, we will express the
degree in which the molecules of a body are separated from each other, by introducing a new magnitude, which we will call the disgregation of the body, and by help of which we can define the effect
of heat as simply tending to increase the disgregation. The way in which a definite measure of this magnitude can be arrived at will appear from the sequel.
In the case last mentioned, an alteration in the arrangement of the electricity takes place, an alteration which can be represented and taken into calculation in a way corresponding to the alteration
of the position of the molecules, and which, when it occurs, we will consider as always included in the general expression change of arrangement, or change of disgregation.
It is evident that each of the changes that have been named may also take place in the reverse sense, if the effect of the opposing forces is greater than that of the heat. We will assume as likewise
self-evident that, for the production of work, a corresponding quantity of heat must always be expended, and conversely, that, by the expenditure of work, an equivalent quantity of heat must be
gained.
3. If we now consider more closely the various cases which occur in relation to the forces which are operative in each of them, the case of the expansion of a permanent gas presents itself as
particularly simple. We may conclude from certain properties of the gases that the mutual attraction of their molecules at their mean distance is very small, and therefore that only a very slight
resistance is offered to the expansion of a gas, so that the resistance of the sides of the containing vessel must maintain equilibrium with almost the sole effect of the heat. Accordingly the
externally sensible pressure of a gas forms an approximate measure of the separative force of the heat contained in the gas; and hence, according to the foregoing law, this pressure must be nearly
proportional to the absolute temperature. The internal probability of the truth of this result is indeed so great, that many physicists since Gay-Lussac and Dalton have without hesitation presupposed
this proportionality, and have employed it for calculating the absolute temperature.
In the above-mentioned case of thermo-electric action, the force which exerts an action contrary to that of the heat is likewise simple and easily determined. For at the point of contact of two
heterogeneous substances, such a quantity of electricity is driven from the one to the other by the action of the heat, that the opposing force resulting from the electric tension suffices to hold
the force exerted by the heat in equilibrium. Now in a former memoir “On the Application of the Mechanical Theory of Heat to the Phenomena of Thermal Electricity” (Poggendorff’s Annalen, vol. xc. p.
513), I have shown that, in so far as changes in the arrangement of the molecules are not produced at the same time by the changes of temperature, the difference of tension produced by heat must be
proportional to the absolute temperature, as is required by the foregoing law.
In the other cases that are quoted, as well as in most others, the relations are less simple, because in them an essential part is played by the forces exerted by the molecules upon one another,
forces which, as yet, are quite unknown. It results, however, from the mere consideration of the external resistances which heat is capable of overcoming, that in general its force increases with the
temperature. If we wish, for instance, to prevent the expansion of a body by means of external pressure, we are obliged to employ a greater pressure the more the body is heated; hence we may
conclude, without having a knowledge of the interior forces, that the total amount of the resistances which can be overcome in expansion, increases with the temperature. We cannot, however, directly
ascertain whether it increases exactly in the proportion required by the foregoing law, without knowing the interior forces. On the other hand, if this law be regarded as proved on other grounds, we
may reverse the process, and employ it for the determination of the interior forces exerted by the molecules.
The forces exerted upon one another by the molecules are not of so simple a kind that each molecule can be replaced by a mere point; for many cases occur in which it can be easily seen that we have
not merely to consider the distances of the molecules, but also their relative positions. If we take, for example, the melting of ice, there is no doubt that interior forces, exerted by the molecules
upon each other, are overcome, and accordingly increase of disgregation takes place; nevertheless the centres of gravity of the molecules are on the average not so far removed from each other in the
liquid water as they were in the ice, for the water is the more dense of the two. Again, the peculiar behaviour of water in contracting when heated above 0° C., and only beginning to expand when its
temperature exceeds 4°, shows that likewise in liquid water, in the neighbourhood of its melting-point, increase of disgregation is not accompanied by increase of the mean distances of its molecules.
In the case of the interior forces, it would accordingly be difficult—even if we did not want to measure them, but only to represent them mathematically—to find a fitting expression for them which
would admit of a simple determination of the magnitude. This difficulty, however, disappears if we take into calculation, not the forces themselves, but the mechanical work which, in any change of
arrangement, is required to overcome them. The expressions for the quantities of work are simpler than those for the corresponding forces; for the quantities of work can be all expressed, without
further secondary statements, by the numbers which, having reference to the same unit, can be added together, or subtracted from one another, however various the forces may be to which they refer.
It is therefore convenient to alter the form of the above law by introducing, instead of the forces themselves, the work done in overcoming them. In this form it reads as follows:
In all cases in which the heat contained in a body does mechanical work by overcoming resistances, the magnitude of the resistances which it is capable of overcoming is proportional to the absolute
temperature.
4. The law does not speak of the work which the heat does, but of the work which it can do; and similarly, in the first form of the law, it is not of the resistances which the heat overcomes, but of
those which it can overcome, that mention is made. This distinction is necessary for the following reasons:
Since the exterior forces which act upon a body while it is undergoing a change of arrangement may vary very greatly, it may happen that the heat, while causing a change of arrangement, has not to
overcome the whole resistance which it would be possible for it to overcome. A well-known and often-quoted example of this is afforded by a gas which expands under such conditions that it has not to
overcome an opposing pressure equal to its own expansive force, as, for instance, when the space filled by the gas is made to communicate with another which is empty, or contains a gas of a lower
pressure. In order in such cases to determine the force of the heat, we must evidently not consider the resistance which actually is overcome, but that which can be overcome.
Also, in changes of arrangement of the opposite kind, that is, where the action of heat is overcome by the opposing forces, a similar distinction may require to be made, but in this case only as far
as this—that the total amount of the forces by which the action of the heat is overcome may be greater than the active force of the heat, but not smaller.
Cases in which these differences occur may be thus characterized. When a change of arrangement takes place so that the force and counterforce are equal, the change can likewise take place in the
reverse direction under the influence of the same force. But if it occurs so that the overcoming force is greater than that which is overcome, the change cannot take place in the opposite direction
under the influence of the same forces. We may say that the change has occurred in the first case in a reversible manner, and in the second case in an irreversible manner.
Strictly speaking, the overcoming force must always be more powerful than the force which it overcomes; but as the excess of force does not require to have any assignable value, we may think of it as
becoming continually smaller and smaller, so that its value may approach to naught as nearly as we please. Hence it may be seen that the case in which the changes take place reversibly is a limit
which in reality is never quite reached, but to which we can approach as nearly as we please. We may therefore, in theoretical discussions, still speak of this case as one which really exists; indeed,
as a limiting case it possesses special theoretical importance.
I will take this opportunity of mentioning another process in which this distinction is likewise to be observed. In order for one body to impart heat to another by conduction or radiation (in the
case of radiation, wherein mutual communication of heat takes place, it is to be understood that we speak here of a body which gives out more heat than it receives), the body which parts with heat
must be warmer than the body which takes up heat; and hence the passage of heat between two bodies of different temperature can take place in one direction only, and not in the contrary direction.
The only case in which the passage of heat can occur equally in both directions is when it takes place between bodies of equal temperature. Strictly speaking, however, the communication of heat from
one body to another of the same temperature is not possible; but since the difference of temperature may be as small as we please, the case in which it is equal to nothing, and the passage of heat
accordingly reversible, is a limiting case which may be regarded as theoretically possible.
5. We will now deduce the mathematical expression for the above law, treating in the first place the case in which the change of condition undergone by the body under consideration takes place
reversibly. The result at which we shall arrive for this case will easily admit of subsequent generalizations, so as to include also the cases in which a change occurs irreversibly.
Let the body be supposed to undergo an infinitely small change of condition, whereby the quantity of heat contained in it, and also the arrangement of its constituent particles, may be altered. Let
the quantity of heat contained in it be expressed by H, and the change of this quantity by dH. Further, let the work, both interior and exterior together, performed by the heat in the change of
arrangement be denoted by dL, a magnitude which may be either positive or negative according as the active force of the heat overcomes the forces acting in the contrary direction, or is overcome by
them. We obtain the heat expended to produce this quantity of work by multiplying the work by the thermal equivalent of a unit of work, which we may call A; hence it is A·dL.
The sum dH + A·dL is the quantity of heat which the body must receive from without, and must accordingly withdraw from another body during the change of condition. We have, however, already
represented by dQ the infinitely small quantity of heat imparted to another body by the one which is undergoing modification; hence we must represent in a corresponding manner, by −dQ, the heat which
it withdraws from another body. We thus obtain the equation:

−dQ = dH + A·dL.
Note: In my previous memoirs I have separated from one another the interior and the exterior work performed by the heat during the change of condition of the body. If the former be denoted by dI, and
the latter by dW, the above equation becomes:

−dQ = dH + A·dI + A·dW.
Since, however, the increase in the quantity of heat actually contained in a body, and the heat consumed by interior work during a change of condition, are magnitudes of which we commonly do not know
the individual values, but only the sum of those values, and which resemble each other in being fully determined as soon as we know the initial and final conditions of the body, without our requiring
to know how it has passed from the one to the other, I have thought it advisable to introduce a function which shall represent the sum of these two magnitudes, and which I have denoted by U.
and hence the foregoing equation becomes:

−dQ = dU + A·dW;

and if we suppose the last equation integrated for any finite alteration of condition, we have:

−Q = U − U1 + A·W,

where U1 denotes the initial value of the function U.
These are the equations which I have used in my memoirs published in 1850 and in 1854, partly in the particular form in which they are here given, with no other difference than that I there took the
positive and negative quantities of heat in the opposite sense to what I have done here, in order to attain greater correspondence with the equation (I) given in Art. 1.
(Skipping a paragraph of foot-notes on how W. Thomson and Kirchhoff have used and defined U)
In order now to be able to introduce the disgregation also into the formulae, we must first settle how we are to determine it as a mathematical quantity.
(Skipping about three paragraphs)
Accordingly, let Z be the disgregation of the body, and dZ an infinitely small change of it, and let dL be the corresponding infinitely small quantity of work, we can then put:
The mechanical work which can be done by heat during any change of the arrangement of a body is proportional to the absolute temperature at which this change occurs.
(skipping 4 pages)
9. I believe, indeed, that we must extend the application of this law, supposing it to be correct, still further, and especially to chemical combinations and decompositions.
The separation of chemically combined substances is likewise an increase of the disgregation, and the chemical combination of previously isolated substances is a diminution of their disgregation; and
consequently these processes may be brought under considerations of the same class as the formation or precipitation of vapour. That in this case also the effect of heat is to increase the
disgregation, results from many well-known phenomena, many compounds being decomposable by heat into their constituents—as, for example, mercuric oxide, and, at very high temperatures, even water. To
this it might perhaps be objected that, in other cases, the effect of increased temperature is to favour the union of two substances—that, for instance, hydrogen and oxygen do not combine at low
temperatures, but do so easily at higher temperatures. I believe, however, that the heat exerts here only a secondary influence, contributing to bring the atoms into such relative positions that
their inherent forces, by virtue of which they strive to unite, are able to come into operation. Heat itself can never, in my opinion, tend to produce combination, but only, and in every case,
decomposition.
The quantity of heat actually present in a body depends only on its temperature, and not on the arrangement of its constituent particles.
Scarce original printed copies, when available, sell for $500–1000 or more. The textbook, however, can be read as a
Google books copy
and recently became available: in 2008 in paperback and hardcover as a reprint (
by BiblioBazaar at Amazon
, 22 USD), as pictured adjacent, or
by Kessinger Publishing
, 27 USD.
Prior to this, only the first memoir by Clausius, of 1850, was available as a book-in-print, published by Dover
in 1960, entitled:
Reflections on the Motive Power of Fire by Sadi Carnot and Other Papers on the Second Law of Thermodynamics by E. Clapeyron and R. Clausius, Edited with an Introduction by E. Mendoza
(shown below), which is a 152-page collection containing the following three founding thermodynamic papers:
Plano, TX SAT Math Tutor
Find a Plano, TX SAT Math Tutor
...I can identify skill gaps and work to fill those to improve performance in algebra. I am currently teaching geometry. I love geometry because it requires logical reasoning.
10 Subjects: including SAT math, geometry, ASVAB, algebra 1
...Physics, Algebra, Calculus I and II, I could be a resource for you. Good luck with your study, and I will look forward to working with you! Sincerely, Vincent P.S.
24 Subjects: including SAT math, reading, chemistry, English
Hello, I have a PhD in Computational Mathematics from SMU and a Master's in Pure Mathematics from UNT. I tutor all Mathematics and Statistics courses, as well as professional exams such as the
GRE and GMAT, and test prep for the ACT, SAT, etc. I have taught Mathematics, Statistics and Computer courses at Texas A&M, Eastfield College, UNT & SMU.
23 Subjects: including SAT math, calculus, geometry, statistics
I am a math teacher and tutor with 14 years of experience. I have a bachelor’s degree from the University of Puerto Rico in Education with a major in Secondary School Mathematics and a master’s
degree in Educational Leadership. I possess a standard certificate in Mathematics (grades 4-8) and (grades 8-12) from the Texas Board of Educator Certification.
14 Subjects: including SAT math, Spanish, geometry, algebra 1
...As a high school student I was enrolled in a program called the IGCSE and A-level British system, somewhat equivalent to the AP classes here in the United States, and passed Biology, Chemistry,
and Math with all A's. I also did very well in the math section of the SATs and the English TOEFL. I can also speak, read and write Arabic.
10 Subjects: including SAT math, chemistry, physics, biology
Simple discrete math dice problem
Okay, the problem states I have to find which situation gives me a better probability of coming out with a sum of 8: throwing 2 dice or throwing 3 dice.
Since a dice has 6 sides the amount of sets for 2 dice is 6^2 or 36.
Now we do the same for three dice 6^3 or 216.
Now I need to find how many combos out of 36 and 216 equal 8. I drew out every combo for the two dice since it was only 36 and came out with 5 different combos equaling 8. So 5/36. My teacher told us
there is an easier way using the combo formula
C(n,r). She said since no set of 1's can equal 8 we only have 5 possible n's.
She showed us in class that to figure out for 2 dice we do:

C(5,1)
My question is why does R = 1? Would it not be 2 since it's two dice?
And for 3 dice she showed
C(6,1) + C(5,1) + C(4,1) + C(3,1) + C(2,1) + C(1,1)
I'm flat out not understanding the formula for the 3 dice. Can anyone make any sense of this for me? Thanks everyone. Forum rocks. I should be spending my time on these sites instead of bodybuilding
forums lol.
Re: Simple discrete math dice problem
It really would have helped if she'd provided her reasoning. As it is, her "work" is quite cryptic. My guess is that she's not showing a bunch of her reasoning!
For the two-dice case, I think that she's assuming that the first die has been thrown, so she knows the first value. To get a sum of eight, the first dice has to show more than 1 (because the highest
value for the second die is 6, and 1 + 6 doesn't equal 8). So it'll be 2 through 6. For the second die, the value has to be 8 - (whatever the first value was) to equal 8, so she's needing a 2 through
a 6. The "C(5, 1)" may be her way of saying that she's counting the number of ways to get one of those five values. Maybe...?
Then maybe for the second situation, she's saying that she's thrown the first die, and gotten something between 1 and 6. (Since she's throwing three dice, she can use 1's this time: 1 + 1 + 6 = 8.)
Suppose the first die rolls a 1. Then she's looking at the number of ways of getting 7 with the remaining two dice. Say she rolls an "x" on the second die; then she's needing 7 - x for the third die.
Since 1 ≤ x ≤ 6 for the second die, the target value for the third die is 6, 5, 4, 3, 2, or 1. There are six ways to pick one of these values.
Now suppose the first die rolls a 2. Then she's looking at the number of ways of getting 6 with the remaining two dice. Using "x" again for the value of the roll of the second die, she'll be needing
6 - x for the third die. The target value will be 5, 4, 3, 2, or 1. There are five ways to pick one of these values.
And so forth.
But that's just my guess. | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=15&t=3474&p=9183","timestamp":"2014-04-20T11:17:25Z","content_type":null,"content_length":"21410","record_id":"<urn:uuid:f8016155-e832-4a78-9aec-28ae5b1e28ea>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
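A quick brute-force count in Python (not from the original thread) confirms both numbers and this reading of the teacher's formula:

from itertools import product

two   = sum(1 for r in product(range(1, 7), repeat=2) if sum(r) == 8)
three = sum(1 for r in product(range(1, 7), repeat=3) if sum(r) == 8)
print(two, "/", 6**2)     # 5 / 36   (about 0.139)
print(three, "/", 6**3)   # 21 / 216 (about 0.097)

# 21 = C(6,1)+C(5,1)+C(4,1)+C(3,1)+C(2,1)+C(1,1): fix the first die at
# 1..6, count the admissible second-die values, and the third die is then
# forced. So throwing two dice gives the better chance of a sum of 8.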
[CentOS] Conversion of text in shell
roland hellström arwinkahalarak at hotmail.com
Fri Oct 12 23:09:32 UTC 2007
OK! I finally figured out the solution for all you people out there eager to hear it!!!
it was in fact very, very similar to the last line I sent... this is it
sed 's/\([^\.]*\).\([^,]*\),\([^\.]*\).\([^e]*\)e\(.*\)/\1,\2 \& $\3,\4 \\cdot 10^{\5}$\\\\/'
omg I feel so h4xx0r figuring that out myself lol
Thx for the help all :)
----------------------------------------
> Subject: RE: [CentOS] Conversion of text in shell
> Date: Fri, 12 Oct 2007 17:57:54 -0400
> From: rwalker at medallion.com
> To: centos at centos.org
>
> roland hellström wrote:
>> Indeed this is a chalmers student. The purpose of asking
>> though is to learn it, because I find it very hard to learn
>> without seeing an example of it.
>> Thx for the reply it seemed to work well :) Although it would
>> be interesting to see if this could be done with the sed
>> command somehow?
>> Thx in advance
>
> Then let me help you think it out:
>
> Given a text file with this data:
> 1.1,3.19e-4
> 1.2,3.05e-3
> 10.5,9.14e-8 <- I'm guessing this was suppose to be scientific not
>
> Convert to a text file:
> 1,1 & $3,19 \cdot 10^{-4}$\\
> 1,2 & $3,05 \cdot 10^{-3}$\\
> 10,5 & $9,14 \cdot 10^{-8}$\\
>
> Look for the pattern match ups:
> 1.1,3.19e-4
> 1,1 & $3,19 \cdot 10^{-4}$\\
>
> 1.2,3.05e-3
> 1,2 & $3,05 \cdot 10^{-3}$\\
>
> 10.5,9.14e-8
> 10,5 & $9,14 \cdot 10^{-8}$\\
>
> It looks like:
> 1) decimals converted to commas
> 2) commas converted to " & "
> 3) "e" converted to " \cdot 10^{<>}$\\"
> 4) <> substituted for whatever comes after the "e"
>
> Now if you convert decimals to commas BEFORE you convert
> commas to " & " then you will have garbage, so you need
> to convert commas to " & " first, then decimals to
> commas, then apply the conversion w/ substitution.
>
> Substitution is handled with the \(\) operator like such:
>
> echo "this is a test" | sed -e 's/\(.*\) is a \(.*\)/\2 is NOT a \1/'
>
> Will output: "test is NOT a this"
>
> Can be done in 3 steps, sed can take multiple operations
> if you separate them with ;
>
> PS Fix your mail client to post in plain text and put
> your reply text at the bottom of the email instead of
> the top.
Thermodynamics: Internal Energy and Enthalpy
For a monatomic ideal gas, U = (3/2)PV. Hence H = U + PV = (5/2)PV = (5/2)nRT,
which shows that H for an ideal gas is a function of T alone, i.e., H = H(T).
Of course it is also a function of n, since it is an extensive quantity. However, this dependence is easy to get rid of by defining h = H/n, which is an intensive quantity.
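A tiny numerical illustration of the same point (an added sketch, not from the post): states with the same n and T but very different P and V all give the same H:

R = 8.314                     # gas constant, J/(mol K)
n, T = 2.0, 350.0
for V in (0.01, 0.1, 1.0):    # three very different volumes
    P = n * R * T / V
    U = 1.5 * P * V           # U = (3/2) PV for a monatomic ideal gas
    print(U + P * V)          # H = U + PV: identical every time, (5/2) n R T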
Summary: SIMILAR DISSECTION OF SETS
Abstract. In 1994, Martin Gardner stated a set of questions concerning the dissection of
a square or an equilateral triangle in three similar parts. Meanwhile, Gardner's questions
have been generalized and some of them are already solved. In the present paper, we solve
more of his questions and treat them in a much more general context.
Let D ⊆ R^d be a given set and let f1, . . . , fk be injective continuous mappings. Does
there exist a set X such that D = X ∪ f1(X) ∪ . . . ∪ fk(X) is satisfied with a non-overlapping
union? We will prove that such a set X exists for certain choices of D and {f1, . . . , fk}.
The solutions X will often turn out to be attractors of iterated function systems with
condensation in the sense of Barnsley.
Coming back to Gardner's setting, we use our theory to prove that an equilateral
triangle can be dissected in three similar copies whose areas have ratio 1 : 1 : a for
a (3 +
1. Introduction
In the present paper, we deal with the dissection of a given set D into finitely many | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/104/1057186.html","timestamp":"2014-04-16T23:13:18Z","content_type":null,"content_length":"8254","record_id":"<urn:uuid:80fcf17a-748c-4b34-973d-4c18930fa480>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
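A toy one-dimensional illustration of the dissection equation (an added sketch, not from the paper): take D = [0, 1] and the single map f1(x) = x/2, and look for X with D = X ∪ f1(X), non-overlapping. Iterating X ← D \ f1(X) on a fine grid converges to a self-similar solution:

import numpy as np

N = 1 << 20                              # a power of two keeps the arithmetic exact
x = (np.arange(N) + 0.5) / N             # grid midpoints in (0, 1)

def in_image(X):
    # x lies in f1(X), with f1(x) = x/2, iff 2x lies in X (and 2x <= 1)
    idx = np.floor(2 * x * N).astype(int)
    return (idx < N) & X[np.minimum(idx, N - 1)]

X = np.ones(N, dtype=bool)               # start the iteration from X = D
for _ in range(60):
    X_new = ~in_image(X)                 # X <- D \ f1(X)
    if (X_new == X).all():
        break
    X = X_new

print(X.mean())                          # ~ 2/3, the measure of X
print((X ^ in_image(X)).all())           # True: X and f1(X) tile D with no overlap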
Dynamic programming in multiplicative lattices
, 1998
"... We consider multi-criteria sequential decision making problems where the vector-valued evaluations are compared by a given, fixed total ordering. Conditions for the optimality of stationary
policies and the Bellman optimality equation are given. The analysis requires special care as the topology int ..."
Cited by 19 (0 self)
We consider multi-criteria sequential decision making problems where the vector-valued evaluations are compared by a given, fixed total ordering. Conditions for the optimality of stationary policies
and the Bellman optimality equation are given. The analysis requires special care as the topology introduced by pointwise convergence and the order-topology introduced by the preference order are in
general incompatible. Reinforcement learning algorithms are proposed and analyzed. Preliminary computer experiments confirm the validity of the derived algorithms. It is observed that in the
medium-term multicriteria RL often converges to better solutions (measured by the first criterion) than their single-criterion counterparts. These types of multicriteria problems are most useful when
there are several optimal solutions to a problem and one wants to choose the one among these which is optimal according to another fixed criterion. Example applications include alternating games,
when in addition...
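To make the fixed-total-ordering idea in the abstract above concrete (an added illustration, not from the paper): with the lexicographic order on value vectors, criterion 1 dominates and criterion 2 only breaks ties, so greedy selection is just a tuple comparison:

actions = {               # made-up (criterion-1, criterion-2) value estimates
    "a": (10.0, 3.0),
    "b": (10.0, 7.0),
    "c": ( 9.5, 9.9),
}
best = max(actions, key=lambda a: actions[a])  # tuples compare lexicographically
print(best)   # "b": ties with "a" on criterion 1, wins on criterion 2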
, 1994
"... The solution to an instance of the standard Shortest Path problem is a single shortest route in a directed graph. Suppose, however, that each arc has both a distance and a cost, and that one
would like to find a route that is both short and inexpensive. In general, no single route will be both short ..."
Cited by 8 (0 self)
The solution to an instance of the standard Shortest Path problem is a single shortest route in a directed graph. Suppose, however, that each arc has both a distance and a cost, and that one would
like to find a route that is both short and inexpensive. In general, no single route will be both shortest and cheapest; rather, the solution to an instance of this multi-criteria problem will be a
set of efficient or Pareto optimal routes. The (distance, cost) pairs associated with the efficient routes define an efficient frontier or tradeoff curve. An efficient set for a multi-criteria
problem can be exponentially large, even when the underlying single-criterion problem is in P. This work therefore considers approximate solutions to multi-criteria discrete optimization problems and
investigates when they can be found quickly. This requires generalizing the notion of a fully polynomial time approximation scheme to multi-criteria problems. In this paper, necessary and sufficient
conditions are developed for the existence of such a fast approximation scheme for a problem. Although the focus is multi-criteria problems, the conditions are of interest even in the single
criterion case. In addition, an appropriate form of problem reduction is introduced to facilitate the application of these conditions to a variety of problems. A companion paper uses the results of
this paper to study the existence of fast approximation schemes for several interesting network flow, knapsack, and
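A compact sketch of the bicriteria shortest-path computation described in the abstract above (my own toy instance and code, not the paper's): keep, per node, all Pareto-nondominated (distance, cost) labels:

from heapq import heappush, heappop

graph = {                      # node -> [(neighbor, distance, cost)]
    "s": [("a", 1, 9), ("b", 4, 2)],
    "a": [("t", 1, 9)],
    "b": [("t", 4, 2)],
    "t": [],
}

def pareto_labels(graph, src, dst):
    frontier = [(0, 0, src)]
    labels = {v: [] for v in graph}
    while frontier:
        d, c, v = heappop(frontier)
        if any(d0 <= d and c0 <= c for d0, c0 in labels[v]):
            continue                       # dominated by an existing label
        labels[v] = [(d0, c0) for d0, c0 in labels[v]
                     if not (d <= d0 and c <= c0)] + [(d, c)]
        for w, dd, cc in graph[v]:
            heappush(frontier, (d + dd, c + cc, w))
    return sorted(labels[dst])

print(pareto_labels(graph, "s", "t"))   # [(2, 18), (8, 4)]: the efficient frontier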
, 1999
"... : An efficient numerical solution scheme entitled adaptive differential dynamic programming is developed in this paper for multiobjective optimal control problems with a general separable
structure. For a multiobjective control problem with a general separable structure, the "optimal" weighting coef ..."
Cited by 1 (0 self)
: An efficient numerical solution scheme entitled adaptive differential dynamic programming is developed in this paper for multiobjective optimal control problems with a general separable structure.
For a multiobjective control problem with a general separable structure, the "optimal" weighting coefficients for various performance indices are time-varying as the system evolves along any
noninferior trajectory. Recognizing this prominent feature in multiobjective control, the proposed adaptive differential dynamic programming methodology combines a search process to identify an
optimal time-varying weighting sequence with the solution concept in the conventional differential dynamic programming. Convergence of the proposed adaptive differential dynamic programming
methodology is addressed. Key Words: Multiobjective optimal control, dynamic programming, multiobjective dynamic programming, differential dynamic programming, adaptive differential dynamic
programming. Department of Mathem...
"... this article we take another starting point and that is to consider perfect dynamics. We say that a recurrent ANN admits perfect dynamics if the dynamical system given by the update operator of
the network has an attractor whose basin of attraction covers the set of all possible initial solution can ..."
this article we take another starting point and that is to consider perfect dynamics. We say that a recurrent ANN admits perfect dynamics if the dynamical system given by the update operator of the
network has an attractor whose basin of attraction covers the set of all possible initial solution candidates. One may wonder whether neural networks that admit perfect dynamics can be interesting in
applications. In this article we show that there exists a family of such networks (or dynamics). We introduce
"... \Ve cOllf:iider multi-criteria f:iequent,ial decision making problems where the vcctor-"valucd evaluations arc compared by a given, fixed total ordering. Condit.ions for the opt.irnalit�y of
statiOIl<-l,r} ' p()lich�s;-weI the Bellman optimalit,y equation are given for a. special, but. important cla ..."
Add to MetaCart
We consider multi-criteria sequential decision making problems where the vector-valued evaluations are compared by a given, fixed total ordering. Conditions for the optimality of stationary policies and the Bellman optimality equation are given for a special, but important class of problems when the evaluation of policies can be computed for the criteria independently of each other. The analysis requires special care as the topology introduced by pointwise convergence and the order-topology introduced by the preference order are in general incompatible. Reinforcement learning algorithms are proposed and analyzed. Preliminary computer experiments confirm the validity of the derived algorithms. These types of multi-criteria problems are most useful when there are several optimal solutions to a problem and one wants to choose the one among these which is optimal according to another fixed criterion. Possible applications in robotics and repeated games are outlined.
, 2008
"... I would not have made this far in the journey that I embarked upon in the spring of 2003 without the help and support of my supervising professors Drs. Chen and Corley. I want to thank you for
not only providing constant inputs on my research but also coming through every time I needed and sought yo ..."
Add to MetaCart
I would not have made it this far in the journey that I embarked upon in the spring of 2003 without the help and support of my supervising professors Drs. Chen and Corley. I want to thank you for not
only providing constant inputs on my research but also coming through every time I needed and sought your help. I could not have asked for a better learning environment in terms of having an access
to two people with different areas of expertise, perspectives, and supervising styles. I want to extend my thanks to Drs. Ferreira and Han for their inputs as my dissertation committee members. I
would like to convey my appreciation to Drs. Jiang (Georgia Department of Natural Resources) and Beck (University of Georgia) for their quick responses to our frequent queries regarding wastewater
treatment system. I would like to acknowledge IMSE faculty members for their positive influence. I would like to thank Drs. Imrhan and Rosenberger for their support and encouragement. I extend my
gratitude to Dr. Liles for his support. I would also like to thank the IMSE staff for their constant help and support. At UTA, I met many wonderful people and had the privilege of knowing and
befriending some of them. In particular, I would like to thank fellow COSMOSians for making this stay enjoyable and memorable. I am thankful to COSMOS graduates Durai,
"... Abstract: Dynamic programing is one of the major problemsolving methodologies in a number of disciplines such as operations research and computer science. It is also a very important and
powerful tool of thought. But not all is well on the dynamic programming front. There is definitely lack of comme ..."
Add to MetaCart
Abstract: Dynamic programming is one of the major problem-solving methodologies in a number of disciplines such as operations research and computer science. It is also a very important and powerful tool of thought. But not all is well on the dynamic programming front. There is definitely a lack of commercial software support and the situation in the classroom is not as good as it should be. In this paper we take a bird's-eye view of dynamic programming so as to identify ways to make it more accessible to students, academics and practitioners alike.
%0 Report %D 2002 %T A Constraint-Stabilized Time-Stepping Approach for Rigid Multibody Dynamics with Joints, Contact and Friction %A Mihai Anitescu %A G. D. Hart %X
We present a method for achieving constraint stabilization for a linear-complementarity-based time-stepping scheme for multi-rigid-body dynamics with joints, contact and friction. The method requires
the solution of only one linear complementarity problem per step. We show that under certain assumptions, which include the limited differentiability of the mappings governing the noninterpenetration and joint constraints; the pointed friction cone assumption; and at most linear growth of the external forces, the velocity is bounded for a sufficiently small time step over a fixed time interval, and the geometrical constraint infeasibility at step (l) is bounded above by a constant multiple of the square of the time step and the square of the norm of the current value of the velocity. If, in addition, the velocity is uniformly bounded with respect to the time interval of the simulation, then the geometrical constraint infeasibility is bounded by the same bound irrespective of the time interval of the simulation.
%B Int. J. Numer. Meth. Engng %P 1335-2371 %8 10/2002 %G eng %1 http://www.mcs.anl.gov/papers/P1002.pdf | {"url":"http://www.anl.gov/publications/export/tagged/3968","timestamp":"2014-04-19T06:01:20Z","content_type":null,"content_length":"1988","record_id":"<urn:uuid:adbae557-b5a0-4ddd-95e3-62e324ef5f91>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
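For readers unfamiliar with the subproblem named in the abstract: a linear complementarity problem (LCP) asks for z with w = Mz + q, z >= 0, w >= 0, and z[i] w[i] = 0 for all i. Below is a minimal projected Gauss-Seidel sketch in Python, illustrative only and not the paper's scheme; the helper name pgs_lcp is hypothetical.

```python
import numpy as np

def pgs_lcp(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z[i] * w[i] = 0 for all i."""
    z = np.zeros_like(q, dtype=float)
    for _ in range(iters):
        for i in range(len(q)):
            # Residual of row i with z[i]'s own contribution removed.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = pgs_lcp(M, q)
print(z, M @ z + q)   # z = [0.5, 0], w = [0, 1.5]: complementary and nonnegative
```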
Sheaf embedding preserving initial algebras?
Any small pretopos $C$ can be embedded into a Grothendieck topos by a fully faithful functor that preserves all the pretopos structure (limits, images, finite unions of subobjects, disjoint
coproducts, and quotients of equivalence relations). Namely, we may consider the topos of sheaves for the coherent topology on $C$, with the sheafified Yoneda embedding. If $C$ is (locally) cartesian
closed, then that structure is also preserved by this embedding.
My question is, what if $C$ also has a natural numbers object, or more general initial algebras for special endofunctors (e.g. "W-types")? Can we embed it into a topos of sheaves in a way that
preserves these initial algebras?
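For reference, the simplest case in question: a natural numbers object is exactly an initial algebra for the endofunctor $X \mapsto 1 + X$, i.e. an object $N$ with maps $z\colon 1 \to N$ and $s\colon N \to N$ such that for any $a\colon 1 \to A$ and $t\colon A \to A$ there is a unique $f\colon N \to A$ with $f \circ z = a$ and $f \circ s = t \circ f$.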
ct.category-theory topos-theory
Differential calculus - first order partial derivatives
May 28th 2010, 03:36 AM #1
Jun 2006
NSW Australia
Differential calculus - first order partial derivatives
I need help with the following question:
Find the slopes of the curves of intersection of the surface z = f(x,y) with the planes perpendicular to the x axis and y axis respectively at a given point.
z = sin(4x+y) at (0, pi/2)
I did fx(x,y) = 4 cos(4x+y) and at this point I get 0
To find the perpendicular planes I think you do -1/0 but that is not defined.
is this the correct method?
kind regards
You need to review the notion of a directional derivative and, in particular, the partial derivative. For the plane perpendicular to the $y$-axis, the slope of the curve will be $f_x(0,\pi/2)=0$.
That's it. I have no idea where you're getting -1/0 from.
This can be done without partial derivatives:
The "plane perpendicular to the x-axis" at $(0, \pi/2)$is the y-z plane, x= 0. Set x= 0 in the formula: z= sin(y) and the derivative of that is cos(y) which is $cos(\pi/2)= 0$ at $(0, \pi/2)$.
The "plane perpendicular to the y-axis" at $(0, \pi/2)$ is the plane $y= \pi/2$: $z= sin(4x+\pi/2)$ and the derivative of that is $4 cos(4x+ \pi/2)$ which is $4 sin(\pi/2)= 4$ at $(0, \pi/2)$.
Last edited by HallsofIvy; May 31st 2010 at 04:00 AM. Reason: Thanks to ojones
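A quick symbolic check of both slopes (a sketch using SymPy, assuming it is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(4*x + y)

fx = sp.diff(f, x)   # slope of the intersection curve in the plane y = pi/2
fy = sp.diff(f, y)   # slope of the intersection curve in the plane x = 0

point = {x: 0, y: sp.pi/2}
print(fx, '->', fx.subs(point))   # 4*cos(4*x + y) -> 0
print(fy, '->', fy.subs(point))   # cos(4*x + y) -> 0
```

Both slopes come out to 0 at (0, pi/2), agreeing with the hand computation above.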
HallsofIvy: If your goal is to get through your course on multivariable calculus without ever learning what a partial derivative is, you're not going to make it. By the way, I think your second answer is incorrect (you didn't differentiate).
One more thing, the derivatives you're doing ARE partial derivatives.
thanks halls of ivy and ojones! but ojones- there really isn't any need to get so worked up over hall of ivy's post- any help is good and we all make mistakes
polynomial division
December 19th 2009, 01:54 AM #1
Aug 2009
polynomial division
Please help me solve the following problem:
If a polynomial p(x) of degree more than 2 is divided by (x-2), the remainder is one (1), and if divided by (x-3), the remainder is three (3). What will be the remainder if it is divided by (x-2)(x-3)?
Write $p(x) = (x-2)(x-3)q(x) + Ax + B$, where $q(x)$ here is the quotient and $Ax+B$ the remainder.
$p(2)=1$, $p(3)=3$
$p(2)=2A+B=1$ --- (1)
$p(3)=3A+B=3$ --- (2)
Solve the simultaneous equations.
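As a check (a sketch with SymPy, assuming it is available), solving the two equations gives the remainder explicitly:

```python
import sympy as sp

A, B, x = sp.symbols('A B x')

# p(x) = (x-2)(x-3) q(x) + A x + B, so p(2) = 2A + B and p(3) = 3A + B.
sol = sp.solve([sp.Eq(2*A + B, 1), sp.Eq(3*A + B, 3)], [A, B])
print(sol)                # {A: 2, B: -3}
print(sol[A]*x + sol[B])  # remainder: 2*x - 3
```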
polynomial division
Hi Mr Mathaddict,
Thank you very much for your help.
The Notation of Differentiation
Suggested Prerequisites: The definition of the derivative
Often the most confusing thing for a student introduced to differentiation is the notation associated with it. Here an attempt will be made to introduce as many types of notation as possible.
The simplest notation just spells out the operation: the derivative of a function with respect to a variable. When we write the definition of the derivative as

    lim_{Δx -> 0} [f(x + Δx) - f(x)] / Δx

we mean the derivative of the function f(x) with respect to the variable x.
One common type of notation is prime notation. The function f'(x), which would be read ``f-prime of x'', means the derivative of f(x) with respect to x. If we say y = f(x), then y' (read ``y-prime'') = f'(x). This is even sometimes taken as far as to write things such as, for y = x^4 + 3x (for example), y' = (x^4 + 3x)'.
Higher order derivatives in prime notation are represented by increasing the number of primes. For example, the second derivative of y with respect to x would be written as

    y'' = f''(x)
Beyond the second or third derivative, all those primes get messy, so often the order of the derivative is instead written as a superscript in parentheses, so that the ninth derivative of f(x) with respect to x is written as f^(9)(x) or f^(ix)(x).
Another type of notation is operator notation. The operator D[x] is applied to a function in order to perform differentiation. Then, the derivative of f(x) = y with respect to x can be written as D[x]y (read ``D -- sub -- x -- of -- y'') or as D[x]f(x) (read ``D -- sub -- x -- of -- f(x)'').

Higher order derivatives are written with exponents on D[x], so that, for example, the third derivative of y = (x^2 + sin(x)) with respect to x would be written as

    D[x]^3 (x^2 + sin(x))
A third notation is due to Leibnitz and is accordingly called Leibnitz notation. With this notation, if y = f(x), then the derivative of y with respect to x can be written as

    dy/dx

(this is read as ``dy -- dx'', but not ``dy minus dx'', or sometimes as ``dy over dx''). Since y = f(x), we can also write

    dy/dx = df/dx = d(f(x))/dx

This notation suggests that perhaps derivatives can be treated like fractions, which is true in limited ways in some circumstances (for example, with the chain rule). This is also called differential notation, where dy and dx are differentials. This notation becomes very useful when dealing with differential equations.
The derivative can also be written as

    (d/dx) f(x)

which resembles the above operator notation, with d/dx as the operator. Higher order derivatives in Leibnitz notation are written

    (d/dx)^n y   or   d^n y / dx^n

The exponents may seem to be in strange places in the second form, but it makes sense if you look at the first form.
So, those are the most commonly used notations for differentiation. It's possible that there exist other, obscure notations used by some, but these obscure forms won't be included here. It's helpful to be familiar with the different notations.
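The notations above all describe the same operation; a short computational sketch (Python with SymPy, assuming it is available) makes the derivative orders explicit:

```python
import sympy as sp

x = sp.symbols('x')
y = x**4 + 3*x

print(sp.diff(y, x))                     # y'  : 4*x**3 + 3
print(sp.diff(y, x, 2))                  # y'' : 12*x**2
print(sp.Derivative(y, (x, 9)).doit())   # ninth derivative of a quartic: 0
```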
watko@mit.edu Last updated August 24, 1998
Plus Magazine
March 2006
A collector's piece: solution
In last issue's outer space we worked out how many purchases you would expect to have to make in order to obtain every one of a set of N distinct cards. The puzzle was to show that the standard
deviation of this result is close to 1.3N for large N. To work out the expectation, we considered N random variables X[0], X[1], etc, up to X[N-1]. Each random variable X[i] counts how many purchases
you have to make to get a new card if you already have i distinct cards. The overall expectation is then just the sum of the individual expectations of the N variables.
You can use the same technique to work out the standard deviation (which is of course the square root of the variance): since the N random variables are independent of each other, the overall
variance is simply the sum of the individual values.
For each individual variable X[i], the probability that you have to make j purchases to get a new card is

    P(X[i] = j) = (i/N)^(j-1) × (N-i)/N.

So each individual X[i] has the well-known geometric probability distribution with parameter p = (N-i)/N. This distribution has variance

    (1-p)/p^2 = iN/(N-i)^2.

Now if X is the sum of the N random variables X[0], X[1], ... , X[N-1], then the variance of X is

    Var(X) = Σ_{i=0}^{N-1} iN/(N-i)^2.

A little thought shows that this sum can be expressed as (substituting k = N-i)

    Σ_{k=1}^{N} (N-k)N/k^2,

which is equal to

    N^2 Σ_{k=1}^{N} 1/k^2  -  N Σ_{k=1}^{N} 1/k.

Dividing through by N^2 gives

    Var(X)/N^2 = Σ_{k=1}^{N} 1/k^2  -  (1/N) Σ_{k=1}^{N} 1/k.

As N gets large, the first term tends to π^2/6 while the second term tends to zero, so the standard deviation, which is the square root of the variance, is close to

    √(π^2/6) N ≈ 1.3 N

— QED.
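A quick Monte Carlo check of this result (a Python sketch, not part of the original article):

```python
import random
import statistics

def purchases_needed(N):
    """Buy uniformly random cards until all N distinct ones are collected."""
    seen, count = set(), 0
    while len(seen) < N:
        seen.add(random.randrange(N))
        count += 1
    return count

N = 100
samples = [purchases_needed(N) for _ in range(20000)]
print(statistics.stdev(samples) / N)   # close to pi/sqrt(6), about 1.28
```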
Re: IEEE arithmetic handling
Newsgroups: comp.compilers
From: tmb@arolla.idiap.ch (Thomas M. Breuel)
Organization: IDIAP
Date: Mon, 16 Nov 1992 01:19:25 GMT
References: 92-11-041
Keywords: arithmetic, design
jim@meiko.co.uk (James Cownie) writes:
Another area where IEEE seems never to be implemented correctly by
compilers is in the handling of Not a Numbers (NaNs). [...]
(.NOT. (X .LT. 2.0)) does NOT imply (X .GE. 2.0)
[...] Similarly (and I've never seen this handled right in an optimising
IF (X .ne. X) THEN
print *,'X is a NaN'
ELSE
print *,'X is a number'
ENDIF
should generate code which has a run time test.
You are making the assumption that the usual language primitives for FP
("=", "<", ".ne.", etc.) should map directly on IEEE operations. That is
certainly not mandated by most current language standards, and I have
serious doubts that it should be mandated.
An alternative approach, and one which I prefer, is that it is an error to
use the usual language primitives for arithmetic with NaN's (as usual, if
you compile for safety, this error should be detected at runtime; if you
compile for speed, you simply get undefined results). You should have to
use special IEEE primitives ("is_nan(x)", "ieee_less(x,y)") to get at the
IEEE meanings when available.
Why do I prefer this? IEEE operations are implementation specific and
unportable, in the sense that not all implementations of a programming
language support them. When you rely on implementation specific and
unportable features, you should have to express that reliance explicitly
so that when I have to port your code, I can figure out what I have to
change.
Note that even if every computer in the universe supported IEEE floating
point, you would still want to make a clear distinction between the usual
numerical operations and IEEE-specific behavior. The reason is that you
might want to use your numerical code with software implementations of
other kinds of floating point numbers, implementations that may not be
able to support IEEE features.
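The comparison behaviour under discussion is easy to demonstrate. Here is a sketch in Python rather than the Fortran of the original post, since the IEEE semantics carry over:

```python
x = float("nan")

print(x != x)          # True: a NaN compares unequal even to itself
print(not (x < 2.0))   # True, and yet ...
print(x >= 2.0)        # ... False: NOT(x < 2.0) does not imply x >= 2.0

import math
print(math.isnan(x))   # True: the explicit primitive the author argues for
```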
Time analysis of lazy versus eager evaluation
, 2001
"... Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the the first program class solve fewer problems than the second? The answer is
no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In pa ..."
Cited by 24 (1 self)
Add to MetaCart
Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the first program class solve fewer problems than the second? The answer is no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In particular, higher-order values may be first-order simulated by use of the list constructor ‘cons’ to build function closures. This paper uses complexity theory to prove some expressivity results about small programming languages that are less than Turing complete. Complexity classes of decision problems are used to characterize the expressive power of functional programming language features. An example: second-order programs are more powerful than first-order, since a function f of type [Bool] -> Bool is computable by a cons-free first-order functional program if and only if f is in PTIME, whereas f is computable by a cons-free second-order program if and only if f is in EXPTIME. Exact characterizations are given for those problems of type [Bool] -> Bool solvable by programs with several combinations of operations on data: presence or absence of constructors; the order of data values: 0, 1, or higher; and program control structures: general recursion, tail recursion, primitive recursion.
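The 'cons'-based simulation of higher-order values mentioned in the abstract is essentially defunctionalization. A toy sketch (Python, illustrative only; the helper name apply_closure is hypothetical):

```python
# Represent the higher-order value "add k" as a first-order pair ("ADD", k)
# and interpret it with an explicit apply function (defunctionalization).
def apply_closure(closure, arg):
    tag, env = closure
    if tag == "ADD":
        return env + arg
    raise ValueError(tag)

inc = ("ADD", 1)               # stands in for: lambda x: x + 1
print(apply_closure(inc, 41))  # 42
```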
Curriculum Tie:
• Mathematics - 4th Grade, Standard 4, Objective 1

Summary:
By the end of this activity students should be able to identify three types of angles, know that angles are measured in degrees, and be able to measure angles using protractors or angle rulers.

Main Curriculum Tie:
Mathematics - 4th Grade, Standard 3, Objective 1
Identify and describe attributes of two-dimensional geometric shapes.
• 2 balls of yarn
• A-Z cards
• 12 angle cards
• Rulers
• Overhead projector
• Angle rulers
• Protractors
• Pattern blocks
• 360-degree Circle
• Whiteboards
• Dry erase markers
• 4” Angle manipulative
• Large angle manipulative
• Angle Assessment
• Crayons
• White art paper
Additional Resources
• Find-the-Angle Pro Ruler: Item #FA-779 Lakeshore Elementary 2007-08
1-800-778-4456 http://www.lakeshorelearning.com
• AngLegs Item #DG205057TS Summit Learning 1-800-777-8817 summitlearning.com
• Basic Geometry Blackboard Topper. This is a chart to display in your room for a
quick review of line concepts. (It includes lines, angles, polygons, and solid
shapes) Summit Learning 1-800-777-8817 or online at www.summitlearning.com. Item
Number DG20368ITS
□ 360_circle.pdf
□ angle_assessment.pdf
Web Sites
□ Math Open Reference
This website features explanations and examples of each type of line, plus an
interactive feature which allows students to manipulate lines to make lines,
line segments, perpendicular, parallel, and intersecting lines.
□ Ambleweb
This is a website published by an elementary school. It has many interactive
activities dealing with geometry. Try the one on measuring angles.
Background For Teachers:
Prior knowledge needed to complete this activity: Be able to identify parallel,
intersecting, and perpendicular lines. By the end of this activity students should be
able to identify:
Right angle: A 90-degree angle
Acute angle: An angle that is less than 90 degrees
Obtuse angle: An angle that is greater than 90 degrees but less than 180 degrees
Know that angles are measured in degrees and develop benchmark angles (e.g. 45
degrees, 60 degrees, 120 degrees) and be able to measure angles using protractors
or angle rulers.
Intended Learning Outcomes:
5. Connect mathematical ideas within mathematics, to other disciplines, and to
everyday experiences.
6. Represent mathematical ideas in a variety of ways.
Instructional Procedures:
Invitation to Learn
Divide the class into two groups. Have them stand an arm's length apart in a circle. Give each
group a ball of yarn. Instruct them to pass the yarn to make a web. They may not pass
the yarn to the person next to them; encourage them to pass across the circle as much
as possible. Each child needs to hold onto the yarn and not let go. When they are all
holding onto the yarn have them carefully lay their web down on the ground,
stretching it slightly so the yarn is in straight lines.
Review parallel, intersecting, and perpendicular lines by finding them within the
web. Have students identify the places where the lines intersect and mark them with
points. Explain that when two lines meet together at one point we call that the
VERTEX and that the lines, which are called rays, extending from the vertex form an
ANGLE. Now look at the web to see if you can identify angles. Review how lines are
named by points. Explain that angles are named using three points, with the vertex
point always in the middle (ABC) and that we use this symbol < for angle. (
Instructional Procedure
1. Classifying Angles (Right, acute, obtuse)
Before the lesson prepare 12 angle cards. Use cardstock and draw one angle on
each card–make 4 right angles, 4 acute angles, and 4 obtuse angles. Label the
points and write the angle name (example: <ABC). Place the angle cards on the board. Ask
the class to carefully examine them and see if they can classify them into three
groups. Have students come to board and move the angle cards into three groups.
Continue working until students have correctly grouped them into right, acute,
and obtuse angles. Write the name of each type of angle above the cards. Have
class practice reading the names and identify the characteristics of each.
2. Identifying Angles
Put students into small groups or partners. Give each group a set of pattern
Tell them they need to look at each type of pattern block and identify the types
of angles on each. Give each student a piece of art paper. Have them divide it
into three sections labeled: Right Angle, Obtuse Angle, and Acute Angle. Have
them trace the angles of the pattern blocks into the correct section.
3. Identifying Benchmark Angles using fraction circles
Give each student a copy of the 360-degree Circle worksheet, which has been
copied on cardstock.
Discuss how a circle has 360 degrees. Link it to skateboard and snowboard tricks
like the 180 and the 360. As you discuss each one have the students find it on
their 360-degree Circle worksheet.
If you divide a circle in half, how many degrees do you have? 180. Have them jump
and spin and try to land at 180 degrees. Now start at 0 degrees on your circle
and trace your finger around to 180 degrees. What about a half of the half? That
would be 90 degrees. Jump 90 degrees at a time and see if they can figure out the
degrees–link it to the 9 times tables. So if you could jump all the way around
you would be doing a 360!
Have students put away their 360 degrees Circle paper so they cannot see it
during the following activity. Give each student a piece of 9 x 12 art paper. Put
students into partners and give each group a set of fraction circles cut out of
foam board. You need to have a whole, halves, fourths, eighths, sixths, and
Have students fold their art paper to make four boxes. Have them trace their
whole circle in each of the boxes on the front and in two boxes on the back.
(Total of 6 boxes)
Work with students to identify the benchmark angles.
Begin with the whole circle. Review how many degrees are in a complete circle.
Write: “A whole circle has 360 degrees”. Ask how much of the circle 180 degrees
would be. Have them find the fraction pieces that would cover half the circle. In
the second box have the students trace the halves onto the circle, write 180
degrees on the circle in the correct place, trace the 180-degree angle in crayon
and shade it in. Above the circle write “180 degrees is half the circle.” (You
can also teach your students that this is called a straight angle)
Note: As you do these fraction pieces make sure they lay the first fraction piece
so its baseline is on the 0 degree line of the circle, this will form the angle
Continue with 90 degrees. Remind them how far they had to jump. How could you
relate 90 degrees to a fraction of your circle? Lay your fraction pieces on your
circle and see which ones correspond to 90 degrees on the circle. Find the
fractions that would make 90-degree angles. Trace the fourths, highlight the
first one-fourth, and label 90 degrees on the circle and then above the circle
write “90 degrees is 1/4 of the circle”. As you work through the rest of these
angles have the students compare them to the 90-degree angle to give them a
reference point.
Repeat for 45 degrees, 60 degrees, and 120 degrees. (A quick check of these benchmark values appears in the sketch after this list.)
4. Make an angle manipulative. Give each student two 1” x 6” strips of oaktag and a
Draw a ray on each strip. Mark an endpoint on each ray, then put the strips
together to form a vertex and put the fastener through them. Make a larger
version for you to use to demonstrate on the board. Have them look at their
fraction circle papers and try to reproduce the angles using their angle
5. Formative Assessment: Have students use whiteboards or white art paper and
crayons. Example: Draw two angles, one 90 degrees and one 45 degrees, on the
board or overhead. Instruct students to copy the 90-degree angle. Have them hold
up their white boards or papers to check. Continue with other angle comparisons;
include right, acute, and obtuse angles also.
6. Measuring Angles using an angle ruler or protractor
Show students an angle ruler and a protractor; explain that these are the tools
we use for measuring angles. Demonstrate how they work. Put students into
partners and let them experiment with the tools. Draw different angles on the
overhead and measure them. Have students draw and measure them with you. Have
students use their angle manipulative. Have them work in partners. One student
will make an angle using their manipulative; the other student will use the angle
ruler or the protractor to measure the angle.
7. Play “What’s Your Angle?”
Draw angles on the board or overhead. Have students estimate and write down the
angle’s degrees. Then have students come up and measure. If their estimate is
exactly correct they get 10 points. Deduct one point for every degree they are
off—if they are one degree off they will get 9 points, continuing down to 9
degrees off they will get 1 point, 10 or more degrees off they will get 0 points.
Variation: Play STOP! Use a large angle manipulative on the board. Tape the
bottom ray so that it stays at 0 degrees. Identify the degree of angle you want
to make. Choose a student to come to the front. Their job is to yell, “STOP” when
they think you have made that degree of angle. They can solicit help from the
other students. Move the other ray slowly (remember that angles are measured
going counterclockwise) The student yells stop when they think you have reached
the correct degree. Tape the ray down and measure the angle. Choose your “winner”
criteria before starting. Example: They have to be within 5 degrees to win. If
they win give them a small treat.
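As referenced in step 3 above, the benchmark angle values can be checked quickly (a Python sketch for the teacher, not part of the original lesson plan):

```python
# Benchmark angles as fractions of a full 360-degree turn.
fractions = {"half": 1/2, "fourth": 1/4, "eighth": 1/8,
             "sixth": 1/6, "third": 1/3}
for name, frac in fractions.items():
    print(f"one {name} of the circle = {360 * frac:.0f} degrees")
# half = 180, fourth = 90, eighth = 45, sixth = 60, third = 120
```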
• Struggling learners can be paired with more advanced learners
• Angle Tangle: Assign students to draw 5-7 straight lines with several
intersections. Then connect the endpoints of the lines. Mark the angles created
within in the design and color code them by right, acute, and obtuse angles.
Color the rest of the design.
• String Art: Do a line design but give students string, oaktag, and safe plastic
needles. Have them make the design using the string.
• Use AngLegs sets which include connecting pieces to form angles and a protractor
that attaches to the pieces for independent practice in measuring angles.
• Integrating Technology: Take a digital camera and take your class on an “Angle
Hunt”. Have them identify angles in architecture, machines, nature, etc. Take
photographs of the students and the angles. Use them to make a Power Point
Family Connections
• Have students enlist the help of their families to go on an “Angle Hunt” at their
homes. Have them find and describe at least one example of each type of angle.
Assessment Plan:
Use the Angle Assessment blackline as a final assessment.
Sutton, J., & Krueger, A. (2002). EdThoughts: What We Know About Mathematics
Teaching and Learning, (92).
Brain research demonstrates that: the more senses used in instruction, the better
learners will be able to remember, retrieve, and connect the information in their
memories. Physical experiences or meaningful contexts can provide learners with
strong blocks for building knowledge. Providing our students with many different
types of activities will help them learn the concepts or skills we are presenting.
Marzano, R.J., Pickering, D., & Pollack, J.S. (2001). Classroom Instruction that
Works: research based strategies for increasing student achievement. ASCD,
Alexandria, VA.
This text identifies instructional strategies most likely to lead to improved student
learning. It looks at the research and theories behinds these strategies and gives
suggestions for implementing in the classroom. One of the strategies discussed is
kinesthetic activity that uses physical movement to generate an image of the
knowledge in the learner’s mind. Physically making things such as geometric shapes
helps students connect terms and definitions to the actual things. Drawing pictures
or symbols is also a powerful way to generate nonlinguistic representations in the
Utah LessonPlans
Created Date :
Jul 11 2007 10:24 AM
image denoising using nlm algo
Posted by: vidhya chandran
Created at: Wednesday 10th of November 2010 06:05:24 AM
Last Edited Or Replied at: Monday 15th of November 2010 04:51:31 AM
The Curvelet Transform for Image Denoising (seminars report)
Posted by: seminar
Created at: Tuesday 03rd of May 2011 04:32:00 AM
Last Edited Or Replied at: Tuesday 03rd of May 2011 04:32:00 AM
WAVELET SIGNAL AND IMAGE DENOISING
… scales. Whereas at low scales, we extract fine information from a signal, called details.
Signals are usually band-limited, which is equivalent to having finite energy, and therefore we need to use just a constrained interval of scales. However, the continuous wavelet transform provides us with lots of redundant information.
The discrete wavelet transform (DWT) requires less space, utilising space-saving coding based on the fact that wavelet families are orthogonal or biorthogonal bases, and thus do not produce redundant analysis. The DWT corresponds to its continuous version sampled …
Posted by: seminar class (Tuesday 19th of April 2011, 01:50:05 AM). Topic: A Low-Cost VLSI Implementation for Efficient Removal of Impulse Noise.
… reduced SEPD is introduced. The implementation results and comparison are provided in Section V. Conclusions are presented in Section VI.
Assume that the current pixel to be denoised is located at coordinate (i,j) and denoted as p(i,j), and its luminance values before and after the denoising process are represented as f(i,j) and f̂(i,j), respectively. If p(i,j) is corrupted by fixed-value impulse noise, its luminance value will jump to the minimum or maximum value in gray scale. Here, we adopt a 3×3 mask W centering on p(i,j) for image denoising. In the current W, …
Posted by: projectsofme (Friday 08th of October 2010, 11:32:53 PM; last replied Wednesday 13th of June 2012). Topic: Wavelet-Based Palmprint Authentication System.
… metric-based personal identification system. Palmprint-based personal verification has become an increasingly active res…
Posted by: seminar surveyer (Friday 08th of October 2010, 01:45:22 AM). Topic: Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering.
… each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of bot…
Posted by: project report helper (Thursday 07th of October 2010, 04:46:17 AM). Topic: SIMULTANEOUS FRACTAL IMAGE DENOISING AND INTERPOLATION.
… the seed. The cycle-spinning algorithm can also be incorporated in the proposed fractal joint denoising and resizing scheme in order to reduce some of the artifacts and enhance the visual quality of the fractally denoised and resized estimates. …
Posted by: selvi (Friday 02nd of April 2010, 06:12:10 AM). Topic: Huber Fractal Image Compression.
Mathematical Expressions
Mathematical expressions are combinations of mathematical symbols, numbers, or both. Every expression contains important elements called variables. An expression generally does not contain an equality sign; if it has an equality sign, it is an equation.
$x^2-2x+5$ is an expression.
$y=x^2-2x+5$ is an equation.
Defined and undefined forms:
Defined and undefined forms are important things to note in expressions; they determine whether an expression has a meaningful value. The meaning of any mathematical expression depends upon the elements present in it, i.e. symbols, variables, integers, etc.
Example: $\frac{0}{1}$, $\infty$, $-\infty$, $0/\infty$, $1\times\infty$, $0\times 0$, etc.
• An expression should have variables, symbols, and numbers.
• An expression should not have an equality symbol.
Math Expressions Online Help
Students can get online help with Math expressions to understand the steps involved in solving them. Students first have to learn the concept of variables, explained below.
An equation is a statement used to make two values or expressions equal. Equations are used to find the relation between two variables or values, and to translate a word problem into a mathematical problem.
Jane has bought 5 apples and 6 oranges for $30. Find the cost of each orange if each apple costs $2.
Answer: Given that
5(A) + 6(O) = $30
Each apple = $2
So we get 5(2) + 6(O) = $30
10 + 6(O) = 30
6(O) = 30 - 10
6(O) = 20
O = $\frac{20}{6}$ ≈ 3.33
Therefore each orange costs about $3.33.
In the above problem, the question was translated into mathematical form using an equation.
Examples of Math Expressions
The following are Examples of Math Expressions:
1) Find the roots of the given quadratic expression $x^2-5x+6$.
Solution: Given that
the quadratic expression is $x^2-5x+6$.
Set it equal to zero to make an equation: $x^2-5x+6=0$
$(x-3)(x-2)=0$
The roots are x = 3, 2.
2) Plot the graph for the given expression y = x + 3.
(Graph omitted in this copy.) Each point of the graph is plotted from the expression y = x + 3.
3) Evaluate the expression (1 + p) × 2 + 12 ÷ 3 - p when p = 3.
Solution: Given data,
Expression: (1 + p) × 2 + 12 ÷ 3 - p
We replace p with the number 3, and simplify using the usual rules: parentheses first, then exponents, multiplication and division, then addition and subtraction.
(1 + p) × 2 + 12 ÷ 3 - p
(1 + 3) × 2 + 12 ÷ 3 - 3
4 × 2 + 12 ÷ 3 - 3
8 + 4 - 3 = 9
4) Solve x - 12 + 20 = 37.
Solution: Given,
x - 12 + 20 = 37
We need to find x, so the "x" term should stand alone; hence we bring the constants to the other side:
x + 8 = 37
x = 37 - 8 = 29
5) John weighs 70 kilograms, and Mark weighs "s" kilograms. Write an expression for their combined weight.
Given data,
John weighs 70 kilograms.
Mark weighs "s" kilograms.
Expression for combined weight = ?
The combined weight in kilograms of these two people is the sum of their weights, which is "70 + s".
By observing the steps, students can solve similar problems on Math Expressions on their own, by following the same methodology. | {"url":"http://www.tutornext.com/math/mathematical-expressions.html","timestamp":"2014-04-20T01:01:19Z","content_type":null,"content_length":"14425","record_id":"<urn:uuid:346ed3e1-1ff6-41ba-8e12-4128e29b0c19>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
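Worked evaluations like Example 3 follow a fixed order of operations, so they can be checked mechanically. A minimal, hypothetical Java sketch (class and variable names are mine, not from the lesson):

// Check Example 3 by direct substitution: (1 + p) * 2 + 12 / 3 - p at p = 3.
public class EvaluateExpression {
    public static void main(String[] args) {
        int p = 3;
        // Java applies the same precedence: parentheses, then * and /, then + and -.
        int result = (1 + p) * 2 + 12 / 3 - p;
        System.out.println(result); // prints 9, matching the worked example
    }
}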
Fermi energy (in graphite)
1. The problem statement, all variables and given/known data
Graphite has a structure of parallel planes weakly interacting with each other, such that for many effects it can be considered two-dimensional. Each plane has a hexagonal (honeycomb) structure with a single C atom per site, which gives one conduction electron. Assuming that the free-electron model can be applied to all conduction electrons, find the Fermi energy.
2. Relevant equations
Not really sure. Are there some missing data?
3. The attempt at a solution
What boggles me is that there's neither an area/volume given nor an electron density. I don't think I can calculate the latter either, because even though I know there is 1 atom per elementary unit, I do not know
the distance between atoms (not sure it even makes sense in a honeycomb structure). Thus I don't really know how to tackle the problem.
I know that [itex]E_F= \frac{\hbar ^2 \pi n }{m} [/itex]. I know that n is the density of electron for an area unit but that number is an unknown in the problem.
I'd like some tip to start the problem, thank you! | {"url":"http://www.physicsforums.com/showthread.php?t=620713","timestamp":"2014-04-16T13:57:02Z","content_type":null,"content_length":"21005","record_id":"<urn:uuid:6181693e-f071-45ef-82f9-acbf55cb8263>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
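For what it's worth, the missing density can only come from an assumed lattice geometry. A hedged numeric sketch of the estimate: the C-C distance of about 1.42 Å is my assumption and is NOT given in the problem, and the names are illustrative only:

// 2D free-electron estimate: E_F = hbar^2 * pi * n / m,
// with n taken as one conduction electron per atom on a honeycomb lattice.
public class FermiEnergy {
    public static void main(String[] args) {
        double hbar = 1.0545718e-34;  // J*s
        double m = 9.1093837e-31;     // kg, free-electron mass
        double a = 1.42e-10;          // m, ASSUMED C-C bond length
        // Honeycomb lattice: area per atom = (3 * sqrt(3) / 4) * a^2
        double areaPerAtom = 3.0 * Math.sqrt(3.0) / 4.0 * a * a;
        double n = 1.0 / areaPerAtom; // conduction electrons per m^2
        double eF = hbar * hbar * Math.PI * n / m;
        System.out.println(eF / 1.602176634e-19 + " eV"); // roughly 9 eV
    }
}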
Wheat Ridge Algebra 2 Tutor
Find a Wheat Ridge Algebra 2 Tutor
...The problems most students have difficulty with are those dealing with classes which they have taken a few years ago (i.e. algebra, geometry, pre-calculus), and so may have forgotten some of
the basics. To help students prepare for the SAT math section, I help them identify the type of problems ...
18 Subjects: including algebra 2, calculus, physics, GRE
Hi, I'm Leslie! I am a mathematics and writing tutor with three years of experience teaching and tutoring. I spent two years teaching mathematics and physics at a secondary school in rural
27 Subjects: including algebra 2, reading, writing, geometry
...I am proficient in all levels of math from Algebra and Geometry through Calculus, Differential Equations, and Linear Algebra. I can also teach Intro Statistics and Logic. I've worked with high
school and college students, priding myself on being able to explain any concept to anyone.
11 Subjects: including algebra 2, calculus, geometry, statistics
...Beside all of the formulas of each concepts, how to use the formulas is the hardest part. Why use this formula, not the other one and how to use it. Students ask me these questions all the
27 Subjects: including algebra 2, calculus, physics, geometry
...In this time, I mastered algebraic fluency by reviewing factoring techniques, simplifying rational expressions, and solving equations. As a consequence, I have handled problems which require a
greater depth of thinking and problem solving than those encountered in most Algebra courses. During my time in college I took Calculus I,II, and III.
15 Subjects: including algebra 2, chemistry, physics, geometry | {"url":"http://www.purplemath.com/Wheat_Ridge_Algebra_2_tutors.php","timestamp":"2014-04-17T13:41:52Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:8bb770cd-e725-4a87-ae4c-7490a6780330>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Diskussionsbemerkungen zu dem zweiten hilbertschen vortrag ber die grundlagen der mathematik (English translation)’, in From Frege to Gödel. A Source Book
, 2001
"... We discuss the development of metamathematics in the Hilbert school, and Hilbert's proof-theoretic program in particular. We place this program in a broader historical and philosophical context,
especially with respect to nineteenth century developments in mathematics and logic. Finally, we show how ..."
Cited by 5 (2 self)
Add to MetaCart
We discuss the development of metamathematics in the Hilbert school, and Hilbert's proof-theoretic program in particular. We place this program in a broader historical and philosophical context,
especially with respect to nineteenth century developments in mathematics and logic. Finally, we show how these considerations help frame our understanding of metamathematics and proof theory today.
, 2005
"... Hilbert’s program is, in the first instance, a proposal and a research program in the philosophy and foundations of mathematics. It was formulated in the early 1920s by German mathematician
David Hilbert (1862–1943), and was pursued by him and his collaborators at the University of Göttingen and els ..."
Cited by 4 (0 self)
Add to MetaCart
Hilbert’s program is, in the first instance, a proposal and a research program in the philosophy and foundations of mathematics. It was formulated in the early 1920s by German mathematician David
Hilbert (1862–1943), and was pursued by him and his collaborators at the University of Göttingen and elsewhere in the 1920s
"... Abstract. On the face of it, Hilbert’s Program was concerned with proving consistency of mathematical systems in a finitary way. This was to be accomplished by showing that that these systems
are conservative over finitistically interpretable and obviously sound quantifier-free subsystems. One propo ..."
Cited by 3 (2 self)
Add to MetaCart
Abstract. On the face of it, Hilbert's Program was concerned with proving consistency of mathematical systems in a finitary way. This was to be accomplished by showing that these systems are conservative over finitistically interpretable and obviously sound quantifier-free subsystems. One proposed method of giving such proofs is Hilbert's epsilon-substitution method. There was, however, a second approach which was not reflected in the publications of the Hilbert school in the 1920s, and which is a direct precursor of Hilbert's first epsilon theorem and a certain "general consistency result." An analysis of this so-called "failed proof" lends further support to an interpretation of Hilbert according to which he was expressly concerned with conservativity proofs, even though
his publications only mention consistency as the main question. §1. Introduction. The aim of Hilbert’s program for consistency proofs in the 1920s is well known: to formalize mathematics, and to give
finitistic consistency proofs of these systems and thus to put mathematics on a “secure foundation.” What is perhaps less well known is exactly how Hilbert thought this should be carried out. Over
ten years before Gentzen developed sequent calculus formalizations
"... It is common knowledge that for a short while Hermann Weyl joined Brouwer in his pursuit of a revision of mathematics according to intuitionistic principles. There is, however, little in the
literature that sheds light on Weyl’s role, and in particular on Brouwer’s reaction to Weyl’s allegiance to t ..."
Cited by 1 (0 self)
Add to MetaCart
It is common knowledge that for a short while Hermann Weyl joined Brouwer in his pursuit of a revision of mathematics according to intuitionistic principles. There is, however, little in the
literature that sheds light on Weyl’s role, and in particular on Brouwer’s reaction to Weyl’s allegiance to the cause of intuitionism. This short episode certainly raises a number of questions: what
made Weyl give up his own program, spelled out in “Das Kontinuum”, how come Weyl was so well-informed about Brouwer’s new intuitionism, in what respect did Weyl’s intuitionism differ from Brouwer’s
intuitionism, what did Brouwer think of Weyl’s views,........? To some of these questions at least partial answers can be put forward on the basis of some of the available correspondence and notes.
The present paper will concentrate mostly on the historical issues of the intuitionistic episode in Weyl’s career. Weyl entered the foundational controversy with a bang in 1920 with his sensational
paper "On the new foundational crisis in mathematics". He had already made a name for himself in the foundations of mathematics in 1918 with his monograph "The Continuum" [Weyl 1918]; this contained, in addition to a technical logical–mathematical construction of the continuum, a fairly extensive discussion of the shortcomings of the traditional construction of the continuum on the basis of arbitrary — and hence also impredicative — Dedekind cuts. This book did not cause much of a stir in mathematics, that is to say, it was ritually quoted in the literature but, probably, little understood. It had to wait for a proper appreciation until the phenomenon of impredicativity was better understood. The paper "On the new foundational crisis in mathematics" had a totally different effect; it was the proverbial stone thrown into the quiet pond of mathematics. Weyl characterised it in retrospect with the somewhat apologetic words: Only with some hesitation I acknowledge these lectures, which reflect in their style, which was here and there really bombastic, the mood of excited times — the times immediately following the First World War. Indeed, Weyl's
“New crisis ” reads as a manifesto to the mathematical community, it uses an evocative language with a good many explicit references to the political
"... The lives of mathematical prodigies who passed away very early after groundbreaking work invoke a fascination for later generations: The early death of Niels Henrik Abel (1802–1829) from ill
health after a sled trip to visit his fiancé for Christmas; the obscure circumstances of Evariste Galois ’ (1 ..."
Add to MetaCart
The lives of mathematical prodigies who passed away very early after groundbreaking work invoke a fascination for later generations: The early death of Niels Henrik Abel (1802–1829) from ill health
after a sled trip to visit his fiancé for Christmas; the obscure circumstances of Evariste Galois ’ (1811–1832) duel; the deaths of consumption of Gotthold Eisenstein (1823–1852) (who sometimes
lectured his few students from his bedside) and of Gustav Roch (1839–1866) in Venice; the drowning of the topologist Pavel Samuilovich Urysohn (1898–1924) on vacation; the burial of Raymond Paley
(1907–1933) in an avalanche at Deception Pass in the Rocky Mountains; as well as the fatal imprisonment of Gerhard Gentzen (1909–1945) in Prague1 — these are tales most scholars of logic and
mathematics have heard in their student days. Jacques Herbrand, a young prodigy admitted to the École Normale Supérieure as the best student of the year1925, when he was17, died only six years later
in a mountaineering accident in La Bérarde (Isère) in France. He left a legacy in logic and mathematics that is outstanding.
, 2005
"... When the microequations of a dynamical system generate complex macrobehaviour, there can be an explanatory gap between the small-scale and large-scale descriptions of the same system. The
microdynamics may be simple, but its relationship to the macrobehaviour may seem impenetrable. This phenomenon, ..."
Add to MetaCart
When the microequations of a dynamical system generate complex macrobehaviour, there can be an explanatory gap between the small-scale and large-scale descriptions of the same system. The
microdynamics may be simple, but its relationship to the macrobehaviour may seem impenetrable. This phenomenon, known as emergence, poses problems for the nature of scientific understanding. How do
we reconcile two radically different modes of description? Emergence is formulated using the powerful tools of algorithmic information and computational theory. This provides the ground for an
extension and generalisation of the phenomenon. Mathematics itself is analysed as an emergent system, linking formalist notions of mathematics as a string manipulation game with the more abstract
ideas and proofs that occupy mathematicians. A philosophical problem that has plagued emergence is whether the whole can be more than the sum of its parts. This possibility, known as strong
emergence, manifests when emergent macrostructures introduce brand new causal dynamics into a system. A new perspective on this | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1103930","timestamp":"2014-04-16T10:46:25Z","content_type":null,"content_length":"27807","record_id":"<urn:uuid:b8f35fba-1085-4aa8-bca0-e107064a6f6f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
algebra equation
June 25th 2006, 02:25 AM #1
(member since Jun 2006)
A graph shows y=(x-3)(x-5)(x-a). Determine the value of a. My answer would be 7, but I'm not sure. Thanks again for the help.
June 25th 2006, 02:44 AM #2
Grand Panjandrum (member since Nov 2005)
There is not sufficient information to answer this.
June 25th 2006, 03:06 AM #3
(member since Jun 2006)
Please see the attachment. The figure shows the graph of y=(x-3)(x-5)(x-a). Determine the value of a.
Portability GHC only
Stability experimental
Maintainer ekmett@gmail.com
Safe Haskell Safe-Infered
Newton's Method (Forward AD)
findZero :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]Source
The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
take 10 $ findZero (\x -> x^2 - 4) 1 -- converge to 2.0
import Data.Complex
take 10 $ findZero ((+1).(^2)) (1 :+ 1) -- converge to (0 :+ 1)
inverse :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> a -> [a]Source
The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
take 10 $ inverse sqrt 1 (sqrt 10) -- converges to 10
fixedPoint :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]Source
The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
take 10 $ fixedPoint cos 1 -- converges to 0.7390851332151607
extremum :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]Source
The extremum function finds an extremum of a scalar function using Newton's method; it produces a stream of increasingly accurate results. (Modulo the usual caveats.)
take 10 $ extremum cos 1 -- converges to 0
Gradient Ascent/Descent (Reverse AD)
gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]Source
The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad
sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
It uses reverse mode automatic differentiation to compute the gradient. | {"url":"http://hackage.haskell.org/package/ad-1.5.0.1/docs/Numeric-AD-Newton.html","timestamp":"2014-04-20T21:54:56Z","content_type":null,"content_length":"10814","record_id":"<urn:uuid:c8688481-2d9c-440a-a8bb-6efd8a86c9e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
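For readers outside Haskell, the iteration gradientDescent performs can be sketched in plain Java, with a finite-difference gradient standing in for reverse-mode AD (step size, bump size, and all names here are my own illustrative choices, not part of the library):

import java.util.function.Function;

public class GradientDescentSketch {
    // One gradient-descent step: estimate each partial derivative by a
    // forward difference, then move against the gradient.
    static double[] step(Function<double[], Double> f, double[] x,
                         double rate, double h) {
        double fx = f.apply(x);
        double[] next = x.clone();
        for (int i = 0; i < x.length; i++) {
            double[] bumped = x.clone();
            bumped[i] += h;
            double grad = (f.apply(bumped) - fx) / h;
            next[i] -= rate * grad;
        }
        return next;
    }

    public static void main(String[] args) {
        // Minimize f(x, y) = (x - 1)^2 + (y + 2)^2, whose minimum is at (1, -2).
        Function<double[], Double> f =
            v -> (v[0] - 1) * (v[0] - 1) + (v[1] + 2) * (v[1] + 2);
        double[] x = {0.0, 0.0};
        for (int i = 0; i < 200; i++) {
            x = step(f, x, 0.1, 1e-6);
        }
        System.out.println(x[0] + ", " + x[1]); // approaches 1.0, -2.0
    }
}

Like the Haskell version, this yields a stream of increasingly accurate iterates; unlike it, the finite-difference gradient is only an approximation of the true gradient.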
Re: [vox-tech] LaTeX, DVI, PDF, LaTeX, fonts - HELP!
If you're not doing anything to specifically control fonts, the default is
Computer Modern (the distinct look behind most TeX documents)
for the 3 font families of roman, san serif, and typewriter. The CM fonts
are not PostScript fonts.
I'm no expert, but the following simple fix will probably go a long way.
Insert \usepackage{times} in the preamble of your document. It
will make Times, Helvetica, and Courier the roman, sans serif, and
typewriter font family, respectively. These are PostScript fonts and
the resulting output files should be much more compatible with PS
As for math symbols, I'll quote from "Math Into LaTeX" (an excellent
resource for typesetting mathematical documents with LaTeX):
"Looking at a mathematical article typeset with the Times text font,
you may find that the Computer Modern math symbols look too thin."
In your case, you may also find that your end-user is not getting the
desired "look." Insert \usepackage{mathtime} in the preamble,
and you will get the MathTime PS fonts for math symbols; these
are a better match for Times and possibly more compatible with
PS printers (I don't know). The MathTime fonts are in the
teTeX TeX distribution commonly distributed with Linux systems.
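In concrete terms, the suggestion amounts to two preamble lines. A minimal sketch (whether mathtime is actually present depends on your TeX installation):

\documentclass{article}
\usepackage{times}    % Times/Helvetica/Courier for the text font families
\usepackage{mathtime} % matching PostScript math fonts, if installed
\begin{document}
% ... your document ...
\end{document}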
On Friday 19 April 2002 03:12 pm, you wrote:
> I've just been informed that some documents that are being generated
> by LaTeX aren't printing properly on some printers.
> I think it's less a problem with the printers (HP-850s) and probably
> more a problem with Acrobat on the system (all of these are Windows boxes,
> BTW) that the printer happens to be connected to.
> On one box, printing to a laser printer, the document appears correctly.
> On the other box, printing to the HP-850, the fonts aren't right
> (sans-serif instead of serif).
> Is there some way of embedding the fonts into the PDF document so that
> Acrobat will work correctly? Or perhaps is Acrobat missing something
> or misconfigured?
> I've tried sending "-dNOPLATFONTS" to my call to 'dvipdf', but the
> PDFs generated with and without that option were identical in size.
> (Unfortunately, >I< don't have the problem, people on the other side
> of the country do, so this is mighty hard to test/debug. :^( )
> Thx!
> -bill!
> _______________________________________________
> vox-tech mailing list
> vox-tech@lists.lugod.org
> http://lists.lugod.org/mailman/listinfo/vox-tech
Exponential Growth
Exponential Growth and Decay
America faces a growing problem with its ever-increasing amount of nuclear waste. Currently the United States has over 77,000 tons of waste. Environmentalists talk about how the radioactive material will be dangerous for
thousands of years because of its long half-life. In fact, it will take 240,000 years for plutonium 239 to become safe!
When scientists talk about half-life, they are referring to how long it will take for half of a sample to decay. In the case of nuclear waste, it refers to how long it takes for half of the
radioactive material to turn into lead.
Exponential growth is very similar, but deals with, you guessed it, growth, instead of decay. The most common example is the growth of bacteria colonies. Bacteria multiply at an alarming rate.
If we assume that bacteria can double every hour and if we start with just a single bacterium, then after one day there will be over 16 million bacteria!
Obviously exponential growth, or decay for that matter, cannot continue indefinitely. Eventually there would no longer be any space or nutrients available for the bacteria, or the last atom of
plutonium would decay into lead. As a result, exponential growth and decay refer only to the early stages of both processes.
The mathematics behind exponential growth and decay is rather simple. In fact, we use the same formula as for continuous compound interest.
Exponential Growth
If $N_0$ is the initial population and the growth rate is $k$, then the population $N$ at time $t$ is:
$N = N_0e^{kt}$   (1)
The formula for exponential decay is exactly the same, except the k value is negative instead of positive. Below it is written out formally.
Exponential Decay
If $N_0$ is the initial population and the decay rate is $k$ (so $k < 0$), then the population $N$ at time $t$ is:
$N = N_0e^{kt}$   (2)
Now, recall the equation for continuous compound interest: $A = Pe^{rt}$. While it may not seem apparent, this equation is the same as Eq.(1). The only difference is the variables.
For continuous compound interest, we let the initial amount, the principal, be denoted by the letter P. Think of the principal as the initial population in a savings account. In the interest formula, we let r denote the interest rate. Think of the interest rate as the rate at which the amount of money in a savings account grows. It is analogous to k in the exponential growth/decay formulas. And while A denotes the amount in the account at time t, you can think of it as the population of money in the account at time t.
Example 1:
At the start of an experiment, there are 100 bacteria. If the bacteria follow an exponential growth pattern with rate k = 0.02, (a) what will be the population after 5 hours? (b) how long will
it take for the population to double?
For (a) we are asked to find the population N at t = 5. To solve this question, we shall use the growth formula, $N = N_0e^{kt}$. We are told that $N_0 = 100$, k = 0.02 and that t = 5. Plugging this information in, we have the following:
$N = 100e^{(0.02)(5)} = 100e^{0.1} \approx 110.5$
So after 5 hours there will be about 110 bacteria.
In (b), we are asked to determine the time at which the population doubles, i.e. reaches 200. Again, we use the same formula, but this time we are solving for t. We have:
$200 = 100e^{0.02t} \;\Rightarrow\; \ln 2 = 0.02t \;\Rightarrow\; t = \frac{\ln 2}{0.02} \approx 34.66$
So, the bacteria population will double in about 34.7 hours.
You do not need to know the growth rate to begin with to solve problems, as the next example illustrates.
Example 2:
Suppose that the population of a colony of bacteria increases exponentially. At the start of an experiment, there are 6,000 bacteria, and one hour later, the population has increased to 6,400.
How long will it take for the population to reach 10,000? Round your answer to the nearest hour.
We are given that $N_0 = 6{,}000$, and at t = 1, N = 6,400. Plugging this information into the formula for exponential growth, we can solve for k. Then we can use k to find when the population will reach 10,000:
$6{,}400 = 6{,}000e^{k} \;\Rightarrow\; k = \ln\tfrac{16}{15} \approx 0.0645$
Using this k value, we can determine when the population N will reach 10,000:
$10{,}000 = 6{,}000e^{0.0645t} \;\Rightarrow\; t = \frac{\ln(5/3)}{0.0645} \approx 7.9$
Rounded to the nearest hour, the population will reach 10,000 in about 8 hours.
Often times when dealing with exponential decay problems, we need to use the half-life. If we know the rate of decay, k, there is a nice formula for half-life. It is:
If k is the rate of decay, then
$t_{1/2} = -\frac{\ln 2}{k}$
To show the half-life formula, we merely use formula (2), setting $N = \tfrac{1}{2}N_0$, and solve for t. Doing this, we get the following:
$\tfrac{1}{2}N_0 = N_0e^{kt} \;\Rightarrow\; \ln\tfrac{1}{2} = kt \;\Rightarrow\; t = -\frac{\ln 2}{k}$
While this formula is very helpful and will be used quite a bit, it is better to remember how to derive the formula, not just remember the formula.
Example 3:
The half-life of Plutonium-239 is 24,000 years. If 10 grams are present now, how long will it take until only 10% of the original sample remains? Round your answer to the nearest 10,000 years.
We can use the half-life formula to find the decay rate k. We know that $t_{1/2}$ = 24,000 years. Plugging into the half-life formula, we have:
$24{,}000 = -\frac{\ln 2}{k} \;\Rightarrow\; k = -\frac{\ln 2}{24{,}000} \approx -0.0000288811$
Now, we need to find when only 10% (1 gram) remains. Plugging into the exponential decay formula, we get the following:
$1 = 10e^{kt} \;\Rightarrow\; \ln(0.1) = kt \;\Rightarrow\; t = \frac{\ln(0.1)}{-0.0000288811} \approx 79{,}700 \text{ years}$
Notice that we rounded our answer. If you use the exact value for k, your answer is around 79,700 years, but if you use k ≈ -0.000029, your answer is around 79,400 years. Because of this variation, only rounding to the nearest ten thousand yields the same answer either way: about 80,000 years.
Unlike the previous example, the next example does not use the half-life formula. However, it does use a method similar to the derivation of the half-life formula.
Example 4:
Suppose that at the start of an experiment there are 8,000 bacteria. A growth inhibitor and a lethal pathogen are introduced into the colony. After two hours 1,000 bacteria are dead. If the
death rates are exponential, (a) how long will it take for the population to drop below 5,000? (b) How long will it take for two-thirds of the bacteria to die? Round your answers to the nearest tenth of an hour.
Part (a): We are starting with 8,000 bacteria, so $N_0 = 8{,}000$. We know that at t = 2, 1,000 bacteria are dead, so the population is 7,000. Using this information, we can determine the decay rate k by using the exponential decay formula:
$7{,}000 = 8{,}000e^{2k} \;\Rightarrow\; k = \tfrac{1}{2}\ln\tfrac{7}{8} \approx -0.06677$
Using k, we can find the t value when the population will drop below 5,000:
$5{,}000 = 8{,}000e^{-0.06677t} \;\Rightarrow\; t = \frac{\ln(5/8)}{-0.06677} \approx 7.0391$
Notice that we rounded up, because we are asked to find when the population drops below 5,000. So, we need to round 7.0391 up to the next tenth: at t = 7.0 the population is still above 5,000, so the answer is t = 7.1 hours.
For part (b), recognize that when two-thirds of the population is dead, only one-third of the population remains. To solve, we follow the derivation of the half-life formula, but replace $\tfrac{1}{2}N_0$ with $\tfrac{1}{3}N_0$ and use k = -0.06677. Doing this, we get the following:
$\tfrac{1}{3}N_0 = N_0e^{kt} \;\Rightarrow\; t = \frac{\ln(1/3)}{-0.06677} \approx 16.45$
Again, we round up to the next tenth, for the same reasons as in part (a): about 16.5 hours.
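Every example above reduces to the same two moves: evaluate $N_0e^{kt}$, or solve it for t. That makes them easy to script. A minimal, hypothetical Java sketch of the formulas (all names are mine, not from the text):

public class ExponentialModel {
    // N(t) = N0 * e^(k t); k > 0 models growth, k < 0 models decay.
    static double population(double n0, double k, double t) {
        return n0 * Math.exp(k * t);
    }

    // Time for the population to reach a target level: t = ln(target/N0) / k.
    static double timeToReach(double n0, double k, double target) {
        return Math.log(target / n0) / k;
    }

    // Half-life for a decay rate k < 0: t = -ln(2) / k.
    static double halfLife(double k) {
        return -Math.log(2) / k;
    }

    public static void main(String[] args) {
        System.out.println(population(100, 0.02, 5));    // Example 1(a): ~110.5
        System.out.println(timeToReach(100, 0.02, 200)); // Example 1(b): ~34.66 hours
        double k = -Math.log(2) / 24000.0;               // Example 3: Pu-239 decay rate
        System.out.println(timeToReach(10, k, 1));       // ~79,700 years
    }
}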
Using Arrays and Overloading Methods.
10-25-2011, 04:26 AM
Using Arrays and Overloading Methods.
I am currently in an AP computer science class where we learn how to program in Java. Our assignment is as follows
"Use method overloading to write a program that will test to see if the parameters sent in are equal. The program should use loops to allow the user to enter in 2 or 3 integers and call the
appropriate method based on how many numbers were put in. The program should loop until either the user enters a -1 (their sign that they are through entering in numbers) or they have entered 3
numbers in. You may assume that only positive integers will be input."
I was wondering if it was possible to summon a method for my array when it only has two numbers in it, and summon a different method for when it has 3 numbers in it. My program is designed to fill
the array with two input numbers and then ask if they want to enter a third integer. If they do, the array will have three values, but if not, it will continue with just two. Is this even the
right way to go about solving this problem? Our teacher doesn't explain it very well and I am really confused.
Here is my program so far
import java.util.Scanner;
public class equalparameters {
public static void main (String[] args){
Scanner reader = new Scanner(System.in);
int x, one, two, three, inumber,y;
int integer [] = new int[3];
System.out.println("enter two or three integers. ");
for (x=0; x<3; x++);{
System.out.println("Please enter an integer");
inumber = reader.nextInt();
if (x==1){
System.out.println("are you going to want to enter a third integer? 1. yes 2. no");
y = reader.nextInt();
if (y==2){
x = 3;
public static int compute (int integer[1]){
public static int compute (int integer[]){
int x, y;
Thanks for the help
10-25-2011, 04:31 AM
Re: Using Arrays and Overloading Methods.
Reading in and storing the input from the user is not important. You can do it many ways. Just pick one. What is important is what method should be called. You should have 2 methods, one that has
2 int parameters and one that has 3 int parameters.
public static int compute (int integer[1]){
That is not legal syntax.
10-25-2011, 04:48 AM
Re: Using Arrays and Overloading Methods.
What I was trying to ask was how to I differentiate between two parameters and three parameters. What do I put in the parentheses to show the same array, once with two values and once with three?
public static int compute (?????? two parameters)
public static int compute (?????? three parameters)
10-25-2011, 04:50 AM
Re: Using Arrays and Overloading Methods.
What I was trying to ask was how to I differentiate between two parameters and three parameters. What do I put in the parentheses to show the same array, once with two values and once with three?
public static int compute (?????? two parameters)
public static int compute (?????? three parameters)
Here's an example of method overloading:
public static int compute (int number1, int number2) // method 1
public static int compute (int number1, int number2, int number3) // method 2
compute(5, 7) calls method 1 and compute(5, 7, 9) calls method 2.
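Putting the pieces together, here is a hedged sketch of how the whole assignment could look, with the input loop feeding whichever overload matches the count of numbers read (class and method names are illustrative only, not the required solution):

import java.util.Scanner;

public class EqualParameters {
    // Overload for exactly two inputs.
    static boolean allEqual(int a, int b) {
        return a == b;
    }

    // Overload for exactly three inputs.
    static boolean allEqual(int a, int b, int c) {
        return a == b && b == c;
    }

    public static void main(String[] args) {
        Scanner reader = new Scanner(System.in);
        int[] nums = new int[3];
        int count = 0;
        // Loop until the user enters -1 or three numbers are stored.
        while (count < 3) {
            System.out.println("Enter a positive integer (-1 to stop):");
            int n = reader.nextInt();
            if (n == -1) break;
            nums[count++] = n;
        }
        if (count == 2) {
            System.out.println("Equal? " + allEqual(nums[0], nums[1]));
        } else if (count == 3) {
            System.out.println("Equal? " + allEqual(nums[0], nums[1], nums[2]));
        } else {
            System.out.println("Please enter at least two numbers.");
        }
    }
}

Note the dispatch: the compiler chooses an overload by the number of arguments at each call site, so the program decides at runtime which call to make based on how many values were read. You cannot overload on an array's length.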
Meeting Details
For more information about this meeting, contact Nigel Higson, Mari Royer, John Roe.
Title: Free actions of compact quantum groups on unital C*-algebras
Seminar: Noncommutative Geometry Seminar
Speaker: Piotr Hajac, Polish Academy of Sciences
Let F be a field, G a finite group, and Map(G,F) the Hopf algebra of all set-theoretic maps from G to F. If E is a finite field extension of F and G is its Galois group, the extension is Galois if
and only if the canonical map resulting from viewing E as a Map(G,F)-comodule is an isomorphism. Similarly, a finite covering space is regular if and only if the analogous canonical map is an
isomorphism. The main result to be presented in this talk is an extension of this point of view to arbitrary actions of compact quantum groups on unital C*-algebras. I will explain that such an
action is free (in the sense of Ellwood) if and only if the canonical map (obtained using the underlying Hopf algebra of the compact quantum group) is an isomorphism. In particular, we are able to
express the freeness of a compact Hausdorff topological group action on a compact Hausdorff topological space in algebraic terms. Also, we can apply the main result to noncommutative join
constructions and coactions of discrete groups on unital C*-algebras. (Joint work with Paul F. Baum and Kenny De Commer.)
Room Reservation Information
Room Number: MB106
Date: 09 / 26 / 2013
Time: 02:30pm - 03:30pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=19254","timestamp":"2014-04-18T08:34:44Z","content_type":null,"content_length":"4228","record_id":"<urn:uuid:dbc98918-10d1-4074-b881-180575709f96>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Syllabus Entrance
CH 108 Introduction to Chemistry II
Yates, David
Mission Statement: The mission of Park University, an entrepreneurial institution of learning, is to provide access to academic excellence, which will prepare learners to think critically,
communicate effectively and engage in lifelong learning while serving a global community.
Vision Statement: Park University will be a renowned international leader in providing innovative educational opportunities for learners within the global society.
Course CH 108 Introduction to Chemistry II
Semester UJL 2007 HO
Faculty Yates, David
Title Laboratory Coordinator/Chemical Hygiene Officer
Degrees/Certificates M.S. , NRCC- CHO, 40 hour HAZWOPER Cert.,
Office Location Science Hall 03A
Office Hours Sched. :MTWR 2:00-4:00 PM, F: By arrangement only; Unsched.-OPEN DOOR POLICY-If my door is open I am available to students.
Daytime Phone 816-584-6515
Other Phone 816-914-1728(CELL)
E-Mail david.yates@park.edu
Web Page Under construction
Semester Dates 09 July – 03 August 2007
Class Days -MTWR--
Class Time 8:00 - 10:15 AM
Prerequisites earned “C” or better in CH107
Credit Hours 3:0:3
Chemistry The Central Science, 10th edition, 2006, Brown, LeMay, Bursten,
ISBN 0-13-146489-2,
Additional Resources:
-A scientific calculator (with statistical capabilities)
McAfee Memorial Library - Online information, links, electronic databases and the Online catalog. Contact the library for further assistance via email or at 800-270-4347.
Career Counseling - The Career Development Center (CDC) provides services for all stages of career development. The mission of the CDC is to provide the career planning tools to ensure a lifetime of
career success.
Park Helpdesk - If you have forgotten your OPEN ID or Password, or need assistance with your PirateMail account, please email helpdesk@park.edu or call 800-927-3024
Resources for Current Students - A great place to look for all kinds of information http://www.park.edu/Current/.
Course Description:
A continuation of CH 107, with major topics covered including solutions, chemical kinetics, thermodynamics, equilibria, and an introduction to descriptive chemistry. Three lectures and one hour of discussion per week. PREREQUISITE: 'C' or better in CH 107L or permission of instructor. CO-REQUISITE: CH 108L. 3:0:3
Educational Philosophy:
The instructor's educational philosophy is based on inquiry and constructivism, utilizing lecture, lecture demonstrations, discussions, dialogues, readings, laboratory investigations, quizzes, examinations, videos, the internet, and writings (e.g., formal laboratory reports and a review of the literature on some aspect of chemistry).
Learning Outcomes:
Core Learning Outcomes
1. Describe solutions, solubility, colligative properties (perform calculations of these properties), and colloid formation.
2. Apply the kinetic theory to a chemical reaction and perform calculations using the rate laws and transition energy.
3. Write simple reaction mechanisms and describe the function of catalysts.
4. Describe equilibrium and estimate equilibrium information.
5. Describe the acid-base concept, assess acid and base strength, describe and perform calculations on weak acids and bases, buffer solutions, and salts of weak acids and bases, and determine pH.
6. Explain and apply the first law of thermodynamics.
7. Describe and perform calculations on voltaic and electrolytic cells.
8. Describe reactions and trends of the main group elements and nuclear decay and reactions.
9. Relate and apply scientific methods to chemical situations and scientific literature.
Core Assessment:
Final Exam
Class Assessment:
Your final grade will be based on four (4) one-hour exams and a comprehensive final. (See dates under COURSE TOPICS/DATES/ASSIGNMENTS. Exams are given on Wednesdays.) Your grade will also
reflect your performance on weekly quizzes and your lab performance (see separate syllabus for lab) and homework.
Snow/tornado days – If there is no class due to inclement weather, the scheduled exam will be given the next class time. If the weather affects the final, check with the nursing office or your email
for an alternate date. This will most likely be the following Wednesday or Thursday.
MAKE UP EXAMS If you miss an exam and choose to make up the zero, an exam will be given the Thursday of finals week. This exam will count as the missed exam. This exam will be over any material I
choose. You cannot miss more than one exam. Any exams missed over one will have a grade entered as zero.
If you know you will be absent for an exam, see me, call, or e-mail to set up a time to take it before the class. You will have this privilege once during the semester.
You are to do the first three red-colored problems of each section under "Problems" at the end of each chapter. You are also to do four of the blue-numbered questions (every other one) from the Additional Problems section and the first red question from Apply Your Knowledge. You must show all work. You must show all steps used to get to the answer that appears in the back of the text. Box in the final result. Staple
the pages. Be careful not to staple through problems. The grader will not struggle to try to read your work through a staple. Put your name on the top page. Failure to show all work will result in no
credit for the problem.
Failure to box in the final answer will result in a deduction of ¼-problem credit.
Failure to staple the pages will result in a deduction of ¼ credit of the problem-set.
If you staple through written work, the grader will not struggle to try to read your work. Zero credit will be given for that problem.
Failure to put your name on the top page will result in no one grading the problem set.
Late Homework: You will be given a check mark for any late homework. This is a zero numerically.
No Homework: You will be given a zero. More than three zeros will result in a decrease of one letter grade for the course. More than five zeros will result in a decrease of two letter grades for the
Grade weights:            Grading scale:
Exams (4)      35%        A  85-100
Final          25%        B  75-84
Quiz           10%        C  60-74
Homework       10%        D  50-59
Rev. of Lit.   20%        F  below 50
Late Submission of Course Materials:
The instructor will not accept late assignments. Assignments not submitted on the due date will receive a grade of "zero".
Classroom Rules of Conduct:
Computers make writing and revising much easier and more productive. Students must recognize, though, that technology can also cause problems. Printers run out of ink and hard drives crash. Students
must be responsible for planning ahead and meeting deadlines in spite of technology. Be sure to save copies of your work to disk, hard drive, and print out paper copies for backup purposes.
Remember, you are not the only one in class and we have a lot of material to cover. Ask questions, but do not monopolize the class time.
Purchase a stapler. All reports and papers must be stapled.
Course Topic/Dates/Assignments:
│Week│Topics/Assignments │
│1 M │Solutions │
│T │Colligative Properties │
│W │Kinetics │
│R │Rates and Conc change │
│2 M │Rxn Mechanisms │
│T │Equilibrium │
│W │LeChatelier’s Principle │
│R │Acid-base equilibrium │
│3 M │pH and K’s │
│T │Common ion/buffers │
│W │Titrations │
│R │Second Law of Thermo │
│4 M │Gibbs Free Energy │
│T │REDOX and Voltaic cells │
│W │EMF and Gibbs │
│R. │Chemistry of the Nonmetals │
│ │ │
Academic Honesty:
Academic integrity is the foundation of the academic community. Because each student has the primary responsibility for being academically honest, students are advised to read and understand all
sections of this policy relating to standards of conduct and academic life. Park University 2006-2007 Undergraduate Catalog Page 87-89
Plagiarism involves the use of quotations without quotation marks, the use of quotations without indication of the source, the use of another's idea without acknowledging the source, the submission
of a paper, laboratory report, project, or class assignment (any portion of such) prepared by another person, or incorrect paraphrasing. Park University 2006-2007 Undergraduate Catalog Page 87
Attendance Policy:
Instructors are required to maintain attendance records and to report absences via the online attendance reporting system.
1. The instructor may excuse absences for valid reasons, but missed work must be made up within the semester/term of enrollment.
2. Work missed through unexcused absences must also be made up within the semester/term of enrollment, but unexcused absences may carry further penalties.
3. In the event of two consecutive weeks of unexcused absences in a semester/term of enrollment, the student will be administratively withdrawn, resulting in a grade of "W".
4. A "Contract for Incomplete" will not be issued to a student who has unexcused or excessive absences recorded for a course.
5. Students receiving Military Tuition Assistance or Veterans Administration educational benefits must not exceed three unexcused absences in the semester/term of enrollment. Excessive absences will
be reported to the appropriate agency and may result in a monetary penalty to the student.
6. Report of a "F" grade (attendance or academic) resulting from excessive absence for those students who are receiving financial assistance from agencies not mentioned in item 5 above will be
reported to the appropriate agency.
Park University 2006-2007 Undergraduate Catalog Page 89-90
Disability Guidelines:
Park University is committed to meeting the needs of all students that meet the criteria for special assistance. These guidelines are designed to supply directions to students concerning the
information necessary to accomplish this goal. It is Park University's policy to comply fully with federal and state law, including Section 504 of the Rehabilitation Act of 1973 and the Americans
with Disabilities Act of 1990, regarding students with disabilities. In the case of any inconsistency between these guidelines and federal and/or state law, the provisions of the law will apply.
Additional information concerning Park University's policies and procedures related to disability can be found on the Park University web page: http://www.park.edu/disability .
Additional Information:
This course is designed to provide you with a basic understanding of scientific principles and concepts. As much as possible we will discuss how chemistry is used in everyday activities. In order to
achieve the goals, you will be asked to come to class prepared to take an active role in the lecture. Outside of class you will need to work problems, read the text, and re-write your notes, take
practice tests. This must be done daily if you expect to get a decent grade for the course. Chemistry, as all science courses, cannot be learned the night before the exam. Chemistry is learning, not
memorizing, although it may seem to be a lot of memorizing. This is due to the development of your scientific language. As a general rule, you should do three hours of outside classroom work for each
hour of class work. You are responsible for all material covered in the text and any additional material in class.
§ come to class.
§ come prepared for class.
§ do not monopolize class time. We have a lot of material to cover and I will generally be in my office 6 or more hours a day. Having said that, if you have a question about the course material
usually someone else will have the same question. Remember: aside from redundancy there is no such thing as a stupid question.
§ utilize practice tests
§ submit homework problems.
§ adhere to published due dates. (Late work is NOT accepted for a grade.)
Review of the Literature
Chemistry 108
Literature Review Project
The objective of this assignment is the preparation of a literature report demonstrating your ability to search the chemical reference literature for specialized information related to a specific
topic or technique. The nature of the assignment allows you to fulfill this task by selecting a topic that is of personal interest to you, perhaps because of past experience or (future) interests.
You should select a unique topic that involves chemistry in some way. You must obtain approval from Mr. Yates for your topics before proceeding with the writing of the reports. Finding good topics
is the hardest step for many students. One way to find a topic is to browse current journals that publish chemistry papers. If possible, a key literature reference might be selected to provide an
"entrance" to the literature of the chosen area. The Journal of Chemical Education, The Chemist, and C&E News (all in Park’s library) are just three journals to start your search and may provide a
useful source of ideas. For example, the topic might involve a particular separation problem, a new energy calculation, or another theoretical idea. Your topic should not overlap with that of
another student.
Presumably, if you choose a topic of interest to you, you may find yourself motivated enough to read in detail all the articles that you find! As your chemistry career progresses, it is exactly this
sort of reading on your own that will serve as an important way to advance your knowledge. The summary and list of journal references that you produce should be complete enough to provide an
adequate overview of the current state-of-the-art in the selected topic.
The reference list of research publications related to your chosen topic should place emphasis on journal articles published within the last few years.
References older than 10 years should not be included unless they represent truly major contributions or unless the literature is sparse. The primary source of this reference list should be
research journals not textbooks, although textbooks may be listed as background references if necessary. Because the emphasis is on recent literature, it is appropriate to seek material from the
following sources:
a. Scan current journals for articles published in the last few months.
b. Search abstracting indexes such as "Current Contents" for appropriate keywords related to the topic; this will identify publications 6-12 months old.
c. Having found an article in the literature that is several years old, you can find more recently published articles that reference that old article by searching "Science Citation Index." SCI is
published several times each year and five-year cumulative indexes are published. You can find the listing of the old article in SCI and under the listing, any articles that cite that article
during the pertinent time period. This is a good way to quantitate the importance of an article to the growth of the literature in an area: important articles will tend to be cited more heavily
(although it is possible to be cited for a major debacle). Ask our librarian for help if needed.
d. Another avenue of literature exploration is the computer-searchable CD-ROM material available in the Linda Hall Library or web databases provided by the Park University library. Use these
sources, but do not rely on them alone. There is a considerable delay before articles appear in any database. If you are looking for current or recent articles, browsing the current journals
may be the best resource.
Specific instructions
1. Select your topic and get approval from Mr. Yates
2. Compile your list of references (you should use a minimum of 2 references (for meets expectations) in your list per page of paper (not citations): i.e., a 4-page paper would use 8 refs.; a 5-page paper would use 10 at the minimum), arranging them sequentially, based upon when first cited in the paper. Use the ACS reference style for the appropriate literature citation style (see
attachment). It might be useful to mark with an asterisk the more important or interesting articles that you have found. Also, include in your citation of each article the TITLE of the paper; place
it between the author list and the journal name. Although including the title deviates from the ACS citation style, retaining it with the reference may make the list of citations more useful to you
in the future.
3. Write a summary describing the topic that you selected for your literature survey. Your written report should explain the background of the problem, and should describe the major approaches used
to solve the task or problem. Focus on the instrumental analysis techniques employed if the article is on instrumentation. Describe instrumentation, conditions, etc., along a summary of results
achieved. You can imagine that the report you are writing might serve as the background introduction to an article on this subject. The text should refer to the literature that you have found by
citing the references by number from your list grouped at the end of the paper. Your paper should average, at a minimum, 3 citations per paragraph (for meets expectations), excluding the Introduction and Conclusion.
4. Your report should be typed in double-spaced format. The writing should be concise— no less than five pages, no more than six pages in length, not including the list of references or the cover
sheet. The last counted page should be at least ¾ filled. Diagrams and inserts should be placed at the end so as not to count as your text. Insert the date that the literature report was completed at
the bottom of the report. Font: no greater than 12, Margins: one inch on all sides. You may use up to three websites, properly referenced.
5. Turn in your completed literature review by 7/26/07, no later than 5:00 pm by sending a copy of the report (preferably in Microsoft Word format) as an email attachment to Mr. Yates. “The subject
line must be CH108 report.” Your report will be graded and returned to you by email by the last day of class, 8/2/07.
Problems: You are expected to spend a minimum of three hours a day on problem solving (the class goes very fast and you must keep up).
It is well known to the scientific community that by engaging in hands-on, intellectually stimulating activities and by encouraging people (i.e., students) to ask questions and think critically,
learning science becomes enjoyable and exciting. Hence, it is recommended that you do all the problems at the end of each chapter. It is highly recommended that you do all the odd problems at the end
of each chapter. The answers to these are in the back of the text and the solutions are in the Student Solution Manual. See homework discussion above for the problems that need to be turned in.
Doing problem sets: Some students do not read the assigned material but go right to the problems and try to solve them. When they cannot solve a particular problem, they go back into the chapter and find a worked example that is similar enough to the assigned problem to provide the necessary parallel steps. This method of using the textbook neither saves time nor gives the best preparation for the exam. It simply treats the problem sets as homework and will not help you understand the concepts you need to know.
Equations, formulas, diagrams, tables, illustrations, etc: Paying attention to equations: highlighting each equation as it appears becomes mechanical and is unlikely to aid in comprehension. On the
other hand, writing out the equations to explain what they mean and performing the mathematical manipulations described can be quite useful to you. Drawing the essential parts of a diagram can also
be useful. Remember that when you see a diagram, a formula, or an equation you should expect to spend more time with it than regular text, not less time. That is because diagrams, formulas, and
equations are usually shorthand methods of expressing ideas.
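For example (an illustration added here, not part of the syllabus): the Nernst equation that appears in the grading rubric, E = E0 - (RT/nF)·ln(Q), is exactly this kind of shorthand, and unpacking it on paper or in a few lines of code is the sort of daily practice described above. A rough Python sketch, where E0 = 1.10 V and n = 2 are standard textbook values for a Daniell cell and the concentrations are invented:

    # Hedged example: Daniell cell (Zn/Cu) potential at 25 C via the Nernst equation.
    import math

    R, T, F = 8.314, 298.15, 96485.0   # J/(mol K), K, C/mol
    E0, n = 1.10, 2                    # standard potential (V), electrons transferred
    Q = 0.10 / 1.0                     # [Zn2+]/[Cu2+], chosen arbitrarily

    E = E0 - (R * T / (n * F)) * math.log(Q)
    print(round(E, 3))                 # ~1.130 V: diluting Zn2+ raises E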
Literature Review Rubric
Lab Report Rubric
Periodic Table
Conversion Factors
Concentrations of Acids
Competency: Evaluation (Outcomes 1, 2, 4, 6, 7)
Exceeds Expectation (3):
• Assess a problem in kinetics to determine what information is known/unknown, relevant/extraneous, and derive the necessary equations.
• Assess a problem in solution chemistry and determine the type of problem (colligative property, concentration, acid/base, equilibrium) and the equations needed to solve the problem.
• Assess a problem in equilibrium to determine what type of equilibrium (acid/base, buffer, solubility, complex-ion), relevant information, and the necessary equations to solve the problem.
• Assess a problem in electrochemistry to determine the type of cell (and be able to draw the cell showing all components), evaluate the information given and what is needed, and determine and/or derive the equations necessary to solve the problem.
• Balance REDOX equations in neutral, acidic, and basic solutions and apply them to electrochemical cells.
• Draw and explain an electrochemical cell.
• Evaluate Gibbs free energy using the Nernst equation, free energy tables, and entropy-enthalpy tables.
Meets Expectation (2):
• Assess a problem in kinetics to determine what information is known/unknown, relevant/extraneous, and choose the necessary equations.
• Assess a problem in solution chemistry and determine the type of problem (3 from: colligative property, concentration, acid/base, equilibrium) and the equations needed to solve the problem.
• Assess a problem in equilibrium to determine what type of equilibrium (3 from: acid/base, buffer, solubility, complex-ion), relevant information, and the necessary equations to solve the problem.
• Assess a problem in electrochemistry to determine the type of cell (and be able to draw the cell showing all components), evaluate the information given and what is needed, and determine the equations necessary to solve the problem.
• Balance REDOX equations in neutral and acidic solution and apply them to electrochemical cells and thermodynamics.
• Draw and explain an electrochemical cell.
• Evaluate Gibbs free energy using the Nernst equation, free energy tables, and entropy-enthalpy tables.
Does Not Meet Expectation (1):
• Assess a problem in kinetics to determine what information is known/unknown and choose the necessary equations.
• Assess a problem in solution chemistry and determine the type of problem (2 from: colligative property, concentration, acid/base, equilibrium) and the equations needed to solve the problem.
• Assess a problem in equilibrium to determine what type of equilibrium (2 from: acid/base, buffer, solubility, complex-ion), relevant information, and the necessary equations to solve the problem.
• Assess a problem in electrochemistry to determine the type of cell (and be able to draw the cell showing all components), evaluate the information given and determine the equations necessary to solve the problem.
• Balance REDOX equations in neutral solution and apply them to electrochemical cells and thermodynamics.
• Draw and explain an electrochemical cell.
• Evaluate Gibbs free energy using the Nernst equation, and free energy tables or entropy-enthalpy tables.
No Evidence (0):
• Assess a problem in kinetics to determine what information is known/unknown and choose the necessary equations.
• Assess a problem in solution chemistry and determine the type of problem (1 from: colligative property, concentration, acid/base, equilibrium) and the equations needed to solve the problem.
• Assess a problem in equilibrium to determine what type of equilibrium (1 from: acid/base, buffer, solubility, complex-ion), relevant information, and the necessary equations to solve the problem.
• Be able to draw an electrochemical cell (showing all components), evaluate the information given, and determine the equations necessary to solve the problem.
• Balance REDOX equations in neutral solution.
• Draw and explain an electrochemical cell.
• Evaluate Gibbs free energy using free energy tables.

Competency: Synthesis (Outcomes 1, 2, 3, 4, 7)
Exceeds Expectation (3):
• Determine the rate of a reaction graphically and using initial rates.
• Show how the rate depends on concentration by deriving the necessary equations; propose a mechanism for a set of appropriate equations for a given problem.
• Mathematically manipulate a set of appropriate equations to fit the desired kinetic theory outcome and the information given.
• Combine equations showing how physical properties (MW, concentration, etc.) can be determined from colligative properties.
• Combine relevant equations in equilibria to obtain the solution to given questions.
• Combine the Nernst equation with those of equilibria and/or thermodynamics to obtain the solution to the question.
Meets Expectation (2):
• Determine the rate of a reaction graphically or using initial rates.
• Use equations that relate the rate to concentration.
• Propose a mechanism for a set of appropriate equations for a given problem.
• Mathematically manipulate a set of appropriate equations to fit the desired kinetic theory outcome and the information given.
• Use equations that show how physical properties (MW, concentration, etc.) can be determined from colligative properties.
• Combine relevant equations in equilibria to obtain the solution to given questions.
• Combine the Nernst equation with those of equilibria and/or thermodynamics to obtain the solution to the question.
Does Not Meet Expectation (1): do five of the items listed under Meets Expectation.
No Evidence (0): do three of the items listed under Meets Expectation.

Competency: Analysis (Outcomes 2, 3, 4, 5, 6, 7). Skill list:
• Determine the rate law for a given set of data.
• Propose kinetic mechanisms.
• Relate the half-life of a reaction to the rate constant.
• Use the Arrhenius equation and its graphical interpretation.
• Write equilibrium-constant expressions.
• Identify acid and base species.
• Identify Lewis acids/bases.
• Write solubility product expressions.
• Determine the direction of spontaneity from electrode potentials.
• Predict the half-reaction in aqueous electrolysis.
• Analyze the outcome of a reaction predicted by ΔG.
Exceeds Expectation (3): be able to do all of the above. Meets Expectation (2): be able to do nine. Does Not Meet Expectation (1): be able to do seven. No Evidence (0): be able to do five.

Competency: Application (Outcomes 2, 3, 4, 5, 6, 7). Skill list:
• Write the mechanism for a multi-step reaction.
• Calculate solution concentration.
• Convert concentration units.
• Calculate vapor pressure lowering, boiling point elevation, freezing point depression, osmotic pressure, and molecular weights.
• Determine colligative properties of ionic solutions.
• Apply stoichiometry to an equilibrium mixture.
• Obtain an equilibrium constant from reaction composition.
• Obtain one equilibrium concentration given the others and K.
• Calculate [H+], [OH-], and pH.
• Determine Ka or Kb from the solution pH.
• Calculate concentrations of species in a weak acid (or base) using Ka or Kb.
• Predict whether a salt solution is acidic, basic, or neutral.
• Calculate concentrations of species in a salt solution.
• Calculate the common-ion effect on acid ionization.
• Calculate the pH of a buffer solution, of a buffer when a strong acid or strong base is added, and at the equivalence point in the titration of a weak acid by a strong base.
• Calculate Ksp from the solubility, or vice versa.
• Calculate the solubility of a slightly soluble salt in a solution of a common ion.
• Predict whether precipitation will occur.
• Separate metal ions by sulfide precipitation.
• Calculate the emf and Gibbs free energy change from standard potentials.
• Calculate the equilibrium constant from cell emf.
Exceeds Expectation (3): be able to do all of the above. Meets Expectation (2): be able to do 19. Does Not Meet Expectation (1): be able to do 16. No Evidence (0): be able to do 13.

Competency: Content of Communication (Outcomes 1, 2, 3, 4, 5, 6, 7)
Exceeds Expectation (3): Illustrate a complete understanding of a problem by neatly and in an orderly manner presenting the solution, showing equations, derivations, and insertions, and explaining all steps.
Meets Expectation (2): Illustrate an understanding of a problem by neatly and in an orderly manner presenting the solution, showing equations, insertions, and derivations.
Does Not Meet Expectation (1): Illustrate an understanding of a problem by neatly and in an orderly manner presenting the solution, showing equations and insertions.
No Evidence (0): Illustrate an understanding of a problem by neatly and in an orderly manner presenting the solution, showing insertions in an unspecified equation.

Competency: Technical Skill in Communicating (Outcomes 1, 2, 3, 4, 5, 6, 7)
Exceeds Expectation (3): Interpret graphical representations of data; create spreadsheets to evaluate data; graph data.
Meets Expectation (2): Interpret graphical representations of data; create spreadsheets to evaluate data; graph data.
Does Not Meet Expectation (1): Interpret graphical representations of data; graph data.
No Evidence (0): Graph data.

Competency: First Literacies (or Disciplinary Competency) (Outcome 8)
Exceeds Expectation (3): Demonstrate a knowledge of inorganic and organic nomenclature; demonstrate a complete knowledge of the periodic table.
Meets Expectation (2): Demonstrate a knowledge of inorganic nomenclature (nonmetals and metals); demonstrate a knowledge of periodic trends.
Does Not Meet Expectation (1): Demonstrate a knowledge of inorganic nomenclature (metals); demonstrate a knowledge of two periodic trends.
No Evidence (0): Demonstrate a knowledge of atomic names; demonstrate a knowledge of one periodic trend.

Competency: Second Literacies (or Disciplinary Competency) (Outcome 7)
Exceeds Expectation (3): Demonstrate the ability to make and dilute solutions quantitatively and perform the necessary calculations for any concentration.
Meets Expectation (2): Demonstrate the ability to make and dilute solutions quantitatively and perform the necessary calculations for molarity, molality, and normality.
Does Not Meet Expectation (1): Demonstrate the ability to make and dilute solutions quantitatively and perform the necessary calculations for molarity.
No Evidence (0): Demonstrate the ability to make and dilute solutions.
This material is protected by copyright and can not be reused without author permission.
Last Updated:7/2/2007 1:00:57 PM | {"url":"https://app.park.edu/syllabus/syllabus.aspx?ID=42257","timestamp":"2014-04-17T09:36:30Z","content_type":null,"content_length":"226793","record_id":"<urn:uuid:27575a97-4591-4258-94ba-aab32afea67f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Animation: derivation of the volume of a sphere
The volume of a sphere is mapped into an equivalent pyramid to illustrate how the formula
sphere volume = 4/3 π r^3
can be understood.
The animation starts with a translucent sphere (pale orange). A dark red wedge is shown running from the centre of the sphere to its back surface. This wedge represents a small segment of the sphere
volume. The volume of that wedge would be its base area (i.e. the part of the sphere surface that the wedge occupies) multiplied by the radius of the sphere (i.e. the length of the wedge) and divided
by three. This is because the wedge is a thin pyramid whose volume = base x height/3. The volume of the sphere would be equal to the sum of the volumes of all such wedges that would fill the sphere
(to make the result smooth an infinite number of such pyramids would be needed). Now, all such wedges would be pyramids whose height = r. Their combined base area would be the same as the surface
area of the entire sphere. Consequently, the volume of the sphere would be the same as a pyramid whose base area = the sphere surface area and whose height = the radius of the sphere. This gives the
formula: volume = 4/3 π r^3. The remainder of the animation is devoted to creating this equivalent pyramid.
The sphere's surface is then divided into latitude rings; these rings all open up, creating a curved surface shown in pink. This then uncurls to map on to an imaginary vertical plane that touches the back of the sphere. The mapped surface (i.e. the
sphere surface area) looks vaguely leaf shaped. It is actually formed from cosine curves. The challenge is then to convert this complex shape into a rectangle to determine its area. To do this, those
parts of the mapped surface that are north and south of the sphere are replaced by translucent red boxes. The remaining leaf shape thus has a maximum height that is the same as the height of the
sphere (i.e. 2r) and a maximum length that is the same as the circumference of the sphere (i.e. 2 π r). These boxes then migrate across to fill up the gaps and show that the area of the map above the
height r is the same as the gap below. The resulting rectangle (now in pale yellow green) has an area of 4 π r^2. This rectangle then morphs into a pyramid whose height is r. So we get: volume of
pyramid = base (4 π r^2) x Height (r) / 3 = 4/3 π r^3
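A quick numerical cross-check (added here; it is not part of the animation itself) makes the "sum of thin pyramids" argument concrete: tile the sphere's surface with small patches of area r^2 sin θ dθ dφ, treat each patch as the base of a pyramid of height r, and add them up.

    # Minimal sketch: summing pyramid volumes over the sphere's surface.
    import math

    r = 2.0
    n_theta, n_phi = 400, 400
    d_theta = math.pi / n_theta
    d_phi = 2 * math.pi / n_phi

    volume = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta                      # polar angle of this patch row
        patch_area = r * r * math.sin(theta) * d_theta * d_phi
        volume += n_phi * patch_area * r / 3.0           # pyramid: base x height / 3

    print(volume, 4.0 / 3.0 * math.pi * r ** 3)          # both ~33.510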
The two thirds relationships between volumes and areas of spheres and cylinders:
The total surface area of the cylinder (including its base and lid) is given by:
wall of cylinder = 4 π r^2
lid of cylinder = π r^2 (see area of circle animation)
base of cylinder = π r^2
TOTAL AREA = 6 π r^2
so the area of the cylinder is 6 π r^2 and that of its inscribed sphere is 4 π r^2. In other words, the sphere has 4/6 or two thirds the area of its enclosing cylinder. Now, this is interesting because it is the same ratio as the volume of a sphere to the volume of its circumscribing cylinder:
volume of sphere = 4/3 π r^3
volume of cylinder = lid area (π r^2) × height (2r) = 2 π r^3
Archimedes discovered these relationships between a cylinder and its inscribed sphere.
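Both two-thirds ratios are easy to confirm numerically (again an added sketch, not from the original page); r cancels, so any positive radius works:

    from math import pi
    r = 1.7  # arbitrary positive radius
    print((4 * pi * r**2) / (6 * pi * r**2))       # area ratio   -> 0.666...
    print((4/3 * pi * r**3) / (2 * pi * r**3))     # volume ratio -> 0.666...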
Try our circle and area calculator to derive various values from different starting points. | {"url":"http://www.rkm.com.au/ANIMATIONS/animation-Sphere-Volume-Derivation.html","timestamp":"2014-04-20T20:56:43Z","content_type":null,"content_length":"25445","record_id":"<urn:uuid:7b306900-a3b2-4650-91c7-3e253f150d91>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Harmonic Series
Date: 04/28/2003 at 08:40:35
From: John
Subject: The harmonic series
Prove that if in the sum
1 + 1/2 + 1/3 + ... + 1/n
we throw out each term that contains 9 as a digit in its denominator,
then the sum of the remaining terms is < 80.
As n tends to infinity surely the sum will also tend to infinity, as
the classic harmonic series does. Perhaps induction on n? The next
term will have two possibilities (i.e. either it contains a 9 in the
denominator, or it doesn't). In the case where a 9 is present, the
inductive hypothesis holds (assuming of course the base case works),
but for the latter case I have difficulty showing this to be true.
Date: 04/28/2003 at 11:28:23
From: Doctor Rob
Subject: Re: The harmonic series
Thanks for writing to Ask Dr. Math, John.
Contrary to what you might think, the series you get by deleting all
those terms *does* converge, and does *not* approach infinity. This is
because as the term numbers increase, the terms remaining comprise a
decreasing fraction of all the terms.
Partition the series into parts according to how many digits there are
in the denominators. The first part is
1/1 + 1/2 + ... + 1/8.
It has 8 terms. Each term of this part is less than or equal to 1; all
but the first are strictly less. The sum of this part is thus less
than 8. (Actually it is even smaller, less than 2.72.)
The second part is
1/10 + 1/11 + ... + 1/88.
It has 72 = 8*9 terms, because the first digit can be 1 through 8 (8
choices) and the second can be 0 through 8 (9 choices). Each term of
this part is less than or equal to 1/10; all but the first are
strictly less. The sum of this part is thus less than 8*9*(1/10).
The third part is
1/100 + 1/101 + ... + 1/888.
It has 8*9^2 = 648 terms, because there are 8 choices for the first
digit and 9 choices for both the second and third digits. The terms
are each <= 1/100; all but the first are < 1/100. The sum of this part
is thus less than 8*9^2*(1/100).
In general, the nth part has 8*9^(n-1) terms. They are all <=
1/10^(n-1), and all but the first are <. That means that the sum you
seek S satisfies
S < 8*1 + 8*9*1/10 + 8*9^2*1/100 + 8*9^3*1/1000 + ...
= 8 + 8*(9/10) + 8*(9/10)^2 + 8*(9/10)^3 + ...
This is a geometric series with first term 8 and common ratio 9/10, so its sum is 8/(1 - 9/10) = 80. Hence S < 80, as required.
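A short numerical sketch (my addition, not part of Dr. Rob's reply) confirms both the bound and how small the actual sum is:

    # The geometric bound: 8 + 8*(9/10) + 8*(9/10)^2 + ... = 8/(1 - 9/10) = 80.
    print(8 / (1 - 9 / 10))                                   # 80.0

    # Partial sum of the series with all '9'-containing denominators removed.
    # The full series (the Kempner series) converges to about 22.92, so even
    # a million terms stays far below the bound of 80.
    print(sum(1 / n for n in range(1, 10 ** 6) if '9' not in str(n)))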
Feel free to write again if I can help further.
- Doctor Rob, The Math Forum
Date: 04/29/2003 at 06:49:15
From: John
Subject: The harmonic series
Thanks doctor, that's a great help. This is quite an interesting one,
though. Is it possible to generalise this to deleting terms containing
some other digit?
Thanks again.
Date: 04/29/2003 at 09:19:58
From: Doctor Rob
Subject: Re: The harmonic series
Yes, of course it is possible. Digits 1 through 8 would be exactly the
same. Digit 0 would be slightly different, because 0 cannot be the
leading digit of any denominator.
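The counting behind both cases is easy to check by brute force (an added sketch, not part of the original answer): for any nonzero digit there are 8*9^(n-1) valid n-digit denominators, while for digit 0 there are 9*9^(n-1) = 9^n, since 0 can never be a leading digit.

    # Count n-digit numbers avoiding digit d, versus the closed forms above.
    def count(n, d):
        return sum(1 for k in range(10 ** (n - 1), 10 ** n)
                   if str(d) not in str(k))

    for n in (1, 2, 3):
        print(count(n, 9), 8 * 9 ** (n - 1))   # digit 9: 8, 72, 648
        print(count(n, 0), 9 ** n)             # digit 0: 9, 81, 729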
Feel free to write again if I can help further.
- Doctor Rob, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/62856.html","timestamp":"2014-04-20T22:06:26Z","content_type":null,"content_length":"8153","record_id":"<urn:uuid:d2be849f-3db0-4e80-947d-030d6a592e6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
, 2002
Cited by 109 (10 self)
The next high-priority phase of human genomics will involve the development of a full Haplotype Map of the human genome [12]. It will be used in large-scale screens of populations to associate
specific haplotypes with specific complex genetic-influenced diseases. A prototype Haplotype Mapping strategy is presently being finalized by an NIH working group. The biological key to that strategy
is the surprising fact that genomic DNA can be partitioned into long blocks where genetic recombination has been rare, leading to strikingly fewer distinct haplotypes in the population than
previously expected [12, 6, 21, 7]. In this paper
- Journal of Bioinformatics and Computational Biology , 2003
"... Each person’s genome contains two copies of each chromosome, one inherited from the father and the other from the mother. A person’s genotype specifies the pair of bases at each site, but does
not specify which base occurs on which chromosome. The sequence of each chromosome separately is called a h ..."
Cited by 68 (10 self)
Add to MetaCart
Each person’s genome contains two copies of each chromosome, one inherited from the father and the other from the mother. A person’s genotype specifies the pair of bases at each site, but does not
specify which base occurs on which chromosome. The sequence of each chromosome separately is called a haplotype. The determination of the haplotypes within a population is essential for understanding
genetic variation and the inheritance of complex diseases. The haplotype mapping project, a successor to the human genome project, seeks to determine the common haplotypes in the human population.
Since experimental determination of a person’s genotype is less expensive than determining its component haplotypes, algorithms are required for computing haplotypes from genotypes. Two observations
aid in this process: first, the human genome contains short blocks within which only a few different haplotypes occur; second, as suggested by Gusfield, it is reasonable to assume that the haplotypes
observed within a block have evolved according to a perfect phylogeny, in which at most one mutation event has occurred at any site, and no recombination occurred at the given region. We present a
simple and efficient polynomial-time algorithm for inferring haplotypes from the genotypes of a set of individuals assuming a perfect phylogeny. Using a reduction to 2-SAT we extend this algorithm to
handle constraints that apply when we have genotypes from both parents and child. We also present a hardness result for the problem of removing the minimum number of individuals from a population to
ensure that the genotypes of the remaining individuals are consistent with a perfect phylogeny. Our algorithms have been tested on real data and give biologically meaningful results. Our webserver
"... In recent years, a variety of graph optimization problems have arisen in which the graphs involved are much too large for the usual algorithms to be effective. In these cases, even though we are
not able to examine the entire graph (which may be changing dynamically), we would still like to deduce v ..."
Cited by 13 (2 self)
Add to MetaCart
In recent years, a variety of graph optimization problems have arisen in which the graphs involved are much too large for the usual algorithms to be effective. In these cases, even though we are not
able to examine the entire graph (which may be changing dynamically), we would still like to deduce various properties of it, such as the size of a connected component, the set of neighbors of a
subset of vertices, etc. In this paper, we study a class of problems, called distance realization problems, which arise in the study of Internet data traffic models. Suppose we are given a set S of
terminal nodes, taken from some (unknown) weighted graph. A basic problem is to reconstruct a weighted graph G including S with possibly additional vertices, that realizes the given distance matrix
for S. We will first show that this problem is not only difficult but the solution is often unstable in the sense that even if all distances between nodes in S decrease, the solution can increase by
a factor proport...
, 2007
"... Matroids were introduced by Whitney in 1935 to try to capture abstractly the essence of dependence. Whitney’s definition embraces a surprising diversity of combinatorial structures. Moreover,
matroids arise naturally in combinatorial optimization since they are precisely the structures for which th ..."
Cited by 10 (0 self)
Add to MetaCart
Matroids were introduced by Whitney in 1935 to try to capture abstractly the essence of dependence. Whitney’s definition embraces a surprising diversity of combinatorial structures. Moreover,
matroids arise naturally in combinatorial optimization since they are precisely the structures for which the greedy algorithm works. This survey paper introduces matroid theory, presents some of the
main theorems in the subject, and identifies some of the major problems of current research interest.
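The greedy connection mentioned in this abstract is easy to state in code (a generic sketch added here, not taken from the paper): given an independence oracle, picking elements in decreasing weight order yields a maximum-weight independent set precisely when the oracle describes a matroid.

    # Greedy maximum-weight basis of a matroid via an independence oracle.
    def greedy_basis(elements, weight, is_independent):
        basis = []
        for e in sorted(elements, key=weight, reverse=True):
            if is_independent(basis + [e]):
                basis.append(e)
        return basis

    # Example: uniform matroid U(2,4) -- any set of size <= 2 is independent.
    print(greedy_basis([1, 2, 3, 4], weight=lambda e: e,
                       is_independent=lambda s: len(s) <= 2))   # [4, 3]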
- JOURNAL OF DISCRETE ALGORITHMS , 2006
"... We introduce an NP-complete special case of the Weighted Set Cover problem and show its fixed-parameter tractability with respect to the maximum subset size, a parameter that appears to be small
in relevant applications. More precisely, in this practically relevant variant we require that the given ..."
Cited by 8 (4 self)
Add to MetaCart
We introduce an NP-complete special case of the Weighted Set Cover problem and show its fixed-parameter tractability with respect to the maximum subset size, a parameter that appears to be small in
relevant applications. More precisely, in this practically relevant variant we require that the given collection C of subsets of some base set S should be "tree-like." That is, the subsets in C can
be organized in a tree T such that every subset one-to-one corresponds to a tree node and, for each element s of S, the nodes corresponding to the subsets containing s induce a subtree of T. This is
equivalent to the problem of finding a minimum edge cover in an edge-weighted acyclic hypergraph. Our main result is an algorithm running in O(3^k · mn) time where k denotes the maximum subset size, n := |S|, and m := |C|. The algorithm also implies a fixed-parameter tractability result for the NP-complete Multicut in Trees problem, complementing previous approximation results. Our results find
applications in computational biology in phylogenomics and for saving memory in tree decomposition based graph algorithms.
- IEEE Trans. Inform. Theory. ArXiv
"... ABSTRACT. The decomposition theory of matroids initiated by Paul Seymour in the 1980’s has had an enormous impact on research in matroid theory. This theory, when applied to matrices over the
binary field, yields a powerful decomposition theory for binary linear codes. In this paper, we give an over ..."
Cited by 7 (3 self)
Add to MetaCart
ABSTRACT. The decomposition theory of matroids initiated by Paul Seymour in the 1980’s has had an enormous impact on research in matroid theory. This theory, when applied to matrices over the binary
field, yields a powerful decomposition theory for binary linear codes. In this paper, we give an overview of this code decomposition theory, and discuss some of its implications in the context of the
recently discovered formulation of maximum-likelihood (ML) decoding of a binary linear code over a discrete memoryless channel as a linear programming problem. We translate matroid-theoretic results
of Grötschel and Truemper from the combinatorial optimization literature to give examples of non-trivial families of codes for which the ML decoding problem can be solved in time polynomial in the
length of the code. One such family is that consisting of codes C for which the codeword polytope is identical to the Koetter-Vontobel fundamental polytope derived from the entire dual code C ⊥.
However, we also show that such families of codes are not good in a coding-theoretic sense — either their dimension or their minimum distance must grow sub-linearly with codelength. 1.
, 1995
"... We focus on combinatorial problems arising from symmetric and skew-symmetric matrices. For much of the thesis we consider properties concerning the principal submatrices. In particular, we are
interested in the property that every nonsingular principal submatrix is unimodular; matrices having this p ..."
Cited by 5 (0 self)
Add to MetaCart
We focus on combinatorial problems arising from symmetric and skew-symmetric matrices. For much of the thesis we consider properties concerning the principal submatrices. In particular, we are
interested in the property that every nonsingular principal submatrix is unimodular; matrices having this property are called principally unimodular. Principal unimodularity is a generalization of
total unimodularity, and we generalize key polyhedral and matroidal results on total unimodularity. Highlights include a generalization of Hoffman and Kruskal's result on integral polyhedra, a
generalization of Tutte's results on regular matroids, and partial results toward a decomposition theorem. Quite separate from the study of principal unimodularity we consider a particular
skew-symmetric matrix of indeterminates associated with a graph. This matrix, called the Tutte matrix, was introduced by Tutte to study matchings. By considering the rank of an arbitrary submatrix of
the Tutte matrix we disco...
"... In this dissertation, integer programming models are applied to combinatorial problems in air traffic flow management. For the two problems studied, models are developed and analyzed both
theoretically and computationally. This dissertation makes contributions to integer programming while providing ..."
Cited by 4 (0 self)
Add to MetaCart
In this dissertation, integer programming models are applied to combinatorial problems in air traffic flow management. For the two problems studied, models are developed and analyzed both
theoretically and computationally. This dissertation makes contributions to integer programming while providing efficient tools for solving air traffic flow management problems. Currently, a
constrained arrival capacity situation at an airport in the United States is alleviated by holding inbound aircraft at their departure gates. The ground holding problem (GH) decides which aircraft to
hold on the ground and for how long. This dissertation examines the GH from two perspectives. First, the hubbing operations of the airlines are considered by adding side constraints to GH. These
constraints enforce the desire of the airlines to temporally group banks of flights. Five basic models and several variations of the ground holding problem with banking constraints (GHB) are
presented. A particularly strong, facet-inducing model of the banking constraints is presented which allows one to
- Combinatorica , 1996
"... Given a matroid M with distinguished element e, a port oracle with respect to e reports whether or not a given subset contains a circuit that contains e. The first main result of this paper is
an algorithm for computing an e-based ear decomposition (that is, an ear decomposition every circuit of whi ..."
Cited by 4 (0 self)
Add to MetaCart
Given a matroid M with distinguished element e, a port oracle with respect to e reports whether or not a given subset contains a circuit that contains e. The first main result of this paper is an
algorithm for computing an e-based ear decomposition (that is, an ear decomposition every circuit of which contains element e) of a matroid using only a polynomial number of elementary operations and
port oracle calls. In the case that M is binary, the incidence vectors of the circuits in the ear decomposition form a matrix representation for M. Thus, this algorithm solves a problem in
computational learning theory; it learns the class of binary matroid port (BMP) functions with membership queries in polynomial time. In this context, the algorithm generalizes results of Angluin,
Hellerstein, and Karpinski [1], and Raghavan and Schach [17], who showed that certain subclasses of the BMP functions are learnable in polynomial time using membership queries. The second main result
of this paper is an algorithm for testing independence of a given input set of the matroid M. This algorithm, which uses the ear decomposition algorithm as a subroutine, uses only a polynomial number
of elementary operations and port oracle calls. The algorithm proves a constructive version of an early theorem of Lehman [13], which states that the port of a connected matroid uniquely determines
the matroid.
, 1994
"... This thesis is about multicommodity flows and their use in designing approximation algorithms for problems involving cuts in graphs. In a ground-breaking work Leighton and Rao [34] showed an
approximate max-flow min-cut theorem for uniform multicommodity flow and used this to obtain an approximation ..."
Cited by 3 (0 self)
Add to MetaCart
This thesis is about multicommodity flows and their use in designing approximation algorithms for problems involving cuts in graphs. In a ground-breaking work Leighton and Rao [34] showed an
approximate max-flow min-cut theorem for uniform multicommodity flow and used this to obtain an approximation algorithm for the flux of a graph. We consider the multicommodity flow problem in which
the object is to maximize the sum of the flows routed and prove the following approximate max-flow min-multicut theorem: min-multicut / O(log k) ≤ max-flow ≤ min-multicut, where k is the number of
commodities. Our proof is based on a rounding technique from [34]. Further, we show that this theorem is tight. For a multicommodity flow instance with specified demands, the ratio of the maximum
concurrent flow to the sparsest cut was shown to be bounded by O(log^2 k) [30, 57, 17, 47]. We use ideas from our proof of the approximate max-flow min-multicut theorem and a geometric scaling
technique from [1] to provi... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=894375","timestamp":"2014-04-24T06:40:50Z","content_type":null,"content_length":"40438","record_id":"<urn:uuid:ac6ba702-c177-450d-a261-8e67ca02dcf3>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expressing Arccos in Terms of Arctan
Date: 02/24/97 at 13:07:04
From: John Neubert
Subject: arccos func. in QBasic
The following is for my son. He is doing a science project involving
asteroidal orbits for which he needs to create a Basic (QBasic)
program to run his simulation. He has been able to handle all the
math and programming, but has run into a snag in that Basic only has
arctan as a built-in function (ATN). The books all say that from
this, one can create the other functions. My son needs the equation
for arccos using arctan in its definition. He has come up with the
following (HOWEVER, after coding, it does not give a valid answer):
cos x = 1 / (sec x)
(tan x)^2 + 1 = (sec x)^2
cos x = 1 / ((tan x)^2 + 1)^.5
arccos x = 1 / ((ATN x)^2 + 1)^.5
It has been many years since I've worked with this, but his first
three steps seem valid. It's the last one that loses me where he
substitutes arccos for cos and arctan (ATN) for tan. He says this is
valid. Do you see any errors in the above?
When he programs it as a function with the QBasic "Function" command, it produces invalid results. Can you help him?
Thank you very much.
Date: 03/07/97 at 19:21:32
From: Doctor Luis
Subject: Re: arccos func. in QBasic
Obtaining the other inverse trig functions from arctan x is not
difficult. Drawing a right triangle with unit hypotenuse can help
visualize the derivation:
           .
          /|
         / |
        /  |
    1  /   | sqrt(1-x^2)   [sqrt(z) means z^0.5]
      /    |
     /     |
    A------B
        x
Now, consider both the cosine and the tangent of angle A:
cos A = x/1 = x
tan A = (sqrt(1-x^2))/x
Clearly, A = arccos(x) and also A = arctan((sqrt(1-x^2))/x).
Therefore, arccos(x) = arctan((sqrt(1-x^2))/x).
The above expression gives the arccos(x) function in terms of x (which
is given) and the arctan (or ATN) function.
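In code (a sketch added here, not part of the original reply; the thread was about QBasic, where ATN plays the role of Python's math.atan below, and the handling of x <= 0 is also my addition, since the triangle argument assumes x > 0):

    import math

    def arccos(x):
        if x == 0:
            return math.pi / 2
        a = math.atan(math.sqrt(1 - x * x) / x)
        return a if x > 0 else a + math.pi     # shift into arccos's range [0, pi]

    for x in (-0.5, 0.0, 0.3, 1.0):
        print(x, arccos(x), math.acos(x))      # the two columns should agree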
I hope this helped.
-Doctor Luis, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 03/07/97 at 19:13:38
From: Doctor Ken
Subject: Re: arccos func. in QBasic
Hi -
Your instincts are correct, but the last step isn't valid. If we
could do this kind of thing, we could do some pretty bizarre things
with math! For instance, your son's step was this:
cos x = 1 / ((tan x)^2 + 1)^.5
arccos x = 1 / ((ATN x)^2 + 1)^.5
Now, if we could substitute inverse functions on both sides of an
equation, we would be able to do things like this:
1 = 1
x + 1 = x + 1
x + 1 = (x - 1) + 2
Now plug in inverse functions - on the left, the inverse function
of x is x, and the inverse function of x-1 is x+1:
x + 1 = (x + 1) + 2
x + 1 = x + 3
1 = 3
So you can see that in general this technique won't produce valid
results. For a geometric demonstration of how you _can_ produce a
formula for arccos in terms of arctan, Dr. Luis' method (above) works
just fine.
-Doctor Ken, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/54020.html","timestamp":"2014-04-20T22:29:17Z","content_type":null,"content_length":"8275","record_id":"<urn:uuid:babe3efe-ce80-4fc7-a598-6cab61928b38>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
When Less Is More: Visualizing Basic Inequalities
The objective of this book is to illustrate how the use of visualization can be a powerful tool for better understanding of some basic mathematical inequalities. Drawing pictures is a well-known
method for problem solving, and the authors will convince you that the same is true when working with inequalities. They show how to produce figures in a systematic way for the illustration of
inequalities and open new avenues to creative ways of thinking and teaching. In addition, a geometric argument can not only show two things unequal, but also help the observer see just how unequal they are.
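A classic instance of what the book has in mind (this particular illustration is added here, not quoted from the text): for positive a and b, the arithmetic mean-geometric mean inequality (a + b)/2 ≥ √(ab) follows algebraically from (√a − √b)^2 ≥ 0, but it can also be seen. Build a semicircle on a diameter of length a + b, split into segments of lengths a and b; the radius is (a + b)/2, while the perpendicular raised at the splitting point meets the circle at height √(ab). Since a radius is never shorter than that perpendicular, the inequality follows, with equality exactly when a = b; the picture also shows how the gap grows as a and b move apart.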
Table of Contents
1. Representing positive numbers as lengths of segments
2. Representing positive numbers as areas or volumes
3. Inequalities and the existence of triangles
4. Using incircles and circumcircles
5. Using reflections
6. Using rotations
7. Employing non-isometric transformations
8. Employing graphs of functions
9. Additional topics
Solutions to the Challenges
Notation and symbols
About the Authors
Claudi Alsina was born on 30 January 1952 in Barcelona, Spain. He received his BA and PhD in mathematics from the University of Barcelona. His post-doctoral studies were at the University of
Massachusetts, Amherst. Claudi, Professor of Mathematics at the Technical University of Catalonia, has developed a wide range of international activities, research papers, publications and hundreds
of lectures on mathematics and mathematics education. His latest books include Associative Functions: Triangular Norms and Copulas with M.J. Frank and B. Schweizer, WSP, 2006; Math Made Visual:
Creating Images for Understanding Mathematics (with Roger B. Nelsen), MAA, 2006; Vitaminas Matematicas and El Club de la Hipotenusa, Ariel, 2008.
Roger B. Nelsen was born on 20 December 1942 in Chicago, Illinois. He received his BA in mathematics from DePauw University in 1964 and his PhD in mathematics from Duke University in 1969. Roger was
elected to Phi Beta Kappa and Sigma Xi. His previous books include Proofs Without Words: Exercises in Visual Thinking, MAA 1993; An Introduction to Copulas, Springer, 1999 (2nd ed. 2006); Proofs
Without Words II: More Exercises in Visual Thinking, MAA, 2000; and Math Made Visual: Creating Images for Understanding Mathematics (with Claudi Alsina), MAA, 2006. | {"url":"http://www.maa.org/publications/books/when-less-is-more-visualizing-basic-inequalities","timestamp":"2014-04-18T21:00:03Z","content_type":null,"content_length":"94642","record_id":"<urn:uuid:7d2d3425-2385-4d67-9445-1de457817a1d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abelian subgroup of maximum order which is normal
This page describes a subgroup property obtained as a conjunction (AND) of two more fundamental subgroup properties: abelian subgroup of maximum order and abelian normal subgroup of group of prime power order.
Suppose G is a group of prime power order, i.e., a p-group for some prime number p. A subgroup H of G is termed an abelian subgroup of maximum order which is normal if H is an abelian subgroup of maximum order in G and H is a normal subgroup of G.
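For example (an illustration added for concreteness): in the dihedral group of order 8, no abelian subgroup has order greater than 4, the cyclic subgroup of order 4 attains that bound, and, being of index 2, it is automatically normal; it is therefore an abelian subgroup of maximum order which is normal.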
Miami Springs, FL Geometry Tutor
Find a Miami Springs, FL Geometry Tutor
...I am experienced in preparing and editing APA style papers on any subject and of any length. My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be
explained and applied. We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid.
46 Subjects: including geometry, Spanish, reading, writing
...He has a unique ability to relate and communicate well with his students. Algebra is a building subject. Each lesson builds on what was learned before. If you don't understand one of the
foundational steps, you will get lost later on.
3 Subjects: including geometry, algebra 1, prealgebra
...I have a yoga certification and have been teaching yoga since 2003. I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students for the SAT Math for the
last two years.
16 Subjects: including geometry, Spanish, chemistry, biology
...Also, every good scientist has to come to the realization that s/he is ignorant about more things than s/he knows and be excited by that ignorance. "Knowledge is a big subject," says Stuart
Firestein of Columbia University, "but ignorance is a bigger one. And it is ignorance--not knowledge--that...
61 Subjects: including geometry, English, Spanish, reading
...For every session, I will prepare in advance of the session with any information I am provided, and will closely go over any problems the student is having in order to both provide the student
with high quality answers as well as a long-lasting learning experience. I believe that tutoring should...
32 Subjects: including geometry, chemistry, physics, statistics
Mathematics for Computer science
Author Mathematics for Computer science
Joined: I have just started to study the "The Art of computer programming" by Knuth and I found lot of mathmatics. I want to understand the mathmatics. Can you tell me a book where from I can
Apr 20, start learning mathmatics for this book.
Posts: 26 Thanks,
Mar 22, http://en.wikipedia.org/wiki/Concrete_Mathematics is a good book.
27 Ping & DNS - updated with new look and Ping home screen widget
deeps sinha (joined Apr 20, 2012; posts: 26) wrote:

Thanks for the advice; it's just that I am in doubt whether I shall be able to understand it. I want a book which will teach the basics as well.

Thanks,
Deep
Anayonkar Shivalkar (joined Dec 08, 2010; posts: 1456) wrote:

deeps sinha wrote: I want a book which will teach the basics as well.

Basics - up to what level?

For computer science (basically algorithms), you almost won't need calculus. All you'll need is a decent book on discrete mathematics, and even from those books you'll only need to understand a few topics: combinatorics, probability, number theory (just intro level), set theory, graph theory, basic data structures, asymptotic notations, etc.
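Just to illustrate how directly those topics map to code (this is only a sketch made up for this post, not an example from any of the books below, and the class and method names are invented), here is the combinatorics staple - the binomial coefficient C(n, k) - in Java:

    // Illustrative sketch only: names are invented for this post.
    // Computes the binomial coefficient C(n, k) iteratively, using the identity
    // C(n, i) = C(n, i - 1) * (n - i + 1) / i. Each division is exact, so the
    // result stays correct as long as it fits in a long.
    public class Binomial {
        public static long choose(int n, int k) {
            if (k < 0 || k > n) return 0;
            k = Math.min(k, n - k); // C(n, k) == C(n, n - k); use the smaller k
            long result = 1;
            for (int i = 1; i <= k; i++) {
                result = result * (n - i + 1) / i;
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(choose(5, 2));  // prints 10
            System.out.println(choose(52, 5)); // prints 2598960 (5-card poker hands)
        }
    }

The books below derive exactly these identities; the code is the easy part once the counting argument is understood.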
Below are a few authors with good books (I remember them by author rather than by title):
C. L. Liu (short and sweet - one of my favorite books)
Tremblay & Manohar (especially good for set theory and graph theory)
Kenneth Rosen (a heavy dose of discrete mathematics - contains lots of examples and exercises)
Cormen (this is for computer algorithms, but some of the basic mathematics is covered in it as well)
Lipschutz (a very nice treatment of probability and counting, i.e. permutations and combinations)
Knuth (he has written a book on discrete mathematics - I guess the name is 'Concrete Mathematics')

I hope this helps.

Anayonkar Shivalkar (SCJP, SCWCD, OCMJD, OCEEJBD)
A reply (joined Apr 06; posts: 7):

Anayonkar Shivalkar wrote: Basics - up to what level?

And from what level as well? How much maths have you studied already?
A Sheriff (joined Sep 28, 2004; posts: 18103) wrote:

Anayonkar Shivalkar wrote: Basics - up to what level? For computer science (basically algorithms), you almost won't need calculus. All you'll need is a decent book on discrete mathematics. ...

As a side story, I once had to actually implement an algorithm that is based on calculus. We had to control something with discrete values (the number of VMware VMs), based on something(s) whose value was constantly changing due to lots of factors.

So, I implemented a PID controller (http://en.wikipedia.org/wiki/PID_controller), and my calculus was rusty!!! ...
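For the curious, the discrete form of the update is short. The sketch below is only a textbook-style illustration based on the Wikipedia description - the class and field names are invented here, and this is not the actual code from that project:

    // Rough sketch of a discrete PID controller (illustration only).
    public class PidController {
        private final double kp, ki, kd; // proportional, integral, and derivative gains
        private double integral = 0.0;
        private double previousError = 0.0;

        public PidController(double kp, double ki, double kd) {
            this.kp = kp;
            this.ki = ki;
            this.kd = kd;
        }

        // setpoint: the target value; measured: the latest reading;
        // dt: seconds elapsed since the previous call (must be > 0)
        public double update(double setpoint, double measured, double dt) {
            double error = setpoint - measured;
            integral += error * dt;                           // running approximation of the integral term
            double derivative = (error - previousError) / dt; // finite-difference approximation of the derivative term
            previousError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    }

In a setup like the VM example, you would call update() once per sampling interval, then round and clamp the output to a valid number of VMs before acting on it.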
Books: Java Threads, 3rd Edition, Jini in a Nutshell, and Java Gems (contributor)
deeps sinha (joined Apr 20, 2012; posts: 26) wrote:

Thanks for the advice. It's been almost 10 years since I left mathematics (in college), and I wasn't good at it either; I hope you understand my situation. Whatever I study, I want to start from the basics and then move on to the advanced material. So, if you could also suggest some books on the basics, that would be very helpful.
Fred (joined Oct 02, 2003; posts: 10908) wrote:

The question remains - what do YOU consider the basics? What level of math are you comfortable with? Arithmetic? Algebra? Geometry? Trig? Calc? Diff-EQ?

If I needed to 'go back to the basics', I personally would start with calculus. However, if you are shaky on your algebra, you should start there. We can't advise you without getting some kind of idea of where you are. "Mathematics in college" is very different depending on whether you are majoring in drama, engineering, or math.

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
deeps sinha (joined Apr 20, 2012; posts: 26) wrote:

Fred wrote: The question remains - what do YOU consider the basics? What level of math are you comfortable with? Arithmetic? Algebra? Geometry? Trig? Calc? Diff-EQ?

I would say I am comfortable with arithmetic; I have forgotten the rest. I studied a BE in Computer Science (2000-2004).
A reply (joined Oct 13):

You still haven't said what sort of maths you need. Please read the replies again.
A Sheriff (joined Sep 28, 2004) wrote:

deeps sinha wrote: I would say I am comfortable with arithmetic; I have forgotten the rest. I studied a BE in Computer Science (2000-2004).

To add some color, Fred's listing is not just in a random order. In the US, algebra is learned in the 9th year (the beginning of high school), geometry in the 10th year, trigonometry (now called algebra II) in the 11th year, and finally, calculus (or pre-calculus) rounds off high school. This order is specified by the US Regents examinations. After that, at university, there really isn't any exact order -- you have calculus, differential equations, statistics, etc., depending on the type of degree you go for.
A reply:

So, basically... that list isn't chapters in a book. We are talking about years of study.