How Do You Graph a Line If You're Given the Slope and a Single Point?
You can't learn about linear equations without learning about slope. The slope of a line is the steepness of the line. There are many ways to think about slope. Slope is the rise over the run, the
change in 'y' over the change in 'x', or the gradient of a line. Check out this tutorial to learn about slope!
On The Boundary of P and NP (Spring 2011)
Lecturers: Irit Dinur and Uri Feige
Location: Ziskind 1
Time: Monday 11:00 - 13:00
The final assignment / exam is due July 22. Good luck!
Syllabus: We plan to discuss issues on the borderline between P and NP, including exponential time algorithms, algorithms for random instances, approximation algorithms, the PCP theorem and hardness of approximation, and the unique games conjecture.
Prerequisites: The course assumes previous basic knowledge in algorithms, computational complexity (in particular, NP-completeness and the notion of reductions among problems), probability theory
(only of finite probability spaces), and linear algebra.
Homework: The homework assignments are an important part of the course. They need to be handed in two weeks after they are given, they are a prerequisite for receiving credit in the course, and their
grades will determine a substantial part of the final grade.
Lecture notes for at least some of the lectures will become available on the web.
Requirements: Attendance, homework assignments, final project.
Lecture Notes:
Lecture #1 (March 14, 2011)
Related links:
A new book on approximation algorithms by Williamson and Shmoys.
Ryan O'Donnell's online course on Analysis of Boolean Functions.
Stability: stable
Maintainer: Patrick Perry <patperry@stanford.edu>
Read and write matrices and vectors in the Matrix Market format (see http://math.nist.gov/MatrixMarket/).
Matrix and vector type descriptors
data Type
Specifies what kind of object is stored in the file. Note that "vector" is a non-standard format.
Instances: Eq Type, Read Type, Show Type
data Field
Specifies the element type. Pattern matrices do not have any elements, only indices, and only make sense for coordinate matrices and vectors.
Instances: Eq Field, Read Field, Show Field
data Format
Specifies either sparse or dense storage. In sparse ("coordinate") storage, elements are given in (i,j,x) triplets for matrices (or (i,x) for vectors). Indices are 1-based, so that A(1,1) is the first element of a matrix, and x(1) is the first element of a vector.
In dense ("array") storage, elements are given in column-major order.
In both cases, each element is given on a separate line.
Instances: Eq Format, Read Format, Show Format
data MatrixType
Specifies any special structure in the matrix. For symmetric and hermitian matrices, only the lower-triangular part of the matrix is given. For skew-symmetric matrices, only the entries below the diagonal are given.
Instances: Eq MatrixType, Read MatrixType, Show MatrixType
Dense Vector I/O
hPutVector :: Show a => Handle -> Field -> Int -> [a] -> IO ()
Write a dense vector with the given dimension and elements to a file. If the field is given as Pattern, no elements are written, only the header and size.
hPutVectorWithDesc :: Show a => Handle -> String -> Field -> Int -> [a] -> IO ()
Write a dense vector along with a description, which is put in the comment section of the file.
hGetVector :: Read a => Handle -> Field -> IO (Int, Maybe [a])
Lazily read a dense vector from a file. The vector dimension and elements are returned. The file is closed when the operation finishes. If the field is Pattern, the elements list will be Nothing.
Sparse Vector I/O
hPutCoordVector :: Show a => Handle -> Field -> Int -> Int -> [(Int, a)] -> IO ()
Write a coordinate vector with the given dimension and size to a file. The indices are 1-based, so that x(1) is the first element of the vector. If the field is Pattern, only the indices are used.
hGetCoordVector :: Read a => Handle -> Field -> IO (Int, Int, Either [Int] [(Int, a)])
Lazily read a coordinate vector from a file. The vector dimension, size, and elements are returned. The file is closed when the operation finishes. If the field is given as Pattern, only a list of
indices is returned.
Dense Matrix I/O
hPutMatrix :: Show a => Handle -> Field -> MatrixType -> (Int, Int) -> [a] -> IO ()
Write a dense matrix with the given shape and elements in column-major order to a file. If the field is given as Pattern, no elements are written, only the header and size.
hGetMatrix :: Read a => Handle -> Field -> MatrixType -> IO ((Int, Int), Maybe [a])
Lazily read a dense matrix from a file, returning the matrix shape and its elements in column-major order. The file is closed when the operation finishes. If the field is given as Pattern, Nothing is
returned instead of an element list.
Sparse Matrix I/O
hPutCoordMatrix :: Show a => Handle -> Field -> MatrixType -> (Int, Int) -> Int -> [((Int, Int), a)] -> IO ()
Write a coordinate matrix with the given shape and size to a file. The indices are 1-based, so that A(1,1) is the first element of the matrix. If the field is Pattern, only the indices are used.
hGetCoordMatrix :: Read a => Handle -> Field -> MatrixType -> IO ((Int, Int), Int, Either [(Int, Int)] [((Int, Int), a)])
Lazily read a coordinate matrix from a file. The matrix shape, size, and elements are returned. The file is closed when the operation finishes. If the field is Pattern, only the indices are returned.
Banner Functions
hGetBanner :: Handle -> IO (Type, Format, Field, Maybe MatrixType)
Read the Matrix Market banner (including comments) from a file. The comments are discarded and the banner information is returned.
hPutComments :: Handle -> String -> IO ()
Write a string as a Matrix Market file comment. This prepends each line with '%' and then writes it out to the file.
hGetComments :: Handle -> IO String
Read the comments from a file, stripping the leading '%' from each line, until reaching a line that does not start with the comment character.
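As a sketch of how these functions fit together, here is a round trip through a dense vector file. The Field constructor name Real is an assumption based on the Matrix Market element types; it is not listed in the excerpt above, so check the package's Field definition before relying on it.

```haskell
import System.IO (IOMode(..), hClose, openFile)
import System.IO.MatrixMarket

main :: IO ()
main = do
  -- Write a 3-element dense vector, with a comment in the file header.
  hOut <- openFile "x.mtx" WriteMode
  hPutVectorWithDesc hOut "an example vector" Real 3 [1.0, 2.5, 3.0 :: Double]
  hClose hOut
  -- Read it back; per the docs, hGetVector closes the handle when it finishes.
  hIn <- openFile "x.mtx" ReadMode
  (n, xs) <- hGetVector hIn Real :: IO (Int, Maybe [Double])
  print (n, xs)
```

Since the field is Real rather than Pattern, the element list comes back as a Just value.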
Convert milligrams to calorie - Conversion of Measurement Units
›› Convert milligram to calorie [burned]
›› More information from the unit converter
How many milligrams in 1 calorie? The answer is 129.59782.
We assume you are converting between milligram and calorie [burned].
You can view more details on each measurement unit:
milligrams or calorie
The SI base unit for mass is the kilogram.
1 kilogram is equal to 1000000 milligrams, or 7716.17917647 calorie.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between milligrams and calories.
Type in your own numbers in the form to convert the units!
›› Definition: Milligram
The SI prefix "milli" represents a factor of 10^-3, or in exponential notation, 1E-3.
So 1 milligram = 10^-3 grams.
›› Definition: Calorie
A pound of body fat is roughly equivalent to 3500 calories burned through activity.
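Putting the page's numbers together (the function names below are mine, not the site's): 1 kg is 1,000,000 mg and, by this page's definition, 7716.17917647 calories burned, which gives the 129.59782 mg-per-calorie factor quoted above. It also agrees with the body-fat rule of thumb, since 3500 calories comes to roughly 453,592 mg, about a pound.

```haskell
-- 1 kg = 1,000,000 mg and 1 kg = 7716.17917647 "calories burned",
-- so one calorie burned corresponds to about 129.59782 mg.
mgPerCalorie :: Double
mgPerCalorie = 1000000 / 7716.17917647   -- ~129.59782

caloriesToMg :: Double -> Double
caloriesToMg cal = cal * mgPerCalorie

main :: IO ()
main = print (caloriesToMg 3500)   -- ~453592 mg, i.e. roughly one pound
```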
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Nth Term of Quadratic Sequences - Waldomaths
Nth Term of Quadratic Sequences
This applet investigates the method of differences for finding the nth term of an integer quadratic sequence, an² + bn + c. It demonstrates a systematic method for finding the nth term, lets you practise it, and shows why it works.
UK Years 10-13, KS4, KS5, Higher GCSE Mathematics, AS - Shape and Space, Investigative tools
Instructions below. See also: Linear sequences, Simple quadratic sequences, Cubic sequences
How to Use this Applet
This program is essentially a machine for finding the rule or formula for a quadratic sequence (S), which has nth term = an² + bn + c, where n is the term or sequence number (1, 2, 3, 4, 5, etc.). A new problem is generated randomly by clicking the "new problem" button, and for each new problem you are trying to find the values of a, b and c, which are all integers.

If the box "increasing sequences only" is ticked then a is always positive. If not, then a can be positive or negative (but never zero, as this would mean that the sequence is not quadratic!).

Clicking the "reset" button takes you back to the beginning of the current sequence. You can show or hide the graph or the working by using the boxes at the bottom. This applet uses what is known as a "method of differences".
Play around!
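The method of differences the applet teaches can be sketched in code. For S(n) = an² + bn + c, the first differences are 2an + a + b and the second differences are the constant 2a, so a few terms of the difference table pin down a, b and c. This is a sketch; the helper names are my own.

```haskell
-- Recover a, b, c of S(n) = a*n^2 + b*n + c from the first few terms,
-- using the method of differences (second differences are constant = 2a).
diffs :: [Int] -> [Int]
diffs xs = zipWith (-) (tail xs) xs

quadRule :: [Int] -> (Int, Int, Int)
quadRule s = (a, b, c)
  where
    d1 = diffs s           -- first differences: 2an + a + b
    d2 = diffs d1          -- second differences: constant 2a
    a  = head d2 `div` 2
    b  = head d1 - 3 * a   -- the first difference at n=1 is 3a + b
    c  = head s - a - b    -- S(1) = a + b + c

main :: IO ()
main = print (quadRule [3, 8, 17, 30, 47])  -- the sequence 2n^2 - n + 2 gives (2,-1,2)
```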
Category Theory
Sorry things have been so slow around here. I know I keep promising that I'm going to post more frequently, but it's hard. Life as an engineer at a startup is exhausting. There's so much work to do!
And the work is so fun - it's easy to let it eat up all of your time.
Anyway... last good-math post 'round these parts was about monoids and programming. Today, I'm going to talk about monads and programming!
If you recall, monoids are an algebraic/categorical construct that, when implemented in a programming language, captures the abstract notion of foldability. For example, if you've got a list of
numbers, you can fold that list down to a single number using addition. Folding collections of values is something that comes up in a lot of programming problems - capturing that concept with a
programming construct allows you to write code that exploits the general concept of foldability in many different contexts.
Monads are a construct that have become incredibly important in functional programming, and they're very, very poorly understood by most people. That's a shame, because the real concept is actually
simple: a monad is all about the concept of sequencing. A monad is, basically, a container that you can wrap something in. Once it's wrapped, you can form a sequence of transformations on it. The
result of each step is the input to the next. That's really what it's all about. And when you express it that way, you can begin to see why it's such an important concept.
I think that people are confused by monads for two reasons:
1. Monads are almost always described in very, very abstract terms. I'll also get into the abstract details, but I'll start by elaborating on the simple description I gave above.
2. Monads in Haskell, which is where most people are introduced to them, are very confusing. The basic operations are swamped with tons of funny symbols and named in incredibly misleading ways. ("return" does almost the exact opposite of what you expect return to do!)
In programming terms, what's a monad?
Basically, a monadic computation consists of three pieces:
1. A monadic type, M, which is a parametric type wrapper that can wrap a value of any type.
2. An operation which can wrap a value in M.
3. An operation which takes a function that transforms a value wrapped in M into another value (possibly with a different type) wrapped in M.
Whenever you describe something very abstractly like this, it can seem rather esoteric. But this is just a slightly more formal way of saying what I said up above: it's a wrapper for a series of
transformations on the wrapped value.
Let me give you an example. At foursquare, we do all of our server programming in Scala. In a year at foursquare, I've seen exactly one null pointer exception. That's amazing - NPEs are ridiculously
common in Java programming. But in Scala at foursquare, we don't allow nulls to be used at all. If you have a value which could either be an instance of A, or no value, we use an option type. An
Option[T] can be either Some(t: T) or None.
So far, this is nice, but not really that different from a null. The main difference is that it allows you to say, in the type system, whether or not a given value might be null. But: Option is a
monad. In Scala, that means that I can use map on it. (map is one of the transformation functions!)
val t: Option[Int] = ...
val u: Option[String] = t.map( _ + 2 ).map(_.toString)
What this does is: if t is Some(x), then it adds two to it, and returns Some(x+2); then it takes the result of the first map, and converts it to a string, returning an Option[String]. If t is None,
then running map on it always returns None. So I can write code which takes care of the null case, without having to write out any conditional tests of nullness - because optionality is a monad.
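For comparison, here is a sketch of the same pipeline in Haskell, where Maybe plays the role of Option and fmap plays the role of map:

```haskell
t :: Maybe Int
t = Just 40

u :: Maybe String
u = fmap show (fmap (+ 2) t)

main :: IO ()
main = print u   -- Just "42"; if t were Nothing, u would be Nothing
```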
In a good implementation of a monad, I can do a bit more than that. If I've got a Monad[T], I can use a map-like operation with a function that takes a T and returns a Monad[U].
For an example, we can look at lists - because List is a monad:
val l: List[Int] = List(1, 2, 3)
l.flatMap({ e => List( (e, e), (e+1, e+2) ) })
res0: List[(Int, Int)] = List((1,1), (2,3), (2,2), (3,4), (3,3), (4,5))
The monad map operation does a flatten on the map steps. That means a lot of things. You can see one in the rather silly example above.
You can take values, and wrap them as a list. Then you can perform a series of operations on those elements of a list - sequencing over the elements of the list. Each operation, in turn, returns a
list; the result of the monadic computation is a single list, concatenating, in order, the lists returned by each element. In Scala, the flatMap operation captures the monadic concept: basically, if
you can flatMap something, it's a monad.
Let's look at it a bit more specifically.
1. The monadic type: List[T].
2. A function to wrap a value into the monad: the constructor function from List def apply[T](value: T): List[T]
3. The map operation: def flatMap[T, U](op: T => List[U]): List[U].
(In the original version of this post, I put the wrong type in flatMap in the list above. In the explanation demonstrating flatMap, the type is correct. Thanks to John Armstrong for catching it.)
You can build monads around just about any kind of type wrapper where it makes sense to map over the values that it wraps: collections, like lists, maps, and options. Various kinds of state -
variable environments (where the wrapped values are, essentially, functions from identifiers to values), or IO state. And plenty of other things. Anything where you perform a sequence of operations
over a wrapped value, it's a monad.
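The Scala flatMap example above translates directly to Haskell's >>= for lists; the bind operation maps and then flattens, exactly as flatMap does. A sketch:

```haskell
result :: [(Int, Int)]
result = [1, 2, 3] >>= \e -> [(e, e), (e + 1, e + 2)]

main :: IO ()
main = print result
-- [(1,1),(2,3),(2,2),(3,4),(3,3),(4,5)], matching the Scala output
```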
Now that we have some understanding of what this thing we're talking about it, what is it in mathematical terms? For that, we turn to category theory.
Fundamentally, in category theory a monad is a category with a particular kind of structure. It's a category with one object. That category has a collection of arrows which (obviously) are from the
single object to itself. That one-object category has a functor from the category to itself. (As a reminder, a functor is an arrow between categories in the category of (small) categories.)
The first trick to the monad, in terms of theory, is that it's fundamentally about the functor: since the functor maps from a category to the same category, you can almost ignore the category; it's implicit in the definition of the functor. So we can almost treat the monad as if it were just the functor - that is, a kind of transition function.
The other big trick is closely related to that. For the programming language application of monads, we can think of the single object in the category as the set of all possible states. So we have a
category object, which is essentially the collection of all possible states; and there are arrows between the states representing possible state transitions. So the monad's functor is really just a
mapping from arrows to different arrows - which basically represents the way that changing the state causes a change in the possible transitions to other states.
So what a monad gives us, in terms of category theory, is a conceptual framework that captures the concept of a state transition system, in terms of transition functions that invisibly carry a state. When that's translated into programming languages, that becomes a value that implicitly takes an input state, possibly updates it, and returns an output state. Sound familiar?
Let's take a moment and get formal. As usual for category theory, first there are some preliminary definitions.
1. Given a category, C, 1[C] is the identity functor from C to C.
2. Given a category C with a functor T : C → C, T^2 = T º T.
3. Given a functor T, 1[T] : T → T is the natural transformation from T to T.
Now, with that out of the way, we can give the complete formal definition of a monad. Given a category C, a monad on C is a triple: (T:C→C, η:1[C]→T, μ:T^2 → T), where T is a functor, and η and μ are
natural transformations. The members of the triple must make the following two diagrams commute.
Commutativity of composition with μ
Commutativity of composition with η
What these two diagrams mean is that successive applications of the state-transition functor over C behave associatively, and that the monadic structure is always preserved: any sequence of composed monadic functors is itself a functor with full monadic structure. Together, these mean that any sequence of operations (that is, applications of the monad functor) is itself a monadic functor - so building a sequence of monadic state transformers is guaranteed to behave as a proper monadic state transition. To the rest of the universe, the difference between a sequence and a single simple operation is indistinguishable: the state will be consistently passed from application to application with the correct chaining behavior, and the entire monadic chain looks like a single atomic monadic operation.
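In equational form, the commutativity of the two diagrams says exactly this about the triple (T, η, μ):

```latex
\mu \circ T\mu = \mu \circ \mu T \,:\, T^3 \to T \qquad \text{(associativity)}
\mu \circ T\eta = \mu \circ \eta T = 1_T \,:\, T \to T \qquad \text{(unit)}
```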
Now, what does this mean in terms of programming? Each element of a monadic sequence in Haskell is an instantiation of the monadic functor - that is, it's an arrow between states - a function, not a
simple value - which is the basic trick to monads. They look like a sequence of statements; in fact, each statement in a monad is actually a function from state to state. And it looks like we're
writing sequence code - when what we're actually doing is writing function compositions - so that when we're done writing a monadic sequence, what we've actually done is written a function definition
in terms of a sequence of function compositions.
Understanding that, we can now clearly understand why we need the return function to use a non-monad expression inside of a monadic sequence - because each step in the sequence needs to be an
instance of the monadic functor; an expression that isn't an instance of the monadic functor couldn't be composed with the functions in the sequence. The return function is really nothing but a
function that combines a non-monadic expression with the id functor.
In light of this, let's go back and look at the definition of Monad in the Haskell standard prelude.
class Functor f where
    fmap :: (a -> b) -> f a -> f b

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    (>>) :: m a -> m b -> m b
    return :: a -> m a
    fail :: String -> m a

    -- Minimal complete definition:
    --      (>>=), return
    m >> k = m >>= \_ -> k
    fail s = error s
The declaration of Monad is connected with the definition of Functor - if you look, you can see the connection. The fundamental operation of Monad is ">>=" - the chaining operation, which is basically the Haskell version of the map operation. Its type, m a -> (a -> m b) -> m b, is deeply connected with Functor's fmap operation - the a in m a is generally going to be a type which can be a Functor. (Remember what I said about Haskell and monads? I really prefer map and flatMap to >> and >>=.)
So the value type wrapped in the monad is a functor - in fact, the functor from the category definition! And the ">>=" operation is just the functor composition operation from the monad definition.
A proper implementation of a monad needs to follow some fundamental rules - the rules are, basically, just Haskell translations of the structure-preserving rules about functors and natural
transformations in the category-theoretic monad. There are two groups of laws - laws about the Functor class, which should hold for the transition function wrapped in the monad class; and laws about
the monadic operations in the Monad class. One important thing to realize about the functor and monad laws is that they are not enforced - in fact, cannot be enforced! - but monad-based code using
monad implementations that do not follow them may not work correctly. (A compile-time method for correctly verifying the enforcement of these rules can be shown to be equivalent to the halting problem.)
There are two simple laws for Functor, and it's pretty obvious why they're fundamentally just structure-preservation requirements. The Functor class only has one operation, called fmap, and the two functor laws are about how it must behave.
1. fmap id = id
(Mapping ID over any structured sequence results in an unmodified sequence)
2. fmap (f . g) = (fmap f) . (fmap g)
("." is the function composition operation; this just says that fmap preserves the structure to ensure that that mapping is associative with composition.)
The monad laws are a bit harder, but not much. They mainly govern how monadic operations interact with non-monadic operations in terms of the "return" and ">>=" operations of the Monad class.
1. return x >>= f = f x
(injecting a value into the monad is basically the same as passing it as a parameter down the chain - return is really just the identity functor passing its result on to the next step. I hate the
use of "return". In a state functor, in exactly the right context, it does sort-of look like a return statement in an imperative language. But in pretty much all real code, return is the function
that wraps a value into the monad.)
2. f >>= return = f
(If you don't specify a value for a return, it's the same as just returning the result of the previous step in the sequence - again, return is just identity, so passing something into return
shouldn't affect it.)
3. seq >>= return . f = fmap f seq
(composing return with a function is equivalent to invoking that function on the result of the monad sequence to that point, and wrapping the result in the monad - in other words, it's just
composition with identity.)
4. seq >>= (\x -> f x >>= g) = (seq >>= f) >>= g
(Your implementation of ">>=" needs to be semantically equivalent to the usual translation; that is, it must behave like a functor composition.)
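These laws can be spot-checked for a concrete monad like Maybe. This is a sketch checking specific values, not a proof; the helper names are mine.

```haskell
-- Spot-checking the monad laws for Maybe at concrete values.
f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 10)

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3                    -- law 1
rightIdentity = (Just 3 >>= return) == Just 3              -- law 2
associativity = ((Just 3 >>= f) >>= g)
             == (Just 3 >>= (\x -> f x >>= g))             -- law 4

main :: IO ()
main = print (leftIdentity && rightIdentity && associativity)  -- True
```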
Introducing Algebraic Data Structures via Category Theory: Monoids
Since joining foursquare, I've been spending almost all of my time writing functional programs. At foursquare, we do all of our server programming in Scala, and we have a very strong bias towards
writing our scala code very functionally.
This has increased my interest in category theory in an interesting way. As a programming language geek, I'm obviously fascinated by data structures. Category theory provides a really interesting way of looking at generic data structures.
Historically (as much as that word can be used for anything in computer science), we've thought about data structures primarily in a algorithmic and structural ways.
For example, binary trees. A binary tree consists of a collection of linked nodes. We can define the structure recursively really easily: a binary tree is a node, which contains pointers to at most
two other binary trees.
In the functional programming world, people have started to think about things in algebraic ways. So instead of just defining data structures in terms of structure, we also think about them in very
algebraic ways. That is, we think about structures in terms of how they behave, instead of how they're built.
For example, there's a structure called a monoid. A monoid is a very simple idea: it's an algebraic structure with a set of values S, one binary operation *, and one value i in S which is an identity value for *. To be a monoid, these objects must satisfy some rules called the monoid laws:
1. * is associative: for all a, b, c in S, (a * b) * c = a * (b * c).
2. i is an identity: for all a in S, i * a = a and a * i = a.
There are some really obvious examples of monoids - like the set of integers with addition and 0 or integers with multiplication and 1. But there are many, many others.
Lists with concatenation and the empty list are a monoid: for any list l,
l ++ [] == l, [] ++ l == l, and concatenation is associative.
Why should we care whether data structures like these are monoids? Because we can write very general code in terms of the algebraic construction, and then use it over all of the different operations. Monoids
provide the tools you need to build fold operations. Every kind of fold - that is, operations that collapse a sequence of other operations into a single value - can be defined in terms of monoids. So
you can write a fold operation that works on lists, strings, numbers, optional values, maps, and god-only-knows what else. Any data structure which is a monoid is a data structure with a meaningful
fold operation: monoids encapsulate the requirements of foldability.
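In Haskell this shows up directly: mconcat and foldMap are that one generic fold, and they work for any monoid instance. A sketch:

```haskell
import Data.Monoid (Product(..), Sum(..))

lists :: [Int]
lists = mconcat [[1, 2], [], [3]]                 -- (++) with [] as identity

total :: Int
total = getSum (foldMap Sum [1, 2, 3, 4])         -- (+) with 0 as identity

prod :: Int
prod = getProduct (foldMap Product [1, 2, 3, 4])  -- (*) with 1 as identity

main :: IO ()
main = print (lists, total, prod)   -- ([1,2,3],10,24)
```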
And that's where category theory comes in. Category theory provides a generic method for talking about algebraic structures like monoids. After all, what category theory does is provide a way of
describing structures in terms of how their operations can be composed: that's exactly what you want for talking about algebraic data structures.
The categorical construction of a monoid is, alas, pretty complicated. It's a simple idea - but defining it solely in terms of the composition behavior of function-like objects does take a bit of
effort. But it's really worth it: when you see a monoidal category, it's obvious what the elements are in terms of programming. And when we get to even more complicated structures, like monads,
pullbacks, etc., the advantage will be even clearer.
A monoidal category is a category with a functor, where the functor has the basic properties of a algebraic monoid. So it's a category C, paired with a bi-functor - that is a two-argument functor
⊗:C×C→C. This is the categorical form of the tensor operation from the algebraic monoid. To make it a monoidal category, we need to take the tensor operation, and define the properties that it needs
to have. They're called its coherence conditions, and basically, they're the properties that are needed to make the diagrams that we're going to use commute.
So - the tensor functor is a bifunctor from C×C to C. There is also an object I∈C, which is called the unit object, which is basically the identity element of the monoid. As we would expect from the
algebraic definition, the tensor functor has two basic properties: associativity, and identity.
Associativity is expressed categorically using a natural isomorphism, which we'll name α. For any three objects X, Y, and Z, α includes a component α[X,Y,Z] (which I'll label α(X,Y,Z) in diagrams, because subscripts in diagrams are a pain!), which is a mapping from (X⊗Y)⊗Z to X⊗(Y⊗Z). The natural isomorphism says, in categorical terms, that the two objects on either side of its mappings are equivalent.
The identity property is again expressed via natural isomorphism. The category must include an object I (called the unit), and two natural isomorphisms, called λ and ρ. For any object X in C, λ and ρ contain components λ[X] and ρ[X] such that λ[X] maps from I⊗X to X, and ρ[X] maps from X⊗I to X.
Now, all of the pieces that we need are on the table. All we need to do is explain how they all fit together - what kinds of properties these pieces need to have for this to work - that is, for these definitions to give us a structure that looks like the algebraic notion of monoidal structures, but built in category theory. The properties are, more or less, exact correspondences with the associativity and identity requirements of the algebraic monoid. But with category theory, we can say it visually. The two diagrams below each describe one of the two properties.
The upper (pentagonal) diagram must commute for all A, B, C, and D. It describes the associativity property. Each arrow in the diagram is a component of the natural isomorphism over the category, and
the diagram describes what it means for the natural isomorphism to define associativity.
Similarly, the bottom diagram defines identity. The arrows are all components of natural isomorphisms, and they describe the properties that the natural isomorphisms must have in order for them,
together with the unit I to define identity.
Like I said, the definition is a lot more complicated. But look at the diagram: you can see folding in it, in the chains of arrows in the commutative diagram.
Categorical Equalizers
Category theory is really all about building math using composition. Everything we do, we do it by defining things completely in terms of composition. We've seen a lot of that. For example, we
defined subclasses (and other sub-things) in terms of composition. It sounds strange, but it took me an amazingly long time to really grasp that. (I learned category theory from some not-very-good
textbooks, which were so concerned with the formalisms that they never bothered to explain why any of it mattered, or to build any intuition.)
One thing that's really important that we haven't talked about yet is equality. In category theory, we define equality on arrows using something called a pullback. We'll use that notion of equality
for a lot of other things - like natural transformations. But before we can do pullbacks, we need to look at a weaker notion of arrow equality - something called the equalizer of a pair of arrows.
As part of my attempt to be better than those books that I complained about, we'll start with intuition, by looking at what an equalizer is in terms of sets. Suppose we have two functions f, g : A → B.
Now, in addition, suppose that they agree somewhere: that is, that there's some set of values x for which f(x) = g(x). We can take that set of values, and call it E. E is the equalizer of the functions f and g. It's the largest set where if you restrict the inputs to the functions to members of E, then f and g are equal.
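To make the set version concrete, here's a quick Python sketch (the particular functions and the finite domain are just made-up examples):

```python
# Equalizer of two functions over a finite domain: the largest
# subset of the domain on which f and g agree.

def equalizer(f, g, domain):
    return {x for x in domain if f(x) == g(x)}

A = range(-5, 6)
f = lambda x: x * x      # f(x) = x^2
g = lambda x: x + 2      # g(x) = x + 2

E = equalizer(f, g, A)
print(sorted(E))  # x^2 == x + 2 exactly at [-1, 2]
```

Restricting f and g to E gives two equal functions; any larger subset of A would break that.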
Now, let's look at the category theoretic version of that. We have objects A and B. We have two arrows f, g : A → B. This is the category analogue of the setup of sets and functions from above.
To get to the equalizer, we need to add an object C which is a subobject of A (which corresponds to the subset of A on which f and g agree in the set model).
The equalizer of f and g is the pair of the object C, and an arrow i : C → A. (That is, the object and arrow that define C as a subobject of A.) This object and arrow must satisfy the following conditions:
1. f º i = g º i
2. if f º j = g º j for some arrow j : D → A, then there is exactly one arrow k : D → C such that i º k = j
That second one is the mouthful. What it says is:
• Suppose that I have any arrow j from some other object D to A:
• if f º j = g º j, then there can only be one unique arrow from D to C which composes with i to give j.
In other words, (C, i) is a selector for the arrows on which f and g agree; you can only compose an arrow to A in a way that will compose equivalently with f and g to B if you go through (C, i).
Or in diagram form, k in the following diagram is necessarily unique:
There are a couple of interesting properties of equalizers that are worth mentioning. The morphism i in an equalizer is always a monic arrow (monomorphism); and if it's epic (an epimorphism), then it must also be iso (an isomorphism).
The pullback is very nearly the same construction as the equalizer we just looked at; except it's abstracting one step further.
Suppose we have two arrows pointing to the same target: f : B → A and g : C → A. Then the pullback of f and g is the triple of an object B×[A]C and two arrows p : B×[A]C → B and q : B×[A]C → C. The elements of this triple must meet the following conditions:
1. f º p = g º q; and
2. for every triple (D, h : D → B, k : D → C) where f º h = g º k, there is exactly one unique arrow u : D → B×[A]C where p º u = h, and q º u = k.
As happens so frequently in category theory, this is clearer using a diagram.
If you look at this, you should definitely be able to see how this corresponds to the categorical equalizer. If you're careful and clever, you can also see the resemblance to categorical product
(which is why we use the ×[A] syntax). It's a general construction that says that f and g are equivalent with respect to the product-like object B×[A]C.
Here's the neat thing. Work backwards through this abstraction process to figure out what this construction means if objects are sets and arrows are functions: what's the pullback of the sets B and C?
Right back where we started, almost: it's the set of pairs (b, c) where f(b) = g(c). The pullback is an equalizer, one level up; working it back shows that.
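Working the construction back to sets in code: the pullback of f : B → A and g : C → A is the set of pairs that the two arrows equalize, together with its two projections. The sets and functions here are made up for illustration.

```python
# Pullback in Set: pairs (b, c) with f(b) == g(c), plus projections.

def pullback(f, g, B, C):
    P = {(b, c) for b in B for c in C if f(b) == g(c)}
    p = lambda pair: pair[0]   # projection to B
    q = lambda pair: pair[1]   # projection to C
    return P, p, q

B = {1, 2, 3, 4}
C = {"a", "bb", "ccc"}
f = lambda b: b % 2          # B -> A, with A = {0, 1}
g = lambda s: len(s) % 2     # C -> A

P, p, q = pullback(f, g, B, C)
# the square commutes: f º p == g º q on every element of the pullback
print(all(f(p(x)) == g(q(x)) for x in P))  # True
```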
What's a subset? That's easy: if we have two sets A and B, A is a subset of B if every member of A is also a member of B.
We can take the same basic idea, and apply it to something with a tad more structure, to get subgroups. What's a subgroup? If we have two groups A and B, and the values in group A are a subset of
the values in group B, then A is a subgroup of B.
The point of category theory is to take concepts like "subset" and generalize them so that we can apply the same idea in many different domains. In category theory, we don't ask "what's a subset?".
We ask, for any structured THING, what does it mean to be a sub-THING? We're being very general here, and that's always a bit tricky. We'll start by building a basic construction, and look at it in
terms of sets and subsets, where we already understand the specific concept.
In terms of sets, the most generic way of defining subsets is using functions. Suppose we have a set, A. How can we define all of the subsets of A, in terms of functions? We can do it using injective
functions, as follows. (As a reminder, an injective function is a function from X to Y where every value in X is mapped to a distinct value in Y.)
For the set, A, we can take the set of all injective functions to A. We'll call that set of functions Inj(A).
Given Inj(A), we can define equivalence classes over Inj(A), so that f : X → A and g : Y → A are equivalent if there is an isomorphism between X and Y.
The domain of each function in one of the equivalence classes in Inj(A) is a set isomorphic to a subset of A. So each equivalence class of injective functions defines a subset of A.
And there we go: we've got a very abstract definition of subsets.
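Here's a tiny sketch of the idea: two different injective functions with the same image land in the same equivalence class, and so pick out the same subset of A. The sets and functions are illustrative.

```python
# Two injections into A = {1, 2, 3} from isomorphic (same-size)
# domains, with the same image: they define the same subset of A.

def image(f, dom):
    return {f(x) for x in dom}

f = {0: 1, 1: 3}.get       # injective, from {0, 1}
g = {"a": 3, "b": 1}.get   # injective, from {"a", "b"}

print(image(f, {0, 1}) == image(g, {"a", "b"}))  # True: both define {1, 3}
```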
Now we can take that, and generalize that function-based definition to categories, so that it can define a sub-object of any kind of object that can be represented in a category.
Before we jump in, let me review one important definition from before; the monomorphism, or monic arrow.
A monic arrow is an arrow f : Y → Z such that ∀ g, h : X → Y: f º g = f º h ⇒ g = h.
(That is, two arrows composed with f end up being the same arrow only if they were the same arrow to begin with.)
So, basically, the monic arrow is the category theoretic version of an injective function. We've taken the idea of what an injective function means, in terms of how functions compose, and when we
generalized it, the result is the monic arrow.
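The correspondence is easy to check concretely: an injective function is exactly one you can cancel on the left of a composition. A small sketch with made-up finite sets:

```python
# Injective functions are left-cancellable: if f . g == f . h
# pointwise, then g == h. Here f is injective, and g != h, so the
# composites must differ somewhere too.

def compose(f, g):
    return lambda x: f(g(x))

X = [0, 1, 2]
f = {0: "a", 1: "b", 2: "c"}.get   # injective

g = lambda x: x
h = lambda x: (x + 1) % 3          # differs from g

fg, fh = compose(f, g), compose(f, h)
print(any(fg(x) != fh(x) for x in X))  # True: f preserved the difference
```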
Suppose we have a category C, and an object A ∈ Obj(C). If there are two monic arrows f : X → A and g : Y → A, and there is an arrow h such that g º h = f, then we say f ≤ g (read "f factors through g"). Now, we can take that "≤" relation, and use it to define an equivalence class of morphisms, where f ≡ g if and only if f ≤ g and g ≤ f.
What we wind up with using that equivalence relation is a set of equivalence classes of monomorphisms pointing at A. Each of those equivalence classes of morphisms defines a subobject of A. (Within an equivalence class, the sources of the arrows are all isomorphic, so they're equivalent with respect to this relation.) A subobject of A is the source of an arrow in one of those equivalence classes.
It's exactly the same thing as the function-based definition of sets. We've created a very general concept of sub-THING, which works exactly the same way as sub-sets, but can be applied to any
category-theoretic structure.
Interpreting Lambda Calculus using Closed Cartesian Categories
Today I'm going to show you the basic idea behind the equivalency of closed cartesian categories and typed lambda calculus. I'll do that by showing you how the λ-theory of any simply typed lambda
calculus can be mapped onto a CCC.
First, let's define the term "lambda theory". In the simply typed lambda calculus, we always have a set of base types - the types of simple atomic values that can appear in lambda expressions. A
lambda theory is a simply typed lambda calculus, plus a set of additional rules that define equivalences over the base types.
So, for example, if one of the base types of a lambda calculus was the natural numbers, the lambda theory would need to include rules to define equality over the natural numbers:
1. x = y if x=0 and y=0; and
2. x = y if x=s(x') and y=s(y') and x' = y'
So. Suppose we have a lambda-theory L. We can construct a corresponding category C(L). The objects in C(L) are the types in L. The arrows in C(L) correspond to families of expressions in L; an arrow f : A → B corresponds to the set of expressions of type B that contain a single free variable of type A.
The semantics of the lambda-theory can be defined by a functor; in particular, a cartesian closed functor that maps from C(L) to the cartesian closed category of Sets. (It's worth noting that this is completely equivalent to the normal Kripke semantics for lambda calculus; but when you get into more complex lambda calculi, like Hindley-Milner variants, this categorical formulation is much simpler.)
We describe how we build the category for the lambda theory in terms of a CCC using something called an interpretation function. It's really just a notation that allows us to describe the translation
recursively. The interpretation function is written using brackets: [A] is the categorical interpretation of the type A from lambda calculus.
So, first, we define an object for each type in L. We need to include a special
type, which we call unit. The idea behind unit is that we need to be able to talk about "functions" that either don't take any real parameters, or functions that don't return anything. Unit is a type
which contains exactly one atomic value. Since there's only one possible value for unit, and unit doesn't have any extractable sub-values, conceptually, it doesn't ever need to be passed around. So
it's a "value" that never needs to get passed - perfect for a content-free placeholder.
Anyway, here we go with the base rules:
• [unit] = 1, the terminal object of the category; and
• for each base type b of L, [b] is the corresponding object in C(L).
Next, we need to define the typing rules for complex types:
• [A × B] = [A] × [B] (the categorical product); and
• [A → B] = [B]^[A] (the categorical exponential).
Now for the really interesting part. We need to look at type derivations - that is, the type inference rules of the lambda calculus - to show how to do the correspondences between more complicated
expressions. Just like we did in lambda calculus, the type derivations are done with a context, containing a set of type judgements. Each type judgement assigns a type to a lambda term. There are two
translation rules for contexts:
• [∅] = 1 (the empty context maps to the unit object); and
• [Γ, x : A] = [Γ] × [A].
We also need to describe what to do with the values of the primitive types:
• For each value v of a primitive type A, there is an arrow v : 1 → [A].
And now the rest of the rules. Each of these is of the form [Γ ⊢ e : A] = f, where we're saying that Γ entails the type judgement e : A. What it means is that the object corresponding to the type information covering a type inference for an expression corresponds to the arrow f in C(L).
• Unit evaluation: [Γ ⊢ unit : unit] = ! : [Γ] → 1. (A unit expression is a special arrow "!" to the unit object.)
• Simple Typed Expressions: [Γ ⊢ c : C] = c º ! : [Γ] → [C]. (A simple value expression is an arrow composing with ! to form an arrow from Γ to the type object of C's type.)
• Free Variables: [Γ, x : A ⊢ x : A] = π₂ : [Γ] × [A] → [A]. (A term which is a free variable of type A is an arrow from the product of Γ and the type object A to A; That is, an unknown value of type A is some arrow whose start point will
be inferred by the continued interpretation of gamma, and which ends at A. So this is going to be an arrow from either unit or a parameter type to A - which is a statement that this expression
evaluates to a value of type A.)
• Inferred typed expressions: [Γ, x : B ⊢ e : A] = [Γ ⊢ e : A] º π₁, where x ∉ free(e). (If the type rules of Γ plus the judgement give us e : A, then the term is an arrow starting from the product of the interpretation of the full type context with [B], and ending at [A]. This is almost the same as the previous rule: it says that this will evaluate to an arrow for an expression that results in type A.)
• Function Abstraction: [Γ ⊢ λ x : A . e : A → B] = Λ([Γ, x : A ⊢ e : B]) : [Γ] → [B]^[A]. (A function maps to an arrow from the type context to an exponential [B]^[A], which is a function from A to B.)
• Function application: [Γ ⊢ f : A → B] = u, [Γ ⊢ e : A] = v, [Γ ⊢ f e : B] = eval º ⟨u, v⟩. (Function evaluation takes the eval arrow from the categorical exponential, and uses it to evaluate out the function.)
There are also two projection rules for decomposing categorical products, but they're basically more of the same, and this is already dense enough.
The intuition behind this is:
• arrows between types are families of values. A particular value is a particular arrow from unit to a type object.
• the categorical exponent in a CCC is exactly the same thing as a function type in λ-calculus; and an arrow to an exponent is the same thing as a function value. Evaluating the function is using
the categorical exponent's eval arrow to "decompose" the exponent, and produce an arrow to the function's result type; that arrow is the value that the function evaluates to.
• And the semantics - called functorial semantics - maps from the objects in this category, to the category of Sets; function values to function arrows; type objects to sets; values to value
objects and arrows. (For example, the natural number type would be an object in C(L), and the set of natural numbers in the sets category would be the target of the functor.)
Aside from the fact that this is actually a very clean way of defining the semantics of a not-so-simply typed lambda calculus, it's also very useful in practice. There is a way of executing lambda
calculus expressions as programs that is based on this, called the Categorical Abstract Machine. The best performing lambda-calculus based programming language (and my personal all-time-favorite
programming language), Objective-CAML had its first implementation based on the CAM. (CAML stands for Categorical Abstract Machine Language.)
From this, you can see how the CCCs and λ-calculus are related. It turns out that that relation is not just cool, but downright useful. Concepts from category theory - like monads, pullbacks, and functors - are really useful things in programming languages! In some later posts, I'll talk a bit about that. My current favorite programming language, Scala, is one of the languages where there's a
very active stream of work in applying categorical ideas to real-world programming problems.
Sidetrack from the CCCs: Lambda Calculus
So, last post, I finally defined closed cartesian categories. And I alluded to the fact that the CCCs are, essentially, equivalent to the simply typed λ calculus. But I didn't really talk about what
that meant.
Before I can get to that, you need to know what λ calculus is. Many readers are probably familiar, but others aren't. And as it happens, I absolutely love λ calculus.
In computer science, especially in the field of programming languages, we tend to use λ calculus a whole lot. It's also extensively used by logicians studying the nature of computation and the
structure of discrete mathematics. λ calculus is great for a lot of reasons, among them:
1. It's very simple.
2. It's Turing complete: if a function can be computed by any possible computing device, then it can be written in λ-calculus.
3. It's easy to read and write.
4. Its semantics are strong enough that we can do reasoning from it.
5. It's got a good solid model.
6. It's easy to create variants to explore the properties of various alternative ways of structuring computations or semantics.
The ease of reading and writing λ calculus is a big deal. It's led to the development of a lot of extremely good programming languages based, to one degree or another, on the λ calculus: Lisp, ML,
Haskell, and my current favorite, Scala, are very strongly λ calculus based.
The λ calculus is based on the concept of functions. In the pure λ calculus, everything is a function; there are no values except for functions. In fact, we can pretty much build up all of
mathematics using λ-calculus.
With the lead-in out of the way, let's dive in a look at λ-calculus. To define a calculus, you need to define two things: the syntax, which describes how valid expressions can be written in the
calculus; and a set of rules that allow you to symbolically manipulate the expressions.
Lambda Calculus Syntax
The λ calculus has exactly three kinds of expressions:
1. Function definition: a function in λ calculus is an expression, written: λ param . body, which defines a function with one parameter.
2. Identifier reference: an identifier reference is a name which matches the name of a parameter defined in a function expression enclosing the reference.
3. Function application: applying a function is written by putting the function value in front of its parameter, as in x y to apply the function x to the value y.
There's a trick that we play in λ calculus: if you look at the definition above, you'll notice that a function (lambda expression) only takes one parameter. That seems like a very big constraint -
how can you even implement addition with only one parameter?
It turns out to be no problem, because of the fact that functions are, themselves, values. Instead of writing a two parameter function, you can write a one parameter function that returns a one
parameter function, which can then operate on the second parameter. In the end, it's effectively the same thing as a two parameter function. Taking a two-parameter function, and representing it by
two one-parameter functions is called currying, after the great logician Haskell Curry.
For example, suppose we wanted to write a function to add x and y. We'd like to write something like: λ x y . x + y. The way we do that with one-parameter functions is: we first write a function with
one parameter, which returns another function with one parameter.
Adding x plus y becomes writing a one-parameter function with parameter x, which returns another one parameter function which adds x to its parameter: λ x . (λ y . x + y).
Now that we know that adding multiple parameter functions doesn't really add anything but a bit of simplified syntax, we'll go ahead and use them when it's convenient.
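Currying isn't just a λ calculus trick - you can write it directly in most languages. A quick Python sketch of the addition example:

```python
# λ x . (λ y . x + y), written as nested one-parameter functions.
add = lambda x: (lambda y: x + y)

add3 = add(3)        # partially applied: λ y . 3 + y
print(add3(4))       # 7
print(add(10)(20))   # 30
```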
One important syntactic issue that I haven't mentioned yet is closure or complete binding. For a λ calculus expression to be evaluated, it cannot reference any identifiers that are not bound. An
identifier is bound if it is a parameter in an enclosing λ expression; if an identifier is not bound in any enclosing context, then it is called a free variable. Let's look quickly at a few examples:
• λ x . p x y: in this expression, y and p are free, because they're not the parameter of any enclosing λ expression; x is bound because it's a parameter of the function definition enclosing the
expression p x y where it's referenced.
• λ x y.y x: in this expression both x and y are bound, because they are parameters of the function definition, and there are no free variables.
• λ y . (λ x . p x y). This one is a tad more complicated, because we've got the inner λ. So let's start there. In the inner λ, λ x . p x y, y and p are free and x is bound. In the full expression,
both x and y are bound: x is bound by the inner λ, and y is bound by the other λ. "p" is still free.
We'll often use "free(x)" to mean the set of identifiers that are free in the expression "x".
A λ calculus expression is valid (and thus evaluatable) only when all of its variables are bound. But when we look at smaller subexpressions of a complex expression, taken out of context, they can
have free variables - and making sure that the variables that are free in subexpressions are treated right is very important.
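Computing free(x) is a simple recursive walk over the structure of a term. Here's a sketch, using a hypothetical tuple encoding of λ expressions:

```python
# Terms are encoded as ("var", name), ("lam", param, body), or
# ("app", fun, arg). free(t) collects the unbound identifiers.

def free(term):
    kind = term[0]
    if kind == "var":
        return {term[1]}
    if kind == "lam":
        return free(term[2]) - {term[1]}   # the parameter is bound
    return free(term[1]) | free(term[2])   # app: union of both sides

# λ x . p x y  -->  p and y are free, x is bound
t = ("lam", "x",
     ("app", ("app", ("var", "p"), ("var", "x")), ("var", "y")))
print(sorted(free(t)))  # ['p', 'y']
```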
Lambda Calculus Evaluation Rules
There are only two real rules for evaluating expressions in λ calculus; they're called α and β. α is also called "conversion", and β is also called "reduction".
α is a renaming operation; basically it says that the names of variables are unimportant: given any expression in λ calculus, we can change the name of the parameter to a function as long as we
change all free references to it inside the body.
So - for instance, if we had an expression like:
λ x . if (= x 0) then 1 else x^2
We can do an α to replace x with y (written "α[x/y]") and get:
λ y . if (= y 0) then 1 else y^2
Doing α does not change the meaning of the expression in any way. But as we'll see later, it's important because without it, we'd often wind up with situations where a single variable symbol is bound
by two different enclosing λs. This will be particularly important when we get to recursion.
β reduction is where things get interesting: this single rule is all that's needed to make the λ calculus capable of performing any computation that can be done by a machine.
β basically says that if you have a function application, you can replace it with a copy of the body of the function with references to the parameter identifiers replaced by references to the
parameter value in the application. That sounds confusing, but it's actually pretty easy when you see it in action.
Suppose we have the application expression: (λ x . x + 1) 3. By performing a beta reduction, we can replace the application by taking the body x + 1 of the function, and substituting (or αing) the
value of the parameter (3) for the parameter variable symbol (x). So we replace all references to x with 3. So the result of doing a beta reduction is 3 + 1.
A slightly more complicated example is the expression:
(λ y . (λ x . x + y)) q
It's an interesting expression, because it's a λ expression that when applied, results in another λ expression: that is, it's a function that creates functions. When we do beta reduction in this,
we're replacing all references to the parameter y with the identifier q; so, the result is λ x . x + q.
One more example, just for the sake of being annoying. Suppose we have: (λ x y. x y) (λ z . z * z) 3
That's a function that takes two parameters, and applies the first one to the second one. When we evaluate that, we replace the parameter x in the body of the first function with λ z . z * z; and we
replace the parameter y with 3, getting: (λ z . z * z) 3. And we can perform beta on that, getting 3 * 3.
Written formally, beta says: (λ x . B) e = B[x := e] if free(e) ⊆ free(B[x := e])
That condition on the end, "if free(e) ⊆ free(B[x := e])", is why we need α: we can only do beta reduction if doing it doesn't create any collisions between bound identifiers and free identifiers: if
the identifier "z" is free in "e", then we need to be sure that the beta-reduction doesn't make "z" become bound. If there is a name collision between a variable that is bound in "B" and a variable
that is free in "e", then we need to use α to change the identifier names so that they're different.
As usual, an example will make that clearer: Suppose we have an expression defining a function, λ z . (λ x . x + z). Now, suppose we want to apply it: (λ z . (λ x . x + z)) (x + 2). In the parameter (x
+ 2), x is free. Now, suppose we break the rule and go ahead and do beta. We'd get "λ x . x + x + 2". The variable that was free in x + 2 is now bound! We've changed the meaning of the function,
which we shouldn't be able to do. If we were to apply that function after the incorrect β, we'd get (λ x . x + x + 2) 3. By beta, we'd get 3 + 3 + 2, or 8.
What if we did α the way we were supposed to?
First, we'd do an α to prevent the name overlap. By α[x/y], we would get (λ z . (λ y . y + z)) (x + 2).
Then by β, we'd get "λ y . y + x + 2". If we apply this function the way we did above, then by β, we'd get 3+x+2.
3+x+2 and 3+3+2 are very different results!
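Putting α and β together mechanically gives capture-avoiding substitution. Here's a sketch over the same kind of hypothetical tuple encoding of terms ("+" is elided, since pure λ calculus doesn't have it as a primitive):

```python
# Capture-avoiding substitution: term[name := value], doing an α
# rename whenever a straight substitution would capture a free
# variable of the value.
import itertools

def free(t):
    k = t[0]
    if k == "var": return {t[1]}
    if k == "lam": return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def fresh(avoid):
    return next(v for v in (f"v{i}" for i in itertools.count())
                if v not in avoid)

def subst(term, name, value):
    k = term[0]
    if k == "var":
        return value if term[1] == name else term
    if k == "app":
        return ("app", subst(term[1], name, value),
                       subst(term[2], name, value))
    param, body = term[1], term[2]
    if param == name:
        return term                      # name is shadowed; stop
    if param in free(value):             # would capture: α-rename first
        new = fresh(free(value) | free(body))
        body = subst(body, param, ("var", new))
        param = new
    return ("lam", param, subst(body, name, value))

# (λ z . (λ x . x z)) applied to the free variable x: substituting
# must NOT capture it, so the bound x gets renamed.
inner = ("lam", "x", ("app", ("var", "x"), ("var", "z")))
result = subst(inner, "z", ("var", "x"))
print(result[1] != "x")  # True: the bound x was renamed away
```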
And that's pretty much it. There's another optional rule you can add called η-conversion. η is a rule that adds extensionality, which provides a way of expressing equality between functions.
η says that in any λ expression, I can replace the value f with the value g if and only if for all possible parameter values x, f x = g x.
What I've described here is Turing complete - a full effective computation system. To make it useful, and see how this can be used to do real stuff, we need to define a bunch of basic functions that
allow us to do math, condition tests, recursion, etc. I'll talk about those in my next post.
It's also important to point out that while I've gone through a basic definition of λ calculus, and described its mechanics, I haven't yet defined a model for λ-calculus. That's quite an important
omission! λ-calculus was played with by logicians for several years before they were able to come up with a complete model for it, and it was a matter of great concern that although it looked
correct, the early attempts to define a model for it were failures! And without a valid model, the results of the system are meaningless. An invalid model in a logical system like λ calculus is like a
contradiction in axioms: it means that nothing that it produces is valid.
Categorical Computation Characterized By Closed Cartesian Categories
One of my favorite categorical structures is a thing called a closed cartesian category, or CCC for short. Since I'm a computer scientist/software engineer, it's a natural: CCCs are, basically, the
categorical structure of lambda calculus - and thus, effectively, a categorical model of computation. However, before we can talk about the CCCs, we need - what else? - more definitions.
Cartesian Categories
A cartesian category (note: not a cartesian closed category) is a category:
1. With a terminal object 1, and
2. ∀ a, b ∈ Obj(C), the objects and arrows of the categorical product a × b.
So, a cartesian category is a category closed with respect to product. Many of the common categories are cartesian: the category of sets, the category of enumerable sets, and so on. And of course, the meaning of the categorical product in Set? The cartesian product of sets.
Categorical Exponentials
To get from cartesian categories to cartesian closed categories, we also need to define categorical exponentials. Like the categorical product, a categorical exponential is not required to be included in a category. The exponential is a complicated definition, and it's a bit hard to really get your head around, but it's well worth the effort. If categorical products are the categorical
generalization of set products, then the categorical exponential is the categorical version of a function space. It gives us the ability to talk about structures that are the generalized version of
"all functions from A to B".
Given two objects x and y from a category C, their categorical exponential x^y, if it exists in the category, is defined by a set of values:
• An object x^y,
• An arrow eval[y,x] : x^y × y → x, called an evaluation map.
• ∀ z ∈ Obj(C), an operation Λ : Hom(z × y, x) → Hom(z, x^y). (That is, an operation mapping from arrows to arrows.)
These values must have the following properties:
1. ∀ f : z × y → x : eval[y,x] º (Λ(f) × 1[y]) = f
To make that a bit easier to understand, let's turn it into a diagram.
As I alluded to earlier, you can also think of it as a generalization of a function space. x^y is the set of all functions from y to x. The evaluation map is a simple description, in categorical terms, of an operation that applies a function from a to b (an arrow) to a value from a, resulting in a value from b.
So what does the categorical exponential mean? I think it's easiest to explain in terms of sets and functions first, and then just step it back to the more general case of objects and arrows.
If X and Y are sets, then X^Y is the set of functions from Y to X.
Now, look at the diagram:
1. The top part says, basically, that Λ(f) is a function from z to x^y: so Λ(f) takes a member of z, and uses it to select a function from y to x.
2. The vertical arrow says:
1. given the pair (z, y), f maps it to a value in x.
2. given a pair (z, y), we're going through a function. It's almost like currying:
1. The vertical arrow going down is basically taking f : z × y → x, and currying it to Λ(f) × 1[y] : z × y → x^y × y.
2. Per the top part of the diagram, Λ(f) selects a function from y to x. (That is, a member of x^y.)
3. So, at the end of the vertical arrow, we have a pair (x^y, y).
3. The "eval" arrow maps from the pair of a function and a value to the result of applying the function to the value.
Cartesian Closed Categories
Now - the abstraction step is actually kind of easy: all we're doing is saying that there is a structure of mappings from object to object here. This particular structure has the essential
properties of what it means to apply a function to a value. The internal values and precise meanings of the arrows connecting the values can end up being different things, but no matter what, it
will come down to something very much like function application.
With exponentials and products, we can finally say what the cartesian closed categories (CCCs) are. A cartesian closed category is a category that is closed with respect to both products and exponentials.
Why do we care? Well, the CCCs are in a pretty deep sense equivalent to the simply typed lambda calculus. That means that the CCCs are deeply tied to the fundamental nature of computation. The
structure of the CCCs - with its closure WRT product and exponential - is an expression of the basic capability of an effective computing system. So next, we'll take a look at a couple of
examples of what we can do with the CCCs as a categorical model of computation.
Building Structure in Category Theory: Definitions to Build On
The thing that I think is most interesting about category theory is that what it's really fundamentally about is structure. The abstractions of category theory let you talk about structures in an
elegant way; and category diagrams let you illustrate structures in a simple visual way. Morphisms express the structure of a category; functors are higher level morphisms that express the structure
of relationships between categories.
In my last category theory post, I showed how you can use category theory to describe the basic idea of symmetry and group actions. Symmetry is, basically, an immunity to transformation - that is, a
kind of structural property of an object or system where applying some kind of transformation to that object doesn't change the object in any detectable way. The beauty of category theory is that it
makes that definition much simpler.
Symmetry transformations are just the tip of the iceberg of the kinds of structural things we can talk about using categories. Category theory lets you build up pretty much any mathematical construct
that you'd like to study, and describe transformations on it in terms of functors. In fact, you can even look at the underlying conceptual structure of category theory using category theory itself,
by creating a category in which categories are objects, and functors are the arrows between categories.
So what happens if we take the same kind of thing that we did to get group actions, and we pull out a level, so that instead of looking at the category of categories, focusing on arrows from the
specific category of a group to the category of sets, we do it with arrows between members of the category of functors?
We get the general concept of a natural transformation. A natural transformation is a morphism from functor to functor, which preserves the full structure of morphism composition within the
categories mapped by the functors. The original inventor of category theory said that natural transformations were the real point of category theory - they're what he wanted to study.
Suppose we have two categories, C and D. And suppose we also have two functors, F, G : C → D. A natural transformation from F to G, which we'll call η maps every object x in C to an arrow η[x] : F(x)
→ G(x). η[x] has the property that for every arrow a : x → y in C, η[y] º F(a) = G(a) º η[x]. If this is true, we call η[x] the component of η for (or at) x.
That paragraph is a bit of a whopper to interpret. Fortunately, we can draw a diagram to help illustrate what that means. The following diagram commutes if η has the property described in that paragraph:
I think this is one of the places where the diagrams really help. We're talking about a relatively straightforward property here, but it's very confusing to write about in equational form. But given
the commutative diagram, you can see that it's not so hard: the path η[y] º F(a) and the path G(a) º η[x] compose to the same thing: that is, the transformation η hasn't changed the structure
expressed by the morphisms.
And that's precisely the point of the natural transformation: it's a way of showing the relationships between different descriptions of structures - just the next step up the ladder. The basic
morphisms of a category express the structure of the category; functors express the structure of relationships between categories; and natural transformations express the structure of relationships
between relationships.
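You can check a naturality square concretely in the category of Python types and functions. Here F is the list functor, G is an option-like functor, and η is "take the head of the list" (all of the names are illustrative):

```python
# F = list, G = "first element or None"; η[x] = head : F(x) -> G(x).
# Naturality: η[y] º F(a) == G(a) º η[x] for every arrow a : x -> y.

def fmap_list(a):                 # F(a) : list of x -> list of y
    return lambda xs: [a(x) for x in xs]

def fmap_opt(a):                  # G(a) : optional x -> optional y
    return lambda m: None if m is None else a(m)

def head(xs):                     # η[x] : F(x) -> G(x)
    return xs[0] if xs else None

a = lambda n: n * 2               # an arrow a : int -> int
xs = [3, 1, 4]

# both paths around the square agree
print(head(fmap_list(a)(xs)) == fmap_opt(a)(head(xs)))  # True
```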
Of course, this being a discussion of category theory, we can't get any further without some definitions. To get to some of the interesting material that involves things like natural transformations,
we need to know about a bunch of standard constructions: initial and final objects, products, exponentials... Then we'll use those basic constructs to build some really fascinating constructs. That's
where things will get really fun.
So let's start with initial and final objects.
An initial object is a pretty simple idea: it's an object with exactly one arrow to each of the other objects in the category. To be formal, given a category C, an object o ∈ Obj(C) is an initial object if and only if ∀ b ∈ Obj(C), there is exactly one arrow f : o → b. We generally write 0 for the initial object in a category. Similarly, there's a dual concept of a terminal object 1, which is an object for which there's exactly one arrow from every object in the category to it.
Given two objects in a category, if they're both initial, they must be isomorphic. It's pretty easy to prove: here's the sketch. Remember the definition of isomorphism in category theory. An isomorphism is an arrow f : a → b, where there is an arrow g : b → a such that g º f = 1[a] and f º g = 1[b]. If an object is initial, then there's an arrow from it to every other object --- including the other initial object. And there's an arrow back, because the other one is initial. The two arrows compose to arrows from each initial object to itself; since the only arrow from an initial object to itself is its identity, those composites are identities, and the two arrows form an isomorphism.
Now, let's move on to categorical products. Categorical products define the product of two objects in a category. The basic concept is simple - it's a generalization of cartesian product of two sets.
It's important because products are one of the major ways of building complex structures using simple categories.
Given a category C, and two objects a and b of C, the categorical product consists of:
• An object p, often written a × b;
• two arrows π_1 and π_2, where π_1 : p → a and π_2 : p → b.
• a "pairing" operation, which for every object c, maps the pair of arrows f : c → a and g : c → b
to an arrow ⟨f, g⟩ : c → p, where ⟨f, g⟩ has the
following properties:
1. π_1 ∘ ⟨f, g⟩ = f
2. π_2 ∘ ⟨f, g⟩ = g
3. ⟨π_1 ∘ h, π_2 ∘ h⟩ = h for every arrow h : c → p
The first two of those properties are the separation arrows, to get from the product to its components; and the third is the merging arrow, to get from the components to the product. We can say the
same thing about the relationships in the product in an easier way using a commutative diagram:
One important thing to understand is that categorical products do not have to exist. This definition does not say that, given any two objects a and b, an object a × b is a member of the category. What it says is what the categorical product
looks like if it exists. If, for a given pair a and b of objects, there is an object that meets this definition, then the product of a and b exists in the category. If not, it doesn't. For many
categories, the products don't exist for some or even all of the objects in the category. But as we'll see later, the categories for which the products do exist have some really interesting properties.
Fun with Functors
So far, we've looked at the minimal basics of categories: what they are, and how to categorize the kinds of arrows that exist in categories in terms of how they compose with other arrows. Just that
much is already enlightening about the nature of category theory: the focus is always on composition.
But to get to really interesting stuff, we need to build up a bit more, so that we can look at more interesting constructs. So now, we're going to look at functors. Functors are one of the most
fundamental constructions in category theory: they give us the ability to create multi-level constructions.
What's a functor? Well, it's basically a structure-preserving mapping between categories. So what does that actually mean? Let's be a bit formal:
A functor F from a category C to a category D is a mapping from C to D that:
• Maps each object o of C to an object F(o) of D.
• Maps each arrow f : a → b in C to an arrow F(f) : F(a) → F(b) in D, where:
□ F(1_a) = 1_{F(a)}. (Identity is preserved by the functor mapping of morphisms.)
□ F(g ∘ f) = F(g) ∘ F(f). (Commutativity is preserved by the functor mapping of morphisms.)
Note: The original version of this post contained a major typo. In the second condition on functors, the "n" and the "o" were reversed. With them in this direction, the definition is actually the
definition of something called a covariant functor. Alas, I can't even pretend that I mixed up covariant and contravariant functors; the error wasn't nearly so intelligent. I just accidentally
reversed the symbols, and the result happened to make sense in the wrong way.
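For programmers, the most familiar instance is the list functor: mapping a function over a list. A quick Python sketch (my example, not the post's) checks both functor laws:

```python
# The list functor: list_map sends f : A -> B to a function
# List[A] -> List[B], and preserves identity and composition.
def list_map(f):
    return lambda xs: [f(x) for x in xs]

identity = lambda x: x
compose = lambda g, f: (lambda x: g(f(x)))

xs = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: x * 10

# Law 1: F(id) = id
assert list_map(identity)(xs) == xs
# Law 2: F(g . f) = F(g) . F(f)
assert list_map(compose(g, f))(xs) == list_map(g)(list_map(f)(xs))
```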
That's the standard textbook gunk for defining a functor. But if you look back at the original definition of a category, you should notice that this looks familiar. In fact, it's almost identical to
the definition of the necessary properties of arrows!
We can make functors much easier to understand by talking about them in the language of categories themselves. Functors are really nothing but morphisms - they're morphisms in a category of categories.
There's a kind of category, called a small category. (I happen to dislike the term "small" category, but I don't get a say!) A small category is a category whose collections of objects and arrows are
sets, not proper classes.
(As a quick reminder: in set theory, a class is a collection of sets that can be defined by a non-paradoxical property that all of its members share. Some classes are sets of sets; some classes are
not sets; they lack some of the required properties of sets - but still, the class is a collection with a well-defined, non-paradoxical, unambiguous property. If a class isn't a set of sets, but just
a collection that isn't a set, then it's called a proper class.)
Any category whose collections of objects and arrows are sets, not proper classes, is called a small category. Small categories are, basically, categories that are well-behaved - meaning that their
collections of objects and arrows don't have any of the obnoxious properties that would prevent them from being sets.
The small categories are, quite beautifully, the objects of a category called Cat. (For some reason, category theorists like three-letter labels.) The arrows of Cat are all functors - functors really
are just morphisms between categories. Once you wrap your head around that, the meaning of a functor, and the meaning of a structure-preserving transformation, become extremely easy to understand.
Functors come up over and over again, all over mathematics. They're an amazingly useful notion. I was looking for a list of examples of things that you can describe using functors, and found a really
wonderful list on Wikipedia. I highly recommend following that link and taking a look at the list. I'll just mention one particularly interesting example: groups and group actions.
If you've been reading GM/BM for a very long time, you'll remember my posts on group theory. In a very important sense, the entire point of group theory is to study symmetry. But working from a set
theoretic base, it takes a lot of work to get to the point where you can actually define symmetry. It took many posts to build up the structure - not to present set theory, but just to present the
set theoretic constructs that you need to define what symmetry means, and how a symmetric transformation was nothing but a group action. Category theory makes that so much easier that it's downright
dazzling. Ready?
Every group can be represented as a category with a single object. A functor from the category of a group to the category of Sets is a group action on the set that is the target of the functor. Poof!
Since symmetry means structure-preserving transformation; and a functor is a structure preserving transformation - well, they're almost the same thing. The functor is an even more general abstraction
of that concept: group symmetry is just one particular case of a functor transformation. Once you get functors, understanding symmetry is easy. And so are lots of other things.
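Here's a tiny Python sketch of that idea (my own illustration, with an arbitrarily chosen action): the cyclic group Z/3 as a one-object category whose arrows are the group elements, and a functor into Set whose functor laws are precisely the group-action axioms.

```python
# Z/3 as a one-object category: the arrows are the group elements
# {0, 1, 2}, and composition of arrows is addition mod 3.
def compose(g, h):
    return (g + h) % 3

# The functor F sends the single object to the set {0,...,5} and
# each arrow g to a permutation of it ("rotate by 2*g mod 6").
def F(g):
    return lambda x: (x + 2 * g) % 6

S = range(6)
# Functor law 1: the identity arrow (0) maps to the identity map.
assert all(F(0)(x) == x for x in S)
# Functor law 2: F(g . h) = F(g) . F(h) -- exactly the group-action
# axiom (g*h).x == g.(h.x).
assert all(F(compose(g, h))(x) == F(g)(F(h)(x))
           for g in range(3) for h in range(3) for x in S)
```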
And of course, you can always carry these things further. There is a category of functors themselves; and notions which can be most easily understood in terms of functors operating on the category of categories.
This last bit should make it clear why category theory is affectionately known as abstract nonsense. Category theory operates at a level of abstraction where almost anything can be wrapped up in it;
and once you've wrapped something up in a category, almost anything you can do with it can itself be wrapped up as a category - levels upon levels, categories of categories, categories of functors on
categories of functors on categories, ad infinitum. And yet, it makes sense. It captures a useful, comprehensible notion. All that abstraction, to the point where it seems like nothing could possibly
come out of it. And then out pops a piece of beautiful crystal. It's really remarkable.
Category Diagrams
One of the most unusual things about category theory that I really love is category diagrams. In category theory, many things that would be expressed as equations or complex formulae in most
mathematical formalisms can be presented as diagrams in category theory. If you are, like me, a very visual thinker, then category diagrams can present information in a remarkably clear form - and
the categorical form of many statements of proofs can be much clearer than the alternatives because it can be presented in diagram form.
A category diagram is a directed graph, where the nodes are objects from a category, and the edges are morphisms. Category theorists say that a graph commutes if, for any two paths through arrows in
the diagram from node A to node B, the composition of all edges from the first path is equal to the composition of all edges from the second path. (But it's important to note that you do need to be
careful here: merely because you can draw a diagram doesn't mean that it necessarily commutes, just like being able to write an equation doesn't mean that the equation is true! You do need to show
that your diagram is correct and commutes.)
As usual, an example will make that clearer.
This diagram is a way of expressing the associativity property of morphisms: h ∘ (g ∘ f) = (h ∘ g) ∘ f. The way that the diagram illustrates this is: g ∘ f is the morphism from A to C. When we compose that with h, we wind up at D.
Alternatively, h ∘ g is the arrow from B to D; if we compose that with f, we wind up at D. The two paths h ∘ (g ∘ f) and (h ∘ g) ∘ f are both paths from A to D, therefore if the diagram commutes, they must be equal. And the
arrows on the diagram are all valid arrows, arranged in connections that do fit the rules of composition correctly, so the diagram does commute.
Let's look at one more diagram, which we'll use to define an interesting concept, the principal morphism between two objects. The principal morphism is a single arrow from A to B such that any
composition of morphisms that goes from A to B will end up being equivalent to it.
In diagram form, a morphism m is principal if, for every endomorphic arrow f : A → A and every arrow g : A → B, the following diagram commutes:
In words, this says that m is a principal morphism if for every endomorphic arrow f, and for every arrow g from A to B, g is the result of composing f and m. There's also something interesting about this
diagram that you should notice: A appears twice in the diagram! It's the same object; we just draw it in two places to make the commutation pattern easier to see. A single object can appear in a
diagram as many times as you want to, to make the pattern of commutation easy to see. When you're looking at a diagram, you need to be a bit careful to read the labels to make sure you know what it
means.
One more definition by diagram: a pair of arrows (s, r), with s : A → B and r : B → A, is a retraction pair, and A is a retract of B, if the following diagram commutes:
That is, (s, r) is a retraction pair if r ∘ s = 1_A.
Rectangular Window Transform (Cont'd)
Next | Prev | Up | Top | JOS Index | JOS Pubs | JOS Home | Search
Above, we found the rectangular window transform to be the aliased sinc function:
W_R(ω) = M · asinc_M(ω), where asinc_M(ω) ≜ sin(Mω/2) / (M sin(ω/2)).
This (real) result is for the zero-centered rectangular window. For the causal case, a linear phase term appears:
W_R(ω) = e^{-jω(M-1)/2} · M · asinc_M(ω).
As the sampling rate goes to infinity, the aliased sinc function approaches the regular sinc function, sinc(x) ≜ sin(πx)/(πx).
More generally, we may plot both the magnitude and phase of the window transform versus frequency:
In audio work, we more typically plot the window transform magnitude on a decibel (dB) scale:
Since the DTFT of the rectangular window approximates the sinc function, it should ``roll off'' at approximately 6 dB per octave, as verified in the log-log plot below:
As the sampling rate approaches infinity, the rectangular-window transform (the aliased sinc function) approaches the true sinc function; the aliasing in the frequency domain is due to sampling in the time domain.
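As a sanity check of the shape described above, here is a small Python computation (my reconstruction; it assumes the standard closed form |W(ω)| = |sin(Mω/2) / sin(ω/2)| for the length-M rectangular window's transform magnitude):

```python
import math

# Magnitude of the length-M rectangular window's DTFT,
# |W(w)| = |sin(w*M/2) / sin(w/2)|, i.e. M times the aliased sinc.
def rect_window_dtft_mag(w, M):
    if abs(math.sin(w / 2)) < 1e-12:   # the limit at w = 0 is M
        return float(M)
    return abs(math.sin(w * M / 2) / math.sin(w / 2))

M = 8
# At DC the magnitude is M (the window samples sum to M)...
assert abs(rect_window_dtft_mag(0.0, M) - M) < 1e-9
# ...and it vanishes at the harmonic frequencies 2*pi*k/M.
assert rect_window_dtft_mag(2 * math.pi / M, M) < 1e-9
```

Evaluating this on a fine frequency grid reproduces the familiar main-lobe/side-lobe picture whose side lobes roll off at roughly 6 dB per octave.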
Elmhurst, NY Algebra 2 Tutor
Find an Elmhurst, NY Algebra 2 Tutor
...I try to guide students to understanding the material by trying to ground problems in real life situations: you can see whether an answer makes sense based on some sort of intuition, rather
than just going through the algorithm and hoping you don't mess up. I'm a big fan of unit analysis, where ...
18 Subjects: including algebra 2, calculus, trigonometry, SAT math
...Computer Science was my course of study in college: I received an engineering degree in Computer Science from Princeton University, and programming was obviously a huge part of my education.
When tutoring, I particularly enjoy explaining concepts like data structures and basic algorithms in a us...
37 Subjects: including algebra 2, chemistry, physics, calculus
...But, please keep in mind that I am working really hard to finish my PhD right now, so I am available for appointments after 7pm on weeknights and after 10am on most weekends. I request 24 hours
notice for cancellation. Also, it's worth noting that my teaching approaches vary according to the needs of the student.
17 Subjects: including algebra 2, calculus, geometry, biology
...There, I took a course in Sociology. While at Beloit College, I continued my study of sociology, taking a general survey, a class in social statistics, and a number of classes on race, gender,
and culture. I applied and was admitted (I declined the offer) for Ph.D. level study in Sociology at the University of Virginia and Master's level study at Portland State University.
50 Subjects: including algebra 2, English, reading, algebra 1
...My patient, polite and easy-going manner coupled with my ability to model various methods for understanding and “seeing” things, accentuates my success as a teacher and tutor. My teaching
strategies include giving a mixed review of problems. I also always have the student model and explain what they've learned, showing their process for deriving an answer.
16 Subjects: including algebra 2, chemistry, calculus, geometry
all 4 comments
[–]Galvnayr1 point2 points3 points ago
sorry, this has been archived and can no longer be voted on
What you're asked to do is find the null space (kernel) of the matrix A.
That is, find all vectors x such that Ax = 0.
The easiest way to do this is to first put A into Reduced Row Echelon Form (RREF) like so. You will see the second row is a scalar multiple of the first row, so this row is dependent.
The matrix above is in RREF.
Now take your RREF matrix and write out the x1, x2, x3 and x4 rows. like this.
x1 - 5x2 -5x3 +2x4 = 0
x2 = 0
x3 = 0 (this corresponds to the third row which doesn't exist)
x4 = 0 (same as above)
Now, look at your free variables - x2, x3 and x4. They can be anything as no matter what that row still gets mapped to zero.
Now solve for x1 in terms of these three free variables.
x1 = 5x2+5x3-2x4
now write this as a vector (x1,x2,x3,x4)
and you get (5x2+5x3-2x4,x2,x3,x4)
finally, we write this as a linear combination of our free variables x2, x3, x4:
x = x2(5,1,0,0) + x3(5,0,1,0) + x4(-2,0,0,1)
This is your answer, a basis for the kernel of A and ALL solutions of Ax=0.
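The basis above can be verified mechanically. A quick Python check (the original matrix A isn't reproduced in the thread, so this uses the stated nonzero RREF row; row-equivalent matrices have the same null space):

```python
# Verify the kernel basis against the RREF's single nonzero row,
# which encodes x1 - 5*x2 - 5*x3 + 2*x4 = 0.
row = [1, -5, -5, 2]
basis = [(5, 1, 0, 0), (5, 0, 1, 0), (-2, 0, 0, 1)]

for v in basis:
    # Each basis vector must satisfy the equation exactly.
    assert sum(r * x for r, x in zip(row, v)) == 0
```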
[–]rlee890 points1 point2 points ago
Now take your RREF matrix and write out the x1, x2, x3 and x4 rows. like this.
x1 - 5x2 -5x3 +2x4 = 0
x2 = 0
x3 = 0 (this corresponds to the third row which doesn't exist)
x4 = 0 (same as above)
This part doesn't work well if you have a matrix like
[–]rlee890 points1 point2 points ago
First, find the reduced row echelon form of the matrix. Let n denote the number of columns of this reduced matrix that do not contain a leading (pivot) entry (or equivalently, the number of columns minus the row rank).
Then multiply the reduced A by the column vector [x1;x2;x3;x4]. This will give you a system of equations of the form c1 x1 + c2 x2 + ... = 0.
Isolate as many variables as possible while minimizing the number of variables you are expressing them in terms of. For this problem, you just need to find x1 of the form x1 = ? x2 + ? x3 + ? x4.
Then substitute the values of the isolated variables back into the column vector [x1;x2;x3;x4]. If you did the last step correctly, you will have a vector expressed using only n variables.
Separate the column vector into n vectors where each vector contains only one variable. This should give you something like [?x2;x2;0;0]+[?x3;0;x3;0]+[?x4;0;0;x4]. Now just factor the variable out of
the vector and you will have produced the form you desire.
Longest Induced Cycles in Circulant Graphs
In this paper we study the length of the longest induced cycle in the unit circulant graph $X_n = Cay({\Bbb Z}_n; {\Bbb Z}_n^*)$, where ${\Bbb Z}_n^*$ is the group of units in ${\Bbb Z}_n$. Using
residues modulo the primes dividing $n$, we introduce a representation of the vertices that reduces the problem to a purely combinatorial question of comparing strings of symbols. This representation
allows us to prove that the multiplicity of each prime dividing $n$, and even the value of each prime (if sufficiently large) has no effect on the length of the longest induced cycle in $X_n$. We
also see that if $n$ has $r$ distinct prime divisors, $X_n$ always contains an induced cycle of length $2^r+2$, improving the $r \ln r$ lower bound of Berrezbeitia and Giudici. Moreover, we extend
our results for $X_n$ to conjunctions of complete $k_i$-partite graphs, where $k_i$ need not be finite, and also to unit circulant graphs on any quotient of a Dedekind domain.
Equivariant homotopy theory: some history questions
I have sometimes wondered about the following:
(1) Who was the first articulate that in dealing with $G$-equivariant cohomology theories ($G$ a finite group or a compact Lie group), it is best to work in an $RO(G)$-graded context?
(1) Who was the first to articulate that in dealing with $G$-equivariant cohomology theories ($G$ a finite group or a compact Lie group), it is best to work in an $RO(G)$-graded context?
(3) At what time did these ideas first come to the surface? The seventies? The eighties? How much do they predate the Segal conjecture?
(Maybe I should thank Peter May in advance?)
ho.history-overview at.algebraic-topology
1 Suggestion: add the ho.history-and-overview tag – Yemon Choi Sep 3 '12 at 23:19
Thanks. I did it. – John Klein Sep 3 '12 at 23:41
1 I think Bredon was the father of equivariant cohomology. – Benjamin Steinberg Sep 4 '12 at 0:42
1 But did Bredon introduce the idea of grading by $RO(G)$? – John Klein Sep 4 '12 at 0:43
1 Answer
Well if you insist John :) (1) The first explicit formulation I know of is in the nice paper: Klaus Wirthmuller. Equivariant homology and duality. Manuscripta Math. 11 (1974), 373–390. He writes: "The ideas developed here partly originate from suggestions made by T. tom Dieck, who introduced me to the subject." They were thinking about equivariant Poincar\'e duality and about equivariant cobordism, which make $RO(G)$ grading inevitable. (2) That may be tom Dieck and may be me. I'm honestly not sure. I was explicitly using universes nonequivariantly in 1972, with good reason. I think tom Dieck may have at least implicitly used complete universes in his work on cobordism. Certainly I was using complete $G$-universes by some time around or before 1974-75. The question is confused by the fact that tom Dieck was in Chicago lecturing on equivariant things that year. See Tammo tom Dieck. The Burnside ring and equivariant stable homotopy. Lecture notes by Michael C. Bix. Department of Mathematics, University of Chicago, Chicago, Ill., 1975. (3) The Segal conjecture in its simplest form dates from 1970. However, early work on it was not based on equivariant stable homotopy theory, let alone $RO(G)$ grading. As late as 1983, Frank Adams wrote a paper ``Graeme Segal's Burnside ring conjecture'' in which he barely mentioned equivariant cohomotopy, and then a bit skeptically. There is a 1982 paper "Classifying $G$-spaces and the Segal conjecture'' by Lewis, McClure, and myself that proves the equivalence of the nonequivariant and equivariant versions of the Segal conjecture, before Carlsson's proof.
It is worth emphasizing that, in a sense, $RO(G)$-grading has no philosophical justification. Logically, grading should be on the equivariant Picard group which is considerably larger (at least under space level equivalence) or on the stable equivalence classes of representation spheres (and their negatives), which is considerably smaller.
6 I just saw the comment about Bredon. He introduced ordinary $\mathbf{Z}$-graded cohomology theories, in 1966 I believe. (I heard him talk about it that year). Most people outside core
algebraic topology mean Borel cohomology, not Bredon cohomology, when they say "equivariant cohomology'', and Borel certainly came earlier. Of course, neither considered $RO(G)$-grading.
– Peter May Sep 4 '12 at 1:11
2 Thanks Peter. You've given me a fairly thorough answer. – John Klein Sep 4 '12 at 1:28
how do I find the x and y intercepts of 6x-4y=12 and how do I write the equation in slope-intercept form?
put x=0 to find the y-intercept and y=0 to find the x-intercept
slope-intercept form? what does that look like? life already told you 90% of what to do. You just have to do it.
slope intercept form should look like y = mx + b, where m is ur slope and b is ur y-intercept, so:
rearranging then
just look: when x=0 you're gonna get the y-intercept and when y=0 you're gonna get the x-intercept
well, that's one of the 2! if you did it correctly. didn't check
x=0, -4y=12 y=-12/4=-3
similarly you can find for x
for finding x i would do the same as did for y?
exactly, just put y=0
x int = 2 y int is -3
#1: To find the x-intercept, plug in y = 0 and solve for x:
6x - 4y = 12 (start with the given equation)
6x - 4(0) = 12 (plug in y = 0)
6x - 0 = 12 (multiply -4 and 0 to get 0)
6x = 12 (simplify)
x = 12/6 (divide both sides by 6 to isolate x)
x = 2 (reduce)
So the x-intercept is (2, 0).
Now the y-intercept: plug in x = 0 and solve for y:
6(0) - 4y = 12 (plug in x = 0)
0 - 4y = 12 (multiply 6 and 0 to get 0)
-4y = 12 (simplify)
y = 12/(-4) (divide both sides by -4 to isolate y)
y = -3 (reduce)
So the y-intercept is (0, -3).
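A quick Python check (illustrative) that both intercepts satisfy the original equation:

```python
# Both intercepts must satisfy 6x - 4y = 12.
lhs = lambda x, y: 6 * x - 4 * y

assert lhs(2, 0) == 12    # x-intercept (2, 0)
assert lhs(0, -3) == 12   # y-intercept (0, -3)

# The slope-intercept form y = (3/2)x - 3 also satisfies the equation.
for x in range(-3, 4):
    assert lhs(x, 1.5 * x - 3) == 12
```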
Graph y = (3/2)x - 3 through the point (0, -3) and we can see that the point lies on the line. Since we know the equation has a slope of 3/2 and goes through the point (0, -3), this verifies our answer.
If this helped , Medal me ..
would that be how i could find the slope intercept?
yes indeed , ..
Mathematical Logic 1st edition
Foundations for Information Science
Details about this item
Mathematical Logic: Mathematical logic is a branch of mathematics that takes axiom systems and mathematical proofs as its objects of study. This book shows how it can also provide a foundation for
the development of information science and technology. The first five chapters systematically present the core topics of classical mathematical logic, including the syntax and models of first-order
languages, formal inference systems, computability and representability, and Gödel's theorems. The last five chapters present extensions and developments of classical mathematical logic, particularly
the concepts of version sequences of formal theories and their limits, the system of revision calculus, proschemes (formal descriptions of proof methods and strategies) and their properties, and the
theory of inductive inference. All of these themes contribute to a formal theory of axiomatization and its application to the process of developing information technology and scientific theories. The
book also describes the paradigm of three kinds of language environments for theories and it presents the basic properties required of a meta-language environment. Finally, the book brings these
themes together by describing a workflow for scientific research in the information era in which formal methods, interactive software and human invention are all used to their advantage. This book
represents a valuable reference for graduate and undergraduate students and researchers in mathematics, information science and technology, and other relevant areas of natural sciences. Its first
five chapters serve as an undergraduate text in mathematical logic and the last five chapters are addressed to graduate students in relevant disciplines.
Published by Birkhäuser Verlag GmbH.
1 Algebra Question. Please Help. (10 Points)?
Create a unique example of dividing a polynomial by a monomial and provide the simplified form. Explain, in complete sentences, the two ways used to simplify this expression and how you would check
your quotient for accuracy.
Please Help.
how unique does it have to be? (15x^6+10x^4-5x^2)/5x^2
so 3x^4+2x^2-1
May 23 at 4:1
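One easy accuracy check for that answer is plugging in a few values of x and confirming the quotient matches, e.g. in Python:

```python
# Spot-check: (15x^6 + 10x^4 - 5x^2) / (5x^2) should equal
# 3x^4 + 2x^2 - 1 for every nonzero x.
for x in [1, 2, 3, -2]:
    numerator = 15 * x**6 + 10 * x**4 - 5 * x**2
    quotient = 3 * x**4 + 2 * x**2 - 1
    assert numerator / (5 * x**2) == quotient
```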
(x^3-y^3)/(x-y) = (x-y)(x^2+xy+y^2)/(x-y) = x^2+xy+y^2
(Polynomial long division of x^3 - y^3 by x - y gives the same quotient, x^2 + xy + y^2.)
May 23 at 7:48
Imagining something to the power of fraction(1/2,1/3,3/4) on the number line
February 18th 2013, 04:12 PM #1
Feb 2013
Imagining something to the power of fraction(1/2,1/3,3/4) on the number line
It took me five minutes to learn latex but finally my question is here.
$x^2 = y$
$x = \sqrt{y}$
I am able to imagine $x^2$ on the number line in my mind. How do I imagine something to the power of 1/2 or any fraction on the number line?
Thank you
Re: Imagining something to the power of fraction(1/2,1/3,3/4) on the number line
You say "finally my question is here", but I see no question!
You say "I am able to imagine on the number line in my mind". What do you mean by that? If x= 2 then $x^2= 4$, a point on the number line, but for general x, $x^2$ can be any point representing a
non-negative number.
As for $\sqrt{x}= x^{1/2}$, if x= 4 then $\sqrt{x}= 2$ but, for any non-negative x, $\sqrt{x}$ can, again, be any point representing a non-negative number.
Re: Imagining something to the power of fraction(1/2,1/3,3/4) on the number line
$x^3 = x \times x \times x$
$x^\frac{1}{2}$ = ?? represented in the terms of product of x
I know this is basics but please provide the imaginative way to represent this
Re: Imagining something to the power of fraction(1/2,1/3,3/4) on the number line
$x^3 = x \times x \times x = y$ where y >= x >= 1
$x^\frac{1}{n} = ??? = t$ — then 1 <= t <= x, and t raised to the power of the fraction's denominator gives back x: $t^n = x$
example: $\sqrt{2} = 2^\frac{1}{2} \approx 1.414$, and $1.414^2 \approx 2$
I am still not able to get how the reverse of an operation can be represented in the same form
multiplication is the reverse of division. Here I see the reverse of exponentiation can be represented using exponentiation itself, but the operation is different and it's not exponentiation at all
Exponentiation - Wikipedia, the free encyclopedia.
nth root - Wikipedia, the free encyclopedia
In calculus, roots are treated as special cases of exponentiation, where the exponent is a fraction:
$\sqrt[n]{x} \,=\, x^{1/n}$ — why is this??
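A quick numeric illustration (my addition) of why this definition is the consistent choice: the power rule $(x^a)^b = x^{ab}$ forces $(x^{1/n})^n = x^1 = x$, which is exactly the defining property of the n-th root:

```python
import math

# With the definition t = x**(1/n), the power rule (x**a)**b == x**(a*b)
# gives t**n == x**1 == x -- so x**(1/n) behaves as the n-th root must.
for x in [2.0, 10.0, 0.5]:
    for n in [2, 3, 4]:
        t = x ** (1.0 / n)
        assert math.isclose(t ** n, x, rel_tol=1e-12)
```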
finding max min dimensions of cylinder that will give the biggest surface area.
i need to know the measurements(height and radius) of the cylinder that will give the biggest surface area, using the material 2m^2. It also has to fit a tennis racket of 32 cm width, and 68.5cm
so min is (32, 68.5) . the variables are h and r... how do you solve this? thank you!!
Is that exactly how the question was worded? | {"url":"http://mathhelpforum.com/calculus/86308-finding-max-min-dimensions-cylinder-will-give-biggest-surface-area-print.html","timestamp":"2014-04-20T22:35:17Z","content_type":null,"content_length":"4564","record_id":"<urn:uuid:bf088fc6-bef8-4fc1-b188-8b1e3d26ac43>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to implement new algebraic structures in Sage
The aim of this tutorial is to explain how one can benefit from Sage’s category framework and coercion model when implementing new algebraic structures. It is based on a worksheet created in 2011.
We illustrate the concepts of Sage’s category framework and coercion model by means of a detailed example, namely a toy implementation of fraction fields. The code is developed step by step, so that
the reader can focus on one detail in each part of this tutorial. The complete code can be found in the appendix.
Base classes
In Sage, a “Parent” is an object of a category and contains elements. Parents should inherit from sage.structure.parent.Parent and their elements from sage.structure.element.Element.
Sage provides appropriate sub–classes of Parent and Element for a variety of more concrete algebraic structures, such as groups, rings, or fields, and of their elements. But some old stuff in Sage
doesn’t use it. Volunteers for refactoring are welcome!
The parent
Since we wish to implement a special kind of fields, namely fraction fields, it makes sense to build on top of the base class sage.rings.ring.Field provided by Sage.
sage: from sage.rings.ring import Field
This base class provides a lot more methods than a general parent:
sage: [p for p in dir(Field) if p not in dir(Parent)]
['__div__', '__fraction_field', '__ideal_monoid', '__iter__',
'__pow__', '__rdiv__', '__rpow__', '__rxor__', '__xor__',
'_an_element', '_an_element_c', '_an_element_impl', '_coerce_',
'_coerce_c', '_coerce_impl', '_coerce_self', '_coerce_try',
'_default_category', '_gens', '_gens_dict',
'_has_coerce_map_from', '_ideal_class_', '_latex_names', '_list',
'_one_element', '_pseudo_fraction_field',
'_random_nonzero_element', '_richcmp', '_unit_ideal',
'_zero_element', '_zero_ideal', 'algebraic_closure',
'base_extend', 'cardinality', 'class_group', 'coerce_map_from_c',
'coerce_map_from_impl', 'content', 'divides', 'extension',
'fraction_field', 'frobenius_endomorphism', 'gcd', 'gen', 'gens',
'get_action_c', 'get_action_impl', 'has_coerce_map_from_c',
'has_coerce_map_from_impl', 'ideal', 'ideal_monoid',
'integral_closure', 'is_commutative', 'is_field', 'is_finite',
'is_integral_domain', 'is_integrally_closed', 'is_noetherian',
'is_prime_field', 'is_ring', 'is_subring', 'is_zero',
'krull_dimension', 'list', 'ngens', 'one', 'one_element',
'order', 'prime_subfield', 'principal_ideal', 'quo', 'quotient',
'quotient_ring', 'random_element', 'unit_ideal', 'zero',
'zero_element', 'zero_ideal', 'zeta', 'zeta_order']
The following is a very basic implementation of fraction fields, that needs to be complemented later.
sage: from sage.structure.unique_representation import UniqueRepresentation
sage: class MyFrac(UniqueRepresentation, Field):
....:     def __init__(self, base):
....:         if base not in IntegralDomains():
....:             raise ValueError, "%s is no integral domain"%base
....:         Field.__init__(self, base)
....:     def _repr_(self):
....:         return "NewFrac(%s)"%repr(self.base())
....:     def base_ring(self):
....:         return self.base().base_ring()
....:     def characteristic(self):
....:         return self.base().characteristic()
This basic implementation is formed by the following steps:
• Any ring in Sage has a base and a base ring. The “usual” fraction field of a ring \(R\) has the base \(R\) and the base ring R.base_ring():
sage: Frac(QQ['x']).base(), Frac(QQ['x']).base_ring()
(Univariate Polynomial Ring in x over Rational Field, Rational Field)
Declaring the base is easy: We just pass it as an argument to the field constructor.
sage: Field(ZZ['x']).base()
Univariate Polynomial Ring in x over Integer Ring
We implement a separate method returning the base ring.
• Python uses double–underscore methods for arithmetic operations and string representations. Sage’s base classes often have a default implementation, and one is requested to implement the single–underscore methods _repr_, and similarly _add_, _mul_, etc.
• You are encouraged to make your parent “unique”. That’s to say, parents should only evaluate equal if they are identical. Sage provides frameworks to create unique parents. We use here the most
easy one: Inheriting from the class sage.structure.unique_representation.UniqueRepresentation is enough. Making parents unique can be quite important for an efficient implementation, because the
repeated creation of “the same” parent would take a lot of time.
• Fraction fields are only defined for integral domains. Hence, we raise an error if the given ring does not belong to the category of integral domains. This is our first use case of categories.
• Last, we add a method that returns the characteristic of the field. We don’t go into details, but some automated tests that we study below implicitly rely on this method.
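The "unique parent" behaviour mentioned above can be sketched in plain Python. This is a toy cache, not Sage's actual UniqueRepresentation implementation, but it shows the idea: parents constructed from equal arguments are the *same* object, so equality testing reduces to identity testing.

```python
class CachedParent:
    # shared cache mapping construction arguments to the created instance
    _cache = {}

    def __new__(cls, *args):
        key = (cls, args)
        if key not in cls._cache:
            cls._cache[key] = super().__new__(cls)
        return cls._cache[key]

    def __init__(self, *args):
        self.args = args

# same arguments yield the identical object; different arguments do not
assert CachedParent(1, 2) is CachedParent(1, 2)
assert CachedParent(1, 2) is not CachedParent(3)
```

Sage's real implementation additionally normalises keyword arguments and cooperates with pickling, but the caching principle is the same.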
We see that our basic implementation correctly refuses a ring that is not an integral domain:
sage: MyFrac(ZZ['x'])
NewFrac(Univariate Polynomial Ring in x over Integer Ring)
sage: MyFrac(Integers(15))
Traceback (most recent call last):
ValueError: Ring of integers modulo 15 is no integral domain
Inheritance from UniqueRepresentation automatically provides our class with pickling, preserving the unique parent condition. If we had defined the class in some external module or in an interactive
session, pickling would work immediately.
However, for making the following example work in Sage’s doctesting framework, we need to assign our class as an attribute of the __main__ module, so that the class can be looked up during
sage: import __main__
sage: __main__.MyFrac = MyFrac
sage: loads(dumps(MyFrac(ZZ))) is MyFrac(ZZ)
In the following sections, we will successively add or change details of MyFrac. Rather than giving a full class definition in each step, we define new versions of MyFrac by inheriting from the
previously defined version of MyFrac. We believe this will help the reader to focus on the single detail that is relevant in each section.
The complete code can be found in the appendix.
The elements
We use the base class sage.structure.element.FieldElement. Note that in the creation of field elements it is not tested that the given parent is a field:
sage: from sage.structure.element import FieldElement
sage: FieldElement(ZZ)
Generic element of a structure
Our toy implementation of fraction field elements is based on the following considerations:
• A fraction field element is defined by numerator and denominator, which both need to be elements of the base. There should be methods returning numerator resp. denominator.
• The denominator must not be zero, and (provided that the base is an ordered ring) we can make it non-negative, without loss of generality. By default, the denominator is one.
• The string representation is returned by the single–underscore method _repr_. In order to make our fraction field elements distinguishable from those already present in Sage, we use a different
string representation.
• Arithmetic is implemented in the single–underscore methods _add_, _mul_, etc. We do not override the default double–underscore __add__, __mul__, since otherwise we could not use Sage’s coercion model.
• In the single underscore methods and in __cmp__, we can assume that both arguments belong to the same parent. This is one benefit of the coercion model. Note that __cmp__ should be provided,
since otherwise comparison does not work in the way expected in Python:
sage: class Foo(sage.structure.element.Element):
....:     def __init__(self, parent, x):
....:         self.x = x
....:     def _repr_(self):
....:         return "<%s>"%self.x
sage: a = Foo(ZZ, 1)
sage: b = Foo(ZZ, 2)
sage: cmp(a,b)
Traceback (most recent call last):
NotImplementedError: BUG: sort algorithm for elements of 'None' not implemented
• When constructing new elements as the result of arithmetic operations, we do not directly name our class, but we use self.__class__. Later, this will come in handy.
This gives rise to the following code:
sage: class MyElement(FieldElement):
....:     def __init__(self, parent, n, d=None):
....:         B = parent.base()
....:         if d is None:
....:             d = B.one_element()
....:         if n not in B or d not in B:
....:             raise ValueError("Numerator and denominator must be elements of %s"%B)
....:         # Numerator and denominator should not just be "in" B,
....:         # but should be defined as elements of B
....:         d = B(d)
....:         n = B(n)
....:         if d==0:
....:             raise ZeroDivisionError("The denominator must not be zero")
....:         if d<0:
....:             self.n = -n
....:             self.d = -d
....:         else:
....:             self.n = n
....:             self.d = d
....:         FieldElement.__init__(self, parent)
....:     def numerator(self):
....:         return self.n
....:     def denominator(self):
....:         return self.d
....:     def _repr_(self):
....:         return "(%s):(%s)"%(self.n,self.d)
....:     def __cmp__(self, other):
....:         return cmp(self.n*other.denominator(), other.numerator()*self.d)
....:     def _add_(self, other):
....:         C = self.__class__
....:         D = self.d*other.denominator()
....:         return C(self.parent(), self.n*other.denominator()+self.d*other.numerator(), D)
....:     def _sub_(self, other):
....:         C = self.__class__
....:         D = self.d*other.denominator()
....:         return C(self.parent(), self.n*other.denominator()-self.d*other.numerator(), D)
....:     def _mul_(self, other):
....:         C = self.__class__
....:         return C(self.parent(), self.n*other.numerator(), self.d*other.denominator())
....:     def _div_(self, other):
....:         C = self.__class__
....:         return C(self.parent(), self.n*other.denominator(), self.d*other.numerator())
Features and limitations of the basic implementation
Thanks to the single underscore methods, some basic arithmetic works, if we stay inside a single parent structure:
sage: P = MyFrac(ZZ)
sage: a = MyElement(P, 3, 4)
sage: b = MyElement(P, 1, 2)
sage: a+b, a-b, a*b, a/b
((10):(8), (2):(8), (3):(8), (6):(4))
sage: a-b == MyElement(P, 1, 4)
True
We didn’t implement exponentiation—but it just works:

sage: a^3
(27):(64)
There is a default implementation of element tests. We can already do

sage: a in P
True

since \(a\) is defined as an element of \(P\). However, we cannot yet verify that the integers are contained in the fraction field of the ring of integers. It does not even give a wrong answer, but results in an error:
sage: 1 in P
Traceback (most recent call last):
We will take care of this later.
Categories in Sage
Sometimes the base classes do not reflect the mathematics: The set of \(m\times n\) matrices over a field, in general, forms no more than a vector space. Hence, this set (called MatrixSpace) is not
implemented on top of sage.rings.ring.Ring. However, if \(m=n\), then the matrix space is an algebra, thus, is a ring.
From the point of view of Python base classes, both cases are the same:
sage: MS1 = MatrixSpace(QQ,2,3)
sage: isinstance(MS1, Ring)
False
sage: MS2 = MatrixSpace(QQ,2)
sage: isinstance(MS2, Ring)
False
Sage’s category framework can differentiate the two cases:
sage: Rings()
Category of rings
sage: MS1 in Rings()
False
sage: MS2 in Rings()
True
Surprisingly, MS2 has more methods than MS1, even though their classes coincide:
sage: import inspect
sage: len([s for s in dir(MS1) if inspect.ismethod(getattr(MS1,s,None))])
sage: len([s for s in dir(MS2) if inspect.ismethod(getattr(MS2,s,None))])
sage: MS1.__class__ is MS2.__class__
True
Below, we will explain how this can be taken advantage of.
It is no surprise that our parent \(P\) defined above knows that it belongs to the category of fields, as it is derived from the base class of fields.
sage: P.category()
Category of fields
However, we could choose a smaller category, namely the category of quotient fields.
Why should one choose a category?
One can provide default methods for all objects of a category, and for all elements of such objects. Hence, the category framework is a way to inherit useful stuff that is not present in the base
classes. These default methods do not rely on implementation details, but on mathematical concepts.
In addition, the categories define test suites for their objects and elements—see the last section. Hence, one also gets basic sanity tests for free.
How does the category framework work?
Abstract base classes for the objects (“parent_class”) and the elements of objects (“element_class”) are provided by attributes of the category. During initialisation of a parent, the class of the
parent is dynamically changed into a sub–class of the category’s parent class. Likewise, sub–classes of the category’s element class are available for the creation of elements of the parent, as
explained below.
A dynamic change of classes does not work in Cython. Nevertheless, method inheritance still works, by virtue of a __getattr__ method.
It is strongly recommended to use the category framework both in Python and in Cython.
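The dynamic change of classes described above can be sketched in plain Python (all names here are hypothetical, not Sage's actual classes): a combined class is built at runtime with type(), and the parent's __class__ is replaced by it, so methods of the category's parent class are inherited without being re-implemented.

```python
class Fields_parent_class:
    # stands in for a category's parent_class, carrying default methods
    def is_field(self):
        return True

class MyParent:
    # the hand-written implementation class
    pass

# build the combined class at runtime, roughly as Sage does in Parent.__init__
MyParent_with_category = type(
    "MyParent_with_category", (MyParent, Fields_parent_class), {}
)

P = MyParent()
P.__class__ = MyParent_with_category   # the dynamic change of class

assert isinstance(P, MyParent)         # still an instance of its own class
assert P.is_field()                    # ... and inherits the category method
```

As the text notes, Cython classes cannot be reassigned this way, which is why Sage falls back to a __getattr__-based lookup there.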
Let us see whether there is any gain in choosing the category of quotient fields instead of the category of fields:
sage: QuotientFields().parent_class, QuotientFields().element_class
(<class 'sage.categories.quotient_fields.QuotientFields.parent_class'>,
<class 'sage.categories.quotient_fields.QuotientFields.element_class'>)
sage: [p for p in dir(QuotientFields().parent_class) if p not in dir(Fields().parent_class)]
[]
sage: [p for p in dir(QuotientFields().element_class) if p not in dir(Fields().element_class)]
['_derivative', 'denominator', 'derivative', 'factor',
'numerator', 'partial_fraction_decomposition']
So, there is no immediate gain for our fraction fields, but additional methods become available to our fraction field elements. Note that some of these methods are place-holders: there is no default implementation, but it is required (respectively optional) to implement these methods:
sage: QuotientFields().element_class.denominator
<abstract method denominator at ...>
sage: from sage.misc.abstract_method import abstract_methods_of_class
sage: abstract_methods_of_class(QuotientFields().element_class)['optional']
['_add_', '_mul_']
sage: abstract_methods_of_class(QuotientFields().element_class)['required']
['__nonzero__', 'denominator', 'numerator']
Hence, when implementing elements of a quotient field, it is required to implement methods returning the denominator and the numerator, and a method that tells whether the element is nonzero, and in
addition, it is optional (but certainly recommended) to provide some arithmetic methods. If one forgets to implement the required methods, the test suites of the category framework will complain—see the last section.
Implementing the category framework for the parent
We simply need to declare the correct category by an optional argument of the field constructor, where we provide the possibility to override the default category:
sage: from sage.categories.quotient_fields import QuotientFields
sage: class MyFrac(MyFrac):
....:     def __init__(self, base, category=None):
....:         if base not in IntegralDomains():
....:             raise ValueError, "%s is no integral domain"%base
....:         Field.__init__(self, base, category=category or QuotientFields())
When constructing instances of MyFrac, their class is dynamically changed into a new class called MyFrac_with_category. It is a common sub–class of MyFrac and of the category’s parent class:
sage: P = MyFrac(ZZ)
sage: type(P)
<class '__main__.MyFrac_with_category'>
sage: isinstance(P, MyFrac)
True
sage: isinstance(P, QuotientFields().parent_class)
True
The fraction field \(P\) inherits additional methods. For example, the base class Field does not have a method sum. But \(P\) inherits such a method from the category of commutative additive
monoids—see sum():
sage: P.sum.__module__
'sage.categories.commutative_additive_monoids'
We have seen above that we can add elements. Nevertheless, the sum method does not work, yet:
sage: a = MyElement(P, 3, 4)
sage: b = MyElement(P, 1, 2)
sage: c = MyElement(P, -1, 2)
sage: P.sum([a, b, c])
Traceback (most recent call last):
The reason is that the sum method starts with the return value of P.zero_element(), which defaults to P(0)—but the conversion of integers into P is not implemented, yet.
Implementing the category framework for the elements
Similar to what we have seen for parents, a new class is dynamically created that combines the element class of the parent’s category with the class that we have implemented above. However, the
category framework is implemented in a different way for elements than for parents:
• We provide the parent \(P\) (or its class) with an attribute called “Element”, whose value is a class.
• The parent automatically obtains an attribute P.element_class, that subclasses both P.Element and P.category().element_class.
Hence, for providing our fraction fields with their own element classes, we just need to add a single line to our class:
sage: class MyFrac(MyFrac):
....:     Element = MyElement
This little change provides several benefits:
• We can now create elements by simply calling the parent:
sage: P = MyFrac(ZZ)
sage: P(1), P(2,3)
((1):(1), (2):(3))
• There is a method zero_element returning the expected result:
sage: P.zero_element()
(0):(1)
• The sum method mentioned above suddenly works:
sage: a = MyElement(P, 9, 4)
sage: b = MyElement(P, 1, 2)
sage: c = MyElement(P, -1, 2)
sage: P.sum([a,b,c])
(36):(16)
What did happen behind the scenes to make this work?
We provided P.Element, and thus obtain P.element_class, which is a lazy attribute. It provides a dynamic class, which is a sub–class of both MyElement defined above and of P.category().element_class:
sage: P.__class__.element_class
<sage.misc.lazy_attribute.lazy_attribute object at ...>
sage: P.element_class
<class '__main__.MyFrac_with_category.element_class'>
sage: type(P.element_class)
<class 'sage.structure.dynamic_class.DynamicMetaclass'>
sage: issubclass(P.element_class, MyElement)
True
sage: issubclass(P.element_class, P.category().element_class)
True
The default __call__ method of \(P\) passes the given arguments to P.element_class, adding the argument parent=P. This is why we are now able to create elements by calling the parent.
In particular, these elements are instances of that new dynamic class:
sage: type(P(2,3))
<class '__main__.MyFrac_with_category.element_class'>
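The mechanism just described can be sketched in plain Python (names here are hypothetical, simplified from Sage's actual lazy-attribute machinery): the parent combines its Element attribute with the category's element class on demand, and calling the parent builds elements of that combined class.

```python
class CategoryElementClass:
    # stands in for P.category().element_class
    def category_method(self):
        return "inherited from the category"

class MyElementImpl:
    # stands in for the hand-written Element class
    def __init__(self, parent, value):
        self.parent, self.value = parent, value

class MyParent:
    Element = MyElementImpl

    @property
    def element_class(self):
        # combined dynamically on access; Sage caches this as a lazy attribute
        return type("element_class", (self.Element, CategoryElementClass), {})

    def __call__(self, value):
        # the default __call__ passes its arguments on to element_class
        return self.element_class(self, value)

P = MyParent()
e = P(5)
assert isinstance(e, MyElementImpl)
assert e.category_method() == "inherited from the category"
```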
All elements of \(P\) should use the element class. In order to make sure that this also holds for the result of arithmetic operations, we created them as instances of self.__class__ in the
arithmetic methods of MyElement.
P.zero_element() defaults to returning P(0) and thus returns an instance of P.element_class. Since P.sum([...]) starts the summation with P.zero_element() and the class of the sum only depends on the
first summand, by our implementation, we have:
sage: type(a)
<class '__main__.MyElement'>
sage: isinstance(a, P.element_class)
False
sage: type(P.sum([a,b,c]))
<class '__main__.MyFrac_with_category.element_class'>
The method factor provided by P.category().element_class (see above) simply works:
sage: a; a.factor(); P(6,4).factor()
(9):(4)
2^-2 * 3^2
2^-1 * 3
But that’s surprising: The element \(a\) is just an instance of MyElement, but not of P.element_class, and its class does not know about the factor method. In fact, this is due to a __getattr__
method defined for sage.structure.element.Element.
sage: hasattr(type(a), 'factor')
False
sage: hasattr(P.element_class, 'factor')
True
sage: hasattr(a, 'factor')
True
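A plain-Python sketch (hypothetical, simplified from Sage's actual implementation) of how such a __getattr__ can delegate to the parent's element_class: an element whose own class lacks a method still finds it there, bound to itself.

```python
class ElementClassWithFactor:
    # stands in for P.element_class, which carries the category method
    def factor(self):
        return "factorization of %s" % self.value

class Parent:
    element_class = ElementClassWithFactor

class BareElement:
    def __init__(self, parent, value):
        self._parent, self.value = parent, value

    def parent(self):
        return self._parent

    def __getattr__(self, name):
        # only called for attributes missing on BareElement itself:
        # fall back to the parent's element_class, binding the method to self
        attr = getattr(self._parent.element_class, name)
        return attr.__get__(self, type(self))

a = BareElement(Parent(), 6)
assert not hasattr(BareElement, "factor")   # the class itself has no 'factor'
assert a.factor() == "factorization of 6"   # ... but the instance finds it
```

The extra indirection on every lookup is also why, as measured below, attribute access via this fallback is slower than on an element that uses element_class directly.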
A first note on performance
The category framework is sometimes blamed for speed regressions, as in trac ticket #9138 and trac ticket #11900. But if the category framework is used properly, then it is fast. For illustration, we
determine the time needed to access an attribute inherited from the element class. First, we consider an element that uses the class that we implemented above, but does not use the category framework:
sage: type(a)
<class '__main__.MyElement'>
sage: timeit('a.factor',number=1000) # random
1000 loops, best of 3: 2 us per loop
Now, we consider an element that is equal to \(a\), but uses the category framework properly:
sage: a2 = P(9,4)
sage: a2 == a
True
sage: type(a2)
<class '__main__.MyFrac_with_category.element_class'>
sage: timeit('a2.factor',number=1000) # random
1000 loops, best of 3: 365 ns per loop
So, don’t be afraid of using categories!
Coercion—the basics
Theoretical background
Coercion is not just type conversion
“Coercion” in the C programming language means “automatic type conversion”. However, in Sage, coercion is involved if one wants to be able to do arithmetic, comparisons, etc. between elements of
distinct parents. Hence, coercion is not about a change of types, but about a change of parents.
As an illustration, we show that elements of the same type may very well belong to rather different parents:
sage: P1 = QQ['v,w']; P2 = ZZ['w,v']
sage: type(P1.gen()) == type(P2.gen())
True
sage: P1 == P2
False
\(P_2\) naturally is a sub–ring of \(P_1\). So, it makes sense to be able to add elements of the two rings—the result should then live in \(P_1\), and indeed it does:
sage: (P1.gen()+P2.gen()).parent() is P1
True
It would be rather inconvenient if one needed to manually convert an element of \(P_2\) into \(P_1\) before adding. The coercion system does that conversion automatically.
Not every conversion is a coercion
A coercion happens implicitly, without being explicitly requested by the user. Hence, coercion must be based on mathematical rigour. In our example, any element of \(P_2\) can be naturally
interpreted as an element of \(P_1\). We thus have:
sage: P1.has_coerce_map_from(P2)
True
sage: P1.coerce_map_from(P2)
Call morphism:
From: Multivariate Polynomial Ring in w, v over Integer Ring
To: Multivariate Polynomial Ring in v, w over Rational Field
While there is a conversion from \(P_1\) to \(P_2\) (namely restricted to polynomials with integral coefficients), this conversion is not a coercion:
sage: P2.convert_map_from(P1)
Call morphism:
From: Multivariate Polynomial Ring in v, w over Rational Field
To: Multivariate Polynomial Ring in w, v over Integer Ring
sage: P2.has_coerce_map_from(P1)
False
sage: P2.coerce_map_from(P1) is None
True
The four axioms requested for coercions
1. A coercion is a morphism in an appropriate category.
This first axiom has two implications:
1. A coercion is defined on all elements of a parent.
A polynomial of degree zero over the integers can be interpreted as an integer—but the attempt to convert a polynomial of non-zero degree would result in an error:
sage: ZZ(P2.one())
1
sage: ZZ(P2.gen(1))
Traceback (most recent call last):
TypeError: not a constant polynomial
Hence, we only have a partial map. This is fine for a conversion, but a partial map does not qualify as a coercion.
2. Coercions are structure preserving.
Any real number can be converted to an integer, namely by rounding. However, such a conversion is not useful in arithmetic operations, since the underlying algebraic structure is not preserved:

sage: int(1.6)+int(2.7) == int(1.6+2.7)
False
The structure that is to be preserved depends on the category of the involved parents. For example, the coercion from the integers into the rational field is a homomorphism of euclidean domains:
sage: QQ.coerce_map_from(ZZ).category_for()
Category of euclidean domains
2. There is at most one coercion from one parent to another
In addition, if there is a coercion from \(P_2\) to \(P_1\), then a conversion from \(P_2\) to \(P_1\) is defined for all elements of \(P_2\) and coincides with the coercion.
sage: P1.coerce_map_from(P2) is P1.convert_map_from(P2)
True
3. Coercions can be composed
If there is a coercion \(\varphi: P_1 \to P_2\) and another coercion \(\psi: P_2 \to P_3\), then the composition of \(\varphi\) followed by \(\psi\) must yield the unique coercion from \(P_1\) to \(P_3\).
4. The identity is a coercion
Together with the two preceding axioms, it follows: If there are coercions from \(P_1\) to \(P_2\) and from \(P_2\) to \(P_1\), then they are mutually inverse.
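As a toy illustration (plain Python, hypothetical names, not Sage's actual discovery algorithm), the way the coercion model handles a binary operation between elements of different parents can be sketched as follows: find the common parent that both operands coerce into, then operate there.

```python
class Parent:
    def __init__(self, name, coerces_from=()):
        self.name = name
        self._coerces_from = set(coerces_from)   # names of coercing parents

    def has_coerce_map_from(self, other):
        # a parent trivially coerces from itself (the identity axiom)
        return other is self or other.name in self._coerces_from

def common_parent(P, Q):
    # pick the parent into which the other one coerces; by uniqueness of
    # coercions there is no ambiguity about the resulting map
    if P.has_coerce_map_from(Q):
        return P
    if Q.has_coerce_map_from(P):
        return Q
    raise TypeError("no common parent for %s and %s" % (P.name, Q.name))

ZZx = Parent("ZZ[x]")
QQx = Parent("QQ[x]", coerces_from={"ZZ[x]"})
assert common_parent(ZZx, QQx) is QQx   # the sum of elements lives in QQ[x]
```

Sage additionally searches for a pushout when neither parent coerces into the other, which is the topic of the advanced section below.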
Implementing a conversion
We have seen above that some conversions into our fraction fields became available after providing the attribute Element. However, we can not convert elements of a fraction field into elements of
another fraction field, yet:
sage: P(2/3)
Traceback (most recent call last):
ValueError: Numerator and denominator must be elements of Integer Ring
For implementing a conversion, the default __call__ method should (almost) never be overridden. Instead, we implement the method _element_constructor_, that should return an instance of the parent’s
element class. Some old parent classes violate that rule—please help to refactor them!
sage: class MyFrac(MyFrac):
....:     def _element_constructor_(self, *args, **kwds):
....:         if len(args)!=1:
....:             return self.element_class(self, *args, **kwds)
....:         x = args[0]
....:         try:
....:             P = x.parent()
....:         except AttributeError:
....:             return self.element_class(self, x, **kwds)
....:         if P in QuotientFields() and P != self.base():
....:             return self.element_class(self, x.numerator(), x.denominator(), **kwds)
....:         return self.element_class(self, x, **kwds)
In addition to the conversion from the base ring and from pairs of base ring elements, we now also have a conversion from the rationals to our fraction field of \(\ZZ\):
sage: P = MyFrac(ZZ)
sage: P(2); P(2,3); P(3/4)
(2):(1)
(2):(3)
(3):(4)
Recall that above, the test \(1 \in P\) failed with an error. We try again and find that the error has disappeared. This is because we are now able to convert the integer \(1\) into \(P\). But the containment test still yields a wrong answer:

sage: 1 in P
False
The technical reason: We have a conversion \(P(1)\) of \(1\) into \(P\), but this is not known as a coercion—yet!
sage: P.has_coerce_map_from(ZZ), P.has_coerce_map_from(QQ)
(False, False)
Establishing a coercion
There are two main ways to make Sage use a particular conversion as a coercion:
• One can use sage.structure.parent.Parent.register_coercion(), normally during initialisation of the parent (see documentation of the method).
• A more flexible way is to provide a method _coerce_map_from_ for the parent.
Let \(P\) and \(R\) be parents. If P._coerce_map_from_(R) returns False or None, then there is no coercion from \(R\) to \(P\). If it returns a map with domain \(R\) and codomain \(P\), then this map
is used for coercion. If it returns True, then the conversion from \(R\) to \(P\) is used as coercion.
Note that in the following implementation, we need a special case for the rational field, since QQ.base() is not the ring of integers.
sage: class MyFrac(MyFrac):
....:     def _coerce_map_from_(self, S):
....:         if self.base().has_coerce_map_from(S):
....:             return True
....:         if S in QuotientFields():
....:             if self.base().has_coerce_map_from(S.base()):
....:                 return True
....:             if hasattr(S,'ring_of_integers') and self.base().has_coerce_map_from(S.ring_of_integers()):
....:                 return True
By the method above, a parent coercing into the base ring will also coerce into the fraction field, and a fraction field coerces into another fraction field if there is a coercion of the
corresponding base rings. Now, we have:
sage: P = MyFrac(QQ['x'])
sage: P.has_coerce_map_from(ZZ['x']), P.has_coerce_map_from(Frac(ZZ['x'])), P.has_coerce_map_from(QQ)
(True, True, True)
We can now use coercion from \(\ZZ[x]\) and from \(\QQ\) into \(P\) for arithmetic operations between the two rings:
sage: 3/4+P(2)+ZZ['x'].gen(), (P(2)+ZZ['x'].gen()).parent() is P
((4*x + 11):(4), True)
Equality and element containment
Recall that above, the test \(1 \in P\) gave a wrong answer. Let us repeat the test now:

sage: 1 in P
True
Why is that?
The default element containment test \(x \in P\) is based on the interplay of three building blocks: conversion, coercion, and equality test.
1. Clearly, if the conversion \(P(x)\) raises an error, then \(x\) can not be seen as an element of \(P\). On the other hand, a conversion \(P(x)\) can generally do very nasty things. So, the fact
that \(P(x)\) works without error is necessary, but not sufficient for \(x \in P\).
2. If \(P\) is the parent of \(x\), then the conversion \(P(x)\) will not change \(x\) (at least, that’s the default). Hence, we will have \(x=P(x)\).
3. Sage uses coercion not only for arithmetic operations, but also for comparison: If there is a coercion from the parent of \(x\) to \(P\), then the equality test x==P(x) reduces to P(x)==P(x).
Otherwise, x==P(x) will evaluate as false.
That leads to the following default implementation of element containment testing:
\(x \in P\) holds if and only if the test x==P(x) does not raise an error and evaluates as true.
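The default containment test described above can be sketched in plain Python (a toy, not Sage's actual implementation): x in P holds iff the conversion P(x) succeeds and the comparison x == P(x) evaluates as true.

```python
def default_contains(P, x):
    try:
        y = P(x)       # the conversion; may raise for non-convertible x
        return x == y  # in Sage, this comparison goes through coercion
    except Exception:
        return False

# toy parent: "the even integers", with conversion = identity on even inputs
class Evens:
    def __call__(self, x):
        if x % 2 != 0:
            raise ValueError("not even")
        return x

P = Evens()
assert default_contains(P, 4)        # conversion succeeds and 4 == P(4)
assert not default_contains(P, 3)    # conversion raises, so 3 is not in P
```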
If the user is not happy with that behaviour, the “magical” Python method __contains__ can be overridden.
Coercion—the advanced parts
So far, we are able to add integers and rational numbers to elements of our new implementation of the fraction field of \(\ZZ\).
sage: 1/2+P(2,3)+1
(13):(6)
Surprisingly, we can even add a polynomial over the integers to an element of \(P\), even though the result lives in a new parent, namely in a polynomial ring over \(P\):
sage: P(1/2) + ZZ['x'].gen(), (P(1/2) + ZZ['x'].gen()).parent() is P['x']
((1):(1)*x + (1):(2), True)
In the next, seemingly more easy example, there “obviously” is a coercion from the fraction field of \(\ZZ\) to the fraction field of \(\ZZ[x]\). However, Sage does not know enough about our new
implementation of fraction fields. Hence, it does not recognise the coercion:
sage: Frac(ZZ['x']).has_coerce_map_from(P)
False
Two obvious questions arise:
1. How / why has the new ring been constructed in the example above?
2. How can we establish a coercion from \(P\) to \(\mathrm{Frac}(\ZZ[x])\)?
The key to answering both question is the construction of parents from simpler pieces, that we are studying now. Note that we will answer the second question not by providing a coercion from \(P\) to
\(\mathrm{Frac}(\ZZ[x])\), but by teaching Sage to automatically construct \(\mathrm{MyFrac}(\ZZ[x])\) and coerce both \(P\) and \(\mathrm{Frac}(\ZZ[x])\) into it.
If we are lucky, a parent can tell how it has been constructed:
sage: Poly,R = QQ['x'].construction()
sage: Poly,R
(Poly[x], Rational Field)
sage: Fract,R = QQ.construction()
sage: Fract,R
(FractionField, Integer Ring)
In both cases, the first value returned by construction() is a mathematical construction, called construction functor—see ConstructionFunctor. The second return value is a simpler parent to which the
construction functor is applied.
Being functors, the same construction can be applied to different objects of a category:
sage: Poly(QQ) is QQ['x']
True
sage: Poly(ZZ) is ZZ['x']
True
sage: Poly(P) is P['x']
True
sage: Fract(QQ['x'])
Fraction Field of Univariate Polynomial Ring in x over Rational Field
Let us see on which categories these construction functors are defined:
sage: Poly.domain()
Category of rings
sage: Poly.codomain()
Category of rings
sage: Fract.domain()
Category of integral domains
sage: Fract.codomain()
Category of fields
In particular, the construction functors can be composed:
sage: Poly*Fract
Poly[x](FractionField(...))
sage: (Poly*Fract)(ZZ) is QQ[x]
True
In addition, it is assumed that we have a coercion from input to output of the construction functor:
sage: ((Poly*Fract)(ZZ))._coerce_map_from_(ZZ)
Composite map:
  From: Integer Ring
  To:   Univariate Polynomial Ring in x over Rational Field
  Defn:   Natural morphism:
          From: Integer Ring
          To:   Rational Field
        then
          Polynomial base injection morphism:
          From: Rational Field
          To:   Univariate Polynomial Ring in x over Rational Field
Construction functors do not necessarily commute:
sage: (Fract*Poly)(ZZ)
Fraction Field of Univariate Polynomial Ring in x over Integer Ring
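The non-commutativity is just function composition: \((F_2\circ F_1)(R)\) applies \(F_1\) first. A minimal plain-Python sketch (toy string-building stand-ins for the functors, not actual Sage code):

```python
# Toy stand-ins for construction functors: each maps a ring (here just a
# name) to the name of the constructed ring.
def Poly(ring):
    return "Poly[x]({})".format(ring)

def Fract(ring):
    return "Frac({})".format(ring)

def compose(f2, f1):
    """Model F2*F1: apply f1 first, then f2."""
    return lambda ring: f2(f1(ring))

# compose(Fract, Poly)("ZZ") builds the fraction field of polynomials over
# the integers, while compose(Poly, Fract)("ZZ") builds polynomials over the
# fraction field of the integers -- two different parents, mirroring the
# Sage example above.
```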
The pushout of construction functors
We can now formulate our problem. We have parents \(P_1\), \(P_2\) and \(R\), and construction functors \(F_1\), \(F_2\), such that \(P_1 = F_1(R)\) and \(P_2 = F_2(R)\). We want to find a new
construction functor \(F_3\), such that both \(P_1\) and \(P_2\) coerce into \(P_3 = F_3(R)\).
In analogy to a notion of category theory, \(P_3\) is called the pushout() of \(P_1\) and \(P_2\); and similarly \(F_3\) is called the pushout of \(F_1\) and \(F_2\).
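Schematically (a sketch only; the parent \(P_3\) constructed by Sage is a practical common overstructure and need not satisfy the universal property of a categorical pushout):

```latex
\[
\begin{array}{ccc}
R & \xrightarrow{\;F_1\;} & P_1 = F_1(R) \\
\downarrow{\scriptstyle F_2} & & \downarrow \\
P_2 = F_2(R) & \longrightarrow & P_3 = F_3(R)
\end{array}
\]
```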
sage: from sage.categories.pushout import pushout
sage: pushout(Fract(ZZ),Poly(ZZ))
Univariate Polynomial Ring in x over Rational Field
\(F_1\circ F_2\) and \(F_2\circ F_1\) are natural candidates for the pushout of \(F_1\) and \(F_2\). However, the order in which the functors are applied must be determined by a canonical choice. “Indecomposable” construction
functors have a rank, which allows Sage to order them canonically:
If F1.rank is smaller than F2.rank, then the pushout is \(F_2\circ F_1\) (hence, \(F_1\) is applied first).
We have
sage: Fract.rank, Poly.rank
(5, 9)
and thus the pushout is
sage: Fract.pushout(Poly), Poly.pushout(Fract)
(Poly[x](FractionField(...)), Poly[x](FractionField(...)))
This is why the example above has worked.
However, only “elementary” construction functors have a rank:
sage: (Fract*Poly).rank
Traceback (most recent call last):
AttributeError: 'CompositeConstructionFunctor' object has no attribute 'rank'
Shuffling composite construction functors
If composite construction functors \(...\circ F_2\circ F_1\) and \(...\circ G_2\circ G_1\) are given, then Sage determines their pushout by shuffling the constituents:
• If F1.rank < G1.rank then we apply \(F_1\) first, and continue with \(...\circ F_3\circ F_2\) and \(...\circ G_2\circ G_1\).
• If F1.rank > G1.rank then we apply \(G_1\) first, and continue with \(...\circ F_2\circ F_1\) and \(...\circ G_3\circ G_2\).
If F1.rank == G1.rank, then the tie needs to be broken by other techniques (see below).
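The shuffling procedure is essentially a merge of two rank-ordered chains. The following plain-Python sketch uses toy `(name, rank)` pairs instead of actual functors, with each chain listed innermost-first (the functor applied first comes first); ties are left unresolved here, since Sage breaks them with merge() or commutes():

```python
def shuffle_chains(fs, gs):
    """Merge two chains of (name, rank) pairs, innermost functor first:
    at each step the functor with the smaller rank is applied next."""
    out = []
    i = j = 0
    while i < len(fs) and j < len(gs):
        if fs[i][1] < gs[j][1]:
            out.append(fs[i])
            i += 1
        elif fs[i][1] > gs[j][1]:
            out.append(gs[j])
            j += 1
        else:
            # equal ranks: Sage resolves this via merge()/commutes()
            raise ValueError("tie between %s and %s" % (fs[i][0], gs[j][0]))
    out.extend(fs[i:])
    out.extend(gs[j:])
    return out

# With the ranks from the illustration below (AlgClos=3, Compl=4, Fract=5,
# Poly=9, Matr=10), the chains Fract*Poly*AlgClos*Fract and Poly*Matr*Compl
# shuffle into Poly*Matr*Fract*Poly*AlgClos*Fract*Compl.
```

Reading the merged chain with the last entry outermost gives exactly the composite functor that Sage applies in the illustration with AlgClos, Compl, Fract, Poly and Matr.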
As an illustration, we first obtain some functors and then see how chains of functors are shuffled.
sage: AlgClos, R = CC.construction(); AlgClos
sage: Compl, R = RR.construction(); Compl
sage: Matr, R = (MatrixSpace(ZZ,3)).construction(); Matr
sage: AlgClos.rank, Compl.rank, Fract.rank, Poly.rank, Matr.rank
(3, 4, 5, 9, 10)
When we apply Fract, AlgClos, Poly and Fract to the ring of integers, we obtain:
sage: (Fract*Poly*AlgClos*Fract)(ZZ)
Fraction Field of Univariate Polynomial Ring in x over Algebraic Field
When we apply Compl, Matr and Poly to the ring of integers, we obtain:
sage: (Poly*Matr*Compl)(ZZ)
Univariate Polynomial Ring in x over Full MatrixSpace of 3 by 3 dense matrices over Real Field with 53 bits of precision
Applying the shuffling procedure yields
sage: (Poly*Matr*Fract*Poly*AlgClos*Fract*Compl)(ZZ)
Univariate Polynomial Ring in x over Full MatrixSpace of 3 by 3 dense matrices over Fraction Field of Univariate Polynomial Ring in x over Complex Field with 53 bits of precision
and this is indeed equal to the pushout found by Sage:
sage: pushout((Fract*Poly*AlgClos*Fract)(ZZ), (Poly*Matr*Compl)(ZZ))
Univariate Polynomial Ring in x over Full MatrixSpace of 3 by 3 dense matrices over Fraction Field of Univariate Polynomial Ring in x over Complex Field with 53 bits of precision
Breaking the tie
If F1.rank==G1.rank then Sage’s pushout construction offers two ways to proceed:
1. Construction functors have a method merge() that either returns None or returns a construction functor—see below. If either F1.merge(G1) or G1.merge(F1) returns a construction functor \(H_1\),
then we apply \(H_1\) and continue with \(...\circ F_3\circ F_2\) and \(...\circ G_3\circ G_2\).
2. Construction functors have a method commutes(). If either F1.commutes(G1) or G1.commutes(F1) returns True, then we apply both \(F_1\) and \(G_1\) in any order, and continue with \(...\circ F_3\circ F_2\) and \(...\circ G_3\circ G_2\).
By default, F1.merge(G1) returns F1 if F1==G1, and returns None otherwise. The commutes() method exists, but it seems that so far nobody has implemented two functors of the same rank that commute.
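The tie-breaking step can be sketched in plain Python (toy Functor objects, not Sage's; the default merge() returns the functor itself on equality and None otherwise):

```python
class Functor:
    """Toy construction functor carrying only a name and a rank."""
    def __init__(self, name, rank):
        self.name, self.rank = name, rank
    def __eq__(self, other):
        return (self.name, self.rank) == (other.name, other.rank)
    def merge(self, other):
        # default behaviour: F1.merge(G1) is F1 if F1 == G1, else None
        return self if self == other else None
    def commutes(self, other):
        return False

def break_tie(f1, f2):
    """Resolve two functors of equal rank, mimicking Sage's pushout."""
    merged = f1.merge(f2) or f2.merge(f1)
    if merged is not None:
        return [merged]                # apply the merged functor once
    if f1.commutes(f2) or f2.commutes(f1):
        return [f1, f2]                # apply both, in either order
    raise TypeError("no way to break the tie")
```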
Establishing a default implementation
The typical application of merge() is to provide a coercion between different implementations of the same algebraic structure.
If F1(P) and F2(P) are different implementations of the same thing, then F1.merge(F2)(P) should return the default implementation.
We want to boldly turn our toy implementation of fraction fields into the new default implementation. Hence:
• We implement a new version of the “usual” fraction field functor, having the same rank, but returning our new implementation.
• We make our new implementation the default, by virtue of a merge method.
• Do not override the default __call__ method of ConstructionFunctor—implement _apply_functor instead.
• Declare domain and codomain of the functor during initialisation.
sage: from sage.categories.pushout import ConstructionFunctor
sage: class MyFracFunctor(ConstructionFunctor):
....: rank = 5
....: def __init__(self):
....: ConstructionFunctor.__init__(self, IntegralDomains(), Fields())
....: def _apply_functor(self, R):
....: return MyFrac(R)
....: def merge(self, other):
....: if isinstance(other, (type(self), sage.categories.pushout.FractionField)):
....: return self
sage: MyFracFunctor()
MyFracFunctor
We verify that our functor can really be used to construct our implementation of fraction fields, and that it can be merged with either itself or the usual fraction field constructor:
sage: MyFracFunctor()(ZZ)
NewFrac(Integer Ring)
sage: MyFracFunctor().merge(MyFracFunctor())
MyFracFunctor
sage: MyFracFunctor().merge(Fract)
MyFracFunctor
It remains to let our new fraction fields know about the new construction functor:
sage: class MyFrac(MyFrac):
....: def construction(self):
....: return MyFracFunctor(), self.base()
sage: MyFrac(ZZ['x']).construction()
(MyFracFunctor, Univariate Polynomial Ring in x over Integer Ring)
Due to merging, we have:
sage: pushout(MyFrac(ZZ['x']), Frac(QQ['x']))
NewFrac(Univariate Polynomial Ring in x over Rational Field)
A second note on performance
Being able to do arithmetic involving elements of different parents, with the automatic creation of a pushout to contain the result, is certainly convenient—but one should not rely on it if speed
matters. Simply converting elements into different parents takes time. Moreover, by trac ticket #14058, the pushout may be subject to Python’s cyclic garbage collection. Hence, if one does not
keep a strong reference to it, the same parent may be created repeatedly, which is a waste of time. In the following example, we illustrate the slow-down resulting from blindly relying on coercion:
sage: ZZxy = ZZ['x','y']
sage: a = ZZxy('x')
sage: b = 1/2
sage: timeit("c = a+b") # random
10000 loops, best of 3: 172 us per loop
sage: QQxy = QQ['x','y']
sage: timeit("c2 = QQxy(a)+QQxy(b)") # random
10000 loops, best of 3: 168 us per loop
sage: a2 = QQxy(a)
sage: b2 = QQxy(b)
sage: timeit("c2 = a2+b2") # random
100000 loops, best of 3: 10.5 us per loop
Hence, if one avoids the explicit or implicit conversion into the pushout, but works in the pushout right away, one can get a more than 10-fold speed-up.
The test suites of the category framework
The category framework not only provides functionality but also a test framework. This section logically belongs to the section on categories, but without the bits that we have implemented in the
section on coercion, our implementation of fraction fields would not have passed the tests yet.
“Abstract” methods
We have already seen above that a category can require/suggest certain parent or element methods, that the user must/should implement. This is in order to smoothly blend with the methods that already
exist in Sage.
The methods that ought to be provided are declared using abstract_method(). Let us see what methods are needed for quotient fields and their elements:
sage: from sage.misc.abstract_method import abstract_methods_of_class
sage: abstract_methods_of_class(QuotientFields().parent_class)['optional']
[]
sage: abstract_methods_of_class(QuotientFields().parent_class)['required']
['__contains__']
Hence, the only required method (that is actually required for all parents that belong to the category of sets) is an element containment test. That’s fine, because the base class Parent provides a
default containment test.
The elements have to provide more:
sage: abstract_methods_of_class(QuotientFields().element_class)['optional']
['_add_', '_mul_']
sage: abstract_methods_of_class(QuotientFields().element_class)['required']
['__nonzero__', 'denominator', 'numerator']
Hence, the elements must provide denominator() and numerator() methods, and must be able to tell whether they are zero or not. The base class Element provides a default __nonzero__() method. In
addition, the elements may provide Sage’s single underscore arithmetic methods (actually any ring element should provide them).
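Outside of Sage, the required interface is small enough to sketch in plain Python (a hypothetical ToyFracElement; __bool__ is the Python 3 spelling of __nonzero__):

```python
class ToyFracElement:
    """Minimal element providing the abstract methods required of
    quotient-field elements: numerator(), denominator(), truth test."""
    def __init__(self, num, den):
        if den == 0:
            raise ZeroDivisionError("denominator must be nonzero")
        self._n, self._d = num, den

    def numerator(self):
        return self._n

    def denominator(self):
        return self._d

    def __bool__(self):
        # an element is zero exactly when its numerator is zero
        return self._n != 0
```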
The _test_... methods
If a parent or element method’s name starts with “_test_”, it gives rise to a test in the automatic test suite. For example, it is tested
• whether a parent \(P\) actually is an instance of the parent class of the category of \(P\),
• whether the user has implemented the required abstract methods,
• whether some defining structural properties (e.g., commutativity) hold.
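The discovery mechanism behind such a suite can be sketched in plain Python (a toy runner, not Sage's TestSuite): collect every attribute whose name starts with "_test_" and call it.

```python
class ToyParent:
    """A toy object carrying two _test_ methods."""
    def __init__(self, value):
        self.value = value
    def _test_is_int(self):
        assert isinstance(self.value, int)
    def _test_nonnegative(self):
        assert self.value >= 0, "value must be nonnegative"

def run_test_suite(obj):
    """Run all _test_ methods of obj, recording 'pass' or 'fail'."""
    results = {}
    for name in sorted(dir(obj)):
        if name.startswith("_test_"):
            try:
                getattr(obj, name)()
                results[name] = "pass"
            except AssertionError:
                results[name] = "fail"
    return results
```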
For example, if one forgets to implement required methods, one obtains the following error:
sage: class Foo(Parent):
....: Element = sage.structure.element.Element
....: def __init__(self):
....: Parent.__init__(self, category=QuotientFields())
sage: Bar = Foo()
sage: bar = Bar.element_class(Bar)
sage: bar._test_not_implemented_methods()
Traceback (most recent call last):
AssertionError: Not implemented method: denominator
Here are the tests that form the test suite of quotient fields:
sage: [t for t in dir(QuotientFields().parent_class) if t.startswith('_test_')]
['_test_additive_associativity', '_test_an_element', '_test_associativity',
 '_test_category', '_test_characteristic', '_test_characteristic_fields',
 '_test_distributivity', '_test_elements', '_test_elements_eq_reflexive',
 '_test_elements_eq_symmetric', '_test_elements_eq_transitive',
 '_test_elements_neq', '_test_eq', '_test_not_implemented_methods',
 '_test_one', '_test_pickling', '_test_prod', '_test_some_elements',
 '_test_zero']
We have implemented all abstract methods (or inherit them from base classes), we use the category framework, and we have implemented coercions. So, we are confident that the test suite runs without
an error. In fact, it does!
The following trick with the __main__ module is only needed in doctests, not in an interactive session or when defining the classes externally.
sage: __main__.MyFrac = MyFrac
sage: __main__.MyElement = MyElement
sage: P = MyFrac(ZZ['x'])
sage: TestSuite(P).run()
Let us see what tests are actually performed:
sage: TestSuite(P).run(verbose=True)
running ._test_additive_associativity() . . . pass
running ._test_an_element() . . . pass
running ._test_associativity() . . . pass
running ._test_category() . . . pass
running ._test_characteristic() . . . pass
running ._test_characteristic_fields() . . . pass
running ._test_distributivity() . . . pass
running ._test_elements() . . .
Running the test suite of self.an_element()
running ._test_category() . . . pass
running ._test_eq() . . . pass
running ._test_nonzero_equal() . . . pass
running ._test_not_implemented_methods() . . . pass
running ._test_pickling() . . . pass
running ._test_elements_eq_reflexive() . . . pass
running ._test_elements_eq_symmetric() . . . pass
running ._test_elements_eq_transitive() . . . pass
running ._test_elements_neq() . . . pass
running ._test_eq() . . . pass
running ._test_not_implemented_methods() . . . pass
running ._test_one() . . . pass
running ._test_pickling() . . . pass
running ._test_prod() . . . pass
running ._test_some_elements() . . . pass
running ._test_zero() . . . pass
Implementing a new category with additional tests
As one can see, tests are also performed on elements. There are methods that return one element or a list of some elements, relying on “typical” elements that can be found in most algebraic structures:
sage: P.an_element(); P.some_elements()
(1):(1)
[(1):(1)]
Unfortunately, the list of elements that is returned by the default method is of length one, and that single element could also be a bit more interesting. The method an_element() relies on a method
_an_element_(), so we implement that. We also override the some_elements() method.
sage: class MyFrac(MyFrac):
....: def _an_element_(self):
....: a = self.base().an_element()
....: b = self.base_ring().an_element()
....: if (a+b)!=0:
....: return self(a)**2/(self(a+b)**3)
....: if b != 0:
....: return self(a)/self(b)**2
....: return self(a)**2*self(b)**3
....: def some_elements(self):
....: return [self.an_element(),self(self.base().an_element()),self(self.base_ring().an_element())]
sage: P = MyFrac(ZZ['x'])
sage: P.an_element(); P.some_elements()
(x^2):(x^3 + 3*x^2 + 3*x + 1)
[(x^2):(x^3 + 3*x^2 + 3*x + 1), (x):(1), (1):(1)]
Now, as we have more interesting elements, we may also add a test for the “factor” method. Recall that the method was inherited from the category, but it appears that it is not tested.
Normally, a test for a method defined by a category should be provided by the same category. Hence, since factor is defined in the category of quotient fields, a test should be added there. But we
won’t change source code here and will instead create a sub–category.
Apparently, if \(e\) is an element of a quotient field, the product of the factors returned by e.factor() should be equal to \(e\). For forming the product, we use the prod method, which, no surprise,
is inherited from another category:
sage: P.prod.__module__
'sage.categories.monoids'
When we want to create a sub–category, we need to provide a method super_categories(), that returns a list of all immediate super categories (here: category of quotient fields).
A sub–category \(S\) of a category \(C\) is not implemented as a sub–class of C.__class__! \(S\) becomes a sub–category of \(C\) only if S.super_categories() returns (a sub–category of) \(C\)!
The parent and element methods of a category are provided as methods of classes that are the attributes ParentMethods and ElementMethods of the category, as follows:
sage: from sage.categories.category import Category
sage: class QuotientFieldsWithTest(Category): # do *not* inherit from QuotientFields, but ...
....: def super_categories(self):
....: return [QuotientFields()] # ... declare QuotientFields as a super category!
....: class ParentMethods:
....: pass
....: class ElementMethods:
....: def _test_factorisation(self, **options):
....: P = self.parent()
....: assert self == P.prod([P(b)**e for b,e in self.factor()])
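The invariant that _test_factorisation checks -- multiplying the factors back together reproduces the element -- can be illustrated outside of Sage with Python's Fraction and a small trial-division factoriser (the helper names here are hypothetical):

```python
from fractions import Fraction

def prime_factors(n):
    """Trial-division factorization of a positive integer into a
    {prime: exponent} dictionary."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def factor_fraction(q):
    """Factor a positive rational into sorted (prime, exponent) pairs,
    denominator primes getting negative exponents."""
    exps = prime_factors(q.numerator)
    for p, e in prime_factors(q.denominator).items():
        exps[p] = exps.get(p, 0) - e
    return sorted(exps.items())

def check_factorisation(q):
    """The invariant of the category test: the product of the factors
    must give back the element itself."""
    prod = Fraction(1)
    for p, e in factor_fraction(q):
        prod *= Fraction(p) ** e
    assert prod == q
```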
We provide an instance of our quotient field implementation with that new category. Note that categories have a default _repr_ method, that guesses a good string representation from the name of the
class: QuotientFieldsWithTest becomes “quotient fields with test”.
The following trick with the __main__ module is only needed in doctests, not in an interactive session or when defining the classes externally.
sage: __main__.MyFrac = MyFrac
sage: __main__.MyElement = MyElement
sage: __main__.QuotientFieldsWithTest = QuotientFieldsWithTest
sage: P = MyFrac(ZZ['x'], category=QuotientFieldsWithTest())
sage: P.category()
Category of quotient fields with test
The new test is inherited from the category. Since an_element() is returning a complicated element, _test_factorisation is a serious test:
sage: P.an_element()._test_factorisation
<bound method MyFrac_with_category.element_class._test_factorisation of (x^2):(x^3 + 3*x^2 + 3*x + 1)>
sage: P.an_element().factor()
(x + 1)^-3 * x^2
Last, we observe that the new test has automatically become part of the test suite. We remark that the existing tests became more serious as well, since we made
sage.structure.parent.Parent.an_element() return something more interesting.
sage: TestSuite(P).run(verbose=True)
running ._test_additive_associativity() . . . pass
running ._test_an_element() . . . pass
running ._test_associativity() . . . pass
running ._test_category() . . . pass
running ._test_characteristic() . . . pass
running ._test_characteristic_fields() . . . pass
running ._test_distributivity() . . . pass
running ._test_elements() . . .
Running the test suite of self.an_element()
running ._test_category() . . . pass
running ._test_eq() . . . pass
running ._test_factorisation() . . . pass
running ._test_nonzero_equal() . . . pass
running ._test_not_implemented_methods() . . . pass
running ._test_pickling() . . . pass
running ._test_elements_eq_reflexive() . . . pass
running ._test_elements_eq_symmetric() . . . pass
running ._test_elements_eq_transitive() . . . pass
running ._test_elements_neq() . . . pass
running ._test_eq() . . . pass
running ._test_not_implemented_methods() . . . pass
running ._test_one() . . . pass
running ._test_pickling() . . . pass
running ._test_prod() . . . pass
running ._test_some_elements() . . . pass
running ._test_zero() . . . pass | {"url":"http://sagemath.org/doc/thematic_tutorials/coercion_and_categories.html","timestamp":"2014-04-18T08:06:35Z","content_type":null,"content_length":"182975","record_id":"<urn:uuid:5d0e79af-bc88-4c26-8fdd-bbf95c5a4f34>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the representation of McCarthy’s amb in the pi-calculus
- MATHEMATICAL STRUCTURES IN COMPUTER SCIENCE , 2008
Cited by 15 (9 self)
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic
operator amb that is locally bottom-avoiding. We use a small-step operational semantics in form of a single-step rewriting system that defines a (nondeterministic) normal order reduction. This
strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if plugged into any program context their termination behaviour
is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational
reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We evolve different proof tools for proving correctness of program transformations,
in particular, a context lemma for may- as well as mustconvergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with
so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence.We also prove a
standardisation theorem for fair normal order reduction. The structure of the ordering <= c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t.
- INTERNATIONAL CONFERENCE ON THEORETICAL COMPUTER SCIENCE , 2008
Cited by 10 (7 self)
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and
mustconvergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations
we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in
implementations of language extensions.
, 2008
Cited by 5 (3 self)
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that
use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and
extended lambda calculi, if the calculi satisfy certain natural conditions. As a guide-line, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if
substitution is restricted to variable-by-variable and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or
similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some
new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple
typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
, 2011
Cited by 3 (1 self)
Abstract. In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure
declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a
typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and
should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that
call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s
seq-operator, which for instance justifies the use of the do-notation. 1
, 2011
Cited by 3 (0 self)
Abstract. The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure
functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive
unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF.
This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if
Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO. 1
, 2012
Cited by 2 (0 self)
Abstract. We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models
Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add
concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence. 1
, 2009
Cited by 1 (1 self)
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed
variant, where our specific calculus has letrec, recursive types and is nondeterministic. An addition of a type label to every subexpression is all that is needed, together with some natural
constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped
contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be
accompanied with a type modification by generalizing or instantiating types.
, 2009
Abstract. Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and
handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program
semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers
in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from
high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs. 1
, 2006
Abstract. Reasoning about the correctness of program transformations requires a notion of program equivalence. We present an observational semantics for the concurrent lambda calculus with futures λ
(fut), which formalizes the operational semantics of the programming language Alice ML. We show that natural program optimizations, as well as partial evaluation with respect to deterministic rules,
are correct for λ(fut). This relies on a number of fundamental properties that we establish for our observational semantics. 1
For the issue of translations between programming languages with observational semantics, this paper clarifies the notions, the relevant questions, and the methods, constructs a general framework,
and provides several tools for proving various correctness properties of translations like adequacy and full abstractness, with a special emphasis on observational correctness. We will demonstrate
that a wide range of programming languages and programming calculi and their translations can make advantageous use of our framework for focusing the analysis of their correctness. Keywords:
Contextual equivalence, correctness, semantics, translations | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4004953","timestamp":"2014-04-17T05:01:01Z","content_type":null,"content_length":"38659","record_id":"<urn:uuid:44c5ccbd-2a36-4179-a0db-03265f36e2ea>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steinberg Representations of Finite Groups of Lie Type
Let G be a finite group of Lie type. Assume G is also of universal type. Is the Steinberg representation of G generic, i.e., does the Steinberg representation admit a Whittaker model?
A Whittaker model for a representation of G is defined in a similar fashion as in the case of GL(2, F) in Bump's "Automorphic Forms and Representations." I am interested in the genericity of the
Steinberg representation of a group of matrices over a finite field.
finite-groups rt.representation-theory
In the setting of p-adic groups, the answer is "yes". (Taking generic to have its usual meaning of "admits a Whittaker model".) Assuming that the meaning of generic is similar in the finite group
context, I would guess that answer is again yes. Could you remind me of the definition of generic in the finite group context? – Emerton Mar 11 '10 at 15:23
3 Answers
I think that for finite groups of Lie type, the analogue of "having a Whittaker model" is that the representation occurs in a Gelfand-Graev representation: these are the representations
obtained by inducing a "regular" character from the unipotent subgroup of a rational Borel. Such representations are multiplicity free and so constitute a "model" (in the sense I think
people say "Whittaker model"). Now when the center of $G$ is connected, all regular characters are conjugate under the action of the maximal torus of the Borel, so the Gelfand-Graev
representation is unique (otherwise there is a family of such representations). In their famous paper, Deligne and Lusztig decompose the Gelfand-Graev representation in this case and show
that there is exactly one constituent in each "geometric conjugacy class" of irreducible representations (which can be thought of as a semisimple conjugacy class in the dual group). The
Steinberg representation is then the representative in the conjugacy class of the identity element -- that is, the representative among the "unipotent representations".
To focus more on the actual question (!) the character of the Steinberg representation is explicitly known, and it is easy to check from this that its restriction to $U$ is the regular
representation, so it certainly occurs in the Gelfand-Graev representation.
1 I agree with this answer. A few points: I think that the fact that the Steinberg representation is generic ("generic" = "having a Whittaker model") is proven in Carter's "Finite Groups
of Lie type" (Chapter 8, maybe? That's what MathSciNet says - I don't have the book in front of me), where "generic" will be called "regular". This elaborates the Deligne-Lusztig
reference given above. – Marty Mar 12 '10 at 1:38
Two textbook references: (1) the detailed treatment of Gelfand-Graev characters in Chapter 8 of R.W. Carter, Finite Groups of Lie Type (Chapter 6 is about the Steinberg character); (2)
the concise treatment in Chapter 14 of Digne-Michel, Representations of Finite Groups of Lie Type, where 14.39 defines "regular" character as a constituent of Gelfand-Graev and gives
the Steinberg character as an explicit example. The general theory requires extra care if the ambient algebraic group has a disconnected center. (For me the term "generic" isn't at all
helpful in this setting.) – Jim Humphreys Mar 12 '10 at 11:57
I don't think so. Let $T$ be an $F$-stable torus. A character of $T^F$ is in general position if its stabiliser under $N_G(T)/T$ is trivial. I assume by generic you mean "obtained by
Deligne-Lusztig induction from a character in general position". (These are exactly the characters which appear in MacDonald's conjecture, and are therefore "generic".)
In this setting the Steinberg character is the opposite of generic. It appears, for example, when one induces the trivial character from a split torus (and I think it occurs in the Deligne-Lusztig induction from any $F$-stable torus, but am not sure). For example, in $SL_2$ the (Harish-Chandra = Deligne-Lusztig) induction of the trivial character yields $1 + St$ and Deligne-Lusztig induction of the trivial character from the non-split torus yields $1 - St$.
1 Hey Geordie, I think from the point of view of DL theory, "generic" is a terrible term, and "regular" is much better: the section of the map from the irreps of G to semisimple classes in
the dual group given by the "regular" representations that Deligne-Lusztig produce gives a lovely analogue of the section you get from the adjoint quotient to the regular conjugacy
classes. – Kevin McGerty Mar 12 '10 at 14:47
Kevin, yes, I agree! I was trying to guess what generic means (and even asked a few people here). By the way, do you know if the first bracketed statement in the second paragraph is
true? Namely if one always gets the Steinberg when doing DL of the trivial modules from a torus? – Geordie Williamson Mar 12 '10 at 19:08
Geordie, the character St does occur (once) as a constituent of the DL character coming from the trivial character of any maximal torus. A quick reference is 7.6.6 in Carter's book,
using the DL computation of inner products (as pointed out by Carter just after he defines "unipotent" characters in 12.1. I haven't traced this explicitly back to the DL paper or
Lusztig's further development, but it's clearly a consequence of the earliest work on DL characters. – Jim Humphreys Mar 12 '10 at 21:19
Yes, what does "generic" mean for a finite group? Geordie is correct that the Steinberg representation is far from being a typical Deligne-Lusztig character. In fact, its unique features
make it "special" for both ordinary and modular representation theory of finite groups of Lie type. I surveyed a lot of this in Bull. Amer. Math. Soc. 16 (1987), openly accessible at AMS
e-math. Even for p-adic groups, it seems the correct analogue of the Steinberg representation is the "special representation".
Yes, but these are "generic", with the standard interpretation of that adjective in the p-adic theory. – Emerton Mar 11 '10 at 16:22
There is one characteristic p situation in which representations like Steinberg (having maximum possible dimension) become generic in a geometric sense: consider all irreducible
representations of the Lie algebra of a semisimple algebraic group, where those coming from the group ("restricted") such as the trivial representation are the least generic and where
"most" representations have maximum dimension (p raised to the number of positive roots). But for finite or algebraic groups only the restricted ones are visible. – Jim Humphreys Mar 11
'10 at 17:19
Not the answer you're looking for? Browse other questions tagged finite-groups rt.representation-theory or ask your own question. | {"url":"https://mathoverflow.net/questions/17821/steinberg-representations-of-finite-groups-of-lie-type/17865","timestamp":"2014-04-17T13:17:43Z","content_type":null,"content_length":"70894","record_id":"<urn:uuid:ce5c7138-13e8-4b5b-8887-fb7d2393f4ea>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please help me solve? I'm looking at this problem and not seeing how to work it so that my answer comes out looking like the choices...
• one year ago
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/510f2ad0e4b0d9aa3c47bf86","timestamp":"2014-04-16T17:10:05Z","content_type":null,"content_length":"323144","record_id":"<urn:uuid:9cd95b4d-e908-470f-b40e-24a37bd2a442>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: kites
Replies: 5 Last Post: Nov 14, 1997 2:46 PM
Re: kites
Posted: Nov 13, 1997 3:01 PM
On Thu, 13 Nov 1997, Eileen M. Klimick Schoaff wrote:
> Just curious. Is there an official name for a non-convex quadrilateral with
> two pair of adjacent sides congruent, i.e., a kite that is non-convex?
I don't think there is an "official" name - names that appear in SA
textbooks are "arrowheads" or "darts" which I rather like.
Michael de Villiers
> students have named them the StarTrek insignia, the Pontiac symbol, and the
> concave kite. We do an exercise found in the old Geometric Supposer manual
> where you reflect the point of intersection of the diagonals across the four
> sides of a quadrilateral and then determine the relationship between the
> resulting quadrilateral and the original. Squares produce squares,
> parallelograms produce parallelograms, rectangles produce rhombi and vice
> versa, kites produce isosceles trapezoids that are not parallelograms (unless
> the kite is a square) and vice versa most of the time (sometimes the kite is
> not convex). The key is the relationship of the diagonals in the original
> figure. The vertex angles of the new quadrilateral are equal to the angles
> formed by the diagonals of the original quadrilateral. And the vertex angles
> of the original quadrilateral are equal to the angles formed by the diagonals
> of the new quadrilateral. (I have not seen this proof in any textbook, but one
> of my undergraduate students proved it a few years ago.)
> So what do you call that thing? Essentially, one of the vertices of a convex
> kite is reflected across the shorter diagonal.
> Eileen Schoaff
> Buffalo State College | {"url":"http://mathforum.org/kb/thread.jspa?threadID=350790&messageID=1074700","timestamp":"2014-04-18T15:40:20Z","content_type":null,"content_length":"23395","record_id":"<urn:uuid:8931fa31-a392-4119-8fae-3235486c2790>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jessica H.'s Resources
After several months of carrying some pretty heavy textbooks around with me, I recently decided to switch to a Kindle Fire and start using electronic textbooks. Although there are times when a good
old-fashioned book really cannot be replaced, I'm very pleased with the weight of my tutoring bag now, and my students seem to be enjoying the switch as well. I'm able to download... read more
F(X) = 1/X-5 + 1 (answer)
This looks like an equation that should be graphed to me: f(x) = 1 + 1/(x-5). Unfortunately, this forum is not exactly set up to show the graphical result of a function. What are you supposed to be doing with this...
Is the first hour of tutoring free? I want to find a fit for myson? (answer)
Hi Monica, Sometimes it's hard for a tutor to come up with a specific plan for a specific student before you meet. I'm sure YOU know your son very well, but a tutor won't know anything about your son
except for what you may have told a prospective tutor. As a general rule, the...
Tricky Word Problem (answer)
When a number, half of a number, and a third of a number are added together, the sum is 385. Find the three numbers. Let's make the mystery number represented as, "n," for 'number.' We start with n.
We know we're going...
in the problem what percent of 20 is 30 how do you get the 1.5 (answer)
Whenever you see the word "IS," in a math problem, think EQUAL. what percent of 20 is 30 what percent of 20 = 30 Since 30 is larger than 20, your percentage will be larger than 100%. (100% of 20 =
20) You can turn this...
If 32000 dollars is invested at an interest rate of 9 percent per year, find the value of the investment at the end of 5 years for the following compounding met (answer)
I assume you're having trouble with your interest rate formula, which is Interest = principal x rate x time. Here's a different (longer) way to work the problem. Start by finding out how much 9% of
$32,000 (per every 12 months) works out to be. (multiply...
Find their difference? (answer)
Sun, what does your teacher expect? I would assume your teacher would have covered the definition of "trustworthy figures" to you at some point. Without knowing if this is a specific math problem
(and no more), or if there's another part to this question (as with a statistical analysis...
How to find slope and y-intercept and how to graph it (answer)
Y-Intercept is simply wherever your line crosses the Y axis (where your line 'intercepts' the y-axis). Slope can be found with a simple formula - if you have two sets of points on a graph, all you
need to do is plug in values: slope = (y2 - y1)/(x2 - x1). If...
5x-7(-4x for x=2 (answer)
If the problem is supposed to read: 5x-7 = -4x where x=2, then all you need to do is plug in x=2. 5(2)-7 = -4(2) You'll follow Order of Operations to solve- PEMDAS (Parenthesis, Exponents, Multiply,
Divide, Add, and Subtract). You don't have any distributing... | {"url":"http://www.wyzant.com/resources/users/84101090","timestamp":"2014-04-17T10:49:36Z","content_type":null,"content_length":"55732","record_id":"<urn:uuid:90a6399d-156e-44ac-ae2b-ffa57923fb74>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westwood, WA Math Tutor
Find a Westwood, WA Math Tutor
...I teach with patience and thoughtful communication. I studied computer science in college, with additional coursework in biology, physics, chemistry, anatomy and physiology, and linguistics.
After graduation I worked as a technical writer and computer programmer for eight years.
18 Subjects: including algebra 1, algebra 2, biology, chemistry
...I started teaching prealgebra in 1984, my very first teaching job. I had a split 7/8th grade class. Since that time, I've tutored and taught prealgebra for three years.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
...I approach the material in a patient, non-threatening manner, and try to accommodate the student's needs and time-table to the best of my ability. It is my goal to make math an interesting, if
not enjoyable subject. I am detail oriented, and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience.
12 Subjects: including geometry, ASVAB, algebra 1, algebra 2
...Specifically, I helped diagnose a student in my classroom with Aspergers by reporting symptoms. Students with Aspergers need to have instructions and settings modified to their specific
strengths and preferences. Students with Aspergers need extra help in creating relationships and noticing emotional details.
22 Subjects: including prealgebra, reading, English, dyslexia
...Therefore, I have both teaching and tutoring experience, as well as hands-on expertise in the science field. My two biggest passions are science and education. I take pride in being a
full-blown nerd: I read biology books for fun, listen to science podcasts in my free time, and laugh at corny s...
9 Subjects: including algebra 1, algebra 2, biology, chemistry
Related Westwood, WA Tutors
Westwood, WA Accounting Tutors
Westwood, WA ACT Tutors
Westwood, WA Algebra Tutors
Westwood, WA Algebra 2 Tutors
Westwood, WA Calculus Tutors
Westwood, WA Geometry Tutors
Westwood, WA Math Tutors
Westwood, WA Prealgebra Tutors
Westwood, WA Precalculus Tutors
Westwood, WA SAT Tutors
Westwood, WA SAT Math Tutors
Westwood, WA Science Tutors
Westwood, WA Statistics Tutors
Westwood, WA Trigonometry Tutors
Nearby Cities With Math Tutor
Brownsville, WA Math Tutors
Enetai, WA Math Tutors
Gilberton, WA Math Tutors
Harper, WA Math Tutors
Lemolo, WA Math Tutors
Marine Drive, WA Math Tutors
Orchard Heights, WA Math Tutors
Pearson, WA Math Tutors
Rocky Point, WA Math Tutors
South Park Village, WA Math Tutors
Virginia, WA Math Tutors
Waterman, WA Math Tutors
Wautauga Beach, WA Math Tutors
West Hills, WA Math Tutors
West Park, WA Math Tutors | {"url":"http://www.purplemath.com/westwood_wa_math_tutors.php","timestamp":"2014-04-17T11:06:02Z","content_type":null,"content_length":"23858","record_id":"<urn:uuid:5bfee729-d573-4266-ae4e-fd1dc54acbcf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bethesda, MD SAT Math Tutor
Find a Bethesda, MD SAT Math Tutor
...An experienced and professional tutor, I work closely with students to target lofty, yet achievable goals. I seek to challenge my students so that they work to achieve their highest potential.
I do all that I can to meet my students' time needs, from meeting in a variety of convenient locations to setting up appointments by text message.
13 Subjects: including SAT math, reading, writing, GRE
...Fellow classmates always called upon me to proof and edit their work on papers, essays, or articles. I have a love and passion for the English language, and love making other work better.
Personal ACT Scores: 33 Composite, 32 Reading My BA in Political Science and Japanese from Tufts Universit...
33 Subjects: including SAT math, reading, English, writing
...My tutoring style involves coaching the student into reaching the answers on their own. For example, If they struggle to find the correct equation, I teach them how to locate resources and how
to look at the variables present to select the best way to solve a problem. After working through a pr...
39 Subjects: including SAT math, Spanish, chemistry, writing
...I also keep track of the student's practice scores in order to monitor their progress, as well as motivate the student by showing them improvement. As a graduate of the International
Baccalaureate program in high school, along with an intense love for literature that I followed into college and ...
27 Subjects: including SAT math, reading, writing, English
I taught at the math department at Virginia Commonwealth University for three semesters, and I had such a good time! I am an energetic and happy teacher and I really enjoy finding several
different ways to explain something new until it clicks for the student; I am very patient in this sense. Also...
13 Subjects: including SAT math, calculus, GRE, writing
Related Bethesda, MD Tutors
Bethesda, MD Accounting Tutors
Bethesda, MD ACT Tutors
Bethesda, MD Algebra Tutors
Bethesda, MD Algebra 2 Tutors
Bethesda, MD Calculus Tutors
Bethesda, MD Geometry Tutors
Bethesda, MD Math Tutors
Bethesda, MD Prealgebra Tutors
Bethesda, MD Precalculus Tutors
Bethesda, MD SAT Tutors
Bethesda, MD SAT Math Tutors
Bethesda, MD Science Tutors
Bethesda, MD Statistics Tutors
Bethesda, MD Trigonometry Tutors
Nearby Cities With SAT math Tutor
Arlington, VA SAT math Tutors
Chevy Chase SAT math Tutors
Chevy Chase Village, MD SAT math Tutors
Chevy Chs Vlg, MD SAT math Tutors
Falls Church SAT math Tutors
Gaithersburg SAT math Tutors
Hyattsville SAT math Tutors
Martins Add, MD SAT math Tutors
Martins Additions, MD SAT math Tutors
Mc Lean, VA SAT math Tutors
Rockville, MD SAT math Tutors
Silver Spring, MD SAT math Tutors
Somerset, MD SAT math Tutors
Takoma Park SAT math Tutors
Washington, DC SAT math Tutors | {"url":"http://www.purplemath.com/Bethesda_MD_SAT_math_tutors.php","timestamp":"2014-04-19T23:22:51Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:710436b1-a580-44ff-a08a-da44b95c4928>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the Volume of a Cylinder
August 31st 2008, 09:20 AM #1
Junior Member
Aug 2008
Finding the Volume of a Cylinder
Hello Everyone!
I was wondering if I calculated this problem right.
Diameter of Cylinder
Height of Cylinder
Here's the formula I used: V = π x radius squared x height
Here's how I started the problem
V= 3.14 x .79(squared) x 2.140 = 4.19370236cm3
V ≈ 4.194cm3 ?
Thank you guys in advance
Last edited by Rainy2Day; August 31st 2008 at 09:34 AM.
$V = \pi r^2 h$
radius is squared, not diameter.
Oh! I must have copied the formula down wrong.
Thank you for pointing that out
Aug 2008 | {"url":"http://mathhelpforum.com/math-topics/47238-finding-volume-cylinder.html","timestamp":"2014-04-18T17:36:31Z","content_type":null,"content_length":"35813","record_id":"<urn:uuid:bc292f75-9420-4d48-83bc-240408171158>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding divisors
10-01-2008 #1
Registered User
Join Date
Sep 2008
Ok the divisors of 10 are 1, 2, 5, 10. Ok I'm making a program, but I can't think of a way to find the divisors of a number. Example: if someone types 6, how do I find its divisors? I just need help on the math portion.
Note: I'm not asking people to do this code for me I just need some help on finding the divisor of a number.
Do you know how to build a factor tree?
You can do this recursively. If it is even, divide in half, and factor again. If the number is odd, check for prime, otherwise factor the composite.
1 and the integer are given factors of any integer.
You can do a very basic type of procedure for brute forcing it.
#include <vector>

int nfactors(int number, std::vector<int> &factors)
{
    int n = 0, limit = number;
    for (int i = 1; i < limit; ++i)
    {
        if (number % i != 0) continue;
        limit = number / i;                                  // divisors come in pairs (i, number/i)
        factors.push_back(i); ++n;
        if (limit != i) { factors.push_back(limit); ++n; }   // don't add a square root twice
    }
    // One could sort the results at this point, but bleh... you didn't specify that.
    return n;
}
You can do a very basic type of procedure for brute forcing it.
n seems rather redundant since there is factors.size().
True.... I have no explanation for that other than I have been getting less and less sleep each passing day. Though, that is hardly a valid excuse for a programmer.
Are you trying to reduce fractions, xbusterx? You could always use the Euclidean Algorithm. That would be a much more apt method for finding a greatest common factor.
I know that but even numbers don't just divide by 2, they divide by 3, 5, etc. as well. I know I have to use num % n1 == 0. But ya I'm lost on the math part, plz help.
Sep 2008 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/107650-finding-divisors.html","timestamp":"2014-04-20T00:10:34Z","content_type":null,"content_length":"99852","record_id":"<urn:uuid:dcbdfc32-109d-41d6-9114-2b7eea527d1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Consistency strength order
hendrik@topoi.pooq.com hendrik at topoi.pooq.com
Sat May 3 15:03:50 EDT 2008
On Fri, May 02, 2008 at 06:19:19PM -0700, Robert M. Solovay wrote:
> I can't answer your question as to what the right consistency strength
> notion is, in general. But the following (old unpublished) example of
> mine shows that the two notions can diverge. (The theory in question is
> rather artificial.)
> Assume that "ZFC + "there is an inaccessible cardinal" is
> consistent. (Call this theory ZFCI for short.)
> Then there is a theory T, obtained by adjoining a single sentence
> phi to ZFC such that:
> 1) T has the same arithmetical consequences as ZFC.
> 2) It is finitistically (that is, in "primitive recursive
> arithmetic") provable that "Con(T) iff Con(ZFCI)".
> The proof has much in common with the proofs of my old results
> on the provability logic of Peano Arithmetic.
Either this is trivial, or I completely misunderstand it.
Can phi not just be "there is an inaccessible cardinal"? In that case,
T would be ZFCI.
In either case, I would appreciate an explanation of what you meant.
-- hendrik boom
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-May/012867.html","timestamp":"2014-04-16T13:37:22Z","content_type":null,"content_length":"3632","record_id":"<urn:uuid:aa6ac19a-b88e-49ac-ba52-8f46b15f2cdd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why can't decimal numbers be represented exactly in binary?
There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the ==
operator to compare it to another floating-point number. I understand the principles behind floating-point representation.
What I don't understand is why, from a mathematical perspective, are the numbers to the right of the decimal point any more "special" than the ones to the left?
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place
and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and
they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.
What's going on?
Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional
values are:
... 1000 100 10 1 1/10 1/100 ...
In binary, they would be:
... 8 4 2 1 1/2 1/4 1/8 ...
There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
math floating-point
38 In binary, the number 3 is represented as 2¹+2⁰=2+1. Nice and easy. Now, take a look at 1/3. How would you represent that, using negative powers of 2? Experiment a little and you'll see that 1/3 equals the sum of the infinite sequence 2^-2 + 2^-4 + 2^-6 + 2^-8 + ..., i.e. not that easy to represent exactly in binary. – Lars Haugseth Jul 6 '09 at 21:33
11 Jon Skeet answers the question in your body very well. One thing that is missing is that you actually ask two different questions. The title question is "why can't decimal numbers be represented exactly in binary?" The answer is, they can be. Between your title and body you conflate the idea of "binary" and the idea of a "floating point representation." Floating point is a way of
expressing decimal numbers in a fixed number of binary digits at the cost of precision. Binary is just a different base for counting and can express any number decimal can, given an infinite
number of digits. – Chris Blackwell Jul 6 '09 at 22:22
3 There's several systems that have exact decimal representation. It works pretty much like you describe. The SQL decimal type is one example. LISP languages have it built in. There are several commercial and opensource libraries for using exact decimal calculations. It's just that there's no hardware support for this, and most languages and hardware out there implement the IEEE
standards for representing an infinite amount of numbers in 32 or 64 bits. – nos Jul 10 '09 at 20:47
22 Answers
Decimal numbers can be represented exactly, if you have enough space - just not by floating binary point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET)
then plenty of values which can't be represented exactly in binary floating point can be exactly represented.
Let's look at it another way - in base 10 which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). The reason you can't represent 0.1 as
a binary floating point number is for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.
The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to multiply a number by 3: you can always multiply by an integer without running
into problems. But when you divide by a number which is prime and isn't a factor of your base, you can run into trouble (and will do so if you try to divide 1 by that number).
Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
Side note to deal with the problem of finite representations:
Some floating decimal point types have a fixed size like System.Decimal others like java.math.BigDecimal are "arbitrarily large" - but they'll hit a limit at some point, whether it's
system memory or the theoretical maximum size of an array. This is an entirely separate point to the main one of this answer, however. Even if you had a genuinely arbitrarily large
number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number
of decimal digits, you can exactly represent any number which is exactly representable as a floating binary point.
6 That's a damn fine example sir! – Tom Ritter Jul 6 '09 at 20:23
81 Humanity is screwed by having more than 2 fingers in total. – Jon Skeet Jul 6 '09 at 20:26
19 Yeah, there are 10 kinds of people in the world - those who understand binary and those who don't. – duffymo Jul 6 '09 at 20:26
54 @JonSkeet: Ctrl+Alt+Delete would look awkward with just two fingers. – Lars Haugseth Jul 6 '09 at 21:39
10 @muusbolla: No. The numbers represented by the decimal representation 1 and the decimal representation 0.9... (infinitely repeating 9s after the decimal point) are equal. Perhaps the easiest way to see this is the following: Let x = 0.9.... Note that 10x = 9.9..... Therefore 9x = 10x - x = 9.9... - 0.9... = 9 so that 9x = 9 and x = 1. There are other ways to see this, but I believe that this is the simplest. – Jason Nov 4 '09 at 16:36
The reason for the imprecision is the nature of number bases. In base 10, you can't exactly represent 1/3. It becomes 0.333... However, in base 3, 1/3 is exactly represented by 0.1 and 1/2
is an infinitely repeating decimal (tresimal?). The values that can be finitely represented depend on the number of unique prime factors of the base, so base 30 [2 * 3 * 5] can represent
more fractions than base 2 or base 10. Even more for base 210 [2 * 3 * 5 * 7].
This is a separate issue from the "floating point error". The inaccuracy there is because a few billion values are spread across a much greater range. So if you have 23 bits for the significand, you can only represent about 8.3 million distinct values. Then an 8-bit exponent provides 256 options for distributing those values. This scheme allows the most precise
decimals to occur near 0, so you can almost represent 0.1.
1 It's not at all unrelated to floating point representation. The base you choose for your floating point defines which numbers can be exactly represented. Yes, fixed-size floating point
numbers will always have limits to their accuracy - but it's the base that stops many decimal numbers being exactly representable in binary floating point, not the size. Take as many
bits as you want - you still won't be able to represent decimal 0.1 in binary exactly. – Jon Skeet Jul 6 '09 at 20:33
1 Correct, I updated it to hopefully clarify that the second paragraph is referring to errors caused by a separate issue. – James M. Jul 6 '09 at 20:42
Right - yup, that's a lot better :) – Jon Skeet Jul 6 '09 at 20:48
The root (mathematical) reason is that when you are dealing with integers, they are countably infinite.
Which means, even though there are an infinite amount of them, we could "count out" all of the items in the sequence, without skipping any. That means if we want to get the item in the
610000000000000th position in the list, we can figure it out via a formula.
However, real numbers are uncountably infinite. You can't say "give me the real number at position 610000000000000" and get back an answer. The reason is because, even between 0 and 1,
there are an infinite number of values, when you are considering floating-point values. The same holds true for any two floating point numbers.
More info:
Update: My apologies, I appear to have misinterpreted the question. My response is about why we cannot represent every real value, I hadn't realized that floating point was automatically
classified as rational.
5 Actually, rational numbers are countably infinite. But not every real number is a rational number. I can certainly produce a sequence of exact decimal numbers which will reach any
exact decimal number you want to give me eventually. It's if you need to deal with irrational numbers as well that you get into uncountably infinite sets. – Jon Skeet Jul 6 '09 at
True, I should be saying "real", not "floating-point". Will clarify. – TM. Jul 6 '09 at 20:27
1 At which point the logic becomes less applicable, IMO - because not only can we not deal with all real numbers using binary floating point, but we can't even deal with all rational
numbers (such as 0.1). In other words, I don't think it's really to do with countability at all :) – Jon Skeet Jul 6 '09 at 20:31
1 Floating point numbers are, by definition, rational. – molf Jul 6 '09 at 20:40
3 @TM: But the OP isn't trying to represent all the real numbers. He's trying to represent all exact decimal numbers, which is a subset of the rational numbers, and therefore only
countably infinite. If he were using an infinite set of bits as a decimal floating point type then he'd be fine. It's using those bits as a binary floating point type that causes
problems with decimal numbers. – Jon Skeet Jul 6 '09 at 20:41
You might find this helpful to understand exactly what's going on inside a floating point number: Anatomy of a floating point number.
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the
decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
Let's step away for a moment from the particulars of bases 10 and 2. Let's ask - in base b, what numbers have terminating representations, and what numbers don't? A moment's thought tells
us that a number x has a terminating b-representation if and only if there exists an integer n such that x b^n is an integer.
So, for example, x = 11/500 has a terminating 10-representation, because we can pick n = 3 and then x b^n = 22, an integer. However x = 1/3 does not, because whatever n we pick we will
not be able to get rid of the 3.
This second example prompts us to think about factors, and we can see that for any rational x = p/q (assumed to be in lowest terms), we can answer the question by comparing the prime
factorisations of b and q. If q has any prime factors not in the prime factorisation of b, we will never be able to find a suitable n to get rid of these factors.
Thus for base 10, any p/q where q has prime factors other than 2 or 5 will not have a terminating representation.
So now going back to bases 10 and 2, we see that any rational with a terminating 10-representation will be of the form p/q exactly when q has only 2s and 5s in its prime factorisation;
and that same number will have a terminating 2-representation exactly when q has only 2s in its prime factorisation.
But one of these cases is a subset of the other! Whenever
q has only 2s in its prime factorisation
it obviously is also true that
q has only 2s and 5s in its prime factorisation
or, put another way, whenever p/q has a terminating 2-representation, p/q has a terminating 10-representation. The converse however does not hold - whenever q has a 5 in its prime
factorisation, it will have a terminating 10-representation, but not a terminating 2-representation. This is the 0.1 example mentioned by other answers.
So there we have the answer to your question - because the prime factors of 2 are a subset of the prime factors of 10, all 2-terminating numbers are 10-terminating numbers, but not vice
versa. It's not about 61 versus 6.1 - it's about 10 versus 2.
As a closing note, if by some quirk people used (say) base 17 but our computers used base 5, your intuition would never have been led astray by this - there would be no (non-zero,
non-integer) numbers which terminated in both cases!
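The factor-stripping argument above is easy to mechanize. A minimal sketch (the helper name `terminates` is mine, not from the answer):

```python
from math import gcd

def terminates(p, q, base):
    """True if p/q has a terminating representation in the given base."""
    q //= gcd(p, q)            # reduce to lowest terms
    g = gcd(q, base)
    while g > 1:               # strip prime factors shared with the base
        q //= g
        g = gcd(q, base)
    return q == 1              # any prime left over forces a repeating expansion

print(terminates(61, 10, 10))  # True: 6.1 terminates in decimal
print(terminates(61, 10, 2))   # False: 6.1 repeats in binary
print(terminates(1, 8, 2))     # True: 0.001 in binary
```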
To repeat what I said in my comment to Mr. Skeet: we can represent 1/3, 1/9, 1/27, or any rational in decimal notation. We do it by adding an extra symbol. For example, a line over the
digits that repeat in the decimal expansion of the number. What we need to represent decimal numbers as a sequence of binary numbers are 1) a sequence of binary numbers, 2) a radix point,
and 3) some other symbol to indicate the repeating part of the sequence.
Hehner's quote notation is a way of doing this. He uses a quote symbol to represent the repeating part of the sequence. The article: http://www.cs.toronto.edu/~hehner/ratno.pdf and the Wikipedia entry: http://en.wikipedia.org/wiki/Quote_notation.
There's nothing that says we can't add a symbol to our representation system, so we can represent decimal rationals exactly using binary quote notation, and vice versa.
That notation system works if we know where the cycle starts and ends. Humans are pretty good at detecting cycles. But, in general, computers aren't. To use be able to use a repetition
symbol effectively, the computer would have to be able to figure out where the cycles are after doing a calculation. For the number 1/3, for example, the cycle starts right away. But for
the number 1/97, the cycle doesn't show itself until you've worked out the answer to at least 96 digits. (Actually, you'd need 96*2+1 = 193 digits to be sure.) – Barry Brown Sep 25 '09
at 17:04
4 Actually it's not hard at all for the computer to detect the cycle. If you read Hehner's paper he describes how to detect the cycles for the various arithmetic operations. For example,
in the division algorithm, which uses repeated subtraction, you know where the cycle begins when you see a difference that you have seen before. – ntownsend Sep 28 '09 at 14:08
2 Also, the question was about representing numbers exactly. Sometimes exact representation means a lot of bits. The beauty of quote notation is that Hehner demonstrates that on average
there is a 31% saving in the size of representation compared to the standard 32-bit fixed-length representation. – ntownsend Sep 28 '09 at 14:14
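The remainder-based cycle detection ntownsend describes can be sketched as follows: in long division, the digits start repeating as soon as a remainder repeats. (The function name and the (prefix, cycle) return convention are mine.)

```python
def binary_expansion(p, q):
    """Binary digits of p/q (0 < p < q) after the point, as (prefix, cycle)."""
    digits, seen, r = [], {}, p
    while r != 0 and r not in seen:
        seen[r] = len(digits)   # remember where this remainder first occurred
        r *= 2
        digits.append(r // q)
        r %= q
    if r == 0:                  # terminating expansion, no cycle
        return digits, []
    start = seen[r]             # the cycle runs from the repeated remainder on
    return digits[:start], digits[start:]

print(binary_expansion(1, 3))   # ([], [0, 1]): 0.(01) in binary
print(binary_expansion(1, 10))  # ([0], [0, 0, 1, 1]): 0.0(0011) in binary
```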
If you make a big enough number with floating point (as it can do exponents), then you'll end up with inexactness in front of the decimal point, too. So I don't think your question is
entirely valid because the premise is wrong; it's not the case that shifting by 10 will always create more precision, because at some point the floating point number will have to use
exponents to represent the largeness of the number and will lose some precision that way as well.
BCD - Binary-coded Decimal - representations are exact. They are not very space-efficient, but that's a trade-off you have to make for accuracy in this case.
1 BCD are no more or less exact than any other base. Example: how do you represent 1/3 exactly in BCD? You can't. – Jörg W Mittag Jul 7 '09 at 9:23
7 BCD is an exact representation of a DECIMAL, thus the, um, "decimal" part of its name. There is no exact decimal representation of 1/3 either. – Alan Jul 11 '09 at 15:07
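For comparison, Python's decimal module plays much the same role as BCD: base-10 digits stored exactly, at a cost in space and speed, and still unable to represent 1/3 exactly. For instance:

```python
from decimal import Decimal

print(Decimal('0.1') + Decimal('0.2'))  # 0.3, exactly
print(0.1 + 0.2)                        # 0.30000000000000004 in binary floating point
print(Decimal(1) / Decimal(3))          # still rounded after 28 digits of 3s
```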
It's the same reason you cannot represent 1/3 exactly in base 10; you need to say 0.33333(3). In binary it is the same type of problem, but it just occurs for a different set of numbers.
(Note: I'll append 'b' to indicate binary numbers here. All other numbers are given in decimal)
One way to think about things is in terms of something like scientific notation. We're used to seeing numbers expressed in scientific notation like, 6.022141 * 10^23. Floating point numbers
are stored internally using a similar format - mantissa and exponent, but using powers of two instead of ten.
Your 61.0 could be rewritten as 1.90625 * 2^5, or 1.11101b * 2^101b with the mantissa and exponents. To multiply that by ten and (move the decimal point), we can do:
(1.90625 * 2^5) * (1.25 * 2^3) = (2.3828125 * 2^8) = (1.19140625 * 2^9)
or, with the mantissas and exponents in binary:
(1.11101b * 2^101b) * (1.01b * 2^11b) = (10.0110001b * 2^1000b) = (1.00110001b * 2^1001b)
Note what we did there to multiply the numbers. We multiplied the mantissas and added the exponents. Then, since the mantissa ended up at two or more, we normalized the result by bumping the
exponent. It's just like when we adjust the exponent after doing an operation on numbers in decimal scientific notation. In each case, the values that we worked with had a finite
representation in binary, and so the values output by the basic multiplication and addition operations also produced values with a finite representation.
Now, consider how we'd divide 61 by 10. We'd start by dividing the mantissas, 1.90625 and 1.25. In decimal, this gives 1.525, a nice short number. But what is this if we convert it to
binary? We'll do it the usual way -- subtracting out the largest power of two whenever possible, just like converting integer decimals to binary, but we'll use negative powers of two:
1.525 - 1*2^0 --> 1
0.525 - 1*2^-1 --> 1
0.025 - 0*2^-2 --> 0
0.025 - 0*2^-3 --> 0
0.025 - 0*2^-4 --> 0
0.025 - 0*2^-5 --> 0
0.025 - 1*2^-6 --> 1
0.009375 - 1*2^-7 --> 1
0.0015625 - 0*2^-8 --> 0
0.0015625 - 0*2^-9 --> 0
0.0015625 - 1*2^-10 --> 1
0.0005859375 - 1*2^-11 --> 1
Uh oh. Now we're in trouble. It turns out that 1.90625 / 1.25 = 1.525 is a repeating fraction when expressed in binary: 1.11101b / 1.01b = 1.10000110011...b. Our machines only have so many
bits to hold that mantissa and so they'll just round the fraction and assume zeroes beyond a certain point. The error you see when you divide 61 by 10 is the difference between:
1.100001100110011001100110011001100110011...b * 2^10b
and, say:
1.100001100110011001100110b * 2^10b
It's this rounding of the mantissa that leads to the loss of precision that we associate with floating point values. Even when the mantissa can be expressed exactly (e.g., when just adding
two numbers), we can still get numeric loss if the mantissa needs too many digits to fit after normalizing the exponent.
We actually do this sort of thing all the time when we round decimal numbers to a manageable size and just give the first few digits of it. Because we express the result in decimal it feels
natural. But if we rounded a decimal and then converted it to a different base, it'd look just as ugly as the decimals we get due to floating point rounding.
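One way to mechanize the expansion above: repeatedly doubling the fraction and peeling off the integer part produces the same digits as subtracting powers of two. (A sketch; the function name is mine.)

```python
from fractions import Fraction

def fractional_binary_digits(x, n):
    """First n binary digits after the point of a Fraction 0 <= x < 1."""
    digits = []
    for _ in range(n):
        x *= 2              # doubling shifts the binary point one place right
        bit = int(x >= 1)
        digits.append(bit)
        x -= bit
    return digits

# The fractional part of 1.525 is 0.525 = 21/40:
print(fractional_binary_digits(Fraction(21, 40), 12))
# [1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0] -- the repeating 0011 pattern emerges
```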
This is a good question.
Your whole question is based on "how do we represent a number?"
ALL numbers can be represented with a decimal representation or with a binary (2's complement) representation. All of them!!
BUT some (most of them) require an infinite number of digits ("0" or "1" for the binary positions, or "0" through "9" for the decimal representation).
Like 1/3 in decimal representation (1/3 = 0.3333333... <- with an infinite number of "3")
Like 0.1 in binary (0.1 = 0.00011001100110011... <- with an infinite number of "0011")
Everything is in that concept. Since your computer can only consider a finite set of digits (decimal or binary), only some numbers can be exactly represented in your computer...
And as Jon said, 3 is a prime number which isn't a factor of 10, so 1/3 cannot be represented with a finite number of digits in base 10.
Even with arbitrary-precision arithmetic, the positional numbering system in base 2 is not able to fully describe 6.1, although it can represent 61.
For 6.1, we must use another representation (like decimal representation, or IEEE 854 that allows base 2 or base 10 for the representation of floating-point values)
You could represent 1/3 as the fraction itself. You don't need an infinite amount of bits to represent it. You just represent it as the fraction 1/3, instead of the result of taking 1 and
dividing it by 3. Several systems work that way. You then need a way to use the standard / * + - and similar operators to work on the representation of fractions, but that's pretty easy
- you can do those operations with a pen and paper; teaching a computer to do it is no big deal. – nos Jul 10 '09 at 20:55
I was talking about "binary (2's complement) representation". Because, of course, using another representation may help you to represent some numbers with a finite number of digits (and
you will need an infinite number of digits for some others). – ThibThib Aug 11 '09 at 17:18
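The exact-fraction arithmetic nos describes is what Python's fractions module implements, for instance:

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third == 1)   # True: exact rational arithmetic
print(Fraction(61, 10) / 10)        # 61/100, no rounding anywhere
print(0.1 + 0.1 + 0.1 == 0.3)       # False with binary floating point
```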
I'm surprised no one has stated this yet: use continued fractions. Any rational number can be represented finitely in binary this way.
Some examples:
1/3 (0.3333...)
0; 3
5/9 (0.5555...)
0; 1, 1, 4
10/43 (0.232558139534883720930...)
0; 4, 3, 3
9093/18478 (0.49209871198181621387596060179673...)
0; 2, 31, 7, 8, 5
From here, there are a variety of known ways to store a sequence of integers in memory.
In addition to storing your number with perfect accuracy, continued fractions also have some other benefits, such as best rational approximation. If you decide to terminate the sequence
of numbers in a continued fraction early, the remaining digits (when recombined to a fraction) will give you the best possible fraction. This is how approximations to pi are found:
Pi's continued fraction:
3; 7, 15, 1, 292 ...
Terminating the sequence at the 1, this gives the fraction 355/113,
which is an excellent rational approximation.
But how would you represent that in binary? For example 15 requires 4 bits to be represented but 292 requires 9. How does the hardware (or even the software) know where the bit
boundaries are between each? It's the efficiency versus accuracy tradeoff. – ardentsonata Jul 5 '13 at 19:26
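Both directions are short to sketch: the terms come from the Euclidean algorithm, and a truncated term list recombines into a convergent. (Helper names are mine.)

```python
from fractions import Fraction

def continued_fraction(p, q):
    """Continued-fraction terms of p/q, via the Euclidean algorithm."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def convergent(terms):
    """Recombine a (possibly truncated) term list into a Fraction."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

print(continued_fraction(10, 43))   # [0, 4, 3, 3]
print(convergent([3, 7]))           # 22/7
print(convergent([3, 7, 15, 1]))    # 355/113, from truncating pi's expansion at the 1
```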
The problem is that you do not really know whether the number actually is exactly 61.0. Consider this:
float a = 60;
float b = 0.1;
float c = a + b * 10;
What is the value of c? It is not exactly 61, because b is not really .1 because .1 does not have an exact binary representation.
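What actually happens depends on how the individual roundings fall. A sketch with Python's 64-bit doubles: the single multiplication happens to round back to exactly 1.0, while ten separate additions do not.

```python
a = 60.0
b = 0.1                        # stored as the nearest double, slightly above 1/10

print(0.1 * 10 == 1.0)         # True: this product rounds back to exactly 1.0
print(a + b * 10 == 61.0)      # True, by the same luck
print(sum([b] * 10) == 1.0)    # False: ten roundings accumulate instead
print(sum([b] * 10))           # 0.9999999999999999
```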
There's a threshold because the meaning of the digit has gone from integer to non-integer. To represent 61, you have 6*10^1 + 1*10^0; 10^1 and 10^0 are both integers. 6.1 is 6*10^0 + 1*10^-1, but 10^-1 is 1/10, which is definitely not an integer. That's how you end up in Inexactville.
A parallel can be drawn with fractions and whole numbers. Some fractions, e.g. 1/7, cannot be represented in decimal form without lots and lots of decimals. Because floating point is binary based, the special cases change, but the same sort of accuracy problems present themselves.
There are an infinite number of rational numbers, and a finite number of bits with which to represent them. See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
But even with an infinite number of bits, if you used a floating binary point, you still wouldn't be able to represent 0.1 exactly, just like you can't represent 1/3 exactly in
decimal even with an infinite number of bits. – Jon Skeet Jul 6 '09 at 20:48
3 @Jon That's untrue: with an infinite number of decimals, I can for example express 'one third' exactly. The real-world problem is that it's not physically possible to have "an infinite
number" of decimals or of bits. – ChrisW Jul 6 '09 at 21:16
The number 61.0 does indeed have an exact floating-point representation—but that's not true for all integers. If you wrote a loop that added one to both a double-precision floating point number
and a 64-bit integer, eventually you'd reach a point where the 64-bit integer perfectly represents a number, but the floating point doesn't—because there aren't enough significant bits.
It's just much easier to reach the point of approximation on the right side of the decimal point. If you started writing out all the numbers in binary floating point, it'd make more sense.
Another way of thinking about it is that when you note that 61.0 is perfectly representable in base 10, and shifting the decimal point around doesn't change that, you're performing
multiplication by powers of ten (10^1, 10^-1). In floating point, multiplying by powers of two does not affect the precision of the number. Try taking 61.0 and dividing it by three
repeatedly for an illustration of how a perfectly precise number can lose its precise representation.
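A minimal illustration (Python's float is a 64-bit double with 53 significant bits, so the crossover appears at 2**53):

```python
big = 2 ** 53                        # 9007199254740992

print(float(big) == big)             # True: still exactly representable
print(float(big + 1) == float(big))  # True: big + 1 rounds to the same double
print(float(big + 2) == big + 2)     # True: the spacing between doubles is now 2
```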
You know how integer numbers work, right? Each bit represents 2^n.
Well, it's the same for floating point (with some distinctions), but the fraction bits represent 2^-n:
2^-1 = 1/2 = 0.5
2^-2 = 1/(2*2) = 0.25
Floating point binary representation:
sign Exponent Fraction (I think an invisible 1 is appended to the fraction)
B11 B10 B9 B8 B7 B6 B5 B4 B3 B2 B1 B0
The high-scoring answer above nailed it.
First, you were mixing base 2 and base 10 in your question; then, when you put a number on the right side that is not divisible into the base, you get problems. Like 1/3 in decimal, because 3 doesn't go into a power of 10, or 1/5 in binary, which doesn't go into a power of 2.
Another comment though: NEVER use equals with floating point numbers, period. Even if a value has an exact representation, there are some numbers in some floating point systems that can be accurately represented in more than one way (IEEE is bad about this; it is a horrible floating point spec to start with, so expect headaches). No different here: 1/3 is not EQUAL to the number on your calculator, 0.3333333, no matter how many 3's there are to the right of the decimal point. It is, or can be, close enough, but is not equal. So you would expect something like 2*(1/3) not to equal 2/3, depending on the rounding. Never use equals with floating point.
As we have been discussing, in floating point arithmetic, the decimal 0.1 cannot be perfectly represented in binary.
Floating point and integer representations provide grids or lattices for the numbers represented. As arithmetic is done, the results fall off the grid and have to be put back onto the
grid by rounding. Example is 1/10 on a binary grid.
If we use binary coded decimal representation as one gentleman suggested, would we be able to keep numbers on the grid?
1 Decimal numbers, sure. But that's just by definition. You can't represent 1/3 in decimal, any more than you can represent 0.1 in binary. Any quantization scheme fails for an
infinitely large set of numbers. – Kylotan Feb 18 '11 at 20:45
Refer to this link to clear up your doubts about floating point numbers: http://kipirvine.com/asm/workbook/floating_tut.htm
In the equation
2^x = y ;
x = log(y) / log(2)
Hence, I was just wondering if we could have a logarithmic base system for binary like,
2^1, 2^0, 2^(log(1/2) / log(2)), 2^(log(1/4) / log(2)), 2^(log(1/8) / log(2)),2^(log(1/16) / log(2)) ........
That might be able to solve the problem, so if you wanted to write something like 32.41 in binary, that would be
2^5 + 2^(log(0.4) / log(2)) + 2^(log(0.01) / log(2))
or
2^5 + 2^(log(0.41) / log(2))
differentiation applying two rules
Re: differentiation applying two rules
oh ! so is my answer incorrect ?
Re: differentiation applying two rules
here is my second example - please see if my initial steps are correct, while I get my head around what I did wrong in the first example - ta
Re: differentiation applying two rules
Re: differentiation applying two rules
Ok I read what you said and I tried again- you can't fail me for trying. Please have a look
Re: differentiation applying two rules
For that last example, you overlooked squaring the entire denominator.
What I meant earlier is that you are comfortable with the situation
where the "function nesting" is one level deep, when applying the chain rule.
$e^{6x}$ is one level beyond $e^x$
$\frac{d}{dx}e^{6x}=\frac{d}{dx}e^q=\frac{dq}{dx} \frac{d}{dq}e^q$
has an extra stage.
Beginning with x, we have $1+3x^2$,
but then we take the square root of this,
so the "nesting" is 2 levels deep, whereas it was only 1 level deep for $e^{6x}$
$\frac{d}{dx}\left(1+3x^2\right)^{\frac{1}{2}}= \frac{d}{dx}w^{\frac{1}{2}}=\frac{dw}{dx}\frac{d}{dw}w^{\frac{1}{2}}$
How is that ?
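For reference, carrying the substitution through to the end (a sketch, assuming $w = 1+3x^2$ as above):

```latex
\frac{dw}{dx} = 6x, \qquad \frac{d}{dw}w^{\frac{1}{2}} = \frac{1}{2}w^{-\frac{1}{2}},
\quad\text{so}\quad
\frac{d}{dx}\left(1+3x^2\right)^{\frac{1}{2}}
  = 6x \cdot \frac{1}{2}\left(1+3x^2\right)^{-\frac{1}{2}}
  = \frac{3x}{\sqrt{1+3x^2}}.
```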
Re: differentiation applying two rules
Re: differentiation applying two rules
Hey, following some discussion with someone else and you, I have done some work - do you mind checking whether what I have done is correct, so I know I am ok to work with both the product and quotient rules
Re: differentiation applying two rules
so you are telling me I am finally right
Re: differentiation applying two rules
I take it I finally understand how it works - please say yes!!! I have also completed my own second example - here's my final version, please look at it. I am typing so I can send it, therefore please excuse minor mistakes, but please point them out
Re: differentiation applying two rules
Re: differentiation applying two rules
Ok, I will do the squaring and see if I can simplify in any way. However, I take it the rest of it is ok - is that correct? Also, I know why I made the mistake in "v": I wasn't careful. But since you pointed it out, I think I will be extra careful in writing my steps out. I'm just away from the computer for the next hour, so when I get back I will send you my correction. I must say you have given me the confidence I thought I'd never have. Sincere thank you
Real closed fields + model theory
Is it true that every real closed field can be elementarily embedded in some other real closed field with the same Archimedean classes (I mean in a proper extension)? Can, for example, the real numbers be elementarily embedded in another real closed field with the same Archimedean classes? R (the real numbers) is not the only Archimedean field, is it?
real-algebra model-theory
2 If you're asking whether the field of reals has a proper, Archimedean extension, the answer is no, and the question is not at the level appropriate for MO. So I'll vote to close. If you intended
to ask something else, please clarify. – Andreas Blass Dec 12 '11 at 21:05
Do you mean Archimedean classes of the additive group or of the multiplicative group? – Emil Jeřábek Dec 13 '11 at 12:59
1 Answer
The question is somewhat ambiguous: it's not clear whether the Archimedean classes are meant to be additive or multiplicative. I will assume the former, i.e., equivalence classes of the
relation $$a\sim b\Leftrightarrow\mathrm{sign}(a)=\mathrm{sign}(b)\land\exists n\in\omega\smallsetminus\{0\}\,(n^{-1}|a|\le|b|\le n|a|).$$ First, since real-closed fields (rcf) have
elimination of quantifiers, any embedding between them is automatically elementary. Thus the question is whether every rcf $R$ has a proper rcf extension $S$ with the same Archimedean
classes (i.e., every $s\in S$ is $\sim$ to some $r\in R$).
As Andreas noted above, this property does not hold in general, and in particular, $\mathbb R$ has no proper Archimedean extension. On the other hand, it holds for many other real-closed
fields: for example, any Archimedean rcf different from $\mathbb R$ has a proper Archimedean rcf extension (namely, $\mathbb R$). I think the following characterization holds:
Proposition: If $R$ is a rcf, the following are equivalent:
1. $R$ has a proper rcf extension with the same Archimedean classes.
2. There is a Dedekind cut $\langle A,B\rangle$ on the interval $(0,1)_R$ such that $$\tag{$*$}\forall a\in A\,\exists b\in B\,\frac{a+b}2\in A\qquad\text{and}\qquad\forall b\in B\,\exists a\in A\,\frac{a+b}2\in B.$$
On the one hand, let $S\supseteq R$ be a rcf with the same Archimedean classes and $x\in S\smallsetminus R$. We can assume $x>1$. There exists $c\in R$ such that $c\sim x$; WLOG $c< x<
2c$. Then $0< x/c-1< 1$, and the Dedekind cut on $R$ determined by $x/c-1$ is easily seen to satisfy $(*)$.
On the other hand, assume the cut $\langle A,B\rangle$ is given. We define an ordering on the rational function field $F=R(x)$ as follows. Using the fact that every nonzero polynomial is
a product of linear polynomials and polynomials of the form $(x-a)^2+b$, where $b>0$, we see that for every $f(x)/g(x)\in F$, there are $a\in A$ and $b\in B$ such that $f$ and $g$ have
constant sign on $(a,b)_R$; we define the sign of $f(x)/g(x)$ to be the sign it assumes on $(a,b)_R$. This makes $F$ an ordered field. Let $S$ be its real closure. For a given $\alpha\in
S$, there exists $c\in R$ such that $\alpha\sim c$ whenever:
1. $\alpha=x-a$, $a\in R$. This follows from $(*)$.
2. $\alpha=(x-a)^2+b$, $a,b\in R$, $b>0$. This follows from 1: if $u\sim u'$, $v\sim v'$, and $u,v>0$, then $u+v\sim u'+v'$.
3. $\alpha\in F$. Every such $\alpha$ is a product of an element of $R$ and elements of the form 1 or 2 or their inverses, and $u\sim u'$ and $v\sim v'$ imply $uv\sim u'v'$ and $u/v\sim u'/v'$.
4. $\alpha\in S$ is such that $\alpha^k\in F$ for some integer $k>0$. We have $\alpha^k\sim c$ by 3, hence $\alpha\sim\sqrt[k]c\in R$.
5. $\alpha\in S$. We have $\sum_{i\le d}u_i\alpha^i=0$ for some $u_i\in F$, $u_d\ne0$. Let $i$ be such that the Archimedean class of $u_i\alpha^i$ is maximal. Since the sum above is
$0$, there exists $j\ne i$ such that $u_j\alpha^j\sim-u_i\alpha^i$. Then $\alpha^{j-i}\sim-u_i/u_j$, hence $\alpha\sim c$ for some $c\in R$ by 4.
Thus $S$ is a proper rcf extension of $R$ with the same Archimedean classes as $R$.
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 158.26603
Author: Erdös, Pál; Hajnal, András; Rado, R.
Title: Partition relations for cardinal numbers (In English)
Source: Acta Math. Acad. Sci. Hung. 16, 93-196 (1965).
Review: Small Greek letters denote ordinal numbers, small Roman letters denote cardinal numbers (i.e. initial ordinals); always $p,r,s < \omega_0$; $|X|$ is the cardinality of $X$; $[X]^r$ denotes the set of all $r$-element subsets of $X$. The partition relations I, II, III are defined as follows. The relation I: $a \to (b_\nu)^r_{\nu < \lambda}$ holds true if and only if for every partition $[X]^r = \bigcup_{\nu < \lambda} J_\nu$, $|X| = a$, there is a $\nu_0 < \lambda$ and a subset $Y \subseteq X$ such that $|Y| = b_{\nu_0}$ and $[Y]^r \subseteq J_{\nu_0}$. The relation II: $a \to (b_\nu)^{< \aleph_0}_{\nu < \lambda}$ means that for every partition $[X]^{< \aleph_0} = \bigcup_{\nu < \lambda} J_\nu$, where $[X]^{< \aleph_0} = \bigcup_{r < \omega_0} [X]^r$, there exist a $\nu_0 < \lambda$ and a subset $Y \subseteq X$ with $|Y| = b_{\nu_0}$ and $[Y]^{< \aleph_0} \subseteq J_{\nu_0}$. The relation III:
$$\begin{pmatrix} a_0 \\ \vdots \\ a_s \end{pmatrix} \to \begin{pmatrix} b_{0\nu} \\ \vdots \\ b_{s\nu} \end{pmatrix}^{r_0,\dots,r_s}_{\nu < \lambda}$$
is equivalent to the following condition. Let $|X_p| = a_p$ for $p \leq s$, where the $X_p$ are disjoint, and let $$[X_0,\dots,X_s]^{r_0,\dots,r_s} = \{X : X \subseteq X_0 \cup \dots \cup X_s,\ |X \cap X_p| = r_p \text{ for } p \leq s\} = \bigcup_{\nu < \lambda} J_\nu.$$ Then there exist sets $Y_r \subseteq X_r$ for $r \leq s$ and a $\nu_0 < \lambda$ such that $|Y_r| = b_{r\nu_0}$ for $r \leq s$ and $[Y_0,\dots,Y_s]^{r_0,\dots,r_s} \subseteq J_{\nu_0}$.
"In this paper our first major aim is to discuss as completely as possible the relation I. Our most general results in this direction are stated in Theorems I and II, ... . If we disregard cases when
among the given cardinals there occur inaccessible numbers greater than \aleph[0], and if we assume the General Continuum Hypothesis, then our results are complete for r = 2, ... . It seems that
there are only two essentially different methods for obtaining positive partition formulae: those given in Lemma 1 and those given in Lemma 3 ... In Lemma 5 we state powerful new methods for
constructing examples of particular partitions which yield negative I-relation. ... Our second major aim is an investigation of the polarized partition relation III."
The exact formulation of Lemma 1 is complicated; its content may be briefly stated as follows: in every sufficiently large tree in which only a small number of branches issues from each node, there is a long branch.
The simplest canonization lemma (Lemma 3, proved using the Generalized Continuum Hypothesis) may be stated as follows: Let $|S| = a > a'$ ($a'$ is the smallest cardinal with which $a$ is cofinal); $r \geq 1$; $a = \sup \{a_\xi : \xi < a'\}$, with $a_{\xi_1} < a_{\xi_2}$ for $\xi_1 < \xi_2 < a'$; $[S]^r = \bigcup_{\nu < \lambda} J_\nu$, $\lambda < a$. Then there are disjoint sets $S_\sigma$, $\sigma < a'$, with $|S_\sigma| = a_\sigma$ and $S_\sigma \subseteq S$, such that for $X,Y \in [\bigcup_{\sigma < a'} S_\sigma]^r$, the relations $|X \cap S_\sigma| = |Y \cap S_\sigma|$ for $\sigma < a'$ are equivalent to the condition: there is a $\nu_0 < \lambda$ such that $X,Y \in J_{\nu_0}$.
Define $\alpha \dot- 1 = \alpha$ for $\alpha$ a limit, and $\alpha \dot- 1 = \beta$ if and only if $\alpha = \beta+1$; $\mathrm{cr}(\alpha) = \mathrm{cf}(\mathrm{cf}(\alpha) \dot- 1)$. Let us denote:
(R) $\aleph_{\beta+(r-2)} \to (b_\xi)^r_{\xi < \lambda}$,
(IA) $b_0 = \aleph_\beta$,
(IB) $b_\xi < \aleph_\beta$ for $\xi < \lambda$,
(CA) $\prod_{1 \leq \xi < \lambda} b_\xi \leq \aleph_{\mathrm{cr}(\beta)}$,
(CB) $\prod_{\xi < \lambda} b_\xi < \aleph_\beta$,
(D) $r \geq 3$, $\beta > \mathrm{cf}(\beta) > \mathrm{cf}(\beta) \dot- 1 > \mathrm{cr}(\beta)$, $b_\xi < \aleph_0$ for $1 \leq \xi < \lambda$.
The first main theorem may be stated as follows. Let $\lambda \geq 2$, $2 \leq r < b_\xi \leq \aleph_\beta$ for $\xi < \lambda$. Assuming the Generalized Continuum Hypothesis we have:
(i) If (IA) holds and (D) does not hold, then (R) implies (CA).
(ii) If (IA) holds and $b_1 \geq \aleph_0$, then (R) implies (CA).
(iii) If (IA) holds and $\aleph_\beta'$ is not inaccessible, then (CA) implies (R).
(iv) If (IA) holds and $b_\xi < \aleph_\beta'$ for $0 < \xi < \lambda$, then (CA) implies (R).
(v) If (IB) holds, then (CB) is equivalent to (R).
Let us denote:
(IIA) $b_0 > \aleph_{\alpha \dot- (r-2)}$,
(IIB) $b_\xi \leq \aleph_\gamma$ for $\xi < \lambda$, where $\alpha = \gamma+s$, $\gamma$ a limit and $s < r-2$,
(IIC1) $b_0 = \aleph_{\alpha \dot- (r-2)}$,
(IIC2) $b_\xi < \aleph_{\alpha \dot- (r-2)}$ for $\xi < \lambda$,
(R0) $\aleph_\alpha \to (b_\xi)^r_{\xi < \lambda}$.
The second main theorem: Let $\lambda \geq 2$, $2 \leq r < b_\xi \leq \aleph_\alpha$ for $\xi < \lambda$.
Assuming the Generalized Continuum Hypothesis we have:
(i) If (IIA) holds, then (R0) is false.
(ii) If (IIB) and (IIC1) hold, then (R0) implies that $\aleph_{\alpha \dot- (r-2)}$ is inaccessible.
(iii) If (IIB) and (IIC2) hold, then (R0) is equivalent to the condition $\prod_{\xi < \lambda} b_\xi < \aleph_{\alpha \dot- (r-2)}$.
The proofs are based on Lemmas 1, 2, 3 and 5. Lemmas 2 and 5 are the stepping-up and stepping-down lemmas, respectively, i.e. they are of the form "if $a \to (b_\xi)^r_{\xi < \lambda}$, then $a^+ \to (b_\xi+1)^{r+1}_{\xi < \lambda}$" and "if $a \not\to (b_\xi)^r_{\xi < \lambda}$, then $2^a \not\to (b_\xi+1)^{r+1}_{\xi < \lambda}$", respectively (of course, under some assumptions).
A great part of the paper is devoted to the study of relations IV and V. The relation IV: a > [b[\xi] ]^r[\xi < c] (relation V: a > [b]^r[c,d]) is equivalent to the condition: whenever |S| = a,
[S]^r = \bigcup[\xi < c] J[\xi], where the J[\xi] are disjoint, then there are a set X \subseteq S and a number \xi[0] < c (a set D \subseteq c) such that |X| = b[\xi[0]] and [X]^r \cap J[\xi[0]] = Ø
(|X| = b, |D| \leq a and [X]^r \subseteq \bigcup[\xi in D] J[\xi]). Some results (assuming the Generalized Continuum Hypothesis):
(i) \aleph[\alpha+1] \not > [\aleph[\alpha+1]]^2[\aleph[\alpha+1]] for any \alpha.
(ii) Let r \geq 2 and \alpha > cf (\alpha). Then \aleph[\alpha] \not > [\aleph[\alpha]]^r[2^r-1].
(iii) If \aleph[\alpha]' is \aleph[0] or a measurable cardinal, then \aleph[\alpha] > [\aleph[\alpha]]^r[c] for c > 2^r-1 and \aleph[\alpha] > [\aleph[\alpha]]^r[c2^r-1] for c < \aleph[\alpha].
(iv) \aleph[2] > [\aleph[0], \aleph[1], \aleph[1] ]^3.
On the other hand, there are many open problems, e.g. \aleph[2] > [\aleph[1]]^3[4]?, \aleph[3] > [\aleph[1]]^2[\aleph[2],\aleph[0]]?
In the second part, the authors investigate the polarized partition relation \binom{a}{b} > \pmatrix a[0], a[1] \\ b[0], b[1] \endpmatrix, i.e. a special case of the relation III. A complete
discussion is given, however, the results are not complete. Many other relations and problems are studied, but it is impossible to give a full list of them here.
The paper is rather difficult to read and gives the impression of a condensed version of a monograph.
Reviewer: L.Bukovský
Classif.: * 05D10 Ramsey theory
03E05 Combinatorial set theory (logic)
04A20 Combinatorial set theory
03-02 Research monographs (mathematical logic)
05E10 Tableaux, etc.
04A10 Ordinal and cardinal numbers; generalizations
Index Words: set theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/15826603.htm","timestamp":"2014-04-17T19:01:33Z","content_type":null,"content_length":"16528","record_id":"<urn:uuid:26b1822e-9660-4e55-90a6-27520023d28f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
The equation of the normal to a curve which is parallel to a given line
October 9th 2009, 05:41 AM
The equation of the normal to a curve which is parallel to a given line
this is the question:
find the equation of the normal to the curve y=3+2x-x^2 which is parallel to the line 2y-x-3=0
Please help me... I don't know how to do it.
October 9th 2009, 05:51 AM
mr fantastic
The gradient of 2y-x-3=0 is 1/2. Therefore the gradient of the required normal is 1/2 therefore the gradient of the tangent is -2. So solve dy/dx = -2 to get the x-coordinate and hence
y-coordinate of the required point on the curve. Now you have a point and you have a gradient and therefore you can write down the equation of the normal line.
October 9th 2009, 05:59 AM
WRITE it
The gradient of 2y-x-3=0 is 1/2. Therefore the gradient of the required normal is 1/2 therefore the gradient of the tangent is -2. So solve dy/dx = -2 to get the x-coordinate and hence
y-coordinate of the required point on the curve. Now you have a point and you have a gradient and therefore you can write down the equation of the normal line.
Can you write it? It's difficult for me.
October 9th 2009, 06:03 AM
I am sure you know that the gradient of $y=\frac{1}{2}x+\frac{3}{2}$ is 1/2
$\frac{dy}{dx}=-2x+2$ ... gradient of tangent
so gradient of normal will be $-\frac{1}{2-2x}$
Set it equal to 1/2. Do you know why?
Then solve for x, and sub into the original equation to get y.
After that, make use of the formula that I gave you in my post to your other question to get the equation.
October 9th 2009, 06:04 AM
mr fantastic
You have spent no more than 8 minutes thinking about what I posted before asking for more help. You need to spend more time than that thinking about the question and the reply I gave.
Have you tried doing what I suggested? What don't you understand? Where do you get stuck? | {"url":"http://mathhelpforum.com/calculus/107009-equation-normal-curve-parallel-given-line-print.html","timestamp":"2014-04-20T03:22:18Z","content_type":null,"content_length":"8817","record_id":"<urn:uuid:3f832cde-e1b4-4f61-b592-76d7ca7392bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
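For readers following along, the method described above can be checked numerically with a few lines of Python (this check is added for illustration and is not part of the original thread; the numbers are computed from the problem statement):

```python
# Normal to y = 3 + 2x - x^2 that is parallel to 2y - x - 3 = 0.
def f(x):
    return 3 + 2*x - x**2

def fprime(x):          # dy/dx of the curve
    return 2 - 2*x

# The line 2y - x - 3 = 0 has gradient 1/2, so the required normal has
# gradient 1/2, and the tangent at the point of contact has gradient -2.
x0 = (2 - (-2)) / 2     # solve 2 - 2x = -2  ->  x = 2
y0 = f(x0)              # y = 3 + 4 - 4 = 3

# Normal through (x0, y0) with gradient 1/2:  y - 3 = (1/2)(x - 2)
m = 0.5
c = y0 - m * x0         # y = 0.5*x + 2, i.e. 2y - x - 4 = 0
print(x0, y0, m, c)
```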
pydelay is a program which translates a system of delay differential equations (DDEs) into simulation C-code and compiles and runs the code (using scipy weave). This way it is easy to quickly
implement a system of DDEs but you still have the speed of C. The Homepage can be found here:
It is largely inspired by PyDSTool.
The algorithm used is based on the Bogacki-Shampine method which is also implemented in Matlab's dde23.
We also want to mention PyDDE – a different python program for solving DDEs.
pydelay is licensed under the MIT License.
You can cite pydelay through the arXiv:
@misc{pydelay,
  title = {pydelay -- a python tool for solving delay differential equations},
  author = {Flunkert, V. and Sch{\"o}ll, E.},
  year = {2009},
  howpublished = {{\tt arXiv:0911.1633~[nlin.CD]}},
}
Installation and requirements
You need python and python headers files (in debian/ubuntu these are in the package python-dev), numpy and scipy and the gcc-compiler.
To plot the solutions and run the examples you also need matplotlib.
To install pydelay download the latest tar.gz from the website and install the package in the usual way:
tar -xzf pydelay-$version.tar.gz
cd pydelay-$version
python setup.py install
When the package is installed, you can get some info about the functions and the usage with:
TODO easy_install
For Arch linux there is a PKGBUILD .
The solver has not been tested on a Windows machine. It could perhaps work under Cygwin.
An example
The following example shows the basic usage. It solves the Mackey-Glass equations for initial conditions which lead to a periodic orbit (see for this example).
# import pydelay and numpy and pylab
import numpy as np
import pylab as pl
from pydelay import dde23

# define the equations
eqns = {
    'x' : '0.25 * x(t-tau) / (1.0 + pow(x(t-tau),p)) -0.1*x'
}

# define the parameters
params = {
    'tau': 15,
    'p'  : 10
}

# Initialise the solver
dde = dde23(eqns=eqns, params=params)

# set the simulation parameters
# (solve from t=0 to t=1000 and limit the maximum step size to 1.0)
dde.set_sim_params(tfinal=1000, dtmax=1.0)

# set the history of x to the constant function 0.5 (using a python lambda function)
histfunc = {
    'x': lambda t: 0.5
}
dde.hist_from_funcs(histfunc, 51)

# run the simulator
dde.run()

# Make a plot of x(t) vs x(t-tau):
# Sample the solution twice with a stepsize of dt=0.1:
# once in the interval [515, 1000]
sol1 = dde.sample(515, 1000, 0.1)
x1 = sol1['x']

# and once between [500, 1000-15]
sol2 = dde.sample(500, 1000-15, 0.1)
x2 = sol2['x']

pl.plot(x1, x2)
pl.xlabel('$x(t)$')
pl.ylabel('$x(t - 15)$')
pl.show()
Defining the equations, delays and parameters
Equations are defined using a python dictionary. The keys are the variable names and the entry is the right hand side of the differential equation. The string defining the equation has to be a valid
C expression, i.e., use pow(a,b) instead of a**b etc.
Delays are written as (t-delay), where delay can be some expression involving parameters and numbers but not (yet) involving the time t or the dynamic variables:
eqns = {
'y1': '- y1 * y2(t-tau) + y2(t-1.0)',
'y2': 'a * y1 * y2(t-2*tau) - y2',
'y3': 'y2 - y2(t-(tau+1))'
}
Complex variables can be defined by adding ':c' or ':C' in the eqn-dictionary. The imaginary unit can be used through 'ii' in the equations:
eqns = {
'z:c': '(la + ii*w0 + g*pow(abs(z),2) )*z + b*(z(t-tau) - z(t))',
}
Parameters are defined in a separate dictionary where the keys are the parameter names, i.e.,:
params = {
'a' : 0.2,
'tau': 1.0
}
Setting the history
The history of the variables is stored in the dictionary dde23.hist. The keys are the variable names and there is an additional key 't' for the time array of the history.
There is a second dictionary dde23.Vhist where the time derivatives of the history are stored (this is needed for the solver). When the solver is initialized, i.e.,:
dde = dde23(eqns, params)
the history of all variables (defined in eqns) is initialized to an array of length nn=101 filled with zeros. The time array is evenly spaced in the interval [-maxdelay, 0].
It is possible to manipulate these arrays directly, however this is not recommended since one easily ends up with an ill-defined history resulting for example in segfaults or false results.
Instead use the following methods to set the history.
dde23.hist_from_funcs(dic, nn=101)
Initialise the histories with the functions stored in the dictionary dic. The keys are the variable names. The function will be called as f(t) for t in [-maxdelay, 0] on nn samples in the interval.
This function provides the simplest way to set the history. It is often convenient to use python lambda functions for f. This way you can define the history function in place.
If any variable names are missing in the dictionaries, the history of these variables is set to zero and a warning is printed. If the dictionary contains keys not matching any variables these
entries are ignored and a warning is printed.
Example: Initialise the history of the variables x and y with cos and sin functions using a finer sampling resolution:
from math import sin, cos
histdic = {
'x': lambda t: cos(0.2*t),
'y': lambda t: sin(0.2*t)
}
dde.hist_from_funcs(histdic, 500)
dde23.hist_from_arrays(dic, useend=True)
Initialise the history using a dictionary of arrays with variable names as keys. Additionally a time array can be given corresponding to the key t. All arrays in dic have to have the same length.
If an array for t is given the history is interpreted as points (t,var). Otherwise the arrays will be evenly spaced out over the interval [-maxdelay, 0].
If useend is True the time array is shifted such that the end time is zero. This is useful if you want to use the result of a previous simulation as the history.
If any variable names are missing in the dictionaries, the history of these variables is set to zero and a warning is printed. If the dictionary contains keys not matching any variables (or 't')
these entries are ignored and a warning is printed.
t = numpy.linspace(0, 1, 500)
x = numpy.cos(0.2*t)
y = numpy.sin(0.2*t)
histdic = {
't': t,
'x': x,
'y': y
}

dde.hist_from_arrays(histdic)
Note that the previously used methods hist_from_dict, hist_from_array and hist_from_func (the last two without s) have been removed, since it was too easy to make mistakes with them.
The solution
After the solver has run, the solution (including the history) is stored in the dictionary dde23.sol. The keys are again the variable names and the time 't'. Since the solver uses an adaptive step
size method, the solution is not sampled at regular times.
To sample the solutions at regular (or other custom spaced) times there are two functions.
dde23.sample(tstart=None, tfinal=None, dt=None)
Sample the solution with dt steps between tstart and tfinal.
tstart, tfinal
Start and end value of the interval to sample. If nothing is specified tstart is set to zero and tfinal is set to the simulation end time.
Sampling size used. If nothing is specified a reasonable value is calculated.
Returns a dictionary with the sampled arrays. The keys are the variable names. The key 't' corresponds to the sampling times.
Sample the solutions at times t.
Array of time points on which to sample the solution.
Returns a dictionary with the sampled arrays. The keys are the variable names. The key 't' corresponds to the sampling times.
These functions use a cubic spline interpolation of the solution data.
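To illustrate what such spline-based resampling looks like, here is a generic sketch using scipy directly (this is an illustration, not pydelay's internal code; the irregular grid stands in for adaptive-step solver output):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Irregularly spaced "solver output", as an adaptive-step solver would produce
rng = np.random.default_rng(0)
t = np.sort(np.concatenate(([0.0, 10.0], rng.uniform(0, 10, 200))))
x = np.sin(t)

# Resample on a regular grid via a cubic spline
spline = CubicSpline(t, x)
t_reg = np.linspace(0, 10, 101)
x_reg = spline(t_reg)

err = np.max(np.abs(x_reg - np.sin(t_reg)))
print(err)  # small interpolation error
```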
Noise can be included in the simulations. Note however, that the method used is quite crude (an Euler method will be added which is better suited for noise dominated dynamics). The deterministic
terms are calculated with the usual Runge-Kutta method and then the noise term is added with the proper scaling of $\sqrt{dt}$ at the final step. To get accurate results one should use small time
steps, i.e., dtmax should be set small enough.
The noise is defined in a separate dictionary. The function gwn() can be accessed in the noise string and is a Gaussian white noise term of unit variance. The following code specifies an
Ornstein-Uhlenbeck process.:
eqns = { 'x': '-x' }
noise = { 'x': 'D * gwn()'}
params = { 'D': 0.00001 }
dde = dde23(eqns=eqns, params=params, noise=noise)
You can also use noise terms of other forms by specifying an appropriate C-function (see the section on custom C-code).
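As background for the sqrt(dt) scaling mentioned above, here is a minimal Euler-Maruyama loop for an Ornstein-Uhlenbeck process in plain numpy (independent of pydelay; the parameter values are arbitrary example choices):

```python
import numpy as np

def ou_euler(D, dt, n, seed=0):
    """Euler-Maruyama for dx = -x*dt + D*sqrt(dt)*xi, with xi ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    s = D * np.sqrt(dt)          # the sqrt(dt) noise scaling
    for i in range(1, n):
        x[i] = x[i-1] - x[i-1]*dt + s * rng.standard_normal()
    return x

x = ou_euler(D=0.1, dt=0.01, n=50_000)
# For this process the stationary standard deviation is D/sqrt(2)
print(x.std())
```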
Custom C-code
You can access custom C-functions in your equations by adding the definition as supportcode for the solver. In the following example a function f(w,t) is defined through C-code and accessed in the
eqn string.:
# define the eqn f is the C-function defined below
eqns = { 'x': '- x + k*x(t-tau) + A*f(w,t)' }
params = {
'k' : 0.1,
'w' : 2.0,
'A' : 0.5,
'tau': 10.0
}
mycode = """
double f(double w, double t) {
    return sin(w * t);
}
"""
dde = dde23(eqns=eqns, params=params, supportcode=mycode)
When defining custom code you have to be careful with the types. The type of complex variables in the C-code is Complex. Note in the above example that w has to be given as an input to the function,
because the parameters can only be accessed from the eqns string and not inside the supportcode. (Should this be changed?)
Using custom C-code is often useful for switching terms on and off. For example the Heaviside function may be defined and used as follows.:
# define the eqn f is the C-function defined below
eqns = { 'z:c': '(la+ii*w)*z - Heavi(t-t0)* K*(z-z(t-tau))' }
params = {
'K' : 0.1 ,
'w' : 1.0,
'la' : 0.1,
'tau': pi,
't0' : 2*pi
}
mycode = """
double Heavi(double t) {
    if(t >= 0)
        return 1.0;
    else
        return 0.0;
}
"""
dde = dde23(eqns=eqns, params=params, supportcode=mycode)
This code would switch a control term on when t>t0. Note that Heavi(t-t0) does not get translated to a delay term, because Heavi is not a system variable.
Since this scenario occurs so frequent the Heaviside function (as defined above) is included by default in the source code.
Use and modify generated code
The compilation of the generated code is done with scipy.weave. Instead of using weave to run the code you can directly access the generated code via the function dde23.output_ccode(). This function
returns the generated code as a string which you can then store in a source file.
To run the generated code manually you have to set the precompiler flag #define MANUAL (uncomment the line in the source file) to exclude the python related parts and include some other parts, making
the code a valid stand-alone source file. After this the code should compile with g++ -lm -o prog source.cpp and you can run the program manually.
You can specify the history of all variables in the source file by setting the for loops after the comment /* set the history here ... */.
Running the code manually can help you debug, if some problem occurs and also allows you to extend the code easily.
Another example
The following example shows some of the things discussed above. The code simulates the Lang-Kobayashi laser equations:
import numpy as np
import pylab as pl
from pydelay import dde23

tfinal = 10000
tau = 1000

# the laser equations
eqns = {
    'E:c': '0.5*(1.0+ii*a)*E*n + K*E(t-tau)',
    'n'  : '(p - n - (1.0 +n) * pow(abs(E),2))/T'
}

params = {
    'a'  : 4.0,
    'p'  : 1.0,
    'T'  : 200.0,
    'K'  : 0.1,
    'tau': tau,
    'nu' : 10**-5,
    'n0' : 10.0
}

noise = { 'E': 'sqrt(0.5*nu*(n+n0)) * (gwn() + ii*gwn())' }

dde = dde23(eqns=eqns, params=params, noise=noise)
dde.set_sim_params(tfinal=tfinal)

# use a dictionary to set the history
thist = np.linspace(0, tau, tfinal)
Ehist = np.zeros(len(thist))+1.0
nhist = np.zeros(len(thist))-0.2
dic = {'t' : thist, 'E': Ehist, 'n': nhist}

# 'useend' is True by default in hist_from_arrays and thus the
# time array is shifted correctly
dde.hist_from_arrays(dic)

dde.run()

t = dde.sol['t']
E = dde.sol['E']
n = dde.sol['n']

spl = dde.sample(-tau, tfinal, 0.1)

pl.plot(t[:-1], t[1:] - t[:-1], '0.8', label='step size')
pl.plot(spl['t'], abs(spl['E']), 'g', label='sampled solution')
pl.plot(t, abs(E), '.', label='calculated points')
pl.legend()
pl.xlim((0.95*tfinal, tfinal))
pl.show()
| {"url":"http://pydelay.sourceforge.net/","timestamp":"2014-04-18T13:29:16Z","content_type":null,"content_length":"61043","record_id":"<urn:uuid:191535a5-3fd2-46d1-a7eb-6f9e7eca7423>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Delta-Y/Delta-X as a function of X and Delta-X (econ related
08-30-2010, 01:38 PM #1
New Member
Join Date
Aug 2010
Delta-Y/Delta-X as a function of X and Delta-X (econ related
I've got the following review problem in my micro-econ course:
y = f(x) = 5x^2 - 4x
(a) Find delta-y/delta-x as a function of x and delta-x.
(b) Find dy/dx.
(b)'s just the derivative and I've confirmed the answer in the book. However, I cannot work out how one arrives at this solution for (a): 10x + (5delta-x) - 4
Where does the 5delta-x come from?
The text gives this form:
delta-y/delta-x = [f(x - delta-x) - f(x)] / delta-x
Using that, I got 5delta-x - 6
Any help would be greatly appreciated. Thanks in advance.
Re: Delta-Y/Delta-X as a function of X and Delta-X (econ rel
[tex]f(x) \ = \ 5x^2-4x, \ \frac{\Delta y}{\Delta x} \ = \ \frac{f(x+\Delta x)-f(x)}{\Delta x}[/tex]
[tex]= \ \frac{5(x+\Delta x)^2-4(x+\Delta x)- (5x^2-4x)}{\Delta x}[/tex]
[tex]= \ \frac{5(x^2+2x\Delta x+\Delta x^2)-4x-4\Delta x-5x^2+4x}{\Delta x}[/tex]
[tex]= \ \frac{5x^2+10x\Delta x+5\Delta x^2-4x-4\Delta x-5x^2+4x}{\Delta x}[/tex]
[tex]= \ \frac{10x\Delta x+5\Delta x^2-4\Delta x}{\Delta x}[/tex]
[tex]= \ 10x+5\Delta x-4[/tex]
I am not, therefore I do not think. Contrapositive of Descartes' quip.
Re: Delta-Y/Delta-X as a function of X and Delta-X (econ rel
Thanks for the steps!
Looks like I could've benefited from having worked it a third time. I now see what I'd been doing incorrectly.
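A quick numerical sanity check of the closed form 10x + 5*dx - 4 derived in this thread (added here for illustration; it is not part of the original posts):

```python
def f(x):
    return 5*x**2 - 4*x

def diff_quotient(x, dx):
    """(f(x + dx) - f(x)) / dx, the Delta-y over Delta-x of the thread."""
    return (f(x + dx) - f(x)) / dx

# Compare against the closed form 10x + 5*dx - 4 at a few points
for x, dx in [(1.0, 0.5), (2.0, 0.1), (-3.0, 0.25)]:
    closed_form = 10*x + 5*dx - 4
    assert abs(diff_quotient(x, dx) - closed_form) < 1e-9
print("matches")
```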
| {"url":"http://www.freemathhelp.com/forum/threads/67178-Delta-Y-Delta-X-as-a-function-of-X-and-Delta-X-(econ-related","timestamp":"2014-04-20T18:23:59Z","content_type":null,"content_length":"43220","record_id":"<urn:uuid:f9c8433b-4b26-4ca9-8be5-d8e612866b97>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenGL:Tutorials:Vertex operations with GLSL
From GPWiki
An OpenGL Shader Tutorial
This tutorial is intended to demonstrate the basic use of a vertex shader. Familiarity with OpenGL and its extension system is assumed. The code contained here is C++, OpenGL Version 1.4 with the use
of ARB Extensions. For OpenGL Version 2.0 and higher, the ARB suffix on identifiers may be removed.
Vertex Transformation
A rather interesting application of a shader is to flatten a model.
void main(void)
{
    //A new temporary position variable, since gl_Vertex is read only
    vec4 position = vec4(gl_Vertex);
    position.z = 0.0; //Set the z coordinate of the vertex to zero.
    //gl_ModelViewProjectionMatrix is a precalculated matrix. It is equal to gl_ProjectionMatrix * gl_ModelViewMatrix
    gl_Position = gl_ModelViewProjectionMatrix * position; //Transform it.
}
This code will result in a flat model, since the Z component of all the input vertices is set to zero before the world transformation.
Waving the model
Mathematical functions can also be used in the vertex shader. A simple application would be to modify the z coordinate of the model, based on its x position. This will create a simple waving effect.
void main(void)
{
    //Another new temporary position variable, since gl_Vertex is read only
    vec4 position = vec4(gl_Vertex);
    //Now, the z coordinate is the result of sin(position.x); this will create a wavy model
    position.z = sin(position.x);
    //Translate the vertex.
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
This works by setting the Z component of the input vertex to the sine of the x component. This makes the model wave up and down as it progresses along the x axis.
Fragment Shader
The fragment shader used for these demonstrations is trivial.
void main()
{
    //Set the result as a white model.
    gl_FragColor = vec4(1, 1, 1, 1);
} | {"url":"http://content.gpwiki.org/index.php/OpenGL:Tutorials:Vertex_operations_with_GLSL","timestamp":"2014-04-21T07:57:56Z","content_type":null,"content_length":"23624","record_id":"<urn:uuid:fd0ff0a6-a34a-437b-9ce0-27a7cc1e75ad>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
EE1291 ELECTRICAL ENGINEERING AND CONTROL SYSTEMS Syllabus - Anna university
EE1291 ELECTRICAL ENGINEERING AND CONTROL SYSTEMS Syllabus - Anna university
Posted Date: 13-Aug-2008 Last Updated: Category: Syllabus
Author: Ramkumar Member Level: Gold Points: 2
Find the detail on EE1291 ELECTRICAL ENGINEERING AND CONTROL SYSTEMS Syllabus - Anna university.
PART – A ELECTRICAL ENGINEERING 4 0 0 100
To expose the students to the basic concept of circuits and machines.
1. To study Kirchhoff's laws and be able to do simple problems using mesh and nodal analysis.
2. To study the phasor representation, complex power and three phase circuits and do simple problems.
3. To study qualitatively about the construction and principle of operation of D.C. machines and to do simple problems.
4. To study qualitatively the construction and principle of operation of transformers and three phase induction motors and to do simple problems.
5. To study qualitatively the construction details and principle of operation of single-phase induction motor and special machines.
UNIT I D.C. CIRCUITS 6
Kirchoff’s laws – simple resistance circuits – mesh and nodal analysis – simple problems.
UNIT II A.C. CIRCUITS 6
Sinusoidal voltage – RMS, average and peak values – phasor representation – power factor – single phase RC, RL and RLC circuits – simple series and parallel circuits – complex power – three phase circuits – line and phase values – power measurement – simple problems.
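A quick numerical illustration of the RMS/average/peak relationships for a sinusoid covered in this unit (a sketch added for illustration; the 325 V peak is just an example value):

```python
import numpy as np

# v(t) = Vp*sin(wt): RMS value is Vp/sqrt(2), rectified average is 2*Vp/pi
Vp = 325.0                               # e.g. peak of a ~230 V RMS mains sinusoid
wt = np.linspace(0, 2*np.pi, 200_001)    # one full cycle of the phase w*t
v = Vp * np.sin(wt)

rms = np.sqrt(np.mean(v**2))
avg = np.mean(np.abs(v))                 # rectified (half-cycle) average
print(rms, Vp/np.sqrt(2))
print(avg, 2*Vp/np.pi)
```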
UNIT III D.C. MACHINES (QUALITATIVE TREATMENT ONLY) 6
Constructional details and operating principle of D.C. generators – emf equation – characteristics – principle of operation of D.C. motors – characteristics – starting.
Constructional details and principle of operation of transformers – emf equation – parameters of transformers – regulation, losses and efficiency - introduction to three phase
transformers. constructional details and principle of operation of three phase induction motor – characteristics- starting – losses and efficiency.
Constructional details and principle of operation of single phase induction motors – starting – servomotor, stepper motor, variable reluctance motors – applications.
L = 30
1. D.P.Kothari and I.J. Nagrath “Basic Electrical Engineering”, Tata McGraw Hill Ltd, second edition, 2002.
1. Stephen J.Chapman “Electrical Machinery Fundamentals”, McGraw Hill Publishing Company Ltd, third edition, 1999.
2. K.Murugesh Kumar, “Electric Machines”, Vikas Publishing House (P) Ltd, 2002.
PART – B CONTROL SYSTEMS
1. To expose the students to the basic concepts of control systems.
1. To study control problem, control system dynamics and feedback principles.
2. To study time response of first and second order systems and basic state variable analysis and to do simple problems.
3. To study the concept of stability and criteria for stability and to do simple problems.
4. To study the frequency response through polar plots and Bode plots and Nyquist stability criteria and to do simple problems.
5. To study the different type of control system components.
The control problem – differential equation of physical systems – control over system dynamics by feedback – regenerative feedback – transfer function – block diagram - algebra –
signal flow graphs.
Time response of first and second order system – steady state errors – error constants – design
specification of second order systems – state variable analysis – simple problems.
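A brief numerical illustration of the second-order step response covered in this unit (a sketch added for illustration; the natural frequency and damping ratio are arbitrary example values):

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Standard second-order system H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
wn, zeta = 2.0, 0.5
sys = TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])

t, y = step(sys, N=2000)

peak = y.max()                 # overshoot ~ 1 + exp(-pi*zeta/sqrt(1-zeta^2))
final = y[-1]                  # settles to the DC gain of 1
print(peak, final)
```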
Concept of stability – stability conditions and criteria – Hurwitz and Routh criteria – relative stability analysis.
Correlation between time and frequency response – polar plots , Bode plots – stability in frequency domain using Nyquist stability criterion – simple problems.
UNIT V CONTROL SYSTEM COMPONENTS 6
Control components – servomotors, stepper motor – hydraulic and pneumatic systems.
L = 30 Total = 60
1. I.J.Nagrath and M.Gopal “Control system Engineering” New age International Publishing Company Ltd, third edition 2003.
1. M.Gopal “Control Systems – Principle and Design”, McGraw Hill Publishing Company Ltd, second edition, 2003.
2. Joseph J. DiStefano et al., "Schaum's Outline Series – Theory and Problems of Feedback Control Systems", Tata McGraw Hill Publishing Company Ltd, 2003.
In part A there shall be five questions from Electrical Engineering and five questions from control systems (one from each unit). In Part B the compulsory question shall have one
part from Electrical Engineering and another from Control Systems. Each of the ‘either or’ form question shall have an Electrical Engineering part as well as Control Systems
part. For example,
Q 12 (a)(i) pertains to Electrical Engineering
12(a)(ii) pertains to Control Systems
Q 12(b)(i) pertains to Electrical Engineering
Q 12(b)(ii) pertains to Control Systems
The other questions shall be set similarly.
Reference: www.annauniv.edu/academics/index.html/
| {"url":"http://www.indiastudychannel.com/resources/33344-EE-ELECTRICAL-ENGINEERING-AND-CONTROL-SYSTEMS.aspx","timestamp":"2014-04-19T19:34:50Z","content_type":null,"content_length":"30893","record_id":"<urn:uuid:a61be859-2195-4754-b0d1-a54b121de813>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Website Detail Page
edited by Judy Spicer
supported by the National Science Foundation
This instructional module offers a wide variety of exemplary resources to support a unit on data analysis. Standards-based lessons on graph interpretation are presented in the context of real-world
applications, such as population growth, junk mail, and global temperatures. Don't miss the "applet" collection, offering fun and interactive virtual activities on graphing and statistics for grades
This module meets several standards within Benchmarks for Science Literacy (see Standards link), but is also aligned with data analysis standards found in the National Council of Teachers of Mathematics (NCTM) standards.
Please note that this resource requires Flash, or Java Applet Plug-in.
Subjects:
Education Practices
- Active Learning
- Technology
Other Sciences
- Mathematics

Levels:
- Middle School
- High School

Resource Types:
- Collection
- Instructional Material
= Activity
= Best practice
= Curriculum support
= Game
= Instructor Guide/Manual
= Interactive Simulation
= Lesson/Lesson Plan
= Model
= Modeling
= Multimedia
= Student Guide
= Unit of Instruction
- Audio/Visual
= Movie/Animation

Appropriate Courses:
- Physical Science
- Physics First
- Conceptual Physics
- Algebra-based Physics

Categories:
- Lesson Plan
- Activity
- New teachers
Intended Users:
Access Rights:
Free access
This material is released under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 license.
Rights Holder:
The Ohio State University
bar graphs, coordinate graphs, data interpretation, graph interpretation, graph reading, graph skills, histograms, linear regression, scatterplot, statistics
Record Creator:
Metadata instance created October 25, 2010 by Caroline Hall
Record Updated:
January 19, 2011 by Lyle Barbato
Last Update
when Cataloged:
February 25, 2007
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2A. Patterns and Relationships
• 3-5: 2A/E2. Mathematical ideas can be represented concretely, graphically, or symbolically.
9. The Mathematical World
9B. Symbolic Relationships
• 3-5: 9B/E2. Tables and graphs can show how values of one quantity are related to values of another.
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily,
increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in
steps, or do something different from any of these.
9E. Reasoning
• 6-8: 9E/M5. In formal logic, a single example can never prove that a generalization is always true, but sometimes a single example can prove that a generalization is not always true. Proving a
generalization to be false is easier than proving it to be true.
12. Habits of Mind
12D. Communication Skills
• 6-8: 12D/M1. Organize information in simple tables and graphs and identify relationships they reveal.
• 6-8: 12D/M2. Read simple tables and graphs produced by others and describe in words what they show.
AAAS Benchmark Alignments (1993 Version)
E. Reasoning
• 9E (6-8) #4. People are using incorrect logic when they make a statement such as "If A is true, then B is true; but A isn't true, therefore B isn't true either."
11. COMMON THEMES
B. Models
• 11B (3-5) #2. Geometric figures, number sequences, graphs, diagrams, sketches, number lines, maps, and stories can be used to represent objects, events, and processes in the real world, although
such representations can never be exact in every detail.
12. HABITS OF MIND
C. Manipulation and Observation
• 12C (9-12) #2. Use computers for producing tables and graphs and for making spreadsheet calculations.
D. Communication Skills
• 12D (6-8) #4. Understand writing that incorporates circle charts, bar and line graphs, two-way data tables, diagrams, and symbols.
E. Critical-Response Skills
• 12E (6-8) #4. Be aware that there may be more than one good way to interpret a given set of findings.
ComPADRE is beta testing Citation Styles!
<a href="http://www.thephysicsfront.org/items/detail.cfm?ID=10438">Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. February 25, 2007.</a>
, edited by J. Spicer (2005), WWW Document, (http://msteacher.org/epubs/math/math3/math.aspx).
Middle School Portal: Data Analysis: As Real World As It Gets, edited by J. Spicer (2005), <http://msteacher.org/epubs/math/math3/math.aspx>.
Spicer, J. (Ed.). (2007, February 25). Middle School Portal: Data Analysis: As Real World As It Gets. Retrieved April 18, 2014, from http://msteacher.org/epubs/math/math3/math.aspx
Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. February 25, 2007. http://msteacher.org/epubs/math/math3/math.aspx (accessed 18 April 2014).
Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. 2005. 25 Feb. 2007. National Science Foundation. 18 Apr. 2014 <http://msteacher.org/epubs/math/math3/math.aspx>.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ. | {"url":"http://www.thephysicsfront.org/items/detail.cfm?ID=10438","timestamp":"2014-04-18T18:45:19Z","content_type":null,"content_length":"46386","record_id":"<urn:uuid:52e6b1a7-8074-4cd8-b0b0-02ffb102fe82>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
classifying topos of a localic groupoid
under construction – am being interrupted
The classifying topos of a localic groupoid $\mathcal{G}$ is an incarnation of a localic groupoid in the world of toposes. At least in good cases, geometric morphisms into it classify $\mathcal{G}$-principal bundles.
Recall that a localic groupoid is a groupoid $\mathcal{G} = (\mathcal{G}_1 \stackrel{\to}{\to} \mathcal{G}_0)$ internal to locales/Grothendieck-(0,1)-toposes.
Let $N_\bullet \mathcal{G} : \Delta^{op} \to Locales$ be the simplicial object in locales given by the nerve of $\mathcal{G}$. By applying the sheaf topos functor $Sh : Locale \to Topos$ to this, we
obtain a simplicial topos $Sh(N \mathcal{G}) : [n] \mapsto Sh(N_n \mathcal{G})$. Let $tr_2 Sh(N \mathcal{G})$ be its 2-truncation, then the 2-colimit
$Sh(\mathcal{G}) := \lim_{\to_{[n]}} tr_2 Sh(N_\bullet \mathcal{G})$
in the 2-category of toposes is called the classifying topos of $\mathcal{G}$.
This has an explicit description along the lines discussed at sheaves on a simplicial topological space.
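Concretely (a sketch of the standard descent description; details are in the references cited below): an object of $Sh(\mathcal{G})$ may be given by a sheaf $E \in Sh(\mathcal{G}_0)$ together with an isomorphism $\theta : d_1^* E \to d_0^* E$ in $Sh(\mathcal{G}_1)$, subject to the unit and cocycle conditions

```latex
s_0^* \theta = \mathrm{id}_E ,
\qquad
d_1^* \theta = d_0^* \theta \circ d_2^* \theta
\quad \text{in } Sh(\mathcal{G}_2) .
```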
Proposition (Joyal-Tierney)
For every Grothendieck topos $E$ there is a localic groupoid $\mathcal{G}$ such that $E \simeq Sh(\mathcal{G})$.
The original result appears in
• Andre Joyal, M. Tierney, An extension of the Galois theory of Grothendieck Mem. Amer. Math. Soc. no 309 (1984)
An extension of the equivalence to morphisms is discussed in
• Ieke Moerdijk, The classifying topos of a continuous groupoid I , Trans. Amer. Math. Soc. Volume 310, Number 2, (1988)
The above equivalence of categories can in fact be lifted to an equivalence between the bicategory of localic groupoids, complete flat bispaces, and their morphisms and the bicategory of Grothendieck
toposes, geometric morphisms, and natural transformations. The equivalence is implemented by the classifying topos functor, as explained in
• Ieke Moerdijk, The classifying topos of a continuous groupoid II, Cahiers de topologie et géométrie différentielle catégoriques 31, no. 2 (1990), 137–168. | {"url":"http://ncatlab.org/nlab/show/classifying+topos+of+a+localic+groupoid","timestamp":"2014-04-18T11:04:40Z","content_type":null,"content_length":"19221","record_id":"<urn:uuid:98c89ef3-ddff-45d2-a18d-12e28bb5e957>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Braintree, MA Math Tutor
Find an East Braintree, MA Math Tutor
...Try me. History is one of my passions. I studied history in college and graduated from Wash.
21 Subjects: including algebra 1, algebra 2, SAT math, prealgebra
...I was an SAT instructor for Princeton Review and Kaplan. I was also a Summit private tutor for SAT, both Math and English. All of my students reach their full potential, and some of my
students got a perfect score on the SAT Math section: 800/800.
67 Subjects: including precalculus, marketing, logic, geography
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have
tutored a wide range of students - from middle school to college level.
14 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your
location unless driving is more than 30 minutes.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
Hi! I really enjoy tutoring students. In fact, I love tutoring math so much that I recently completed the MTEL requirements to become a Math Teacher.
31 Subjects: including geometry, study skills, trigonometry, statistics
Wollaston, MA Math Tutors | {"url":"http://www.purplemath.com/East_Braintree_MA_Math_tutors.php","timestamp":"2014-04-16T16:29:23Z","content_type":null,"content_length":"23828","record_id":"<urn:uuid:5d2828f6-e0a2-42b3-aa95-1e2233e76993>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maximum Likehood Estimator
February 6th 2009, 06:12 PM #1
Junior Member
Nov 2007
Maximum Likehood Estimator
If $f_Y(y;\theta) = \theta k^{\theta}\,(1/y)^{\theta+1}$, $y \ge k$, $\theta \ge 1$,
a) Find the maximum likelihood estimator for theta if information has been collected on a random sample of 25 people.
b) Find the method of moments estimator for theta if information has been collected on a random sample of 25 people.
Assume k is unknown.
Any help would be appreciated.
For an example of what to do, read this: http://www.mathhelpforum.com/math-he...estimator.html
The only thought I have at the moment is that $E(Y) = \frac{k \theta}{\theta - 1}$ and so let $\frac{k \theta}{\theta - 1} = \overline{y}$ and solve for $\theta$.
For question a), I attached what I am getting so far. Am I on the right track here? I'm not sure where to go now.
Edit merge: Actually I already spotted an error, it should be 25lnk in the derivative..oops.
Last edited by mr fantastic; February 7th 2009 at 11:57 AM. Reason: Merged posts
How did you come up with that expected value in question b)? And if you solve for theta in that equation, won't you get theta=(ybar-ybar)/k, which means that theta equals zero?
$\overline{y}$ is the sample mean. $E(Y)$ is NOT the same as $\overline{y}$. I had thought this would be clear. So you get $\theta$ in terms of the sample mean.
I found E(Y) using the usual formula: $E(Y) = \int_k^{+\infty} y \, f_Y (y, \theta) \, dy$. Again, this is something that I thought would be clear.
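For the record, here is a sketch of the standard answer for this Pareto-type density (assuming an i.i.d. sample $y_1,\dots,y_n$ with $n=25$). The log-likelihood is increasing in $k$, so $\hat k = \min_i y_i$; setting the $\theta$-derivative to zero gives the MLE, and equating $E(Y)=\frac{k\theta}{\theta-1}$ with $\overline{y}$ (with $k$ replaced by $\hat k$) gives the method-of-moments estimator:

```latex
\ell(\theta,k) = n\ln\theta + n\theta\ln k - (\theta+1)\sum_{i=1}^n \ln y_i,
\qquad
\hat\theta_{\mathrm{MLE}} = \frac{n}{\sum_{i=1}^n \ln(y_i/\hat k)},
\qquad
\hat\theta_{\mathrm{MM}} = \frac{\overline{y}}{\overline{y}-\hat k}.
```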
February 7th 2009, 02:24 AM #2
February 7th 2009, 02:27 AM #3
February 7th 2009, 11:53 AM #4
Junior Member
Nov 2007
February 7th 2009, 12:40 PM #5
Junior Member
Nov 2007
February 7th 2009, 12:57 PM #6 | {"url":"http://mathhelpforum.com/advanced-statistics/72229-maximum-likehood-estimator.html","timestamp":"2014-04-16T10:50:39Z","content_type":null,"content_length":"50170","record_id":"<urn:uuid:60d76f79-306b-40f2-8227-72032d1395e8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
The data upon which this paper is based was taken from seven years of a grazing study at the Texas Ranch Experiment Station, Sonora, Texas. The grazing studies included grazing the single classes of livestock alone and grazing cattle, sheep and goats in various combinations. Also, the grazing rate was varied from 16 to 32 to 48 animal units per section. The livestock was weighed every four months, and the gain or loss in weight recorded.

In order to make a first estimate of the production functions involved, the data for cattle grazing alone was fitted to a quadratic function of the form:

(1) Y = a + b1 X1 + b2 X2 + b3 X1X2 + b4 X1^2 + b5 X2^2 + b6 X1^(1/2) + b7 X2^(1/2)

where Y = pounds of beef produced per section per four months,
X1 = the number of animal units of cattle per section year-long, and
X2 = the precipitation in inches for the four month period.

A routine for the IBM 650 computer was used which selects the significant variables on the basis of the significance of the b value subjected to a t test. The equation with the highest R2 value was:

(2) Y = -449.7 + 430.6 X2 + 14.9 X1X2 - 0.46 X1^2 - 36.9 X2^2

The R2 is .423. For this equation, when X2, precipitation, equals zero, Y is negative for all values of X1. With one inch of precipitation Y is positive but small from 0 to about 27 animal units. With 2 inches of precipitation Y reaches a maximum at 32 animal units, and with three inches reaches a maximum at 48 animal units per section. For X2 = 4.7 inches, Y reaches a maximum at 76 animal units per section.

The first partial derivative of Y with respect to X1, that is, the marginal physical product (MPP) of X1, is a linear function in X1 and X2:

(3) dY/dX1 = 14.9 X2 - 0.92 X1
Multiplying by the price of cattle yields the marginal value product (MVP)
of X1. Figure I shows the MVP curve of X1 when X2 is valued at the mean or
4.7 inches.
FIGURE I (cattle alone)
The maximum profit rate of grazing for cattle alone can be determined
by equating the MVP of X1 with the marginal factor cost (MFC) of an additional
animal unit of cattle. An estimate of the MFC is $8.05, which is shown also
in Figure I. The estimated MFC is based on a charge of 75¢ tax per head,
$1.00 for veterinary services and supplies, a death loss of $3.50 based on 2%
death loss, and an interest charge of $2.80 based on 6% of a $140 investment
for a four month period. Notice that the MVP is equated with the MFC at
32 animal units per section.
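The stated optimum can be checked numerically. The sketch below takes the MFC components and the $20.00-per-hundred beef price from the text (so $0.20 per pound) and sets the marginal value product, price × MPP, equal to the MFC; the MPP slope 0.92 used here is the derivative implied by the 0.46 coefficient on the squared stocking-rate term of the fitted equation:

```python
# Optimum stocking rate for cattle alone: set MVP = MFC and solve for X1.
# MPP (lb of beef per added animal unit, per four months):
#   dY/dX1 = 14.9*X2 - 0.92*X1
price = 0.20                     # $/lb of beef ($20.00 per hundred)
mfc = 0.75 + 1.00 + 3.50 + 2.80  # tax + vet + death loss + interest = $8.05
x2 = 4.7                         # arithmetic-mean precipitation, inches

# MVP = price * (14.9*x2 - 0.92*x1);  MVP = MFC  =>  solve for x1
x1_opt = (14.9 * x2 - mfc / price) / 0.92
print(round(x1_opt, 1))          # about 32 animal units per section
```

The result, about 32 animal units, matches the stocking rate at which the MVP and MFC curves cross in Figure I.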
Because the original production function is based on data for a seven
year period, the physical production functions should reflect changes in
the pasture over this seven year period. That is, any effect of the heavy
rate of grazing on the pasture and on the production from the pasture will
be reflected in the physical data.
Miller and Merrill, however, have determined that the change in value
of the pasture based on the capitalized change in carrying capacity amounts
to a gain of 20¢ per acre for the stocking rate of 16 animal units, no change
for 32 animal units, and a loss of 54¢ per acre per year under the heavy
rate of grazing with cattle alone. Beginning at 32 animal units, this
amounts to a gain of 50¢ per animal unit as the grazing is decreased, and
a loss of 50¢ per animal unit when increasing from 32 animal units up. The
adjusted MFC curve in Figure I takes into account this additional cost. Based
on this Figure, the optimum grazing rate for cattle alone remains at 32
animal units for the average rainfall.
The optimum grazing rate figured in Figure I is based upon a buying
and selling price of $20.00 per hundred. Figure II shows the effect on op-
timum grazing rate of 4 levels of purchasing price and 4 levels of selling
price. The heaviest optimum rate of stocking would be when the rancher
bought at a price of 15 dollars a hundred, and sold at a price of $30.00
per hundred. The lowest rate of grazing is when the rancher purchases
his cattle at $30.00 and sells at only $15.00. In fact, it does not pay
to graze at all under these conditions, as shown in Figure II.
Because MVP is sensitive to precipitation as well as price it would
be well to consider the effect of basing decisions upon different expected
levels of rainfall. Figures I and II are based upon the arithmetic mean or
4.7 inches. In many situations the geometric mean is a more descriptive
measure of central tendency than is the arithmetic mean. The sample geometric
mean of rainfall for a four month period is 2.93 or about 3 inches. Considering
the unadjusted MFC, the optimum grazing rate using the geometric mean
is only about 5 animal units per section. When considering the adjusted
MFC, optimum grazing rate increases, because the grazing is at less than 32
animal units. Thus, the optimum rate of grazing, considering the adjusted
MFC, is about 25 animal units per section. This is shown in Figure III.
FIGURE III

It can be seen that in this situation, basing the decision on the geometric
mean, rather than the arithmetic mean, is more conservative. Game
theory could be used as an approach to determine grazing strategies with at
least two more different levels of rainfall. In this situation, stocking
rates would be the rancher's strategy and amount of precipitation would be
nature's strategy. The payoff would be based on the net income which would
be obtained for each combination of stocking rate and precipitation. Another
game matrix might take into consideration the relationships in Figure II.
The rancher would know at time of purchase what the price would be. Nature's
strategy would be the selling price.
It is a fairly simple procedure to determine optimum stocking rate for
a single class of livestock such as for cattle alone, as has been done. To
determine the optimum stocking rate for combinations of livestock is
more difficult, and to determine the optimum combinations yet more difficult.
In considering the problem of optimum stocking rate for a combination, say
cattle, sheep and goats, it is necessary to convert the products into some
common denominator. For the purpose of illustration, the products for cattle,
sheep and goats were valued as follows in dollar terms: beef at 20¢ a pound,
lamb at 18¢, wool at 50¢, and mohair at 80¢. The value from each of the
livestock products was then added together to form the total value product
(TVP) from the complete combination.
The function fitted using the same routine as previously was of the form,
(4) Y = a + b1 X1 + b2 X2 + b3 X1^2 + b4 X2^2 + b5 X1X2
where Y = the TVP from all three classes of livestock together for the
period from July to June of each year,
X1 = the stocking rate in animal units per section, and
X2 = the annual precipitation from July to June.
Notice that the square root of the independent variables was omitted from
the equation. These were not significant in the original equation and
also their omission facilitates the taking of derivatives and working with
the functions. Thus, it was beneficial to omit them from this equation.
The function selected had an R2 value of .90. The equation was,
(5) Y = 5.03 + 57.37 X1 - 0.52 X1^2 + 0.78 X1X2.
Notice here that the TVP is primarily a function of the stocking rate.
Precipitation enters only as a positive interaction term. This is desirable
over the range of the data, because one would expect no decrease in product
as precipitation increased within the range of the data. In fact, one would
not expect a decrease in product as precipitation increased until precipitation
was at a very high level. For this reason, it is desirable to have no X2
term in the equation, unless it is negative and of a very small absolute
value. It is doubtful that the b value for X2 would be significantly
different from zero at any rate.
Once again, the partial derivative of Y with respect to X1 is a linear
function in X1 and X2. It is,
(6) dY/dX1 = 57.37 - 1.04 X1 + 0.78 X2
Because (6) is already in total value terms it need not be multiplied by
the price of the product. Thus, this equation itself represents the marginal
value product of X1.
It is interesting to note that with the combination of livestock,
(including sheep and goats), Y is positive even though X2 is zero. In fact,
Y increases to 55 animal units before it begins to decrease. In placing X2
at its arithmetic mean 14.13 inches, Y increases to 65 animal units before
it starts to diminish. With X2 at 25, the maximum Y is reached at about 74
animal units per section.
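The three quoted maxima follow directly from the fitted relation, reading equation (5) as Y = 5.03 + 57.37 X1 - 0.52 X1^2 + 0.78 X1X2 (the signs follow from the stated maxima). A quadratic of this form peaks in X1 where 57.37 - 1.04 X1 + 0.78 X2 = 0; a quick check:

```python
# Stocking rate X1 that maximizes TVP for a given precipitation X2,
# from Y = 5.03 + 57.37*X1 - 0.52*X1**2 + 0.78*X1*X2.
def x1_max(x2):
    # dY/dX1 = 57.37 - 1.04*X1 + 0.78*X2 = 0
    return (57.37 + 0.78 * x2) / 1.04

for x2 in (0.0, 14.13, 25.0):
    print(x2, round(x1_max(x2), 1))  # close to the 55, 65 and 74 quoted above
```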
FIGURE IV
Figure IV shows the MVP of X1 for the arithmetic mean value of X2. The sample
geometric mean is 12.08 and does not vary significantly from the arithmetic
mean in this case. Also in Figure IV are the unadjusted and adjusted MFC
curves. Equating adjusted MFC curve with the MVP shows that optimum stocking
rate for the combination of cattle, sheep and goats is at 47 animal units
per section. It should be noted that due to combining the value of the
livestock products, the MVP will be sensitive to relative changes in the
prices for these products. The MVP for a slightly different estimate of the
relative prices may well be considerably lower, or at least different, than
the MVP shown.
Figure V shows the projection of the production surface in the factor-factor
dimension. If X2 is fixed at, say, 2 inches one can see that production
reaches a maximum and then declines, which is a desirable characteristic of
the function. This characteristic would hold for all expected levels of X2.
While letting X2 vary while X1 is held constant, at say 30, it can be seen
that production never reaches a maximum. This situation would be expected
for all expected levels of X1. The curves eventually become asymptotic to the
X axis. | {"url":"http://ufdc.ufl.edu/UF00094300/00001","timestamp":"2014-04-16T19:41:24Z","content_type":null,"content_length":"24548","record_id":"<urn:uuid:b9fc1f0e-83b0-42fa-b456-f2361f49dc23>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precision Triangular-Wave Generator Uses a Single IC
By: Akshay Bhat, Senior Strategic Applications Engineer
Abstract: This application note outlines the implementation of a single-supply, triangular wave oscillator using the MAX9000 and some passive components. The application circuit uses an op amp,
comparator, and a voltage reference as active building blocks. The MAX9000 is chosen because it integrates these three components.
The linearity of triangular waveforms makes triangular wave generators useful in many "sweep" circuits and test equipment. For example, switched-mode power supplies and induction motor-control
circuits often use a triangular wave oscillator as part of the pulse width modulator (PWM) circuit. This article presents a compact, triangular wave oscillator using a single MAX9000 IC and some
passive components. The MAX9000 device family integrates a high-speed operational amplifier, a high-speed comparator, and a precision bandgap reference.
Circuit Description
Figure 1. Design concept for a basic triangular-wave generator.
Figure 2. A triangular wave generator using the MAX9000.
Figure 1 shows a basic triangular wave generator circuit.¹ It is comprised of two basic building blocks: an integrator to generate the triangular wave output, and a comparator with external
hysteresis (Schmitt trigger) to set the amplitude of the triangular wave as desired.
The op amp is configured as an integrator to provide the triangular output. This approach is based on the simple fact that integration of a constant voltage results in a linear ramp. The integrator's
output is fed back to its inverting input by a Schmitt trigger. The input threshold voltages of the Schmitt trigger are designed to change state corresponding to the desired peak voltages of the
triangular wave output.
There is a drawback to the circuit in Figure 1: the peaks of the triangular wave can only be symmetrical about the reference voltage applied to the comparator's inverting input. To generate a
triangular wave from 0.5V to 4.5V, for example, a reference voltage of (0.5V + 4.5V)/2 = 2.5V is needed. Since a standard bandgap reference has an output voltage of 1.23V, it would be preferable if
the voltage range of the triangular wave could be set independent of the bandgap reference. This flexibility is achieved by adding resistor R3 to the hysteresis network, as illustrated in Figure 2
where the circuit uses the MAX9000. Resistor R3 enables the peaks of the triangular wave to be set independent of the reference voltage.
Design Considerations
Step 1. Build the "Trigger-Happy" Comparator (Schmitt Trigger Design)
a) Select R2
The input bias current at CIN+ of the comparator is less than 80nA. To minimize errors caused by the input bias current, the current through R2 should be at least 8µA. Current through R2 is (V[REF] -
V[OUT])/R2. Considering the two possible output states to solve for R2, two formulas result:
R2 = V[REF]/I[R2]
R2 = [(V[DD] - V[REF])/I[R2]]
Use the smaller of the two resulting resistor values. For example, with V[DD] = 5V, V[REF] = 1.23V, and I[R2] = 8µA, the two values of R2 are 471.25kΩ and 153.75kΩ. Therefore, for R2 choose the
standard value of 154kΩ.
b) Select R1 and R3
During the rising ramp of the triangular wave, the comparator's output is tripped low (V[SS]). Similarly, the falling ramp requires the comparator's output to be at logic high (V[DD]). That is, the comparator must change state corresponding to the required peak and valley points of the triangular wave. Applying nodal analysis at the noninverting input of the comparator and solving for these two thresholds (with V[SS] = 0V) gives the following simultaneous equations:

(V[IH] - V[REF])/R1 - V[REF]/R2 = V[REF]/R3
(V[IL] - V[REF])/R1 + (V[DD] - V[REF])/R2 = V[REF]/R3

In this example, the voltage range of the triangular wave is from 0.5V to 4.5V. Hence substituting V[IH] = 4.5V, V[IL] = 0.5V, V[DD] = 5V, and V[REF] = 1.23V yields R1 = 124kΩ and R3 = 66.5kΩ.
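For reference, the R1 and R3 values can be reproduced in closed form. The sketch below assumes the hysteresis network behaves as described in this step (R1 from the triangle output to CIN+, R2 from the comparator output to CIN+, and R3 from CIN+ to ground, with V[SS] = 0V); subtracting the two trip-point conditions eliminates R3:

```python
# Schmitt-trigger resistor selection for the Figure 2 hysteresis network.
vdd, vref = 5.0, 1.23
vih, vil = 4.5, 0.5          # desired triangle peak / valley
r2 = 154e3                   # chosen in step (a)

# Trip conditions (comparator output at 0V or VDD), nodal analysis at CIN+:
#   (VIH - VREF)/R1 -        VREF/R2 = VREF/R3
#   (VIL - VREF)/R1 + (VDD - VREF)/R2 = VREF/R3
# Subtracting:  (VIH - VIL)/R1 = VDD/R2
r1 = r2 * (vih - vil) / vdd
r3 = vref / ((vih - vref) / r1 - vref / r2)

print(round(r1 / 1e3, 1), round(r3 / 1e3, 1))
```

This gives R1 ≈ 123.2kΩ and R3 ≈ 66.3kΩ, which round to the 124kΩ and 66.5kΩ standard values quoted above.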
Step 2. Make a Clean Sweep (Integrator Design)
Considering the two possible output states of the comparator, the magnitude of the current flowing through the resistor R4 is given as:
I[R4] = (V[DD] - V[REF])/R4
I[R4] = V[REF]/R4
The maximum input bias current of the op amp is 2nA. Therefore, to minimize errors the current through R4 must always be greater than 0.2µA. This constraint implies that:
R4 < 6.12MΩ
The frequency of the triangular waveform is given as:

f = V[REF] × (V[DD] - V[REF]) / (R4 × C × V[OUT, P-P] × V[DD])
For this example, choose f = 25kHz, V[OUT], [P-P] = 4V (for a 0.5V to 4.5V triangular wave), and V[REF] = 1.23V. This gives the time constant as R4 × C = 9.27µs. Select C = 220pF and R4 = 42.2kΩ.
Step 3. Look Before You Leap
The resulting output will match the designed frequency, if the op amp is not slew limited. Since the feedback capacitor charges (or discharges) at a constant current, the maximum rate of change of the output signal is:

(dV/dt)max = (V[DD] - V[REF]) / (R4 × C)
Recognizing process variations, the op amp must have a typical slew rate 40% higher than the output signal's maximum rate of change, at least 0.56V/µs in this case. Referring to the MAX9000's data
sheet, the slew rate of the op amp is 0.85V/µs which is adequate for the 25kHz waveform.
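These design targets can be re-checked with the chosen components. The sketch below assumes the period is set by the two constant ramp currents, (V[DD] - V[REF])/R4 and V[REF]/R4, charging C through the full swing V[OUT, P-P]:

```python
# Verify oscillation frequency and slew-rate requirement for Figure 2.
vdd, vref, vpp = 5.0, 1.23, 4.0        # supply, reference, triangle swing
r4, c = 42.2e3, 220e-12                # chosen integrator components
tau = r4 * c                           # ~9.28us, vs. the 9.27us target

# One period = rise + fall: t = C*Vpp*R4/(VDD-VREF) + C*Vpp*R4/VREF
f = vref * (vdd - vref) / (tau * vpp * vdd)
slew_needed = 1.4 * (vdd - vref) / tau  # 40% margin on the fastest ramp, V/s

print(round(f))                        # close to the 25kHz design target
print(slew_needed / 1e6, "V/us needed, vs. 0.85V/us for the MAX9000 op amp")
```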
Figure 3 plots the output waveform of the circuit in Figure 2.
Figure 3. Output waveform for the triangular wave circuit in Figure 2.
¹ Terrell, David L., "Opamps: Design, Application and Troubleshooting," ISBN 0-7506-9702-4.
Related Parts
MAX9000: Low-Power, High-Speed, Single-Supply Op Amp + Comparator + Reference ICs
APP 4362: Jun 28, 2010
APPLICATION NOTE 4362, AN4362, AN 4362, APP4362, Appnote4362, Appnote 4362 | {"url":"http://www.maximintegrated.com/app-notes/index.mvp/id/4362","timestamp":"2014-04-18T20:42:19Z","content_type":null,"content_length":"67826","record_id":"<urn:uuid:6e807e4c-c275-4491-96e5-1ed007ff0a3f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question on a trace inequality
Let $A, B\in M_{n}(\mathbb{R})$ be symmetric positive definite matrices. It is easy to see $Tr(A^2+AB^2A)=Tr(A^2+BA^2B)$. Numerical experiments indicate $$Tr[(A^2+AB^2A)^{-1}]\ge Tr[(A^2+BA^2B)^
{-1}],~~(1)$$ but it seems difficult to show it.
Remark. When $n=2,3$, by direct computation, (1) is true. Here is an experiment done by Matlab:
for s=1:1000
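The Matlab experiment referred to above can be reproduced; here is an equivalent sketch in Python/NumPy (random symmetric positive definite test matrices generated as $XX^T + I$; the function name is mine):

```python
import numpy as np

def min_gap(n=4, trials=200, seed=0):
    """Smallest observed Tr[(A^2+AB^2A)^-1] - Tr[(A^2+BA^2B)^-1]."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(trials):
        x = rng.standard_normal((n, n))
        y = rng.standard_normal((n, n))
        a = x @ x.T + np.eye(n)   # symmetric positive definite
        b = y @ y.T + np.eye(n)
        lhs = np.trace(np.linalg.inv(a @ a + a @ b @ b @ a))
        rhs = np.trace(np.linalg.inv(a @ a + b @ a @ a @ b))
        gaps.append(lhs - rhs)
    return min(gaps)

print(min_gap() >= -1e-10)   # the gap is nonnegative in every trial
```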
{\bf Updated.} What if $A, B\in M_{n}(\mathbb{C})$ are positive definite Hermitian matrices?
2 Is there any motivation you hope that it is true? – Tran Chieu Minh Apr 10 '10 at 18:19
@Minh: This is a very special case for a result (I wish to establish) in quantum information, but it is still difficult to prove this special case. – Sunni Apr 10 '10 at 19:53
Nice inequality, nice proof! – Tran Chieu Minh Apr 11 '10 at 0:53
add comment
2 Answers
Note first that $A^2+AB^2A=(A+iAB)(A-iBA)$. The reverse product is $(A-iBA)(A+iAB)=A^2+BA^2B-i(BA^2-A^2B)=X-iC$. Thus, the quantity on the left is $\operatorname{Tr} (X-iC)^{-1}$ and that on the right is $\operatorname{Tr} X^{-1}$. Moreover, the self-adjoint complex matrix $X-iC$ is positive definite (as the product of an invertible operator and its adjoint). Similarly, considering the factorization $A^2+AB^2A=(A-iAB)(A+iBA)$, we can write the quantity on the left as $\operatorname{Tr} (X+iC)^{-1}$. Symmetrizing, we see that it will suffice to show that $(X-iC)^{-1}+(X+iC)^{-1}\ge 2X^{-1}$ in the sense of quadratic forms (then the inequality for traces will hold too). We can multiply by $X^{1/2}$ from both sides to reduce it to $(I-iD)^{-1}+(I+iD)^{-1}\ge 2I$ where $D=X^{-1/2}CX^{-1/2}$ and both operators on the left are positive definite. Diagonalizing the self-adjoint operator $iD$, we see that the inequality reduces to $(1+p)^{-1}+(1-p)^{-1}\ge 2$ for $p\in(-1,1)$.
4 Le plus court chemin entre deux vérités dans le domaine réel passe par le domaine complexe. ---Jacques Hadamard – Sunni Apr 11 '10 at 3:43
Sometimes I can be a complete idiot. See the revised version for the general case. – fedja Apr 11 '10 at 4:22
add comment
For what it is worth, a weaker conjecture is proved below.

Applying the formula for the derivative of the inverse $$d(M^{-1}) = -M^{-1}\, dM\, M^{-1},$$ to compute the $t=0$ derivative of the LHS of $$Tr(A^2+A(t^{1/2}B)^2A)^{-1}-Tr(A^2+(t^{1/2}B)A^2(t^{1/2}B))^{-1} \ge 0$$ gives $$Tr(A^{-2}BA^2BA^{-2})\ge Tr(A^{-1}B^2A^{-1})=Tr(BA^{-2}B).$$ Replacing $A^{-2}$ by $P$ gives the weaker conjecture that $$Tr(PBP^{-1}BP)\ge Tr(BPB)$$ for positive $B$ and $P$.

PROOF OF WEAKER CONJECTURE: By the spectral theorem, we may take P=Diag($p_1,p_2,...$). Then $$Tr(BPB)=\Sigma p_j |B_{ij}|^2=\Sigma |B_{ij}|^2 (p_i+p_j)/2 $$ and $$Tr(PBP^{-1}BP)=\Sigma |B_{ij}|^2 p_i^2 p_j^{-1}=\Sigma |B_{ij}|^2 (p_i^2 p_j^{-1}+p_i^{-1}p_j^2)/2.$$ It remains to show that $$p_i^2 p_j^{-1}+p_i^{-1}p_j^2\ge p_i+p_j$$ for positive $p_{i,j}$. By homogeneity we may take $p_i=1$. Multiplying through by $p_j$, the inequality now follows from the identity $$1+p^3-p-p^2=(p-1)^2(1+p)\ge 0.$$ $\square$
Can't understand anything. If the inequality is scale-invariant, how on Earth can you make B small? – fedja Apr 11 '10 at 1:38
Is your 'proof of weaker conjecture' means that the proof is under the assumption '$B$ is sufficiently small'? – Sunni Apr 11 '10 at 3:10
fedja: Sorry, there was a gap in the part where I reduced consideration to small B. I took it out, I don't currently know how to extend to prove the full conjecture. – Jon Apr 11 '10 at
minwalin: I clarified the weaker conjecture, it is as stated just above "PROOF." – Jon Apr 11 '10 at 3:21
[deleted earlier comment] – Yemon Choi Apr 11 '10 at 3:48
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/20924/a-question-on-a-trace-inequality/20964","timestamp":"2014-04-21T15:40:26Z","content_type":null,"content_length":"65802","record_id":"<urn:uuid:26ccd5a9-b988-47f1-880a-6cafad306239>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Yarn Numbering System (Yarn Count) | Direct Count System | Indirect Count System
Yarn Count:
Count is a numerical value which expresses the coarseness or fineness (diameter) of the yarn and also indicates the relationship between length and weight (the mass per unit length or the length per unit mass) of that yarn. The concept of yarn count was therefore introduced; it specifies a certain ratio of length to weight.
The fineness of the yarn is usually expressed in terms of its linear density or count. There are a number of systems and units for expressing yarn fineness, and they are classified as follows.
Types of Yarn Count:
1. Direct Count System
2. Indirect Count System
1. Direct Count System: The weight of a fixed length of yarn is determined; the weight per unit length is the yarn count. The common feature of all direct count systems is that the length of yarn is fixed and the weight of yarn varies according to its fineness.
The following formula is used to calculate the yarn count:
N= (W×l) / L
N =Yarn count or numbering system
W =Weight of the sample at the official regain in the unit of the system
L=Length of the sample
l=Unit of length of the sample
1. Tex system ..........................NO. of grams per 1000 meters
2. Denier .................................No. of Grams per 9000 meters
3. Deci Tex ..............................No. of grams per 10,000 metres
4. Millitex ................................No. of milligrams per 1000 metres
5. Kilotex............................... .No. of kilograms per 1000 metres.
6. Jute count........................No. of lb per 14,400 yds
The Tex of a yarn indicates the weight in grammes of 1000 metres yarn. So that 40Tex means 1000 meters of yarn weigh 40gm.
From the above discussion it is concluded that, in the direct system, the higher the yarn number (count) the coarser the yarn, and the lower the number the finer the yarn.
2. Indirect Count System:
The length of a fixed weight of yarn is measured. The length per unit weight is the yarn count. The common features of all indirect count systems are the weight of yarn is fixed and the Length of
yarn varies according to its fineness.
The following formula is used to calculate the yarn count:
N = (L × w) / (W × l)
N =Yarn count or numbering system
W =Weight of the sample at the official regain in the unit of the system
L=Length of the sample
l=Unit of length of the sample
w = Unit of weight of the sample.
1. Ne: No. of 840-yard hanks weighing one pound
2. Nm: No. of one-kilometre hanks weighing one kilogram
The Ne indicates how many hanks of 840 yards length weigh one English pound. So 32 Ne means 32 hanks of 840 yards, i.e. 32 × 840 yards of yarn, weigh one pound.
For the determination of the count of yarn, it is necessary to determine the weight of a known length of the yarn. For taking out known lengths of yarns, a wrap-reel is used. The length of yarn
reeled off depends upon the count system used. One of the most important requirements for a spinner is to maintain the average count and count variation within control.
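For quick conversions between the systems described above, a few standard constants suffice (the 590.5 factor for Ne follows from 1 lb = 453.59 g and 840 yd = 768.1 m); a small sketch:

```python
# Yarn count conversions between the systems described above.
# Direct systems: tex = g per 1000 m, denier = g per 9000 m.
# Indirect systems: Nm = km hanks per kg, Ne = 840 yd hanks per lb.

def tex_from_denier(den):
    return den / 9.0

def tex_from_nm(nm):
    return 1000.0 / nm

def tex_from_ne(ne):
    return 590.5 / ne          # 590.5 = 1000 * 453.59 g / 768.1 m

print(round(tex_from_ne(32), 1))   # a 32 Ne yarn is about 18.5 tex
print(round(tex_from_denier(150), 1))  # 150 denier is about 16.7 tex
```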
Yarn Count Variation:
The term count variation is generally used to express variation in the weight of a lea, and it is expressed as C.V.%. It is affected by the number of samples and the length considered for count
checking, so while assessing count variation it is very important to test an adequate number of leas. After reeling the appropriate length of yarn, the yarn is conditioned in the standard atmosphere
for testing before its weight is determined.
motion under gravity
thanks for any help, not too sure what it means when it asks for the equation of motion
Mass × acceleration = force, and here the force is gravity, downward, $-mg$, plus friction, $-\gamma v^3$ (negative because the frictional force always opposes the motion): $m\frac{dv}{dt}= -mg - \gamma v^3$. The "terminal
velocity" is reached when $dv/dt = 0$. To find the "equation of motion", solve that equation for $v$ and then solve $\frac{dx}{dt}= v$ to get the height as a function of $t$.
The 99% Movement - Linking Math to Economics
Connecting mathematics with economics can be a liberating force for all involved.
For students, as well as the general populous, math and economics are often intangibly linked. However, sometimes the connection between the two disciplines is concrete and clear. Consider the
current pervasive socio-economic unrest. The Occupy Wall Street (OWS) Movement is attractive to our youth, especially because celebrities including rapper Lupe Fiasco and actress Susan Sarandon have
become involved.
The popularity of the movement stems from the way mathematics and economics are intertwined and made relevant for Americans. According to the October 2011 issue of Time magazine, an estimated
“200,000 people have protested in more than 900 cities in 82 countries around the world.” OWS continues to gain steady global support because the proponents are lucidly fusing mathematics and
economics in a way that resonates with the vocal majority.
The "99 Percenters"
The thriving movement has even abbreviated their own name mathematically as the “99 percenters” to distance themselves from the wealthiest in our country. According to recent research, the top 1
percent of the population control 40 percent of the wealth which Nobel Prize-winning economist Joseph Stieglitz detailed in the May 2011 issue of Vanity Fair. It is a statistic that has prompted many
of the 99 percenters into action, and it can likewise motivate students to connect mathematics to economics.
Economics in the Classroom
One way to get your students thinking about the link between economics and math is by having them do comprehensive research on the historical distribution of wealth. They could then plot the data and
note the mathematical model it best fits - linear or exponential. They could also compare this data to the inflation rate.
Twenty-five years ago, the top 1 percent possessed 33% of the wealth, according to Stieglitz. A recent report from the nonpartisan Congressional Budget Office(CBO) stated that the richest 1% nearly
tripled their income from 1979 to 2007, a staggering 275 percent increase. Therefore, another great, collaborative project for students to engage in would be an investigation into the mathematical
and economic historical progression describing how the top 1% increased their wealth.
• Students might determine what a graph of this rate would look like.
• They could predict what would happen to our society if such a rate continues.
• Students could discuss whether it is sustainable, and why or why not.
Delving Into the Current Economic Situation
A similar joint mathematical/economic analysis might be conducted on the percent of income the top 1% of the population is currently amassing. In fact, the upper 1% possesses nearly 25% of the
nation’s income. By contrast, a quarter of a century ago, the top 1% took in only 12% of the nation’s income. In stark contrast, men with no higher than a high school degree have seen their wages
decrease by 12% over the past twenty-five years. Students could use this data to design comprehensive graphs, and, without a doubt, this information would be relevant to students looking toward the
future. In addition, the overwhelming majority believe taxes should be raised on the ultra-wealthy. This could spiral off into a social studies project, including a study on shaping legislation to
meet economic needs.
With this transition in wealth, various other economic problems have occurred. It was recently announced that the American rate of poverty has grown to its highest level in thirty-five years.
Approximately 46.2 million Americans are scraping by below the poverty line.
• Students could identify what percent, fraction, and decimal of people this is of the entire U.S. population.
• They could couple this data with the number of people who are unemployed, underemployed, discouraged (workers who have given up on looking for work), and part-timers who wish to be full-timers.
• This too could be a launching pad for students to come up with mathematical/economic solutions, integrating language arts, social studies, and math, to find solutions to poverty in America.
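The first bullet can be worked out directly. The sketch below is illustrative: the total population figure (roughly 308.7 million, the 2010 census count) is an assumption supplied for the example, not a number from the article.

```python
# Express the 46.2 million Americans below the poverty line as a
# decimal, percent, and approximate fraction of the total population.
# The population figure (~308.7 million, the 2010 census count) is an
# assumption supplied for illustration, not a number from the article.
from fractions import Fraction

poverty = 46.2e6
population = 308.7e6  # assumed total U.S. population

decimal = poverty / population
percent = decimal * 100
fraction = Fraction(462, 3087).limit_denominator(20)  # same ratio, small terms

print(f"decimal:  {decimal:.3f}")       # about 0.150
print(f"percent:  about {percent:.1f}%")
print(f"fraction: roughly {fraction}")  # about 3/20
```

Seeing the same quantity as a decimal, a percent, and a simple fraction like 3/20 helps students move between representations.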
Social Justice and Math
In an article titled “Mathematical Power: Exploring Critical Pedagogy In Mathematics and Statistics,” authors Lawrence M. Lesser and Sally Blake write: “Though traditionally viewed as value-free,
mathematics is actually one of the most powerful, yet underutilized, venues for working toward the goals of critical pedagogy - social, political, and economic justice for all.” We, as teachers, must
therefore believe that all students are capable of understanding mathematics in order to protect their own well-being. Students should know that higher paying jobs generally require a person to use
more mathematics, and that the ability to analyze our world mathematically directly enables them to survive and thrive. As math scholar Marilyn Frankenstein has eloquently written, “Politically,
people can be more easily oppressed when they cannot break through the numerical lies and obfuscations thrown at them on a daily basis.” We, as teachers, then, have a duty to fight statistical
manipulation by having students implement mathematical and economic concepts to empower themselves.
Teachers could also use unemployment data as a way to delve into a mathematical exploration. After three years since the financial crisis, the unemployment rate is still at the highest since the
Great Depression.
• Students could investigate what our current 9% unemployment rate actually means.
• They could compare and contrast the data for those who want to work, but cannot find a job (14 million Americans) to those who want to work full-time, but can only find part-time work. How does
that information affect the unemployment and underemployment rates?
• Students could also compare corporate profit trends versus what has transpired for the majority of wage earners.
After the various data is gathered, students can use spreadsheet programs to create scatterplots. They can use the information to describe different trends, fitting linear, exponential, polynomial,
or other trend lines, and make some predictions about the future. Here are more lessons that combine mathematics and economics.
Economics Lesson Plans:
Students learn about the positive and negative aspects of credit cards and credit card offers. Students gather data from the Federal Reserve’s website. Students also compare and contrast various
kinds of credit cards. This lesson provides a way for students to see how economics works in their daily lives.
Students gather and analyze data from the Great Depression. Students also create a multimedia, interdisciplinary project about the Great Depression more generally. This can be a tool used to help
students learn how to organize information for use in a graph or other mathematical representation.
Federal Reserve's Monetary Policy
Students research and explain the development and purpose of the Federal Reserve’s Beige Book. This lesson can help students learn how the history of economics affects the present.
A Case Study: Gross Domestic Product - January 27, 2006
Learners gain access to easily understood, timely interpretations of monthly announcements of rates of change in real GDP and the accompanying related data in the U.S. economy. This information can
be used as part of student’s research analyzing income.
Students examine what was in store for Wall Street following the NASDAQ's 547.57 point plunge on Tuesday, April 4, 2000. They evaluate how they might manage a heavily laden high-tech portfolio
before deciding how to invest in the market. The discussions that follow this lesson could help students understand why the current movement is called Occupy Wall Street. | {"url":"http://www.lessonplanet.com/article/math/the-99-movement-linking-math-to-economics","timestamp":"2014-04-16T05:03:47Z","content_type":null,"content_length":"44936","record_id":"<urn:uuid:bdff12b2-1b56-4c11-9474-f10697e03b07>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hamed Hamid Muhammed: Stanford Research Communication Program
I-RITE Statement Archive
Hamed Hamid Muhammed
Centre for Image Analysis
Uppsala University
June 2002
In my field of study, image analysis, I am trying to develop new methods for analysing satellite images. There currently exist many sophisticated satellite systems for acquiring various types of
images, but there is still a need for new and more efficient methods for extracting as much useful information as possible from these images.
One of the most important fields of image analysis research is analysing remotely sensed images acquired by satellite and airborne systems. Remote sensing offers the opportunity to study a relatively
large region on the Earth by a single image. In addition, satellite and airborne remote sensing analysis systems offer efficient and environmentally friendly non-destructive techniques.
My research aims to analyse these kinds of images in a way that takes into account the entire content of the available data set (which is an image) to extract as much useful information as possible
from these data. The statistical method is one of the most commonly used and efficient techniques for this purpose, where it is allowed to have data affected by some kind of disturbing factors. This
is important because it is impossible to make sure that we have exactly the same conditions when acquiring satellite images at different times, even when using the same instrument. Weather conditions
and variations in the sensitivity of the instrument are the main disturbing factors.
Remote sensing systems acquire very large image data sets, which require fast and efficient analysis techniques. What statistical methods are good at is examining the data to determine which part of
it contains the useful information we are interested in. This small part is what we actually need to focus on, because by using only this part we still get as much useful information as if we had
used the whole data set.
Obviously, fewer computational and memory resources are needed when we reduce the amount of data we have to process. Statistical methods look at the statistical properties of the data
samples to decide which of them are useful. Many statistical properties can be utilised here, such as the mean value of the data samples and how much each of them deviates from the mean value.
The following example illustrates the basic idea of using statistics. Suppose that we have two apples beside each other on a plate and an ant on the same table. The task for the ant is to take a
photo in which the two apples can be recognized clearly. Most likely, the easiest way for the ant to do this is to go as far as possible from the plate and around it until the two apples can be
recognized. Another way would be for the ant to be at a certain place on the table and move the plate away from itself (a translation) and then rotate it until it becomes possible for the ant to
recognize the apples. The latter way is what we try to do when using statistical methods. But the hard task here is to find the suitable statistical properties of the data that can be used to get a
better view in order to decide which of the data samples are useful for our analysis.
In the case of the plate with the apples, moving the plate away from the ant corresponds to multiplying the view by a factor smaller than one (a translation factor). Rotating the plate corresponds to
multiplying the view by a rotation factor, which is more complex than the simple translation factor. However, it is possible to combine the two factors into one which can be used to produce a new
view, one showing both the translation and rotation effects. Multiplying the view by a factor is called "projecting the view" on that factor. In other words, the new view is the projection of the
view on the factor, or is what can be seen from the factor's point of view. An even better factor would be one that not only gets a better view, but also goes further and finds the interesting part
of the view.
Many methods exist for finding such factors, called "basis vectors" in this field, and these methods are referred to as "transformations." The difference between these methods is in the choice of a
suitable set of statistical properties of the data samples that are considered when finding the basis vectors. Finally, our image data set is projected on these basis vectors. So far, the resulting
projections correspond to obtaining better views. The next step is to study these basis vectors to try to build smarter ones that can be used for both obtaining the best view and extracting the
useful information from it. | {"url":"http://www.stanford.edu/group/i-rite/statements/2002/muhammed.htm","timestamp":"2014-04-19T05:23:54Z","content_type":null,"content_length":"12860","record_id":"<urn:uuid:c6cbfcbe-ea51-4d82-94c7-825c5dba6189>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
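The idea of projecting data onto a basis vector to find the informative part of a view can be sketched concretely. The code below is an illustrative toy, not the author's method: it projects 2-D points onto two hand-picked directions and compares the variance of the projections, whereas real transformations compute the basis vectors from the data's statistics.

```python
# Project 2-D points onto a unit direction ("basis vector") and measure
# the variance of the projections. Real methods compute the basis
# vectors from the data's statistics; here two directions are picked by
# hand just to show that one view is far more informative than another.
import math

points = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1)]

def projection_variance(points, direction):
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm                  # make it a unit vector
    proj = [x * dx + y * dy for x, y in points]    # project each point
    mean = sum(proj) / len(proj)
    return sum((p - mean) ** 2 for p in proj) / len(proj)

# The diagonal direction lines up with the data's spread; the
# perpendicular direction sees almost nothing.
print(projection_variance(points, (1, 1)))   # large (about 4.0)
print(projection_variance(points, (1, -1)))  # small (about 0.007)
```

Keeping only the high-variance projection retains nearly all the information in this data while halving its size, which is the point of the dimensionality-reduction transformations described above.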
Calculus and Its Applications, Twelfth Edition
is a comprehensive print and online program for students majoring in business, economics, life science, or social sciences. Without sacrificing mathematical integrity, the book clearly presents the
concepts with a large quantity of exceptional, in-depth exercises. The authors' proven formula—pairing substantial amounts of graphical analysis and informal geometric proofs with an abundance of
exercises—has proven to be tremendously successful with both students and instructors. The textbook is supported by a wide array of supplements as well as MyMathLab and MathXL, the most widely
adopted and acclaimed online homework and assessment system on the market.
CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.
Table of Contents
0. Functions
0.1 Functions and Their Graphs
0.2 Some Important Functions
0.3 The Algebra of Functions
0.4 Zeros of Functions-The Quadratic Formula and Factoring
0.5 Exponents and Power Functions
0.6 Functions and Graphs in Applications
1. The Derivative
1.1 The Slope of a Straight Line
1.2 The Slope of a Curve at a Point
1.3 The Derivative
1.4 Limits and the Derivative
1.5 Differentiability and Continuity
1.6 Some Rules for Differentiation
1.7 More About Derivatives
1.8 The Derivative as a Rate of Change
2. Applications of the Derivative
2.1 Describing Graphs of Functions
2.2 The First and Second Derivative Rules
2.3 The First and Second Derivative Tests and Curve Sketching
2.4 Curve Sketching (Conclusion)
2.5 Optimization Problems
2.6 Further Optimization Problems
2.7 Applications of Derivatives to Business and Economics
3. Techniques of Differentiation
3.1 The Product and Quotient Rules
3.2 The Chain Rule and the General Power Rule
3.3 Implicit Differentiation and Related Rates
4. Logarithm Functions
4.1 Exponential Functions
4.2 The Exponential Function e^x
4.3 Differentiation of Exponential Functions
4.4 The Natural Logarithm Function
4.5 The Derivative of ln x
4.6 Properties of the Natural Logarithm Function
5. Applications of the Exponential and Natural Logarithm Functions
5.1 Exponential Growth and Decay
5.2 Compound Interest
5.3 Applications of the Natural Logarithm Function to Economics
5.4 Further Exponential Models
6. The Definite Integral
6.1 Antidifferentiation
6.2 Areas and Riemann Sums
6.3 Definite Integrals and the Fundamental Theorem
6.4 Areas in the xy-Plane
6.5 Applications of the Definite Integral
7. Functions of Several Variables
7.1 Examples of Functions of Several Variables
7.2 Partial Derivatives
7.3 Maxima and Minima of Functions of Several Variables
7.4 Lagrange Multipliers and Constrained Optimization
7.5 The Method of Least Squares
7.6 Double Integrals
8. The Trigonometric Functions
8.1 Radian Measure of Angles
8.2 The Sine and the Cosine
8.3 Differentiation and Integration of sin t and cos t
8.4 The Tangent and Other Trigonometric Functions
9. Techniques of Integration
9.1 Integration by Substitution
9.2 Integration by Parts
9.3 Evaluation of Definite Integrals
9.4 Approximation of Definite Integrals
9.5 Some Applications of the Integral
9.6 Improper Integrals
10. Differential Equations
10.1 Solutions of Differential Equations
10.2 Separation of Variables
10.3 First-Order Linear Differential Equations
10.4 Applications of First-Order Linear Differential Equations
10.5 Graphing Solutions of Differential Equations
10.6 Applications of Differential Equations
10.7 Numerical Solution of Differential Equations
11. Taylor Polynomials and Infinite Series
11.1 Taylor Polynomials
11.2 The Newton-Raphson Algorithm
11.3 Infinite Series
11.4 Series with Positive Terms
11.5 Taylor Series
12. Probability and Calculus
12.1 Discrete Random Variables
12.2 Continuous Random Variables
12.3 Expected Value and Variance
12.4 Exponential and Normal Random Variables
12.5 Poisson and Geometric Random Variables
Calculus & Its Applications, CourseSmart eTextbook, 12th Edition
Format: Safari Book
$75.99 | ISBN-13: 978-0-321-56077-3 | {"url":"http://www.mypearsonstore.com/bookstore/calculus-its-applications-coursesmart-etextbook-0321560779","timestamp":"2014-04-16T19:44:27Z","content_type":null,"content_length":"18590","record_id":"<urn:uuid:06a6a70a-cbc5-4a15-8587-14a97a2051a5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
67-69 Shock Retainer/washer
I thought this was one of our original shock retainers (one on the right).
Still has the trunk spatter on it but it measures 1.5" in diameter??
Other originals I have show a similar design but measure in that 1 3/8" ballpark (1.390").
Now I'm not so sure it was off our 68 but have no idea where else it would have come from???
Has anyone observed or have these 1.5" shock retainers/washers?
The AIM calls out the same part# shock retainer for the front and rear shocks. | {"url":"http://www.camaros.org/forum/index.php?topic=9148.msg65106","timestamp":"2014-04-19T17:27:54Z","content_type":null,"content_length":"44184","record_id":"<urn:uuid:01b4798f-ff26-4c9d-9956-48e68efd5a53>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Fresh Approach to Representing Syntax with Static Binders in Functional Programming
Andrew M. Pitts
Cambridge University Computer Laboratory
Cambridge CB3 0FD, UK
Tell category theorists about the concept of abstract syntax for a language and they may say ``that's just the initial algebra for a sum-of-products functor on the category of sets''. Despite what
you might think, they are trying to be helpful, since the initiality property is the common denominator of both definitions by structural recursion and proofs by structural induction [5, Sect. 4.4].
In recent years we have learned how to extend this initial algebra view of abstract syntax to encompass languages with statically scoped binders. In the presence of such binders one wants to abstract
away from the specific names of bound variables, either by quotienting parse trees by a suitable notion of alpha-equivalence, or by replacing conventional parse trees with ones containing de Bruijn
indices [1]. By changing from the category of sets to other well-known, but still `set-like' categories of sheaves or presheaves, one can regain an initial algebra view of this even more than
normally abstract syntax---the pay-off being new and automatically generated forms of structural recursion and induction that respect alpha-equivalence [2, 3].
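The de Bruijn representation mentioned above can be sketched concretely. The following is an illustrative toy, not code from the paper: it converts named lambda terms (in an ad hoc tuple encoding) to de Bruijn indices, so that alpha-equivalent terms become syntactically identical.

```python
# Convert named lambda terms to de Bruijn indices, so that
# alpha-equivalent terms compare equal with plain equality.
# Terms use an ad hoc tuple encoding (not from the paper):
#   ('var', name) | ('lam', name, body) | ('app', fun, arg)

def to_de_bruijn(term, env=()):
    tag = term[0]
    if tag == 'var':
        # index = distance to the binder (0 for the nearest enclosing lambda)
        return ('var', env.index(term[1]))
    if tag == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + env))
    return ('app', to_de_bruijn(term[1], env), to_de_bruijn(term[2], env))

def alpha_eq(s, t):
    return to_de_bruijn(s) == to_de_bruijn(t)

# \x. \y. x y   versus   \a. \b. a b  -- equal up to renaming of binders
k1 = ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))
k2 = ('lam', 'a', ('lam', 'b', ('app', ('var', 'a'), ('var', 'b'))))
k3 = ('lam', 'a', ('lam', 'b', ('app', ('var', 'b'), ('var', 'a'))))

print(alpha_eq(k1, k2))  # True
print(alpha_eq(k1, k3))  # False
```

Quotienting named parse trees by alpha-equivalence, as here, is exactly the abstraction step that the initial-algebra view must account for.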
One good test of these new ideas is to see if they give rise to new forms of functional programming. In fact they do. The paper [6] sketches a functional programming language for representing and
manipulating syntactical structure involving binders, based on the mathematical model of variable-binding in [3, 4]. In this ML-like language there are new forms of type for names and name-binding
that come along with facilities for declaring fresh names, for binding names in abstractions and for pulling apart such name-abstractions via pattern-matching. The key idea is that properly abstract
uses of names, i.e. ones that do not descend below the level of alpha-conversion, can be imposed on the user by a static type system that deduces information about the freshness of names. Even though
we appear to be giving users a `gensym' facility, the type system restricts the way it can be used to the extent that we keep within effect-free functional programming, in the sense that the usual
laws of pure functional programming remain valid (augmented with new laws for names and name-abstractions).
In this talk I will introduce this new approach to representing languages static binders in functional programming and discuss some of the difficulties we have had verifying its semantic properties.
[1] N. G. de Bruijn. Lambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem. Indag. Math., 34:381--392, 1972.
[2] M. P. Fiore, G. D. Plotkin, and D. Turi. Abstract syntax and variable binding. In 14th Annual Symposium on Logic in Computer Science, pages 193--202. IEEE Computer Society Press, Washington, 1999.
[3] M. J. Gabbay and A. M. Pitts. A new approach to abstract syntax involving binders. In 14th Annual Symposium on Logic in Computer Science, pages 214--224. IEEE Computer Society Press, Washington, 1999.
[4] M. J. Gabbay and A. M. Pitts. A new approach to abstract syntax with variable binding. Formal Aspects of Computing, ?:?--?, 2001. Special issue in honour of Rod Burstall. To appear.
[5] J. Meseguer and J. A. Goguen. Initiality, induction and computability. In M. Nivat and J. C. Reynolds, editors, Algebraic Methods in Semantics, chapter 14, pages 459--541. Cambridge University Press, 1985.
[6] A. M. Pitts and M. J. Gabbay. A metalanguage for programming with bound names modulo renaming. In R. Backhouse and J. N. Oliveira, editors, Mathematics of Program Construction. 5th International Conference, MPC2000, Ponte de Lima, Portugal, July 2000. Proceedings, volume 1837 of Lecture Notes in Computer Science, pages 230--255. Springer-Verlag, Heidelberg, 2000.
Volo, IL Geometry Tutor
Find a Volo, IL Geometry Tutor
...I have been tutoring high school and college students for the past six years. Previously I taught at Georgia Institute of Technology from which I received a Bachelor's in Electrical
Engineering and a Master's in Applied Mathematics. I also have 25 years of experience in the corporate world.
18 Subjects: including geometry, calculus, physics, GRE
...As a professional tutor, it is my goal to understand where the language barrier is in a topic and help you/your child overcome that barrier. My philosophy is to break down complex concepts
into digestible chunks that allow the student to build confidence and excel towards their goals. I first b...
26 Subjects: including geometry, chemistry, Spanish, reading
...In addition to an MBA, I have a PhD in Engineering, so I believe I am well qualified to teach mathematics courses as well as business courses (undergraduate and MBA level). I have several
years of professional experience through my work history with several companies, which adds to my business exp...
22 Subjects: including geometry, calculus, physics, statistics
...In addition to my classes, I have had experience researching at Northwestern University in Evanston, IL and the Feinberg School of Medicine in Chicago, IL. My research projects concerned estrogen
receptors and tuberculosis, respectively. While excelling as a student, I have had my fair share of tutoring experiences.
16 Subjects: including geometry, chemistry, calculus, algebra 2
...My primary goal as a tutor is to adapt to each individual's learning style in order to make learning as efficient and fun as possible. I've been told I have a knack for explaining mathematical
concepts in a clear and concise way, which is fortunate for me because I genuinely enjoy helping studen...
25 Subjects: including geometry, calculus, statistics, algebra 1
Found 2 records.
RFC4556, "Public Key Cryptography for Initial Authentication in Kerberos (PKINIT)", June 2006
Source of RFC: krb-wg (sec)
Errata ID: 3820
Status: Reported
Type: Editorial
Reported By: Nicolas Williams
Date Reported: 2013-12-05
Section 3.2.2 says:
The type of the otherName field is AnotherName. The type-id field
of the type AnotherName is id-pkinit-san:
id-pkinit-san OBJECT IDENTIFIER ::=
{ iso(1) org(3) dod(6) internet(1) security(5) kerberosv5(2)
x509SanAN (2) }
It should say:
The type of the otherName field is AnotherName. The type-id field
of the type AnotherName is id-kerberos-san:
id-kerberos-san OBJECT IDENTIFIER ::=
{ iso(1) org(3) dod(6) internet(1) security(5) kerberosv5(2)
x509SanAN (2) }
The certificate subject alternative name (SAN) type added by RFC4556 is, and has been, used more generically than its symbolic name denotes.
Note that there is no risk in using id-pkinit-san for non-PKINIT purposes as presence of that SAN is -naturally- insufficient by itself to cause an AS to issue a ticket to the client for the named
principal. RFC4556 is quite clear on this point.
Therefore id-pkinit-san should have been named id-kerberos-san, and should be referred to as id-kerberos-san going forward. (If there were a registry of PKIX certificate extensions, we would
additionally ask IANA to update it.)
There are a few other mentions of id-pkinit-san in RFC4556, all of which should read id-kerberos-san instead.
RFC4556, "Public Key Cryptography for Initial Authentication in Kerberos (PKINIT)", June 2006
Source of RFC: krb-wg (sec)
Errata ID: 3157
Status: Held for Document Update
Type: Technical
Reported By: Tom Yu
Date Reported: 2012-03-15
Held for Document Update by: Stephen Farrell
Section 3.2.1 says:
8. The client's Diffie-Hellman public value (clientPublicValue) is
included if and only if the client wishes to use the Diffie-
Hellman key agreement method. The Diffie-Hellman domain
parameters [IEEE1363] for the client's public key are specified
in the algorithm field of the type SubjectPublicKeyInfo
[RFC3279], and the client's Diffie-Hellman public key value is
mapped to a subjectPublicKey (a BIT STRING) according to
[RFC3279]. When using the Diffie-Hellman key agreement method,
implementations MUST support Oakley 1024-bit Modular Exponential
(MODP) well-known group 2 [RFC2412] and Oakley 2048-bit MODP
well-known group 14 [RFC3526] and SHOULD support Oakley 4096-bit
MODP well-known group 16 [RFC3526].
The Diffie-Hellman field size should be chosen so as to provide
sufficient cryptographic security [RFC3766].
When MODP Diffie-Hellman is used, the exponents should have at
least twice as many bits as the symmetric keys that will be
derived from them [ODL99].
It should say:
8. The client's Diffie-Hellman public value (clientPublicValue) is
included if and only if the client wishes to use the Diffie-
Hellman key agreement method. The Diffie-Hellman domain
parameters [IEEE1363] for the client's public key are specified
in the algorithm field of the type SubjectPublicKeyInfo
[RFC3279], and the client's Diffie-Hellman public key value is
mapped to a subjectPublicKey (a BIT STRING) according to
[RFC3279]. When using the Diffie-Hellman key agreement method,
implementations MUST support Oakley 1024-bit Modular Exponential
(MODP) well-known group 2 [RFC2412] and Oakley 2048-bit MODP
well-known group 14 [RFC3526] and SHOULD support Oakley 4096-bit
MODP well-known group 16 [RFC3526].
Some implementations are known to omit the mandatory Q value
from the DomainParameters (in the algorithm value of the
clientPublicValue) when using the well-known MODP groups 14 and
16, which can cause an ASN.1 decoding error for the
DomainParameters value. While [RFC3526] does not explicitly
specify the Q value for the well-known MODP groups 14 and 16,
the prime modulus of each of these groups is a safe prime --
having the form P = 2Q + 1, where P and Q are prime.
Therefore, the Q value for each of these moduli is the
corresponding Sophie Germain prime, and it is equal to (P-1)/2.
The Diffie-Hellman field size should be chosen so as to provide
sufficient cryptographic security [RFC3766].
When MODP Diffie-Hellman is used, the exponents should have at
least twice as many bits as the symmetric keys that will be
derived from them [ODL99].
The new paragraph identifies an interoperability problem where an
implementation omits the Q value (required in the DomainParameters
type defined in RFC3370) of a Diffie-Hellman group when participating
in PKINIT. This happens for two well-known IKEv2 MODP groups that are
defined in RFC3526, probably because RFC3526 does not explicitly state
the Q values for the moduli. The moduli are safe primes, so the new
text specifies how to compute their Q values (which are the
corresponding Sophie Germain primes).
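The safe-prime relationship the errata notes rely on — P = 2Q + 1 with both P and Q prime, so Q = (P-1)/2 — is easy to check programmatically. The sketch below is illustrative only: it uses a small toy safe prime in place of the actual 2048-bit MODP group 14 modulus, and the helper name `is_prime` is an assumption, not anything from RFC 4556 or RFC 3526.

```python
def is_prime(n):
    """Trial-division primality check (fine for small illustrative values)."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

P = 23                 # a small safe prime standing in for a MODP modulus
Q = (P - 1) // 2       # the corresponding Sophie Germain prime

assert P == 2 * Q + 1
assert is_prime(P) and is_prime(Q)
print(Q)               # -> 11
```

For the real group 14 and 16 moduli the same arithmetic applies; only the primality check would need a probabilistic test suited to 2048-bit numbers.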
Master of Science in Mathematics - Computational Track
Warning: Ohio University switched to the semester system in the summer of 2012, so this information is obsolete.
Directors: Vardges Melkonian; Martin J. Mohlenkamp; and Annie Shen.
General Information
The computational track is aimed at students who are interested in both Mathematics and Computer Science. Our graduates often become software engineers, and are distinguished by mathematical skills
that make them more valuable than typical programmers. These same skills are useful for graduates seeking careers in any field that requires computational or applied Mathematics. The curriculum
provides a foundation in both computer science and mathematics, while allowing enough flexibility so that students can pursue their interests in these two fields. The track can normally be completed
in two years or less.
Admission Requirements
The Graduate College provides University-wide information about graduate study, the graduate application, and useful information for international graduate students. The requirements listed here are
in addition to the university's requirements.
• A Bachelor's degree with at least a 3.0 grade point average from an accredited undergraduate college.
• Background in Mathematics including
□ Calculus (MATH 263A,B,C,D)
□ Differential Equations (MATH 340)
□ Linear Algebra (MATH 211)
□ Matrix Theory (MATH 410)
□ Theory of Statistics (MATH 450A)
• Background in Computer Science including
□ Introduction to Computer Science (C++ based) (CS 240A,B)
□ Digital Circuits (EE 222) or Microprocessors (EE 367)
□ Discrete Structures (CS 300)
□ Organization of Programming Languages (CS 320)
□ Data Structures (CS 361)
Students who have taken classes with similar content under different names should indicate so in their application. Students who lack the statistics prerequisite will be required to take MATH 550A as
part of their study plan. Students whose preparation in matrix theory is not deemed equivalent to our course MATH 410 must take MATH 510 in addition to the required course work. Students who lack
other prerequisite courses may be admitted "conditionally". They must take the missing courses during the first year in the program and prior to enrolling in courses for which the missing courses are prerequisites.
The Graduate Record Examination (GRE) is not required, but scores should be sent if they are available.
Degree Requirements
1. 55 Graduate credit hours above/beyond the 5xxN level in Computer Science, or above/beyond 500 in Mathematics. Credit for Math 510 may not be used toward the 55 hour requirement.
□ At the discretion of the graduate committee, a student can transfer credit for up to 3 courses (13 credits from other institutions; 15 credits from other Ohio University programs.)
□ At least 3 courses must be at the 600 level.
2. Mathematics course requirements:
□ At least 6 courses overall.
□ At least 1 course must be at the 600 level.
□ Two courses:
☆ Linear Algebra (MATH 511)
☆ Introduction to Numerical Analysis (MATH 544)
or Numerical Linear Algebra (MATH 546)
A more advanced course in the same subject area may be substituted.
3. Computer Science course requirements:
□ At least 6 courses overall.
□ At least 1 course must be at the 600 level.
□ Three courses:
☆ Design and Analysis of Algorithms (CS 504)
☆ Computation Theory (CS 506)
☆ Operating Systems and Computer Architecture I (CS 542)
A more advanced course in the same subject area may be substituted.
4. Professional Development Requirement. A choice of:
□ Curricular Practical Training (CPT, internship; counts as 1 credit toward the program requirements)
□ Master's Thesis or Project in Computational Mathematics (MATH 695/692; maximum 10 credits)
□ One graduate course in a related area other than Mathematics or Computer Science (counts as 1 credit toward the program requirements)
5. Each student, with the assistance of a faculty adviser, must develop a study plan by the end of his or her first quarter, and have it approved by the graduate chair. The study plan must satisfy
the specific course requirements above, and also show a coherent focus on some area of interest to the student. Any changes to this study plan must be approved by the faculty adviser and graduate
chair at least one quarter before the student applies for graduation.
Conferral of a graduate degree requires at least a B (3.0) grade-point average (g.p.a) both in the courses taken towards satisfying the degree requirements as well as in all courses taken at Ohio
University. Students whose overall g.p.a stays below 3.0 in three consecutive quarters will be dropped from the program.
During a non-summer quarter in which they receive financial support from the department, a Master's student who wishes to take a non-math
course that does not appear in their study plan must first obtain approval from their advisor and the graduate chair. Students who violate this policy may lose their financial support.
Master's students who choose to do a thesis must provide the title of the thesis and have it approved (signed) by the thesis advisor before the study plan is approved.
• Typical courses are 4 credits and run one 10-week quarter. MATH 6xx courses are 5 credits.
• See the Classes page for descriptions of the Mathematics courses.
• Students who started the program before Fall quarter 2004 may choose to use the requirements that were in effect when they were admitted.
Patent US4654667 - Signal-acquisition system for a circular array
The present invention is directed to signal-acquisition systems. It is concerned specifically with a system for processing the output of a circular array of antenna elements so as to determine both
the azimuth and the elevation angles of the source of signals that the antenna array receives.
U.S. patent application Ser. No. 551,664, filed on Nov. 14, 1983, by Apostolos, Boland, and Stromswold for an ACQUISITION SYSTEM EMPLOYING CIRCULAR ARRAY, discloses a powerful system for determining
the directions of arrival and frequencies of many signals simultaneously. An improvement in that system is disclosed in U.S. patent application Ser. No. 536,477, filed on Sept. 28, 1983, by John T.
Apostolos for a TWO-DIMENSIONAL ACQUISITION SYSTEM USING CIRCULAR ARRAY. In both of these systems, a spatial Fourier transformation is performed on the outputs of a circular antenna array. The
resultant transform is processed with certain correction factors related to the antenna pattern of the array and then subjected again to a spatial Fourier transformation. A temporal Fourier
transformation is also performed. The result of each system is an ensemble of signals at a group of output ports in which each output port represents a different azimuthal direction. Signals from a
source in a given azimuthal direction result in a maximum output at the port associated with that azimuthal direction. Thus, the azimuthal direction of each source is readily identified in real time.
The descriptions included in these patent applications are helpful in understanding the present invention, and they are accordingly incorporated by reference.
The assumption on which the design of the systems of those two applications is based is that the source has a negligible elevation angle. That is, there is only a very small angle between the
direction of arrival of the signal and the plane of the circular antenna array. For a wide range of applications, this is an accurate assumption. For sources whose angle of elevation is significant,
however, the direction indications produced by the systems of those two applications are inaccurate.
An object of the present invention is to eliminate the inaccuracies that can be caused in such systems by significant elevation angles.
It is a further object of the present invention to determine the values of the elevation angles of signal sources.
The foregoing and related objects are achieved in the method described below and in apparatus for carrying out that method. The method includes performing a spatial Fourier transformation on an
ensemble of input signals from a circular antenna array to generate an ensemble of input-transform signals, each of which is associated with a separate integer index n. A signal associated with an
index n represents the spatial-frequency component of n electrical degrees per spatial degree around the circular antenna array and consists of components representing all of the antenna-signal
temporal-frequency components that give rise to that spatial frequency. According to the invention, an ensemble of modified-transform signals is generated from these input signals for each of a
plurality of elevation angles. Each ensemble includes a modified-transform signal associated with each input-transform signal. Each modified-transform signal consists of modified-transform
components, each of which represents a value that is substantially proportional to an associated component in the input-transform signal multiplied by (-1)n times the azimuth-independent factor in
the antenna pattern that would be generated by the antenna array at the associated elevation angle if the antenna array were driven by signals whose temporal frequency is the frequency with which
that input-transform component is associated and whose phases vary with element position at the spatial frequency represented by that input-transform signal.
A spatial Fourier transformation is performed on each ensemble of modified-transform signals to generate an ensemble of output-transform signals for each of the plurality of elevation angles. Within
a given output-transform ensemble, each output-transform signal is associated with a different azimuth angle. The result of this process is that radiation emitted by a source and received by the
antenna array causes a maximum response in the output-transform signal associated with the azimuth and elevation angles of that source.
These and further features and advantages of the present invention are described in connection with the accompanying drawings, in which:
FIG. 1 is a block diagram of the system of the present invention for determining the elevation and azimuthal position of the source of signals detected by a circular array of antenna elements;
FIG. 2 is a more-detailed block diagram of a portion of the system of FIG. 1; and
FIG. 3 is a diagram used to define variables employed in the mathematical treatment of the invention.
The invention will be described initially by simultaneous reference to FIGS. 1, and 2. The system 10 of the present invention is a device for determining the angle of elevation, the azimuth angles,
and the temporal frequencies of radiation received from a plurality of sources by a circular antenna array 12 of 2N elements 14(1-N) through 14(N). The outputs of the antenna elements 14 are fed to
corresponding input ports 16 of a two-dimensional compressive receiver 18. In essence, the two-dimensional compressive receiver performs a two-dimensional Fourier transformation on the signal
ensemble that it receives at its input ports. The transformation is from time to temporal frequency and from position to spatial frequency. The compressive receiver 18 has 2N output ports, each of
which is associated with a spatial-frequency component, and a spatial-frequency component in the input ensemble causes its greatest response at the output port 20(n) associated with that
spatial-frequency component.
Spatial frequency in this context refers to the instantaneous phase advance around the elements of the circular array. For instance, suppose that the signals on all of the elements 14 of the circular
array 12 are sinusoidal signals of the same temporal frequency but having different phases. Suppose further that these phases advance with element position by n electrical degrees per spatial degree,
where n is an integer. In such a situation, the array output has a single temporal-frequency component and a single spatial-frequency component. For such an ensemble of signals, the output of the
compressive receiver 18 is a burst of oscillatory signal whose frequency is the center frequency of the compressive receiver. This output is greatest on the output port 20(n) associated with the
spatial frequency of n electrical degrees per spatial degree. The compressive receiver is repeatedly swept in frequency, and the burst occurs at a time within the sweep that is determined by the
temporal frequency of the radiation that causes the signal ensemble. For the ensemble just described, the response at any of the other output ports 20 is negligible--because there are no other
spatial-frequency components--and a significant output on output port 20(n) occurs only at the time within the sweep associated with the temporal frequency of the radiation.
Of course, this signal ensemble, which has only one spatial-frequency component, is extremely artificial; even a single plane-wave signal at a single temporal frequency gives rise to many
spatial-frequency components in a circular array. In ordinary operation, many spatial-frequency components, and usually many temporal-frequency components, are present in the ensemble of signals
processed by the two-dimensional compressive receiver 18, which processes all of these components simultaneously.
Those skilled in the art will recognize that the two-dimensional compressive receiver includes a two-dimensional dispersive delay line and that the position of an output port on the output edge of
the delay line determines the spatial-frequency component with which that output port 20 is associated. Therefore, the output ports of compressive receivers can in general be positioned so as to be
associated with other than integral numbers of electrical degrees per spatial degree. However, compressive receiver 18 is arranged so that the spatial frequencies associated with the output ports 20
are integral; as was stated before, each output port 20(n) is associated with a spatial frequency of n electrical degrees per spatial degree.
The signal from each compressive-receiver output port 20(n) is fed to a corresponding input port on each of Q+1 different processing units 24(0)-24(Q). These processing units 24 multiply the input
signals by processing factors that are functions of time within the compressive-receiver sweep and depend on the particular input port 22(q,n) to which the signal is applied. The signal resulting
from multiplication of the signal on each input port 22(q,n) is presented on a corresponding output port 26(q,n) to a corresponding input port 28(q,n) of one of Q+1 modified Butler matrices 30(0)-30(Q). Each modified Butler matrix 30(q) performs a spatial Fourier transformation but no temporal Fourier transformation.
The Butler matrix 30(q) is a modified version of a conventional Butler matrix of the type described in U.S. Pat. No. 3,255,450, which issued on June 7, 1966, to Jesse L. Butler for a Multiple Beam
Antenna System Employing Multiple Directional Couplers in the Leadin. In the conventional Butler matrix, the two adjacent central output ports represent opposite phase gradients, or spatial
frequencies, of the same magnitude, and the other output ports represent spatial frequencies that are odd harmonics of these spatial frequencies. For example, with N=4, the outputs of a conventional
Butler matrix would correspond to spatial frequencies of ±22 1/2, ±67 1/2, ±112 1/2, and ±157 1/2 electrical degrees per input port. In the modified Butler matrix 30(q), the spatial-frequency
difference between any two adjacent output ports is the same as that for a conventional Butler matrix. However, the spatial frequencies represented by the output ports 32(q,n) of the modified Butler
matrix 30(q) differ from those of a conventional Butler matrix by one-half of that spatial-frequency difference. For N=4, therefore, the output ports 32(q) correspond to spatial frequencies of 0,
±45, ±90, ±135, and 180 electrical degrees per input port.
The modified Butler matrix can be constructed in a number of ways. The most straightforward conceptually is to provide phase shifters (not shown in the drawings) at the input ports of a conventional
Butler matrix. Each of the phase shifters provides a different phase shift, the phase shifts increasing with input-port position in such a manner that the phase shifts of adjacent phase shifters
differ by one-half the spatial-frequency spacing of the output ports 32.
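The half-spacing offset just described can be sketched numerically. The following Python fragment is an illustrative model, not anything from the patent: it treats the conventional Butler matrix as a discrete spatial transform whose ports sit on half-integer spatial frequencies, and shows that a half-bin phase ramp on the inputs re-centers the ports on integer spatial frequencies (0, ±45, ±90, ±135, 180 degrees for M = 8, matching the N = 4 example above). The function names and the choice M = 8 are assumptions for illustration.

```python
import cmath
import math

M = 8                       # 2N = 8 input ports (N = 4)
delta = 2 * math.pi / M     # spatial-frequency spacing between adjacent ports

def conventional_butler(x):
    """Output port k responds to the half-integer spatial frequency (k + 1/2)*delta."""
    return [sum(x[m] * cmath.exp(-1j * (k + 0.5) * delta * m) for m in range(M))
            for k in range(M)]

def modified_butler(x):
    """Half-bin input phase ramp re-centers the ports on integer frequencies k*delta.
    Returns output magnitudes."""
    shifted = [x[m] * cmath.exp(1j * 0.5 * delta * m) for m in range(M)]
    return [abs(v) for v in conventional_butler(shifted)]

# A pure spatial tone of 2 cycles around the array lands exactly on output port 2
tone = [cmath.exp(1j * 2 * delta * m) for m in range(M)]
out = modified_butler(tone)
print(out.index(max(out)))  # -> 2
```

This mirrors the construction described above: the ramp plays the role of the per-input phase shifters placed in front of a conventional Butler matrix.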
The ultimate result of the system is that each output port 32(q,n) is associated with an elevation angle of 90°×q/Q and an azimuth angle of 180°×n/N. A plane wave that arrives at the antenna array 12
at a given combination of azimuth angle and elevation angle causes the greatest response on the output port associated with that combination of angles, and the time within a compressive-receiver
sweep at which the response occurs is an indication of the temporal frequency of the plane wave.
FIG. 2 shows one of the processing units 24(q) of FIG. 1 in more detail. Associated with each input port 22(q,n) and output port 26(q,n) is an analog multiplier 34(q,n) which multiplies the signal
from the input port 22(q,n) by a processing factor represented by a signal that a function generator 36(q,n) produces. The value of the processing factor is shown in FIG. 2, where J[n] is the
nth-order Bessel function of the first kind. The W[n] 's are weighting factors that would be used in most practical applications to improve the dynamic range of the system output, as will be
described in more detail below. The weighting factors are constants that differ for different function generators within a processor unit 24 but are the same for corresponding function generators in
different processor units.
The processing factors depend on q, n, and the wave number B. The wave number, in turn, is proportional to the antenna-signal temporal-frequency component to which the compressive receiver is
responding at the current point in the compressive-receiver sweep; that is, the processing factors are functions of time within a sweep. The processing factor produced by function generator 36(q,n),
if a factor of 2π is ignored, is the weighting factor multiplied by (-1)^n times a quantity that, as will shortly be explained, can be described as the azimuth-independent factor in a particular
antenna pattern. This antenna pattern is one that is generated by an appropriate phasing of the circular array 12. Specifically, if the antenna elements were used for transmission and driven at the
temporal frequency corresponding to B but at different phases so that the phases advance around the array at a spatial frequency of n electrical degrees per spatial degree, then the far-field antenna
pattern associated with processing unit 24(q) is:
2πj^n e^jnφ J[n] (βd cos(qπ/2Q))
This is the antenna pattern mentioned above. The only azimuth-dependent factor in this pattern is e^jnφ. The product of the remaining factors--i.e., the azimuth-independent factor--is included in the processing factor in the manner just described.
The function generator typically includes a read-only digital memory containing values of the processing factor, which, for any individual function generator 36(q,n), is a function of radiation
temporal frequency only. Since the frequency to which a compressive-receiver output is responding at any given time is a function of the time during the compressive-receiver sweep, the function
generator is synchronized with the sweep to achieve the correct timing.
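As a hedged sketch of what such a read-only memory might hold, the fragment below tabulates one processing factor, d[q],n = (-j)^n J[n](βd cos θ[q]) per equation (7), across a hypothetical linear sweep of the wave number. The radius, sweep limits, table size, and the series-based Bessel evaluation are all invented for illustration and do not come from the patent.

```python
import math

def bessel_jn(n, x, terms=25):
    """J_n(x) for n >= 0 via the ascending power series (adequate for small x)."""
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

# Illustrative parameters: radius d, elevation index q of Q, spatial index n
d, q, Q, n = 1.0, 2, 6, 3
theta_q = math.pi / 2 * q / Q                      # elevation for this unit
sweep = [0.5 + 2.5 * t / 63 for t in range(64)]    # 64 ROM addresses over the sweep

# ROM contents for function generator 36(q,n): one processing-factor sample
# per sweep step; (-1j)**n folds in the constant phase of eq. (7)
rom = [(-1j) ** n * bessel_jn(n, b * d * math.cos(theta_q)) for b in sweep]
print(len(rom))   # -> 64
```

In hardware, the real and imaginary parts of each sample would be stored separately and clocked out in synchronism with the compressive-receiver sweep, as the text describes.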
The function generator typically also includes a digital-to-analog converter to which the outputs of the read-only memory are fed. The outputs of the digital-to-analog converter are applied to the
analog multiplier 34(q,n). The functions depicted in FIG. 2 are all either purely real or purely imaginary. Therefore, although the multipliers 34(q,n) perform complex multiplications, they can be
provided as simple doubly balanced modulators with or without 90° phase shifts; there is no need to include a device for adding phase and quadrature components.
The broader teachings of the present invention can be practiced without the temporal Fourier transformation that the compressive receiver 18 performs. That is, the compressive receiver 18 could, in
principle, be replaced with a modified Butler matrix or similar device. In such a system, the processing unit 24(q) would receive the results of antenna signals of all received temporal frequencies
simultaneously. Accordingly, the multiplier and function generator of FIG. 2 would be replaced with a filter network for performing the functions depicted in FIG. 2. Although synthesis of a network
implementing one of the functions of FIG. 2 does not appear to be straightforward, a close approximation can readily be achieved in a filter having a reasonable number of poles by employing available
network-synthesis routines.
We now turn to a mathematical treatment of the operation of the invention. This treatment will proceed with the aid of FIG. 3, which includes a circle 38 that represents a continuous linear array
that the discrete array 12 approximates. A plane wave propagates along a direction of arrival 40 at an angle of elevation θ and an azimuth angle Φ with respect to an arbitrary azimuth reference 42.
To determine the phase, relative to the center 44 of the array, of the signal at a given element position 46 whose azimuth angle is Φ', one determines the perpendicular distance between the element
46 and a plane normal to the direction of arrival 40 through the center 44. If the radius of the circular array is d, then the perpendicular distance is given by d cos(Φ'-Φ)cosθ. Therefore, the
phasor representation of the signal at a position Φ' is given by:
E(Φ') = e^jβd cos(Φ'-Φ)cos θ   (1)
It will be recalled that the compressive receiver 18 generates an output ensemble at any given time that represents a spatial Fourier transformation of the input-signal components in its input
ensemble having the temporal frequency associated with that particular time in the compressive-receiver sweep. Accordingly, at a time associated with the frequency for which the wave number is equal
to B, the signal at a compressive-receiver output port 20(n) is represented by the spatial Fourier coefficient with which it is associated: ##EQU1##
Substitution of equation (1) into equation (2) yields the following expression for the nth Fourier coefficient c[n] : ##EQU2##
Evaluation of the integral in equation (3) can be performed with the aid of Hansen's integral formula for an nth-order Bessel function of the first kind: ##EQU3##
This results in the following expression for the signal at output port 20(n):
C[n] = 2πj^n e^jnΦ J[n] (βd cos θ)   (5)
Inspection of equation (5) reveals that this output signal is a function of the azimuth and elevation angles of the source and is also a function of the wave number, which is proportional to temporal frequency.
In general, the radiation may come from more than one direction, and the output c[n] is the sum of the signals given by equation (5) for sources at different elevation and azimuth angles. At a given
time within the compressive-receiver sweep, however, the output at port 20(n) responds to signals of only a single frequency. Therefore, the outputs in response to antenna signals that are
sufficiently separated in temporal frequency are not added, because they occur at different times.
As was stated above, each processing unit 24(q) multiplies each of the c[n] 's by a different factor and provides the output to the modified Butler matrix 30(q), which generates a spatial Fourier
transform of the signal ensemble that it receives. This output is given by the following phase shifting and summation of discrete inputs: ##EQU4## where F[q],n is the signal at output port 32(q,n), d[q],n is the value of the factor applied by multiplier 34(q,n), and Φ[n] = πn/N. For present purposes, it will be assumed that the value of the weighting factor W[n] is unity. In most practical
embodiments, W[n] will differ from unity, but the assumption of a unity value will simplify the discussion, and the effect of the weighting factors will be considered at the conclusion of the
mathematical development.
If W[n] =1, the value of d[q],n is given by the following expression
d[q],n =(-j)^n J[n] (βd cosθ[q]) (7)
where θ[q] =q π/2Q.
Substitution of equation (5) and equation (7) into equation (6) yields the following expression for the signal on Butler-matrix output port 32(q,n): ##EQU5## Evaluation of this expression can be made
by application of Graf's addition theorem for Bessel functions: ##EQU6## where ##EQU7##
Comparison of equations (8) and (9) reveals that, with the exception of a factor 2π, the only difference between the expressions to the right of the equals signs is that equation (8) includes only a
finite number of addends. With the exception of the 2π factor, therefore, the expression in equation (8) is a good approximation for the expression of equation (9) whenever Bd is less than N, i.e.,
whenever the inter-element spacing is less than a wavelength. If this requirement is met, the signal on Butler-matrix output port 32(q,n) is given by the following expression: ##EQU8##
The value of an output of the form set forth in equation (11) can be understood if it is recognized, first, that the zero-order Bessel function of the first kind has its overall maximum when its
argument is zero and, second, that the Bessel-function argument in equation (11) goes to zero for a given output F[q],n only when the elevation angle of the source is equal to θ[q] and, at the same
time, the azimuth angle of the source is equal to Φ[n]. Therefore, each output port 32(q,n) is associated with a direction (θ[q], Φ[n]), i.e., the direction that provides the maximum response at the
output port 32.
Although the overall maximum of the zero-order Bessel function occurs for an argument of zero, it has local maxima for other arguments, and it may prove desirable to "smooth" the output to reduce
these local maxima. It is for this purpose that the weighting factors W[n] are shown in the function generators 36 of FIG. 2. The weighting factors that are best for a particular situation can be
determined empirically. The resultant smoothing can increase the dynamic range of the system, although this increase in dynamic range is usually accompanied by some loss in resolution.
It is interesting to note that equation (11) is the output that one would obtain from a phased circular array aimed at (θ[q], Φ[n]) for a wave number of B. Thus, the system of the present invention is
equivalent to a large number of phased circular arrays, each one phased for a different combination of direction and temporal frequency.
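The whole chain — element signals per equation (1), a first spatial transform, the processing factors of equation (7), and a second spatial transform per equation (6) — can be exercised in a small numerical sketch. Everything below is an illustrative model rather than the patent's hardware: the array size, direction grids, and βd value are assumptions, and the Bessel function is evaluated with a simple quadrature. The point demonstrated is the one equation (11) predicts: the strongest output appears on the port matching the source's elevation and azimuth.

```python
import cmath
import math

def bessel_jn(n, x, steps=400):
    """J_n(x) for any integer n via the integral representation
    (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) dtau (trapezoidal rule)."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))     # endpoint terms
    for k in range(1, steps):
        tau = k * h
        total += math.cos(n * tau - x * math.sin(tau))
    return total * h / math.pi

M = 16                     # 2N antenna elements
N = M // 2
Q = 6                      # elevation ports theta_q = 90*q/Q degrees, q = 0..Q
beta_d = 3.0               # wave number times radius; kept below N as required

# Source direction, chosen on the output grid
q_src, k_src = 2, 3
theta_s = math.pi / 2 * q_src / Q      # 30 degrees elevation
phi_s = math.pi * k_src / N            # 67.5 degrees azimuth

# Element signals per eq. (1)
elem_phi = [2 * math.pi * m / M for m in range(M)]
E = [cmath.exp(1j * beta_d * math.cos(p - phi_s) * math.cos(theta_s))
     for p in elem_phi]

# First spatial Fourier transform (the spatial part of the compressive receiver)
ns = range(-N + 1, N + 1)
c = {n: sum(E[m] * cmath.exp(-1j * n * elem_phi[m]) for m in range(M))
     for n in ns}

# Processing factors per eq. (7), then the second spatial transform per eq. (6);
# scan every (elevation, azimuth) port and keep the strongest response.
best, best_mag = None, -1.0
for q in range(Q + 1):
    u = beta_d * math.cos(math.pi / 2 * q / Q)
    d_qn = {n: (-1j) ** n * bessel_jn(n, u) for n in ns}
    for k in ns:                       # azimuth ports Phi_k = pi*k/N
        F = sum(d_qn[n] * c[n] * cmath.exp(1j * n * math.pi * k / N)
                for n in ns)
        if abs(F) > best_mag:
            best_mag, best = abs(F), (q, k)

print(best)   # -> (2, 3): the port matching the source's elevation and azimuth
```

The sign conventions in this sketch were chosen so that the phase factors of the first transform cancel against the (-j)^n of the processing factors, which is how the Graf-addition-theorem argument above collapses the double sum to a J[0] peaked at the source direction.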
It will be apparent to those skilled in the art that the teachings of the present invention can be employed in systems that differ somewhat from that illustrated in the foregoing description. As was
stated above, the two-dimensional compressive receiver 18 could be replaced with a modified Butler matrix. Additionally, the modified Butler matrix 30 could be replaced with a beam-forming
surface-acoustic-wave delay line, which, like the modified Butler matrix, performs a spatial Fourier transformation. Furthermore, equivalent devices for processing the signals digitally could be
substituted: A two-dimensional fast-Fourier-transform device could be substituted for the compressive receiver 18, for example, and a plurality of one-dimensional fast-Fourier-transform devices could
replace the Butler matrices. The processing units would then use digital multipliers.
Other variations of the device illustrated in the present invention will also suggest themselves to those skilled in the art in light of the foregoing disclosure.
Visualization of Riemann Surfaces IIa:
Compositions of Elementary Transcendental Functions
Michael Trott
Wolfram Research, Inc.
Today I deal with a slightly more mathematical subject than last time. But it is actually not the mathematics itself that is in the center, but in my opinion, the pictures of intricate surfaces that
arise from the mathematics. Because of length, "Visualization of Riemann Surfaces II" will be split into three installments.
Why Use Multivalued Functions?
Some Simple Examples of Riemann Surfaces
Works Cited
Converted by Mathematica April 20, 2000
Need help with a Num. analysis question for unique, trivial and multiple solutions
October 2nd 2011, 08:33 AM #1
Oct 2011
Finding 2 unknown coefficients in a System of Linear Equations
Hey so I have a question about a topic in my Numerical Analysis class (Civil engineering major). Here it is,
1. Give all possible values of A and B for the following system of equations to have the following types of solutions or no solution. If it is not possible then circle "not possible".
a.) Non trivial unique solution
b.) Only trivial solution
c.) Multiple solutions
d.) No solution
I know that if the determinant of the matrix form of these equations is 0 then I have no solution but that's really all I know and I don't know where to start.
Any help would be appreciated; this is the one topic I'm confused on for my test this Friday, and my book does not include help on this topic.
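Not part of the original thread, but the determinant test mentioned above can be illustrated with a hypothetical 2×2 homogeneous system (the poster's actual equations were attached as images and are not reproduced here, so the matrix below is purely an invented stand-in): a homogeneous system has a nontrivial solution exactly when the coefficient determinant vanishes, and only the trivial solution otherwise.

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Hypothetical homogeneous system:  x + A*y = 0,  2x + 6y = 0
for A in (1, 2, 3, 4):
    m = [[1, A], [2, 6]]
    kind = ("multiple (nontrivial) solutions" if det2(m) == 0
            else "only the trivial solution")
    print(A, det2(m), kind)
# Only A = 3 makes the determinant 0, so only A = 3 admits nontrivial solutions
```

The same idea extends to the posted problem: expand the determinant as a polynomial in A and B, then solve det = 0 for the multiple-solution case and det ≠ 0 for the unique-solution case.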
Last edited by stevze; October 2nd 2011 at 08:42 PM.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-algebra/189342-need-help-num-analysis-question-unique-trivial-multiple-solutions.html","timestamp":"2014-04-16T16:59:27Z","content_type":null,"content_length":"30618","record_id":"<urn:uuid:179f1f44-8cd2-46c8-a005-f68df26065ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
search results
Results 1 - 6 of 6
1. CMB 2010 (vol 53 pp. 587)
Hulls of Ring Extensions
We investigate the behavior of the quasi-Baer and the right FI-extending right ring hulls under various ring extensions including group ring extensions, full and triangular matrix ring extensions,
and infinite matrix ring extensions. As a consequence, we show that for semiprime rings $R$ and $S$, if $R$ and $S$ are Morita equivalent, then so are the quasi-Baer right ring hulls $\widehat{Q}_{\mathfrak{qB}}(R)$ and $\widehat{Q}_{\mathfrak{qB}}(S)$ of $R$ and $S$, respectively. As an application, we prove that if unital $C^*$-algebras $A$ and $B$ are Morita equivalent as rings, then the
bounded central closure of $A$ and that of $B$ are strongly Morita equivalent as $C^*$-algebras. Our results show that the quasi-Baer property is always preserved by infinite matrix rings, unlike
the Baer property. Moreover, we give an affirmative answer to an open question of Goel and Jain for the commutative group ring $A[G]$ of a torsion-free Abelian group $G$ over a commutative
semiprime quasi-continuous ring $A$. Examples that illustrate and delimit the results of this paper are provided.
Keywords:(FI-)extending, Morita equivalent, ring of quotients, essential overring, (quasi-)Baer ring, ring hull, u.p.-monoid, $C^*$-algebra
Categories:16N60, 16D90, 16S99, 16S50, 46L05
2. CMB 2010 (vol 53 pp. 223)
Density of Polynomial Maps
Let $R$ be a dense subring of $\operatorname{End}(_DV)$, where $V$ is a left vector space over a division ring $D$. If $\dim{_DV}=\infty$, then the range of any nonzero polynomial $f(X_1,\dots,X_m)$ on $R$ is dense in $\operatorname{End}(_DV)$. As an application, let $R$ be a prime ring without nonzero nil one-sided ideals and $0\ne a\in R$. If $af(x_1,\dots,x_m)^{n(x_i)}=0$ for all $x_1,\dots,x_m\in R$, where $n(x_i)$ is a positive integer depending on $x_1,\dots,x_m$, then $f(X_1,\dots,X_m)$ is a polynomial identity of $R$ unless $R$ is a finite matrix ring over a finite field.
Keywords:density, polynomial, endomorphism ring, PI
Categories:16D60, 16S50
3. CMB 2009 (vol 52 pp. 145)
$2$-Clean Rings
A ring $R$ is said to be $n$-clean if every element can be written as a sum of an idempotent and $n$ units. The class of these rings contains clean rings and $n$-good rings in which each element is
a sum of $n$ units. In this paper, we show that for any ring $R$, the endomorphism ring of a free $R$-module of rank at least 2 is $2$-clean and that the ring $B(R)$ of all $\omega\times \omega$
row and column-finite matrices over any ring $R$ is $2$-clean. Finally, the group ring $RC_{n}$ is considered where $R$ is a local ring.
Keywords:$2$-clean rings, $2$-good rings, free modules, row and column-finite matrix rings, group rings
Categories:16D70, 16D40, 16S50
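As a toy illustration of the definition above (my own example, not taken from the paper): the ring of integers modulo 6 is 2-clean, which a short brute-force search confirms.

```python
# Brute-force check that Z/6Z is 2-clean: every element is a sum of
# one idempotent and two units (illustrative only, not from the paper).
n = 6
units = [u for u in range(n) if any(u * v % n == 1 for v in range(n))]
idempotents = [e for e in range(n) if e * e % n == e]

def two_clean_witness(a):
    """Return (e, u1, u2) with a = e + u1 + u2 (mod n), or None."""
    for e in idempotents:
        for u1 in units:
            for u2 in units:
                if (e + u1 + u2) % n == a:
                    return e, u1, u2
    return None

for a in range(n):
    print(a, two_clean_witness(a))
```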
4. CMB 2006 (vol 49 pp. 265)
Endomorphisms That Are the Sum of a Unit and a Root of a Fixed Polynomial
If $C=C(R)$ denotes the center of a ring $R$ and $g(x)$ is a polynomial in $C[x]$, Camillo and Simón called a ring $g(x)$-clean if every element is the sum of a unit and a root of $g(x)$. If $V$ is a vector space of countable dimension over a division ring $D$, they showed that $\operatorname{End}(_DV)$ is $g(x)$-clean provided that $g(x)$ has two roots in $C(D)$. If $g(x)=x-x^{2}$ this shows that $\operatorname{End}(_DV)$ is clean, a result of Nicholson and Varadarajan. In this paper we remove the countable condition, and in fact prove that $\operatorname{End}(_RM)$ is $g(x)$-clean for any semisimple module $M$ over an arbitrary ring $R$ provided that $g(x)\in (x-a)(x-b)C[x]$ where $a,b\in C$ and both $b$ and $b-a$ are units in $R$.
Keywords:Clean rings, linear transformations, endomorphism rings
Categories:16S50, 16E50
5. CMB 2000 (vol 43 pp. 413)
Non-Isomorphic Maximal Orders with Isomorphic Matrix Rings
We construct a countably infinite family of pairwise non-isomorphic maximal ${\mathbb Q}[X]$-orders such that the full $2$ by $2$ matrix rings over these orders are all isomorphic.
Categories:16S50, 16H05, 16N60
6. CMB 1997 (vol 40 pp. 198)
The ${\cal J}_0$-radical of a matrix nearring can be intermediate
An example is constructed to show that the ${\cal J}_0$-radical of a matrix nearring can be an intermediate ideal. This solves a conjecture put forward in [1].
Categories:16Y30, 16S50, 16D25 | {"url":"http://cms.math.ca/cmb/msc/16S50","timestamp":"2014-04-18T03:10:44Z","content_type":null,"content_length":"34349","record_id":"<urn:uuid:44ba0824-33df-403e-ba52-3cc72254c710>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sum of Values Between 2 Dates [Excel Formulas]
Posted on September 27th, 2011 in Excel Howtos, Learn Excel - 14 comments
Let's just say you run a nice little orange shop called "Joe's Awesome Oranges". And being an Excel buff, you record the daily sales into a workbook, in this format.
After recording the sales for a couple of months, you got a refreshing idea: why not analyze the sales between any given 2 dates, just for analysis' sake?
So you entered 2 dates, Starting Date in cell F5 and Ending Date in cell F6
How would you sum up the sales between the dates in F5 & F6?
This is where use the powerful SUMIFS formula.
Assuming the dates are in column B & sales are in column C,
we write =SUMIFS($C$5:$C$95,$B$5:$B$95,">="&$F$5,$B$5:$B$95,"<="&$F$6)
to calculate the sum of sales between the dates in F5 & F6.
How does this formula work?
• $C$5:$C$95 portion: This is the range of cells where our Sales values are recorded. We want these to be summed up based on the conditions as below.
• Condition 1: $B$5:$B$95 >= $F$5: This condition tells SUMIFS to check Column B for any dates on or after F5
• Condition 2: $B$5:$B$95 <= $F$6: This condition tells SUMIFS to check Column B for any dates on or before F6
• When combined, the SUMIFS formula checks for both conditions and adds sales only for dates between Starting (F5) and Ending (F6) dates.
What formula you should use in Excel 2003?
As you may know, SUMIFS formula does not work in earlier versions of Excel. But you don’t have to shut your orange shop because of that. We can use the all powerful SUMPRODUCT formula for this.
For example, =SUMPRODUCT(($B$5:$B$95>=$F$5)*($B$5:$B$95<=$F$6),$C$5:$C$95) would work the same.
Learn more about SUMPRODUCT formula & why it is awesome.
We can even use SUM & OFFSET formulas, if…
We can also use SUM & OFFSET combination to perform this calculation, provided dates are in smallest first order and all dates are entered. For the example, see download file.
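Outside Excel, the same between-two-dates logic is a short filter-and-sum. Here is a minimal Python sketch mirroring the SUMIFS version (the dates and amounts are made-up sample data, not the workbook's):

```python
from datetime import date

# Hypothetical daily sales records: (date, amount)
sales = [
    (date(2011, 7, 1), 120),
    (date(2011, 7, 2), 95),
    (date(2011, 7, 3), 143),
    (date(2011, 7, 4), 80),
]

start, end = date(2011, 7, 2), date(2011, 7, 3)

# Mirrors SUMIFS: keep rows with start <= date <= end, then add them up.
total = sum(amount for day, amount in sales if start <= day <= end)
print(total)  # 238
```

The two comparisons in the condition play the role of the two SUMIFS criteria.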
Download Example Workbook:
Click here to download example workbook & play with it.
How would you sum up values between 2 dates?
In reporting situations, showing summary of values between 2 dates is a common requirement. So I use either formulas like above or Pivot Tables to do this.
What about you? How would you sum up values between 2 dates? Please share your ideas & tips using comments.
Learn More Date Related Formulas:
Want to Learn More Formulas? Join Our Crash Course
If you want to learn SUMIFS, SUMPRODUCT, OFFSET and 40 other day to day formulas, then consider my Excel Formula Crash Course. It has 31 lessons split into 6 modules and makes you awesome in Excel.
Click here to learn more about this.
Analyzing Performance of Stocks using Excel [Example] Incell Sales Funnel Charts
Written by Chandoo
Tags: advanced excel, array formulas, date and time, downloads, Learn Excel, Microsoft Excel Formulas, OFFSET(), spreadsheets, sum(), sumifs, sumproduct
14 Responses to “Sum of Values Between 2 Dates [Excel Formulas]”
1. I would apply a filter and use function subtotal, with option 9. This way you can see multiple views based on the filter.
2. hey Chandoo, the solutions you proposed are very efficient, but if I wanted to be fancy I would do it this way .. the references are as your example workbook.
3. I like things simple:
4. use something like: =SUM(OFFSET(B1,0,0,DATEDIF(A1,D1,”d”)))
and have D1 be the date that I want to sum to.
5. In Excel 2003 (and earlier) I’d use an array formula to calculate either with nested if statements (as shown here) or with AND.
Note that I truly made this for BETWEEN the dates, not including the dates
6. I turned the data set into a table named Dailies.
I named the two limits StartDate and EndDate.
And used an array formula:
7. If I would still be using the old Excel I would do it as follows:
Works as simple as it is.
8. =sum(index(c:c,match(startdate,c:c,1)+1):index(c:c,match(enddate,c:c,1))
9. =sum(index(c:c,match(startdate,b:b,1)+1):index(c:c,match(enddate,b:b,1))
10. Great examples and thanks to Chandoo. You have simplified my work.
12. [...] I'm not sure I understand your question fully, but have a look at this: Sum of Values Between 2 Dates [Excel Formulas] | Chandoo.org – Learn Microsoft Excel Online [...]
13. Thank you! Thank you! Thank you!
14. =SUMIF(A2:A11;">="&B13;B2:B11)-SUMIF(A2:A11;"<"&A11;B2:B11)
Analyzing Performance of Stocks using Excel [Example] Incell Sales Funnel Charts | {"url":"http://chandoo.org/wp/2011/09/27/sum-between-2-dates/","timestamp":"2014-04-19T15:30:06Z","content_type":null,"content_length":"53679","record_id":"<urn:uuid:f1192a46-440a-40c0-ae4c-472c0ccae621>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
V. Aravantinos, R. Caferra and N. Peltier (2011) Decidability and Undecidability Results for Propositional Schemata
V. Aravantinos, R. Caferra and N. Peltier (2011) "Decidability and Undecidability Results for Propositional Schemata", Volume 40, pages 599-656
PDF | PostScript | doi:10.1613/jair.3351
We define a logic of propositional formula schemata adding to the syntax of propositional logic indexed propositions and iterated connectives ranging over intervals parameterized by arithmetic
variables. The satisfiability problem is shown to be undecidable for this new logic, but we introduce a very general class of schemata, called bound-linear, for which this problem becomes decidable.
This result is obtained by reduction to a particular class of schemata called regular, for which we provide a sound and complete terminating proof procedure. This schemata calculus allows one to
capture proof patterns corresponding to a large class of problems specified in propositional logic. We also show that the satisfiability problem becomes again undecidable for slight extensions of
this class, thus demonstrating that bound-linear schemata represent a good compromise between expressivity and decidability.
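To make the idea of a parameterized schema concrete, here is a toy sketch (far simpler than the paper's formal syntax): once the arithmetic parameter n is fixed, an iterated connective such as the conjunction of p_i -> p_{i+1} for i = 1..n collapses to an ordinary propositional formula.

```python
# Toy instantiation of the schema  AND_{i=1}^{n} (p_i -> p_{i+1}).
# For a fixed n it is just an ordinary propositional formula.

def instantiate(n):
    """List the (i, i+1) implication clauses of the instance."""
    return [(i, i + 1) for i in range(1, n + 1)]

def holds(n, assignment):
    """Evaluate the instance under a truth assignment {index: bool}."""
    return all((not assignment[i]) or assignment[j]
               for i, j in instantiate(n))

print(holds(3, {1: True, 2: True, 3: True, 4: True}))   # True
print(holds(3, {1: True, 2: False, 3: True, 4: True}))  # False
```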
Click here to return to Volume 40 contents list | {"url":"http://www.jair.org/papers/paper3351.html","timestamp":"2014-04-20T05:46:05Z","content_type":null,"content_length":"3430","record_id":"<urn:uuid:b03213a0-3308-4334-86b5-8d1f1b5ac197>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finite Mathematics with Applications in the Management, Natural, and Social Sciences, Tenth Edition
Finite Mathematics with Applications in the Management, Natural, and Social Sciences, Tenth Edition | 978-0-321-64554-8
ISBN-13: 9780321645548
Author(s): Margaret L. Lial; Thomas W. Hungerford; John P. Holcomb,
Price Information
Rental Options / Expiration Date
eTextbook Digital Rental: 180 days
Our price: $75.99
Regular price:$190.33
You save:$114.34
Additional product details
ISBN-10 0321646290, ISBN-13 9780321646293
ISBN-10 0-321-64554-5, ISBN-13 978-0-321-64554-8
Publisher: Pearson
Copyright year: © 2011
Pages: 704
For freshman/sophomore, 1 or 2-semester or 2—3 quarter courses covering topics in college algebra and finite mathematics for students in business, economics, social sciences, or life sciences
This book presents the content and applications in an accessible manner while maintaining an appropriate level of rigor. The authors proceed from familiar material to new, and from concrete examples
to general rules and formulas. This edition retains its focus on real-world problem solving, but has been refreshed with a wealth of new data in the examples and exercises–42% of the 452 examples are
new or revised, and 31% of the 3,741 exercises are new or revised. The Table of Contents lends itself to tailoring the course to meet the specific needs of students and instructors.
CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.
Marketing Promotion
Three Ways to Study with eTextbooks!
• Read online from your computer or mobile device.
• Read offline on select browsers and devices when the internet won't be available.
• Print pages to fit your needs.
CourseSmart eTextbooks let you study the best way – your way. | {"url":"http://www.coursesmart.com/9780321646293","timestamp":"2014-04-18T18:46:52Z","content_type":null,"content_length":"51863","record_id":"<urn:uuid:536e48f1-9d73-4906-9f60-d291c88099ce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convex rational polyhedral cones
This module was designed as part of a framework for toric varieties (variety, fano_variety). While the emphasis is on strictly convex cones, non-strictly convex cones are supported as well. Work with distinct lattices (in the sense of discrete subgroups spanning vector spaces) is supported. The default lattice is the ToricLattice \(N\) of the appropriate dimension. The only case in which you must specify the lattice explicitly is the creation of a 0-dimensional cone, where the dimension of the ambient space cannot be guessed.
• Andrey Novoseltsev (2010-05-13): initial version.
• Andrey Novoseltsev (2010-06-17): substantial improvement during review by Volker Braun.
• Volker Braun (2010-06-21): various spanned/quotient/dual lattice computations added.
• Volker Braun (2010-12-28): Hilbert basis for cones.
• Andrey Novoseltsev (2012-02-23): switch to PointCollection container.
Use Cone() to construct cones:
sage: octant = Cone([(1,0,0), (0,1,0), (0,0,1)])
sage: halfspace = Cone([(1,0,0), (0,1,0), (-1,-1,0), (0,0,1)])
sage: positive_xy = Cone([(1,0,0), (0,1,0)])
sage: four_rays = Cone([(1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1)])
For all of the cones above we have provided primitive generating rays, but in fact this is not necessary - a cone can be constructed from any collection of rays (from the same space, of course). If there are non-primitive (or even non-integral) rays, they will be replaced with primitive ones. If there are extra rays, they will be discarded. Of course, this means that Cone() has to do some work before actually constructing the cone, and sometimes this is not desirable if you know for sure that your input is already "good". In this case you can use the options check=False to force Cone() to use exactly the directions that you have specified and normalize=False to force it to use exactly the rays that you have specified. However, it is better not to use these possibilities without necessity, since cones are assumed to be represented by a minimal set of primitive generating rays. See Cone() for further documentation on construction.
Once you have a cone, you can perform numerous operations on it. The most important ones are, probably, ray accessing methods:
sage: rays = halfspace.rays()
sage: rays
N( 0, 0, 1),
N( 0, 1, 0),
N( 0, -1, 0),
N( 1, 0, 0),
N(-1, 0, 0)
in 3-d lattice N
sage: rays.set()
frozenset([N(1, 0, 0), N(-1, 0, 0), N(0, 1, 0), N(0, 0, 1), N(0, -1, 0)])
sage: rays.matrix()
[ 0 0 1]
[ 0 1 0]
[ 0 -1 0]
[ 1 0 0]
[-1 0 0]
sage: rays.column_matrix()
[ 0 0 0 1 -1]
[ 0 1 -1 0 0]
[ 1 0 0 0 0]
sage: rays(3)
N(1, 0, 0)
in 3-d lattice N
sage: rays[3]
N(1, 0, 0)
sage: halfspace.ray(3)
N(1, 0, 0)
The method rays() returns a PointCollection with the \(i\)-th element being the primitive integral generator of the \(i\)-th ray. It is possible to convert this collection to a matrix with either
rows or columns corresponding to these generators. You may also change the default output_format() of all point collections to be such a matrix.
If you want to do something with each ray of a cone, you can write
sage: for ray in positive_xy: print ray
N(1, 0, 0)
N(0, 1, 0)
There are two dimensions associated to each cone - the dimension of the subspace spanned by the cone and the dimension of the space where it lives:
sage: positive_xy.dim()
2
sage: positive_xy.lattice_dim()
3
You also may be interested in this dimension:
sage: dim(positive_xy.linear_subspace())
0
sage: dim(halfspace.linear_subspace())
2
Or, perhaps, all you care about is whether it is zero or not:
sage: positive_xy.is_strictly_convex()
True
sage: halfspace.is_strictly_convex()
False
You can also perform these checks:
sage: positive_xy.is_simplicial()
True
sage: four_rays.is_simplicial()
False
sage: positive_xy.is_smooth()
True
You can work with subcones that form faces of other cones:
sage: face = four_rays.faces(dim=2)[0]
sage: face
2-d face of 3-d cone in 3-d lattice N
sage: face.rays()
N(1, 1, 1),
N(1, -1, 1)
in 3-d lattice N
sage: face.ambient_ray_indices()
(0, 1)
sage: four_rays.rays(face.ambient_ray_indices())
N(1, 1, 1),
N(1, -1, 1)
in 3-d lattice N
If you need to know inclusion relations between faces, you can use
sage: L = four_rays.face_lattice()
sage: map(len, L.level_sets())
[1, 4, 4, 1]
sage: face = L.level_sets()[2][0]
sage: face.rays()
N(1, 1, 1),
N(1, -1, 1)
in 3-d lattice N
sage: L.hasse_diagram().neighbors_in(face)
[1-d face of 3-d cone in 3-d lattice N,
1-d face of 3-d cone in 3-d lattice N]
The order of faces in level sets of the face lattice may differ from the order of faces returned by faces(). While the first order is random, the latter one ensures that one-dimensional faces are
listed in the same order as generating rays.
When all the functionality provided by cones is not enough, you may want to check if you can do necessary things using lattice polytopes and polyhedra corresponding to cones:
sage: four_rays.lattice_polytope()
A lattice polytope: 3-dimensional, 5 vertices.
sage: four_rays.polyhedron()
A 3-dimensional polyhedron in ZZ^3 defined as
the convex hull of 1 vertex and 4 rays
And of course you are always welcome to suggest new features that should be added to cones!
[Fulton] William Fulton, "Introduction to Toric Varieties", Princeton University Press
sage.geometry.cone.Cone(rays, lattice=None, check=True, normalize=True)
Construct a (not necessarily strictly) convex rational polyhedral cone.
□ rays – a list of rays. Each ray should be given as a list or a vector convertible to the rational extension of the given lattice. May also be specified by a Polyhedron_base object;
□ lattice – ToricLattice, \(\ZZ^n\), or any other object that behaves like these. If not specified, an attempt will be made to determine an appropriate toric lattice automatically;
□ check – by default the input data will be checked for correctness (e.g. that all rays have the same number of components) and generating rays will be constructed from rays. If you know that the input is a minimal set of generators of a valid cone, you may significantly decrease construction time using the check=False option;
□ normalize – you can further speed up construction using the normalize=False option. In this case rays must be a list of immutable primitive rays in lattice. In general, you should not use this option; it is designed for code optimization and does not give as drastic an improvement in speed as the previous one.
□ convex rational polyhedral cone determined by rays.
Let’s define a cone corresponding to the first quadrant of the plane (note, you can even mix objects of different types to represent rays, as long as you let this function to perform all the
checks and necessary conversions!):
sage: quadrant = Cone([(1,0), [0,1]])
sage: quadrant
2-d cone in 2-d lattice N
sage: quadrant.rays()
N(1, 0),
N(0, 1)
in 2-d lattice N
If you give more rays than necessary, the extra ones will be discarded:
sage: Cone([(1,0), (0,1), (1,1), (0,1)]).rays()
N(0, 1),
N(1, 0)
in 2-d lattice N
However, this work is not done with check=False option, so use it carefully!
sage: Cone([(1,0), (0,1), (1,1), (0,1)], check=False).rays()
N(1, 0),
N(0, 1),
N(1, 1),
N(0, 1)
in 2-d lattice N
Even worse things can happen with normalize=False option:
sage: Cone([(1,0), (0,1)], check=False, normalize=False)
Traceback (most recent call last):
AttributeError: 'tuple' object has no attribute 'parent'
You can construct different “not” cones: not full-dimensional, not strictly convex, not containing any rays:
sage: one_dimensional_cone = Cone([(1,0)])
sage: one_dimensional_cone.dim()
1
sage: half_plane = Cone([(1,0), (0,1), (-1,0)])
sage: half_plane.rays()
N( 0, 1),
N( 1, 0),
N(-1, 0)
in 2-d lattice N
sage: half_plane.is_strictly_convex()
False
sage: origin = Cone([(0,0)])
sage: origin.rays()
Empty collection
in 2-d lattice N
sage: origin.dim()
0
sage: origin.lattice_dim()
2
You may construct the cone above without giving any rays, but in this case you must provide lattice explicitly:
sage: origin = Cone([])
Traceback (most recent call last):
ValueError: lattice must be given explicitly if there are no rays!
sage: origin = Cone([], lattice=ToricLattice(2))
sage: origin.dim()
0
sage: origin.lattice_dim()
2
sage: origin.lattice()
2-d lattice N
Of course, you can also provide lattice in other cases:
sage: L = ToricLattice(3, "L")
sage: c1 = Cone([(1,0,0),(1,1,1)], lattice=L)
sage: c1.rays()
L(1, 0, 0),
L(1, 1, 1)
in 3-d lattice L
Or you can construct cones from rays of a particular lattice:
sage: ray1 = L(1,0,0)
sage: ray2 = L(1,1,1)
sage: c2 = Cone([ray1, ray2])
sage: c2.rays()
L(1, 0, 0),
L(1, 1, 1)
in 3-d lattice L
sage: c1 == c2
True
When the cone in question is not strictly convex, the standard form for the “generating rays” of the linear subspace is “basis vectors and their negatives”, as in the following example:
sage: plane = Cone([(1,0), (0,1), (-1,-1)])
sage: plane.rays()
N( 0, 1),
N( 0, -1),
N( 1, 0),
N(-1, 0)
in 2-d lattice N
The cone can also be specified by a Polyhedron_base:
sage: p = plane.polyhedron()
sage: Cone(p)
2-d cone in 2-d lattice N
sage: Cone(p) == plane
True
sage: N = ToricLattice(2)
sage: Nsub = N.span([ N(1,2) ])
sage: Cone(Nsub.basis())
1-d cone in Sublattice <N(1, 2)>
sage: Cone([N(0)])
0-d cone in 2-d lattice N
class sage.geometry.cone.ConvexRationalPolyhedralCone(rays=None, lattice=None, ambient=None, ambient_ray_indices=None, PPL=None)
Bases: sage.geometry.cone.IntegralRayCollection, _abcoll.Container
Create a convex rational polyhedral cone.
This class does not perform any checks of correctness of input nor does it convert input into the standard representation. Use Cone() to construct cones.
Cones are immutable, but they cache most of the returned values.
The input can be either:
□ rays – list of immutable primitive vectors in lattice;
□ lattice – ToricLattice, \(\ZZ^n\), or any other object that behaves like these. If None, it will be determined as parent() of the first ray. Of course, this cannot be done if there are no
rays, so in this case you must give an appropriate lattice directly.
or (these parameters must be given as keywords):
□ ambient – ambient structure of this cone, a bigger cone or a fan, this cone must be a face of ambient;
□ ambient_ray_indices – increasing list or tuple of integers, indices of rays of ambient generating this cone.
In both cases, the following keyword parameter may be specified in addition:
□ PPL – either None (default) or a C_Polyhedron representing the cone. This serves only to cache the polyhedral data if you know it already. The polyhedron will be set immutable.
□ convex rational polyhedral cone.
Every cone has its ambient structure. If it was not specified, it is this cone itself.
class sage.geometry.cone.IntegralRayCollection(rays, lattice)
Bases: sage.structure.sage_object.SageObject, _abcoll.Hashable, _abcoll.Iterable
Create a collection of integral rays.
No correctness check or normalization is performed on the input data. This class is designed for internal operations and you probably should not use it directly.
This is a base class for convex rational polyhedral cones and fans.
Ray collections are immutable, but they cache most of the returned values.
□ rays – list of immutable vectors in lattice;
□ lattice – ToricLattice, \(\ZZ^n\), or any other object that behaves like these. If None, it will be determined as parent() of the first ray. Of course, this cannot be done if there are no
rays, so in this case you must give an appropriate lattice directly. Note that None is not the default value - you always must give this argument explicitly, even if it is None.
□ collection of given integral rays.
cartesian_product(other, lattice=None)
Return the Cartesian product of self with other.
☆ other – an IntegralRayCollection;
☆ lattice – (optional) the ambient lattice for the result. By default, the direct sum of the ambient lattices of self and other is constructed.
By the Cartesian product of ray collections \((r_0, \dots, r_{n-1})\) and \((s_0, \dots, s_{m-1})\) we understand the ray collection of the form \(((r_0, 0), \dots, (r_{n-1}, 0), (0, s_0), \dots, (0, s_{m-1}))\), which is suitable for Cartesian products of cones and fans. The ray order is guaranteed to be as described.
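The ray arrangement described above can be sketched with plain tuples (an illustration of the zero-padding only, not Sage's implementation):

```python
def cartesian_product_rays(r_rays, s_rays):
    """Pad rays of the two factors into the direct sum, r-rays first."""
    r_dim = len(r_rays[0])
    s_dim = len(s_rays[0])
    return ([tuple(r) + (0,) * s_dim for r in r_rays] +
            [(0,) * r_dim + tuple(s) for s in s_rays])

print(cartesian_product_rays([(1,)], [(1,)]))  # [(1, 0), (0, 1)]
```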
sage: c = Cone([(1,)])
sage: c.cartesian_product(c) # indirect doctest
2-d cone in 2-d lattice N+N
sage: _.rays()
N+N(1, 0),
N+N(0, 1)
in 2-d lattice N+N
Return the dimension of the subspace spanned by rays of self.
sage: c = Cone([(1,0)])
sage: c.lattice_dim()
2
sage: c.dim()
1
Return the dual of the ambient lattice of self.
☆ lattice. If possible (that is, if lattice() has a dual() method), the dual lattice is returned. Otherwise, \(\ZZ^n\) is returned, where \(n\) is the dimension of self.
sage: c = Cone([(1,0)])
sage: c.dual_lattice()
2-d lattice M
sage: Cone([], ZZ^3).dual_lattice()
Ambient free module of rank 3
over the principal ideal domain Integer Ring
Return the ambient lattice of self.
sage: c = Cone([(1,0)])
sage: c.lattice()
2-d lattice N
sage: Cone([], ZZ^3).lattice()
Ambient free module of rank 3
over the principal ideal domain Integer Ring
Return the dimension of the ambient lattice of self.
sage: c = Cone([(1,0)])
sage: c.lattice_dim()
2
sage: c.dim()
1
Return the number of rays of self.
sage: c = Cone([(1,0), (0,1)])
sage: c.nrays()
2
Plot self.
sage: quadrant = Cone([(1,0), (0,1)])
sage: quadrant.plot()
Return the n-th ray of self.
☆ n – integer, an index of a ray of self. Enumeration of rays starts with zero.
☆ ray, an element of the lattice of self.
sage: c = Cone([(1,0), (0,1)])
sage: c.ray(0)
N(1, 0)
Returns a linearly independent subset of the rays.
Returns a random but fixed choice of a \(\QQ\)-basis (of N-lattice points) for the vector space spanned by the rays.
sage: c = Cone([(1,1,1,1), (1,-1,1,1), (-1,-1,1,1), (-1,1,1,1), (0,0,0,1)])
sage: c.ray_basis()
doctest:...: DeprecationWarning:
ray_basis(...) is deprecated,
please use rays().basis() instead!
See http://trac.sagemath.org/12544 for details.
N( 1, 1, 1, 1),
N( 1, -1, 1, 1),
N(-1, -1, 1, 1),
N( 0, 0, 0, 1)
in 4-d lattice N
Returns a linearly independent subset of the rays as a matrix.
☆ Returns a random but fixed choice of a \(\QQ\)-basis (of N-lattice points) for the vector space spanned by the rays.
☆ The linearly independent rays are the columns of the returned matrix.
sage: c = Cone([(1,1,1,1), (1,-1,1,1), (-1,-1,1,1), (-1,1,1,1), (0,0,0,1)])
sage: c.ray_basis_matrix()
doctest:...: DeprecationWarning:
ray_basis_matrix(...) is deprecated,
please use rays().basis().column_matrix() instead!
See http://trac.sagemath.org/12544 for details.
[ 1 1 -1 0]
[ 1 -1 -1 0]
[ 1 1 1 0]
[ 1 1 1 1]
Return an iterator over (some of) the rays of self.
☆ ray_list – list of integers, the indices of the requested rays. If not specified, an iterator over all rays of self will be returned.
sage: c = Cone([(1,0), (0,1), (-1, 0)])
sage: [ray for ray in c.ray_iterator()]
doctest:...: DeprecationWarning:
ray_iterator(...) is deprecated!
See http://trac.sagemath.org/12544 for details.
[N(0, 1), N(1, 0), N(-1, 0)]
Return a matrix whose columns are rays of self.
It can be convenient for linear algebra operations on rays, as well as for easy-to-read output.
sage: c = Cone([(1,0), (0,1), (-1, 0)])
sage: c.ray_matrix()
doctest:...: DeprecationWarning:
ray_matrix(...) is deprecated,
please use rays().column_matrix() instead!
See http://trac.sagemath.org/12544 for details.
[ 0 1 -1]
[ 1 0 0]
Return rays of self as a frozenset.
Use rays() if you want to get rays in the fixed order.
sage: c = Cone([(1,0), (0,1), (-1, 0)])
sage: c.ray_set()
doctest:1: DeprecationWarning:
ray_set(...) is deprecated, please use rays().set() instead!
See http://trac.sagemath.org/12544 for details.
frozenset([N(0, 1), N(1, 0), N(-1, 0)])
Return (some of the) rays of self.
☆ ray_list – a list of integers, the indices of the requested rays. If not specified, all rays of self will be returned.
sage: c = Cone([(1,0), (0,1), (-1, 0)])
sage: c.rays()
N( 0, 1),
N( 1, 0),
N(-1, 0)
in 2-d lattice N
sage: c.rays([0, 2])
N( 0, 1),
N(-1, 0)
in 2-d lattice N
You can also give ray indices directly, without packing them into a list:
sage: c.rays(0, 2)
N( 0, 1),
N(-1, 0)
in 2-d lattice N
sage.geometry.cone.classify_cone_2d(ray0, ray1, check=True)
Return \((d,k)\) classifying the lattice cone spanned by the two rays.
□ ray0, ray1 – two primitive integer vectors. The generators of the two rays generating the two-dimensional cone.
□ check – boolean (default: True). Whether to check the input rays for consistency.
A pair \((d,k)\) of integers classifying the cone up to \(GL(2, \ZZ)\) equivalence. See Proposition 10.1.1 of [CLS] for the definition. We return the unique \((d,k)\) with minimal \(k\); see Proposition 10.1.3 of [CLS].
sage: ray0 = vector([1,0])
sage: ray1 = vector([2,3])
sage: from sage.geometry.cone import classify_cone_2d
sage: classify_cone_2d(ray0, ray1)
(3, 2)
sage: ray0 = vector([2,4,5])
sage: ray1 = vector([5,19,11])
sage: classify_cone_2d(ray0, ray1)
(3, 1)
sage: m = matrix(ZZ, [(19, -14, -115), (-2, 5, 25), (43, -42, -298)])
sage: m.det() # check that it is in GL(3,ZZ)
-1
sage: classify_cone_2d(m*ray0, m*ray1)
(3, 1)
Check using the connection between the Hilbert basis of the cone spanned by the two rays (in arbitrary dimension) and the Hirzebruch-Jung continued fraction expansion; see Chapter 10 of [CLS]:
sage: from sage.geometry.cone import normalize_rays
sage: for i in range(10):
... ray0 = random_vector(ZZ, 3)
... ray1 = random_vector(ZZ, 3)
... if ray0.is_zero() or ray1.is_zero(): continue
... ray0, ray1 = normalize_rays([ray0, ray1], ZZ^3)
... d, k = classify_cone_2d(ray0, ray1, check=True)
... assert (d,k) == classify_cone_2d(ray1, ray0)
... if d == 0: continue
... frac = Hirzebruch_Jung_continued_fraction_list(k/d)
... if len(frac)>100: continue # avoid expensive computation
... hilb = Cone([ray0, ray1]).Hilbert_basis()
... assert len(hilb) == len(frac) + 1
Check if x is a cone.
□ True if x is a cone and False otherwise.
sage: from sage.geometry.cone import is_Cone
sage: is_Cone(1)
False
sage: quadrant = Cone([(1,0), (0,1)])
sage: quadrant
2-d cone in 2-d lattice N
sage: is_Cone(quadrant)
True
sage.geometry.cone.normalize_rays(rays, lattice)
Normalize a list of rational rays: make them primitive and immutable.
□ rays – list of rays which can be converted to the rational extension of lattice;
□ lattice – ToricLattice, \(\ZZ^n\), or any other object that behaves like these. If None, an attempt will be made to determine an appropriate toric lattice automatically.
□ list of immutable primitive vectors of the lattice in the same directions as original rays.
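For rational input, the normalization amounts to clearing denominators and then dividing out the gcd of the integer entries. A minimal pure-Python sketch for a single nonzero ray, without the lattice bookkeeping (the function name is ours, not Sage's):

```python
from fractions import Fraction
from math import gcd

def primitive(ray):
    """Scale a nonzero rational ray to the primitive integer vector
    pointing in the same direction."""
    ray = [Fraction(r) for r in ray]
    common = 1
    for r in ray:                      # lcm of the denominators
        common = common * r.denominator // gcd(common, r.denominator)
    ints = [int(r * common) for r in ray]
    g = 0
    for i in ints:                     # gcd of the integer entries
        g = gcd(g, i)
    return tuple(i // g for i in ints)

print(primitive((Fraction(5, 7), Fraction(10, 3))))  # (3, 14), as in the examples below
```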
sage: from sage.geometry.cone import normalize_rays
sage: normalize_rays([(0, 1), (0, 2), (3, 2), (5/7, 10/3)], None)
[N(0, 1), N(0, 1), N(3, 2), N(3, 14)]
sage: L = ToricLattice(2, "L")
sage: normalize_rays([(0, 1), (0, 2), (3, 2), (5/7, 10/3)], L.dual())
[L*(0, 1), L*(0, 1), L*(3, 2), L*(3, 14)]
sage: ray_in_L = L(0,1)
sage: normalize_rays([ray_in_L, (0, 2), (3, 2), (5/7, 10/3)], None)
[L(0, 1), L(0, 1), L(3, 2), L(3, 14)]
sage: normalize_rays([(0, 1), (0, 2), (3, 2), (5/7, 10/3)], ZZ^2)
[(0, 1), (0, 1), (3, 2), (3, 14)]
sage: normalize_rays([(0, 1), (0, 2), (3, 2), (5/7, 10/3)], ZZ^3)
Traceback (most recent call last):
TypeError: cannot convert (0, 1) to
Vector space of dimension 3 over Rational Field!
sage: normalize_rays([], ZZ^3) | {"url":"http://sagemath.org/doc/reference/geometry/sage/geometry/cone.html","timestamp":"2014-04-18T13:08:24Z","content_type":null,"content_length":"297892","record_id":"<urn:uuid:a8415c8b-aace-4453-baeb-97abf603e0ec>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
Concordville Calculus Tutor
Find a Concordville Calculus Tutor
...I am hard-working, patient, and able to connect well with students of all abilities and ages. I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!
Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test.
19 Subjects: including calculus, statistics, algebra 2, geometry
...I tutor trigonometry using a calm, persistent style and bring practical examples from a career as a professional physicist. I took a one-year course in astronomy as a physics major and I have
been fascinated by it ever since. I keep up with the latest research in the scientific journals and have read many books.
10 Subjects: including calculus, physics, geometry, algebra 1
...I adapt my teaching style to students' needs, explaining difficult concepts step by step and using questions to "draw out" students' understanding so that they learn valuable problem-solving
skills along the way. I have taught students from kindergarten to college age, and I build positive tutor...
38 Subjects: including calculus, Spanish, English, reading
...As an undergraduate student at Jacksonville University, I studied both ordinary differential equations and partial differential equations obtaining A's in both courses. I have also been
tutoring these courses while a tutor at Jacksonville University. I recently graduated from Jacksonville University with a bachelor's degree in mathematics.
13 Subjects: including calculus, geometry, GRE, algebra 1
...I am certified to teach math in Pennsylvania and Delaware. I have experience tutoring kids from wealthy suburban, neighborhoods as well as helping out at homeless shelters. I believe that
everyone can learn and enjoy math.
6 Subjects: including calculus, geometry, algebra 1, algebra 2
Nearby Cities With calculus Tutor
Black Horse, PA calculus Tutors
Bridgewater Farms, PA calculus Tutors
Elwyn, PA calculus Tutors
Feltonville, PA calculus Tutors
Frazer, PA calculus Tutors
Green Ridge, PA calculus Tutors
Greenville, DE calculus Tutors
Ithan, PA calculus Tutors
Lima, PA calculus Tutors
Linwood, PA calculus Tutors
Lower Chichester, PA calculus Tutors
Rose Tree, PA calculus Tutors
Talleyville, DE calculus Tutors
Thornton, PA calculus Tutors
Twin Oaks, PA calculus Tutors | {"url":"http://www.purplemath.com/concordville_calculus_tutors.php","timestamp":"2014-04-16T13:13:04Z","content_type":null,"content_length":"24148","record_id":"<urn:uuid:e72dca4e-be67-4e36-b8f4-c72a4eca6980>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Website Detail Page
edited by Judy Spicer
supported by the National Science Foundation
This instructional module offers a wide variety of exemplary resources to support a unit on data analysis. Standards-based lessons on graph interpretation are presented in the context of real-world
applications, such as population growth, junk mail, and global temperatures. Don't miss the "applet" collection, offering fun and interactive virtual activities on graphing and statistics for grades
This module meets several standards within Benchmarks for Science Literacy (see Standards link), but is also aligned with data analysis standards found in the National Council for Teachers of
Mathematics Standards (NCTM).
Please note that this resource requires Flash, or Java Applet Plug-in.
Subjects
  Education Practices
  - Active Learning
    = Modeling
  - Technology
    = Multimedia
  Other Sciences
  - Mathematics
Levels
  - Middle School
  - High School
Resource Types
  - Collection
  - Instructional Material
    = Activity
    = Best practice
    = Curriculum support
    = Game
    = Instructor Guide/Manual
    = Interactive Simulation
    = Lesson/Lesson Plan
    = Model
    = Student Guide
    = Unit of Instruction
  - Audio/Visual
    = Movie/Animation
Appropriate Courses
  - Physical Science
  - Physics First
  - Conceptual Physics
  - Algebra-based Physics
Categories
  - Lesson Plan
  - Activity
  - New teachers
Ratings
Intended Users:
Access Rights:
Free access
This material is released under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 license.
Rights Holder:
The Ohio State University
bar graphs, coordinate graphs, data interpretation, graph interpretation, graph reading, graph skills, histograms, linear regression, scatterplot, statistics
Record Creator:
Metadata instance created October 25, 2010 by Caroline Hall
Record Updated:
January 19, 2011 by Lyle Barbato
Last Update
when Cataloged:
February 25, 2007
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2A. Patterns and Relationships
• 3-5: 2A/E2. Mathematical ideas can be represented concretely, graphically, or symbolically.
9. The Mathematical World
9B. Symbolic Relationships
• 3-5: 9B/E2. Tables and graphs can show how values of one quantity are related to values of another.
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily,
increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in
steps, or do something different from any of these.
9E. Reasoning
• 6-8: 9E/M5. In formal logic, a single example can never prove that a generalization is always true, but sometimes a single example can prove that a generalization is not always true. Proving a
generalization to be false is easier than proving it to be true.
12. Habits of Mind
12D. Communication Skills
• 6-8: 12D/M1. Organize information in simple tables and graphs and identify relationships they reveal.
• 6-8: 12D/M2. Read simple tables and graphs produced by others and describe in words what they show.
AAAS Benchmark Alignments (1993 Version)
E. Reasoning
• 9E (6-8) #4. People are using incorrect logic when they make a statement such as "If A is true, then B is true; but A isn't true, therefore B isn't true either."
11. COMMON THEMES
B. Models
• 11B (3-5) #2. Geometric figures, number sequences, graphs, diagrams, sketches, number lines, maps, and stories can be used to represent objects, events, and processes in the real world, although
such representations can never be exact in every detail.
12. HABITS OF MIND
C. Manipulation and Observation
• 12C (9-12) #2. Use computers for producing tables and graphs and for making spreadsheet calculations.
D. Communication Skills
• 12D (6-8) #4. Understand writing that incorporates circle charts, bar and line graphs, two-way data tables, diagrams, and symbols.
E. Critical-Response Skills
• 12E (6-8) #4. Be aware that there may be more than one good way to interpret a given set of findings.
ComPADRE is beta testing Citation Styles!
<a href="http://www.thephysicsfront.org/items/detail.cfm?ID=10438">Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. February 25, 2007.</a>
, edited by J. Spicer (2005), WWW Document, (http://msteacher.org/epubs/math/math3/math.aspx).
Middle School Portal: Data Analysis: As Real World As It Gets, edited by J. Spicer (2005), <http://msteacher.org/epubs/math/math3/math.aspx>.
Spicer, J. (Ed.). (2007, February 25). Middle School Portal: Data Analysis: As Real World As It Gets. Retrieved April 18, 2014, from http://msteacher.org/epubs/math/math3/math.aspx
Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. February 25, 2007. http://msteacher.org/epubs/math/math3/math.aspx (accessed 18 April 2014).
Spicer, Judy, ed. Middle School Portal: Data Analysis: As Real World As It Gets. 2005. 25 Feb. 2007. National Science Foundation. 18 Apr. 2014 <http://msteacher.org/epubs/math/math3/math.aspx>.
@misc{ Title = {Middle School Portal: Data Analysis: As Real World As It Gets}, Volume = {2014}, Number = {18 April 2014}, Month = {February 25, 2007}, Year = {2005} }
%A Judy Spicer, (ed)
%T Middle School Portal: Data Analysis: As Real World As It Gets
%D February 25, 2007
%U http://msteacher.org/epubs/math/math3/math.aspx
%O text/html
%0 Electronic Source
%D February 25, 2007
%T Middle School Portal: Data Analysis: As Real World As It Gets
%E Spicer, Judy
%V 2014
%N 18 April 2014
%8 February 25, 2007
%9 text/html
%U http://msteacher.org/epubs/math/math3/math.aspx
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ. | {"url":"http://www.thephysicsfront.org/items/detail.cfm?ID=10438","timestamp":"2014-04-18T18:45:19Z","content_type":null,"content_length":"46386","record_id":"<urn:uuid:52e6b1a7-8074-4cd8-b0b0-02ffb102fe82>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marvellous Matrix
Copyright © University of Cambridge. All rights reserved.
'Marvellous Matrix' printed from http://nrich.maths.org/
This problem was written for new year $2002$.
Circle any number in the matrix, for example, $608$ as below. Draw a line through all the squares that lie in the same row and column as your selected number.
Circle another number which has not got a line through it, for example, $343$ and again rule out all squares in the same row and column.
Repeat for a third time, then circle the remaining number which has not got a line through it.
Add all the circled numbers together. Note your answer.
Try again with a different starting number. What do you notice?
See if you can work out how this matrix works.
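(Spoiler for the last question.) The standard construction behind such a matrix is to pick hidden row values and hidden column values and fill each cell with their sum. Circling one number per row and per column picks up every hidden value exactly once, so the total never changes. A quick check in Python, with made-up hidden values rather than the ones behind the pictured matrices:

```python
from itertools import permutations

# Hidden values (made up for illustration): entry [i][j] = row_vals[i] + col_vals[j]
row_vals = [100, 250, 7, 51]
col_vals = [500, 43, 12, 1]
matrix = [[r + c for c in col_vals] for r in row_vals]

# Circling one number from each row and each column corresponds to a
# permutation of the columns, and every choice gives the same total.
target = sum(row_vals) + sum(col_vals)
totals = {sum(matrix[i][p[i]] for i in range(4)) for p in permutations(range(4))}
print(totals)  # {964} - every selection sums to sum(row_vals) + sum(col_vals)
```

To generate a matrix with a different total, simply choose row and column values that add up to the target you want.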
Below is a simpler one which might be easier to investigate.
Can you make a similar matrix which generates a different total? | {"url":"http://nrich.maths.org/2064/index?nomenu=1","timestamp":"2014-04-19T07:08:19Z","content_type":null,"content_length":"4124","record_id":"<urn:uuid:a30743ba-2318-4858-bb31-6a56fc756d5e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Raytheon BBN Technologies
S. Guha and J. H. Shapiro, “Reading boundless error-free bits using a single photon,” Phys. Rev. A, 87 (Dec. 01, 2013).
B. Bash, S. Guha, D. Goeckel, D. Towsley, “Quantum Noise Limited Optical Communication with Low Probability of Detection,” Information Theory Proceedings (ISIT), 2013 IEEE International Symposium,
pgs. 1715-1719 (Jul. 10, 2013).
M. Takeoka, H. Krovi, S. Guha, “Achieving the Holevo Capacity of a Pure State Classical-Quantum Channel via Unambiguous State Discrimination,” Information Theory Proceedings (ISIT), 2013 IEEE
International Symposium, pgs. 166-170 (Jul. 08, 2013).
R. Nair, S. Guha, S.-H. Tan, “A Realizable Receiver for discriminating arbitrary Coherent States near the Quantum Limit,” Information Theory Proceedings (ISIT), 2013 IEEE International Symposium,
pgs. 729-733 (Jul. 08, 2013).
J. D. Strand, Matthew Ware, Félix Beaudoin, Thomas A. Ohki, B. R. Johnson, Alexandre Blais, B. L. T. Plourde, “First-order sideband transitions with flux-driven asymmetric transmon qubits,” Physical
Review B 87, 220505(R) (Jul. 06, 2013).
M. M. Wilde and S. Guha, “Polar codes for degradable quantum channels,” IEEE Transactions on Information Theory, vol. 59, no.7, pages 4718-4729 (Jul. 01, 2013).
Shahrokhshahi, Reihaneh; Sridhar, Niranjan; Pfister, Olivier; Habif, Jonathan L; Guha, Saikat; Miller, Aaron; Nam, Sae Woo; Lita, Adriana E; Calkins, Brice; Gerrits, Thomas; Lamas-Linares, Antia,
“High Photon Information Efficient Imaging Using Single Photon Source,” Proceedings of the Conference on Lasers and Electro-Optics (Jun. 09, 2013).
Jonathan L. Habif, Saikat Guha and Zachary Dutton, “Polar Coded Optical Communications with Weak Coherent States,” Proceedings of the Conference on Lasers and Electro-Optics (Jun. 09, 2013).
Seth T. Merkel, Jay M. Gambetta, John A. Smolin, Stefano Poletto, Antonio D. Córcoles, Blake R. Johnson, Colm A. Ryan, and Matthias Steffen, “Self-consistent quantum process tomography,” Phys. Rev. A
87, 062119 (Jun. 01, 2013).
Marcus P. da Silva, S. Guha, Z. Dutton, “Achieving minimum-error discrimination of an arbitrary set of laser-light pulses,” Phys. Rev. A 87, 052320 (2013) (May. 23, 2013).
M. M. Wilde and S. Guha, “Polar codes for classical quantum channels,” IEEE Transactions on Information Theory, vol. 59, no. 2, pages 1175-1187 (Feb. 01, 2013).
M. M. Wilde, P. Hayden, S. Guha, “Quantum trade-off coding for bosonic communication,” Phys. Rev. A 86, 062306 (Dec. 06, 2012).
Marcus P. da Silva, S. Guha, Z. Dutton, “Achieving minimum-error discrimination of an arbitrary set of laser-light pulses,” Submitted (Sep. 03, 2012).
E. Magesan, J.M. Gambetta, B.R. Johnson, C.A. Ryan, J.M. Chow, S.T. Merkel, M.P. da Silva, G.A. Keefe, M.B. Rothwell, T.A. Ohki, M.B. Ketchen, and M. Steffen, “Efficient Measurement of Quantum Gate
Error by Interleaved Randomized Benchmarking,” Phys. Rev. Lett. 109, 080505 (Aug. 24, 2012).
O. Moussa, M. P. da Silva, C. A. Ryan, R. Laflamme, “Practical experimental certification of computational quantum gates via twirling,” Phys. Rev. Lett. 109, 070504 (Aug. 17, 2012).
R. Nair, B. J. Yen, S. Guha, J. H. Shapiro and S. Pirandola, “Symmetric M-ary phase discrimination using quantum-optical probe states,” Phys. Rev. A., 86, 022306 (Aug. 07, 2012).
Félix Beaudoin, Marcus P. da Silva, Zachary Dutton, and Alexandre Blais, “First-order sidebands in circuit QED using qubit frequency modulation,” Phys. Rev. A 86, 022305 (Aug. 03, 2012).
L. Steffen, M. P. da Silva, A. Fedorov, M. Baur, A. Wallraff, “Experimental Monte Carlo Quantum Process Certification,” Phys. Rev. Lett. 108, 260506 (Jun. 28, 2012).
J.M. Gambetta, A.D. Corcoles, S.T. Merkel, B.R. Johnson, J.A. Smolin, J.M. Chow, C.A. Ryan, C. Rigetti, S. Poletto, T.A. Ohki, M.B. Ketchen, M. Steffen, “Characterization of addressability by
simultaneous randomized benchmarking,” Accepted in PRL (Apr. 27, 2012).
M. M. Wilde, P. Hayden and S. Guha, “Information trade-offs for optical quantum communication,” Phys. Rev. Lett., 108, 140501 (Apr. 02, 2012).
M. Baur, A. Fedorov, L. Steffen, S. Filipp, M. P. da Silva, A. Wallraff, “Benchmarking a Quantum Teleportation Protocol in Superconducting Circuits Using Tomography and an Entanglement Witness,”
Phys. Rev. Lett. 108, 040502 (Jan. 24, 2012).
A. Fedorov, L. Steffen, M. Baur, M. P. da Silva, A. Wallraff, “Implementation of a Toffoli gate with superconducting circuits,” Nature 481, 170–172 (Jan. 12, 2012).
J. S. Kline, M. R. Vissers, F. C. S. da Silva, D. S. Wisbey, M. Weides, Y. Shalibo, N. Katz, B. R. Johnson, T. A. Ohki, D. P. Pappas, “Sub-micrometer epitaxial Josephson junctions for quantum
circuits,” Supercond. Sci. Technol. 25 (Jan. 01, 2012).
J. Chen, J. L. Habif, Z. Dutton, R. Lazarus, S. Guha, “Optical codeword demodulation with error rates below standard quantum limit using a conditional nulling receiver,” Nature Photonics (Jan. 01,
M. Weides, J. S. Kline, M. R. Vissers, M.O. Sandberg D. S. Wisbey, B. R. Johnson, T. A. Ohki, D. P. Pappas, “Coherence in a transmon qubit with epitaxial tunnel junctions,” Appl. Phys. Lett. 99 (Dec.
01, 2011).
M. P. da Silva, O. Landon-Cardinal, and D. Poulin, “Practical Characterization of Quantum Devices without Tomography,” Phys. Rev. Lett., 107, 210404 (Nov. 16, 2011).
Saikat Guha, Zachary Dutton and Jonathan L. Habif, “Information in a Photon When Loss Encodes the Bit,” Proceedings of Frontiers in Optics (Oct. 16, 2011).
S. Guha, P. Basu, C.-K. Chau and R. Gibbens, “Green Wave Sleep Scheduling: Optimizing Latency and Throughput in Duty Cycling Wireless Networks,” IEEE Journal of Special Areas in Communications (JSAC)
(Sep. 08, 2011).
Jonathan L. Habif, “Quantum frequency-entangled optical spread spectrum for stealthy target detection and communications,” 2011 Conference on Lasers and Electro-Optics: Laser Science to Photonic
Applications (May. 30, 2011).
Jerry M. Chow, A.D. Corcoles, Jay M. Gambetta, Chad Rigetti, B.R. Johnson, John A. Smolin, J.R. Rozen, George A. Keefe, Mary B. Rothwell, Mark B. Ketchen, M. Steffen, “Simple all-microwave entangling
gate for fixed-frequency superconducting qubits,” Phys. Rev. Lett. 107, 080502 (Jan. 01, 2011).
Hanhee Paik, D.I. Schuster, Lev S. Bishop, G. Kirchmair, G. Catelani, A.P. Sears, B.R. Johnson, M.J. Reagor, L. Frunzio, L.I. Glazman, S.M. Girvin, M.H. Devoret, and R.J. Schoelkopf, “Observation of
high coherence in Josephson junction qubits measured in a three-dimensional circuit QED architecture,” Phys. Rev. Lett. 107, 240501 (Jan. 01, 2011).
S. Guha, “Structured optical receivers to attain superadditive capacity and the Holevo limit,” Phys. Rev. Lett., 106, 240502 (Jan. 01, 2011).
S. Guha, J. L. Habif, and M. Takeoka, “Approaching Helstrom limits to optical pulse-position demodulation using single-photon detection and optical feedback,” J. of Modern Optics, Volume 58, Issue
3, 257 (Jan. 01, 2011).
W. Kelly, Z. Dutton, J. Schlafer, B. Mookerji, T. Ohki, J. Kline, D. Pappas, “Direct Observation of Coherent Population Trapping in a Superconducting Artificial Atom,” Phys. Rev. Lett. 104, 163601
(Jan. 01, 2010).
Z. Dutton, J.H. Shapiro, S. Guha, “LADAR resolution improvement using receivers enhanced with squeezed-vacuum injection and phase-sensitive amplification,” J. Opt. Soc. Am. B 27, A63--A72 (Jan. 01,
A. Shabaev, Z. Dutton, T. A. Kennedy, and Al. L. Efros, “Slow-light propagation using mode locking of spin precession in quantum dots,” Phys. Rev. A 82, 053823 (Jan. 01, 2010).
G. Brummer, R. Rafique, T. A. Ohki, “Phase and Amplitude Modulator for Microwave Pulse Generation,,” IEEE Transactions on Applied Superconductivity (Jan. 01, 2010).
J. L. Habif, “Quantum Cryptographic Networks,” Technology Today, Issue 1 (Jan. 01, 2010).
S. Guha and B. I. Erkmen, “Receiver Design for Gaussian state Quantum Illumination,” Phys. Rev. A 80, 052310 (Jan. 01, 2009).
M. R. Rafique, T. A. Ohki, P. Linner and A. Herr, “Niobium Tunable Microwave Filters,” IEEE Trans. Microw. Theory Tech., 57, 5, 1 (Jan. 01, 2009).
F.K. Fatemi, M.L. Terraciano, M. Bashkansky, and Z. Dutton, “Cold atom Raman spectrography using velocity-selective resonances,” Optics Express 17, 12971-12980 (Jan. 01, 2009).
F.K. Fatemi, M.L. Terraciano, Z. Dutton, and M. Bashkansky, “Imaging velocity selective resonances in a magnetic field,” J. of Modern Optics 56, 2022-2028 (Jan. 01, 2009).
C. Florea, M. Bashkansky, J. Sanghera, I. Aggarwal, Z. Dutton, “Slow-light generation through a Brillouin scattering in As2S3 fibers,” Optical Materials 32, 358-361 (Jan. 01, 2009).
S. Guha, T. Hogg, D. Fattal, T. Spiller, and R. G. Beausoleil, “Quantum Auctions using Adiabatic Evolution: The Corrupt Auctioneer and Circuit Implementations,” International Journal of Quantum
Information, Vol. 6, No. 4 (Jan. 01, 2008).
S.-H. Tan, B. I. Erkmen, V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, S. Pirandola, and J. H. Shapiro, “Quantum Illumination using Gaussian States,” Phys. Rev. Lett. 101, 253601 (Jan. 01, 2008).
M. R. Rafique, T. A. Ohki, B. Banik, H. Engseth, P Linner and A. Herr, “Miniaturized Filters for Superconducting Microwave Filters,” Supercond. Sci. Technol. 21 075004 (Jan. 01, 2008).
S. Guha, J. H. Shapiro, and B. I. Erkmen, “Capacities of Bosonic broadcast communications and a new minimum output entropy conjecture,” Phys. Rev. A 76, 032303 (Sep. 04, 2007).
Robert H. Hadfield, Jonathan L. Habif, Lijun Ma, Alan Mink, Xiao Tang and Sae Woo Nam, “Quantum key distribution with high-speed superconducting single-photon detectors,” Proceedings of Quantum
Electronics and Laser Science Conference (May. 06, 2007).
G. D. Forney, M. Grassl, and S. Guha, “Convolutional and tail-biting quantum error-correcting codes,” IEEE Trans. Inf. Theory, Vol. 53, No. 3 (Mar. 01, 2007).
Robert H. Hadfield, Jonathan L. Habif, John Schlafer, Robert E. Schwall and Sae Woo Nam, “Quantum key distribution at 1550 nm with twin superconducting single-photon detectors,” Applied Physics
Letters (Dec. 15, 2006).
Martin A. Jaspan, Jonathan L. Habif, Robert H. Hadfield and Sae Woo Nam, “Heralding of telecommunication photon pairs with a superconducting single photon detector,” Applied Physics Letters (Jul. 19,
Jonathan L. Habif, David S. Pearson, Robert H. Hadfield, Robert E. Schwall, Sae Woo Nam and Aaron J. Miller, “Single Photon Detector Comparison in a Quantum Key Distribution Link Testbed,” Proc. of
SPIE Advanced Photon Counting Techniques (May. 01, 2006).
J. H. Shapiro, S. Guha and B. I. Erkmen, “Ultimate channel capacity of free-space optical communications,” The Journal of Optical Networking: Special Issue (invited) (Jul. 22, 2005).
V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, and J. H. Shapiro, “Minimum output entropy of bosonic channels: a conjecture,” Phys. Rev. A 70, 032315 (Sep. 21, 2004).
V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, J. H. Shapiro, and H. P. Yuen, “Classical capacity of the lossy bosonic channel: the exact solution,” Phys. Rev. Lett. 92, 027902 (Jan. 15, 2004).
P. Ghose, A. S. Majumdar, S. Guha, and J. Sau, “Bohmian trajectories for photons,” Phys. Lett. A 290, 205--213 (Nov. 19, 2001). | {"url":"http://www.bbn.com/technology/quantum/pubs","timestamp":"2014-04-16T16:21:57Z","content_type":null,"content_length":"21448","record_id":"<urn:uuid:bcb1b3f8-cc5a-40e9-84bd-842d0f59e6de>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic 3: Path analysis
Contents of this handout: Principles; Path analysis in practice; Limitations of Path analysis; Further reading and References; Examples
Path analysis is a straightforward extension of multiple regression. Its aim is to provide estimates of the magnitude and significance of hypothesised causal connections between sets of variables.
This is best explained by considering a path diagram.
To construct a path diagram we simply write the names of the variables and draw an arrow from each variable to any other variable we believe that it affects. We can distinguish between input and
output path diagrams. An input path diagram is one that is drawn beforehand to help plan the analysis and represents the causal connections that are predicted by our hypothesis. An output path
diagram represents the results of a statistical analysis, and shows what was actually found.
So we might have an input path diagram like this:
Figure 1: Idealised input path diagram
And an output path diagram like this:
Figure 2: Idealised output path diagram
It is helpful to draw the arrows so that their widths are proportional to the (hypothetical or actual) size of the path coefficients. Sometimes it is helpful to eliminate negative relationships by
reflecting variables - e.g. instead of drawing a negative relationship between age and liberalism drawing a positive relationship between age and conservatism. Sometimes we do not want to specify the
causal direction between two variables: in this case we use a double-headed arrow. Sometimes, paths whose coefficients fall below some absolute magnitude or which do not reach some significance
level, are omitted in the output path diagram.
Some researchers will add an additional arrow pointing in to each node of the path diagram which is being taken as a dependent variable, to signify the unexplained variance - the variation in that
variable that is due to factors not included in the analysis.
Path diagrams can be much more complex than these simple examples: for a virtuoso case, see Wahlund (1992, Fig 1).
Although path analysis has become very popular, we should bear in mind a cautionary note from Everitt and Dunn (1991): "However convincing, respectable and reasonable a path diagram... may appear,
any causal inferences extracted are rarely more than a form of statistical fantasy". Basically, correlational data are still correlational. Within a given path diagram, path analysis can tell us
which are the more important (and significant) paths, and this may have implications for the plausibility of pre-specified causal hypotheses. But path analysis cannot tell us which of two distinct
path diagrams is to be preferred, nor can it tell us whether the correlation between A and B represents a causal effect of A on B, a causal effect of B on A, mutual dependence on other variables C, D
etc, or some mixture of these. No program can take into account variables that are not included in an analysis.
What, then, can a path analysis do? Most obviously, if two or more pre-specified causal hypotheses can be represented within a single input path diagram, the relative sizes of path coefficients in
the output path diagram may tell us which of them is better supported by the data. For example, in Figure 4 below, the hypothesis that age affects job satisfaction indirectly, via its effects on
income and working autonomy, is preferred over the hypothesis that age has a direct effect on job satisfaction. Slightly more subtly, if two or more pre-specified causal hypotheses are represented in
different input path diagrams, and the corresponding output diagrams differ in complexity (so that in one there are many paths with moderate coefficients, while in another there are just a few paths
with large, significant coefficients and all other paths have negligible coefficients), we might prefer the hypothesis that yielded the simpler diagram. Note that this latter argument would not
really be statistical, though the statistical work is necessary to give us the basis from which to make it.
Path analysis in practice
Bryman and Cramer give a clear example using four variables from a job survey: age, income, autonomy and job satisfaction. They propose that age has a direct effect on job satisfaction. However,
indirect effects of age on job satisfaction are also suggested: age affects income which in turn affects satisfaction, age affects autonomy which in turn affects satisfaction, and age affects autonomy
which affects income which affects satisfaction. Autonomy and income have direct effects on satisfaction.
Figure 3: Input diagram of causal relationships in the job survey, after Bryman & Cramer (1990)
To move from this input diagram to the output diagram, we need to compute path coefficients. A path coefficient is a standardized regression coefficient (beta weight). We compute these by setting up
structural equations, in this case:
satisfaction = b[11]age + b[12]autonomy + b[13] income + e[1
]income = b[21]age + b[22]autonomy + e[2
]autonomy = b[31]age[ ]+ e[3]
We have used a different notation for the coefficients from Bryman and Cramer's, to make it clear that b[11] in the first equation is different from b[21] in the second. The terms e[1], e[2], and e
[3] are the error or unexplained variance terms. To obtain the path coefficients we simply run three regression analyses, with satisfaction, income and autonomy being the dependent variable in turn
and using the independent variables specified in the equations. Because we need beta values, if we are using Minitab we must first standardise the variables (subtract each column from its mean and
divide by its standard deviation); SPSS will give us beta values without this preliminary step. In Bryman and Cramer's example, we find that b[11]=-0.08, b[12]=0.58, b[13]=0.47, b[21]=0.57, b[22]=
0.22, and b[31]=0.28. In either case, the betas are then taken from the output and then inserted into the output path diagram. The constant values (a[1], a[2], and a[3]) are not used. So the complete
output path diagram looks like this:
Figure 4: Output diagram of causal relationships in the job survey, after Bryman & Cramer (1990)
If the values of e[1], e[2], and e[3] are required, they are calculated as the square root of 1-R^2 (note not 1-R^2[adj]) from the regression equation for the corresponding dependent variable.
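As a sketch of the computations just described, here is how the three standardized regressions could be run with numpy. The data below are synthetic stand-ins (the job-survey data are not reproduced here), so the betas will not match Bryman and Cramer's values; the point is only the mechanics:

```python
import numpy as np

def standardize(x):
    # z-scores: subtract each column's mean and divide by its standard deviation
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def path_coefficients(y, X):
    """Beta (standardized regression) weights of y on the columns of X."""
    Zy = standardize(y.reshape(-1, 1)).ravel()
    ZX = standardize(X)
    beta, *_ = np.linalg.lstsq(ZX, Zy, rcond=None)
    return beta

# Synthetic stand-ins for the four survey variables (coefficients made up)
rng = np.random.default_rng(0)
n = 500
age = rng.normal(size=n)
autonomy = 0.3 * age + rng.normal(size=n)
income = 0.5 * age + 0.2 * autonomy + rng.normal(size=n)
satisfaction = 0.6 * autonomy + 0.4 * income + rng.normal(size=n)

# One regression per structural equation, as in the handout
b3 = path_coefficients(autonomy, np.column_stack([age]))                        # b31
b2 = path_coefficients(income, np.column_stack([age, autonomy]))                # b21, b22
b1 = path_coefficients(satisfaction, np.column_stack([age, autonomy, income]))  # b11, b12, b13
```

The error terms are then computed as the square root of 1-R^2 for each of the three fits.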
Many researchers like to calculate the overall impact of one variable on another - e.g. of age on job satisfaction. This is done by taking the direct effect of age (-0.08) and adding the
indirect effects to it. The indirect effects are calculated by multiplying the coefficients for each path from age to satisfaction e.g.
age -> income -> satisfaction is 0.57 x 0.47 = 0.26,
age -> autonomy -> satisfaction is 0.28 x 0.58 = 0.16,
age -> autonomy -> income -> satisfaction is 0.28 x 0.22 x 0.47 = 0.03
total indirect effect = 0.45
The result tells us that the total indirect effect of age on satisfaction is positive and quite large whereas the direct effect is small and negative. The total effect is then -0.08 + 0.45 = 0.37.
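The arithmetic above is easy to script. Note that if the three path products are kept unrounded, the indirect effect comes to 0.459 and the total to 0.379; the 0.45 and 0.37 quoted above reflect rounding each product to two places before summing:

```python
# Path coefficients from the output diagram (Figure 4)
age_to_income, age_to_autonomy = 0.57, 0.28
autonomy_to_income = 0.22
income_to_sat, autonomy_to_sat = 0.47, 0.58
age_to_sat_direct = -0.08

indirect = (age_to_income * income_to_sat                            # age -> income -> satisfaction
            + age_to_autonomy * autonomy_to_sat                      # age -> autonomy -> satisfaction
            + age_to_autonomy * autonomy_to_income * income_to_sat)  # age -> autonomy -> income -> satisfaction
total = age_to_sat_direct + indirect
print(round(indirect, 3), round(total, 3))  # 0.459 0.379
```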
Limitations of path analysis
To restate the obvious, path analysis can evaluate causal hypotheses, and in some (restricted) situations can test between two or more causal hypotheses, but it cannot establish the direction of causality.
As should also already be clear, path analysis is most likely to be useful when we already have a clear hypothesis to test, or a small number of hypotheses all of which can be represented within a
single path diagram. It has little use at the exploratory stage of research.
We cannot use path analysis in situations where "feedback" loops are included in our hypotheses: there must be a steady causal progression across (or down) a path diagram.
All the relationships in the path diagram must be capable of being tested by straightforward multiple regression. The intervening variables all have to serve as dependent variables in multiple
regression analyses. Therefore each of them must be capable of being treated as being on an interval scale. Nominal measurement, or ordinal measurement with few categories (including dichotomies)
will make path analysis impossible. Although there are types of analysis that will handle such dependent variables (as we shall see in the next two sessions), there are no accepted ways of mixing
different kinds of analysis to produce the analogue of a path analysis.
Further reading:
• Bryman, A. & Cramer, D. (1990). Quantitative data analysis for social scientists, pp. 246-251.
• Everitt, B. S., & Dunn, G. (1991). Applied multivariate data analysis. London: Edward Arnold.
• Wahlund, R. (1992). Tax changes and economic behavior: The case of tax evasion. Journal of Economic Psychology, 13, 657-677.
These examples use the Singer file /singer1/eps/psybin/stats/expect.MTW, which is a small (n=50) data file in Minitab worksheet format. A portable version of this is available on the PSYCHO file
server as \scratch\segl\stats\expect.MTP (note that I expect to move this to a different directory soon), and it should be possible to read this into either Macintosh or PC versions of Minitab.
The study examined the factors that influenced inflationary expectations. There are measures of age (in years), income (in thousands of pounds), conservatism (factor scores based on items about
privatisation, defence spending and influence on trade unions) and consumer optimism (7-point scale). We have standardized these variables and put them in columns 11-15, so that you can use Minitab
to obtain the betas.
1. Draw an input path diagram to indicate what you consider to be a reasonable causal sequence.
2. Carry out the relevant regression analyses to obtain the path coefficients and draw the appropriate output path diagram. What would you conclude about the influence of age on inflationary expectations?
3. Calculate the values of the unexplained variances and add them to the path diagram.
Paul Webley, Stephen Lea University of Exeter Department of Psychology
Washington Singer Laboratories
Exeter EX4 4QG
United Kingdom
Tel +44 1392 264626
Fax +44 1392 264623
Document revised 21st February 1997 | {"url":"http://people.exeter.ac.uk/SEGLea/multvar2/pathanal.html","timestamp":"2014-04-16T16:20:02Z","content_type":null,"content_length":"15326","record_id":"<urn:uuid:baa5568f-4105-4218-bbbf-0e946706f9b4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Mil-Dot and Ballistic Reticle Riflescopes Work
A Look at How Mil-Dot and Ballistic Reticle Scopes Work
By Randy Wakeman
The basics of a Mil-Dot scope at the "Mil-Dot calibrated power" (normally 10x or 12x in hunting scopes, sometimes noted as just the highest power in scope instructions) is simply this: dot to dot
means about 36" @ 1000 yards, or 3.6 inches at 100 yards. Few instruction manuals that accompany Mil-Dot scopes go into useful detail. I say "about 36 inches" at 1000 yards, because it is closer to
36.000012 inches, but that is more extraneous than useful.
The actual measurement here is milliradians of angle. There is an important distinction to be made; there are two common ways to measure angle. We have just touched on "mils" or milliradians, but
more common is MOA, meaning minutes of angle. It can get confusing, but if the goal is accuracy, we need to be sure if milliradians of angle are being discussed or minutes of angle. Like metric
versus English units, it is just two different ways of defining measurements. There are 360 degrees in a circle, which translates to approximately 6.2831853072 radians in a circle.
Most range-finding or range-compensating reticles, like the ballistic plex style of reticle, are based on minutes of angle. This is a whole different ballgame, as we can forget about radians and
milliradians. One MOA equals about 1.047 inches at 100 yards. A 3 MOA ballistic plex reticle is 3.141 inches from line to line at 100 yards, 31.41 inches at 1000 yards. Not enough to fret about at
100 yards, but as both of these scopes are marketed as long range sighting systems, a wrong assumption about how a reticle is calibrated can cause you to miss your varmint at extreme long range, frustrate
you, or both. The whole point is accuracy to begin with, so we might as well be a bit accurate as to how this stuff is supposed to work from the start.
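These conversion figures are easy to verify numerically. A short Python sketch (nothing scope-specific assumed, just trigonometry) reproduces the numbers quoted above:

```python
import math

def subtension_inches(angle_rad, range_yards):
    """Linear size subtended by a small angle at the given range."""
    return math.tan(angle_rad) * range_yards * 36.0  # 36 inches per yard

# One milliradian at 1000 yards: "about 36 inches" (closer to 36.000012).
mil_at_1000 = subtension_inches(0.001, 1000)

# One minute of angle (1/60 of a degree) at 100 yards: about 1.047 inches.
one_moa = math.radians(1.0 / 60.0)
moa_at_100 = subtension_inches(one_moa, 100)

# A 3 MOA reticle gap: about 3.14 inches at 100 yards, 31.4 at 1000.
gap_100 = subtension_inches(3 * one_moa, 100)
gap_1000 = subtension_inches(3 * one_moa, 1000)

# There are 2*pi (about 6.2831853072) radians in a full circle.
print(round(mil_at_1000, 6), round(moa_at_100, 3))  # 36.000012 1.047
```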
Mil-Dot reticles, once you get the hang of them, are far more versatile. You can holdover and hold under with equal ease, and precisely allow for windage as well. With dots all over the place, it is
very easy to visualize half of a dot to dot length, or a dot and a half of length as the case may be. Mil-Dot aficionados will tell you that a Mil-Dot scope is the only "real" range compensating
scope (and range finding scope) that there is. Well, they have a point, or at least a dot!
However, there is an advantage to the big game hunter and the long-range muzzleloading hunter specifically in choosing and using a more simplistic, albeit more limited design. A ballistic plex type
reticle does not clog your field of view like a Mil-Dot, and hunting reality shows that in the vast majority of cases, neither high magnification nor holdover is used to take game animals. In this
large majority of cases, neither a Mil-Dot nor a ballistic plex style is used or of any value.
Many feel that if you are a practical hunter, you are wise to limit yourself to the maximum point blank range of your rifle. As for closer being better, well, it just always is. Use of a ballistic
plex reticle is out of the way when you don't need it, but instantly there on the rare occasion when you do. It sure beats the notion of good old "Kentucky elevation" at 300 yards.
We can also discard the 3.6 inch way of thinking for practical purposes, and just use the more intuitive (for many) 3 inches per hundred yards of range between the gates, or 9 inches at 300 yards.
With fur in the crosshairs, the less optical clutter the better.
Onto a specific application and example that will hopefully give this little missive a bit more meaning. The scope used is a Bushnell Elite 3200 4 x 12 AO with Bushnell's "Ballistic Reticle." The
gun: a Savage 10ML-II. The load: 60 grains of Vihtavouri pushing a .458 Barnes Semi-Spitzer (G1 of .291, Form factor .702, SD .204) at a muzzle velocity of 2287 fps.
A logical true zero is 150 yards. That makes us good to go without elevation correction to 190 yards, dropping 2.98 inches below line of sight at that range. At 200 yards, shift to the first tier of
our ballistic reticle. What would put us at -3.99 inches is now actually +2.01 inches thanks to the reticle. Now we are good to go again, but for a far shorter increment out to 250 yards now at (2.5
x 3 in. = 7.5 in correction subtracted from LOS of -10.82 in = -3.32). Between 200 and 250 yards, tier one does it for us.
Beyond 250, we need to shift gears again: down to line two of our reticle. That is 6 x 2.5 = 15 inches correction at 250 yards, a line of sight basis meaning +4.18 inches at 250 yards. We shift to
this 2nd line only past 250, though, and we are trajectory corrected again to 300 yards, where we find ourselves at -2.69 inches. We've not yet addressed windage, but that is a story for another day.
After 300 yards, things get ugly in a hurry. At 310 yards we will drop to the third tier. That gives us 9 inches x 3.1 = 27.9 inches of compensation. Calculated from the -23.05 LOS, we are +4.85. We
can continue to 350 yards leaving us at -2.55 inches. Beyond this, we have exceeded the ability of most to accurately place a shot, and the .45 caliber trajectory (and windage) makes continuing a
marginal affair for most.
Our bullet drops over 3 inches from 350 to 360 yards, and over 3.3 inches in addition to this from 360 to 370 yards. At 370 yards, just a 10 mph crosswind blows our bullet nearly two feet away from
our crosshairs on a stationary target.
So, though all this must naturally be 100% range verified in your individual gun to confirm, the thought process with the 150 yard zero and the Bushnell Ballistic reticle is as follows.
Inside 200 yards, take him. Between 200 and 250, use the first line beneath the crosshairs. Between 250 and 300 yards, the second line does the proper vertical compensation. Between 310 and 350
yards, the third line makes the correct compensation.
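The tier arithmetic in the walkthrough above can be collected into one small function. This is only a sketch of the simplified 3-6-9 inch-per-hundred-yards rule; the drop figures are the ones quoted in the text for this particular load, not output from a ballistics solver:

```python
def corrected_impact(drop_inches, tier, range_yards):
    """Impact relative to the aiming point, in inches.

    tier 0 is the main crosshair; tiers 1, 2, 3 are the reticle lines,
    approximated as 3, 6 and 9 inches of holdover per 100 yards.
    """
    holdover = 3.0 * tier * (range_yards / 100.0)
    return drop_inches + holdover

# 250 yards, drop 10.82 inches below line of sight:
print(round(corrected_impact(-10.82, 1, 250), 2))  # -3.32 (tier 1)
print(round(corrected_impact(-10.82, 2, 250), 2))  # 4.18  (tier 2)
# 310 yards, drop 23.05 inches:
print(round(corrected_impact(-23.05, 3, 310), 2))  # 4.85  (tier 3)
```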
With this reticle the length of the lines compensate for 10mph wind drift in concert with the specific tier you are on. Tier one has 3 minutes to the left and right of the vertical crosshair, tier
two has six minutes on either side, and tier three has nine minutes on either side.
Co-mingled with the first line above, at 200-250 yards we can easily compensate for a 10 mph cross wind. Beyond that, I believe you'll need a printout and a wind meter to have confidence in the shot.
The best bet is no crosswind at all.
With this load, a 175 yard zero is better, at least for me.
+2.46 @ 100 yards
+/- 0.0 @ 175 yards
-3.13 @ 210 yards
No elevation correction inside 210 yards
+2.31 @ 220 yards using first line of reticle
- 3.73 @ 270 yards using first line of reticle
beyond 270 to 310 yards, use 2nd line holding on the spine
Not a bad general purpose load! | {"url":"http://www.chuckhawks.com/mil-dot_scopes.htm","timestamp":"2014-04-21T09:35:53Z","content_type":null,"content_length":"11928","record_id":"<urn:uuid:d54b336c-705f-4b11-b91f-747c9b2b0867>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can't solve this function
March 10th 2013, 03:57 PM #1
Mar 2013
Can't solve this function
Hello all. I was practicing some algebra as I am about to return to college after some time away, trying other things and finding what I really want to do. I came to this problem and just cannot seem
to get the right answer. Can someone please explain how to solve this?
find f(g(x))
thank you.
Re: Can't solve this function
The answer in the back-of-the-book is $\frac{225x^4}{75x^2+18}$.
Now if you want anymore help ever, then you must reply with a detailed explaining why.
Re: Can't solve this function
I don't follow. Explaining why?
Re: Can't solve this function
Re: Can't solve this function
Actually. I got that problem from this website CLEP College Algebra | CollegePlus question 7
which gives the answer of 5625/77. I did not get that answer.
if i was "unwilling to even try and help myself" i would not have joined this forum to ask for help.
I did not know i would be answered by a condescending prick. If you are not willing to try to help people, then don't answer.
Re: Can't solve this function
Actually. I got that problem from this website CLEP College Algebra | CollegePlus question 7
which gives the answer of 5625/77. I did not get that answer.
if i was "unwilling to even try and help myself" i would not have joined this forum to ask for help.
I did not know i would be answered by a. If you are not willing to try to help people, then don't answer.
If you think that I am a condescending prick, then you have a real problem.
This is not a website for sexual perverts. You had best get your terms straight.
You say "which gives the answer of 5625/77. I did not get that answer". That is strange; it proves that you are either a troll or do not care about the truth. Which is it?
Last edited by Plato; March 10th 2013 at 06:10 PM.
Re: Can't solve this function
What? this is not the website for sexual perverts? I'm sorry, you're not an asshole, this was just a misunderstanding. my mistake.
Thank you for clearing that up. I'm truly sorry, all mighty Plato. We people who are not so good with math, who try to better educate ourselves, are not worthy of help.
Mar 2013 | {"url":"http://mathhelpforum.com/advanced-algebra/214553-can-t-solve-function.html","timestamp":"2014-04-16T14:41:03Z","content_type":null,"content_length":"50694","record_id":"<urn:uuid:08b943a4-61f4-494e-80c8-e76f6fec6bbc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Functions, Domain & Range
April 30th 2008, 06:14 AM #1
Junior Member
Apr 2008
Functions, Domain & Range
Let "A" be an n-element set and let k E N. How many functions f : A --> {0,1} are the for which there are exactly k elements in "A" with f(a)=1 ?
Note: E = "be a member of" (k E N)
N = Natural numbers
If $k>n$ the answer is 0.
If $1\le k \le n$ then the answer is $\binom{n}{k}$.
You may think of the number of ways to arrange k 1's and (n-k) 0's.
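The binomial-coefficient answer is easy to confirm by brute force for small n: enumerate all 2^n functions from an n-element set into {0,1} and count the ones taking the value 1 exactly k times. A quick Python sketch:

```python
from itertools import product
from math import comb

def count_functions(n, k):
    """Number of functions f : {1,...,n} -> {0,1} with exactly k ones,
    counted by enumerating all 2**n functions."""
    return sum(1 for f in product((0, 1), repeat=n) if sum(f) == k)

# Matches C(n, k) for 1 <= k <= n, and 0 when k > n.
for n in range(7):
    for k in range(n + 2):
        assert count_functions(n, k) == comb(n, k)
print(count_functions(5, 2))  # 10
```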
Thank you Plato!
Apr 2008 | {"url":"http://mathhelpforum.com/discrete-math/36657-functions-domain-range.html","timestamp":"2014-04-18T00:30:27Z","content_type":null,"content_length":"35065","record_id":"<urn:uuid:40d2dee0-dde9-42c8-b4b5-2f9bcdde97de>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Villanova Trigonometry Tutor
Find a Villanova Trigonometry Tutor
...However, I am also prepared to help students with the biology and chemistry questions that appear on the science test. I have been an avid fiber artist since I was a young teenager. I do
needlepoint and cross-stitch and have dozens of completed projects.
47 Subjects: including trigonometry, chemistry, English, reading
...My expertise is in the field of analytical chemistry. While my Ph.D studies were focused in the analytical area, I have significant experience in both areas of organic chemistry and
biochemistry as well. During my doctoral studies I was selected to participate in the National Science Foundation GK-12 program where I worked in the local high school classroom setting with a
science teacher.
9 Subjects: including trigonometry, chemistry, algebra 2, geometry
...I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top rated high school for the
last 10 years and my students are always among the top performers in the school. My goal is to provide students with the skills, organization, and confidence to become independent mathematics
15 Subjects: including trigonometry, calculus, geometry, algebra 1
My name is Jonathan and I live in Philadelphia, PA. I currently teach full time for the School District of Philadelphia. I am a certified math teacher for the School District of Philadelphia. For
the past 4 years I have taught 9th grade Algebra preparing students for Pennsylvania Keystone exams in Algebra.
9 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...His industrial career included technical presentations and workshops, throughout North America and Europe, to multinational companies, to NATO, and to trade delegations from China and Russia.
In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr.
10 Subjects: including trigonometry, calculus, algebra 1, GRE | {"url":"http://www.purplemath.com/Villanova_trigonometry_tutors.php","timestamp":"2014-04-18T22:04:37Z","content_type":null,"content_length":"24366","record_id":"<urn:uuid:0dec3b9e-1529-44f8-b25e-9ed2603e2296>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vernon, CA Algebra 2 Tutor
Find a Vernon, CA Algebra 2 Tutor
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math
competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including algebra 2, Spanish, chemistry, French
...I believe that with proper studying methods and careful attention to detail, any student can and will improve their score. Thank youI graduated from Duke University, and I am going to attend
UC Davis School of Veterinary Medicine next fall. As part of the requirements to be considered, a strong grade in a genetics course was required.
13 Subjects: including algebra 2, chemistry, physics, geometry
...Helping students and their families navigate the college admissions pipeline is critical if they want to be admitted to a college that’s the right fit for them. I’ve helped countless students
bring up their grades and improve their GPAs as they master crucial elementary and secondary school subjects. I can provide you with testimonials from satisfied customers.
41 Subjects: including algebra 2, English, writing, reading
...Algebra 2 can seem overwhelming if students didn't get something down correctly in Algebra 1. I help students figure out exactly where they are going wrong and correct the problem. A solid
mastery of Algebra 2 builds the self-confidence and skills for Precalculus and even Calculus.
12 Subjects: including algebra 2, calculus, SAT math, geometry
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including algebra 2, chemistry, statistics, English
Universal City, CA algebra 2 Tutors | {"url":"http://www.purplemath.com/Vernon_CA_algebra_2_tutors.php","timestamp":"2014-04-18T19:04:11Z","content_type":null,"content_length":"24155","record_id":"<urn:uuid:0bf40b35-2565-4542-af3c-d672cd7ff731>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry puzzle
4. Point that bisects or divides a segment into two congruent segments.
6. End of a segment and front of a ray.
7. Two angles whose measurements add up to 180 degrees. (2 Words)
10. Flat surface with no thickness that extends forever
12. Segments that have the same length. (2 Words)
13. Straight path with no thickness that extends forever
15. A figure formed by two rays with a common endpoint.
16. Names a location on a line
17. Starts at an endpoint and goes through another point forever.
18. Ray that divides an angle. (2 Words)
19. All points between the rays. | {"url":"http://www.armoredpenguin.com/crossword/Data/2013.09/1305/13050853.488.html","timestamp":"2014-04-20T03:12:01Z","content_type":null,"content_length":"59841","record_id":"<urn:uuid:b4070abe-476e-420b-a1b9-99489bea55b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: derivatives
Very good. The easiest questions to answer are the ones where the questioner figures it out.
I'm not sure how to go on to the third derivative.
Would I use (2x-3) as U or 48(2x-3) as U?
Use the (2x-3) for u. Constants really do not figure in the process. Just do not forget to hold on to them for the final answer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=298025","timestamp":"2014-04-20T18:35:02Z","content_type":null,"content_length":"13445","record_id":"<urn:uuid:cf3ae079-ff3d-4c31-9d1c-7153701f7254>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: qualifications for judging g.i.i.; a philosophical exercise
Stephen G Simpson simpson at math.psu.edu
Tue Mar 17 13:13:02 EST 1998
This posting is about the question of why or how f.o.m. is of general
intellectual interest. It seems that some people on the FOM list have
not seriously come to grips with this question.
Consider for instance the remarks of Torkel Franzen 17 Mar 1998
> Now, this enormous (and, I forgot to add, objective) general
> intellectual interest of f.o.m. which is independent of whether
> anybody actually takes an interest in it isn't really what people
> have in mind .... It doesn't make a great deal of sense to discuss
> the general intellectual interest of something unless one makes an
> attempt to relate it to the actual intellectual concerns of other
> people .... what is lacking is any argument linking f.o.m. to the
> actual intellectual concerns of non-specialists.
Franzen is saying that, in order to establish the general intellectual
interest of f.o.m., there is a need to "relate" or "link" f.o.m. to
unspecified "intellectual concerns of other people". But which
intellectual concerns? Is Monday night football relevant? I don't
think so, but where do we draw the line? Franzen's point has no
serious content. His proposed public relations campaign will never
lead to anything of interest.
How did Franzen arrive at such an impasse? Perhaps it's my fault for
talking about barbers way back in November. Or perhaps it is really a
profound philosophical misunderstanding, along the lines of "general
interest = vacuousness" (Heidegger? Wittgenstein?). I'll ponder this
some more.
In the meantime, let me try to clear up the immediate
misunderstanding, by making a few new points.
First, f.o.m. *already is* related to the concerns of the man in the
street. This is because f.o.m. deals with the logical structure of
mathematics, and mathematics is applied to develop technology which
benefits the man in the street, whether he understands it or not. But
note that this indirect relationship doesn't imply that f.o.m. is in
need of a public relations campaign a la Franzen.
Second, f.o.m. is a highly developed subject with a lot of great
achievements by people like Frege, Turing, G"odel, Cohen, et al, and
is also closely tied to mathematics itself, which has its own body of
problems and techniques. Clearly nobody can expect to get a
*detailed* understanding of f.o.m. and its intellectual significance,
let alone specific f.o.m. advances, without mastering at least some of
the relevant background material.
The generality of "general intellectual interest" does not imply that
any and every ignorant lout is especially well qualified to judge such
matters by virtue of his ignorance. Actually, the opposite is the
case: the *more* you know, the better qualified you are to judge such
What then *is* the general intellectual interest of advances in
f.o.m.? Ultimately this is a philosophical matter, concerned with the
place of mathematics in the structure of human knowledge as a whole.
If you can gain even a little bit of new insight with respect to such
matters, and if you can correctly formulate your new insight in
appropriately broad and objective terms, then "general intellectual
interest" (g.i.i.) is an appropriate accolade. Of course there are
degrees here, and not all f.o.m. advances are of equal g.i.i., but the
principle is clear.
In the case of Friedman's recent results on "greedy Ramsey theory", I
think an appropriate g.i.i. formulation would read something like the
A coherent body of results in finite mathematics, related to data
structures which are familiar in computer applications, have been
shown to be provable only by use of speculative mathematical axioms.
These speculative axioms are strong axioms of infinity which go far
beyond the standard mathematical axioms which have hitherto sufficed
for the bulk of mathematical practice. Such use of speculative
axioms is unprecedented.
As a useful series of philosophical exercises, let's try to give
comparable g.i.i. formulations of other well known high points of
f.o.m. research. I have in mind advances such as Frege's invention of
the predicate calculus, G"odel's completeness and incompleteness
theorems, Turing's work on computability, the consistency and
independence of the continuum hypothesis, the large cardinal
hierarchy, the relationship between determinacy and large cardinals,
These exercises are philosophical, in the highest sense of the term,
because each of them involves a ruthless pruning away of all that is
inessential with respect to general intellectual interest.
-- Steve
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-March/001562.html","timestamp":"2014-04-20T01:13:36Z","content_type":null,"content_length":"7223","record_id":"<urn:uuid:09e2b729-cc90-4db8-9b6f-82cad314be85>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
College Park Math Tutor
Find a College Park Math Tutor
...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information
from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including trigonometry, SAT math, linear algebra, precalculus
John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information
Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for t...
18 Subjects: including ACT Math, geometry, SAT math, prealgebra
I am currently an 8th grade math teacher for Anne Arundel County Public Schools. I have previously taught a wide variety of math subjects from 7th grade through entry level college classes. My
previous clients have gone on to significantly increase their score on their standardized tests as well as raise their class grades by an average of 1.5 letter grades.
12 Subjects: including linear algebra, algebra 1, algebra 2, geometry
...During that time, I have taught Biology, Anatomy and Physiology, Biotechnology, and Environmental Science classes. Prior to and in addition to my classroom teaching experience, I have worked as
a tutor for seven years. I have experience tutoring middle school, high school, and adult students in Biology, Basic Math, Pre-Algebra, Algebra I and II, Geometry, and SAT Preparation.
12 Subjects: including algebra 1, algebra 2, biology, geometry
I am currently a senior in high school and I am completely open to negotiating tutoring prices since I've yet to prove myself. My strongest subjects are mathematics and reading. Last year I was
able to pass my Advanced Placement Calculus AB class with an A average and I passed the AP exam with a 4...
7 Subjects: including algebra 1, algebra 2, calculus, reading | {"url":"http://www.purplemath.com/college_park_math_tutors.php","timestamp":"2014-04-20T11:24:02Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:181110d2-5172-4c1c-9327-871670bf7044>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
San José State University
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley
USA

The Role of the Electrostatic Repulsion of Protons in the Estimate of the Ratio of the Nucleon Charge of a Neutron to that of a Proton Based on the Limits of Nuclear Stability
Nuclei are held together by the mutual attraction between neutrons and protons. Neutrons are repelled from each other through the strong force. Protons are also repelled from each other not only
through the electrostatic force but also through the strong force. Therefore there has to be some balance between the number of neutrons and the number of protons for a nucleus to hold together. If
there are too many protons compared to the number of neutrons the repulsion between the protons overwhelms the attraction between neutrons and protons. Likewise if there are too few protons the
repulsion between the neutrons overwhelms the neutron-proton attraction. There is an asymmetry between the numbers of neutrons and protons that indicates the strength of the repulsion between protons
due to the strong force is greater than that between neutrons. The strong force drops off faster with distance than the electrostatic force so the electrostatic repulsion between protons becomes
relatively stronger in larger nuclides where the average distance between protons becomes greater. The situation is made more complicated by the fact that neutrons form spin pairs with each despite
their mutual repulsion and protons do likewise. Spin pair formation is relatively more important for the smaller nuclides.
There are 2931 nuclides stable enough to have had their masses measured and their binding energies computed. For each number of neutrons the minimum number and the maximum number of protons were
compiled. The results are displayed in the following graph.
In the graph there is some piecewise linearity displayed.
A previous study developed evidence that the nucleonic (strong force) charge of a neutron is of the opposite sign and smaller in magnitude than that of a proton. Let ν denote the ratio of the
nucleonic charge of a neutron to that of a proton. The actual value of ν is undoubtedly a simple fraction. Previous work indicated that the relative magnitude of the neutron charge could be 2/3 or 3/
4. Furthermore such a difference in charge of the nucleons can account for the limits to the values of the proton numbers of the known nuclides, shown above.
Another study demonstrated that the binding energy increment experienced by an additional nucleon to a nuclide is a function of two components. One is simply the difference in the number of protons
and neutrons in the nuclide. This component has to do with the formation of a neutron-proton spin pair. The other component has to do with the interaction of nucleons through the strong force and it
is a function of the net nucleonic charge of the nuclide. If p and n are the numbers of protons and neutrons, respectively, of the nuclide then the net nucleonic charge ζ is
ζ = p − νn
where ν is the magnitude of the nucleonic charge of the neutron relative to that of a proton.
The binding energy associated with the interaction of nucleons through the strong force is a nonlinear function of ζ, but for small values of ζ it is, to a reasonable approximation, kζ, where k is a
constant. Nucleons also interact through the formation of spin pairs. For example, the addition of another neutron to a nuclide with an odd number of neutrons would result in the formation of a
neutron-neutron spin. Let E[nn] be the binding energy associated with the formation of a neutron-neutron spin pair. If there are unpaired protons in the nuclide the addition of another neutron would
result in the formation of a neutron-proton spin pair with a binding energy of E[np]. The binding energies associated with the formation of spin pairs are not really constants independent of the
levels of n and p but for the present they are assumed to be constants.
An Additional Neutron
The incremental binding energy of a neutron, IBEn, is the binding energy change associated with the addition of another neutron to a nuclide. For a nuclide with p protons and n neutrons in which n is
odd and less than p it is
IBEn = kζ + E[nn] + E[np]
or, expanded
IBEn = k(p−νn) + E[nn] + E[np]
For stability the values of n and p must be such that IBEn≥0. The minimum number of protons for a nuclide with n neutrons is reached when IBEn is driven to zero. This means that
k(p[min] − νn) + E[nn] + E[np] = 0
and hence
p[min] = νn − E[nn]/k − E[np]/k
The maximum number of neutrons is also where IBEn=0 and hence
k(p − νn[max]) + E[nn] + E[np] = 0
and hence
n[max] = (1/ν)p + E[nn]/(kν) + E[np]/(kν)
An Additional Proton
In addition to the factors which affect neutrons, protons are subject to electrostatic repulsion by other protons. This factor is thought to be effective only in large nuclides, where the average
distance between protons becomes large. The effect of this repulsion on the binding energy for an additional proton is negative and proportional to the number of protons in the nuclide, say −qp. The
binding energy of an additional proton to a nuclide with p protons and n neutrons in which p is odd and less than n is
IBEp = −kζ −qp + E[pp] + E[np]
and thus
IBEp = kνn − (k+q)p + E[pp] + E[np]
For IBEp to be greater than or equal to zero requires a minimum n of
n[min] = (1/ν)((k+q)/k)p − E[pp]/(kν) − E[np]/(kν)
or, equivalently
n[min] = (1/ν)(1+q/k)p − E[pp]/(kν) − E[np]/(kν)
The maximum p is the value such that IBEp=0; i.e.,
p[max] = (k/(k+q))νn + E[pp]/(k+q) + E[np]/(k+q)
Now the equations for the maximum and minimum number of neutrons can be displayed together; i.e.,
n[max] = (1/ν)p + E[nn]/(kν) + E[np]/(kν)
n[min] = (1/ν)(1+q/k)p − E[pp]/(kν) − E[np]/(kν)
If these two equations are added together and the result divided by 2, the result is
n[mid] = (1/ν)(1+½q/k)p + (E[nn]−E[pp])/(2kν)
The equations for the maximum and minimum number of protons are
p[max] = (k/(k+q))νn + E[pp]/(k+q) + E[np]/(k+q)
p[min] = νn − E[nn]/k − E[np]/k
The average of these two equations is
p[mid] = ((2k+q)/(2(k+q)))νn + E[pp]/(2(k+q)) − E[nn]/(2k) − E[np](1/k − 1/(k+q))/2
or, equivalently
p[mid] = ((1+½q/k)/(1+q/k))νn + E[pp]/(2(k+q)) − E[nn]/(2k) − E[np](1/k − 1/(k+q))/2
In order to illustrate the effect of the electrostatic force suppose the true value of ν were 0.75 and q/k=0.25. Then the slope of the relationship between p[mid] and n would be (1.125/1.25)(.75)=
0.675. The slope of the relationship between n[mid] and p would be (1/0.75)(1.125)=1.5.
The regression coefficient for p[mid] on n is 0.67264 and that of n[mid] on p is 1.37559.* Thus
((1+½q/k)/(1+q/k))ν = 0.67264
(1/ν)(1+½q/k) = 1.37559
If these two equations are multiplied together the result is
(1+½q/k)²/(1+q/k) = 0.92528
or, equivalently
1 + q/k + (q/k)²/4 = 0.92528 + 0.92528(q/k)
0.074723 + 0.074723(q/k) + 0.25(q/k)² = 0
The solutions to this quadratic equation are
q/k = (−0.074723 ± (0.0055835 − 0.074723)^½)/0.5
These roots are not real. Therefore the electrostatic repulsion of the protons cannot explain the discrepancy of the results for the slopes of the equations for n[mid] and p[mid]. This is a negative
result but nevertheless an important one.
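The negative result can be checked numerically. The following sketch assumes only the regression coefficients quoted above and recomputes the slope product and the discriminant of the quadratic in q/k:

```python
# Regression coefficients quoted in the text
slope_p_on_n = 0.67264   # slope of p[mid] on n
slope_n_on_p = 1.37559   # slope of n[mid] on p

# Multiplying the two slope equations eliminates nu, leaving
# (1 + (q/k)/2)^2 / (1 + q/k) = product
product = slope_p_on_n * slope_n_on_p
print(round(product, 5))          # ~0.92528

# Rearranged quadratic in x = q/k:
#   0.25 x^2 + (1 - product) x + (1 - product) = 0
a, b, c = 0.25, 1.0 - product, 1.0 - product
discriminant = b * b - 4 * a * c
print(discriminant)               # negative, so no real roots
```

A negative discriminant confirms that no real value of q/k reconciles the two fitted slopes.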
*For the analyses in terms of the maximum and minimum numbers of neutrons and protons see Nuclear Charge Ratio and Nuclear Charge Ratio 2.
Einstein's geometric gravity
The key idea of Einstein's theory of general relativity is that gravity is not an ordinary force, but rather a property of space-time geometry. The following simplified analogy, which substitutes a
two-dimensional surface for four-dimensional space-time, serves to illustrate this idea.
Imagine empty space - in our case, a two-dimensional plane - with no forces acting between the bodies floating around. If there are no forces, then classical mechanics and Einstein's mechanics of
special relativity are in agreement: Under these circumstances, bodies move along the straightest possible lines (which in this case are just straight lines in space) with a constant velocity. In the
following image, this is symbolized by the straight paths of two particles A and B:
In particular, particles that start to move along parallel trajectories (as in the above image) will never meet, but are fated to remain forever at a constant distance from one another.
In the world of classical physics, if particles diverge from this behavior, it must be because there is a force acting on them. Forces accelerate particles, causing them to leave the straightest
possible paths and follow curved trajectories instead. In our two-dimensional example, look at the following picture,
in which the particles A and B start out in parallel, but are then accelerated towards one another. In Newton's theory of gravity, gravitation is a force which could cause such an effect. For
instance, the reason that the two particles in the above picture accelerate toward each other and then meet could be that they are both attracted gravitationally by a massive body located at the
point of their meeting.
However, there is another possibility in which the same situation (where two particles that start out in parallel converge and finally meet) could arise. The two particles could still be moving on
the straightest possible lines - not in the plane, but on a curved surface! The following image shows an example:
In that situation, there is no force making the particles deviate from the straightest possible lines; the mere fact that the particles are moving on a sphere means that, even if they still move as
straight as possible, their paths will converge.
Einstein's theory is exactly analogous to this. In Newton's theory, gravity makes particles leave their straight paths. In Einstein's theory of general relativity, gravity is a distortion of
space-time. Particles still follow the straightest possible paths in that space-time. But because space-time is now distorted, even on those straightest paths, particles accelerate as if they were
under the influence of what Newton called the gravitational force.
The Random World
Copyright © University of Cambridge. All rights reserved.
Did you know that the world is locked in a perpetual struggle between order and chaos? Don't worry, nature is designed this way. To explore these ideas, read on...
Take a look around the room in which you sit. Does the room look the same as yesterday, or are there differences? Perhaps it is towards the end of the day and things have become untidy; perhaps it is
the start of the day and things are neatly in their place. Take a look out of the window. Is the weather the same as yesterday, or is it different? Which is more likely to have changed? The weather,
or the room? Whilst it is tempting to say that the weather is more changeable than the room, both will have changed to some degree, but both will also be similar to some degree. Do you think that it
makes sense to say that one has changed more than the other?
In fact, the entire physical world is continually subject to change at either the microscopic (small) or macroscopic (large) level. Consider something which does not appear to be changing, such as
your desk. Imagine looking closer and closer at the desk through a microscope. As you reach down to the scale of the molecules in the desk you would see that they are continually oscillating and
vibrating: the entire structure of the desk is vibrating, yet forces between the molecules keep the desk roughly in the same place, so that, at our human level, the desk appears to be inert. Consider
the wind. A leaf or piece of dust on a windy day is blown this way and that, backwards and forwards, up and down. The molecules in the air also vibrate continuously, yet are not bound to each other
by atomic forces, they are free to move independently of each other. Yet, despite this inter-atomic freedom, the wind, roughly speaking, will be moving in one overall direction at any given moment in
time. In short, the world is in a constant struggle between order and chaos: at any level you will simultaneously see some order and some chaos.
We live in a world of scales around 1,000,000,000 times bigger than the world of atoms.
In short, at its very foundation, the physical world is full of randomness, but within this randomness emerges some averaged order. Mathematicians studying random processes need to try to find
structure within the randomness: whilst individual elements of a problem will vary at random, can we describe with some quantified degree of certainty an overall outcome?
Even the most trivial of random processes become complicated when considered in detail. The simplest example of a random process is the tossing of a coin: flip a coin and it lands heads or tails.
Typically, we say that the chance of heads or tails is fifty-fifty; toss a coin one time and one result is just as likely as the other. Is this exactly fifty-fifty? If you look at a coin then it is
not the same on each side; one side has the raised image of a head and one has a different raised image. Are both sides exactly the same? I can easily balance a coin on its edge; could the coin not
land on its edge every so often when tossed? Does this alter the odds of a head? At what point in time does the result of the coin toss become a certainty? When it has completely come to rest
(according to the human eye, at least)? Watched under slow motion, the coin will gradually come to a stop with a few small bounces, maybe whilst spinning. Once these bounces are small enough, we
would be able to predict with more certainty the outcome of the toss, yet the coin has not come to rest. In fact, extrapolating backwards, if we took a snapshot of the motion even before the coin has
hit the ground for the first time, would we be able to analyse the motion on a computer to be able to work out whether the coin was going to land face up or face down?
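The tension between individual unpredictability and averaged order can be seen in a short simulation; this is a sketch assuming an idealised, exactly fifty-fifty coin:

```python
import random

random.seed(0)  # fix the randomness so the run is reproducible

def fraction_of_heads(n_tosses):
    """Toss an idealised fair coin n_tosses times; return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# Any single toss is unpredictable, but the running average settles near 1/2
for n in (10, 1_000, 100_000):
    print(n, fraction_of_heads(n))
```

No individual toss can be predicted, yet the proportion of heads stabilises as the number of tosses grows — exactly the averaged order described above.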
Another, often quoted, example of a random process is that of radioactive decay: a lump of radioactive material, such as uranium, will, over time break up into pieces, emitting radiation. This is
more than simply the lump of uranium falling into two pieces: individual atoms split apart, forming new atoms, such as iron. At the sub-atomic level, matter fundamentally evolves according to random
laws, called quantum mechanics . Quantum mechanics states that any certain prediction is not just difficult but impossible: atoms are likely to follow certain set-paths, but could, randomly, do
literally anything. For example, if all atoms in your desk spontaneously moved left by 2cm, then your entire desk would move left by 2cm. However, the chance of anything so exciting occurring is
impossibly small, as atoms are just as likely randomly to move to the left as to the right: overall, the random motions almost exactly cancel out in a solid, leaving an apparently static structure.
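Radioactive decay can be modelled in the same spirit: each atom decays independently and at random, yet the population as a whole falls off smoothly. A sketch (the decay probability per step is an illustrative value, not a physical constant):

```python
import random

random.seed(1)

def simulate_decay(n_atoms, p_decay, steps):
    """Each surviving atom independently decays with probability p_decay per step."""
    remaining = [n_atoms]
    alive = n_atoms
    for _ in range(steps):
        alive -= sum(random.random() < p_decay for _ in range(alive))
        remaining.append(alive)
    return remaining

counts = simulate_decay(n_atoms=10_000, p_decay=0.1, steps=10)
print(counts)  # individually random, collectively close to 10000 * 0.9**t
```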
In the study of any random process, there is a balance to be made between the major effects of the problem and minor complexities. Once this balance is correctly struck, mathematicians are able
meaningfully to model the order which underlies the chaos of the real world. Their predictions are powerful and wide ranging.
Methods of Proof — Contrapositive
In this post we’ll cover the second of the “basic four” methods of proof: the contrapositive implication. We will build off our material from last time and start by defining functions on sets.
Functions as Sets
So far we have become comfortable with the definition of a set, but the most common way to use sets is to construct functions between them. As programmers we readily understand the nature of a
function, but how can we define one mathematically? It turns out we can do it in terms of sets, but let us recall the desired properties of a function:
• Every input must have an output.
• Every input can only correspond to one output (the functions must be deterministic).
One might try at first to define a function in terms of subsets of size two. That is, if $A, B$ are sets then a function $f: A \to B$ would be completely specified by
$\displaystyle \left \{ \left \{ x, y \right \} : x \in A, y \in B \right \}$
where to enforce those two bullets, we must impose the condition that every $x \in A$ occurs in one and only one of those subsets. Notationally, we would say that $y = f(x)$ means $\left \{ x, y \
right \}$ is a member of the function. Unfortunately, this definition fails miserably when $A = B$, because we have no way to distinguish the input from the output.
To compensate for this, we introduce a new type of object called a tuple. A tuple is just an ordered list of elements, which we write using round brackets, e.g. $(a,b,c,d,e)$.
As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided:
$\displaystyle (a,b) = \left \{ a, \left \{ a, b \right \} \right \}$
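The puzzle can be explored concretely. Using Python's frozenset to stand in for a set, the definition above distinguishes order while still allowing repetition (a sketch; it assumes simple hashable elements such as integers):

```python
def pair(a, b):
    """Encode the ordered pair (a, b) as the set {a, {a, b}}."""
    return frozenset({a, frozenset({a, b})})

print(pair(1, 2) == pair(2, 1))  # False: order is remembered
print(pair(1, 2) == pair(1, 2))  # True
print(pair(1, 1))                # repetition collapses to the set {1, {1}}
```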
And so a function $f: A \to B$ is defined to be a list of ordered pairs where the first thing in the pair is an input and the second is an output:
$\displaystyle f = \left \{ (x, y) : x \in A, y \in B \right \}$
Subject to the same conditions, that each $x$ value from $A$ must occur in one and only one pair. And again by way of notation we say $y = f(x)$ if the pair $(x,y)$ is a member of $f$ as a set. Note
that the concept of a function having “input and output” is just an interpretation. A function can be viewed independent of any computational ideas as just a set of pairs. Often enough we might not
even know how to compute a function (or it might be provably uncomputable!), but we can still work with it abstractly.
It is also common to call functions “maps,” and to define “map” to mean a special kind of function (that is, with extra conditions) depending on the mathematical field one is working in. Even in
other places on this blog, “map” might stand for a continuous function, or a homomorphism. Don’t worry if you don’t know these terms off hand; they are just special cases of functions as we’ve
defined them here. For the purposes of this series on methods of proof, “function” and “map” and “mapping” mean the same thing: regular old functions on sets.
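These set-theoretic conditions translate directly into code. The sketch below (the helper names are my own, not from the post) checks that a set of pairs really is a function on a given domain before evaluating it:

```python
def is_function(pairs, domain):
    """True iff every element of `domain` occurs exactly once as a first coordinate."""
    firsts = [x for x, _ in pairs]
    return sorted(firsts) == sorted(domain)

def apply(pairs, x):
    """y = f(x) means the pair (x, y) is a member of f."""
    for a, b in pairs:
        if a == x:
            return b
    raise ValueError(f"{x} is not in the domain")

f = {(1, 'a'), (2, 'b'), (3, 'a')}             # outputs may repeat; inputs may not
print(is_function(f, {1, 2, 3}))               # True
print(is_function({(1, 'a'), (1, 'b')}, {1}))  # False: not deterministic
print(apply(f, 2))                             # 'b'
```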
One of the most important and natural properties of a function is that of injectivity.
Definition: A function $f: A \to B$ is an injection if whenever $a \neq a'$ are distinct members of $A$, then $f(a) \neq f(a')$. The adjectival version of the word injection is injective.
As a quick side note, it is often the convention for mathematicians to use a capital letter to denote a set, and a lower-case letter to denote a generic element of that set. Moreover,
the apostrophe on the $a'$ is called a prime (so $a'$ is spoken, “a prime”), and it’s meant to denote a variation on the non-prime’d variable $a$ in some way. In this case, the variation is that $a' \neq a$.
So even if we had not explicitly mentioned where the $a, a'$ objects came from, the knowledgeable mathematician (which the reader is obviously becoming) would be reasonably certain that they come
from $A$. Similarly, if I were to lackadaisically present $b$ out of nowhere, the reader would infer it must come from $B$.
One simple and commonly used example of an injection is the so-called inclusion function. If $A \subset B$ are sets, then there is a canonical function representing this subset relationship, namely
the function $i: A \to B$ defined by $i(a) = a$. It should be clear that non-equal things get mapped to non-equal things, because the function doesn’t actually do anything except change perspective
on where the elements are sitting: two nonequal things sitting in $A$ are still nonequal in $B$.
Another example is that of multiplication by two as a map on natural numbers. More rigorously, define $f: \mathbb{N} \to \mathbb{N}$ by $f(x) = 2x$. It is clear that whenever $x \neq y$ as natural
numbers then $2x \neq 2y$. For one, $x, y$ must have differing prime factorizations, and so must $2x, 2y$ because we added the same prime factor of 2 to both numbers. Did you catch the quick proof by
direct implication there? It was sneaky, but present.
Now the property of being an injection can be summed up by a very nice picture:
The arrows above represent the pairs $(x,f(x))$, and the fact that no two arrows end in the same place makes this function an injection. Indeed, drawing pictures like this can give us clues about the
true nature of a proposed fact. If the fact is false, it’s usually easy to draw a picture like this showing so. If it’s true, then the pictures will support it and hopefully make the proof obvious.
We will see this in action in a bit (and perhaps we should expand upon it later with a post titled, “Methods of Proof — Proof by Picture”).
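For a finite domain the definition can be tested mechanically: collect the outputs and see whether any two collide. A sketch:

```python
def is_injective(f, domain):
    """f is injective on `domain` iff distinct inputs give distinct outputs."""
    outputs = [f(x) for x in domain]
    return len(set(outputs)) == len(outputs)

print(is_injective(lambda x: 2 * x, range(100)))    # True: doubling never collides
print(is_injective(lambda x: x * x, range(-5, 6)))  # False: (-5)**2 == 5**2
```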
There is another, more subtle concept associated with injectivity, and this is where its name comes from. The word “inject” gives one the mental picture that we’re literally placing one set $A$
inside another set $B$ without changing the nature of $A$. We are simply realizing it as being inside of $B$, perhaps with different names for its elements. This interpretation becomes much clearer
when one investigates sets with additional structure, such as groups, rings, or topological spaces. Here the word “injective mapping” much more literally means placing one thing inside another
without changing the former’s structure in any way except for relabeling.
In any case, mathematicians have the bad (but time-saving) habit of implicitly identifying a set with its image under an injective mapping. That is, if $f :A \to B$ is an injective function, then one
can view $A$ as the same thing as $f(A) \subset B$. That is, they have the same elements except that $f$ renames the elements of $A$ as elements of $B$. The abuse comes in when they start saying $A \
subset B$ even when this is not strictly the case.
Here is an example of this abuse that many programmers commit without perhaps noticing it. Suppose $X$ is the set of all colors that can be displayed on a computer (as an abstract set; the elements
are “this particular green,” “that particular pinkish mauve”). Now let $Y$ be the set of all finite hexadecimal numbers. Then there is an obvious injective map from $X \to Y$ sending each color to
its 6-digit hex representation. The lazy mathematician would say “Well, then, we might as well say $X \subset Y$, for this is the obvious way to view $X$ as a set of hexadecimal numbers.” Of course
there are other ways (try to think of one, and then try to find an infinite family of them!), but the point is that this is the only way that anyone really uses, and that the other ways are all just
“natural relabelings” of this way.
The precise way to formulate this claim is as follows, and it holds for arbitrary sets and arbitrary injective functions. If $g, g': X \to Y$ are two such ways to inject $X$ inside of $Y$, then there
is a function $h: Y \to Y$ such that the composition $hg$ is precisely the map $g'$. If this is mysterious, we have some methods the reader can use to understand it more fully: give examples for
simplified versions (what if there were only three colors?), draw pictures of “generic looking” set maps, and attempt a proof by direct implication.
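For finite sets the claim can be verified directly: given two injections g and g' of X into Y, one can build a relabeling h of Y with h∘g = g'. The sketch below uses the colour example (the colour names, hex strings, and the function name `relabel` are illustrative, not from the post):

```python
def relabel(g, g_prime, Y):
    """Build h: Y -> Y with h(g(x)) == g_prime(x) for every x.
    Off the image of g, h is left as the identity (any choice would do)."""
    h = {y: y for y in Y}
    for x in g:               # g injective, so no conflicting assignments
        h[g[x]] = g_prime[x]
    return h

Y = {'ff0000', '00ff00', '0000ff', 'ffffff'}
g       = {'red': 'ff0000', 'green': '00ff00', 'blue': '0000ff'}
g_prime = {'red': '00ff00', 'green': '0000ff', 'blue': 'ff0000'}  # a second injection

h = relabel(g, g_prime, Y)
print(all(h[g[x]] == g_prime[x] for x in g))  # True: h composed with g equals g'
```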
Proof by Contrapositive
Often times in mathematics we will come across a statement we want to prove that looks like this:
If X does not have property A, then Y does not have property B.
Indeed, we already have: to prove a function $f: X \to Y$ is injective we must prove:
If x is not equal to y, then f(x) is not equal to f(y).
A proof by direct implication can be quite difficult because the statement gives us very little to work with. If we assume that $X$ does not have property $A$, then we have nothing to grasp and
jump-start our proof. The main (and in this author’s opinion, the only) benefit of a proof by contrapositive is that one can turn such a statement into a constructive one. That is, we can write “p
implies q” as “not q implies not p” to get the equivalent claim:
If Y has property B then X has property A.
This rewriting is called the “contrapositive form” of the original statement. It’s not only easier to parse, but also probably easier to prove because we have something to grasp at from the
To the beginning mathematician, it may not be obvious that “if p then q” is equivalent to “if not q then not p” as logical statements. To show that they are requires a small detour into the idea of a
“truth table.”
In particular, we have to specify what it means for “if p then q” to be true or false as a whole. There are four possibilities: p can be true or false, and q can be true or false. We can write all of
these possibilities in a table.
p | q
T | T
T | F
F | T
F | F
If we were to complete this table for “if p then q,” we’d have to specify exactly which of the four cases correspond to the statement being true. Of course, if the p part is true and the q part is
true, then “p implies q” should also be true. We have seen this already in proof by direct implication. Next, if p is true and q is false, then it certainly cannot be the case that truth of p implies
the truth of q. So this would be a false statement. Our truth table so far looks like
p | q | p -> q
T | T |   T
T | F |   F
F | T |   ?
F | F |   ?
The next question is what to do if the premise p of “if p then q” is false. Should the statement as a whole be true or false? Rather than enter a belated philosophical discussion, we will zealously
define an implication to be true if its hypothesis is false. This is a well-accepted idea in mathematics called vacuous truth. And although it seems to make awkward statements true (like “if 2 is odd
then 1 = 0″), it is rarely a confounding issue (and more often forms the punchline of a few good math jokes). So we can complete our truth table as follows
p | q | p -> q
T | T |   T
T | F |   F
F | T |   T
F | F |   T
Now here’s where contraposition comes into play. If we’re interested in determining when “not q implies not p” is true, we can add these to the truth table as extra columns:
p | q | p -> q | not q | not p | not q -> not p
T | T |   T    |   F   |   F   |       T
T | F |   F    |   T   |   F   |       F
F | T |   T    |   F   |   T   |       T
F | F |   T    |   T   |   T   |       T
As we can see, the two columns corresponding to “p implies q” and “not q implies not p” assume precisely the same truth values in all possible scenarios. In other words, the two statements are
logically equivalent.
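The equivalence shown in the table can also be checked mechanically by enumerating all four truth assignments (a sketch):

```python
from itertools import product

def implies(p, q):
    # "if p then q" is false only when p is true and q is false (vacuous truth otherwise)
    return (not p) or q

rows = list(product([True, False], repeat=2))
print(all(implies(p, q) == implies(not q, not p) for p, q in rows))  # True
```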
And so our proof technique for contrapositive becomes: rewrite the statement in its contrapositive form, and proceed to prove it by direct implication.
Examples and Exercises
Our first example will be completely straightforward and require nothing but algebra. Let’s show that the function $f(x) = 7x - 4$ is injective. Contrapositively, we want to prove that if $f(x) = f
(x')$ then $x = x'$. Assuming the hypothesis, we start by supposing $7x - 4 = 7x' - 4$. Applying algebra, we get $7x = 7x'$, and dividing by 7 shows that $x = x'$ as desired. So $f$ is injective.
This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with $\neq$ the same way it does with equality. In fact, many of the things we
take for granted about equality fail with inequality (for instance, if $a \neq b$ and $b \neq c$ it need not be the case that $a \neq c$). The contrapositive method allows us to use our algebraic skills in
a straightforward way.
Next let’s prove that the composition of two injective functions is injective. That is, if $f: X \to Y$ and $g: Y \to Z$ are injective functions, then the composition $gf : X \to Z$ defined by $gf
(x) = g(f(x))$ is injective.
In particular, we want to prove that if $x \neq x'$ then $g(f(x)) \neq g(f(x'))$. Contrapositively, this is the same as proving that if $g(f(x)) = g(f(x'))$ then $x=x'$. Well by the fact that $g$ is
injective, we know that (again contrapositively) whenever $g(y) = g(y')$ then $y = y'$, so it must be that $f(x) = f(x')$. But by the same reasoning $f$ is injective and hence $x = x'$. This proves
the statement.
This was a nice symbolic proof, but we can see the same fact in a picturesque form as well:
If we maintain that any two arrows in the diagram can’t have the same head, then following two paths starting at different points in $X$ will never land us at the same place in $Z$. Since $f$ is
injective we have to travel to different places in $Y$, and since $g$ is injective we have to travel to different places in $Z$. Unfortunately, this proof cannot replace the formal one above, but it
can help us understand it from a different perspective (which can often make or break a mathematical idea).
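The composition fact is easy to sanity-check on a finite domain, reusing the same brute-force injectivity test (a sketch):

```python
def is_injective(f, domain):
    outputs = [f(x) for x in domain]
    return len(set(outputs)) == len(outputs)

f = lambda x: x + 10          # injective
g = lambda y: 3 * y           # injective
gf = lambda x: g(f(x))        # the composition g . f

domain = range(50)
image_of_f = [f(x) for x in domain]
print(is_injective(f, domain), is_injective(g, image_of_f),
      is_injective(gf, domain))  # True True True
```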
Expanding upon this idea we give the reader a challenge: Let $A, B, C$ be finite sets of the same size. Prove or disprove that if $f: A \to B$ and $g: B \to C$ are (arbitrary) functions, and if the
composition $gf$ is injective, then both of $f, g$ must be injective.
Another exercise which has a nice contrapositive proof: prove that if $A,B$ are finite sets and $f:A \to B$ is an injection, then $A$ has at most as many elements as $B$. This one is particularly
susceptible to a “picture proof” like the one above. Although the formal name for the fact one uses to prove this is the pigeonhole principle, it’s really just a simple observation.
Aside from inventing similar exercises with numbers (e.g., if $ab$ is odd then $a$ is odd or $b$ is odd), this is all there is to the contrapositive method. It’s just a direct proof disguised behind
a fact about truth tables. Of course, as is usual in more advanced mathematical literature, authors will seldom announce the use of contraposition. The reader just has to be watchful enough to notice it.
Though we haven’t talked about either the real numbers $\mathbb{R}$ nor proofs of existence or impossibility, we can still pose this interesting question: is there an injective function from $\mathbb
{R} \to \mathbb{N}$? In truth there is not, but as of yet we don’t have the proof technique required to show it. This will be our next topic in the series: the proof by contradiction.
Until then!
10 thoughts on “Methods of Proof — Contrapositive”
1. Thanks for writing this. I learned several new things.
Some comments:
>> Let’s show that the function f(x) = 7x – 4 is injective. Contrapositively, … This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra
works with \neq the same way it does with equality.
Can’t we just say: Let x != x’, Implies 7x != 7x’. Implies 7x-4 != 7x’-4. Implies f(x) != f(x’). Direct implication. In any case, I see your point about contrapositive being helpful sometimes.
>> Expanding upon this idea we give the reader a challenge: Let A, B, C be finite sets of the same size. Prove or disprove that if f: A \to B and g: B \to C are injective functions, and if the
composition gf is injective, then both of f, g must be injective.
There seems to be some typo here. If A \to B and g: B \to C are already injective, there is nothing left to prove. I guess you mean that if f and g are functions and their composition is
injective, then they individually must also be injective if the three sets are of the same finite size.
>> prove that if A,B are finite sets and f:A \to B is an injection, then A has fewer elements than B
Why cannot A have the same number of elements as B?
>> As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided: \displaystyle (a,b) = \left \{ a, \
left \{ b \right \} \right \}
Can you expand upon this please? :-) I had always wondered about formal definitions of ordered sets.
□ >> Can’t we just say: Let x != x’, Implies 7x != 7x’. Implies 7x-4 != 7x’-4. Implies f(x) != f(x’).
Yes you can, but in one view this relies on the implicit fact that multiplication by 7 and subtraction by 4 are injective functions. Thanks for catching those typos.
In regards to ordered pairs, the definition is nice because it allows for repetition and maintains order. For repetition, {a, {a}} != {a, a} = {a}. And for order if you want to check whether
a or b comes first in (a,b), check if a is an element of (a,b) = {a, {b}}. If it is, then a comes first, and if it’s instead {{a}, b} = (b,a), then {a} must in it and we know that b comes
first. To extend this, we can define (a,b,c,…,y,z) = (a, (b, (c, … (y,z)))) just as a programmer might define linked lists.
And although it seems to make awkward statements true (like “if 2 is prime then 1 = 0″)
Am I being an idiot here? 2 is prime, so the antecedent is true and the consequent is false, which makes the implication statement false.
3. If 2 is prime, 1=0 is false because 2 IS prime but 1 != 0. You might want to change that example.
□ Oh man, I do stuff like this all the time. It cracks me up :)
4. I’m not sure that your ordered pair definition works. Is {{1},{2}} the same as (1, {2}) or (2, {1})?
□ You’re right, there is a problem, but your examples don’t show it. I’ve updated the post with an alternative definition that works. You can find a list of the many possible definitions here:
☆ I have checked the wikipedia link you provided below and seen the definition proposed by Kuratowski, but it made me more confused because to proof its correctness Kuratowski says that:
let Y belong to p, i understand p represents the tuple but what is Y in his proof, is it a set that belongs to the tuple !?
☆ Yes, Y is a set in his proof. The reason is that there is no such thing as a “tuple” yet (a tuple is defined as a set of sets).
5. Thanks!
David Carlson
1. A note on the dimension of an orbital subspace. Linear Algebra and its Applications, 17(1977) 283-286; MR58:22131.
2. Isometries of matrix algebras. (with Marvin Marcus). Journal of Algebra, 47(1977) 180-189; MR56:3039.
3. Decomposable tensors as a quadratic variety. Proceedings of the American Mathematical Society, 64(1977) 227-230; MR57:12542.
4. Off-diagonal elements of normal matrices. Journal of Research of the National Bureau of Standards, 81B(1977) 41-44; MR56:15679.
5. The invariance of partial isometries. Indiana University Mathematics Journal, 28(1979) 445-449; MR80b:15036.
6. Certain isometries of rectangular complex matrices. Linear Algebra and its Applications, 29(1980) 161-171; MR81d:15007.
*7. Scheduling Spacelab experiments (with Frank Mathis). NASA Contractors Report 161511 (1980).
**8. Character derived matrix inequalities. Proceedings, Conference on Schur Functors, Torun, 1980.
9. Characterizations of sign patterns of inverse-positive matrices (with Miroslav Fiedler).Linear Algebra and its Applications, 40(1981) 237-245. MR82i:15030.
10. Spectral radius and seminorms in finite dimensional algebras II (with Peter Johnson). Colloquium Mathematicum, 46(1982) 85-88; MR84j:46079.
11. Permutation matrix groups with prescribed sum (with Dean Hoffman and J.R. Wall). Linear Algebra and its Applications, 45(1982) 29-34; MR83f:15007.
*12. A scheduling algorithm for Spacelab telescope observations. NASA Contractors Report 162051 (1982).
13. Positive definite completions of partial Hermitian matrices (with C.R. Johnson, E. Sa, and H. Wolkowicz). Linear Algebra and its Applications 58(1984) 109-124; MR85d:05169.
14. Improving Hadamard's inequality (with C.R. Johnson, E. Sa, and H. Wolkowicz). Linear and Multilinear Algebra 14(1984) 305-322; MR87b:15024.
15. An algorithm for the second immanant (with R. Merris). Mathematics of Computation 43:168(1984) 589-591; MR85i:15008.
*16. A note on maximizing the permanent of a positive definite Hermitian matrix (with C.R.Johnson, E. Sa, and H. Wolkowicz). Emory University, Department of Mathematics Technical Report Number 15,
**17. Computation of immanants. Linear Algebra and its Applications 68 (1985) 252-254.
18. An inequality for the second immanant. Linear and Multilinear Algebra 18 (1985) 147-152. MR 87e:20026.
19. A note on maximizing the permanent of a positive definite Hermitian matrix, given the eigenvalues (with C.R. Johnson, E. Sa, and H. Wolkowicz). Linear and Multilinear Algebra 19 (1986) 389-394.
MR 88b:15006.
20. A Hadamard dominance theorem for a class of immanants (with R. Merris and W. Watkins). Linear and Multilinear Algebra 19 (1986) 167-171. MR 87g:15008.
21. Markov chains and tensor multiplications. Journal of Algebra 109 (1987) 14-24. MR 88k:15034.
22. A Fischer inequality for the second immanant (with R. Merris). Linear Algebra and its Applications 87 (1987) 77-83. MR 88d:15009.
23. Normal matrices (with C.R. Johnson, E. Sa, and H. Wolkowicz). Linear Algebra and its Applications 87 (1987) 213-225. MR 88a:15045.
24. A Hadamard inequality for the second immanant (with R. Merris). Journal of Algebra 111 (1987) 343-346.MR 88k:15008.
25. A Hadamard inequality for the third and fourth immanants (with R. Merris). Linear and Multilinear Algebra 21 (1987) 201-209. MR 89k:15038.
*26. Current Trends in Matrix Theory. (Conference proceedings, co-edited with Frank Uhlig), pp. 1-435. Elsevier North Holland, 1987.
27. Conjectures on permanents (with R. Merris). Linear and Multilinear Algebra 21 (1987) 419-427. MR 89b:15006.
28. Algebraic connectivity of trees (with R. Merris). Czechoslovak Mathematica Journal 37 (1987) 660-670. MR 89a:05053.
29. A bound for the complexity of a simple graph (with R. Merris). Journal of Discrete Mathematics 69 (1988) 97-99. MR 89j:05066.
30. Cones in the group algebra related to Schur's determinantal inequality (with R. Merris and W. Watkins). Rocky Mountain Journal of Mathematics 18 (1988) 137-146. MR 89e:15011.
**31. Factorization of bipartite graphs. Linear Algebra and its Applications 104 (1988) 216-219.
32. Cutpoints, lobes, and the spectra of graphs (with R. Merris). Portugaliae Mathematica 45 (1988) 1-8. MR 89f:05118
33. Bipartite completely positive matrices (with A. Berman). Proceedings of the Cambridge Philosophical Society 103 (1988) 269-276. MR 88k:15022.
34. Permanental inequalities for correlation matrices (with S. Pierce). SIAM Journal of Matrix Analysis 9 (1988). 194-201. MR 89i:15009.
**35. Eigenvalues and eigenvectors of the Laplacian. Linear Algebra and its Applications 107 (1988) 332-335.
36. The Laplacian spectrum of a graph (with R. Merris and V.S. Sunder). SIAM Journal of Matrix Analysis 11 (1990) 218-238.
37. Ordering trees by algebraic connectivity (with R. Merris). Graphs and Combinatorics 6 (1990) 53-61.
38. Extremal bipartite matrices (with S. Pierce). Linear Algebra and its Applications 131 (1990) 39-50.
39. Extremal correlation matrices (with S. Pierce and W. Watkins). Linear Algebra and its Applications 134 (1990) 63-70.
40. Coalescence, majorization, edge valuations, and the Laplacian spectra of graphs (with R. Merris). Linear and Multilinear Algebra 27 (1990) 139-146.
41. Large eigenvalues of the Laplacian (with G. Zimmermann). Linear and Multilinear Algebra 28 (1990) 45-47.
42. Extremal positive-semidefinite doubly stochastic matrices (with S. Pierce). Linear Algebra and its Applications 150 (1991) 107-117.
43. On the geometry and Laplacian of a graph. Linear Algebra and its Applications 150 (1991) 167-178.
**44. Extreme doubly nonnegative matrices with prescribed row sums. Linear Algebra and its Applications 162(1992) 754-757.
45. Nonchordal positive semidefinite stochastic matrices (with R. Loewy and S. Pierce). Linear and Multilinear Algebra 32 (1992) 107-113.
46. Laplacian unimodular equivalence of graphs (with R. Merris and W. Watkins). Combinatorial and Graph-Theoretic Problems in Linear Algebra, IMA Volumes in Mathematics and its Applications, vol.
50, Springer-Verlag 1993 (edited by R. Brualdi, S. Friedland and V. Klee) pp. 175-180.
47. The Laplacian spectrum of a graph II (with R. Merris). SIAM Journal of Discrete Mathematics 7 (1994) 221-229.
48. A biography of Marvin Marcus. Linear Algebra and its Applications 201 (1994) 1-20.
49. Spectral bounds derived from quadratic forms on decomposable tensors (with J. Ross, S. Pierce and C.-K. Li). Linear Algebra and its Applications 201 (1994) 181-198.
50. Eigenvalues and the degree sequences of graphs. Linear and Multilinear Algebra 39 (1995) 133-136.
51. Decomposably Nonnegative Matrices (with S. Pierce). Linear and Multilinear Algebra, 41 (1996) 63-79.
* Not in refereed journals
** Part of a conference report. | {"url":"http://www-rohan.sdsu.edu/~bobgrone/B_Grone.htm","timestamp":"2014-04-19T11:58:32Z","content_type":null,"content_length":"57593","record_id":"<urn:uuid:c917d226-3640-4b9f-bc5c-8f1a0ce6e71b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - cumulative distribution function
For a function F(z) to qualify as a CDF,
• The function must be monotonic: F(z) ≥ F(a) for all z>a
• The function must be zero at the low end of the range: F(z[min]) = 0
• The function must be one at the high end of the range: F(z[max]) = 1.
Given that, does your result look like a CDF?
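As a side note (my own numerical illustration, not part of the thread), these three conditions can be checked for a candidate CDF; here F(z) = 1 - exp(-z) on z >= 0 is used purely as an example:

```python
import numpy as np

def F(z):
    # candidate CDF (exponential distribution), used only as an illustration
    return 1.0 - np.exp(-z)

z = np.linspace(0.0, 50.0, 10_000)   # the range of z here is [0, infinity)
vals = F(z)

assert np.all(np.diff(vals) >= 0)    # monotonic: F(z) >= F(a) for all z > a
assert np.isclose(F(0.0), 0.0)       # zero at the low end of the range
assert np.isclose(F(50.0), 1.0)      # approaches one at the high end
print("looks like a CDF")
```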
A couple of hints:
1. Your limits of integration aren't right.
2. What values can z take on? Can it be a large negative number? A large positive number? Zero? | {"url":"http://www.physicsforums.com/showpost.php?p=3019894&postcount=2","timestamp":"2014-04-21T12:25:50Z","content_type":null,"content_length":"7236","record_id":"<urn:uuid:575352bd-ab07-4d2c-a269-56d9c3f91595>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Given any set $X$, a sequence in $X$ is a function $f\colon\mathbb{N}\to X$ from the set of natural numbers to $X$. Sequences are usually written with subscript notation: $x_{0},x_{1},x_{2}\dots$,
instead of $f(0),f(1),f(2)\dots$.
generalized sequence, transfinite sequence, finite sequence
A high school book from 1964, written by Alojzij Vadnal and revised by the Slovene mathematician Ivan Vidav, gives this simple definition of a sequence:
A sequence is any set of numbers arranged so that one number comes first, one second, one third, and for every number of the set it is possible to say at which place of the sequence it stands.
My question is: can't a function and a number set be the same thing? Instead of the functional notation f(0), f(1), f(2), ... we use {x_0, x_1, x_2, ...}, which on the other hand shows the structure of a set.
Am I missing something here? I once made a similar "ambiguity" when I said: if we *do this and that*, then we get a set of integer sequences. I should simply have said: then we get integer (integral)
sequences, because sequences are already sets. Best regards.
I'm just wondering: has anyone here studied the Hofstadter sequences? (Such as the Q sequence: 1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12, 12, 12, 12, 16, 14, 14, ...
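For anyone curious, the Q sequence mentioned here follows the recurrence Q(1) = Q(2) = 1 and Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)) for n > 2 (from Hofstadter's Gödel, Escher, Bach); a short Python sketch reproduces the listed terms:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Q(n):
    # Hofstadter's Q sequence: Q(1) = Q(2) = 1,
    # Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)) for n > 2
    if n <= 2:
        return 1
    return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

print([Q(n) for n in range(1, 16)])
# -> [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10]
```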
Added: 2001-10-19 - 17:23 | {"url":"http://planetmath.org/sequence","timestamp":"2014-04-18T05:33:58Z","content_type":null,"content_length":"48565","record_id":"<urn:uuid:55d6f379-1004-428e-8aea-b1ca127eac51>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weight-Balanced Trees
Kazu Yamamoto
created: 2011.1.25
modified: 2011.2.1
Weight-Balanced Trees (or binary search trees of bounded balance) are binary search trees which can be used to implement sets and finite maps (key-value stores). This structure is widely used in
functional languages. For instance, Haskell uses it for Data.Set and Data.Map. MIT/GNU Scheme and slib provide wttree.
The original weight-balanced tree was invented by Nievergelt and Reingold in 1972. Its weight is "size + 1". The balance algorithm has two parameters, delta and gamma. They defined <delta,gamma> = <1
+ sqrt 2, sqrt 2>. This algorithm with these parameters does not have bugs. That is, the balance of a tree is always maintained. However, since these parameters are not integers, the implementation cost is high.
• Nievergelt J. and Reingold E.M., "Binary search trees of bounded balance", Proceedings of the fourth annual ACM symposium on Theory of computing, pp 137--142, 1972
In 1991, Adams created a variant of weight balanced tree for the programming competition organized by Appel. Its weight is purely "size". He defined (delta,gamma) = (3 or larger, 1) in his SML
implementations and papers. The pair (delta,gamma) of "wttree.scm" is (5,1).
• Adams S., "Efficient sets: a balancing act, Journal of Functional Programming", Vol 3, No 4, pp 553--562, 1993
While Campbell was re-implementing "wttree.scm" in 2010, he found a balancing bug. In some cases, the delete operation on a given balanced tree breaks its balance. He also found that Data.Map of Haskell
is buggy and made a bug report. According to our tests, only (3,2) and (4,2) are valid. So, all parameter choices by Adams are buggy.
We (Mr. Hirai and I) mathematically proved in Coq that <3,2> is the one and only integer solution for the original weight-balanced tree (not Adams's variant).
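For intuition, the balance invariant itself is tiny. The sketch below expresses it in Python using the original "size + 1" weight and the parameter delta = 3 (the representation and names are mine, not taken from any of the implementations above):

```python
from collections import namedtuple

Node = namedtuple("Node", "size")   # minimal stand-in for a tree node
DELTA = 3                           # the proved-valid integer delta (gamma = 2)

def weight(t):
    # original Nievergelt-Reingold weight: size + 1 (an empty tree weighs 1)
    return (t.size if t is not None else 0) + 1

def balanced(left, right):
    # neither subtree may weigh more than DELTA times its sibling
    return (weight(left) <= DELTA * weight(right)
            and weight(right) <= DELTA * weight(left))

print(balanced(Node(2), None))   # weights 3 vs 1 -> True  (3 <= 3 * 1)
print(balanced(Node(3), None))   # weights 4 vs 1 -> False (4 >  3 * 1)
```

In a real implementation this predicate must hold at every node, and gamma decides between single and double rotations when an insertion or deletion violates it.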
• In Dec 2010, slib incorporated our patch. (wttree.scm in slib 3b3 or earlier has this bug.)
• In Jan 2011, MIT/GNU Scheme incorporated our patch. (wttree.scm in MIT/GNU Scheme 9.0.1 or earlier has this bug.)
• Data.Map in the container package 0.3.0.0 or earlier has this bug. We would propose to incorporate our patch. | {"url":"http://mew.org/~kazu/proj/weight-balanced-tree/","timestamp":"2014-04-16T10:09:48Z","content_type":null,"content_length":"5194","record_id":"<urn:uuid:6779fb12-d7b3-4aef-8dc0-3ee1a5392f99>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tyngsboro Algebra 1 Tutor
Find a Tyngsboro Algebra 1 Tutor
...I have helped students improve their math skills in middle and high school, and I can help them become better organized in the classroom, working on note-taking, test-taking, and time
management skills as well. I've worked with students specifically in the following subjects: Pre-Algebra, Algebra 1, Geometry, U.S. History, World History, and European History.
29 Subjects: including algebra 1, reading, English, writing
...While this was my first time teaching, I am very familiar with independent school education and have extensive experience as a tutor. I was a student at Worcester Academy from 7th grade until
graduating in 2008 and over that time became familiar with the richness of the independent school experience. During my junior and senior years at St.
7 Subjects: including algebra 1, chemistry, calculus, physics
...I collected a lot of Chinese children's songs, stories and Chinese cartoons which make for good supplemental materials for Chinese learning. You will have the opportunity to choose the method
your feel comfortable with: either conversation of daily life or a fun learning process with songs, game...
5 Subjects: including algebra 1, physics, algebra 2, precalculus
...I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. I can tutor
any subject area from elementary math to college level.I got an A+ in Discrete Mathematics in College and an A in the graduate course 6.431 Applied Probability at MIT last year.
16 Subjects: including algebra 1, French, elementary math, physics
...I am patient, enthusiastic about learning, and will work very hard with you to achieve your academic goals. JoannaI have three years' experience tutoring high school students in biology. I have
extensive coursework and research experience in Biology and am passionate about the field.
10 Subjects: including algebra 1, chemistry, geometry, biology | {"url":"http://www.purplemath.com/tyngsboro_ma_algebra_1_tutors.php","timestamp":"2014-04-18T16:23:24Z","content_type":null,"content_length":"24136","record_id":"<urn:uuid:1e8efaf0-9bda-47cb-9a61-f03818137ff3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding the Role of Linearity in Vibration Analysis - MAINTENANCE TECHNOLOGY
Understanding the Role of Linearity in Vibration Analysis
Introductory overview illustrates how awareness of the behavior of linear and nonlinear systems provides fuller understanding of machine health when analyzing vibration data.
The analysis of a vibration spectrum of a machine in the context of linearity and nonlinearity provides an additional basis for understanding why spectra look as they do and how the appearance of a
spectrum relates to machine health. Here is an overview of the concept, augmented with straightforward illustrations and examples.
Linear systems
If a linear system is thought of as a black box, it can be said that what comes out of the box is directly proportional to what goes in. This concept is called proportionality. In Fig 1. we can see
that the output motion is directly related to the input force. If the input force increases, the resulting motion also increases proportionally (see Figs. 1-8).
Another quality of linear systems is superposition as demonstrated in Fig. 2. Superposition means that if we have two or more input forces, the output motion will be proportional to the sum of the
input forces. In other words, nothing new is created. If we add a whole bunch of forces at the input, the output motion will still be directly proportional to the sum of those forces.
Nonlinear systems
Consider a dense metal cube sitting on ice. If you push the cube, it will slide proportionally to how hard you push it. This is a linear response. Now consider that the cube is made out of gelatin.
When you give the gelatin a push it may slide a bit, but it also will wiggle and wobble. This is an example of a nonlinear response. The gelatin does not move only in the direction of the push, it
also wiggles around in different directions. Therefore, we can say that the output motion is not directly proportional to the input force and therefore the gelatin block is nonlinear (Fig. 3).
Nonlinear systems also do not follow the law of superposition. This means that the output response is not proportional to the sum of the input forces. In a nonlinear system, the inputs combine with
each other and produce new things in the output that were not present in the input (Fig. 4).
When one plays a stereo at a relatively low volume, the music comes out clearly. If one raises the volume slightly, the music comes out of the speaker more loudly, but still sounds good. This is a
linear response.
We reach a point, however, where if we make the stereo loud enough, the music becomes distorted, and we begin to hear new sounds that were not recorded on the CD. This is a nonlinear response. The
key again to understanding when something is nonlinear is that the output contains things that were not present in the input.
Linearity and nonlinearity in vibration
Now that we have described the basic concepts of linearity and nonlinearity, it is time to discuss them in terms of vibration signals. Simple mass-spring systems as shown in figures 5 and 6 will be
used for this discussion.
An ideal mass-spring system (Fig. 5) can be described by the equation
F = kX
where F is the input force, k is the spring stiffness, and X is the resulting displacement of the spring. This is a linear system. If we input a sinusoidal force, the resulting displacement is also
sinusoidal and proportional to the input.
If the stiffness of the spring changes as it is stretched and compressed (Fig. 6), the system is nonlinear. When we input a sinusoidal force, the resulting displacement is not sinusoidal, and thus
this is a nonlinear system in which we get out something that looks different from what we put in.
If we remember the basic rules of vibration and the Fast Fourier Transform, the displacement sine wave in Fig. 5 will produce a single peak in a vibration spectrum. The displacement wave in Fig. 6
will produce a peak in the spectrum with harmonics or multiples. This brings up another important point—the harmonics in this case are the result of nonlinearity.
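This effect is easy to reproduce numerically. The sketch below passes a 10 Hz sine through an idealized nonlinear spring; the cubic stiffness term is my own illustrative model (not taken from the article) of a stiffness that changes with deflection, and the FFT of the output shows the extra harmonic:

```python
import numpy as np

fs = 1000.0                          # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)          # 1 s of data -> 1 Hz frequency bins
force = np.sin(2 * np.pi * 10 * t)   # sinusoidal input force at 10 Hz

linear = force                       # F = kX response with k = 1
nonlinear = force + 0.3 * force**3   # stiffness grows as the spring deflects

def active_bins(x, thresh=0.01):
    # frequency bins (equal to Hz here) whose normalized amplitude exceeds thresh
    return np.flatnonzero(np.abs(np.fft.rfft(x)) / len(x) > thresh)

print(active_bins(linear))      # -> [10]      a single peak
print(active_bins(nonlinear))   # -> [10 30]   fundamental plus a 3rd harmonic
```

The output contains a 30 Hz component that was never present in the input, which is exactly the signature of nonlinearity described above.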
Machinery vibration
When we look at the vibration spectra for a machine in the context of linear and nonlinear systems, we can make a very general statement that as machines deteriorate and develop faults they become
less linear in their responses. We also can say that many machine faults create nonlinearity. Therefore, also in very general terms, we can expect the spectra from a healthy machine to be relatively
simple compared with the spectra from a machine with faults. If we consider mechanical looseness as a common machine problem, we can demonstrate this.
When the machine is not experiencing looseness and is in good health, its spectra may look like that in Fig. 7, which shows the shaft rate peak (the big one on the left) and a couple of harmonics of
the shaft speed. The same machine with a looseness problem (Fig. 8) might show considerably more shaft rate harmonics at higher amplitudes. This is very similar to the example of the two mass-spring
systems in that when the mass-spring system was linear, only one peak was produced in the spectrum, i.e., the output looked like the input. When the mass-spring system was nonlinear, the output
waveform was not sinusoidal and therefore produced harmonics in the spectrum.
If we take a step back, we can consider that the mechanical input forces in a simple rotating machine are coming from the rotating shaft. If the shaft is rotating perfectly (i.e., there is no
looseness) and the response of the machine structure is perfectly linear, then we would expect to see only a single peak in our spectrum corresponding to the shaft rate. In other words, the output
would look like the input. No machines are perfect, however, and shafts do not typically rotate perfectly around their centers; this is why we expect to see some harmonics in machine spectra (Fig.
7). However, as the machine becomes more nonlinear, due to a condition such as looseness, foundation cracks, or broken mounting bolts, more harmonics with higher amplitudes appear (Fig. 8).
Note that if one views a spectrum with a linear amplitude scale, one may not see the harmonic content of the spectrum if the harmonics are much smaller in amplitude than the shaft rate peak. If one
views the data using a logarithmic amplitude scale, more harmonic content will be visible on the graph.
Sidebands in a spectrum are another result of nonlinearity. Sidebands are produced by amplitude modulation.
The top waveform in Fig. 9 is an example of a modulated waveform. What we have here is a wave that repeats itself with a frequency X; however, the amplitude of this wave goes up and down at the
frequency Y of the wave on the bottom of the diagram. The bottom wave is simply included to demonstrate the frequency at which the amplitude of the top wave goes up and down.
If one wishes to visualize this in mechanical terms, consider a set of gears where one gear is not centered on its shaft. We will say that the noncentered gear has 32 teeth. In one revolution of the
noncentered gear we will see 32 tooth impacts. This would relate to frequency X. Since this gear is not centered on its shaft, the amplitude of the tooth impacts will go up and down as the gear moves
closer and farther away from the second gear. It will take one revolution of the noncentered gear for the level of the impacts to go from maximum to minimum and back to maximum again. So, the
frequency with which the levels of the impacts change (or are modulated) is the rotation rate of the noncentered gear. This would relate to frequency Y in Fig. 9.
The spectrum of these gears (Fig. 10) shows a peak at frequency X with one peak on either side of it Y distance away. Stated another way, we will see a peak at frequency X, another at X+Y, and a
third at X-Y. The peaks at X+Y and X-Y are called sidebands.
Why is this system nonlinear? Because X+Y and X-Y are not found anywhere in the input signal but they do appear in the output. The only thing in the input is X or the rate of the teeth impacting.
These impacts go up and down in amplitude at a rate Y, but there is certainly no X+Y or X-Y in the input.
The off-centered gear also may cause frequency modulation because the effective radius of the off-center gear changes as it moves closer and farther from the other gear. As the effective radius
changes, the rate of tooth contact speeds up and then slows down repetitively. Frequency modulation is similar to amplitude modulation in that it also results in sidebands. In amplitude modulation,
the amplitude of the impacts goes up and down in level repeatedly. In frequency modulation, the rate of impacts gets faster and slower repetitively. In this example, both would result in the same
pattern in the spectrum.
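The sideband pattern can likewise be sketched numerically. Below, a 100 Hz carrier (standing in for the tooth-impact rate X) is amplitude modulated at 7 Hz (standing in for the shaft rate Y); the frequencies and code are illustrative choices of mine, not values from the article:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)    # 1 s of data -> 1 Hz frequency bins
X, Y = 100.0, 7.0              # carrier rate X, modulating rate Y

signal = (1 + 0.5 * np.cos(2 * np.pi * Y * t)) * np.sin(2 * np.pi * X * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
peaks = np.flatnonzero(spectrum > 0.05)
print(peaks)                   # -> [ 93 100 107], i.e. X-Y, X, and X+Y
```

The peaks at 93 Hz and 107 Hz (X-Y and X+Y) appear even though neither frequency exists in the input, matching the description of sidebands above.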
Nonsynchronous tones
Rolling element bearing wear, gear defects, and motor-bar defects will produce sidebands. Rolling element bearings also will create nonsynchronous tones. These are new peaks that are not exact
multiples (harmonics) of the shaft rate.
To understand why rolling element bearings create nonsynchronous tones and sidebands, consider the case of a horizontal machine with an inner-race bearing fault. As the shaft and inner race spin, a
certain number of balls will impact the fault on the inner race and will produce a peak in the spectrum equal to the number of impacts per revolution of the shaft. This peak is called a bearing tone.
The number of impacts will almost never be an integral amount. In other words, there will be 3.1 or 4.7 impacts per revolution, but rarely exactly 3 or 5 impacts. Thus, the peaks will not be direct
multiples of the shaft rate and are therefore termed nonsynchronous. The higher peak marked with a circle in Fig. 11 is an example of a bearing tone at 3.1x the shaft rate.
Considering this example further, we also can see that the weight of the shaft will cause the impacts against the fault to be greater in amplitude when the fault is below the shaft. As the fault on
the inner race rotates to the top of the shaft, the impacts will be smaller because there is less weight (load) on the fault. In one revolution of the shaft the fault will travel around one time—into
the load zone, out of the load zone, and back into the load zone. Therefore, the frequency of the change of amplitude in this case is equal to the shaft rate and this also will coincide with the
spacing of the sidebands around the bearing tone (the peaks with the arrows in Fig. 11).
A similar phenomenon occurs if there is a fault on a ball or roller. We will see a bearing tone at a frequency equal to the number of impacts the fault on the ball makes with the races in one
revolution of the shaft. This peak also will be nonsynchronous and is called a bearing tone. The fault on the ball or roller also travels in and out of the load zone; however, it travels at the cage
rate, not the shaft rate. Therefore, the sideband spacing around the bearing tone will be equal to the cage rate, which is usually in the neighborhood of 0.3x the shaft rate.
Vibration trending recommended
The concept of linear and nonlinear behavior gives us another way to think about a vibration spectrum and how its appearance relates to machine faults. Healthy machines should respond more linearly
than machines with faults; in other words, as machines develop faults they likely will respond less linearly. As they become less linear we begin to see more and larger harmonics and/or sidebands in
the spectra.
Because we may not know all of the details about the design of a machine or how its spectra will appear when it is healthy, it is still best to trend information over time. Look for more and larger
harmonics and new peaks that were not there before as an indication that the health of the machine is deteriorating. MT
Alan Friedman has worked in software development, expert system development, data analysis, training, and installation of predictive maintenance programs at DLI Engineering, 253 Winslow Way West,
Bainbridge Island, WA 98110; (206) 842-7656 . The author wishes to thank Glenn White who contributed to this article. | {"url":"http://www.mt-online.com/index.php?option=com_content&view=article&id=1176:understanding-the-role-of-linearity-in-vibration-analysis&catid=206:september2003&directory=90","timestamp":"2014-04-16T13:22:56Z","content_type":null,"content_length":"43383","record_id":"<urn:uuid:40e9efeb-fb18-44f7-ab19-93ca9c2a4fb3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
LU Factorization
Factor square matrix into lower and upper triangular components
The LU Factorization block factors a row-permuted version of the square input matrix A as A[p] = L*U, where L is a unit-lower triangular matrix, U is an upper triangular matrix, and A[p] contains the
rows of A permuted as indicated by the permutation index vector P. The block uses the pivot matrix A[p] instead of the exact input matrix A because it improves the numerical accuracy of the
factorization. You can determine the singularity of the input matrix A by enabling the optional output port S. When A is singular, the block outputs a 1 at port S; when A is nonsingular, it outputs a
To improve efficiency, the output of the LU Factorization block at port LU is a composite matrix containing both the lower triangle elements of L and the upper triangle elements of U. Thus, the
output is in a different format than the output of the MATLAB^® lu function, which returns L and U as separate matrices. To convert the output from the block's LU port to separate L and U matrices,
use the following code:
L = tril(LU,-1)+eye(size(LU));
U = triu(LU);
If you compare the results produced by these equations to the actual output of the MATLAB lu function, you may see slightly different values. These differences are due to rounding error, and are
negligible.
Fixed-Point Data Types
The following diagram shows the data types used within the LU Factorization block for fixed-point signals.
You can set the product output, accumulator, and output data types in the block dialog as discussed below.
The output of the multiplier is in the product output data type when the input is real. When the input is complex, the result of the multiplication is in the accumulator data type. For details on the
complex multiplication performed, see Multiplication Data Types.
The row-pivoted matrix A[p] and permutation index vector P computed by the block are shown below for 3-by-3 input matrix A.
The LU output is a composite matrix whose lower subtriangle forms L and whose upper triangle forms U.
See Factor a Matrix into Upper and Lower Submatrices Using the LU Factorization Block in the DSP System Toolbox™ User's Guide for another example using the LU Factorization block.
Dialog Box
The Main pane of the LU Factorization block dialog appears as follows.
Select to output the singularity of the input at port S, which outputs Boolean data type values of 1 or 0. An output of 1 indicates that the current input is singular, and an output of 0
indicates the current input is nonsingular.
The Data Types pane of the LU Factorization block dialog appears as follows.
Select the rounding mode for fixed-point operations.
Select the overflow mode for fixed-point operations.
Specify the product output data type. See Fixed-Point Data Types and Multiplication Data Types for illustrations depicting the use of the product output data type in this block. You can set it
● A rule that inherits a data type, for example, Inherit: Inherit via internal rule
● An expression that evaluates to a valid data type, for example, fixdt(1,16,0)
Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Product output data type parameter.
See Specify Data Types Using Data Type Assistant for more information.
Specify the accumulator data type. See Fixed-Point Data Types for illustrations depicting the use of the accumulator data type in this block. You can set this parameter to:
● A rule that inherits a data type, for example, Inherit: Inherit via internal rule
● An expression that evaluates to a valid data type, for example, fixdt(1,16,0)
Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Accumulator data type parameter.
See Specify Data Types Using Data Type Assistant for more information.
Specify the output data type. See Fixed-Point Data Types for illustrations depicting the use of the output data type in this block. You can set it to:
● A rule that inherits a data type, for example, Inherit: Same as input
● An expression that evaluates to a valid data type, for example, fixdt(1,16,0)
Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Output data type parameter.
See Specify Block Output Data Types for more information.
Select this parameter to prevent the fixed-point tools from overriding the data types you specify on the block mask.
Golub, G. H., and C. F. Van Loan. Matrix Computations. 3rd ed. Baltimore, MD: Johns Hopkins University Press, 1996.
Supported Data Types
┃ Port │ Supported Data Types ┃
┃ A │ ● Double-precision floating point ┃
┃ │ ● Single-precision floating point ┃
┃ │ ● Fixed point (signed only) ┃
┃ │ ● 8-, 16-, and 32-bit signed integers ┃
┃ LU │ ● Double-precision floating point ┃
┃ │ ● Single-precision floating point ┃
┃ │ ● Fixed point (signed only) ┃
┃ │ ● 8-, 16-, and 32-bit signed integers ┃
┃ P │ ● Double-precision floating point ┃
┃ │ ● Single-precision floating point ┃
┃ │ ● 32-bit unsigned integers ┃
┃ S │ ● Boolean ┃
See Also
┃ Autocorrelation LPC │ DSP System Toolbox ┃
┃ Cholesky Factorization │ DSP System Toolbox ┃
┃ LDL Factorization │ DSP System Toolbox ┃
┃ LU Inverse │ DSP System Toolbox ┃
┃ LU Solver │ DSP System Toolbox ┃
┃ Permute Matrix │ DSP System Toolbox ┃
┃ QR Factorization │ DSP System Toolbox ┃
┃ lu │ MATLAB ┃
See Matrix Factorizations for related information. | {"url":"http://www.mathworks.com/help/dsp/ref/lufactorization.html?nocookie=true","timestamp":"2014-04-18T03:36:28Z","content_type":null,"content_length":"48995","record_id":"<urn:uuid:93cde795-e3fb-440a-bb47-4e7084e75e0c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] Numpy/Scipy: Avoiding nested loops to operate on matrix-valued images
tyldurd dhondt.olivier@gmail....
Thu Mar 15 03:59:28 CDT 2012
I am a beginner at python and numpy and I need to compute the matrix
logarithm for each "pixel" (i.e. x,y position) of a matrix-valued image of
dimension MxNx3x3. 3x3 is the dimensions of the matrix at each pixel.
The function I have written so far is the following:
import numpy as np
from scipy import linalg

def logm_img(im):
    dimx = im.shape[0]
    dimy = im.shape[1]
    res = np.zeros_like(im)
    for x in range(dimx):
        for y in range(dimy):
            # linalg.logm accepts plain ndarrays, so asmatrix is not needed
            res[x, y, :, :] = linalg.logm(im[x, y, :, :])
    return res
Is it ok? Is there a way to avoid the two nested loops?
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2012-March/031851.html","timestamp":"2014-04-17T09:49:07Z","content_type":null,"content_length":"3625","record_id":"<urn:uuid:c397b646-77a4-4d9c-bbb1-a93930ab15b8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
computational math vs. computer science
i want to get into computers but i want to major in math
some schools i'm interested in offer things like computational math
for example,
a school i might attend offers a few math majors
pure math
applied math
math-computer science
and math-scientific computation
the latter sounds most appealing to me, but would it be a competitive major in the computer field? or would the computer science one seem a lot more normal/acceptable?
do employers typically prefer plain computer science majors to math majors? who gets paid more?
what sort of jobs would something like computational math get me? | {"url":"http://www.physicsforums.com/showthread.php?t=244466","timestamp":"2014-04-16T16:09:59Z","content_type":null,"content_length":"22336","record_id":"<urn:uuid:17537b30-3344-40c2-b101-a5c8cdf99eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sound Power, Intensity and Pressure
An introduction to decibel, sound power, sound intensity and sound pressure
The Decibel
The decibel is a logarithmic unit used to describe the ratio of two signal levels - power, sound pressure, voltage or intensity.
The decibel can be expressed as:
decibel = 10 log(P / P[ref]) (1)
P = signal power (W)
P[ref] = reference power (W)
Sound Power Level
Sound power is the energy rate - the energy of sound per unit of time (J/s, W in SI-units) from a sound source.
Sound power can more practically be expressed as a relation to the threshold of hearing - 10^-12 W - in a logarithmic scale named Sound Power Level - L[w]:
L[w] = 10 log (N / N[o]) (2)
L[w] = Sound Power Level in Decibel (dB)
N = sound power (W)
N[o] = 10^-12 - reference sound power (W)
• The lowest sound level that people of excellent hearing can discern has an acoustic sound power about 10^-12 W, 0 dB
• The loudest sound generally encountered is that of a jet aircraft with a sound power of 10^5 W, 170 dB.
Sound Intensity
Sound Intensity is the Acoustic or Sound Power (W) per unit area. The SI-units for Sound Intensity are W/m^2.
The Sound Intensity Level can be expressed as:
L[I] = 10 log(I / I[ref]) (3)
L[I] = sound intensity level (dB)
I = sound intensity (W/m^2)
I[ref] = 10^-12 - reference sound intensity (W/m^2)
Sound Pressure Level
The Sound Pressure is the force (N) of sound on a surface area (m^2) perpendicular to the direction of the sound. The SI-units for the Sound Pressure are N/m^2 or Pa.
The Sound Pressure Level:
L[p] = 10 log(p^2 / p[ref]^2) = 10 log(p / p[ref])^2 = 20 log (p / p[ref]) (4)
L[p] = sound pressure level (dB)
p = sound pressure (Pa)
p[ref] = 2 10^-5 - reference sound pressure (Pa)
• If the pressure is doubled, the sound pressure level is increased with 6 dB (20 log (2))
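The level formulas (2)-(4) translate directly into code; the reference values below are the ones given in the text:

```python
import math

REF_POWER = 1e-12     # W, threshold of hearing
REF_PRESSURE = 2e-5   # Pa, reference sound pressure

def sound_power_level(N):
    # Equation (2): L_w = 10 log(N / N_o)
    return 10 * math.log10(N / REF_POWER)

def sound_pressure_level(p):
    # Equation (4): L_p = 20 log(p / p_ref)
    return 20 * math.log10(p / REF_PRESSURE)
```

For example, `sound_power_level(1e5)` gives 170 dB, matching the jet-aircraft figure above, and doubling a pressure adds 20 log(2) ≈ 6 dB.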
Related Documents | {"url":"http://www.engineeringtoolbox.com/sound-power-intensity-pressure-d_57.html","timestamp":"2014-04-18T16:30:13Z","content_type":null,"content_length":"25984","record_id":"<urn:uuid:91349add-fd6e-4d02-8b2a-7f1696427f65>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Iso-lines to 3D Surface Generation
I have a set of isolines points ( or contour points) such as this:
Each point has their own respective X, Y and Z. Since they are isolines, that means that all of the points will have a unique X-Y pair, i.e., they will be no two points with the same X and Y but
different Z.
Now, is there any algorithm, or any software packages ( either in C# or matlab) that I can use to interpolate this isoline points into full 3D surface points?
P/S: I am interested in more than just the final output; I want to get the interpolated lines myself so that I can plot the surface myself.
1 Answer
See these papers:
W. Barrett, E. Mortensen, and D. Taylor. An image space algorithm for morphological contour interpolation. In Proceedings of Graphics Interface'94, pages 16-24, 1994.
M. B. Gousie and W. R. Franklin. Converting Elevation Contours to a Grid. In SDH’98, Vancouver, 1998. http://www.ecse.rpi.edu/Homepages/wrf/research/p/contour.pdf.
If you can read (or guess) Portuguese, see
Not the answer you're looking for? Browse other questions tagged geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/4063/iso-lines-to-3d-surface-generation","timestamp":"2014-04-16T16:47:02Z","content_type":null,"content_length":"48524","record_id":"<urn:uuid:d80039a7-9b88-4272-abce-4e6163ea96e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics Tutors
Los Angeles, CA 90064
Yan, specialized in Statistics and Biostatistics.
...She has used SAS for 5+ years and has 5+ years of consulting/tutoring experience. She can help you with your statistics
class tutoring, statistics review in medical-related projects, coding in your SAS projects, review...
Offering 10 subjects including calculus, statistics and SAT math | {"url":"http://www.wyzant.com/Venice_CA_Mathematics_tutors.aspx","timestamp":"2014-04-18T14:53:53Z","content_type":null,"content_length":"61893","record_id":"<urn:uuid:a65e5d2c-2105-4124-9029-d4984d29d3fa>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Panorama City
Santa Clarita, CA 91350
Physics/Math tutor with emphasis in coding and astronomy
...Although I will gladly extend my knowledge base to help with any science field, I am most qualified in subjects such as: Astronomy, Coding (Matlab primarily but will teach coding concepts),
Mathematics (through to Differential Equations), and Optics. I...
Offering 10+ subjects including algebra 1, algebra 2 and calculus | {"url":"http://www.wyzant.com/geo_Panorama_City_Math_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-21T04:48:16Z","content_type":null,"content_length":"61859","record_id":"<urn:uuid:eff6c862-bba6-48d7-8ea4-efe38a191818>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial of a Simplex
March 13th 2013, 05:30 PM #1
Mar 2013
Adelaide, Australia
Polynomial of a Simplex
I am a telecom engineer and don't have a very good background in mathematics. I came across a problem where I have to find the volume of a simplex. I read some papers about calculating the volume of a
simplex, but these papers talk about a polynomial related to the simplex. I don't know how we get this polynomial from the simplex. Let's say I have a simplex in 2D (i.e. a triangle). I have its 3
vertices; how can I write a polynomial for this simplex?
Re: Polynomial of a Simplex
Wikipedia seems to help:
Simplex - Wikipedia, the free encyclopedia
Re: Polynomial of a Simplex
Thank you , but I didn't get what I am looking for. How can I write a polynomial equation from a simplex?
Re: Polynomial of a Simplex
Ah I was assuming you just wanted to find the volume of a simplex, and that article allows you to do that with no mention of polynomials. I'm not quite sure what you mean by polynomial related to
a simplex, could you link one of these papers here?
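The thread never reaches the polynomial machinery of those papers, but for the practical sub-question — computing the volume of a simplex from its vertices — the standard determinant formula suffices. A sketch (not from the thread itself):

```python
import math
import numpy as np

def simplex_volume(vertices):
    # vertices: (n+1) points in R^n, e.g. 3 points for a triangle.
    # Volume = |det(v1 - v0, ..., vn - v0)| / n!
    v = np.asarray(vertices, dtype=float)
    n = v.shape[1]
    edges = v[1:] - v[0]
    return abs(np.linalg.det(edges)) / math.factorial(n)
```

For the 2D case in the question, `simplex_volume([[0, 0], [1, 0], [0, 1]])` returns the triangle's area, 0.5.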
Jan 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/214728-polynomial-simplex.html","timestamp":"2014-04-18T04:37:44Z","content_type":null,"content_length":"36667","record_id":"<urn:uuid:79cc43a7-2ccb-48ea-a7eb-fc5cab8db260>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00172-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is weird
October 8th 2010, 07:26 AM #1
Oct 2009
This is weird
I type in \boxed{\begin{array}{c}\begin{array}{cccc}16&02&03 &13\\
04&14&15&01\end{array}\\ & \text{Figure}\ 1\end{array}} (with the math, /math parts).
Instead of getting the picture of the box with its contents, I just get the LaTex reproduced from above when I do the preview. When I first remove the /math part, do the preview, then insert it
back in, presto!, I get the picture of the box with the numbers and figure destination the way it should have happened initially.
Does anyone have a clue as to what's going on?
I type in \boxed{\begin{array}{c}\begin{array}{cccc}16&02&03 &13\\
04&14&15&01\end{array}\\ & \text{Figure}\ 1\end{array}} (with the math, /math parts).
Instead of getting the picture of the box with its contents, I just get the LaTex reproduced from above when I do the preview. When I first remove the /math part, do the preview, then insert it
back in, presto!, I get the picture of the box with the numbers and figure destination the way it should have happened initially.
Does anyone have a clue as to what's going on?
I cannot reproduce what you described. For me the LaTeX is always rendered in Preview. Are you sure it wasn't some small typo like [\math] instead of [/tex]?
Hello, wonderboy1953!
I copied what you typed in, and this was produced:
$\boxed{\begin{array}{c}\begin{array}{cccc}16&02&03 &13\\<br /> 05&11&10&08\\ 09&07&06&12\\ 04&14&15&01\end{array}\\<br /> & \text{Figure}\ 1\end{array}}$
I don't see a problem . . .
Hello Soroban (it gets weirder)
If you look at what came up with your post, you'll see three ending braces in a row, yet in the MHF preview, you'll see two (and other discrepancies).
Anyways (now I'm talking about my email draft), I have a theory. The LaTex code appears to grow stale (for lack of a better term) so that when I put it into the MHF preview post, it just comes
back, but removing and inserting the /math with its brackets appears to refresh and then I get the desired picture.
I'm going to run further tests and I'll let you know what happens.
I'm guessing that the library computer system may have a memory problem which is why removing and then reentering the end portion (h]) of /math (with the braces) seems to reset it (btw there are
other examples of computer problems that appear to be memory related from my experience).
Oct 2009 | {"url":"http://mathhelpforum.com/latex-help/158822-weird.html","timestamp":"2014-04-17T01:42:24Z","content_type":null,"content_length":"47217","record_id":"<urn:uuid:c455622f-adba-4e7b-8ca3-63cde670b378>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Limit theorems for supercritical age-dependent branching processes with neutral immigration.
(English) Zbl 1218.60073
The author considers models of branching processes with Poissonian immigration where individuals have inheritable types. New individuals singly enter the total population and start a new population
which evolves like a supercritical, homogeneous, binary Crump-Mode-Jagers process: individuals have independent and identically distributed lifetime durations (nonnecessarily exponential) during
which they give birth independently at a constant rate $b$. First, using spine decomposition, the author relaxes previously known assumptions required for almost-sure convergence of the total
population size.
In the paper, three models of structured populations (i.e., populations where individuals have certain types) are considered. In each model, the vector $\left({P}_{1},{P}_{2},\cdots \right)$ of
relative abundances of surviving families converges almost surely. In model I, all immigrants have a different type (not present in the current population). If they arrive at rate $\theta$, the
vector $\left({P}_{1},{P}_{2},\cdots \right)$ converges to the GEM distribution with parameter $\theta /b$. In models II and III, arriving types are drawn in a discrete or in a continuous spectrum,
respectively. The limits of the vectors $\left({P}_{1},{P}_{2},\cdots \right)$ are also described.
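The GEM limit mentioned in the review can be simulated by the usual stick-breaking construction (an illustrative sketch; $B_j \sim \mathrm{Beta}(1,\theta)$ is the standard parameterization of GEM$(\theta)$, with $\theta = \theta/b$ for model I):

```python
import random

def gem_sticks(theta, k):
    # Stick-breaking: P_j = B_j * prod_{i<j} (1 - B_i), B_i ~ Beta(1, theta).
    # Returns the first k relative abundances P_1, ..., P_k.
    remaining = 1.0
    abundances = []
    for _ in range(k):
        b = random.betavariate(1.0, theta)
        abundances.append(remaining * b)
        remaining *= 1.0 - b
    return abundances
```

The abundances are positive and sum to less than 1, with the remainder spread over the infinitely many smaller families not yet broken off.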
60J80 Branching processes
60G55 Point processes
92D25 Population dynamics (general)
60J85 Applications of branching processes
60F15 Strong limit theorems
92D40 Ecology | {"url":"http://zbmath.org/?q=an:1218.60073","timestamp":"2014-04-18T08:28:24Z","content_type":null,"content_length":"23295","record_id":"<urn:uuid:fc871c09-073b-42c6-932e-d9b104c18ad7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of false decoding with LDPC codes
It is well known that, given a binary linear code $C$ under maximum likelihood decoding, the probability of false decoding $P_C$ in a Binary Symmetric Channel with crossover probability $p$ (in other
words, the probability of decoding a wrong codeword) is upper bounded by:
$$P_C(p)\leq\sum_{w=1}^n c_w\sum_{i=\lceil w/2\rceil}^w\binom{w}{i}p^i(1-p)^{w-i}$$ where $c_w$ is the number of words in the code with hamming weight $w$.
This bound can be computed for codes with a known weight enumerator polynomial. I would be interested in bounding, not necessarily through the bound above, the false decoding probability for
instances of LDPC codes, that is not for ensembles, and under belief propagation.
coding-theory ca.analysis-and-odes it.information-theory co.combinatorics
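For concreteness, the bound in the question can be evaluated directly once the weight enumerator is known (this covers only the ML/union-bound part, not belief propagation):

```python
from math import ceil, comb

def ml_union_bound(weight_enum, p):
    # weight_enum: {w: c_w} for w >= 1; p: BSC crossover probability.
    # Sums the pairwise-error terms over all nonzero codeword weights.
    total = 0.0
    for w, c_w in weight_enum.items():
        pairwise = sum(comb(w, i) * p**i * (1 - p)**(w - i)
                       for i in range(ceil(w / 2), w + 1))
        total += c_w * pairwise
    return total
```

As a sanity check, the length-3 repetition code has a single nonzero codeword of weight 3, and the bound reduces to the probability of 2 or more bit flips.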
1 Answer
Generally speaking, understanding the decoding error probability of an LDPC code is a very difficult problem. Among major channels that have extensively been studied, as far as I know, binary
erasure channels are the only ones that allow for a simple combinatorial characterization of the dominating error patterns. So, a combinatorial notion analogous to the minimum distance of a
linear code for maximum likelihood decoding is not available for binary symmetric channels when it comes to iterative decoding based on belief propagation.
With that said, if you would like a bound through a strategy similar to considering minimum distance, you can analyze the decoding error probability of an LDPC code under belief propagation by
identifying the kind of error pattern that will not be corrected no matter how many iterations the decoder performs. In other words, given a code, you try to figure out the kind of set $T$ of
bits where belief propagation fails if the bits in $T$ all get flipped. Such error patterns are called trapping sets. (But different authors use different terms or the same term to mean
different things, so be careful.) Given an LDPC code $\mathcal{C}$ and crossover probability $p$ of the channel (i.e., each bit is flipped with probability $p$), if you identify all small
trapping sets of $\mathcal{C}$, in principle, you should be able to get a good estimate of the decoding error probability at the error floor region (i.e., when $p$ is small enough), assuming
a sufficiently large number of iterations are performed.
The problem is that it is extremely difficult to identify all meaningful trapping sets for a given code because there is no combinatorial characterization. There are studies on algorithms
that try to find dominating trapping sets of a given LDPC code (e.g., M. Karimi, A. H. Banihashemi, Efficient algorithm for finding dominant trapping sets of LDPC codes, IEEE Trans. Inf.
Theory, 58 (2012) 6942-6958). But generally speaking, simulations are more practical for performance estimation purposes than trying to obtain an equivalent of the weight enumerator and
compute the decoding error probability like you do with the number $c_w$ of codewords of Hamming weight $w$.
If you would like the exact decoding error probability for any crossover probability $p$ when a parity-check matrix $H$ of an LDPC code is given, there is a paper that studies this problem.
As you already know, the standard decoding algorithm based on belief propagation for LDPC codes (i.e., the sum-product algorithm) is characterized by the probability $p'$ for initialization
(which is usually the crossover probability of the channel) and the number $l$ of the maximum iterations. M. Hagiwara, M. P. C. Fossorier, H. Imai, Fixed initialization decoding of LDPC codes
over a binary symmetric channel, IEEE Trans. Inf. Theory, 58 (2012) 2321-2329 proved that if $p'$ is fixed (so initialization probability $p'$ is generally not equal to crossover probability
$p$), the decoding error probability is a polynomial function of $p$.
But we usually set $p'=p$. In this case, they proved that there are parity-check matrices for which the decoding error probabilities are not continuous functions of $p$.
Not the answer you're looking for? Browse other questions tagged coding-theory ca.analysis-and-odes it.information-theory co.combinatorics or ask your own question. | {"url":"http://mathoverflow.net/questions/67875/probability-of-false-decoding-with-ldpc-codes","timestamp":"2014-04-20T10:58:39Z","content_type":null,"content_length":"54796","record_id":"<urn:uuid:c7bf8ceb-433b-468a-a610-58a03dabdd63>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive: Report 2002/036
Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups
Ronald Cramer and Serge Fehr
Abstract: A {\em black-box} secret sharing scheme for the threshold access structure $T_{t,n}$ is one which
works over any finite Abelian group $G$. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field) in that distribution matrix and reconstruction
vectors are defined over the integers and are designed {\em independently} of the group $G$ from which the secret and the shares are sampled. This means that perfect completeness and perfect privacy
are guaranteed {\em regardless} of which group $G$ is chosen. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given $T_{t,n}$, a scheme with minimal
expansion factor, i.e., where the length of the full vector of shares divided by the number of players $n$ is minimal.
Such schemes are relevant for instance in the context of distributed cryptosystems based on groups with secret or hard to compute group order. A recent example is secure general multi-party
computation over black-box rings.
In 1994 Desmedt and Frankel have proposed an elegant approach to the black-box secret sharing problem based in part on polynomial interpolation over cyclotomic number fields. For arbitrary given $T_
{t,n}$ with $0<t<n-1$, the expansion factor of their scheme is $O(n)$. This is the best previous general approach to the problem.
Using low degree integral extensions of the integers over which there exists a pair of sufficiently large Vandermonde matrices with co-prime determinants, we construct, for arbitrary given $T_{t,n}$
with $0<t<n-1$ , a black-box secret sharing scheme with expansion factor $O(\log n)$, which we show is minimal.
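As a toy illustration of the "black-box" property (far simpler than the threshold schemes of the paper): $n$-out-of-$n$ additive sharing already works over any finite abelian group supplied only through its operations.

```python
import random

def additive_share(secret, n, add, neg, rand_elem):
    # n-out-of-n sharing over an abelian group given as black-box
    # operations: n-1 uniformly random shares, and a last share that
    # fixes the group sum of all shares to `secret`.
    shares = [rand_elem() for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = add(last, neg(s))
    shares.append(last)
    return shares

def additive_reconstruct(shares, add, identity):
    total = identity
    for s in shares:
        total = add(total, s)
    return total
```

The scheme above never looks at the group's representation, only at `add`, `neg`, and sampling — exactly the independence from $G$ that the abstract describes (though achieving arbitrary thresholds $t < n$ with small expansion is the hard part the paper solves).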
Category / Keywords: cryptographic protocols / information theoretically secure secret sharing
Date: received 21 Mar 2002, last revised 21 Mar 2002
Contact author: cramer at daimi aau dk
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | BibTeX Citation
Version: 20020322:022646 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ] | {"url":"http://eprint.iacr.org/2002/036","timestamp":"2014-04-21T10:23:17Z","content_type":null,"content_length":"3526","record_id":"<urn:uuid:9387a803-af57-464a-af2a-23262855c85b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Do Equivalent Fractions
While sitting in a Mathematics exam, you often have to solve a number of sums involving both short and long calculations. This can make you feel that the time allotted to complete the sums was not
enough. For your information, exam time is always pre-calculated, and you have to use methods whose calculations are less time consuming. One such situation arises when a calculation involves
Equivalent Fractions and you want to avoid the long, complete calculation. By applying a minor trick, you can skip the lengthy calculations and solve the sum in the minimum possible time.
• 1
Be aware of the operation you are applying on the fraction such as it could be addition, subtraction, multiplication and division.
• 2
For any operation between two given fractions, first look at the denominator of both of the fractions and see if they could be equalized by multiplying with a certain number.
2/4 + 11/32 ------------------------ 16/32 + 11/32
• 3
The reason why you do this is to avoid calculating the Least Common Multiple (LCM) of the two denominators. Equalizing the denominators will save you a lot of time:
16/32 + 11/32 = (16 + 11)/32 = 27/32
• 4
The same thing can be used if there is a fraction having numerator and denominator in three figures that may confuse you and seems greater than the one from which you have to subtract such as:
2/4 - 25/100
• 5
Now find a number that divides both 25 and 100. Dividing the numerator and the denominator by 25 gives 1 and 4:
25/100 = 1/4
• 6
The apparently bigger fraction has now been reduced and it can now be seen that it is smaller than 2/4 so the subtraction can be done easily.
2/4 – 1/4 = 1/4
• 7
In the similar way, the equivalent fraction can be used to solve fractions that are even bigger by reducing both the numerator and the denominator to a smaller number.
4500/10000 = 45/100 = 9/20 | {"url":"http://www.stepbystep.com/how-to-do-equivalent-fractions-3845/","timestamp":"2014-04-21T12:20:40Z","content_type":null,"content_length":"42951","record_id":"<urn:uuid:845a411e-cf23-4623-a87d-743ee163eec7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
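Each simplification in the steps above can be checked with Python's fractions module, which reduces fractions automatically:

```python
from fractions import Fraction

# Step 3: add after equalizing the denominators
assert Fraction(2, 4) + Fraction(11, 32) == Fraction(27, 32)
# Steps 5-6: reduce, then subtract
assert Fraction(25, 100) == Fraction(1, 4)
assert Fraction(2, 4) - Fraction(1, 4) == Fraction(1, 4)
# Step 7: repeated reduction of a larger fraction
assert Fraction(4500, 10000) == Fraction(9, 20)
```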
Linked List help!
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Nov 2012
Rep Power
Linked List help!
If anyone could help me, that would be great. We just went over classes and things recently and our teacher didn't really cover things very well.
This is the assignment:
/* **********************************************
Below is the code for an implementation of a linked list. The implementation has one missing method and one flaw.
Implement the missing size() method, which returns the length of the list.
Then, fix the insertAt() method so that it does not crash when first is NULL (when all items have been removed from the list).
********************************************** */
Here is the different code files: ( I will separate them with "*")
#include <iostream>
using namespace std;

#include "linkedlist.hpp"

int main ()
{
    LinkedList *list = new LinkedList(3);

    // Append the values 8, 12, and 2
    list->insertAt(8, 1);
    list->insertAt(12, 2);
    list->insertAt(2, 3);

    // Print the list
    for (int i = 0; i < 4; i++) {
        cout << list->at(i) << " ";
    }
    cout << endl;

    list->insertAt(50, 0);

    // Print the list
    cout << list->at(0) << endl;
    cout << endl;

    // Free the list
    delete list;
}
#pragma once

struct ListNode
{
    int value;
    ListNode *next;
};

// Singly linked list of integers
class LinkedList
{
public:
    LinkedList (int value);
    ~LinkedList ();
    int& at (int index);
    void insertAt (int value, int index);
    void removeAt (int index);
    int size ();

private:
    ListNode *first;
};
#include <cstddef>
#include <iostream>
using namespace std;

#include "linkedlist.hpp"

LinkedList::LinkedList(int initialValue)
{
    first = new ListNode;
    first->value = initialValue;
    first->next = NULL;
}

LinkedList::~LinkedList ()
{
    ListNode *node = first;
    while (node != NULL) {
        ListNode *nodeToDelete = node;
        node = node->next;
        delete nodeToDelete;
    }
}

int& LinkedList::at (int index) {
    // Find the node with the given index
    ListNode *searchNode = first;
    for (int i = 0; i < index; i++) {
        searchNode = searchNode->next;
    }
    // Return a reference to the value for that node
    return searchNode->value;
}

void LinkedList::insertAt (int value, int index)
{
    ListNode *node = new ListNode;
    node->value = value;
    if (index == 0) {
        // Replace first
        node->next = first->next;
        first = node;
    }
    else {
        // Find the node immediately before and insert after it
        ListNode *searchNode = first;
        for (int i = 0; i < index - 1; i++) {
            searchNode = searchNode->next;
        }
        node->next = searchNode->next;
        searchNode->next = node;
    }
}

void LinkedList::removeAt (int index)
{
    if (index == 0) {
        // Remove first
        ListNode *newFirst = first->next;
        delete first;
        first = newFirst;
    }
    else {
        // Find the node immediately before and delete after it
        ListNode *searchNode = first;
        for (int i = 0; i < index - 1; i++) {
            searchNode = searchNode->next;
        }
        ListNode *newNext = searchNode->next->next;
        delete searchNode->next;
        searchNode->next = newNext;
    }
}
And what is your question? Don't just dump your homework here and expect someone to do it for you. We got better **** to do.
I didn't ask, "hey could someone do all this for me?" I asked if someone could help me. Meaning, could someone point me in the right direction as to what to do.
...I asked if someone could help me. Meaning, could someone point me in the right direction as to what to do.
Yup, that we can do...
First read the sticky threads at the top of this forum. Then start doing some of your homework and when you have specific problems, ask specific questions, we'd love to help. Your original post
never asked any questions.
For instance, what part of you assignment do you not understand? If you take a stab at telling us what you think the size() method should do or why insertAt() method is broken, we just might have
more motivation to lend you a hand.
Comments on this post
• salem agrees : Much better than the reply I wrote, then deleted before posting it :)
I no longer wish to be associated with this site.
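For readers who want a starting point without spoiling the exact class, here is a stand-alone sketch of the two missing pieces as free functions (the names and structure here are illustrative, not the assignment's): size() is a walk that counts nodes, and a null-safe front insertion never dereferences a NULL head.

```cpp
#include <cstddef>

struct Node
{
    int value;
    Node *next;
};

// Length of the chain; returns 0 when head is NULL.
int listSize (Node *head)
{
    int count = 0;
    for (Node *n = head; n != NULL; n = n->next) {
        count++;
    }
    return count;
}

// Insert at the front without touching head->next,
// so it is safe even when head is NULL.
Node *insertFront (Node *head, int value)
{
    Node *n = new Node;
    n->value = value;
    n->next = head;   // works whether or not head is NULL
    return n;
}
```

The assignment's insertAt() crashes precisely because its index-0 branch reads first->next; the pattern above shows how the new node can simply take the old head (NULL or not) as its next pointer.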
Rep Power | {"url":"http://forums.devshed.com/programming-42/linked-list-help-943417.html","timestamp":"2014-04-19T08:44:17Z","content_type":null,"content_length":"62584","record_id":"<urn:uuid:7f85fa01-7ca9-4b40-96e9-458491e9aacc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
What guage wire for 120 foot 110V ext cord to 45 amp Iota battery charger? - Page 3 - SailNet Community
Join Date: May 2002
Location: East Coast
Posts: 13,878
moderate? Thanks: 0
Thanked 2 Times in 2 Posts
Rep Power:
EBS...AHAH!! I've found the answer I think to what you were talking about with reactive/inductive loads. Looking at the manual for the 45amp charger I found an interesting spec...it said:
Max WATT DRAW...842.4 (VA watt draw 1296)
Note this is remarkably close to the numbers we've been discussing where VA watts would be MY measurement and the 842.4 number would be close to the 80% efficiency number! Note the RATIO is 65%
between the two numbers.
Doing some research on VA watts vs. Watt draw...I came up with these statements:
VA in AC circuits is "reactive power" and has nothing to do with real power.
Loads such as induction motors, do not act as pure resistors, but like
inductors. Inductors and capacitors draw AND supply power back into their
power source. Induction motors supply current BACK into the power grid ever
half cycle. These currents tend to somewhat cancel out, but are still there.
If I supply 10 amps on a wire for 1 second, then reverse the polarity and
try again, the net current flow from end to end was zero, but 10 amps were
flowing for 2 seconds.
It would seem that
the volt-amp refers to the maximum power flow, while the Watt refers to a
time-averaged power flow. In AC circuits, Power flow varies as a sine
function. The "root-mean-square" rate of flow is approximately 65% of the
maximum flow.
Mystery explained! Your instincts were correct!! Thanks.
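The 65% ratio the poster lands on is the charger's power factor: real power divided by apparent power.

```python
def power_factor(real_power_w, apparent_power_va):
    # PF = P / S; the nameplate numbers quoted above give about 0.65.
    return real_power_w / apparent_power_va
```

For the Iota charger's ratings, `power_factor(842.4, 1296)` comes out to approximately 0.65, matching the ratio noted in the post.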
No longer posting. Reach me by PM! | {"url":"http://www.sailnet.com/forums/gear-maintenance/47242-what-guage-wire-120-foot-110v-ext-cord-45-amp-iota-battery-charger-3.html","timestamp":"2014-04-19T15:36:18Z","content_type":null,"content_length":"181946","record_id":"<urn:uuid:acb0e5a1-bc79-4b17-ace3-ce2ad4990ac1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |