content
stringlengths
86
994k
meta
stringlengths
288
619
Work done by friction Help! theta and cos theta (I think) should be 0 because they are parallel? I did do this and got an answer of 4880.7 Joules - should be negative because friction is going in the opposite direction. their answer is -4.95 KJ even if I divide by 1000 I still don't get the right answer. Sorry, I missed a typo in your previous post. The distance in the original problem in your first post is 7.76m, not 7.65m. That accounts for the small difference between your answer and theirs. BTW, you should have said theta = 0, so cos(0) = 1. That must be what you meant, since you would have gotten the right answer without the distance typo. Good job.
{"url":"http://www.physicsforums.com/showthread.php?p=2911040","timestamp":"2014-04-16T10:35:32Z","content_type":null,"content_length":"39535","record_id":"<urn:uuid:13e0781a-3e8d-4d4b-ba0e-ec72785928e9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
New Carrollton, MD SAT Math Tutor Find a New Carrollton, MD SAT Math Tutor ...I do not have any professional tutoring experience, but I have had good experiences tutoring my friends and family. I am an extremely patient person, and I am usually able to explain math problems in several different ways until they are understood. I also have scored very well on standardized ... 32 Subjects: including SAT math, English, reading, calculus ...I have tutored students privately over the last 30 years. When I tutor, I usually meet for one hour. If I find the student is not responding, or has difficulty staying on task, I stop and talk to them about what interests them most. 21 Subjects: including SAT math, calculus, statistics, geometry ...I have been tutoring various levels of math for 6 years now. We can work together to significantly improve your scores and grades. I have very flexible hours and am happy to come work with you wherever is most convenient. 22 Subjects: including SAT math, calculus, geometry, GRE ...I understand that some learn better visually, while others learn better with hands-on activities, so after determining the best way to help the student, I will be able to choose the best way to help them reach their full potential. I have a strong passion for educating and have joy in spreading ... 17 Subjects: including SAT math, reading, algebra 1, geometry ...I would like to help students understand better course materials and what is integral in extracting information from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated. I was born and raised in Seoul, Korea where my parents still live. 17 Subjects: including SAT math, chemistry, physics, calculus Related New Carrollton, MD Tutors New Carrollton, MD Accounting Tutors New Carrollton, MD ACT Tutors New Carrollton, MD Algebra Tutors New Carrollton, MD Algebra 2 Tutors New Carrollton, MD Calculus Tutors New Carrollton, MD Geometry Tutors New Carrollton, MD Math Tutors New Carrollton, MD Prealgebra Tutors New Carrollton, MD Precalculus Tutors New Carrollton, MD SAT Tutors New Carrollton, MD SAT Math Tutors New Carrollton, MD Science Tutors New Carrollton, MD Statistics Tutors New Carrollton, MD Trigonometry Tutors Nearby Cities With SAT math Tutor Beltsville SAT math Tutors Berwyn Heights, MD SAT math Tutors Bladensburg, MD SAT math Tutors Cheverly, MD SAT math Tutors College Park SAT math Tutors Glenarden, MD SAT math Tutors Glenn Dale SAT math Tutors Greenbelt SAT math Tutors Landover Hills, MD SAT math Tutors Lanham SAT math Tutors Lanham Seabrook, MD SAT math Tutors Riverdale Park, MD SAT math Tutors Riverdale Pk, MD SAT math Tutors Riverdale, MD SAT math Tutors Seabrook, MD SAT math Tutors
{"url":"http://www.purplemath.com/New_Carrollton_MD_SAT_Math_tutors.php","timestamp":"2014-04-17T04:40:38Z","content_type":null,"content_length":"24355","record_id":"<urn:uuid:172348ed-b4ed-4903-9d7f-479572c25d09>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Granada Hills Math Tutor Find a Granada Hills Math Tutor ...I have worked with all ages of students, from elementary to college level, and with varying levels of abilities, from students with learning disabilities to the highly gifted. My strongest subjects are Math and Science but I can also assist with other subjects. I am available almost all hours of the day and am flexible with setting up meeting times and sessions. 11 Subjects: including algebra 1, algebra 2, biology, chemistry ...My specialty is with students who have learning disabilities, ADHD, and Asperger's. I have a B.A. in psychology from CSUN, which helps me to better assist students in their needs. I enjoy seeing students master a concept that I have taught them. 12 Subjects: including algebra 1, elementary (k-6th), prealgebra, reading ...I help them improve in various areas of speech, such as articulation, modulation, extemporaneous and fluent delivery, and enthusiasm. I focus on teaching my students how to prepare and organize their thoughts in order to keep their audience's attention and feel confident. It is important that a public speaker understands the goal of their speech: Is it to motive the audience? 13 Subjects: including algebra 2, algebra 1, prealgebra, English ...This is my favorite subject to teach. I know what it's like to have problems with SAT reading. I have worked hard to overcome my own problems with getting distracted and having test anxiety. 42 Subjects: including differential equations, SAT reading, grammar, Microsoft Excel ...Others further their studies overseas, or have moved on to careers or higher education. One student was very pleased to raise her college algebra score two whole grades, from a ‘D’ to a ‘B’.You CAN overcome the challenge before you. I would love the opportunity to help you hurdle that obstacle. 24 Subjects: including algebra 1, GED, statistics, ESL/ESOL
{"url":"http://www.purplemath.com/Granada_Hills_Math_tutors.php","timestamp":"2014-04-19T05:13:20Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:f8dbc7b4-e69a-4da7-a4d6-a4d93a4acf89>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Mars is a strict statically typed language. Its type system is based on the parametric Hindley-Milner system of Haskell (and before it, ML). All variables and expressions have a known type at compile In the traditional sense, a data type is a set of possible values. Data types in Mars are exclusive – every value belongs to exactly one type. Int and Array(Int) are examples of data types. Mars also features type constructors, which are types with parameters. Array is an example of a type constructor. A type constructor is still a type – it is said to be “uninhabited”. To distinguish between different types of types, Mars has what is known as a “kind system”. This is similar to Haskell’s kind system, with two important differences. Firstly, type constructors are n-ary, not unary. This means types take all of their arguments at the same time, rather than implicitly currying as they do in Haskell (so in Mars, T(a,b) is not equivalent to T(a)(b), whereas in Haskell, it is). Secondly, type constructors in Mars may be variadic. This means that a type may take an arbitrary number of arguments, similar to a “var-args” function in many programming languages. For a general overview of kinds, see Kind (type theory) on Wikipedia. Kind values take one of three forms: 1. * is the kind of all data types. 2. (k1,k2,...,kn) -> k is the kind of a type constructor, which takes exactly n types of kind k1, k2, ..., kn, respectively, and produces a type of kind k. 3. (k1,k2,...,kn,**) -> k is the kind of a type constructor, which takes n or more types, and produces a type of kind k. The first n types must be of kind k1, k2, ..., kn, respectively, and any additional arguments must be of kind *. The kind of a type is used to distinguish between type constructors and data types, and (for type constructors) the number and kind of their arguments. Kinds are not directly visible in Mars. They are not specified in source code, hence there is no syntax for them. They are merely a behind-the-scenes mechanic, which is automatically computed for each type. Note that it is possible to display the kind of a type . While this general definition of the kind system allows type constructors with non-star argument and return kinds, the Mars language provides no way to create such types, nor are there any such built-in types. Hence, an implementation may simply represent type constructors in the less general form of an (arity :: Int, isvariadic :: Bool) pair. Type Values Type values have the following syntax: type ::= function_type | primary_type function_type ::= "(" [type_list] ")" "->" type | primary_type "->" type primary_type ::= atom_type | primary_type "(" [type_list] ["," "..."] ")" atom_type ::= type_variable | type_name | "->" type_list ::= type ("," type)* type_variable ::= lower_identifier type_name ::= upper_identifier Type values take one of the following forms: 1. A type name is an identifier beginning with an uppercase letter, and references a user-defined or built-in type constructor. Its kind is the kind of the referenced type constructor. 2. A type variable is an identifier beginning with a lowercase letter, and represents an unknown type. Its kind is inferred from the context. 3. The token -> is the special builtin type -> (function type constructor). Its kind is (*, **) -> *. Note that this form is rarely used. 4. Type application, of the form t(t1,t2,...,tn), or partial type application, of the form t(t1,t2,...,tn,"...") (where the final ... is a literal “...” token). See Type application. 5. A function type, of the form (t1,t2,...,tn) -> t is syntactic sugar for the type ->(t1,t2,...,tn,t). It represents a function which takes n arguments, t1 through to tn, as input, and returns a t. Its kind is always *. The -> operator is right-associative. Parentheses around the argument type are optional for functions of exactly one argument. The current Mars implementation does not support the ... notation. It provides no way to perform partial type application. In light of the fact that type constructors cannot accept or produce non-star types, it is a useless feature. Type application Type application is the application of zero or more type arguments to a type constructor, producing a new type. Typically, type constructors have kind (*,*,...,*) -> *, and each type argument has kind *, resulting in a new type of kind *. However, this is not the most general case. In general, type application takes the form t(t1, t2, ..., tn [, "..."]). If the application ends with “...”, it is said to be curried. Type t must have kind (k1, k2, ..., km [, **]) -> k, or the type application is a kind error, and the compiler must reject the program. If the argument kinds of t end with “**”, t is said to be variadic. It must hold that n == m, with the following exceptions: • If the application is curried, it must hold that n <= m. • If t is variadic, it must hold that n >= m. • If both the application is curried, and t is variadic, n may be any size. For each ti, where 1 <= i <= min(n, m), ti must have kind ki. Any additional arguments must have kind *. If the application is not curried, it is considered complete. The resulting type has kind k. If the application is curried, additional arguments are required. The resulting type has kind (kn+1, ..., km [, **]) -> k. (That is, it is a new type constructor, accepting all remaining arguments after the first n). Examples of Type Values • Int, a built-in type of kind *. • List, a type defined in the prelude, of kind * -> * (that is, requiring one type argument). • List(Int), the type Int applied to type List, producing a type of kind *, which is a data type. • ->, a special built-in type of kind (*, **) -> *. • ->(Int, Int), two arguments, both Int, applied to type ->, producing a type of kind *. □ A function which accepts an Int, and produces an Int. • Int -> Int, the same as above, with a more natural syntax. • ->(Int, ...), one argument of type Int, applied to type ->, and curried, producing a type of kind (**) -> *. • a, a type variable, of unknown kind (requires context). • a -> Int, a function type which accepts an argument of any type, and produces an Int. • ->(a, b, c), three arguments applied to type ->, producing a type of kind *. □ A function which accepts two arguments, of types a and b, and produces a c. • (a, b) -> c, the same as above, with a more natural syntax. • a -> b -> c, a function which accepts an a, and produces a function which accepts a b, and produces a c. □ This is a classic “curried style” function, as may be found in Haskell. □ This type may also be written as ->(a, ->(b, c)). Type unification Unification is the main algorithm Mars uses for type checking and type inference. For a general overview of unification, see Unification on Wikipedia. When two or more values are expected to have the same type, their types are unified. The specification explicitly states when types should be unified. If the two types are the same, they successfully unify. Otherwise, they fail to unify, and the program must be rejected (due to a type error). For example, the type Int will unify with the type Int, but fail to unify with the type Array(Int). This is complicated by type variables. Each type variable is either free, bound or rigid. • A free type variable a will successfully unify with any type t, but once unified, a is bound to t for the entirety of the function. • A bound type variable a, bound to type t, will unify with s if and only if t unifies with s. • A rigid type variable a is never bound, and unifies only with a. For example, the type a (if a is a free type variable) will unify with the type Int, but result in a being bound to Int. In future unifications within the same function, a will unify only with Int. Type variables explicitly named in the header or body of a procedure are rigid, and will not unify with any type other than themselves. For example, this code is invalid: def to_int(x :: a) :: Int: return x # Type error It is invalid because the header specifies that the caller may pass an argument of non-Int type, but in that case, it won’t be able to return an Int. Thus the type variable a is rigid; the procedure is only valid if a is not unified with any other type. Non-rigid type variables are introduced by implicitly-typed variables (for type inference) or by expressions with polymorphic types (such as the empty array literal, or global variables and functions with polymorphic types). For example: def singleton(x :: Int) :: Array(Int): v = [] return array_add(v, x) In this example, the variable v is given the type Array(a), with free variable a, when it is first assigned. During the call to array_add, a is unified with Int, so v has type Array(Int). It would be a type error to treat v as a different type elsewhere, even though it hasn’t got any data in it (once its type is bound, it is bound for the entire body of the function; type variables in Mars are monomorphic), as in the following example: def monomorphic(x :: Int) :: Array(Int): v = [] r = array_add(v, x) w = array_add(v, [1]) # Type error return r The unification rules are as follows (note all rules are symmetric): • Type variable a unifies with type t with the rules as above (regardless of whether t is a type name, a type variable or a type application). If successful, this results in variable a becoming • Type name x unifies with type name x, and no other type name. • Type names do not unify with type applications. • Type application t ( t1, ..., tn ) unifies with type application s ( s1, ..., sn ) if and only if t unifies with s and ti unifies with si for all i <= n. Any bindings made in the recursive unifications apply. Polymorphism in global constants The example above shows that type variables in Mars are monomorphic. This is true only for local variables. As a special rule, global constants (including functions and data constructors) in Mars are polymorphic, meaning they can be given a different binding upon each use. The type of each global constant is “generalised” by taking each type variable in its original type, and universally quantifying it. A variable of type t containing type variables a1, ..., an is generalised as “∀ a1, ..., an. t“ A successful unification of a type with a universal quantification will not cause the quantified variables to become bound, so they may be unified again as a free variable. The example above can be “fixed” by making v a global constant: def v :: Array(a) = [] def monomorphic(x :: Int) :: Array(Int): r = array_add(v, x) w = array_add(v, [1]) return r Now v has type ∀a. Array(a). The first unification between ∀a. Array(a) and Array(Int) succeeds without binding a. Thus, the second unification between ∀a. Array(a) and Array(Array(Int)) also The polymorphic / monomorphic distinction is important when dealing with polymorphic functions. Consider: def twomaps(f :: a -> b, g :: a -> c, x :: Array(a)) :: Pair(Array(b), Array(c)): y = array_map(f, x) z = array_map(g, x) return Pair(y, z) Contrast with: def twomaps(f :: a -> b, g :: a -> c, x :: Array(a)) :: Pair(Array(b), Array(c)): mymap = array_map y = mymap(f, x) z = mymap(g, x) # Type error return Pair(y, z) In the former example, array_map is called twice. Its type is ∀t. ∀u. (t -> u, Array(t)) -> Array(u). On the first call, its first argument is unified with a -> b; hence it is treated as though it has type (a -> b, Array(a)) -> Array(b), binding y‘s type to Array(b). However, this unification does not change the type of array_map. On the second call, its first argument is unified with a -> c; hence it is treated as though it has type (a -> c, Array(a)) -> Array(c), binding z‘s type to Array(c). The two uses of array_map treat it as though it has two different, incompatible types, which is valid because the function itself is polymorphic. In the latter example, array_map is used only once, to assign to the local (monomorphic) variable mymap. This gives mymap the type (t -> u, Array(t)) -> Array(u) (note the lack of universal quantification – the free type variables t and u are scoped to the whole function). On the first call to mymap, its first argument is unified with a -> b; hence free variable t is bound to rigid variable a and free variable u is bound to rigid variable b. This permanently changes the type of mymap to (a -> b, Array(a)) -> Array(b). It is as if the function contained the declaration: var mymap :: (a -> b, Array(a)) -> Array(b) Therefore, on the second call to mymap, its first argument of type a -> b is unified with a -> c, and the unification of rigid variables b and c fails, producing a type error. The bottom line is that once a function (or any other value) is assigned to a local variable, it has a fixed type, even if it is not explicitly declared. Built-In Types Mars defines three built-in types, which could not be written in the language itself. type Int Has kind *. Values of this type are arbitrary-precision integers. Positive and negative integers of any magnitude are members of this type. This type is special because it is not possible to declare user-defined types with arbitrarily-many elements, nor with the special integer literal syntax. Integer literals form the data constructors for this type. Displaying values of this type results in integer literal form. type Array(a) Has kind * -> *. Provides extensible arrays (sometimes known as “vectors”) which may be accessed and updated in constant time. Constant-time array updates are currently only possible by importing the impure library module, and are considered to be an unofficial part of the language. This type is special because it is not possible to declare user-defined types with the same performance characteristics as a true array. Array literals may be used to build array values. Built-in functions are also available to create and manipulate arrays. Displaying values of this type results in array literal form. type -> Has kind (*, **) -> *. The function type constructor. Values of type a -> b are functions which accept an argument of type a, and produce a result of type b. Functions may only be created by being declared at the top level, or through currying. The display representation of a function is implementation-defined. It is not mathematically valid to print out anything, but implementations may print out information about the function object, to be helpful. The value should not be used in computations. Type Declarations Type declarations have the following syntax: typedef ::= "type" type_name [type_params] ":" NEWLINE INDENT constructor+ DEDENT type_params ::= "(" [type_variable ("," type_variable)*] ")" constructor ::= ctor_name [ctor_params] ctor_params ::= "(" [ctor_param ("," ctor_param)*] ")" ctor_param ::= [param_name "::"] type ctor_name ::= upper_identifier param_name ::= lower_identifier Use of the typedef production will declare a new globally-available type name, as well as a number of globally-available constructor function names. Types must have a name (an identifier beginning with an uppercase letter), and at least one constructor. A type may optionally have zero or more type parameters. If a type does not have parameters, its kind is *. If a type has n parameters, its kind is (*1, *2, ..., *n) -> *. Any parameter variables may be used in the types of the constructor parameters. This statement only creates type constructors of the form (*,*,...,*) -> *. It is not possible to create a user-defined type constructor which accepts or returns higher-kinded types, or variadic type Declaring a type with empty parentheses (eg. type Foo()) is possible in this grammar, but not something you should ever want to do. It creates a type constructor, of kind () -> *, which accepts zero arguments. This type cannot be used as a data type without accompanying parentheses wherever it is referred to. This is allowed in Mars, for more transparency in the kind system, but in practice, all nullary types should be declared without parentheses (eg. type Foo), giving them the kind *. Types have one or more constructors. Each constructor has a globally-unique name, and zero or more parameters. The constructor declares a globally-available function of the same name, which takes the given arguments as inputs, and returns a value of the type being declared. Each constructor parameter may optionally have a name. No two parameters of a constructor may have the same name. The same parameter name may appear in multiple constructors, but it MUST have the same type in all constructors. For example, the following type is illegal, as a constructor has two parameters with the same name: type Foo: X(v :: Int, v :: Int) # Error: Duplicate field name The following type is valid, as the name v appears in multiple constructors, but has the same type in all instances: type Foo: X(u :: Int, v :: Int) Y(v :: Int) The following type is illegal, as the name v appears in multiple constructors with differing types: type Foo: A(u :: Int, v :: Int) B(v :: Array(Int)) # Error: Duplicate field name A constructor without parameters is, in effect, declaring a global constant of this type. A type is an “enumeration type” if all of its constructors are nullary, and as such, it simply declares a fixed-size set of named constants. Note that, just as with the type itself, a constructor without parameters is distinct from a constructor with 0 parameters. Constructors with 0 parameters require an empty function application, and are typically not useful.
{"url":"http://ww2.cs.mu.oz.au/~mgiuca/mars/docs/ref/types.html","timestamp":"2014-04-21T14:40:19Z","content_type":null,"content_length":"57019","record_id":"<urn:uuid:d9e81007-45a4-4f51-952f-2b61ef066d9b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Washington Precalculus Tutor ...I believe that there is a way to learn math for everyone and I look forward to finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about help students succeed in math be... 22 Subjects: including precalculus, calculus, geometry, GRE Dear Prospective Tutee, Get ready to learn, have fun, and gain confidence in your ability to do math—and raise your grades too! I offer tutoring sessions for all high school math subjects—from pre-algebra to AP calculus. I have helped to significantly improve students' scores and grades (as much as from an F to an A) in high school math subjects for three years now. 15 Subjects: including precalculus, chemistry, calculus, geometry ...This gives me the strong background necessary to teach precalculus. I have also been a math tutor through college, teaching up to Calculus-level classes. My tutoring style can adapt to individual students and will teach along with class material so that students can keep their knowledge grounded. 11 Subjects: including precalculus, chemistry, geometry, algebra 2 ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including precalculus, calculus, physics, GRE ...Source: www.nsf.gov. This means that the importance of understanding Mathematics can lead to more opportunities! Imagine, hiring a tutor that cares about your success and the money in your 13 Subjects: including precalculus, chemistry, physics, algebra 2
{"url":"http://www.purplemath.com/washington_navy_yard_precalculus_tutors.php","timestamp":"2014-04-20T07:04:16Z","content_type":null,"content_length":"24202","record_id":"<urn:uuid:fea6a3e9-f2a2-474a-b26c-5925daa48e91>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Larry Zimmerman strikes again I wanted to share another few excerpts from Larry. He gave a talk last Tuesday at the NY Math Circle's PD for Middle School Teachers, and I was impressed again by how he asks questions to extend thinking about a problem, offer more entry points, and deepens my interest. He does it so quickly, I can't help but get interested and then my mind is thinking about 10 different but connected things at once, and whatever I get into relates to the bigger picture, and it's like magic differentiated instruction and guided inquiry. It's really awesome. We did a bunch of problems, and here are three that I wrote down his questions for: Show that the sum of two odd numbers is an even number. What is an odd number? What is an even number? Is 4 even? Is 0 even? Are there any even primes? What’s the next even prime? Are all prime #s greater than 2 odd? Are all #s greater than 2 prime? Is -4 even? Is -6 even? Why? So now, what is an odd number? Is every even a multiple of 4? Show that no perfect sq ends in 3. Do you believe it? Do any end in 6? Did you know that every sq ends in 6 iff the number’s 10s digit must be odd? Would anyone like to see a proof of that? Good! Go do it. (I was curious to observe here that every sq is - a multiple of 5 - one less than a multiple of 5 - one more than a multiple of 5) Show that x^2 – y^2 = 2 has no solutions in integers Translate this into words. Say this in words: x^2 + y^2 (x + y)^2 x^2 + 5x + 6 = (x + 2)(x + 3) Find the value of x that makes this false. Factor it using diff of squares: (x + y)(x – y) = 2 Note: remember given info Integers factors must be 2 & 1 “I wanna savor that for a minute. To some people that sounds trivial, but to me it sounds profound. Never dismiss the obvious as being trivial. Even if it’s obvious it may be very very important.” x + y = 2 and x – y = 1 (or other way around) “Do something, do anything, and if it doesn’t work, SO WHAT?” 3 comments: 1. Mr. Zimmerman is quite a teacher. I had the honor to be one of his student. The day he retired, Brooklyn Technical High School lost one of its greatest teacher! It is good to hear that he is still involve with teaching. 2. I really enjoy your blog!! I was just looking over this question. Shouldn't you also check x+y=-2,x-y=-1 and vice versa. Given, they will not have a solution in the integers, but still.. 3. Thanks for the comment and addition! Sorry to take so long to reply. Of course, you're absolutely right that there are four possible pairs of factors! Beautiful.
{"url":"http://mathbebrave.blogspot.com/2009/11/larry-zimmerman-strikes-again.html","timestamp":"2014-04-19T17:01:20Z","content_type":null,"content_length":"90588","record_id":"<urn:uuid:adcaaf76-365c-46f9-8dec-7bcb8f6e54cf>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal matrix transposition and bit reversal on hypercubes: all-to-all personalized communication Results 1 - 10 of 16 , 1994 "... The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; howev ..." Cited by 32 (13 self) Add to MetaCart The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how th... , 1992 "... We present a linear algebraic formulation for a class of index transformations such as Gray code encoding and decoding, matrix transpose, bit reversal, vector reversal, shuffles, and other index or dimension permutations. This formulation unifies, simplifies, and can be used to derive algorithms for ..." Cited by 15 (3 self) Add to MetaCart We present a linear algebraic formulation for a class of index transformations such as Gray code encoding and decoding, matrix transpose, bit reversal, vector reversal, shuffles, and other index or dimension permutations. This formulation unifies, simplifies, and can be used to derive algorithms for hypercube multiprocessors. We show how all the widely known properties of Gray codes, and some not so well-known properties as well, can be derived using this framework. Using this framework, we relate hypercube communications algorithms to Gauss-Jordan elimination on a matrix of 0's and 1's. - SIAM Rev , 1996 "... Manyversions of the fast Fourier transform require a reordering of either the input or the output data that corresponds to reversing the order of the bits in the array index. There has been a surprisingly large number of papers on this subject in the recent literature. ..." Cited by 14 (0 self) Add to MetaCart Manyversions of the fast Fourier transform require a reordering of either the input or the output data that corresponds to reversing the order of the bits in the array index. There has been a surprisingly large number of papers on this subject in the recent literature. - DISCRETE APPLIED MATHEMATICS , 1992 "... We present optimal schedules for permutations in which each node sends one or several unique messages to every other node. With concurrent communication on all channels of every node in binary cube networks, the number of element transfers in sequence for K elements per node is K 2 , irrespective ..." Cited by 7 (2 self) Add to MetaCart We present optimal schedules for permutations in which each node sends one or several unique messages to every other node. With concurrent communication on all channels of every node in binary cube networks, the number of element transfers in sequence for K elements per node is K 2 , irrespective of the number of nodes over which the data set is distributed. For a succession of s permutations within disjoint subcubes of d dimensions each, our schedules yield min( K 2 + (s \Gamma 1)d; (s + 3)d; K 2 + 2d) exchanges in sequence. The algorithms can be organized to avoid indirect addressing in the internode data exchanges, a property that increases the performance on some architectures. For message passing communication libraries, we present a blocking procedure that minimizes the number of block transfers while preserving the utilization of the communication channels. For schedules with optimal channel utilization, the number of block transfers for a binary d-cube is d. The maximum - Journal of Parallel and Distributed Computing , 1994 "... by ..." , 1993 "... We give a brief description of what we consider to be data parallel programming and processing, trying to pinpoint the typical problems and pitfalls that occur. We then proceed with a short annotated history of data parallel programming, and sketch a taxonomy in which data parallel languages can be ..." Cited by 5 (0 self) Add to MetaCart We give a brief description of what we consider to be data parallel programming and processing, trying to pinpoint the typical problems and pitfalls that occur. We then proceed with a short annotated history of data parallel programming, and sketch a taxonomy in which data parallel languages can be classified. Finally we present our own model of data parallel programming, which is based on the view of parallel data collections as functions. We believe that this model has a number of distinct advantages, such as being abstract, independent of implicitly assumed machine models, and general. - in proceedings of the 6th Distributed Memory Computing Conf , 1991 "... All-to-all personalized communication is a class of permutations in which each processor sends a unique message to every other processor. We present optimal algorithms for concurrent communication on all channels in Boolean cube networks, both for the case with a single permutation, and the case whe ..." Cited by 4 (0 self) Add to MetaCart All-to-all personalized communication is a class of permutations in which each processor sends a unique message to every other processor. We present optimal algorithms for concurrent communication on all channels in Boolean cube networks, both for the case with a single permutation, and the case where multiple permutations shall be performed on the same local data set, but on different sets of processors. For K elements per processor our algorithms give the optimal number of elements transfer, K=2. For a succession of all-to-all personalized communications on disjoint subcubes of fi dimensions each, our best algorithm yields K 2 +oe \Gamma fi element exchanges in sequence, where oe is the total number of processor dimensions in the permutation. An implementation on the Connection Machine of one of the algorithms offers a maximum speed-up of 50% compared to the previously best known algorithm. 1 Introduction We give simple, yet optimal, schedules for all-to-all personalized commun... - In: Parallel Computing. Volume , 1991 "... We describe an implementation of the Cooley Tukey complex-to-complex FFT on the Connection Machine. The implementation is designed to make effective use of the communications bandwidth of the architecture, its memory bandwidth, and storage with precomputed twiddle factors. The peak data motion rate ..." Cited by 3 (0 self) Add to MetaCart We describe an implementation of the Cooley Tukey complex-to-complex FFT on the Connection Machine. The implementation is designed to make effective use of the communications bandwidth of the architecture, its memory bandwidth, and storage with precomputed twiddle factors. The peak data motion rate that is achieved for the interprocessor communication stages is in excess of 7 Gbytes/s for a Connection Machine system CM-200 with 2048 floating-point processors. The peak rate of FFT computations local to a processor is 12.9 Gflops/s in 32-bit precision, and 10.7 Gflops/s in 64-bit precision. The same FFT routine is used to perform both one- and multi-dimensional FFT without any explicit data rearrangement. The peak performance for a one-dimensional FFT on data distributed over all processors is 5.4 Gflops/s in 32-bit precision and 3.2 Gflops/s in 64-bit precision. The peak performance for square, two-dimensional transforms, is 3.1 Gflops/s in 32-bit precision, and for cubic, three dimensi...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=979375","timestamp":"2014-04-18T02:06:54Z","content_type":null,"content_length":"35403","record_id":"<urn:uuid:306ba925-5fb3-440d-b8dd-6297117eb4d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric Identities April 7th 2012, 08:37 AM #1 Apr 2012 Trigonometric Identities If sin x = 2/3 and cos y = -2/7, find the possible values of cos(x+y). Anyone please? I tried cos(x+y) = cos x cos y - sin x sin y = -2/7cos x - 2/3 sin y now i'm stuck Last edited by jessm001; April 7th 2012 at 08:41 AM. Re: Trigonometric Identities Use the Pythagorean Identity to get the values for cos(x) and sin(y), then apply the angle sum identity you attempted to use Re: Trigonometric Identities From $\cos^2 x + \sin ^2 x =1$ we get $\cos x = \pm \sqrt{1-\sin^2 x}$. Re: Trigonometric Identities I used the pythagorean identity and found values for cos x and sin y, should I now substitute both negative and positive values in the sum angle identity? And should the compound angle identity be equal to something ? Re: Trigonometric Identities Since you know that sin(x) is positive, that means x could be in the first and second quadrants, which means cos(x) could be positive or negative. Since you know that cos(y) is negative, that means y could be in the second or third quadrants, which means sin(x) could be positive or negative. So yes, since there are four possibilities, you will need to substitute all four values to get four possible results Re: Trigonometric Identities Hello, jessm001! You were on your way. We are expected to find those missing values. $\text{If }\sin x = \tfrac{2}{3}\text{ and }\cos y = \text{-}\tfrac{2}{7}\text{, find the possible values of }\cos(x+y).$ $\sin x \:=\:\frac{2}{3} \:=\:\frac{opp}{hyp} \qquad x\text{ is in Quadrant 1 or 2.}$ . . $adj \:=\:\pm\sqrt{5} \quad\Rightarrow\quad \cos x \:=\:\pm\frac{\sqrt{5}}{3}$ $\cos y \:=\:\frac{\text{-}2}{7} \:=\:\frac{adj}{hyp}\qquad y\text{ is in Quadrant 2 or 3.}$ . . $opp \:=\:\pm\sqrt{45} \:=\:\pm3\sqrt{5} \quad\Rightarrow\quad \sin y \:=\:\pm\frac{3\sqrt{5}}{7}$ $\cos(x + y) \;=\;\cos x\cos y - \sin x\sin y$ . . . . . . . . $=\;\left(\pm\frac{\sqrt{5}}{3}\right)\left(-\frac{2}{7}\right) - \left(\frac{2}{3}\right)\left(\pm\frac{3\sqrt{5}}{ 7}\right)$ There are four possible values: . $\frac{8\sqrt{5}}{21},\;-\frac{8\sqrt{5}}{21},\;\frac{4\sqrt{5}}{21},\;-\frac{4\sqrt{5}}{21}$ Re: Trigonometric Identities Thanks a lot ! April 7th 2012, 08:45 AM #2 April 7th 2012, 08:46 AM #3 Senior Member Jan 2008 April 7th 2012, 08:57 AM #4 Apr 2012 April 7th 2012, 06:09 PM #5 April 8th 2012, 08:04 AM #6 Super Member May 2006 Lexington, MA (USA) April 8th 2012, 09:09 AM #7 Apr 2012
{"url":"http://mathhelpforum.com/trigonometry/196910-trigonometric-identities.html","timestamp":"2014-04-17T05:36:31Z","content_type":null,"content_length":"55855","record_id":"<urn:uuid:f5754f58-703a-405b-a7bd-463f602d1a8b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Lorentz contraction of box filled with gas The pressure inside the box does not increase. I will try and explain why. Pressure is defined as force divided by area. Force is defined as m*a = m*dv/dt = dp/dt where dp is change in momentum. If we have a box with n particles, each of mass m, conveniently bouncing straight up and down with velocity w and colliding with top of the box every t seconds then the total force of the particles colliding with the top of the box n*m*w/t. When the box is moving from left to right with velocity w with respect to us the transverse component of the particles velocities is reduced by gamma (v) and the tranverse mass of each particle increases by a factor of of gamma(v). The time interval t also increases by gamma(v) from our point of view so that overall the force acting on the top of the box is n*(my)*(w/y)/(t*y) = (n*m*w/t)/y where y is gamma(v) or 1/sqrt(1-v^2/c^2). So the overall force is reduced by gamma. Since pressure is force divided by area and the surface area of the top of the box is also reduced by gamma (due to length contraction) the pressure is the same from our point of view as it is to to an observer that is stationary with respect to the box. You can do a similar analysis for the sides of the box and arrive at the same conclusion. Very nice, kev I am getting the same result with a slightly improved mathematical formalism. In the frame of the box, the mass of the gass is m_0 and the speed of the molecules is w so, the momentum is The force exerted by molecules is F=dp/dtau=m_0*dw/dtau where tau is the proper time as measured in the box frame The crossection of the top of the box is A=a*b The pressure in the box frame is: In the observer frame , assuming the crossection is: where a is the dimension of the box side moving along the box movement, b is the dimension perpendicular on the movement, gamma=1/sqrt(1-(v/c)^2) and v is the box speed wrt the observer. Lorentz transforms say that the molecules move with speed p'=gamma*m_0*w'=gamma*m_0*w/gamma=m_0*w=p! (no real surprise here, it is quite intuitive) dt=gamma* dtau (time dilation) so dtau/dt=1/gamma so: Pr'=F'/A'=F/A=Pr (Q.E.D)
{"url":"http://www.physicsforums.com/showthread.php?p=1582502","timestamp":"2014-04-17T00:52:20Z","content_type":null,"content_length":"87552","record_id":"<urn:uuid:440a5da1-ed22-4465-9dbd-4fa1c8f880c4>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Model of the Fractional Order Dynamics of the Planetary Gears Mathematical Problems in Engineering Volume 2013 (2013), Article ID 932150, 14 pages Research Article A New Model of the Fractional Order Dynamics of the Planetary Gears ^1State University of Novi Pazar, Novi Pazar, Serbia ^2Faculty of Engineering Science, University of Kragujevac, Serbia Received 9 November 2012; Revised 25 March 2013; Accepted 31 March 2013 Academic Editor: Jocelyn Sabatier Copyright © 2013 Vera Nikolic-Stanojevic et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A theoretical model of planetary gears dynamics is presented. Planetary gears are parametrically excited by the time-varying mesh stiffness that fluctuates as the number of gear tooth pairs in contact changes during gear rotation. In the paper, it has been indicated that even the small disturbance in design realizations of this gear cause nonlinear properties of dynamics which are the source of vibrations and noise in the gear transmission. Dynamic model of the planetary gears with four degrees of freedom is used. Applying the basic principles of analytical mechanics and taking the initial and boundary conditions into consideration, it is possible to obtain the system of equations representing physical meshing process between the two or more gears. This investigation was focused to a new model of the fractional order dynamics of the planetary gear. For this model analytical expressions for the corresponding fractional order modes like one frequency eigen vibrational modes are obtained. For one planetary gear, eigen fractional modes are obtained, and a visualization is presented. By using MathCAD the solution is obtained. 1. Introduction Planetary gears are a great application in modern engineering systems as a replacement for the conventional manual transmission complex because it has a compact structure and high transmission ratios. Due to the structure of planetary gears and the fact that the so-called planetary gear-satellites simultaneously perform two current trends in the work of planetary gears, there are even extreme vibration, that is, dynamic loads, which cause damage to the gears, bearings, and other elements of the transmission. Precise study of the dynamic behavior of planetary gear is often a difficult mathematical problem, because there are no adequate models. In the idealization of the attached planetary transmission and selection of appropriate dynamic models usually first allocate primary properties, which are maintained in solving the task, and then in future work neglect less important characteristics. In the first papers on the dynamic behavior of gears in use, one notes a great simplification, for example, that all changes have linear character. However, subsequent experimental studies have shown that this approach is not realistic and that the dynamic behavior of gears in the paper is influenced by many factors that cannot be described by linear relationships [1]. These studies have shown that it is especially important to separate the effects that occur between the gear teeth in mesh, the dynamic effects that result in the load bearing of the engine, dynamic errors in transmission, and so forth. Therefore, a number of important research results of the dynamic behavior of gear transmission will be given, with special reference to the planetary gear. Although gear dynamics has been studied for decades, few studies present a formulation intended for the dynamic response of full gear systems that contain multiple gear meshes, flexible shafts, bearings, and so forth. There are few reliable computational tools for the dynamic analysis of general gear configurations. Some models exist, but they are limited by simplified modeling of gear tooth mesh interfaces, two-dimensional models that neglect out of plane behavior, and models specific to a single gear configuration. In a series of papers that follow, the fundamental task of analytical gear research is to build a dynamic model. For different analysis purposes, there are several modelling choices such as a simple dynamic factor model, compliance tooth model, torsional model, and geared rotor dynamic model, for example, [2, 3]. The simplest models are found in a number of textbooks used in education in this field. So, the teeth in meshing action can be modelled as an oscillatory system [4–6] and so forth. This model consists of concentrated masses (each of which represents one gear) connected with elastic and dump element. Applying the basic principles of analytical mechanics and taking the initial and boundary conditions into consideration, it is possible to obtain the system of equations representing physical meshing process between the two or more gears. In order to obtain better results, it is possible to model the elastic element as a nonlinear spring. Dynamic transmission error is taken as the parameter for modelling of noise in geared transmission. In the last two decades, there is plenty of work concetrated on modelling of the dynamic transmission error for spur and helical gears and representing the influence of the dynamic transmission errors on the level of noise in the geared transmission. Lately, there has been experiments conducted in order to isolate particular noise effects like noise coming from bearing, housing noise, meshing action noise, and backlash noise simply by measuring the dynamic transmission error. Some of the earliest models are represented in [7–10]. Using the free vibration analysis, one calculates critical parameters such as natural frequencies and vibration modes that are essential for almost all dynamic investigations. The free vibration properties are very useful for further analyses of planetary gear dynamics, including eigensensitivity to design parameters, natural frequency veering, planet mesh phasing, and parametric instabilities from mesh stiffness variations [11, 12]. Based on the results of the experiments conducted during the gear vibration research, it is to conclude that the excitation is restored every time when a new pair of teeth enters the mesh. Vibrations with natural frequencies dominate the vibration spectrums. The internal dynamic forces in teeth mesh, vibration, and noise are consequences of the change in teeth deformation, teeth impact, gear inertia due to measure, and teeth shape deviation [13]. Paper [14, 15] aims to provide insight into the three-dimensional vibration of gears by investigating the mechanisms of excitation and nonlinearity coming from the gear tooth mesh. For different analysis purposes, there are several modelling choices such as a simple dynamic factor model, compliance tooth model, torsional model, and geared rotor dynamic model [6]. Using the free vibration analysis one calculates critical parameters such as natural frequencies and vibration modes that are essential for almost all dynamic investigations. The free vibration properties are very useful for further analyses of planetary gear dynamics, including eigensensitivity to design parameters, natural frequency veering, planet mesh phasing, and parametric instabilities from mesh stiffness variations [16–22]. It is also necessary to systematically study natural frequency and vibration mode sensitivities and their veering characters to identify the parameters critical to gear vibration. In addition, practical gears may be mistuned by mesh stiffness variation, manufacturing imperfections, and assembling errors. For some symmetric structures, such as turbine blades, space antennae, and multispan beams, small disorders may dramatically change the vibration [18, 19]. The following articles [10, 23] are related to the nonlinear analysis of dynamic behavior of gears, using experimental methods and the application of finite element method (FEM). Paper [24, 25] examines the nonlinear dynamics of planetary gears by numerical and analytical methods over the meaningful mesh frequency ranges. Concise, closed-form approximations for the dynamic response are obtained by perturbation analysis. By using three-dimensional finite element analysis, it is possible to model the whole planetary gear and get adequate solutions. Such a solution to the classic gear transmissions is given in the paper [26]. General three-dimensional finite element models for dynamic response are rare because they require significant computational effort. This is accomplished by many time steps required for the transient response to diminish so that steady-state data can be obtained. This study attempts to fill this gap with a general finite element formulation that can be used for full gearbox dynamic A finite element formulation for the dynamic response of gear pairs is proposed in [24, 26, 27] and so forth. Following an established approach in lumped parameter gear dynamic models, the static solution is used as the excitation in a frequency domain solution of the finite element vibration model. The nonlinear finite element/contact mechanics formulation provides an accurate calculation of the static solution and average mesh stiffness that are used in the dynamic simulation. The frequency domain finite element calculation of dynamic response compares well with numerically integrated (time domain) finite element dynamic results and previously published experimental results. Simulation time with the proposed formulation is two orders of magnitude lower than numerically integrated dynamic results. This formulation admits system level dynamic gearbox response, which may include multiple gear meshes, flexible shafts, rolling element bearings, housing structures, and other deformable components. In the latest research, light fractional order coupling element is used to describe the dynamic behavior of gears and set of constitutive relationships, so the fractional calculus can be successfully applied to obtain results. The monograph [28–31] contains a basic mathematical description of fractional calculus and some solutions of the fractional order differential equations necessary for applications of the corresponding mathematical description of a model of gear transmission based on the teeth coupling by standard light fractional order element. In the series of references [32–40], the mixed discrete-continuum or continuum mechanical systems with fractional order creep properties are mathematically described and analytically solved. Paper [40] presents two models of the geared transmission with two or more shafts. First approach gives a model based on the rigid rotors coupled with rigid gear teeth, with mass distributions not balanced and in the form of the mass particles as the series of the mass disturbance of the gears in multistep gear transmission. Using very simple model it is possible and useful to investigate the nonlinear dynamics of the multistep gear transmission and nonlinear phenomena in free and forced dynamics. This model is suitable to explain source of vibrations and big noise, as well as no stability in gear transmission dynamics. Layering of the homoclinic orbits in phase plane is source of a sensitive dependence nonlinear type of regime of gear transmission system dynamics. Second approach gives a model based on the two-step gear transmission taking into account deformation and creeping and also viscoelastic teeth gears coupling. This investigation was focused to a new model of the fractional order dynamics of the gear transmission. For this model we obtain analytical expressions for the corresponding fractional order modes like one frequency eigen vibrational modes. Generalization of this model to the similar model of the multistep gear transmission is very easy. The model in this paper represents dynamic model of the planetary gears with four degrees of freedom. Our investigation was focused to a new model of the fractional order dynamics of the planetary gears. For this model we obtain analytical expressions for the corresponding fractional order modes like one frequency eigen vibrational modes. 2. Mathematical Model of the Planetary Gear In the practice, planetary gears are very often exposed to action of forces that change with time (dynamic load). There are also internal dynamic forces present. The internal dynamic forces in gear teeth meshing are the consequence of elastic deformation of the teeth and defects in manufacture such as pitch differences of meshed gears and deviation of shape of tooth profile. Deformation of teeth results in the so-called collision of teeth which is intensified at greater difference in the pitch of meshed gears. Occurrence of internal dynamic forces results in vibration of gears so that the meshed gears behave as an oscillatory system. This model consists of reduced masses of the gear with elastic and damping connections (see [6, 14, 15, 26, 27]). By applying the basic principles of mechanics and taking into consideration initial and boundary conditions, the system of equation is established which describes physicality of the gear meshing process. On the other hand, extremely cyclic loads (dynamic forces) can result in breakage of teeth, thus causing failure of the mechanism or system. Primary dependences between geometrical and physical quantities in the mechanics of continuum (and with planetary gear as well) include mainly establishing the constitutive relation between the stress state and deformation state of the tooth’s material in the two teeth in contact for each particular case. Thus, solving this task, it is necessary to reduce numerous kinetic parameters to minimal numbers and obtain a simple abstract model describing main properties for investigation of corresponding dynamical influences. Based on previous, at starting this part, we take into account that contact between two teeth is possible to be constructed by standard light element with constitutive stress-strain state relations which can be expressed by fractional order derivatives. The papers [29, 39] analyzed in details the standard light coupling elements of negligible mass in the form of axially stressed rod without bending, which has the ability to resist deformation under static and dynamic conditions. Figure 1 shows the model planetary gear when the coupling between the teeth (sun-planet and ring-planet meshes) was obtained from a standard light fractional element. The planetary gear model consists of three members (the sun, 3 planets, and ring). The motion of the sun gear and the ring gear is given by translations that is expressed as , , and rotations that is expressed as , . The kinetic energy of the planetary stage can be written as The kinetic energy for the each element is represented by where are masses of the sun gear and ring gear, are mass moments of the inertia, are velocities of mass centers, and are angular velocities of the sun gear and ring gear. So, the total kinetic energy of the planetary stage is given by Sun gear is supported with bearing which is modeled as linear spring , and planet gear is supported with bearing which is modeled as linear spring , but the meshes of sun gear-planet gear and ring gear-planet gear are described by standard light fractional element with restitution forces and . Thus, the potential energies of the bearings are The restitution forces are in the function of element elongation , and they are in the form The fractional order differential operator of the th derivative with respect to time is given in following form: [32, 33, 39] where are rigidity momentary and prolonged coefficients and is rational number (). The equations of motion for the planetary gear are derived from Lagrange’s equation given by well-known form where are generalized coordinates, are generalized forces, and Φ is Rayleigh dissipation function (in our case Rayleigh dissipation function is zero because damping effects are taken into consideration). Generalized coordinates for the given system are , , , and . Therefore, the dynamic behavior will be governed by four independent equations of motion. In matrix form they are where the matrix is diagonal inertia matrix and the matrix is stiffness matrix. Light standard creep constraint element between sun gear and planet gear is strained for , and light standard creep constraint element between planet gear and ring gear is strained for . So, due to the constitutive relation of the standard light fractional order coupling elements, the generalized forces as a function of elongation of elements are Lagrange equations of motion are obtained following substitution (9) into (7), and they can be expressed as The diagonal inertia matrix is The stiffness matrix is 3. Modal Analysis of the Planetary Gear The system is tuned, that is, all sun-planet and ring-planet mesh stiffnesses, and their centers of stiffnesses, are identical among all planets; the planet bearing stiffnesses, the axial locations of the planet bearings, and the planet inertias are the same for all planets. 3.1. Eigenvalue Problem The proposed solutions are in the form of The eigenvalue problem is with natural frequencies . It is known that to have nontrivial solutions the matrix on the left side must be singular. It follows that the determinant of the matrix must be equal to 0, so or, in the developed form, Corresponding frequency equation in the polynomial form is where, for instance, , , and so forth. Solving this polynomial four roots , and corresponding eigen circular frequencies , , can be obtained. The solution of basic linear differential equation is and in matrix presentation where is modal matrix defined by the corresponding cofactors, ,and are main coordinates of the linear system. With this expression, the system of the fractional differential equation (10) can be transformed in the form of [39] This resulted the system of the fractional differential equation. Analytical solution of these fractional order differential equations is obtained using the approach presented in [37, 39]. Therefore, each fractional differential equation can be written in the form of where and are initial values of main coordinates defined by initial conditions and is rational number . The solution of the basis system [39] can be expressed in the following form: 3.2. Numerical Visualisation Eigensolutions of a sample system [20, Table 1], with four degrees of freedom are evaluated numerically to expose the modal properties. Eigensolutions of a sample system (Table 1) with three equally spaced planets are evaluated numerically to expose the modal properties. Four natural frequencies and their corresponding mode types are given in Figures 2, 3, 4, and 5. In Figure 2(a), the initial configuration of planetary gear is shown, and Figure 2(b) shows the planetary gear first mode. In order to better consideration of modes of individual elements of the gear in the following Figures 2(c), 2(d), 3, 4 and 5, separate elements of planetary gear are shown. The vibration modes exhibit distinctive characteristics. The central member rotates and translates axially and planets do same. Regardless of the system parameters the modal deflection of planet gears are zero for Hz. Based on (18), the first normal mode corresponds to both masses moving in the opposite direction while angular displacements are in the same direction. The second normal mode corresponds to the masses moving in the opposite directions and angular displacements are in the opposite directions also. The masses, for and , move in the same direction, but angular displacements are in the opposite directions or equal zero (fourth mode). The general solution is a superposition of the normal modes where the initial conditions of the problem must be used. By using different numerical values of the kinetic and geometrical parameters of the planetary gear model, the series of the graphical presentation of the four sets of the two time components and , of the solutions, by using expressions (21) are obtained. In the series Figures 6–10 are presented characteristic modes for different values of the coefficient of the fractional order of the used standard light fractional order element for describing teeth coupling between sun-planet and planet-ring. Time is in sec, and all values on the vertical axis are in m. First eigen fractional order mode with corresponding first eigen fractional order time components and for different system kinetic and geometric parameter values is presented in Figure 6. In Figure 7, we can see second eigen fractional mode with corresponding second fractional order time components and for different system kinetic and geometric parameter values. In Figure 8, third eigen fractional mode with corresponding third fractional order time components and for different system kinetic and geometric parameter values is presented. Fourth eigen fractional mode with corresponding third fractional order time components and for different system kinetic and geometric parameter values, in Figure 9, is presented. In Figure 10, first, second, third, and fourth eigen fractional modes , , , and are presented by surfaces. Also, the family trajectory in the plane () is shown. Based on the obtained results in this paper, we can conclude that eigen fractional order modes are like one frequency vibration modes similar to single frequency eigen mode of the corresponding linear system [29, 38, 39, 41]. The fractional order dynamic system is like dumping system. With the increase of the parameter , the period of oscillation increases but the amplitude becomes smaller. So we can say that parameter has the same influence as dumping coefficient in the corresponding system. 4. Conclusions This paper presents a new dynamic model of a planetary gear. The planetary gear system is represented by a model that allows for four degrees of freedom per gear-shaft body supported by bearings at arbitrary axial positions and with standard creep constraint element. The standard light fractional order coupling element is between sun-planet and planet-ring. A novel approach for the planetary gear dynamic analysis is developed. So, in this paper it is shown how the new model of the fractional order dynamic planetary gear can be applied to study dynamic behavior. This model simulates the real behavior of the planetary gear. With this simple model, it is possible to research the nonlinear dynamics of the planetary gear and nonlinear phenomena in free and forced dynamics. The model is suitable to explain source of vibrations and big noise, as well as no stability in planetary gear. A new method, using MATCAD software, is used in this paper for the obtaining of the eigen values and for analysis results. In the literature, similar procedures are presented in introduction, and they were used as reference material for the composition and verification of models and results. On the basis of the numerical results, shown in this paper, it has been concluded that the methodology developed to study the dynamic behaviour of planetary gear is very efficient. It gives a lot of possibilities and can be easily upgraded for analysis of other effects. The dynamic behavior and analysis of results suggest that the gear transmission is very complex and that it is almost impossible to include all the effects by such and similar research. This paper considers planetary gear with 3 planet gears, which makes the problem more complex. Further research should be directed at studying the effects of mutual dynamic impact of teeth in mesh, as well as at including more effects [42]. So, it is possible to study eigen frequency of planetary gear with moving excentric masses on the body of one of the gears or with holes on the body, by using finite element method. In accordance with the present trend of application of new materials, authors will, in future studies, simulate the dynamic behavior of a gear made of composite materials and study the life of the gears at the load. Also, future research should focus on the study of planetary gears life using low cycle fatigue properties and so forth. Results in this paper can be taken as relevant for further research, because this model simulates the real behavior of the planetary gear, more than earlier models. Parts of this research were supported by the Ministry of Sciences of Republic Serbia through Mathematical Institute SANU Belgrade Grant no. ON 174001 “Dynamics of hybrid systems with complex structures. Mechanics of materials” and also through the Faculty of Mechanical Engineering University of Niš and the State University of Novi Pazar. 1. F. Cunliffe, D. J. Smith, and D. B. Welbourn, “Dynamic tooth loads in epicyclic gears,” Journal of Engineering For Industry, vol. 95, no. 2, pp. 578–584, 1974. 2. G. W. Blankenship and R. Singh, “Dynamic force transmissibility in helical gear pairs,” Mechanism and Machine Theory, vol. 30, no. 3, pp. 323–339, 1995. View at Scopus 3. G. W. Blankenship and R. Singh, “A new gear mesh interface dynamic model to predict multi-dimensional force coupling and excitation,” Mechanism and Machine Theory, vol. 30, no. 1, pp. 43–57, 1995. View at Scopus 4. G. Liu and R. G. Parker, “Impact of tooth friction and its bending effect on gear dynamics,” Journal of Sound and Vibration, vol. 320, no. 4-5, pp. 1039–1063, 2009. View at Publisher · View at Google Scholar · View at Scopus 5. G. Niemann and H. Winter, Maschinen-Elemente II and III, Springer, Berlin, Germany, 1989. 6. V. Nikolic, Machine Elements, Theory, Calculation, Examples, Faculty of Mechanical Engineering, Prizma, Kragujevac, Serbia, 2004. 7. V. Nikolic, Mechanical Analysis of Gears, Faculty of Mechanical Engineering, Cimpes, Kragujevac, Serbia, 1999. 8. G. Liu and R. G. Parker, “Nonlinear dynamics of idler gear systems,” Nonlinear Dynamics, vol. 53, no. 4, pp. 345–367, 2008. View at Publisher · View at Google Scholar · View at Scopus 9. A. Kahraman and G. W. Blankenship, “Experiments on nonlinear dynamic behavior of an oscillator with clearance and periodically time-varying parameters,” Journal of Applied Mechanics, vol. 64, no. 1, pp. 217–226, 1997. View at Scopus 10. A. Kahraman and R. Singh, “Non-linear dynamics of a spur gear pair,” Journal of Sound and Vibration, vol. 142, no. 1, pp. 49–75, 1990. View at Scopus 11. R. G. Parker, S. M. Vijayakar, and T. Imajo, “Non-linear dynamic response of a spur gear pair: modelling and experimental comparisons,” Journal of Sound and Vibration, vol. 237, no. 3, pp. 435–455, 2000. View at Publisher · View at Google Scholar · View at Scopus 12. J. Lin and R. G. Parker, “Analytical characterization of the unique properties of planetary gear free vibration,” Journal of Vibration and Acoustics, vol. 121, no. 3, pp. 316–321, 1999. View at 13. L. Vedmar and A. Andersson, “A method to determine dynamic loads on spur gear teeth and on bearings,” Journal of Sound and Vibration, vol. 267, no. 5, pp. 1065–1084, 2003. View at Publisher · View at Google Scholar · View at Scopus 14. M. F. Agemi and M. Ognjanović, “Gear vibration in supercritical mesh-frequency range,” FME Transactions, vol. 32, no. 2, pp. 87–94, 2004. 15. D. Dimitrijević and V. Nikolić, “Eigenfrequencies analysis for the deep drilling machine gear set,” The Scientific Journal FACTA UNIVERSATES, vol. 1, no. 5, pp. 629–636, 1998. 16. D. Dimitrijevic and V. Nikolic, “Eigenfrequence analysis of the spur gear pair with moving excentric masses on the body of one of the gears,” FME Transactions, vol. 35, no. 3, pp. 157–163, 2007. 17. H. Vinayak and R. Singh, “Multi-body dynamics and modal analysis of compliant gear bodies,” Journal of Sound and Vibration, vol. 210, no. 2, pp. 171–212, 1998. View at Scopus 18. J. Lin and R. G. Parker, “Analytical characterization of the unique properties of planetary gear free vibration,” Journal of Vibration and Acoustics, vol. 121, no. 3, pp. 316–321, 1999. View at 19. J. Lin and R. G. Parker, “Natural frequency veering in planetary gears,” Mechanics of Structures and Machines, vol. 29, no. 4, pp. 411–429, 2001. View at Publisher · View at Google Scholar · View at Scopus 20. J. Lin and R. G. Parker, “Planetary gear parametric instability caused by mesh stiffness variation,” Journal of Sound and Vibration, vol. 249, no. 1, pp. 129–145, 2002. View at Publisher · View at Google Scholar · View at Scopus 21. T. Eritenel and R. G. Parker, “Modal properties of three-dimensional helical planetary gears,” Journal of Sound and Vibration, vol. 325, no. 1-2, pp. 397–420, 2009. View at Publisher · View at Google Scholar · View at Scopus 22. Y. Guo and R. G. Parker, “Sensitivity of general compound planetary gear natural frequencies and vibration modes to model parameters,” Journal of Vibration and Acoustics, vol. 132, no. 1, 13 pages, 2010. View at Publisher · View at Google Scholar · View at Scopus 23. T. Hidaka, Y. Terauchi, and K. Nagamura, “Dynamic behavior of planetary gear,” Bulletin of the JSME, vol. 22, no. 169, pp. 1017–1025, 1979. View at Scopus 24. R. G. Parker, V. Agashe, and S. M. Vijayakar, “Dynamic response of a planetary gear system using a finite element/contact mechanics model,” Journal of Mechanical Design, vol. 122, no. 3, pp. 304–310, 2000. View at Scopus 25. C. J. Bahk and R. G. Parker, “Analytical solution for the nonlinear dynamics of planetary gears,” Journal of Computational and Nonlinear Dynamics, vol. 6, no. 2, Article ID 021007, 15 pages, 2011. View at Publisher · View at Google Scholar · View at Scopus 26. T. Sun and H. Hu, “Nonlinear dynamics of a planetary gear system with multiple clearances,” Mechanism and Machine Theory, vol. 38, no. 12, pp. 1371–1390, 2003. View at Publisher · View at Google Scholar · View at Scopus 27. V. Nikolić, C. Dolićanin, and D. Dimitrijević, “Numerical modelling of gear set dynamic behaviour,” Scientific Technical Review, no. 3-4, pp. 48–54, 2010. 28. V. Nikolić, Ć. Dolićanin, and D. Dimitrijević, “Dynamic Model for the Stress and Strain State Analysis of a Spur Gear Transmission, StrojniŔki vestnik,” Journal of Mechanical Engineering, vol. 58, no. 1, pp. 56–67, 2012. 29. O. A. I. Goroško and K. Hedrih, Analitička Dinamika (Mehanika) Diskretnih Naslednih Sistema, (Analytical Dynamics (Mechanics) of Discrete Hereditary Systems), Monograph, University of Niš, 2001. 30. O. A. Goroško and K. Hedrih, “Advances in development of the analytical dynamics of the hereditary discrete systems,” Journal of Physics, vol. 96, Article ID 012143, 2008. View at Publisher · View at Google Scholar 31. B. S. Bačlić and T. M. Atanacković, “Stability and creep of a fractional derivative order viscoelastic rod,” Bulletin, no. 25, pp. 115–131, 2000. View at MathSciNet 32. M. T. Atanacković, C. D. Dolićanin, and S. Pilipović, “Forced oscillations of a single degree of freedom system with fractional dissipation,” Scientific Publications of the State University of Novi Pazar, vol. 3, no. 1, pp. 1–11, 2011. 33. K. Hedrih and R. Knežević, “Structural stability of the planetary reductor nonlinear dynamics phase portrait,” Facta Universitatis, vol. 1, no. 7, pp. 911–923, 2000. 34. K. Hedrih, K. Knežević, and R. Cvetković, “Dynamics of planetary reductoe with turbulent damping,” International Journal of Nonlinear Sciences and Numerical Simulations, vol. 2, no. 3, pp. 265–262, 2001. 35. R. G. Parker, “On the eigenvalues and critical speed stability of gyroscopic continua,” Journal of Applied Mechanics, vol. 65, no. 1, pp. 134–140, 1998. View at Scopus 36. K. Hedrih and L. J. Veljović, “Nonlinear dynamic of heavy gyro-rotor with two skew rotating axes,” Journal of Physics, vol. 96, Article ID 012221, 2008. View at Publisher · View at Google Scholar 37. K. Hedrih and L. J. Veljović, “Nonlinear dynamic of heavy gyro-rotor with two rotating axes, facta universitates series mechanics,” Automatic Control and Robotics, vol. 14, no. 16, pp. 55–68, 38. K. R. Hedrih and L. Veljović, “Vector rotators of rigid body dynamics with coupled rotations around axes without intersection,” Mathematical Problems in Engineering, vol. 2011, Article ID 351269, 26 pages, 2011. View at Publisher · View at Google Scholar · View at MathSciNet 39. K. R. Hedrih, “Dynamics of multi-pendulum systems with fractional order creep elements,” Journal of Theoretical and Applied Mechanics, vol. 46, no. 3, pp. 483–509, 2008. View at Scopus 40. K. S. Hedrih and V. Nikolić-Stanojević, “A model of gear transmission fractional order system dynamics,” Mathematical Problems in Engineering, vol. 2010, Article ID 972873, 23 pages, 2010. View at Publisher · View at Google Scholar 41. S. Vijayakar, “Combined surface integral and finite element solution for a three-dimensional contact problem,” International Journal for Numerical Methods in Engineering, vol. 31, no. 3, pp. 525–545, 1991. View at Scopus 42. S. Y. T. Lang, “Graph-theoretic modelling of epicycle gear systems,” Mechanism and Machine Theory, vol. 40, no. 5, pp. 511–529, 2005. View at Publisher · View at Google Scholar · View at
{"url":"http://www.hindawi.com/journals/mpe/2013/932150/","timestamp":"2014-04-16T16:59:35Z","content_type":null,"content_length":"311617","record_id":"<urn:uuid:965b1631-f2b3-4a6c-a06b-7a1c5b84acde>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 85 To find the first apotome. Set out a rational straight line, and let BG be commensurable in length with A. Then BG is also rational. Set out two square numbers DE and EF, and let their difference FD not be square. Then ED does not have to DF the ratio which a square number has to a square number. Let it be contrived that ED is to DF as the square on BG is to the square on GC. Then the square on BG is commensurable with the square on GC. But the square on BG is rational, therefore the square on GC is also rational. Therefore GC is also rational. Since ED does not have to DF the ratio which a square number has to a square number, therefore neither has the square on BG to the square on GC the ratio which a square number has to a square number. Therefore BG is incommensurable in length with GC. And both are rational, therefore BG and GC are rational straight lines commensurable in square only. Therefore BC is an apotome. I say next that it is also a first apotome. Let the square on H be that by which the square on BG is greater than the square on GC. Now since ED is to FD as the square on BG is to the square on GC, therefore, in conversion, as DE is to EF as the square on GB is to the square on H. But DE has to EF the ratio which a square number has to a square number, for each is square, therefore the square on GB also has to the square on H the ratio which a square number has to a square number. Therefore BG is commensurable in length with H. And the square on BG is greater than the square on GC by the square on H, therefore the square on BG is greater than the square on GC by the square on a straight line commensurable in length with BG. And the whole BG is commensurable in length with the rational straight line A set out. Therefore BC is a first apotome. Therefore the first apotome BC has been found.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX85.html","timestamp":"2014-04-17T15:44:18Z","content_type":null,"content_length":"5143","record_id":"<urn:uuid:ffed867a-8418-4f80-9d2f-f10f05e3b39b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Re: st: RE: Truncated sample or Heckman selection Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] RE: st: Re: st: RE: Truncated sample or Heckman selection From Ebru Ozturk <ebru_0512@hotmail.com> To <statalist@hsphsun2.harvard.edu> Subject RE: st: Re: st: RE: Truncated sample or Heckman selection Date Fri, 5 Oct 2012 00:17:42 +0300 But, the dependent variable includes many zero values as many firms do not produce innovation and of course no sales from this tyep of innovation. Also, published papers have used Tobit regression and Heckman two step correction with the same data, are they all wrong then? > Date: Thu, 4 Oct 2012 20:54:49 +0100 > Subject: st: Re: st: RE: Truncated sample or Heckman selection > From: njcoxstata@gmail.com > To: statalist@hsphsun2.harvard.edu > I agree on #1. > On #2, how is Ebru going to fit any kind of model with no data on predictors? > Nick > On Thu, Oct 4, 2012 at 7:50 PM, Millimet, Daniel <millimet@mail.smu.edu> wrote: > > 1. A fractional logit model is more appropriate when modeling percentages. > > 2. The data set up is in between the usual Heckman vs. truncated model setup. With the typical Heckman approach, X's are observed for all observations and there is no information on the missing outcome. With a truncated setup, we observe no X's, but have information at least on the range of values for the outcome. Here, you do know something about the value of Y for the censored observations as in the truncated setup, but you only observe a subset of the X's. To me, it sounds you could perhaps "invent" a new model that is a zero-inflated fractional logit model, since you have 1 set of regressors that impact perhaps the probability of no innovation, and then a second set of regressors that impacts the amount of innovation conditional on this being positive. > > > > Anyway, perhaps not the best answer. > > > > > > -----Original Message----- > > From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Ebru Ozturk > > > > I have a question that I cannot decide whether I should use truncated regression or Heckman sample selection. > > For instance, in the dataset, firms that produce any type of innovation (process or product) give information about other 'x' variables. In other words, firms that do not produce any innovation do not answer other questions as these questions are directly related to firms' innovation activities. So, the 'x' variables that I am interested in have no values only for those firms that do not produce innovation. But, I know the dependent (y) variable in both case, either firms produce innovation or not produce. > > > > > > I am planning to run tobit regression as the dependent variable is percentage between 0 - 100 and Heckman sample selection model to check selection bias. But, I can not decide whether it is truncated sample or Heckman sample selection. > > > > So, what do you think? > > > > Thank you very much, Ebru. > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/faqs/resources/statalist-faq/ > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-10/msg00196.html","timestamp":"2014-04-16T19:20:14Z","content_type":null,"content_length":"11797","record_id":"<urn:uuid:1df68d70-bb5b-4a49-8712-1054474c85e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Josephine, TX Math Tutor Find a Josephine, TX Math Tutor ...I’ve tutored over 100 hours of SAT and ACT prep, including over 30 hours of SAT and ACT Reading. The SAT Critical Reading section requires strong vocabulary skills, particularly for the sentence completion portion. I include vocabulary lessons and sentence completion practice in my tutoring sessions to help students gain confidence in answering vocabulary-related questions. 15 Subjects: including algebra 1, algebra 2, grammar, geometry ...My expertise is in math of all levels. I believe that prior knowledge frequently gets lost when learning new material. By supporting new learning with acquired skills, students do better on 10 Subjects: including SAT math, algebra 1, geometry, prealgebra ...To become certified I had to do two elementary school placements and teach every subject. I also taught as a substitute teacher with the City of New York for a year at the elementary school level. I am a certified teacher in the UK. 13 Subjects: including algebra 2, reading, trigonometry, writing ...I also tutor high school math. I help students with their understanding of new and complicated concepts. My aim, as a tutor, is to help students to solve the problems with their best abilities without overwhelming them. 19 Subjects: including geometry, statistics, probability, ACT Math ...I live in Rockwall and would love to be able to help your child understand math better.I teach MST & Honors students and we use technology everyday in class. My students are actively engaged and they really enjoy being able to work with iPads and Macbooks. I will have each of these available during my tutoring sessions if your child needs to use them. 4 Subjects: including prealgebra, grammar, elementary math, vocabulary Related Josephine, TX Tutors Josephine, TX Accounting Tutors Josephine, TX ACT Tutors Josephine, TX Algebra Tutors Josephine, TX Algebra 2 Tutors Josephine, TX Calculus Tutors Josephine, TX Geometry Tutors Josephine, TX Math Tutors Josephine, TX Prealgebra Tutors Josephine, TX Precalculus Tutors Josephine, TX SAT Tutors Josephine, TX SAT Math Tutors Josephine, TX Science Tutors Josephine, TX Statistics Tutors Josephine, TX Trigonometry Tutors Nearby Cities With Math Tutor Anna, TX Math Tutors Campbell, TX Math Tutors Celeste Math Tutors Commerce, TX Math Tutors Cumby Math Tutors Elmo, TX Math Tutors Merit Math Tutors Nevada, TX Math Tutors Princeton, TX Math Tutors St Paul, TX Math Tutors Van Alstyne Math Tutors West Tawakoni, TX Math Tutors Westminster, TX Math Tutors Weston, TX Math Tutors Wills Point Math Tutors
{"url":"http://www.purplemath.com/Josephine_TX_Math_tutors.php","timestamp":"2014-04-17T19:18:03Z","content_type":null,"content_length":"23714","record_id":"<urn:uuid:c1fce43c-ea7b-4de7-86f6-5f48984b6633>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Experimental and theoretical studies of nanofluid thermal conductivity enhancement: a review Nanofluids, i.e., well-dispersed (metallic) nanoparticles at low- volume fractions in liquids, may enhance the mixture's thermal conductivity, k[nf], over the base-fluid values. Thus, they are potentially useful for advanced cooling of micro-systems. Focusing mainly on dilute suspensions of well-dispersed spherical nanoparticles in water or ethylene glycol, recent experimental observations, associated measurement techniques, and new theories as well as useful correlations have been reviewed. It is evident that key questions still linger concerning the best nanoparticle-and-liquid pairing and conditioning, reliable measurements of achievable k[nf ]values, and easy-to-use, physically sound computer models which fully describe the particle dynamics and heat transfer of nanofluids. At present, experimental data and measurement methods are lacking consistency. In fact, debates on whether the anomalous enhancement is real or not endure, as well as discussions on what are repeatable correlations between k[nf ]and temperature, nanoparticle size/shape, and aggregation state. Clearly, benchmark experiments are needed, using the same nanofluids subject to different measurement methods. Such outcomes would validate new, minimally intrusive techniques and verify the reproducibility of experimental results. Dynamic k[nf ]models, assuming non-interacting metallic nano-spheres, postulate an enhancement above the classical Maxwell theory and thereby provide potentially additional physical insight. Clearly, it will be necessary to consider not only one possible mechanism but combine several mechanisms and compare predictive results to new benchmark experimental data sets.
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3211287/?lang=en-ca","timestamp":"2014-04-20T12:01:10Z","content_type":null,"content_length":"201543","record_id":"<urn:uuid:15bebc28-9def-48d8-8432-eb962737ac9d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
A note on the choice of Malmquist productivity index and Malmquist total factor productivity index Halkos, George and Tzeremes, Nickolaos (2006): A note on the choice of Malmquist productivity index and Malmquist total factor productivity index. Download (282Kb) | Preview This paper by analyzing the two popular methodologies of productivity measurement provides an example that illustrates the differences when adopting the two methodologies. Furthermore, under the restriction of constant returns to scale raises some methodological issues regarding the theory of productivity measurement using the Malmquist Productivity Index and Malmquist Total Factor Productivity Index. Furthermore by using an illustrative example under the restriction of constant returns to scale the study indicates that the two indexes produce similar results. However, the differences observed are determining the choice of the methodology adopted when measuring productivity. Item Type: MPRA Paper Original Title: A note on the choice of Malmquist productivity index and Malmquist total factor productivity index Language: English Keywords: Productivity measurement; Malmquist Productivity Index; Malmquist Total Factor Productivity Index C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C69 - Other Subjects: D - Microeconomics > D2 - Production and Organizations > D24 - Production; Cost; Capital; Capital, Total Factor, and Multifactor Productivity; Capacity I - Health, Education, and Welfare > I1 - Health > I10 - General Item ID: 32083 Depositing Nickolaos Tzeremes Date Deposited: 07. Jul 2011 15:48 Last Modified: 12. Feb 2013 20:36 [1] M.M. Yu, The capacity productivity change and the variable input productivity change: a new decomposition of the Malmquist productivity index, Appl. Math. Comput. 185 (1) (2007) [2] C.A.K. Lovell, The decomposition of Malmquist productivity index, J. Product. Anal. 20 (3) (2003) 437-458. [3] D.W. Caves, L.R. Christensen, W.E. Diewert, The economic theory of index numbers and the measurement of input, output and productivity. Econometr. 50 (1982) 1393–1414. [4] H. Bjurek, The Malmquist Total Factor Productivity Index, Scand. J. of Econ., 98 (2) (1996) 303-313. [5] R.W. Shephard, Theory of Cost and Production Function. Princenton University Press, Princeton, NJ, (1970). [6] R. Färe, S. Grosskoff, B. Lindgren, P. Roos, Productiviity changes in Swedish Pharmacies 1980-89: A Nonparametric Malmquist Approach, J. of Product. Anal., 3 (3) (1992) 85-101. [7] R. Färe, S. Grosskoff, P. Roos, On two definitions of productivity, Econ. Lett., 53 (1996) 269-274. [8] ICAP Greece in figures, Greek Financial Directory. ICAP, Greece, (1996). Available at: www.finacial-directory.gr. [9] D.K. Lambert, Scale and Malmquist productivity index, Appl. Econ. Lett., 6 (1999) 593-596. URI: http://mpra.ub.uni-muenchen.de/id/eprint/32083
{"url":"http://mpra.ub.uni-muenchen.de/32083/","timestamp":"2014-04-21T09:40:57Z","content_type":null,"content_length":"20560","record_id":"<urn:uuid:f7665e02-3750-4c79-8f5f-b3168e2e28a5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Tag Archives: math Question: What is one number that’s divisible by everything? and Question: What number is divisible by all 9 numbers 1 to 9? Read more We have heard of inflation, but what is its effect? A dollar doesn’t buy what it used to. But what does that mean? I bought a house in 1988 for $97,000. If I had to buy it today with a weaker dollar, how much would it cost? Read more You can add the integers from 1 to 10 by hand, but what if you need to add from 1 to 1000? The shortcut for adding Read more For example one searcher wonders if all three digit numbers are divisible by eleven. The answer is Read more If you add up all the even position digits and also add up all the odd position digits Read more This trick isn’t as effective as the test for divisibility by 3. It takes more steps: Read more So is 151515151515151515 divisible by 3? I can quickly tell you yes. The trick is to Read more Question: Can you do better than guessing to win a jelly bean counting contest? Answer: Let me count the ways Read more
{"url":"http://howtutorial.com/tag/math/","timestamp":"2014-04-18T04:16:49Z","content_type":null,"content_length":"47711","record_id":"<urn:uuid:bafec421-d7dc-4c6e-a0e9-b733aba49c3f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2006 [00224] [Date Index] [Thread Index] [Author Index] Re: Re: piecewise integration • To: mathgroup at smc.vnet.net • Subject: [mg67070] Re: [mg66999] Re: piecewise integration • From: Andrzej Kozlowski <akoz at mimuw.edu.pl> • Date: Thu, 8 Jun 2006 04:54:34 -0400 (EDT) • References: <20060605102611.774$dR_-_@newsreader.com> <200606061028.GAA20748@smc.vnet.net> <acbec1a40606071502r7aea9e4ahcea554c39976f739@mail.gmail.com> <B4EA9689-6BEF-4F01-AD21-FC9C6EAFC212@mimuw.edu.pl> <acbec1a40606072149na61bec9ofb16ce601628afbb@mail.gmail.com> <acbec1a40606072200i1339bcf2jbd43808da9204a9c@mail.gmail.com> • Sender: owner-wri-mathgroup at wolfram.com On 8 Jun 2006, at 14:00, Chris Chiasson wrote: > And on a related note, does anyone know why Mathematica handles > DiracDelta'[x] in this way: > In[1]:= > D[UnitStep[x],{x,2}] > Integrate[%,{x,-1,1}] > Out[1]= > Derivative[1][DiracDelta][x] > Out[2]= > 0 I can see nothing wrong with the above. There is the following rule for the derivative of the DiracDelta: Integrate[Derivative[n][DiracDelta][x]*f[x], {x, -Infinity, Infinity}] == (-1)^n f[0] for any suitable function f (a function of "slow growth"). Mathematica knows this for every positive n, e.g. Mathematica knows this rule for any positive integer n: {x, -Infinity, Infinity}] {x, -Infinity, Infinity}] So taking f to be the function 1 we get, correctly {x, -Infinity, Infinity}] which is as it should be. But now, what is puzzling me is this: which is obviously wrong! This is with Mathematica 5.1. I wonder if this is still so in 5.2. Andrzej Kozlowski • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Jun/msg00224.html","timestamp":"2014-04-19T07:00:05Z","content_type":null,"content_length":"36424","record_id":"<urn:uuid:d41fe757-9500-47b7-b508-c5891cf024d2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Brighton, MA SAT Math Tutor Find a Brighton, MA SAT Math Tutor ...I teach students both strategies for how to tackle each section of the test and the content they'll need to know in order to excel. I also work with students on timing strategies so they neither run out of time nor rush during a section. I do assign weekly homework, as it is critical for studen... 26 Subjects: including SAT math, English, linear algebra, algebra 1 ...I also have tutored the SAT and the LSAT many times. I scored 99th percentile on the SAT (perfect score on current scale) and 90th on the LSAT. In all I have over 20 years of experience with countless students in many different subjects. 29 Subjects: including SAT math, reading, calculus, geometry ...After college I taught a few ballroom classes here and there at different studios and worked with two other collegiate ballroom dance teams. I have taught social dancers, wedding couples, and competitive dancesport athletes. I am great at choreographing wedding dances to any song of your choice. 13 Subjects: including SAT math, English, writing, geometry ...I use trigonometry almost on a daily basis thanks to my graduate-level mathematics courses. In addition, I am licensed to teach high school math in the state of Massachusetts. I am licensed to teach math (8-12) and the topics on the SATs are covered in the licensure. 9 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...Thank you for helping open her eyes and widening her options. ... I cannot express what your tutoring has done for her confidence in this pursuit of bettering her SAT scores." "Thank you so much for your efforts with B.! We think you have made a real difference, and taught him how to effectivel... 38 Subjects: including SAT math, English, reading, writing Related Brighton, MA Tutors Brighton, MA Accounting Tutors Brighton, MA ACT Tutors Brighton, MA Algebra Tutors Brighton, MA Algebra 2 Tutors Brighton, MA Calculus Tutors Brighton, MA Geometry Tutors Brighton, MA Math Tutors Brighton, MA Prealgebra Tutors Brighton, MA Precalculus Tutors Brighton, MA SAT Tutors Brighton, MA SAT Math Tutors Brighton, MA Science Tutors Brighton, MA Statistics Tutors Brighton, MA Trigonometry Tutors Nearby Cities With SAT math Tutor Allston SAT math Tutors Arlington, MA SAT math Tutors Belmont, MA SAT math Tutors Brookline, MA SAT math Tutors Cambridge, MA SAT math Tutors Everett, MA SAT math Tutors Jamaica Plain SAT math Tutors Newton Center SAT math Tutors Newton Centre, MA SAT math Tutors Newton, MA SAT math Tutors Roslindale SAT math Tutors Roxbury, MA SAT math Tutors Somerville, MA SAT math Tutors Waltham, MA SAT math Tutors Watertown, MA SAT math Tutors
{"url":"http://www.purplemath.com/brighton_ma_sat_math_tutors.php","timestamp":"2014-04-21T07:26:56Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:efa92e74-579e-4007-99d4-f0339dfe8668>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Injective Representables Posted on 7 April 2011 by Nicolas Wu Category Theory In this article I’ll show how the injective and surjective properties of a function have a nice relationship with the injectivity of the corresponding representable functors, and how the right cancellative definition of surjection can be derived from the standard one. This is a fairly trivial discussion about a simple observation, so don’t expect anything spectacular. The definition of injectivity is usually given in the following terms, where a function is injective when it is left cancellative: \(h\) is injective iff \(\forall f, g \cdot h . f = h . g \Rightarrow f = g\) Surjectivity can be described in similar terms, where a function is surjective when it is right cancellative: \(h\) is surjective iff \(\forall f, g \cdot f . h = g . h \Rightarrow f = g\) While these two definitions show the similarity between the injective and surjective properties, this definition of surjectivity isn’t the standard one. The standard definition goes as follows: \(h\) is surjective iff \(\forall y \cdot \exists x \cdot h~ x = y\) I don’t like this definition, since I don’t find it easy to work with in proofs, and it doesn’t show the relationship between surjections and injections well. The representables, or Hom-functors, are functions from arrows to arrows in a category. These functions come in two flavours, the covariant representable, and the contravariant representable. Given objects \(S\) and \(T\) in a category \(ℂ\), we write \(ℂ(S, T)\) denote the set of all arrows from \(S\) to \(T\). Covariant Representable Given a function \(h : X \rightarrow Y\), the covariant representable, \(ℂ(S,-)\) is defined as: ℂ(S,—) :: (X -> Y) -> ℂ(S, X) -> ℂ(S, Y) ℂ(S,—) h f = h . f As shorthand for the above, we usually slot the argument \(h\) into the dash: ℂ(S, h) = ℂ(S,—) h Contravariant Representable Given a function \(h : X \rightarrow Y\), the contravariant representable, \(ℂ(-,T)\) is defined as: ℂ(—,T) :: (X -> Y) -> ℂ(X, T) -> ℂ(Y, T) ℂ(—,T) h f = f . h Again, we use the following shorthand, where we slot \(h\) into the dash: ℂ(h, T) = ℂ(—,T) h One feature of these representables that I like particularly is that they give rise to a clean correspondence between injectivity and surjectivity. The injective Representables give rise to a nice model for injective functions. Injective Covariant Reperesentable The following property holds of covariant representables: \(ℂ(S, h)\) is injective iff \(h\) is injective. First we show that if \(h\) is injective then \(ℂ(S, h)\) is injective: ℂ(S, h) f == ℂ(S, h) g == {- definition ℂ(S, h) -} h . f == h . g => {- injective h -} f == g Then we show that if \(ℂ(S, h)\) is injective then \(h\) is injective: h . f == h . g == {- definition ℂ(S, h) -} ℂ(S, h) f == ℂ(S, h) g => {- injective ℂ(S, h) -} f == g Injective Contravariant Reperesentable Here’s a property of the contravariant representable functor: \(ℂ(h, S)\) is injective iff \(h\) is surjective. We work with the contrapositive to show that if \(h\) is surjective, then \(ℂ(h, S)\) is injective: f ≠ g == {- definition inequality -} ∃ y . f y ≠ g y == {- surjective h -} ∃ x . f (h x) ≠ g (h x) == {- definition inequality -} f . h ≠ g . h == {- definition ℂ(h, S) -} ℂ(h, S) f ≠ ℂ(h, S) g Finally we show that if \(h\) is not surjective, then \(ℂ(h, S)\) is not injective: let h 0 = 0 {- h : {0} -> {0,1} -} let f 0 = 0, f 1 = 0 let g 0 = 0, g 1 = 1 f . h = g . h == {- definition ℂ(h, S) -} ℂ(h, S) f = ℂ(h, S) g f ≠ g Sadly, I don’t like this part of the proof, since it involves finding a counterexample to the injectivity of \(ℂ(h, S)\) when \(h\) is not surjective, and I prefer constructive proofs. Nevertheless, we’ve shown that \(ℂ(h, S)\) is injective iff \(h\) is surjective, which gives us the right cancellative definition of surjection.
{"url":"http://zenzike.com/posts/2011-04-07-injective-representables","timestamp":"2014-04-21T06:15:15Z","content_type":null,"content_length":"8850","record_id":"<urn:uuid:a49b5d3e-40dc-4260-937e-205c4d31878b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
dense subset of C*-algbra August 23rd 2010, 12:38 AM #1 dense subset of C*-algbra Show that the span of elements of the form $a^*b$ where both $a,b\in A$ which is a C*-albegra is dense in $A$ (I'm missing something simple...) If the algebra is unital, with identity element e, then this result is obvious, because any element $a\in A$ is equal to $e^*a$. Even in the nonunital case, a C*-algebra always has an approximate unit consisting of positive elements in the unit ball. Given $a\in A$ and $\varepsilon>0$, choose an element $e_\iota$ in the approximate unit such that $\|e_\iota a-a\|<\varepsilon$. That shows that elements of the form $b^*a$ are dense in A (without even having to take the linear span of them). August 23rd 2010, 08:33 AM #2
{"url":"http://mathhelpforum.com/differential-geometry/154227-dense-subset-c-algbra.html","timestamp":"2014-04-21T03:13:50Z","content_type":null,"content_length":"35116","record_id":"<urn:uuid:26fb60c8-ec7f-42cb-9252-883c4fda43b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Have a question in Astronomy is this the right place to get an answer? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fc8fb77e4b0c6963ad350fe","timestamp":"2014-04-20T03:18:56Z","content_type":null,"content_length":"39645","record_id":"<urn:uuid:dc1c65d3-f196-4fa8-9e94-39c737ec3c1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: help cant figure out this problem 6(a+3)=21-3-(1-2a) • 6 months ago • 6 months ago Best Response You've already chosen the best response. Are you trying to find a Best Response You've already chosen the best response. First apply the distributive property. Best Response You've already chosen the best response. Haha Nevermind XD Best Response You've already chosen the best response. 6*a +6*3 = 21 -3 -1*1 -1 *-2a Best Response You've already chosen the best response. i do that but the thing i get stuck on is putting it all togethter Best Response You've already chosen the best response. Oh okay. so when we simplify this we get 6a +18 = 21 -3-1+2a right? Best Response You've already chosen the best response. We'll work through it step by step. Best Response You've already chosen the best response. Are you following? :\ Best Response You've already chosen the best response. Best Response You've already chosen the best response. instead of 2a wouldnt it be -6a Best Response You've already chosen the best response. Let's see. 6a +18 = 21 -3-1+2a simplify the right side 6a + 18 = 21-4 +2a simplify again 6a + 18 = 17 + 2a bring all the a's to one side of the equation 6a -2a +18 = 17 simplify left side 4a +18 = 17 bring all the numbers to the other side of the equation 4a = 17 - 18 simplify 4a = -1 isolate a on one side by dividing both sides of the equation by 4 4a/4 = -1/4 Simplify a = -1/4 Best Response You've already chosen the best response. okay thanks Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/52570605e4b002bdb08e47de","timestamp":"2014-04-21T15:29:10Z","content_type":null,"content_length":"54273","record_id":"<urn:uuid:28ec11bb-3dd1-4f73-a763-202f05b43ab0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Can logical probability be viewed as a measure of degrees of partial entailment? Mura, Alberto Mario (2008) Can logical probability be viewed as a measure of degrees of partial entailment? Logic and Philosophy of Science, Vol. 6 (1), p. 25-33. eISSN 1826-1043. Article. Full text not available from this repository. Alternative URLs: A new account of partial entailment is developed. Two meanings of the term ‘partial entailment’ are distinguished, which generalise two distinct aspects of deductive entailment. By one meaning, a set of propositions A entails a proposition q if, supposed that all the elements of A are true, q must necessarily be true as well. By the other meaning, A entails q inasmuch the informative content of q is encapsulated in the informative content of A: q repeats a part of what the elements of A, taken together, convey. It is shown that while the two ideas are coextensive with respect to deductive inferences, they have not a common proper explicatum with respect to the notion of partial entailment. It is argued that epistemic inductive probability is adequate as an explicatum of partial entailment with respect to the first meaning while it is at odds with the second one. A new explicatum of the latter is proposed and developed in detail. It is shown that it does not satisfy the axioms of probability. I documenti depositati in UnissResearch sono protetti dalle leggi che regolano il diritto d'autore Repository Staff Only: item control page
{"url":"http://eprints.uniss.it/4789/","timestamp":"2014-04-20T09:05:37Z","content_type":null,"content_length":"17608","record_id":"<urn:uuid:3142c649-f0b5-4a2c-8f16-c6607c617262>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
X-ray Emission from Clusters of Galaxies - C.L. Sarazin 5.4. Transport processes Processes that redistribute energy, momentum, or heavy elements within the intracluster gas will now be reviewed. 5.4.1 Mean free paths and equilibration time scales The mean free paths of electrons and ions in a plasma without a magnetic field are determined by Coulomb collisions. As in the stellar dynamical case, it is important to include distant as well as nearby collisions. The mean free path [e] for an electron to suffer an energy exchanging collision with another electron is given by (Spitzer, 1956) where T[e] is the electron temperature, n[e] is the electron number density, and T[e] ^5 K, this Coulomb logarithm is which is nearly independent of density or temperature. Equation (5.32) assumes that the electrons have a Maxwellian velocity distribution at the electron temperature. The equivalent mean free path of ions [i] is given by the same formula, replacing the electron temperature and density with the ion temperature T[i] and density, dividing by the ion charge to the fourth power, and slightly increasing ln Section 5.4.5 below. Numerically, assuming that T[e] = T[i] = T[g]. In general, these mean free paths are shorter than the length scales of interest in clusters (Section 5.3.4). If a homogeneous plasma is created in a state in which the particle distribution is non-Maxwellian, elastic collisions will cause it to relax to a Maxwellian distribution on a time scale determined by the mean free paths (Spitzer, 1956, 1978). Electrons will achieve this equilibration (isotropic Maxwellian velocity distribution characterized by the electron temperature) on a time scale set roughly by t[eq](e, e) [e] / <v[e]>[rms], where the denominator is the rms electron velocity, The time scale for protons to equilibrate among themselves is t[eq](p, p) m[p] / m[e])^1/2 t[eq](e, e), or roughly 43 times longer than the value in equation (5.35). Following this time, the protons and ions would each have Maxwellian distributions, but generally at different temperatures. The time scale for the electrons and ions to reach equipartition T[e] = T[i] is t[eq](p, e) m[p] / m[e])t [eq](e, e), or roughly 1870 times the value in equation (5.35). For heavier ions, the time scales for equilibration are generally at least this short if the ions are nearly fully stripped, because the increased charge more than makes up for the increased mass. For T[g] ^8 K and n[e] ^-3 cm^-3, the longest equilibration time scale is only t[eq](p, e) ^8 yr. Since this is shorter than the age of the cluster or the cooling time, the intracluster plasma can generally be characterized by a single kinetic temperature T[g]. Under some circumstances, plasma instabilities may bring about a more rapid equilibration than collisions (McKee and Cowie, 1977).
{"url":"http://ned.ipac.caltech.edu/level5/March02/Sarazin/Sarazin5_4.html","timestamp":"2014-04-18T23:21:56Z","content_type":null,"content_length":"6872","record_id":"<urn:uuid:0e2f91f9-8cfa-4e81-8d2b-6236304ea0ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Overbrook Hills, PA Calculus Tutor Find an Overbrook Hills, PA Calculus Tutor ...Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional life. I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. 9 Subjects: including calculus, physics, geometry, algebra 1 ...I took the GRE in 2012, after the new GRE had been implemented. I scored in a high percentile, and can help any student to succeed in the math topics that are covered in the GRE. As part of my civil engineering degree, I gained a firm grasp on mathematical concepts including all of the critical concepts covered by the ACT math section. 21 Subjects: including calculus, reading, physics, geometry ...Also, during the day, I stay at home with my two young daughters.I took a differential equations course in fall of 2007 at Rensselaer Polytechnic Institute. I received an A. I used these topics in many chemical engineering courses after that. 25 Subjects: including calculus, chemistry, physics, writing Hi everyone, I am an experienced Pennsylvania certified mathematics teacher. My greatest skill is the ability to take complex concepts and break them into manageable and understandable parts. I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. 15 Subjects: including calculus, geometry, algebra 1, GRE ...Please note I am available in both Pennsylvania AND New Jersey. I am a highly experienced, highly qualified math tutor (10 years experience), who has taught high school and middle school math. I have been teaching SAT math prep seminars for Temple University for 5 years. 22 Subjects: including calculus, writing, geometry, statistics Related Overbrook Hills, PA Tutors Overbrook Hills, PA Accounting Tutors Overbrook Hills, PA ACT Tutors Overbrook Hills, PA Algebra Tutors Overbrook Hills, PA Algebra 2 Tutors Overbrook Hills, PA Calculus Tutors Overbrook Hills, PA Geometry Tutors Overbrook Hills, PA Math Tutors Overbrook Hills, PA Prealgebra Tutors Overbrook Hills, PA Precalculus Tutors Overbrook Hills, PA SAT Tutors Overbrook Hills, PA SAT Math Tutors Overbrook Hills, PA Science Tutors Overbrook Hills, PA Statistics Tutors Overbrook Hills, PA Trigonometry Tutors Nearby Cities With calculus Tutor Bala, PA calculus Tutors Belmont Hills, PA calculus Tutors Bywood, PA calculus Tutors Carroll Park, PA calculus Tutors Cynwyd, PA calculus Tutors Drexelbrook, PA calculus Tutors Kirklyn, PA calculus Tutors Llanerch, PA calculus Tutors Merion Park, PA calculus Tutors Merion Station calculus Tutors Merion, PA calculus Tutors Oakview, PA calculus Tutors Penn Valley, PA calculus Tutors Penn Wynne, PA calculus Tutors Westbrook Park, PA calculus Tutors
{"url":"http://www.purplemath.com/Overbrook_Hills_PA_calculus_tutors.php","timestamp":"2014-04-20T19:58:26Z","content_type":null,"content_length":"24587","record_id":"<urn:uuid:737a6a68-5ecc-4dda-a90a-18f5b1f2c11a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Rolling That Cube Copyright © University of Cambridge. All rights reserved. Why do this problem? is one which will particularly appeal to those pupils who enjoy problem solving or have good spatial awareness. It may, therefore, be a useful activity for a whole class - as it will enable you to see more clearly which pupils work well on spatial challenges. It also offers opportunities for sharing different ways of approaching the task. Possible approach Depending on the pupils' experiences, it may be appropriate to start with them all together with a practical simplified example. Using a large cardboard/foam cube with simple shapes on each of the six faces, like this ... ... they could observe what happens as it rolls and try to predict what face will be at the bottom each time. You could then present the challenge itself and give children time to work independently or in pairs. A copy of the route can be found on this sheet. Try not to direct the way they work as you may be suprised by the methods they create. The plenary can then focus on their different approaches. Allow time for all the different ways to be explained and then encourage pairs/small groups to discuss which method they might use if they were presented with a similar problem. Can they justify their choice? Key questions When working with the simple cube above: What's happening here? What can you tell me about the one at the bottom when I roll it this way? When working on the actual challenge: How are you working this out? Will you be able to check that it's ok? Possible extension When pupils have managed this activity in a confident way they may like to have a look at Inky Cube which is similar but much harder. You could also give the pupils opportunities to create their own cubes and set challenges for each other. Possible support Many pupils may need to have support in rolling a cube over carefully. Be aware though that some pupils who need support in the more numerical aspect of mathematics may not need any support in this spatial work.
{"url":"http://nrich.maths.org/7299/note?nomenu=1","timestamp":"2014-04-17T01:14:19Z","content_type":null,"content_length":"6798","record_id":"<urn:uuid:27a7b5cc-5beb-4bef-a905-99658a2cb2cd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: How many parameters does an elisp function take? [Top][All Lists] [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: How many parameters does an elisp function take? From: Kevin Rodgers Subject: Re: How many parameters does an elisp function take? Date: Wed, 16 Feb 2005 16:56:47 -0700 User-agent: Mozilla Thunderbird 0.9 (X11/20041105) David Kastrup wrote: Kevin Rodgers <address@hidden> writes: Alan Mackenzie wrote: Is it possible to determine at run time how many parameters an elisp function takes? For example, I'd like to write something like: (how-many-params 'null) and have it evaluate to 1. Or something like that. Together with reasonable convention for indicating &optional and &rest arguments. I would start with eldoc-function-arglist. For built-in functions, subr-arity might help. And now for lisp functions, lambda-arity: (require 'eldoc) (defun lambda-arity (function) "Return minimum and maximum number of args allowed for FUNCTION. FUNCTION must be a symbol whose function binding is a lambda expression or a macro. The returned value is a pair (MIN . MAX). MIN is the minimum number of args. MAX is the maximum number or the symbol `many', for a lambda or macro with `&rest' args." (let* ((arglist (eldoc-function-arglist function)) (optional-arglist (memq '&optional arglist)) (rest-arglist (memq '&rest arglist))) (cons (- (length arglist) (cond (optional-arglist (length optional-arglist)) (rest-arglist (length rest-arglist)) (t 0))) (cond (rest-arglist 'many) (optional-arglist (+ (length arglist) (length optional-arglist) (t (length arglist)))))) Kevin Rodgers [Prev in Thread] Current Thread [Next in Thread]
{"url":"http://lists.gnu.org/archive/html/help-gnu-emacs/2005-02/msg00340.html","timestamp":"2014-04-16T13:56:19Z","content_type":null,"content_length":"7424","record_id":"<urn:uuid:bca592c6-de82-4798-95da-e7c37ecd8384>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
A question about a formal power series manipulation up vote 1 down vote favorite I want to find a function $f(x,y)$ which can satisfy the following equation, $\prod _{n=1} ^{\infty} \frac{1+x^n}{(1-x^{n/2}y^{n/2})(1-x^{n/2}y^{-n/2})} = exp [ \sum _{n=1} ^\infty \frac{f(x^n,y^n)}{n(1-x^{2n})}]$ • I would like to know how this is solved. (..though I landed into this through a different route (calculating Witten Index!), such expressions also occur in finite dimensional representation theory where a generating function for the character of the (anti)symmetric powers of a representation (the LHS) is written as a Plethystic exponential (the RHS) of the original (generally fundamental) representation...) One can perturbatively check that the following function satisfies the above equation, 1. $f(x,y) = \sqrt{x} (\sqrt{y} + 1/\sqrt{y}) + x (1 + y + 1/y) + x^{3/2} (y^{3/2} + 1/y^{3/2}) + x^2 (y^2 + 1/y^2)$ $\quad\quad\quad\quad\quad + \frac{(x y)^{5/2} }{(1 - \sqrt{x y})}(1 - 1/y^2) + \frac{(x/y)^{5/2}}{(1 - \sqrt{x/y})} (1 - y^2) $ The paper doesn't state any proof or explanation for how this was obtained but the above is order-by-order in $x$ checkable to be right after truncating the original equation at any finite value of $n$. (...I don't know how to check this keeping the full sum/product over $n$..) Now I tried to do something obvious but it didn't work! $\prod_{n=1}^{\infty} \frac{ (1+x^n) }{1+x^n -x^{\frac{n}{2}} \left(y^{\frac{n}{2}} + y^{-\frac{n}{2}}\right) } = \exp \left[ \sum_{n=1}^{\infty} \frac{ I_{ST}(x^n,y^n) } {n (1-x^{2n}) } \right] $ $\Rightarrow \sum_{n=1}^{\infty} \left[ \ln (1+x^n) - \ln(1-(\sqrt{xy})^n) - \ln\left(1- \left(\sqrt{\frac{x}{y}}\right)^n\right) \right] = \sum_{n=1}^\infty \frac{I_{ST}(x^n,y^n)} {n(1-x^{2n})} $ Now we expand the logarithms and we have, $\sum_{n=1}^{\infty} \left[ \sum_{a=1}^{\infty} (-1)^{a+1} \frac{x^{na}}{a} + \sum_{b=1}^{\infty} \frac{ (\sqrt{xy})^{nb} } {b} + \sum_{c=1}^{\infty} \frac{ (\sqrt{x/y})^{nc} }{c} \right] = \sum _{n= 1}^\infty \frac{f(x^n,y^n)} {n(1-x^{2n})}$ $\Rightarrow \sum _{a=1} ^{\infty} \frac{1}{a} \left[ \sum _{n=1} ^{\infty} \left( (-1)^{a+1}x^{na} + (xy)^{\frac{na}{2}} + \left(\frac{x}{y}\right)^{\frac{na}{2}} \right) \right] = \sum _{n=1} ^\ infty \frac{f(x^n,y^n)} {n(1-x^{2n})}$ By exchanging $a$ and $n$ (relabeling on the LHS) and matching the patterns on both sides and picking out the $n=1$ term one sees that one way this equality can hold is if, $f(x,y) = (1-x^2) \sum _{a=1} ^{\infty} \left[ x^a + (xy)^{\frac{a}{2}} + (\frac{x}{y})^{\frac{a}{2}} \right]$ $\Rightarrow f (x,y) = (1-x^2) \left(-1 + \frac{1}{1-x} -1 + \frac{1}{1-\sqrt{xy}} - 1 + \frac{1}{1-\sqrt{\frac{x}{y}} } \right)$ But this solution is neither the one above which could be perturbatively checked to be true nor does it satisfy the original equation! Why? After doing a series expansion of the above (using Series on Mathematica) one sees that this above derived equation (2) differs from (1) in having just one extra term of $x^2$. (...I would like to know what is wrong in the derivation that gives (2) this one extra term compared to the non-derivable but perturbatively checked correct answer (1)...) fa.functional-analysis real-analysis power-series rt.representation-theory 3 Can you please give the exact reference to "a certain paper" ? – F. C. Oct 5 '12 at 18:48 1 write the product over $n$ of $F_{n}(x,y)$ as the exponent of the sum over $n$ of log $F_{n}(x,y)$ and then expand $F$ as a power series in $x$. – Carlo Beenakker Oct 5 '12 at 19:14 @Carlo Beenakker Can elaborate a bit on what is the theorem you are using to identify $f$ - what you are suggest will make the LHS look like the RHS - but beyond that what? I need to solve many such kinds of equations and I would like to know what is the general principle being used. – Anirbit Oct 6 '12 at 21:26 @F.C I did not link the paper since its a paper whose central focus is something deep into physics and nothing to do with this one equation - and hence I thought it would be distracting to link to that huge paper - this equation and its solution I have quoted is just one line in that huge paper. – Anirbit Oct 6 '12 at 21:38 @Carlo Beenakker By what you are saying the equation will look like, $exp [ \sum _ {n=1} ^{\infty} log F(x^n,y^n) ] = exp [ \sum _ {n=1} ^{\infty} \frac{f(x^n,y^n)}{n(1-x^{2n})} ]$ Now how do you propose to identify $f(x,y)$ from a power expansion of $log F(x^n,y^n)$? – Anirbit Oct 6 '12 at 21:43 add comment 1 Answer active oldest votes I will try to by more helpful than I was in the comments. First a general observation. Your infinite product contains socalled Euler functions, which have the Lambert series expansion Part of your infinite product can be expanded in this way, $P(x,y)\equiv\prod_{n=1}^{\infty}\frac{1}{(1-x^{n/2}y^{n/2})(1-x^{n/2}y^{-n/2})}= \frac{1}{\phi(\sqrt{xy})\phi(\sqrt{x/y})}=\exp\left[\sum_{n=1}^{\infty}\frac{1}{n}g(x^n,y^n)\right]$. The function $g(x,y)$ is given by This is not quite what you have written. I have difficulty verifying your expression. For example, the limit $x\rightarrow 0$ of the left-hand side of your expression is $1+\sqrt{x}(\ sqrt{y}+1/\sqrt{y})$, but the same limit of the right-hand side is $1+\sqrt{x}(y+1/y)$. up vote 6 down vote UPDATE 1: Your corrected expression is still problematic; the left-hand side contains the infinite product $\prod_{n=1}^{\infty}(1+x^n)=\exp\left[-\sum_{n=1}^{\infty}(-1)^{n}\frac{1}{n}G(x^{n})\right]$, with $G(x)=x/(1-x)$. The factor $(-1)^n$ in the sum over $n$ seems inconsistent with the right-hand side of your expression --- but in fact it is consistent (see update 2). UPDATE 2: One more identity is needed, in addition to the Euler function identities, to complete the identification: We then have, quite generally, with the functions $F(x)=x/(1-x^2)$, $g(q)=q/(1-q)$. Your equation (1) corresponds to $q_1=\sqrt{x/y}$, $q_2=\sqrt{xy}$. I know, your function $f(x,y)$ looks much more lengthy, but it is really just $F(x)+g(\sqrt{xy})+g(\sqrt{x/y})$. @Carlo Beenakker Thanks for your reference about Lambert functions. It seems to fit the kind of pattern that I want - let me read more about it and get back to you. Also it turns out that the original function (1) that I had typed was wrong (the original paper had it wrong!) and the correct function is what is now typed as (1). The function (1) satisfies the equation as far as one can check perturbatively. But I don't understand why the equation that I derived (2) neither satisfies the equation nor is that the same as (1) - but I thought that (2) was derived by a pretty trivial argument! – Anirbit Nov 2 '12 at 21:57 @Carlo Beenakker And the latex is getting messed in the question. It would be great if you can clean that - I don't understand where is the problem. – Anirbit Nov 2 '12 at 23:01 I corrected the LaTeX, so that it displays properly. – Carlo Beenakker Nov 2 '12 at 23:40 @Carlo Beenaker Thanks for your efforts. I did not understand your UPDATE. To which equation are you referring to? Firstly I would like to know how equation (1) can be derived since this (1) can at least perturbatively be checked to be right. Secondly I would like to know why the equation (2) is not solving the equation - I can't see anything obviously going wrong with its derivation. – Anirbit Nov 4 '12 at 19:27 1 the Euler function identity is just a Taylor series of $\ln\phi(q)$ around $q=0$; for the proof of the second identity I have asked MO for help, with success (see the answer to question 111648). – Carlo Beenakker Nov 6 '12 at 15:30 show 6 more comments Not the answer you're looking for? Browse other questions tagged fa.functional-analysis real-analysis power-series rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/108927/a-question-about-a-formal-power-series-manipulation/111294","timestamp":"2014-04-21T15:45:27Z","content_type":null,"content_length":"68879","record_id":"<urn:uuid:214129bc-733a-46cd-9817-2323bcb5150f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Projectile motion [Archive] - OpenGL Discussion and Help Forums 02-12-2012, 08:46 AM Need some help whit the following code: void quadratic(int angle,int vel) float theta=3.14*angle/180;//angle to radians ux=vel*cos(theta);//X coord init value uy=vel*sin(theta);//Y coord init value t=((vel*vel)*(2*(sin(theta)*cos(theta))))/9.8;//maximum range that projectile can reach for(i=1;i<=t;i++)//nr of projectile (i=time) (t=distance to travel) x=ux*i;//X coord value y=uy*i+(0.5*(-9.8)*(i*i));//Y coord value I'm trying to make a projectile motion just like that in pocket tank but i cant get it done. For example its show a number of points but they don't follow a quadratic trajectory and the points don't stick whit equal distance between them.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-176886.html","timestamp":"2014-04-20T11:18:36Z","content_type":null,"content_length":"4426","record_id":"<urn:uuid:b8ad4bfa-ae08-4f2c-a811-963b9df832d6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Blawenburg ACT Tutor Find a Blawenburg ACT Tutor ...Most competitive schools require it and it is also an excellent way to study for the school finals. I have much experience teaching SAT I Math and SAT II Math. I can help you boost your test 11 Subjects: including ACT Math, calculus, geometry, algebra 1 ...I enjoy math (especially physics) and consider myself blessed with the understanding of these subjects. I also play soccer for my school. I have played all my life, and have experience coaching younger kids too. 20 Subjects: including ACT Math, chemistry, geometry, German ...I have experience tutoring students from kindergarten to advanced mathematics at the undergraduate level. I believe that developing a "number sense" is very important to succeed in math and spend more time developing this sense rather than having students memorize formulas and algorithms. I hav... 16 Subjects: including ACT Math, calculus, prealgebra, precalculus ...Spent another semester aiding MATLAB instructor at Rutgers for recitation purposes. I am currently a Junior undergraduate at Rutgers University School of Engineering for Chemical and Biochemical Engineering. I have taken all my basic chemical engineering courses, which in my opinion should suffice students who seek aide on this tutoring website. 8 Subjects: including ACT Math, chemistry, calculus, Chinese ...These courses were titled "Discrete Math" or "Math for Elementary School Teachers." I have also written logic questions for publishing firms. I have taught high school math from 1966 to 1980 and college math from 1973 to the present. I also worked as a Math Editor for five years at a major publishing firm in which I developed sample math questions for the PRAXIS exam. 21 Subjects: including ACT Math, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Blawenburg_ACT_tutors.php","timestamp":"2014-04-19T07:19:29Z","content_type":null,"content_length":"23761","record_id":"<urn:uuid:8abef85e-31db-4a9b-890c-4a1632a5cad7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert decivolt to attovolt - Conversion of Measurement Units ›› Convert decivolt to attovolt ›› More information from the unit converter How many decivolt in 1 attovolt? The answer is 1.0E-17. We assume you are converting between decivolt and attovolt. You can view more details on each measurement unit: decivolt or attovolt The SI derived unit for voltage is the volt. 1 volt is equal to 10 decivolt, or 1.0E+18 attovolt. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between decivolts and attovolts. Type in your own numbers in the form to convert the units! ›› Definition: Decivolt The SI prefix "deci" represents a factor of 10^-1, or in exponential notation, 1E-1. So 1 decivolt = 10^-1 volts. The definition of a volt is as follows: The volt (symbol: V) is the SI derived unit of electric potential difference or electromotive force, commonly known as voltage. It is named in honor of the Lombard physicist Alessandro Volta (1745–1827), who invented the voltaic pile, the first chemical battery. The volt is defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power.[3] Hence, it is the base SI representation m^2 · kg · s^-3 · A^-1, which can be equally represented as one joule of energy per coulomb of charge, J/C. ›› Definition: Attovolt The SI prefix "atto" represents a factor of 10^-18, or in exponential notation, 1E-18. So 1 attovolt = 10^-18 volts. The definition of a volt is as follows: The volt (symbol: V) is the SI derived unit of electric potential difference or electromotive force, commonly known as voltage. It is named in honor of the Lombard physicist Alessandro Volta (1745–1827), who invented the voltaic pile, the first chemical battery. The volt is defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power.[3] Hence, it is the base SI representation m^2 · kg · s^-3 · A^-1, which can be equally represented as one joule of energy per coulomb of charge, J/C. ›› Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more! This page was loaded in 0.0029 seconds.
{"url":"http://www.convertunits.com/from/decivolt/to/attovolt","timestamp":"2014-04-16T19:18:30Z","content_type":null,"content_length":"21490","record_id":"<urn:uuid:98af84bf-5066-4269-a4f1-1645ba0506e1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Convergence rates of approximation by translates , 1997 "... We present a Kernel--based framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridge-regression, Support Vector methods and regression by Smoothing Kernels are included as special cases. We will show connection ..." Cited by 77 (25 self) Add to MetaCart We present a Kernel--based framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridge-regression, Support Vector methods and regression by Smoothing Kernels are included as special cases. We will show connections between the cost-function and some properties up to now believed to apply to Support Vector Machines only. The optimal solution of all the problems described above can be found by solving a simple quadratic programming problem. The paper closes with a proof of the equivalence between Support Vector kernels and Greene's functions of regularization operators. , 1994 "... . Neural Networks are non-linear black-box model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable non-linear systems. When estimating the parameters in these structures, there is also good adaptability to conce ..." Cited by 10 (3 self) Add to MetaCart . Neural Networks are non-linear black-box model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable non-linear systems. When estimating the parameters in these structures, there is also good adaptability to concentrate on those parameters that have the most importance for the particular data set. Key Words. Neural Networks, Parameter estimation, Model Structures, Non-Linear Systems. 1. EXECUTIVE SUMMARY 1.1. Purpose The purpose of this tutorial is to explain how Artificial Neural Networks (NN) can be used to solve problems in System Identification, to focus on some key problems and algorithmic questions for this, as well as to point to the relationships with more traditional estimation techniques. We also try to remove some of the "mystique" that sometimes has accompanied the Neural Network approach. 1.2. What's the problem? The identification problem is to infer relationships between past inp... - Adv. Comp. Math , 2000 "... This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mo ..." Cited by 5 (5 self) Add to MetaCart This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data. Results about convergence and convergence rates for exact data are derived based upon well-known convergence results about least-squares collocation. Finally, the stability properties with respect to errors in the data are examined and stability bounds are obtained, which yield rules for the choice of the number of network elements. Keywords: ill-posed problems, least-squares collocation, neural networks, network training, regularization. AMS Subject Classification: 41A15, 41A30, 45L10, 65J20, 92B20. Short Title: Training Neural Networks with Noisy Data. 1 "... Abstract — Random networks of nonlinear functions have a long history of empirical success in function fitting but few theoretical guarantees. In this paper, using techniques from probability on Banach Spaces, we analyze a specific architecture of random nonlinearities, provide L ∞ and L2 error boun ..." Cited by 4 (0 self) Add to MetaCart Abstract — Random networks of nonlinear functions have a long history of empirical success in function fitting but few theoretical guarantees. In this paper, using techniques from probability on Banach Spaces, we analyze a specific architecture of random nonlinearities, provide L ∞ and L2 error bounds for approximating functions in Reproducing Kernel Hilbert Spaces, and discuss scenarios when these expansions are dense in the continuous functions. We discuss connections between these random nonlinear networks and popular machine learning algorithms and show experimentally that these networks provide competitive performance at far lower computational cost on large-scale pattern recognition tasks. I. - Neural Networks , 2001 "... . This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feed-forward neural networks with one hidden layer and linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing a ..." Cited by 3 (1 self) Add to MetaCart . This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feed-forward neural networks with one hidden layer and linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing and regularization by weight decay, as well as a combination of both methods to combine their advantages. We show that in all cases stable approximations are obtained converging to the approximated function in a desired Sobolev space as the noise in the data tends to zero (in the weaker L 2 -norm) if the regularization parameter and the number of units in the network are chosen appropriately. Under additional smoothness assumptions we are able to show convergence rates results in terms of the noise level and the number of units in the network. In addition, we show how the theoretical results can be applied to the important classes of perceptrons with one hidden layer and to translation networks. Finally, the performance of the different approaches is compared in some numerical examples. Key Words: Ill-posed problems, neural networks, Tikhonov regularization, output smoothing, weight decay, function approximation. AMS Subject Classifications: 65J20, 92B20, 41A30. 1. - In Intelligent Methods in Signal Processing and Communications , 1997 "... In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the trade-o# between the global error and the partial error during the iterations of the solution. The ..." Cited by 1 (1 self) Add to MetaCart In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the trade-o# between the global error and the partial error during the iterations of the solution. These results are then specialized to constructive function approximation using sigmoidal neural networks. The emphasis then shifts to the implementation issues associated with the problem of achieving given approximation errors when using a finite number of nodes and a finite data set for , 1994 "... Neural Networks are widely noticed to provide a nonlinear function approximation method. In order to make its approximation ability clear, a new theorem on an integral transform of ridge functions is presented. By using this theorem, an approximation bound, which clarifies the quantitative relations ..." Cited by 1 (1 self) Add to MetaCart Neural Networks are widely noticed to provide a nonlinear function approximation method. In order to make its approximation ability clear, a new theorem on an integral transform of ridge functions is presented. By using this theorem, an approximation bound, which clarifies the quantitative relationship between the approximation accuracy and the number of elements in the hidden layer, can be obtained. This result shows that the approximation accuracy depends on the smoothness of target functions. It also shows that the approximation methods which use ridge functions are free from "curse of dimensionality ". 1 Overview In the middle of the 1980s, computational research on neural networks was activated by the works of the Parallel Distributed Processing (PDP) group, and multi-layered networks having sigmoidal output functions together with back-propagation learning played important roles in this movement. Many kinds of examples provided by the PDP group attracted interest of other rese... , 1997 "... The adaptive data-driven emulation and control of mechanical systems are popular applications of artificial neural networks in engineering. However, multi-layer perceptron training is an ill-posed nonlinear optimization problem. This paper explores a method to constrain network parameters so that co ..." Add to MetaCart The adaptive data-driven emulation and control of mechanical systems are popular applications of artificial neural networks in engineering. However, multi-layer perceptron training is an ill-posed nonlinear optimization problem. This paper explores a method to constrain network parameters so that conventional computational techniques for function approximation can be used during training. This was accomplished by forming local basis functions which provide accurate approximation and stable evaluation of the network parameters. It is noted that this approach is quite general and does not violate the principles of network architecture. By employing the concept of shift invariant subspaces, this approach yields a new and more robust error condition for feedforward artificial neural networks and allows one to both characterize and control the accuracy of the local bases formed. The two methods used are: 1) adding bases while altering their shape and keeping their spacing constant and 2) ad...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=219384","timestamp":"2014-04-21T06:09:29Z","content_type":null,"content_length":"34601","record_id":"<urn:uuid:428aafdb-8c8c-4add-a52b-da2da8a119f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by shirley on Saturday, January 8, 2011 at 11:03am. elimination method Related Questions algebra - how do you solve this problem by elimination method 5r-3s=14 3r+5s=56 Algebra - Solve by elimination method.. 3r - 5s = 14 5r + 3s = 46. math elimination method - elimination method really confuses me. solve by ... alberga - 3r-5s=-4 5r+3s=16 elimination method Algebra - please help me!! Solve by elimination method: 3r - 5s = 6 5r + 3s = 44... math problem - very confused could someone help me with this using the ... Elimination method - Solve by the elimination method. Is there a solution? 2r-5s... algebra - 3r-5s=-14 3r+3s-56 Math116 - Solve by elimination 5r-3s=13 3r+5s=69 algebra - im supposed to solve by the elimination method 5r-25=33 2r+5s=48
{"url":"http://www.jiskha.com/display.cgi?id=1294502593","timestamp":"2014-04-16T10:29:57Z","content_type":null,"content_length":"8111","record_id":"<urn:uuid:30692c09-c87c-4a15-8da1-c0720d5f94c7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Submitting VAT return (FRS) - Box 6 31st January 2011 08:28 #1 Still gathering requirements... Join Date Dec 2010 Box 6: your flat rate turnover for the period Enter in Box 6 the flat rate turnover - including VAT - that you applied your flat rate percentage to. For example, if your flat rate turnover for the period is £10,000 and your percentage is 8 per cent then you would enter £10,000 in Box 6 and 8 per cent of £10,000 - that is, £800 - in Box 1. Don't forget to include in your flat rate turnover supplies that are exempt from VAT like Slightly confused here. For eg. Assuming the gross turnover for last quarter is £10k. Adding VAT collected (@17.5%), the total VAT inclusive turnover is £11.75k. The FRS percentage is 12.5%, which will be applied to £10k and hence VAT due would be £1.25k, which goes in Box 1. So which figure goes into Box 6? £10k or £11.75k? I am leaning towards £10k but slighlty confused by the words 'including VAT'. Box 6: your flat rate turnover for the period Enter in Box 6 the flat rate turnover - including VAT - that you applied your flat rate percentage to. For example, if your flat rate turnover for the period is £10,000 and your percentage is 8 per cent then you would enter £10,000 in Box 6 and 8 per cent of £10,000 - that is, £800 - in Box 1. Don't forget to include in your flat rate turnover supplies that are exempt from VAT like Slightly confused here. For eg. Assuming the gross turnover for last quarter is £10k. Adding VAT collected (@17.5%), the total VAT inclusive turnover is £11.75k. The FRS percentage is 12.5%, which will be applied to £10k and hence VAT due would be £1.25k, which goes in Box 1. So which figure goes into Box 6? £10k or £11.75k? I am leaning towards £10k but slighlty confused by the words 'including VAT'. You should put in the £10,000 plus VAT - so £11,750. This makes it easy for HMRC to check you're being consistent! Thanks Clare! I've submitted my VAT return. In the cofirmation receipt, it says that VAT would be collected on 10th Feb, whereas my due date was 7th Feb. Do you think this could be a problem? Why would it take 10 days for HMRC to debit my account? Don't worry that's normal. When HMRC collect by DD it's often a few days after the due date, presumably the time it takes their DD system to kick in. I think I made it just in time .. This is for the others out here: Direct Debit payments If you pay by online Direct Debit, HMRC will collect payment from your nominated bank account a further three bank working days after the extended due date for your return. This means that online VAT Direct Debit offers you more time to pay than any other method - a minimum of ten extra calendar days. Therefore, the deadline for quarter ending 31st Dec is 10th Feb (for those paying thru Direct Debit). However, DD must be set up before the return and at least five banking days before the payment collection date. This allows HMRC time to make the necessary arrangements with your bank or building Further guidace: HM Revenue & Customs: Deadlines for your VAT Return and payment For eg. Assuming the gross turnover for last quarter is £10k. Adding VAT collected (@17.5%), the total VAT inclusive turnover is £11.75k. The FRS percentage is 12.5%, which will be applied to £10k and hence VAT due would be £1.25k, which goes in Box 1. Some slight confusion here. To calculate the FRS VAT payable, you first calculate turnover including VAT, then apply the FRS percentage to that number. Using current rates for IT contractors, that is 20% charged to the client and 14.5% returned to HMRC, if your sales figure for the quarter was £10K then you would calculate VAT return as Turnover = 10,000 VAT charged = 2,000 VAT Inclusive turnover = 12,000 VAT owed = 12,000 * 14.5% = £1740 There's a calculator here Try putting in 48K (10k * 4 * 1.2) as your estimated VAT inclusive turnover for the year, and leaving the other fields blank. It gives FRS VAT as 6,960 (=1,740 * 4) and 'standard' VAT as 2,000. Last edited by pjclarke; 31st January 2011 at 12:04. I would just like to say that the US NSA is staffed exclusively by superb, upstanding professionals who deserve our utmost respect and admiration. Some slight confusion here. To calculate the FRS VAT payable, you first calculate turnover including VAT, then apply the FRS percentage to that number. Using current rates for IT contractors, that is 20% charged to the client and 14.5% returned to HMRC, if your sales figure for the quarter was £10K then you would calculate VAT return as Turnover = 10,000 VAT charged = 2,000 VAT Inclusive turnover = 12,000 VAT owed = 12,000 * 14.5% = £1740 There's a calculator here Try putting in 48K (10k * 4 * 1.2) as your estimated VAT inclusive turnover for the year, and leaving the other fields blank. It gives FRS VAT as 6,960 (=1,740 * 4) and 'standard' VAT as 2,000. Thank you for this mate. It only means that I've put in an incorrect VAT number Need to get in touch with HMRC and get it rectified I believe that if the difference is under £2K, then you don't need to do anything - just adjust it on the next return, but keep clear records so that you can show what you have done in case of an Also the recent change makes things a bit awkward. The flat rate should be based upon the headline rate for each invoice, so you may need to do two sets of calculations - one for the 17.5% invoices - and one for the 20% invoices - and them sum the two together. HMRC are reasonably relaxed about genuine VAT errors, particularly when changes in rates occur. But it's still important to make sure you put in the effort to get it right - and make appropriate corrections in the right timescale. Thank you all. I spoke on the VAT helpline and they said I can adjust it in the next VAT return submitted. However, I've written to the Error Correction team and let them know about the error and whether its possible to pay the difference before the next return. 31st January 2011 08:30 #2 Super poster Join Date Feb 2010 31st January 2011 08:39 #3 Still gathering requirements... Join Date Dec 2010 31st January 2011 08:53 #4 Super poster Join Date Feb 2010 31st January 2011 09:02 #5 Still gathering requirements... Join Date Dec 2010 31st January 2011 11:57 #6 Contractor Among Contractors Join Date Jul 2008 31st January 2011 16:19 #7 Still gathering requirements... Join Date Dec 2010 31st January 2011 19:08 #8 Super poster Join Date Dec 2008 1st February 2011 09:03 #9 Still gathering requirements... Join Date Dec 2010 1st February 2011 14:30 #10 More time posting than coding Join Date Nov 2010
{"url":"http://forums.contractoruk.com/accounting-legal/63367-submitting-vat-return-frs-box-6-a.html","timestamp":"2014-04-20T16:06:38Z","content_type":null,"content_length":"88740","record_id":"<urn:uuid:4103578b-7a48-432a-b779-047e6858290a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Plato Center Geometry Tutor Find a Plato Center Geometry Tutor ...Algebra 2 can be a difficult subject, but with practice I can help students gain confidence and knowledge of these complex mathematical concepts. I have tutored Geometry to many high school students. I have worked with many students who have difficulty understanding how to write out proofs using the different shapes and figures. 11 Subjects: including geometry, calculus, algebra 2, trigonometry ...I have a B.S. in Mathematics, a Masters in Math Education, a Masters Level Certificate in Gifted, and have Illinois Certification for teaching math 6-12. I have four years experience teaching math at the college level and 8 years experience teaching math at middle and high school level. I have been teaching and tutoring for 15 years. 24 Subjects: including geometry, calculus, algebra 1, GRE ...Lewis, Lee Strobel) and critical traditions (e.g. Bertrand Russell and Richard Dawkins). Of possible interest, I once saw Pope John Paul II during a Latin mass in St. Peter's Basilica in Vatican City, have toured the cathedrals of Europe and of England, attended the same Oxford college as Will... 57 Subjects: including geometry, English, chemistry, French ...I am a graduate of Eastern Illinois University and Roosevelt University with a Bachelors degree in Special and Elementary Education and a Masters degree in Teacher leadership. I am willing to tutor one-on-one or a small group of students because I feel that many students who struggle in school a... 28 Subjects: including geometry, reading, chemistry, English ...I have worked with learners from 1st and 2nd grade to their mid-late 20's. Elementary Math, Reading, Pre-Algebra, Algebra, Geometry, College Basic Math, GED preparation, SAT and ACT Math, Algebra2, all are areas where I can help you or your child gain confidence and develop content mastery. "If... 34 Subjects: including geometry, reading, writing, algebra 1
{"url":"http://www.purplemath.com/plato_center_il_geometry_tutors.php","timestamp":"2014-04-18T00:35:40Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:058fdb45-dc03-462b-b72d-7db9f9146577>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
The Purplemath Forums Finding constant variation of k: biologist counting deer How do I write an equation using only one variable that could be used to solve for the constant variation of k ? In addition, I am having trouble solving problems that require me to find the constant of variation. For example: A biologist counted the number of white tail deer in a 90 acre parcel of ... How do you multiply mixed numbers? Hello I need assistance multiplying mixed numbers. For example, how would I multiply 6-7/8 and 22-2/5 and 7-1/8, and then round to the nearest 1/5? Hello. Here is a problem similar to one I am having trouble solving. If I could get help solving this problem, I would be able to solve my actual problem... The number of tickets sold each day for an upcoming performance of Handel’s Messiah is given by N(x)=-0.4x²+9x+11, where x is the expected numb... Re: Quadratic Equations and Functions: number of tickets sold Okay. As far as how many tickets will be sold that day. I do not understand where to plug h into. In addition, I do not understand this information: "Since you can't sell negative numbers of tickets, find the location of the zeroes (by plugging "0" in for "N" and then solvin...
{"url":"http://www.purplemath.com/learning/search.php?author_id=3201&sr=posts","timestamp":"2014-04-16T04:26:19Z","content_type":null,"content_length":"18387","record_id":"<urn:uuid:5b118ed2-7af8-4c67-b9b5-b34fcfb1d957>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Finds centers of clusters and groups input samples around the clusters. The function kmeans implements a k-means algorithm that finds the centers of cluster_count clusters and groups the input samples around the clusters. As an output, samples matrix. The function returns the compactness measure that is computed as after every attempt. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, you can use only the core of the function, set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the ( flags = KMEANS_USE_INITIAL_LABELS ) flag, and then choose the best (most-compact) clustering. • An example on K-means clustering can be found at opencv_source_code/samples/cpp/kmeans.cpp • (Python) An example on K-means clustering can be found at opencv_source_code/samples/python2/kmeans.py Splits an element set into equivalency classes. C++: template<typename _Tp, class _EqPredicate> int partition(const vector<_Tp>& vec, vector<int>& labels, _EqPredicate predicate=_EqPredicate()) • vec – Set of elements stored as a vector. Parameters: • labels – Output vector of labels. It contains as many elements as vec. Each label labels[i] is a 0-based cluster index of vec[i] . • predicate – Equivalence predicate (pointer to a boolean function of two arguments or an instance of the class that has the method bool operator()(const _Tp& a, const _Tp& b) ). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class. The generic function partition implements an http://en.wikipedia.org/wiki/Disjoint-set_data_structure . The function returns the number of equivalency classes. [Arthur2007] Arthur and S. Vassilvitskii. k-means++: the advantages of careful seeding, Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, 2007 Help and Feedback You did not find what you were looking for? • Ask a question on the Q&A forum. • If you think something is missing or wrong in the documentation, please file a bug report.
{"url":"http://docs.opencv.org/modules/core/doc/clustering.html","timestamp":"2014-04-21T09:37:26Z","content_type":null,"content_length":"17036","record_id":"<urn:uuid:c5883039-a006-4888-8b27-ec1289de90dc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Statistical distributions on samples Christopher Jordan-Squire cjordan1@uw.... Mon Aug 15 14:40:16 CDT 2011 On Mon, Aug 15, 2011 at 8:53 AM, Andrea Gavana <andrea.gavana@gmail.com>wrote: > Hi Chris and All, > On 12 August 2011 16:53, Christopher Jordan-Squire wrote: > > Hi Andrea--An easy way to get something like this would be > > > > import numpy as np > > import scipy.stats as stats > > > > sigma = #some reasonable standard deviation for your application > > x = stats.norm.rvs(size=1000, loc=125, scale=sigma) > > x = x[x>50] > > x = x[x<200] > > > > That will give a roughly normal distribution to your velocities, as long > as, > > say, sigma<25. (I'm using the rule of thumb for the normal distribution > that > > normal random samples lie 3 standard deviations away from the mean about > 1 > > out of 350 times.) Though you won't be able to get exactly normal errors > > about your mean since normal random samples can theoretically be of any > > size. > > > > You can use this same process for any other distribution, as long as > you've > > chosen a scale variable so that the probability of samples being outside > > your desired interval is really small. Of course, once again your random > > errors won't be exactly from the distribution you get your original > samples > > from. > Thank you for your suggestion. There are a couple of things I am not > clear with, however. The first one (the easy one), is: let's suppose I > need 200 values, and the accept/discard procedure removes 5 of them > from the list. Is there any way to draw these 200 values from a bigger > sample so that the accept/reject procedure will not interfere too > much? And how do I get 200 values out of the bigger sample so that > these values are still representative? FWIW, I'm not really advocating a truncated normal so much as making the standard deviation small enough so that there's no real difference between a true normal distribution and a truncated normal. If you're worried about getting exactly 200 samples, then you could sample N with N>200 and such that after throwing out the ones that lie outside your desired region you're left with M>200. Then just randomly pick 200 from those M. That shouldn't bias anything as long as you randomly pick them. (Or just pick the first 200, if you haven't done anything to impose any order on the samples, such as sorting them by size.) But I'm not sure why you'd want exactly 200 samples instead of some number of samples close to 200. > Another thing, possibly completely unrelated. I am trying to design a > toy Latin Hypercube script (just for my own understanding). I found > this piece of code on the web (and I modified it slightly): > def lhs(dist, size=100): > ''' > Latin Hypercube sampling of any distrbution. > dist is is a scipy.stats random number generator > such as stats.norm, stats.beta, etc > parms is a tuple with the parameters needed for > the specified distribution. > :Parameters: > - `dist`: random number generator from scipy.stats module. > - `size` :size for the output sample > ''' > n = size > perc = numpy.arange(0.0, 1.0, 1.0/n) > numpy.random.shuffle(perc) > smp = [stats.uniform(i,1.0/n).rvs() for i in perc] > v = dist.ppf(smp) > return v > Now, I am not 100% clear of what the percent point function is (I have > read around the web, but please keep in mind that my statistical > skills are close to minus infinity). From this page: > http://www.itl.nist.gov/div898/handbook/eda/section3/eda362.htm The ppf is what's called the quantile function elsewhere. I do not know why scipy calls it the ppf/percent point function. The quantile function is the inverse of the cumulative density function (cdf). So dist.ppf(z) is the x such that P(dist <= x) = z. Roughly. (Things get slightly more finicky if you think about discrete distributions because then you have to pick what happens at the jumps in the cdf.) So dist.ppf(0.5) gives the median of dist, and dist.ppf(0.25) gives the lower/first quartile of dist. > I gather that, if you plot the results of the ppf, with the horizontal > axis as probability, the vertical axis goes from the smallest to the > largest value of the cumulative distribution function. If i do this: > numpy.random.seed(123456) > distribution = stats.norm(loc=125, scale=25) > my_lhs = lhs(distribution, 50) > Will my_lhs always contain valid values (i.e., included between 50 and > 200)? I assume the answer is no... but even if this was the case, is > this my_lhs array ready to be used to setup a LHS experiment when I > have multi-dimensional problems (in which all the variables are > completely independent from each other - no correlation)? I'm not really sure if the above function is doing the lhs you want. To answer your question, it won't always generate values within [50,200]. If size is large enough then you're dividing up the probability space evenly. So even with the random perturbations (whose use I don't really understand), you'll ensure that some of the samples you get when you apply the ppf will correspond to the extremely low probability samples that are <50 or >200. -Chris JS My apologies for the idiocy of the questions. > Andrea. > "Imagination Is The Only Weapon In The War Against Reality." > http://xoomer.alice.it/infinity77/ > >>> import PyQt4.QtGui > Traceback (most recent call last): > File "<interactive input>", line 1, in <module> > ImportError: No module named PyQt4.QtGui > >>> > >>> import pygtk > Traceback (most recent call last): > File "<interactive input>", line 1, in <module> > ImportError: No module named pygtk > >>> > >>> import wx > >>> > >>> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20110815/aeb40e28/attachment.html More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2011-August/058061.html","timestamp":"2014-04-17T07:04:21Z","content_type":null,"content_length":"10497","record_id":"<urn:uuid:a06e7a38-2b65-4426-8e1e-8174b0b01771>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Feedback control of NMR systems: a control-theoretic perspective Claudio Altafini International School for Advanced Studies Trieste, Italy MIT, January 2005 ­ p. 1/5 A quote From R. Ernst, G. Bodenhausen and A. Wokaun "Principles of NMR in one and two dimensions" Claredon Press 1987, p.3 "spectroscopy stands in close analogy to techniques which measure the transfer function of an electronic device. (...) It is well known that the transfer function completely characterizes a linear time-independent system. Many of the concepts of spectroscopy stems from the consideration of linearly or approximately linear systems for which a simple and elegant mathematical treatement is possible. (...) It has been known for many years that the free induction decay is equivalent to the impulse response for linear MIT, January 2005 ­ p. 2/5
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/488/2546871.html","timestamp":"2014-04-20T15:54:50Z","content_type":null,"content_length":"8085","record_id":"<urn:uuid:a625adcb-ec25-4c88-9a90-902dbbdcf03a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Abstract Harmonic Analysis This classic monograph is the work of a prominent contributor to the field of harmonic analysis. Geared toward advanced undergraduates and graduate students, it focuses on methods related to Gelfand's theory of Banach algebra. Prerequisites include a knowledge of the concepts of elementary modern algebra and of metric space topology. The first three chapters feature concise, self-contained treatments of measure theory, general topology, and Banach space theory that will assist students in their grasp of subsequent material. An in-depth exposition of Banach algebra follows, along with examinations of the Haar integral and the deduction of the standard theory of harmonic analysis on locally compact Abelian groups and compact groups. Additional topics include positive definite functions and the generalized Plancherel theorem, the Wiener Tauberian theorem and the Pontriagin duality theorem, representation theory, and the theory of almost periodic functions. Reprint of the D. Van Nostrand Co., New York, 1953 edition. Availability Usually ships in 24 to 48 hours ISBN 10 0486481239 ISBN 13 9780486481234 Author/Editor Lynn H. Loomis Page Count 206 Dimensions 5 3/8 x 8 1/2
{"url":"http://store.doverpublications.com/0486481239.html","timestamp":"2014-04-18T16:32:27Z","content_type":null,"content_length":"43562","record_id":"<urn:uuid:09e9505a-5252-4928-8a1c-3e56ccaf6690>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
newbie needs help understanding a tachometer's "monitor" Was looking for a better understanding of what the componets are actually doing. I think you covereed that thanks, but have a few more. Does the fact that the resistor is reading slightly more resistance (3.6 KOhms instead of 3.3 Kohms) mean that the tach will be reading slightly slower. No. Resistors usually have a 5% or 10% tolerance which is the +/- error in their intended values. The strength of the input signal won't affect the calibration a long as it is in the right range. The Tachometer probably feeds the signal through a transistor circuit to trigger a controlled current pulse which is smoothed out by filter circuits to a relatively constant current. The physical meter is then an ammeter. In short only the frequency of the pulses affects the tachometer. Is the ignition coil more like a step up transformer from 12v to 25k+v rather than a capacitor? That would make sense. Yes, that's correct. It is a step up transformer. But designed for pulsed output rather than sinusoidal. What does the value of .068 MFD really mean? What is a micro ferrand? That's farad. A farad is a unit of capacitance which relates the charge stored in the capacitor to the voltage. If C is the capacitance then the charge is Q=CV where V is the voltage. So the capacitor ensures a D/C current going to the tach right? The black band denotes the negative side of the capacitor right? No the capacitor ensures that only AC is going to the tach, in particular only the transient pulse and not the 12V steady voltage that the coil uses to "charge up". I'll need to read up more on resistors...I don't think I have the right understanding of them, nor of the difference between Voltage and Current. Here's my "quick-n-dirty" overview of electronics. You can imagine the various electronic components have a mechanical analogue. The flow of charge corresponds to mechanical motion (say a belt in a pulley system). The unit of charge is coulombs but you can think of that as an analogue of distance traveled by the pulley belts. (with belts you see the need for a circuit but it may be better to think more in terms of guided chains or cables which can be "pushed" as well as "pulled".) Current is the charge flow rate, 1 Ampere is 1 coulomb per second. Take that as a velocity analogue. Voltage is the force analogue (Newtons * m/s = watts; volts * amps = volts * coulombs/sec = watts). In the analogy assume the "belts" have almost zero mass so any force if not resisted or mediated by inertia will create a great deal of motion. Resistors are the friction analogue. They obey Ohm's law V = IR. That's the mechanical analogue of a velocity dependent drag, Force = resistance * speed. Viewed another way resistors dissipate energy VI = Watts power, so I^2 R = heat generated by a resistor for a given current. Capacitors act like springs but the capacitance is comparable to the reciprocal of the spring constant. C = Coulombs per Volt, compares to 1/k = meters per newton. 1/C = Volts per coulomb corresponds to the force per distance constant of a spring. Inductors, as I mentioned, are an inertial analog. They can be thought of as flywheels in the pulley analogue. Their units are Henry's (L symbol) which gives the voltage for a rate of change of current V = L * dI/dt which corresponds to Newton's F = mass * acceleration in the mechanical analogue. Now AC you can think of as oscillatory motion of the mechanical analogue (like a belt driving the agitator of a washing machine ) and DC is steady one way motion (spin cycle!) So you can imagine the "monitor circuit" as running a belt from the pulsing of the coil through a pulley with a tight spring on it (low cap = tight spring) and then through a damper (high resistance means its highly viscous). The spring ensures only a "kick" is transmitted through the circuit and the damper ensures the "kick" doesn't move very far and doesn't "bounce" producing false signals. The coil is a transformer but you can think of it as a heavy flywheel. Steady belt force brings it up to speed and a sudden locking of the belt causes it to transmit a strong pulsed force along a secondary belt. A practical analogue is say an impact wrench. Unfortunately this mechanical analogue breaks down when you want to describe a transformer. For sinusoidal AC you can sort of get by thinking in terms of a gear system trading off speed and force but there are subtleties not represented by the analogue, most importantly the fact that the transformer output voltage is proportional to the change in input current. It can't work in DC the way a gear works for continuous motion. I'll try and think up some mechanism which mimics a transformer. But other than that you have my "quick-n-dirty" electronics review.
{"url":"http://www.physicsforums.com/showthread.php?p=3773176","timestamp":"2014-04-21T12:14:17Z","content_type":null,"content_length":"54740","record_id":"<urn:uuid:4e062b78-9061-4fae-a3ae-f4ee545620d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
This paper describes the scale effect in experiments on the stability of armor units from the point of view of the wave forces. The relationship between the drag coefficient CD and the stability coefficient KD of Hudson formula is theoretically derived to be that KD « CD"3 on the condition that the inertia force is negligible. Three kinds experiments were performed by using various sizes of Tetrapods ranging from 16 g to 6800 g according to the Froude law for scaling, 1) to measure wave force on an armor unit placed in an armor layer of a breakwater, 2) to determine the drag and inertia coefficients in wave fields, and 3) to determine the drag coefficient in a steady flow. It is found that the wave force in the small-scale experiments is relatively larger than that in the large-scale experiments. As the wave height increases, the drag force becomes predominant in comparison with the inertia force. It is concluded that the scale effect of the wave force on armor units is mainly due to the change of the relative drag force number as a function of Reynolds number. armor units; scale effect; wave force Full Text: This work is licensed under a Creative Commons Attribution 3.0 License
{"url":"http://journals.tdl.org/icce/index.php/icce/article/view/4557/0","timestamp":"2014-04-19T05:25:30Z","content_type":null,"content_length":"15513","record_id":"<urn:uuid:9482804c-3bf5-40be-9753-4eff3d08f672>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Lawrence, MA Algebra Tutor Find a Lawrence, MA Algebra Tutor ...He deserves all the respect in the world for serving his country, however the disappointment was evident. If what you do in the military during your time on active duty matters to you, take the time and invest in studying for the ASVAB. I can help you reach your goal. 6 Subjects: including algebra 1, Spanish, prealgebra, GED ...I have taught math for 11 years with a special education inclusion model. I have had both special needs students in standard math classes and special needs students in supported classes. The special needs of my students have included ADD/ADHD, non-verbal LD, English language learners, Tourettes syndrome, Asperger's syndrome and math anxiety. 8 Subjects: including algebra 2, algebra 1, reading, GED ...I have also tutored for the SATs. I have my bachelor's degree in mathematics and a master's in education. I currently teach high school math and teach Algebra I, Geometry, SAT prep and a real world math course. 4 Subjects: including algebra 1, algebra 2, geometry, SAT math ...I hold myself responsible for the student's understanding of the subject. Therefore, I will seek feedback and strive to improve in areas that are unique to each student. While keeping the student challenged, I believe in a relaxed atmosphere which is most conducive to learning. 7 Subjects: including algebra 2, geometry, prealgebra, algebra 1 I am an Engineer with a Master's degree in Electrical Engineering from an Ivy league school - UPenn, PA. I did my Bachelor's in EE and Applied Mathematics from Stony Brook University, NY. During my college years I tutored Statistics, Algebra, Chemistry and Electrical Engineering Circuits courses and received university credits and/or pay for doing so. 22 Subjects: including algebra 2, algebra 1, physics, calculus Related Lawrence, MA Tutors Lawrence, MA Accounting Tutors Lawrence, MA ACT Tutors Lawrence, MA Algebra Tutors Lawrence, MA Algebra 2 Tutors Lawrence, MA Calculus Tutors Lawrence, MA Geometry Tutors Lawrence, MA Math Tutors Lawrence, MA Prealgebra Tutors Lawrence, MA Precalculus Tutors Lawrence, MA SAT Tutors Lawrence, MA SAT Math Tutors Lawrence, MA Science Tutors Lawrence, MA Statistics Tutors Lawrence, MA Trigonometry Tutors Nearby Cities With algebra Tutor Andover, MA algebra Tutors Billerica algebra Tutors Dracut algebra Tutors Haverhill, MA algebra Tutors Londonderry, NH algebra Tutors Lowell, MA algebra Tutors Lynn, MA algebra Tutors Methuen algebra Tutors Nashua, NH algebra Tutors North Andover algebra Tutors Peabody, MA algebra Tutors Salem, NH algebra Tutors Somerville, MA algebra Tutors South Lawrence, MA algebra Tutors Tewksbury algebra Tutors
{"url":"http://www.purplemath.com/Lawrence_MA_Algebra_tutors.php","timestamp":"2014-04-20T06:56:06Z","content_type":null,"content_length":"23939","record_id":"<urn:uuid:3ed82268-2c0c-42d7-b370-4dfd91aac64e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: ranking with weights [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: ranking with weights From Steven Samuels <sjhsamuels@earthlink.net> To statalist@hsphsun2.harvard.edu Subject Re: st: ranking with weights Date Tue, 2 Dec 2008 13:53:40 -0500 Cindy, What are the analytic units (people? regions?). What are the "weights"? What is "expenditure"? How is it measured. What do you mean that some regions are "less sampled" than others. It's not clear, for example, if this is a sample, and if so, of what? So, please describe the study design in detail. Last question: what is the purpose of the ranking? On Dec 2, 2008, at 12:54 PM, Cindy Gao wrote: I am trying to find a way to rank weighted data (since the egen function -rank- does not work with weights). A simple way would be order the data in terms of variable that I have interest in (monthly expenditure) and then create a new variable like -g rank1=sum(weight)-. But, there is problem. Some of my observations are "tied" as they have the same level of expenditure. Using the simple method I mention means that some observations are ranked above others even though they have same level of expenditure. This is a problem as the weights are large so you find that 2 observations are ranked with bug gap in between even though same level of expenditure. It is even bigger problem because the weights might be correlated with some other variables I am interested in (like region, since some regions are less sampled than other). I also try multiplying the expenditure ranking by the weight, but this gives wrong results (for example they do not add up to weighted total). Can anyone help? In other words, I would like for all observations with same expenditure to have same rank, which I assume would be some average of all the weighted observations having that same expenditure. I include a sample dataset below: expenditure weighting rank rank1 weighted_rank 12 1065 2.5 1406 ??? 12 98 2.5 1504 * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-12/msg00095.html","timestamp":"2014-04-18T21:56:42Z","content_type":null,"content_length":"8280","record_id":"<urn:uuid:5c04ce9f-4874-4346-ba8c-0e28c0c18e9d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Multi-dimensional array product Date: Aug 16, 2009 8:24 PM Author: Ed Stein7997 Subject: Multi-dimensional array product I have a question about how to efficiently form multi-dimensional arrays. Lets say I have two 2-d arrays (i.e. matrices) a and b; a(i,j) sized i=1..M,j=1..N; and b(i,j) sized i=1..K,j=1..L. How can I form the 4-d array w(m,n,k,l)=a(m,n)*b(k,l) which has a unique entry for every m,n,k,l; i.e. there are M*N*K*L unique entries? I want to form w in a 'vectorized' way without use of loops. A conventional matrix product like a*b doesn't work of course since the inner dimensions of a and b aren't the same. This isn't the same as a Kronecker product since that remains two-dimensional. Thanks for any assistance. Ed Stein
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=6817266","timestamp":"2014-04-20T01:38:33Z","content_type":null,"content_length":"1670","record_id":"<urn:uuid:c4eec447-0787-4aa8-adb6-adf1c22da1dd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
st: re: Is there a similar output command as outreg for Summary? [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: re: Is there a similar output command as outreg for Summary? From Kit Baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: re: Is there a similar output command as outreg for Summary? Date Sun, 15 Mar 2009 14:22:56 -0400 Glenn asks Is there a similar output command as outreg for Summary? I want to first use command summary and then output selected items (such as mean median p1 p99 STD) for many variables.. Is there a such output command? One solution: sysuse auto,clear tabstat price mpg headroom trunk, stat(p1 p50 p99 mean sd) col(stat) save mat s = r(StatTotal) outtable using glenn, mat(s) replace findit outtable and install if not already available. Kit Baum | Boston College Economics and DIW Berlin | http://ideas.repec.org/e/pba1.html An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-03/msg00713.html","timestamp":"2014-04-24T00:03:58Z","content_type":null,"content_length":"6048","record_id":"<urn:uuid:fb407521-4092-40b2-a6b7-dfd6bd54242b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] Choosing sub array based on values in a column Vincent Davis vincent@vincentdavis.... Mon Mar 22 22:00:04 CDT 2010 I feel kinda stupid as I think this must be easier than I am making it. Below my question you will see an answer I got to a question that I thought I would be able to complete the last steps my self I was wrong :) So if I have an array Y = np.rec.array([(1.0, 0.0, 3.0, 3.5), (0.0, 0.0, 6.0, 6.5), (1.0, 1.0, 9.0, 9.5)], dtype=[('var1', '<f8'), ('var2', '<f8'), ('var3', '<f8'), ('var4', '<f8')]) do this works like I would expect >>>array([(3.0, 3.5), (9.0, 9.5)], dtype=[('var3', '<f8'), ('var4', '<f8')]) But I would like to do this, >>> Y[['var3','var4']][Y['var1']==0 and Y['var2']==0] Traceback (most recent call last): File "<string>", line 1, in <fragment> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I tried some any() and all() combination but nothing worked. What s the right way to go about this? This answer was received on the mail list from Skipper Seabold If you have a rec array Y = np.rec.array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)], dtype=[('var1', ''var2','var3']) You can access the rows like, Note the list within []. If you want a "normal" array, I like this way that Pierre recently pointed out. 3 is the number of columns, and it fills in the number of rows. note the tuple for the view, if they're all floats. Taking a view might not work if var# have different types, like ints and floats. If you want the mean of the rows (mean over the columns axis = 1) Some shortcuts. Also, for now, the columns will given back to you in the order they're in in the array no matter which way you ask for them. A patch has been submitted for returning the order you ask that I hope gets picked *Vincent Davis 720-301-3003 * my blog <http://vincentdavis.net> | -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20100322/54d02eaf/attachment.html More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-March/024797.html","timestamp":"2014-04-16T20:19:08Z","content_type":null,"content_length":"5182","record_id":"<urn:uuid:4069ea9b-84ca-430f-bea7-d20d5930bb9b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
NAG Library NAG Library Routine Document 1 Purpose D01GBF returns an approximation to the integral of a function over a hyper-rectangular region, using a Monte–Carlo method. An approximate relative error estimate is also returned. This routine is suitable for low accuracy work. 2 Specification SUBROUTINE D01GBF ( NDIM, A, B, MINCLS, MAXCLS, FUNCTN, EPS, ACC, LENWRK, WRKSTR, FINEST, IFAIL) INTEGER NDIM, MINCLS, MAXCLS, LENWRK, IFAIL REAL (KIND=nag_wp) A(NDIM), B(NDIM), FUNCTN, EPS, ACC, WRKSTR(LENWRK), FINEST EXTERNAL FUNCTN 3 Description D01GBF uses an adaptive Monte–Carlo method based on the algorithm described in Lautrup (1971) . It is implemented for integrals of the form: $∫ a1 b1 ∫ a2 b2 ⋯ ∫ an bn f x1,x2,…,xn dxn ⋯ dx2 dx1 .$ Upon entry, unless has been set to the minimum value , the routine subdivides the integration region into a number of equal volume subregions. Inside each subregion the integral and the variance are estimated by means of pseudorandom sampling. All contributions are added together to produce an estimate for the whole integral and total variance. The variance along each coordinate axis is determined and the routine uses this information to increase the density and change the widths of the sub-intervals along each axis, so as to reduce the total variance. The total number of subregions is then increased by a factor of two and the program recycles for another iteration. The program stops when a desired accuracy has been reached or too many integral evaluations are needed for the next cycle. 4 References Lautrup B (1971) An adaptive multi-dimensional integration procedure Proc. 2nd Coll. Advanced Methods in Theoretical Physics, Marseille 5 Parameters 1: NDIM – INTEGERInput On entry: $n$, the number of dimensions of the integral. Constraint: ${\mathbf{NDIM}}\ge 1$. 2: A(NDIM) – REAL (KIND=nag_wp) arrayInput On entry: the lower limits of integration, ${a}_{i}$, for $\mathit{i}=1,2,\dots ,n$. 3: B(NDIM) – REAL (KIND=nag_wp) arrayInput On entry: the upper limits of integration, ${b}_{i}$, for $\mathit{i}=1,2,\dots ,n$. 4: MINCLS – INTEGERInput/Output On entry : must be set □ either to the minimum number of integrand evaluations to be allowed, in which case ${\mathbf{MINCLS}}\ge 0$; □ or to a negative value. In this case, the routine assumes that a previous call had been made with the same parameters NDIM, A and B and with either the same integrand (in which case D01GBF continues calculation) or a similar integrand (in which case D01GBF begins the calculation with the subdivision used in the last iteration of the previous call). See also WRKSTR. On exit: contains the number of integrand evaluations actually used by D01GBF. 5: MAXCLS – INTEGERInput On entry: the maximum number of integrand evaluations to be allowed. In the continuation case this is the number of new integrand evaluations to be allowed. These counts do not include zero integrand values. □ ${\mathbf{MAXCLS}}>{\mathbf{MINCLS}}$; □ ${\mathbf{MAXCLS}}\ge 4×\left({\mathbf{NDIM}}+1\right)$. 6: FUNCTN – REAL (KIND=nag_wp) FUNCTION, supplied by the user.External Procedure must return the value of the integrand at a given point. The specification of FUNCTION FUNCTN ( NDIM, X) REAL (KIND=nag_wp) FUNCTN INTEGER NDIM REAL (KIND=nag_wp) X(NDIM) 1: NDIM – INTEGERInput On entry: $n$, the number of dimensions of the integral. 2: X(NDIM) – REAL (KIND=nag_wp) arrayInput On entry: the coordinates of the point at which the integrand $f$ must be evaluated. must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D01GBF is called. Parameters denoted as be changed by this procedure. 7: EPS – REAL (KIND=nag_wp)Input On entry: the relative accuracy required. Constraint: ${\mathbf{EPS}}\ge 0.0$. 8: ACC – REAL (KIND=nag_wp)Output On exit : the estimated relative accuracy of 9: LENWRK – INTEGERInput On entry : the dimension of the array as declared in the (sub)program from which D01GBF is called. For maximum efficiency, should be about is given the value then the subroutine uses only one iteration of a crude Monte–Carlo method with sample points. Constraint: ${\mathbf{LENWRK}}\ge 10×{\mathbf{NDIM}}$. 10: WRKSTR(LENWRK) – REAL (KIND=nag_wp) arrayInput/Output On entry : if must be unchanged from the previous call of D01GBF – except that for a new integrand must be set to . See also On exit: contains information about the current sub-interval structure which could be used in later calls of D01GBF. In particular, ${\mathbf{WRKSTR}}\left(j\right)$ gives the number of sub-intervals used along the $j$th coordinate axis. 11: FINEST – REAL (KIND=nag_wp)Input/Output On entry: must be unchanged from a previous call to D01GBF. On exit: the best estimate obtained for the integral. 12: IFAIL – INTEGERInput/Output On entry must be set to $-1\text{ or }1$ . If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if ${\mathbf{IFAIL}}e {\mathbf{0}}$ on exit, the recommended value is When the value $-\mathbf{1}\text{ or }1$ is used it is essential to test the value of IFAIL on exit. On exit unless the routine detects an error or a warning has been flagged (see Section 6 6 Error Indicators and Warnings If on entry , explanatory error messages are output on the current error message unit (as defined by Errors or warnings detected by the routine: On entry, ${\mathbf{NDIM}}<1$, or ${\mathbf{MINCLS}}\ge {\mathbf{MAXCLS}}$, or ${\mathbf{LENWRK}}<10×{\mathbf{NDIM}}$, or ${\mathbf{MAXCLS}}<4×\left({\mathbf{NDIM}}+1\right)$, or ${\mathbf{EPS}}<0.0$. was too small for D01GBF to obtain the required relative accuracy . In this case D01GBF returns a value of with estimated relative error , but will be greater than . This error exit may be taken before nonzero integrand evaluations have actually occurred, if the routine calculates that the current estimates could not be improved before was exceeded. 7 Accuracy A relative error estimate is output through the parameter . The confidence factor is set so that the actual error should be less than 90% of the time. If you want a higher confidence level then a smaller value of should be used. The running time for D01GBF will usually be dominated by the time used to evaluate , so the maximum time that could be used is approximately proportional to For some integrands, particularly those that are poorly behaved in a small part of the integration region, D01GBF may terminate with a value of which is significantly smaller than the actual relative error. This should be suspected if the returned value of is small relative to the expected difficulty of the integral. Where this occurs, D01GBF should be called again, but with a higher entry value of (e.g., twice the returned value) and the results compared with those from the previous call. The exact values of on return will depend (within statistical limits) on the sequence of random numbers generated within D01GBF by calls to . Separate runs will produce identical answers. 9 Example This example calculates the integral $∫01∫01∫01∫014x1x3exp2x1x3 1+x2+x4 2dx1dx2dx3dx4=0.575364.$ 9.1 Program Text 9.2 Program Data 9.3 Program Results
{"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/D01/d01gbf.html","timestamp":"2014-04-16T22:15:22Z","content_type":null,"content_length":"26444","record_id":"<urn:uuid:9ad524c9-5b85-461b-a637-e8dec1d6878f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical .NET Fast, Faster, Fastest One of my favourite pass-times is getting the most out of a piece of code. Today, I got an opportunity to play a bit from a comment on Rico Mariani’s latest performance challenge: how to format the time part of a DateTime structure in the form “hh:mm:ss.mmm“ in the fastest possible way. Apparently, Alois Kraus and Greg Young have been at it for a bit. Their solution already gives us more than a five-fold increase in speed compared to the simplest solution using String.Format. But could we do even better? As it turns out, we could. Here’s the code that we can improve: long ticks = time.Ticks; int hour = (int)((ticks / 0x861c46800L)) % 24; int minute = (int)((ticks / 0x23c34600L)) % 60; int second = (int)(((ticks / 0x989680L)) % 60L); int ms = (int)(((ticks / 0x2710L)) % 0x3e8L); The tick count is the number of 100 nanosecond (ns) intervals since the zero time value. For each of the hour, minute, second and millisecond parts, this code divides the number of ticks by the number of 100ns intervals in that time span, and reduces that number to the number of units in the larger time unit using a modulo. So, for example, there are 60x60x10000000 ticks in an hour, which is 0x861c46800 in hex, and there are 24 hours in a day. What makes the above code less than optimal is that it starts from the number of ticks to compute every time part. This is a long (64 bit) value. 64-bit calculations are slower than 32-bit calculations. Moreover, divisions (or modulos) are much more expensive than multiplications. We can fix both these issues by first finding the total number of milliseconds in the day. That number is always smaller than 100 million, so it fits in an int. We can calculate the number of hours with a simple division. We can “peel off” the hours from the total number of milliseconds in the day to find the total milliseconds remaining in the hour. From this, we can calculate the number of minutes with a simple division, and so on. The improved code looks like this: long ticks = time.Ticks; int ms = (int)((ticks / 10000) % 86400000); int hour = ms / 3600000; ms -= 3600000*hour; int minute = ms / 60000; ms -= 60000 * minute; int second = ms / 1000; ms -= 1000*second; This change decreases the running time by about 28 percent from the fastest previous solution. We can shave off another 4% or so by replacing the modulo calculation by a subtraction in the code that computes the digits. The question now is: can we do even better, still? Once again, the answer is: Yes: by as much as another 25%! The single most time consuming calculation is a division. Dividing by large numbers is an order of magnitude slower than multiplying. For smaller numbers, the difference is smaller, but still significant. Since we know the numbers we are dividing by in advance, we can do a little bit shifting magic and eliminate the divisions altogether. Let’s take dividing by 10 as an example. The basic idea is to approximate the ratio 1/10 by another rational number with a power of two as the denominator. Instead of dividing, we can then multiply by the numerator, and shift by the exponent in the denominator. Since shifting chops off digits, it effectively rounds down the result of the division, so we always have to find an approximation that is larger than the ratio. We see, for example, that 13/128 is a good approximation to 1/10. We can rewrite x/10 as (x*13) >> 7 as long as x is not too large. We run into trouble as soon as the error time x is larger than 1. In this case, that happens when x is larger than 13/(13-12.8) = 65. Fortunately, this is larger than the number of hours in a day, or the number of minutes in an hour, so we can use it for most calculations in our code. It won’t work for numbers up to a 100, so to get the second digit of the millisecond, we need the next approximation, 205/2048, which is good for values up to 10,000. To get the first digit of the milliseconds, we need to divide by 100. We find that 41/4096 works nicely. Implementing this optimization, we go from (for example): *a = (char)(hour / 10 + ’0′); *a = (char)(hour % 10 + ’0′); int temp = (hour * 13) >> 7; *a = (char)(temp + ’0′); *a = (char)(hour – 10 * temp + ’0′); Our running time for 1 million iterations goes down from 0.38s to 0.28s, a savings of almost 18% compared to the original. The larger divisors give us a bit of a challenge. To get the number of seconds, we divide a number less than 60000 by 1000. Doing this the straight way has us multiplying by 536871, which would require a long value for the result of the multiplication. We can get around this once we realize that 1000 = 8*125. So if we shift the number of milliseconds by 3, we only need to divide by 125. As an added benefit, the numbers we’re multiplying are always less than 7500, so our multiplier can be larger. This gives us the simple expression good for numbers up to almost 4 million: ((x >> 3) * 67109) >> 23. The same trick doesn’t work for getting the minutes and hours, but it does allow us to fit the intermediate result into a long. We can use the Math.BigMul method to perform the calculation The final code is given below. It is doubtful it can be improved by much. It runs in as little as 0.221s for one million iterations, 2.5 times faster than the previous fastest code and over 25 times faster than the original. private unsafe static string FormatFast6(DateTime time) fixed (char* p = dateData) long ticks = time.Ticks; char* a = p; int ms = (int)((ticks / 10000) % 86400000); int hour = (int)(Math.BigMul(ms >> 7, 9773437) >> 38); ms -= 3600000 * hour; int minute = (int)((Math.BigMul(ms >> 5, 2290650)) >> 32); ms -= 60000 * minute; int second = ((ms >> 3) * 67109) >> 23; ms -= 1000 * second; int temp = (hour * 13) >> 7; *a = (char)(temp + ’0′); *a = (char)(hour – 10 * temp + ’0′); a += 2; temp = (minute * 13) >> 7; *a = (char)(temp + ’0′); *a = (char)(minute – 10 * temp + ’0′); a += 2; temp = (second * 13) >> 7; *a = (char)(temp + ’0′); *a = (char)(second – 10 * temp + ’0′); a += 2; temp = (ms * 41) >> 12; *a = (char)(temp + ’0′); ms -= 100 * temp; temp = (ms * 205) >> 11; *a = (char)(temp + ’0′); ms -= 10 * temp; *a = (char)(ms – 10 * temp + ’0′); return new String(dateData); 3 thoughts on “Fast, Faster, Fastest” 1. I think the last line should be “*a = (char) (ms + ’0′);” 2. Nice article; I’m impressed. But I think there’s a minor error in the sentence that discusses multiplying by 205/2048 instead of dividing by 10. This substitution is suitable only for values up to about 1000, not 10,000 as the article claims. Incorrect results would be obtained for numerator values that were at least 205/204.8, or 205/0.2, or 1025. (To be precise, 1029 is the lowest positive value for which the substitute operations give a different result than dividing by 10.) 3. Have you tried to replace the (temp * 10) with (temp << 3 + temp<<1) ? temp * 10 == (temp*8 + temp*2) = (temp << 3 + temp<<1)
{"url":"http://www.extremeoptimization.com/Blog/index.php/2006/04/fast-faster-fastest/comment-page-1/","timestamp":"2014-04-18T13:09:14Z","content_type":null,"content_length":"32498","record_id":"<urn:uuid:ab7ca65c-080b-42df-ae2d-01abea9c9697>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Passaic Math Tutor Find a Passaic Math Tutor ...My work in 16 AP classes has expanded my vocabulary tremendously; I received a 740 on the SAT Reading section, a section which tests your vocabulary skills. I have taken a deep interest in grammar as a result of my involvement in AP English. You could call me a little obsessed. 43 Subjects: including precalculus, trigonometry, sociology, algebra 1 ...Geometry is often a very difficult subject for students because it seems to be such a dramatic shift from the math they have learned previously. Learning Geometry is, however, a very important step in the development of a student's mathematical ability. Geometry teaches mental discipline and planning, two very useful skills. 10 Subjects: including geometry, algebra 1, algebra 2, American history ...I have been teaching for 10 years in NYC public school system, grades 9-12. I am currently teaching geometry. I am a NYS certified math teacher for grades 7-12. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I hope that my students will begin to look forward to math as something exciting.I have a year of experience teaching Algebra 1. I have taught Algebra 2 for a year. I have a year of experience teaching Geometry in a high school. 7 Subjects: including prealgebra, precalculus, trigonometry, algebra 1 ...My name is Danny, and I'd like to offer my services as a tutor for the SAT and ACT, as well as general schoolwork for K-12. As a product of the NYC school system and with over five years of tutoring experience, I empathize with college-bound students who are enduring the stresses of a complex an... 41 Subjects: including algebra 1, ACT Math, reading, probability
{"url":"http://www.purplemath.com/passaic_nj_math_tutors.php","timestamp":"2014-04-16T04:31:14Z","content_type":null,"content_length":"23471","record_id":"<urn:uuid:04dc035f-96da-406c-9764-20b6f473f2e4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Immersion' printed from http://nrich.maths.org/ To begin, we can work out the volume of each solid: Solid 1) A sphere of radius $1 \ \mathrm{cm}$ : $$\textrm{[Volume]} = \frac{4}{3}\pi r^3 = \frac{4}{3}\pi \ \mathrm{cm^3}$$ Solid 2) A solid cylinder of height $\frac{4}{3} \ \mathrm{cm}$ and radius $1 \ \mathrm{cm}$: $$\textrm{[Volume]} = \pi r^2 h = \frac{4}{3}\pi \ \mathrm{cm^3}$$ Solid 3) A solid circular cone of base radius $1 \ \mathrm{cm}$ and height $4 \ \mathrm{cm}$. $$\textrm{[Volume]} = \frac{1}{3} \times \textrm{[Volume of a cylinder]} = \frac{1}{3}\pi r^2 h = \frac{4}{3}\pi \ \mathrm{cm^3}$$ Solid 4) A solid cylinder of height $\frac{4}{9}\ \mathrm{cm}$ with a hole drilled through it, leaving an annular cross-section with interal and external radii $2 \ \mathrm{cm}$ and $1 \ \mathrm{cm} \textrm{[Volume]} & = \textrm{[Volume of the outer cylinder]} - \textrm{[Volume of the inner cylinder]} \\ & = \pi (r_{outer}^{2} - r_{inner}^{2}) h = \frac{4}{3}\pi \mathrm{\ cm^3}\\ We can now work out what the axes represent: The maximum $y$ value reached by all curves is identical at around 4.2, it is in fact equal to $\frac{4}{3}\pi$. This suggests the y axis is a measure of volume, all solids will eventually displace a fluid of $\frac{4}{3}\pi \mathrm{\ cm^3}$. The $y$ axis represents the volume of fluid displaced (or the volume of the solid immersed) in $\mathrm{cm^3}$. The $x$-axis represents the time elapsed in minutes since lowering began. Simon now works out which curve matches which solid, with some clear reasoning: Curve 1: The time taken to fully immerse the object $\approx 1.3 \mathrm{\ minutes}$ The volume displaced varies linearly with time, this must therefore represent: Solid 2 (cylinder of height $\frac{4}{3} \mathrm{\ cm}$ and radius $1 \mathrm{\ cm}$) lowered vertically Solid 4 (A solid cylinder of height $\frac{4}{9} \mathrm{\ cm}$ with a hole drilled through it) lowered vertically. The time taken to fully immerse the object = $\frac{4}{3}\pi \mathrm{\ minutes}$ Curve 1 is therefore a cylinder of height $\frac{4}{3} \mathrm{\ cm}$ and radius $1 \mathrm{\ cm}$ lowered vertically Curve 2 and Curve 3: The time taken to fully immerse the object = $2 \mathrm{\ minutes}$ This could therefore be: Solid 2 lowered sideways or solid 1 lowered in any orientation. Solid 2 lowered sideways would initially displace a greater fluid than solid 1, it can therefore be seen that curve 2 corresponds to solid 2 lowered sideways and curve 3 to solid 1. Curve 4 and Curve 5: The time taken to fully immerse the object = $4 \mathrm{\ minutes}$ This could therefore be: Solid 3 lowered vertically or solid 4 lowered sideways Consider solid 3 lowered vertically: Volume immersed as a function of height ($h$) $$V(h) = \frac{1}{3}\pi r^2 h$$ If we immerse this cone point first the radius varies with the height of object immersed as: $$r = \frac{h}{4}$$ $$V(h) = \frac{\pi h^3}{48} \textrm{ or } V(t) = \frac{\pi t^3}{48} \ (\textrm{as } 1 = \frac{h}{t})$$ Solid 3 immersed point first must therefore be curve 5 which leaves solid 4 lowered vertically as curve 4. It is possible to find algebraic forms for the volume displaced at height h as shown below, although this is very involved for some of the solids! Curve 1: Cylinder of height $\frac{4}{3} \mathrm{\ cm}$ and radius $1 \mathrm{\ cm}$ lowered vertically. $$V(h) = \pi h$$ Curve 2: A solid cylinder of height $\frac{4}{3} \mathrm{\ cm}$ and radius $1 \mathrm{\ cm}$ lowered sideways The volume immersed equals the length of cylinder multiplied by the area of a segment From the geometry it can be seen that $\cos(\frac{\theta}{2}) = (1-h)$ $\theta = 2\arccos(1-h)$ $\textrm{[Area of segment]} = 0.5(r^2\theta -r^2 \sin(\theta)) = 0.5(2\arccos(1-h) - \sin(2\arccos(1-h))$ $\sin\theta = 2 \sin\frac{\theta}{2} \cos\frac{\theta}{2}$ $\sin\frac{\theta}{2} = \sqrt{1 - \cos^2\frac{\theta}{2}}$ $\sin(2 \arccos(1-h)) = 2\sqrt{1 - (1-h)^2} (1-h) $ $\textrm{[Area of segment]} = 0.5(r^2\theta -r^2 \sin(\theta)) = 0.5(2\arccos(1-h) - \sin(2\arccos(1-h))$ $\textrm{[Area of segment]} = \frac{1}{2}( 2\arccos(1-h) - 2\sqrt{1 - (1-h)^2}) (1-h)$ The volume immersed equals the length of the cylinder multiplied by the area of the segment $\textrm{[Volume at h]} = \frac{2}{3}( 2\arccos(1-h) - 2\sqrt{1 - (1-h)^2}) (1-h)$ Curve 3: A sphere of radius $1 \mathrm{\ cm}$ The volume immersed as a function of h is equal to the volume generated when we rotate the equation of a circle about the $x$-axis by 360 degrees and evaluate this integral between the limits $r$ and Equation of circle: $y^2 + x^2 = r^2 = 1$ $$\textrm{[Volume]} = \int_{1-h}^{1} \pi f(x)^2 \ \mathrm{d}x = \int_{1-h}^{1} \pi (1-x^2) \ \mathrm{d}x =\pi \left[ x - \frac{x^3}{3} \right]^{1} _{1-h} = \pi h^2(1-\frac{h}{3})$$ Curve 4: A solid cylinder of height $\frac{4}{9} \mathrm{\ cm}$ with a hole drilled through it, leaving an annular cross-section with interal and external radii $2 \mathrm{\ cm}$ and $1 \mathrm{\ cm}$ lowered When $h$ is less than $1 \mathrm{\ cm}$ the volume immersed takes the same form as curve 2 but simply changing the radius from $1 \mathrm{\ cm}$ to $2 \mathrm{\ cm}$ and changing the length of the solid from $\frac{4}{3} \mathrm{\ cm}$ to $\frac{4}{9} \mathrm{\ cm}$. From the geometry we see that: $\cos(\frac{\theta}{2}) = \frac{2 - h}{2}$ $\theta = 2 \arccos(1-0.5h)$ $A(h) = 0.5(r^2\theta -r^2 \sin\theta) = 0.5(8 \arccos(1- 0.5h) - 4 \sin (2 \arccos(1-0.5h))$ $\sin\theta = 2\sin\frac{\theta}{2} \cos\frac{\theta}{2}$ $\sin\frac{\theta}{2} = \sqrt{1 - \cos^2\frac{\theta}{2}}$ $\sin\theta = 2\sqrt{1 - (1-0.5h)^2} (1-0.5h) $ $\therefore \ A(h) = 4 \arccos(1- 0.5h) - 4(1-0.5h)\sqrt{1 - (1-0.5h)^2}$ $\textrm{[Volume]} = \frac{4}{9} \textrm{[area]} = \frac{16}{9}(\arccos(1- 0.5h) - (1-0.5h)\sqrt{1 - (1-0.5h)^2}$ (when $h$ is less than 1) When h is greater than $1\mathrm{\ cm}$ the volume immersed can be found by subtracting the area of the inner segment from the outer segment and then multiplying by the length of the cylinder. Volume of the outer segment is as above: $\textrm{[Volume]} = \frac{4}{9} \textrm{[area]} = \frac{16}{9}(\arccos(1- 0.5h) - (1-0.5h)\sqrt{1 - (1-0.5h)^2} )$ Volume of inner segment: From the geometry it can be seen that: $\theta = 2\arccos(2-h)$ $\textrm{[Volume]} = \frac{4}{9} (0.5(r^2\theta -r^2 \sin\theta)) = \frac{2}{9} ( 2\arccos(2-h) - \sqrt{1 -(2-h)^2})$ \textrm{[Volume immersed]} & = \frac{4}{9}(4\arccos(1- 0.5h) - 4(1-0.5h)\sqrt{1 - (1-0.5h)^2)} \\ & -\arccos(2-h) + 0.5 \sqrt{1 -(2-h)^2} )\\ (when $h$ is greater than 1) (when $h$ is less than 1) $V(h) = \frac{4}{9} \textrm{[area]} = \frac{16}{9}(\arccos(1- 0.5h) - (1-0.5h)\sqrt{1 - (1-0.5h)^2} )$ (when $h$ is greater than 1) $V(h) = \frac{4}{9}(4\arccos(1- 0.5h) - 4(1-0.5h)\sqrt{1 - (1-0.5h)^2)} -\arccos(2-h) + 0.5 \sqrt{1 -(2-h)^2} )$ Curve 5: A solid circular cone of base radius $1 \mathrm{\ cm}$ and height $4 \mathrm{\ cm}$ lowered point first. $V(h) = \frac{1}{3}\pi r^2 h$ $r(h) = \frac{h}{4}$ $V(h) = \frac{\pi h^3}{48} $
{"url":"http://nrich.maths.org/6439/solution?nomenu=1","timestamp":"2014-04-16T10:58:20Z","content_type":null,"content_length":"11740","record_id":"<urn:uuid:43f21654-db71-463d-986a-0019a4fe97eb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Frechet/functional derivative - misunderstanding Hi people, I have some confustion in understanding the frechet/functional derivative. If we have a function like this: - F(x)= ∫x(r')G(r',r)dr' the integration over Si (2D domain) , G is a 2D green function, r is a 2D vector outside Si, r' is a 2D vector inside Si if I want to take the derivative δF(x)/δx at some point x=xi what should I have? According to a book I have it becomes F(x)= ∫xi(r')G(r',r)dr' But I don't see the reason?
{"url":"http://www.physicsforums.com/showthread.php?t=561422","timestamp":"2014-04-21T04:40:20Z","content_type":null,"content_length":"19550","record_id":"<urn:uuid:e03396e6-01b9-4f6e-9f38-787088d20af4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Computing Interesting Topological Features Files in this item Computing Interesting Topological Features.pdf (1MB) (no description provided) PDF Title: Computing Interesting Topological Features Author(s): Chambers, Erin Wolf Subject(s): algorithms Many questions about homotopy are provably hard or even unsolvable in general. However, in specific settings, it is possible to efficiently test homotopyequivalence or compute shortest cycles with prescribed homotopy. We focus on computing such "interesting" topological features in three settings. The first two results are about cycles on surfaces; the third is about classes of homotopies in IR2 minus a set of obstacles; and the final result is about paths and cycles in Rips complexes. First, we examine two problems in the combinatorial surface model. Combinatorial surfaces combine properties of graphs and manifolds, making a rich set of techniques available for analysis and algorithm design. We give algorithms to find the shortest noncontractible and nonseparating cycles in a combinatorial surface in O(g3n log n) time. Our main tool is a data structure that kinetically maintains the shortest path tree as the root of the tree moves around the vertices of a single face. The total running time is O(g2n log n). By maintaining the data structure persistently, we can answer shortest path queries in O(log n) time. Next we consider finding the shortest splitting cycle in a combinatorial surface, or simple cycle which is both separating and noncontractible; such cycles divide the topology of the surface as well as the underlying graph. We prove that finding the shortest splitting cycle is NP-Hard. We then give an algorithm that runs gO(g)n log n time, Abstract: which is polynomial if the surface is fixed. We then examine a very different setting, namely similarity between curves in some underlying metric space. If we imagine a homotopy between the curves as a way to morph one curve into the other, we can optimize the morphing so that the maximum distance any point must travel is minimized. This is a generalization of the more well known Frechet distance, with the additional requirement that the leash to move continuously in the ambient space. We call this distance the homotopic Frechet distance. We give a polynomial time algorithm to compute the homotopic Fr´echet distance between two curves in the plane minus a set of polygonal obstacles. We also extend our characterization of optimal morphings to surfaces of nonpositive curvature. Finally, we examine a more fundamental homotopy problem in a different setting. A Rips complex is a simplicial complex defined by a set of points from ii some metric space where every pair of points within distance 1 is connected by an edge, and every (k + 1)-clique in that graph forms a k-simplex. We prove that the projection map which takes each k-simplex in the Rips complex to the convex hull of the original points in the plane induces an isomorphism between the fundamental groups of both spaces. Since the union of these convex hulls is a polygonal region in the plane, possibly with holes, our result implies that the fundamental group of a planar Rips complex is a free group, allowing us to design efficient algorithms to answer homotopy questions in planar Rips complexes. Issue Date: 2009-02 Genre: Technical Report Type: Text URI: http://hdl.handle.net/2142/11523 Identifier UIUCDCS-R-2008-3036 Rights You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of Information: 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the University of Illinois at Urbana-Champaign Computer Science Department under terms that include this permission. All other rights are reserved by the author(s). Available in 2009-04-23 This item appears in the following Collection(s) Item Statistics • Total Downloads: 110 • Downloads this Month: 0 • Downloads Today: 0 My Account Access Key
{"url":"https://www.ideals.illinois.edu/handle/2142/11523","timestamp":"2014-04-18T13:40:02Z","content_type":null,"content_length":"23147","record_id":"<urn:uuid:9b57eedd-43f8-4426-8abf-a774014997d4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the wedge product of two harmonic forms harmonic? up vote 11 down vote favorite • Is the wedge product of two harmonic forms on a compact Riemannian manifold harmonic? I'm looking for a counter-example that the textbooks say exists. • I would like to see a counter example that is on a complex manifold, Ricci-flat (or Einstein) manifold or both, if it is at all possible. • In general, I'm trying to understand the interaction between the wedge product, Hodge star and the Laplacian on forms and it's eigen-vectors, references will be much appreciated. riemannian-geometry complex-geometry There is a nice article today on arxiv, called REMARKS ON THE PRODUCT OF HARMONIC FORMS by LIVIU ORNEA AND MIHAELA PILCA. You may find usefull references there. arxiv.org/abs/1001.2129 – Dmitri Jan 14 '10 at 9:56 add comment 5 Answers active oldest votes It is easy to construct examples on Riemann surfaces of genus >1. Take any surface like this. Let A and B be two harmonic 1-forms, that are not proportional. Then A \wegde B is non-zero, but it vanishes at some point, since both A and B have zeros. At the same time a harmonic 2 form on a Rieamann surface is constant. Explicite examples of 1-forms on Rieamann surfaces can be obtained as real parts of holomorphic 1-forms. up vote 19 down vote Note of course that the above example is complex, and Einstein just take the standard metric of curvature -1. If you want an example on a Ricci flat manifold you should take a K3 accepted surface. It is complex and admits a Ricci flat metric. Now, its second cohomology has dimesnion 22. Now it should be possible to find two anti-self-dual two-forms whose wedge product vanishes at one point on K3 but is not identically zero. This is because the dimesnion of the space of self dual forms is 19 which is big enought to get vanishing at one point Thank you Dmitri. The genus 2 case is obvious to me after your comment. I'll try to reinforce it with some computations. – Boris Ettinger Nov 6 '09 at 21:59 add comment Interestingly in (24) of hep-th/9603176 it is mistakenly claimed that the wedge product of harmonic forms is automatically harmonic. Because it is false we still do not know the predicted up vote 7 existence of those middle dimensional $L^2$ harmonic forms on these non-compact complete hyperkahler manifolds. down vote 1 Interesting. Is this also the case for Taub-NUT? In 0902.0948, Witten attributes the construction of the middle-dimensional $L^2$ harmonic form on Taub-NUT to three papers, of which the one you mentioned is the latest. The other two are older papers by Eguchi+Hanson and Pope from the late 1970s. – José Figueroa-O'Farrill Nov 7 '09 at 17:54 No, the individual 2-forms are harmonic, and on Taub-Nut is is in $L^2$ as well. One can show that there are no other ones as in arxiv.org/abs/math/9909002 and arxiv.org/abs/math/ 0207169 – Tamas Hausel Nov 7 '09 at 18:39 add comment Generically, the wedge product of two harmonic forms will not be harmonic. It is harder to find examples than counter-examples. For example, on compact Lie groups with a bi-invariant metric or, more generally, on riemannian symmetric spaces, harmonic forms are invariant and invariance is preserved by the wedge product. In general, though, this is not the case. up vote 6 According to Kotschick (see, e.g., this paper) manifolds admitting a metric with this property are called geometrically formal and their topology is strongly constrained. He has examples, down vote already in dimension 4, of manifolds which are not geometrically formal. 1 Nitpick: this is not true on all symmetric spaces (witness the example of hyperbolic surfaces above), only those whose isometry group admits a bi-invariant metric. – Tom Church Nov 6 '09 at 18:54 Thank you for the link, I'll study it carefully, although my interest is more about the spectral aspect and the question was about the zero eigen-modes of the Laplacian. – Boris Ettinger Nov 6 '09 at 22:02 @Tom: I thought that compact riemannian symmetric spaces are such that the transvection group always admits a bi-invariant metric. I am confused by your comment about the hyperbolic surfaces. Which surfaces are you talking about? No compact manifold with constant negative curvature can be symmetric, since by Bochner's Lemma, it cannot have any Killing vector fields. Probably I am misunderstanding your comment. – José Figueroa-O'Farrill Nov 7 '09 at 15:58 I imagine that the difference is terminological. I would call a factor space of a general group a homogeneous space rather then symmetric space. – Boris Ettinger Nov 7 '09 at 20:31 So would I. There is no reason to believe that in a general homogeneous space (one admitting a transitive action by isometries) the wedge product of harmonic forms should again be harmonic. – José Figueroa-O'Farrill Nov 7 '09 at 21:28 show 1 more comment Here's a counterexample from the theory of nilmanifolds, which by their very nature are not formal. Take a compact quotient $H^3/\Gamma$ of the Heisenberg group. It admits invariant 1-forms $e^1,e^2,e^3$ with $de^1=0=de^2$ and $de^3=e^1\wedge e^2$. Then $e^1,e^2$ are harmonic, but $e^1\wedge e^2$ is exact, so not harmonic. You can take a product with $S^1$ to get a complex up vote 3 (non-Kähler) surface on which the same thing works, but not I am afraid Ricci-flat or Einstein. down vote What do you mean by "by their very nature"? More interestngly: can you give an example of a space which is formal against its nature? :) – Mariano Suárez-Alvarez♦ Jul 25 '10 at 22:59 The differential graded algebra of invariant forms on a nilmanifold M is a minimal model for M in Sullivan's sense because the nilpotency makes d decomposable. Unless d kills everything invariant (like on a torus), M will not therefore be formal. – Simon Salamon Jul 25 '10 at 23:24 Heh. $ $ – Mariano Suárez-Alvarez♦ Jul 25 '10 at 23:34 Dear Prof. Salamon, could you please give a bit more explanation on the phrase "exact, so not harmonic?" I looked at your arXiv pieces, there are a few items on nilmanifolds but nothing that quite explains this to me, or to the friend who asked me for clarification. Anyway, if this medium is not convenient, you can always email me, just search using my last name at ams.org/cml $$ $$ Granted, I think Dmitry (not the Dmitri above) was thinking about Riemann surfaces while I was talking about Riemannian manifolds. – Will Jagy Jul 26 '10 at 21:24 add comment Using homological perturbation theory, one can repair this defect. More precisely, on the space of harmonic forms, there is an $A_\infty$ structure with no differential whose 2-ary operation (multiplication) is constructed by wedging two harmonic forms then projecting the result back to the space of harmonic forms. See "Strong homotopy algebras of Kahler manifolds" by S.A. Merkulov (Int. Math. Res. Lett. no. 3 153--164) for details of the construction. up vote 2 down vote EDIT: Also, if the manifold is compact then the natural inclusion of harmonic forms into arbitrary forms becomes an equivalence of $A_\infty$ algebras, where the space of all forms has its usual dg-algebra structure. Thanks Ian, but my interest is to analyze the product of two harmonic forms, rather than the general structures on harmonic forms. – Boris Ettinger Nov 7 '09 at 20:33 add comment Not the answer you're looking for? Browse other questions tagged riemannian-geometry complex-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/4331/is-the-wedge-product-of-two-harmonic-forms-harmonic?sort=newest","timestamp":"2014-04-16T04:54:48Z","content_type":null,"content_length":"83615","record_id":"<urn:uuid:766da0c9-4769-424e-8f0d-104abe139d6e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
AP Phy C scale for 5. Click here to go to the NEW College Discussion Forum Discus: SAT/ACT Tests and Test Preparation: May 2004 Archive: AP Phy C scale for 5. Does anyone know the scale for a 5 for phy C? i took the alternate test and prob got around 75-80% of mc RIGHT, and got around 2 / 3 for open ended. is that safe for a 5? yeah its safe. I've heard of 49/90 curves for E&M. o sorry. i forgot to mention, i ONLY did the mechanics portion. im not sure of the curve for mechanics, i think it is like 59-60/90 out of 5...and i think i got like 85-90% of MC right, but doubt the FR...probably about 45-50% right...careless mistakes....about what would that go for? And for E&M, about 80-85% on the MC, and about 50% on Free response? can anyone give an input? thanks I thought the Mech FR was really hard. I prolly got around 50% right. I'm just hoping for a 4. does anyone know the scale from a released test or a reliable source for a 5 on mechanics? 1998 test: 55/90 = 5 i hope this year 55/90 would be a 5...that would mean a little of the FR and almost all MC...thats great....for E&M the curve should be generous, and I could really use it, because I screwed up the whole first question ...haha i thought it was a sphere so I did Gauss' Law wrong..oh well...hopefuly it will be 49/90 for E&M like a prior test... Do you think that for MECHANICS a 85-90% correct on the MC and like 20% correct on the FR would get me a 4? And a 75-80% correct on the MC and 35% correct on the FR for E&M? Report an offensive message on this page E-mail this page to a friend
{"url":"http://www.collegeconfidential.com/discus/messages/69/69986.html","timestamp":"2014-04-17T15:36:23Z","content_type":null,"content_length":"14605","record_id":"<urn:uuid:88ce3393-c4b7-401c-b04c-559e523e1fda>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Vinyl Siding Estimate Get a New Vinyl Siding Estimate in Maine How Your Contractor Calculates A Estimate Contractors usually offer free estimates to their potential customers for comparison, and they hope that their proposal will be accepted. Have you wondered how they calculate a siding estimate? Good businessmen must accurately calculate costs of materials and labor in order to make any profit on a job. Understanding how the estimate is made may help you judge between potential proposals. Cost of Materials in Maine The cost of materials depends on the particular product you have chosen to cover your house. Vinyl, cement fiber, and brick are popular options and each of them is available in different options that affect the cost of the materials. The first step is to calculate how many square feet must be covered and how much material must be ordered. Some types of materials are ordered in squares. One square is equal to one hundred square feet. The surface of your house can be divided up into shapes-squares, triangles, and rectangles-based on the shape of the building, the number of walls, and the roof line. The square footage of each of these shapes is calculated, a certain percentage is added to account for waste during trimming, and a certain percentage is subtracted to account for the presence of doors and windows. The final sum is divided by one hundred and rounded up to get the number of squares that must be ordered. If 16.7 squares are calculated, 17 must be ordered. Add to that the cost of any underlayment or additional trim pieces as well as fasteners to get the total cost of materials. Labor Cost for Vinyl Siding Installations in Maine Labor costs reflect how much the workers must be paid to do the job. This is greatly affected by the details of your house-how many windows and doors there are, whether there are any unusual rooflines or any other things that require special attention during the job. Labor costs are also affected by the hourly wages he has to pay his workers. The hourly wage can vary widely between different regions of the country. A highly skilled worker will also be paid more to reflect his value to the company. An experienced contractor should be able to accurately predict how many hours it takes for a skilled crew to finish the job. If his prediction about this part of the siding estimate is wrong, he may not get any profit on the job and may even lose money. Vinyl Siding Estimate and Free Inspections in Maine This is how the business itself makes money to keep going forward. It is the profit that pays the owner of the business his salary. It also pays for facilities, utilities, vehicles, equipment, and office staff. A reasonable amount of profit is part of any siding estimate. You may not see the siding estimate broken down into these categories, and may just receive a single figure from your contractor, but it is still helpful to know how he has arrived at those numbers. An educated consumer can be confident that he has chosen wisely.
{"url":"http://www.mainesidingcontractors.com/vinyl-siding-estimate-maine.php","timestamp":"2014-04-18T15:39:30Z","content_type":null,"content_length":"11201","record_id":"<urn:uuid:c297c2b8-8dfc-4c85-bf32-e8ef2f32821f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Proof Question I spent 2 days already Replies: 1 Last Post: Jul 30, 2008 8:34 AM Messages: [ Previous | Next ] LeonardoDv Re: Proof Question I spent 2 days already Posted: Jul 30, 2008 8:34 AM Posts: 9 From: Viet Nam I think if u want to find the <PRQ Registered: 7/29/08 here is my opinion: first:we have triangle(PTQ)=triangle(UTR) by AAS=~AAS *<TUR+<TUS=<TPQ+<TUS=180deg so <TUR=<TPQ =>we have TR=TQ so <TQR=<TRQ=45deg=<PRQ I don't know if this is right Date Subject Author 7/28/08 Proof Question I spent 2 days already John 7/30/08 Re: Proof Question I spent 2 days already LeonardoDv
{"url":"http://mathforum.org/kb/message.jspa?messageID=6315136","timestamp":"2014-04-17T04:39:49Z","content_type":null,"content_length":"17528","record_id":"<urn:uuid:49b23d58-80e6-4c32-a8a5-b9c9d55dc264>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating the Sides of a Right Triangle Date: 2/4/96 at 11:26:11 From: Tom Subject: right triangles Dear Dr.Math, I need help on how to calculate the length of the opposite side of a right triangle if the length of the adjacent side and angles are known. This way I will be able to determine how high my model rockets actually fly. Thank you, Tommy Welfley Date: 6/19/96 at 10:37:26 From: Doctor Lisa Subject: Re: right triangles Hi Tommy! You need to use a function called tangent to find the opposite side of the right triangle. If you have a scientific or graphing calculator, it's the tan button. This is how it would work. | \ | \ x | \ | \ | \ ------ 25 degrees 20 ft. The tangent of the angle will always be the opposite side divided by the adjacent side of a right triangle. In this case, the opposite side is x and the adjacent side is 20 feet. The angle measures 25 degrees. The setup will then look like tan 25 = x/20 I would like to get x by itself, so I would multiply both sides by 20. This will give me 20 * tan 25 = x. I would now go to the calculator and get the value of tan 25. If you have a regular scientific calculator (a TI-30, for example), you would put in 25 and hit the tan button that I mentioned earlier. If you have a graphing calculator (a TI-81, TI-82, TI-85), then you would hit the tan button first, then enter 25 and hit enter. You should get a value like 0.466307658155 (but we usually only use the first 4 decimal places). Multiply this answer by 20, which will give you 9.3261531631, or about 9.3 feet. So your rocket would have gone about 9.3 feet high. Hope this helps! -Doctor Lisa, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/57654.html","timestamp":"2014-04-16T08:15:01Z","content_type":null,"content_length":"6699","record_id":"<urn:uuid:41928f4d-0204-40de-97b0-2528b1ba1c3d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Bias detection in meta-analysis PAGE RETIRED: Click here for the new StatsDirect help system. OR YOU WILL BE REDIRECTED IN 5 SECONDS Bias detection in meta-analysis ● Systematic review of randomized trials is a gold standard for appraising evidence from trials, however, some meta-analyses were later contradicted by large trials (Sterne et al. 2001a). ● Plots of trials' variability or sample size against effect size are usually skewed and asymmetrical in the presence of publication bias and other biases (Sterne and Egger, 2001). ● Publication bias arises when trials with statistically significant results are more likely to be published and cited, and are preferentially published in English language journals and those indexed in Medline (Jüni et al, 2002). ● Publication and selection biased in meta-analysis are more likely to affect small trials. ● Small trials are more likely to be of poorer quality, for example inadequate blinding due to use of open random number tables. ● Small trials are more likely to show larger treatment effects due to case-mix differences (e.g. higher risk patients) than larger trials. If there is no 'small sample' bias across a series of studies in a meta-analysis then the estimates of effect should vary (due to random error) most with the small studies and least with the large studies. This fact lead to the use of plots of sample size against effect estimate (the original funnel plot). Bias is likely to cause asymmetry in such plots. As sample size is not the only determinant of the precision of an effect estimate, richer information for detecting bias can be gained from plotting the standard errors against their effect estimates. The reciprocal of the standard error is referred to as precision. Again, lateral asymmetry indicates bias. StatsDirect offers the following choice of Y axes: The direction of the Y axis is reversed in some cases, such as the default setting, standard error, in order to make the shape of each plot an inverted cone because this has become the convention in the literature (Sterne and Egger, 2001). The most widely accepted plot is standard error (scale reversed) against effect estimate with 95% confidence intervals outlining the inverted cone. You should examine the left-right symmetry of the plot, asymmetrical plots denote small sample bias. The best choice of x axis for detecting the small sample effect is the log odds ratio (Sterne and Egger, 2001). This is because the scale is not constrained and because the plot will be the same shape whether the outcome is defined as occurrence or non-occurrence of event. Note that you must have more than three trials/strata in your meta-analysis for the StatsDirect bias assessment functions to work. The following plots are from biased and unbiased meta-analyses respectively: Bias indicators Begg-Mazumdar: Kendall's tau = 0.15 P = 0.4503 Egger: bias = -1.599085 (95% CI = -2.191985 to -1.006186) P < 0.0001 Horbold-Egger: bias = -1.759404 (90% CI = -2.302334 to -1.216475) P < 0.0001 Bias indicators Begg-Mazumdar: Kendall's tau = 0.111111 P = 0.7275 (low power) Egger: bias = 0.580646 (95% CI = -0.88656 to 2.047852) P = 0.3881 Horbold-Egger: bias = 0.805788 (90% CI = -0.550241 to 2.161817) P = 0.3013 Egger et al. (1997) proposed a test for asymmetry of the funnel plot. This is a test for the Y intercept = 0 from a linear regression of normalized effect estimate (estimate divided by its standard error) against precision (reciprocal of the standard error of the estimate). StatsDirect provides this bias indicator method with all meta-analyses. Please note that the power of this method to detect bias will be low with small numbers of studies. Harbord (2005) developed a test that maintains the power of the Egger test whilst reducing the false positive rate, which is a problem with the Egger test when there are large treatment effects, few events per trial or all trials are of similar sizes. The original Egger test should be used instead of the Harbord method if there is a large imbalance between the sizes of treatment and control groups – the same is true for the Peto odds ratio, to which this test is mathematically related. Begg and Mazumdar (1994) proposed testing the interdependence of variance and effect size using Kendall's method. This bias indicator makes fewer assumptions than that of Egger et al. but it is insensitive to many types of bias to which the Egger test is sensitive. Unless you have many studies in your meta-analysis, the Begg method has very low power to detect biases (Sterne et al., 2000). Other statistical methods can be used to investigate the effects of study characteristics other than sample size upon effects (Sterne et al., 2002). Please seek the advice of a Statistician with regard to this. Note that when the between-study heterogeneity is large, none of the bias detection tests work well. See meta-analysis options for details of how to set the bias detection plot type.
{"url":"http://www.statsdirect.com/help/meta_analysis/bias_detection_in_meta_analysis.htm","timestamp":"2014-04-16T04:10:28Z","content_type":null,"content_length":"12459","record_id":"<urn:uuid:142e73fb-a325-4f1a-ab96-c886e5d3a2b4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
A total of 2,347 people attended the basketball game. Student tickets are $2.50 and adult tickets are $4.00.How many... - Homework Help - eNotes.com A total of 2,347 people attended the basketball game. Student tickets are $2.50 and adult tickets are $4.00. How many students attended the game? Please supply one more piece of information, the total amount of money in sales for the game. As per workhead09's request, more information need to be provided. For example, the total amount of money collected from the sale of tickets. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/total-2-347-people-attended-basketball-game-218205","timestamp":"2014-04-20T03:25:42Z","content_type":null,"content_length":"26726","record_id":"<urn:uuid:6c87d86c-6b42-4985-b146-71d34cff7ff2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
moon earth system escape velocity 1. The problem statement, all variables and given/known data A projectile is fired straight away from the moon from a base on the far side of the moon, away from the earth. What is the projectile's escape speed from the earth-moon system? 2. Relevant equations Escape velocity = sqrt[2GM/R] 3. The attempt at a solution What I'm wondering is, do I just have to use this formula the moon or do I have to take the gravitational potential energy of the earth and the projectile into account also?
{"url":"http://www.physicsforums.com/showthread.php?t=229815","timestamp":"2014-04-16T13:43:24Z","content_type":null,"content_length":"23423","record_id":"<urn:uuid:b9e79825-da1b-445b-9cbe-08267309f09d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Downingtown ACT Tutor Find a Downingtown ACT Tutor ...My background in academia and industry allows me to teach calculus from either a theoretical or an applied approach, depending on student needs and interests. I studied statistics as part of the actuarial exam process. Two of the early exams focused on statistical methods, including regression, parameter fitting, Bayesian, and non-Bayesian techniques. 18 Subjects: including ACT Math, calculus, geometry, statistics ...I received Wheaton College's highest music composition prize as a sophomore student. I have been playing the double bass and bass guitar for many years and have studied jazz bass and piano with seasoned performers. I have received several undergraduate poetry prizes, including First Place in Christianity & Literature's Student Writing Contest. 38 Subjects: including ACT Math, Spanish, English, reading ...I believe that everyone can learn and enjoy math. All they need is someone to show them math isn't all bad and can be fun once you get the hang of it. I have taught at an inner city school, a rural school, and a private school, so I have dealt with every kind of student. 6 Subjects: including ACT Math, calculus, algebra 2, geometry ...There is an awesome reward when watching a struggling student as he begins to understand what he needs to do and how everything fits together. I previously taught Algebra I, II, III, Geometry, Trigonometry, Precalculus, Calculus, Intro to Statistics, and SAT review in a public school. I have tu... 12 Subjects: including ACT Math, calculus, linear algebra, algebra 1 ...I can help you become one too! OTHER: I pay a lot of attention to details, especially grammar and spelling. I can assist with any proofreading needs or help you child learn to read. 20 Subjects: including ACT Math, reading, statistics, biology
{"url":"http://www.purplemath.com/downingtown_act_tutors.php","timestamp":"2014-04-18T09:05:00Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:892eacf9-db17-43b1-9557-d804945e1558>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Existence of Zeros of Lyapunov-Monotone Operators Abstract: Consider a nonlinear operator T from a Banach space into itself. The study of the existence of zeros of T plays an important role in yielding fixed points of nonlinear operators. The operator T has a zero if and only if the initial value problem [see pdf for notation],has a constant solution. If T is a monotone operator then (1.1) has a unique solution [see pdf for notation] defined on [see pdf for notation] and the solution operator [see pdf for notation] is nonexpansive for all [see pdf for notation]. Imposing further assumptions one can show that U(t) must have a common fixed point and that fixed point is the desired zero of T. Thus one can use the theory of differential equations and some known fixed point theorems on the solution operator to obtain the existence of zeros of T. See for example [1,3,4, 5,9,11]. In this paper, we introduce the notion of Lyapunov-monotone operators in terms of several Lyapunov-like functions and utilizing certain results in abstract cones that are recently proved [6], we establish the existence of zeros of such operators. This leads us to work with generalized Banach spaces which offer a flexible technic. The results obtained are so general that when we employ as a candidate a generalized norm, we still get results more general than the known ones [8,9].
{"url":"https://dspace.uta.edu/handle/10106/2160","timestamp":"2014-04-21T03:15:07Z","content_type":null,"content_length":"16202","record_id":"<urn:uuid:613b8771-30cc-4804-97fe-3f1e5b8d7e99>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
The Signal X(t)=e^(-|t| ) Is Defined For All Values ... | Chegg.com The signal x(t)=e^(-|t| ) is defined for all values of t. a. Plot the signal x(t) and determine if this signal is finite energy. That is, compute the integral ?_(-?)^? |x(t)|^2 dt and determine if it is finite. The work I currently have: x(t)=?_(-?)^0 e^t )^2 +?_0^? (e^(-t) )^2 = ?_(-?)^0 e^2t +?_0^? e^(-2t) = [e^(3t/3) ]_(-?)^0+[e^(3t/3) ]_0^?= b. If you determine that x(t) is absolutely integrable or that the integral ?_(-?)^? |x(t)|^2 dt is finite, could you say that x(t) has finite energy? Explain why or why not. Hint: Plot |x(t)| and |x(t)|^2. c. From your results above, is it true the energy signal y(t)=e^(-t) cos (2*pi*t)u(t) Is less than half the energy of x(t)? Explain. To verify your result, use symbolic MATALB to plot y(t) and to compute its energy. d. To discharge a capacitor of 1mF charged with a voltage of 1 volt we connect it, at time t=0, with a resistor of R?. When we measure the voltage in the resistor we find it to be v_R (t)=e^(-t) u (t). Determine the resistance R. If the capacitor has a capacitance of 1µF, what would be R? In general, how are R and C related? Please show how to do the work above and the MATLAB script. Thank you. Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/signal-x-t-e-t-defined-values-t--plot-signal-x-t-determine-signal-finite-energy-compute-in-q2790358","timestamp":"2014-04-24T06:11:15Z","content_type":null,"content_length":"23867","record_id":"<urn:uuid:d58b0dca-386e-4d2a-8aa2-ea9d16cb2d38>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal subgroups of finite index in free groups up vote 13 down vote favorite Hi all, This is a question about the groups $H_{n,s}$ introduced by Völklein in his book "Groups as Galois groups", §7.1, and defined as follows: let $N$ be the intersection of all normal subgroups of index $\le n$ in $F_s$, the free group on $s$ generators, and define $H_{n,s} = F_s / N$. Have these groups been studied? Do they have a name? Is it possible to compute their orders, at least in some cases? For example for $s=1$, then $H_{n,1}$ is the cyclic group of order $lcm(1, 2, \ldots, n)$. In Völklein's book, these are introduced primarily to avoid talking about profinite groups (the inverse limits of the $H_{n,s}$, with fixed $s$, is the free profinite group of $s$ generators). Any information you may have on these will be great appreciated. gr.group-theory geometric-group-theory add comment 2 Answers active oldest votes I am preparing a paper with Ian Biringer, Martin Kassabov, and Francesco Matucci, where we study the growth of the index of the intersection of all normal subgroups of index at most $n$ in a given group. We call this the study of intersection growth of the group. In your notation, for the free group of rank $s$, $F_s$, and every natural number $n$, the intersection growth function, $i_{F_s}(n)$, is defined to be the order of $H_{n,s}$. As general motivation for studying this growth, for a general group $\Gamma$, we show that the growth of $i_\Gamma(n)$ up vote determines the dimension of the profinite completion of $\Gamma$. 11 down vote This paper (which we may split into two) has some examples worked out: we have precise calculations for this growth for nilpotent groups and certain arithmetic groups. In the case of a rank $s$ free group, we found the lower bound $e^{n^{s-2/3}}$ (which we compute by finding the precise growth when one only intersects maximal subgroups). 1 Nice! I'm eager to read that paper. – Pierre Apr 19 '13 at 9:50 The paper should appear on the arxiv this week (and is available on my website). – Khalid Bou-Rabee Sep 30 '13 at 21:52 add comment Yes, you can compute their orders in a few easy cases. For example, if $p$ is prime, then $H_{p,s}$ is elementary abelian of order $p^s$, and $H_{p^2,s}$ is homocyclic abelian of order $p^{2s}$. I expect it would not be too hard to describe the structure when $n=p^3$, or when $n=pq$ for distinct primes $p,q$. For small $n$ you could compute $H_{n,s}$ directly. For example, when $n=6$, it has order 972. up vote 10 down vote But it would be very difficult to compute the order more generally. Some interesting quotients of $H_{n,s}$ have been studied. For example, if we take $s=2$, $n=60$, and let $K$ be the intersection of the kernels of homomorphisms of $F_2$ onto $A_5$, then $F_2/K$ is a direct product of 19 copies of $A_5$. 2 I am afraid that misread the definition of $H_{n,s}$ in your question. I took the definition to be $F_s/N$ where $N$ is the intersection of all normal subgroups of index exactly $n$ (rather than at most $n$, which is what you wrote). So Khalid Bou-Rabee's answer is more useful than mine for the question that you asked! – Derek Holt Apr 18 '13 at 17:40 add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory geometric-group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/127950/normal-subgroups-of-finite-index-in-free-groups","timestamp":"2014-04-16T16:12:39Z","content_type":null,"content_length":"58597","record_id":"<urn:uuid:3522e9db-9df2-43e0-a495-c01e44fb45dc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Columbine Valley, CO Algebra Tutor Find a Columbine Valley, CO Algebra Tutor ...I am familiar with SAT math as I have helped tutor students for the past 2-3 years in this area. I have taken SAT three times (for English scores) and my own math scores ranged between 750-800. I will tutor either using a practice test and work through a set with the student or we can cover individual topics. 7 Subjects: including algebra 1, algebra 2, chemistry, ACT Math ...I help students analyze the question to understand what information is given, what specifically is being asked, and what steps are needed to answer the question. I go over step-by-step instructions, explaining the theory behind each step, so the student will be able to solve similar problems in ... 7 Subjects: including algebra 2, algebra 1, chemistry, geometry I have over ten years of experience teaching and tutoring at the high school and college levels. I received my bachelor's degree in Physics from Lewis and Clark College in Portland, Oregon, and my master's degree in Physics from the University of Utah. Subjects I have taught include the following:... 11 Subjects: including algebra 1, algebra 2, physics, geometry ...I am passionate about learning and about sharing that passion with students.As a mother of five and an instructor to at least 50 children over my life time, I have guided students in developing study skills for many years. I have seen this as a very natural outgrowth of the academic support I ha... 43 Subjects: including algebra 1, algebra 2, English, chemistry My name is Kevin, and I received a Bachelor of Science Degree in Applied and Computational Mathematics from the University of Southern California (USC). I was a member of the Pi Mu Epsilon Math Honors Society at USC. Since receiving my degree, I have continued my education in mathematics by comple... 21 Subjects: including algebra 2, algebra 1, chemistry, geometry Related Columbine Valley, CO Tutors Columbine Valley, CO Accounting Tutors Columbine Valley, CO ACT Tutors Columbine Valley, CO Algebra Tutors Columbine Valley, CO Algebra 2 Tutors Columbine Valley, CO Calculus Tutors Columbine Valley, CO Geometry Tutors Columbine Valley, CO Math Tutors Columbine Valley, CO Prealgebra Tutors Columbine Valley, CO Precalculus Tutors Columbine Valley, CO SAT Tutors Columbine Valley, CO SAT Math Tutors Columbine Valley, CO Science Tutors Columbine Valley, CO Statistics Tutors Columbine Valley, CO Trigonometry Tutors Nearby Cities With algebra Tutor Bow Mar, CO algebra Tutors Cherry Hills Village, CO algebra Tutors East Lake, CO algebra Tutors Edgewater, CO algebra Tutors Englewood, CO algebra Tutors Fort Logan, CO algebra Tutors Glendale, CO algebra Tutors Highlands Ranch, CO algebra Tutors Indian Hills algebra Tutors Lakeside, CO algebra Tutors Littleton City Offices, CO algebra Tutors Littleton, CO algebra Tutors Lonetree, CO algebra Tutors Louviers algebra Tutors Sheridan, CO algebra Tutors
{"url":"http://www.purplemath.com/Columbine_Valley_CO_Algebra_tutors.php","timestamp":"2014-04-17T00:51:08Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:e36a49ad-07cd-4881-b230-6b3f4fe3dbb6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone please explain factoring to me? Can someone please explain factoring to me? I understand most of Algebra except factoring. Alzebra wrote:I understand most of Algebra except factoring. What do you mean by "canceling out" in factorization? (I'm familiar with cancelling factors to simplify fractions, but that's a separate process from the factorization, and is in a different Providing an example from your text or your class notes would be helpful. Thank you! Re: Can someone please explain factoring to me? Here is one of my problems that I am struggling with: FACTOR x to the 4th power - 9y squared Then is says "1. x to the 4th power = (x squared) squared, and 9y squared = (3y) squared." I understand that. And then it says "2. x to the 4th power - 9y squared = (x squared) squared - (3y) squared." I understand that, too, because I just said those were equal. THEN, suddenly, it says it says, "3. And so, x to the 4th power - 9y squared = (x squared + 3y)(x squared - 3y). Solved" WHERE did that come from??? Can someone please explain why (x squared) squared - (3y) squared turns into (x squared + 3y)(x squared - 3y)??? Alzebra wrote:THEN, suddenly, it says it says, "3. And so, x to the 4th power - 9y squared = (x squared + 3y)(x squared - 3y). Solved" WHERE did that come from??? From factoring the difference of squares They applied the formula that should have been covered in class; check that section, or a section or two previous, to find the formula in your book. Re: Can someone please explain factoring to me? Thanks, stapel_eliz. I looked at that formula, and I think I understand it. I geuss I just skipped it by accident in my book.
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=1807&p=5435","timestamp":"2014-04-20T09:22:01Z","content_type":null,"content_length":"24925","record_id":"<urn:uuid:cdb9bc0e-784c-4007-9884-4cc4af4297f1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: HOW DO U FIGURE THIS PROBLEM M/4+8=17 • one year ago • one year ago Best Response You've already chosen the best response. Solve for m i'm guessing? Best Response You've already chosen the best response. Best Response You've already chosen the best response. m/4+8=17 add -8 to both sides m/4 = 9 multiply 4 to both sides m = 36 Best Response You've already chosen the best response. First step brings the 8 to the other side of the equation Second step gets m alone on the left You are allowed to do anything you want to an algebra problem as long as you do it to both sides Best Response You've already chosen the best response. THANK YOU THAT EXPLAINATION REALLY HELPED ME AND MY SON Best Response You've already chosen the best response. M/4= 17 - 8 M/4 = 9 M = 9*4 M = 36 Best Response You've already chosen the best response. IS YOUR CAPS LOCK STUCK OR SOMETHING? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ac6978e4b064039cbdcb2e","timestamp":"2014-04-17T09:42:57Z","content_type":null,"content_length":"41887","record_id":"<urn:uuid:41294265-2767-40f7-9f92-9316ed9dc014>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help June 8th 2008, 09:04 PM #1 Mar 2008 It's one of those trigonometric ones that always make my brain go huh? If cos x= (-4/5) and tan x>0, find the value of Sin x. I figured that since Cos was negative, it was in either quadrants II or III. Since Tan is positive, it must be in III. After that....I get lost. Think of a right triangle. The cosine of an angle is the ratio of the adjacent side to the hypotenuse. So, ignoring the negative for now: 5 / | / | b / | / θ | By the Pythagorean theorem, we have $b = \sqrt{25 - 16} = \sqrt9 = 3$ (this is a Pythagorean triple: all sides are integers). Now, $\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}$, so we have $\sin\theta = \frac b5 = \frac35$. But, our $\theta$ is in quadrant I. Adding $180^\circ$ to get into quadrant III will make the sine negative: $\sin x = -\frac35$ Draw a diagram Good luck I'm too slow Needless to say, that was far off from what I got. It turns out I did what I always do: I overthunk the problem. >.< Sorry, but another trig problem is giving me issues. it is: 47. sin(cot^-1 5/12) What I know so far: Cotangent is the inverse of tangent. This seems like another ratio of sides. Needless to say, that was far off from what I got. It turns out I did what I always do: I overthunk the problem. >.< Sorry, but another trig problem is giving me issues. it is: 47. sin(cot^-1 5/12) What I know so far: Cotangent is the inverse of tangent. This seems like another ratio of sides. Indeed it is. Set it up the same way: $x = {\rm arccot}\frac5{12}\Rightarrow\cot x = \frac5{12}$ You know that $\cot\theta = \frac{\text{adjacent}}{\text{opposite}}$, so draw your triangle. And since the cotangent is positive, and since the range of arccot is $[0,\;\pi]$ (or sometimes it is defined as $\left[-\frac\pi2,\;\frac\pi2\right]$), the angle you are dealing with ( $x$) is in quadrant I. Edit: I thought I should add: Be careful when you say that cotangent is the "inverse" of tangent. The word inverse usually implies an inverse function--e.g., arcsin is the inverse of sin. The proper word in your case is "reciprocal": the cotangent is the reciprocal of the tangent (reciprocal meaning a number's multiplicative inverse). Needless to say, that was far off from what I got. It turns out I did what I always do: I overthunk the problem. >.< Sorry, but another trig problem is giving me issues. it is: 47. sin(cot^-1 5/12) What I know so far: Cotangent is the inverse of tangent. This seems like another ratio of sides. Here is another diagram. I always draw them it helps. We know that it has to be in quadrant I because the range of the arccot function is $[0,\pi]$ and the argument is positive $\sin(\cos^{-1}\left( \frac{5}{12}\right))=\frac{12}{13}$ Edit: Too slow again. Twice in the same post June 8th 2008, 09:15 PM #2 June 8th 2008, 09:18 PM #3 June 8th 2008, 09:22 PM #4 June 8th 2008, 09:25 PM #5 Mar 2008 June 8th 2008, 09:47 PM #6 June 8th 2008, 09:50 PM #7
{"url":"http://mathhelpforum.com/trigonometry/41040-cosine.html","timestamp":"2014-04-18T22:14:54Z","content_type":null,"content_length":"57066","record_id":"<urn:uuid:075c4d07-bc28-4e2e-977c-62cd67525ff0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Charlestown, PA Math Tutor Find a Charlestown, PA Math Tutor ...I have a strong background in research, as is necessary for any advanced science degree. I was a competitive swimmer for five years, with experience in backstroke, breaststroke, butterfly and freestyle. I also spent three years as a lifeguard, two at a public pool and one at a private pool. 20 Subjects: including statistics, algebra 1, algebra 2, biology ...I believe strongly in helping students to conceptualize numbers, and not just understand the processes of various operations, but also the concepts. I often incorporate manipulatives into my lessons or show children how to do addition, subtraction, etc. using a number line or other visual strate... 32 Subjects: including algebra 1, SAT math, English, ACT Math ...I believe that the best approach for teaching is to help students conceptualize some seemingly abstract topics in these subjects. Most of the time people get hung up on the language or complex symbols used in math and science when really the key to understanding is to be able to look beyond thos... 16 Subjects: including precalculus, algebra 1, algebra 2, calculus ...I have a master's degree in Education from Temple University. I was also a math instructor for Cumberland County College, and Temple University. I have a bachelors degree in mathematics, and a graduate degree in education, M.Ed from Temple University in Philadelphia. 22 Subjects: including trigonometry, algebra 1, algebra 2, calculus Hi,I graduated from the College of William and Mary with a Ph. D. degree in Chemistry, and this is my 7th year teaching chemistry in college. I like to tutor chemistry as well as math, and I look forward to working with you to improve your understandings of chemistry and/or math.I am an instructor in college teaching chemistry, and I have taught organic chemistry (both semesters) many 9 Subjects: including algebra 1, algebra 2, chemistry, geometry Related Charlestown, PA Tutors Charlestown, PA Accounting Tutors Charlestown, PA ACT Tutors Charlestown, PA Algebra Tutors Charlestown, PA Algebra 2 Tutors Charlestown, PA Calculus Tutors Charlestown, PA Geometry Tutors Charlestown, PA Math Tutors Charlestown, PA Prealgebra Tutors Charlestown, PA Precalculus Tutors Charlestown, PA SAT Tutors Charlestown, PA SAT Math Tutors Charlestown, PA Science Tutors Charlestown, PA Statistics Tutors Charlestown, PA Trigonometry Tutors Nearby Cities With Math Tutor Chesterbrook, PA Math Tutors Devault Math Tutors Eagle, PA Math Tutors Frazer, PA Math Tutors Gulph Mills, PA Math Tutors Ithan, PA Math Tutors Kimberton Math Tutors Linfield, PA Math Tutors Rahns, PA Math Tutors Romansville, PA Math Tutors Saint Davids, PA Math Tutors Southeastern Math Tutors Strafford, PA Math Tutors Upton, PA Math Tutors Valley Forge Math Tutors
{"url":"http://www.purplemath.com/charlestown_pa_math_tutors.php","timestamp":"2014-04-16T22:19:52Z","content_type":null,"content_length":"24170","record_id":"<urn:uuid:af86fe16-68c3-4ac2-894b-456ae4872e09>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Wellesley Hills Math Tutor ...I have taught language arts for grades 6-8 and currently teach technology to students in K0 through grade 8. I have worked with Microsoft Windows since the mid 1980's. I am familiar with file management, program management and software optimization. 14 Subjects: including prealgebra, reading, ESL/ESOL, grammar ...My emphasis is on using physics principles to set up problems so that the remaining math becomes straightforward. A full Precalculus contains a great deal of material and is often a daunting task for the student. I have taught at the University level as the professor for Precalculus. 24 Subjects: including linear algebra, differential equations, discrete math, actuarial science Math and Science are my passion. I love school: currently, I am majoring in Neurobiology at Boston University. Because I work as an EMT and teach EMT training classes, my background in Science is both hands on and in a teaching environment. 14 Subjects: including algebra 2, chemistry, geometry, trigonometry ...I am looking forward to working with you soon and please request my assistance with the confidence that price is no object; I am willing to negotiate a rate that meets even the tightest budget. Providing my services and allowing students the opportunity to learn means making any accommodations n... 30 Subjects: including algebra 2, statistics, ESL/ESOL, GRE I am a licensed middle school and high school math teacher who specializes in tutoring math, organizational skills and SAT/GRE/MTEL preparation for math. I currently teach middle and high school math at a small private school and have experience teaching both gifted students and struggling learners... 22 Subjects: including trigonometry, grammar, SAT math, physics
{"url":"http://www.purplemath.com/wellesley_hills_ma_math_tutors.php","timestamp":"2014-04-17T07:50:00Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:42cefd41-8b30-4d94-8370-bfd748bce2a1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
dilation problem I suppose that is correct. Bit strange though. If you got the first k wrong, but knew to make it minus, you'd still get 0. Wrong working but right answer. Hhhmmm. Worrying. If I was testing a student, I'd ask for both answers, just to be certain. Is this work computer marked by any chance? You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=281027","timestamp":"2014-04-19T14:39:17Z","content_type":null,"content_length":"20393","record_id":"<urn:uuid:2c949782-926e-4f42-91af-3fa04f64c586>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Sharp Park, CA ACT Tutor Find a Sharp Park, CA ACT Tutor ...I have found it to be critical to be sure that the student has mastered skills from Algebra 1 before going forward. My first session thus usually involves some diagnosis of which skills the student needs help with before going on to Algebra 2. A firm base on which to build is required. 17 Subjects: including ACT Math, physics, calculus, ASVAB ...I can help you in upper level high school and college level math as well as algebra, precalculus and SAT math prep. I have taught math at the high school and college level. I have spent the past 2 years as a volunteer math instructor. 7 Subjects: including ACT Math, calculus, algebra 1, algebra 2 ...In High School and College, I was always good at math having scored near perfect on the SAT and ACT in math, as well as scoring a 5 on the AP Calculus Exam. I enjoy teaching and helping kids that are eager to learn, and I have great experience in managing expectations in addition to overwhelming... 33 Subjects: including ACT Math, calculus, geometry, statistics ...I am patient and able to explain the same concept/theorem in a variety of ways. I think students are relaxed around me and enjoy my sense of humor while tutoring. I teach the material thoroughly, helping students understand why facts are true. 14 Subjects: including ACT Math, calculus, physics, statistics ...I got a 4.05 GPA at my High School. I am currently a teacher/tutor for Think Tank Learning. I would like to use my skills to make math more manageable for others. 15 Subjects: including ACT Math, chemistry, calculus, physics Related Sharp Park, CA Tutors Sharp Park, CA Accounting Tutors Sharp Park, CA ACT Tutors Sharp Park, CA Algebra Tutors Sharp Park, CA Algebra 2 Tutors Sharp Park, CA Calculus Tutors Sharp Park, CA Geometry Tutors Sharp Park, CA Math Tutors Sharp Park, CA Prealgebra Tutors Sharp Park, CA Precalculus Tutors Sharp Park, CA SAT Tutors Sharp Park, CA SAT Math Tutors Sharp Park, CA Science Tutors Sharp Park, CA Statistics Tutors Sharp Park, CA Trigonometry Tutors Nearby Cities With ACT Tutor Alameda Pt, CA ACT Tutors Brisbane ACT Tutors Desert Edge, CA ACT Tutors Marin City, CA ACT Tutors Mount Eden, CA ACT Tutors Muir Beach, CA ACT Tutors Nas Miramar, CA ACT Tutors Pacifica ACT Tutors Palomar Park, CA ACT Tutors Presidio, CA ACT Tutors Rancho California, CA ACT Tutors Rancho Suey, CA ACT Tutors Tamalpais Valley, CA ACT Tutors Terra Linda, CA ACT Tutors West Menlo Park, CA ACT Tutors
{"url":"http://www.purplemath.com/Sharp_Park_CA_ACT_tutors.php","timestamp":"2014-04-18T06:02:03Z","content_type":null,"content_length":"23813","record_id":"<urn:uuid:e95d8352-a779-49c1-856e-b349951d4f38>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Houghton, WA Math Tutor Find a Houghton, WA Math Tutor ...Teaching SAT is a big chunk of of my 15 years for tutoring and teaching experience. I have taught SAT math classes while working with College Connections Hawaii, where I received a formal training for teaching the subject. I tutored many individual students on my own time as well, applying the knowledge I got from teaching the classes and fitting my style to the various needs of my clients. 20 Subjects: including trigonometry, ACT Math, SAT math, algebra 1 ...So I moved to Pullman where I studied at Washington State University and received my B.S in biotechnology and continued on for a master's in science. My master's research focused on molecular techniques associated with ancient DNA and because my adviser was an anthropologist I am well versed in ... 14 Subjects: including algebra 2, trigonometry, anthropology, algebra 1 ...I think I have a patient, encouraging, and intuitive teaching style that works well with students that age, and I also do adapt my teaching style to the needs of the student. I regularly tutor students in math through calculus, biology, English, and chemistry. I also coach students through the college application process and enjoy helping them write their personal statement or essay. 28 Subjects: including prealgebra, study skills, Korean, ESL/ESOL ...Geology is one of my secondary passions after astronomy. I would be pleased to work to improve students' comprehension of geology and its related subjects, from plate tectonics to volcanology to the structure of the Earth's interior. I attended a private Christian school (K-12) that placed strong emphasis on Bible study/knowledge and daily application. 21 Subjects: including algebra 1, algebra 2, English, writing ...I focus on emphasizing the underlying logic of the concepts and building on prior knowledge while incorporating new material. Some of my areas of greatest experience are: properties of exponents and roots, writing and graphing linear equations and inequalities, probabilities, interpretation of g... 17 Subjects: including prealgebra, English, linear algebra, algebra 1 Related Houghton, WA Tutors Houghton, WA Accounting Tutors Houghton, WA ACT Tutors Houghton, WA Algebra Tutors Houghton, WA Algebra 2 Tutors Houghton, WA Calculus Tutors Houghton, WA Geometry Tutors Houghton, WA Math Tutors Houghton, WA Prealgebra Tutors Houghton, WA Precalculus Tutors Houghton, WA SAT Tutors Houghton, WA SAT Math Tutors Houghton, WA Science Tutors Houghton, WA Statistics Tutors Houghton, WA Trigonometry Tutors Nearby Cities With Math Tutor Adelaide, WA Math Tutors Avondale, WA Math Tutors Clyde Hill, WA Math Tutors Earlmount, WA Math Tutors Hazelwood, WA Math Tutors Highlands, WA Math Tutors Hunts Point, WA Math Tutors Juanita, WA Math Tutors North City, WA Math Tutors Pinehurst, WA Math Tutors Queensborough, WA Math Tutors Queensgate, WA Math Tutors Totem Lake, WA Math Tutors Wedgwood, WA Math Tutors Yarrow Point, WA Math Tutors
{"url":"http://www.purplemath.com/Houghton_WA_Math_tutors.php","timestamp":"2014-04-20T14:05:08Z","content_type":null,"content_length":"24181","record_id":"<urn:uuid:824688e0-0c66-4eea-88f1-b7bd97ddc9f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex Number Find the complex number $z$ which satisfies both $|z-3i|=3$ and $arg(z-3i)=\frac{3\pi}{4}$. Thanks in advance. Hmmm you mean substituting $z-3i=a+ib$ ? $3=|z-3i|=\sqrt{a^2+b^2}$ $3 \pi /4=arg(z-3i)=arctan \frac ba$ ---> $\frac ba=\tan (3 \pi /4)=-1$ $a=-b$ , $a eq 0$ Substituting in the modulus : $3=\sqrt{a^ 2+a^2}$ $9=2a^2$ $a=\pm \frac{3}{\sqrt{2}}=\pm 3 \cdot \frac{\sqrt{2}}{2}$ But now, how to get if it's + or -, I still have to think about it Last edited by Moo; June 19th 2008 at 11:05 AM. A friend of mine explained it to me. While dealing with squared numbers, it's not bijective over all real numbers. So you can't talk in terms of equivalence. That is to say that when you do the substitution here, you will have to check back if the results you've got satisfy the conditions. If $a=3 \cdot \frac{\sqrt{2}}{2}$, then $b=-3 \cdot \frac{\sqrt{2}}{2}$ $arg(z-3i)=arg \left(3 \cdot \ frac{\sqrt{2}}{2}-i \cdot 3 \cdot \frac{\sqrt{2}}{2}\right)=arg \left(\frac{\sqrt{2}}{2}-i \cdot \frac{\sqrt{2}}{2}\right)= -\frac{\pi}{4} eq \frac{3 \pi}{4} \quad \square$ Then, try out $a=-3 \cdot \frac{\sqrt{2}}2 \dots \dots \dots \dots \dots \blacksquare$ A geometrical approach would be to note that: 1. $|z-3i|=3$ defines a circle of radius 3 and centre C at z = 3i. 2. $\text{arg} \, (z-3i)=\frac{3\pi}{4}$ defines a ray with terminus at z = 3i and making an angle $\frac{3\pi}{4}$ with the positive direction of the horizontal. It's then easy to see that the required value of z is the point A of intersection of the circle and the ray. There's an obvious isosceles triangle AOC (OC and AC have length 3 and angle ACO = $\frac{\pi}{2} + \frac{\pi}{4} = \frac{3 \pi}{4}$. (O is the origin). A small bit of geometry and trigonometry, a gentle caress and the triangle gives it all to you: $\text{arg} \, (z) = \frac{\pi}{2} + \frac{\pi}{8} = \frac{5 \pi}{8}$ $|z| = OA = 6 \cos \frac{\pi}{8} = 3 \sqrt{2 + \sqrt{2}}$. Last edited by mr fantastic; June 19th 2008 at 04:05 PM. Reason: Made the triangle give it all to me
{"url":"http://mathhelpforum.com/trigonometry/41988-complex-number.html","timestamp":"2014-04-19T05:20:57Z","content_type":null,"content_length":"58530","record_id":"<urn:uuid:238ae130-40d8-4f1e-b107-48c7947b0486>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Milnor's cartography problem up vote 14 down vote favorite Let $\Omega$ be a round disc of radius $\alpha<\pi/2$ on the standard sphere. It is easy to construct a $(1,\tfrac{\alpha}{\sin\alpha})$-bi-Lipschitz map from $\Omega$ to the plane. Is it true that any convex domain $\Omega'$ on $S^2$ with the same area as $\Omega$ also admits a $(1,\tfrac{\alpha}{\sin\alpha})$-bi-Lipschitz map to the plane? • This problem appears in Milnor's A problem in cartography. Amer. Math. Monthly 76 1969 1101--1112. • I spent quite a bit of time to solve it, but without success. I only noticed that if one exchange "area" above to "perimeter" then the answer is YES. dg.differential-geometry geometry open-problem 3 In the non-convex case: Are you defining distance as spherical distance, or distance along paths that stay in your domain? (For spherical distance and a global lipschitz condition, I think there are quick counterexamples.) – Martin M. W. Nov 14 '09 at 15:33 Right, the condition is local. – Anton Petrunin Nov 14 '09 at 15:58 Could it be that for the nonconvex problem you still need that the set be simply-connected or something? Otherwise you can take a small-area neighborhood of a "triangulation by tiny triangles"'s 1-skeleton and I think it makes a counterexample, since if none of the grid's 1-cycles expands a huge amount, then the inverse map has a huge lipschitz constant. – Mircea Mar 22 '12 at 12:39 1 @Mircea, well, let's do convex first. You are definetely right if the Lipschitz condition is global; if it is only local then I am not sure. – Anton Petrunin Mar 22 '12 at 19:14 @Anton Petrunin, I agree, the maps could still "crumple" the skeleton while keeping the local condition. Can I ask you how it all works in the case of perimeter? I thought that if one takes the (globally)1-lipschitz map defined just on the boundary whose image encloses maximum area, then that should extend to the wanted map (as the projection did for the disk), but I got stuck in proving that. – Mircea Mar 24 '12 at 11:55 show 2 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged dg.differential-geometry geometry open-problem or ask your own question.
{"url":"http://mathoverflow.net/questions/5479/milnors-cartography-problem","timestamp":"2014-04-20T16:01:01Z","content_type":null,"content_length":"51978","record_id":"<urn:uuid:1bdb681f-c595-497a-9a92-94b096a31c78>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics A $k$-nacci sequence in a finite group $G$ is a sequence of group elements ${x}_{0},{x}_{1},\cdots ,{x}_{n},\cdots$ for which, given an initial generating set ${x}_{0},\cdots ,{x}_{j-1}$ for $G$, each element is defined by ${x}_{n}=\left\{\begin{array}{cc}{x}_{0}{x}_{1}\cdots {x}_{n-1}\hfill & \text{for}\phantom{\rule{4.pt}{0ex}}j\le n<k\hfill \\ {x}_{n-k}{x}_{n-k+1}\cdots {x}_{n-1}\hfill & \text{for}\phantom{\rule {4.pt}{0ex}}n\ge k·\hfill \end{array}\right\$ A $k$-nacci sequence certainly reflects the structure of $G$. A finite group $G$ is called $k$-nacci sequenceable if there exists a $k$-nacci sequence of $G$ such that every element of $G$ appears in the sequence. It is shown that a $k$-nacci sequence in a finite group $G$ is simply periodic. This leads to a complete description of the 2-nacci sequenceable groups. A 2-nacci sequenceable group is 20D60 Arithmetic and combinatorial problems on finite groups 11B39 Fibonacci and Lucas numbers, etc. 20F05 Generators, relations, and presentations of groups
{"url":"http://zbmath.org/?q=an:0758.20006","timestamp":"2014-04-18T08:36:39Z","content_type":null,"content_length":"23390","record_id":"<urn:uuid:675331c3-864c-499b-ac19-8b39e76b0ed8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/travisbrown372/medals","timestamp":"2014-04-21T12:42:11Z","content_type":null,"content_length":"100014","record_id":"<urn:uuid:6fe6bd06-f194-4617-a7ea-5508520f962f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
3937 -- Parlay Wagering Parlay Wagering Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 116 Accepted: 20 Parlay wagering offers sports bettors the opportunity to win a large sum of money from a small initial wager. A parlay wager is a combination of individual independent wagers that only pays if no individual wager loses. The payout from each wager is applied or “parlayed” to the next wager in turn. If any individual wager loses, the bettor receives nothing. If any individual wager is a tie or “push”, that wager is effectively ignored, reducing the ultimate payout. The sports book quotes the payout rate for an individual wager as a “money line”, a non-zero integer in the range -2000 to 2000. To compute the payout for a successful wager, the money line is converted to a decimal multiplier as follows: if the money line is positive, it is divided by 100 to obtain the multiplier. If the money line is negative, the absolute value is divided into 100 to obtain the multiplier. The multiplier is always truncated to three digits after the decimal point. The wager is multiplied by this multiplier to determine the amount won. The amount won is truncated to the cent (the sports book keeps the fractional cents). Consider the following example for a five-way parlay wager: The maximum payout for any parlay wager is $1 million. If the calculated total exceeds that amount, the actual total returned will be $1 million. Write a program that will calculate the total amount returned for a series of parlay wagers. For each parlay wager, your program is to print the total amount returned in dollars and cents on a single line starting in the first column without embedded or trailing whitespace. Print the leading dollar sign and insert commas at the millions and thousands positions as needed. Input will consist of several wagers. The first line of input to your program will contain the total number of parlay wagers as a single positive integer. Each wager that follows will be represented by a series of lines. The first line of each parlay wager contains the initial bet and the count of individual wagers as integers separated from each other by a single space. The following lines represent the individual wagers, one per line. Each individual wager is given as its money line followed by a single space and the result of the wager (“Win”, “Tie”, or “Loss”). For each parlay wager, your program should print one line containing the total amount returned in dollars and cents. Print the leading dollar sign and insert commas at the millions and thousands positions as needed. Sample Input -170 Win -160 Win 125 Win -135 Win -140 Win 100 Win -100 Tie -250 Win 135 Tie 265 Tie 1500 Win 120 Win 130 Win 100 Loss 300 Tie Sample Output
{"url":"http://poj.org/problem?id=3937","timestamp":"2014-04-24T00:12:11Z","content_type":null,"content_length":"8299","record_id":"<urn:uuid:3b158615-c185-4be1-9fba-ef565803bfae>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Data Structures (0368.2158) Prof. Hanoch Levy (hanoch@math.tau.ac.il) Dr. Yossi Matias (matias@math.tau.ac.il) Teaching Assistants: Oded Schwartz (odeds@math.tau.ac.il) This page will be modified during the course, and will outline the classes Text books: Intoduction to Algorithms, by Cormen, Leiserson and Rivest. Data Structures and Algorithms, by Aho, Hopcroft and Ullman. The course follows both books. Recommended purchase - first book (to be used by other courses). Course syllabus: The course will deal with data strucutres and their use in the design of efficient algorithms. Subjects: Growth of functions and asymptotic notation; amortized analysis; recurrences: the substitution, master, and iteration methods; elementary strucutres: lists, stacks, queues; trees: ordered trees, binary trees, labeled trees and expression trees; set representation and manipulation; dictionary and hash tables. Tentative course outline • Class 1 : Recursion (review) • Class 2 : Master Theorem • Class 3 : Elementary Structures: Lists, Stacks, Queues. Doubling and amortized complexity. • Class 4 : Trees: basic concepts. Ordered tress, labeled trees, expression trees. Binary trees and Huffman Coding. • Class 5 : Dictionary and Hashing. Open and Closesd Hash. Expected value complexity analysis. Hash functions. Rehashing. Reorganization. Universal Hashing. • Class 6: Hash (continue): Perfect Hash. • Class 7: Multilist, sparse matrices, multiple-representation of data, Priority Queue (Heaps). • Class 8: Binary search trees • Class 9 : Red-Black Trees. • Class 10 : 2-3 trees, B-trees, Merge-find trees • Class 11 : Merge-find trees (cont.). Near Constant Complexity. Sorting: Simple sort, Quick Sort (algorithm, worst-case and average complexity). • Class 12 : Sorting: Heap Sort and use for partial sort, bin sort. Order Statistics. For a list of actual material covered so far click here Course material These are partial set of course notes (PowerPoint presentation in Hebrew). The final grade will be composed of the following: Final Exam: 80%, Homework assignments: 10% (N-1 best assignments), Final project: 10%. Final project is MANDATORY (that is, if you do not hand it in, you FAIL!) Tirgul, homeworks and project Details will be given in the Tirgul Home Page. Last updated October 10 , 1999
{"url":"http://www.cs.tau.ac.il/~matias/courses/ds.99-00.html","timestamp":"2014-04-20T16:37:54Z","content_type":null,"content_length":"4456","record_id":"<urn:uuid:50932745-e127-4cdc-ad3d-faacb1e71f14>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply Hi John-o; There are a lot of solutions to that equation. Are there any constraints you have left out? We can treat this with a few more constraints as an Egyptian fraction. One possible solution is So we have a=1 and ab -1 =14 and b = 15 Substituting into the top equation: x = 15 , y = 210 is one solution. There 5 solutions with positive integers and x ≠ y. They are For a general solution that gets all solutions All solutions are now generated by adding 14 to the divisors of 196.
{"url":"http://www.mathisfunforum.com/post.php?tid=18344&qid=237722","timestamp":"2014-04-16T13:17:57Z","content_type":null,"content_length":"20802","record_id":"<urn:uuid:265ec4c7-de7e-400e-8460-47f3c62fe463>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: proof representation noodling, cwm, euler From: Jos De_Roo <jos.deroo@agfa.com> Date: Tue, 25 Mar 2003 23:09:53 +0100 To: "Dan Connolly <connolly" <connolly@w3.org> Cc: Jos De_Roo <jos.deroo@agfa.com>, Sandro Hawke <sandro@w3.org>, Tim Berners-Lee <timbl@w3.org>, www-archive@w3.org Message-ID: <OF5FF2F74D.450BDDD1-ONC1256CF4.0078FF38@agfa.be> [nice to see this while having a holiday and making some primitive attempts at creating > Points in random order: > * log:conclusion without log:includes seems to be > of very limited utility. You have to be very careful > to be sure the deductive closure is finite. > log:concusionIncludes seems better; cwm can > implement it as a composition of conclusion and includes, > but documents last longer than code, and our > knowledge bases should move away from using > log:conclusion. Still don't grok log:concusionIncludes and we haven't been using log:conclusion so far. The implementation of cwm builtins is on my todo stack, but... OK, there are still weekends... > I think Sandro made the analagous point about --think > without --filter. Euler has no way to offer such > an interface. The common interface they share is: > given a list of premises, find one or more > proofs of _these_ conclusions/conjectures. Right, the _these_ have to be pointed out, but we could think about an auto generated query goal which is the conjunction of ?S each_verb_occuring_in_the_KB ?O. That could be worked out I think. > I think you can use a degenerate { ?S ?P ?O } > to find all the simple factual conclusions. > I think cwm gives up in that case, but it could > probably be taught not to. Euler probably > doesn't go so fast in that case. That's right and right now it will even give up immediately as the intial goal verb is unbound... > Euler would need a "don't justify the conclusion; > just instantiate the variables and assert the result" > mode in order to work for circles-and-arrows, > travel tools, /TR/, and the like, That's another very good point, to leave out the argument premises in such cases and just give the conclusions (I had some testcases so far which could use that, such as find all ?S = ?O which we need in our prepare method to do the subtitution of equals for equals, but right now it's hacked in the code itself) > * The proof theory literature that I've been reading > treats inference rules as (computational) functions > that take proofs as arguments and return proofs > as results. > So for example andI takes a proof of A and a proof of B > and returns a proof of A /\ B: > They write these using x: T notation, which you > can read as "T is a proof of x" (they also tell > you to read it as "x is of type T" though I find > that almost superfluous). > A: pfA > B: pfB > ======== > A /\ B : andI(pfA, pfB) > >From that viewpoint, rdf simple entailment is > a function that takes a proof of A and returns > a proof of B whenever A log:includes B. > { ?PFA :proves ?A. > ?A log:includes ?B } <=> > { [ is :rdfSimpleEntailment of ?PFA ] :proves ?B. > ?PFA :proves ?A. } > * I'm starting to think our proof representation should use > this functional structure, though I can't say for sure > why. I haven't looked > at timbl''s proof representation design closely > yet. > * The systems that work this way seem to be > stratified. I'm not sure if that's critical to > making it work at all or just an artifact > of design preferences. > * log:includes is one entailment relationship; > log:conclusionIncludes is another; each entailment > relationship has an analagous inference rule form. > { ?PFA :proves ?A. > ?A log:conclusionIncludes ?B } <=> > { [ is :hornClauseResolution of ?PFA ] :proves ?B. > ?PFA :proves ?A. } So far we haven't been working with explicit proof vocabulary as we were fine with = for the bindings and => for the sequent The so called proof thing is a SOUND ARGUMENT: An ARGUMENT is a pair of things: a set of sentences, the PREMISES; a sentence, the CONCLUSION. An argument is VALID if and only if it is necessary that if all its premises are true, its conclusion is true. An argument is SOUND if and only if it is valid and all its premises are true. A sound argument can be the premis of another sound argument. and for examples/testcases see The thing is also that one can query again with the proof as query (as a kind of proof validation or as a continuation) > * I think Jos's research exploits the > "proofs as programs" Curry-Howard isomorphism > and actually provides an efficient implementation of > derived inference rules once it has prooved that > they follow from basic inference rules. It is definetely the case that proofs correspond with stepwise procedures which can be compiled into running code (I just wanted to be able to work on some more testcases for that, cases we see for some specific algorithms...) > This looks like an extremely valuable mechanism > to support "diagonalization" that will be critical > to keeping proof sizes manageable. Haven't made the connection to that "diagonalization" any pointer? > * datatype literals are functional terms. Hmm... I'm afraid we are again (mis)using forward paths for that purpose in and in our latest attempt at skolem functions in {?x :b ?y} => {?x :k (?x ?y).:sf1}. {?x :b ?y} => {(?x ?y).:sf1 :m ?y}. to horn {?x :b ?y} => {?x :k [ :m ?y]}. > -- > Dan Connolly, W3C http://www.w3.org/People/Connolly/ -- , Jos De Roo, AGFA http://www.agfa.com/w3c/jdroo/ Received on Tuesday, 25 March 2003 17:10:20 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 7 November 2012 14:17:28 GMT
{"url":"http://lists.w3.org/Archives/Public/www-archive/2003Mar/0070.html","timestamp":"2014-04-16T05:27:22Z","content_type":null,"content_length":"15020","record_id":"<urn:uuid:5eaf7794-3980-495c-bdd3-f6e6e22b54eb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert Time to Decimal in Excel The following examples convert time to decimal in Excel, using two different methods. The first method uses the fact that Excel stores times as decimals, with the number 0 equal to the time 00:00:00, the number 0.25 equal to the time 06:00:00, the number 0.5 equal to the time 12:00:00, etc. Because of this system, a time can be converted to hours, minutes or seconds by multiplying it by the number of hours, minutes or seconds in one day. The second method that we use to convert a time to a decimal uses the Excel Hour, Minute, and Second functions to extract the hours, minutes and seconds from an Excel time. Convert Time to Hours in Excel The following spreadsheets show two different formulas that can be used to convert a time to hours in Excel. The numerical value returned from these formulas is a decimal, which includes an integer portion, representing the number of whole hours, and a decimal portion, representing the minutes and seconds. Method 1: Simple Multiplication The simplest formula is shown below. In this case, the time is converted to hours by simply multiplying it by 24 (the number of hours in one day). A B 1 Time (hh:mm:ss) Hours (decimal) 2 02:30:00 =A2 * 24 For the time 02:30:00, in cell A2 of the example spreadsheet above, the formula returns the value 2.5. Ie. 2 hours 30 minutes and 0 seconds is equal to 2.5 hours. Method 2: Using the Excel Time Functions The second formula that can be used to provide the same result uses the Excel Hour, Minute, and Second functions. Although this formula is much longer than the simple multiplication shown above, some people prefer it, as it doesn't rely on the user having an understanding Excel's time system. This formula is shown in the spreadsheet below: A B 1 Time (hh:mm:ss) Hours (decimal) 2 02:30:00 =HOUR(A2) + MINUTE(A2) / 60 + SECOND(A2) / 3600 Convert Time to Minutes in Excel Cell B2 of the spreadsheets below show the two formulas that can be used to convert a time to minutes in Excel. The numerical value returned in this case, is a decimal, that includes an integer portion, representing the number of whole minutes, and a decimal portion, representing the seconds. Method 1: Simple Multiplication To convert a time to minutes, the time is multiplied by 1440, which is the the number of minutes in one day: A B 1 Time (hh:mm:ss) Minutes (decimal) 2 02:30:30 =A2 * 1440 For the time 02:30:30, in cell A2 of the above spreadsheet, the formula returns the value 150.5. Ie. 2 hours 30 minutes and 30 seconds is equal to 150.5 minutes. Method 2: Using the Excel Time Functions The same result can also be obtained using the Excel Hour, Minute and Second functions, as shown in the spreadsheet below: A B 1 Time (hh:mm:ss) Minutes (decimal) 2 02:30:30 =HOUR(A2) * 60 + MINUTE(A2) + SECOND(A2) / 60 Convert Time to Seconds in Excel The spreadsheets below show the formulas that can be used to convert a time to seconds in Excel. Method 1: Simple Multiplication The easiest way is to simply multiply the time by 86400, which is the the number of seconds in one day: A B 1 Time (hh:mm:ss) Seconds (decimal) 2 02:30:30 =A2 * 86400 For the time 02:30:30, in cell A2 of the above spreadsheet, the formula returns the value 9030. Ie. 2 hours 30 minutes and 30 seconds is equal to 9030 seconds. Method 2: Using the Excel Time Functions The same result can be obtained using the Excel Hour, Minute and Second functions, as shown below: A B 1 Time (hh:mm:ss) Seconds (decimal) 2 02:30:30 =HOUR(A2) * 3600 + MINUTE(A2) * 60 + SECOND(A2) Formatting the Result When you convert a time to a decimal, the cell containing the result may have the wrong formatting (e.g. the result may be displayed as a time, instead of a decimal). In this case, you will need to format the cell to have the Excel 'General' format. To do this: • Right click on the cell(s) to be formatted • Select the option Format Cells... • Ensure the Number tab is selected in the window that pops up • Select the option General from the list of Categories and click OK
{"url":"http://www.excelfunctions.net/Convert-Time-To-Decimal.html","timestamp":"2014-04-19T04:23:41Z","content_type":null,"content_length":"19964","record_id":"<urn:uuid:92a43069-819e-48de-b13e-a9c5e043d61d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Time and Velocity (Big Four?) A chipmunk drops an acorn from a branch 6.0 m above the ground. a) How long is it in the air? b)How fast is it going when it hits the ground? I decided my given was d= 6.0 m v[i]= 0 m/s So I used the distance formula (d=v[i]+1/2*a*t^2) and solved for time. I got about 1.1 s. For the next part I used v[f]=v[i]*t+a*t and solved. I got a)1.1 seconds b) 10.78 m/s Is this correct? Method good, but the fact you rounded off the time before calculating velocity is not desirable. Without rounding the vel calculates to 10.84 Given that orignal data was to 2 figures only, the velocity should be rounded to 2 figures, which will give 11 in both cases. You even get the same result if you stretch to 3 figures - 10.8, but if you quote 4 figures; 10.78 is incorrect.
{"url":"http://www.physicsforums.com/showthread.php?p=3660234","timestamp":"2014-04-21T07:15:41Z","content_type":null,"content_length":"26506","record_id":"<urn:uuid:9370495d-4041-4ee5-93fb-c6a9cfa60a4d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Free Algebra 1 Resource Replies: 9 Last Post: Dec 2, 2008 12:25 PM Messages: [ Previous | Next ] Kirk Weiler Free Algebra 1 Resource Posted: Nov 20, 2008 12:10 PM Posts: 13 From: Arlington I wanted to submit a post to let those who don't already know that an electronic textbook for the New York State Algebra 1 program is available completely free at: High School Registered: 10/30/ http://www.teacherweb.com/ny/arlington/algebraproject/hf0.stm It was written by high school and middle school teachers in the Arlington Central School District and is being used by students in our district. We had good success with it last year, as measured by the amazingly generous June Algebra Regents exam. Still, there are about 120 lessons and homeworks available on topics ranging from classic algebra (equations, systems, quadratics, rationals) to right triangle trig, to measurement and error. Feel free to download and use these anyway you like, as review or even as a primary text as we do. I know this has been posted before, but I thought I would give an update in case anyone had missed it before now. Kirk Weiler Editor-In-Chief of the Arlington Algebra Project Date Subject Author 11/20/08 Kirk Weiler 11/20/08 Re: Free Algebra 1 Resource L.C. 11/24/08 RE: Free Algebra 1 Resource RITA HERBST 11/24/08 Re: Free Algebra 1 Resource Robert Hazen 11/24/08 AAP 11/24/08 Re: Geometry - Hinge Theorem Arlane Frederick 11/24/08 RE: Geometry - Hinge Theorem Kate Nowak 11/24/08 Re: Geometry - Hinge Theorem Arlane Frederick 12/2/08 Re: RE: Free Algebra 1 Resource Kirk Weiler 12/2/08 RE: RE: Free Algebra 1 Resource RITA HERBST
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1858574","timestamp":"2014-04-19T05:00:57Z","content_type":null,"content_length":"27542","record_id":"<urn:uuid:b24ec51e-169e-46de-926a-b217882b0a74>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Dear Hasbro, If you really want to make money... [Archive] - SirSteve's Guide 02-16-2002, 01:36 PM Here is a simple way for Hasbro to make some SERIOUS cash with little or no expense to the company. Reproduce the following figures from existing molds on their original cards. These are guaranteed money makers that could be sold in a non-movie year. Since they have already been done, the cost of mfg. would be significantly less than the cost of R and D for new figures. In addition, these are figures that many kids would also like to get, because many are core characters or just have cool features that kids like. Here is the list: Power of the Force: CommTech figures: 3X Admiral Motti 3X Princess Leia 3X R2-D2 with holo Liea 3X Stormtrooper Cinema Scenes: 4X Mynock Hunt 4X Death Star Escape Freeze Frame: 3X Leia Hoth 3X AT-AT Driver 3X Death Star Droid 3X Pote Snitkin 6X Ree-Yees 6X Death Star Trooper 4X Darth Vader (First Pose) 4X Stormtrooper 4X Snowtrooper 4X AT-ST Driver 4X TIE Fighter Pilot 4X Grand Moff Tarkin 4X Emperor Palpatine 4X Emperor's Royal Guard 4X Garindan 4X Ishi Tib 4X Zuckuss 4X Captain Piett 6X Darth Vader with Removable Helmet 6X Boba Fett 12X Weequay 12X Sandtrooper Expanded Universe: 3X Mara Jade 3X Luke Skywalker from Dark Empire 3X Princess Leia from Dark Empire 3X Imperial Sentinel from Dark Empire 3X Clone Emperor Palpatine from Dark Empire 3X Grand Admiral Thrawn from Heir to the Empire 3X Kyle Katarn from Dark Forces 3X Spacetrooper from Heir to the Empire 3X Dark Trooper from Dark Forces Episode 1: 4X Pit Droid 2-pack 4X Queen Amidala (Naboo Battle) 4X Sio Bibble 4X Darth Sidious Hologram 4X Naboo Royal Guard (with removable helmet) 4X Jar-Jar Binks (Naboo Swamp) 4X TC-14 4X R2-B1 4X Yoda with Episode 1 on the card Cinema Scene: 8X Watto's Box Power of the Jedi: 6X Battle Droid (Security) 6X Battle Droid (Boomer Damaged) 6X Scout Trooper (Clean) 6X Scout Trooper (Dirty) 6X Rebel Trooper (Tantive 4 Defender) 6X Imperial Officer 12X Sandtrooper (though you could sell more of these by changing the colors of their should pauldron, say 3 colors from the movie. You could pack 4 of each color per case.) 4X Coruscant Guard 4X Gungan Warrior 4X Mon Calamari Officer 3X Lando Calrissian 3X Jar-Jar Binks (Tatooine) 3X R2-Q5 3X Tessek (AKA case # 84455.000 S) this supossedly shipped in May of 2001 but was hard to find. These are broken down by suggested case assortment. You will find the mix is made up of "army builders" as well as "hard to find" merchandise. If Hasbro were to put out these cases, they would be able to DOUBLE their revenue from the Star Wars line during non-movie years. These could be done inbetween shipments of new merchandise. After the initial release of the AOTC line, begining around fall of 2002 and ending in early 2005. Just a thought. ;)
{"url":"http://www.sirstevesguide.com/archive/index.php/t-3859.html","timestamp":"2014-04-20T14:21:20Z","content_type":null,"content_length":"13619","record_id":"<urn:uuid:2cc8320a-d056-48a7-9c2f-41829648802b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Waukegan Algebra Tutor ...Next, I focus on vocabulary definitions. I believe it is essential to understand the material language and how to verbally explain processes. Then, I assist in tackling the problems. 31 Subjects: including algebra 2, algebra 1, chemistry, physics ...My undergraduate was a major in chemistry with a minor in math after which I completed a masters in chemistry. I was the top student in all my chemistry classes, so I have a clear understanding of all the concepts to do with chemistry. I will be able to help you or your child to understand thes... 20 Subjects: including algebra 2, algebra 1, chemistry, physics ...The focus of my undergraduate work was geometry and topology. I have taught statistics and probability, algebra and geometry. While discrete math courses vary greatly my background encompasses the topics included in this course. 24 Subjects: including algebra 1, algebra 2, calculus, precalculus ...I really enjoyed my time as a tutor there. Being able to help in multiple areas is probably my main asset. I love biology and can help your child with any problems that they are having in this 22 Subjects: including algebra 1, algebra 2, reading, geometry As an experienced professional tutor, and middle school classroom teacher, I can help improve your child's math and study skills. I have a background in tutoring a variety of subjects in middle school including pre-algebra, algebra 1. Also I have developed and taught a study skills program aimed at middle school and high school students. 18 Subjects: including algebra 1, reading, writing, grammar
{"url":"http://www.purplemath.com/Waukegan_Algebra_tutors.php","timestamp":"2014-04-16T22:00:09Z","content_type":null,"content_length":"23697","record_id":"<urn:uuid:9c2afcb7-b543-4adf-aacc-44f2d1c160b9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Mary wants to build a triangle-shaped garden. The garden, represented by the triangle below is bordered on one side by her chicken house and the other by a park. How long will the garden be on the far side? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f7a720de4b0884e12aa9990","timestamp":"2014-04-20T06:32:11Z","content_type":null,"content_length":"67858","record_id":"<urn:uuid:4f66fc53-8e43-41bc-85f3-e848fd0036cf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Interview with Prof. Lynn Arthur Steen - Part I Update on 9-14-07: Part 2 of the interview is now posted! [The comments and reactions are beginning to pour in here and on others' blogs. SteveH's and mathmom's give-and-take in the Comments section is a must read -- it's a blog of its own! Part 2 of the interview will be coming on Fri and Mon.] As reported a few days ago, Prof. Steen, one of the most highly respected voices in mathematics education, graciously accepted an invitation for an online interview at MathNotations. He has been a driving force for the reform of school mathematics for many years and was on the development team that produced NCTM's Curriculum and Evaluation Standards for School Mathematics. For the last few years, he has been involved in Achieve's commitment to developing world-class mathematics standards for K-8 and ADP's similar commitment to secondary mathematics. He will have much to say about these standards and the new Algebra II End-of-Course Exam that will be launched in the spring of 2008. He is a man of great integrity and towering commitment to quality education for all of our children . A few days ago, I emailed Prof. Steen a set of 19 questions that I felt reflected many of the concerns of my readership and, even more, confronted some of the major issues in mathematics education today. He agreed to reply to all of these, asking only that I publish his remarks in full. My role here was purely reportorial. This is not a debate. Once the questions were composed I stepped back and allowed him free rein. Prof. Steen replied thoughtfully and candidly within 48 hours. MathNotations will publish the interview in 2-3 segments to allow readers to absorb his replies and comment. If you've stopped here by the side of the road, tell your friends and colleagues about it. I invite fellow bloggers to spread the word across the blogosphere as well. Philosophically, Prof. Steen and I have much common ground, although we diverge on some key points. What I truly believe is that honest dialog is the only way we can move forward, end the Math Wars and reach a strong middle-ground position that best serves the interests of our children. Whatever your ideology may be, Prof. Steen's comments are profound and thought-provoking. Enjoy! I'll begin with my note of appreciation to Prof. Steen: Thank you Prof. Steen for your prompt yet thoughtful replies. This has been a new and rewarding experience for me and I know my regular readers (and perhaps new visitors) will read your comments with great interest, regardless of where their ideologies lie. You and I share many common views and yet we can respect our differences. In the end, we both want what is best for all our children. There are no easy answers to difficult questions, however I do believe, as you do, that dialogs like this will ultimately move us in the right direction. Thank you again for contributing to this process. Dave Marain MathNotations 9-11-07 Lynn Arthur Steen, St. Olaf College, September, 2007 1. Prof. Steen, your involvement in so many mathematics and science education projects is mind-boggling. At this time, what are your greatest concerns regarding mathematics education in the U.S.? That in our stampede for higher standards we are trampling on the enthusiasms, aspirations, and potential contributions of many students for whom mathematics is best approached indirectly. There is plenty of interesting mathematics in areas such as medicine, technology, business, agriculture, government, music, and sports, but students don't get to see these until large percentages have already given up on mathematics. It is true that mathematics unlocks doors to future careers. But we also need to open more doors to the world of mathematics. 2. Over a dozen years ago, Professor Schmidt, Director of the U.S. participation in TIMSS, made his famous comment about our mathematics curriculum being ‘an inch deep and a mile wide’. He also stressed the importance of having a coherent vision of mathematics education. Since then, fifty states have independently developed sets of mathematics standards and assessments. Although similar in some respects, they lack overall coherence and consistency of high expectations of our children. What is currently being done nationally as you see it to remedy this situation? Notwithstanding our constitutional tradition of federalism that leaves states responsible for education, some now suggest voluntary national standards as a cure for the incoherence and inconsistency that is evident in state standards. Indeed, Senator Dodd (D-CT) and Representative Ehlers (R-MI) have introduced just such a bill in the Congress. I rather doubt that there is sufficient political support for nationalizing education in this way. Nor do I think it would resolve the problem. It would simply shift the locus of inconsistency from written standards and assessments to teachers and students. More promising are efforts such as the American Diploma Project Network which is an ad hoc coalition of states that decided to work together on a common education agenda. This is not a "national" effort, but it is more in keeping with the traditions of our nation. Public distribution of comparative data is another strategy for reducing unwarranted inconsistency. Recent studies such as Mapping 2005 State Proficiency Standards Onto the NAEP Scales (NCES, June 2007) that compare states to the common scale established by the National Assessment of Educational Progress lead naturally to improvement motivated by competition or, in some cases, by embarrassment. Strategies that open more doors to mathematics are more likely to emerge in smaller jurisdictions, for the simple reason that innovation begins locally and the doors that need opening tend to have local roots. So I'm not terribly bothered by lack of coherence and consistency. I'd rather focus first on getting more students to learn more mathematics of whatever kind may interest them. What counts is that students gain sufficient experience with substantive mathematics—not just worksheets—to benefit from its power and, if possible, to appreciate its beauty. 3. What were some of the obstacles faced by Achieve’s Mathematics Advisory Panel, both at the K-8 level and for the secondary curriculum? Were many of the current conflicts in mathematics education (aka, the Math Wars) overcome by this Panel? If not, what issues remain? This is not a simple question! First, Achieve's formal Mathematics Advisory Panel (MAP) was constituted to work only on the K-8 level and produced Foundations for Success, a report with outcome expectations and sample problems for the end of grade 8. When work moved into the secondary level, it became part of the American Diploma Project (ADP) and operated with an evolving set of advisors representing all levels of mathematics and mathematics education. From the perspective of the "math wars," the original MAP panel was, for its time, a remarkably catholic forum. Strong voices from many different perspectives set forth conflicting views. Compromises were agreed to, and sometimes reversed after further discussion. Eventually a report emerged. No one was pleased with every detail, but I believe it is fair to say that everyone on the MAP committee agreed that as a whole it represented a good step forward. We reached this point by agreeing to set aside issues of pedagogy and to concentrate only on content. We further agreed that lists of expectations were less capable of conveying our intent than were rich examples. That is why the final report had 8 pages of expectations and 130 pages of examples. It was far easier for the diverse MAP members—protagonists, witnesses, and victims of the math wars—to agree on the quality of a problem than on the wording of a standard. We also choose to largely ignore the issue of calculators because it was one of the wedge issues on which we all knew that the panel could never agree. Some may view this as cowardice, and it may be that. However, it made possible the rest of the work and affirmed, in a sense, that issues such as this may best be left for local decisions. Another wedge issue we faced head-on, namely the place of quadratic functions and quadratic equations. Here we compromised, setting an ambitious bar for end of eighth grade at completing the square with a deliberate mandate to not employ the quadratic formula until the next algebra course. The purpose, of course, was understanding rather than calculation, a goal that in this case everyone around the table could support. Those on the panel with the most school experience worried that completing the square was much too ambitious. They were proved right in subsequent reviews from states who wanted to use the Foundations for Success as a guide for their own standards. Consequently, later Achieve documents dealing with the transition from elementary to secondary mathematics are much more realistic about just how much algebra can be expected for all students prior to ninth grade. Secondary mathematics is part of Achieve's ADP effort; the benchmarks together with sample postsecondary tasks appear in Ready or Not: Creating a High School Diploma That Counts (Achieve, 2004). There the contentious issue concerned the quantity of mathematics, especially of algebra, that should be required of all students for a high school diploma. A compromise was reached in which certain benchmarks, marked with an asterisk, were described as recommended for all but only required for those "who plan to take calculus in college." Of course, this asterisk mildly undermines the nominal goal of the ADP enterprise, namely, to set a uniform standard for an American high school diploma. These matters—the role of calculators, the amount of algebra—are but two of the issues that remain fundamentally unresolved both within the ADP networks and among individuals who care about school mathematics. Other sources of continuing disagreement concern the role of data analysis and statistics, the place of financial mathematics, the importance of arithmetic "automaticity" and a host of pedagogical issues that, as I noted, Achieve largely leaves to others. 4. I’ve expressed great concern on this blog about the lack of frontline teacher representation on these major panels, particularly the President’s National Mathematics Panel? I’ve reiterated my call for redressing this situation via numerous emails to the Panel and on this blog. To date, all such requests have been politely dismissed. How do you feel about the need for increased teacher representation on this and other panels? Was there more K-12 representation (current classroom educators) on the Mathematics Advisory Panel on which you served? The names of all those who advised Achieve on its MAP and ADP projects are listed in the reports of these projects. Different individuals contribute different types of work: some meet in panels; some review drafts; some write standards or contribute problems. My impression is that quite a few of Achieve's mathematics advisors have taught K-12 mathematics, but relatively few were serving as "frontline teachers" at the same time as they were helping with the Achieve work. Frontline teaching doesn't leave that much spare time. Generally, I find concerns about representation less important than those about relevant experience. Sometimes the complaint is about the lack of teachers, other times about the lack of mathematicians; often complaints are accompanied by qualifiers (e.g., "current classroom teachers," or "active research mathematicians") that appear to imply that those who do not meet the condition are somehow less capable. What matters is that a panel as a whole include individuals with a broad balance of experience, which for mathematics education certainly includes both mathematical practice and classroom teaching—but not necessarily all at the same time the panel is meeting. 5. Many critics of NCTM’s original 1989 Curriculum and Evaluation Standards for School Mathematics and the revision in 2000 have claimed there was not enough emphasis on the learning of basic arithmetic facts. In your opinion, is the issue primarily due to lack of clarity in the standards, or is there a real difference of position between NCTM and its critics on the importance of arithmetic facts?? What is your position on the relative importance of the automaticity of basic facts? There is a range of opinion about the importance of arithmetic facts within NCTM, within the broader mathematical community, and within the public at large. I understood the 1989 Standards to acknowledge this fact. A chief insight of statistics is recognizing the importance of variation. Student and adult skills with arithmetic vary, so the goals of mathematics education must take this into account. Almost all disputes about NCTM's standards arose because the historic absolutes of mathematics were replaced by alternatives and variations. In this sense, the critics were right: the Standards made mathematics "fuzzy" by insisting that most problems can be solved in more than one way. In fact, they can be. There is no dispute that knowing arithmetic facts is more desirable than not knowing them, and being quick ("automatic") is better than being slow. The issue is: how important is this difference in relation to other goals of education? It is a bit like spelling: being good at spelling is more desirable than its opposite, but there are plenty of high-performing adults—including college professors, deans, and presidents—who are bad spellers. They learn to cope, as do adults who don't instantly know whether 7 x 8 is larger or smaller than 6 x 9. For what it's worth, my "position" is that every child should be taught to memorize single digit arithmetic facts because if they do so everything that follows in school will be so much easier. But failure to accomplish this goal should not be interpreted as a sign of mathematical incapacity. Indeed, both students who achieve this goal and those who do not should continue to be stimulated with equal vigor by other mathematical topics (e.g., fractions, decimals, geometry, measurement), just like both good and bad spellers continue to read the same literature and write the same assignments. Part II is now published. I hope to hear from many of you! Dave Marain 26 comments: novemberfive said... Thanks very much for putting this together, Dave. So far I agree with Prof. Steen's answer to nearly every question, in particular his concerns about federal control of education. First, thank-you for taking the time to orchestrate the interview. I think I'm going to hold off on commenting in detail until the rest of the interview is posted. "That in our stampede for higher standards we are trampling on the enthusiasms, aspirations, and potential contributions of many students for whom mathematics is best approached indirectly." Indirectly = low expectations There is absolutely no basis for this comment other than opinion. I teach my son the value of hard work and mastery because he doesn't get it at school. If you base school on what makes students happy, they will never meet their potential. Top-down or thematic education, no matter how interesting, will never insure mastery of needed skills. The problem is that mastery of skills is hard work and there is linkage between mastery and understanding. Most schools devalue mastery. The classic example is Everyday Math, which thinks that mastery will somehow magically happen over time. Lessons are interrupted daily with a hodgepodge of Math Box flashbacks that they hope will help kids master the material at their own speed. It doesn't happen. The curriculum doesn't ensure mastery at any point in time, so it doesn't happen for many kids. It then doesn't matter how interested in math they are. It's too late. Educators are bound and determined to redefine math, but if they want to open career doors, they really need to take a good look at the Math SAT and work backwards. Mastery of skills is paramount. Spelling is to writers is NOT like math skills is to mathematicians. This is an ignorant analogy. Some students might do better with a slower approach, but it still needs to be rigorous. Unfortunately, most schools can't even do this. You either get onto the AP calculus track in high school (in spite of the math in K-8), or you are on a track to checkbook (nowhere) math. Many educators try to unlink mastery from understanding. It doesn't work. If educators can't find any other basis for education than their opinions, then please get out of my way. Choice is the only solution. I certainly expected that Prof. Steen's strong pro-reform views would evoke strong reactions. I encourage you to read ALL of his replies (I'll post the remaining ones on Fri and Mon) before forming definitive opinions. I hope that my views on math content, mastery, strong foundations, etc., have been articulated clearly on this blog. For this interview, I've chosen to neither defend nor refute Prof. Steen's positions. He certainly doesn't need me to defend him! There is much of interest remaining in the interview. Gaining insight into the process of how K-8 standards were negotiated is fascinating to me and may serve as a blueprint for resolving the current conflict. In the end, I believe it's futile to debate philosophies. It's far more productive to discuss actual content and examples of what kinds of problems students are expected to solve. This has been the raison d'etre for this blog from its inception. I also invite you to read Prof. Steen's commentary on the K-8 Achieve Math Standards. It may help you to understand his positions even better: Download this commentary from the right side of the page. It was eye-opening for me. I'm more than happy to get into details. "What counts is that students gain sufficient experience with substantive mathematics—not just worksheets—to benefit from its power and, if possible, to appreciate its beauty." Worksheets don't contain substantive mathematics? Students can't even begin to understand "substantive mathematics" without some level of mastery. I can give a talk to kids about the mathematics of computer games but there is very little they will understand without a lot of foundational math. As far as standards go, they are driven by the lowest common denominator. They are driven by the status quo, not by international standards of what can be done. They can talk all they want about "understanding" or "substantive mathematics", but what they are talking about is not a slower route to math, but a worse one. Education is not about cutoffs, and it's not about equal education. It's about individual educational opportunity. It doesn't matter whether standards are local or national. Low expectations are low expectations. Affluent kids get private schools, tutors, and help at home. They get high expectations from their parents. Smart urban kids get low cutoff standards and low expectations. Rising low cutoff test scores make educators happy, but they don't help these kids at all. No arguments here, Steve! I've been saying exactly this for months. However, you need to read the K-8 standards adopted by Achieve. They are not a lowest common denominator. Many were based on content from the highest-performing nations. If you've been examining the Grade 6B Placement math test from Singapore, you'd know what I mean. I've posted several articles about this in the past couple of weeks. These and related issues come up during the rest of the interview. The issue of mastery is still a major concern and a sticking point however. This has to be resolved for education to move forward. Thank you for your astute comments. My goal in all of this is put the issues out on the table and perhaps invite more open dialog between opposing factions. I may continue to do this with other interviews representing all sides of the debate. Many parents, educators and professionals are angry and feel very strongly about what's wrong with American education and what needs to be done to fix it. However, there is also much misunderstanding. Only when representatives of all parties sit down and face other can there ever be any meeting of the minds. The lowest common denominator? Everyone wants what's best for their Steve, a famous applied mathematician wrote an extraordinary book entitled, "Why Johnny Can't Add: the Failure of the New Math". Recognize the title? Recall the author? Morris Kline? I believe he was Professor of Mathematics, NYU. Know what year it was published? 1972 or 1973. See how far we've come! I will have more to say about his views and how they compare to the debate 35 years "Only when representatives of all parties sit down and face other can there ever be any meeting of the minds. The lowest common denominator? Everyone wants what's best for their children." I seriously doubt that there can be a meeting of the minds when it comes to basic assumptions and expectations. Mastery is a major issue. You can set all of the standards you want. You can force a school to use Singapore Math. If they don't believe in specific grade-level goals of mastery, then even that will fail. Mastery in math is not like spelling. This is not about middle ground. Does there even have to be a meeting of the minds if you allow choice? Larger school districts can easily offer parents a choice in math curricula starting in Kindergarten. This should be the focus of the discussion, not middle ground. "Everyone wants what's best for their children." ... their own children. When it comes to education, this translates to lower expectations and slowly rising cutoff scores. Individual kids are important right now, not slowly rising averages to meet minimal goals in 2014. A slowly rising tide will lift all boats, but it will not teach kids to fly. I didn't take the comment you quoted ("...for whom mathematics is best approached indirectly.") as meaning low expectations. To me, this meant instead of teaching students that math is only a series of algorithms we must show students the beauty and application of mathematics. If they are so turned off by the brute force approach, does it matter how good they are at it? Also, I interpreted "worksheets" as busy work. Something that gives the students 50 of the same type of problems to "solve" without extensions to include the meaning of the math, applications to real life, or synthesis with other concepts. As for the issue of mastery, it is my goal for my students can achieve mastery of concepts. But what if, for whatever reason, they can't? Should they spend the rest of their time limited to arithmetic? Never get to study algebra or geometry or stats? "... we must show students the beauty and application of mathematics." You need to give me an example of this and explain how this how this substitutes for mastery. Besides, these topics are not incompatible with a process that emphasizes mastery. "If they are so turned off by the brute force approach, does it matter how good they are at it?" "brute force approach"? Have you found an efficient road to mastery that doesn't involve hard work? "...50 of the same type of problems to "solve" without extensions to include the meaning of the math, applications to real life, or synthesis with other concepts." This is vague. Please give me specific examples. Please explain how you can have a steady diet of "real life" or "synthesis" (whatever they are) and still achieve mastery. Your implication is that there is not enough time for "real life" connections. This isn't true. This is just another way of saying low expectations. "As for the issue of mastery, it is my goal for my students can achieve mastery of concepts." There it is. Mastery of skills is not important. You've broken the link between mastery of skills and understanding. You have no basis for this. This is math appreciation, but the students don't have the knowledge or skills to really appreciate anything about math. "But what if, for whatever reason, they can't? Should they spend the rest of their time limited to arithmetic? Never get to study algebra or geometry or stats?" If they can't master arithmetic, then you can forget about algebra or statistics. How is this anything more than just your opinion? You can't redefine math and say that mastery of skills is not important. Math is cumulative and everything is based on mastery of skills. Skills are not rote. They are based on fundamental rules. Understand the rules and you understand math. THAT is the beauty of math. Math is cumulative and everything is based on mastery of skills. Skills are not rote. They are based on fundamental rules. Understand the rules and you understand math. I think to some extent, this depends on your definition of "mastery". When you talk about mastery of addition, are you talking about "understanding the rules" or about having instantaneous recall of all the addition facts up to a certain number? Same for multiplication facts. I've no argument with pushing kids to learn them. But I do have an argument with holding them back if they don't have them all memorized. Keep making them work on them, keep testing them, but in the meantime, let them move on to other things too (and perhaps they will see why knowing them would make their lives easier). I personally have a bachelor's degree in Pure Math and Computer Science and a master's degree (from MIT) in Electrical Engineering and Computer Science. I counted some of my addition facts on my fingers right through high school. I might have learned them sooner if someone had forced the issue, but I might instead have become turned off from mathematics. But not knowing them certainly didn't prevent me from understanding higher level mathematics. I have an 11yo son. Last year as a 5th grader, he scored in the top 5% on the 4th to 6th grade Math Olympiads for Elementary and Middle school students (and well above the 50th percentile, probably around 75th, for the 7th and 8th grade contest). Because he has always completed his math work (pre-algebra) quickly and accurately his school assumed he knew his times tables cold, but it turns out that when they tested them this fall, he couldn't complete 60 problems in 3 minutes. So, obviously, those need more work, but it doesn't mean he can't and won't move on to algebra at the same time. Where I teach, I take each group of elementary students once every two weeks, and the middle school students twice every two weeks. (I don't say once a week, because I have them twice in a row, and then not at all the next week.) I teach problem solving topics such as combinatorics, probability, number theory, etc. based on MOEMS and MathCounts type problems. I teach them to all students, even those struggling with mastery of their basic skills. Kids at all levels are able to understand the basic concepts, and it helps spark their interest in math and motivate them to master the basic skills. In our experience over the past 6 or so years of working this way, allowing the struggling kids to participate results in greater motivation and success in mastering their basic skills. I realize that this is all anecdotal evidence, but it's more than random opinion, and it works very well for us! I've watched kids who think they're "bad at math" blossom in this system, and go on to take honors level math in high school. "When you talk about mastery of addition, are you talking about "understanding the rules" or about having instantaneous recall of all the addition facts up to a certain number?" In that case, both. For other cases, there are different levels of understanding. For example, many math curricula push "understanding" of multi-digit multiplication in fourth or fifth grade. This is possible up to a point, but full understanding will have to wait until algebra. However, mastery in the lower grades requires instant recall of basic add and subtracts to 20 and the multiplication table to at least 10 X 10, by the end of third grade! No later. "But I do have an argument with holding them back if they don't have them all memorized. Keep making them work on them, keep testing them, but in the meantime, let them move on to other things too (and perhaps they will see why knowing them would make their lives easier)." But how does this work in practice? Everyday Math is based on no specific mastery at any point in time. The problem is that it never gets done. My opinion is that many schools and teachers don't think that it's important at any time. They never enforce mastery. At my son's school last year in fifth grade (Everyday Math), they had students who couldn't immediately give the sum of 7+8. Their mastery of the times table was worse. These are smart kids. There is absolutely no reason they could not memorize these facts. The teacher had to start an after-school program to fix the problem. (At least she realized that there was a problem.) " ...perhaps they will see why knowing them would make their lives easier." Self-motivation is always nice, but it's not a prerequisite. Either it's important to learn or it's not. Either they can handle the material or they cannot. If you leave it up to the kids, then that's the same as low expectations. New material in math requires mastery of old material. If you move along without mastery, you hurt both the kids who are behind and those who are up-to-speed. "But not knowing them certainly didn't prevent me from understanding higher level mathematics." But I bet you knew how to divide fractions and solve systems of equations. Small gaps or limitations in knowledge or skills (I have my share), aren't a guarantee of failure, but they should NOT be the basis of a curriculum. "... but it turns out that when they tested them this fall, he couldn't complete 60 problems in 3 minutes. So, obviously, those need more work, but it doesn't mean he can't and won't move on to algebra at the same time." Test him again with flash cards. It takes time to write. My sixth grade son tested in the top 1% in math in the country and might not (I'll have to check) meet the same goal, but now you're talking about cutoffs, not philosophy. My son's Everyday Math school (and very many others) don't enforce anywhere near that kind of standard because they don't believe in mastery. "... and it helps spark their interest in math and motivate them to master the basic skills." That's all very nice, but what do you do with the ones who receive no spark? "...allowing the struggling kids to participate results in greater motivation and success in mastering their basic skills." Compared to what? If you wait too long, the struggling kids cannot catch up, no matter what their motivation. "I've watched kids who think they're "bad at math" blossom in this system, and go on to take honors level math in high school." What about the kids who don't "blossom"? Is a "math brain" or a "spark" required to get an education in math? When my son goes to school, I expect him to pay attention and work hard even if he doesn't like the material. I check his homework daily and set much higher expectations than the school (that's not saying much). I don't tell him that he doesn't have to finish a writing assignment because he isn't motivated. I will try to motivate him and "spark" his interest, but failing that, I will apply (and the school should too) external motivation, like grades and Self-motivation is a nice goal, but if it doesn't happen, then schools darn well better do something else. If you think that it's just a matter of 60 problems in 3 minutes, then you better look again. Standardized test cutoff levels are much, much lower than that. If schools want math to be a "pump" and not a "filter", then they need to do some hard pumping. Affluent parents do a lot of pumping. Unfortunately, poor kids have to wait for a "spark". steveh, jackie and mathmom-- Thanks for keeping the lines of communication open. Civil dialog is exactly what is needed to move forward through all the confusion surrounding math education. I wish there were simple answers to these problems, but I don't think there are. But I do believe a thorough understanding of and respect for each others' positions are critical. Thanks again for making my efforts seem Perhaps one day, "Everything will be Illuminated." Just to clarify, I'm not talking about "Everyday Math" with which I don't have any direct experience. That's all very nice, but what do you do with the ones who receive no spark? Same thing we do with all the others. Keep teaching them math. Both basic skills and applications. I just don't believe that teaching applications, problem solving, etc. should be withheld until mastery is shown. It is also an important part of mathematics, and should be taught to all students, struggling or otherwise. Steve, you have a good point about my son and the writing aspect of a timed math test. He does have writing issues. I'll try flashcards and see how that goes. I suspect he knows them, but not as quickly/automatically as we'd eventually like him to. I will try to motivate him and "spark" his interest, but failing that, I will apply (and the school should too) external motivation, like grades and flunking. I, and the private school I send my children to, have a difference in philosophy with what you've stated here. Our school is "ungraded" in that the children are not grouped by chronological age, but work in multi-age groupings according to their individual needs. But it is also "ungraded" in that the students (K-8 age) do not receive "grades" but rather a long written report of where they are in their educational journey, where they excel, and where they still need work. Students are expected to be self-motivated. There is no threat of "bad grades" or "flunking" or even a "bad" report (the reports are descriptive, not judgemental) to motivate students. A student may not be motivated by interest in every subject; that is understood. But there are other reasons to work hard and do well than either interest in the subject or passing grades. Students do not leave our school lacking basic skills or knowledge, despite the lack of "external motivators" such as grades. At all levels of ability, they take school seriously and are motivated to do their best. They go on to excel in both traditional and non-traditional high school programs. If you move along without mastery, you hurt both the kids who are behind and those who are up-to-speed. We have somewhat of a "spiral" approach -- kids work in multi-age groupings, and are not necessarily expected to "get" (master) everything the first time around. This is not "low expectations" -- quite the contrary. Kids are exposed to things we know they may not be ready to master (this includes Shakespeare in the original language, starting at 5yo), with the understanding that they will have these things presented again (and again, until they master them). Meanwhile, kids who are able to master them the first time around are given extensions and higher expectations, and eventually move on to a higher-level group where new concepts are presented. Those who did not fully master a topic that first time have laid the groundwork for understanding it better the next time they are exposed to it. You can "move on" without mastery as long as you plan to come back. With math, many topics are of course sequential, and you cannot move on to the next without mastering the previous. BUT, you can move on to orthogonal topics to give the brain a rest, a chance to process, and then try again. "Drill and Kill" turns many kids off to math. They may (or may not) eventually acquire the drilled skills, but those kids who were held back and forced to play scales every year and never experience the music wind up hating math, and drop it at the first opportunity. Those who see what math may have to offer them beyond arithmetic, in my experience, stick with it more resolutely. Now, I have no doubt that "spiral curriculum" is poorly done in many schools, that kids are never assessed with respect to mastery, that kids are required to re-visit topics either too many (for those for whom mastery came easily) or too few (for those who struggle) times. But, that's not a fault of the spiral curriculum, but rather of a poor implementation. With appropriate observation and assessment, (and a small group size certainly helps) the spiral approach can really work. "Just to clarify, I'm not talking about "Everyday Math" with which I don't have any direct experience." You really should. It gives spiral a very bad name. "I just don't believe that teaching applications, problem solving, etc. should be withheld until mastery is shown." You have to quantify mastery here. If you're talking about 60 questions in three minutes, I would agree. You have to realize that in most schools, mastery at even minimal levels is avoided. In cases like this, moving ahead with new material and applications will not work. "I'll try flashcards and see how that goes." I've added minus signs to some of our flash cards to make it a little bit more difficult. "But it is also "ungraded" in that the students (K-8 age) do not receive "grades" but rather a long written report of where they are in their educational journey, where they excel, and where they still need work." Obviously a private school. The issue is not what you think is best for your kids. The question is why do public schools feel they have the right to impose their opinions and expectations on everyone else. You got to choose, (a choice I wouldn't make, by the way) but most people can't. The question is why doesn't everyone get choice when opinion and assumptions dominate the "At all levels of ability, they take school seriously and are motivated to do their best." I think public school teachers call this "pre-selected". "Drill and Kill" turns many kids off to math. They may (or may not) eventually acquire the drilled skills, but those kids who were held back and forced to play scales every year and never experience the music wind up hating math, and drop it at the first opportunity. Those who see what math may have to offer them beyond arithmetic, in my experience, stick with it more resolutely." If you don't practice your scales, and if you don't work on Czerny, you'll end up in the audience, not on the stage. If you don't focus on mastery of the basics of math, you'll end up majoring in English Literature, no matter how much beauty you see in math. Practicing scales and going to concerts are not incompatible. Mastering math basics and understanding the beauty of math are not In fact, what kind of beauty can a sixth-grader understand about math? If I gave a talk to sixth-graders about the math of computer games or of rendering, they might oooh and aaah, but they wouldn't have a clue to the beauty of transformations, dot products or cross products. It might motivate them by getting them to say: "I want to do THAT!", but this sort of thing can be done with any lame curriculum. They still have to put in the hard work of mastering the basics. "But, that's not a fault of the spiral curriculum, ..." One could argue that all education is a spiral. A big difference in spirals is the steepness of the spiral and how much mastery is required before you move on. Everyday Math is known for it's very shallow spiral (some call it circling - one mother complained that she had kids in 2nd, 4th and 5th grades and they were all covering the same material!) and it's almost complete lack of mastery enforcement. I call it repeated partial learning. This is very inefficient. You got to choose, but choice is the last thing that public schools want to offer. Why? Why not have the money follow the child so that even a child from the inner-city can choose to go to your private school? I wouldn't do that, but that would be my choice. My sense is that you don't realize how bad the lack of mastery is in public schools. I think public school teachers call this "pre-selected". There is certainly an aspect of this. However, most children are admitted at 4 or 5yo when it is difficult to know what their attitudes and motivation will be. Others are admitted older when they public schools fail them. Certainly, they come from families that take education seriously and have enough disposable income to spend money on it (though tuition is on a sliding scale and most families are not "wealthy" at all). But the children themselves are not "pre-selected" to be the easiest to educate. Many children come to our school because their parent either fear, or know, that public school will (or has) fail(ed) their children. There are students with learning disabilities, with ADD, etc... even some kids adopted from Africa with minimal prior formal education. What happens at our school is not due to pre-selection of the students. It is due to methods that really work. And yes, I would love it if public school parents had a choice of a program like this one. They do, to a smaller extent, have a choice of a very good multi-age program in the primary grades, but standardized test prep interferes with it in the higher grades. My older kids did attend this public program prior to moving to their current private school. Public schools impose their "opinions" on others because they are required to get all students to pass certain standardized tests, and so they use the programs that they believe are most likely to allow them to do so. I happen not to agree that training students to pass those tests has much in common with providing a good education, but it's not the schools' faults that they are required to do this. :( In fact, what kind of beauty can a sixth-grader understand about math? I can prove to a 6th grader that 0.99999... = 1 and that 1/2 + 1/4 + 1/8 + ... also = 1. They think that is pretty cool. I can show them Gauss' "trick" to adding up arithmetic sequences, and how this relates to triangle numbers and "handshake" problems. I can show them how solving a simpler problem and finding a pattern helps them solve a problem that they didn't think they'd be able to solve at all, such as the locker problem. They can learn how thinking about parity helps them solve seemingly complicated puzzles. They can learn how thinking about numbers in terms of their factors can help them understand and solve all sorts of problems (while quietly improving their "numeracy"). All of this even if they need to sometimes count on their fingers, or be reminded how to "invert and multiply" or how to subtract negative numbers.... They'll master those things eventually too, in part by using them to solve interesting problems. I can tell you on an anecdotal basis, that both strong and weak math students look forward to the days when I come to work with them, even though I am "mean" and don't allow them to use calculators (and also convinced their regular math teacher not to let them use them nearly as often). I will also tell you that as touchy-feely as it seems to you, confidence in themselves as mathematicians is incredibly important to many kids at this age, especially the girls. Without it, they will track themselves into the "basic" track in high school, and close off a lot of opportunities to themselves. Kids need to be given opportunities to succeed and feel good about themselves, all the while developing the skills they really need to be strong mathematicians. You asked about voucher programs which I think is another huge and complicated issue, perhaps best saved for another day. "...Certainly, they come from families that take education seriously ..." Don't get me wrong. I'm playing the devil's advocate here. My son started out in public school. We switched him to a private school in 2nd grade. Now he's back in public school for 6th grade. Our experience is that the public school sets very low expectations in the early grades because they see it mostly as socialization within a full-inclusion learning evironment. Our son was (and is) a sponge for knowledge and they were feeding him with a teaspoon. Now that he's older, there is less difference between the curriculum of the public school and the private school. I'm not saying it's great, but the public school finally accepts that they have to group by ability and not age starting in 6th or 7th grade. The private school provided more academics and expectations in the early grades, but this difference lessened in the later grades. With AP courses in high school, there are few academic differences between good public high schools and private prep schools. The biggest difference is in student support. Good prep schools give you all of the academic support you need. At public high schools, it's sink or swim. "Public schools impose their "opinions" on others because they are required to get all students to pass certain standardized tests, " Public schools have opinions whether or not there are standardized tests, and many parents strongly disagree with those opinions. "I happen not to agree that training students to pass those tests has much in common with providing a good education, but it's not the schools' faults that they are required to do this. :(" The tests are trivial and all good schools should laugh at them. They define a minimum, but schools see them as a maximum goal. Some tests have weird questions, but many of those questions are based on these same teachers' opinions. Teachers create and calibrate the tests. If the schools aren't forced to meet some level of accountability, the education they provide will be worse, not "I can prove to a 6th grader that 0.99999... = 1 and that....." All of this takes time. I'd like to see what material your school is dropping to make time for it. If it doesn't take that much time, then it could be added to any curriculum and is unrelated to "They'll master those things eventually too, in part by using them to solve interesting problems." Top-down mastery. I don't buy it. Maybe your school (with serious students) can get it to work, but it would be a disaster in public schools. In fact many public schools are centered around top-down and thematic learning. Mastery never happens. It IS a disaster. "... confidence in themselves as mathematicians is incredibly important to many kids at this age, ..." Confidence derives from being able to do the problems. Doing the problems requires skills. As I said before, I think your idea of "enough" mastery has little to do with what's going on in public schools. We're talking NO mastery. You can't compare an idealized learning environment of serious students with what is going on in public school. "voucher programs" I'm not specifically suggesting this. Public schools could provide choice, but they fight charter schools tooth-and-nail. Public schools don't even want to offer alternative math curricula even though it would be very easy to do. You might think that what you do at your school is similar to what happens at public schools, but you would be be wrong. You might be happy with your top-down, delayed mastery approach, but at least parents aren't forced to use your school. Top-down mastery. I don't buy it. Maybe your school (with serious students) can get it to work, but it would be a disaster in public schools. In fact many public schools are centered around top-down and thematic learning. Mastery never happens. It IS a disaster. First of all, serious students aren't born, they're made. Children do not enter our school at 5yo as "serious students". Serious students are made by they expectations and atmosphere of the school. Students are respected and are expected to respect teachers, one another, and themselves. They are expected to put forth an honest effort, and when they don't they are called on it. They are exposed to the good example of older students who have been part of the environment for much longer, and who have become serious students. Secondly, top-down mastery: I can certainly appreciate that a program based only on top-down mastery would be a disaster. That is not what we do, and not what I am advocating. Students in elementary school spend 1/6 of their math time on top-down activities. Students in middle school spend 1/3 of their time. The rest of the time is spent on what you would call "bottom up" learning of basic skills. There is more to math than arithmetic, and by developing students' problem solving skills, we produce stronger mathematicians than those who learn only the basic skills. Spending time on varied "hard problems" is also an excellent check on mastery. The group may work on a skill until everyone shows mastery, but then that skill will come up again in a problem a month later, and many will have forgotten it. I can then alert their regular teacher that she needs to go back and work on a particular skill with certain students again. All of this takes time. I'd like to see what material your school is dropping to make time for it. Of course it takes time. It's a question of whether one considers it time well spent or not. Obviously we disagree on this issue, but I've seen it work out well for a wide variety of students over the past 5 years. As I said above, our school dedicates 1/3 of the middle school math group's math time to work like this. They don't drop anything to make time for it. They just spend a little less time on direct instruction in basic skills, and dedicate some time to this type of indirect practice. A traditional math curriculum spends lots of time reinforcing taught skills, and in our case, some of that reinforcement is done via non-routine problem solving. This time is not wasted. Both the teacher and the students remark that the "word problems" in the textbook seem trivial after all the work on problem solving they do with me. When students who do not consider themselves mathematically inclined score in the top 50% on the elementary or middle school Math Olympiad, this gives them much more confidence in themselves as developing mathematicians than just getting a page of math problems correct. Students leave our school well prepared for either 9th grade Algebra or 10th grade Honors Geometry, according how mathematically inclined they are, and how quickly they grasp the pre-algebra topics (how many times around that spiral they need). They also spend math time on design work (tessellations, symmetry, etc.) and once every couple of years they do a "budget project" where they choose a career, get an income based on it, and have to budget for their necessities and luxuries, make a scale drawing of the house or apartment they would want, within their budget, etc. And they still manage to learn pre-algebra, and in many cases algebra as well. Can this work in a public school? Yes it can. There's nothing magical about our private school, though I will admit that smaller group sizes do help. Our state requires math problem solving portfolios from public school students beginning with 4th graders, and I've seen some of the work 3rd and 4th graders do on problem solving, and it is quite impressive. And these same students still manage to learn the basic math skills of their grade level as well. They still do "mad minutes" to encourage mastery of the basic facts. They still master long division, etc... Spending time on non-routine math problems is not a waste of time. It helps stretch their "math brains" in other ways, improves their sense of numeracy, and ultimately leads to kids who are competent and confident young mathematicians. By spending time on both bottom-up and top-down learning, we can have the best of both worlds. I've never advocated dropping the direct teaching of basic skills. I've never advocated for the adoption of Everyday Math (which I've heard can be excellent in the right hands, with plenty of differentiation, but which in reality usually is a disaster for students at many levels). I'm not advocating a program "centered around" top-down learning. I'm advocating for a middle ground -- a position where math need not be only dull drill-and-kill mastery of basic skills. I believe in the direct teaching of basic skills, but I also believe in the value of using fun, interesting, and challenging problems as a tool to help spark more interest in mathematics, to help practice basic skills in a different context, and to give kids the thrill of the "aha!" moment that one gets when solving contest-style math problems, and the resultant confidence and self-esteem that comes from struggling with something difficult and "They are expected to put forth an honest effort, and when they don't they are called on it." This doesn't happen in the lower grades of most public schools, by definition. As you mentioned before: "Our school is 'ungraded' in that the children are not grouped by chronological age, but work in multi-age groupings according to their individual needs." Most public schools work on the basis of full-inclusion or what I call age-tracking. There is absolutely no grouping by needs, interest, or ability. You seem to think that you have some sort of correlation of education philosophy with public schools. That is NOT the case. I don't think that what your school is doing is wrong because I suspect you enforce "enough" mastery. That is the key. However, I probably wouldn't send my son to your school. I don't want him to have any misconceptions about the need for mastery. In public schools, however, there is almost a dislike towards content, skills, and mastery in the lower grades. Things change starting in 7th and 8th grades, but for many, the damage has been done. "...and in our case, some of that reinforcement is done via non-routine problem solving." The implication is that the traditional, mastery first approach is more efficient, which leaves more time later on for non-routine problem solving. The downside, presumably, is that it turns some kids off to math. Your approach tries to make math more interesting by introducing non-routine problem solving even if kids are still counting on their fingers. This is less efficient but may (or may not) inspire more kids. However, you still haven't quantified the level of mastery you require before allowing kids to move on to the next level. "..this gives them much more confidence in themselves as developing mathematicians than just getting a page of math problems correct." This is a strawman. There is NOTHING about a mastery-centered approach that precludes "non-routine" problems, unless you're talking about covering different amounts of material each year. In that case, the traditional approach would be far, far ahead. "And these same students still manage to learn the basic math skills of their grade level as well." Are you saying that there are no problems with math scores in this country? Or is it the fault of the kids and parents? "I've never advocated for the adoption of Everyday Math (which I've heard can be excellent in the right hands, with plenty of differentiation, but which in reality usually is a disaster for students at many levels)." Everyday Math is structurally flawed. It advocates no specific expectations of mastery at any point in time. It contains so much non-essential material that there is no way to achieve any level of mastery. (My son's fifth grade class didn't cover 35 percent of the EM workbooks because they ran out of time - partly because the teacher had to try and fix mastery problems.) But EM thinks that's OK because students will see the material next year. It doesn't excite or turn on any kids to be constantly jumping from one topic to the next and always feeling that they don't know what's going on. There are NO "non-routine" problems in Everyday Math. It's just a sequence of tear-out worksheets; one to do in class and one to do at home. I just went through the latest edition of sixth grade EM this summer with my son to help him jump a grade to 7th grade Pre-Algebra, which uses a real textbook! I have issues with the textbook, but it's an order-of-magnitude better than the hodgepodge Everyday Math approach. By the way, my son was a model Everyday Math student because he could master the material each time through the spiral. Many kids couldn't. They didn't like math. It's not motivating to not know what you're doing. Everyday Math is repeated partial learning. "I'm advocating for a middle ground -- a position where math need not be only dull drill-and-kill mastery of basic skills." This is the same strawman. What you are advocating is the belief that lower expectations of mastery will increase motivation, mastery problems will disappear, and the net result will be positive. But your motivation comes from using non-routine problems. This is a separate issue because non-routine problems could be used with an approach that sets higher initial expectations of mastery. In fact, non-routine problems will take longer for those who are still counting on their fingers. Singapore Math expects a high level of mastery and also provides lots of non-routine or challenging problems; the best of both worlds and a very motivating formula. The fact that many public schools currently use crappy methods to teach math does not mean that they couldn't use good methods to teach math. You keep telling me that the kinds of things our school does could never work in a public school, but there's not inherent reason why they couldn't. A good method does not have to mean "you do nothing more until you master a certain basic skill". Even Singapore does not require that -- the challenging questions are interspersed in the text, and are used to help kids cement their skills. There is no testing to qualify students to move on from the mundane problems to the challenging word problems. I understand that most public schools work on an age-in-grade lockstep, but that doesn't mean that they have to. In K-3 my son was in multi-age classes, was appropriately challenged in school, worked with older kids when appropriate, worked on his own at other times, and generally thrived. It was only when 4th grade standardized testing came around that the school said they'd have to stop doing all of that and make him work on the same 4th grade curriculum he'd already completed in most subjects that we left the public school. If they'd had the faith to realize he would show proficiency on the 4th grade tests even if he worked with 5th and 6th graders on most subjects, and didn't get the "teaching to the 4th grade test" that practically defined the 4th grade year, he'd probably have stayed in public school. Since our HS has a good honors/AP program, he just re-started public school and appears to be thriving there once again. I am not in charge of moving kids in our school up from one math group to the next, so I can't quantify the skills required to move up, but it is based on mastery of the skills taught at that level. But I will say that a child who has most of the skills but is lagging in a few will sometimes move if the higher group is a better fit overall, and they will just work with the child on those skills, often as homework, while moving on to other topics as well. Even children who have demonstrated mastery sometimes forget them later and need to review them. There is no mastery requirement to take part in the problem solving sessions. The only exception to that is that in the youngest (K-2 approximately) group, I don't work on problem solving with the very youngest students until they have developed some concept of number, but I do work with the more capable students in that group right from the start. In the other groups, everyone works on problem solving. They take time out from their basic skills instruction to do so. "And these same students still manage to learn the basic math skills of their grade level as well." Are you saying that there are no problems with math scores in this country? No, I am saying there is nothing wrong with the math skills of children at the school my children attend, where math skills are taught alongside interesting problem solving topics. And in fact, I am saying that, based on my observation and that of the principal of many years, that kids are leaving with better math skills, and better preparation for high school, now that we are formally teaching problem solving. I'm not saying there is nothing wrong with the way many public schools teach math today. I am saying there is something deeply right about taking time out from basic skills instruction to also teach interesting applications and non-routine problem solving. Math is more than arithmetic. While mastery of arithmetic is a necessary pre-requisite for higher-level mathematics, so are well-developed thinking skills. It is the latter that seem to be neglected in many basic skills curricula. (I'm not talking about Singapore -- I think that's a great curriculum. I'm talking about a lot of "traditional" American curricula that merely drill and practice on the basic skills and simple word problems.) "The fact that many public schools currently use crappy methods to teach math does not mean that they couldn't use good methods to teach math." Then why don't they? Because they think that what they're doing is fine. "Even Singapore does not require that --..." Since I'm a great supporter of Singapore Math (and used it at home with my son), I think you really misunderstand my position. "In K-3 my son was in multi-age classes, was appropriately challenged in school, worked with older kids when appropriate, worked on his own at other times, and generally thrived. It was only when 4th grade standardized testing came around that the school said they'd have to stop doing all of that and make him work on the same 4th grade curriculum he'd already completed in most subjects that we left the public school." This isn't the norm. Age tracking is normally used in the lower grades to facilitate full-inclusion and socialization. Most public schools don't provide any sort of ability grouping until 7th grade. (usually in math) My niece's school (in Michigan) starts grouping in earlier grades because of parental demand and to compete with charter and other public schools. Choice forces respect for parental wishes. This says nothing about whether they enforce mastery early or late. The key is flexibility and higher expectations. Your example is the first I've ever heard of any school using testing as an excuse for eliminating ability grouping. I will ask around to see if this is happening elsewhere. I don't like standardized testing because the very low cutoff points soon become the best that the school will do. The problem is that if the testing requirement is eliminated, schools will get worse, not better. Most schools are better now because of the requirements, but that isn't saying much. Unfortunately, this is often done by shifting resources away from the more capable kids. But I can't imagine that (in general) removing trivial testing requirements will improve schools overall. "I am saying there is something deeply right about taking time out from basic skills instruction to also teach interesting applications and non-routine problem solving." The devil is in the details. The problem is that most public schools say exactly the same thing, but their rhetoric hides a complete lack of emphasis on mastery. You may think it's the same thing as what you are talking about, but it's quite different. I know another parent (with engineering degrees) who really liked these same ideas - until he saw the real-life workings of Everyday Math. Is it just the implementation and not the theory? No, it's both. When I argued the case for Singapore Math (over Everyday Math) at my son's (previous) private school, they turned up their noses. Something else is going on here. "I'm talking about a lot of "traditional" American curricula that merely drill and practice on the basic skills and simple word problems." Arguing against "traditional American curricula" is a strawman. That's NOT the argument going on now in the so-called "Math Wars". It has to do with low versus high expectations. It has to do with with extremely low expectations of mastery hidden by talk of Higher-Order Thinking and Conceptual Understanding. All of the people I know who are fighting against "reform math" (aka low expectation math) would love to see all schools use Singapore Math. Nobody wants to go back to some sort of mythical "traditional" math curriculum. That argument is simply a ploy to avoid discussing the details and lack of mastery in reform math. Standardized tests and drill-and kill aren't the big problems in math. Low expectations, bad curricula, and poor implementations are. I agree that Singapore is a great curriculum. My 6yo does the workbooks for fun at home. My 11yo will be using selected parts of 6A and 6B (based on his performance on their placement tests) to shore up his pre-algebra before moving into Algebra and enrichment topics. I don't remember if you've said how old your son is, but if you haven't already, you might want to look into the Art of Problem Solving texts for him to work from after finishing Singapore 6. I think our main disagreement was that you took Prof. Steen's arguments to apply only to Everyday Math and the way you see it being applied in districts you are familiar with, and I took them in a more generic context. Also, I guess I've seen some different public school approaches to ability grouping and math teaching than you have. I've actually never seen a public school around here that didn't ability group for math and reading, at least within each individual classroom, but often sharing kids/groups between two classes at the same level, and at least minimal support for subject acceleration (though scheduling logistics often got in the way of that). As to NCLB, I think it is making schools better for the kids just below proficiency, and worse for everyone else. A huge proportion of time and resources are spent on the so-called "bubble students" who are below the standard, but close enough that with work they might make it. Gifted programs are being cut, arts and music programs are being cut (short-term thinking!), and proficient students are being ignored. IMO, low expectations have come from NCLB more than anything else. I think, overall, that schools are far worse off because of it. btw, the public school still did ability grouping in 4th grade, but they would not allow any subject acceleration out of 4th because of the testing that had them so frenzied. :( This was in a school with over 50% of kids on free/reduced lunch, kids in foster care and all sorts of unfortunate situations. Getting the required test scores was far from assured. Virtually everything not designed to directly raise test scores for the bubble kids was marginalized. :( " ...you took Prof. Steen's arguments to apply only to Everyday Math and the way you see it being applied in districts you are familiar with, and I took them in a more generic context." I don't think so. I have seen his sort of arguments used for many, many years to hide low expectations behind a veneer of "understanding" rhetoric. They argue with generalities, but you never see the details. "I don't remember if you've said how old your son is, but if you haven't already, you might want to look into the Art of Problem Solving texts for him to work from after finishing Singapore 6." He's in sixth grade, but taking 7th grade pre-algebra. I will look into those texts. Thank you. "As to NCLB, I think it is making schools better for the kids just below proficiency, and worse for everyone else." They shift resources rather than think about fixing flaws in their assumptions. It doesn't change their overall level of expectations. "I think, overall, that schools are far worse off because of it. " But the converse isn't true; that if you get rid of testing, the schools will be better overall. The worst parts of testing is that resources get shifted, the goal becomes the low cutoff, and everyone thinks they're doing a good job - institutionalized low expectations. Is this worse? I could argue that case, but the solution isn't to go back just to help the more capable kids. Great post, thanks!
{"url":"http://mathnotations.blogspot.com/2007/09/interview-with-prof-lynn-arthur-steen.html","timestamp":"2014-04-17T19:31:35Z","content_type":null,"content_length":"288575","record_id":"<urn:uuid:7cd0cdbc-6abb-4a54-aa35-ac9a8d0af022>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
In Service to Mathematics: The Life and Work of Mina Rees Mina Spiegel Rees (1902–1997) was an American mathematician who worked primarily in administration rather than in teaching or research. This is a short but comprehensive biography of Rees’s life that includes a lengthy discussion of her PhD work on division algebras, directed by Leonard Eugene Dickson. The bibliographic information for this volume is skimpy, but the work appears to be a slightly revised version of the author’s 2000 Doctor of Arts thesis. The book is something of a hodge-podge. It starts out with a brief appreciation of Rees’s work, followed by a brief scientific biography. Then there is a lengthy chapter (about one-third of the whole book) that goes into detail on her thesis. This is of more interest to historians of mathematics than to mathematicians, because (as Saunders Mac Lane observes on p. 48) Emmy Noether’s work at the same time made Rees’s thesis obsolete. The really interesting part of the book is the next chapter, on her work during and after World War II on the Applied Mathematics Panel and its successor the Office of Naval Research, where again she was not a researcher but did much to influence the development of numerical analysis and computers. The book closes with a briefer look at her return to academia, where as an administrator at CUNY she influenced the direction of graduate mathematics education in the US. The book closes with several appendices of lists about her career. The production quality is poor. I spotted a dozen typographical or spelling errors (in a 138-page book) without even trying. Figures 2.1, a page from the family birth records, is repeated two pages later as Figure 2.3, which suggests that the real Figure 2.3 was lost. Bottom line: An interesting look at one mathematician’s career, presented in the context of a period when government support of, and involvement in, mathematics was rapidly increasing. Allen Stenger is a math hobbyist and retired software developer. He is webmaster and newsletter editor for the MAA Southwestern Section and is an editor of the Missouri Journal of Mathematical Sciences. His mathematical interests are number theory and classical analysis. He volunteers in his spare time at MathNerds.org, a math help site that fosters inquiry learning.
{"url":"http://www.maa.org/publications/maa-reviews/in-service-to-mathematics-the-life-and-work-of-mina-rees","timestamp":"2014-04-19T14:32:33Z","content_type":null,"content_length":"99354","record_id":"<urn:uuid:f7303408-4926-4dc8-a030-498f1aaa9a1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Graduate Program Course Outlines The following outlines describe first year graduate courses. 1. Monoids and Groups Isomorphism Theorems. Cayley's Theorem. Cyclic groups and their endomorphisms. Permutation groups and group actions. Homomorphisms and homomorphism theorems. Sylow's Theorems 2. Rings Matrix rings and quaternions. Ideals and quotient rings. Chinese remainder theorem. Homomorphisms and homomorphism theorems. Fields of fractions. Polynomial rings. Symmetric functions. PIDs and Euclidean domains. Polynomial extensions of UFDs. 3. Modules Free modules and matrices. Direct sums of modules. Finitely generated modules over a PID. Applications of Abelian groups, Diophantine equations, etc. 4. Splitting Fields Galois groups. Solvable groups. Galois' criterion. General equations of nth degree. Finite fields. The most recent text that has been used: • Basic Algebra I, 2nd Edition by N. Jacobson Qualifying exam problems from previous years 1. Complex Analysis Various forms of Cauchy's theorem. Cauchy integral formula. Power series, Laurent expansion. Residue calculus and applications. Properties of harmonic functions. Conformal mapping. Riemann mapping theorem (proof as time allows). 2. Measure and Integration Theory The fundamental limit theorems. Lp spaces, elementary approximation theory. Fubini's Theorem. Elementary Fourier analysis. 3. Functional Analysis The Hahn-Banach Theorem and Open Mapping Theorem. The Uniform Boundedness Principle. Hilbert Spaces. Dual Spaces. Duals of Lp spaces. Selections from the following topics as time allows: Weak and weak* topology. Banach-Alaoglu Theorem. Elementary operator theory through the spectral theorem for compact normal operators. The most recent texts that have been used: • Real & Complex Analysis, 3rd Edition by W. Rudin • Functional Analysis, 3rd Edition by W. Rudin • Measure Theory by P. Halmos Tests and Homework problems from previous courses Qualifying exam problems from previous years Topology / Geometry 1. General Topology Topological spaces and continuous mappings. Connectedness, compactness, separation. Metric spaces, criteria of metrizability. Topological groups. 2. Algebraic Topology Homotopy equivalence. Fundamental group and covering spaces. Various homology theories. CW-complexes. Relation between homology and fundamental group. Locally trivial fibrations and exact 3. Differential Topology / Geometry Manifolds and differentiable structures. Vector bundles. Tangent bundle. Vector fields. Riemannian metrics. Differential forms and the Poincare Lemma. Integration and Stokes Theorem. De Rham There is no standard text. In recent years the following texts were used for various parts of the course: • Introduction to Algebraic Topology by Rotman • Differential Geometry by S. Sternberg Tests and Homework problems from previous courses Qualifying exam problems from previous years Numerical Analysis First Semester 1. Polynomial interpolation and approximation ([SB] 2.1, 2.3, 2.4, [IK] 5) 1. Orthogonal polynomials 1. Legendre polynomials and best L_2 approximation 2. Chebyshev polynomials 2. Lagrange interpolation 1. error analysis 2. divided differences 3. interpolation using Chebyshev points 3. Piecewise polynomial interpolation 1. piecewise Lagrange and Hermite interpolation 2. spline interpolation 3. error analysis 4. Polynomial approximation theory 1. Weierstrass theorem 2. Bernstein polynomials 5. Trigonometric interpolation and Fast Fourier Transforms 2. Quadrature and numerical integration ([SB] 3.1-3.6, [IK] 7.0-7.5) 1. The trapezoidal and Simpson rules, Newton-Cotes rules 2. Euler-Maclaurin expansion 3. Romberg integration 4. Adaptive quadrature 5. Gaussian quadrature 3. Numerical linear algebra 1. Gaussian elimination with pivoting ([SB] 4.1-4.3) 2. Matrix transformations and special matrix forms ([SB] 6.4-6.5) 3. Linear least squares ([SB] 4.8) 4. Power methods ([IK] 4.2) 4. Nonlinear systems of equations ([SB] 4.8, 5.1-5.7) 1. Newton's and quasi-Newton's method 2. Broyden's method 3. Nonlinear least squares 4. Gauss—Newton methods Second Semester 1. Numerical ordinary differential equations ([AI] 1-5, [SB] 7.0-7.2, [IK] 8) 1. Euler method 2. Multistep methods 1. Adams-moulton, Adams-Bashforth method 2. predictor-corrector scheme 3. Runge-Kutta methods 4. Stiffness 5. Error estimation and stepsize control 2. Numerical partial differential equations ([AI] 7, 8, 13, 14, [SB] 7.3-7.7, [IK] 9) 1. Elliptic boundary value problems 1. finite difference methods 2. finite element methods 2. Parabolic equations 1. semi-discrete approximation 2. convergence theory 3. fully discrete scheme 3. Hyperbolic equations 1. linear hyperbolic equations 2. stability, consistency and convergence 3. conservation laws 3. Sparse matrices and iterative methods ([AI] 9-11, [SB] 8) 1. Gaussian elimination for sparse matrices 2. Iterative methods with applications to discretizations of PDEs 1. Jacobi, Gauss-Seidel, SOR 2. conjugate gradient method 3. Multigrid and domain decomposition method Recommended texts: • [AI]: A First Course in the Numerical Analysis of Differential Equations, by Irieh Iserles, Cambridge university Press 1996. • [IK]: Analysis of Numerical Methods, by E. Isaacson and H. B. Keller, Wiley 1966 (or Dover 1994). • [SB]: Introduction to Numerical Analysis, by J. Stoer and R. Bulirsch, 2nd edition, Springer Verlag 1993. Some remarks: • This is the syllabus for two numerical analysis graduate course sequence, 6 credits. • The three references will be put on reserve in the math library. Relevant sections from these books are noted in the parentheses. • The course will initially be offered every other year, starting from Fall 1998. • The qualifying exam for this topic will cover a broad range of topics, including theory, algorithm and implementation. The registration numbers of these two courses are yet to be assigned. Qualifying exam problems from previous years Logic and Foundations First Semester 1. The Propositional Calculus 1. Boolean operations 2. Truth assignments 3. The tableau method 4. The Completeness Theorem 5. The Compactness Theorem 6. Combinatorial Applications 2. The Predicate Calculus 1. Quantifiers 2. Structures 3. Satisfiability 4. Tableaux 5. The Completeness Theorem 6. The Compactness Theorem 3. Proof Systems for Propositional and Predicate Calculus 1. Hilbert-style systems 2. Gentzen-style systems 3. The Interpolation Theorem 4. Extensions of the Predicate Calculus 1. Predicate calculus with identity 2. Predicate calculus with operations 3. Categoricity 4. Countable categoricity 5. Many-sorted predicate calculus 5. Theories, Definability, Interpretability 1. Mathematical theories (groups, fields, vector spaces, ordered structures) 2. Foundational theories (arithmetic, geometry, set theory) 3. Practical completeness 4. Definability 5. Implicit definability 6. Beth's Theorem 7. Interpretability 6. Arithmetization and Incompleteness 1. Primitive recursive functions 2. Representability 3. Godel numbering 4. The Diagonal Lemma 5. Tarski's Theorem on Undefinability of Arithmetical Truth 6. Godel's Incompleteness Theorem 7. Rosser's Incompleteness Theorem 8. Godel's Theorem on Unprovability of Consistency Second Semester 1. Computability 1. Primitive recursive functions 2. The Ackerman Function 3. Computable functions 4. Partial recursive functions 5. The enumeration theorem 6. The halting problem 7. Examples of functions and sets which are not computable 2. Undecidability of the Natural Number System 1. Terms 2. Formulas 3. Sentences 4. Arithmetical definability 5. Chinese remainder theorem 6. Definability of computable functions 7. Definability of the halting problem 8. Godel numbers 9. Undefinability of arithmetical truth 3. Decidability of the Real Number System 1. Effective functions 2. Quantifier elimination (P. J. Cohen's method 3. Definability over the real number system 4. Decidability of the real number system 5. Decidability of Euclidean geometry 4. Introduction to Set Theory 1. Russell paradox 2. Operations on sets 3. Cardinal numbers 4. Ordinal numbers 5. Transfinite recursion 6. The Axiom of Choice 7. The Well Ordering Theorem 8. The Continuum Hypothesis 9. Measurable cardinals 5. Independence of the Continuum Hypothesis 1. The Zermelo-Fraenkel axioms 2. Set-theoretic foundations of mathematics 3. Models of set theory 4. Inner models 5. Constructible sets 6. The inner model L 7. The generalized continuum hypothesis in L 8. Models constructed by forcing 9. A model where the continuum hypothesis fails Recommended texts: • Raymond Smullyan, First-order Logic, Springer-Verlag. • Herbert Enderton, A Mathematical Introduction to Logic, Academic Press. • Elliott Mendelson, Introduction of Mathematical Logic, 3rd edition, Wadsworth. • Joseph R. Shoenfield, Mathematical Logic, Addison-Wesley. • Moshe Machover and John Bell, A Course in Mathematical Logic, North-Holland. • Hartley Rogers, Theory of Recursive Functions and Effective Computability, MIT Press. • Kenneth Kunen, Set Theory, North-Holland. • Thomas Jech, Set Theory, Academic Press. Some remarks: • This is the syllabus for the basic two-semester graduate course sequence in Logic and Foundations, Math 557-558, 6 credits. Tests and Homework problems from previous courses Qualifying exam problems from previous years Partial Differential Equations 1. Classical linear equations: transport, Laplace, heat, and wave equations 1. basic properties 2. fundamental solutions 3. mean value properties 4. maximum principles 5. energy methods 6. Fourier transform method 2. First order nonlinear PDE's 1. characteristics 2. conservation laws 3. shocks 3. Special solutions 1. similarity solutions 2. traveling waves 3. power series methods 4. Sobolev spaces 1. distributions 2. weak derivatives 3. weak convergence 5. More on Sobolev spaces 1. traces 2. Poincaré 3. Sobolev inequalities 4. embeddings 6. Second order elliptic equations 1. fixed point theorems 2. weak solutions 3. regularity 4. maximum principles 5. eigenvalues 6. applications 7. Evolution equations 1. weak solutions 2. Galerkin method 3. regularity 4. maximum principles and propagation of disturbance 5. applications 8. Semigroup theory 1. infinitesimal generators 2. Hille-Yoshida theorem 3. applications 9. Calculus of variations and its applications 1. direct method 2. Mountain pass theorem The above consists of the core part of the first-year graduate study on the subject of Partial Differential Equations at PSU. Each instructor may add a few additional topics. Math 513-4 is a year long course covering the above and provide an introduction to the fundamental theories and methods in partial differential equations. The first course M513 will cover topics 1-4 listed above and the second course M514 will cover the rest. Most of the first 5 chapters of Evans' book will be covered in M513. Additional references: • Partial Differential Equations, R. McOwen • Hilbert Space Methods for Partial Differential Equations, R. Showalter (available free electronically) • Elliptic Partial Differential Equations of Second Order, D. Gilbarg and N. S. Trudinger, 2nd Ed or later. • Partial Differential Equations, F. John To pursue a Ph.D. in the area of PDE, we recommend that you keep in mind ODE is an important component. PSU has M411 (ODE/Fourier Series), M412 (PDE), and M511 (ODE) before the course level of M513-4. After M513-4, PSU offers two topics courses per semester in the broad area of PDE, Numerical Analysis, and Applied Mathematics. While you are studying M513-4 in the first year, you can attend lower level courses or upper level topics courses depending on your preparedness.
{"url":"http://www.math.psu.edu/grad/requirements/exams/outlines.php","timestamp":"2014-04-17T15:52:25Z","content_type":null,"content_length":"22378","record_id":"<urn:uuid:a8ae5ad9-9ec8-496c-8314-442c40025624>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2006 [00050] [Date Index] [Thread Index] [Author Index] Re: Utilizing the Result From Solve[] • To: mathgroup at smc.vnet.net • Subject: [mg64143] Re: [mg64099] Utilizing the Result From Solve[] • From: "Erickson Paul-CPTP18" <Paul.Erickson at Motorola.com> • Date: Thu, 2 Feb 2006 00:07:03 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com Hopefully there is a more general solution for this, but I've used in that past z = Solve[z + 2 == 3, z][[1, 1, 2]] (* which is a little hard to understand ... *) z = z /. Solve[z + 2 == 3, z][[1]] (* which is a little better, but ... The z /. ... Converts the rule into a list and therefore that form can be used for converting any rule to a list of one element. The [[1]] (Part) pulls out the number. I've always been interested in getting to the number embedded in a larger equation, so assigning it as the value to the same variable as above will be a little confusing especially on subsequent evaluation within the same kernel unless you do an explicit clear. I'd suggest a separate variable name, if so needed like: a = z /. Solve[z + 2 == 3, z][[1]] -----Original Message----- From: Shyam Guthikonda [mailto:shyamguth at gmail.com] To: mathgroup at smc.vnet.net Subject: [mg64143] [mg64099] Utilizing the Result From Solve[] If I am solving a simple equation, such as: Solve[z+2 = = 3,z], this returns {{z->1}}. How can I easily get the result, 1? Solve[] returns the solution in rule form. The current method I use to just get the result looks very ugly. Is there an easier way to do this? Here is my current method: ReplaceAll[z, First[ First[ Solve[ z + 2 = = 3, z ]]]]; z = %; Now I can use z as a normal variable.
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Feb/msg00050.html","timestamp":"2014-04-19T06:57:39Z","content_type":null,"content_length":"35582","record_id":"<urn:uuid:c33d53d0-5cf9-4eb4-9d23-af69b2b34b0d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Hexadecimal Floating-Point Literals One of the more obscure language changes included back in JDK 5 was the addition of hexadecimal floating-point literals to the platform. As the name implies, hexadecimal floating-point literals allow literals of the float and double types to be written primarily in base 16 rather than base 10. The underlying primitive types use binary floating-point so a base 16 literal avoids various decimal ↔ binary rounding issues when there is a need to specify a floating-point value with a particular representation. The conversion rule for decimal strings into binary floating-point values is that the binary floating-point value nearest the exact decimal value must be returned. When converting from binary to decimal, the rule is more subtle: the shortest string that allows recovery of the same binary value in the same format is to be used. While these rules are sensible, surprises are possible from the differing bases used for storage and display. For example, the numerical value 1/10 is not exactly representable in binary; it is a binary repeating fraction just as 1/3 is a repeating fraction in decimal. Consequently, the numerical values of 0.1f and 0.1d are not the same; the exact numeral value of the comparatively low precision float literal 0.1f is and the shortest string that will convert to this value as a double is This in turn differs from the exact numerical value of the higher precision double literal 0.1d, 0.1000000000000000055511151231257827021181583404541015625. Therefore, based on decimal input, it is not always clear what particular binary numerical value will result. Since floating-point arithmetic is almost always approximate, dealing with some rounding error on input and output is usually benign. However, in some cases it is important to exactly specify a particular floating-point value. For example, the Java libraries include constants for the largest finite double value, numerically equal to (2-2^-52)·2^1023, and the smallest nonzero value, numerically equal to 2^-1074. In such cases there is only one right answer and these particular limits are derived from the binary representation details of the corresponding IEEE 754 double format. Just based on those binary limits, it is not immediately obvious how to construct a minimal length decimal string literal that will convert to the desired values. Another way to create floating-point values is to use a bitwise conversion method, such as doubleToLongBits and longBitsToDouble. However, even for numerical experts this interface is inhumane since all the gory bit-level encoding details of IEEE 754 are exposed and values created in this fashion are not regarded as constants. Therefore, for some use cases it helpful to have a textual representation of floating-point values that is simultaneously human readable, clearly unambiguous, and tied to the binary representation in the floating-point format. Hexadecimal floating-point literals are intended to have these three properties, even if the readability is only in comparison to the alternatives! Hexadecimal floating-point literals originated in C99 and were later included in the recent revision of the IEEE 754 floating-point standard. The grammar for these literals in Java is given in JLSv3 HexSignificand BinaryExponent FloatTypeSuffix[opt] This readily maps to the sign, significand, and exponent fields defining a finite floating-point value; sign0xsignificandpexponent. This syntax allows the literal to be to used represent the value 3; 1.8[hex] × 2^1 = 1.5[decimal] × 2 = 3. More usefully, the maximum value of (2-2^-52)·2^1023 can be written as and the minimum value of 2^-1074 can be written as 0x1.0P-1074 or 0x0.0000000000001P-1022, which are clearly mappable to the various fields of the floating-point representation while being much more scrutable than a raw bit encoding. Retroactively reviewing the possible steps needed to add hexadecimal floating-point literals to the language: 1. Update the Java Language Specification: As a purely syntactic changes, only a single section of the JLS had to updated to accommodate hexadecimal floating-point literals. 2. Implement the language change in a compiler: Just the lexer in javac had to be modified to recognize the new syntax; javac used new platform library methods to do the actual numeric conversion. 3. Add any essential library support: While not strictly necessary, the usefulness of the literal syntax is increased by also recognizing the syntax in Double.parseDouble and similar methods and outputting the syntax with Double.toHexString; analogous support was added in corresponding Float methods. In addition the new-in-JDK 5 Formatter "printf" facility included the %a format for hexadecimal floating-point. 4. Write tests: Regression tests (under test/java/lang/Double in the JDK workspace/repository) were included as part of the library support (4826774). 5. Update the Java Virtual Machine Specification: No JVMS changes were needed for this feature. 6. Update the JVM and other tools that consume classfiles: As a Java source language change, classfile-consuming tools were not affected. 7. Update the Java Native Interface (JNI): Likewise, new literal syntax was orthogonal to calling native methods. 8. Update the reflective APIs: Some of the reflective APIs in the platform came after hexadecimal floating-point literals were added; however, only an API modeling the syntax of the language, such as the tree API might need to be updated for this kind of change. 9. Update serialization support: New literal syntax has no impact on serialization. 10. Update the javadoc output: One possible change to javadoc output would have been supplementing the existing entries for floating-point fields in the constant fields values page with hexadecimal output; however, that change was not done. In terms of language changes, adding hexadecimal floating-point literals is about as simple as a language change can be, only straightforward and localized changes were need to the JLS and compiler and the library support was clearly separated. Hexadecimal floating-point literals aren't applicable to that many programs, but when they can be used, they have extremely high utility in allowing the source code to clearly reflect the precise numerical intentions of the author. Hi Joseph, I was hoping to get your opinion on a debate we've been having about how to best transfer 32-bit Java floats to a Javascript application (which only has a 64-bit floating-point type). This is for the Google Web Toolkit, which compiles Java code into Javascript. Here is the URL: If you could comment directly on that thread, that would be great! Posted by Alex Epshteyn on December 16, 2008 at 05:24 AM PST # I haven't read through the thread in great detail, but I'll offer a few comments. Java semantics require distinct float and double types at some level. It is possible to emulate the result of float operations, add, subtract, multiply, divide, and square root by: 1) Converting the numerical float value to its double representation 2) Performing the operation to double precision 3) Rounding the double result down to float precision (IIRC, an generalized outline of proof of this property is the numerical appendix of the 2nd edition of Hennessey and Patterson.) For point 1), a representation that preserves the \*numerical value\* is needed. One such presentation would be the string of the float value converted to double (e.g from Java, Double.toString ((double)f)). Another would be the hexadecimal representation; the toHexString output is exact so there aren't the same rounding considerations of preserving the original binary value as when doing through an intermediate decimal conversion. Using Float.toString(f) and then converting that string to double will in general \*not\* preserve the float value. For point 3), this is straightforward to do with a cast or if one has access to the bit-level floating-point representation. While a student at Berkeley, a tech report I wrote includes an outline of how to extract the bit-level representation using normal floating-point operations ("Writing robust IEEE recommended functions in ``100 % Pure Java''(TM)", http://www.sonic.net/~jddarcy/Research/ ieeerecd.pdf). However, there are somewhat more direct and natural ways to compute this rounding to reduced precision; using Dekker's tricks one can split the floating-point number at a given bit position and test the low-order bits against zero, etc. I'll leave working out the details as an "exercise for the reader," as some of my college textbooks like to say :-) Posted by Joe Darcy on December 17, 2008 at 09:35 AM PST #
{"url":"https://blogs.oracle.com/darcy/entry/hexadecimal_floating_point_literals","timestamp":"2014-04-20T08:23:08Z","content_type":null,"content_length":"36235","record_id":"<urn:uuid:c0413701-bfec-47f0-a16a-bf7083164c68>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Complete vertex invariants up vote 1 down vote favorite This question is related to Yet another graph invariant: the similarity matrix. In graph theory there is much talk and research on graph invariants, especially complete graph invariants describing a graph up to isomorphism. I have the impression that there is only little talk and research on complete vertex invariants describing vertices (in their graph) up to conjugacy. One vertex invariant which is complete by definition is the smallest n-neighbourhood of a vertex v which distinguishes it from all vertices not conjugate to it. Let the n-neighbourhood of v be the (unlabelled but rooted) induced subgraph containing v (as the distinguished node) and all vertices at most n edges away from v. Can someone explain in a few words, why complete vertex invariant(s) seem not to deserve so much attention? Or am I wrong and they do attract attention? Then: Can some references be given? One reason why they could deserve attention is that complete vertex invariants might be used to define complete graph invariants (à la degree sequence, which is not complete, of course). add comment 1 Answer active oldest votes I think the main reason why they have not attracted much attention is due to vertex-transitive graphs. In the case that $G$ is vertex-transitive, then $V(G)$ consists of a single conjugacy class. Thus, complete vertex invariants will be of no help in constructing the automorphism group of $G$. The other extreme is if $aut(G)$ is trivial, so in this case each vertex is a separate conjugacy class. up vote 2 A potential middle ground is to look at all graphs $G$ such that the union of the non-singleton conjugacy classes of $G$ has size at most $k$. Let $\mathcal{G}$ be the set of all such down vote graphs. Here, complete vertex invariants might be useful for constructing the automorphism group of $G$. In general, I have to check $|V(G)|!$ potential permutations, but for graphs in $\ mathcal{G}$, I only need to check at most $k!$. If I view $k$ as a constant, then as long as I can construct complete vertex invariants efficiently, this gives me a fast algorithm to construct $aut(G)$ for graphs in $\mathcal{G}$. add comment Not the answer you're looking for? Browse other questions tagged graph-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/24357/complete-vertex-invariants?sort=votes","timestamp":"2014-04-24T12:03:54Z","content_type":null,"content_length":"50379","record_id":"<urn:uuid:e5646629-ba21-4c10-9d67-b81e6261b684>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Pine Lake Calculus Tutor ...Tutored ACT math topics during high school, college, and as a GMAT instructor for three years. Scored in the 99th percentile on the GMAT. Worked as a project manager at a manufacturing company managing over 1.5 million in revenue. 28 Subjects: including calculus, physics, statistics, GRE I am very excited to help you gain the confidence you need to succeed in math and science! These are subjects that anyone can master provided they are willing to persevere. I have a broad range of teaching experiences. 15 Subjects: including calculus, physics, algebra 1, trigonometry ...I earned a BS in math and physics from the University of Alabama in Huntsville and a MS in physics from Georgia Tech. I am currently working on a PhD in physics at Georgia Tech. I have tutored several students in math in physics from high school students to college students and have seen very positive results. 11 Subjects: including calculus, physics, geometry, algebra 1 ...My Policies (please read carefully): I require a minimum of one hour per session for in person tutoring. You will be charged for at least an hour for each session, regardless of the time elapsed. You will be charged for the time booked or however long the session lasts, depending on whichever is longest. 10 Subjects: including calculus, physics, algebra 1, ASVAB ...KeI received a perfect score on the SAT when I took it myself. Moreover, I have tutored SAT for years. Primary focus in Macroeconomics, but can also teach other topics in economics up to first year PhD level material. 30 Subjects: including calculus, reading, Chinese, ESL/ESOL
{"url":"http://www.purplemath.com/pine_lake_ga_calculus_tutors.php","timestamp":"2014-04-21T12:57:58Z","content_type":null,"content_length":"23675","record_id":"<urn:uuid:bc75ce57-c636-440d-892b-56aeec899ff4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Models from experiments: combinatorial drug perturbations of cancer cells • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Mol Syst Biol. 2008; 4: 216. Models from experiments: combinatorial drug perturbations of cancer cells We present a novel method for deriving network models from molecular profiles of perturbed cellular systems. The network models aim to predict quantitative outcomes of combinatorial perturbations, such as drug pair treatments or multiple genetic alterations. Mathematically, we represent the system by a set of nodes, representing molecular concentrations or cellular processes, a perturbation vector and an interaction matrix. After perturbation, the system evolves in time according to differential equations with built-in nonlinearity, similar to Hopfield networks, capable of representing epistasis and saturation effects. For a particular set of experiments, we derive the interaction matrix by minimizing a composite error function, aiming at accuracy of prediction and simplicity of network structure. To evaluate the predictive potential of the method, we performed 21 drug pair treatment experiments in a human breast cancer cell line (MCF7) with observation of phospho-proteins and cell cycle markers. The best derived network model rediscovered known interactions and contained interesting predictions. Possible applications include the discovery of regulatory interactions, the design of targeted combination therapies and the engineering of molecular biological networks. Keywords: combination therapy, network dynamics, network pharmacology, synthetic biology Our ability to measure increasingly complete and accurate molecular profiles of living cells motivates new quantitative approaches to cell biology. For example, a key aim of systems biology is to relate changes in molecular behavior to phenotypic consequences. To achieve this aim, computational models of cellular processes are extremely useful, if not essential. Computational models can be used for the analysis of experimental data, for the prediction of outcomes of unseen experiments and for planning interventions designed to modify system behavior. We have developed a particular approach to constructing, optimizing and applying computational models of cellular processes, which we call Combinatorial Perturbation-based Interaction Analysis (CoPIA). The key ingredients of the approach are combinatorial intervention, molecular observation at multiple points, model construction in terms of nonlinear differential equations, optimization of model parameters with simplicity constraints and experimental validation. The power of combinatorial perturbation In molecular biology, a targeted perturbation typically inhibits or activates function of biomolecules, e.g. as a result of drug action, small RNA interference, genetic or epigenetic change (Figure 1 ). In a single experiment, targeted perturbations can be applied either singly or in combination. Combined perturbation by several agents can be much more informative than that by a single agent, as its effects typically reveal downstream epistasis within the system, such as non-additive synergistic or antagonistic interactions. In addition, a large number of independently informative experiments can be performed if in each experiment a different small set of, e.g. two or three, perturbants is chosen from a larger repertoire. Thus, combinatorial perturbations are potentially powerful investigational tools for extracting information about pathways of molecular interactions in cells (such as A inactivates B, or X and Y are in the same pathway) (Avery and Wasserman, 1992; Kaufman et al, 2005; Kelley and Ideker, 2005; Segre et al, 2005; Yeh et al, 2006; Lehár et al, 2007). Combinatorial perturbations can also be powerful application tools when rationally designed to achieve desired effects. For example, combination of targeted drugs is considered a promising strategy to improve treatment efficacy, reduce off-target effects and/or prevent evolution drug resistance (Borisy et al, 2003; Keith et al, 2005; Komarova and Wodarz, 2005; Chou, 2006). Combinatorial perturbation and multiple input–multiple output (MIMO) models. Upper left: intuitive view of perturbations and their points of action. Small inhibitory RNAs alter gene expression; natural protein ligands and small compounds act, ... With recent advances in molecular technologies—e.g., targeted perturbation by small molecules, full-genome libraries of small RNAs, highly specific antibody assays, massive parallelization and imaging techniques—there is intense interest in the investigational power of multiple perturbation experiments in a variety of biological systems. The inherent complexity of such experiments raises significant challenges in data analysis and an acute need for improving modeling approaches capable of capturing effects such as time-dependent responses, feedback effects and nonlinear couplings. Deriving system models from combinatorial perturbation experiments Computer simulation of pre-defined pathways can be used to predict epistasis effects and explore how pathway organization shapes the perturbation response (Omholt et al, 2000; Segre et al, 2005; Lehár et al, 2007). In many situations however, observational data are provided but the pathway is unknown or only partially known. To solve this problem, our computational modeling approach enables users to construct a complete differential equation model for a system from combinatorial perturbation experiments. In the context of this paper, the system of interest is defined by a particular type of cell, its environment, a time interval of observation and a phenotypic change, such as cell death or growth. The system is further characterized by its points of intervention, such as drug targets, and the points of observation, such as the phosphorylation state of proteins involved in signaling processes (Figure 1). To represent such a system mathematically, we choose network models in which nodes represent molecular concentrations or levels of activity and edges reflect the influence of one node on the time derivative of another. The time evolution of the system is modeled by linear differential equations, modified by a nonlinear transfer function to reflect properties of the system that are not explicitly modeled (Figure 1). We present efficient optimization algorithms to find models that achieve maximum agreement between observation and prediction. Our algorithm is based on a combination of a gradient descent method (to set dynamical parameters) and a Monte Carlo process (to explore alternative network connectivities). We make a software implementation of CoPIA available as platform-independent software (http://cbio.mskcc.org/copia). Testing the predictive power of derived system models We perform combinatorial perturbation experiments in an MCF7 breast cancer cell line to test the modeling framework in the steady-state limit. In this test, we demonstrate how observation of the effects of drug pair perturbations can be exploited to deduce a network model of signaling and phenotype control (reverse engineering of pathways). We use observed molecular state and growth phenotype responses to build predictive models and use these to explain the perturbation–phenotype relationship in terms of coupling between proteins in the EGFR/MAPK and PI3K/AKT pathways. Without using known pathway biology, the resulting model reproduces known regulatory couplings and negative feedback regulation downstream of EGFR and PI3K/AKT/mTOR, and makes predictions about possible roles of PKC-δ and eIF4E in the control of MAPK signaling and G1 arrest in MCF cells. We conclude that CoPIA may be of interest as a broadly applicable tool to construct models, discover regulatory interactions and predict cellular responses. For instance, researchers can measure a set of protein phosphorylation responses to drug combinations and use the method to automatically construct network models that predict the response to novel drug combinations. Application of this methodology to time-dependent experimental observations would extend this predictive capability to the regimen of time-dependent, rationally designed combinatorial therapy. Modeling the effects of combinatorial perturbations Multiple input–multiple output models State space representation is commonly used in mathematical modeling of input–output behavior in natural systems. In this representation, the time behavior of the system state is described by a first-order differential equation where the vector y(t) represents state variables (the activities of the system's components), the vector u(t) represents perturbations (external influences on the components) and f is a linear or nonlinear transfer function (de Jong, 2002). For example, y(t) can be the abundances of specific mRNAs or proteins, whereas u(t) can be the concentrations of different chemical compounds to which the cells are exposed (Figure 1). In essence, state space models relate a system's input to its output. State space models with multiple inputs–outputs (that is, y and u have more than one coordinate) are called multiple input–multiple output (MIMO) models. Linear MIMO models When f is a linear function of y and u, the above model is called a linear MIMO model. The mathematical properties of linear MIMO models are well known (Ljung, 1986) and such models have been applied to many biological problems, for example, the construction of transcriptional network models (Tegner et al, 2003; Xiong et al, 2004; di Bernardo et al, 2005). Nevertheless, linear models have a limitation in that they can only model uncoupled perturbation effects (linear dose–response relationships), whereas nonlinear effects (coupled perturbation effects) are ignored (Figure 1; ‘Model representation'). As a result, linear MIMO models are unable to capture important phenomena that are known to occur in cellular systems, such as saturation effects, switch-like effects and nonlinear interaction phenomena such as genetic epistasis and pharmacological synergism. Nonlinear MIMO models To overcome this limitation, we construct nonlinear MIMO models capable of representing coupled perturbation effects. Previously, other authors have observed that complex gene knockout effects, including epistasis effects, can be predicted in metabolic flux networks where bounds on the reaction rates are introduced (Fell and Small, 1986; Edwards and Palsson, 2000; Segre et al, 2005; Deutscher et al, 2006). Similarly, metabolic systems with Michaelis–Menten kinetics or transcriptional networks with bounds on transcription rates will exhibit epistasis behavior (Omholt et al, 2000; Lehár et al 2007). In the particular case of the MIMO model, we expect more biologically realistic behavior if one replaces the linear transfer function f with a nonlinear transfer function In this class of models, the matrix w[ij] represents the interactions between the molecules and processes represented by the state variables of the system. (Intuitively, the matrix elements w[ij] can be thought of as a map of the system, in which w[ij]>0 means ‘node j activates node i', whereas w[ij]<0 corresponds to inhibition.) Furthermore, α[i]>0 represents the tendency of the system to return to the initial state (y[i]=0); β[i]>0 are constants and [i] is a transfer function capable of capturing both switch-like behavior and bounded reaction rates. Examples of such functions include sigmoid functions, piece-wise linear approximations of sigmoids or biochemically motivated approximations such as the Hill or Michaelis–Menten equations (Materials and methods). Application of nonlinear MIMO models to combinatorial perturbation experiments We developed computer algorithms to infer nonlinear models of the above type from experimental data, as specified by the best-performing values of the coupling parameters w[ij] and other parameters. As detailed in Materials and methods, the current implementation of our approach consists of the following steps. First, the system of interest is subjected to a set of independent single or multiple target perturbation experiments; and, for each perturbation vector (time-independent instance of u), a readout vector (steady-state instance of y) is recorded. Second, we infer a nonlinear model that best reproduces the experimental data (Materials and methods). Specifically, we rely on parameter estimation techniques for feedback systems to find a model that minimizes a quadratic error term between observed and predicted readouts, subject to simplicity constraints on the number of interactions in the system. Third, the fitted model can be used to predict the system's response to unseen perturbations (for example, combinations of drugs), and to gain new insight into the system's architecture. Testing modeling power for combinatorial perturbations in breast cancer cells Dual drug perturbation experiments in MCF7 breast cancer cells To directly test the power of the approach, we performed an independent experimental study in MCF7 human breast carcinoma cells. As perturbants of the system, we chose compounds targeting EGFR (ZD1839), mTOR (rapamycin), MEK (PD0325901), PKC-δ (rottlerin), PI3 kinase (LY294002) and IGF1R (A12 anti-IGF1R inhibitory antibody). As relevant readouts of molecular and phenotypic responses, we chose phospho-protein levels of seven regulators of survival, proliferation and protein synthesis (p-AKT-S473, p-ERK-T202/Y204, p-MEK-S217/S221, p-eIF4E-S209, p-c-RAF-S289/S296/S301, p-P70S6K-S371 and pS6-S235/S236) as well as flow cytometric observation of two phenotypic processes (cell cycle arrest and apoptosis) (Figure 2). Inhibitors were administered singly and in pairs, followed by EGF stimulation. When recording responses of protein phosphorylation, we used the average response at 5 and 30 min as the surrogate for steady-state values. To build models, we represented the state of each of the above perturbation targets (signaling proteins), as well as each of the readouts, by one state variable y[i]. We then used the proposed optimization procedure (Materials and methods) to estimate the coupling parameters w[ij] and other parameters, resulting in predictive models of response in terms of these system variables. Breast cancer cells as a multiple input–multiple output system. To generate data for model construction, we treated human MCF7 breast tumor cell lines with one natural ligand (epidermal growth factor (EGF)) and six inhibitors, singly and in combination. ... Quantitative prediction of system response We first assessed the predictive power of the derived models using leave-one-out cross-validation, in which one pair perturbation is left out of the analysis and then its effect predicted from information gained from all other perturbations. The resulting predictions were reasonably accurate for the nine different readouts. The best prediction was obtained for p-S6 phospho-protein levels (cross-validation error CV=0.02, Pearson correlation r=0.96) and the weakest for the G1 arrest phenotype (CV=0.07, r=0.45) (Figure 2 and Supplementary Table 1). We directly compared the performance of our modeling approach to one using a corresponding set of linear differential equations with the same optimization procedure. By comparison, predictions using the nonlinear approach agreed better with experimental observations for eight of the nine readouts. Using the nonlinear modeling approach, the prediction error was lower by up to 50% with correspondingly better correlation between predictions and experimental observations (Supplementary Table 1). Thus, we conclude that our method is capable of deriving reasonably accurate network models for the input–output behavior of MCF7 cells with respect to the readouts used. Detection of key regulatory mechanisms without prior knowledge From a set of perturbation experiments, how can one deduce the logical network structure of activating and inhibiting interactions between the key molecular components, similar to the familiar pathway diagrams in publications summarizing a set of molecular biological experiments? Here, we use the derived network models with the smallest global error (E[total]=E[SSQ]+λE[STRUCT], Materials and methods) to infer causal connectivity diagrams. The inference is based on the assumption that interactions in sufficiently simple models that fit experimental observations, called ‘good' models, represent an underlying causal relationship between system components modeled by the system variables y[i]. Such a relationship can be either an indirect regulatory effect or a direct physical interaction that would be observable in vitro with purified components. Using our Monte Carlo algorithm, we generated a population of 450 good models from the MCF7 dual drug perturbation experiments. From these, we assessed the statistical significance of the individual interactions both in terms of a posterior probability (which is obtained directly from the Monte Carlo process, see Materials and methods) and a 90% confidence interval constructed by boot-strapping simulations (Table I). We now discuss the connectivity of the best model, i.e. the one with the smallest error (schema in Figure 3, explicit equations in Materials and methods) relative to the known biology of regulatory pathways in the MCF7 breast cancer cell line. Use of MIMO models to infer regulatory interactions in breast cancer cells. The interaction matrix w[ij] from a set of good models can be used to infer regulatory interactions (squares=inputs; circles=internal system variables and other observables). Positive ... Statistical assessment of inferred interactions in MCF7 cells Interpretation of derived network structure In comparing the inferred connectivity with mechanisms known to occur in MCF7 cells (Table I), two caveats are important. (1) The logical nodes in our models are defined precisely as the perturbed and observed molecular species, i.e. the targets of drug perturbation and the targets of specific observed antibody reactions, and may not be exactly identical to a single molecular species. For example, ‘EGFR' refers to the direct target(s) of activation by EGF and of inhibition by the drug ZD1839, and these two are assumed to be identical. (2) The models make no reference to unperturbed or unobserved nodes, e.g. whereas p-AKT is in the network model, the unphosphorylated AKT is not. With these caveats in mind, one can use the models both for confirmation and prediction of interactions. Of the 23 interactions in the best model, 14 had a posterior probability in the range of 20–99% (Table I). Of these, several statistically robust interactions clearly confirm canonical pathway structures. (i) The MAPK cascade downstream of the EGF receptor is detected as a chain of interactions between EGFR, MEK and ERK (Figure 3 and Table I). (ii) The negative feedback regulation of MAPK signaling is captured as negative interaction from ERK to EGFR, and as a moderately significant self-inhibition of MEK (see Discussion). (iii) PI3K-dependent signaling and the tendency for MCF7 cells to be dependent on AKT activation for survival are detected as interactions between PI3K, AKT and the apoptosis phenotype. (iv) The model inference that apoptosis is controlled by p-AKT, but not p-ERK, is in agreement with previous results in MCF7 cells (Simstein et al, 2003; DeFeo-Jones et al, 2005). (v) mTOR downstream signaling is detected as interactions between mTOR, p70S6K and ribosomal S6 protein (Mingo-Sion et al, 2005). The derivation of these expected interactions from a small set of perturbation experiments, without prior pathway knowledge, underscores the non-trivial value of the model building approach and provides some confidence in the concrete predictions of logical regulatory interactions for MCF7 cells (Table I), which are discussed below. In summary, our evaluation in breast cancer cells supports two main conclusions. First, our approach to model construction can be used to build reasonably accurate quantitative predictors of pathway responses to combinatorial drug perturbation in MCF7 cells. Second, the quality of the deduced interaction network suggests that well-parameterized nonlinear MIMO models are interpretable in terms of a network of (direct and/or indirect) regulatory interactions. The inference of network structure is surprisingly effective: the logical network diagram in Figure 3 was derived de novo based on only 21 experiments, using non-temporal data and only nine experimental readouts and accurately reflects important known regulatory interactions. This bodes well for future applications in which the amount of readout data can easily be an order of magnitude greater. In addition to yielding details of intermolecular coupling, the method is sufficiently general to allow predictive modeling of causal relationships between biomolecular events and cellular phenotypic consequences, such as growth or cell cycle arrest. The method lends itself to multi-level modeling in the sense that molecular, mesoscopic and macroscopic events can be modeled in a single framework once appropriate state variables y[i] are defined. Software and technical aspects of implementation We aim to put these tools into the hands of both computational and experimental biologists for widespread use and are providing a software distribution of CoPIA in the supplement. When applying the method in practice, three crucial technical details are important. A user has to choose (i) which system properties to represent by dynamical variables; (ii) a specific form for the transfer function T, the temperature parameter, which fine-tunes the extent of non-optimal exploration of network space. In Materials and methods, we provide guidelines for these choices. Complementarity to response surface models and epistasis clustering In a recent interesting work, Lehár et al (2007) used drug pairs to perturb signaling pathways in cancer cells, and provided an interpretation framework based on traditional pharmacological models for two-drug response surfaces. Drug targets in the PI3K and MAPK pathways were characterized by correlating ‘synergy profiles,' demonstrating a link between network connectivity and drug pair response. Such synergy profiles, in turn, can be thought of as a generalization of the epistasis matrix used by Segre et al (2005) as a basis for functional clustering of genes. The approach proposed here is different in the sense that it performs a global optimization that aims to find a fully parameterized model for the entire system. Such models, in turn, can be used for additional purposes such as making predictions of system responses, or making connectivity information explicit as pathway diagrams. Preliminary data suggest that CoPIA models can be used to interpret or predict response surface data, as a function of drug concentrations, as an alternative to the approach of Lehár et al, e.g. to reduce experimental cost (S Nelander, unpublished data). Finally, the differential equation CoPIA models can be easily represented in standard systems biology formats, such as BioModels (Le Novère et al, 2006) and be used with a number of tools for model visualization, numerical simulation or analytical characterization. Relationship to neural models and Hopfield networks The nonlinear representation proposed here, or related neural models, has been used in biological contexts such as transcriptional network modeling (Marnellos and Mjolsness, 1998; D'haeseleer et al, 2000; Omholt et al, 2000; Vohradsky, 2001; Li et al, 2004; Bonneau et al, 2006; Hart et al, 2006), in synthetic biology (Kim et al, 2005, 2006) and for problems such as approximation of inorganic chemical reactions (Shenvi et al, 2004), but not for general cellular processes and/or drug perturbations. In addition, CoPIA models are similar, but not identical, to Hopfield networks, a formalism introduced to study computation in physical systems (Hopfield, 1982). To further motivate this class of models in representing biological systems, we propose an extended effort to theoretically and empirically analyze how well biochemical reactions can be approximated by neural functions, e.g. reactions involved in DNA switches (Kim et al, 2005). Confirmed and predicted regulatory interactions in MCF7 cells In our analysis, we detected self-inhibitory feedback loops downstream of the EGF receptor. This is compatible with the observation that receptor activation of MAPK signaling frequently leads to rapid feedback inhibition, for instance by induced expression of inhibitory proteins (such as Sprouty (Kim and Bar-Sagi, 2004) or MAPK phosphatases), or inhibition of RAF by direct phosphorylation ( Dougherty et al, 2005). In our experiments, we are not able to identify the full complexity of the feedback loops, as we did not perturb nodes such as ERK or RAF-1 or other proteins and used a short EGF stimulation time. Additional predictions, such as (i) eIF4E acting as a downstream effector of ERK, as well as (ii) PKC-δ counteracting the G1 arrest phenotype, are supported by results in other cell types (Waskiewicz et al, 1997). Furthermore, the model predicts a mutually inhibitory interplay between eIF4E activation by phosphorylation and G1 arrest, consistent with the established role of eIF4E as a potent oncogene and a master activator of a ‘regulon' of cell cycle activator genes (Culjkovic et al, 2006). However, the predicted increase in p-RAF by PKC-δ is paradoxical: the observed phosphorylation sites on c-Raf (S289/S296/S301) are regarded as inhibitory, which seems inconsistent with the facts that PKC-δ can activate MAPK signaling in a RAF-dependent way (Jackson and Foster, 2004). Our prediction might suggest an unknown direct effect mechanism, or an indirect effect that is not captured in the present analysis. Finally, three less interpretable and therefore interesting or potentially problematic features of the network in Figure 3 are (i) the self-activation of ERK; (ii) the activating arrow between apoptosis and G1 arrest and, (iii) the fact that RAF is not placed between EGFR and MEK, as in the usual representation of this pathway. Overall, a number of predictions can be used to design experiments to validate or refute the model predictions. Future challenges There are a number of future challenges and opportunities to apply the method to important problems and to increase its power. A key challenge is to use the method to extend known pathways, by combining exploratory perturbation experiments with the richness of biological knowledge in pathway databases. This can be achieved by adding a priori known nodes y[j] into the formalism and introducing a bias in the network search that favors solutions compatible with prior knowledge. To deal with off-target effects of perturbations and incompletely known drug–target specificity, we propose a variant algorithm in which drug–target couplings are parameters that are determined by optimization. Such a variant can be used in target identification for interesting drugs, e.g. compounds that have a desirable effect but for which the target is not yet known. To maximize the information value of experiments, we propose to develop algorithms for the design of experiments, e.g. based on the change of outcomes with respect to particular parameters (King et al, 2004; Vatcheva et al, 2006). We see tremendous opportunities in new types of experiments. To generate more comprehensive and more informative perturbations of a larger set of cellular components, one can use combinatorial RNA interference (Friedman and Perrimon, 2006; Sahin et al, 2007). To generate readout richer by one or two orders of magnitude, one can use mass spectrometry of protein and phospho-protein levels (Mann et al, 2002). The CoPIA method can be generalized to go beyond the steady-state approximation and explicitly model the time behavior of system components by minimizing the error function for a set of time series experiments. From models to therapies The proposed combinatorial perturbation approach to cell biology, CoPIA, presents a well-specified experimental–computational procedure to construct predictive models for perturbation responses in malignant cells. We suggest use of such models to optimize therapeutic protocols, especially by designing interventions using a combination of targeted compounds administered in an optimal time sequence. Our method constitutes a concrete step toward the active development of network-oriented pharmacology. Materials and methods Computational methods Phenotype prediction The nonlinear MIMO model for combinatorial perturbation in cellular systems is introduced in the Results section (equation (2)). When this system is propagated through time, it will generally converge to a stable, fixed point (Pineda, 1987). We interpret this fixed point as the phenotypic response to the perturbation u. To calculate the fixed point given in a model, we used standard numerical integration methods (ode15s (Mathworks Inc.) and DLSODE (Hindmarsh, 1993)). As the class of models studied here can in principle have more than one solution to the steady-state equation ( Smits et al, 2006), we used the convention—for practical purposes—to start each predictive simulation from the unperturbed, wild-type steady state y=0. Overview of model fitting algorithm The procedure used to find parameter values (for the α[i]'s, β[i]'s and the w[ij]'s) from experimental data is outlined below. As an overall approach, we minimize a global error function that combines the requirements of data fit and simplicity. The error function is defined as where E[SSQ] is the residual sum of squares error, which measures the difference between the model's predicted values and the corresponding observational values for the subset of variables that are observed. The term E[STRUCT] is a penalty term that measures the complexity of the network and λ is a tuning parameter that needs to be chosen; for λ=0 no emphasis is put on the model structure and increasingly sparse (uncomplicated) models are obtained for increasing values of λ. We used the l^0-norm of the regulatory matrix w to define E[STRUCT] as where 0^0=0. The l^0-norm is a common approach to enforce sparse solutions in many machine-learning applications (Weston et al, 2002). In principle, other norms can be used, such as the l^1 norm ( Yeung et al, 2002). To minimize E[total], we made combined use of a Monte Carlo stochastic search algorithm (to search for the network structure) and an efficient gradient descent algorithm described by Pineda (1987) (to set the parameters). In an outer loop of the algorithm, the Monte Carlo process gradually updates the model structure (the set of non-zeros in w). In an inner loop, we apply Pineda's algorithm to fit parameters (α[i]'s, β[i]'s and non-zero w[ij]'s). The output of the algorithm is a set of complete ODE models, for example In the following two sections, we describe the gradient descent algorithm and the Monte Carlo stochastic search algorithm more thoroughly. Inner loop: minimization of E[SSQ] using a gradient descent algorithm Assume a MIMO system with N dynamical variables y[1],y[2], …, y[N], of which a subset Ω of the variables can be observed experimentally. A perturbation experiment is described by the pair (u, Y), where u=(u[1], …, u[N]) is the perturbation treatment and Y={Y[i]iu and the experimentally observed response Y, we use the dynamical system described in the Results section (equation (2)). Let u. We then define the sum of squares error for a single experiment as E[SSQ] = ∑[i](Y[i]−[i])^2. We consider a fixed network structure, where some w[ij]'s are fixed to zero. To describe the structure, we define a matrix U such that w[ij] can adopt a non-zero value if U[ij]=1 and w[ij] is zero if Given N, (u, Y) and U, we want to find parameters α[i]'s, β[i]'s and the non-zero w[ij]'s that minimize the error E[SSQ]. For the special case where λ=0, α=1, β=1, Pineda (1987) described a gradient descent procedure, based on solving a set of differential equations in which the weights w[ij] are updated following the gradient descent rule Here, η is a (small) number that sets the convergence speed, and τ is a ‘pseudo-time' that increases as the fitting procedure progresses. We use the update equations derived in D'haeseleer et al (2000) to extend to an arbitrary α and β. The computation formula to minimize E[SSQ] thus becomes: In these equations, z is an error propagation variable introduced for computational purposes (Pineda, 1987). To fit the model for a single (u, Y) pair, we integrated these equations (DLSODE or ode15s ) with initial value 0 for w and 1 for α and β. The parameters were not subjected to constraints such as lower and upper bounds. Solutions for different stimulus–response pairs were combined using online learning with momentum described in Duda et al (2000). Outer loop: minimization of E[TOTAL] with an l-zero penalty using stochastic search We used a Markov Chain Monte Carlo approach (Ewens and Grant, 2005) to minimize E[TOTAL], and hence find the optimal model defined by the network structure U and parameter values for α, β and non-zero w's. In the algorithm, a set of models are maintained and a particular model survives to the next iteration with probability proportional to e^−E[total]/T (the Boltzmann factor, where T denotes the temperature of the search). Hence, low-error models are more likely to be propagated to next iteration. The temperature is typically high in the beginning of the search and low in the end. The algorithm is outlined as follows: 1. Initialize with U[current]=U[start]. Here, subindexes of U (U[current], U[start], U[1], U[2], …) refer to different realizations of the U matrix (as opposed to U matrix elements. As U[start], we use a N × N matrix of zeros. 2. Generate a set S={Ũ[1], …, Ũ[k]} of structures that are variations of U[current]. For simplicity, we consider every structure that differs from Ucurrent by one edge. 3. Estimate the parameters for each structure Ũ[1], …, Ũ[k] using the variant of Pineda's algorithm presented above. Record the corresponding sum-of-square errors E[1], …, E[k.] 4. Calculate the total error for each topology as E[j]′=E[j]+λ∑U[j]. 5. Use a decision rule R to select one of the alternate topologies, U[selected.] 6. Update the current topology, Ũ[current]←Ũ[selected], potentially update T, and repeat from step 2. As decision rule R, we randomly select topology U[j] with probability Under certain assumptions (the number of neighbors k is the same for every topology U, neighbor is a mutual relationship, and all possible topologies can be reached in a finite number of steps), the above Markov chain will have a stationary probability distribution in which the probability for a certain topology is proportional to its Boltzmann factor Ewens and Grant (2005). For a sufficiently low temperature T, the algorithm will converge to a probability optimum/error minimum. Bootstrapping confidence intervals For a given model structure U, we used re-sampling of residuals to generate boot-strapped confidence intervals for the model parameters. First, the model was fitted using structure U and the original data, and residuals were calculated as the best model fit minus the original data. A total of 200 ‘new' data sets was then constructed by adding randomly drawn residuals to each measurement (using residuals for the corresponding experimental readout, i.e. p-MEK residuals were added to p-MEK values and so on). For each such re-sampled data set, a model was fitted using the structure U. Subsequently, confidence intervals for each coupling parameter w[ij] were calculated as percentiles 5–95% across the 200 data sets. Data preprocessing and parameter choices The relationship between the model variable y[i], a corresponding experimental observation Y[i] and an experimental reference point Y[ref] or Y[max] is defined by a mapping function. In our evaluation in breast cancer cells, we used the log relative change defined as The transfer functions [i] should be chosen such that the interval spanned by the experimental data corresponds to the target domain of the function. We found it useful to standardize data to the interval [−1, +1] and then to choose the sigmoid function accordingly. As the reference (‘wild-type') value Y[ref], we used the untreated controls. As only one concentration level was used for every drug (chosen to be around the ED[90]), we represented perturbation as u[i]=1 if the drug was added, and u[i]=0 otherwise. We used [i]=tanh(y[i]) as the sigmoid (suitable as it maps to the interval [−1, +1], another function with this target domain, [i]=2/π tan^−1(cy[i]/2), gave very similar results). Experimental methods Cell culture and reagents MCF7 cells were obtained from American Type Culture Collection; maintained in 1:1 mixture of DME:F12 media supplemented with 100 U/ml penicillin, 100 g/ml streptomycin, 4 mM glutamine and 10% heat-inactivated fetal bovine serum and incubated at 37°C in 5% CO[2]. The final concentrations for inhibitors used for perturbation experiments were 1 μM ZD1839 (AstraZeneca), 10 μM LY294002 (Calbiochem), 50 nM PD0325901 (Pfizer), 2 μM rottlerin (EMD), 10 nM rapamycin and 1.5 μg/ml antibody A12 (ImClone Systems). MCF7 cells were grown in 100 mm dishes, and starved for 20 h in PBS. They were then treated with indicated concentrations of inhibitors (details see Cell culture and reagents) or vehicle (DMSO) for 1 h, followed by adding EGF into the media (final EGF concentration was 100 ng/ml). After EGF stimulation for 5 or 30 min in the presence of drugs or DMSO, western blots were performed by harvesting MCF7 cellular lysates in 1% Triton lysis buffer (50 mM HEPES, pH 7.4, 1% Triton X-100, 150 mM NaCl, 1.5 mM MgCl[2], 1 mM EGTA, 1 mM EDTA, 100 mM NaF, 10 mM sodium pyrophosphate, 1 mM vanadate, 1 × protease cocktail II (Calbiochem) and 10% glycerol), separating 40 μg of each lysate by SDS–PAGE, transferring to PVDF membrane and immunoblotting using specific primary and secondary antibodies and chemoluminescence visualization on Kodak or HyBlotCL films. Antibodies for phospho-Akt-S473, phospho-ERK-T202/Y204, phospho-MEK-S217/S221, phospho-eIF4E-S209, phospho-c-RAF-S289/S296/S301, phospho-p70S6K-S371 and phospho-pS6-S235/S236 were from Cell Signaling. Films were scanned by an microTEK scanner at 600 d.p.i. in gray scale. Bands were selected and quantified by FUJIFILM Multi Gauge V3.0 software. Each membrane was normalized to internal controls (with or without 100 ng/ml EGF). The membranes were stripped and reprobed with anti-beta actin (Sigma no. A5441) to confirm equal protein loading. Flow cytometry analysis of cell cycle and apoptosis MCF7 cells were seeded in six-well plates (200 000 cells per well) and grown for 20 h in 10% FBS/DME:F12. Cells were then starved for 20 h in PBS, and then treated with indicated concentrations of inhibitors (details see Cell culture and reagents) or DMSO for 1 h, followed by adding EGF into the media (final EGF concentration was 100 ng/ml). After EGF stimulation for 24, 48 or 72 h in the presence of drugs or DMSO, cells were harvested by trypsinization, including both suspended and adherent fractions, and washed in cold PBS. Cell nuclei were prepared by the method described by Nusse et al and cell cycle distribution was determined by flow cytometric analysis of DNA content (FACS) using red fluorescence of 488 nm excited ethidium bromide-stained nuclei. The percentage of cells in the G1 phase (cell cycle arrest) and sub-G1 fraction (apoptosis) was recorded. Supplementary Material Supplementary Information We thank Doron Betel, Nikolaus Schultz, Debora Marks and Erik Kristiansson for comments on the paper and Solmaz Shahalizadeh-Korkran for contributions to algorithm evaluation. This research project was made possible by an EMBO long-term postdoctoral fellowship and a stipend from the PE Lindahl foundation (SN); support from the Göteborg University quantitative biology platform and the Swedish Strategic Research Foundation through Göteborg Mathematical Modeling Center (PG) and, by a donation from Matt's Promise Foundation (CS). Author contributions: SN, PG and CS developed the computational methodology with additional contributions from BN. SN and PG wrote the CoPIA software. SN, WW, QS and CP planned and interpreted experiments. WW performed experiments. SN and CS wrote the paper with key contributions from BN, PG and WW. • Avery L, Wasserman S (1992) Ordering gene function: the interpretation of epistasis in regulatory hierarchies. Trends Genet 8: 312–316. [PMC free article] [PubMed] • Bonneau R, Reiss DJ, Shannon P, Facciotti M, Hood L, Baliga NS, Thorsson V (2006) The inferelator: an algorithm for learning parsimonious regulatory networks from systems-biology data sets de novo. Genome Biol 7: R36. [PMC free article] [PubMed] • Borisy AA, Elliott PJ, Hurst N.W, Lee MS, Lehar J, Price ER, Serbedzija G, Zimmermann GR, Foley MA, Stockwell BR, Keith CT (2003) Systematic discovery of multicomponent therapeutics. Proc Natl Acad Sci USA 100: 7977–7982. [PMC free article] [PubMed] • Chou TC (2006) Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies. Pharmacol Rev 58: 621–681. [PubMed] • Culjkovic B, Topisirovic I, Skrabanek L, Ruiz-Gutierrez M, Borden KLB (2006) eif4e is a central node of an rna regulon that governs cellular proliferation. J Cell Biol 175: 415–426. [PMC free article] [PubMed] • D'haeseleer P, Liang S, Somogyi R (2000) Genetic network inference: from co-expression clustering to reverse engineering. Bioinformatics 16: 707–726. [PubMed] • de Jong H (2002) Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol 9: 67–103. [PubMed] • DeFeo-Jones D, Barnett SF, Fu S, Hancock PJ, Haskell KM, Leander KR, McAvoy E, Robinson RG, Duggan ME, Lindsley C.W, Zhao Z, Huber HE, Jones RE (2005) Tumor cell sensitization to apoptotic stimuli by selective inhibition of specific akt/pkb family members. Mol Cancer Ther 4: 271–279. [PubMed] • Deutscher D, Meilijson I, Kupiec M, Ruppin E (2006) Multiple knockout analysis of genetic robustness in the yeast metabolic network. Nat Genet 38: 993–998. [PubMed] • di Bernardo D, Thompson MJ, Gardner TS, Chobot SE, Eastwood EL, Wojtovich AP, Elliott SJ, Schaus SE, Collins JJ (2005) Chemogenomic profiling on a genome-wide scale using reverse engineered gene networks. Nat Biotechnol 23: 377–383. [PubMed] • Dougherty MK, Muller J, Ritt DA, Zhou M, Zhou XZ, Copeland TD, Conrads TP, Veenstra TD, Lu KP, Morrison DK (2005) Regulation of Raf-1 by direct feedback phosphorylation. Mol Cell 17: 215–224. [ • Duda RO, Hart PE, Stork DG (2000) Pattern Classification. New York, NY: Wiley-Interscience Publication, John Wiley & Sons, Inc. • Edwards JS, Palsson BO (2000) Robustness analysis of the Escherichia coli metabolic network. Biotechnol Prog 16: 927–939. [PubMed] • Ewens WJ, Grant GR (2005) Statistical Methods in Bioinformatics, 2nd edn. Springer Verlag: Berlin. • Fell DA, Small JR (1986) Fat synthesis in adipose tissue. An examination of stoichiometric constraints. Biochem J 238: 781–786. [PMC free article] [PubMed] • Friedman A, Perrimon N (2006) A functional RNAi screen for regulators of receptor tyrosine kinase and ERK signalling. Nature 444: 230–234. [PubMed] • Hart C, Mjolsness E, Wold B (2006) Connectivity in the yeast cell cycle transcription network: inferences from neural networks. PLoS Comput Biol 2: e169. [PMC free article] [PubMed] • Hindmarsh AC (1993) ODEPACK, a systematized collection of ODE solvers. In Scientific Computing, Stepleman RS, Carver M, Peskin R, Ames WF, Vichnevetsky R (eds), pp 55–64. Amsterdam: North-Holland Publishing Company. • Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554–2558. [PMC free article] [PubMed] • Jackson DN, Foster DA (2004) The enigmatic protein kinase cdelta: complex roles in cell proliferation and survival. FASEB J 18: 627–636. [PubMed] • Kaufman A, Keinan A, Meilijson I, Kupiec M, Ruppin E (2005) Quantitative analysis of genetic and neuronal multi-perturbation experiments. PLoS Comput Biol 1: e64. [PMC free article] [PubMed] • Keith CT, Borisy AA, Stockwell BR (2005) Multicomponent therapeutics for networked systems. Nat Rev Drug Discov 4: 71–78. [PubMed] • Kelley R, Ideker T (2005) Systematic interpretation of genetic interactions using protein networks. Nat Biotechnol 23: 561–566. [PMC free article] [PubMed] • Kim HJ, Bar-Sagi D (2004) Modulation of signalling by Sprouty: a developing story. Nat Rev Mol Cell Biol 5: 441–450. [PubMed] • Kim J, Hopfield J, Winfree E (2005) Neural Network Computation by in vitro Transcriptional Circuits. Cambridge, MA: MIT Press. • Kim KH, Kim HC, Hwang MY, Oh HK, Lee TS, Chang YC, Song HJ, Won NH, Park KK (2006) The antifibrotic effect of tgf-beta1 sirnas in murine model of liver cirrhosis. Biochem Biophys Res Commun 343: 1072–1078. [PubMed] • King RD, Whelan KE, Jones FM, Reiser PG, Bryant CH, Muggleton SH, Kell DB, Oliver SG (2004) Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427: 247–252. • Komarova N, Wodarz D (2005) Drug resistance in cancer: principles of emergence and prevention. Proc Natl Acad Sci USA 102: 9714–9719. [PMC free article] [PubMed] • Le Novère N, Bornstein B, Broicher A, Courtot M, Donizelli M, Dharuri H, Li L, Sauro H, Schilstra M, Shapiro B, Snoep JL, Hucka M (2006) Biomodels database: a free, centralized database of curated, published, quantitative kinetic models of biochemical and cellular systems. Nucleic Acids Res 34: D689–D691. [PMC free article] [PubMed] • Lehár J, Zimmermann GR, Krueger AS, Molnar RA, Ledell JT, Heilbut AM, Short GF, Giusti LC, Nolan GP, Magid OA, Lee MS, Borisy AA, Stockwell BR, Keith CT (2007) Chemical combination effects predict connectivity in biological systems. Mol Syst Biol 3: 80. [PMC free article] [PubMed] • Li F, Long T, Lu Y, Ouyang Q, Tang C (2004) The yeast cell-cycle network is robustly designed. Proc Natl Acad Sci USA 101: 4781–4786. [PMC free article] [PubMed] • Ljung L (1986) System Identification: Theory for the User. Upper Saddle River, NJ, USA: Prentice-Hall Inc. • Mann M, Ong SE, Grønborg M, Steen H, Jensen ON, Pandey A (2002) Analysis of protein phosphorylation using mass spectrometry: deciphering the phosphoproteome. Trends Biotechnol 20: 261–268. [ • Marnellos G, Mjolsness E (1998) A gene network approach to modeling early neurogenesis in Drosophila (www.citeseer.ist.psu.edu/marn ellos98gene.html) [PubMed] • Mingo-Sion AM, Ferguson HA, Koller E, Reyland ME, Van Den Berg CL (2005) PKCdelta and mTOR interact to regulate stress and IGF-I induced IRS-1 Ser312 phosphorylation in breast cancer cells. Breast Cancer Res Treat 91: 259–269. [PubMed] • Nusse M, Beisker W, Hoffmann C, Tarnok A (1990) Flow cytometric analysis of G1- and G2/M-phase subpopulations in mammalian cell nuclei using side scatter and DNA content measurements. Cytometry 11: 813–821. [PubMed] • Omholt SW, Plahte E, Oyehaug L, Xiang K (2000) Gene regulatory networks generating the phenomena of additivity, dominance and epistasis. Genetics 155: 969–980. [PMC free article] [PubMed] • Pineda FJ (1987) Generalization of back-propagation to recurrent neural networks. Phys Rev Lett 59: 2229–2232. [PubMed] • Sahin O, Lobke C, Korf U, Appelhans H, Sultmann H, Poustka A, Wiemann S, Arlt D (2007) Combinatorial RNAi for quantitative protein network analysis. Proc Natl Acad Sci USA 104: 6579–6584. [PMC free article] [PubMed] • Segre D, Deluna A, Church GM, Kishony R (2005) Modular epistasis in yeast metabolism. Nat Genet 37: 77–83. [PubMed] • Shenvi N, Geremia JM, Rabitz H (2004) Efficient chemical kinetic modeling through neural network maps. J Chem Phys 120: 9942–9951. [PubMed] • Simstein R, Burow M, Parker A, Weldon C, Beckman B (2003) Apoptosis, chemoresistance, and breast cancer: insights from the mcf-7 cell model system. Exp Biol Med (Maywood) 228: 995–1003. [PubMed] • Smits WK, Kuipers OP, Veening JW (2006) Phenotypic variation in bacteria: the role of feedback regulation. Nat Rev Microbiol 4: 259–271. [PubMed] • Tegner J, Yeung MK, Hasty J, Collins JJ (2003) Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc Natl Acad Sci USA 100: 5944–5949. [PMC free article] [PubMed] • Vatcheva I, de Jong H, Bernard O, Mars NJI (2006) Experiment selection for the discrimination of semiquantitative models of dynamical systems. Artif Intell 170: 472–506. • Vohradsky J (2001) Neural model of the genetic network. J Biol Chem 276: 36168–36173. [PubMed] • Waskiewicz AJ, Flynn A, Proud CG, Cooper JA (1997) Mitogen-activated protein kinases activate the serine/threonine kinases mnk1 and mnk2. EMBO J 16: 1909–1920. [PMC free article] [PubMed] • Weston J, Elisseeff A, Scholkopf B, Tipping M (2003) The use of zero-norm with linear models and kernel methods. J Mach Learn Res 3: 1439–1461. • Xiong M, Li J, Fang X (2004) Identification of genetic networks. Genetics 166: 1037–1052. [PMC free article] [PubMed] • Yeh P, Tschumi AI, Kishony R (2006) Functional classification of drugs by properties of their pairwise interactions. Nat Genet 38: 489–494. [PubMed] • Yeung MK, Tegner J, Collins JJ (2002) Reverse engineering gene networks using singular value decomposition and robust regression. Proc Natl Acad Sci USA 99: 6163–6168. [PMC free article] [PubMed] Articles from Molecular Systems Biology are provided here courtesy of The European Molecular Biology Organization and Nature Publishing Group Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2564730/?tool=pubmed","timestamp":"2014-04-20T07:08:03Z","content_type":null,"content_length":"131845","record_id":"<urn:uuid:3d59e16f-c30f-448b-b439-9dc68a461de5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
how to convert decimal to floating point number 03-03-2008 #91 Registered User Join Date Jan 2008 so say that I have an unsigned integer : unsigned int exponent = 126; exponent = exponent - 127 will result -1?? I tried this and it gives me a really big number instead Keyword "unsigned" integer. That means that ALL unsigned integers are POSITIVE. Going less than 0 is an overflow(actually, that might not be the correct term. it's more of an underflow). It acts the same as if you add too much to a number except with subtraction. Don't quote me on that... ...seriously okay and say that I want to convert the unsigned int to an int can I do that so that I will be able to do this? Yes. You can convert an unsigned int to signed. In fact, the operations (add, subtraction) are the same for unsigned and signed. So the result you get as far as the actual binary data stored in memory from the operation 0 - 1 on signed integers is the same as you would get with unsigned. unsigned int u = 0 - 5; int i = 0 - 5; If you looked at the binary memory for both i and u, they would look identical. So the conversion from unsigned to sign is really just a matter of how the program interprets the memory. unsigned int u = 0 - 5; int i = (int)u; i would now hold the value -5 Last edited by Brad0407; 03-03-2008 at 04:59 PM. Reason: code fix Don't quote me on that... ...seriously okay just one more issue.. you know that a mantissa is always precedeed with the 1. 0100101001 and I can't anyway extract the 1 in front of the fraction because it's actually not inside the 1, how do I append the extra one to the 0100101001?? do I shift it right first and then do an or with 1000000000 (masking technique) ?? the problem is if I do this I would lose bit 0 from the mantissa (the right most bit in the mantissa). I need to append the string so that it becomes 24 bit.. okay just one more issue.. you know that a mantissa is always precedeed with the 1. 0100101001 and I can't anyway extract the 1 in front of the fraction because it's actually not inside the 1, how do I append the extra one to the 0100101001?? do I shift it right first and then do an or with 1000000000 (masking technique) ?? the problem is if I do this I would lose bit 0 from the mantissa (the right most bit in the mantissa). I need to append the string so that it becomes 24 bit.. It's not there. So when you print, just print a "1." first, and then print the actual number. well no, after I got the one I would like to convert all that together into a hex A possible solution.... #include <stdio.h> void PrintTheBits(long lInput, int iStart, int iEnd) unsigned int uiMask = 1 << iStart; unsigned int uiEndMask = 1 << iEnd; while (uiMask >= uiEndMask) putchar ((uiMask & lInput) ? '1' : '0'); uiMask >>= 1; int main(void) float fInput = 56.43f; long lLongValue; lLongValue = *((long *)(&fInput)); PrintTheBits (lLongValue, 31, 31); putchar ('.'); PrintTheBits (lLongValue, 30, 23); putchar ('.'); PrintTheBits (lLongValue, 22, 0); putchar ('\n'); return 0; If you want to print the bits that are there then print the bits that are there as a hex integer using %x, which will print as hex. If you want to print "what it means", then remember: the bits that you have come after the binary point, and the implicit one comes before, so it's really "1."001010011100 or whatever. (You can also print that in hex as 0x1.29c or so.) There is no context in which you would want to add the implicit one as a bit in front and then print the result as a hex integer. 03-03-2008 #92 Captain - Lover of the C Join Date May 2005 03-03-2008 #93 Registered User Join Date Jan 2008 03-03-2008 #94 Captain - Lover of the C Join Date May 2005 03-03-2008 #95 Registered User Join Date Jan 2008 03-03-2008 #96 03-03-2008 #97 Registered User Join Date Jan 2008 03-04-2008 #98 Registered User Join Date Mar 2005 Mountaintop, Pa 03-04-2008 #99
{"url":"http://cboard.cprogramming.com/c-programming/99815-how-convert-decimal-floating-point-number-7.html","timestamp":"2014-04-20T04:42:11Z","content_type":null,"content_length":"76301","record_id":"<urn:uuid:fcf3321d-805f-411d-b266-571549d3c8b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami Shores, FL Trigonometry Tutor Find a Miami Shores, FL Trigonometry Tutor ...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive... 14 Subjects: including trigonometry, chemistry, calculus, geometry ...I have a slew of remedies and quick follow up procedures that will help improve the veracity and the tenacity of the practice. The GRE will be the next step for your professional development. With Preparation, Organization, and Practice we will ensure our success on the exam. 30 Subjects: including trigonometry, calculus, geometry, GRE ...I tutor all levels of math using different methods, including the Singapore Method. Among my current students' subjects are: university level: engineering math, differential calculus, vector calculus, linear algebra, statistics; high school: AP calculus, statistics, trigonometry, and the other b... 24 Subjects: including trigonometry, calculus, algebra 1, algebra 2 ...If you want your student to not only improve his or her grades, but also develop the skills and tricks needed to excel in every math class he or she takes in the future, you've found the right person for the job. Freshman year of high school was the year I first began tutoring. I was learning Algebra 2 and competing against students in the Tampa area as a Mathlete. 59 Subjects: including trigonometry, chemistry, writing, English ...People aren't smart or dumb, we all just learn differently. Everyone can be really good at math, they just need people to explain things to them in a way they understand. I pride myself in making analogies of complicated concepts to things in real life, as well as just breaking a hard problem down into many simpler problems. 23 Subjects: including trigonometry, calculus, GRE, ASVAB
{"url":"http://www.purplemath.com/Miami_Shores_FL_Trigonometry_tutors.php","timestamp":"2014-04-20T21:32:40Z","content_type":null,"content_length":"24709","record_id":"<urn:uuid:2fed45e4-ca16-4abb-8824-4c5cefc8317f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Every day a certain bank calculates its average daily Author Message Every day a certain bank calculates its average daily [#permalink] 18 Jul 2010, 09:32 This post received Expert's post 55% (medium) Status: Graduated Question Stats: Affiliations: HEC Joined: 28 Sep 2009 (15:36) correct Posts: 1614 68% (06:00) Concentration: wrong Economics, Finance based on 29 sessions GMAT 1: 730 Q48 V44 Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal Followers: 80 places? Kudos [?]: 454 [1] (A) 1/10 , given: 423 (B) 2/15 (C) 4/15 (D) 3/10 (E) 11/30 Spoiler: OA Last edited by on 07 Jul 2013, 05:48, edited 1 time in total. RENAMED THE TOPIC. Re: MGMAT Challenge: Decimals on Deposit [#permalink] 18 Jul 2010, 10:42 This post received Expert's post bmillan01 wrote: Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal (A) 1/10 (B) 2/15 (C) 4/15 (D) 3/10 (E) 11/30 Reduced fraction (meaning that fraction is already reduced to its lowest term) can be expressed as terminating decimal if and only b (denominator) is of the form , where are non-negative integers. For example: is a terminating decimal , as (denominator) equals to . Fraction is also a terminating decimal, as and denominator Note that if denominator already has only 2-s and/or 5-s then it doesn't matter whether the fraction is reduced or not. For example , (where x, n and m are integers) will always be terminating decimal. (We need reducing in case when we have the prime in denominator other then 2 or 5 to see whether it could be reduced. For example fraction has 3 as prime in denominator and we need to know if it can be reduced.) BACK TOT THE ORIGINAL QUESTION: Question: does has less than 5 decimal places? Where Bunuel is the chosen day. Math Expert If the chosen day, Joined: 02 Sep d , is NOT of a type Posts: 17321 Followers: 2876 Kudos [?]: 18400 [ 9] , given: 2350 n are nonnegative integers) then will not be a terminating decimal and thus will have more than 5 decimal places. How many such days are there of a type : 1, 2, 4, 5, 8, 10, 16, 20, 25 ( ), total of 9 such days (1st of June, 4th of June, ...). Now, does divided by any of these have fewer than 5 decimal places? Yes, as for any such (10,000 is divisible by all these numbers: 1, 2, 4, 5, 8, 10, 16, 20, 25). So, there are 9 such days out of 30 in June: Answer: D. Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book ; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: MGMAT Challenge: Decimals on Deposit [#permalink] 19 Jul 2010, 23:37 Bunuel wrote: bmillan01 wrote: Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal (A) 1/10 (B) 2/15 (C) 4/15 (D) 3/10 (E) 11/30 Reduced fraction (meaning that fraction is already reduced to its lowest term) can be expressed as terminating decimal if and only b (denominator) is of the form , where are non-negative integers. For example: is a terminating decimal , as (denominator) equals to . Fraction is also a terminating decimal, as and denominator Note that if denominator already has only 2-s and/or 5-s then it doesn't matter whether the fraction is reduced or not. For example , (where x, n and m are integers) will always be terminating decimal. (We need reducing in case when we have the prime in denominator other then 2 or 5 to see whether it could be reduced. For example fraction has 3 as prime in denominator and we need to know if it can be reduced.) BACK TOT THE ORIGINAL QUESTION: Question: does has less than 5 decimal places? Where Joined: 09 Dec 2008 and Posts: 29 d Location: Vietnam is the chosen day. Schools: Somewhere If the chosen day, Followers: 0 d Kudos [?]: 5 [0], , is NOT of a type given: 2 are nonnegative integers) then will not be a terminating decimal and thus will have more than 5 decimal places. How many such days are there of a type : 1, 2, 4, 5, 8, 10, 16, 20, 25 ( ), total of 9 such days (1st of June, 4th of June, ...). Now, does divided by any of these have fewer than 5 decimal places? Yes, as for any such (10,000 is divisible by all these numbers: 1, 2, 4, 5, 8, 10, 16, 20, 25). So, there are 9 such days out of 30 in June: Answer: D. Hope it's clear. Great explanation!!! I learn a lot from this Re: MGMAT Challenge: Decimals on Deposit [#permalink] 21 Jul 2010, 10:56 bunuel... this is great. thanks... Joined: 11 Jul 2010 the way you piece it together is sometimes scary... just curious - what was your gmat score Posts: 229 Followers: 1 Re: MGMAT Challenge: Decimals on Deposit [#permalink] 21 Jul 2010, 11:21 gmat1011 Actually wanted to seek 1 clarification to better understand this: Manager p/d *10,000=integer Joined: 11 Jul p is a prime integer greater than 100 d can be one of the 9 numbers Posts: 229 To test the tendency to leave a certain desired number of decimal places, upon division of p by d why is it ok to multiply p by a common multiple (10k here) of the 9 numbers in the Followers: 1 denominator? very crude general example (which I am hoping is an analogy): 37 divided by 7 leaves R of 2 and certain decimal places; 37 * 14 divided by 7 leaves no remainder ---> how can the later scenario be used to test whether a certain desired number of decimal places are left by the first scenario... Re: MGMAT Challenge: Decimals on Deposit [#permalink] 21 Jul 2010, 12:13 Expert's post gmat1011 wrote: Actually wanted to seek 1 clarification to better understand this: p/d *10,000=integer p is a prime integer greater than 100 d can be one of the 9 numbers To test the tendency to leave a certain desired number of decimal places, upon division of p by d why is it ok to multiply p by a common multiple (10k here) of the 9 numbers in the very crude general example (which I am hoping is an analogy): 37 divided by 7 leaves R of 2 and certain decimal places; 37 * 14 divided by 7 leaves no remainder ---> how can the later scenario be used to test whether a certain desired number of decimal places are left by the first scenario... Your example is not good as will be recurring decimal (will have infinite number of decimal places). How many decimal places will terminating decimal have? (p is prime number) Consider following examples: has 1 decimal place --> 1.2*10=12=integer (multiplying by 10 with 1 zero); has 2 decimal places --> 1.25*10^2=125=integer (multiplying by 100 with 2 zeros); has 3 decimal places --> 1.257*10^3=1257=integer (multiplying by 100 with 3 zeros); Bunuel has 4 decimal places --> 1.2571*10^4=12571=integer (multiplying by 100 with 4 zeros); Math Expert ... Joined: 02 Sep So, terminating decimal, Posts: 17321 (where p is prime number), will have Followers: 2876 Kudos [?]: 18400 [ 0], given: 2350 decimal places, where is the least value in for which In our original question least value of for which , for all 9 d's, is 4 or when (k=4 is needed when d=16). Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book ; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Manager Re: MGMAT Challenge: Decimals on Deposit [#permalink] 22 Jul 2010, 03:22 Joined: 11 Jul Great - many thanks. Posts: 229 Followers: 1 Manager Re: MGMAT Challenge: Decimals on Deposit [#permalink] 23 Jul 2010, 04:03 Joined: 22 Jun awesome explanation! Posts: 57 Followers: 1 Kudos [?]: 4 [0], given: 10 Manager Re: MGMAT Challenge: Decimals on Deposit [#permalink] 23 Jul 2010, 05:36 Joined: 06 Jul Brilliant explanation. Never knew that dividing by (2^m)(5^n) gives a terminating decimal Posts: 112 Followers: 1 Kudos [?]: 3 [0], given: 9 Manager Re: MGMAT Challenge: Decimals on Deposit [#permalink] 23 Jul 2010, 08:09 Joined: 24 Jan Thanks for the explanation Bunuel Posts: 166 Location: India If you like my post, consider giving me a kudos. THANKS! Schools: ISB Followers: 2 Kudos [?]: 13 [0], given: 14 Intern Re: MGMAT Challenge: Decimals on Deposit [#permalink] 23 Jul 2010, 19:54 Joined: 15 May Thanks for the explanation.... Posts: 3 Followers: 0 Kudos [?]: 0 [0], given: 0 Intern Re: MGMAT Challenge: Decimals on Deposit [#permalink] 19 Sep 2010, 04:40 Joined: 26 Aug This is definitely a 800+ question 2010 Good explanation and thnx for the theory, very useful! Posts: 23 Followers: 0 Kudos [?]: 8 [0], given: 2 Re: MGMAT Challenge: Decimals on Deposit [#permalink] 06 Oct 2010, 16:18 Joined: 06 Aug 2010 One question about this problem. The problem doesn't ask if the decimal will be terminating, but rather if the decimal will have less than 5 places. Your solution checks for termination, but how do you check for the number of decimal places? Couldn't some of the possibilities result in termination with more than 5 decimal places? Posts: 225 Location: Boston Followers: 2 Kudos [?]: 71 [0], given: 5 Re: MGMAT Challenge: Decimals on Deposit [#permalink] 07 Oct 2010, 01:10 This post received Expert's post TehJay wrote: One question about this problem. The problem doesn't ask if the decimal will be terminating, but rather if the decimal will have less than 5 places. Your solution checks for termination, but how do you check for the number of decimal places? Couldn't some of the possibilities result in termination with more than 5 decimal places? You should read the last part: "Now, does divided by any of these have fewer than 5 decimal places? Yes, as Math Expert for any such Joined: 02 Sep 2009 d Posts: 17321 (10,000 is divisible by all these numbers: 1, 2, 4, 5, 8, 10, 16, 20, 25). Followers: 2876 So, there are 9 such days out of 30 in June: Kudos [?]: 18400 [ P=\frac{9}{30}=\frac{3}{10} 1] , given: 2350 Answer: D." This issue is also discussed in the posts following the one with solution. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book ; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: MGMAT Challenge: Decimals on Deposit [#permalink] 08 Oct 2010, 14:39 Joined: 03 Aug 2010 Bunuel, is it important to know that the numerator in this problem is greater than 100? Posts: 107 GMAT Date: Followers: 1 Kudos [?]: 13 [0], given: 63 Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 07:29 Bunuel wrote: bmillan01 wrote: Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal (A) 1/10 (B) 2/15 (C) 4/15 (D) 3/10 (E) 11/30 Reduced fraction (meaning that fraction is already reduced to its lowest term) can be expressed as terminating decimal if and only b (denominator) is of the form , where are non-negative integers. For example: is a terminating decimal , as (denominator) equals to . Fraction is also a terminating decimal, as and denominator Note that if denominator already has only 2-s and/or 5-s then it doesn't matter whether the fraction is reduced or not. For example , (where x, n and m are integers) will always be terminating decimal. (We need reducing in case when we have the prime in denominator other then 2 or 5 to see whether it could be reduced. For example fraction has 3 as prime in denominator and we need to know if it can be reduced.) BACK TOT THE ORIGINAL QUESTION: Question: does has less than 5 decimal places? Where is the chosen day. Senior Manager If the chosen day, Joined: 30 Nov 2010 d Posts: 266 , is NOT of a type Schools: UC 2^n5^m Berkley, UCLA Followers: 1 are nonnegative integers) then will not be a terminating decimal and thus will have more than 5 decimal places. How many such days are there of a type : 1, 2, 4, 5, 8, 10, 16, 20, 25 ( ), total of 9 such days (1st of June, 4th of June, ...). Now, does divided by any of these have fewer than 5 decimal places? Yes, as for any such (10,000 is divisible by all these numbers: 1, 2, 4, 5, 8, 10, 16, 20, 25). So, there are 9 such days out of 30 in June: Answer: D. Hope it's clear. So you're saying that by controlling the terminating decimal using d=2^m*5^n and, and you can make it an integer by multiplying it by 10000 (if the number must be five decimal places to the left. Therefore, you can choose of days 1, 2, 4, 5, 8, 10, 16, 20, or 25. (making that 9 days). - But they're not prime??? What am I missing here... I'm the slow one of the lot please help me out Thank you for your kudoses Everyone!!! "It always seems impossible until its done." -Nelson Mandela Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 07:44 Expert's post mariyea wrote: Bunuel wrote: bmillan01 wrote: Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal (A) 1/10 (B) 2/15 (C) 4/15 (D) 3/10 (E) 11/30 Reduced fraction (meaning that fraction is already reduced to its lowest term) can be expressed as terminating decimal if and only b (denominator) is of the form , where are non-negative integers. For example: is a terminating decimal , as (denominator) equals to . Fraction is also a terminating decimal, as and denominator Note that if denominator already has only 2-s and/or 5-s then it doesn't matter whether the fraction is reduced or not. For example , (where x, n and m are integers) will always be terminating decimal. (We need reducing in case when we have the prime in denominator other then 2 or 5 to see whether it could be reduced. For example fraction has 3 as prime in denominator and we need to know if it can be reduced.) BACK TOT THE ORIGINAL QUESTION: Question: does has less than 5 decimal places? Where is the chosen day. If the chosen day, , is NOT of a type Math Expert Joined: 02 Sep 2009 (where Posts: 17321 n Followers: 2876 and Kudos [?]: 18400 [ m 0], given: 2350 are nonnegative integers) then will not be a terminating decimal and thus will have more than 5 decimal places. How many such days are there of a type : 1, 2, 4, 5, 8, 10, 16, 20, 25 ( ), total of 9 such days (1st of June, 4th of June, ...). Now, does divided by any of these have fewer than 5 decimal places? Yes, as for any such (10,000 is divisible by all these numbers: 1, 2, 4, 5, 8, 10, 16, 20, 25). So, there are 9 such days out of 30 in June: Answer: D. Hope it's clear. So you're saying that by controlling the terminating decimal using d=2^m*5^n and, and you can make it an integer by multiplying it by 10000 (if the number must be five decimal places to the left. Therefore, you can choose of days 1, 2, 4, 5, 8, 10, 16, 20, or 25. (making that 9 days). - But they're not prime??? What am I missing here... I'm the slow one of the lot please help me out Nominator is a prime number>100 (on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100) and denominator is that day: {the average daily deposit}={sum of deposits up to and including that day}/{# of days}=p/d (so denominator d must not be a prime it should be of a type d=2^m*5^n. It's nominator p which is given to be a prime>100). NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book ; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 07:54 Bunuel wrote: Nominator is a prime number>100 (on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100) and denominator is that Senior Manager day: {the average daily deposit}={sum of deposits up to and including that day}/{# of days}=p/d. Joined: 30 Nov Oh so since we are limited to having five decimal places we were able to control that by choosing a using a denominator that gives us a terminating decimal and we multiply that by 2010 10000 to make that number greater than 100. And the denominator is the 30 days out of which we can choose the 9 days. Posts: 266 But if the Nominator is supposed to be a prime number then we are either supposed to choose from 2 and 5 and nothingelse. Please have patience with me. I just need to understand. Schools: UC _________________ Berkley, UCLA Thank you for your kudoses Everyone!!! Followers: 1 "It always seems impossible until its done." -Nelson Mandela Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 08:11 This post received Expert's post mariyea wrote: Bunuel wrote: Nominator is a prime number>100 (on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100) and denominator is that day: {the average daily deposit}={sum of deposits up to and including that day}/{# of days}=p/d. Oh so since we are limited to having five decimal places we were able to control that by choosing a using a denominator that gives us a terminating decimal and we multiply that by 10000 to make that number greater than 100. And the denominator is the 30 days out of which we can choose the 9 days. But if the Nominator is supposed to be a prime number then we are either supposed to choose from 2 and 5 and nothingelse. Please have patience with me. I just need to understand. I'm not sure understood your question. . In order this value (p/d) to be terminating decimal must be of a type . Because if the chosen day, , is NOT of a type Math Expert Joined: 02 Sep 2009 average=\frac{p}{d} Posts: 17321 will not be a terminating decimal and thus will have more than 5 decimal places. Followers: 2876 There are 9 such days: 1, 2, 4, 5, 8, 10, 16, 20, 25. For example Kudos [?]: 18400 [ average=\frac{prime}{1} 1] , given: 2350 , ..., all will be terminating decimals (and for ALL other values of d: 3, 6, 7, ... p/d will be recurring decimal thus will have infinite number of decimal places so more than 5). Also for all these 9 values of d average=p/d not only be terminating decimal but also will have fewer than 5 decimal places ( will have max # of decimal places which is 4). Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book ; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 11:38 Bunuel wrote: mariyea wrote: Bunuel wrote: Nominator is a prime number>100 (on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100) and denominator is that day: {the average daily deposit}={sum of deposits up to and including that day}/{# of days}=p/d. Oh so since we are limited to having five decimal places we were able to control that by choosing a using a denominator that gives us a terminating decimal and we multiply that by 10000 to make that number greater than 100. And the denominator is the 30 days out of which we can choose the 9 days. But if the Nominator is supposed to be a prime number then we are either supposed to choose from 2 and 5 and nothingelse. Please have patience with me. I just need to understand. I'm not sure understood your question. . In order this value (p/d) to be terminating decimal must be of a type . Because if the chosen day, mariyea d Senior Manager , is NOT of a type Joined: 30 Nov 2^n5^m Posts: 266 Schools: UC Berkley, UCLA will not be a terminating decimal and thus will have more than 5 decimal places. Followers: 1 There are 9 such days: 1, 2, 4, 5, 8, 10, 16, 20, 25. For example , ..., all will be terminating decimals (and for ALL other values of d: 3, 6, 7, ... p/d will be recurring decimal thus will have infinite number of decimal places so more than 5). Also for all these 9 values of d average=p/d not only be terminating decimal but also will have fewer than 5 decimal places ( will have max # of decimal places which is 4). Hope it's clear. I understand now Thank you so much! Thank you for your kudoses Everyone!!! "It always seems impossible until its done." -Nelson Mandela gmatclubot Re: MGMAT Challenge: Decimals on Deposit [#permalink] 28 Jan 2011, 11:38
{"url":"http://gmatclub.com/forum/every-day-a-certain-bank-calculates-its-average-daily-97456.html?fl=similar","timestamp":"2014-04-19T17:03:18Z","content_type":null,"content_length":"285660","record_id":"<urn:uuid:421d419a-172e-41fe-8770-e2704084f552>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami Shores, FL Trigonometry Tutor Find a Miami Shores, FL Trigonometry Tutor ...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive... 14 Subjects: including trigonometry, chemistry, calculus, geometry ...I have a slew of remedies and quick follow up procedures that will help improve the veracity and the tenacity of the practice. The GRE will be the next step for your professional development. With Preparation, Organization, and Practice we will ensure our success on the exam. 30 Subjects: including trigonometry, calculus, geometry, GRE ...I tutor all levels of math using different methods, including the Singapore Method. Among my current students' subjects are: university level: engineering math, differential calculus, vector calculus, linear algebra, statistics; high school: AP calculus, statistics, trigonometry, and the other b... 24 Subjects: including trigonometry, calculus, algebra 1, algebra 2 ...If you want your student to not only improve his or her grades, but also develop the skills and tricks needed to excel in every math class he or she takes in the future, you've found the right person for the job. Freshman year of high school was the year I first began tutoring. I was learning Algebra 2 and competing against students in the Tampa area as a Mathlete. 59 Subjects: including trigonometry, chemistry, writing, English ...People aren't smart or dumb, we all just learn differently. Everyone can be really good at math, they just need people to explain things to them in a way they understand. I pride myself in making analogies of complicated concepts to things in real life, as well as just breaking a hard problem down into many simpler problems. 23 Subjects: including trigonometry, calculus, GRE, ASVAB
{"url":"http://www.purplemath.com/Miami_Shores_FL_Trigonometry_tutors.php","timestamp":"2014-04-20T21:32:40Z","content_type":null,"content_length":"24709","record_id":"<urn:uuid:2fed45e4-ca16-4abb-8824-4c5cefc8317f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00585-ip-10-147-4-33.ec2.internal.warc.gz"}