On the importance of choosing a convenient basis
The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except in the rare occasions when we befriend the industry’s technical contingent. One
of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair or trees with realistic lighting or feathers that have gorgeous texture, for
movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think and it requires sophisticated new mathematics. Fascinating
conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she's
currently working on. She’s very smart, as can be evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this to an aspiring
quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!
Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my
girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner-workings of every
subsystem. However, when it comes to writing the overview section of one's thesis, you need to at least make a good faith attempt to understand the whole behemoth. Anyways, my girlfriend recently
asked if I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves
that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field
theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very
convenient ‘basis’, the Virasoro algebra.
So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster.
The animated film industry has traded the painstaking process of hand-animators drawing every single frame, for the almost equally time-consuming process of computer clusters generating every frame.
It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about
the ‘physical correctness of an image.’ One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall
on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so
that its glow can penetrate through the windows. They should also be able to choose from a drop down menu and tell the computer that a hand drawn lightbulb is a ‘light source.’ The computer should
then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the
computer that the hair they drew is 'hair', so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.
In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the
photons emanating from the different light sources, and then the second-order effects as these photons reflect, and then third-order effects, etc. It’s a tall order to make the scene look accurate to
the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest
example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this and then defining a ‘height function’, which outputs the ‘elevation’
of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the
depths of trenches over the sphere, with sea-level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of
pixels, maybe one per square meter for the earth (5*10^14 square meters gives approximately 2^49 pixels), and then associating an integer height value between [-2^15,2^15] to each pixel. This would
effectively digitize the height map of the earth, keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information that we need to
store, and then process. We’d have to keep track of the height value for each pixel, giving us approximately 2^49*2^16=2^65 bits=4 exabytes that we’d have to keep track of. And this is for an easy
static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.
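For concreteness, here is the pixel bookkeeping above in a few lines of Python (a sketch; Earth's surface area is taken as $5.1\times 10^{14}$ square meters, and a signed height in $[-2^{15},2^{15}]$ costs 16 bits per pixel — counting it that way gives roughly $2^{53}$ bits rather than the $2^{65}$ in the text, an imprecision a commenter points out below):

```python
import math

surface_m2 = 5.1e14        # Earth's surface area, at ~1 pixel per square meter
bits_per_pixel = 16        # a signed integer height in [-2^15, 2^15]

raw_bits = surface_m2 * bits_per_pixel
coeff_bits = 100 * 64      # 100 spherical-harmonic coefficients at 64-bit precision

print(math.log2(surface_m2))  # ~48.9, i.e. about 2^49 pixels
print(math.log2(raw_bits))    # ~52.9, i.e. about 2^53 bits for the raw height map
print(coeff_bits)             # 6400 bits, roughly the 2^13 quoted below
```

Either way you count, the spherical-harmonic representation is many orders of magnitude smaller than the raw pixel map.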
There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers $Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}$,
such that they satisfy a few special properties. They are orthogonal, meaning that if you multiply two different spherical harmonics together and then integrate over the sphere, then you get zero. If
you square one of the functions and then integrate over the sphere, you get a finite, nonzero value. This means that they are orthogonal functions. They also span the space of all height functions
that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics
which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a
natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height
functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which is a fast operation on a computer. It also means that if the breeze
slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said, using spherical harmonics allows us to efficiently
store this height map. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy, with 100
nonzero coefficients of the spherical harmonics to 64 bits of precision, 2^7*2^6= 2^13 << 2^65. Where does this cost savings come from? It comes from the fact that the spherical harmonics are a
convenient basis, which naturally encode the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar
elevation. Cliffs are what break this basis — but are what the wavelet basis was designed to handle.
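The two defining properties — orthogonality, and a finite nonzero norm — are easy to check numerically. Here is a small sketch, assuming only that the $m=0$ spherical harmonics reduce to Legendre polynomials in $\mu = \cos\theta$ (normalization constants are dropped, since all that matters is zero versus nonzero; the function names are ours):

```python
def Y1(mu):  # Y_1^0 up to a constant factor
    return mu

def Y2(mu):  # Y_2^0 up to a constant factor
    return 0.5 * (3 * mu ** 2 - 1)

def integrate(f, n=20000):
    # midpoint-rule integral of f over mu = cos(theta) in [-1, 1];
    # the sin(theta) d(theta) measure on the sphere becomes flat d(mu)
    h = 2.0 / n
    return sum(f(-1 + (i + 0.5) * h) for i in range(n)) * h

cross = integrate(lambda mu: Y1(mu) * Y2(mu))  # different harmonics: ~0
norm = integrate(lambda mu: Y1(mu) ** 2)       # same harmonic: 2/3, nonzero
```

Running this gives `cross` indistinguishable from zero and `norm` close to 2/3, exactly the "orthogonal, with finite nonzero norm" behavior described above.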
I’ve only described a couple bases in this post and I’ve neglected to mention some of the most famous examples! This includes the Fourier basis, which was designed to encode periodic signals, such as
music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post, and I’ve been using it heavily for the past few weeks. For the
sake of diversity, I'll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions "preserve angles." This
means that if you have two curves which intersect at a point with angle $\theta$, then after using an analytic function to map these curves to their image, also in the complex plane, then the angle
between the curves will still be $\theta.$ An especially convenient basis for the analytic functions in two-dimensions ($\{f: \mathbb{C} \to \mathbb{C}\}$, where $f(z) = \sum_{n=0}^{\infty} a_nz^n$)
is given by the set of functions $\{l_n = -z^{n+1}\partial_z\}$. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal
two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge” which has
deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in
two-dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is
infinite-dimensional, but that’s outside the scope of this post.) Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations
in long lines at Disneyland!
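A concrete footnote for the mathematically inclined: the $l_n$ above close under commutation, forming what is called the Witt algebra, $[l_m, l_n] = (m-n)l_{m+n}$. Its quantum cousin, the Virasoro algebra, obeys the same bracket plus a central term, $[L_m, L_n] = (m-n)L_{m+n} + \frac{c}{12}m(m^2-1)\delta_{m+n,0}$, and the coefficient $c$ is exactly the central charge mentioned above.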
8 thoughts on “On the importance of choosing a convenient basis”
1. I feel like you’re being a little inconsistent with your precision values here–giving 2^16 values for the heights versus 2^7 for the Spherical Harmonics. There’s still a profound savings in
information, though.
How does $z^2$ preserve angles? Wouldn’t it double an angle?
□ You’re absolutely right that I was imprecise in this calculation and subsequent explanation. In order to go deeper would require much more work. Guilty.
You hit upon the key loophole in this definition of conformal map. In 2d, the conformal maps are the analytic functions *at points where their derivative is nonzero!* $f(z) = z^2$ doesn’t
preserve angles at the origin, but infinitesimally, it does at other points.
2. The spherical harmonics are special cases of the rotation matrices “D” (whose properties are enumerated in Kurt Gottfried’s Quantum Mechanics, for example). But one computationally advantageous
property that is *not* in Gottfried, is that these rotation matrices (for arbitrary spin-j representations) take a purely algebraic form in the complex values (za,zb) = (Q1+iQ2,Q3+iQ4), where the
four Q* are real-valued quaternionic coordinates. Particularly for large-j systems, the computational speed-up in transformations (relative to Euler-angle variables) is many orders of magnitude.
□ Thanks for this insightful comment!
3. In geometry, another nice example of a convenient basis are the directions of extreme bending on a (hyper)surface, i.e., the principal curvature directions. In fact, since God was in a good mood
that day, these directions are always orthogonal (and can be expressed as the eigenvectors of a self-adjoint operator).
I just saw a nice talk by Ingrid Daubechies about sparsity in signal analysis and in computation. The theme there (as you might expect) is that many natural signals are extremely sparse, as long
as you can find a convenient basis. One question raised in the talk was: is this notion of “sparsity” too vague to be meaningful? My personal take is that this question of picking a convenient
basis is kind of like Kolmogorov complexity. In compressed sensing your signal gets more and more compressible as you specialize the dictionary (consider a basis which includes your signal as one
of the basis vectors!). With Kolmogorov complexity, your signal gets more and more compressible as you specialize the language (my favorite example is 64 kilobyte computer graphics demos, which
have become way more impressive ever since they’ve been written in the “language” of the graphics API, rather than a general-purpose programming language). In short: choosing a convenient basis
is important! :-)
Nice post, Shaun.
□ Keenan, did you get a pingback because of my link under “sophisticated new mathematics?” Anyways, your comment is close to home. The first paper John recommended to me was Charles Bennett’s
“Logical Depth and Physical Complexity”, which is intimately related to your points about Kolmogorov complexity. Thanks for the comment, but I wish you were still at Caltech to discuss these
things in person!
☆ A great many of Joseph Landsberg’s recent articles, and especially many of the topics covered in his new textbook Tensors: Geometry and Applications, can be appreciated as meditations
upon the general topic of finding efficient/accurate/compact/low-rank representations of transport systems—with “transport” understood broadly as encompassing both dynamical flow and
communication channels. Moreover, the (mostly algebraic) ideas of Landsberg’s text gain further power when they are blended with the (mostly geometric) ideas of texts like Mikio
Nakahara’s Geometry, Topology and Physics.
(Needless to say, there are many more great textbooks on algebraic geometry and dynamics than just the above two …)
For younger students, the point is that the 20th century’s great quantum textbooks, from Dirac’s Principles of Quantum Mechanics (1930) and Subrahmanyan Chandrasekhar’s An Introduction to
the Study of Stellar Structure (which has excellent descriptions of classical and quantum transport theory), extending through seven decades to Nielsen and Chuang's Quantum Computation
and Quantum Information can be systematically reconsolidated—often to great practical advantage—by transcribing the main results of 20th century physics into the natural mathematical
language of the 21st century’s emerging algebraic/symplectic/dynamic mathematical synthesis.
It’s a great time to be a young researcher! :)
MathGroup Archive: June 1996 [00038]
Re: Problem with condition in function definition !
• To: mathgroup at smc.vnet.net
• Subject: [mg4221] Re: [mg4149] Problem with condition in function definition !
• From: Allan Hayes <hay at haystack.demon.co.uk>
• Date: Tue, 18 Jun 1996 03:26:51 -0400
• Sender: owner-wri-mathgroup at wolfram.com
a_kowald at chemie.fu-berlin.de (Axel Kowald)
[mg4149] Problem with condition in function definition !
>I'd like to define a function with condition in the following way:
>bla[t_]/;(t<b) := {a,b,c}
>bla[0] gives {a,b,c}, fine. But now I want to clear b, Clear[b],
and >still get the same result. So really I want to type 1 instead
of b, >but for some reason can't do it.
I'm not sure why you can't type 1 instead of b; it works for me.
After entering
bla[t_]/;(t<b) := {a,b,c}
The information stored on bla and on b is
bla[t_] /; t < b := {a, b, c}
b is not evaluated because := (SetDelayed) does not evaluate the right-hand side or the condition when the definition is entered. Now give b a value:
b = 1
When we evaluate bla[0] the condition t<b becomes first 0<b, then
0<1, then True; so we get {a,1,c}
{a, 1, c}
The change of b to 1 was because of the information stored on b.
Now clear b
No rule is stored under b
So now, when bla[0] is evaluated the condition becomes 0<b and
stays at this value - this is not True, so the condition fails and
no rule is applied
You have the flexibility to define b to be whatever you want:
b = 3;
{a, 3, c}
If you don't want this flexibility then you can use
bla[t_]/;(t<1) := {a,b,c}
bla[t_]/;(t<1) := {a,1,c}
Allan Hayes
hay at haystack.demon.co.uk
Many functions are continuous at every real number x. These functions include (but are not limited to):
1. all polynomials (including lines)
2. e^x
3. sin(x) and cos(x)
It's helpful to see the continuity by graphing the functions. If we graph any of the above functions, we see a nice smooth graph that continues across the whole x-axis, with no jumps or holes. Try
it, it will bring you and your TI 83 closer together.
Many other functions are continuous everywhere that they're defined, including
1. ln(x) - This is continuous for all x > 0
2. tan(x) - This is continuous in between multiples of π/2
3. all rational functions that don't have common roots in the numerator and denominator - These will have vertical asymptotes at the roots of the denominator, and be continuous in between those asymptotes
Once we know a couple of functions that are continuous at a point c, we can build other functions that are continuous at c by combining the functions we already have. To do this, we use some
properties of limits.
If f and g are continuous at c, then
1. We can add or subtract:
(f + g) and (f - g) are continuous at c
2. We can multiply:
(fg) is continuous at c
3. We can divide functions:
(f/g) is continuous at c as long as g(c) ≠ 0
4. We can compose:
The composition (f ο g) is continuous at c.
All that's required here is that we have two functions continuous at c. It doesn't matter which is f and which is g. By switching f and g in our minds, we also find that (g - f) is continuous at c, g
ο f is continuous at c, etc.
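A crude numeric illustration of these rules (a finite-difference spot check near a point, not a proof; the function names and tolerances here are our own):

```python
import math

def looks_continuous_at(f, c, eps=1e-6, tol=1e-4):
    # spot check: values just left and right of c should be close to f(c)
    return abs(f(c + eps) - f(c)) < tol and abs(f(c - eps) - f(c)) < tol

f = lambda x: x + 1                 # continuous everywhere
g = math.exp                        # continuous everywhere

product = lambda x: f(x) * g(x)     # rule 2: (fg) is continuous at c
compose = lambda x: g(f(x))         # rule 4: (g o f) is continuous at c
step = lambda x: 0 if x < 0 else 1  # NOT continuous at 0

print(looks_continuous_at(product, 0.0))  # True
print(looks_continuous_at(compose, 0.0))  # True
print(looks_continuous_at(step, 0.0))     # False
```

The step function shows why the check fails exactly where a jump breaks continuity.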
Sample Problem
Let f(x) = x + 1 and g(x) = e^x. These functions are both continuous at every real number x. The following functions are also continuous at every real number x:
1. We can add or subtract:
(f + g)(x) = x + 1 + e^x (which is the same as (g + f)(x))
(f - g)(x) = (x + 1)- e^x
(g - f)(x) = e^x - (x + 1) = e^x - x - 1
2. We can multiply:
(fg)(x) = (x + 1)(e^x) = xe^x + e^x (which is the same as (gf)(x))
3. We can divide:
(f/g)(x) = (x + 1)/e^x (which is continuous everywhere, since e^x is never 0)
4. We can compose:
(f ο g)(x) = (e^x) + 1
(g ο f)(x) = e^(x + 1)
Also, the function (g/f)(x) = e^x/(x + 1) is continuous at every real number except x = -1.
Let f(x) = sin(x) and g(x) = cos(x). By combining f and g, create as many functions as possible that are continuous at every real number x.
SPOJ.com - Problem CAKE3
SPOJ Problem Set (classical)
2159. Delicious Cake
Problem code: CAKE3
Lenka has liked baking cakes since her childhood, when she learned to bake from her mom. She soon became a cake expert able to bake chocolate cakes, apple pies, muffins, cookies, cheese cakes,
tortes and many other cakes.
Recently, she has started her studies of math at Comenius University in Bratislava. In the first year she is taking combinatorics class. Today she is studying for the final exam. Since the brain
needs a lot of sugar to study math, she has baked, just for herself, her favorite, very delicious, strawberry cake.
The cake, still hot, is lying on an N×M inch sheet pan. Hungrily waiting for the cake to cool off Lenka came up with an interesting combinatorial question: How many different possibilities to cut the
cake are there so that every connected piece consists of some number of 1×1 inch unit squares?
Problem specification
The cake can be viewed as a grid consisting of N×M unit squares. We are allowed to cut the cake along the grid lines. As a result the cake splits into several connected pieces. (Two unit squares
remain connected if they share a side which was not cut.) How many different ways are there to cut the cake? We consider two cuttings of the cake to be the same if the resulting connected pieces of
both cuttings have the same shape and are at the same positions within the cake. In other words, we are only counting those cuttings where no cut leads between two unit squares that are in the same
connected piece.
The following picture illustrates all the 12 different possible ways how to cut a 2×2 inch cake:
Note that cutting, for example, as on following picture
is the same as not cutting at all.
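A cutting is determined by the set of connected pieces it produces, so for tiny grids the answer can be brute-forced by enumerating all set partitions of the cells and keeping those in which every block is edge-connected. This is hopeless under the real constraints (an efficient solution needs a connected-component profile DP), but it does reproduce the 12 above. A Python sketch:

```python
def count_cuttings(n, m):
    """Count partitions of an n x m grid into edge-connected pieces."""
    cells = [(r, c) for r in range(n) for c in range(m)]

    def connected(block):
        # flood fill over side-adjacent cells within the block
        block = set(block)
        start = next(iter(block))
        seen, stack = {start}, [start]
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in block and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        return seen == block

    def partitions(items):
        # all set partitions of a list, built one element at a time
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for part in partitions(rest):
            for i in range(len(part)):
                yield part[:i] + [part[i] + [first]] + part[i + 1:]
            yield part + [[first]]

    return sum(all(connected(b) for b in p) for p in partitions(cells))

print(count_cuttings(2, 2))  # 12, matching the picture described above
```

Of the 15 set partitions of the four cells, only the three containing a diagonal (disconnected) pair are rejected, leaving 12.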
Input specification
The first line of the input file contains an integer T specifying the number of test cases. Each test case is preceded by a blank line.
Each test case consists of a single line with two positive integers N and M – dimensions of the cake.
Output specification
For each test case output a line with a single positive integer – the number of different possibilities how to cut the cake.
For all the test cases, min(N,M)<=5, max(N,M)<=130.
Blue Mary's Note: The data has been enhanced on Feb.28, 2008 to avoid precalculated tables. Sorry to some users.
Added by: [Trichromatic] XilinX
Date: 2007-12-01
Time limit: 8s
Source limit: 50000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: All except: C99 strict ERL JS
Resource: IPSC 2007
Calculations for solar electrification 01 CODESO
ENERGY = POWER x HOURS
Wh = W x h
To calculate an electric solar system, first you have to determine your daily average CONSUMPTION OF ENERGY.
You can realize your calculations of your daily average consumtion of every equipment you want to use, the same way as in the following example, multipying the POWER with the HOURS of usage.
[Consumption tables for Examples A and B: columns are quantity (A), power in W, hours of use (C), and daily subtotal D = (A x C); row label "Lamps fluores."; the data rows were lost in extraction]
Example A: 1 energy-saving lamp for 4 hours
In this example you calculate the daily energy consumtion of 1 flourescent lamp used for 4 hours a day. You multiply (if you want to, in your computer spreadsheet) the quantity of equipment with its
power consumption in W (Watts) multiplied with the h hours of usage.
The daily energy consumption in this example is 60 Wh / day.
Example B: In this example you can see that if you connect double the power, your energy consumption is also double.
The daily energy consumption in this example is 120 Wh / day.
Example C: "Normal Country house"
In this example you can see that the energy consumption of 2 lamps connected for 4 hours is the same as 1 television used for 2 hours. The reason is that the TV's energy consumption is 4 times
bigger than the lamps'.
The daily energy consumption in this example is 280 Wh / day.
For this daily energy consumption, you need to install (more or less) one 110 W (Watt) photovoltaic solar panel, which will cost you approximately 600 - 900 $. (more details in the following pages).
2 lamps, 1 radio, 1 television
The photovoltaic solar panels capture and transform the sun's rays into electric energy. In any case, your daily consumption and the solar system's generation are measured in ENERGY, not just "POWER".
In the following calculations you have to remember that ENERGY is your POWER used over (multiplied by) a certain number of HOURS.
But you are not alone, we'll help you..........
Example D: "Bigger Country house"
2 lamps, 1 radio, 1 television, 1 refrigerator
We like to explain here, how to calculate the power of the cooler:
The power of a small cooler (1/4 HP) is approx. 250 W, but it is connected and disconnected in cycles (depending on usage, the temperature inside and outside the cooler, the model, etc.), normally 15
minutes working and 45 minutes in standby, which is the same as an average power of 60 W.
For this daily energy consumption, 1780 Wh/d, you need to install (more or less) 5 solar panels of 110 W (Watt), which will cost you approximately 3000 - 4500 $. (more details in the following pages).
In this example you can see, that the energy consumption of the cooler is 5 times bigger than the rest of consumers, your lamps, radio and TV.
The cooler has got nearly the same POWER as the TV, but the cooler is 24 hours a day conected, meanwhile the TV is used only 2 hours a day.
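The arithmetic in all four examples reduces to the same one-line formula. A sketch in Python (the 15 W lamp figure is inferred from Example A's 60 Wh over 4 hours; the cooler's 15-minutes-per-hour duty cycle on a 250 W compressor gives 62.5 W on average, which the text rounds to 60 W):

```python
def daily_wh(quantity, watts, hours_per_day):
    # ENERGY = POWER x HOURS, summed over identical appliances
    return quantity * watts * hours_per_day

# Example A: one 15 W energy-saving lamp used 4 hours a day
lamp = daily_wh(1, 15, 4)                   # 60 Wh/day

# Cooler: 250 W compressor, running 15 minutes out of every hour, all day
cooler_avg_watts = 250 * (15 / 60)          # 62.5 W average power
cooler = daily_wh(1, cooler_avg_watts, 24)  # 1500 Wh/day
```

Summing such subtotals over all appliances gives the daily consumption used to size the panel array.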
Casilla 17-21-759, Quito, Ecuador, Sudamérica
Magic squares are arrangements of numbers into squares, cubes, or hyper-cubes where the sum of each row and each column is identical. Additionally, the sum along the "principal diagonals" is also
equal to the same number. The magic square below was known in antiquity in China, Greece, and Egypt.
It is interesting to note that subtracting a constant from each cell in a magic square or hyper-cube does not change its essential property: namely, that the sum of any row, column, or principal
diagonal is identical. For example, the following magic square is obtained by subtracting 5 from each cell in the magic square above:
An interesting property about this square is that all rows, columns, and principal diagonals sum to 0. Also, it is easy to visually recognize symmetry in the square.
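The ancient 3×3 square described above is presumably the Lo Shu square; with it, both claims (every row, column, and principal diagonal sums to the same constant, and subtracting a constant from each cell preserves that property) can be checked in a few lines. A sketch, with our own function names:

```python
lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

def line_sums(square):
    # sums of every row, every column, and both principal diagonals
    n = len(square)
    sums = [sum(row) for row in square]
    sums += [sum(square[r][c] for r in range(n)) for c in range(n)]
    sums.append(sum(square[i][i] for i in range(n)))
    sums.append(sum(square[i][n - 1 - i] for i in range(n)))
    return sums

shifted = [[x - 5 for x in row] for row in lo_shu]

print(set(line_sums(lo_shu)))   # {15}: every line sums to the same constant
print(set(line_sums(shifted)))  # {0}: subtracting 5 from each cell preserves this
```

The shifted square also makes the symmetry easy to see: each cell is the negative of the cell opposite the center.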
This web site contains:
Symmetry - examples using "tuples"
Generator - program for generating magic squares and Hyper Cubes
Other Web Sites for Magic Squares and Cubes
Please see the Instructions Page before viewing the Magic Squares and Hyper Cubes Generator.
Postal address
P.O. Box 567; Niwot, CO 80544; USA
Electronic mail
General Information: Charlie.Kelly@broadband-inc.com
Software or web page matters:
Webmaster: webmaster@broadband-inc.com
Math Forum Discussions
Topic: p/q converted to Egyptian fractions in 2,000 BCE?
Replies: 3 Last Post: Mar 7, 2012 11:43 AM
p/q converted to Egyptian fractions in 2,000 BCE?
Posted: Mar 1, 1996 12:04 PM
Final comments are requested to iron out obvious historical and/or
mathematical errors contained in the following summary of Middle Kingdom
Egyptian fractions, sometimes called hieratic fractions.
As an introduction this paper proposes to historically combine the
EMLR and RMP 2/nth table as one document in a manner that shows that
Old Kingdom Horus-Eye fractions and Middle Kingdom hieratic fractions
were closely related, but dissimilar in several important respects. That is
to say, this one document proposal is new and should be openly discussed
as plausible, worth of a serious investigation, or clearly a risky
historical mixing of apples and oranges, as proven here on this discussion
One theme of this paper, if the EMLR and RMP can not be refuted as belonging
to the same Middle Kingdom hieratic fraction tradition, strongly suggests
that p/q, as rational numbers, were easily converted into exact unit
fraction series as early as 2,000 BCE, much as Horus-Eye fractions converted
p/q into its inexact decimal fraction series. The Moscow Papyrus, written
on or about 2,000 BCE, provides evidence, such as its writing of 2/5 by
the same algorithm that is found in the improved EMLR and RMP, that hieratic
fractions were not always computed by the Horus-Eye duplation multiplication
paradigm, as now accepted by the Egyptology community (Shute). A new and or
historically compatible/alternate Middle Kingdom hieratic paradigm is
proposed by:
1. EMLR, The Egyptian Mathematical Leather Roll, 26 equations (Gillings)
a. 1/p = 1/2p + 1/2p and 1/p = 1/3p + 1/3p + 1/3p
b. 1/p(1/2) = 1/p(1/3 1/6) and 1/pq(1/3 1/6)
c. 1/p(1) = 1/p(1/2 1/3 1/6) and 1/pq(1/2 1/3 1/6)
d. 1/p(2/q) where 2/q was taken from the RMP 2/nth table such as
(1) 1/p(2/7) = 1/p(1/4 + 1/28) and
(2) line 17's obvious error of 1/13 = 1/28 1/49 1/96, surely was not
(as Gillings suggested) an attempt to write -
(a) 1/13(1) = 1/13(1/2 1/3 1/6) = 1/26 1/39 1/78, using rule 1.c, but
(b) 1/13 = 1/3(3/13) = 1/3(1/8 1/13 1/52 1/104) = 1/24 1/39 1/156 1/312
since 2/13 = 1/8 1/52 1/104 from the RMP, as also hinted by
e. Lines 1, 2 and 3 from the EMLR which shows
(1) 1/8 = 1/10 1/40 = 1/10(1/1 1/4) = 1/10(5/4) = 1/pq = (1/p)(1/(q+1))(1/1 1/q)
(2) 1/4 = 1/5 1/20 = 1/5(1/1 1/4) = 1/5(5/4) =1/pq =1/(pq+1)(1/1 + 1/pq)
(3) 1/3 = 1/4 1/12 = 1/4(1/1 1/3) = 1/4(4/3) = 1/p = 1/(p+1)(1/1 + 1/p)
2. RMP, 2/nth table (Shute, Gillings, Chace and others)
a. 2/p - 1/a = (2a -p)/ap where a is a highly divisible number,
usually about 2/3rd of p; with 2a -p additively composed of divisors of a.
Note: Every 2/nth table 2/p unit fraction series follows this one rule
much as Hultsch, Bruins and several others have proposed over the
last 100 years.
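Rule 2.a is easy to mechanize. Below is a minimal sketch of the Hultsch-Bruins idea (the greedy divisor search here is our own simplification, not a claim about scribal practice): given an odd p and a chosen highly divisible a, write 2a - p as a sum of distinct divisors of a; each divisor d then contributes the unit fraction 1/(ap/d). With a = 8 it reproduces 2/13 = 1/8 + 1/52 + 1/104 quoted above, and with a = 4 it gives 2/7 = 1/4 + 1/28.

```python
def hultsch_bruins(p, a):
    """Decompose 2/p as 1/a plus 1/(a*p/d) over chosen divisors d of a.

    Returns the list of denominators, or None if 2*a - p cannot be
    written as a sum of distinct divisors of a.
    """
    divisors = [d for d in range(a, 0, -1) if a % d == 0]
    target = 2 * a - p

    def pick(remaining, divs):
        # greedy-with-backtracking search for distinct divisors
        # summing to `remaining`
        if remaining == 0:
            return []
        for i, d in enumerate(divs):
            if d <= remaining:
                rest = pick(remaining - d, divs[i + 1:])
                if rest is not None:
                    return [d] + rest
        return None

    parts = pick(target, divisors)
    if parts is None:
        return None
    # each divisor d of a contributes d/(a*p), i.e. the unit fraction 1/(a*p/d)
    return [a] + [a * p // d for d in parts]

print(hultsch_bruins(13, 8))  # [8, 52, 104]
print(hultsch_bruins(7, 4))   # [4, 28]
```

The identity being exploited is 2/p - 1/a = (2a - p)/(ap), exactly as stated above.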
b. Otto Neugebauer, the dominate scholar on which the current Egyptology
view of Egyptian fractions rests, reported only a muddled version of
the easy to read composite case:
(1) 2/pq = (1/p + 1/pq)2/(p + 1),
than computes all composite 2/nth table member except 2/35, 2/91, 2/95.
It is important to note that the only positive Egyptian fractions view
that Neugebauer included in his Exact Sciences in Antiquity analysis
is the acceptance of algorithm(1) with the form:
"2/n=1/3(1/n) + 5/3(1/n)... (with the comment)...
in this way, more and more cases of the table can be reached
and it appears to me there is little doubt that we have found in essence
the procedure which has lead to these rules of replacement of 2/n by
the sum of unit fraction." As a counter example to Neugebauer note that
(2) 2/pq =(1/p + 1/q)2/(p + q)
is clearly present in the 2/35 and 2/91 cases, as not seen by Neugebauer,
even though algorithm 2.b.(2) is simply read as the product of the
arithmetic mean (A) and harmonic mean (H), seen in the form 2/AH,
a common Ancient Near East pattern.
(3) Wrapping up the final exception, 2/95, is achieved by the trivial
form 2/95 = 2/19(1/5) where 2/19 was taken from equation 2.b.(1).
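Algorithm 2.b.(2) amounts to splitting 2/pq against the arithmetic mean A = (p + q)/2, since 1/(pA) + 1/(qA) = (p + q)/(pqA) = 2/pq. A two-line sketch covering the two exceptional entries named above (the helper name is ours):

```python
def mean_split(p, q):
    # 2/(p*q) = 1/(p*A) + 1/(q*A), where A = (p + q) / 2 is the arithmetic mean
    A = (p + q) // 2
    return [p * A, q * A]

print(mean_split(5, 7))   # [30, 42]:  2/35 = 1/30 + 1/42
print(mean_split(7, 13))  # [70, 130]: 2/91 = 1/70 + 1/130
```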
c. In conclusion, the EMLR and RMP, proposed here as one document, present an
interesting set of patterns. Two clues that tend to closely link the EMLR
and RMP, mathematically and historically, beyond Henry Rhind bringing
both back to England in 1855, can be summarized by:
(1) Three of the 26 EMLR lines contain RMP 2/nth table members. One error,
contained on line 17, is interesting in that a student may have been
confused in the writing of 1/13 as 1/3(3/13), as also suggested by lines
1: 1/8 = 1/10 + 1/40 = 1/10(1/1 + 1/4) = 1/10(5/4)
2: 1/4 = 1/5 + 1/20 = 1/5(1/1 + 1/4) = 1/5(5/4)
3: 1/3 = 1/4 + 1/12 = 1/4(1/1 + 1/3) = 1/4(4/3)
For some unknown reason the EMLR stated 1/13 = 1/7(3/7) = 3/49 rather
than the correct value 3/39. Considering the several EMLR rules available,
the student may simply have been confused.
(2) One RMP 2/nth table line, 2/101, contains an EMLR-type algorithm, as
noted by:
2/101 = 1/p(1/1 + 1/2 + 1/3 + 1/6), with p = 101.
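With p = 101 the line above expands to 1/101 + 1/202 + 1/303 + 1/606, which a short exact-arithmetic check (my own illustration) confirms:

```python
from fractions import Fraction

# 2/101 = 1/101 * (1/1 + 1/2 + 1/3 + 1/6)
terms = [Fraction(1, 101 * k) for k in (1, 2, 3, 6)]
assert sum(terms) == Fraction(2, 101)
print([t.denominator for t in terms])  # -> [101, 202, 303, 606]
```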
I thank one and all for the many supportive comments posted to MAA and
to my private email box. Closing out this phase of my investigation has
not been easy. The subtle aspects of Egyptian fractions that have long
confused ancient students and modern scholars alike, including myself on
many points, may now be clearing up.
Boyer, C. B., 1968, A History of Mathematics, John Wiley; 1985 reprint,
Princeton University Press.
Bruckheimer, M., and Salomon, Y., "The RMP Unit Fraction System,"
Historia Mathematica, Nov. 1977.
Chace, A. B., 1927, The Rhind Mathematical Papyrus, National Council of
Teachers of Mathematics, 1979 reprint.
Gillings, Richard J., 1972, Mathematics in the Time of the Pharaohs,
Dover Publications, 1982 reprint.
Klee, Victor, and Wagon, Stan, 1991, Old and New Unsolved Problems
in Plane Geometry and Number Theory, Mathematical Association of
America, Dolciani Mathematical Expositions No. 11.
Knorr, Wilbur, 1982, "Techniques of Fractions in Ancient Egypt
and Greece," Historia Mathematica, HM 9.
Neugebauer, Otto, 1962, The Exact Sciences in Antiquity, Harper and Row.
Ore, Oystein, 1948, Number Theory and Its History, McGraw-Hill
(Dover reprint available).
Robins, Gay, and Shute, Charles, The Rhind Mathematical Papyrus, Dover
Publications (a reprint of a 1987 British Museum publication).
Milo Gardner
Sacramento, CA
March 1, 1996
As background information concerning my cryptanalytic qualifications to
discuss the mathematical side of Egyptian fractions, it may be important to
note that I worked for several years as a cryptanalytical specialist.
My daily and ongoing assignments were twofold:
1. To sort through encoded electronic messages and place practice
traffic in one pile and language text traffic in another.
2. To attempt to predict the next day's practice traffic, which
my partner(s) and I achieved from time to time, and secondarily
process ciphered messages using the latest cryptanalysis
techniques, frequently obtaining readable messages.
Friedman, William F., Military Cryptanalysis, Parts I, II, III and IV,
reprints available from Aegean Park Press, PO Box 2837, Laguna Hills, CA
92654, (800) 736-3587, USA and Canada, FAX (714) 586-8269
Friedman, William F. and Callimahos, Lambros, Military Cryptanalytics,
Part I, Vol. 1, 2 and Part II, Vol. 1, 2. Aegean Park Press.
Friedman, William F., Elementary Military Cryptography, Aegean Park Press.
Kahn, David, The Codebreakers, a non-technical book that went through
several editions, the last one in 1974. Kahn discusses Friedman
and modern cryptanalysis as well as Roman and Egyptian ciphers.
Date Subject Author
3/1/96 p/q converted to Egyptian fractions in 2,000 BCE? Milo Gardner
3/6/12 Re: p/q converted to Egyptian fractions in 2,000 BCE? Milo Gardner
3/6/12 Re: p/q converted to Egyptian fractions in 2,000 BCE? Milo Gardner
3/7/12 Re: p/q converted to Egyptian fractions in 2,000 BCE? Milo Gardner | {"url":"http://mathforum.org/kb/thread.jspa?threadID=435094","timestamp":"2014-04-18T18:46:13Z","content_type":null,"content_length":"27946","record_id":"<urn:uuid:92bc1fea-5773-4b65-8736-c464cc5cd22e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
OCR for page 91
Modern Interdisciplinary University Statistics Education: Proceedings of a Symposium
Modernizing Graduate Programs in Statistics—Case Study
Prem K. Goel, Ohio State University

First, let me thank CATS for inviting me. About two years ago this topic arose at the summer CATS meeting, and there was a lot of enthusiasm about doing something in this direction. Recognizing that change happens very
for inviting me. About two years ago this topic arose at the summer CATS meeting, and there was a lot of enthusiasm about doing something in this direction. Recognizing that change happens very
slowly, if we start working toward it now, maybe the curriculum changes will be in place in most programs by the year 2000. I will try to avoid repetition of ideas, although many of my perspectives
have much in common with those of previous speakers. I will concentrate on graduate curriculum, not because undergraduate programs are unimportant, nor that service courses are not important for the
future of statistics, but because when I thought about this symposium in those initial CATS deliberations, I was mainly thinking about the future graduate curricula in statistics. Later, I will say
something about what we have done in terms of service courses at Ohio State University (OSU). I will not give details, but rather will describe what we are doing in general. Earlier this year while
visiting Cleveland, I read an interesting story in USA Today about total quality management (TQM) awards given by Rochester Institute of Technology for the best TQM project in industry. The story
described a paper mill company in southern Louisiana that won the award. Their motto is "In God We Trust; All Others Must Have Data." They developed this motto because they had previously had many
problems in their plant, and realized that the only way they could learn about what was happening in the plant was by going out in the woods and collecting data. The perceptions of the managers were
nowhere close to what was actually happening in the plant and the production process. That is where the motto came from. But as far as statisticians are concerned, the problem is, What if you do not
know how to use the data? The reason we ought to be doing ever more interdisciplinary education in our programs is, as many people here have already noted, that our science needs the nourishment
provided by interactions with other sciences. The development of statistical science is driven in large part by problems arising in other substantive disciplines. Today's customers, industry and the
government agencies, demand that our master's and PhD degree holders be able to work well in research teams. That means we have to design our curricula to meet such demands and instill such
qualities. The three main users of our product, academia, the government, and industry, are all demanding that our students become super-statisticians. All want individuals who have strong
theoretical knowledge in statistics and probability; who have strong statistical computing and simulation skills; who know how to model substantive complex problems; and who are adept at using a wide
spectrum of the methods in the ever-growing data analytic tool box in statistics; who have strong communication skills; who are nimble problem solvers; who are able to formulate real problems in
interdisciplinary projects and not lose sight of those problems; who are well trained in the art and science of consulting; and many more things. We are talking about producing a student today who
knows everything about everything, and that is not easy to do.
It is clear to me that what we have been doing in statistics education up to today has to change because the world
has changed around us. It is no longer the environment it was 25 years ago when the last major curricular changes took place in statistics. I am not as sure as to how we should change. We at Ohio
State are still experimenting and learning what works and what does not, as are many other institutions doing similar things. The need for change is equally true for undergraduate education, and not
only for graduate education. Changes are also needed in undergraduate service education, but it is much easier to reinvent or redesign service courses partly because there is little vested interest
in the existing service courses. That is not true, however, at the graduate level, where some people have been teaching courses for the last 20 years. If you now tell them, ''You have to change your
course because it is not relevant anymore," many of your colleagues may balk because it is in their nature to be conservative. This imbues the system with an opposing inertia that simply cannot be
changed overnight. Change does not come easily and does not happen by itself. As Ed Rothman said, we must take the attitude that if we can change even ourselves, perhaps that would be a contribution
to the profession. Hopefully, everybody will over time re-examine and implement the changes that ought to be made in statistics teaching and research. Most statistics departments are not as fortunate
as is Carnegie Mellon's. At CMU there is a fairly homogeneous group of people, and they have found a common purpose. Many older departments have people with very established ideas. For a department
chair or a curriculum committee chair to work with those people and gain their support to look at curriculum very carefully, redesign some courses, drop some courses, and create some new courses, is
a fairly tough job. It is especially so when the reward system does not reinforce doing that. Basically, the reward system is predicated on how many papers one published last year. But if that is all
that is rewarded, faculty members do not want to put much time or effort into teaching, partially because there is not much benefit in it for them. As departments, we have to think about how we can
change our reward structure so that teaching becomes an important activity on a day-to-day basis. Such changes will happen because there is demand by the customers for us to change. The ideas
presented by the other symposium speakers merit serious discussion. What is important is for us to go back to our own departments and start talking about these things with other colleagues, and find
ways to assimilate some of these ideas into our own programs. Again it will be a very slow process, and will not be easy. However, if we can make one convert at a time, perhaps we can succeed. For
the last four to six years, the OSU statistics faculty have been discussing what is wrong with what we do in the classroom, how our students learn or do not learn, and whether or not they are able to
put their knowledge together in solving problems. The latter is one area in which we notice particular difficulty. Sometimes after a student has taken the mathematical statistics sequence, the
probability sequence, and all the applied courses, and is then given a problem in a consulting setting to solve, he or she does not know what to do with it. Some of our faculty believe that may be
caused by the way we teach the courses. We teach all these methods courses in isolation and do not build one course on the other. That may be why the students are not learning how to synthesize.
Many trained statisticians act more like technicians than scientists, and it may be in part because various
methods courses are taught in isolation from each other. Phillip Ross remarked that if you have a hammer, you view everything as a nail; we may have to change some of our teaching to address and
counter this tendency. Edward Rothman described statistics as technology and not as science or an art. I certainly do not agree with that, even though Fisher may have said it, because those may have
been different times. If you want to be a full partner in the future development of science, you have to be a scientist today, and not just a technologist, because scientists will not accept you as a
full partner if you are merely going to be concerned with a little piece of the problem. To work as full partners with scientists, we have to act like scientists and learn like scientists, too. I
believe that our students do not learn how to solve substantive problems, but are essentially looking for a tool that is already in their tool box to fit to a problem. That may be why there is so
much emphasis on linear models when many real-world problems are not necessarily close to linear models. Even when dealing with nonlinear models we seem to have missed the boat completely, because we
have simply tried to look at the series expansion of the nonlinear expression, and tried to fit a linear model; that is, basically, we do not focus on what actually is taking place. This context of
solving substantive problems is one in which things need to change in our teaching. To address this problem, the OSU faculty agreed that something needed to be done in, as a start, at least the PhD
curriculum. The master's program got second priority because its functioning was viewed as acceptable. It was decided that a new course would be developed. In so doing, we have made substantive
changes over the last four years in the Ohio State statistics PhD program. First, many graduate students are needed to teach in their very first quarter at OSU because the new resources recently
obtained from the university for teaching the general-curriculum statistics courses require a large number of teaching assistants. Consequently, we support about 75 teaching assistants in our
program. These people are needed in the classes in the first quarter. With the help of the administration, we arranged for these people to receive training in the summer quarter. We asked them in the
first year to come to the university in the summer quarter. We support them for this with funds provided by the university. In that summer quarter, they learn how to teach laboratory courses, how to
use the Data Desk or Minitab or whatever other tools or concepts will be discussed in the service courses. They also take what is basically a refresher course in mathematics, because many of them
forget calculus. As undergraduates, they take the calculus in their first or second year and thereafter do not use calculus (in contrast to engineering students). So by the time they arrive in their
first year of the graduate program, they have forgotten their calculus and do not know such things as how to transform variables, even with only two variables. When they come, we ask them what
courses they have taken, and what concepts they remember, and then we try to fill the gaps in that summer quarter so that, on the one hand, they are ready to take on the teaching assignments and, on
the other hand, they are also ready for the first mathematical statistics course that is given in the fall quarter. The change in our program is not in terms of the requirements, but rather in how
the methods courses are presented in the first year. The students are still required to take a three-quarter sequence in a probability-mathematical statistics course and a three-quarter sequence in
the first year in real analysis. That is sometimes deadly, because by the time they have taken
the third course in real analysis, they might decide not to continue the program because they thought that they
had not come to learn real analysis. However, they have to go through the program, and sometimes they are taking this course with all the math majors, too. Sometimes they cannot compete as well, or,
not knowing that they will need it in the second or third year, they lose sight of why they are going through the real analysis sequence; they may decide to convert to a master's program and go out
to work. Sometimes it is easier to get a master's degree and go get a better job than to put in six years for a PhD program and not know what will happen afterward; with today's job market in
academia, we lose some of the students to a master's program, and, at least in the short term, they think they are better off. The biggest change in our program was the following: In the PhD program,
students used to take all the methods courses in the first year, such things as regression, analysis of variance, design of experiments, time-series analysis — you can as well make the list.
Second-level courses were for master's and PhD statistics students, and also for some of the PhD students from engineering and other departments. We decided to introduce a first-year sequence called
Introduction to Statistical Practice. That is a three-quarter sequence of courses in which students learn about these methods through real problems. Two years prior to the introduction of the course,
before it was implemented, we asked the faculty which of them would like to help team teach if there were to be a course like that. Three or four people volunteered. We then gave them some release
time from their teaching duties in those two years prior to the introduction of the course to develop materials, to look for data sources and find real data, to think about how to structure the
course, and things of that sort. This was to avoid having all the students in year one being guinea pigs. The approach is to teach substantive problem solving through various large and small data
assays. The class format is one of less lecture and more open discussion. Groups of five students are formed who will work together during the whole quarter. More attention is paid to asking
questions and identifying what the real questions are, than to just solving the problems. All the things we have been hearing the last two days — formulating the problems, identifying what the
questions are, raising questions about the data, finding out what the data are about, determining how the measurements came about — are done in this course. This permits the first discussions in this
sequence to address the scientific method, team work on formulation of problems, the art of raising questions, and the use of computing tools. Once they have looked at a problem and the questions
have been formulated, then the instructors say, "If you want to solve this problem, how do you answer the questions raised in this discussion using appropriate tools?" That is how we introduced the
methods in this statistical practice course. Methods are not done in great detail in the course because there are too many things to discuss: about 12 real problems had been picked through which they
were trying to introduce methods. Consequently, these students do not necessarily learn all about regression modeling, or all about detecting outliers, because they cannot go into that kind of detail
in a course that is to give them an overview of the discipline and an overview of how to solve problems. The idea is that once they have taken this course in year one, and have also gone through the
mathematical statistics sequence in year one and have taken the probability sequence in year two, they will start taking some of the advanced topics courses in year two and year three in which they
can choose, if they want, to learn multivariate or linear models or design of
experiments or nonparametrics. They can choose which topics they want to learn more about. Hopefully, in this
first-year Introduction to Statistical Practice course, they will develop enough curiosity about learning some of the things on their own, or will learn some of these things when they are taking a
consulting service course, which is also part of the program. Every student must take two quarters of a consulting course. The first quarter presented in the classroom includes discussion of the art
of consulting, as well as how to deal with the client, report writing, and communications. In the second quarter each student gets involved in a real consulting project that is substantive in nature;
students work for almost two quarters with one client on a project. When you run a consulting service, you cannot tell clients their problems are not important. Each person's problem is important to
him or her and therefore should be to you, too. Some of the problems that come to the consulting service are of the kind that can be handled in a couple of meetings; in such cases, we hope that the
next time around, if clients have had a good experience from getting this help from us, they will return with a much more substantive project. That is how students also learn to get involved in both
short-term and long-term projects through the consulting service. That is the small innovation we have tried to implement in our graduate statistics program at OSU. The statistical practice course
first covers problem formulation, and then techniques to solve problems are introduced through that process. In this give-and-take when the students are tackling these problems every day, five people
work with each other. They return to the same techniques several times, because some of the things are used in more than one problem. The focus is taking a synthetic approach to scientific problem
solving, not merely introducing methods or a list of methods in the course. Sometimes we have to team teach these courses because one or two individuals may not be able to cover the spectrum of
statistics in an advanced applied course. I like this idea of team teaching. I think that bringing people together to team teach these new courses is one way to induce some of the desired changes.
Currently, two people who are experienced in real-world data analysis are teaching the course for the first time. They learned a few things this first year. First, first-year graduate students often
want everything to be very structured, perhaps partly because it is such a big transition from an undergraduate program. They want to know what the course is going to cover and how grades will be
determined. Grading is very important to them, and if you tell them, "A grade is not important; what is important is the material you should learn from the course," that does not resonate very well.
That is especially so if they think you are trying to be vague about the grading system. This was in part the reason that in the first quarter of the course, many of the students were not happy with
it. The course material focused on four problems; at first they merely discussed the problems, and students would say, "I have not learned any method yet; what is going on?" It did not entirely
satisfy them to be told, "You will learn methods later; we first have to discuss how to ask the right questions before we address how to solve those problems." This made the first quarter very
difficult. Students were complaining that they were not learning anything that they had expected in terms of, say, regression modeling. In reply, they were told that the first quarter introduces
computing and discusses how to raise questions, and the second quarter presents how to solve those problems. We have made a commitment to the two faculty members that they will be teaching the course
for the next two years and thus
learning from their own experience. They will be talking to other faculty members about it. At the end of two
years, they will hopefully have gained experience and understanding that can be further discussed as a case study. We at OSU consider the consulting service not just as a source of revenue for the
department but also as a source for problems. Students who are assigned to substantive projects sometimes also end up getting a thesis problem out of the experience. There were about seven or eight
theses in the last four years that came out of projects out in the field, where faculty members were involved on a long-term basis. These are instances of students getting involved enough to extract
a problem that was good enough for a PhD thesis, and thus contribute to methodology development. I think it is very important to have a consulting service in any statistics department's program.
There are often difficulties in running a statistical consulting service. Not every faculty member is supportive of the effort because, for instance, some may view consulting with clients as beneath
their dignity. That happens in many of the bigger departments. What the department's purpose is may not be agreed on by all the faculty, because that depends on what the faculty think is important
and on what they perceive as the basis on which they will be reviewed for promotion and tenure. I believe that the six-year time line for an assistant professor's getting promotion and tenure is
sometimes detrimental to the interdisciplinary nature of statistical work. It can take several years for a person to become productively involved in some of the big problems; if other department
members do not see any publications from such a person for two years, it can raise concerns over what is happening with that person. I think our job is not only to convince the deans and the provost,
but also to convince ourselves and our colleagues that patience is needed. If faculty do not publish something in year one or year two, but we know they are very deeply involved in something that is
going to produce results three years down the line, we have to keep that in mind and take it into account, or else we will discourage that kind of involvement for our junior faculty. Junior faculty
sometimes need to be protected against that, because if they invest their time doing interdisciplinary work and get "passed over" for not publishing enough papers, it will be a major problem. We also
try to place many students in summer internships in industry and government and have had students go to several companies in Cleveland, Cincinnati, and Columbus. Students have been placed in the
Bureau of Census and Bureau of Labor Statistics, and in other government agencies. When they return, they are very pleased because they have learned many things they would not have learned at the
university. Internships are very important. I want to appeal to our industrial colleagues to offer more opportunities for graduate students to be interns, and perhaps also offer faculty exchange
programs. If those opportunities are not provided, despite all the talk we will not achieve what we want to achieve in interdisciplinary education. Industry must also be a full partner in the
process, because otherwise it will not be possible to implement these interdisciplinary facets. Before I close, I must acknowledge that some problems exist in the basic statistics educational
curriculum at Ohio State. One is that not all faculty members are convinced of the effectiveness of this modern interdisciplinary approach. The problem is that some people do not ever want to change;
they simply have developed their habits and that is it. Consequently, any attempt to change that faces opposition. Such people do not believe in this interdisciplinary approach because they never
learned it that way yet can solve problems now, and so they
wonder why students now cannot do the same thing that they did when they were graduate students and then develop
over time. However, as Jon Kettenring said yesterday, industry does not have the time or resources to provide on-the-job training, and in previous times there was not this kind of pressure on the
outgoing graduates to know everything when they get out. Things have changed. With so many more people in the statistics profession, perhaps supply is overtaking demand, and we will therefore have to
be more cognizant about what our students should know before they graduate. Another problem is the vested interest individuals have in the courses they have been teaching for the last 20 years. It is
sometimes a big problem because if they are suddenly told, "By the way, we will not offer this course any more because it is not important," their reply might be in effect, "You cannot do that to me;
I am a senior faculty member in the department, and you had better take care of me or I will create trouble for you." That is a somewhat facetious possible reaction, but there will be problems of
that kind arising when you want to change curriculum. One has to work with everybody, decide on what is important, perhaps put some priorities on certain aspects, and try to change the course so that
it incorporates the new higher-priority things, rather than simply cutting the course. It is important to avoid hurting people's egos. One problem with the Introduction to Statistical Practice course
is that no book is available yet. We intended to use two or three books as a collection to be read by students, but having students buy three books for a course is expensive these days. So we have
put a lot of data sets on our own departmental machine that students have access to, and for now we provide a lot of handouts for this course. The hope is that after three years, the professors who
developed the course will have a course format that can be made available at least in manuscript form for other people to use. There is also a problem in that some of the students do not seem to be
comfortable with a very unstructured kind of course because they are accustomed to a more traditional, series-mode teaching style that presents a problem, the solution, the next problem, and the next
solution, rather than an appeal to try to learn how to formulate problems first. Making that change is not an easy thing for some students to do, and they do complain about it. Still, the biggest
obstacle is that a substantial number of faculty members do not want to work on particular private projects. They feel that theory is more important than are applications, and question why they
should "waste their time" working with data. That, I would venture to guess, is a problem in many big, established statistics departments. Reiterating, to solve that problem, we will have to think
about the reward structures. At Ohio State faculty are informed that those who help in the consulting projects have something in it for them. Those projects are usually funded ones, and when the
money comes to the consulting service it basically is money that can be used for whatever educational purposes we want. It is not taken back by the administration, and so it is a department resource
that can be used to buy some books or provide an extra trip to go to a meeting for those who help out in the consulting. We hope that through these kinds of little inducements we can attract more
faculty. Some of them are very willing to help on any problem; others simply do not care about consulting. We believe that, over time, through the recruitment of more and younger faculty, this
problem of not wanting to participate will diminish on its own. | {"url":"http://www.nap.edu/openbook.php?record_id=2355&page=91","timestamp":"2014-04-21T02:49:04Z","content_type":null,"content_length":"67146","record_id":"<urn:uuid:9f29def1-eccf-4097-bade-0edde5c2f94b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus Tutors
Glendale, CA 91206
Experienced and Successful Tutor and Former Teacher
...In other words, one begins to finally understand, fundamentally, why the natural world is described mathematically, and not some other way. The techniques of both Calculus
and Linear Algebra are brought to bear on problems of dynamics -- that is how things move...
Offering 10+ subjects including calculus | {"url":"http://www.wyzant.com/Pasadena_CA_Calculus_tutors.aspx","timestamp":"2014-04-23T22:36:47Z","content_type":null,"content_length":"59735","record_id":"<urn:uuid:f1bedfda-5ca2-4d62-a966-6cb5bc9cec5e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
cs314 p. 58
The intersection (written ∩) of two sets is the set of elements that are members of both sets.
(intersection '(a b c) '(a c e)) -> (c a)
public static Cons intersection (Cons x, Cons y) {
    if ( x == null )
        return null;
    else if ( member(first(x), y) != null )
        return cons(first(x),
                    intersection(rest(x), y));
    else return intersection(rest(x), y); }
(defun intersection (x y)
  (if (null x)
      '()
      (if (member (first x) y)
          (cons (first x)
                (intersection (rest x) y))
          (intersection (rest x) y) ) ) )
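For contrast with the linear-scan versions above, here is a hash-based sketch (not from the lecture notes) that runs in O(m + n) on average:

```python
def intersection(x, y):
    """Intersect two lists in average-case O(m + n):
    build a hash set of y once, then scan x."""
    members = set(y)                       # O(n) to build
    return [e for e in x if e in members]  # O(m) to scan

print(intersection(['a', 'b', 'c'], ['a', 'c', 'e']))  # → ['a', 'c']
```

Note that the result order follows the first argument, unlike the trace shown above.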
If the sizes of the input lists are m and n, the time required is O(m · n). That is not very good; this version of intersection will only be acceptable for small lists. | {"url":"http://www.cs.utexas.edu/~novak/cs31458.html","timestamp":"2014-04-19T05:34:05Z","content_type":null,"content_length":"1577","record_id":"<urn:uuid:5cac6710-29d2-476f-8020-ff0def3e8520>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cedars Trigonometry Tutor
Find a Cedars Trigonometry Tutor
...I promise you that I will be committed, accessible and caring. We will work together to customize a strategy for your student and execute that plan as a team. Cancel one or all sessions with
notice, and there is never a charge.Algebra 1 is the foundation higher math course upon which all-future math courses are built.
62 Subjects: including trigonometry, reading, English, calculus
(( HIGHEST RATINGS!!! )) PARENTS: Bring the full weight of a PhD, as tutor, and student advocate. Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help
14 Subjects: including trigonometry, calculus, physics, geometry
Latoya graduated from the University of Pittsburgh in December of 2007 with a Bachelor's degree in Psychology and Medicine and a minor in Chemistry. Currently, she is pursuing her Master's in
Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities.
13 Subjects: including trigonometry, chemistry, geometry, biology
...I have primarily focused in areas such as reading, math, and test prep. However, I am proficient in teaching elementary social studies and science as well. I am more than comfortable teaching
content area subjects throughout upper middle school and high school.
41 Subjects: including trigonometry, English, Spanish, reading
...I look forward to tutoring your chemistry student! I obtained a history minor while at the University of Delaware. I also took the AP exam in European history in high school and scored a 4 on
the exam.
14 Subjects: including trigonometry, chemistry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/cedars_pa_trigonometry_tutors.php","timestamp":"2014-04-20T23:56:33Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:82b9ff2c-60f1-438f-8645-fd82e7858b7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numerical and analytical study of an asymptotic equation for deformation of vortex lattices
Seminar Room 1, Newton Institute
It is known that when two-dimensional flows are subject to a suitable background rotation, formation of vortex lattices are observed. We can make use of critical points of the vorticity field and
their connectivity (so-called, surface networks) to study reconnection of vorticity contours in 2D turbulence. In this talk we begin by noting how this method applies to the study of formation of
vortex lattices. We then study a coarse-grained, asymptotic equation which describes deformation of vortex lattices, derived by Smirnov and Chukbar, Sov. Phys. JETP vol. 93, 126-135 (2001). It reads $\phi_t = \phi_{xx}\phi_{yy} - \phi_{xy}^2$, where $\phi$ denotes the displacement of vortex locations. This equation is particularly valid for geostrophic Bessel vortices with a screened interaction.
Numerical results are reported which indicate an ill-posed nature of the time evolution. Self-similar blow-up solutions were already given by those authors, which have an infinite total energy. We
ask whether finite-time blow-up can take place developing from smooth initial data with a finite energy. More general self-similar blow-up solutions are sought, but all are found to have infinite
total energy. Finally, remarks are made in connection with the Tkachenko-type lattice.
| {"url":"http://www.newton.ac.uk/programmes/TOD/seminars/2012072317101.html","timestamp":"2014-04-18T08:08:18Z","content_type":null,"content_length":"6889","record_id":"<urn:uuid:7165471a-5e4e-4973-9005-fb4db1b0aac6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Science/Math web directory
Ajdee Web Directory - online since January 2004
Algebra (2) Geometry (2)
Calculus (2) Statistics (2)
Web Sites
American Mathematical Society - http://www.ams.org
Promoting mathematical education and research through scholarship programs, publications, conferences, surveys, employment services, resources, locating research and funding.
DoYourMath.com - http://www.doyourmath.com
Interactive learning and practice, math tests, math bookstore, a math expert, puzzles and cool math for all ages.
Math Forum - http://mathforum.org/
Mathematics and mathematics education center.
MathWorld - http://mathworld.wolfram.com/
Mathematical glossary of terms and mathematical material form undergraduate to research level.
White Group Mathematics - http://www.whitegroupmaths.com
A level H2 maths learning, with sections including detailed advice and recommendations, and fully worked problems. Higher level math, with early college material is also available.
Copyright © 2004 - 2014 Ajdee.com web directory All rights reserved. | {"url":"http://www.ajdee.com/science/math/","timestamp":"2014-04-18T05:30:55Z","content_type":null,"content_length":"9764","record_id":"<urn:uuid:87a4701d-edf7-4323-b4d2-f3766e501799>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Astronomy Tea Talks at Cal
21 June 2010 Orly Gnat (Caltech)
"Non-Equilibrium Ionization Processes in Metal-Ion Absorbers"
In this talk I will discuss several processes that give rise to
non-equilibrium ionization in metal-ion absorbers. These include
radiative cooling, fast shock waves, and conductive evaporation of
warm clouds. In each case, I will demonstrate the impact that
departures from equilibrium ionization have on the absorption line
signatures. I will first describe computations of the equilibrium and
non-equilibrium ionization states and cooling efficiencies in
radiatively cooling gas. I will then discuss models of the
non-equilibrium cooling column densities associated with fast
radiative shocks, including the effect of the shock self-radiation on
the "downstream" ionization states. In these models I
self-consistently follow the time-dependent dynamics, ionization,
cooling and radiative transfer equations in one- dimensional stable
shocks. I will describe how the observational signatures depend on the
controlling parameters, including the shock velocity, gas metallicity,
magnetic field, and shock age. Finally, I will present recent
computations of thermally conductive interface layers, that may
surround evaporating clouds embedded in a hot medium. These models
include photoionization by an external radiation field. I will
describe how departures from equilibrium affect the conditions for
which self-consistent evaporating solutions exist, and the metal-ion
columns produced in the evaporating layers. | {"url":"http://www.astro.caltech.edu/~viero/tea/abstracts/2010_gnat.php","timestamp":"2014-04-17T00:48:37Z","content_type":null,"content_length":"8638","record_id":"<urn:uuid:c012c765-c0fa-415d-b0f4-4e666264e42e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Last of the Careless Men
Rakudo has been fixed, and the code I've been trying to get work for a month works beautifully now! If I understand the fix properly, the problem was in Rat addition's call to
. The code was very dumb, so it always tried to set the denominator of the new Rat to the product of the denominators of the two numbers being added. Rakudo's Ints are really Int32 right now (more or
less), so if that product was equal to or greater than
, it was autoconverted to Num. But the
which takes positional arguments takes Ints, so it wouldn't dispatch to that. Instead it would try to dispatch to the default autogenerated named argument form of
. But that only takes the implicit
parameter, and we were sending it
self, Int, Num
-- thus the "too many positional arguments: 3 passed, 1 expected" error!
Rakudo now autoconverts this case (
424/61731 + 832/61731
) to Num to avoid the overflow in the denominator. Obviously this is less than ideal -- a denominator of 61731 would work fine for this sum -- but it does work. And how! Gone are the mysterious
crashes and errors. With the number of samples cranked up, the curves look beautiful.
Now I just need to do a bit of polishing to the output and figure out how to post it to the blog. I'm definitely feeling I've accomplished something cool in Perl 6....
PS Aha! Preview graphic, I'll explain what it means next time.
A post-Thanksgiving dinner $work debugging session left me with some spare time to poke at my SVG.pm bug: that is, the "too many positional arguments: 3 passed, 1 expected" error I got when I tried
to crank up the number of samples I was taking of the curve. I finally added enough says to track the error down to the simplest of operations: @N[$i] += $temp;, where $temp was 0.00686851014887172.
This wasn't one of my fancy overloaded operators, it was a basic Perl 6 numeric operator. What could go wrong?
Well, I added a bunch more stuff to the say. And it turns out we are adding two Rats, 424/61731 and 832/61731. What could go wrong?
> say 424/61731 + 832/61731
too many positional arguments: 3 passed, 1 expected
If you look into infix:<+>(Rat, Rat), it just does a naive fractional add, relying on Rat.new to reduce the fraction. The problem here is that means the denominator of our new fraction starts its
life as 61731 * 61731.
> say (61731 * 61731) div 246924
No applicable candidates found to dispatch to for 'infix:div'
I'm not sure what the best approach to fixing this is. Obviously I can change my code to do Nums. But I think the first step is to file a Rakudo bug.
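For contrast, Python's arbitrary-precision rationals sidestep the Int32 issue entirely. This sketch (illustration only, not part of the Rakudo fix) shows the same sum normalizing cleanly, while the naive denominator product really does exceed 2**31 - 1:

```python
from fractions import Fraction

a = Fraction(424, 61731)
b = Fraction(832, 61731)
print(a + b)                      # → 1256/61731, normalized automatically
print(61731 * 61731)              # → 3810716361
print(61731 * 61731 > 2**31 - 1)  # → True: the Int32 overflow trigger
```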
draegtun had a nice post on Project Euler #4. The goal is to find the largest palindromic product of two three digit numbers, and he lists solutions in Clojure and Python and a bunch in Perl 5. There
is also a Perl 6 version by trenton in euler_bench on github.
I thought I'd try my hand at a more idiomatic Perl 6 version. Here's my first attempt:
((100..999) X (100..999)).map({$^a * $^b}).grep({$_ eq $_.flip}).max.say
Unfortunately, the usual caveats apply. This blows up the current Rakudo, no doubt because without lazy lists it tries to construct a huge list entirely in memory and then collapses. Rakudo ng has
lazy lists, but it doesn't have grep or max yet. So for now, this is purely a hypothetical implementation.
However, with any luck, Rakudo ng will handle this by the end of the week...
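The same map/grep/max pipeline can be checked in Python (a sketch for verification, not from the original post):

```python
# Largest palindromic product of two three-digit numbers (Project Euler #4),
# mirroring the map/grep/max pipeline of the Perl 6 one-liner.
products = (a * b for a in range(100, 1000) for b in range(100, 1000))
largest = max(p for p in products if str(p) == str(p)[::-1])
print(largest)  # → 906609 (= 913 * 993)
```

Generator expressions give the laziness here for free, which is exactly the feature the post is waiting on Rakudo ng to deliver.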
jnthn has an interesting post on the progress and problems of Rakudo's ng branch. Looking over some of the spectest files, it seems to me that ng has a good excuse for not handling many spectest
files yet. By my rough count, at least 50% of the spectest files have a "rakudo skip" directive in them, i.e. a test Rakudo cannot handle. Because they have no equivalent "ng skip" directive set up, ng
must handle every single case that Rakudo's master branch does to qualify it for inclusion in ng's spectest. If Rakudo master similarly had no skip directive, more than half of the spectest would be
eliminated! So it's not surprising that ng is having growing pains on this front.
Personally, I'm quite impressed by the progress on ng. I cannot wait until I can use Perl 6's laziness -- that's a killer feature ng already has that Rakudo master lacks.
I noticed from #perl6 that chromatic fixed a memory leak in Parrot recently, and that fix made its way to Rakudo today. I thought, hey, I've been having a bus error. Could that be caused by an
out-of-memory condition? So I upgraded Rakudo, and haven't seen the bus error since. (The "too many positional arguments: 3 passed, 1 expected" remain if I crank up the number of samples taken of
each curve, but hopefully now I'll be able to track that down without being knocked around by bus errors.)
So, the first thing I did to get SVG working with Nubs and Polynomial was to write a simple class which converts from "normal" XY space to SVG coordinates (which I'm referring to as NM coordinates in
this code).
class Vector { ... }
subset Vector2 of Vector where { $^v.Dim == 2 };

class SVGPad {
    has Vector2 $.xy_min;
    has Vector2 $.xy_max;
    has Vector2 $.mn_min;
    has Vector2 $.mn_max;

    multi method new(Vector2 $xy_min, Vector2 $xy_max, Vector2 $mn_min, Vector2 $mn_max) {
        self.bless(*, xy_min => $xy_min, xy_max => $xy_max, mn_min => $mn_min, mn_max => $mn_max);
    }

    multi method xy2mn(Vector2 $xy) {
        my $t = ($xy - $.xy_min).coordinates >>/<< ($.xy_max - $.xy_min).coordinates;
        return $.mn_min + Vector.new(($.mn_max - $.mn_min).coordinates >>*<< $t);
    }
}
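The affine remapping that xy2mn performs can be mirrored in plain Python (a sketch with hypothetical names, using tuples instead of the Vector class):

```python
def xy_to_mn(xy, xy_min, xy_max, mn_min, mn_max):
    """Normalize each coordinate to [0, 1] over the XY box,
    then rescale it onto the MN (SVG) box."""
    return tuple(
        m0 + (m1 - m0) * (v - lo) / (hi - lo)
        for v, lo, hi, m0, m1 in zip(xy, xy_min, xy_max, mn_min, mn_max)
    )

# The origin of a (-2.5, -2.5)..(2.5, 2.5) world lands at the
# center of a 400x400 SVG canvas:
print(xy_to_mn((0, 0), (-2.5, -2.5), (2.5, 2.5), (0, 0), (400, 400)))
# → (200.0, 200.0)
```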
So the code to use this (for now) is just
class SVGPad { ... }
class Nubs { ... }
class Polynomial { ... }

sub MakePath($curve, Range $range, SVGPad $pad) {
    my @points = RangeOfSize($range.from, $range.to, 10).map({$pad.xy2mn($curve.evaluate($_))});
    my $start = @points.shift;
    my $path = "M {$start.coordinates[0]} {$start.coordinates[1]}";
    for @points -> $v {
        $path ~= " L {$v.coordinates[0]} {$v.coordinates[1]}";
    }
    return $path;
}
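The string MakePath assembles is an ordinary SVG path: one moveto followed by linetos. A Python sketch of the same idea (names hypothetical):

```python
def make_path(points):
    """Build an SVG path string: 'M x y' for the first point,
    then ' L x y' for each subsequent point."""
    (x0, y0), *rest = points
    return f"M {x0} {y0}" + "".join(f" L {x} {y}" for x, y in rest)

print(make_path([(0, 0), (10, 5), (20, 0)]))  # → M 0 0 L 10 5 L 20 0
```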
my @control_points = (Vector.new(-1, -2),
Vector.new(1, 0),
Vector.new(1, 1),
Vector.new(0, 1),
Vector.new(1, 2),
Vector.new(1, 2),
Vector.new(1, 2));
my @knots = (-1, -1, -1, -1, 1, 2, 2, 3, 3, 3, 3);
my Nubs $nubs = Nubs.new(3, KnotVector.new(@knots), @control_points);
my Polynomial $poly1 = $nubs.evaluate(0, Polynomial.new(0.0, 1.0));
my Polynomial $poly2 = $nubs.evaluate(1.5, Polynomial.new(0.0, 1.0));
my Polynomial $poly3 = $nubs.evaluate(2.5, Polynomial.new(0.0, 1.0));
my $pad = SVGPad.new(Vector2.new(-2.5, -2.5), Vector2.new(2.5, 2.5),
Vector2.new(0, 0), Vector2.new(400, 400));
my $svg = svg => [
    :width(400), :height(400),
    path => [
        :d(MakePath($nubs, -1..3, $pad)), :stroke("blue"), :stroke-width(2), :fill("none")
    ],
    path => [
        :d(MakePath($poly1, -2..2, $pad)), :stroke("green"), :stroke-width(1), :fill("none")
    ],
    path => [
        :d(MakePath($poly2, 0..3, $pad)), :stroke("red"), :stroke-width(1), :fill("none")
    ],
    path => [
        :d(MakePath($poly3, 1..4, $pad)), :stroke("white"), :stroke-width(1), :fill("none")
    ],
];
say SVG.serialize($svg);
MakePath is a simple function which takes a "curve" (that is, an object which has an evaluate function which goes from t to x, y), a Perl 6 Range to evaluate it over, and an SVGPad, and returns a SVG
path object. Then we set up a fairly simple NUBS curve, use the Nubs to Polynomial version of the evaluate function to generate the corresponding Polynomial for each segment of the curve, and output
all four curves to SVG. This is still a crude first approximation, but it does work; if I knew how to include SVG in this post I could show it. Next time, hopefully.
Saw Syntax::Hightlight::Perl6 mentioned on #perl6 over the weekend. I've never been very happy with using gist to provide snippets, so I thought I'd give it a whirl. I was anticipating a struggle
getting it installed from CPAN, but it worked on the first go on my 5.10.0 install.
Here's my source (Perl 5.10) to provide a simple harness for using the module. (Errr... on gist, because it's a Perl 6 syntax highlighter, not a Perl 5 syntax highlighter.)
And here's sample output, used on that spelling corrector script:
use v6;
my %dictionary;
sub edits($word) {
    my @s = (^$word.chars).map({$word.substr(0, $_), $word.substr($_)});
    my @deletes = @s.map(-> $a, $b { $a ~ $b.substr(1); });
    my @transposes = @s.map(-> $a, $b { $a ~ $b.substr(0, 2).flip ~ $b.substr(2) if $b.chars > 1 });
    my @replaces = @s.map(-> $a, $b {$a ~ ':' ~ $b.substr(1)});
    my @inserts = (@s,$word,"").map(-> $a, $b {$a ~ ':' ~ $b});
    return (@deletes, @transposes, @replaces, @inserts);
}

sub edit_list_to_regex(@el) {
    any(@el.uniq>>.subst(':', '<alpha>', :g).map({ rx/ ^ $_ $ / }));
}

sub correct($word) {
    return $word if (%dictionary{$word});
    my $regex = edit_list_to_regex(edits($word));
    my @candidates = %dictionary.keys.grep($regex);
    if @candidates.elems == 0 {
        $regex = edit_list_to_regex(edits($word).map({edits($_)}));
        @candidates = %dictionary.keys.grep($regex);
    }
    return @candidates.max({%dictionary{$_} // 0});
}
Nice, eh?
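For comparison, the edit-distance-1 generation on the Python side looks roughly like this (a from-memory sketch of Norvig's approach, using '.' as a wildcard where the Perl 6 code uses ':'):

```python
def edits1(word):
    """All candidates one edit away: deletes, adjacent transposes,
    single-character replaces and inserts (wildcarded with '.')."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + '.' + b[1:] for a, b in splits if b]
    inserts = [a + '.' + b for a, b in splits]
    return set(deletes + transposes + replaces + inserts)

print(sorted(edits1("ab")))
```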
I'm still working on the Nubs + SVG stuff. It's so close to doing exactly what I want, but I seem to be overworking Rakudo and causing it to croak randomly...
Updated: Errr... nice except for the part where the long lines get clipped. I need to figure out how to reformat this blog...
Very briefly got out SVG output for a NUBS curve yesterday. Tried to make it a bit nicer looking, and had it promptly dissolve in a hail of bus errors. (And occasional "too many positional arguments:
3 passed, 1 expected" errors that I think must be incorrect.) It's now reminding me of my attempt to write an Euclidean geometry proof generator in Lisp back in '92 -- seemingly minor, innocent
changes in the code lead to crazy changes in where it crashes.
Anyone have hints for how to approach debugging this sort of thing in Rakudo? I know it's just part of the frustration expected for an early Perl 6 adopter, but I'd really like to get this thing working.
I am awed by Carlin's IRC bot in Perl 6 code. It is cool and practical and working, and makes me want to steal code from it and write bots of my own.
At the same time, every time I look at the source, my eyes land on a series of elsif's, and I think "That's so Perl 5!" Here it is:
Starting at the beginning, that first line is a wonderful bit of Perl 6 craziness. $message .= split(' '); is equivalent to $message = $message.split(' ');. What's wacky about that is $message starts
off as a string, and ends up an array! The next two lines make the first element of the array $command and the rest of them $params. (If you're wondering about how $message and $params can be arrays,
they are array objects in scalar variables. Or something like that, I'm fuzzy on the details, but it clearly works.)
I'd recast all these lines as one simple line harking back to Perl 5, but in a nice Perl 6 way: my ($command, @params) = $message.split(' '); It would perhaps be better to split on \s+, too. And now
we can switch $message away from being a rw parameter. (I believe doing that will not mess up the rest of the bot code, but I admit I haven't actually tested it.)
Then the rest of the function is probably better expressed as given / when block. The great thing is all those $command ~~ 'karma' can become just 'karma' if we do given $command.
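The same split-and-dispatch shape can be sketched in Python with a handler table (command names hypothetical, not taken from the bot):

```python
def handle(message):
    """First word selects a handler; the rest are its arguments,
    mirroring the given/when rewrite suggested above."""
    command, *params = message.split()
    handlers = {
        "karma": lambda ps: f"karma for {ps[0]}",
        "quit":  lambda ps: "bye",
    }
    # Unknown commands fall through to a default, like given/when's default block.
    return handlers.get(command, lambda ps: "unknown command")(params)

print(handle("karma alice"))  # → karma for alice
print(handle("dance"))        # → unknown command
```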
(Hmmm... actually now that I look at it, how the heck does the Str $message is rw handle being converted to an array of strings in the old code?)
Last post I revealed how I'd gotten myself tangled up when trying to add a "do the right thing" mode to Nubs.evaluate. I'm going to repeat that code here, as I've added one of the Nubs.evaluate
functions to the mix.
These functions became overly complex because I tried to overload the KnotBasisDirection enum with an extra layer of meaning. Pulling that extra layer out and rearranging things a tad gives you
vastly better code:
Notice that we've eliminated an enum value here, two cases that needed to be checked for, and simplified a third line of code. The "trick" (almost too obvious to be called a trick) is to make the
default value of $direction be determined by a called to Direction, rather than calling Direction with a special value that indicates it needs to do work.
It's the best of both worlds. By default, if you don't specify $direction it will do something reasonable. And for those rare weird cases where it is important (discontinuities in the curve), you can
specify the direction to use for the evaluation.
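The "trick" generalizes beyond Perl 6: compute the default inside the function instead of threading a sentinel value through the enum. A Python sketch (the names and the boundary rule are assumptions based on the description above):

```python
from enum import Enum

class Direction(Enum):
    LEFT = 1
    RIGHT = 2

def reasonable_direction(t, lo, hi):
    """Hypothetical default rule: Left evaluation covers [lo, hi);
    only Right works at hi itself."""
    return Direction.RIGHT if t >= hi else Direction.LEFT

def evaluate(t, lo, hi, direction=None):
    # No 'Reasonable' sentinel: a missing argument means
    # "pick the sensible direction for this t".
    if direction is None:
        direction = reasonable_direction(t, lo, hi)
    return direction  # stand-in for the real curve evaluation

print(evaluate(0.5, 0.0, 1.0))  # → Direction.LEFT
print(evaluate(1.0, 0.0, 1.0))  # → Direction.RIGHT
```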
Tomorrow: On the road to SVG in Perl 6.
...about half of the time I go to post on something I've considered "done", the process of thinking about it makes me stop and rewrite it.
Case in point: I'm currently working on generating SVG output for Nubs and Polynomial, so that I can draw pretty pictures demonstrating the math. The very first thing I set out to do was make sure that
both classes had an equivalently called evaluate function, because that's what we'll use to drive the graphing. And that meant adding a bit of logic to the Nubs class.
I don't know if I've properly explained this before or not, but Nubs and Nurbs have two slightly different ways of evaluating them, which I've called Left and Right here. Neither approach can
evaluate what you'd like to be the entire range of the curve: if the curve's range is [a, b], then Left works for [a, b), Right for (a, b]. That has all been handled ever since I got the Nubs to
Polynomial code working correctly.
That means to have a fully working evaluate function, you need to be able to use a blended approach. So I decided that you should be able to set $direction to Left, Right, or Reasonable. That meant I
had to special case Reasonable in the KnotVector code, and come up with a function that did a reasonable thing if Reasonable was set. Here's the gist of that code:
Now that I've put these two together like this, the problem here is kind of glaring. KnotBasisDirection has three values, two of which are the only ones that make sense in the first function, and the
third of which is the only one that makes sense in the second. I overloaded it like that merely so KnotBasisDirection could take Reasonable as a default value. Stupid.
So I had written that code and tested it and gone on to the next segment of code, and it wasn't until I started thinking about how to blog the code that I realized it was stupid and needed to be
rewritten. Yay blogging!
Tomorrow: The corrected code.
Edited to add: I forgot to say "Perl 6" in this post, which is needed to make it show up in the Ironman feed, IME. (Well, just Perl would do, but I prefer to say Perl 6.)
I'm throwing this out there in hopes that wiser Perl 6 heads can help me sort out what to do with it. As I recounted two posts ago, it occurred to me that the spelling corrector could properly
support Unicode if instead of the list of letter combinations you might have meant, it generated a list of regexes for them. This drastically cuts down on the combinatorial explosion from calling the
edit routine twice and makes the script handle Unicode properly. On the downside, it is likely to be a good bit slower.
This script implements it. Unfortunately, Rakudo does not yet support variable interpolation in strings, so I can't test the script. Also I'm suspicious I have mucked up the combination of any and
grep, but it's hard to be sure without testing the script.
Anyway, for those of you keeping score at home, Norvig's original Python script does this task in 21 lines of code. This quasi-correct Perl 6 version adds full Unicode support with just an additional
3 lines of code, for 24 total. And that's counting 4 lines which are just }, and the semi-optional use v6; line as well. Assuming fixing the issues don't require additional lines, this looks like a
clear win for Perl 6. And I'm quite sure this code can be made a good bit better and clearer... | {"url":"http://lastofthecarelessmen.blogspot.com/2009_11_01_archive.html","timestamp":"2014-04-16T13:03:27Z","content_type":null,"content_length":"151629","record_id":"<urn:uuid:974ea45b-3150-459c-b273-b58e97ad7636>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Results on multiple sheets
If my previous posts didn't give an answer to your question, then I guess that I simply don't understand what your question exactly means.
If you mean with "live results" only values, but not cell-references, then, obviously it is not possible with formulas. Perhaps it is possible with a script, but I don't know anything about scripts.
However, if you mean something else, and you want to get it, using formulas, then, again, I advise you to share an example spreadsheet with us. If it can be done using formulas, although I might not
be able to provide the desired results, then we have always Yogi and Adam, who are masters in spreadsheet-formulas. | {"url":"https://productforums.google.com/forum/?_escaped_fragment_=msg/docs/argx1wYKevk/viwSoV4F9OkJ","timestamp":"2014-04-18T15:39:49Z","content_type":null,"content_length":"3829","record_id":"<urn:uuid:2227201d-f7e9-49c3-94bb-ffd93c342af7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Princeton Landmarks in Mathematics and Physics
A series of affordably priced paperback reprints of landmark works in mathematics and physics. Committed to the highest standards of scholarship and to its widespread dissemination, Princeton
University Press publishes a continuing series of paperback books in mathematics and physics written by some of the world's finest scientists on topics of lasting importance. These books are
indispensable additions to the personal libraries of advanced students, teaching faculty and professional physicists and mathematicians.
List by Title | List by Author
Princeton University Press | {"url":"http://press.princeton.edu/math/series/plmph.html","timestamp":"2014-04-19T11:58:45Z","content_type":null,"content_length":"7637","record_id":"<urn:uuid:d887dc40-c506-4bf5-b3b3-1d04d7961e1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Collegeville, PA Algebra 2 Tutor
Find a Collegeville, PA Algebra 2 Tutor
...I have 14 years' experience as a practicing actuary. I am a Fellow of the Society of Actuaries, having completed the actuarial exam process. I took courses in linear algebra, linear
programming, and linear optimization.
18 Subjects: including algebra 2, calculus, statistics, geometry
...As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Mathematics test takers. In high school I scored 1550/1600 (780M, 770V)
on the SAT and in January 2013 I scored 2390/2400 (800M, 790R, 800W). Yes, I still take the tests to mak...
19 Subjects: including algebra 2, calculus, statistics, geometry
...Another project is a bull's-eye game using a mirror to teach the principle of the angle of reflection. A pan of water can be used to show how sound waves reflect off certain surfaces and are
absorbed by others. I use a flashlight to explain how a remote control works, as kids pretend to be a remote-controlled robot.
16 Subjects: including algebra 2, reading, English, geometry
...I have taken a few courses which deal with linear algebra, namely Differential Equations (which introduces linear algebra), Intermediate Linear Algebra, and General Relativity, receiving an A
in each of these courses. I have tutored linear algebra both for my job as a math tutor and privately. ...
26 Subjects: including algebra 2, English, writing, reading
...She has offered and mastered classes in local school districts including creating a jazz flute workshop for middle school and high school students. Her education includes, a B.S. from Rutgers
University, graduate education courses at University of Pennsylvania, graduate seminars in music teachin...
51 Subjects: including algebra 2, English, reading, algebra 1 | {"url":"http://www.purplemath.com/collegeville_pa_algebra_2_tutors.php","timestamp":"2014-04-18T18:46:28Z","content_type":null,"content_length":"24229","record_id":"<urn:uuid:e009a91c-6bbd-44e0-84f0-e2aa2bd1cbe3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
I am a member of the High Energy Physics and Mathematics and Computer Science Divisions at Argonne National Laboratory, a Senior Member of the Kavli Institute for Cosmological Physics at the
University of Chicago, and a Senior Fellow in the Computation Institute, a joint collaboration between Argonne National Laboratory and the University of Chicago.
My research interests cover the broad sweep of classical and quantum dynamical systems, from field theories to particles, and from the largest scales to the smallest. Specifically — moving from
larger to smaller scales — I have worked on problems in cosmology, astrophysics, accelerator physics, condensed matter physics, atomic and quantum optics, and particle physics. Some of my more
general interests include quantum dynamics of open systems, nonlinear dynamics and nonequilibrium statistical mechanics, and stochastic ODEs and PDEs. Although most of my work has been theoretical, I
have also been involved in experimental and observational projects.
For over two decades I have been very interested in the intelligent application of parallel supercomputers to attacking physics problems. This has led to algorithm and code development in a
variety of fields and on a variety of platforms, beginning with the Connection Machines in the early 1990′s and leading on to the current BG/Q system, Mira, now installed at Argonne.
More recently I have become involved in efforts — with cosmology as the primary arena — to apply advanced statistical methods to complex inference problems where the datasets are very large (with
small statistical uncertainties) and the forward model predictions involve supercomputer calculations. I am a member of the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) collaborations.
I am always looking for new things to discuss and think about. Please send me email (habib[at]anl.gov) if interested. Students are most welcome!
My undergraduate preparation was at the Indian Institute of Technology, Delhi and my Ph.D. is from the University of Maryland, obtained under the guidance of Bei-Lok Hu. I was a postdoc with Bill
Unruh at the University of British Columbia and postdoc and later staff member in the Theoretical Division at Los Alamos National Laboratory before moving to Argonne in 2011. | {"url":"http://press3.mcs.anl.gov/salman-habib/main/","timestamp":"2014-04-17T19:37:04Z","content_type":null,"content_length":"6971","record_id":"<urn:uuid:6ac41b88-70ba-46d5-86d8-2b877d78606a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathNotations Soaring With Eagles or Just For the Birds? Updates 5-28-10
NOTE: I added a new solution (see (e) below). Also, read the comments to see even more solutions. Thanks to Jonathan for pointing out my error in (d) of my results.
I'll get to that cryptic title in a moment (may be obvious to some)...
1. Remember the challenge problem I posted in the tribute to Martin Gardner a few days ago? Well, we rec'd several excellent replies and I have an additional response from a very sharp high schooler
as well. Here was the problem:
Can you form 95 using each of the digits 5-2-2-1-0 exactly once? No restrictions on the arithmetic operations, parentheses, factorials, roots, logs, etc... You may combine the digits to form numerals
like 12 or 120.
Mr. Lomas: 5! - (2+2)! - 1 - 0 Perhaps the most elegant since it uses the individual digits in the given order.
Robot Guy: (21-2)*5+0
Nate (high schooler): 120-5^2 Oh, the simplicity of that one! Combining digits is not the first way I thought of...
Mine so far:
(a) 102 - (5+2) Pretty simple but I wasn't thinking much of combining digits until I saw Nate's
(b) 120 - 25 (Shameless plagiarism from Nate's but I couldn't resist!)
(c) (2^5)(2+1) - 0! (I posted this one already)
(d) 10^2 - 5 x (2 - 0!) (I knew there had to be a way using 100 - 5)
(e) A new one: (2 + 2)! x (5-1) - 0! I felt I needed to atone for my error in (d)!
I suspect Mr. Lomas has even more! It was definitely the spirit of Martin Gardner at work here!
Keep these coming if you can find more. I'd like to see us get to 10 ways.
The problem on the blog was:
If a hen and a half can lay an egg and a half in a day and a half, how many eggs can three hens lay in three days? Assume that all hens are a-laying at the same rate.
Here the answer is:
6 eggs
Here's a black-box method, i.e., work shown but no explanation:
(2/3) egg per (hen⋅day) x 3 hens x 3 days = 6 eggs.
This is how most solutions are given online and in the literature. It has little to do with middle schoolers actually learning the underlying principles. See the video for details.
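As a quick sanity check (my own addition, not part of the original post), the rate arithmetic works out exactly when done with fractions:

```python
from fractions import Fraction

# 1.5 hens lay 1.5 eggs in 1.5 days, so the per-hen, per-day rate is:
rate = Fraction(3, 2) / (Fraction(3, 2) * Fraction(3, 2))  # 2/3 egg per hen-day
eggs = rate * 3 * 3  # 3 hens laying for 3 days
print(eggs)  # 6
```

Using Fraction keeps the 2/3 exact instead of the 0.6666... you would get with floats.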
3. Now for something completely different, as M.P. would say!
I've decided for now to tweet a daily (SAT) Problem of the Day. "SAT" is in quotes because you can use these in your class as regular warm-ups or students can try these on their own to prepare for
the upcoming SAT on June 5th and beyond.
Answers to each question will generally appear the next day, just before I tweet the new question. I've posted two problems thus far and the answers are up there today. Today's question will appear
My Twitter address is naturally dmarain.
Get the RSS feed for this at Twitter/dmarain if you want to see the daily problems.
If you have a question about the problems or want more details about solutions, send me a Direct Message in Twitter or email me.
Follow me if you'd like. These questions will not appear on this blog, so you will need a Twitter account or subscribe to the RSS feed above. Let your students know about it as well if you'd like.
Let me know by commenting here or replying on Twitter (Direct Message) if you like these and want me to continue next fall. Last SAT Problem of the Day on Twitter for this school year will be
Requiescant in Pacem, Martin...
"All Truth passes through Three Stages: First, it is Ridiculed... Second, it is Violently Opposed... Third, it is Accepted as being Self-Evident." - Arthur Schopenhauer (1788-1860) You've got to be
taught To hate and fear, You've got to be taught From year to year, It's got to be drummed In your dear little ear You've got to be carefully taught. --from South Pacific
7 comments:
three of clubs said...
(0+1)| (2x2)!-5! |
three of clubs--
I like that form but check the arithmetic. If (0+1) is multiplied by the absolute value then I obtain 96. I think all you need do is to take your expression in abs values and then subtract (0+1)
yielding 96 - 1 = 95.
Regardless, you found a very interesting one.
there's an extra zero in your example d.
Not very original, but with a twist:
[(2 - 0!)/.1]^2 - 5
thanks for pointing out my error!
Naturally your decimal approach makes me think of using repeating decimals:
2 / (.02 repeating) + 1 - 5 = 2 / (2/99) -4 = 99-4 = 95.
I owe that one to you Jonathan! I hope it makes up for my carelessness.
These kinds of number puzzles do take on a life of their own and there are so many popular variations of these kinds of problems. I really do think they improve students' number sense and review
order of operations.
In actual practice, do you think most students would have a calculator in hand while playing with this riddle? How many of your students do you think would attempt to do this with paper and pencil
and some mental calculation like we do! I'm sure there are some...
I just added an extra one in the original post, labeled (e). Hopefully I've now atoned for "my sin"!! (wasn't My Sin once a perfume by some French company?).
Here it is again:
(5-1) (2+2)! - 0!
Is there any end to this? Actually, the issue of how many possible expressions one could make seems to be a formidable challenge, even if we place restrictions on the operations.
I do believe there are limits if you restrict the operations adequately.
With just + - * / ^, I don't think there are any others besides
120 - 5^2
(2-21) * (0-5)
102 - (5+2)
102 - 5 - 2
In terms of limits in general, with binary operations you always consume an input, so the search depth is limited by that. If you add in unary operations like sqrt or factorial, then you can
deepen quite a lot more, though perhaps not usefully. Other unary operations like floor and ceiling only apply once non-redundantly, so those don't matter.
Other binary operations I'd try would include modulo, log with a specified base, nth-root... what else?
Nice analysis! I knew one of my readers would be able to see a few levels into this. Along the way, your systematic approach produced a few other nice combinations. Thanks! Now how would you
write a program in Python to analyze all the possibilities!
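Taking up that closing challenge, here is one rough Python sketch (mine, and deliberately limited to + - * / on the bare digits — no concatenation, factorials, or decimals, so it won't reproduce solutions like 120 - 5^2):

```python
from fractions import Fraction
from itertools import combinations

def solve(values, target):
    """Yield expression strings over (value, text) pairs that reach target."""
    if len(values) == 1:
        value, text = values[0]
        if value == target:
            yield text
        return
    # Merge any two intermediate results with each binary operation.
    for i, j in combinations(range(len(values)), 2):
        (a, ea), (b, eb) = values[i], values[j]
        rest = [values[k] for k in range(len(values)) if k not in (i, j)]
        merged = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})"),
                  (a - b, f"({ea}-{eb})"), (b - a, f"({eb}-{ea})")]
        if b != 0:
            merged.append((a / b, f"({ea}/{eb})"))
        if a != 0:
            merged.append((b / a, f"({eb}/{ea})"))
        for value, text in merged:
            yield from solve(rest + [(value, text)], target)

digits = [(Fraction(d), str(d)) for d in (5, 2, 2, 1, 0)]
print(sorted(set(solve(digits, 95))))  # the four basic ops alone
print(sorted(set(solve(digits, 20)))[:3])  # a reachable target, e.g. 5*2*2*1+0
```

Extending `merged` with exponentiation and seeding the list with concatenated numerals like 12 and 120 would recover Jonathan's four solutions; adding unary operations like factorial makes the search deepen enormously, just as he notes.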
By the way, I'm sure I could have picked almost any other date and someone's age and we could have generated many solutions. But I just get the feeling there's something special about Martin
Gardner's legacy to us... | {"url":"http://mathnotations.blogspot.com/2010/05/mathnotations-soaring-with-eagles-or.html","timestamp":"2014-04-21T12:39:32Z","content_type":null,"content_length":"210208","record_id":"<urn:uuid:667515d9-9dbb-428a-b633-2f40d17cb255>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2009 [00949]
[Date Index] [Thread Index] [Author Index]
Re: ContourPlot, equation and R.H. side of equation_Plotting problem
• To: mathgroup at smc.vnet.net
• Subject: [mg99087] Re: [mg99051] ContourPlot, equation and R.H. side of equation_Plotting problem
• From: "David Park" <djmpark at comcast.net>
• Date: Sun, 26 Apr 2009 01:40:52 -0400 (EDT)
• References: <15500446.1240650564504.JavaMail.root@n11>
This works:
eqn1 = 3 x^2 + 6 y^2 == 6
ContourPlot[Evaluate@eqn1, {x, -2, 2}, {y, -2, 2}, Axes -> True]
I think this is what might be called a feature. ContourPlot has the
attribute HoldAll. Without the Evaluate it sees no x or y to substitute
values for. So you obtain no numeric result and no image. When you enter the
form with the explicit equal sign, Mathematica follows a path that does
evaluate and you get the image of the curve.
The lesson is: when you don't obtain an image when you expect one, and the
plotting parameters are not manifest in the expression, use Evaluate.
David Park
djmpark at comcast.net
From: Bill [mailto:WDWNORWALK at aol.com]
1a.) When I assign 3 x^2 + 6 y^2 == 6 to eqn1, in Mathematica like this:
eqn1=3 x^2 + 6 y^2 == 6;
I can't get ContourPlot to plot eqn1 using this code:
ContourPlot[eqn1, {x, -2, 2}, {y, -2, 2}, Axes -> True]
1b.) If I assign the equation like this without the constant on the R.H.
side in eqn2,
ContourPlot will plot the equation as expected, using the following syntax:
eqn2=3 x^2 + 6 y^2;
ContourPlot[eqn2 == 6, {x, -2, 2}, {y, -2, 2}, Axes -> True]
Question: How can I get method 1a to work? Could you please give me code for that?
PS. I'm using Mathematica 6.0.1 w/ Win XP on a PC. | {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Apr/msg00949.html","timestamp":"2014-04-21T12:15:19Z","content_type":null,"content_length":"26629","record_id":"<urn:uuid:46126a50-4696-4417-81e1-a5e794e3693a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Doral, FL Math Tutor
Find a Doral, FL Math Tutor
...Teaching English to adults has been a rewarding and interesting experience and allowed me to expand my teaching methods and help the students to reach their potential in a second language. I
have taught students from all Latin America and Russia. I have a yoga certification and have been teaching yoga since 2003.
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry
...I try to build around what is going to work for them. Every student learns in a unique way, and I have the skills to bring out the best in each student. When working with a challenging
situation I will work outside the box to find a solution.
33 Subjects: including algebra 1, special needs, elementary (k-6th), grammar
...My goal is to make the student really understand the subject, not memorize it, so he/she can build a strong foundation that can be used for future classes. I hold a Bachelor Degree in
Industrial Engineering and a Bachelor Degree in Business Administration. I am also experienced in tutoring for ...
13 Subjects: including algebra 1, algebra 2, calculus, chemistry
...My expertise in the educational and real world side of Finance can not only help you have a much better understanding of it, but allow you to have conceptual knowledge of the material. I have
now had the pleasure of tutoring students in Chicago and Miami from undergraduate to graduate levels. U...
8 Subjects: including algebra 2, finance, algebra 1, accounting
...Like many others, my professor successfully brought the subject matter to life. I returned to class every week to teach dozens of my classmates (for free at the time) and was immediately
filled with enthusiasm. How many classes are students allowed to play with models during the exam?
30 Subjects: including prealgebra, GED, reading, algebra 1
Related Doral, FL Tutors
Doral, FL Accounting Tutors
Doral, FL ACT Tutors
Doral, FL Algebra Tutors
Doral, FL Algebra 2 Tutors
Doral, FL Calculus Tutors
Doral, FL Geometry Tutors
Doral, FL Math Tutors
Doral, FL Prealgebra Tutors
Doral, FL Precalculus Tutors
Doral, FL SAT Tutors
Doral, FL SAT Math Tutors
Doral, FL Science Tutors
Doral, FL Statistics Tutors
Doral, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Coral Gables, FL Math Tutors
Hialeah Math Tutors
Hialeah Gardens, FL Math Tutors
Hialeah Lakes, FL Math Tutors
Medley, FL Math Tutors
Miami Math Tutors
Miami Gardens, FL Math Tutors
Miami Lakes, FL Math Tutors
Miami Springs, FL Math Tutors
North Miami, FL Math Tutors
Opa Locka Math Tutors
South Miami, FL Math Tutors
Sweetwater, FL Math Tutors
Virginia Gardens, FL Math Tutors
West Miami, FL Math Tutors | {"url":"http://www.purplemath.com/doral_fl_math_tutors.php","timestamp":"2014-04-20T21:08:38Z","content_type":null,"content_length":"23785","record_id":"<urn:uuid:398ef8f4-dca2-4aeb-9b72-0cc3f0e9e4cc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using an Annotated Language Corpus as a Virtual Stochastic Grammar
Rens Bod
In Data Oriented Parsing (DOP), an annotated language corpus is used as a virtual stochastic grammar. An input string is parsed by combining subtrees from the corpus. As a consequence, one parse tree
can usually be generated by several derivations that involve different subtrees. This leads to a statistics where the probability of a parse is equal to the sum of the probabilities of all its
derivations. In (Scha, 1990) an informal introduction to DOP is given, while (Bod, 1992) provides a formalization of the theory. In this paper we show that the maximum probability parse can be
estimated in polynomial time by applying Monte Carlo techniques. The model was tested on a set of hand-parsed strings from the Air Travel Information System (ATIS) corpus. Preliminary experiments
yield 96% test set parsing accuracy.
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy. | {"url":"http://www.aaai.org/Library/AAAI/1993/aaai93-116.php","timestamp":"2014-04-17T15:42:35Z","content_type":null,"content_length":"2723","record_id":"<urn:uuid:5d9d86af-4fb9-4224-a92e-64bd17a03d4a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maths Revision
Integration - Area under a graph
Integration can be used to find the area bounded by a curve y = f(x), the x-axis and the lines x=a and x=b by evaluating the definite integral of f(x) from x=a to x=b.
However, you must be very careful in the way you use this as the following examples will show.
In this tutorial I show you how to find the area bounded by a curve when it is above the x-axis.
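A quick numerical illustration (my own sketch, not from the tutorial): for y = x^2 between x = 0 and x = 2 the exact area is 2^3/3 = 8/3, and a simple trapezoidal-rule approximation agrees:

```python
def trapezoid(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

area = trapezoid(lambda x: x * x, 0.0, 2.0)
print(round(area, 6))  # ~2.666667, matching 8/3
```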
| {"url":"http://www.examsolutions.net/maths-revision/core-maths/integration/applications/area/tutorial-1.php","timestamp":"2014-04-19T19:54:38Z","content_type":null,"content_length":"24035","record_id":"<urn:uuid:14e8f27e-0e85-4dc9-a77c-92ef3619205f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rego Park Geometry Tutor
...It gives me pleasure. I look forward to teaching you math, science, and general knowledge. Yours truly, Ben. I taught math and Algebra for years. I am a
Chemistry PhD.
21 Subjects: including geometry, chemistry, calculus, physics
I am a Forensic DNA Analyst, and my teaching style includes using real world examples to make learning interesting and relevant. Don't get frustrated with "why do I even have to learn this?" I'll
show you why! Learning these techniques and then learning how these techniques are used in the real world helps you picture it better, and makes it easier to remember for the big test.
11 Subjects: including geometry, chemistry, biology, algebra 1
...I understand that math is not a "favorite subject" to a lot of students. My primary objective is to provide students with a set of tools that helps them attain academic success and growth, to
build and expand their knowledge and critical thinking skills, and to develop as individuals. My approa...
18 Subjects: including geometry, calculus, statistics, algebra 1
...So am I. As a Ph.D. mathematician and educator with over 14 years teaching experience, I am fully prepared to help you understand all the ins and outs of any math class you're taking. I've taught
virtually every level of student, from high school dropouts learning basic algebra to college seniors learning differential equations.
14 Subjects: including geometry, calculus, algebra 1, algebra 2
Hello parents and students. I am a NY State licensed math teacher with one year classroom experience and six years tutoring experience. I also have a Masters degree in math education.
14 Subjects: including geometry, reading, algebra 1, SAT math | {"url":"http://www.purplemath.com/rego_park_geometry_tutors.php","timestamp":"2014-04-17T13:48:14Z","content_type":null,"content_length":"23887","record_id":"<urn:uuid:3e6180e8-113c-485f-b47e-dc12f71ff40c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
fixed point
Harris, Andrew Andrew.Harris at jhuapl.edu
Mon Oct 27 09:16:22 EST 2003
> Notice that, (\x -> x) a reduces to a, so (\a b c -> a b c) x (y-z) z
> reduces to x (y-z) z. You can therefore simplify your
> function quite a
> bit.
> wierdFunc x y z = if y-z > z then x (y-z) z else (\d e -> d) (y-z) z
> and you can still apply that lambda abstraction (beta-reduce)
> wierdFunc x y z = if y-z > z then x (y-z) z else y-z
> None of these (except, of course, fix and remainder) are recursive. A
> recursive function is just one that calls itself. For wierdFunc to be
> recursive, the identifier wierdFunc would have to occur in the
> right-hand side of it's definition.
Thanks for your help. For some reason I didn't think "x (y - z) z" and "y -
z" had the same type. I am still trying to understand how they do. Also, I
had a feeling the fix function was related to the "Y" combinator; it seems
they're the same thing!
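For what it's worth, the `fix`/Y-combinator idea can be sketched outside Haskell too — here is a rough Python transcription (mine, not from the thread):

```python
def fix(f):
    # fix f = f (fix f), with a lambda to delay the recursive call
    return lambda *args: f(fix(f))(*args)

# factorial written without naming itself:
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

The outer lambda plays the role of Haskell's laziness: it keeps `fix(f)` from being expanded until a call actually needs it.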
thanks again,
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2003-October/005386.html","timestamp":"2014-04-17T07:45:31Z","content_type":null,"content_length":"3342","record_id":"<urn:uuid:e547c3c4-41bb-4547-a89d-329b33c4ba6e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Here's the question you clicked on:
I really need help...:/ Cross two humans with the genes Aa and show this cross.
• one year ago
• one year ago
| {"url":"http://openstudy.com/updates/50f59c9de4b061aa9f9abc06","timestamp":"2014-04-17T19:22:46Z","content_type":null,"content_length":"79414","record_id":"<urn:uuid:96771b7e-ad92-4df7-8ed2-75cac56f04f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Process Standards
Problem Solving
Instructional programs from prekindergarten through grade 12
should enable all students to—
□ Build new mathematical knowledge through problem solving
□ Solve problems that arise in mathematics and in other contexts
□ Apply and adapt a variety of appropriate strategies to solve problems
□ Monitor and reflect on the process of mathematical problem solving
Reasoning and Proof
Instructional programs from prekindergarten through grade 12 should enable
all students to—
□ Recognize reasoning and proof as fundamental aspects of mathematics
□ Make and investigate mathematical conjectures
□ Develop and evaluate mathematical arguments and proofs
□ Select and use various types of reasoning and methods of proof
Instructional programs from prekindergarten through grade 12 should enable
all students to—
□ Organize and consolidate their mathematical thinking through communication
□ Communicate their mathematical thinking coherently and clearly to peers, teachers, and others
□ Analyze and evaluate the mathematical thinking and strategies of others
□ Use the language of mathematics to express mathematical ideas precisely
Instructional programs from prekindergarten through grade 12 should enable
all students to—
□ Recognize and use connections among mathematical ideas
□ Understand how mathematical ideas interconnect and build on one another to produce a coherent whole
□ Recognize and apply mathematics in contexts outside of mathematics
Instructional programs from prekindergarten through grade 12 should enable
all students to—
□ Create and use representations to organize, record, and communicate mathematical ideas
□ Select, apply, and translate among mathematical representations to solve problems
□ Use representations to model and interpret physical, social, and mathematical phenomena | {"url":"http://www.nctm.org/standards/content.aspx?id=322","timestamp":"2014-04-21T16:30:20Z","content_type":null,"content_length":"32895","record_id":"<urn:uuid:1efa28b2-d75f-463a-9c67-13ea8bab57c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by honey dip on Thursday, November 23, 2006 at 11:39am.
how to find the reciprocal??
You flip the fraction. Example: for 4 over 5 the reciprocal will be 5 over 4.
If it is a fraction, switch the numerator with the denominator.
Example: reciprocal of 2/7 = 7/2 = 3 1/2
If it is a mixed number, write it as a fraction and turn the fraction upside down.
Example: reciprocal of 2 1/2 = reciprocal of 5/2 = 2/5
If it is a whole number (or any number!) N, the reciprocal is 1/N.
If it is a non-integer in decimal form, such as 1.23, perform the operation 1 divided by 1.23, and get 0.813...
It looks like you already knew the answer to your own question
If you need to calculate 1/a you can use the following algorithm. You start with a guess x_{0} and you calculate better and better approximations x_{1}, x_{2},...etc recursively using the formula:
x_{n+1} = 2x_{n} - a x_{n}^2
Note that there are no divisions in this equation. This algorithm will double the number of significant digits after each iteration, so it's superior to long division where you only get one more
digit per step (which you have to find by trial and error, so the long division algorithm isn't completely division free itself).
We want to find the decimal expansion of 1/68. 1/50 is 0.02, 1/100 is 0.01, so let's start with an initial guess of x_{0} = 0.015.
The algorithm gives:
x_{1} = 2*x_{0} - 68*x_{0}^2 = 0.0147
x_{2} = 2*x_{1} - 68*x_{1}^2 = 0.01470588
x_{3} = 2*x_{2} - 68*x_{2}^2 = 0.014705882353...
The 1/x function of my calculator gives:
1/68 = 0.0147058823529
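For what it's worth, here is the iteration in code (my own sketch of the recurrence above):

```python
def reciprocal(a, x0, steps=5):
    """Approximate 1/a with Newton's update x <- 2x - a*x^2 (division-free)."""
    x = x0
    for _ in range(steps):
        x = 2 * x - a * x * x
    return x

approx = reciprocal(68, 0.015)
print(approx)  # converges quadratically toward 1/68 = 0.0147058823529...
```

Since the error roughly squares at every step, five iterations from a decent starting guess already hit machine precision.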
• math - Anonymous, Tuesday, May 11, 2010 at 6:49pm
Related Questions
math help - could some one please explain to me reciprocal how to find the ...
Math - What is a reciprocal A number's "flip". Like if you have 2/3, it's ...
math - PLease help! anyy would be appreciated !thx what is an example of a ...
Math - The sum of the reciprocal of 5 and the reciprocal f 7 is the reciprocal ...
math - what is the reciprocal of -1/10(-4-7)? Can tell me how to find the ...
college - Compare mechanistic and reciprocal interactionism. Give an example of ...
math need help please - Find the reciprocal of a. - 1/10 (-4-7) b. ½ / ¼ (hint: ...
math need help please - Find the reciprocal of a. - 1/10 (-4-7) b. ½ / ¼ (hint: ...
math - Find the reciprocal of a. - 1/10 (-4-7) b. ½ / ¼ (hint: solve these first...
Math - I'm having trouble with reciprocal functions. If a graph of a reciprocal ... | {"url":"http://www.jiskha.com/display.cgi?id=1164299960","timestamp":"2014-04-21T08:32:52Z","content_type":null,"content_length":"9766","record_id":"<urn:uuid:42d3d6e2-8f85-43a6-a6af-ccc64eb84e35>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integral. #6
May 27th 2011, 06:26 AM #1
Jan 2009
Integral. #6
Evaluate : $\displaystyle \int_0^1 sin(\alpha x) sin(\beta x) \, dx$
where $\displaystyle \alpha$ and $\displaystyle \beta$ are roots for the equation $\displaystyle 2x=tan(x)$.
Using the forumla $\displaystyle sin(A) sin(B)=\frac{1}{2} ( cos(A-B) - cos(A+B) )$ is not allowed.
Final Answer = 0
Well using that identity is the most obvious and easy way to solve the problem... Why can't you use it?
because the problem say that
Use $\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$ then.
Use integration by parts twice.
$u = \sin(\alpha x) \implies du=\alpha \cos(\alpha x)dx \quad dv=\sin(\beta x)dx \implies v=-\frac{1}{\beta}\cos(\beta x)$
$I= \int\sin(\alpha x) \sin(\beta x)dx$
$I=-\frac{1}{\beta}\sin(\alpha x)\cos(\beta x)+\frac{\alpha}{\beta }\int \cos(\alpha x)\cos(\beta x)dx$
$u = \cos(\alpha x) \implies du=-\alpha \sin(\alpha x)dx \quad dv=\cos(\beta x)dx \implies v=\frac{1}{\beta}\sin(\beta x)$
$I=-\frac{1}{\beta}\sin(\alpha x)\cos(\beta x)+\frac{\alpha}{\beta }\left( \frac{1}{\beta}\sin(\beta x)\cos(\alpha x) + \frac{\alpha}{\beta}\int \sin(\alpha x) \sin(\beta x)dx\right)$
This gives
$\left(1 -\frac{\alpha^2}{\beta^2} \right)I= -\frac{1}{\beta}\sin(\alpha x)\cos(\beta x) +\frac{\alpha}{\beta^2}\sin(\beta x)\cos(\alpha x)$
$-\beta \left(1 -\frac{\alpha^2}{\beta^2} \right)I=\sin(\alpha x)\cos(\beta x)\left[1-\frac{\alpha}{\beta}\frac{\tan(\beta x)}{\tan(\alpha x)} \right]$
Now if you evaluate from 0 to 1 (everything vanishes at $x=0$ because of the $\sin(\alpha x)$ factor), the last factor at $x=1$ gives

$1-\frac{\alpha}{\beta}\frac{\tan(\beta)}{\tan(\alpha)}$

Since alpha and beta satisfy the equation

$2x=\tan(x)$ we get $\frac{\tan(\beta)}{\tan(\alpha)}=\frac{2\beta}{2\alpha}=\frac{\beta}{\alpha}$, so the factor is $1-\frac{\alpha}{\beta}\cdot\frac{\beta}{\alpha}=0$ and the integral equals $0$.
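As a numerical sanity check (mine, not from the thread): take the first two positive roots of tan(x) = 2x and integrate sin(a x) sin(b x) over [0, 1].

```python
import math

def root(lo, hi, f=lambda x: math.tan(x) - 2 * x, tol=1e-13):
    """Bisection on a sign-changing bracket [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

alpha = root(1.0, 1.5)           # first positive root, inside (0, pi/2)
beta = root(math.pi + 0.1, 4.7)  # next root, inside (pi, 3*pi/2)

def trapezoid(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * (f(lo) / 2 + sum(f(lo + i * h) for i in range(1, n)) + f(hi) / 2)

value = trapezoid(lambda x: math.sin(alpha * x) * math.sin(beta * x), 0.0, 1.0)
print(abs(value) < 1e-6)  # True: the integral vanishes
```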
| {"url":"http://mathhelpforum.com/calculus/181798-integral-6-a.html","timestamp":"2014-04-17T21:56:00Z","content_type":null,"content_length":"47221","record_id":"<urn:uuid:6888d283-ddcb-4fdb-902f-6b1a411e3286>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
An ultrafilter is a set of subsets containing exactly one element of each finite partition: reference request
up vote 19 down vote favorite
There are probably dozens of ways of defining "ultrafilter". The definition I've seen most often involves first defining "filter", then declaring an ultrafilter to be a maximal filter.
But there's another, shorter way to state the definition:
Let $X$ be a set. An ultrafilter on $X$ is a set $\mathcal{U}$ of subsets such that for all partitions $$ X = X_1 \amalg \cdots \amalg X_n $$ of $X$ into a finite number $n \geq 0$ of subsets,
there is exactly one $i$ for which $X_i \in \mathcal{U}$.
I'd be amazed if this wasn't in the literature somewhere, but I haven't been able to track it down. Can anyone help?
Actually, there's an even more economical definition: instead of allowing $n$ to be any natural number, you take it to be 3. Thus, the condition is that whenever $X = A \amalg B \amalg C$, exactly
one of $A$, $B$ and $C$ is in $\mathcal{U}$. (The same thing works with 4, or 5, etc., though not with 2.) I'm mostly interested in the version with arbitrary $n$, which seems more natural, but if
you've seen the $n = 3$ version in the literature then I'd like to hear about that, too.
Edit To be clear, when I use the word "partition" I don't mean to imply that the sets $X_i$ are nonempty. I just mean a family of pairwise disjoint sets $X_i$ whose union is $X$. They can be empty.
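As a concrete (if degenerate) illustration — my addition, not part of the question — every ultrafilter on a finite set is principal, and the partition property is then easy to check in code:

```python
from itertools import combinations

X = list(range(4))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

x0 = 2
U = {A for A in subsets(X) if x0 in A}  # the principal ultrafilter at x0

def blocks_in_U(partition):
    """Blocks of the partition (empty ones allowed) that lie in U."""
    return [B for B in partition if frozenset(B) in U]

# Each partition of X contains exactly one block belonging to U:
for partition in ([{0, 1}, {2}, {3}], [{0, 1, 2, 3}], [{0, 3}, {1, 2}, set()]):
    print(len(blocks_in_U(partition)))  # 1 each time
```

The interesting content of the question is of course about infinite sets, where non-principal ultrafilters exist but cannot be exhibited explicitly; the finite case just makes the "exactly one block" condition tangible.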
5 Does my blog count as part of the literature (he said, tongue firmly planted in cheek)? I implicitly give this definition at qchu.wordpress.com/2010/12/09/ultrafilters-in-topology . – Qiaochu Yuan
Jul 4 '11 at 14:04
@Qiaochu: Since this definition is equivalent to the usual, it is "implicitly" everywhere that ultrafilters are discussed. – Kevin O'Bryant Jul 4 '11 at 14:39
3 By the way, Qiaochu, I don't think you need to plant your tongue in your cheek. In my opinion, blog posts (and MO answers) should be taken absolutely seriously when it comes to attributing
priority. On the other hand, I'd bet a large amount that this characterization of ultrafilters was found well before either of us was born. – Tom Leinster Jul 4 '11 at 15:36
2 @Tom: great. I never win those contests anyway. :) – Pete L. Clark Jul 4 '11 at 23:01
1 @Pete: write $X=X_1\sqcup X_2\sqcup X_3$ where $X_1=X$ and $X_2=X_3=\emptyset$. For a unique $i$ we have $X_i\in\mathcal{U}$, so necessarily $i=1$ (otherwise uniqueness would fail). So $\emptyset\
notin\mathcal{U}$. – Yves Cornulier May 7 '13 at 14:20
3 Answers
You can find that characterization (even with n = 3), as well as the generalization to $\kappa$-complete ultrafilters, in: Fred Galvin and Alfred Horn, Operations preserving all equivalence relations, Proc. Amer. Math. Soc. 24 (1970), 521-523.
Welcome to MathOverflow! Gerhard "Ask Me About System Design" Paseman, 2013.05.07 – Gerhard Paseman May 7 '13 at 7:27
Excellent. Thanks very much. For the sake of precision, let me add that they don't quite do the case $n=3$ in the sense described in my question. They do show that if a set $\mathcal
{U}$ of subsets of $X$ satisfies the partition condition for all $n\leq 3$, then $\mathcal{U}$ is an ultrafilter. But they seem not to observe that it suffices to require it for $n=
3$ (which implies it for $n=0,1,2$). Quite possibly they knew it but just didn't think it was worth mentioning. – Tom Leinster May 7 '13 at 19:29
PS to Butch: I'll acknowledge you when I update the paper for which I needed this, arxiv.org/abs/1209.3606. Forgive the impertinence, but is Butch Malahide your real name? Feel free
to contact me by email. (I'd contact you myself, but there's no address on your profile.) – Tom Leinster May 7 '13 at 19:32
Tom, I've seen postings by a Butch Malahide on other fora, which may help you get in touch with him/her/it. In this day and age, referring to MathOverflow users by number should be
socially acceptable. Gerhard "User 3528. Aliases 3206, 3371..." Paseman, 2013.05.07 – Gerhard Paseman May 7 '13 at 20:57
The alternate formulation is closely related to the following fundamental definition from Ramsey Theory
Definition: Let $\phi : 2^X \to \lbrace\text{true},\text{false}\rbrace$ be a property pertaining to subsets of the set $X$. The property $\phi$ is called partition regular if, for every
partition $$X = X_1 \uplus X_2 \dots \uplus X_n $$ we have $\phi(X_i)$ for at least one $i$.
Clearly, every ultrafilter corresponds to a partition regular property, $\phi(Y) = Y\in\mathcal U$. In the other direction, it is a reasonably easy exercise to show that every partition regular property is given by a collection of ultrafilters $\phi(Y) = \bigvee \lbrace Y \in \mathcal U : \mathcal U\rbrace$. See for example theorem 3.11 in Hindman & Strauss "Algebra in the Stone-Čech compactification".
That said, I've never seen the formulation with fixed $n$, like $n=3$, before.
Thanks, Greg. To save others wondering: by Bool you mean a two-element set {true, false}, if I'm not mistaken. (My first guess was that you meant the category of Boolean algebras.) – Tom
Leinster Jul 4 '11 at 14:44
@Tom Leinster. Ah, yes. I've changed it to the more apparent version. – Greg Graviton Jul 4 '11 at 17:54
Google's first page gave me:
Galvin course notes Cor. 2.7...
Qiaochu Yuan blog
Maybe more on the second page?
Gerald, either I'm not seeing what you're seeing or you're misunderstanding my question. Cor 2.7 of Galvin states that if a set of subsets of X is an ultrafilter, then it satisfies the
partitioning property that I mentioned. But it doesn't state the converse - which is the less obvious half. Re Qiaochu's post, see the comments below my question. – Tom Leinster Jul 4
'11 at 17:04
Lemma 2.10 states it both ways, in a slight disguise. – Emil Jeřábek Jul 4 '11 at 17:59
Thanks, Emil. That's pretty close, though it's not a totally trivial disguise, since in Defn 2.9 he doesn't constrain the union of A_1, ..., A_n to be X. Of course there's nothing
difficult about the equivalence between the definition I mentioned and the standard one. It's just a matter of finding a reference where someone has actually said it. – Tom Leinster Jul
4 '11 at 18:11
printing error
01-24-2002 #1
Registered User
Join Date
Nov 2001
hi there
my prog prompts for the number of students in a class and then for a mark for each of them. Then it is supposed to display a histogram of how many students have the same mark, as a bar chart. I am
having problems on the displaying part. I think it is probably the way I declared the variables but I am not sure anymore. Please, any ideas?
#include <stdio.h>

/* prints n asterisks and returns n */
int asterix(int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%c", '*');
    return n;
}

int main(void)
{
    int num;
    int i;
    int students[20];
    int marks[5] = {0, 0, 0, 0, 0};

    printf("How many students are in the class? :");
    scanf("%d", &num);

    for (i = 0; i < num; i++) {
        printf("Enter the marks for the student %d :", i + 1);
        scanf("%d", &students[i]);

        if (students[i] == 0) { marks[0]++; }
        else if (students[i] == 1) { marks[1]++; }
        else if (students[i] == 2) { marks[2]++; }
        else if (students[i] == 3) { marks[3]++; }
        else if (students[i] == 4) { marks[4]++; }
    }

    /* bug: asterix() prints its stars as a side effect BEFORE printf()
       gets to print i, so the bar comes out in front of the mark number */
    for (i = 0; i < 5; i++)
        printf("%d\t %d\n", i, asterix(marks[i]));

    return 0;
}
the output now is this:
How many students are in the class? :5
Enter the marks for the student 1 :1
Enter the marks for the student 2 :1
Enter the marks for the student 3 :1
Enter the marks for the student 4 :1
Enter the marks for the student 5 :1
*****1 1
It is supposed to be like this:
How many students are in the class? :5
Enter the marks for the student 1 :1
Enter the marks for the student 2 :1
Enter the marks for the student 3 :1
Enter the marks for the student 4 :1
Enter the marks for the student 5 :1
1 *****
hello. I really didn't solve any of your problems.....I got real bored and I thought that your idea was neat, so I made my own program very similar to yours....just because I'm real new to programming and wanted to know if I could repeat a similar effect without looking at your code...I did.....mine prints out properly....its structure is a lot different from yours and probably a bit simpler....here it is.
#include <stdio.h>

int main(void)
{
    int a = 0, b = 0, c = 0, d = 0, e = 0;
    int students;
    int x = 1;
    char answer;

    printf("\n\t\t\t* The Clicker Poll *");
    printf("\n\n\n\t\t\t Only A,B,C,D, or E ");
    printf("\n\n\n\tNumber of Students that Clicked in : ");
    scanf("%d", &students);

    /* tally one answer per student */
    while (students > 0) {
        printf("\tStudent #%i Answered: ", x);
        scanf(" %c", &answer);
        if ((answer == 'A') || (answer == 'a')) { a++; }
        if ((answer == 'B') || (answer == 'b')) { b++; }
        if ((answer == 'C') || (answer == 'c')) { c++; }
        if ((answer == 'D') || (answer == 'd')) { d++; }
        if ((answer == 'E') || (answer == 'e')) { e++; }
        students--;
        x++;
    }

    /* one row of asterisks per answer */
    printf("\n\n\t\t\t ** GRAPHED **");
    printf("\n Clicked A (%d Responses): ", a);
    while (a > 0) { printf("*"); a--; }
    printf("\n Clicked B (%d Responses): ", b);
    while (b > 0) { printf("*"); b--; }
    printf("\n Clicked C (%d Responses): ", c);
    while (c > 0) { printf("*"); c--; }
    printf("\n Clicked D (%d Responses): ", d);
    while (d > 0) { printf("*"); d--; }
    printf("\n Clicked E (%d Responses): ", e);
    while (e > 0) { printf("*"); e--; }

    printf("\n\n\t\t...press any key to continue...");
    getchar();
    getchar();

    return 0;
}
The output looks like:
* The Clicker Poll *
Only A,B,C,D, or E
Number of Students that Clicked in : 6
Student #1 Answered: A
Student #2 Answered: C
Student #3 Answered: D
Student #4 Answered: D
Student #5 Answered: E
Student #6 Answered: A
** GRAPHED **
Clicked A (2 Responses): **
Clicked B (0 Responses):
Clicked C (1 Responses): *
Clicked D (2 Responses): **
Clicked E (1 Responses): *
you get the idea...hope you get yours to work. sorry I couldn't offer much assistance; I could never get yours to compile right for me...
....come to think about it........I added a new special effect to mine....something like:
using #include <windows.h> to access the Sleep function and then adding a beep....so the program beeps when the graph is made, and the bars kind of quickly asterisk themselves across each one...makes it look kind of neat... just a thought.
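Here is a rough sketch of that effect (the function name and the delay value are my own choices; I've used POSIX usleep() so it builds anywhere, with Sleep() from <windows.h> being the Windows equivalent described above):

```c
#include <stdio.h>
#include <unistd.h>   /* usleep(); on Windows use Sleep() from <windows.h> */

/* Beep once, then print n asterisks one at a time with a short pause
   between them, so the bar "draws itself" across the line.
   Returns the number of asterisks printed. */
int animated_bar(int n, int delay_ms)
{
    int i;
    printf("\a");                   /* terminal bell */
    for (i = 0; i < n; i++) {
        printf("*");
        fflush(stdout);             /* show each star immediately */
        usleep(delay_ms * 1000L);   /* milliseconds -> microseconds */
    }
    return n;
}
```

Calling something like animated_bar(a, 100) in place of each of the graphing while-loops above would reproduce the effect.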
01-24-2002 #2
01-24-2002 #3 | {"url":"http://cboard.cprogramming.com/c-programming/9449-printing-error.html","timestamp":"2014-04-19T15:48:46Z","content_type":null,"content_length":"50567","record_id":"<urn:uuid:a9fc9a54-bc29-4365-b525-9d1dfdde123f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: Beginner birds
> > ...why is the path loss lower in Mode A than in Mode B/J?
The equation for path loss is:
Loss = ( 4 * Pi * R / wavelength )^2, where R is the range in meters.
So space loss goes down as the square of wavelength. That is why 10m is
23 dB better, and 2m is 9 dB better than 70 cm and so on. This
equation is fundamental to all satellite communications. It is EASY to
figure out a link. The power received at a receiver is just the power
transmitted plus antenna gains minus all losses.
Of course, that relationship assumes the SAME kind of antenna as the
baseline (let's say a dipole). It is obvious when you think about it
because, as Tony said, a 10m dipole collects the signal from 200 times
more area than a 70 cm dipole.
But conversely, it is EASIER to get high gain as you go up in frequency.
In fact you get it all back if your antennas cover the same AREA. Thus
a 10' TV dish will have the SAME link budget to a satellite using an
OMNI antenna no matter what frequency is used... (within reason;
usually it needs to be 10 wavelengths across or more)...
Tony said it this way...
> The capture area increases as the frequency is lowered. 2m has 9x the
> capture area of 70cm, and 10m has over 20x the capture area of the
> equivalent 2m antenna. This adds considerably to the link budget.
Sent via amsat-bb@amsat.org. Opinions expressed are those of the author.
Not an AMSAT member? Join now to support the amateur satellite program!
To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org | {"url":"http://www.amsat.org/amsat/archive/amsat-bb/200307/msg01066.html","timestamp":"2014-04-16T10:45:58Z","content_type":null,"content_length":"4899","record_id":"<urn:uuid:050e3eb4-d27e-4548-b304-108cea8689f0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Katherine on Wednesday, November 10, 2010 at 5:02pm.
Kyle had 36 books in his locker. Some were library books, some were textbooks, and the rest were telephone books. The number of library books and textbooks combined equals twice the number of textbooks. The number of textbooks and telephone books combined equals three times the number of library books. How many of each type of book were in Kyle's locker??
please help me....I need a chart or a rule to show me how you got it....please
• math - Mrs W, Wednesday, November 10, 2010 at 6:12pm
Do it this way
a = library books
b = textbooks
c = telephone books
You know a + b + c = 36
You know that a + b = 2b (library books plus textbooks equals twice number of textbooks)
You know that b + c = 3a (textbooks plus telephone books equals three times number of library books)
Take equation #2: a + b = 2b
Subtract b from each side:
a = b. The number of library books equals the number of textbooks.
Take equation #3: b + c = 3a. Substitute a for b, since you know from equation #2 that a = b. Now you have a + c = 3a.
Subtract a from each side: c = 2a
Now, go to equation #1 a+b+c=36
You know a=b
You know c=2a
a + a + 2a = 36
Combine like terms and divide
4a = 36, so a = 9
Again, you know a = b and c = 2a; therefore a = 9, b = 9, c = 18.
9 library books
9 text books
18 telephone books
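Mrs W's algebra can also be double-checked by brute force over all ways of splitting the 36 books (the function name is mine):

```c
/* Search all splits of `total` books for one satisfying both clues:
   a + b = 2b (library + text = twice the textbooks) and
   b + c = 3a (text + phone = three times the library books).
   Writes the answer into a, b, c and returns 1 if one is found. */
int solve_books(int total, int *a, int *b, int *c)
{
    int x, y, z;
    for (x = 0; x <= total; x++)
        for (y = 0; x + y <= total; y++) {
            z = total - x - y;          /* the rest are telephone books */
            if (x + y == 2 * y && y + z == 3 * x) {
                *a = x; *b = y; *c = z;
                return 1;
            }
        }
    return 0;
}
```

The search turns up a = 9, b = 9, c = 18, agreeing with the algebra.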
Related Questions
math - kyle had 36 books in hislocker. some were library books, some were ...
math - kyle had 36 books in hislocker. some were library books, some were ...
Math, Please help :/ - Kyle had 36 books in his locker. Some were library books...
math - MEghan had 36 books in her locker. Some were library books, text books ...
English - 1. We should not scribble in books in the library. 2. We should not ...
math - the library has a total collection of 2630 books. the number of non-...
math - the library has a total collection of 2630 books. the number of non-...
math - the library has a total collection of 2630 books. the number of non-...
Maths - 20% of the books in a library were fiction books while the rest were non...
algebra - John has 36 books in his locker. He has text books,library books, and ... | {"url":"http://www.jiskha.com/display.cgi?id=1289426576","timestamp":"2014-04-20T07:08:46Z","content_type":null,"content_length":"9409","record_id":"<urn:uuid:8e0108d2-4d00-4331-bf58-7979d39f5edb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
TigerWare Online For Windows
Windows Mac Linux
Convert 4.10
Convert is the only unit conversion tool you'll ever need. It includes fast and simple conversions for distance, temperature, volume, time, speed, mass, power, density, pressure, energy and many others. In addition, you can also add your custom conversions.
Available to: LSU Community
FreeMat 4.1.1
FreeMat is a free environment for rapid engineering and scientific prototyping and data processing. It is similar to commercial systems such as MATLAB from Mathworks, and IDL from Research Systems,
but is Open Source. FreeMat includes several novel features such as a codeless interface to external C/C++/FORTRAN code, parallel/distributed algorithm development (via MPI), and plotting and
visualization capabilities.
Available to: LSU Community
LSU Virtual Computer Lab
LSU students have access to free virtually hosted computer lab software from anywhere in the world using VLAB.
Available to: Students, Faculty, Staff
Maple 16
The ultimate productivity tool for solving mathematical problems and creating interactive technical applications.
Maple 16.02a, a maintenance update, is available to all users running Maple 16. This update corrects a 2-D plotting problem which, under rare circumstances, can cause a blank plot to be produced
instead of the desired curve.
Available to: Faculty, Staff, Virtual Lab
Maple 17
The ultimate productivity tool for solving mathematical problems and creating interactive technical applications
Available to: Faculty, Staff, Virtual Lab
Maple 18
The ultimate productivity tool for solving mathematical problems and creating interactive technical applications
Available to: Faculty, Staff
Mathematica 7.0.1
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used. This now includes Wolfram Lightweight Grid Manager.
Available to: Faculty, Staff
Mathematica 7.0.1 for Students
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used.
Available to: Students
Mathematica 8.0.4
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used.
Available to: Faculty, Staff
Mathematica 8.0.4 for Students
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used.
Available to: Students, Faculty, Staff
Mathematica 9.0.0
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used.
Available to: Faculty, Staff
Mathematica 9.0.0 for Students
From simple calculator operations to large-scale programming and interactive-document preparation, Mathematica is the tool of choice at the frontiers of scientific research, in engineering analysis
and modeling, in technical education from high school to graduate school, and wherever quantitative methods are used.
Available to: Students, Faculty, Staff
MatLab R2007A
MATLAB is a numerical computing environment and programming language. Created by The MathWorks, MATLAB allows easy matrix manipulation, plotting of functions and data, implementation of algorithms,
creation of user interfaces, and interfacing with programs in other languages. Although it specializes in numerical computing, an optional toolbox interfaces with the Maple symbolic engine, allowing
it to be part of a full computer algebra system.
Available to: Student Labs
SAGE 5.10
SAGE is free open source math software that supports research and teaching in algebra, geometry, number theory, cryptography, numerical computation, and related areas.
Available to: Students, Faculty, Staff
GROK Online Tech Support
Student Virtual Lab
To view and subscribe to a list of recently updated applications, click here.
To request a software item please click here.
If you have any comments or suggestions please submit them here.
• Pismo for mounting an .ISO.
• ImgBurn to burn a CDs/DVDs from .iso.
Clear your Web Browser cache if you are experiencing problems downloading from TigerWare. If the problem persists, contact the Help Desk at 578-3375 or helpdesk@lsu.edu.
Any software listed on TigerWare covered for Faculty and Staff is also covered for lab installations, as long as the lab is owned and maintained by Faculty and Staff. | {"url":"https://tigerware.lsu.edu/list.aspx?id=172","timestamp":"2014-04-18T00:12:32Z","content_type":null,"content_length":"41718","record_id":"<urn:uuid:f416e707-0e51-48a0-bb76-990ca8739e5c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
patch of a torus
January 28th 2008, 03:03 AM #1
Let $C$ be the circle in the x-z plane with radius $r>0$ and center (R,0,0). A torus of revolution is obtained by revolving $C$ about the z-axis. Show that the patch is given by $x(u,v)=((R+r\cos u)\cos v, (R+r\cos u)\sin v, r\sin u)$.
Ahmmm, a patch is just a mapping $x: \mathbb{R}^{2} \to \mathbb{R}^{3}$ that parametrizes the whole surface.
I've attached a drawing showing how to calculate the coordinates of the points of the surface of a torus.
1. The black circle in the x-y-plane, of radius R, is the path of the center of the revolving circle. In the picture of the torus I've chosen R = 5.
2. The red circle has the radius $\rho = R+r \cdot \cos(u)$. I've taken r = 3.
With $\rho$ and the angle v you can calculate the x- and y-coordinates of the points.
3. The z-coordinate of all points only depends on r and the angle u.
Last edited by earboth; January 29th 2008 at 05:20 AM.
January 29th 2008, 04:55 AM #2 | {"url":"http://mathhelpforum.com/calculus/27001-patch-torus.html","timestamp":"2014-04-18T08:36:46Z","content_type":null,"content_length":"36410","record_id":"<urn:uuid:0eeebc02-0daf-4102-973e-567ba27468e6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric Infinity-Function Theory
Journal Club – Geometric Infinity-Function Theory – Week 1
Posted by Urs Schreiber
In our Journal Club on geometric $\infty$-function theory, this first official week starts with Alex Hoffnung talking about section 1 of “Integral Transforms”.
This is to get us going and hopefully also reduce the intimidation level. If it looks interesting, have a look at our schedule. We are still looking for volunteers who would like to have a look at
section 4,5, and 6 of “Integral Transforms” and write some kind of report for us all, to start further discussion.
Journal club post by Alex Hoffnung
I have not been asked to write a “book report” since 3rd or 4th grade. Somehow this was more difficult than I remember.
I am going to attempt to report on the introductory sections of the Ben-Zvi, Francis, Nadler (BZFN) paper. So how does one describe an introduction which already gives a detailed section by section
description of the whole of the paper? Let’s find out. (Here is a hint: restate or copy shamelessly much of what the authors have already kindly told us.)
Let me start by saying what BZFN are attempting to do. The main applications stated are, in their simpler form, the calculation of Drinfeld centers of monoidal categories of sheaves and the construction of topological field theories.
The reason to first state the applications which seem to be way off in our future is just to get down the jargon and then put it to the side for a while, hopefully lessening our burden while we
trudge through the preliminaries.
So what is a Drinfeld center (of a monoidal category (of sheaves)) and roughly how does one calculate it? I guess (from reading the abstract) that the Drinfeld center should be an $\infty$
-subcategory of the category of quasicoherent sheaves and is the same as the Hochschild cohomology category. Of course, this still leaves a lot of explaining to do. Hopefully, someone who wants to
report on Section 5 can jump in here and explain this. If not, I will come back later and try to understand/explain these things.
Second, what are topological field theories (TFT’s) and roughly how does one construct such a thing? A TFT is a symmetric monoidal functor from a certain cobordism $\infty$-category to a symmetric
monoidal $\infty$-category. In Section 6, we will see an explicit construction of 2d TFT where the circle is sent to the $\infty$-category of quasicoherent sheaves on the loop space of a perfect
stack. Similarly, I hope this can be a jumping off point for someone to talk about Section 6.
Having vaguely introduced the applications, what are the main results that this paper will present as our tools? We want to identify the category of sheaves on a fiber product with two algebraic constructions:
i) the tensor product of the categories of sheaves on the factors,
ii) the category of linear functors between the category of sheaves on the factors.
Here the authors tell us that (ii) allows us to realize functors as integral transforms. So far I have been laying out an outline for this paper where we consider these factors mentioned above to be
schemes or stacks (except maybe briefly in the applications). Let's quickly finish the story for this case, make note of the fact that we want to take everything above and make it a bit fancier by
considering derived stacks, then use the comment above on (ii) to consider an even simpler example which DBZ was kind enough to explain in a previous post. Hopefully, this will help us find some
intuition for why and how we want to understand (i) and (ii) before getting scared off by things like derived stacks.
If we let X be a scheme or a stack, then we can obtain a stable $\infty$-category QC(X), whose homotopy category is the unbounded quasicoherent derived category $D_{qc}(X)$. QC(X) becomes a symmetric
monoidal stable $\infty$-category. So we want to calculate the $\infty$-categories built out of QC(X) from tensor products, linear functors and other algebraic constructions in terms of geometry on X.
So that last paragraph (taken almost word for word from the paper) is about where my boundaries begin to be pushed. Now I think the best thing to do is take a step back and remember that David was
kind enough to write us a short note on geometric function theory a while back. In the interest of keeping this self-contained and accessible to as many people as possible (including me) we should
recap the lessons of this note.
The toy model: function theory on finite sets. This is a great introduction to some of the main characters in our story, in particular, spans or correspondences, the push-pull, integral transforms,
functions on (fiber) products and maps between different structures of functions on algebraic objects (in this case sets). Eventually functions will mean quasicoherent sheaves and sets should mean
something like scheme, stack, groupoid, or derived stack.
We start by considering a span of finite sets
(1)$X\leftarrow Z \rightarrow Y$
and, for instance, the vector spaces of complex functions on each of these sets. It is very natural to pull back a function on X to a function on Z by the function from Z to X. Fortunately, we can
also push-forward a function on Z to a function on Y by summing over fibers, allowing us to obtain linear maps from $\mathbb{C}[X]$ to $\mathbb{C}[Y]$. This turns out to be just a special case of a
richer phenomenon. What I have just described doesn’t fully exploit the span. We can choose a function on Z and when summing over fibers of elements of Y convolve by this function, thus obtaining a
new linear map from $\mathbb{C}[X]$ to $\mathbb{C}[Y]$. My first description implicitly involved convolution with the constant function $1$ on Z. So, in general, functions on Z are integral
transforms taking functions on X to functions on Y for any span of the sort given above.
Now, Ben-Zvi goes further to explain the how products play into this game. (I should mention that this story of the toy model was first told to me by Tony Licata) This will give us a nice base to
think about the more difficult theorems we are attempting to understand in this journal club. But first, I would like to at least make a small excursion to mention the relation to groupoidification
(since I have a vested interest in the subject and it has been discussed here and here extensively). This toy model which we intend to extend to geometric $\infty$-function theory is also a toy
example for groupoidification. Spans of sets are just really boring examples of spans of groupoids and using the concept of groupoid cardinality described at the links above and zeroth (co)homology
of groupoids, we can employ the same pull-push operation and convolution to obtain linear maps. Also, one could continue reading Ben-Zvi’s note to consider functions on finite orbifolds and leading
up to his own synopsis of the paper we are currently reading.
Lets understand Ben-Zvi’s comments on products and then get back to the paper at hand. Given finite sets X and Y, there is of course a nice span
(2)$X \leftarrow X \times Y \rightarrow Y$
and using the construction above we get
(3)$\mathbb{C}[X\times Y] \cong Hom_{\mathbb{C}}(\mathbb{C}[X],\mathbb{C}[Y])$
between integral kernels and linear transformations.
When we have a cospan
(4)$X \rightarrow Y \leftarrow X'$
we get a relative version of this construction
(5)$\mathbb{C}[X\times_{Y} X'] \cong Hom_{\mathbb{C}[Y]}(\mathbb{C}[X],\mathbb{C}[X'])$
The theorems we intend to try to understand use an exact analogue of this correpondence.
So now we should take the plunge into derived stacks and try to repeat the story just told.
1) stacks and higher stacks arise from quotients (or more general colimits) on schemes
2) derived stacks arise from fiber products (or more general limits) on schemes and stacks
I would like to eventually include some links to definitions and examples here. Any help with that would be great.
Now we begin repeating what we have said above in this derived setting. To a derived stack X, we assign a stable $\infty$-category QC(X) in a manner which extends the definition for an ordinary stack
or scheme.
In particular, if X = Spec R is an affine derived scheme, then QC(X) is the $\infty$-category of R-modules $Mod_R$, whose homotopy category is the usual unbounded derived category of R-modules. As
before, the tensor product provides QC(X) with the structure of a symmetric monoidal stable $\infty$-category.
Now we should state the generalized versions of the previously stated applications and main results. The main applications are the calculation of the Drinfeld centers (and higher $\mathbb{E}_n$-centers) of $\infty$-categories of sheaves and functors. BZFN give as an example the identification of the Drinfeld center of the quasicoherent affine Hecke category with sheaves on the moduli of local systems on a torus. As a second application, they explain how the results obtained fit into the framework of 3-dimensional topological field theory (of Rozansky-Witten type). In particular, BZFN verify categorified analogues of the Deligne and Kontsevich conjectures on the $\mathbb{E}_n$-structure of Hochschild cohomology.
We are clearly back to the land of scary. So I think the best thing to do is to follow BZFN and give a brief overview of each section, which can serve as an outline to be filled in over the course of
this journal club.
Why perfect stacks? Urs has begun to tackle this question here. We will be mainly interested in studying the $\infty$-category $QC(X)$, for X a derived stack, but this is in general too unwieldy algebraically. The problem is that it may contain objects that cannot be constructed in terms of concrete, locally finite objects. The “perfect” solution is to consider a smaller $\infty$-subcategory $QC(X)^\cdot$ of “generators” which are “finite” in some sense. This leads us to the idea of an ind-category. This is a nice idea, where one considers a small category of “manageable” objects which has diagrams whose inductive limits are morally the larger objects, which may not be quite so manageable. Of course, these objects cannot be limits of any sort, since they do not live in this smaller “manageable” category, so one considers the diagrams themselves as placeholders for these objects. There is another approach which involves the vanishing of right orthogonals, and I do not really understand this. Maybe someone can step in here and give me a hand.
In Section 3.1, BZFN discuss appropriate notions of “finiteness”
-perfect objects (geometry)
-dualizable objects (monoidal structure)
-compact objects (categorical structure)
I would like to discuss how these relate to the corresponding notions in parentheses, but I am not quite sure how this goes yet.
A quasicoherent sheaf $M\in QC(X)$ is a PERFECT COMPLEX if it locally restricts to a perfect module (a finite complex of free modules). Equivalently, M is DUALIZABLE with respect to the monoidal
structure on QC(X). We (or maybe I) should probably create nlab pages for the capitalized words.
We are now in a position to define a PERFECT derived stack X. The two conditions are that it has an affine diagonal and that QC(X) is given as the inductive limit of the full $\infty$-subcategory of
perfect complexes. Then we can, of course, define PERFECT morphisms.
On a perfect stack X, compact objects of QC(X) are the same as perfect compexes (which are the same as dualizable objects).
I think there should be some discussion on the importance of compact objects. Hopefully, I can come back and include that here soon.
So we can reformulate the definition of a perfect derived stack as having an affine diagonal, QC(X) being compactly generated (i.e. no right orthogonal to the compact objects) and that compact and
dualizable objects coincide.
BZFN remark that compactly generated categories can be expressed as categories of modules.
Moving on to tensors and functors (Section 4), we can see analogues of the constructions in the toy example. For X, X' schemes over a ring k, the dg category of k-linear continuous (colimit-preserving) functors is equivalent to the dg category of integral kernels:
(6)$Fun_k(QC(X),QC(X')) \cong QC(X\times_k X')$
(a theorem of Toen). A similar result holds for dg categories of perfect (equivalently, bounded coherent) complexes on smooth projective varieties.
The main technical results are as follows:
- the tensors and functors of $\infty$-categories of quasicoherent sheaves are calculated in the symmetric monoidal $\infty$-category $Pr^L$ of presentable $\infty$-categories with morphisms left
adjoints (Section 2)
- the tensors and functors of $\infty$-categories of perfect complexes are calculated in the symmetric monoidal $\infty$-category Idem of k-linear idempotent-complete stable small $\infty$-categories
(Section 4.1)
Finally we want to state the theorems:
For $X \rightarrow Y \leftarrow X'$ maps of perfect stacks, there is a canonical equivalence
(7)$QC(X\times_Y X') \cong QC(X)\otimes_{QC(Y)} QC(X')$
between the categories of sheaves on the derived fiber product and the tensor product of the $\infty$-categories of sheaves on the factors.
There is also a canonical equivalence
(8)$Perf(X\times X') \cong Perf(X)\otimes Perf(X')$
for $\infty$-categories of perfect complexes.
For $X\rightarrow Y$ a perfect morphism to a derived stack with affine diagonal and $X' \rightarrow Y$ arbitrary, there is a canonical equivalence
(9)$QC(X\times_Y X') \cong Fun_{QC(Y)}(QC(X),QC(X'))$
between the $\infty$-category of sheaves on the derived fiber product and the $\infty$-category of colimit-preserving QC(Y)-linear functors.
When X is a smooth and proper perfect stack, there is also a canonical equivalence
(10)$Perf(X\times X') \cong Fun(Perf(X),Perf(X'))$
for $\infty$-categories of perfect complexes.
The rest of this section gives an overview of the applications of these results. We will get to this soon.
I have attempted to recap what I read in the introduction here. My main goal was to provide a template for our journal club. I hope that we can transport parts of this over to the nLab and fill in
the blanks with more detailed exposition as we go. I had some technical trouble, such as using subscripts. Maybe someone can tell me what I am doing wrong.
Posted at April 27, 2009 4:48 PM UTC
Re: Journal Club – Geometric Infinity-Function Theory – Week 1
This is Bruce checking in for the Monday Seminar. Thanks Alex, that’s a great template for a first post, and I like the undercurrent of humour in your style!
The good news about this Introduction section is that (a) as you say, David Ben-Zvi has kindly sketched out the basic idea in the geometric function theory notes, and (b) there is a “Preliminaries”
section which describes in a bit more detail (but not totally scary detail) the stuff which was described in this Introduction section. Urs will be covering this section next week Monday.
While I look up stuff like “ind-category” and “right-orthogonal” on the nLab, let me say what I got out this time from reading the Introduction section again: the last section on Rozansky-Witten
theory was great, and solved some conceptual puzzles I’ve always had — at least, one day when I truly understand what things like “Spec Sym $\Omega_X[1]$” are.
But simply put, what always confused me with Rozansky-Witten theory was the following. Rozansky-Witten theory is meant to be a 3d extended TQFT based on a nice complex manifold $X$. So it assigns a
category to the circle. From abstract 2-categorical nonsense, we believe that the category assigned to the circle in an extended TQFT should be the “center” or “dimension” or “looping” of the
2-category assigned to the point:
(1)$Z(S^1) \cong dim Z(pt).$
Why do we believe that? Well, just because the circle looks like a little “loop”.
In Rozansky-Witten theory, the category assigned to the circle is the derived category of coherent sheaves on $X$:
(2)$Z(S^1) = D(X).$
That’s a bit weird though, because we know that $Z(S^1)$ is supposed to be some kind of looping of $Z(pt)$, and we don’t see any “loops” in $D(X)$. What’s going on?
In the paper we are learning about, David Ben-Zvi, John Francis and David Nadler show that there is another theory which I guess is what happens if you plug in the basic ideas of Rozansky-Witten
theory into their big machinery and turn the crank. What pops out is not exactly Rozansky-Witten theory. But the thing it assigns to the circle is very closely related to what ordinary
Rozansky-Witten theory assigns to the circle. In fact, we have:
(3)$Z_{Ben-Zvi, Francis, Nadler} (S^1) = Z_{Rozansky-Witten} (S^1)$
up to “completion of the latter along the zero section”. The great thing about this new theory, though, is that it has all the “loops” correctly filled in; in fact the authors will prove in an upcoming paper that $Z(S^1)$ represents the “looping” of the 2-category assigned to the point. The bad thing about it is that apparently it only works at the level of 0-, 1- and 2-manifolds; it can’t quite be defined for 3-manifolds, for reasons I don’t understand.
Posted by: Bruce Bartlett on April 27, 2009 5:01 PM | Permalink | Reply to this
Re: Journal Club – Geometric Infinity-Function Theory – Week 1
Thanks, Alex!
I’ll try to reply in more detail later. For the moment just this small comment:
Now I think the best thing to do is take a step back and remember that David was kind enough to write us a short note on geometric function theory a while back.
Yes! So one main ingredient here is that we want to be looking at the $\infty$-version of finite-dimensional linear algebra under the following dictionary:
- finite sets $S$ are replaced by generalized spaces aka derived $\infty$-stacks $X$;
- vector spaces $[S,\mathbb{R}]$ of functions $S \to \mathbb{R}$ on finite sets are replaced with $(\infty,1)$-categories $[X, QC]$ of some kind of $\infty$-vector bundles – called $QC$ here! –
- we prove that, just as linear maps between finite-dimensional vector spaces $[S,\mathbb{R}] \to [S',\mathbb{R}]$ are given by matrices, suitably well-behaved $\infty$-functors $[X, QC] \to [X', QC]$ are given by “$\infty$-matrices” called integral transforms.
- and then we check how the familiar notions of trace and center generalize from finite dimensional endomorphisms to such integral transforms and find that they yield lots of nice relations to
Hochschild cohomology.
Have to run now…
Posted by: Urs Schreiber on April 27, 2009 5:05 PM | Permalink | Reply to this
Re: Journal Club – Geometric Infinity-Function Theory – Week 1
Now finally have a bit of time to look into this. I’ll read Alex’s report and comment where comments come to mind.
So here goes:
I guess (from reading the abstract) that the Drinfeld center
We need to start an $n$Lab-entry [[Drinfeld center]].
should be an $\infty$-subcategory of the category of quasicoherent sheaves
We might maybe remark that the construction is more general:
For every “$(\infty,1)$-monoid” defined to be an algebra object $A$ in a [[closed monoidal symmetric $(\infty,1)$-category?]] there is a notion of [[derived center?]]
$Z(A) = End_{A \otimes A^*}(A)$
and this generalizes/expands the notion of [[Drinfeld center?]] of a monoidal category.
Eventually (like from page 34 on) we are interested in setting
$A = \infty Vect(X) \,,$
the $(\infty,1)$- (or is it $(\infty,0)$-?)category of some flavor of $\infty$-vector bundle-like things on a generalized space $X$.
The center does (see page 33 of “Integral Transform” and page 19 of “Character Theory”) come with a canonical morphism
$Z(A) \to A \,.$
Is that an inclusion in a suitable sense? One would hope it is, but I don’t know at the moment. Do you?
Hopefully, someone who wants to report on Section 5 can jump in here and explain this.
Yes, that would be great. Now that we already have a fourth volunteer doing section 4!
By the way, I had made some feeble attempts to prepare the ground for this section 5 a bit by creating
to be read eventually in the context of
So we want to calculate the $\infty$-categories built out of $QC(X)$ from tensor products, linear functors and other algebraic constructions in terms of geometry on $X$.
So that last paragraph (taken almost word for word from the paper) is about where my boundaries begin to be pushed.
Right, this is one thing we want to sort out in detail eventually.
Just for the record, notice that an attempt at a clean crisp statement about
built out of $QC(X)$ from tensor products, linear functors and other algebraic constructions
is at $n$Lab: geometric function theory, when you scroll down to the section “general $\infty$-categorical / homotopical setup”.
I don’t think it will be too hard to get this under reasonable detailed control eventually.
It’s to a large extent the familiar symbol manipulation; the main and crucial new thing that keeps coming up again and again is the fact (in one incarnation or other) that the homotopy pullback of a point in $X$ along itself is not the point, as it is in the 1-dimensional theory, but is the loop space of $X$:
$\array{ \Omega X &\to& * \\ \downarrow && \downarrow \\ * &\to& X } \,.$
Or equivalently: the fact that, whenever we are to compute a pullback, we are allowed to replace all points by big puffed-up contractible spaces, which we may think of as total spaces of generalized universal bundles.
We start by considering a span of finite sets
Cool. By the way, why does no one from the groupoidification crew feel like adding some introduction and statements about it at groupoidification? Maybe just copy-and-pasted from existing
introductions hidden somewhere on the blog?
That would have saved you from typing the next few paragraphs.
Well, or the other way round: some paragraphs like this should eventually be copied and transmuted into that $n$Lab entry. I think.
1) stacks and higher stacks arise from quotients (or more general colimits) on schemes
2) derived stacks arise from fiber products (or more general limits) on schemes and stacks
I would like to eventually include some links to definitions and examples here. Any help with that would be great.
Some definitions exist on the $n$Lab, and more examples and more of everything surely would be nice to add. There is
$\infty$-stack (though that needs rewriting, but follow some of the links)
which I just notice has an almost-duplicate at derived stack.
A bird’s eye starting point is $(\infty,1)$-category of $(\infty,1)$-sheaves
(where $(\infty,1)$-sheaf = $\infty$-stack!).
Well, I also tried to write an introduction to that stuff at heuristic introduction to sheaves, cohomology and higher stacks.
(Most important when looking at these entries: there is lots of chance to get the feeling that this is suboptimal and missing something. If anyone begins to feel this way: edit the entries to improve them.)
-[[perfect objects?]] (geometry)
-[[dualizable objects?]] (monoidal structure)
-$n$Lab: compact objects (categorical structure)
I would like to discuss how these relate to the corresponding notions in parenthesis, but I am not quite sure how this goes yet.
Oh, this is probably not meant to be anything deep:
- dualization is something you do in a monoidal category. So there goes the monoidal.
- Compact objects are defined in terms of homs out of them respecting certain colimits, so that’s category-theoretic.
- And those perfect stacks are some kind of $\infty$-vector bundles that locally look like finite complexes of ordinary vector bundles. So that’s a geometric, i.e. bundle-theoretic description.
By the way, David Ben-Zvi made some helpful comments on perfect stacks here in the comment section of his original guest post.
derived stack as having an [[affine diagonal?]]
I’ll skip over the nice summary of theorems and results… about to suggest that
we can transport parts of this over to the nLab and fill in the blanks with more detailed exposition as we go.
That list of theorems and results of “Integral Transforms”, for instance, should be, to get started, copy-and-pasted straight to the Journal Club entry. That’s where it belongs.
Similarly, other parts of your report could be copy-and-pasted to the relevant entries (and these entries created if necessary) for a start. Polishing will come by itself then, as people work on them.
Okay, I am hoping I am not getting on all your nerves with this insistence on moving material to the $n$Lab. But it’s better for all of us.
The Journal Club entry has already placeholder headlines for all the relevant sections of the BZNF articles. Eventually each of them should be filled with a one or two or a handful of paragraphs in
just this style of your report here.
Posted by: Urs Schreiber on April 28, 2009 8:54 PM | Permalink | Reply to this
Re: Journal Club – Geometric Infinity-Function Theory – Week 1
Thanks for the great book report Alex!
One quick comment: the Drinfeld center is not a “sub” in any reasonable sense (except in the loose sense that it’s a categorical limit). This is easy to see already for the category of representations of a finite group, where the Drinfeld center (or modules for the Drinfeld double) is the category of $G$-equivariant vector bundles on the group $G$ (I think they also go by the name Yetter–Drinfeld modules?). Likewise the center of a ring in the derived world (i.e., Hochschild cohomology) is not really a subring; even for a commutative ring, its derived center doesn’t map in injectively (for a smooth commutative ring the Hochschild–Kostant–Rosenberg theorem tells us that Hochschild cohomology is the exterior algebra on derivations of the ring). I think we have to abandon the notions of sub and quotient in the homotopical world and stick to notions like (homotopy) limit and colimit.
I hope to add some comments later, but am busy getting married at the moment..
Posted by: David Ben-Zvi on April 29, 2009 7:14 PM | Permalink | Reply to this
Re: Journal Club – Geometric Infinity-Function Theory – Week 1
Hi David
I am about to hop on a plane so will try to respond with actual content and expand my post later. I heard you were getting married so I figured you would be busy. Thanks for taking the time to
respond and congratulations!
Posted by: Alex Hoffnung on April 30, 2009 4:24 AM | Permalink | Reply to this
Week 2, section 2
According to our schedule, today is my turn to talk in our Journal Club, this time about section 2 “Preliminaries” of “Integral Transforms”.
But I will be late one day and will instead post my part tomorrow.
The reason is that I don’t yet have my piece on section 2.5 worked out. And also the material I have needs a bit more polishing before I want to officially post it. And today I am busy with something else.
But most of what I will post tomorrow is already visible under “section 2” at our $n$Lab web page. Its main purpose is to be a commented list of links to $n$Lab entries with more details, anyway.
So everybody is kindly invited to look at the $n$Lab entry to find four fifths of the material for the second week of our Journal Club. Corrections, questions and improvements are welcome, especially
if typed right away into the $n$Lab entry.
So, see you tomorrow. Sorry for the delay.
Posted by: Urs Schreiber on May 4, 2009 8:40 AM | Permalink | Reply to this
Re: Week 2, section 2
Ok, thanks Urs!
Posted by: Bruce Bartlett on May 4, 2009 10:20 AM | Permalink | Reply to this
Extraction of Transmission Parameters for Siting and Sizing of Distributed Energy Sources in Distribution Network
Journal of Energy
Volume 2013 (2013), Article ID 938958, 9 pages
Research Article
Department of Electrical Engineering, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462051, India
Received 8 February 2013; Revised 14 June 2013; Accepted 26 June 2013
Academic Editor: Poul Alberg Østergaard
Copyright © 2013 Shilpa Kalambe and Ganga Agnihotri. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
This paper introduces a novel method, based on two-port transmission equations and named the modified transmission parameters (MTP) method, for siting and sizing a grid-connected distributed generator (DG) in a distribution system at any input load condition, with loss minimization as the constraint. If properly organized, the transmission parameters readily yield the optimal DG allocation with minimum transmission losses, the contribution of the DG and of the main supply source to each load, the type of DG required to handle the existing power flow scenario, and the operating power factor at which the DG should operate. Apart from this, the worst location for DG installation is also investigated and referred to as the Consecutive Bus. The method has been tested on two test distribution systems of varying size and intricacy, and the results have been compared with two established methods reported earlier. The relative study shows that the proposed method leads the existing methods in simplicity, computational time, and the number of variables handled.
1. Introduction
It is universally acknowledged that the distributed generator (DG) is poised to become a key element in our future energy generation. DGs are generally defined as generating plants serving a customer on-site or providing support to a distribution network, connected to the grid at distribution-level voltages [1]. The main reasons for the continuous growth in DG penetration are environmental concerns, insufficiency of energy sources, constraints on building new transmission and distribution lines, technological advances in small generators, power electronics, and energy storage devices for transient backup; but it has also been observed that improper siting or sizing of DG can adversely affect the system [2]. Apart from solving many power network problems, DGs add new problems as well, such as grid connection, pricing, changes in protection schemes, and limits on the number of DG connections in weak grids; moreover, the addition of DGs may increase real power flow back to the grid, causing voltage rise, or increase reactive power flow into the feeder, causing voltage to fall. Thus DGs come with many benefits as well as challenges, which is why the problem of DG planning has recently received much attention from power system researchers aiming to garner maximum benefit from this upcoming power generation technology without violating the existing power system infrastructure. In [2] Willis applied a 2/3 rule for capacitor placement for loss reduction; thereafter, in [3], using a very simple “Zero Point Analysis”, he applied the 2/3 rule to DG placement in radial distribution networks. His analysis proved undemanding and easy to apply, but the approach may not work under variable load conditions. In [4] Mao and Miu proposed switch placement schemes to improve system reliability in distribution networks with DG. In [5] Atwa et al. have given an effective
probabilistic-based planning technique for determining the optimal fuel mix of different types of renewable DG units in order to minimize the annual energy losses in the distribution system.
Considering total power penetration from DG units, Kim et al. [6] and Gandomkar et al. [7] employed the Hereford Ranch algorithm to minimize system losses. Acharya et al. [8] used an Exact Loss Formula to calculate losses and find the optimal DG location in a distribution system very effectively, and Hung et al. [9, 10] continued the work with the same formulation, applying the method to multiple DG allocation. Most of the methods [6–13] reported earlier implement either new complicated equations or existing loss calculation equations, such as the Exact Loss Formula and Marginal Loss Coefficients, which in turn necessitate the calculation of many other complicated subcoefficients. Many loss minimization methods also require cumbersome iterative steps; all of this needlessly prolongs the loss calculation process. In the proposed method, by contrast, we directly utilize the transmission equations and convert them into power form to obtain both types of losses in the distribution system. Besides the losses encountered in the system, the same equation also yields the capacity of the DG, its operating power factor, and the type of DG, without any extra calculations. Thus the allure of the method lies in its simplicity.
The proposed method has been tested on two test distribution systems (the IEEE 16-Bus [14] and IEEE 33-Bus [15] Radial Distribution Test Systems) of varying complexity and validated by comparing results with the Improved Analytical (IA) method suggested by Hung et al. and the Exhaustive Load Flow (ELF) method [9]. The results show that the proposed method requires fewer computational equations and is thereby faster at obtaining an optimal solution, with accuracy verified against the ELF and IA methods; hence, it is well suited for on-line execution in an energy control centre.
The rest of the paper is organized as follows. Section 2 explains the proposed methodology. Section 3 presents the algorithm of the proposed method. Section 4 presents results for the IEEE 16-Bus [14] and 33-Bus [15] Test Radial Distribution Systems and their analysis. Section 5 outlines conclusions.
2. Proposed Methodology
Consider a system with $n_g$ generator buses and the remaining $n_l$ load buses. For a given system the two-port transmission equations are given by
(1) $V_S = A V_L + B I_L, \qquad I_S = C V_L + D I_L.$
We can write the above equations as
(2) $\begin{bmatrix} V_S \\ I_S \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} V_L \\ I_L \end{bmatrix},$
where $V_S$, $I_S$ are source voltages and currents and $V_L$, $I_L$ load voltages and currents. From the above equation we can get the relation between load voltage and source voltage at no load or light load condition ($I_L \approx 0$),
(3) $V_L = A^{-1} V_S.$
But the value of $A^{-1}$ in terms of the $Y_{bus}$-parameters is
(4) $A^{-1} = -Y_{LL}^{-1} Y_{LG},$
where $Y_{LL}$ and $Y_{LG}$ are the corresponding partitioned portions of the network $Y_{bus}$ matrix. The above relation gives the factor by which the source voltage is reduced by the transmission losses due to the impedance encountered on the path from the respective source to the load destination, so we can define this factor as the Impedance Loss Factor (ILF), having dimension $n_l \times n_g$.
The columns of the ILF matrix correspond to the generator bus numbers and its rows correspond to the load bus numbers. The higher the value of this factor, the lower the loss incurred across the path between the respective generator and the load bus it feeds. Thus this matrix can directly give the proportion of power which should be supplied by each source present in the system to each individual load so as to meet the total demand with maximum efficiency.
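As an illustrative sketch (not the paper's code), the ILF matrix can be computed directly from a partitioned bus admittance matrix; the 4-bus network, the line impedances, and the bus ordering below are all assumptions for demonstration only:

```python
import numpy as np

def build_ybus(n, branches):
    """Assemble a bus admittance matrix from (i, j, series impedance) branches."""
    Y = np.zeros((n, n), dtype=complex)
    for i, j, z in branches:
        y = 1.0 / z
        Y[i, i] += y
        Y[j, j] += y
        Y[i, j] -= y
        Y[j, i] -= y
    return Y

# Hypothetical 4-bus system: buses 0-1 are sources, buses 2-3 are loads.
branches = [(0, 2, 0.1 + 0.3j), (1, 3, 0.1 + 0.3j), (2, 3, 0.2 + 0.6j)]
Y = build_ybus(4, branches)

ng = 2                      # number of generator buses
Y_LG = Y[ng:, :ng]          # load-row / generator-column partition
Y_LL = Y[ng:, ng:]          # load-row / load-column partition

# At no load (I_L = 0): 0 = Y_LG V_G + Y_LL V_L, so V_L = -inv(Y_LL) Y_LG V_G.
ILF = -np.linalg.solve(Y_LL, Y_LG)    # shape: (n_loads, n_generators)
```

With no shunt elements each row of ILF sums to 1, so with all source voltages at 1.0 p.u. the no-load load-bus voltages come out as exactly 1.0 p.u.; larger entries mark generator-load pairs separated by less series impedance.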
2.1. Optimal Loss
By rearranging (2) we will get
$V_L = A^{-1} V_S - A^{-1} B I_L.$
But, for simplicity, we use the values of $A^{-1}$ and $A^{-1}B$ in terms of the partitioned $Y_{bus}$ matrix, so that the final form of the equation will be
$V_L = \mathrm{ILF}\, V_G + Y_{LL}^{-1} I_L,$
where $Y_{LL}$ = corresponding partitioned portion of the $Y_{bus}$ matrix, $V_L$ = column matrix of load voltages, $V_G$ = column matrix of generator voltages, and $I_L$ = column matrix of load currents.
Now premultiplying the above matrix equation by the diagonal matrix $[I_L^*]$ of conjugated load currents, with dimension $n_l \times n_l$, we get
(15) $[I_L^*]\, V_L = [I_L^*]\, \mathrm{ILF}\, V_G + [I_L^*]\, Y_{LL}^{-1} I_L,$
that is, the column matrix of complex load powers $S_L = [I_L^*] V_L$.
It gives the power consumed in the loads, which can also be obtained by subtracting the total transmission power losses from the total power supplied by the generators; thus, we can say
(16) $S_{loss} = S_G - S_L.$
Thus we can use the expression for transmission power losses given by (16) to calculate the total power losses encountered for different structures of the power system. In this paper this equation is used to calculate the power losses in the distribution system for different DG placements; moreover, without any separate calculations, the same equation is also used to determine the capacity, operating power factor, and type of DG which should be included in the power system to achieve minimum losses.
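As a toy illustration of the loss bookkeeping above (a sketch with invented data, not the paper's test systems), the total complex loss can be computed as generation minus load consumption on a small assumed network:

```python
import numpy as np

def build_ybus(n, branches):
    """Assemble a bus admittance matrix from (i, j, series impedance) branches."""
    Y = np.zeros((n, n), dtype=complex)
    for i, j, z in branches:
        y = 1.0 / z
        Y[i, i] += y
        Y[j, j] += y
        Y[i, j] -= y
        Y[j, i] -= y
    return Y

# Hypothetical 4-bus system: buses 0-1 are sources, buses 2-3 are loads.
branches = [(0, 2, 0.1 + 0.3j), (1, 3, 0.1 + 0.3j), (2, 3, 0.2 + 0.6j)]
Y = build_ybus(4, branches)
ng = 2

V_G = np.array([1.0, 1.0])                 # fixed source voltages (p.u.)
I_L = np.array([-0.5 + 0.1j, -0.4 + 0j])   # load injections (negative: current drawn)

# Network solution: I_L = Y_LG V_G + Y_LL V_L  =>  solve for V_L, then I_G.
V_L = np.linalg.solve(Y[ng:, ng:], I_L - Y[ng:, :ng] @ V_G)
I_G = Y[:ng, :ng] @ V_G + Y[:ng, ng:] @ V_L

# Complex power balance: loss = generation minus load consumption.
S_G = np.sum(V_G * np.conj(I_G))
S_load = np.sum(V_L * np.conj(-I_L))
S_loss = S_G - S_load    # complex loss in p.u.: real part active, imaginary part reactive
```

As a sanity check on the balance, the same loss equals the sum of $|I|^2 z$ over the branches, since the network has no shunt elements.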
2.2. Optimal DG Capacity
From (15) the corresponding optimal DG capacity at the optimal location can be obtained. Here the Impedance Loss Factor (ILF) plays an important role in giving the approximate value of the capacity, operating power factor, and type of DG which should be installed at the optimal location obtained for the given system. ILF gives the loss which may be faced by the generator while feeding a particular load from any concerned location. As the ILF matrix is independent of the load connected to any bus, it gives the proportion of power at any input load condition. Thus, if the generators are scheduled as per the value of the loss encountered while supplying the load, the minimum loss can be achieved. So the power which should be contributed by each individual generator to meet each load can be obtained as given below:
(17) $S_{ij} = [I_L^*]_{ii}\, \mathrm{ILF}_{ij}\, V_{G,j}, \qquad i = 1, \dots, n_l,\; j = 1, \dots, n_g.$
From the matrix given above, the power contributed by the generators placed at any particular location can be obtained, which is actually similar to the -index value explained in [13]. Here each row of the matrix gives the summation of the power supplied to the $i$th load bus by all generators in the system, and each individual term in that row is the desirable contribution of the generator of the respective column. So if we connect a new DG at any load bus in the system, the power contribution from all existing generators will change, as some of the power will now be shared by the newly installed DG. The previously connected load will now be treated as the local load for that bus, to be supplied by the newly installed DG. Therefore, ignoring the local load of that bus, we can assume that a new DG is installed by replacing that load bus, thereby increasing the total number of generators by one and reducing the number of load buses by one; the dimension of the matrix given by (17) changes accordingly. Thus the restructured matrix will be
Thus the desired contribution of the DG is the summation of its power contributions to each load plus the local load at that bus, which gives the required capacity of the DG to be installed at that location:
(19) $S_{DG} = \sum_{i=1}^{n_l} S_{DG,i} + S_{local},$
where $n_l$ = total number of load buses.
The capacity we obtain is a complex quantity, giving both the real and the reactive power capacity of the DG. The capacity of the DG to be installed can also be used to calculate the power factor, as explained in the next section.
2.3. Optimal Power Factor
The operating power factor of the DG to be installed can be obtained by using the equation
(20) $pf_{DG} = \cos\!\left(\tan^{-1}\frac{Q_{DG}}{P_{DG}}\right),$
where $P_{DG}$ and $Q_{DG}$ can be obtained from the complex capacity value of the DG obtained above.
2.4. Type of the DG
DG can be classified into four major types [9] based on their terminal characteristics in terms of real and reactive power delivering capability, as follows.
(1) Type I: DG capable of injecting active power only, for example, photovoltaics, fuel cells, and so forth.
(2) Type II: DG capable of injecting reactive power only, for example, synchronous compensators such as gas turbines.
(3) Type III: DG capable of injecting both active and reactive power, for example, DG units based on synchronous machines and so forth.
(4) Type IV: DG capable of injecting active power but consuming reactive power, for example, induction generators used in wind farms and so forth.
Practically, analyzing the information given in [9], a Type-II DG is essentially a capacitor, a FACTS device, and so forth, and a Type-III DG may be a synchronous generator that can inject both active and reactive power based on its inverter control; however, in this paper we do not deal with how a DG is controlled to give its output, but consider only the outputs of the DGs mentioned above.
From the complex equation (19) obtained for calculating the capacity of the DG to be installed, we can also predict the type of DG.
As per the proposed method, the DG to be installed will be of the following types.
Type I: if the value of reactive power is zero; the operating power factor will be unity.
Type II: if the value of active power is zero; the operating power factor will be zero.
Type III: if the value of reactive power is positive; the sign will be positive.
Type IV: if the value of reactive power is negative; the sign will be negative.
The proposed method suggests the optimal DG location, the optimal DG capacity, and the power factor, and thereafter predicts the type of DG that should be installed at that location to achieve minimum losses. If this is not practically possible, then the available DG unit can be interconnected with a DG of another type, so as to achieve the required DG output in terms of reactive power output, power factor, and so forth.
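The power-factor and type rules of Sections 2.3 and 2.4 amount to a small classification on the complex capacity $S = P + jQ$. The helper below is an illustrative sketch (the function name `dg_type_and_pf` is hypothetical, not from the paper), using $pf = \cos(\tan^{-1}(Q/P)) = |P|/|S|$ with the sign convention of the text:

```python
def dg_type_and_pf(S_dg):
    """Classify a DG by its complex capacity S = P + jQ and report its
    operating power factor, negative-signed for Type IV per the text."""
    P, Q = S_dg.real, S_dg.imag
    pf = abs(P) / abs(S_dg) if abs(S_dg) > 0 else 0.0
    if Q == 0:
        return "Type I", 1.0    # active power only: unity power factor
    if P == 0:
        return "Type II", 0.0   # reactive power only: zero power factor
    if Q > 0:
        return "Type III", pf   # injects both P and Q: positive sign
    return "Type IV", -pf       # injects P, consumes Q: negative sign

print(dg_type_and_pf(complex(2.0, 1.37)))   # a DG operating near 0.82 power factor
```

For instance, a purely active capacity maps to Type I at unity power factor, while any negative reactive component flips the reported sign.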
3. Algorithm for Proposed Method
For finding the optimal location, the optimal capacity at that location, the operating power factor, and the type of the DG to be installed, the following algorithm should be followed.
(1) For the given test system without DG, run the load flow, find the voltage at each bus, and calculate the total losses.
(2) Select the next bus as a DG location and consider the remaining buses (except the original generator sources and the load bus on which the DG is installed) as load buses.
(3) Run the load flow for the case with the DG installed at the new position and find the voltage at each bus.
(4) Using equation (16), calculate the active as well as reactive power losses.
(5) Select each of the remaining buses in turn as the DG location and repeat steps 2 to 4.
(6) Rank the buses in ascending order of the losses encountered at each location.
(7) Consider the top-ranking bus as the best location for DG installation.
(8) Calculate the optimal capacity of the DG to be installed at the optimal location from (19).
(9) Using the complex power capacity of the DG obtained in the step above, calculate the operating power factor of the DG as per (20).
(10) From the active and reactive power capacities obtained in step (8) and the operating power factor calculated in step (9), suggest the type of DG to be installed at the candidate location obtained in step (7).
Figure 1 shows the flowchart for the proposed algorithm.
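The placement loop of steps (2)-(7) can be sketched as follows; `run_load_flow` and `total_loss` are stand-ins for a power-flow solver and Eq. (16), and the sample loss values echo the 16-bus results' shape with one invented entry (bus 5):

```python
def rank_dg_locations(candidate_buses, run_load_flow, total_loss):
    """Steps 2-7: try the DG at each candidate bus, evaluate the losses,
    and rank the buses in ascending order of loss."""
    losses = {}
    for bus in candidate_buses:
        solution = run_load_flow(dg_bus=bus)   # step 3: load flow with DG at `bus`
        losses[bus] = total_loss(solution)     # step 4: losses via Eq. (16)
    ranking = sorted(losses, key=losses.get)   # step 6: ascending by loss
    return ranking, losses                     # ranking[0] is step 7's best bus

# Toy stand-ins: pretend the "solution" is just the bus id and look up a loss.
fake_losses = {9: 163.4, 12: 249.0, 5: 310.2}
ranking, losses = rank_dg_locations(
    fake_losses,
    run_load_flow=lambda dg_bus: dg_bus,
    total_loss=lambda bus: fake_losses[bus])
print(ranking)   # → [9, 12, 5]
```

In a real study the lambdas would be replaced by an actual load-flow routine and the Eq. (16) loss evaluation; the ranking logic itself is unchanged.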
4. Results and Analysis
In this paper two test systems are summarized, namely, the IEEE 16-Bus Test Radial Distribution System [14] with a load of 28.7 MW and 5.9 MVAR and the IEEE 33-Bus Test Radial Distribution System [15] with a load of 3.7 MW and 2.3 MVAR. The proposed algorithm is compared against two DG allocation methods: one based on the Exact Loss Formula [8–10], known as the Improved Analytical (IA) method [9], and the Exhaustive Load Flow (ELF) method. Table 1 and Figure 2 show a comparative analysis of the three methods of DG allocation, with loss minimization as the constraint, for the 16-bus system. Similarly, Table 2 and Figure 5 show a comparative analysis of the three methods for the 33-bus system. In the MTP method we can calculate the optimal loss, size, and operating power factor using a single equation, whereas in the other two methods separate equations are required for the active and reactive losses as well as for the power calculations; moreover, their solution demands an iterative approach, which makes those methods lengthy.
The comparative table gives the total losses calculated by the MTP method, but the comparison covers real losses only, against the available results of the other two methods.
Figures 2 and 5 verify that the proposed method gives the same results as the ELF and IA methods but with far fewer computational equations and less time. In the 16-bus system the optimum DG location is bus 9, where the total power losses are reduced to 163.4 kW and 212.7 kVAR. The second best location is bus 12, where the power losses are 249.0 kW and 304.3 kVAR. Similarly, in the 33-bus system the optimum DG location is bus 6, where the total power losses are reduced to 68.2 kW and 53.8 kVAR. The second best location is bus 26, where the power losses are 69.3 kW and 54.2 kVAR; the successive priority lists are given in Tables 1 and 2. Figures 3 and 4 show both the active and reactive power losses for the 16-bus system achievable with the DG installed at each bus location.
For the optimal capacity, operating power factor, and type of the DG to be installed, refer to Figures 3 and 4 for the 16-bus system and Figures 6 and 7 for the 33-bus system. The results obtained from these figures are summarized in Table 3. From Table 3 it can be observed that all calculated parameters of the proposed method are identical to those of the IA and ELF methods except the type and the operating power factor. In the IA and ELF methods the type of the DG was predetermined as Type I, supplying real power only, and the power factor and size were determined accordingly; these methods therefore show results for active power loss minimization only. In the proposed method, by contrast, the type of the DG is not predetermined but is treated as a prediction factor, determined by the operating conditions and requirements of the system, which may be considered an additional benefit of the method. The other two methods require repeating the complete calculations for other DG types, and the obtained results may not fully satisfy the system conditions; that is, total loss minimization may not be achieved. The proposed method initiates its calculations as per the requirements of the system, as is clear from the results in Table 3: the operating power factor for the 16-bus system is 0.88, identifying a requirement for a Type-III DG, which can supply both real and reactive power and thus minimize both losses, whereas in the other two methods the power factor is 0.98, identifying a Type-I DG, which can supply only real power. For the 33-bus system the MTP method gives 0.81 as the operating power factor, identifying a Type-III DG, whereas the other two methods give 0.85, which contradicts the presence of a Type-I DG.
4.1. Consecutive Bus
Figures 3, 4, 6, and 7, which indicate the active and reactive power contribution of each generating source to the system from the corresponding location, show that if the DG is placed at the bus just next to the main substation, which may be referred to as the Consecutive Bus, then no power is shared from the substation and the total power is supplied by the DG alone. So a high-capacity DG is required at the Consecutive Bus, and despite that, very little loss minimization is achieved from this location. Moreover, DG parameters are designed for the maximum demand condition, whereas highly variable load conditions are observed in practice; in such situations, high reverse power flow from the DG may greatly increase the losses, possibly even beyond the base condition, that is, the system without DG. So it is suggested that the Consecutive Bus should never be considered for DG installation.
In this paper, in the 16-bus system with a total load of 29.3 MVA and three substations each handling approximately 10 MVA, buses 4, 8, and 13 are the Consecutive Buses, where the required DG sizes are 14.375 MVA, 15.542 MVA, and 12.214 MVA, respectively, whereas in the 33-bus system with a total load of 4.3566 MVA, bus 2 is the Consecutive Bus, at which the required DG size is 4.6496 MVA. We can therefore conclude that the Consecutive Bus may be considered the worst location for DG installation.
5. Conclusion
This paper presents a novel method, the modified transmission parameters method, which extracts the two-port transmission equations for finding the optimal location, the optimal size at that location, the operating power factor, and the type of the DG to be installed in a primary distribution system so as to minimize the total losses of the system. The prominent feature of this method lies in its simplicity and ease of calculation as well as its precision in achieving results. It avoids the time-consuming and cumbersome iterative approach to the undemanding problem of designing a new DG for a radial distribution system. The validity of the proposed method is tested and verified on two test distribution systems of varying size and complexity against the already published Improved Analytical method and Exhaustive Load Flow solutions. Results show that the locations, sizes, operating power factors, and types of distributed generators are decisive factors in minimizing total losses in the system, and that properly placed, pertinently chosen distributed generators can reduce losses appreciably. The worst location for DG allocation has also been located and referred to as the Consecutive Bus.
Extend a perimeter lesson by using a spreadsheet.
• Students will measure the perimeter of a square by using a pipe cleaner.
• Students will create a square with a given perimeter.
• Print a hard copy of the perimeter worksheet for each student.
• Give each student the worksheet and a 12-inch-long pipe cleaner.
• Inform the students that each small square on the worksheet is 1 inch and the pipe cleaner is 12 inches long. With this information, they should find the box that has a perimeter of 12 inches.
• Discuss their findings.
• Next give them a hard copy of the perimeter template.
• Each of the squares is 1/2 inch. (If 1/2 inch increments will be too challenging for your students, you could declare them to be units.)
• This time the students are to use their pipe cleaner to guide them in making their own box. If you use 1/2-inch increments, they should make their own 12-inch box. If you use units as your measure, they should make a box with a perimeter of 24 units. If you use units, it is suggested that you change the color of the pipe cleaners so as not to confuse the students with the previous portion of the lesson.
• After students come up with a box with the given perimeter, they should label the sides with the correct unit of measure and add the sides to double-check. Finally, have each student open the perimeter template in AppleWorks.
• Their task is to make the same box in the template as they did on paper. They should do this by clicking on a starting point, dragging across to create the desired box, and filling it with color.
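The arithmetic behind the activity is just P = 4 × side; a short script (purely a teacher-side check, not part of the student lesson) confirms the side lengths for each target perimeter:

```python
def square_side(perimeter):
    """Side length of a square with the given perimeter (P = 4 * side)."""
    return perimeter / 4

# 12-inch pipe cleaner bent into a square -> 3-inch sides
print(square_side(12))   # 3.0
# Counting the half-inch grid squares as units: perimeter 24 units -> 6-unit sides
print(square_side(24))   # 6.0
```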
Hallandale Algebra 1 Tutor
Find a Hallandale Algebra 1 Tutor
...I also teach Isshinryu Karate and Self Defense courses. I have found this to be a great confidence booster for all. If you are in need of a tutor, I will respect your reasons and your privacy.
9 Subjects: including algebra 1, English, reading, algebra 2
...I also was a curriculum coach for the same public school district in Pittsburgh, Pennsylvania. In that position I demonstrated lessons for K-5 math teachers, wrote and presented professional development for teachers, did workshops for parents, and wrote curriculum. Following my retirement from t...
7 Subjects: including algebra 1, reading, prealgebra, geometry
...You would think this career choice and degree were unrelated, but it's actually all that kept the middle schoolers from driving me crazy! While teaching, I earned a B.S. in the Biological
Sciences and earned a biotechnology certificate afterwards. During this period, I also kept up a healthy vo...
17 Subjects: including algebra 1, Spanish, chemistry, English
...How about writing? Indeed, my reach goes all the way from mastery of the English language, across the ocean of political philosophy, and to the land of physical sciences (chemistry, biology,
physics), also where the dreaded algebra monster lies in wait to devour the souls of the innocent. But fear not: I have conquered the beast, and I can show you how.
36 Subjects: including algebra 1, reading, chemistry, English
...I am experienced in preparing and editing APA style papers on any subject and of any length.My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be
explained and applied. We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid.
46 Subjects: including algebra 1, Spanish, reading, writing
Stochastic inference of regular tree languages. Machine Learning 44(1/2
Results 1 - 10 of 28
- Journal of Artificial Intelligence Research, 2006
"... Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note
formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evalu ..."
Cited by 42 (10 self)
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note
formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting
representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
- International Journal of Artificial Intelligence Tools, 2003
"... We present an algorithm for the inference of context-free graph grammars from examples. The algorithm builds on an earlier system for frequent substructure discovery, and is biased toward
grammars that minimize description length. Grammar features include recursion, variables and relationshi ..."
Cited by 24 (4 self)
We present an algorithm for the inference of context-free graph grammars from examples. The algorithm builds on an earlier system for frequent substructure discovery, and is biased toward grammars
that minimize description length. Grammar features include recursion, variables and relationships.
- In Proc. of CAV’05, volume 3576 of LNCS, 2005
"... Abstract. The applicability of assume-guarantee reasoning in practice has been limited since it requires the right assumptions to be constructed manually. In this article, we address the issue
of efficiently automating assume-guarantee reasoning for simulation conformance between finite state system ..."
Cited by 19 (5 self)
Abstract. The applicability of assume-guarantee reasoning in practice has been limited since it requires the right assumptions to be constructed manually. In this article, we address the issue of
efficiently automating assume-guarantee reasoning for simulation conformance between finite state systems and specifications. We focus on a non-circular assume-guarantee proof rule, and show that
there is a weakest assumption that can be represented canonically by a deterministic tree automata (DTA). We then present an algorithm L T that learns this DTA automatically in an incremental
fashion, in time that is polynomial in the number of states in the equivalent minimal DTA. The algorithm assumes a teacher that can answer membership queries pertaining to the language of the unknown
DTA, and can also test a conjecture and provide a counter example if the conjecture is false. We show how the teacher and its interaction with L T are implemented in a model checker. We have
implemented this framework in the ComFoRT toolkit and we report encouraging results (up to 41 and 14 times improvement in memory and time consumption respectively) on non-trivial benchmarks.
- Proceedings of the KDD Workshop on Multi-Relational Data Mining, 2002
"... Recognizing the expressive power of graph representation and the ability of certain graph grammars to generalize, we attempt to use graph grammar learning for concept formation. In this paper we describe our initial progress toward that goal, and focus on how certain graph grammars can be learned f ..."
Cited by 16 (5 self)
Recognizing the expressive power of graph representation and the ability of certain graph grammars to generalize, we attempt to use graph grammar learning for concept formation. In this paper we describe our initial progress toward that goal, and focus on how certain graph grammars can be learned from examples. We also establish grounds for using graph grammars in machine learning tasks. Several examples are presented to highlight the validity of the approach.
"... Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked: computational linguistics, machine learning,
time series analysis, circuit testing, computational biology, speech recognition and machine translatio ..."
Cited by 15 (1 self)
Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked: computational linguistics, machine learning, time
series analysis, circuit testing, computational biology, speech recognition and machine translation are some of them. In part I of this paper we survey these generative objects and study their
definitions and properties. In part II, we will study the relation of probabilistic finite-state automata with other well known devices that generate strings as hidden Markov models and n-grams, and
provide theorems, algorithms and properties that represent a current state of the art of these objects.
- PROCEEDINGS OF 5TH INTERNATIONAL COLLOQUIUM, ICGI 2000, LISBON (PORTUGAL), VOLUME 1891 OF LECTURE NOTES IN COMPUTER SCIENCE, 2000
"... In this paper, we present a natural generalization of k-gram models for tree stochastic languages based on the k-testable class. In this class of models, frequencies are estimated for a
probabilistic regular tree grammar which is bottom-up deterministic. One of the advantages of this approach is ..."
Cited by 12 (2 self)
In this paper, we present a natural generalization of k-gram models for tree stochastic languages based on the k-testable class. In this class of models, frequencies are estimated for a probabilistic
regular tree grammar which is bottom-up deterministic. One of the advantages of this approach is that the model can be updated in an incremental fashion. This method is an alternative to costly
learning algorithms (as inside-outside-based methods) or algorithms that require larger samples (as many state merging/splitting methods).
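As a rough string-language analogue of the frequency estimation described in this abstract (a sketch of plain k-gram counting only, not the tree-grammar algorithm itself), the model's parameters are just relative frequencies and can be updated incrementally as new samples arrive:

```python
from collections import Counter

class KGramModel:
    """Relative-frequency k-gram estimator supporting incremental updates."""
    def __init__(self, k):
        self.k = k
        self.context = Counter()   # counts of (k-1)-length contexts
        self.kgram = Counter()     # counts of full k-grams

    def update(self, s):
        # Pad with boundary markers so string edges are modeled too.
        s = "^" * (self.k - 1) + s + "$"
        for i in range(len(s) - self.k + 1):
            self.kgram[s[i:i + self.k]] += 1
            self.context[s[i:i + self.k - 1]] += 1

    def prob(self, kgram):
        ctx = self.context[kgram[:-1]]
        return self.kgram[kgram] / ctx if ctx else 0.0

m = KGramModel(2)
m.update("abab")          # incremental: further update() calls refine the counts
print(m.prob("ab"))       # P(b | a) estimated from this one sample
```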
"... Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked. In part I of this paper, we surveyed these
objects and studied their properties. In this part II, we study the relations between probabilistic finit ..."
Cited by 10 (2 self)
Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked. In part I of this paper, we surveyed these
objects and studied their properties. In this part II, we study the relations between probabilistic finite-state automata and other well known devices that generate strings like hidden Markov models
and n-grams, and provide theorems, algorithms and properties that represent a current state of the art of these objects.
"... In this paper, we describe some techniques to learn probabilistic k-testable tree models, a generalization of the well known k-gram models, that can be used to compress or classify structured
data. These models are easy to infer from samples and allow for incremental updates. Moreover, as shown here ..."
Cited by 5 (4 self)
In this paper, we describe some techniques to learn probabilistic k-testable tree models, a generalization of the well known k-gram models, that can be used to compress or classify structured data.
These models are easy to infer from samples and allow for incremental updates. Moreover, as shown here, backing-off schemes can be defined to solve data sparseness, a problem that often arises when
using trees to represent the data. These features make them suitable to compress structured data files at a better rate than string-based methods.
"... Abstract. We consider the problem of learning stochastic tree languages, i.e. probability distributions over a set of trees T(F), from a sample of trees independently drawn according to an
unknown target P. We consider the case where the target is a rational stochastic tree language, i.e. it can be ..."
Cited by 5 (2 self)
Abstract. We consider the problem of learning stochastic tree languages, i.e. probability distributions over a set of trees T(F), from a sample of trees independently drawn according to an unknown
target P. We consider the case where the target is a rational stochastic tree language, i.e. it can be computed by a rational tree series or, equivalently, by a multiplicity tree automaton. In this
paper, we provide two contributions. First, we show that rational tree series admit a canonical representation with parameters that can be efficiently estimated from samples. Then, we give an
inference algorithm that identifies the class of rational stochastic tree languages in the limit with probability one.
- In: Proceedings of the 8th International Colloquium on Grammatical Inference (ICGI’06). Volume 4201 of LNCS, 2006
"... Abstract. In this paper, we present a theoretical approach for the problem of learning multiplicity tree automata. These automata allows one to define functions which compute a number for each
tree. They can be seen as a strict generalization of stochastic tree automata since they allow to define fu ..."
Cited by 4 (1 self)
Abstract. In this paper, we present a theoretical approach to the problem of learning multiplicity tree automata. These automata allow one to define functions which compute a number for each tree. They can be seen as a strict generalization of stochastic tree automata, since they allow one to define functions over any field K. A multiplicity automaton admits a support which is a non-deterministic automaton. From a grammatical inference point of view, this paper presents a contribution which is original due to the combination of two important aspects. This is the first time, as far as we know, that a learning method focuses on non-deterministic tree automata that compute functions over a field. The algorithm proposed in this paper stands in Angluin’s exact model, where a learner is allowed to use membership and equivalence queries. We show that this algorithm runs in time polynomial in the size of the representation.
Vlookup Returning Incorrect Values For Unfound Match - Excel
Good afternoon all,
I'm trying to help a colleague automate a daily process he must perform in Excel. I had suspected it would be a simple job for a VLOOKUP formula; however, incorrect values are being returned for items without a match.
In the attached example, the 'Final'! sheet is where he would have to manually enter amounts from an SAP printout into C:C for jobs matching job numbers in B:B. The 'Source Data'! sheet is a stripped-down example of an SAP data dump where he retrieves his information.
The VLOOKUP formula I added to D:D has a couple of issues I could use a hand with.
1. When a value from 'Final'!B:B is not found in 'Source Data'!B:B, the formula appears to return the value from the previous item.
2. Some values in 'Final'!B:B have either an "-s" or "-ss" appended to what needs to be matched in 'Source Data'!B:B. I believe limiting the lookup to the first 8 characters starting from the left would suffice.
Any suggestions?
As always... thanks for the help!
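For context, a general note about VLOOKUP (not specific to this workbook): when the fourth argument is omitted or TRUE, VLOOKUP performs an approximate match on data it assumes is sorted and returns the last entry less than or equal to the key, which produces exactly the "value from the previous item" symptom; passing FALSE forces an exact match, and the suffix issue is usually handled by trimming the key first, e.g. with LEFT. A Python sketch of the two matching modes (job numbers and amounts are made up):

```python
import bisect

def vlookup(key, table, approximate=True):
    """table: sorted list of (lookup_value, result) pairs."""
    keys = [k for k, _ in table]
    if approximate:
        # Like omitting FALSE: last entry <= key, even when the key is absent.
        i = bisect.bisect_right(keys, key) - 1
        return table[i][1] if i >= 0 else None
    # Like passing FALSE: exact match or nothing (#N/A in Excel).
    return dict(table).get(key)

jobs = [("J100", 250), ("J200", 900)]
print(vlookup("J150", jobs))                     # previous item's value
print(vlookup("J150", jobs, approximate=False))  # no match reported honestly
```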
Similar Topics
Vlookup - Excel
Cross Vlookup? - Excel
I have my data in Data Tab as follows:
ID Desc Cat
1 yes decision
2 no final
3 12/31/10 decision
4 misc final
5 1/1/11 decision
6 no review
7 yes final
8 1/5/11 review
9 1/6/11 final
10 1/7/11 review
1 na review
3 part review
7 sold decision
I then have two lookup tables, in the Lookup1 and Lookup2 tabs. Lookup1 has ID values:
and Lookup2 has Cat values:
The Final table's first column is populated with the sorted IDs from the Data tab. For column 2 (Desc), I need to match (look up) IDs from Lookup1 against the IDs from the Data tab, then match (look up) Cat from Lookup2 against Cat in the Data tab; when both match, it should populate the Desc values in the Final tab. Please see the Final tab in the attached Excel file.
Do I need two VLOOKUPs and a MATCH?
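One common pattern for a two-condition lookup like this (a generic sketch of the composite-key idea, not tied to the attached workbook) is to key on the pair of fields, which is what a concatenated-helper-column VLOOKUP or a two-criteria INDEX/MATCH does in Excel:

```python
# Data rows as (ID, Desc, Cat), as in the Data tab; the lookup must match
# BOTH the ID and the Cat before returning a Desc.
data = [
    (1, "yes", "decision"), (2, "no", "final"), (7, "yes", "final"),
    (1, "na", "review"),    (7, "sold", "decision"),
]

# Build an index keyed on the (ID, Cat) pair -- the "concatenated" key.
index = {(i, c): d for i, d, c in data}

def lookup(id_, cat):
    return index.get((id_, cat))   # None when no row matches both fields

print(lookup(7, "final"))      # yes
print(lookup(7, "decision"))   # sold
print(lookup(2, "review"))     # None
```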
Ask Your Own Question
First off, long-time viewer and first-time poster, so thank you to everyone that posts information here. You have all been a tremendous help.
My current task.
I currently have a task of matching 2 sets of data. The data is simply business names from two different sources. I currently remove all special characters and remove blank spaces. This leaves me
with letters only to be matched against one another.
As an example, the following could be my first data source:
(data source one is my master data and all subsequent lists are matched against the master)
data source two would be:
I am currently running the following formula =IF(ISNA(VLOOKUP(LEFT($A1,1) & "*",DataSource2!$B:$B,1,FALSE)),0,1)
This lets me know if the first character matches. I then run the same formula to match the second letter of both sets, and another to match the third, and fourth, etc. Currently I use the above formula 20 times, to match up to 20 characters. I then sum the total, which gives me the number of characters that match starting from the left until a break appears. As an example, the formulas would return the following for the example I provided:
burgerking = 6
harveysltd = 20 (even though the entry is only 10 characters, this formula returns a value of 20. This is actually quite helpful, as any entries that have returned a 20 are essentially guaranteed matches)
I am thinking there must be a better way.
PS - I am not an advanced user; I have no idea why these formulas work, but they do. I simply found them on this bulletin board. So please keep that in mind when you are judging the mess I created and offering assistance. Thanks
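The 20 stacked formulas are computing the length of the common prefix of two strings (with an exact match scored as the full 20); that logic can be stated once, as in this sketch (the business-name strings here are made up):

```python
from itertools import takewhile

def common_prefix_len(a, b, cap=20):
    """Leading characters a and b share, capped like the 20 stacked formulas."""
    a, b = a[:cap], b[:cap]
    n = sum(1 for _ in takewhile(lambda p: p[0] == p[1], zip(a, b)))
    # The described sheet scores identical entries as the full cap of 20.
    return cap if a == b else n

print(common_prefix_len("burgerkingtoronto", "burgerkingvancouver"))  # 10
print(common_prefix_len("harveysltd", "harveysltd"))                  # 20
print(common_prefix_len("dairyqueen", "diaryqueen"))                  # 1
```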
I am having trouble with this. I have a sheet with values on it. I have another sheet with values on it. I am trying to use a formula that will say "complete" if the value from sheet 1 is matched on sheet 2, and "incomplete" if nothing is found. I tried this:
The data in C2 happens to match the first cell in column A, so "complete" is returned.
Then I fill down, and it all goes wrong.
The next value (in C3) I have found manually, but my formula returns "incomplete"
I am thinking that VLookup will not work because my data won't be in the first column all the time.
Ask Your Own Question
Using the attached table as an example....
I'm using a VLOOKUP formula to return the values in column J
=VLOOKUP("Bob Smith",A1:J4,10,FALSE)
However, I only want the values returned if there are 3 or more values in columns B-H. If there are less than 3, I want "NA" returned.
I've tried a few variations, including putting a COUNTIF formula in another column to determine whether there are 3 or more values, then incorporating that column into the VLOOKUP, but I can't get
this to work either.
I was given this formula, which works for the second part of the process (returning the value in J only if there are 3 or more values in B-H), but does not incorporate the VLOOKUP.
I tried this: =VLOOKUP($A6,A1:J4(IF(COUNTIF(B:H,">0")>=3,J,"NA")),10,FALSE)
But it refers to columns without rows, which I don't think is right, and it returned a Circular Reference error anyway.
Can anyone help?
Here is my situation. I have two sheets. One called "Roster" and one called "final". On the final sheet I have cols for each question on the final exam. I also have a total col which sums up the pts
for each question.
On the "roster" sheet (this is kinda like a summary sheet), I use a vlookup (shown below) in the cells which are supposed to reference the cells on the "final" sheet for the total pts.
=IF(ISERROR(VLOOKUP(A2,Final!$A$2:$Q$68,15,FALSE)),"NA", VLOOKUP(A2,Final!$A$2:$Q$68,15,FALSE))
Here is my problem. Currently the total pts col on the final sheet is 15 cols over. However, if I add another col, which I would do if I added another exam question, the above formula (which is on the "roster" sheet) does not update to "16"... it stays at "15" and I don't know why. I thought it would update if I inserted another col on the final sheet.
Can anyone help?
If you need more info just let me know.
I have two worksheets, one called "Source", the other "Tracker". The final goal is to update "Tracker" with material from "Source", adding a new row to "Tracker" for each row in "Source" that does not already correspond to a row in "Tracker".
I've mapped the macro out, and here is the essential decision structure:
For each row, i, in Source.ColumnX, search Tracker.ColumnA to see if there is a match
If there is a match, go to next row, i+1, in Source.ColumnX
If no match, go to A)
A) on row i, use Source.ColumnY to search Tracker.ColumnB to see if there is a match
If there is, go to B)
If not, go to C)
B) Find last value in Tracker.ColumnB that matches row i of Source.ColumnY
InsertRowsAndFillFormulas (from msvp) for 1 row below that
Paste lookup values in row i from Source.ColumnX and Y into the new row in Tracker.ColumnA and B
Return and search for row i+1 repeat etc.
C) Find correct alphabetical location in Tracker.ColumnB for that value in row i of Source.ColumnY
InsertRowsAndFillFormulas (from msvp) for 1 row
Paste lookup values in row i from Source.ColumnX and Y into the new row in Tracker.ColumnA and B
Return and search for row i+1 repeat etc.
THANK ALL FOR ANY HELP!
I'm having big Vlookup issues today as well... I've been banging my head against them for 2 hours now!
I have 3 spreadsheets: (1) a list of preferred suppliers, (2) every order we've placed, and (3) all the spend on individual accounts.
Looking between (2) and (1) I've used
=IF(EXACT(F56,VLOOKUP(F56,'[Confirmed Preferred Vendor List 3rd December 2010 (Final)modded nh.xls]BBES - Final'!$A$2:$A$748,1)),"yes","no")
to tell me if a supplier is on the preferred list. However, I now need to do the same for a more specialist subset on (3). I've used the exact same formula but every single one has returned "no".
I added a line called "test" to sheets (1) and (3) which returned "yes", so I tried overtyping a value I knew was present in both sheets and still got "no".
I have a formula (below) that works nicely, except I have to manually update the rows in order for the formula to work. On the reference tab, 'FINAL REPORT', the last row is not always 109. Once the data feeds into the Final Report tab, the last row number varies from 108 to 150. How can I adjust my formula to accommodate this? Thank you in advance. Excellicious.
=INDEX('FINAL REPORT'!C:C,MATCH($A$3,'FINAL REPORT'!$A$2:$A$109,1)+MATCH($B3,INDIRECT(CONCATENATE("'FINAL REPORT'!B",MATCH($A$3,'FINAL REPORT'!$A$2:$A$109,0)+1,":B700")),0),1)
Hi all,
I've been using this formula for some time but it has just started returning incorrect results. Its purpose is to give an overview of an employee's attendance, drawing data from various spreadsheets by using a reference inserted in cell A6 (the employee's ID number). The formula will look for a first initial to signify whether an employee was sick, on holiday, etc., or, if a numeric value is found, was working on any given day. An example of the formula is as follows:
=IF((LEFT(VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="S","S",IF((LEFT(VLOOKUP($A$6,'H:\\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="H","H",IF((LEFT(VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="A","A",IF((LEFT(VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="N","N",IF((LEFT(VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="U","U",IF((LEFT(VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18)))="T","T",IF((VLOOKUP($A$6,'H:\Register 09-10\[May 09.xls]Register'!$A$6:$BZ$145,18))>=0.1,"W","")))))))
So, if the employee has been marked as sick on the register an S will be returned (in this case the 1st of the month is column 18), an H for holiday, a W for worked and so on.
The problem I'm getting is that for the current month the reference in cell A6 (51318) is returning values from the row of a different ID (51220). If I extend the range in the formula from BZ145 to BZ178 (as the table is now larger), it returns values for yet another ID (51210). This shouldn't even be causing an issue, as the ID I'm looking for is on row 43! So, as you can see, I am rather confused - any help would be gratefully received.
Thanks, Chris
Is there a way to use the Right() function with Vlookup on the source data? I know you can use it on data you are returning, but I can't seem to get it to work for source data. Here's what I've tried:
For example: The source data is ABC123456 and I'm trying to match 123456
In the arguments window for "Lookup value" when I type Right(A:A,6) it displays the correct result = "123456", but it only returns #N/A when I try to use it. If I manually delete the ABC from the
ABC123456 source data and do a normal Vlookup, it works fine, so it is something about the Right() that it doesn't like.
Thanks /Mikea3
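The underlying limitation is that VLOOKUP can transform the lookup value but not the source column, so RIGHT() works on only one side; the usual workaround is a helper column holding the transformed key. A generic sketch of that idea (sample values only, not this poster's sheet):

```python
source = ["ABC123456", "XYZ654321"]

# Helper "column": key each source entry by its last 6 characters,
# mirroring a helper column of =RIGHT(A2,6) that VLOOKUP could then search.
by_suffix = {s[-6:]: s for s in source}

print(by_suffix.get("123456"))   # matches ABC123456
print(by_suffix.get("999999"))   # no match, i.e. #N/A
```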
I require some assistance to work with two data sources: I need to look up and match data between the two sources and return a single final result.
The attached Sorting.zip contains an Excel file with two worksheets as the data sources, Source_local and Source_vendor. I am required to take either column VIN or ENGINE_NO in Source_local and match it to either column CHASSIS NO or ENGINE NO in Source_vendor. Upon finding the matching data, it should return the matching result from column AREA CODE in Source_vendor. The result will be displayed in any available column in Source_local.
Based on the above description, I wonder which formula/function would be better to achieve this. Is it possible for me to use VLOOKUP + MATCH for the above?
I have an issue whereby vlookup is returning incorrect data.
I have a named data range on worksheet "prices" that is called from a drop down box on another worksheet, so you get a drop down list with all of the item names.
Next to the item names on the first page is the price of that particular item.
Next to the drop down box on the second worksheet is a vlookup that looks at the value of the drop down box, and takes the value in column 2 of the afordmentioned sheet.
The vlookup returns the wrong value in some cases. My understanding is that with:
Excel SHOULD look at what's in A37. It then looks on the Prices worksheet in cells H14:I53. When it finds what it found in A37 within that range, it returns the data in column I (column 2) that is
next to the matched item.
However, if you look at the spreadsheet, try selecting "tungsten carbide Armor" and you'll see it's actually returning the price for "Titanium Diborate Armor" if you refer to the "prices" worksheet.
What is going on? Can anyone help? There's quite a few doing this!
Spreadsheet attached; apologies if it's against the rules. There's nothing important or sensitive in there.
Hi Everyone,
I have two sheets currently. One is a Data Dump (on the bottom). The other is "sheet4" on the top where I just need to be able to match the column with the row (Vlookup/HLookup). Is there a vlookup/
hlookup combo formula that can get me to the answer I want? Note that my rows consist of dropdown menus/dates that change accordingly. Please see below.
Sorry for entering it here instead of a real worksheet; I can't seem to attach a file small enough to be posted on this website!
Thank you very much!!!
SHEET 4
Columns: ENTER DATE (May-11), ENTER DATE (Jun-11). Rows (Name of Manager): Total Plan (9/30/90), Total Fund Benchmark.
Looking to be able to go to "SHEET - ONE DATA DUMP" below and get a match of -1.3 (Total Fund Benchmark, May-11). How do I get a vlookup and hlookup formula to get this? Likewise, looking to get a match of -1.5 (Total Fund Benchmark, Jun-11).
SHEET - ONE DATA DUMP
                                 May-11   Jun-11
Total Plan (9/30/90)             -1.1     -1.0
Total Fund Benchmark             -1.3     -1.5
Mstar Moderate Allocation Fund   -0.8     -1.4
CPI Plus 5%                       0.7      0.2
I am using concatenate to create a list of bottles based on other data, so that each cell contains a different combination of bottles.
The problem is that the list appears but there is also FALSE for every bottle that doesn't relate to that cell.
Is there a way to create a formula such that nothing appears if it is not true?
And is there a shorter way of writing what I am after? I can't quite finish the formula because it is so long.
This is what I am using:
=CONCATENATE(
IF(ISNUMBER(MATCH("*O1*",C3,0)), VLOOKUP("O1",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O2*",C3,0)), VLOOKUP("O2",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O3*",C3,0)), VLOOKUP("O3",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O4*",C3,0)), VLOOKUP("O4",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O5*",C3,0)), VLOOKUP("O5",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O6*",C3,0)), VLOOKUP("O6",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O7*",C3,0)), VLOOKUP("O7",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O8*",C3,0)), VLOOKUP("O8",A10:D19,4,0)), ", ",
IF(ISNUMBER(MATCH("*O9*",C3,0)), VLOOKUP("O9",$A$10:$D$19,4," ")), ", ",
IF(ISNUMBER(MATCH("*OS*",C3,0)), VLOOKUP("OS",$A$10:$D$19,4,0)), ", ",
IF(ISNUMBER(MATCH("*I1",E3,0)), VLOOKUP("I1",$A$21:$D$26,4,0)), ", ",
IF(ISNUMBER(MATCH("*I2*",E3,0)), VLOOKUP("I2",$A$21:$D$26,4,0)))
As I am here, does anyone know how to stop the formula text appearing? Because when I select this cell it takes over half the screen, as it is so long.
Thanks for listening
I have my data in Data Tab as follows:
ID Desc Cat
1 yes decision
2 no final
3 12/31/10 decision
4 misc final
5 1/1/11 decision
6 no review
7 yes final
8 1/5/11 review
9 1/6/11 final
10 1/7/11 review
1 na review
3 part review
7 sold decision
I then have two lookup tables in the Lookup1 and Lookup2 tabs. Lookup1 has ID values:
and Lookup 2 has Cat values:
The Final tab's first column is populated with sorted IDs from the Data tab. Column 2 (Desc) should match the IDs in Lookup1 against the IDs from the Data tab, then match the Cat values in Lookup2 against Cat in the Data tab,
and when both match, populate the Desc values in the Final tab. Final tab:
ID Desc
1 yes
1 na
2 no
3 12/31/10
3 part
4 misc
5 1/1/11
6 no
7 yes
7 sold
8 1/5/11
9 1/6/11
10 1/7/11
I'm having an issue with VLookup; I've attempted to search the forums for the solution, but to be honest I wouldn't even know how to keyword it.
My issue is that I have a data source whose column A has an I.D number (starting from 1 all the way up to 900), but where some records have been deleted there are now gaps in the I.D numbering (... 47,
48, 49, 52, 55, etc.).
Now, when I take the ID numbers given to me via a separate report and attempt to use Vlookup to get the data from the data source, it picks up the data correctly... until it reaches an
ID number that exists on that sheet but not in the data source. What happens then is that it uses the previous entry's data rather than leaving that cell blank.
I've attached an example outlining what I mean: the example.xls file contains the VLookup information, and the data source.xls is, well, the data source.
Please note that ID number 281 tells me it belongs to Person 130, when the data source tells me Person 130's ID number is 280.
Is there a way for the formula to give back a blank entry if it cannot find a number within the data source?
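This behaviour is the classic approximate-match pitfall: when VLOOKUP's fourth argument is TRUE or omitted, it returns the row with the largest key less than or equal to the lookup value (which is why a missing ID silently yields the previous entry), while FALSE forces an exact match that fails when the key is absent, and the failure can then be caught and turned into a blank. A rough sketch of the two modes, mirroring the ID 281 / Person 130 situation above; the Python function names and table here are my own, purely to illustrate the behaviour:

```python
from bisect import bisect_right

# Sketch of VLOOKUP's two matching modes. Approximate match (4th arg
# TRUE/omitted) returns the row with the largest key <= the lookup key,
# assuming the keys are sorted. Exact match (4th arg FALSE) returns a
# default (a blank), like wrapping the VLOOKUP in IFERROR(..., "").

def vlookup_approx(key, table):
    """table: sorted list of (id, value) pairs; largest id <= key wins."""
    ids = [k for k, _ in table]
    i = bisect_right(ids, key) - 1
    return table[i][1] if i >= 0 else None

def vlookup_exact(key, table, default=""):
    """Exact match; returns default when the key is missing."""
    for k, v in table:
        if k == key:
            return v
    return default

table = [(279, "Person 129"), (280, "Person 130"), (282, "Person 131")]
print(vlookup_approx(281, table))  # Person 130 -- the misleading result
print(vlookup_exact(281, table))   # "" -- blank, as desired
```

In Excel terms, the usual fix is the exact-match form wrapped in an error handler, e.g. something along the lines of IFERROR(VLOOKUP(id, source_range, col, FALSE), "").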
Scoured the forum and couldn't find an answer to this.
Sheet 1 ("Dump") is a raw dump of data relating to survey results from a number of different cities. I want to read through this dump, isolate any rows where Location="Glasgow" and copy them across
into a new list on an existing sheet ("Report") - replacing any existing data on that sheet.
Although I can manually delete "Report" cells, filter "Dump" sheet and then copy and paste I am looking for some way to do this automatically as the data dump is updated daily by a manual copy and
paste operation. I cannot filter the data at source as I will require comparison to other locations elsewhere in the workbook and need all the data present in "Dump".
Tried a straight =IF(DUMP!A1="Glasgow","DUMP!A1","") but this leaves obvious gaps that mess up later calculations based on the data.
Have tried VLOOKUP but this obviously only returns first survey. Do not see how INDEX/MATCH can perform this either.
Can anyone point me in the right direction here, please?
I know I have done this formula before, but the syntax escapes me presently and I was hoping someone here would be able to jog the old memory.
I am trying to use vlookup to "cleanup" a listing of data so that only particular rows are presented. In my workbook I have a "Raw Data" sheet with all the information and a "Complete" sheet which
would be populated by vlookup strings.
The issue I am coming across is on the second row of the "Complete" sheet. The first vlookup is simple and does of course pull the first item in the Raw Data that I want. However, without the proper
addition to my formula, the second row etc. also pulls the first item in the Raw Data. I seem to recall having used a variation of a vlookup formula in the past to solve this problem. Something
similar to =if(iserror(vlookup(.... except I do not believe the second statement was iserror and I am certain that somewhere in the formula there was a +1.
Basically, on the "Complete" sheet cell A1 would read =vlookup("y",'Raw Data'!$A$1:$C$8000,3,false)
cell A2 would need to reference the result of A1 and if it matched, it would then pull the next data from the Raw data sheet.
Any help would be appreciated.
Hello from Canada during our beautiful two-week summer! Thank you in advance for any suggestions you might offer.
I have a large dataset for which I want to combine data by Latitude and Longitude. I have included on a single sheet data showing how well the plants are growing in specific locations, and data
showing how much rain has fallen. All data are annual, so I want to address that variable also, but I have figured out a way to do it just by sorting on that variable.
The problem is that the Latitudes and Longitudes do not match exactly.
I tried using vlookup to match the precipitation values by creating both source and lookup data using a pipe =N3&"|"&O3 and the standard vlookup formula, but received wrong information. All lookup
data are sorted ascending.
Also, I checked and rechecked formatting even by paste special product*1 and using text to columns.
This is a challenging puzzle, master points to anybody who can figure it out! VBA is something I'm comfortable with, if that seems like a superior way to handle this.
The file is too large to upload here, so I have posted it here.
This is a very long equation I'm working on. The basic premise of the equation is using IF and AND to determine which equation should be used to calculate the price:
IF(AND(this is/isn't true, this is true),then do this,IF(AND(this is/isn't true, this is true),then do this,repeat logic
This is approximately 6 IF statements deep
I will break the statement down so it is easier to follow below at the bottom:
Full equation
=IF(A12="","",
(IF(AND(G12="-",VLOOKUP(B12,$L$3:$P$302,5,0)="Profile"),VLOOKUP($B12,$L$3:$P$302,2,0)*$F12*$S$17+VLOOKUP($C12,$R$21:$S$28,2,0),
(IF(AND(G12<>"-",VLOOKUP(B12,$L$3:$P$302,5,0)="Profile"),VLOOKUP($B12,$L$3:$P$302,2,0)*($F12+0.6)*$S$17+VLOOKUP($G12,$R$44:$S$47,2,0)+VLOOKUP($C12,$R$21:$S$28,2,0),
(IF(AND(G12="-",VLOOKUP($B12,$L$3:$P$302,3,0)="Metre"),VLOOKUP($B12,$L$3:$P$302,2,0)*$F12*$S$18+VLOOKUP($C12,$R$21:$S$28,2,0),
(IF(AND($G12<>"-",(VLOOKUP($B12,$L$3:$P$302,3,0)="Metre"),VLOOKUP($B12,$L$3:$P$302,2,0)*($F12+0.6)*$S$18+VLOOKUP($G12,$R$44:$S$47,2,0)+VLOOKUP($C12,$R$21:$S$28,2,0),
(IF(AND($F12="-",VLOOKUP(B12,$L$3:$P$302,3,0)=OR("Pair","Piece")),VLOOKUP($B12,$L$3:$P$302,2,0)*$S$18,"Check values"))))))))))))
Broken down equation
(IF(AND($F12="-", VLOOKUP (B12,$L$3:$P$302,3,0)=OR("Pair","Piece")),
VLOOKUP($B12,$L$3:$P$302,2,0)*$S$18,"Check values"))))))))))))
The equation throws up an error at the point of the final VLOOKUP (shown in red in the broken-down equation above). However, all table references are correct, as are the potential values "Pair" and "Piece" it is looking for, as is column '3' to
look for them in. Can anyone see what the problem is here? Is there a max number of VLOOKUPs you can use; have I hit some kind of limit?
Really stuck here and believe it's an equation / limit issue. Not an incorrect reference.
This equation refers to column E. However you will have to manually copy and paste it into excel as I cannot save the Excel file with this incorrect equation.
It's driving me crazy, could really do with a quick answer!
Hi Everyone,
This is my first post here; I have used this board before as reference but never got around to posting. Example Data.xlsx
This is a probably a simple question to answer, but here's my situation.
In Sheet "Data" I have a list of item numbers from Column A2 down, A1 is titled "Item Numbers."
In Sheet "Master" I have a whole spread of data that I need to bring over to Sheet "Data". "Data" is about 206 rows down and "Master" is approx 3,000. The data in Master begins on A2; A1 is titled
Item Numbers.
What I need to do is match the item numbers in Sheet "Data" to the item numbers in "Master" while bringing over all the data in the columns next to the item numbers in "Master". The data that I am
trying to match and bring over to the "Data" sheet goes til column CM in "Master."
I'm thinking it'll need the Index formula since my basic vlookup wasn't working, but I figured I'd finally get my first post out there and ask the experts. Thank you all in advance.
Hi all,
I was wondering how to make the following: get a list where each Number from the "source file.xls" is assigned to a Group like in "final list.xls" through "conversion table.xls".
In "source file.xls" we have, apart from Number, the Series and Category columns. I need to compare both Series and Category to the same columns in "conversion table.xls" and, in the third workbook,
output the matching Number from "source file.xls" and Group from "conversion table.xls" according to the matching algorithm.
I am attaching the files:
source file.xls
conversion table.xls
final list.xls
All three should be different workbooks and the first two will need to be closed. Also very important: the source file can have many names, so it would be great to implement an open dialog box to
load the data from "source file.xls". All data is in text format.
Thanks for any help or advice.
My formula is returning #N/A when there is no match in my data source file, meaning I have an account name on my main sheet where the formula resides, but that account
name does not exist on the data source sheet, so the formula returns #N/A. How can I get the formula to return a zero instead? I played around with IF(ISNA) and ISERROR but could not get them to work. I got ISERROR
to identify accounts not present on the data source sheet, but would rather just have the formula return a zero. Any suggestions?
=INDEX('data source sheet'!$B$1:$C$500,MATCH(A164,'data source sheet'!$A$1:$A$500,0),MATCH($C$1,'data source sheet'!$B$1:$C$1,0))
Hello group,
I've worked with excel for many years, but I could use some help throwing
this together. The background:
I have two sheets within the same workbook. One sheet has check numbers and
values that have been issued by the company. The other sheet has that same
information, but only what has been reported back to us by the bank. I'm
working on streamlining the comparison between the two sheets to more easily
see what checks are outstanding (have not been cashed at the bank). Also,
if the check has been cashed, I want to compare the value that the bank
recorded with the value that our company recorded to ensure that they match.
My problem is that the table is going to grow as the year goes on and I'm
not sure how to get the VLOOKUP table array to grow with it. In the
following formula, F5 is the cell that contains the check number on our
company detail sheet, and 'Bank Detail'!B6:C17206 is the table array where the
bank's check numbers and values are stored:
=IF(F5="","",IF(ISNA(VLOOKUP(F5,'Bank Detail'!$B$6:$C$17206,1,FALSE)),"No Match","Match"))
Is there a way to use the INDIRECT function and/or the ROW function to
update the row reference to 17206 (the footer row in my bank detail sheet)?
This way, as more bank detail is added, that row reference will remain at
the bottom of the list.
I know the syntax doesn't work, but I would like it to do this:
=IF(F5="","",IF(ISNA(VLOOKUP(F5,'Bank Detail'!$B$6:$C$ROW('Bank Detail'!C17206),1,FALSE)),"No Match","Match"))
Maybe there's a better way to go about this. I'm open to suggestions if the
community has any. Right now, each sheet has the following columns:
1.) Check Number
2.) Value
3.) Match? - if the check number is found on the other sheet, "Match" is
entered into the cell, otherwise "No Match" is entered
4.) Amount of non-matching Checks - If Cell "Match?" = "No Match", Value
amount is entered into the cell
5.) Amount of Matching Check from other sheet if Values differ - If a check
number match is found on the other sheet, this cell compares the two values
and returns the other sheet's value only if the two values don't match.
Any help/comments/suggestions will be appreciated.
This formula below does an OK job, but it doesn't have the ability to identify the last row of the Final Report tab. I would manually have to adjust the "119", which defines the last row, in order
for it to work. I did try to define the named range =OFFSET('FINAL REPORT'!$A$1,0,0,COUNTA('FINAL REPORT'!$A:$A),1) but no luck.
=INDEX('FINAL REPORT'!C:C,MATCH($A$3,'FINAL REPORT'!$A$2:$A$119,1)+MATCH($B3,INDIRECT(CONCATENATE("'FINAL REPORT'!B",MATCH($A$3,'FINAL REPORT'!$A$2:$A$119,0)+1,":B700")),0),1)
Numeral system, any of various sets of symbols and the rules for using them to represent numbers, which are used to express how many objects are in a given set. Thus the idea of “oneness” can be
represented by the Roman numeral I, by the Greek letter alpha α (the first letter) used as a numeral, by the Hebrew letter aleph (the first letter) used as a numeral, or by the modern numeral 1,
which is Hindu-Arabic in origin.
A brief treatment of numeral systems follows. For further discussion, see numerals and numeral systems: Numeral systems.
Very likely the earliest system of written symbols in ancient Mesopotamia was a system of symbols for numbers. Modern numeral systems are place-value systems—that is, the value of the symbol depends
upon the position or place of the symbol in the representation; for example, the 2 in 20 and 200 represent two tens and two hundreds, respectively. Most ancient systems, such as the Egyptian,
Roman, Hebrew, and Greek numeral systems, did not have this positional characteristic, and this complicated arithmetical calculations. Other systems, however, including the Babylonian, one
version each of the Chinese and Indian, as well as the Mayan system, did employ the principle of place value. The most commonly used numeral system
is the decimal-positional numeral system, the decimal referring to the use of 10 symbols—0, 1, 2, 3, 4, 5, 6, 7, 8, 9—to construct all numbers. This was an invention of the Indians, perfected by
medieval Islam. Two other common positional systems are used in computers and computing science, namely the binary system, with its two symbols—0,1—and the hexadecimal system, with its 16 symbols—0,
1, 2, . . ., 9, A, B, . . ., F.
The Friday Fillip: Birthday (Slaw)
Chances are really good that a few of you among the thousands (yes, thousands) reading this are celebrating a birthday today. It can’t be a dead cert, of course, because there’s no law of nature that
requires that anyone in our readership be born on a ninth of November. There is, though, a law (or maybe a regulation) of nature that seems to dictate that approximately the same number of people get
born every day. And that being the case, I should be able to estimate the chances that some of you will indeed be blowing out candles today.
Trouble is I’m the next best thing to innumerate. But let me essay this problem anyway and invite correction, otherwise known as help. We have to assume that the same number of people are born each
day, and that there’s no bias as to whether a particular birth date influences if you’re likely to read Slaw or not. Assuming further, then, for argument’s sake, that 6000 people read Slaw each day,
it ought to be a fact (i.e. not statistics but "mere counting" as a statistician I know once called it) that at least 16 Slaw readers (6000 ÷ 365) will be blowing out candles today.
Of course, all of this assuming — from the assumption that our population of readers is a true representation of the whole population and on to the assumption that each day’s crop of kids is the same
size — has to produce wobbly results. Which is where probability and statistics come in.
And when it comes to those two and the topic of birthdays, the commonest get-together is a poser that goes by the name of the birthday problem. Simply put, the question is this: what is the size of
the group necessary to make the chances 50-50 that two people will have the same birthday. I won’t keep you in suspense. The answer is 23. (I know. In its seeming arbitrariness, it’s like the answer
to the question of what is the meaning of life, the universe and everything ^[1], which happens, as I’m sure you know, to be 42.) All is explained really well in a New York Times Opinionator piece ^
[2] from last month by Steven Strogatz. Thanks to one of the leads in his article, I went to the WolframAlpha page for the “birthday problem calculator” ^[3] where there’s a device that lets you
input the number of people and that then kicks out the probabilities that two (or three) of them will have the same birthday. (e.g. 40 people gives you an 89% likelihood that 2 will have the same
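Those figures are easy to check for yourself. The probability that at least two of n people share a birthday is one minus the probability that all n birthdays are distinct; a few lines of Python (my own back-of-the-envelope check, assuming 365 equally likely days and ignoring leap years) reproduces both the 23 and the 89%:

```python
# The birthday problem: chance that at least two of n people share a
# birthday, assuming 365 equally likely days (leap years ignored).

def p_shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

print(round(p_shared_birthday(22), 3))  # just under 50-50
print(round(p_shared_birthday(23), 3))  # first n where the odds pass 50-50
print(round(p_shared_birthday(40), 3))  # roughly 0.89, matching the 89% above
```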
I’ll leave you with Beatle Paul McCartney’s performance of the Lennon and McCartney song, (They say it’s your) Birthday, and my wishes for many happy returns of the day. You know who you are. | {"url":"http://www.slaw.ca/2012/11/09/the-friday-fillip-birthday/print/","timestamp":"2014-04-17T15:26:38Z","content_type":null,"content_length":"6332","record_id":"<urn:uuid:a61fa89c-8fda-4417-a53d-f58474489142>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Z-transform done quick
The Z-transform is a standard tool in signal processing; however, most descriptions of it focus heavily on the mechanics of manipulation and don’t give any explanation for what’s going on. In this
post, my goal is to cover only the basics (just enough to understand standard FIR and IIR filter formulations) but make it a bit clearer what’s actually going on. The intended audience is people
who’ve seen and used the Z-transform before, but never understood why the terminology and notation is the way it is.
In an earlier draft, I tried to start directly with the standard Z-transform, but that immediately brings up a bunch of technicalities that make matters more confusing than necessary. Instead, I’m
going to start with a simpler setting: let’s look at polynomials.
Not just any polynomials, mind. Say we have a finite sequence of real or complex numbers $a_0, \dots, a_n$. Then we can define a corresponding polynomial that has the values in that sequence as its coefficients:
$\displaystyle A := \sum_{i=0}^n a_i x^i$.
and of course there’s also a corresponding polynomial function A(x) that plugs in concrete values for x. Polynomials are closed under addition and scalar multiplication (both componentwise, or more
accurately in this case, coefficient-wise) so they are a vector space and we can form linear combinations $\lambda A + \mu B$. That’s reassuring, but not particularly interesting. What is interesting
is the other fundamental operation we can do with polynomials: multiply them. Let's say we multiply two polynomials A and B to form a third polynomial C, and we want to find the corresponding coefficients:
$\displaystyle AB = \left( \sum_{i=0}^n a_i x^i \right) \left( \sum_{i=0}^m b_i x^i \right) = \sum_{i=0}^n \sum_{j=0}^m a_i b_j x^{i+j} = \sum_{k=0}^{n+m} c_k x^k = C$.
To find the coefficients of C, we simply need to sum across all the combinations of indices i, j such that i+j=k, which of course means that j=k-i:
$\displaystyle c_k = \sum_i a_i b_{k-i}$.
When I don’t specify a restriction on the summation index, that simply means “sum over all i for which the right-hand side is well-defined”. And speaking of notation, in the future, I don’t want to
give a name to every individual expression just so we can look at the coefficients; instead, I’ll just write $[x^k]\ AB$ to denote the coefficient of x^k in the product of A and B – which is of
course our c[k]. Anyway, this is simply the discrete convolution of the two input sequences, and we’re gonna milk this connection for all it’s worth.
Knowing nothing but this, we can already do some basic filtering in this form if we want to: Suppose our a[i] encode a sampled sound effect. A again denotes the corresponding polynomial, and let’s
say $B = (1 + x)$, corresponding to the sequence (1, 1). Then C=AB computes the convolution of A with the sequence (1, 1), i.e. each sample and its immediate successor are summed (this is a simple
but unnormalized low-pass filter). Now so far, we’ve only really substituted one way to write convolution for another. There’s more to the whole thing than this, but for that we need to broaden our
setting a bit.
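Before we do, the product-of-polynomials view of convolution is easy to sanity-check numerically. Here's a minimal sketch (mine, not from the original post) that implements the coefficient formula above directly and applies the (1, 1) summing filter to a short signal:

```python
# Discrete convolution, written exactly as the coefficient formula
# c_k = sum_i a_i * b_{k-i} for the product of two polynomials A and B.

def convolve(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# The simple unnormalized low-pass from the text: B = 1 + x, i.e. (1, 1).
a = [3, 1, 4, 1, 5]
print(convolve(a, [1, 1]))  # each sample summed with its neighbour
```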
Generating functions
The next step is to get rid of the fixed-length limitation. Instead of a finite sequence, we're now going to consider potentially infinite sequences $(a_i)_{i=0}^\infty$. A finite sequence is simply
one where all but finitely many of the a[i] are zero. Again, we can create a corresponding object that captures the whole sequence – only instead of a polynomial, it’s now a power series:
$\displaystyle A := \sum_{i=0}^\infty a_i x^i$.
And the corresponding function A(x) is called a[i]‘s generating function. Now we’re dealing with infinite series, so if we want to plug in an actual value for x, we have to worry about convergence
issues. For the time being, we won’t do so, however; we simply treat the whole thing as a formal power series (essentially, an “infinite-degree polynomial”), and all the manipulations I’ll be doing
are justified in that context even if the corresponding series don’t converge.
Anyway, the properties described above carry over: we still have linearity, and there’s a multiplication operation (the Cauchy product) that is the obvious generalization of polynomial multiplication
(in fact, the formula I’ve written above for the c[k] still applies) and again matches discrete convolution. So why did I start with polynomials in the first place if everything stays pretty much the
same? Frankly, mainly so I don’t have to whip out both infinite sequences and power series in the second paragraph; experience shows that when I start an article that way, the only people who
continue reading are the ones who already know everything I’m talking about anyway. Let’s see whether my cunning plan works this time.
So what do we get out of the infinite sequences? Well, for once, we can now work on infinite signals – or, more usually, signals with a length that is ultimately finite, but not known in advance, as
occurs with real-time processing. Above we saw a simple summing filter, generated from a finite sequence. That sequence is the filter’s “impulse response”, so called because it’s the result you get
when applying the filter to the unit impulse signal (1 0 0 …). (The generating function corresponding to that signal is simply “1″, so this shouldn’t be much of a surprise). Filters where the impulse
response has finite length are called “finite impulse response” or FIR filters. These filters have a straight-up polynomial as their generating function. But we can also construct filters with an
infinite impulse response – IIR filters. And those are the filters where we actually get something out of going to generating functions in the first place.
Let’s look at the simplest infinite sequence we can think of: (1 1 1 …), simply an infinite series of ones. The corresponding generating function is
$\displaystyle G_1(x) = \sum_{i=0}^\infty x^i$
And now let’s look at what we get when we convolve a signal a[i] with this sequence:
$\displaystyle s_k := [x^k]\ A G_1(x) = \sum_{i=0}^k a_i \cdot 1 = \sum_{i=0}^k a_i$
Expanding out, we see that s[0] = a[0], s[1] = a[0] + a[1], s[2] = a[0] + a[1] + a[2] and so forth: convolution with G[1] generates a signal that, at each point in time, is simply the sum of all
values up to that time. And if we actually had to compute things this way, this wouldn’t be very useful, because our filter would keep getting slower over time! Luckily, G[1] isn’t just an arbitrary
function – it’s a geometric series, which means that for concrete values x, we can compute G[1](x) as:
$\displaystyle G_1(x) = \sum_{i=0}^\infty x^i = \frac{1}{1 - x}$
and more generally, for arbitrary $c \neq 0$
$\displaystyle G_c(x) = \sum_{i=0}^\infty (cx)^i = \sum_{i=0}^\infty c^i x^i = \frac{1}{1 - cx}$.
If we apply the identity the other way round, we can turn such an expression of x back into a power series; in particular, when dealing with formal power series, the left-hand side is the definition
of the expression on the right-hand side. This notation also suggests that G[1] is the inverse (wrt. convolution) of (1 – x), and more generally that G[c] is the inverse of (1 – cx). Verifying this
makes for a nice exercise.
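That exercise can also be checked numerically: truncate the series for G[c], convolve with the coefficient sequence (1, -c) of (1 - cx), and everything up to the truncation order cancels except the leading 1. A quick sketch of my own:

```python
# Numerical check that (1 - c x) is the convolution inverse of
# G_c(x) = sum_i c^i x^i: multiplying a truncated series for G_c by the
# coefficients (1, -c) gives 1 followed by zeros, up to the truncation
# order (the single -c^8 leftover lives past index 7).

def convolve(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

c = 0.5
g_c = [c**i for i in range(8)]       # truncated series: 1, c, c^2, ...
product = convolve(g_c, [1, -c])
print(product[:8])                   # [1.0, 0.0, 0.0, ...]
```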
But what does that mean for us? It means that, given the expression
$\displaystyle S = A G_c(x) = \frac{A}{1 - cx}$
we can treat it as the identity between power series that it is and multiply both sides by (1 – cx), giving:
$\displaystyle S (1 - cx) = A$
and thus
$\displaystyle [x^k]\ S (1 - cx) = s_k - c s_{k-1} = a_k \Leftrightarrow s_k = a_k + c s_{k-1}$
i.e. we can compute s[k] in constant time if we’re allowed to look at s[k-1]. In particular, for the c=1 case we started with, this just means the obvious thing: don’t throw the partial sum away
after every sample, instead just keep adding the most recent sample to the running total.
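That recurrence is the whole implementation: a one-pole IIR filter that convolves with the infinite impulse response (1, c, c², ...) using constant work per sample. A small sketch (assumptions and example values mine):

```python
# One-pole IIR filter: s_k = a_k + c * s_{k-1}, i.e. convolution with the
# infinite impulse response (1, c, c^2, ...) in constant time per sample.

def one_pole(signal, c):
    out, s = [], 0.0
    for a in signal:
        s = a + c * s
        out.append(s)
    return out

print(one_pole([1.0, 0.0, 0.0, 0.0], 0.5))  # impulse response: 1, c, c^2, c^3
print(one_pole([1, 1, 1, 1], 1.0))          # c = 1: the running sum 1, 2, 3, 4
```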
And here’s the thing: that’s everything you need to compute convolutions with almost any sequence that has a rational generating function, i.e. it’s a quotient of polynomials $P(x) / Q(x)$. Using the
same trick as above, it’s easy to see what that means computationally. Say that $P(x) = p_0 + p_1 x + \cdots + p_n x^n$ and $Q(x) = q_0 + q_1 x + \cdots + q_m x^m$. If our signal has the generating
function A(x), then computing the filtered signal S boils down to evaluating $S(x) := A(x) P(x) / Q(x)$. Along the same lines as before, we have
$\displaystyle [x^k]\ S (q_0 + q_1 x + \cdots + q_m x^m) = [x^k] A (p_0 + \cdots + p_n x^n)$
$\displaystyle \Leftrightarrow q_0 s_k + q_1 s_{k-1} + \cdots + q_m s_{k-m} = p_0 a_k + \cdots + p_n a_{k-n}$
$\displaystyle \Leftrightarrow q_0 s_k = \left(\sum_{j=0}^n p_j a_{k-j}\right) - \left(\sum_{j=1}^m q_j s_{k-j}\right)$
$\displaystyle \Rightarrow s_k = \frac{1}{q_0} \left(\sum_{j=0}^n p_j a_{k-j} - \sum_{j=1}^m q_j s_{k-j}\right)$.
So again, we can compute the signal incrementally using a fixed amount of work (depending only on n and m) for every sample, provided that q[0] isn’t zero. The question is, do these rational
functions still have a corresponding series expansion? After all, this is what we need to employ generating functions in the first place. Luckily, the answer is yes, again provided that q[0] isn’t
zero. I’ll skip describing how exactly this works since we’ll be content to deal directly with the factored rational function form of our generating functions from here on out; if you want more
details (and see just how useful the notion of a generating function turns out to be for all kinds of problems!), I recommend you look at the excellent “Concrete Mathematics” by Graham, Knuth and
Patashnik or the by now freely downloadable “generatingfunctionology” by Wilf.
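For reference, the last recurrence above translates directly into code: a naive direct-form IIR filter driven by the numerator coefficients p and denominator coefficients q. (This is essentially the difference equation that library routines like scipy.signal.lfilter compute, though real implementations are considerably more careful.) The sketch and its example values are my own:

```python
# Direct-form IIR filter, straight from the recurrence
#   s_k = (1/q0) * ( sum_j p[j]*a[k-j] - sum_{j>=1} q[j]*s[k-j] )
# where P(x) = p0 + p1 x + ... and Q(x) = q0 + q1 x + ... are the
# numerator and denominator of the rational generating function P/Q.

def iir_filter(p, q, signal):
    out = []
    for k in range(len(signal)):
        acc = sum(p[j] * signal[k - j] for j in range(len(p)) if k - j >= 0)
        acc -= sum(q[j] * out[k - j] for j in range(1, len(q)) if k - j >= 0)
        out.append(acc / q[0])
    return out

# P = 1, Q = 1 - x: convolution with (1, 1, 1, ...), i.e. the running sum.
print(iir_filter([1.0], [1.0, -1.0], [3.0, 1.0, 4.0, 1.0]))
# Q = 1 degenerates to a plain FIR filter, here the (1, 1) summing filter.
print(iir_filter([1.0, 1.0], [1.0], [3.0, 1.0, 4.0, 1.0]))
```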
At last, the Z-transform
At this point, we already have all the theory we need for FIR and IIR filters, but with a non-standard notation, motivated by the desire to make the connection to standard polynomials and generating
functions more explicit. Let’s fix that up: in signal processing, it’s customary to write a signal x as a function $x : \mathbb{Z} \rightarrow \mathbb{R}$ (or $x : \mathbb{Z} \rightarrow \mathbb{C}$
), and it’s customary to write the argument in square brackets. So instead of dealing with sequences that consist of elements x[n], we now have functions with values at integer locations x[n]. And
the (unilateral) Z-transform of our signal x is now the function
$\displaystyle X(z) = \mathcal{Z}(x) = \sum_{n=0}^{\infty} x[n] z^{-n}$.
in other words, it’s basically a generating function, but this time the exponents are negative. I also assume that the signal is x[n] = 0 for all n<0, i.e. the signal starts at some defined point and
we move that point to 0. This doesn’t make any fundamental difference for the things I’ve discussed so far: all the properties discussed above still hold, and indeed all the derivations will still
work if you mechanically substitute x^k with z^-k. In particular, anything involving convolutions still works exactly the same. However, it does make a difference if you actually plug in concrete values
for z, which we are about to do. Also note that our variable is now z, not x. Customarily, “z” is used to denote complex variables, and this is no exception – more in a minute. Next, the Z-transform
of our filter’s impulse response (which is essentially the filter’s generating function, except now we evaluate at 1/z) is called the “transfer function” and has the general form
$\displaystyle H(z) = \frac{P(z^{-1})}{Q(z^{-1})} = \frac{Y(z)}{X(z)}$
where P and Q are the same polynomials as above; these polynomials in z^-1 are typically written Y(z) and X(z) in the DSP literature. You can factorize the numerator and denominator polynomials to
get the zeroes and poles of a filter. They’re important concepts in IIR filter design, but fairly incidental to what I’m trying to do (give some intuition about what the Z-transform does and how it
works), so I won’t go into further detail here.
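For what it’s worth, computing those zeroes and poles numerically is a one-liner once you have the coefficients. A NumPy sketch, with made-up example coefficients (not taken from anything above):

```python
import numpy as np

# H(z) = P(z^-1)/Q(z^-1), with coefficients listed constant-term first
# in the variable w = z^-1. np.roots wants highest power first, so we
# reverse. Example coefficients are made up for illustration:
p = [1.0, 0.0, -0.25]   # P(w) = 1 - 0.25 w^2, zeroes at w = +/- 2
q = [1.0, -0.9]         # Q(w) = 1 - 0.9 w,   root at w = 1/0.9

zeros_w = np.roots(p[::-1])
poles_w = np.roots(q[::-1])

# Map back to the z plane via z = 1/w (for the nonzero roots):
zeros_z = 1.0 / zeros_w     # zeroes of the filter: +/- 0.5
poles_z = 1.0 / poles_w     # pole of the filter: 0.9
```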
The Fourier connection
One last thing: The relation of this all to frequency space, or: what do our filters actually do to frequencies? For this, we can use the discrete-time Fourier transform (DTFT, not to be confused
with the Discrete Fourier Transform or DFT). The DTFT of a general signal x is
$\displaystyle \hat{X}(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-i\omega n}$
Now, in our case we’re only considering signals with x[n]=0 for n<0, so we get
$\displaystyle \hat{X}(\omega) = \sum_{n=0}^\infty x[n] e^{-i\omega n} = \sum_{n=0}^\infty x[n] \left(e^{i\omega}\right)^{-n} = X(e^{i\omega})$
which means we can compute the DTFT of a signal by evaluating its Z-transform at exp(iω) – assuming the corresponding series expression converges. Now, if the Z-transform of our signal is in
general series form, this is just a different notation for the same thing. But for our rational transfer functions H(z), this is a big deal, because evaluating their values at given complex z is easy
– it’s just a rational function, after all.
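Concretely, that evaluation on the unit circle looks like this. A NumPy sketch (the one-pole filter is again an illustrative assumption, not something from the text above):

```python
import numpy as np

def freq_response(p, q, omegas):
    """Evaluate H(e^{i w}) = P(e^{-i w}) / Q(e^{-i w}) for a rational
    transfer function with coefficient lists p, q in z^-1 (constant
    term first)."""
    w = np.exp(-1j * np.asarray(omegas))   # w = z^-1 on the unit circle
    P = sum(c * w**j for j, c in enumerate(p))
    Q = sum(c * w**j for j, c in enumerate(q))
    return P / Q

# One-pole lowpass H(z) = 1 / (1 - 0.5 z^-1): gain 2 at DC (omega=0)
# and 2/3 at the Nyquist frequency (omega=pi) -- it attenuates highs.
H = freq_response([1.0], [1.0, -0.5], [0.0, np.pi])
```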
In fact, since we know that polynomial (and series) multiplication corresponds to convolution, we can now also easily see why convolution filters are useful to modify the frequency response (Fourier
transform) of a signal: If we have a signal x with Z-transform X and the transfer function of a filter H, we get:
$\displaystyle (X \cdot H)(e^{i\omega}) = X(e^{i\omega}) H(e^{i\omega})$
and in particular
$\displaystyle |(X \cdot H)(e^{i\omega})| = |X(e^{i\omega})| |H(e^{i\omega})|$
The first of these two equations is the discrete-time convolution theorem for Fourier transforms of signals: the DTFT of the convolution of the two signals is the point-wise product of the DTFTs of
the original signal and the filter. The second shows us how filters can amplify or attenuate individual frequencies: if |H(e^iω)| > 1, frequency ω will be amplified in the filtered signal, and if it’s less than 1, it will be attenuated.
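If you want to convince yourself numerically, the convolution theorem is easy to check for short sequences. A NumPy sketch with arbitrary made-up values:

```python
import numpy as np

def dtft(x, omega):
    """DTFT of a finite causal sequence x, evaluated at one frequency."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

x = np.array([1.0, 2.0, 0.5])     # some signal
h = np.array([0.5, 0.5])          # two-tap averaging filter
y = np.convolve(x, h)             # the filtered signal

omega = 0.7                       # arbitrary test frequency
lhs = dtft(y, omega)              # DTFT of the convolution...
rhs = dtft(x, omega) * dtft(h, omega)  # ...equals the product of DTFTs
# and consequently abs(lhs) equals abs(X(e^{i w})) * abs(H(e^{i w})).
```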
Conclusion and further reading
The purpose of this post was to illustrate a few key concepts and the connections between them:
• Polynomial/series multiplication and convolution are the same thing.
• The Z-transform is very closely related to generating functions, an extremely powerful technique for manipulating sequences.
• In particular, the transfer function of a filter isn’t just some arbitrary syntactic convention to tie together the filter coefficients; there’s a direct connection to the corresponding sequence (the filter’s impulse response) via its generating function.
• The Fourier transform of filters is directly tied to the behavior of H(z) in the complex plane; computing the DTFT of an IIR filter’s impulse response directly would get messy, but the factored
form of H(z) makes it easy.
• With this background, it’s also fairly easy to see why filters work in the first place.
I intentionally cover none of these aspects deeply; my experience is that most material on the subject does a great job of covering the details, at the expense of making it harder to see the big
picture, so I wanted to try doing it the other way round. More details on series and generating functions can be found in the two books I cited above, and a good introduction to digital filters that
supplies the details I omitted is Smith’s Introduction to Digital Filters.
1. Great article. But there is a typo in the last formula of the section ”Generating functions”, check the parentheses.
2. The equation
$\displaystyle s_k = \frac{1}{q_0}\left(\sum_{j=0}^n p_j a_{k-j}\right)-\left(\sum_{j=1}^m q_j s_{k-j}\right)$
should be
$\displaystyle s_k = \frac{1}{q_0}\left(\sum_{j=0}^n p_j a_{k-j}-\sum_{j=1}^m q_j s_{k-j}\right)$
Wilkeson Math Tutor
...Having spent 6+ years in undergraduate studies and now preparing to advance to medical school, I have seen many different approaches to studying, and together we can find one that benefits your
child the most. I feel communication is the strongest skill required for good tutoring. I have been h...
25 Subjects: including statistics, geometry, trigonometry, probability
...No, it is not. They are tested on advanced vocabulary and the ability to solve math puzzles at high speed, as well as their skill in writing an entertaining essay and mastery of the mechanics
of writing good English. In tutoring for the SAT, I show students: - the time-management skills that by...
38 Subjects: including prealgebra, physical science, anthropology, linear algebra
...Having taught in the public schools for several years, I am called upon to teach not just subject matter, but also study skills that optimize the learning of that subject matter. There have
been a variety of study skills to teach, which involve organization of information, retention of informati...
13 Subjects: including prealgebra, algebra 1, algebra 2, trigonometry
...I graduated with honors from Eastern Washington University where I double majored in Physics and Theatre. I then went on to earn my masters degree in education. My passion for teaching math
stems from the fact that it was never easy for me.
6 Subjects: including algebra 2, algebra 1, geometry, prealgebra
...I created groups and access to view others inboxes or calendars after the owner granted permission to view. I could easily teach someone how to use it. I've been working in a Windows
environment for more than 10 years.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
Terms, definitions, and concepts used in land surveying.
Land Terms
1. The gradual and natural growth of land resulting from forces of nature, as in sediment deposition by a river or stream.
2. The incremental augmentation or accrual of something, such as interest on an investment.
Have something to add? How useful is this Landterm? Bookmark this term
Area measurement (square measure) used primarily in the United States. One (1) acre is equal to 43,560 square feet, 4,046.86 square meters, or 0.4047 hectares.
One (1) acre equals 43,560 square feet.
One (1) acre equals 0.4047 hectares.
One (1) acre equals 10 square chains.
One (1) acre equals 160 square rods.
One (1) acre equals 160 square poles.
One (1) acre equals 160 square perches.
One (1) acre equals 4,840 square yards.
One (1) acre equals 4,046.856 square meters.
One (1) acre equals 0.0016 square miles.
One (1) acre equals 0.0016 sections.
One (1) acre equals 0.004 square kilometers.
One (1) acre equals 0.0000434 townships.
One (1) acre equals 1.183676 arpents.
One (1) acre equals 4.0 roods.
Unit of area measurement (usually applied to land) equal to approximately 5/6th of an acre, used in France, Louisiana, and Quebec. One (1) arpent is equal to 0.8448 acres or about 36,800 square feet.
For more arpent conversions, see arpent equivalents or conversions or converting arpents.
One (1) arpent is equal to approximately 5/6th of an acre.
One (1) arpent is equal to 0.8448 acres.
One (1) arpent is equal to 36,800.6256 square feet.
One (1) arpent is equal to 4,088.9584 square yards.
One (1) arpent is equal to 135.1716 square rods, poles, or perches.
One (1) arpent is equal to 8.4482 square chains.
One (1) arpent is equal to 3.3793 roods.
One (1) arpent is equal to 0.0013 square miles or sections.
One (1) arpent is equal to 0.000037 townships.
One (1) arpent is equal to 3,418.89 square meters.
One (1) arpent is equal to 0.3419 hectares.
One (1) arpent is equal to 0.0034 square kilometers.
Definition: Unimproved land; land in its unused natural state prior to development or construction of improvements such as streets, lighting, sewers, and the like. Same as raw land.
Terms, Definitions, and Concepts: Agriculture, Appraisal, Auction, Construction and Building, Management, Real Estate, Survey, Taxes and Taxation, Zoning
Bounds refers to definite boundary markers such as natural landmarks. Natural landmarks may often consist of such things as a specific tree or stream. Part of an old system of measuring land and
establishing boundaries, commonly referred to as Metes and Bounds.
A unit of length used in the metric system that is equal to 1/100 (0.01) meters. One (1) centimeter (cm) is equal to 0.3937 inches (in). For more information, see Centimeter equivalents and
conversions or the various Converting centimeters to ... entries.
One (1) centimeter (cm) is equal to 0.3937 inches (in).
One (1) centimeter is equal to 0.01 meters (m).
One (1) centimeter is equal to 0.00001 kilometers (km).
One (1) centimeter is equal to 0.0328 feet.
One (1) centimeter is equal to 0.0497 links.
One (1) centimeter is equal to 0.0109 yards.
One (1) centimeter is equal to 0.002 rods, poles, or perches.
One (1) centimeter is equal to 0.0005 chains.
One (1) centimeter is equal to 0.00005 furlongs.
One (1) centimeter is equal to 0.000006 miles.
See the various Converting centimeters to... entries for conversion examples.
A usually metal chain used to measure length and distance. Less commonly used in land surveys than a Gunter's or surveyor's chain, the engineer's or Ramsden's chain is 100 feet in length, with 100
1-foot links. The terms "engineer's chain" and "Ramsden's chain" apply primarily to the measuring utensil itself and not to any particular unit of length.
A common forestry and land survey term that is equivalent to 66 linear feet. Often referred to as simply "chain", this term is more formally known as the surveyor's or Gunter's chain. A chain is
broken into 100 equal parts, or links. One (1) mile is equal to 80 chains. A one (1) mile square piece of land (one section or 640 acres) is 80 chains on each side. Although an engineer's or
Ramsden's chain is also used to measure length in surveys, the generic term "chain" when used in reference to land measurements refers to the unit of length (66 feet) represented by the Gunter's or
surveyor's chain. For more chain conversions, equivalents, and examples, see Chain equivalents and conversions or the various Converting chains to... examples.
A chain is broken into 100 equal parts, or links. A one (1) mile square piece of land (one (1) section or 640 acres) is 80 chains on each side.
One (1) chain is equal to 66 feet.
One (1) chain is equal to four (4) rods, poles, or perches.
One (1) chain is equal to 100 links.
One (1) chain is equal to 22 yards.
One (1) chain is equal to 20.12 meters.
One (1) chain is equal to 0.0125 miles.
One (1) chain is equal to 0.0201 kilometers (km).
One (1) chain is equal to 0.1 furlongs.
One (1) chain is equal to 792 inches.
One (1) chain is equal to 2,011.684 centimeters (cm).
See the various Converting chains to... entries for conversion examples.
Old Irish measure of land, defined as the amount of land able to support a horse or cow for a year. An Irish acre of good land is also an approximation of its definition.
For area measurements, square units refers to total area, while units square refers to the number of units on each side (the length of each side of a square). For example, consider two square pieces
of land: one is 7 square miles, and the other is 7 miles square. The total area of the first parcel is 7 square miles. Since the parcel is a square, the length of each side is equal to the square
root of 7 square miles, which is approximately 2.65 miles. In contrast, the length of each side of the second parcel is 7 miles, according to the definition above. The total area is therefore 7 miles
x 7 miles = 49 square miles.
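The arithmetic in this example is easily checked (a trivial Python sketch):

```python
import math

# "7 square miles" is a total area; a square parcel of that area has
# sides of sqrt(7) miles each. "7 miles square" means each side is
# 7 miles, giving a total area of 7 * 7 = 49 square miles.
side_of_7_sq_mi = math.sqrt(7)    # approximately 2.6458 miles per side
area_of_7_mi_square = 7 * 7       # 49 square miles
```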
Plural form of foot (ft). A foot is a unit of length equal to 12 inches, taken from the average length of the human foot. One (1) foot is equal to 0.3048 meters. See also Foot (feet) equivalents and
conversions and the various Converting feet to... entries.
One (1) foot is equal to 12 inches.
One (1) foot is equal to 0.3048 meters.
One (1) foot is equal to 30.48 centimeters.
One (1) foot is equal to 0.0152 chains.
One (1) foot is equal to 0.0015 furlongs.
One (1) foot is equal to 0.0003 kilometers.
One (1) foot is equal to 1.5152 links.
One (1) foot is equal to 0.00019 miles.
One (1) foot is equal to 0.0606 rods, poles, or perches.
One (1) foot is equal to 0.3333 yards.
See the various Converting feet to... entries for conversion examples.
Salem, MA Prealgebra Tutor
Find a Salem, MA Prealgebra Tutor
...Though my initial certificate is expired, my professional license is pending, and I have a Master's Degree in Education Administration. My numerous years experience teaching elementary school
speaks to this, as does my education. I am familiar with the many phonics programs, and most recently taught using a Wilson-based reading and spelling program.
13 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
...Even though I was a math major in college I will only tutor math from 5th grade up to Geometry or Algebra 2 in high school. Although I know I could do more complicated math, middle school math
and algebra are what I love to tutor in. I am very responsible and am always on time for things.
6 Subjects: including prealgebra, geometry, algebra 1, elementary math
...I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and non-scientists, and am enthusiastic about tutoring at the high school
level. I am currently a research associate in materials physics at Harvard, have completed a postdoc in g...
16 Subjects: including prealgebra, calculus, physics, geometry
...If you are in need of assistance for a student struggling in Physics or Math, I am the man for you. I tutor all levels of High School Physics and Math, including AP levels (both B and C for
physics, and AB and BC for calculus). I live in Brighton, and I am willing to travel a few miles to meet s...
9 Subjects: including prealgebra, calculus, physics, geometry
...I will often teach a lesson on (for instance) fractions to an Algebra student if I feel that the skill is not strong, so that we can move forward. I let the student's learning style drive the
lesson for the most part. In order to increase my availability to a larger area, I often tutor in publi...
6 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
Privacy-Preserving Datamining
on Vertically Partitioned Databases
Cynthia Dwork and Kobbi Nissim
Microsoft Research, SVC, 1065 La Avenida, Mountain View CA 94043
{dwork, kobbi}@microsoft.com
Abstract. In a recent paper Dinur and Nissim considered a statistical
database in which a trusted database administrator monitors queries
and introduces noise to the responses with the goal of maintaining data
privacy [5]. Under a rigorous definition of breach of privacy, Dinur and
Nissim proved that unless the total number of queries is sub-linear in the
size of the database, a substantial amount of noise is required to avoid a
breach, rendering the database almost useless.
As databases grow increasingly large, the possibility of being able to
query only a sub-linear number of times becomes realistic. We further
investigate this situation, generalizing the previous work in two impor-
tant directions: multi-attribute databases (previous work dealt only with
single-attribute databases) and vertically partitioned databases, in which
different subsets of attributes are stored in different databases. In addi-
tion, we show how to use our techniques for datamining on published
noisy statistics.
Keywords: Data Privacy, Statistical Databases, Data Mining, Vertically Parti-
tioned Databases.
1 Introduction
In a recent paper Dinur and Nissim considered a statistical database in which
a trusted database administrator monitors queries and introduces noise to the
responses with the goal of maintaining data privacy [5]. Under a rigorous defini-
tion of breach of privacy, Dinur and Nissim proved that unless the total number
of queries is sub-linear in the size of the database, a substantial amount of noise
is required to avoid a breach, rendering the database almost useless1 . However,
when the number of queries is limited, it is possible to simultaneously preserve
privacy and obtain some functionality by adding an amount of noise that is a
function of the number of queries. Intuitively, the amount of noise is sufficiently
large that nothing specific about an individual can be learned from a relatively
small number of queries, but not so large that information about sufficiently
strong statistical trends is obliterated.
¹ For unbounded adversaries, the amount of noise (per query) must be linear in the
size of the database; for polynomially bounded adversaries, Ω(√n) noise is required.
As databases grow increasingly massive, the notion that the database will be
queried only a sub-linear number of times becomes realistic. We further inves-
tigate this situation, significantly broadening the results in [5], as we describe below.
Methodology. We follow a cryptography-flavored methodology, where we con-
sider a database access mechanism private only if it provably withstands any
adversarial attack. For such a database access mechanism any computation over
query answers clearly preserves privacy (otherwise it would serve as a privacy
breaching adversary). We present a database access mechanism and prove its
security under a strong privacy definition. Then we show that this mechanism
provides utility by demonstrating a datamining algorithm.
Statistical Databases. A statistical database is a collection of samples that are
somehow representative of an underlying population distribution. We model
a database as a matrix, in which rows correspond to individual records and
columns correspond to attributes. A query to the database is a set of indices
(specifying rows), and a Boolean property. The response is a noisy version of the
number of records in the specified set for which the property holds. (Dinur and
Nissim consider one-column databases containing a single binary attribute.) The
model captures the situation of a traditional, multiple-attribute, database, in
which an adversary knows enough partial information about records to “name”
some records or select among them. Such an adversary can target a selected
record in order to try to learn the value of one of its unknown sensitive at-
tributes. Thus, the mapping of individuals to their indices (record numbers) is
not assumed to be secret. For example, we do not assume the records have been
randomly permuted.
We assume each row is independently sampled from some underlying distri-
bution. An analyst would usually assume the existence of a single underlying
row distribution D, and try to learn its properties.
Privacy. Our notion of privacy is a relative one. We assume the adversary knows
the underlying distribution D on the data, and, furthermore, may have some a
priori information about specific records, e.g., “p – the a priori probability that
at least one of the attributes in record 400 has value 1 – is .38”. We anlyze
privacy with respect to any possible underlying (row) distributions {Di }, where
the ith row is chosen according to Di . This partially models a priori knowledge
an attacker has about individual rows (i.e. Di is D conditioned on the attacker’s
knowledge of the ith record). Continuing with our informal example, privacy is
breached if the a posteriori probability (after the sequence of queries have been
issued and responded to) that “at least one of the attributes in record 400 has
value 1” differs from the a priori probability p “too much”.
Multi-Attribute Sub-Linear Queries (SuLQ) Databases. The setting studied in [5],
in which an adversary issues only a sublinear number of queries (SuLQ) to a
single attribute database, can be generalized to multiple attributes in several
natural ways. The simplest scenario is of a single k-attribute SuLQ database,
queried by specifying a set of indices and a k-ary Boolean function. The re-
sponse is a noisy version of the number of records in the specified set for which
the function, applied to the attributes in the record, evaluates to 1. A more
involved scenario is of multiple single-attribute SuLQ databases, one for each
attribute, administered independently. In other words, our k-attribute database
is vertically partitioned into k single-attribute databases. In this case, the chal-
lenge will be datamining: learning the statistics of Boolean functions of the at-
tributes, using the single-attribute query and response mechanisms as primitives.
A third possibility is a combination of the first two: a k-attribute database that
is vertically partitioned into two (or more) databases with k1 and k2 (possibly
overlapping) attributes, respectively, where k1 + k2 ≥ k. Database i, i = 1, 2, can
handle ki -ary functional queries, and the goal is to learn relationships between
the functional outputs, eg, “If f1 (α1,1 , . . . , α1,k1 ) holds, does this increase the
likelihood that f2 (α2,1 . . . , α2,k2 ) holds?”, where fi is a function on the attribute
values for records in the ith database.
1.1 Our Results
We obtain positive datamining results in the extensions to the model of [5]
described above, while maintaining the strengthened privacy requirement:
1. Multi-attribute SuLQ databases: The statistics for every k-ary Boolean func-
tion can be learned.² Since the queries here are powerful (any function), it is
not surprising that statistics for any function can be learned. The strength
of the result is that statistics are learned while maintaining privacy.
2. Multiple single-attribute SuLQ databases: We show how to learn the statis-
tics of any 2-ary Boolean function. For example, we can learn the fraction of
records having neither attribute 1 nor attribute 2, or the conditional proba-
bility of having attribute 2 given that one has attribute 1. The key innovation
is a procedure for testing the extent to which one attribute, say, α, implies
another attribute, β, in probability, meaning that Pr[β|α] = Pr[β]+∆, where
∆ can be estimated by the procedure.
3. Vertically Partitioned k-attribute SuLQ Databases: The constructions here
are a combination of the results for the first two cases: the k attributes are
partitioned into (possibly overlapping) sets of size k1 and k2 , respectively,
where k1 + k2 ≥ k; each of the two sets of attributes is managed by a multi-
attribute SuLQ database. We can learn all 2-ary Boolean functions of the
outputs of the results from the two databases.
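To illustrate what “implication in probability” buys, here is a toy Python sketch, emphatically not the procedure of the paper, that estimates ∆ from noisy counting queries in the easier multi-attribute setting, where conjunction queries are available. The zero-mean noise model and all numeric values are assumptions for illustration only.

```python
import random

random.seed(1)
n = 100_000

# Toy data: attribute beta is positively correlated with alpha.
alpha = [random.random() < 0.5 for _ in range(n)]
beta = [random.random() < (0.8 if a else 0.3) for a in alpha]

def noisy_count(predicate, noise=50):
    """Noisy subset count over all rows (assumed zero-mean noise)."""
    true = sum(1 for i in range(n) if predicate(i))
    return true + random.randint(-noise, noise)

c_alpha = noisy_count(lambda i: alpha[i])
c_both = noisy_count(lambda i: alpha[i] and beta[i])
c_beta = noisy_count(lambda i: beta[i])

# Estimate Delta = Pr[beta | alpha] - Pr[beta]; in this toy data the
# true value is 0.8 - (0.5*0.8 + 0.5*0.3) = 0.25.
delta_est = c_both / c_alpha - c_beta / n
```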
We note that a single-attribute database can be simulated in all of the above
settings; hence, in order to preserve privacy, the sub-linear upper bound on
queries must be enforced. How this bound is enforced is beyond the scope of this work.
² Note that because of the noise, statistics cannot be learned exactly. An additive error
on the order of n^{1/2−ε} is incurred, where n is the number of records in the database.
The same is true for single-attribute databases.
Datamining on Published Statistics. Our technique for testing implication in
probability yields surprising results in the real-life model in which confidential
information is gathered by a trusted party, such as the census bureau, who pub-
lishes aggregate statistics. Describing our results by example, suppose the bureau
publishes the results of a large (but sublinear) number of queries. Specifically, for
every, say, triple of attributes (α1 , α2 , α3 ), and for each of the eight conjunctions
of literals over three attributes (ᾱ1ᾱ2ᾱ3, ᾱ1ᾱ2α3, . . . , α1α2α3), the bureau
publishes the result of several queries on these conjunctions. We show how to
construct approximate statistics for any binary function of six attributes. (In
general, using data published for ℓ-tuples, it is possible to approximately learn
statistics for any 2ℓ-ary function.) Since the published data are the results of
SuLQ database queries, the total number of published statistics must be sub-
linear in n, the size of the database. Also, in order to keep the error down,
several queries must be made for each conjunction of literals. These two facts
constrain the values of and the total number k of attributes for which the result
is meaningful.
1.2 Related Work
There is a rich literature on confidentiality in statistical databases. An excellent
survey of work prior to the late 1980’s was made by Adam and Wortmann [2].
Using their taxonomy, our work falls under the category of output perturbation.
However, to our knowledge, the only work that has exploited the opportunities
for privacy inherent in the fact that with massive databases the actual number
of queries will be sublinear is Sect. 4 of [5] (joint work with Dwork). That work
only considered single-attribute SuLQ databases.
Fanconi and Merola give a more recent survey, with a focus on aggregated
data released via web access [10]. Evfimievski, Gehrke, and Srikant, in the Intro-
duction to [7], give a very nice discussion of work in randomization of data, in
which data contributors (e.g., respondents to a survey) independently add noise
to their own responses. A special issue (Vol.14, No. 4, 1998) of the Journal of Of-
ficial Statistics is dedicated to disclosure control in statistical data. A discussion
of some of the trends in the statistical research, accessible to the non-statistician,
can be found in [8].
Many papers in the statistics literature deal with generating simulated data
while maintaining certain quantities, such as marginals [9]. Other widely-studied
techniques include cell suppression, adding simulated data, releasing only a sub-
set of observations, releasing only a subset of attributes, releasing synthetic
or partially synthetic data [13,12], data-swapping, and post-randomization. See
Duncan (2001) [6].
R. Agrawal and Srikant began to address privacy in datamining in 2000 [3].
That work attempted to formalize privacy in terms of confidence intervals (in-
tuitively, a small interval of confidence corresponds to a privacy breach), and
also showed how to reconstruct an original distribution from noisy samples (i.e.,
each sample is the sum of an underlying data distribution sample and a noise
sample), where the noise is drawn from a certain simple known distribution.
This work was revisited by D. Agrawal and C. Aggarwal [1], who noted that it
is possible to use the outcome of the distribution reconstruction procedure to
significantly diminish the interval of confidence, and hence breach privacy. They
formulated privacy (loss) in terms of mutual information, taking into account
(unlike [3]) that the adversary may know the underlying distribution on the data
and “facts of life” (for example, that ages cannot be negative). Intuitively, if the
mutual information between the sensitive data and its noisy version is high, then
a privacy breach occurs. They also considered reconstruction from noisy sam-
ples, using the EM (expectation maximization) technique. Evfimievsky, Gehrke,
and Srikant [7] criticized the usage of mutual information for measuring privacy,
noting that low mutual information allows complete privacy breaches that hap-
pen with low but significant frequency. Concurrently with and independently of
Dinur and Nissim [5] they presented a privacy definition that related the a priori
and a posteriori knowledge of sensitive data. We note below how our definition
of privacy breach relates to that of [7,5].
A different and appealing definition has been proposed by Chawla, Dwork,
McSherry, Smith, and Wee [4], formalizing the intuition that one’s privacy is
guaranteed to the extent that one is not brought to the attention of others. We
do not yet understand the relationship between the definition in [4] and the one
presented here.
There is also a very large literature in secure multi-party computation. In
secure multi-party computation, functionality is paramount, and privacy is only
preserved to the extent that the function outcome itself does not reveal infor-
mation about the individual inputs. In privacy-preserving statistical databases,
privacy is paramount. Functions of the data that cannot be learned while pro-
tecting privacy will simply not be learned.
2 Preliminaries
Notation. We denote by neg(n) (read: negligible) a function that is asymptoti-
cally smaller than any inverse polynomial. That is, for all c > 0, for all sufficiently
large n, we have neg(n) < 1/n^c. We write Õ(T(n)) for T(n) · polylog(n).
2.1 The Database Model
In the following discussion, we do not distinguish between the case of a verti-
cally partitioned database (in which the columns are distributed among several
servers) and a “whole” database (in which all the information is in one place).
We model a database as an n × k binary matrix d = {di,j }. Intuitively, the
columns in d correspond to Boolean attributes α1 , . . . , αk , and the rows in d
correspond to individuals where di,j = 1 iff attribute αj holds for individual i.
We sometimes refer to a row as a record.
Let D be a distribution on {0, 1}k . We say that a database d = {di,j } is
chosen according to distribution D if every row in d is chosen according to D,
independently of the other rows (in other words, d is chosen according to Dn ).
In our privacy analysis we relax this requirement and allow each row i to be
chosen from a (possibly) different distribution Di . In that case we say that the
database is chosen according to D1 × · · · × Dn .
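As an illustration (a sketch only; representing D as a row-sampling function is my own choice, not the paper's), a database chosen according to D^n can be generated as follows:

```python
import random

def sample_database(n, D):
    """Draw an n-row database; each row is chosen independently according to D."""
    return [D() for _ in range(n)]

random.seed(1)
k = 4                                                      # number of Boolean attributes
D = lambda: tuple(random.randint(0, 1) for _ in range(k))  # one row drawn from D
d = sample_database(10, D)                                 # d is a 10-by-4 binary matrix
```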
Statistical Queries. A statistical query is a pair (q, g), where q ⊆ [n] indicates a
set of rows in d and g : {0, 1}k → {0, 1} denotes a function on attribute values.
The exact answer to (q, g) is the number of rows of d in the set q for which g
holds (evaluates to 1):
a_{q,g} = Σ_{i∈q} g(d_{i,1}, ..., d_{i,k}) = |{i : i ∈ q and g(d_{i,1}, ..., d_{i,k}) holds}|.
We write (q, j) when the function g is a projection onto the jth element:
g(x_1, ..., x_k) = x_j. In that case (q, j) is a query on a subset of the entries in
the jth column: a_{q,j} = Σ_{i∈q} d_{i,j}. When we look at vertically partitioned single-
attribute databases, the queries will all be of this form.
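For instance (a sketch with hypothetical data; `exact_answer` is my own name), the exact answer a_{q,g} and a projection query look like:

```python
def exact_answer(d, q, g):
    """Exact answer a_{q,g}: the number of rows i in q for which g holds."""
    return sum(1 for i in q if g(d[i]))

d = [(1, 0), (1, 1), (0, 1)]                      # n = 3 rows, k = 2 attributes
q = {0, 1, 2}
print(exact_answer(d, q, lambda row: row[0]))     # attribute alpha_1 holds for 2 rows
print(exact_answer(d, q, lambda row: row[1]))     # projection (q, 2): column-2 count
```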
Perturbation. We allow the database algorithm to give perturbed (or "noisy")
answers to queries. We say that an answer â_{q,j} is within perturbation E if
|â_{q,j} − a_{q,j}| ≤ E. Similarly, a database algorithm A is within perturbation E
if for every query (q, g)
Pr[|A(q, g) − a_{q,g}| ≤ E] = 1 − neg(n).
The probability is taken over the randomness of the database algorithm A.
2.2 Probability Tool
Proposition 1. Let s_1, ..., s_t be random variables so that |E[s_i]| ≤ α and
|s_i| ≤ β. Then
Pr[ |Σ_{i=1}^t s_i| > λ(α + β)√t + tα ] < 2e^{−λ²/2}.
Proof. Let z_i = s_i − E[s_i]; hence |z_i| ≤ α + β. Using Azuma's inequality³ we
get that Pr[ |Σ_{i=1}^t z_i| ≥ λ(α + β)√t ] ≤ 2e^{−λ²/2}. As |Σ_{i=1}^t s_i| =
|Σ_{i=1}^t z_i + Σ_{i=1}^t E[s_i]| ≤ |Σ_{i=1}^t z_i| + tα, the proposition follows.
3 Privacy Definition
We give a privacy definition that extends the definitions in [5,7]. Our definition
is inspired by the notion of semantic security of Goldwasser and Micali [11]. We
first state the formal definition and then show some of its consequences.
Let p^0_{i,j} be the a priori probability that d_{i,j} = 1 (taking into account that
we assume the adversary knows the underlying distribution D_i on row i). In
general, for a Boolean function f : {0,1}^k → {0,1} we let p^0_{i,f} be the a priori
probability that f(d_{i,1}, ..., d_{i,k}) = 1. We analyze the a posteriori probability
that f(d_{i,1}, ..., d_{i,k}) = 1 given the answers to T queries, as well as all the values
in all the rows of d other than i: d_{i′,j} for all i′ ≠ i. We denote this a posteriori
probability p^T_{i,f}.
[Footnote 3: Let X_0, ..., X_m be a martingale with |X_{i+1} − X_i| ≤ 1 for all 0 ≤ i < m. Let λ > 0
be arbitrary. Azuma's inequality says that then Pr[X_m > λ√m] < e^{−λ²/2}.]
Confidence. To simplify our calculations we follow [5] and define a monotonically-
increasing 1-1 mapping conf : (0, 1) → IR as follows:
conf(p) = log( p / (1 − p) ).
Note that a small additive change in conf implies a small additive change in p.⁴
Let conf^0_{i,f} = log( p^0_{i,f} / (1 − p^0_{i,f}) ) and conf^T_{i,f} = log( p^T_{i,f} / (1 − p^T_{i,f}) ).
We write our privacy requirements in terms of the random variables ∆conf defined as:⁵
∆conf_{i,f} = |conf^T_{i,f} − conf^0_{i,f}|.
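Numerically (a small sketch; the function name is mine), conf and the confidence gain ∆conf look like:

```python
import math

def conf(p):
    """conf(p) = log(p / (1 - p)): monotone and 1-1 from (0, 1) to the reals."""
    return math.log(p / (1 - p))

# a priori vs. a posteriori probability of some target f on row i
p0, pT = 0.50, 0.55
delta_conf = abs(conf(pT) - conf(p0))
print(delta_conf)   # small additive change in p <-> small additive change in conf
```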
Definition 1 ((δ, T )-Privacy). A database access mechanism is (δ, T )-private
if for every distribution D on {0, 1}k , for every row index i, for every function
f : {0, 1}k → {0, 1}, and for every adversary A making at most T queries it
holds that
Pr[∆conf i,f > δ] ≤ neg(n).
The probability is taken over the choice of each row in d according to D, and the
randomness of the adversary as well as the database access mechanism.
A target set F is a set of k-ary Boolean functions (one can think of the
functions in F as being selected by an adversary; these represent information it
will try to learn about someone). A target set F is δ-safe if ∆conf i,f ≤ δ for
all i ∈ [n] and f ∈ F . Let F be a target set. Definition 1 implies that under a
(δ, T )-private database mechanism, F is δ-safe with probability 1 − neg(n).
Proposition 2. Consider a (δ, T)-private database with k = O(log n) attributes.
Let F be the target set containing all the 2^{2^k} Boolean functions over the k at-
tributes. Then, Pr[F is 2δ-safe] = 1 − neg(n).
Proof. Let F̂ be a target set containing all 2^k conjuncts of the k attributes. We
have that |F̂| = poly(n) and hence F̂ is δ-safe with probability 1 − neg(n).
To prove the proposition we show that F is 2δ-safe whenever F̂ is δ-safe. Let
f ∈ F be a Boolean function. Express f as a disjunction of conjuncts of k attributes:
f = c_1 ∨ ... ∨ c_ℓ. Similarly, express ¬f as the disjunction of the remaining 2^k − ℓ
conjuncts: ¬f = d_1 ∨ ... ∨ d_{2^k−ℓ}. (So {c_1, ..., c_ℓ, d_1, ..., d_{2^k−ℓ}} = F̂.)
[Footnote 4: The converse does not hold – conf grows logarithmically in p for p ≈ 0 and logarith-
mically in 1/(1 − p) for p ≈ 1.]
[Footnote 5: Our choice of defining privacy in terms of ∆conf_{i,f} is somewhat arbitrary; one could
rewrite our definitions (and analysis) in terms of the a priori and a posteriori proba-
bilities. Note however that limiting ∆conf_{i,f} in Definition 1 is a stronger requirement
than just limiting |p^T_{i,f} − p^0_{i,f}|.]
We have:
∆conf_{i,f} = | log( (p^T_{i,f} / p^0_{i,f}) · (p^0_{i,¬f} / p^T_{i,¬f}) ) |
            = | log( (Σ_j p^T_{i,c_j} / Σ_j p^0_{i,c_j}) · (Σ_j p^0_{i,d_j} / Σ_j p^T_{i,d_j}) ) |.
Let k̂ maximize |log(p^T_{i,c_k} / p^0_{i,c_k})| and let k̃ maximize |log(p^0_{i,d_k} / p^T_{i,d_k})|. Us-
ing |log( Σ_i a_i / Σ_i b_i )| ≤ max_i |log(a_i / b_i)| we get that ∆conf_{i,f} ≤ |∆conf_{i,c_k̂}| +
|∆conf_{i,d_k̃}| ≤ 2δ, where the last inequality holds as c_k̂, d_k̃ ∈ F̂.
(δ, T)-Privacy vs. Finding Very Heavy Sets. Let f be a target function and let
q ⊆ [n] with |q| = ω(√n). Our privacy requirement implies a δ′ = δ′(δ, Pr[f(α_1, ..., α_k)])
such that it is infeasible to find a "very" heavy set q, that is, a set for which
a_{q,f} ≥ |q| · (δ′ + Pr[f(α_1, ..., α_k)]). Such a δ′-heavy set would violate our privacy
requirement, as it would allow guessing f(α_1, ..., α_k) for a random record in q.
Relationship to the privacy definition of [7]. Our privacy definition extends the
definition of p0-to-p1 privacy breaches of [7]. Their definition is introduced with
respect to a scenario in which several users send their sensitive data to a center.
Each user randomizes his data prior to sending it. A p0-to-p1 privacy breach
occurs if, with respect to some property f, the a priori probability that f holds
for a user is at most p0 whereas the a posteriori probability may grow beyond
p1 (i.e., in a worst-case scenario with respect to the coins of the randomization
operator).
4 Privacy of Multi-Attribute SuLQ databases
We first describe our SuLQ database algorithm, and then prove that it preserves
privacy.
Let T(n) = O(n^c), c < 1, and define R = T(n)/δ² · log^μ n for some μ > 0
(taking μ = 6 will work). To simplify notation, we write d_i for (d_{i,1}, ..., d_{i,k})
and g(i) for g(d_i) = g(d_{i,1}, ..., d_{i,k}) (and later f(i) for f(d_i)).
SuLQ Database Algorithm A
Input: a query (q, g).
1. Let a_{q,g} = Σ_{i∈q} g(i) = Σ_{i∈q} g(d_{i,1}, ..., d_{i,k}).
2. Generate a perturbation value: let (e_1, ..., e_R) ∈_R {0,1}^R and
   E ← Σ_{i=1}^R e_i − R/2.
3. Return â_{q,g} = a_{q,g} + E.
Note that E is a binomial random variable with E[E] = 0 and standard devi-
ation √R/2. In our analysis we will neglect the case where E deviates largely from
zero, as the probability of such an event is extremely small: Pr[|E| > √R log² n] =
neg(n). In particular, this implies that our SuLQ database algorithm A is within
Õ(√T(n)/δ) perturbation.
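The steps above can be sketched in a few lines (the Python representation — rows as tuples, g as a predicate — is my own; the paper specifies only the arithmetic):

```python
import random

def sulq_answer(d, q, g, R):
    """Return the perturbed answer \\hat{a}_{q,g} = a_{q,g} + E.

    d: list of rows (tuples of 0/1 values); q: iterable of row indices;
    g: 0/1-valued predicate on a row; R: the noise parameter T(n)/delta^2 * log^mu n.
    """
    a = sum(g(d[i]) for i in q)                               # exact answer a_{q,g}
    E = sum(random.randint(0, 1) for _ in range(R)) - R / 2   # centered binomial noise
    return a + E

random.seed(0)
d = [(1,)] * 100
ans = sulq_answer(d, range(100), lambda row: row[0], R=400)
print(ans)   # 100 plus noise whose standard deviation is sqrt(400)/2 = 10
```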
We will use the following proposition.
Proposition 3. Let B be a binomially distributed random variable with expec-
tation 0 and standard deviation √R/2. Let L be the random variable that takes the
value log( Pr[B+1] / Pr[B] ). Then:
1. log( Pr[B] / Pr[B+1] ) = log( Pr[−B] / Pr[−B−1] ). For 0 ≤ B ≤ √R log² n this value is
bounded by O(log² n/√R).
2. |E[L]| = O(1/R), where the expectation is taken over the random choice of B.
Proof. 1. The equality follows from the symmetry of the binomial distribution
(i.e., Pr[B] = Pr[−B]).
To prove the bound, consider log(Pr[B]/Pr[B+1]) = log( C(R, R/2+B) / C(R, R/2+B+1) ) =
log( (R/2+B+1) / (R/2−B) ). Using the limits on B and the definition of R we get that this
value is bounded by log(1 + O(log² n/√R)) = O(log² n/√R).
2. Using the symmetry of the binomial distribution we get:
|E[L]| = Σ_{0≤B≤√R log² n} 2^{−R} C(R, R/2+B) · [ log((R/2+B+1)/(R/2+B)) + log((R/2−B+1)/(R/2−B)) ] + neg(n)
       = Σ_{0≤B≤√R log² n} 2^{−R} C(R, R/2+B) · log( 1 + (R+1)/(R²/4 − B²) ) + neg(n) = O(1/R).
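Part 2 can be sanity-checked by computing E[L] exactly from the binomial weights (the value of R and the truncation of the B = R/2 edge term, where the ratio is undefined, are my choices, mirroring the paper's neglect of large deviations):

```python
import math

R = 64
# L = log(Pr[B+1] / Pr[B]) for a centered Binomial(R, 1/2); |E[L]| should be O(1/R)
EL = sum(
    math.comb(R, R // 2 + b) / 2 ** R
    * math.log(math.comb(R, R // 2 + b + 1) / math.comb(R, R // 2 + b))
    for b in range(-R // 2, R // 2)      # omit B = R/2, where Pr[B+1] = 0
)
print(EL, R * abs(EL))                   # |E[L]| is on the order of 1/R
```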
Our proof of privacy is modeled on the proof in Section 4 of [5] (for single-
attribute databases). We extend their proof (i) to queries of the form (q, g) where
g is any k-ary Boolean function, and (ii) to privacy of k-ary Boolean functions
of the attributes.
Theorem 1. Let T(n) = O(n^c) and δ = 1/O(n^{c′}) for 0 < c < 1 and 0 ≤
c′ < c/2. Then the SuLQ algorithm A is (δ, T(n))-private within Õ(√T(n)/δ)
perturbation.
Note that whenever √T(n)/δ < √n, bounding the adversary's number of
queries to T(n) allows privacy with perturbation magnitude less than √n.
Proof. Let T(n) be as in the theorem and recall R = T(n)/δ² · log^μ n for some
μ > 0.
Let the T = T(n) queries issued by the adversary be denoted (q_1, g_1), ..., (q_T, g_T).
Let â_1 = A(q_1, g_1), ..., â_T = A(q_T, g_T) be the perturbed answers to these queries.
Let i ∈ [n] and f : {0,1}^k → {0,1}.
We analyze the a posteriori probability p_ℓ that f(i) = 1 given the answers to
the first ℓ queries (â_1, ..., â_ℓ) and d^{−i} (where d^{−i} denotes the entire database
except for the ith row). Let conf_ℓ = log₂( p_ℓ / (1 − p_ℓ) ). Note that conf_T = conf^T_{i,f}
(of Section 3), and (due to the independence of rows in d) conf_0 = conf^0_{i,f}.
By the definition of conditional probability⁶ we get
p_ℓ / (1 − p_ℓ) = Pr[f(i)=1 | â_1, ..., â_ℓ, d^{−i}] / Pr[f(i)=0 | â_1, ..., â_ℓ, d^{−i}]
              = Pr[â_1, ..., â_ℓ ∧ f(i)=1 | d^{−i}] / Pr[â_1, ..., â_ℓ ∧ f(i)=0 | d^{−i}]
              = Num / Denom.
Note that the probabilities are taken over the coin flips of the SuLQ algorithm
and the choice of d. In the following we analyze the numerator (the denominator
is analyzed similarly).
Num = Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_ℓ ∧ d_i = σ | d^{−i}]
    = Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_ℓ | d_i = σ, d^{−i}] · Pr[d_i = σ].
The last equality follows as the rows in d are chosen independently of each
other. Note that given both d_i and d^{−i}, the random variable â_ℓ is independent
of â_1, ..., â_{ℓ−1}. Hence, we get:
Num = Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_{ℓ−1} | d_i = σ, d^{−i}] · Pr[â_ℓ | d_i = σ, d^{−i}] · Pr[d_i = σ].
Next, we observe that although â_ℓ depends on d_i, the dependence is weak.
More formally, let σ_0, σ_1 ∈ {0,1}^k be such that f(σ_0) = 0 and f(σ_1) = 1. Note
that whenever g_ℓ(σ) = g_ℓ(σ_1) we have that Pr[â_ℓ | d_i = σ, d^{−i}] = Pr[â_ℓ | d_i =
σ_1, d^{−i}]. When, instead, g_ℓ(σ) ≠ g_ℓ(σ_1), we can relate Pr[â_ℓ | d_i = σ, d^{−i}] and
Pr[â_ℓ | d_i = σ_1, d^{−i}] via Proposition 3:
Lemma 1. Let σ, σ_1 be such that g_ℓ(σ) ≠ g_ℓ(σ_1). Then Pr[â_ℓ | d_i = σ, d^{−i}] =
2^{ε_ℓ} Pr[â_ℓ | d_i = σ_1, d^{−i}], where |E[ε_ℓ]| = O(1/R),
ε_ℓ = −(−1)^{g_ℓ(σ_1)} O(log² n/√R) if E_ℓ ≤ 0,
ε_ℓ = (−1)^{g_ℓ(σ_1)} O(log² n/√R) if E_ℓ > 0,
and E_ℓ is the noise that yields â_ℓ when d_i = σ.
Proof. Consider the case g_ℓ(σ_1) = 0 (so g_ℓ(σ) = 1). Writing Pr[â_ℓ | d_i = σ, d^{−i}] =
Pr[E_ℓ = k] and Pr[â_ℓ | d_i = σ_1, d^{−i}] = Pr[E_ℓ = k − 1], the proof follows from
Proposition 3. Similarly for g_ℓ(σ_1) = 1.
Note that the value of ε_ℓ does not depend on σ.
Taking into account both cases (g_ℓ(σ) = g_ℓ(σ_1) and g_ℓ(σ) ≠ g_ℓ(σ_1)) we get
Num = Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_{ℓ−1} | d_i = σ, d^{−i}] · 2^{ε_ℓ(σ)} · Pr[â_ℓ | d_i = σ_1, d^{−i}] · Pr[d_i = σ],
where ε_ℓ(σ) = ε_ℓ if g_ℓ(σ) ≠ g_ℓ(σ_1) and ε_ℓ(σ) = 0 otherwise.
[Footnote 6: I.e., Pr[E_1|E_2] · Pr[E_2] = Pr[E_1 ∧ E_2] = Pr[E_2|E_1] · Pr[E_1].]
Let γ′ be the probability, over d_i, that g_ℓ(d_i) ≠ g_ℓ(σ_1). Letting γ ≥ 1 be such that
2^{ε_ℓ/γ} = γ′ · 2^{ε_ℓ} + (1 − γ′), we have
Num = 2^{ε_ℓ/γ} Pr[â_ℓ | d_i = σ_1, d^{−i}] · Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_{ℓ−1} | d_i = σ, d^{−i}] · Pr[d_i = σ]
    = 2^{ε_ℓ/γ} Pr[â_ℓ | d_i = σ_1, d^{−i}] · Σ_{σ∈{0,1}^k : f(σ)=1} Pr[â_1, ..., â_{ℓ−1} ∧ d_i = σ | d^{−i}]
    = 2^{ε_ℓ/γ} Pr[â_ℓ | d_i = σ_1, d^{−i}] · Pr[â_1, ..., â_{ℓ−1} ∧ f(i) = 1 | d^{−i}]
    = 2^{ε_ℓ/γ} Pr[â_ℓ | d_i = σ_1, d^{−i}] · Pr[f(i) = 1 | â_1, ..., â_{ℓ−1}, d^{−i}] · Pr[â_1, ..., â_{ℓ−1} | d^{−i}]
    = 2^{ε_ℓ/γ} Pr[â_ℓ | d_i = σ_1, d^{−i}] · p_{ℓ−1} · Pr[â_1, ..., â_{ℓ−1} | d^{−i}],
and similarly
Denom = 2^{ε_ℓ/γ̄} Pr[â_ℓ | d_i = σ_0, d^{−i}] · (1 − p_{ℓ−1}) · Pr[â_1, ..., â_{ℓ−1} | d^{−i}].
Putting the pieces together we get that
conf_ℓ = log₂( Num / Denom )
       = conf_{ℓ−1} + (ε_ℓ/γ − ε_ℓ/γ̄) + log₂( Pr[â_ℓ | d_i = σ_1, d^{−i}] / Pr[â_ℓ | d_i = σ_0, d^{−i}] ).
Define a random walk on the real line with step_ℓ = conf_ℓ − conf_{ℓ−1}. To
conclude the proof we show that (with high probability) T steps of the random
walk do not suffice to reach distance δ. From Proposition 3 and Lemma 1 we get
|E[step_ℓ]| = O(1/R) = O( δ² / (T log^μ n) ),
|step_ℓ| = O(log² n/√R) = O( δ / (√T log^{μ/2−2} n) ).
Using Proposition 1 with λ = log n we get that for all t ≤ T,
Pr[|conf_t − conf_0| > δ] = Pr[ |Σ_{ℓ=1}^t step_ℓ| > δ ] ≤ neg(n).
5 Datamining on Vertically Partitioned Databases
In this section we assume that the database is chosen according to Dn for some
underlying distribution D on rows, where D is independent of n, the size of the
database. We also assume that n is sufficiently large that the true database
statistics are representative of D. Hence, in the sequel, when we write things like
“Pr[α]” we mean the probability, over the entries in the database, that α holds.
Let α and β be attributes. We say that α implies β in probability if the
conditional probability of β given α exceeds the unconditional probability of β.
The ability to measure implication in probability is crucial to datamining. Note
that since Pr[β] is simple to estimate well, the problem reduces to obtaining a
good estimate of Pr[β|α]. Moreover, once we can estimate Pr[β|α], we can use
Bayes’ Rule and de Morgan’s Laws to determine the statistics for any Boolean
function of attribute values.
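Concretely (a two-attribute sketch; the naming is mine): once Pr[α], Pr[β], and Pr[β|α] are known, the full joint distribution of (α, β) follows, and with it the frequency of any Boolean function of the two attributes:

```python
def joint(p_a, p_b, p_b_given_a):
    """Joint distribution of (alpha, beta) from the three measured quantities."""
    p11 = p_a * p_b_given_a              # alpha and beta
    p10 = p_a - p11                      # alpha and not beta
    p01 = p_b - p11                      # beta and not alpha
    p00 = 1.0 - p11 - p10 - p01
    return {(1, 1): p11, (1, 0): p10, (0, 1): p01, (0, 0): p00}

def prob_of(f, j):
    """Pr[f(alpha, beta)] for any Boolean f, by summing over its truth table."""
    return sum(p for (a, b), p in j.items() if f(a, b))

j = joint(0.4, 0.5, 0.75)
print(prob_of(lambda a, b: a ^ b, j))    # e.g. the XOR of the two attributes
```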
Our key result for vertically partitioned databases is a method, given two
single-attribute SuLQ databases with attributes α and β respectively, to measure
the extent to which α implies β in probability.
For more general cases of vertically partitioned data, assume a k-attribute
database is partitioned into 2 ≤ j ≤ k databases, with k1 , . . . , kj (possibly
overlapping) attributes, respectively, where Σ_i k_i ≥ k. We can use functional
queries to learn the statistics on ki -ary Boolean functions of the attributes in the
ith database, and then use the results for two single-attribute SuLQ databases
to learn binary Boolean functions of any two functions fi1 (on attributes in
database i1 ) and fi2 (on attributes in database i2 ), where 1 ≤ i1 , i2 ≤ j.
5.1 Probabilistic Implication
In this section we construct our basic building block for mining vertically parti-
tioned databases.
We assume two SuLQ databases d_1, d_2 of size n, with attributes α, β respec-
tively. When α implies β in probability with a gap of ∆, we write α →_∆ β, meaning
that Pr[β|α] = Pr[β] + ∆. We note that Pr[α] and Pr[β] are easily computed
within error O(1/√n), simply by querying the two databases on large subsets.
Our goal is to determine ∆, or equivalently, Pr[β|α] − Pr[β]; the method will be
to determine if, for a given ∆_1, Pr[β|α] ≥ Pr[β] + ∆_1, and then to estimate ∆
by binary search on ∆_1.
Notation. We let p_α = Pr[α], p_β = Pr[β], p_{β|α} = Pr[β|α] and p_{β|ᾱ} = Pr[β|¬α].
Let X be a random variable counting the number of times α holds when we
take N samples from D. Then E[X] = N·p_α and Var[X] = N·p_α(1 − p_α).
By our assumption,
p_{β|α} = p_β + ∆. (1)
Note that p_β = p_α·p_{β|α} + (1 − p_α)·p_{β|ᾱ}. Substituting p_β + ∆ for p_{β|α} we get
p_{β|ᾱ} = p_β − ∆·p_α/(1 − p_α), (2)
and hence (by another application of Eq. (1))
p_{β|α} − p_{β|ᾱ} = ∆/(1 − p_α). (3)
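Equations (1)-(3) can be verified numerically (the sample values are hypothetical):

```python
p_a, p_b, delta = 0.3, 0.5, 0.1
p_b_given_a = p_b + delta                          # Eq. (1)
p_b_given_not_a = p_b - delta * p_a / (1 - p_a)    # Eq. (2)

# total probability recovers p_b, and Eq. (3) holds
assert abs(p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a - p_b) < 1e-12
assert abs(p_b_given_a - p_b_given_not_a - delta / (1 - p_a)) < 1e-12
```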
We define the following testing procedure to determine, given ∆_1, if ∆ ≥ ∆_1.
Step 1 finds a heavy (but not very heavy) set for attribute α, that is, a set q for
which the number of records satisfying α exceeds the expected number by more
than a standard deviation. Note that since T(n) = o(n), the noise |â_{q,1} − a_{q,1}|
is o(√n), so the heavy set really has N·p_α + Ω(√N) records for which α holds.
Step 2 queries d_2 on this heavy set. If the incidence of β on this set sufficiently
exceeds the expected incidence of β (as a function of ∆_1), then the test returns
"1" (i.e., success). Otherwise it returns 0.
Test Procedure T
Input: p_α, p_β, ∆_1 > 0.
1. Find q ⊆_R [n] such that â_{q,1} ≥ N·p_α + σ_α, where N = |q| and σ_α =
   √(N·p_α(1 − p_α)).
   Let bias_α = â_{q,1} − N·p_α.
2. If â_{q,2} ≥ N·p_β + bias_α·∆_1/(1 − p_α) return 1, otherwise return 0.
Theorem 2. For the test procedure T :
1. If ∆ ≥ ∆1 , then Pr[T outputs 1] ≥ 1/2.
2. If ∆ ≤ ∆1 − ε, then Pr[T outputs 1] ≤ 1/2 − γ,
where for ε = Θ(1) the advantage γ = γ(pα , pβ , ε) is constant, and for ε = o(1)
the advantage γ = c · ε with constant c = c(pα , pβ ).
In the following analysis we neglect the difference between â_{q,i} and a_{q,i}, since,
as noted above, the perturbation contributes only low-order terms (we also neglect
some other low-order terms). Note that it is possible to compute all the required
constants for Theorem 2 explicitly, in polynomial time, without neglecting these
low-order terms. Our analysis does not attempt to optimize constants.
Proof. Consider the random variable corresponding to a_{q,2} = Σ_{i∈q} d_{i,2}, given
that q is biased according to Step 1 of T. By linearity of expectation, together
with the fact that the two cases below are disjoint, we get that
E[a_{q,2} | bias_α] = (N·p_α + bias_α)·p_{β|α} + (N(1 − p_α) − bias_α)·p_{β|ᾱ}
                 = N·p_α·p_{β|α} + N(1 − p_α)·p_{β|ᾱ} + bias_α·(p_{β|α} − p_{β|ᾱ})
                 = N·p_β + bias_α·∆/(1 − p_α).
The last step uses Eq. (3). Since the distribution of a_{q,2} is symmetric around
E[a_{q,2} | bias_α], we get the first part of the claim, i.e., if ∆ ≥ ∆_1 then
Pr[T outputs 1] = Pr[a_{q,2} > N·p_β + bias_α·∆_1/(1 − p_α) | bias_α] ≥ 1/2.
To get the second part of the claim we use the de Moivre–Laplace theorem
and approximate the binomial distribution with the normal distribution, so that
we can approximate the variance of the sum of two distributions (when α holds
and when α does not hold) in order to obtain the variance of a_{q,2} conditioned
on bias_α. We get:
Var[a_{q,2} | bias_α] ≈ (N·p_α + bias_α)·p_{β|α}(1 − p_{β|α}) + (N(1 − p_α) − bias_α)·p_{β|ᾱ}(1 − p_{β|ᾱ}).
Assuming N is large enough, we can neglect the terms involving bias_α. Hence,
Var[a_{q,2} | bias_α] ≈ N[p_α·p_{β|α} + (1 − p_α)·p_{β|ᾱ}] − N[p_α·p²_{β|α} + (1 − p_α)·p²_{β|ᾱ}]
                   ≈ N·p_β − N[p_α·p²_{β|α} + (1 − p_α)·p²_{β|ᾱ}]
                   = N[p_β − p²_β] − N·∆²·p_α/(1 − p_α) < N[p_β − p²_β] = Var_β.
The transition from the second to the third line follows from
[p_α·p²_{β|α} + (1 − p_α)·p²_{β|ᾱ}] − p²_β = ∆²·p_α/(1 − p_α).⁷
We have that the probability distribution of a_{q,2} is a Gaussian with mean
at most N·p_β + bias_α·(∆_1 − ε)/(1 − p_α) and variance at most Var_β.
To conclude the proof, we note that the conditional probability mass of a_{q,2}
exceeding its own mean by ε·bias_α/(1 − p_α) > ε·σ_α/(1 − p_α) is at most
1/2 − γ = Φ( − (ε·σ_α/(1 − p_α)) / √Var_β ),
where Φ is the cumulative distribution function of the normal distribution.
For constant ε this yields a constant advantage γ. For ε = o(1), we get that
γ ≥ (ε·σ_α/(1 − p_α)) / (2·√(2π·Var_β)).
By taking ε = ω(1/√n) we can run the Test procedure enough times to
determine with sufficiently high confidence which "side" of the interval [∆_1 −
ε, ∆_1] ∆ is on (if it is not inside the interval). We proceed by binary search to
narrow in on ∆. We get:
Theorem 3. There exists an algorithm that invokes the test T
O_{p_α,p_β}( log(1/ε) · (log(1/δ) + log log(1/ε)) / ε² )
times and outputs ∆̂ such that Pr[|∆̂ − ∆| < ε] ≥ 1 − δ.
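The binary search can be sketched as follows (a simplified version with my own names; repeating T and taking a majority vote stands in for the confidence amplification of Theorem 3):

```python
def estimate_delta(run_test, eps=0.01, reps=101):
    """Binary-search estimate of Delta; run_test(delta1) is one run of T."""
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        ones = sum(run_test(mid) for _ in range(reps))
        if 2 * ones >= reps:          # T succeeds at least half the time
            lo = mid                  # evidence that Delta >= mid
        else:
            hi = mid
    return (lo + hi) / 2

# with an idealized test oracle for a true Delta of 0.3:
oracle = lambda delta1: 1 if delta1 <= 0.3 else 0
print(estimate_delta(oracle))        # close to 0.3
```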
6 Datamining on Published Statistics
In this section we apply our basic technique for measuring implication in prob-
ability to the real-life model in which confidential information is gathered by
a trusted party, such as the census bureau, who publishes aggregate statistics.
The published statistics are the results of queries to a SuLQ database. That is,
the census bureau generates queries and their noisy responses, and publishes the
results.
[Footnote 7: In more detail: [p_α·p²_{β|α} + (1 − p_α)·p²_{β|ᾱ}] − p²_β =
p²_{β|α}·p_α(1 − p_α) + p²_{β|ᾱ}·(1 − p_α)·p_α − 2·p_α(1 − p_α)·p_{β|α}·p_{β|ᾱ} =
p_α(1 − p_α)·[p²_{β|α} + p²_{β|ᾱ} − 2·p_{β|α}·p_{β|ᾱ}] = p_α(1 − p_α)·(p_{β|α} − p_{β|ᾱ})² =
∆²·p_α/(1 − p_α).]
Let k denote the number of attributes (columns). Let ℓ ≤ k/2 be fixed (typi-
cally, ℓ will be small; see below). For every ℓ-tuple of attributes (α_1, α_2, ..., α_ℓ),
and for each of the 2^ℓ conjunctions of literals over these attributes (ᾱ_1ᾱ_2···ᾱ_ℓ,
α_1ᾱ_2···ᾱ_ℓ, and so on), the bureau publishes the result of some number t of
queries on these conjunctions. More precisely, a query set q ⊆ [n] is selected,
and noisy statistics for all C(k, ℓ)·2^ℓ conjunctions of literals are published for the
query. This is repeated t times.
To see how this might be used, suppose ℓ = 3 and we wish to learn if α_1α_2α_3
implies ᾱ_4ᾱ_5α_6 in probability. We know from the results in Section 5 that we
need to find a heavy set q for α_1α_2α_3, and then to query the database on the
set q with the function ᾱ_4ᾱ_5α_6. Moreover, we need to do this several times
(for the binary search). If t is sufficiently large, then with high probability such
query sets q are among the t queries. Since we query all triples (generally, ℓ-
tuples) of literals for each query set q, all the necessary information is published.
The analyst need only follow the instructions for learning the strength ∆ of
the implication in probability α_1α_2α_3 → ᾱ_4ᾱ_5α_6, looking up the results of the
queries (rather than randomly selecting the sets q and submitting the queries to
the database).
As in Section 5, once we can determine implication in probability, it is easy
to determine (via Bayes' rule) the statistics for the conjunction α_1α_2α_3ᾱ_4ᾱ_5α_6.
In other words, we can determine the approximate statistics for any conjunction
of 2ℓ literals of attribute values. Now the procedure for arbitrary 2ℓ-ary func-
tions is conceptually simple. Consider a function of attribute values β_1, ..., β_{2ℓ}.
The analyst first represents the function as a truth table: for each possible 2ℓ-
tuple of literals over β_1, ..., β_{2ℓ} the function has value either zero or one. Since
these conjunctions of literals are mutually exclusive, the probability (overall)
that the function has value 1 is simply the sum of the probabilities that each of
the positive (one-valued) conjunctions occurs. Since we can approximate each of
these statistics, we obtain an approximation for their sum. Thus, we can approx-
imate the statistics for each of the C(k, 2ℓ)·2^{2^{2ℓ}} Boolean functions of 2ℓ attributes. It
remains to analyze the quality of the approximations.
Let T = o(n) be an upper bound on the number of queries permitted by the
SuLQ database algorithm, e.g., T = O(n^c), c < 1. Let k and ℓ be as above: k
is the total number of attributes, and statistics for ℓ-tuples will be published.
Let ε be the (combined) additive error achieved for all C(k, 2ℓ)·2^{2ℓ} conjuncts with
probability 1 − δ.
Input: a database d = {d_{i,j}} of dimensions n × k.
Repeat t times:
1. Let q ⊆_R [n]. Output q.
2. For all selections of indices 1 ≤ j_1 < j_2 < ... < j_ℓ ≤ k, output â_{q,g} for all
   the 2^ℓ conjuncts g over the literals α_{j_1}, ..., α_{j_ℓ}.
Privacy is preserved as long as t·C(k, ℓ)·2^ℓ ≤ T (Theorem 1). To determine util-
ity, we need to understand the error introduced by the summation of estimates.
Let ε′ = ε/2^{2ℓ}. If our test results in an ε′ additive error for each possible conjunct
of 2ℓ literals, the truth-table method described above allows us to compute the
frequency of every function of 2ℓ literals within additive error ε (a lot better in
many cases). We require that our estimate be within error ε′ with probability
1 − δ′, where δ′ = δ/(C(k, 2ℓ)·2^{2ℓ}). Hence, the probability that a 'bad' conjunct exists
(for which the estimation error is not within ε′) is bounded by δ.
Plugging δ′ and ε′ into Theorem 3, we get that for each conjunction of 2ℓ
literals, the number of subsets q on which we need to make queries is
t = O( 2^{4ℓ}·(log(1/ε) + ℓ)·(log(1/δ) + log k + log log(1/ε)) / ε² ).
For each subset q we query each of the C(k, ℓ)·2^ℓ conjuncts of ℓ attributes. Hence,
the total number of queries we make is
t·C(k, ℓ)·2^ℓ = O( k^ℓ·2^{5ℓ}·(log(1/ε) + ℓ)·(log(1/δ) + log k + log log(1/ε)) / ε² ).
For constant ε, δ we get that the total number of queries is O(2^{5ℓ}·ℓ·k^ℓ·log k). To
see our gain, compare this with the naive publishing of statistics for all conjuncts
of 2ℓ attributes, resulting in C(k, 2ℓ)·2^{2ℓ} = O(k^{2ℓ}·2^{2ℓ}) queries.
7 Open Problems
Datamining of 3-ary Boolean Functions. Section 5.1 shows how to use two SuLQ
databases to learn that Pr[β|α] = Pr[β] + ∆. As noted, this allows estimating
Pr[f (α, β)] for any Boolean function f . Consider the case where there exist
three SuLQ databases for attributes α, β, γ. In order to use our test procedure
to compute Pr[f(α, β, γ)], one has either to find heavy sets for α ∧ β (having
bias of order Ω(√n)), or, given a heavy set for γ, to decide whether it is also
heavy w.r.t. α ∧ β. It is not clear how to extend the test procedure of Section 5.1
in this direction.
Maintaining Privacy for all Possible Functions. Our privacy definition (Defini-
tion 1) requires for every function f (α1 , . . . , αk ) that with high probability the
confidence gain is limited by some value δ. If k is small (less than log log n), then,
via the union bound, we get that with high probability the confidence gain is
kept small for all the 2^{2^k} possible functions.
For large k the union bound does not guarantee simultaneous privacy for all
the 2^{2^k} possible functions. However, the privacy of a randomly selected function
is (with high probability) preserved. It is conceivable that (e.g. using crypto-
graphic measures) it is possible to render infeasible the task of finding a function
f whose privacy was breached.
Dependency Between Database Records. We explicitly assume that the database
records are chosen independently from each other, according to some underlying
distribution D. We are not aware of any work that does not make this assumption
(implicitly or explicitly). An important research direction is to come up with a
definition and analysis that work in a more realistic model of weak dependency
between database entries.
References
1. D. Agrawal and C. Aggarwal, On the Design and Quantification of Privacy Preserving Data
Mining Algorithms, Proceedings of the 20th Symposium on Principles of Database Systems, 2001.
2. N. R. Adam and J. C. Wortmann, Security-Control Methods for Statistical Databases: A
Comparative Study, ACM Computing Surveys 21(4): 515-556 (1989).
3. R. Agrawal and R. Srikant, Privacy-preserving data mining, Proc. of the ACM SIGMOD
Conference on Management of Data, pp. 439–450, 2000.
4. S. Chawla, C. Dwork, F. McSherry, A. Smith, and H. Wee, Toward Privacy in Public Databases,
submitted for publication, 2004.
5. I. Dinur and K. Nissim, Revealing information while preserving privacy, Proceedings of the
Twenty-Second ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Sys-
tems, pp. 202-210, 2003.
6. G. Duncan, Confidentiality and statistical disclosure limitation. In N. Smelser & P. Baltes
(Eds.), International Encyclopedia of the Social and Behavioral Sciences. New York: Elsevier, 2001.
7. A. V. Evfimievski, J. Gehrke and R. Srikant, Limiting privacy breaches in privacy preserving
data mining, Proceedings of the Twenty-Second ACM SIGACT-SIGMOD-SIGART Symposium
on Principles of Database Systems, pp. 211-222, 2003.
8. S. Fienberg, Confidentiality and Data Protection Through Disclosure Lim-
itation: Evolving Principles and Technical Advances, IAOS Conference on
Statistics, Development and Human Rights September, 2000, available at
9. S. Fienberg, U. Makov, and R. Steele, Disclosure Limitation and Related Methods for Cate-
gorical Data, Journal of Official Statistics, 14, pp. 485–502, 1998.
10. L. Franconi and G. Merola, Implementing Statistical Disclosure Control for Ag-
gregated Data Released Via Remote Access, Working Paper No. 30, United Na-
tions Statistical Commission and European Commission, joint ECE/EUROSTAT
work session on statistical data confidentiality, April, 2003, available at
11. S. Goldwasser and S. Micali, Probabilistic Encryption and How to Play Mental Poker Keeping
Secret All Partial Information, STOC 1982, pp. 365–377.
12. T.E. Raghunathan, J.P. Reiter, and D.B. Rubin, Multiple Imputation for Statistical Disclosure
Limitation, Journal of Official Statistics 19(1), pp. 1 – 16, 2003
13. D.B. Rubin, Discussion: Statistical Disclosure Limitation, Journal of Official Statistics 9(2), pp.
461 – 469, 1993.
14. A. Shoshani, Statistical databases: Characteristics, problems and some solutions, Proceedings
of the 8th International Conference on Very Large Data Bases (VLDB'82), pages 208–222, 1982.
Quadratic formula
1. November 13th 2012, 10:04 AM #1
2. November 13th 2012, 11:35 AM #2
Re: Quadratic formula
Your working is good until you get to the point where you try to divide out the 6's... that is an invalid move. Try dividing everything in the numerator and denominator by 2, since 2 is a factor of both the numerator and the denominator.
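Since the original working is not shown in this archived copy, here is a generic illustration of the point (the numbers are hypothetical):

```latex
x = \frac{6 \pm \sqrt{24}}{6}
  = \frac{6 \pm 2\sqrt{6}}{6}          % simplify the radical first
  = \frac{2\,(3 \pm \sqrt{6})}{2 \cdot 3}
  = \frac{3 \pm \sqrt{6}}{3}
% "Dividing out the 6's" to get 1 \pm \sqrt{24} is invalid, because 6 is a
% factor of only one term of the numerator, not of the whole numerator.
```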
Descartes's rule of signs
Descartes’s rule of signs, in algebra, rule for determining the maximum number of positive real number solutions (roots) of a polynomial equation in one variable based on the number of times that
the signs of its real number coefficients change when the terms are arranged in the canonical order (from highest power to lowest power). For example, the polynomial x^5 + x^4 − 2x^3 + x^2 − 1 = 0
changes sign three times, so it has at most three positive real solutions. Substituting −x for x gives the maximum number of negative solutions (two).
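The rule is easy to mechanize; a small sketch (the function name is mine):

```python
def sign_changes(coeffs):
    """Count sign changes in a polynomial's coefficient sequence (zeros skipped),
    with coefficients listed from highest power to lowest."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^5 + x^4 - 2x^3 + x^2 - 1: at most three positive real roots
print(sign_changes([1, 1, -2, 1, 0, -1]))     # 3
# after substituting -x for x: at most two negative real roots
print(sign_changes([-1, 1, 2, 1, 0, -1]))     # 2
```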
The rule of signs was given, without proof, by the French philosopher and mathematician René Descartes in La Géométrie (1637). The English physicist and mathematician Sir Isaac Newton restated the
formula in 1707, though no proof of his has been discovered; some mathematicians speculate that he considered its proof too trivial to bother recording. The earliest known proof was by the French
mathematician Jean-Paul de Gua de Malves in 1740. The German mathematician Carl Friedrich Gauss made the first real advance in 1828 when he showed that, in cases where there are fewer than the
maximum number of positive roots, the deficit is always by an even number. Thus, in the example given above, the polynomial could have three positive roots or one positive root, but it could not have
two positive roots.
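The counting procedure is easy to mechanize. A minimal Python sketch (the coefficient-list encoding of the example polynomial is my own):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list (highest power first), zeros ignored."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^5 + x^4 - 2x^3 + x^2 - 1  (coefficients from x^5 down to the constant term)
p = [1, 1, -2, 1, 0, -1]
# Substituting -x for x negates the coefficients of the odd powers (x^5, x^3, x)
p_neg = [c if (len(p) - 1 - i) % 2 == 0 else -c for i, c in enumerate(p)]

sign_changes(p)      # 3: at most three positive real roots
sign_changes(p_neg)  # 2: at most two negative real roots
```

By Gauss's refinement, the actual count of positive roots differs from `sign_changes(p)` by an even number.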
Re: Rounding with Div and Mod operators
genew@shuswap.net (Gene Wirchenko)
20 May 1999 01:44:19 -0400
Newsgroups: comp.compilers
Organization: Okanagan Internet Junction
References: 99-05-039 99-05-060
Keywords: arithmetic, design
Johan Persson <johan.persson@mbox319.swipnet.se> wrote:
>William Rayer wrote:
>> My question is: which rounding system is preferred and does it matter?
>My experience of using int, mod and div operators are that if they round
>towards zero they are useless if you don't know that you only are
>dealing with positive OR negative numbers. I usually write my own
>functions that round downwards and use them instead of the built-in
>operators. I have never come across a case where rounding towards zero
>is the preferred choice.
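The two conventions being debated diverge exactly when the operands have mixed signs. A small sketch (Python's built-in `//` and `%` round downwards, while `int()` truncation reproduces the round-toward-zero behaviour being criticized):

```python
# Truncating (round-toward-zero) division, as in C, vs. Python's flooring division.
def trunc_divmod(n, d):
    q = int(n / d)        # int() truncates toward zero
    return q, n - q * d   # remainder takes the sign of n

n, d = -7, 3
trunc_divmod(n, d)   # (-2, -1): remainder is negative
divmod(n, d)         # (-3, 2): flooring keeps the remainder in [0, d)
```

With a positive divisor, flooring guarantees a nonnegative remainder, which is why the round-downwards versions are often the ones people end up writing by hand.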
There is a neat hack for determining digital sums that would be
cleaner if modulo rounded down to 0. (That's assuming that the
digital sum of a number is always the same as that of its abs().)
int ds(int n)
{
    int baseds;

    if (n == 0)
        return 0;
    /* Discussion Point */
    if (baseds > 0)
        return baseds;
    return 9;
}
This won't work for negative numbers as you have to abs() first,
but that doesn't work either because in a two's complement system,
abs(INT_MIN) could overflow (e.g. -32768 vs. 32767). At the
discussion point, you need something like
if (n<-9)
Gene Wirchenko
Distinguished Lecture Series -
Thematic Program on Geometric Applications of Homotopy Theory January-June, 2007
May 14, 15, 17, 2007 --4:30 pm
Distinguished Lecture Series --Michael Hopkins,
Harvard University
May 14, 2007 Classical and quantum invariants of manifolds
4:30 pm
Room 230, Fields Institute

By "classical invariants" of manifolds I mean cobordism invariants, and by "quantum invariants" I mean the kind of invariants that come from topological quantum field theories. The methods and structures of algebraic topology give a very complete accounting of cobordism invariants. In fact much of the development of algebraic topology in the twentieth century was inspired by these invariants. In this talk I will discuss the new demands placed on topology by "quantum invariants," and some of the new structures that seem to be relevant. The first half of this talk will be aimed at a general audience. So will the second half, unless time permits, in which case I'll discuss in more detail some of the progress that has been made in this direction.
May 15, 2007 Homotopy invariance of string topology
4:30 pm
Room 230, Fields Institute

In this talk I'll explore one example of a topological field theory, the "string topology" of Chas-Sullivan. I will give a definition of A_\infty Frobenius algebra that is good enough to accept the cochain algebra of a Poincaré duality space and produce, using the machinery of Kevin Costello, a topological conformal field theory whose closed string state space is the space of (integer) chains on the free loop space. This talk is joint work (in progress) with Jacob Lurie, and owes a great deal to the work and influence of Kevin Costello.
May 17, 2007 The topological WZW space of conformal blocks
4:30 pm
Room 230, Fields Institute

In this talk I'll describe joint work with Dan Freed and Constantin Teleman on a purely topological approach to the topological field theory coming from the space of WZW conformal blocks.
Michael J. Hopkins, whose work linking algebraic topology to other branches of mathematics and physics has earned him a reputation as the world's pre-eminent algebraic topologist, was appointed a
professor of mathematics in Harvard University's Faculty of Arts and Sciences, effective July 1, 2005
Hopkins has been a major player in recent developments linking algebraic topology to issues in number theory, such as elliptic curves and modular forms, and contemporary physics, such as string
theory. His work has also brought homotopy theory - specifically, understanding of the homotopy groups of spheres, one of the central problems in topology - into closer contact with other fields of mathematics.
A central field in modern mathematics, algebraic topology solves topological, or geometrical, problems by translating them into algebra. Modern algebraic topology uses tools from abstract algebra -
the study of algebraic structures such as groups, rings, and fields - to study these global properties of spaces.
Algebraic topology attempts to differentiate between multidimensional shapes, or topological spaces. Topological spaces are structures that can be continuously changed; for instance, the sides and
corners of a triangle can be "pushed" and "pulled" to yield a circle or square. Topological spaces may serve to unify virtually every branch of modern mathematics, much as physicists seek a "grand
theory" to unify all known observations and theories about the physical universe.
Born in Alexandria, Va., Hopkins received a B.A. in mathematics from Northwestern University in 1979, a Ph.D. in mathematics from Northwestern in 1984, and a D.Phil. in mathematics from the
University of Oxford in 1984. He has taught at Lehigh University as a visiting lecturer in 1983, Princeton University as a postdoctoral fellow, instructor and assistant professor from 1984 to 1987,
and the University of Chicago as a professor from 1988 to 1989. He joined M.I.T. as an associate professor in 1989, and was named a full professor in 1990.
Hopkins was a Rhodes Scholar from 1979 to 1982, and has earned other significant honors including the Presidential Young Investigator Award from 1987 to 1995, an Alfred P. Sloan Fellowship from 1987
to 1992, and the Oswald Veblen Prize in Geometry in 2001. He is a member of the American Academy of Arts and Sciences and the Royal Danish Academy of Sciences and Letters.
Hopkins is currently managing editor of the journal Advances in Mathematics and on the editorial board of the Journal of the Institute of Mathematics of Jussieu, and has also served as an editor for
the Duke Mathematics Journal, Journal of the American Mathematics Society, Annals of Mathematics, Quarterly Journal of Mathematics, and K-theory.
edited version from
'Deep and broad mathematician' joins FAS, By Steve Bradt , FAS Communications
The Distinguished Lecture Series (DLS) brings a leading mathematician to the Institute to give a series of three one-hour lectures. The recipient has made an outstanding contribution to the advancement of the mathematical sciences and is recognized for excellence in research.
Simple Algebra
August 16th 2009, 03:19 AM #1
Junior Member
Aug 2009
Hello im new to this site.. and was wondering if anyone could help me with this question..
Jennifer is 7 years younger than melisa. In 4 years time Jennifer will be 1 year older than half of Melisas age at that time. How old are Jennifer and Melissa now?
I'd like to see all working out including simultaneous equation.. im very confused.. thank you
Let M be Melissa's age, J be Jennifer's age
At t=0
$J = M - 7$
At t=4
$J+4 = \frac{M+4}{2} + 1 \rightarrow J = \frac{M-2}{2}$
You can set these two equations equal and solve for M. From there use the first equation to find J
i really dont know what to do after, im sorry could u help me out abit more if it doesnt bother u?
$J = \frac{M-2}{2}$
$J = M - 7$
These two must be equal so we can say that $M-7 = \frac{M-2}{2}$. Then solve for M
$2M-14 = M-2 \: \rightarrow \: M = 12$
Now M is known use J = M - 7 to find J:
$J = 12-7 = 5$
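A quick brute-force check of the word problem's two conditions, testing each candidate age directly rather than doing any algebra (the age range searched is an arbitrary choice):

```python
# J = M - 7, and in 4 years Jennifer is 1 year older than half of Melissa's age:
# J + 4 == (M + 4) / 2 + 1
solutions = [
    (M - 7, M)                      # (Jennifer, Melissa)
    for M in range(1, 120)
    if (M - 7) + 4 == (M + 4) / 2 + 1
]
solutions   # [(5, 12)]
```

Sanity check of the answer: in 4 years Jennifer is 9 and Melissa is 16, and 9 is indeed one more than half of 16.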
User Hagen Knaf
bio website
visits member for 4 years, 5 months
seen Jan 26 '10 at 9:45
stats profile views 82
25 awarded Yearling
13 awarded Good Answer
Jan Generic fiber of morphism between non-singular curves
26 comment You need an algebraically closed base field in the statement. Otherwise there can be points $Q\in C_1$ such that the degree $[k(Q):k(\phi (Q))]>1$ in which case the number of points in
the fibre $\phi^{-1}(\phi (Q))$ is less than the degree.
20 answered Representability of finite metric spaces
19 answered What are examples illustrating the usefulness of Krull (i.e., rank > 1) valuations?
19 answered What are examples illustrating the usefulness of Krull (i.e., rank > 1) valuations?
9 answered What does “linearly disjoint” mean for abstract field extensions?
2 answered Is (relatively) algebraically closed stable under finite field extensions?
26 answered Weil divisors on non Noetherian schemes
19 awarded Nice Answer
18 awarded Teacher
Nov answered What's so great about blackboards?
Martingale

From Encyclopedia of Mathematics
2010 Mathematics Subject Classification: Primary: 60G42 Secondary: 60G44 [MSN][ZBL]
(with probability 1). In the case of discrete time
or a supermartingale, if
Example 1. If
Example 2. Let
Then, if the variables
One of the basic facts of the theory of martingales is that the structure of a martingale (submartingale) Markov moment), if
As a particular case of this the Wald identity follows:
Among the basic results of the theory of martingales is Doob's inequality: If
Taking (5) and (7) into account, it follows that
In the proof of a different kind of theorem on the convergence of submartingales with probability 1, a key role is played by Doob's inequality for the mathematical expectation
The basic result on the convergence of submartingales is Doob's theorem: If
A corollary of this result is Lévy's theorem on the continuity of conditional mathematical expectations: If
A natural generalization of a martingale is the concept of a local martingale, that is, a stochastic process
are martingales. In the case of discrete time each local martingale
Each submartingale stochastic integral
which is a local martingale. In the case of a Wiener process
In the case of continuous time the Doob, Burkholder and Davis inequalities are still true (for right-continuous processes having left limits).
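The Wald identity mentioned above ($\mathsf{E}S_{\tau}=\mathsf{E}\xi_1\cdot\mathsf{E}\tau$ for a sum of i.i.d. terms stopped at a Markov moment) can be illustrated numerically; a Monte Carlo sketch, where the Bernoulli setup, stopping level, and sample size are my own choices:

```python
import random

random.seed(1)

# S_n = xi_1 + ... + xi_n with i.i.d. Bernoulli(1/2) steps, stopped at
# tau = min{n : S_n >= 5}.  Since S_tau equals 5 exactly, the Wald identity
# E[S_tau] = E[xi_1] * E[tau] forces E[tau] = 5 / (1/2) = 10.
def hitting_time(level=5):
    s = n = 0
    while s < level:
        s += random.randint(0, 1)
        n += 1
    return n

trials = 20000
estimate = sum(hitting_time() for _ in range(trials)) / trials  # close to 10
```

The identity is the reason the estimate concentrates near 10: the stopped sum is deterministic here, so all randomness is absorbed into the stopping time.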
[D] J.L. Doob, "Stochastic processes" , Chapman & Hall (1953) MR1570654 MR0058896 Zbl 0053.26802
[GS] I.I. Gihman, A.V. Skorohod, "The theory of stochastic processes" , 1 , Springer (1974) (Translated from Russian) MR0346882 Zbl 0291.60019
Stopping times are also called optimal times, or, in the older literature, Markov times or Markov moments, cf. Markov moment. The optimal sampling theorem is also called the stopping theorem or
Doob's stopping theorem.
The notion of a martingale is one of the most important concepts in modern probability theory. It is basic in the theories of Markov processes and stochastic integrals, and is useful in many parts of
analysis (convergence theorems in ergodic theory, derivatives and lifting in measure theory, inequalities in the theory of singular integrals, etc.). More generally one can define martingales with
values in
[DM] C. Dellacherie, P.A. Meyer, "Probabilities and potential" , 1–3 , North-Holland (1978–1988) pp. Chapts. V-VIII. Theory of martingales (Translated from French) MR0939365 MR0898005 MR0727641
MR0745449 MR0566768 MR0521810 Zbl 0716.60001 Zbl 0494.60002 Zbl 0494.60001
[D2] J.L. Doob, "Classical potential theory and its probabilistic counterpart" , Springer (1984) pp. 390 MR0731258 Zbl 0549.31001
[N] J. Neveu, "Discrete-parameter martingales" , North-Holland (1975) (Translated from French) MR0402915 Zbl 0345.60026
[V] J. Ville, "Etude critique de la notion de collectif" , Gauthier-Villars (1939) Zbl 0021.14601 Zbl 0021.14505 Zbl 65.0547.05
[WH] P. Wall, C.C. Heyde, "Martingale limit theory and its application" , Acad. Press (1980) MR624435
How to Cite This Entry:
Martingale. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Martingale&oldid=26610
This article was adapted from an original article by A.N. Shiryaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
Derivatives and first principles ??? help please
March 26th 2009, 10:03 PM
Derivatives and first principles ??? help please
What is the difference between deriving and deriving using first principles ?
for example how would you use first principles to find the derivative of $y = \dfrac{7}{\sqrt{x} + 6}$
and $y = \dfrac{1}{2x}$
March 26th 2009, 10:25 PM
Finding derivatives by first principles means to use the definition of the derivative. recall that we define the derivative of a function $f(x)$ as follows
$f'(x) = \lim_{h \to 0} \frac {f(x + h) - f(x)}h$
provided the limit exists.
now use that to find the derivatives. you can check your answers by doing the derivatives the short way
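To see the definition in action numerically for the second example, $y = 1/(2x)$ (the sample point $x=2$ and the step sizes are arbitrary choices of mine, not from the thread):

```python
# f'(x) = lim_{h -> 0} (f(x+h) - f(x)) / h, checked numerically for f(x) = 1/(2x).
def f(x):
    return 1 / (2 * x)

def difference_quotient(x, h):
    return (f(x + h) - f(x)) / h

x = 2.0
exact = -1 / (2 * x ** 2)   # the "short way": d/dx 1/(2x) = -1/(2x^2), i.e. -0.125 at x = 2
for h in (1e-2, 1e-4, 1e-6):
    print(h, difference_quotient(x, h))   # approaches -0.125 as h shrinks
```

For this particular $f$ the quotient even simplifies exactly: $\frac{f(x+h)-f(x)}{h} = \frac{-1}{2x(x+h)}$, which visibly tends to $-\frac{1}{2x^2}$ as $h \to 0$.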
Communication Complexity
[AB09] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009. [ bib ]
[AJP10] Alexandr Andoni, T. S. Jayram, and Mihai Patrascu. Lower bounds for edit distance and product metrics via Poincaré-type inequalities. In Proc. 21th Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA), pages 184-192, 2010. [ bib | .pdf ]
[AMS99] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. Computer and System Sciences, 58(1):137-147, 1999. (Preliminary Version in 28th
STOC, 1996). [ bib | DOI ]
[BBCR10] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. In Proc. 42nd ACM Symp. on Theory of Computing (STOC), pages 67-76, 2010. [ bib | DOI ]
[BFS86] László Babai, Peter Frankl, and Janos Simon. Complexity classes in communication complexity theory (preliminary version). In Proc. 27th IEEE Symp. on Foundations of Comp. Science (FOCS),
pages 337-347, 1986. [ bib | DOI | .pdf ]
[BJKS04] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. J. Computer and System Sciences, 68(4):702-732,
June 2004. (Preliminary Version in 43rd FOCS, 2002). [ bib | DOI ]
[BJR07] Paul Beame, T. S. Jayram, and Atri Rudra. Lower bounds for randomized read/write stream algorithms. In Proc. 39th ACM Symp. on Theory of Computing (STOC), pages 689-698, 2007. [ bib | DOI ]
[BN07] Liad Blumrosen and Noam Nisan. Combinatorial auctions. In Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory, chapter 11, pages 267-300.
Cambridge University Press, 2007. [ bib | .pdf ]
[BR11a] Mark Braverman and Anup Rao. Information equals amortized communication, 2011. [ bib | arXiv ]
[BR11b] Mark Braverman and Anup Rao. Towards coding for maximum errors in interactive communication. In Proc. 43rd ACM Symp. on Theory of Computing (STOC), pages 159-166, 2011. [ bib | DOI ]
[CCM08] Amit Chakrabarti, Graham Cormode, and Andrew McGregor. Robust lower bounds for communication and stream computation. In Proc. 40th ACM Symp. on Theory of Computing (STOC), pages 641-650,
2008. [ bib | DOI ]
[CCM10] Amit Chakrabarti, Graham Cormode, and Andrew McGregor. A near-optimal algorithm for estimating the entropy of a stream. ACM Transactions on Algorithms, 6(3), 2010. (Preliminary version in
18th SODA, 2007). [ bib | DOI ]
[CK11] Amit Chakrabarti and Ranganath Kondapally. Everywhere-tight information cost tradeoffs for augmented index. In Leslie Ann Goldberg, Klaus Jansen, R. Ravi, and José D. P. Rolim, editors,
Proc. 15th International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), volume 6845 of LNCS, pages 448-459. Springer, 2011. [ bib | DOI ]
[CP10] Arkadev Chattopadhyay and Toniann Pitassi. The story of set disjointness. SIGACT News, 41(3):59-85, 2010. [ bib | DOI ]
[CR11] Amit Chakrabarti and Oded Regev. An optimal lower bound on the communication complexity of gap-Hamming-distance. In Proc. 43rd ACM Symp. on Theory of Computing (STOC), pages 51-60, 2011. [
bib | DOI | arXiv ]
[DJS98] Carsten Damm, Stasys Jukna, and Jiri Sgall. Some bounds on multiparty communication complexity of pointer jumping. Comput. Complexity, 7(2):109-127, 1998. (Preliminary Version in 13th STACS,
1996). [ bib | DOI ]
[GS10] Dmitry Gavinsky and Alexander A. Sherstov. A separation of NP and coNP in multiparty communication complexity. Theory of Computing, 6(1):227-245, 2010. [ bib | DOI ]
[Ind07] Piotr Indyk. 6.895: Sketching, Streaming and Sub-linear space algorithms, 2007. A course offered at MIT (Fall 2007). [ bib | .html ]
[JK10] Rahul Jain and Hartmut Klauck. The partition bound for classical communication complexity and query complexity. In Proc. 25th IEEE Conference on Computational Complexity, pages 247-258,
2010. [ bib | DOI | arXiv ]
[JKR09] T. S. Jayram, Swastik Kopparty, and Prasad Raghavendra. On the communication complexity of read-once AC^0 formulae. In Proc. 24th IEEE Conference on Computational Complexity, pages 329-340,
2009. [ bib | DOI ]
[JKS03] T. S. Jayram, Ravi Kumar, and D. Sivakumar. Two applications of information complexity. In Proc. 35th ACM Symp. on Theory of Computing (STOC), pages 673-682, 2003. [ bib | DOI ]
[JKS08] T. S. Jayram, Ravi Kumar, and D. Sivakumar. The one-way communication complexity of Hamming distance. Theory of Computing, 4(1):129-135, 2008. [ bib | DOI ]
[JKZ10] Rahul Jain, Hartmut Klauck, and Shengyu Zhang. Depth-independent lower bounds on the communication complexity of read-once boolean formulas. In My T. Thai and Sartaj Sahni, editors, Proc. of
16th Annual International Conference on Computing and Combinatorics (COCOON), volume 6196 of LNCS, pages 54-59. Springer, 2010. [ bib | DOI | arXiv ]
[KN97] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997. [ bib | DOI ]
[Lee10] Troy Lee. 16:198:671 Communication Complexity, 2010. A course offered at Rutgers University (Spring 2010). [ bib | .html ]
[LS09] Troy Lee and Adi Shraibman. Lower bounds in communication complexity. Foundations and Trends in Theoretical Computer Science, 3(4):263-398, 2009. [ bib | DOI | .pdf ]
[LZ10] Troy Lee and Shengyu Zhang. Composition theorems in communication complexity. In Samson Abramsky, Cyril Gavoille, Claude Kirchner, Friedhelm Meyer auf der Heide, and Paul G. Spirakis,
editors, Proc. 37th International Colloquium of Automata, Languages and Programming (ICALP), Part I, volume 6198 of LNCS, pages 475-489. Springer, 2010. [ bib | DOI | arXiv ]
[Pat11] Mihai Patrascu. Unifying the landscape of cell-probe lower bounds. SIAM J. Computing, 40(3):827-847, 2011. (Preliminary version in 49th FOCS, 2008). [ bib | DOI | arXiv ]
[PD04] Mihai Patrascu and Erik D. Demaine. Tight bounds for the partial-sums problem. In Proc. 15th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 20-29, 2004. [ bib | DOI ]
[PD06] Mihai Patrascu and Erik D. Demaine. Logarithmic lower bounds in the cell-probe model. SIAM J. Computing, 35(4):932-963, 2006. (Preliminary version in 36th STOC, 2004 and 15th SODA, 2004). [
bib | DOI | arXiv ]
[PRV01] Stephen Ponzio, Jaikumar Radhakrishnan, and Srinivasan Venkatesh. The communication complexity of pointer chasing. J. Computer and System Sciences, 62(2):323-355, 2001. (Preliminary Version
in 31st STOC, 1999). [ bib | DOI ]
[Sen03] Pranab Sen. Lower bounds for predecessor searching in the cell probe model. In Proc. 18th IEEE Conference on Computational Complexity, pages 73-83, 2003. [ bib | DOI ]
[She08] Alexander A. Sherstov. Communication lower bounds using dual polynomials. Bulletin of the EATCS, 95:59-93, 2008. [ bib | arXiv | .pdf ]
[She10] Alexander A. Sherstov. Communication complexity under product and nonproduct distributions. Comput. Complexity, 19(1):135-150, 2010. (Preliminary version in 23rd IEEE Conference on Comput.
Complexity, 2008). [ bib | DOI ]
[She11a] Alexander A. Sherstov. The communication complexity of gap Hamming distance. Technical Report TR11-063, Electronic Colloquium on Computational Complexity (ECCC), 2011. [ bib | http ]
[She11b] Alexander A. Sherstov. The multiparty communication complexity of set disjointness. Technical Report TR11-145, Electronic Colloquium on Computational Complexity (ECCC), 2011. [ bib | http ]
[SS02] Michael E. Saks and Xiaodong Sun. Space lower bounds for distance approximation in the data stream model. In Proc. 34th ACM Symp. on Theory of Computing (STOC), pages 360-369, 2002. [ bib |
DOI | .pdf ]
[SV08] Pranab Sen and Srinivasan Venkatesh. Lower bounds for predecessor searching in the cell probe model. J. Computer and System Sciences, 74(3):364-385, 2008. (Preliminary version in in 28th
ICALP, 2001 and 18th IEEE Conference on Computational Complexity, 2003). [ bib | DOI | arXiv ]
[Woo04] David P. Woodruff. Optimal space lower bounds for all frequency moments. In Proc. 15th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 167-175, 2004. [ bib | DOI ]
[Yan91] Mihalis Yannakakis. Expressing combinatorial optimization problems by linear programs. J. Computer and System Sciences, 43(3):441-466, 1991. (Preliminary Version in 20th STOC, 1988). [ bib |
DOI | .pdf ]
Real analytic functions
I am quite confused with some ideas regarding the Real analytic functions.
Just to introduce my questions:
A function $f$ is real analytic on an open set $D$ of the real line if for any $x_0\in D$ there exists an interval $I=(x_0-\epsilon,x_0+\epsilon)$ such that locally in $I,$ $f(x)=\sum_{k=0}^{\infty} a_k (x-x_0)^{k}.$
Call $A=A(D)$ the set of real analytic functions on $D.$
Call $R^{*}$ the set of finite sequences of distinct real numbers $(x_1,x_2,\ldots,x_n),$ $x_i\leq x_{i+1},$ and call $\mathbb{R},\mathbb{N}$ the set of real and natural numbers, respectively.
1)This implies that there exists a function between $A$ and $R^{*}\times \mathbb{R}^{\mathbb{N}}$ that associated to $f\in A$ the element $\left((x_1,x_2,\ldots,x_n),(a_0^1,a_0^2,\ldots,a_1^n,a_1^
1,a_1^2,\ldots,a_1^n,\ldots)\right)$ in the natural way in which locally around $x_i$ the map has series with coefficients $(a_0^i,a_1^i,\ldots),$ $n$ is minimal such that $x_1$ is in the boundary of
$D$ and the pairwise separation between $x_{i}$ and $x_{i+1}$ is always maximal and constructed inductively from $i=1.$ (Of course it is not surjective, but it should be a bijection when we consider
$BV$ instead of $A$ like in 6) ).
2)Analytic properties of elements in $A$ depends on $R^{*}\times \mathbb{R}^{\mathbb{N}}.$ For example, define $R^{2}$ the set sequences of real numbers $(x_1\leq x_2), x_1\neq x_2.$ I am interested
in properties of $R^{2}\times \mathbb{R}^{\mathbb{N}},$ like, it seems that we can integrate very accurately maps on $R^{2}\times \mathbb{R}^{\mathbb{N}},$ because we just need to find the 2 points
for our Taylor expansions, and we can truncate the integration of each polynomials to the neighborhood of each point to approximate the integral. Another example is the case in which $R^{1}$ the set
of real numbers $(x_1)$ and then to integrate a map in $R^{1}\times \mathbb{R}^{\mathbb{N}},$ we just need to approximate by truncate the polynomial.
3)It seems that we can prove some degree of accuracy when integrating maps on $R^{n}\times \mathbb{R}^{\mathbb{N}}.$ It seems also that there is a huge difference when passing from $n<\infty$ to the
case $R^{*}.$
4) I have not seen anyone trying to study $R^{n}\times \mathbb{R}^{\mathbb{N}}$ instead of $A,$ nor anyone studying the convergence of numerical integration algorithms by looking at $R^{n}\times \mathbb{R}^{\mathbb{N}}.$ I have not seen anyone interested in the following algorithm: given $f\in A,$ approximate $(x_1,x_2,\ldots,x_n).$ This is a surjection, but it is interesting, because it gives us a lot of information about $f;$ thereafter we only need to control the accuracy of finitely many derivatives at each point, and we could prove bounds for the error when computing properties of $f.$
5)Could you give me an example or way to construct maps in $A$ for which $(x_1,x_2,\ldots,x_n)$ has a very big $n?$
6)I guess that $f\in BV$ (Bounded variation) means that when we try to do the same for $f$ we get finite sequence $(x_0,x_1,\ldots)$ for which in a similar way we can associate f to an unique element
in $R^{*}\times \mathbb{R}^{\mathbb{N}}$ for which however we have a formal and not absolute convergent series in each neighborhood. So in that way $BV$ are formal $A$ maps.
What I want to ask is: am I correct in what I am thinking? And does anyone know of a paper where similar ideas have been elaborated?
I have been unable to find anything, but for me it is a very natural approach to integrating real analytic functions. Maybe I am completely lost; in that case, I apologize for having taken your time.
I will give you an example: Suppose we want to compute numerically $\int_{0}^{1} 5\pi/2(e^{\pi}-2)^{-1}e^{\pi x} \cos(\pi x/2)dx.$
My ideas come from a paper in ergodic theory that you can find on the web with the name: COMPUTING ENTROPY RATES FOR HIDDEN MARKOV PROCESSES.
In this paper they approximate this integral:
N = 1 0.18575506891852380346423780644
N = 2 −0.841124284383205603881801616792 (of course +)
N = 3 0.40608775333324283787989060678
N = 4 1.09233276774560006235284301088
N = 5 0.99681510352795871656533973381
N = 6 1.00004673478255155995818417282
N = 7 0.99999977398633338017700990934
N = 8 0.99999999982780643498244012823
N = 9 1.00000000000326837809455213367
N = 10 0.99999999999999360196233983786
N = 11 1.00000000000000000436159826786
N = 12 0.9999999999999999999989026376489
N = 13 1.0000000000000000000000000742107229
N = 14 1.00000000000000000000000000000513617960
N = 15 0.9999999999999999999999999999999993061845
N = 16 0.9999999999999999999999999999999993061845283932
N = 17 0.9999999999999999999999999999999999999999997039178592326
Now, using the ideas before, what we can do is to consider a sequence of 2^N equidistributed points for N=0,1,..,5, a fixed maximum number T of coefficients to carry on the Taylor expansion, for
example T=10.
We obtain
N=0 0.9999997
N=1 0.99999999992
N=2 0.99999999999998
N=3 0.999999999999999995
N=4 0.999999999999999999998
N=5 0.9999999999999999999999997
When T=20:
N=0 1.00000000000000000009
N=1 0.999999999999999999999998
N=2 0.9999999999999999999999999999995
N=3 0.9999999999999999999999999999999999998
N=4 0.99999999999999999999999999999999999999999997
N=5 0.999999999999999999999999999999999999999999999999993
When T=30:
N=0 1.00000000000000000000000000001
N=1 1.000000000000000000000000000000000000004
N=2 1.0000000000000000000000000000000000000000000000009
N=3 1.0000000000000000000000000000000000000000000000000000000002
N=4 1.00000000000000000000000000000000000000000000000000000000000000000005
N=5 1.00000000000000000000000000000000000000000000000000000000000000000000000000001
When T=40:
N=0 0.99999999999999999999999999999999999999999991
N=1 1.00000000000000000000000000000000000000000000000000000001
N=2 1.000000000000000000000000000000000000000000000000000000000000000000006
N=3 1.000000000000000000000000000000000000000000000000000000000000000000000000000000001
N=4 1.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004
N=5 1.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009
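A minimal sketch of the scheme just described — expand the integrand at $2^N$ equally spaced centers, keep $T$ Taylor coefficients at each, and integrate each truncated polynomial exactly over its subinterval. (Writing the integrand as $C\,\mathrm{Re}\,e^{cx}$ with $c=\pi+i\pi/2$ to get the Taylor coefficients in closed form is my own device, not taken from the cited paper.)

```python
import cmath
import math

# Integrand: C * exp(pi*x) * cos(pi*x/2) = C * Re(exp(c*x)), with c = pi + i*pi/2.
c = complex(math.pi, math.pi / 2)
C = (5 * math.pi / 2) / (math.exp(math.pi) - 2)

def integral_taylor(N, T):
    """Integrate over [0,1] using 2**N subintervals, T Taylor terms at each center."""
    n = 2 ** N
    total = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        x0 = (a + b) / 2
        base = cmath.exp(c * x0)
        for k in range(T):
            # k-th Taylor coefficient of Re(exp(c*x)) about x0
            coeff = (base * c ** k).real / math.factorial(k)
            # exact integral of coeff * (x - x0)**k over [a, b]
            total += coeff * ((b - x0) ** (k + 1) - (a - x0) ** (k + 1)) / (k + 1)
    return C * total
```

For example, `integral_taylor(2, 20)` already agrees with the exact value 1 to the limits of double precision, consistent with the tables above.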
I will stop here, however, it is obvious to come with thousand of conjectures at this point.
My question here is: has someone done this before? I would like to say that this is the most efficient possible way to numerically integrate real analytic maps on the interval $[0,1].$
My deepest question (maybe it is absurd) is this: is there an optimal* way to compute numerically the integral of a real analytic map on the interval?
*Optimal in the sense that in finite time it gives a better approximation of the true integral than any other algorithm.
ca.analysis-and-odes real-analysis integration
I'm not sure I understand your construction for 1). You associate a function $f$ which is given by power series $a_j^i(x-x_i)$ on intervals $(x_i-\epsilon,x_i+\epsilon)$ to the pair consisting of the sequence of values $(x_0,\ldots,x_n)$ and the sequence $(a_0^1,a_0^2,\ldots,a_0^n,a_1^1,\ldots)$? If so this is not defined for all $f$, well-defined, or surjective. – Alex Becker Feb 6 '13
I haven't read through all of this, but the idea seems to hinge on (1). As far as I can see, there is a natural relation between $A$ and $R^*\times \mathbb{R}^\mathbb{N}$, but it is far from being a map in either direction, never mind a bijection. That is to say, there are many elements of $R^*\times \mathbb{R}^\mathbb{N}$ which correspond to the same element of $A$ (you can list the coefficients of the power series at as many points as you wish) and some which correspond to no element of $A$ (because some of the relevant power series fail to converge, or converge to different functions). – Noah Stein Feb 6 '13 at 19:49
Thanks! There were some details to repair...I edited it. – Umberto Feb 6 '13 at 20:59
The "map" in 1) is still not defined. There may be no $x_1$ in the boundary of $D$ (consider the power series for $1/(1-x)$ with $D=(-1,1)$). – Alex Becker Feb 6 '13 at 22:03
Browse other questions tagged ca.analysis-and-odes real-analysis integration or ask your own question. | {"url":"https://mathoverflow.net/questions/121008/real-analytic-functions","timestamp":"2014-04-21T07:20:12Z","content_type":null,"content_length":"57299","record_id":"<urn:uuid:f15a1d19-43eb-47e8-afc5-0864891301af>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
A point P in the first quadrant lies on the graph of the function f(x) = x^1/2. Express the coordinates of P as functions of the slope of the line joining P to the origin.
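A quick sketch of one possible answer (not part of the original post): write \(P=(x,\sqrt{x})\) with \(x>0\), and let \(m\) be the slope of the line joining \(P\) to the origin. Then

\[ m = \frac{\sqrt{x}}{x} = \frac{1}{\sqrt{x}} \quad\Longrightarrow\quad P = \left(\frac{1}{m^{2}},\ \frac{1}{m}\right), \qquad m > 0. \]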
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b9c52ae4b0c789d51075ec","timestamp":"2014-04-18T23:53:54Z","content_type":null,"content_length":"25297","record_id":"<urn:uuid:74367705-e323-4328-bff8-7f8b430bce5a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Highbridge, NY Prealgebra Tutor
Find a Highbridge, NY Prealgebra Tutor
...I have the most experience with younger children (k-6), but would be happy to provide help to any student who wants to learn. Science and math have been emphasized throughout all of my
schooling, and I have an aptitude for grammar and English. If you think we would be a good fit, I would love to help!I had about 15 credits of Sociology in college, and almost minored in it.
17 Subjects: including prealgebra, reading, writing, geometry
Hi, I was born and grew up in northern China until I went to Japan at 21 years old. I studied and worked for a major airline in Japan for almost 10 years, then I came to New York and finished my
MBA degree in Finance. Therefore, I am fluent in Mandarin, Japanese and English.
6 Subjects: including prealgebra, calculus, algebra 1, precalculus
...My experience with kids on the spectrum is not only academic, but also social. My most recent counselor position was specifically dedicated to building social skills. I use stories and games
to explain and reinforce the guidelines of social situations and I create safe spaces for my kids to learn what behaviors are acceptable and why.
39 Subjects: including prealgebra, reading, English, algebra 1
...I have had great academic success, from a Bachelors in Anthropology from Smith College (Dean's List) to a Masters of Fine Arts from New School University in Theatre Directing (Schubert
Fellow). I used to feel like I wasn't as smart as other students because I needed tutors, but now I know it is...
9 Subjects: including prealgebra, reading, English, algebra 1
...Journal), 2006.- B.A. in Film & English (creative writing concentration) from Boston College, 2010- 3 years of tutoring at C2 Education, the nation's fastest growing personal tutoring
service.- 1 year of tutoring with a boutique tutoring company that prides itself on selective tutor recruiting.- ...
41 Subjects: including prealgebra, English, reading, writing
West Fort Lee, NJ prealgebra Tutors | {"url":"http://www.purplemath.com/highbridge_ny_prealgebra_tutors.php","timestamp":"2014-04-18T05:58:34Z","content_type":null,"content_length":"24468","record_id":"<urn:uuid:e4d33f90-76d3-4080-bb41-5b83ac3e96dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/myininaya/medals/2","timestamp":"2014-04-18T16:52:43Z","content_type":null,"content_length":"126615","record_id":"<urn:uuid:fff29828-d61b-4ca5-aeab-cefb7137c3f4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shrewsbury, MA Algebra 1 Tutor
Find a Shrewsbury, MA Algebra 1 Tutor
I am a well educated stay-at-home mom. I worked in the corporate world for about a decade. I am an attorney admitted to practice law in NY and NJ.
16 Subjects: including algebra 1, reading, biology, writing
...I have been programming computers, from 8-bit embedded processors to desktop systems, for thirty years. I have programmed in a wide variety of languages, including assembly language. I have
studied the role of compilers and linkers (which are now usually hidden from the programmer within the de...
33 Subjects: including algebra 1, chemistry, physics, calculus
...The courses I've taught and tutored required differential equations, so I have experience working with them in a teaching context. In addition to undergraduate level linear algebra, I studied
linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear algebra in my physics research.
16 Subjects: including algebra 1, calculus, physics, geometry
...I have worked with students with ADD & ADHD extensively in my private tutoring. Some have been on meds.; others not, some have been on school IEPs, some not, some have been high school
students, others middle and elementary students. My tutoring work for the Lexington public school system, run...
34 Subjects: including algebra 1, reading, English, geometry
...I used tons of robots to carry out my experiments and learnt even more there. Unfortunately the company went out of business and I took some time off. I love to tutor and help others and
recently helped my own husband with his recent microbiology class.
22 Subjects: including algebra 1, reading, English, grammar
Worcester, MA algebra 1 Tutors | {"url":"http://www.purplemath.com/Shrewsbury_MA_algebra_1_tutors.php","timestamp":"2014-04-20T11:35:16Z","content_type":null,"content_length":"24032","record_id":"<urn:uuid:099b6bfe-22f4-4dd2-92f2-21129fed0dd2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Urgent: 4 degree polynomial with irrational roots
December 10th 2006, 01:45 PM
Which specific formula/method can I use to find the complex roots of a polynomial of degree 4 of the form:
$p(x) = x^4 + a x^3 + bx^2 + ax + a_0 = 0$
which has irrational coefficients
a = $-1 - \sqrt{2}$ and b = $2 + \sqrt{2}$, and
where $a_0 = 1$
If I write the equation as $x^4-x^3+2x^2-x+1-\sqrt2(x^3-x^2+x)=0$, then you will notice that it factorises as $(x^2-x+1)(x^2-\sqrt2x+1)=0$
Then by the fundamental theorem of algebra I see that
$p(x) = \left(x-\left(\frac{1}{2} + \frac{\sqrt{3}}{2}i\right)\right)\left(x-\left(\frac{1}{2} - \frac{\sqrt{3}}{2}i\right)\right)\left(x - \left(\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i\right)\right)\left(x - \left(\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}i\right)\right) = (x^2-x+1)(x^2-\sqrt{2}x+1) = x^4-x^3+2x^2-x+1-\sqrt{2}(x^3-x^2+x)$
where the complex roots of the original polynomial p(x) are
$x = \frac{1}{2} \pm \frac{\sqrt{3}}{2}i, \quad \frac{\sqrt{2}}{2} \pm \frac{\sqrt{2}}{2}i$
Getting back to the original polynomial
(*) $p(x) = x^4 + ax^3 + bx^2 + ax + 1 = 0$
where $a,b \in \mathbb{C}$
I would like to prove that a complex number $x$ makes (*) true iff
$s = x + x^{-1}$ is a root of $Q(s) = s^2 + as + (b-2)$
I see that $Q(x + x^{-1}) = \frac{p(x)}{x^2}$
So the solutions of in my equation $s^2+as+(b-2)=0$ are $s=\frac{-a\pm\sqrt{a^2-4(b-2)}}{2}$.
I then need to plug each of these numbers into the equation $s=x+x^{-1}$, which can be written as $x^2-sx+1=0$. Using the quadratic formula again, you get $x=\frac{s\pm\sqrt{s^2-4}}{2}$.
By the way, does $x^{-1}$ exist then?
Yes, there's no problem about that. In fact since $s=x+x^{-1}$, you can see that $x^{-1}=s-x$
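As a quick numerical sanity check (not part of the original thread), the two quadratic steps above can be composed in a few lines of Python for the coefficients $a = -1-\sqrt{2}$, $b = 2+\sqrt{2}$ given earlier:

```python
import cmath
import math

# Coefficients from the thread: a = -1 - sqrt(2), b = 2 + sqrt(2), a0 = 1.
a = -1 - math.sqrt(2)
b = 2 + math.sqrt(2)

# Step 1: solve s^2 + a*s + (b - 2) = 0.
# Step 2: for each s, solve x^2 - s*x + 1 = 0 (i.e. s = x + 1/x).
roots = []
d_s = cmath.sqrt(a * a - 4 * (b - 2))
for s in ((-a + d_s) / 2, (-a - d_s) / 2):
    d_x = cmath.sqrt(s * s - 4)
    roots += [(s + d_x) / 2, (s - d_x) / 2]

# Every x obtained this way should be a root of the palindromic quartic.
p = lambda x: x**4 + a * x**3 + b * x**2 + a * x + 1
print(max(abs(p(x)) for x in roots))  # ~0 (up to rounding error)
```

The two values of $s$ come out as $1$ and $\sqrt{2}$, matching the factorisation $(x^2-x+1)(x^2-\sqrt{2}x+1)$ found above.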
I get this $x$ value for both values of $s$:
$x = \frac{s \pm \sqrt{s^2-4}}{2}, \qquad s = \frac{-a \pm \sqrt{a^2-4(b-2)}}{2}$
This expression makes the equation
$Q(x+x^{-1}) = \frac{p(x)}{x^2} = 0$ true,
and since $x \neq 0$, it follows that $p(x) = 0$.
This proves that there exists a number $x$ which is both a root of the original polynomial and such that $s = x + x^{-1}$ is a root of $Q$, since $Q(x+x^{-1}) = \frac{p(x)}{x^2}$.
But one question remains on my part: since $x$ is supposedly a complex number, is it a complex number in its current form? That I'm a bit unsure of, since it should then be written in the form $x = a + bi$?
Best Regards,
December 11th 2006, 04:33 AM
If you want to call x a complex number, it's a complex number. There's no need to write it in a + bi form, though you may do so if you like.
December 11th 2006, 05:47 AM
You could try running the particular form of quartic that you have through
the quartic formula and see what you get.
Background on the formula can be found here.
December 11th 2006, 07:34 AM
I know how to solve the cubic algebraically but never learned how to solve the quartic; it is too long. I find it amazing how these Italian mathematicians were so good at manipulating radical terms and factorizations.
I read that one Italian mathematician was killed for being able to solve the quartic, because people believed it was not within human skill and thus he had made a pact with the Devil (a similar story is told of the violinist Paganini; they used the same excuse).
This is my 37:):)th Post!!! | {"url":"http://mathhelpforum.com/algebra/8689-urgent-4-degree-polynomial-irrational-roots-print.html","timestamp":"2014-04-18T13:23:05Z","content_type":null,"content_length":"12635","record_id":"<urn:uuid:ae6a7c0d-88fa-43e2-821a-46b854ee2cac>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Powder Springs, GA Geometry Tutor
Find a Powder Springs, GA Geometry Tutor
...I have years of experience with relational databases and understand normalized data, how to model business systems with data (when to add another table and fields, for example, and what data
type is best), and how to design and perform database queries using SQL. I have worked with VBA macros an...
126 Subjects: including geometry, chemistry, English, calculus
...When I tutored someone in chemistry, I devoted my time to one person at a time to focus on their problems, to see what they were struggling on. I then use the Socratic Method and go through
the process by asking a question for each step. Once the person starts to get it, I then provide a person...
17 Subjects: including geometry, chemistry, calculus, algebra 1
...I presently teach Precalculus at Chattahoochee Tech. I have also taught Calculus at the college level, so I am able to help prepare students for the highest levels of math. What sets me apart
is the ability to provide simple and clear explanations of advanced concepts.
13 Subjects: including geometry, calculus, statistics, algebra 1
...I am skilled in delivering specialized instruction to fifth grade through tenth students in math, reading and writing. I enjoy teaching and have a keen way of chunking subject matter into
pieces that provide encouragement as well as the extra support students need. By incorporating more complex...
18 Subjects: including geometry, reading, English, biology
I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT.
I am currently finishing up my masters degree from KSU.
4 Subjects: including geometry, algebra 1, algebra 2, prealgebra
Winston, GA geometry Tutors | {"url":"http://www.purplemath.com/Powder_Springs_GA_Geometry_tutors.php","timestamp":"2014-04-18T09:04:15Z","content_type":null,"content_length":"24266","record_id":"<urn:uuid:7b2515a3-6ac0-48ed-b217-66c39accd507>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra question please help
A cashier uses the expression d + s - c to check the amount of money in her cash drawer, where d is the initial amount in the drawer, s is the amount of cash that customers pay, and c is the
amount of change that customers receive. Suppose the initial amount is $125, customers pay $15.00, $18.25, and $24.00, and the amount of change returned is $1.50 and $1.95. How much money is in
the cashier's drawer?
Also, How much profit did the cashier make? | {"url":"http://mathhelpforum.com/algebra/65578-algebra-question-please-help.html","timestamp":"2014-04-16T21:58:01Z","content_type":null,"content_length":"37325","record_id":"<urn:uuid:afe2b629-30b2-4731-9478-070214104f9e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
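A quick sketch of the drawer computation (not an official answer; the profit question cannot be settled from the given figures, since no cost information is provided):

```python
d = 125.00                   # initial amount in the drawer
s = 15.00 + 18.25 + 24.00    # total cash the customers pay
c = 1.50 + 1.95              # total change the customers receive
print(round(d + s - c, 2))   # 178.8, i.e. $178.80 in the drawer
```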
Secure Frameproof Code Through Biclique Cover
Hossein Hajiabolhassan, Farokhlagha Moazami
For a binary code Γ of length v, a v-word w is produced by a set of codewords {w^1,…,w^r} ⊆ Γ if for all i=1,…,v we have w[i] ∈ {w[i]^1, …, w[i]^r}. We call a code r-secure frameproof of size t if |Γ|=t and, for any v-word that is produced by two sets C[1] and C[2] of size at most r, the intersection of these sets is non-empty. A d-biclique cover of size v of a graph G is a collection of
and length v exists if and only if there exists a 1-biclique cover of size v for the Kneser graph KG(t,r) whose vertices are all r-subsets of a t-element set and two r-subsets are adjacent if their
intersection is empty. Then we investigate some connection between the minimum size of d-biclique covers of Kneser graphs and cover-free families, where an (r,w; d) cover-free family is a family of
subsets of a finite set X such that the intersection of any r members of the family contains at least d elements that are not in the union of any other w members. The minimum size of a set X for
which there exists an (r,w;d) cover-free family with t blocks is denoted by N((r,w;d),t). We prove that for t > 2r and r>s, we have bc[d](KG(t,r))≥bc[m](KG(t,s)), where m=N((r-s,r-s;d),t-2s).
Finally, we show that for any 1≤i < r, 1≤j < w, and t≥r+w we have N((r,w;d),t)≥N((r-i,w-j;m),t), where m=N((i,j;d),t-r-w+i+j).
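To make the central object concrete, here is a small illustrative sketch (not from the paper; the function name is ours) that builds the Kneser graph KG(t,r) directly from the definition above. KG(5,2) is the familiar Petersen graph:

```python
from itertools import combinations

def kneser_graph(t, r):
    """Vertices: r-subsets of a t-element set; edges: pairs of disjoint subsets."""
    verts = [frozenset(c) for c in combinations(range(t), r)]
    edges = [(u, v) for u, v in combinations(verts, 2) if not (u & v)]
    return verts, edges

verts, edges = kneser_graph(5, 2)
print(len(verts), len(edges))  # 10 15  (the Petersen graph)
```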
Full Text:
PDF PostScript | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/2131","timestamp":"2014-04-20T11:02:08Z","content_type":null,"content_length":"13768","record_id":"<urn:uuid:9ca65cff-291f-4046-80dc-abbffdb1d591>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
A common technique for finding modulo \(10\) of any huge number in the form \(a^b\)?
It seems that you have to go through a lot of manipulations... a lot for doing that. Although I was thinking of a technique they use to figure all that out.
OK, I'm gonna change the topic to something else:\[39^{39} \equiv n\pmod{100}\]
\(n\) has to be a two-digit number. (I actually have to find the last two digits)
Something with a pattern?
39 = -61 mod 100 => 39^38 = 61^38 mod 100 => 39^39 = 39*61^38 mod 100. Since 61^n always ends in 1, that multiplied by 39 will give 9 as the last digit. Hmm, lemme see for the second-to-last.
Better yet, 39^5 = -1 mod 100 Use that: 39^39 = (39^5)^7 * 39^4 = (-1)^7 * 39^4 mod 100
Seems legit. But on paper, who's gonna go till 39^5 ! :O
Very nice, can you tell me *how* you came up with that solution? What do you initially think?
square 39, and then only take the last two digits...
then multiply 39 to it again.
and only take the last two digits, again...
Ahem, that was beautiful. But is there a common way to figure out these solutions?
Ohh, I see.
The way I think of things, and it's a common trend that the way I do it is highly inefficient (LOL) But when faced with a problem b^p = n (mod k) I try to see if there's an exponent to which b
can be raised so that it is 1 (or -1) mod k first.
That "last two digits" trick only works because you were conveniently working under mod 100... XD
Nice... can you give me another example?
How about 6^5000 = n (mod 215)
(sneakily thinking of simple problems) :>
Arhmegurd.\[\left(6^{5}\right)^{1000} \equiv (6^2)^{1000} \equiv n \pmod{215}\]
\[\left(6^5\right)^{400} \equiv (6^2)^{400} \equiv n\pmod{215}\]
\[\left(6^5\right)^{120} \equiv 6^{240} \equiv n \pmod{215}\]
I'm not following o.O You may be using a method I'm not familiar with, though... if you are, do continue :)
\[\left(6^{5}\right)^{48} \equiv 6^{96} \equiv n \pmod{215}\]
Hmm... I am just using \(6^2 \equiv 6^5 \pmod{215}\)
What should I do here?
Hang on...
I was rather hoping you'd go for \[6^3 = 216 = -1(mod \ 215)\]
Wait, typo -.-
Yes, isn't that 1 (mod 215)
\[\large 6^3 = 216 = 1(\mod \ 215 \ \ \ \ )\] Much better
And work from there...
\[\large 6^{5000} = 6^{3(1666)+2}\]
\[6^{5000}= 6^{4998} \times 36 \equiv \]
\[1 \times 36 (\mod 215 \ \ \ \ \ )\]
\[\equiv 36 \pmod{215}\]
Argh, you type fast. =3
Another n00bish question coming up. Thanks man.
If you have time, look up Fermat's Little Theorem, might come in handy in a pinch.
Yesh, that one is cool: \(a^{p - 1} \equiv 1 \pmod{p}\) for a prime \(p\) that does not divide \(a\). I'd try to see if that applies in the future.
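Both congruences worked out in this thread can be confirmed with Python's three-argument pow (a quick check, not part of the original chat):

```python
# The thread reduced 39^39 to (-1)^7 * 39^4 (mod 100); since 39^4 = 41 (mod 100),
# that is -41 = 59 (mod 100). And 6^3 = 216 = 1 (mod 215) gives 6^5000 = 6^2 = 36.
print(pow(39, 39, 100))   # 59
print(pow(6, 5000, 215))  # 36
```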
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/510a825ae4b070d859bf144e","timestamp":"2014-04-19T04:29:51Z","content_type":null,"content_length":"123712","record_id":"<urn:uuid:87e9ca75-c45b-4bf1-8904-8e4581b5b297>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Storing MATLAB structs in Java objects
up vote 6 down vote favorite
I'm using Java HashMap in MATLAB
h = java.util.HashMap;
And while strings, arrays and matrices works seemlessly with it
h.put(5, 'test');
h.put(7, magic(4));
Structs do not
st.val = 7;
h.put(7, st);
??? No method 'put' with matching signature found for class 'java.util.HashMap'.
What would be the easiest/most elegant way to make it work for structs?
java matlab hashmap
3 Answers
You need to ensure that the data passed from MATLAB to Java can be properly converted. See MATLAB's External Interfaces document for the conversion matrix of which types get converted
to which other types.
MATLAB treats most data as pass-by-value (with the exception of classes with handle semantics), and there doesn't appear to be a way to wrap a structure in a Java interface. But you
could use another HashMap to act like a structure, and convert MATLAB structures to HashMaps (with an obvious warning for multiple-level structures, function handles, + other beasts
that don't play well with the MATLAB/Java data conversion process).
function hmap = struct2hashmap(S)
    if ((~isstruct(S)) || (numel(S) ~= 1))
        error('struct2hashmap only accepts single structures');
    end
    hmap = java.util.HashMap;
    for fn = fieldnames(S)'
        % fn iterates through the field names of S
        % fn is a 1x1 cell array
        fn = fn{1};
        hmap.put(fn, S.(fn));   % store each field's value under its name
    end
end
a possible use case:
>> M = java.util.HashMap;
>> M.put(1,'a');
>> M.put(2,33);
>> s = struct('a',37,'b',4,'c','bingo')
s =
a: 37
b: 4
c: 'bingo'
>> M.put(3,struct2hashmap(s));
>> M
M =
{3.0={a=37.0, c=bingo, b=4.0}, 1.0=a, 2.0=33.0}
(an exercise for the reader: change this to work recursively for structure members which themselves are structures)
Matlab R2008b and newer have a containers.Map class that provides HashMap-like functionality on native Matlab datatypes, so they'll work with structs, cells, user-defined Matlab
objects, and so on.
% Must initialize with a dummy value to allow numeric keys
m = containers.Map(0, 0, 'uniformValues',false);
% Remove dummy entry
m.remove(0);
m(5) = 'test';
m(7) = magic(4);
m(9) = struct('foo',42, 'bar',1:3);
m(5), m(7), m(9) % get values back out
add comment
I'm not familiar with Java HashMaps, but could you try using a cell array to store the data instead of a struct?
h = java.util.HashMap;
carr = {7, 'hello'};
h.put(7, carr);
% OR
h = java.util.HashMap;
st.val = 7;
h.put(7, struct2cell(st));
Not the answer you're looking for? Browse other questions tagged java matlab hashmap or ask your own question. | {"url":"http://stackoverflow.com/questions/436852/storing-matlab-structs-in-java-objects","timestamp":"2014-04-18T01:37:10Z","content_type":null,"content_length":"73589","record_id":"<urn:uuid:151dd6c9-57cd-4886-b52f-d660df30330e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Typical value of totient function
Can anyone tell me what the expected value of Euler's totient function $\varphi(n)$ is (roughly) if you choose a random integer $n$ in the range $[N,N+M]$, where $M$ is large and $N$ is larger than $M$? (I
think of $M$ as being $cN$ for some small constant $c$, which, if one wanted an answer accurate to $1+o(1)$, would in reality be a slowly decreasing function of $N$.)
IIRC Hardy and Wright says a lot about average values of arithmetic functions such as as phi. (there's a chapter or two on it). – Kevin Buzzard Nov 28 '09 at 11:59
2 Answers
I've just realized I was being a little bit slow. I had already found on the internet that $n^{-2}\sum_{k=1}^nφ(k)$ is roughly $3/π^2$ and stupidly didn't notice that I could
"differentiate" this to get exactly what I want. That is, $\sum_1^N φ(k)$ is about $3N^2/π^2$, so the difference between the sum to $N+M$ and the sum to $N$ is around $6NM/π^2$,
from which it follows that the average value near $N$ is around $6N/π^2$, which is entirely consistent with the well-known fact that the probability that two random integers are coprime
is $6/π^2$.
I'm adding this paragraph after Greg's comment. To argue that the probability that two random integers are coprime is $6/\pi^2$, you observe that the probability that they do not have $p$ as a common
factor is $(1-1/p^2)$. If you take the product of that over all $p$ then you've got the reciprocal of the Euler product formula for $\zeta(2)$, i.e. of $1^{-2}+2^{-2}+\cdots = \pi^2/6$. It's not that hard to
turn these formal arguments into a rigorous proof, since everything converges nicely.
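As a numerical sanity check (not part of the original answer), the mean value $6N/\pi^2$ can be compared with a direct average of $\varphi(n)$ over a short window $[N, N+M]$:

```python
from math import pi

def phi(n):
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

N, M = 10**6, 10**4  # M small relative to N, as in the question
avg = sum(phi(n) for n in range(N, N + M)) / M
print(avg / (6 * N / pi**2))  # close to 1 (within about 1%)
```

The small remaining discrepancy is mostly because the window sits slightly above $N$, so the average is really about $6(N+M/2)/\pi^2$.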
It isn't difficult to argue $6N/\pi^2$ directly either. You should accept your own answer! You get a badge for that. – Greg Kuperberg Nov 28 '09 at 16:15
Let me also mention the following: you can adapt Schoenberg's result to prove that 1/M * #{N <= n <= N + M : phi(n)/n <= t} --> F(t) uniformly in t, where F is a distribution function. The
proof goes by computing the moments sum((phi(n)/n)^k, N <= n <= N + M). You can probably get a O(loglog N / log N) rate of convergence (as was done by Levin ... if I recall correctly).
If this is of interest, the distribution function F(t) decays doubly exponentially at 0, that is F(1/t) << exp(-C*exp(t)) for some constant C. This was investigated by Erdos and more
recently Weingartner (mrlonline.org/proc/2007-135-09/S0002-9939-07-08771-0/…). Asymptotics for 1 - F(t) when t is close to 1 were studied by Tenenbaum and Toulmonde (a reference is in
the paper above). In this case the asymptotic behaviour is more tame. There should be no problem adapting all these results to the case of the interval [N, N + M] with M as you
described... – mrm Nov 29 '09 at 17:59
Not the answer you're looking for? Browse other questions tagged analytic-number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/7039/typical-value-of-totient-function","timestamp":"2014-04-18T05:40:06Z","content_type":null,"content_length":"57443","record_id":"<urn:uuid:8b4b9d82-e261-4c8b-85d6-7d2523211c57>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fraction integrand with square root
November 18th 2010, 06:12 AM #1
Oct 2010
Hi, I am having some trouble with this integral:
$\int \frac{x}{\sqrt{x^{2}+x+1}} dx$
I suspect I have to make a substitution, but I am unsure about what to substitute. I guess whatever it is, it has to be something that gets rid of the square root? Or is that unnecessary?
This integral appeared a few days ago in the forum. Anyway, rewrite the integral as:
$\displaystyle I = \int\frac{x}{\sqrt{x^2+x+1}}\;{dx} = \int\frac{\frac{1}{2}(2x+1)+x-\frac{1}{2}(2x+1)}{\sqrt{x^2+x+1}}\;{dx}$
$\displaystyle \Rightarrow I = \int\frac{\frac{1}{2}(2x+1)}{\sqrt{x^2+x+1}}\;{dx} +\frac{1}{2}\int\frac{2x-(2x+1)}{\sqrt{x^2+x+1}}\;{dx}$
$\displaystyle \Rightarrow I = \int\frac{\frac{1}{2}(2x+1)}{\sqrt{x^2+x+1}}\;{dx} - \frac{1}{2}\int\frac{1}{\sqrt{x^2+x+1}}\;{dx}$.
Let $u = \sqrt{x^2+x+1}$ for the first one, and for the other complete the
square $x^2+x+1 = \left(x+\frac{1}{2}\right)^2+\frac{3}{4}$, then let $x+\frac{1}{2} = \frac{\sqrt{3}}{2}\sinh{\varphi}$.
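Carrying those two substitutions through gives the antiderivative $I = \sqrt{x^2+x+1} - \tfrac{1}{2}\,\mathrm{arcsinh}\!\left(\tfrac{2x+1}{\sqrt{3}}\right) + C$. A quick numerical check (not from the thread) that this candidate really differentiates back to the integrand:

```python
import math

def F(x):
    """Candidate antiderivative obtained from the hints above."""
    return math.sqrt(x*x + x + 1) - 0.5 * math.asinh((2*x + 1) / math.sqrt(3))

def integrand(x):
    return x / math.sqrt(x*x + x + 1)

# Central-difference derivative of F should match the integrand
h = 1e-6
for x in (-2.0, 0.5, 3.0):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-6
print("antiderivative checks out")
```

Note that $x^2+x+1 > 0$ for all real $x$, so both the square root and the check are valid everywhere.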
{"url":"http://mathhelpforum.com/calculus/163676-fraction-integrand-square-root.html","timestamp":"2014-04-20T23:28:22Z","content_type":null,"content_length":"33927","record_id":"<urn:uuid:5c133bdd-0d56-4dfd-873c-c902e25f8e6b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
How Many Cubic Feet Will One 80 Lb Bag Of Quickcrete Concrete
How many 80 lb bags of Quikrete make 1 yard of concrete? (ChaCha) It would take just about two bags to cover one yard at 2 in.
How much concrete does an 80 lb. bag make? (eHow) Cement, commonly sold to consumers in bags, is one of four components that make up concrete once properly mixed.
How many square feet in an 80 lb bag of cement? "Can anyone tell me how many square feet I would get out of an 80 lb bag of cement? I plan to use 2x4's turned on their 2-inch side to make my forms."
How many bags of cement are in a cubic yard? (eHow) While no one knows exactly how long cement has been around, historians are certain that the Romans used it.
How many bags of concrete do I need? (Ask.com) The slab being built determines the number of bags one will need; concrete mix typically comes in 60 or 80 pound bags.
How many bags of Quikrete in a yard? 1 yard is 9 square feet and 0.9144 meter; you will need about 7 bags of Quikrete.
Feb 22, 2010
How many cubic feet will one 80 lb. bag of quickcrete concrete make?
An 80 pound bag of concrete will provide about .6 of a cubic foot, so for four cubic feet, you would need seven 80 pound bags of concrete.
How many cubic feet in a 60 pound bag of quikrete? Answer There is just under half (0.45) of a cubic foot in a 60 pound bag of quikrete. | {"url":"http://onmilwiki.com/how/how-many-cubic-feet-will-one-80-lb-bag-of-quickcrete-concrete.html","timestamp":"2014-04-20T11:56:48Z","content_type":null,"content_length":"22477","record_id":"<urn:uuid:b6a24048-5dae-4cae-bce9-f303d85156e6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
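The yields quoted above (about 0.6 cubic foot per 80 lb bag and 0.45 cubic foot per 60 lb bag) make the bag count a one-line calculation. A quick sketch — note the yield figures are the approximations from this page, not official product specifications:

```python
import math

YIELD_CU_FT = {80: 0.60, 60: 0.45}  # approximate cubic feet of concrete per bag

def bags_needed(volume_cu_ft, bag_lb=80):
    """Round up -- you can't buy a fraction of a bag."""
    return math.ceil(volume_cu_ft / YIELD_CU_FT[bag_lb])

print(bags_needed(4))        # 7, matching the answer above
print(bags_needed(27))       # one cubic yard (27 cu ft) of 80 lb bags -> 45
print(bags_needed(27, 60))   # one cubic yard of 60 lb bags -> 60
```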
March Give A Way!!!!
This month we are giving away ANY Monogrammed Large Tote bag of your choice!!! Search for the letters WIN A TOTE within our Monogrammed Bags category, found here: Monogrammed Bags. You will receive 10 Entries for this!!!
Extra Entries....
1 Extra Entry
if you follow Chic Monkey Boutique on Twitter
1 Extra Entry
per tweet that links to http://www.thechicmonkey.com/
1 Extra Entry
if you become a friend of Chic Monkey Boutique on Facebook
1 Extra Entry
per comment on Facebook that links back to http://www.thechicmonkey.com/
1 Extra Entry
if you follow Chic Monkey's Blog
2 Extra Entries
if you add Chic Monkey Boutiques Banner on your website or blog. (Make sure it's linked to our website)
3 Extra Entries
per blog linking to http://www.thechicmonkey.com/
15 Extra Entries
for every purchase made at Chic Monkey Boutique
Entries are held all year long! Contestant with the most entries at the end of 2010 will receive a bed/nursery set valued at $500!!!!!
Giveaway will end on March 31st at 12:00am. Winner will be chosen at random by random.org. Winner will have 48 hours to contact us via e-mail to claim their prize. Must be 18 years or older to enter.
Congratulations to Carol Sue for winning our February Giveaway. She will receive a monogrammed shirt or onesie of her choice!!!!
Also, Congratulations to Kathleen Miller for ending the month with the MOST entries...A grand total of 41 ENTRIES!!!!! She is now in the lead of winning the custom bed/nursery set worth $500 at the
end of the year!
Good Luck,
Cristie Barbian
143 comments:
{"url":"http://www.thechicmonkeyblog.com/2010/03/march-give-way.html?showComment=1269543035465","timestamp":"2014-04-18T00:14:26Z","content_type":null,"content_length":"312883","record_id":"<urn:uuid:ff43ad1f-1bc8-4a4c-80e0-ac966f4b287d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Limits involving square roots
August 29th 2012, 08:18 AM #1
Hello. I've never been great at math (or spent much time studying), but, since my new major requires quite a few math classes, I have decided to put in as much time as I can to develop my skills.
I need help finding a limit for a problem involving square roots. The problem is probably very simple, but I am not sure exactly how to work with the square roots/squares in the problem. I have
tried as many approaches as I can think of, and I still feel like I'm getting nowhere.
Could anyone walk me through the steps in solving this? That would be EXTREMELY helpful. The problem:
$\lim_{t \to 2} \frac{\sqrt{(t+4)(t-2)^4}}{(3t-6)^2}$
Thanks for any help in advance!
Re: Limits involving square roots
No walk through is available here. We don't do that.
But $(3t-6)^2 = 9(t-2)^2$, so $\frac{\sqrt{(t+4)(t-2)^4}}{(3t-6)^2}=\frac{\sqrt{(t+4)}(t-2)^2}{9(t-2)^2}=\frac{\sqrt{(t+4)}}{9}$
Re: Limits involving square roots
If this is what you mean:
$\lim _{t \to 2} \frac {\sqrt{(t+4)(t-2)^4}}{(3t-6)^2}$
You can start by pulling the (t-2)^4 term out of the square root, then simplify:
$\frac {\sqrt{(t+4)(t-2)^4}}{(3t-6)^2} = \frac {\sqrt{t+4}(t-2)^2}{3^2(t-2)^2} = \frac {\sqrt{t+4}} 9$
$\lim _{t \to 2} \frac {\sqrt{t+4}} 9 = \frac {\sqrt 6 } 9$
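A quick numerical sanity check (not part of the thread) that the original expression really approaches $\sqrt{6}/9 \approx 0.2722$ as $t \to 2$ from either side:

```python
import math

def f(t):
    """The original expression, before simplification."""
    return math.sqrt((t + 4) * (t - 2)**4) / (3*t - 6)**2

limit = math.sqrt(6) / 9
for t in (1.9, 2.01, 2.001):
    assert abs(f(t) - limit) < 1e-2
print(round(limit, 4))  # 0.2722
```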
{"url":"http://mathhelpforum.com/calculus/202678-limits-involving-square-roots.html","timestamp":"2014-04-19T17:18:13Z","content_type":null,"content_length":"38461","record_id":"<urn:uuid:2f1ce7cb-dd29-4bab-9bbf-f9e26a1bd427>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
"Zapiski Nauchnyh Seminarov POMI"
VOL. 241
This issue is entitled "Studies in Constructive Mathematics and Mathematical Logic. Part X"
editors E. Ya. Dantsin and V. P. Orevkov
This volume contains articles on the complexity of algorithms and the complexity of proofs. The papers concern Diophantine representations of sets, probabilistic verification of proofs, and upper bounds for the satisfiability problem. | {"url":"http://www.emis.de/journals/ZPOMI/1997/v5.html","timestamp":"2014-04-17T13:16:59Z","content_type":null,"content_length":"2392","record_id":"<urn:uuid:fc32cd57-96df-4695-942e-dd5a17e3944f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Risk And Rates Of Return Ppt
Title: CHAPTER 5 Risk and Rates of Return Author: Christopher Buzzard Last modified by: DCS Created Date: 10/10/2002 7:37:16 PM Document presentation format
Title: CHAPTER 5 Risk and Rates of Return Author: Christopher Buzzard Last modified by: Joel Houston Created Date: 10/10/2002 7:37:16 PM Document presentation format
Title: Chapter 4 - Risk and Rates of Return Author: Susan Cook Last modified by: sgriffith Created Date: 2/9/1999 3:21:30 AM Document presentation format
Title: CHAPTER 5 Risk and Rates of Return Author: Christopher Buzzard Last modified by: AEC Created Date: 10/10/2002 7:37:16 PM Document presentation format
Risk and Return: Part I Basic return concepts Basic risk concepts Stand-alone risk Portfolio (market) risk ... Required Rates of Return Expected versus Required Returns Slide 43 Calculate beta for a
portfolio with 50% Alta and 50% Repo What is the required rate of return on the Alta/Repo portfolio?
Title: Risk and Rates of Return Author: Ivan Orlic Last modified by: Hofstra Created Date: 3/22/2001 11:53:30 PM Document presentation format: On-screen Show
Title: Chapter 5 Risk & Rates of Return Author: Mike Dyer Last modified by: Finance Created Date: 6/17/1995 11:31:02 PM Document presentation format
Title: CHAPTER 5 Risk and Rates of Return Author: Christopher Buzzard Last modified by: MASUD Created Date: 10/10/2002 7:37:16 PM Document presentation format
... so the NPV’s reflect that risk/ return tradeoff Rates of Return Construction Engineering 221 Economic Analysis Rates of Return ROR stand for Rate of return- it is the effective annual interest
rate earned on an investment ROI is Return on Investment is NOT stated as a dollar amount in ...
Title: Chapter 4 - Risk and Rates of Return Author: Susan Cook Last modified by: Batool Created Date: 2/9/1999 3:21:30 AM Document presentation format
Theory 1: Risk and Return The beginnings of portfolio theory BM410: Investments Objectives A. Understand rates of return B. Understand return using scenario, probabilities, ...
Title: Chapter 6-Risk and Rates of Return Subject: Martin. Keown, Petty, Scott Author: Claire/Darrell Crutchley Last modified by: kalra Created Date
... * * Risk and Rates of Return Risk is the potential for unexpected events to occur or a desired outcome not to occur. If two financial alternatives are similar except for their degree of risk, ...
Risk and Rates of Return Portfolio Beta Coefficients Example: What is the beta of a portfolio made up of: 25% of Stock H, 45% of Stock A, and 30% of Stock L?
Title: Risk, Return, and Discount Rates Author: zender Last modified by: zender Created Date: 7/1/1997 11:04:06 PM Document presentation format: Letter Paper (8.5x11 in)
Title: CHAPTER 5 Risk and Rates of Return Last modified by: A Created Date: 9/4/2002 3:55:56 AM Document presentation format: On-screen Show Other titles
Title: CHAPTER 5 Risk and Rates of Return Author: Christopher Buzzard Last modified by: Susan Purcell Whitman Created Date: 7/18/1995 9:57:32 AM Document presentation format
Risk, Return, & the Capital Asset Pricing Model * ... Required Rates of Return Expected versus Required Returns (%) SML: ri = rRF + (RPM) bi ri = 8% + (7%) bi Calculate beta for a portfolio with 50%
Alta and 50% Repo Required Return on the Alta/Repo Portfolio? Impact of ...
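The SML formula that recurs in these slide summaries, ri = rRF + (RPM)·bi, and the weighted portfolio beta are simple to sketch in code. The 8%/7% figures are the example values quoted in the snippet above; the betas below are hypothetical, not taken from any of the listed decks:

```python
def sml_required_return(r_rf, rp_m, beta):
    """Security Market Line: r_i = r_RF + RP_M * b_i."""
    return r_rf + rp_m * beta

def portfolio_beta(weights, betas):
    """Portfolio beta is the weighted average of component betas."""
    return sum(w * b for w, b in zip(weights, betas))

# Snippet's example values: r_RF = 8%, RP_M = 7%
print(round(sml_required_return(0.08, 0.07, 1.0), 4))  # 0.15, the market's required return

# Hypothetical betas for the 25% / 45% / 30% mix of stocks H, A and L mentioned above
print(round(portfolio_beta([0.25, 0.45, 0.30], [1.4, 1.0, 0.6]), 4))  # 0.98
```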
Title: Risk & Rates of Return Author: cbacc Last modified by: macminn Created Date: 10/23/1995 11:21:23 AM Document presentation format: On-screen Show
Risk and Return: Capital Asset Pricing Model Liuren Wu ... =1.16 FIN3000, Liuren Wu * 8.3 The CAPM CAPM also describes how the betas relate to the expected rates of return that investors require on
their investments.
Title: CHAPTER 5 Risk and Rates of Return Last modified by: G. Donald Jud Created Date: 7/18/1995 9:57:32 AM Document presentation format: On-screen Show
Title: CHAPTER 5 Risk and Rates of Return Last modified by: HBCP Created Date: 7/18/1995 9:57:32 AM Document presentation format: On-screen Show Other titles
Title: Chapter 4 - Risk and Rates of Return Author: Susan Cook Last modified by: Dr khalid Created Date: 2/9/1999 3:21:30 AM Document presentation format
... Expected Return Expected Rate of Return Studying and Understanding Risk Risk Standard Deviation of Return Annual Rates of Return, 1926-2000 Total Risk or Variability Diversification
Company-Unique Risk Market Risk Market Risk Measuring Market Risk Measuring Market Returns Beta ...
... Liuren Wu Overview Calculate Realized and Expected Rates of Return and Risk. Describe the Historical Pattern of Financial Market Returns. Compute Geometric and Arithmetic Average Rates of Return.
Explain Efficient Market Hypothesis FIN3000, ...
CHAPTER 5 Risk and Rates of Return Investor like return but dislike risk, thus they invest in risky asset only if they expect to receive higher returns.
Risk and Rates of Return Chapter 11 Requests for permission to make copies of any part of the work should be mailed to: Thomson/South-Western 5191 Natorp Blvd.
Chapter 13. Risk & Return in Asset Pricing Models Portfolio Theory Managing Risk Asset Pricing Models I. Portfolio Theory how does investor decide among group of assets? assume: investors are risk
averse additional compensation for risk tradeoff between risk and expected return goal efficient or ...
Risk, Return, and the Capital Asset ... Required Rates of Return Expected versus Required Returns (%) SML: ri = rRF + (RPM) bi ri = 8% + (7%) bi Calculate beta for a portfolio with 50% Alta and 50%
Repo Required Return on the Alta/Repo Portfolio? Impact of ...
... Example Using Total Return Swaps to Hedge Credit Risk Total Return Swap ... Floating rates are reset at settlement dates covering the subsequent period. Only net payments are actually exchanged.
Swaps are equivalent to an exchange of a series of forward contracts.
... earn its expected rate of return Probability is the likelihood of an outcome Expected Rates of Return Measuring the Risk of Expected Rates of Return Expected Return and Risk Risk Aversion
Measuring the Risk of Expected Rates of Return Coefficient of variation ...
Title: Financial Management Subject: Ch. 6: Risk and Rates of Return Author: Anthony K. Byrd, Associate Professor of Finance Last modified by: Ian Landsman
Risk and Return Risk is defined as uncertainty of outcomes. ... Risk averse investors require higher rates of return to invest in riskier assets. It is assumed that all investors are risk averse: If
two assets offer the same return, ...
Risk Management is a methodology that helps managers make best use of their available resources Risk Management practices are ... Determine operating procedures. Next Identify commodity or control
risks; e.g., high duty rates or quantity controls, the demand for prohibited goods, such ...
... Short term government security rates are used as risk free rates Historical risk premiums are used for the risk premium Betas ... lower for safer investments While risk is usually defined in
terms of the variance of actual returns around an expected return, risk and return models in ...
... Estimating Expected Returns Estimating Expected Returns Estimating Expected Returns Estimating Expected Returns Risk Risk of Rates of Return: Variance and Standard Deviation Measuring Risk
Example Using the Ex post Standard Deviation Portfolio Return: Two ...
Chapter Preview We examine how financial institutions manage credit risk, default risk, ... specialize in earning a higher rate of return on their assets relative to the interest paid on their
liabilities. ... items whose market value will change as interest rates change.
... foreign exchange adjusted rates of return International Structure of Interest Rates Yield differences between countries are related to: Expected changes in forex rates Varied expected real rates
of return Varied expected inflation rates Varied country and business risk Varied central ...
... investors choose securities with less risk Leads to risk-return tradeoff Illustration of Risk Aversion One year, ... we need to add 0.90% to future rates Returns on Bonds: Risk-Neutral World
Expected one-year return on bond: ...
... CAPM: COMMENTS ON ESTIMATING Required rates of return Under the CAPM, required rate of return (or cost of equity) depends on the risk free rate, the market risk premium, and the Beta, as shown in
... The CML gives the risk/return relationship for efficient portfolios. The Security ...
credit risk management ... from static management of credit to dynamic management of credit risk key drivers of change shareholder value issues return on credit capital optimisation of return on ...
default probability/ ratings migration recovery rates default correlations portfolio models ...
... to certain ranges of maturities Financial Markets and Intercountry Risk Financial System Risk Political System Risk Exchange Rate Risk Rates of Return in Financial Markets *Foundations of Finance
Chapter 2 The Financial Markets and Interest Rates * Lower interest rates Contracting ...
The Risk and Term Structure of Interest Rates ... becoming riskier Corporate bonds generally have a higher default risk than municipal bonds Investors will expect higher return to compensate for
increased default risk Default Risk Standard and Poor’s and Moody’s Investors Service rate ...
... money market Why interest rates are different? Risk structure of interest rate Term structure of interest rate Expected return and risk Risk-return tradeoff. Then the ‘average’ of return, ...
... borrowed $12.5B, invested in 5yr. notes interest rates increased reported at cost - big mistake! realized loss of $1.64B Public Funds ... Division of VaR by business units, areas of activity,
counterparty, currency. Performance measurement - RAROC (Risk Adjusted Return On Capital).
Risk and Rates of Return What does it mean to take risk when investing? How are risk and return of an investment measured? For what type of risk is an average investor rewarded?
... in price due to changes in interest rates Long-term bonds have more price risk than short-term bonds Reinvestment Rate Risk Uncertainty concerning rates at which cash flows can be ... Required
Return Default risk premium and ... Ross PPt template Microsoft Graph 2000 ...
Title: Financial Management Subject: Ch. 6: Risk and Rates of Return Author: Anthony K. Byrd, Associate Professor of Finance Last modified by: Cengiz Capan
Session 2: The Risk Free Rate Aswath Damodaran The risk free rate is the starting point.. For both cost of equity and cost of debt To get to a cost of equity from any risk and return model, you begin
with a riskfree rate.
Title: Risk and Return Subject: Powerpoint show Author: Mike Ehrhardt, Lou Gapenski and Phillip Daves Last modified by: Amelia Bell Created Date: 7/18/1995 9:57:32 AM | {"url":"http://ebookily.org/ppt/risk-and-rates-of-return-ppt","timestamp":"2014-04-25T02:27:43Z","content_type":null,"content_length":"43293","record_id":"<urn:uuid:8e2a2cd0-8976-4c09-8dde-90fbd07c3033>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
complex numbers
June 14th 2012, 10:25 PM #1
the potential difference across a circuit is given by the complex number
V = 40 + j35 volts
& the current is given by the complex number
I = 6 + j3 amps
How do I find the phase difference (the angle) between the phasors for voltage V and current I?
Re: complex numbers
Convert the voltage and current from cartesian coordinates $(x,y)$ - where $x$ is the real value and $y$ the imaginary - to polar coordinates of the form $(r,\theta)$. The phase angle is the
difference between the two values of $\theta$.
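In code terms (a sketch, not from the thread), Python's cmath module does the cartesian-to-polar conversion directly:

```python
import cmath
import math

V = complex(40, 35)  # 40 + j35 volts
I = complex(6, 3)    # 6 + j3 amps

# Phase angle of each phasor, then their difference
phase_v = math.degrees(cmath.phase(V))  # ~41.19 degrees
phase_i = math.degrees(cmath.phase(I))  # ~26.57 degrees
print(round(phase_v - phase_i, 2))      # ~14.62 degrees: V leads I
```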
Re: complex numbers
thanks for that just figured it out but good to double check
{"url":"http://mathhelpforum.com/pre-calculus/200046-complex-numbers.html","timestamp":"2014-04-19T20:46:18Z","content_type":null,"content_length":"35194","record_id":"<urn:uuid:15f51b28-9206-4345-abdf-ddd497590941>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Which of the following statements is false? Select one: a. -1 > -4 b. 1 < - 2 c. -3 > -5 d. -5 < 3
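A direct check of each option (not part of the original thread) shows that (b) is the false statement:

```python
statements = {
    "a": -1 > -4,   # True
    "b": 1 < -2,    # False -- 1 lies to the right of -2 on the number line
    "c": -3 > -5,   # True
    "d": -5 < 3,    # True
}
false_ones = [k for k, v in statements.items() if not v]
print(false_ones)  # ['b']
```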
{"url":"http://openstudy.com/updates/512ad995e4b02acc415d05e4","timestamp":"2014-04-17T18:25:29Z","content_type":null,"content_length":"53979","record_id":"<urn:uuid:342a3282-a48a-49f5-aadd-3f496e48838f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Serre theorem in group cohomology
From Encyclopedia of Mathematics
A theorem proved by J.-P. Serre in 1965 about the cohomology of pro-p-groups (cf. Pro-p-group; Cohomology of groups). The original proof appeared in [a7]; a proof in the context of finite group cohomology appears in [a1].
Let ). Assume that [a9] and Cohomology operation). Note that for
For a finite
which the theorem asserts to be the trivial extension class.
The original application of Serre's result was for proving that if profinite group without elements of order [a8] for more on this; cf. also Cohomological dimension).
However, it is also a basic technical result used in proving the landmark result (see [a5] and [a6]) that the Krull dimension (cf. Dimension) of the cohomology of a finite p-group equals the maximal rank of an elementary Abelian subgroup.
This, in turn, can be extended to arbitrary finite groups and to cohomology with coefficients in a modular representation. Indeed, it is a basic result in the theory of complexity and cohomological
varieties in representation theory. This is explained [a2], [a3] and [a4].
[a1] A. Adem, R.J. Milgram, "Cohomology of finite groups" , Grundlehren , 309 , Springer (1994) MR1317096 Zbl 0828.55008 Zbl 0820.20060
[a2] D.J. Benson, "Representations and cohomology II: Cohomology of groups and modules" , Studies in Advanced Math. , 32 , Cambridge Univ. Press (1991) Zbl 0731.20001
[a3] J.F. Carlson, "Modules and group algebras" , ETH Lect. Math. , Birkhäuser (1994) MR1428452 MR1393196 MR1342784 MR1338985 MR1159220 MR0621286 MR0613859 MR0528565 MR0491914 MR0472985 MR0554578
MR0419506 MR0364411 Zbl 0883.20006 Zbl 0837.20010 Zbl 0762.20021 Zbl 0484.20005
[a4] L. Evens, "Cohomology of groups" , Oxford Univ. Press (1992) MR1144017 MR0574102 MR0153725 Zbl 0742.20050 Zbl 0122.02804
[a5] D. Quillen, "The spectrum of an equivariant cohomology ring I–II" Ann. of Math. , 94 (1971) pp. 549–602 MR0298694 Zbl 0247.57013
[a6] D. Quillen, B. Venkov, "Cohomology of finite groups and elementary Abelian subgroups" Topology , 11 (1972) pp. 317–318 MR0294506 Zbl 0245.18010
[a7] J.-P. Serre, "Sur la dimension cohomologique des groupes profinis" Topology , 3 (1965) pp. 413–420 MR0180619 Zbl 0136.27402
[a8] J.-P. Serre, "Cohomologie Galoisienne" , Lecture Notes in Mathematics , 5 , Springer (1994) (Edition: Fifth) MR1324577 Zbl 0812.12002
[a9] E. Spanier, "Algebraic topology" , Springer (1989) MR1325242 Zbl 0810.55001 Zbl 0477.55001 Zbl 0222.55001 Zbl 0145.43303
How to Cite This Entry:
Serre theorem in group cohomology. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Serre_theorem_in_group_cohomology&oldid=24127
This article was adapted from an original article by Alejandro Adem (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"http://www.encyclopediaofmath.org/index.php/Serre_theorem_in_group_cohomology","timestamp":"2014-04-19T09:38:18Z","content_type":null,"content_length":"29006","record_id":"<urn:uuid:fc32cd57-96df-4695-942e-dd5a17e3944f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
The Seven Bridges of Königsberg
In the 1730s, Leonhard Euler (then working in St. Petersburg) took up a now-famous puzzle concerning the Prussian city of Königsberg. The Pregel River runs around the center of the city (Kneiphof) and then splits into two parts. The city was then quite
prosperous and the volume of commerce justified connections between the separated land masses by seven bridges. A popular problem of the day was to find a continuous path which would cross all seven
bridges once and only once.
You can try to solve this problem by first dragging the locator to some starting point, then clicking at points that define a path. Points can then be moved, or removed (Alt-click).
We apologize for your futile efforts in trying to solve the problem. In fact, Euler proved that no solution exists. It is easy to see why. The four land masses are the north and south shores and the
two islands. Any path must arrive at and depart from a land mass via two different bridges (lest one bridge be used twice), so for any legal path to exist, each land mass must be approachable by an
even number of bridges.
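Euler's parity argument is easy to check mechanically. The sketch below models the seven bridges as edges of a multigraph on the four land masses (the labels N and S for the shores, A for Kneiphof, and B for the eastern island are our own):

```python
from collections import Counter

# The seven bridges as edges between land masses
bridges = [("N", "A"), ("N", "A"), ("S", "A"), ("S", "A"),
           ("N", "B"), ("S", "B"), ("A", "B")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# An Eulerian path (each edge crossed exactly once) exists
# iff exactly 0 or 2 vertices have odd degree.
odd = sorted(m for m, d in degree.items() if d % 2 == 1)
print(odd)  # ['A', 'B', 'N', 'S'] -- all four are odd, so no legal path exists
```

Adding or deleting a single bridge flips the parity of its two endpoints, which is how the modified configurations in the snapshots below become solvable.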
Euler's more sophisticated treatment can be considered as the beginning of what is now graph theory and, more generally, the entire field of topology.
Snapshot 1: an illegal solution since one bridge is crossed twice
Snapshot 2: adding an eighth bridge can make the problem solvable
Snapshot 3: with two of the bridges destroyed by bombing in World War II, another solution is possible | {"url":"http://demonstrations.wolfram.com/TheSevenBridgesOfKoenigsberg/","timestamp":"2014-04-19T01:51:50Z","content_type":null,"content_length":"43024","record_id":"<urn:uuid:56c176cf-97d5-411e-8334-698b8b42f2c0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mean-Gini Optimization
In the last post we introduced the Gini Coefficient as a measure of inequality and statistical dispersion. The primary benefit to using the Gini versus standard deviation is the proper consideration
of abnormally large values in the cumulative distribution. There are many applications of the Gini within quantitative finance. One example is the Mean-Gini framework, which was presented as an
alternative to classic Mean-Variance optimization. So what are the advantages of a Mean-Gini framework? Cheung et al. (2007) provide a very compelling argument:
“From a theoretical perspective, the mean-variance approach is appropriate
only when investment returns are normally distributed or investors’ preferences can be
characterized by quadratic functions.As the assumption of quadratic utility is known to be problematic on theoretical grounds, normality of investment returns becomes necessary for the mean-variance
approach to hold. The validity of the assumption of normality or even near normality, however, is questionable when applied to financial assets such as derivatives (which include various forms of
options on stocks and other assets), stocks from emerging markets,and hedge funds.”
Given that we know financial data has a tendency to “mis-behave” (the 1 in 10 trillion events using normal distribution assumptions that happen every 10 years), the Mean-Variance framework is clearly
more fragile than a Mean-Gini framework. That is the good news, unfortunately the bad news is that the use of the Gini Coefficient for optimization is complex and there is no efficient closed-form
solution such as the case for the Markowitz Mean-Variance framework. In fact, the calculation and interpretation of the Gini Coefficient differs from the original statistic often quoted by
economists. In a typical context, the Gini Coefficient would range between 0 and 1. In the case of portfolio management, we wish to compare a return stream to its cumulative distribution. While the
mathematics of the Gini are beyond the scope of this article, the calculation is analogous to a more elaborate measure of absolute error. In general terms, the absolute error is a measure
of dispersion that calculates the average absolute difference of return values from their mean. In contrast, the standard deviation calculates squared errors. This makes the Gini more resistant to
the outliers that plague the variance-covariance matrix estimation in a Mean-Variance setting.
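To make the comparison concrete, here is an illustrative sketch (the data, the function names, and the pairwise formula are my own choices, not from the post) computing the Gini mean difference as the average absolute difference over all pairs, next to the standard deviation, on a return series with one fat-tail event:

```haskell
-- Illustrative sketch: Gini mean difference versus standard deviation
-- on a toy return series that contains one fat-tail event.
module Main where

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

stdDev :: [Double] -> Double
stdDev xs = sqrt (mean [(x - m) ^ 2 | x <- xs]) where m = mean xs

-- Gini mean difference: average absolute difference over all pairs.
gmd :: [Double] -> Double
gmd xs = sum [abs (x - y) | x <- xs, y <- xs] / fromIntegral (n * n)
  where n = length xs

main :: IO ()
main = do
  let calm  = [0.01, -0.02, 0.015, -0.01, 0.005]
      shock = calm ++ [-0.25]          -- one fat-tail event
  -- The outlier inflates the squared-error measure by a larger factor
  -- than it inflates the pairwise absolute-difference measure.
  print (stdDev shock / stdDev calm)
  print (gmd shock / gmd calm)
```

On this toy series the standard deviation grows by a larger factor than the Gini mean difference when the outlier is added, which is the robustness claim made above.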
6 thoughts on “Mean-Gini Optimization”
1. Rather than solve for fat tails and the “normal” distribution as one problem, why not solve them as two problems: a set of hedges/exposures optimized for tails (melt-downs and melt-ups) and a set
of exposures optimized for the normally distributed middle.
□ That is an interesting idea, and related somewhat to a combined mean-variance/modified var framework—of course without explicitly considering the “melt-ups.” Is there any formal framework
that you can suggest that addresses this in the manner you are describing?
☆ The closest to a formalized framework that I have seen is by Jeff McGinn in “Tail Risk Killers” on page 326 where he has a simple graphic with coarse suggestions on what to hold across
the bell curve.
I think the key is to find practical cutoffs on the tails… With simple counting of events (eg days with more than x% volatility, or weeks with total losses of more than x%, etc…) and
identifying likelihoods of events (eg the volatility issues occur most frequently when the market is below its 200 SMA), we can have an “observed” bell curve (OBC), not a theoretical one
dependent on Central Limits… and by regime.
I don’t have much more than this notion. But it feels right. Kind of like how auto engineers use seat cushions nearer the passenger (normal curve?) and shocks, struts and bumpers
closer to the exterior (fat tails?). Each is optimized for its part of the environment.
☆ Thank you very much for the suggestion cm, I understand the analogy and it makes sense. I will look into the McGinn book.
2. Use GMD= Gini’s Mean Difference not the Gini Coefficient . Look at the papers by Shalit and Yitzhaki. The first J.Finance 1984. Most papers are available at http://www.bgu.ac.il/~shalit | {"url":"http://cssanalytics.wordpress.com/2012/02/25/mean-gini-optimization/","timestamp":"2014-04-21T07:40:02Z","content_type":null,"content_length":"66791","record_id":"<urn:uuid:3cf93d0b-e918-410a-8805-f62a73094fa6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal Distribution
February 8th 2010, 01:04 PM #1
Normal Distribution
Given that $X,Y$ are independent normally distributed variables, each $N(1,\sigma^2)$ with $\sigma = \frac{1}{2}$,
how do I calculate $P(X+Y\leq t)$?
I'm not quite sure how to calculate this since the cdf of the normal distribution doesn't have a closed form. And approximating with an error-function seems quite a bother. Any hints?
Let Z=X+Y, then $Z\sim N(E(Z), V(Z))$
you want $P(Z\le t)=\int_{-\infty}^t f_Z(z)dz$
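Filling in the hint: for independent normals the means and variances add, so

```latex
E(Z) = E(X) + E(Y) = 1 + 1 = 2, \qquad
V(Z) = V(X) + V(Y) = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2},
```

hence $Z \sim N(2, \tfrac{1}{2})$ and $P(X+Y \le t) = \Phi\left(\frac{t-2}{\sqrt{1/2}}\right)$, where $\Phi$ is the standard normal cdf. No closed form beyond $\Phi$ (i.e., the error function) is available, and none is needed.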
February 8th 2010, 02:05 PM #2 | {"url":"http://mathhelpforum.com/advanced-statistics/127828-normal-distribution.html","timestamp":"2014-04-16T05:32:52Z","content_type":null,"content_length":"34265","record_id":"<urn:uuid:7be0f448-b10a-4855-bf72-2accf4fe20cc>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
October 2
Simpler, Easier!
In a recent paper,
Simply Easy! (An Implementation of a Dependently Typed Lambda Calculus)
, the authors argue that type checking a dependently typed language is easy. I agree whole-heartedly, it doesn't have to be difficult at all. But I don't think the paper presents the easiest way to
do it. So here is my take on how to write a simple dependent type checker. (There's nothing new here, and the authors of the paper are undoubtedly familiar with all of it.)
I'll start by implementing the untyped lambda calculus. It's a very simple language with just three constructs: variables, applications, and lambda expressions, i.e., e ::= x | e e | λx.e.
For example, (λx.λy.x) (λz.z).
In Haskell I'll use strings to represent variable names; it's simple and easy.
type Sym = String
data Expr
= Var Sym
| App Expr Expr
| Lam Sym Expr
deriving (Eq, Read, Show)
The example above is represented by
App (Lam "x" $ Lam "y" $ Var "x") (Lam "z" $ Var "z")
What do we want to do with the Expr type? Well, evaluating an expression seems like the thing we need. Now, there are many degrees of evaluation to choose from: Weak Head Normal Form, Head Normal Form, Normal Form, etc. They
differ in exactly where there might be reducible expressions lingering. To evaluate a lambda expression the most important step is β-reduction. A β-reduction step can be performed anywhere a function
meets an argument, i.e., an application where the function is in λ form, a.k.a. a redex.
(λx.e)a reduces to e[x:=a]
Where the e[x:=a] notation means that all (free) occurrences of the variable x in the expression e are replaced by a. The example above has one redex, and doing a β step yields λy.λz.z. The other kind of reduction we will make use of is α-substitution, which is simply renaming a bound variable, e.g., λx.x can be changed to λy.y. Let's start with an easy evaluation strategy, normal order to WHNF. In WHNF we only need to make sure that there's no redex along the "spine" of the expression, i.e., starting from the root
and following the left branch in applications. Doing normal order reduction means that we do not
evaluate anything inside the argument of a β redex before doing the reduction. It's sometimes called lazy evaluation, but I prefer to use that term for an implementation strategy for normal order
reduction. We implement normal order WHNF by walking down the spine collecting arguments (i.e., the right branch of applications) until we reach a lambda or a variable. If we reach a variable we
already have WHNF so we just reconstitute the applications again. If we hit a lambda we get to the crux of evaluation. We need to perform a β-reduction, i.e., if we have
App (Lam v e) a
we need to replace all (free) occurrences of the variable v by the argument a inside the lambda body e. That's what the subst function does.
whnf :: Expr -> Expr
whnf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s e) (a:as) = spine (subst s a e) as
        spine f as = foldl App f as
The subst function is the only tricky part, so let's relax by first defining something easy, namely getting the free variables from an expression. The free variables are those variables that occur in an
expression, but are not bound in it. We simply collect the variables in a set (using a list as a set here), removing anything bound.
import Data.List (union, (\\))

freeVars :: Expr -> [Sym]
freeVars (Var s) = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i e) = freeVars e \\ [i]
Back to substitution.
subst :: Sym -> Expr -> Expr -> Expr
subst v x b = sub b
  where sub e@(Var i) = if i == v then x else e
        sub (App f a) = App (sub f) (sub a)
        sub (Lam i e) =
          if v == i then
            Lam i e
          else if i `elem` fvx then
            let i' = cloneSym e i
                e' = substVar i i' e
            in Lam i' (sub e')
          else
            Lam i (sub e)
        fvx = freeVars x
        cloneSym e i = loop i
          where loop i' = if i' `elem` vars then loop (i' ++ "'") else i'
                vars = fvx ++ freeVars e

substVar :: Sym -> Sym -> Expr -> Expr
substVar s s' e = subst s (Var s') e
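To see the capture-avoidance at work before wading through the explanation, here is a tiny self-contained run of the same algorithm (the helper name clone is mine; fresh names follow the tack-on-a-prime scheme used above):

```haskell
-- A tiny self-contained check of capture-avoiding substitution:
-- substituting y for x under a binder for y must rename the binder.
module Main where

import Data.List (union, (\\))

type Sym = String
data Expr = Var Sym | App Expr Expr | Lam Sym Expr
  deriving (Eq, Show)

freeVars :: Expr -> [Sym]
freeVars (Var s)   = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i e) = freeVars e \\ [i]

subst :: Sym -> Expr -> Expr -> Expr
subst v x = sub
  where
    sub e@(Var i) = if i == v then x else e
    sub (App f a) = App (sub f) (sub a)
    sub (Lam i e)
      | v == i       = Lam i e
      | i `elem` fvx = let i' = clone e i
                       in Lam i' (sub (subst i (Var i') e))
      | otherwise    = Lam i (sub e)
    fvx = freeVars x
    clone e i = loop i
      where loop i' = if i' `elem` (fvx ++ freeVars e) then loop (i' ++ "'") else i'

main :: IO ()
main = do
  -- (λy.x)[x := y] must not capture y: the binder is renamed to y'.
  print (subst "x" (Var "y") (Lam "y" (Var "x")))
  -- No clash: (λz.x)[x := y] substitutes directly.
  print (subst "x" (Var "y") (Lam "z" (Var "x")))
```

The first print yields Lam "y'" (Var "y"), i.e., λy'.y rather than the wrong λy.y; the second yields Lam "z" (Var "y").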
The subst v x b function will replace all free occurrences of v by x in b, i.e., b[v:=x]. The
Var case is easy. If it's the variable we are replacing then replace, else leave it alone. The
App case is also easy, just recurse in both branches. The
Lam case has three alternatives. First, if the bound variable is the same as the one we are replacing then there can be no free occurrences inside it, so just return the lambda as is. Second, if the
lambda bound variable is among the free variables in x we have a problem (see below). Third case, just recurse in the body. So, what about the case when the lambda bound variable occurs free in x? Well, if we just blindly continue substitution that variable will no longer refer to the same thing; it will refer to the variable bound in the lambda. That's no good. For example, take the expression
(λx.λy.x) y: the β reduction gives λy'.y
(or similar), whereas doing the substitution blindly would give λy.y. Which is wrong! But it's easy to fix, just conjure up a variable, i',
that will not clash with anything (cloneSym
does that). How do we come up with a good variable? Well, we don't want to pick one that is free in the expression x,
because that would lead to the same problem again. Nor do we want to pick a variable that is free in e,
because that would accidentally bind something in e. So we take the original identifier and tack on "'" until it fulfills our requirements. (OK, efficiency aficionados are allowed to complain a little here, but this isn't that bad actually.) Once
we have a new variable we can do an α-conversion to rename the offending variable to something better. The substVar
function is a utility for when we want to replace one variable with another. Another useful thing to be able to do is to compare lambda expressions for equality. We already have syntactic equality derived with Eq,
but it is also very useful to be able to compare expressions modulo α-conversion. That is, we'd like λx.x
to compare equal to λy.y. Let's call that function alphaEq.
alphaEq :: Expr -> Expr -> Bool
alphaEq (Var v) (Var v') = v == v'
alphaEq (App f a) (App f' a') = alphaEq f f' && alphaEq a a'
alphaEq (Lam s e) (Lam s' e') = alphaEq e (substVar s' s e')
alphaEq _ _ = False
Variables and applications just proceed along the structure of the expression. When we hit a lambda the variables might be different, so we do an α-conversion of the second expression to make them
equal. As the final functions, we will do reduction to Normal Form (i.e., to a form where no redexes remain) and comparison of expressions via their normal forms.
nf :: Expr -> Expr
nf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s e) [] = Lam s (nf e)
        spine (Lam s e) (a:as) = spine (subst s a e) as
        spine f as = app f as
        app f as = foldl App f (map nf as)
betaEq :: Expr -> Expr -> Bool
betaEq e1 e2 = alphaEq (nf e1) (nf e2)
Computing the NF is similar to WHNF, but as we reconstruct expressions we make sure that all subexpressions are in NF as well. Note that both nf and betaEq
may fail to terminate, because not all expressions have a normal form. The canonical non-terminating example is
(λx.x x)(λx.x x)
which has one redex, and doing the reduction produces the same term again. But if an expression has a normal form then it is unique (the Church-Rosser theorem). Here are some sample lambda terms
(named for convenience):
zero ≡ λs.λz.z
one ≡ λs.λz.s z
two ≡ λs.λz.s (s z)
three ≡ λs.λz.s (s (s z))
plus ≡ λm.λn.λs.λz.m s (n s z)
Or, in Haskell
[z,s,m,n] = map (Var . (:[])) "zsmn"
app2 f x y = App (App f x) y
zero = Lam "s" $ Lam "z" z
one = Lam "s" $ Lam "z" $ App s z
two = Lam "s" $ Lam "z" $ App s $ App s z
three = Lam "s" $ Lam "z" $ App s $ App s $ App s z
plus = Lam "m" $ Lam "n" $ Lam "s" $ Lam "z" $ app2 m s (app2 n s z)
And now we can check that addition works,
betaEq (app2 plus one two) three
will produce True.
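The whole untyped development can be compressed into one self-contained, runnable sketch (the fresh-name loop is written with i' ++ "'" so that it terminates; otherwise this is the code above):

```haskell
-- Self-contained sketch of the untyped calculus: normal-order nf,
-- alpha-equality, and the Church-numeral addition check from the text.
module Main where

import Data.List (union, (\\))

type Sym = String
data Expr = Var Sym | App Expr Expr | Lam Sym Expr
  deriving (Eq, Read, Show)

freeVars :: Expr -> [Sym]
freeVars (Var s)   = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i e) = freeVars e \\ [i]

-- Capture-avoiding substitution b[v := x].
subst :: Sym -> Expr -> Expr -> Expr
subst v x = sub
  where
    sub e@(Var i) = if i == v then x else e
    sub (App f a) = App (sub f) (sub a)
    sub (Lam i e)
      | v == i       = Lam i e
      | i `elem` fvx = let i' = clone e i
                       in Lam i' (sub (subst i (Var i') e))
      | otherwise    = Lam i (sub e)
    fvx = freeVars x
    clone e i = loop i
      where loop i' = if i' `elem` (fvx ++ freeVars e) then loop (i' ++ "'") else i'

alphaEq :: Expr -> Expr -> Bool
alphaEq (Var v)   (Var v')    = v == v'
alphaEq (App f a) (App f' a') = alphaEq f f' && alphaEq a a'
alphaEq (Lam s e) (Lam s' e') = alphaEq e (subst s' (Var s) e')
alphaEq _ _ = False

-- Normal-order reduction to normal form.
nf :: Expr -> Expr
nf ee = spine ee []
  where
    spine (App f a) as     = spine f (a : as)
    spine (Lam s e) []     = Lam s (nf e)
    spine (Lam s e) (a:as) = spine (subst s a e) as
    spine f as             = foldl App f (map nf as)

betaEq :: Expr -> Expr -> Bool
betaEq e1 e2 = alphaEq (nf e1) (nf e2)

-- Church numerals and addition.
[z, s, m, n] = map (Var . (:[])) "zsmn"
app2 f x y = App (App f x) y
zero  = Lam "s" $ Lam "z" z
one   = Lam "s" $ Lam "z" $ App s z
two   = Lam "s" $ Lam "z" $ App s $ App s z
three = Lam "s" $ Lam "z" $ App s $ App s $ App s z
plus  = Lam "m" $ Lam "n" $ Lam "s" $ Lam "z" $ app2 m s (app2 n s z)

main :: IO ()
main = print (betaEq (app2 plus one two) three)  -- prints True
```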
To do type checking we need to introduce types. A very simple system is the simply typed lambda calculus. It has one (or more) base type (think of it as Bool or Int if you want an example) and
function types. In Haskell terms:
data Type = Base | Arrow Type Type
deriving (Eq, Read, Show)
The expressions themselves will have an explicit type on the bound variable in a lambda expression. So we now have e ::= x | e e | λx:t.e.
For example, λx:Base.x.
The Haskell type for expressions is
data Expr
= Var Sym
| App Expr Expr
| Lam Sym Type Expr
deriving (Eq, Read, Show)
The only difference is the Type in the Lam constructor. All the functions we had for the untyped lambda calculus can be trivially extended to the simply typed one by simply carrying the type along. So finally, time for some type checking. The type
checker will take an expression and return the type of the expression. The type checker will also need the types of all free variables in the expression to be able to do this. Otherwise, what type
would it assign to, say, the free variable x? To represent the types of the free variables we use an environment, which is simply a list of variables and their types.
newtype Env = Env [(Sym, Type)] deriving (Show)
initalEnv :: Env
initalEnv = Env []
extend :: Sym -> Type -> Env -> Env
extend s t (Env r) = Env ((s, t) : r)
Type checking can go wrong; there can be type errors. To cater for this the type checker will be written in monadic style where the monad is simply an error (exception) monad. The error messages are
strings, and the monad itself is the Either ErrorMsg type. So TC
is the type checking monad.
import Control.Monad (when)
import Control.Monad.Except (throwError)  -- from mtl (Control.Monad.Error at the time of writing)

type ErrorMsg = String
type TC a = Either ErrorMsg a
We can now write variable lookup.
findVar :: Env -> Sym -> TC Type
findVar (Env r) s =
case lookup s r of
Just t -> return t
Nothing -> throwError $ "Cannot find variable " ++ s
It simply looks up the variable and returns the type. If not found it throws an error (with throwError). And then the type checker itself.
tCheck :: Env -> Expr -> TC Type
tCheck r (Var s) =
  findVar r s
tCheck r (App f a) = do
  tf <- tCheck r f
  case tf of
    Arrow at rt -> do
      ta <- tCheck r a
      when (ta /= at) $ throwError "Bad function argument type"
      return rt
    _ -> throwError "Non-function in application"
tCheck r (Lam s t e) = do
  let r' = extend s t r
  te <- tCheck r' e
  return $ Arrow t te
For variables, just look up the type for it in the environment. For application, type check the function part and the argument part. The function should have function (arrow) type, and if it does the
type of the application is the return type of the function. Finally, for a lambda expression we extend the environment with the bound variable. We then check the body, and the type of the lambda
expression is a function type from the argument type to the type of the body. For convenience:
typeCheck :: Expr -> Type
typeCheck e =
case tCheck initalEnv e of
Left msg -> error ("Type error:\n" ++ msg)
Right t -> t
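A compressed, self-contained version of the simply typed checker can be run directly; this sketch inlines findVar and uses plain Left in place of throwError so that it needs nothing beyond base:

```haskell
-- Self-contained sketch of the simply typed checker described above,
-- with Left/Right standing in for throwError/return in the TC monad.
module Main where

type Sym = String
data Type = Base | Arrow Type Type deriving (Eq, Show)
data Expr = Var Sym | App Expr Expr | Lam Sym Type Expr deriving (Eq, Show)

type TC a = Either String a

tCheck :: [(Sym, Type)] -> Expr -> TC Type
tCheck r (Var s) =
  maybe (Left ("Cannot find variable " ++ s)) Right (lookup s r)
tCheck r (App f a) = do
  tf <- tCheck r f
  case tf of
    Arrow at rt -> do
      ta <- tCheck r a
      if ta == at then return rt else Left "Bad function argument type"
    _ -> Left "Non-function in application"
tCheck r (Lam s t e) = do
  te <- tCheck ((s, t) : r) e
  return (Arrow t te)

main :: IO ()
main = do
  -- The identity at Base has type Base -> Base.
  print (tCheck [] (Lam "x" Base (Var "x")))
  -- Applying the Base identity to a function is a type error.
  print (tCheck [] (App (Lam "x" Base (Var "x")) (Lam "y" Base (Var "y"))))
```

This prints Right (Arrow Base Base) followed by Left "Bad function argument type".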
Pretty easy sailing so far. The simply typed lambda calculus is a pain to use, though. Take something like the identity function, λx.x,
in the untyped world. What type should we give it? Well, that depends on how we intend to use it. Maybe Base→Base, maybe (Base→Base)→(Base→Base), maybe something else... So we can no longer have one identity function; we need one for each type. What a bummer! It's as bad as C. BTW, all (type correct) expressions in the simply typed lambda calculus have a normal
form (Tait 1967).
(Don't get me wrong, the polymorphic lambda calculus is a work of marvel.) So how can we fix the problem with one identity function for each type? We can add polymorphism! We can extend the
expression language so that we also pass types around; we add type abstraction and type application.
Λa:k.e is a type abstraction, i.e., a
is a type variable which we can use in type expressions inside e. To supply a type argument we have type application, e [t]. So the types we now have are functions, the base type, and type variables.
And what is the kind k
in the type abstraction? Well, now types have gotten so complicated that it is possible to construct types that make no sense, so we need a "type system" for the types. We call them kinds.
Defining all this in Haskell would be something like
data Expr
= Var Sym
| App Expr Expr
| Lam Sym Type Expr
| TLam Sym Kind Expr
| TApp Expr Type
deriving (Eq, Read, Show)
data Type
= Arrow Type Type
| Base
| TVar Sym
deriving (Eq, Read, Show)
data Kind
= KArrow Kind Kind
| Star
deriving (Eq, Read, Show)
But wait, there's an awful lot of duplication here. The structures on the three levels have a lot of similarities. (Oh, and we don't really need Base
anymore now that we have variables.) BTW, this system, called System Fω, is (a simplified version of) what GHC uses internally to represent Haskell code. It's a beautiful system, really. Oh, the identity function, well it would be Λa:*.λx:a.x. And using it: id [t] e, assuming e
has type t.
To simplify and (as often happens when you simplify) generalize the expressions above we are going to squish them all into one expression data type. So
Lam and TLam will join, as will App and TApp. But wait, there's nothing corresponding to Arrow. We need to add something. We could just add it as it is, but we won't. TADA, enter dependent types. Instead of the boring function type t→u
we will use a more exciting one, Πx:t.u. What does it mean? It means that the variable x
can occur in u. If it doesn't then it's simply the same as the old fashioned function type. If x
does occur it means that the type u
can depend on the value of the argument (x). In Haskell:
data Expr
= Var Sym
| App Expr Expr
| Lam Sym Type Expr
| Pi Sym Type Type
| Kind Kinds
deriving (Eq, Read, Show)
type Type = Expr
data Kinds = Star | Box deriving (Eq, Read, Show)
The new arrow type is called Pi. We will also need more than one kind, Star and Box.
It's pretty easy to extend the functions from the first part to handle this expression type. There's just a few more places to recurse. Here's the code again. Absolutely nothing subtle about it.
freeVars :: Expr -> [Sym]
freeVars (Var s) = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i t e) = freeVars t `union` (freeVars e \\ [i])
freeVars (Pi i k t) = freeVars k `union` (freeVars t \\ [i])
freeVars (Kind _) = []
subst :: Sym -> Expr -> Expr -> Expr
subst v x = sub
  where sub e@(Var i) = if i == v then x else e
        sub (App f a) = App (sub f) (sub a)
        sub (Lam i t e) = abstr Lam i t e
        sub (Pi i t e) = abstr Pi i t e
        sub (Kind k) = Kind k
        fvx = freeVars x
        cloneSym e i = loop i
          where loop i' = if i' `elem` vars then loop (i' ++ "'") else i'
                vars = fvx ++ freeVars e
        abstr con i t e =
          if v == i then
            con i (sub t) e
          else if i `elem` fvx then
            let i' = cloneSym e i
                e' = substVar i i' e
            in con i' (sub t) (sub e')
          else
            con i (sub t) (sub e)
To cut down on the code you could actually join the Lam and Pi
constructors, since they are treated identically in many cases. I've left them separate for clarity. The whnf
function extends in the natural way to the new type, and so does nf, but here it is anyway.
nf :: Expr -> Expr
nf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s t e) [] = Lam s (nf t) (nf e)
        spine (Lam s _ e) (a:as) = spine (subst s a e) as
        spine (Pi s k t) as = app (Pi s (nf k) (nf t)) as
        spine f as = app f as
        app f as = foldl App f (map nf as)
So, now for the meaty part, the type checking itself. The handling of the environment is just as before, so we'll just look at the different cases for the type checking.
tCheck :: Env -> Expr -> TC Type
tCheck r (Var s) =
  findVar r s
Just as before.
tCheck r (App f a) = do
  tf <- tCheckRed r f
  case tf of
    Pi x at rt -> do
      ta <- tCheck r a
      when (not (betaEq ta at)) $ throwError "Bad function argument type"
      return $ subst x a rt
    _ -> throwError "Non-function in application"
This is almost as before, but the arrow type is called Pi
now. The key thing here — and this is really where the fact that we are doing dependent types shows up — is the return type. For the simply typed lambda calculus it was just rt, but now rt
can contain free occurrences of the variable x. Since we are returning rt, x
would no longer be in scope, so we substitute the value of the argument for it. This is the coolest part of the type checker. You've seen it. That's where it is. Since types can now be arbitrary
expressions we use betaEq
to compare them instead of ==.
tCheck r (Lam s t e) = do
  tCheck r t
  let r' = extend s t r
  te <- tCheck r' e
  let lt = Pi s t te
  tCheck r lt
  return lt
The lambda case is similar to before, but we return a Pi
now, so we need to include the variable name. Furthermore, to avoid nonsense like an ill-kinded Pi type,
we make sure that the type we want to return actually has a valid kind itself. (The first call to tCheck
is to ensure the type we're putting into the environment is valid; I'm sure there's a more elegant way to do this, but I can't remember what it is right now.)
tCheck _ (Kind Star) = return $ Kind Box
tCheck _ (Kind Box) = throwError "Found a Box"
Everything has a type, so what's the type of * (
Kind Star
)? Well, it's a [] (
Kind Box
) (excuse the ugly box, I can't find the HTML version of a box). And what's the type of []? Well, you could keep going, but instead we'll stop right here. The idea is that the source language in which we'll write our terms will not allow the box to be written, so it should never occur.
tCheck r (Pi x a b) = do
  s <- tCheckRed r a
  let r' = extend x a r
  t <- tCheckRed r' b
  when ((s, t) `notElem` allowedKinds) $ throwError "Bad abstraction"
  return t
How do we check the type of the (dependent) function type? Well, we check the type of the thing to the left of the arrow, extend the environment, and then check the thing to the right. So now, what can
(s, t)
be? Well, a and b
should be types (or maybe kinds). So their types should be kinds. This leads to the following definition:
allowedKinds :: [(Type, Type)]
allowedKinds = [(Kind Star, Kind Star), (Kind Star, Kind Box), (Kind Box, Kind Star), (Kind Box, Kind Box)]
I.e., we allow (*,*), (*,[]), ([],*), and ([],[]). What does it all mean? Here's the beauty of the lambda cube. By varying what we allow we can change what system we type check.
(*,*): values can depend on values. Just this gives the simply typed λ calculus.
([],[]): types can depend on types.
([],*): values can depend on types. Include all these three and you get Fω.
(*,[]): types can depend on values. Include this one to get dependent types.
With all four combination allowed you get Calculus of Construction (CoC). If you always include (*,*), but make a choice of the other 3 you get 8 choices; these are the corners of the lambda cube.
All of these system have been studied. BTW, all the systems in the lambda cube have the property that a well typed expression has a normal form. (Well, the proof of this is so complicated for some of
these systems that some people kinda doubt it.)
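To see the dependent checker go end to end, here is a compressed self-contained sketch of the pieces above (it uses nf where the text uses tCheckRed, and plain Left in place of throwError); it types the polymorphic identity Λa:*.λx:a.x, written here as a Lam whose annotation is Kind Star:

```haskell
-- Compressed self-contained sketch of the dependent checker: just
-- enough to type the polymorphic identity function.
module Main where

import Data.List (union, (\\))

type Sym = String
data Kinds = Star | Box deriving (Eq, Show)
data Expr = Var Sym | App Expr Expr | Lam Sym Expr Expr
          | Pi Sym Expr Expr | Kind Kinds
  deriving (Eq, Show)

freeVars :: Expr -> [Sym]
freeVars (Var s)     = [s]
freeVars (App f a)   = freeVars f `union` freeVars a
freeVars (Lam i t e) = freeVars t `union` (freeVars e \\ [i])
freeVars (Pi i t e)  = freeVars t `union` (freeVars e \\ [i])
freeVars (Kind _)    = []

subst :: Sym -> Expr -> Expr -> Expr
subst v x = sub
  where
    sub e@(Var i)   = if i == v then x else e
    sub (App f a)   = App (sub f) (sub a)
    sub (Lam i t e) = abstr Lam i t e
    sub (Pi i t e)  = abstr Pi i t e
    sub (Kind k)    = Kind k
    fvx = freeVars x
    abstr con i t e
      | v == i       = con i (sub t) e
      | i `elem` fvx = let i' = clone e i
                       in con i' (sub t) (sub (subst i (Var i') e))
      | otherwise    = con i (sub t) (sub e)
    clone e i = loop i
      where loop i' = if i' `elem` (fvx ++ freeVars e) then loop (i' ++ "'") else i'

nf :: Expr -> Expr
nf ee = spine ee []
  where
    spine (App f a) as       = spine f (a : as)
    spine (Lam s t e) []     = Lam s (nf t) (nf e)
    spine (Lam s _ e) (a:as) = spine (subst s a e) as
    spine (Pi s k t) as      = foldl App (Pi s (nf k) (nf t)) (map nf as)
    spine f as               = foldl App f (map nf as)

alphaEq :: Expr -> Expr -> Bool
alphaEq (Var v) (Var v')           = v == v'
alphaEq (App f a) (App f' a')      = alphaEq f f' && alphaEq a a'
alphaEq (Lam s t e) (Lam s' t' e') = alphaEq t t' && alphaEq e (subst s' (Var s) e')
alphaEq (Pi s t e) (Pi s' t' e')   = alphaEq t t' && alphaEq e (subst s' (Var s) e')
alphaEq (Kind k) (Kind k')         = k == k'
alphaEq _ _ = False

betaEq :: Expr -> Expr -> Bool
betaEq e1 e2 = alphaEq (nf e1) (nf e2)

tCheck :: [(Sym, Expr)] -> Expr -> Either String Expr
tCheck r (Var s) = maybe (Left ("unbound " ++ s)) Right (lookup s r)
tCheck r (App f a) = do
  tf <- fmap nf (tCheck r f)
  case tf of
    Pi x at rt -> do
      ta <- tCheck r a
      if betaEq ta at then return (subst x a rt) else Left "bad argument"
    _ -> Left "non-function"
tCheck r (Lam s t e) = do
  _  <- tCheck r t
  te <- tCheck ((s, t) : r) e
  let lt = Pi s t te
  _  <- tCheck r lt
  return lt
tCheck r (Pi x a b) = do
  s <- fmap nf (tCheck r a)
  t <- fmap nf (tCheck ((x, a) : r) b)
  if (s, t) `elem` [(ks, kt) | ks <- [Kind Star, Kind Box], kt <- [Kind Star, Kind Box]]
    then return t else Left "bad abstraction"
tCheck _ (Kind Star) = return (Kind Box)
tCheck _ (Kind Box)  = Left "found a Box"

main :: IO ()
main = print (tCheck [] (Lam "a" (Kind Star) (Lam "x" (Var "a") (Var "x"))))
```

Running it prints Right (Pi "a" (Kind Star) (Pi "x" (Var "a") (Var "a"))), i.e., the dependent rendering of ∀a:*. a→a.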
Here the syntax a→b will be used as a shorthand for Π_:a.b, where "_" is some new variable not used in b.
Identity Pairs
Pair ≡ λa:*.λb:*.(Πc:*.((a→b→c)→c))
pair ≡ λa:*.λb:*.λx:a.λy:b.λc:*.λf:(a→b→c).f x y
split ≡ λa:*.λb:*.λr:*.λf:(a→b→r).λp:(Pair a b).p r f
fst ≡ λa:*.λb:*.λp:(Pair a b).split a b a (λx:a.λy:b.x) p
snd ≡ λa:*.λb:*.λp:(Pair a b).split a b b (λx:a.λy:b.y) p
My fingers are numb from all these Greek characters, so I'll continue with examples another time. And, of course, a parser and pretty printer.
Labels: Dependent types, Haskell, Lambda calculus | {"url":"http://augustss.blogspot.com/2007_10_01_archive.html","timestamp":"2014-04-20T21:30:54Z","content_type":null,"content_length":"40689","record_id":"<urn:uuid:318f7470-46d8-41d0-979e-1fafa3ed0083>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rumford, RI Prealgebra Tutor
Find a Rumford, RI Prealgebra Tutor
...I also teach AP Chemistry, so I am very comfortable helping students taking that course. Not only am I familiar with the content, but I have acquired many helpful tricks over the years that I
can pass on to help students study for the AP exam as well as the SAT II Chemistry. In college I studied chemical engineering, so I have taken advanced math courses.
9 Subjects: including prealgebra, chemistry, algebra 1, calculus
...My teaching philosophy is simply that every student can learn, achieve and succeed in mathematics if they have the correct, positive attitude and are willing to work hard. I am passionate
about teaching mathematics and enjoy seeing students develop excellent study and learning abilities over tim...
13 Subjects: including prealgebra, physics, calculus, geometry
...It has been a while since I have looked at a calculus book, but I think that it would come back fairly easily with materials in hand. I can help with test prep in math subjects for SAT, ACT
and AP. I am also available for some college math subjects.
17 Subjects: including prealgebra, calculus, geometry, statistics
...In addition, I have taught one year in the sixth grade, and coach the middle school cross country team. I have a strong background in modifying assignments to fit the needs of my students. I
also have a wide variety of strategies to be used as study aids.
33 Subjects: including prealgebra, English, reading, algebra 2
...Liz holds a B.A. in Economics & Philosophy from Boston College and an M.A. in Philosophy from the University of Rhode Island. She has worked for the Providence Public School Department as a
Middle and High School Substitute teacher, with long term placements in math. Currently, Liz is pursuing a...
7 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Related Rumford, RI Tutors
Rumford, RI Accounting Tutors
Rumford, RI ACT Tutors
Rumford, RI Algebra Tutors
Rumford, RI Algebra 2 Tutors
Rumford, RI Calculus Tutors
Rumford, RI Geometry Tutors
Rumford, RI Math Tutors
Rumford, RI Prealgebra Tutors
Rumford, RI Precalculus Tutors
Rumford, RI SAT Tutors
Rumford, RI SAT Math Tutors
Rumford, RI Science Tutors
Rumford, RI Statistics Tutors
Rumford, RI Trigonometry Tutors | {"url":"http://www.purplemath.com/Rumford_RI_prealgebra_tutors.php","timestamp":"2014-04-20T08:38:25Z","content_type":null,"content_length":"24116","record_id":"<urn:uuid:539bb111-6c66-4a33-980d-237e57ce7be6>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mount Pleasant, WI Prealgebra Tutor
Find a Mount Pleasant, WI Prealgebra Tutor
...While I was a student at Northwestern University, I worked as a volunteer French teacher for students in Kindergarten through third grade. I planned my own lessons and managed a mixed age
group class. At Mark Sheridan Math & Science Academy, I tutored middle school students in math, science, English, and other subjects in an after-school program.
22 Subjects: including prealgebra, reading, English, writing
...The ultimate goal in passing these exams is a certificate that is equivalent to a high school diploma. A GED certificate can be useful for gaining admission to college, for obtaining certain
vocational licenses, or for finding employment in the many types of jobs that require a high school diplo...
17 Subjects: including prealgebra, reading, algebra 1, ASVAB
...In my work as a college professor, I teach a lot of first- and second-year students, helping them make successful transitions into college and putting them on a path to graduation. As such, I
am intimately familiar with the knowledge and skills that K-12 students need to develop in order for the...
17 Subjects: including prealgebra, reading, English, writing
...I have a minor in English Literature. I have led multiple teams/departments within a Fortune 100 company, mentoring and training individuals. I have taught MBA level courses at a local
university for one semester.
21 Subjects: including prealgebra, English, reading, accounting
...Also teens usually can sense whether you are genuine or not. I've been able to identify with the students I have had and that makes for a good working relationship which leads to success in
the classroom. I would really like to help your child succeed in his or her classes.
9 Subjects: including prealgebra, geometry, algebra 1, world history
Related Mount Pleasant, WI Tutors
Mount Pleasant, WI Accounting Tutors
Mount Pleasant, WI ACT Tutors
Mount Pleasant, WI Algebra Tutors
Mount Pleasant, WI Algebra 2 Tutors
Mount Pleasant, WI Calculus Tutors
Mount Pleasant, WI Geometry Tutors
Mount Pleasant, WI Math Tutors
Mount Pleasant, WI Prealgebra Tutors
Mount Pleasant, WI Precalculus Tutors
Mount Pleasant, WI SAT Tutors
Mount Pleasant, WI SAT Math Tutors
Mount Pleasant, WI Science Tutors
Mount Pleasant, WI Statistics Tutors
Mount Pleasant, WI Trigonometry Tutors | {"url":"http://www.purplemath.com/Mount_Pleasant_WI_Prealgebra_tutors.php","timestamp":"2014-04-17T20:04:51Z","content_type":null,"content_length":"24410","record_id":"<urn:uuid:28043f98-4fef-4187-b639-80578210a4f4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
The moduli 3-stack of the C-field
An article of ours:
The higher gauge field in 11-dimensional supergravity – the C-field – is constrained by quantum effects to be a cocycle in some twisted version of ordinary differential cohomology. We argue that it
should indeed be a cocycle in a certain twisted nonabelian differential cohomology. We give a simple and natural characterization of the full smooth moduli 3-stack of configurations of the C-field,
the field of gravity and the (auxiliary) E8-Yang-Mills field.
We show that the truncation of this moduli 3-stack to a bare 1-groupoid of field configurations reproduces the differential integral Wu structures that Hopkins-Singer had shown (HS02) to formalize
Witten’s argument (Wi96) on the nature of the C-field. We give a similarly simple and natural characterization of the moduli 2-stack of boundary C-field configurations and show that it is equivalent
to the moduli 2-stack of anomaly free heterotic supergravity field configurations (SSS12). Finally we show how to naturally encode the Hořava-Witten boundary condition on the level of moduli
3-stacks, and refine it from a condition on 3-forms to a condition on full differential cocycles.
This may be read as a companion article to
Revised on December 18, 2013 11:18:07 by
Urs Schreiber | {"url":"http://www.ncatlab.org/schreiber/show/The+moduli+3-stack+of+the+C-field","timestamp":"2014-04-18T13:08:21Z","content_type":null,"content_length":"22654","record_id":"<urn:uuid:2c5736a1-bcd2-4955-874b-ac78a96940f1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Brattleboro, VT 05301
Math, All Levels, 12 Years of Teaching Experience
...love to help people learn math! I have ten years of experience teaching high school and college level math. I've taught everything from pre-algebra
to college Calculus and linear algebra.
I have an M.S. in math from Cornell University and a B.A. in math and education...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Spofford_algebra_tutors.aspx","timestamp":"2014-04-19T00:29:38Z","content_type":null,"content_length":"54031","record_id":"<urn:uuid:cdd90290-8ff1-4f3c-b3a0-d696f569c524>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chebyshev iteration method
From Encyclopedia of Mathematics
An iterative algorithm for finding a solution to a linear equation
that takes account of information about the inclusion of
The most well-developed Chebyshev iteration method is obtained when in (1),
in which for a given
The polynomials
The methods (2) and (3) can be optimized on the class of problems for which
Substituting (7) for
Thus, computing
To optimize (2) for a given
Then after
An important problem for small Iteration algorithm; and for
There exists a class of methods (2) — the stable infinitely repeated optimal Chebyshev iteration methods — that allows one to repeat the method (2), (5), (11) after
then once again one obtains a Chebyshev iteration method after
The theory of the Chebyshev iteration methods (2), (3) can be extended to partial eigen value problems. Generalizations also exist to a certain class of non-self-adjoint operators, when
One of the effective methods of speeding up the convergence of the iterations (2), (3) is a preliminary transformation of equation (1) to an equivalent equation of the form
and the application of the Chebyshev iteration method to this equation. The operator
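The article's formulas did not survive extraction here, but the first-degree method it describes is the Richardson iteration u_{k+1} = u_k - α_{k+1}(A u_k - f), with the parameters α_k taken from the zeros of a Chebyshev polynomial scaled to the spectral bounds [m, M] of the self-adjoint positive operator A. The following sketch uses the standard parameter formula α_k = 2 / ((M + m) - (M - m) cos(π(2k-1)/(2n))); all names (chebParams, chebIter, applyA) and the diagonal test operator are illustrative choices, not from the article.

```haskell
-- Sketch of one cycle of the first-degree Chebyshev (Richardson)
-- iteration for A u = f, with A symmetric positive definite and
-- spectrum contained in [m, M].
module Main where

type Vec = [Double]

-- Hypothetical test operator: a diagonal matrix given by its diagonal.
applyA :: Vec -> Vec -> Vec
applyA diag = zipWith (*) diag

sub :: Vec -> Vec -> Vec
sub = zipWith (-)

scale :: Double -> Vec -> Vec
scale c = map (c *)

-- Parameters for one cycle of length n, built from the Chebyshev zeros:
-- alpha_k = 2 / ((M + m) - (M - m) * cos(pi (2k - 1) / (2n))).
chebParams :: Double -> Double -> Int -> [Double]
chebParams m bigM n =
  [ 2 / ((bigM + m) - (bigM - m) * cos (pi * (2 * fromIntegral k - 1) / (2 * fromIntegral n)))
  | k <- [1 .. n] ]

-- u_{k+1} = u_k - alpha_{k+1} (A u_k - f), folded over one cycle.
chebIter :: (Vec -> Vec) -> Vec -> Vec -> [Double] -> Vec
chebIter a f = foldl step
  where step u alpha = u `sub` scale alpha (a u `sub` f)

main :: IO ()
main = do
  let diag = [1, 4]        -- eigenvalues, so m = 1 and M = 4
      f    = [2, 8]        -- exact solution is [2, 2]
      u0   = [0, 0]
  print (chebIter (applyA diag) f u0 (chebParams 1 4 8))
```

After one cycle of 8 parameters the error is damped by roughly 1/T_8((M+m)/(M-m)), so the iterate agrees with the exact solution [2, 2] to about three decimal places.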
In the Western literature the method (2), (5), (11) is known as the Richardson method of first degree [a2] or, more widely used, the Chebyshev semi-iterative method of first degree. The method goes
back to an early paper of L.F. Richardson, where the method (2), (5) was already proposed. However, Richardson did not identify the iteration parameters with the zeros of Chebyshev polynomials; for that step see [a1] and [a3].
The "stable infinitely repeated optimal Chebyshev iteration methods" outlined above are based on the identity
This formula has already been used in [a1] in the numerical determination of fundamental modes.
The method (3), (9) is known as Richardson's method or Chebyshev's semi-iterative method of second degree. It was suggested in [a9] and turns out to be completely stable; thus, at the cost of an
extra storage array the instability problems associated with the first-degree process are avoided.
As to the choice of the transformation operator, see [a8].
Introductions to the theory of Chebyshev semi-iterative methods are provided by [a2] and [a3]. An extensive analysis can be found in [a10], Chapt. 5 and in [a4]. For the case in which the spectrum of the
operator is not real, see [a5].
Instead of using minimax polynomials, one may consider integral measures for "minimizing" the error; this approach was suggested in [a9] and extended in [a11], Chapt. 5.
Iterative methods, as opposed to direct methods (cf. Direct method), only make sense when the matrix is sparse (cf. Sparse matrix). Moreover, their versatility depends on how large an error
can be tolerated in the approximate solution. When no information about the eigenstructure of the operator is available, one often turns to the conjugate gradient method (cf. Conjugate gradients, method of). Numerical algorithms based on the latter method combined with incomplete factorization have proven to be one of the
most efficient ways to solve linear problems up to now (1987).
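For contrast with the Chebyshev scheme, here is a minimal unpreconditioned conjugate gradient sketch, the kind of method the text recommends when no spectral information is available; the function name and tolerances are illustrative, not taken from the encyclopedia entry.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned conjugate gradients for symmetric positive-definite A.
    No spectral bounds are needed: step sizes come from inner products."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # residual small enough: stop
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of the direction
        rs = rs_new
    return x
```

In exact arithmetic the method terminates in at most n steps for an n x n system, which is why it pairs so well with incomplete factorization as a preconditioner.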
[a1] D.A. Flanders, G. Shortley, "Numerical determination of fundamental modes" J. Appl. Physics , 21 (1950) pp. 1326–1332
[a2] G.E. Forsythe, W.R. Wasow, "Finite difference methods for partial differential equations" , Wiley (1960)
[a3] G.H. Golub, C.F. van Loan, "Matrix computations" , North Oxford Acad. (1983)
[a4] G.H. Golub, R.S. Varga, "Chebyshev semi-iterative methods, successive over-relaxation methods and second-order Richardson iterative methods I, II" Num. Math. , 3 (1961) pp. 147–156; 157–168
[a5] T.A. Manteuffel, "The Tchebychev iteration for nonsymmetric linear systems" Num. Math. , 28 (1977) pp. 307–327
[a6a] L.F. Richardson, "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam"
Philos. Trans. Roy. Soc. London Ser. A , 210 (1910) pp. 307–357
[a6b] L.F. Richardson, "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam" Proc.
Roy. Soc. London Ser. A , 83 (1910) pp. 335–336
[a7] G. Shortley, "Use of Tchebycheff-polynomial operators in the numerical solution of boundary-value problems" J. Appl. Physics , 24 (1953) pp. 392–396
[a8] J.W. Sheldon, "On the numerical solution of elliptic difference equations" Math. Tables Aids Comp. , 9 (1955) pp. 101–112
[a9] E.L. Stiefel, "Kernel polynomials in linear algebra and their numerical applications" , Appl. Math. Ser. , 49 , Nat. Bur. Standards (1958)
[a10] R.S. Varga, "Matrix iterative analysis" , Prentice-Hall (1962)
[a11] E.L. Wachspress, "Iterative solution of elliptic systems, and applications to the neutron diffusion equations of nuclear physics" , Prentice-Hall (1966)
How to Cite This Entry:
Chebyshev iteration method. V.I. Lebedev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Chebyshev_iteration_method&oldid=13255
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
Free Math Essays and Papers
Advanced Math Solutions (878 words, 2.5 pages, Better Essays) - ... Solution: The line passes through the points (5, -5) and (4, 1). We know that the equation of a line is y = mx + c, where m = slope and c = y-intercept. To find the equation of the line, find the slope m and the y-intercept. With (x1, y1) = (5, -5) and (x2, y2) = (4, 1), slope m = (y2 - y1)/(x2 - x1) = (1 - (-5))/(4 - 5) = 6/(-1) = -6. Substituting m = -6 into y = mx + c gives y = -6x + c; to find the y-intercept, substitute either given point into the equation.... [tags: Math]
Use of Math in Auto Racing (1265 words, 3.6 pages, Unrated Essays) - Mathematics is found everywhere in life and work, and auto racing is no exception. There are many applications of math in racing. The purpose of racing is to win, and in order to do that there must be a lot of math involved. If you don't use math, and use it correctly, then you will not win. Mathematics is involved in racing in two ways: the car setup, and scoring and measurements. The car setup involves tire pressure, downforce, wedge, aerodynamic drag, camber, track bar and valance. The scoring system also uses math.... [tags: mathematics math car racing]
The Fencing Problem - Math Coursework (1134 words, 3.2 pages, FREE Essays) - The task: A farmer has exactly 1000m of fencing; with it she wishes to fence off a level area of land. She is not concerned about the shape of the plot, but it must have a perimeter of 1000m. What she does wish to do is to fence off the plot of land which contains the maximum area. Investigate the shape/s of the plot of land that have the maximum area. Solution: Firstly I will look at 3 common shapes. A regular triangle for this task will have the following area: 1/2 b x h; 1000m / 3 = 333.33; 333.33 / 2 = 166.66; 333.33² - 166.66² = 83331.11; square root of 83331.11 = 288.67; 288.67 x 166.66 = 48112.52 m². A regular square for this task will have the following area: each side = 250m; 250m x 250m = 62500 m². A regular circle with a circumference of 1000m would give an area of: Pi x 2 x r = circumference, so r = circumference / (Pi x 2), and Area = Pi x r² = Pi x (1000m / (Pi x 2))² = 79577.45 m². I predict that for regular shapes, the more sides the shape has, the higher the area is.... [tags: Math Coursework Mathematics]
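The triangle, square and circle figures quoted in the fencing-problem previews can be checked with the standard formula for a regular n-gon of perimeter P: area = P² / (4n tan(π/n)). The short script below is an illustrative check, not part of any of the essays; it reproduces the roughly 48112, 62500 and 79577 m² values above.

```python
import math

def regular_polygon_area(perimeter, n_sides):
    """Area of a regular polygon with the given perimeter and number of sides."""
    return perimeter ** 2 / (4 * n_sides * math.tan(math.pi / n_sides))

P = 1000.0
print(regular_polygon_area(P, 3))   # equilateral triangle, about 48112.5
print(regular_polygon_area(P, 4))   # square, exactly 62500.0
print(P ** 2 / (4 * math.pi))       # circle: the n -> infinity limit, about 79577.5
```

The area is strictly increasing in the number of sides, which is exactly the prediction the essay makes.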
Introduction to Solve Math Problems Deductive Reasoning (508 words, 1.5 pages, Better Essays) - ... • Decompositional reasoning • Deductive reasoning • Exemplar reasoning • Inductive reasoning • Modal logic • Traditional logic • Pros-vs-cons reasoning • Set-based reasoning • Systemic reasoning. Example for Deductive Reasoning: 1. Determine whether (a × b) × c is always the same as (c × b) × a. Say yes or no, and explain your steps using deductive reasoning. Choices: A. Yes B. No. Correct Answer: A. Solution: Step 1: (a × b) × c = c × (a × b); here we are using the commutative property of multiplication. Step 2: = c × (b × a); similarly we used the commutative property of multiplication again.... [tags: Math]
Use of a Portfolio to Assess Students in Math and Science (2879 words, 8.2 pages, Better Essays, 5 sources cited) - For a young child, going off to school can be an intimidating experience. Thoughts of whether the other children will like them, if they will have enough money to buy an ice cream at lunch, or if they will have homework that night overwhelm their minds. However, a major part of schooling is testing, and many children freeze when they hear that word. Think about yourself in a testing situation, then imagine what it is like for a young child to feel this defeating anxiety.... [tags: Assessing Children Math Science]
Math Facts (1877 words, 5.4 pages, FREE Essays) - ... The push to master facts by a certain age and prior to moving on to more complex math is most controversial with parents and teachers of gifted students. These students are often thought to be bored by simple, basic math concepts. Although "some of the very highest areas of math do not require automaticity of basic math facts, they do require automaticity of the skills that fall somewhere in between them and single-digit addition, and those skills are very difficult to master and to automatize when the basic stuff isn't firmly in place" (Yermish, 2011).... [tags: Education, The Arithmetic Gap]
Math Manipulatives (1007 words, 2.9 pages, Unrated Essays) - A Positive Outlook on Math Manipulatives. Math manipulatives have been around for years, but are now becoming increasingly popular amongst educators. Math manipulatives include anything from buckets of pattern blocks, trays of tiles, and colored cubes to virtual manipulatives, or manipulatives colored and cut out by the students themselves. All of these materials can help assist in tangibly teaching children math concepts by pulling math off the page and into the hands of students. For a child to be verbally and physically taught a math concept allows them to think, reason, and solve problems with the teacher's guidance as well as on their own.... [tags: Mathematics]
Math Solutions (604 words, 1.7 pages, Better Essays) - ... In mathematics there are many chapters included, such as number systems, fractions, algebra, functions, trigonometry, integrals, calculus, matrices, vectors, geometry, graphs etc. We can understand how to solve the problems using formulas and some operations. Let us discuss some important problems below in different concepts. Example problem 1: Simplify the expression (5k² - 8k + 6) + (3k² + 5k - 4) - (2k² + 6k + 8). Solution: distribute the minus sign over the third parenthesis: = (5k² - 8k + 6) + (3k² + 5k - 4) + (-2k² - 6k - 8) = 5k² + 3k² - 2k² - 8k + 5k - 6k + 6 - 4 - 8 = (5k² + 3k² - 2k²) + (-8k + 5k - 6k) + (6 - 4 - 8) = 6k² - 9k - 6. Answer: 6k² - 9k - 6. Example problem 2: If A = {21, 22, 23, 24, 25}, B = {25, 26, 27} and C = {27, 28, 29}, prove the distributive law.... [tags: Mathematics]
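Simplifications like the one in this preview can be checked mechanically. The small helper below (illustrative, not from the essay) adds signed quadratics coefficient-wise and confirms the combined coefficients.

```python
def combine_quadratics(*signed_polys):
    """Sum signed quadratics.  Each argument is (sign, (a, b, c)) standing for
    sign * (a*k**2 + b*k + c); the result is the coefficient triple of the sum."""
    return tuple(sum(sign * coeffs[i] for sign, coeffs in signed_polys)
                 for i in range(3))

# (5k^2 - 8k + 6) + (3k^2 + 5k - 4) - (2k^2 + 6k + 8)
result = combine_quadratics((1, (5, -8, 6)), (1, (3, 5, -4)), (-1, (2, 6, 8)))
print(result)  # (6, -9, -6), i.e. 6k^2 - 9k - 6
```

Representing each polynomial as a plain coefficient tuple keeps the check dependency-free; a computer-algebra system would do the same job for higher degrees.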
Math and Music (451 words, 1.3 pages, Unrated Essays) - When you listen to a piece of music you usually don't think of math, but the two are interlinked, and music always involves math even though we don't always realize it. When musicians play music they are using mathematical formulas to play. There are formulas for making chords and scales, and a formula for the notes they play. Musical notation also involves math: you use time signatures while playing along to a piece of music, and these are basically just fractions; 3/4, 7/4, and 4/4 are all time signatures.... [tags: Papers]
Egyptian Math (1038 words, 3 pages, Better Essays, 5 works cited) - The use of organized mathematics in Egypt has been dated back to the third millennium BC. Egyptian mathematics was dominated by arithmetic, with an emphasis on measurement and calculation in geometry. With their vast knowledge of geometry, they were able to correctly calculate the areas of triangles, rectangles, and trapezoids and the volumes of figures such as bricks, cylinders, and pyramids. They were also able to build the Great Pyramid with extreme accuracy. Early surveyors found that the maximum error in fixing the length of the sides was only 0.63 of an inch, or less than 1/14000 of the total length.... [tags: History Mathematics Research Papers]
The History of Math (4777 words, 13.6 pages, Strong Essays) - Mathematics: the study of relationships among quantities, magnitudes, and properties, and of logical operations by which unknown quantities, magnitudes, and properties may be deduced. In the past, mathematics was regarded as the science of quantity, whether of magnitudes, as in geometry, or of numbers, as in arithmetic, or of the generalization of these two fields, as in algebra. Toward the middle of the 19th century, however, mathematics came to be regarded increasingly as the science of relations, or as the science that draws necessary conclusions.... [tags: Mathematics Education Logic Numbers Essays]
The History of Math (810 words, 2.3 pages, Better Essays) - The history of math has become an important study; from ancient to modern times it has been fundamental to advances in science, engineering, and philosophy. Mathematics started with counting. In Babylonia mathematics developed from 2000 BC. A place-value notation system had evolved over a lengthy time, with a number base of 60. Number problems were studied from at least 1700 BC. Systems of linear equations were studied in the context of solving number problems. The basics of mathematics were inherited and developed independently by the Greeks; the major Greek progress in mathematics was from 300 BC to 200 AD.... [tags: essays research papers]
Math Coursework - The Fencing Problem (2860 words, 8.2 pages, FREE Essays) - There is a need to make a fence that is 1000m long. The area inside the fence has to have the maximum area. I am investigating which shape would give this. Rectangles: I am going to start investigating different shape rectangles, all of which have a perimeter of 1000m. Below are 2 rectangles (not to scale) showing how different shapes with the same perimeter can have different areas. Here are some pure examples of what I have to accomplish with rectangles having perimeters of 1000 metres.... [tags: Math Coursework Mathematics]
Math Coursework - The Fencing Problem (909 words, 2.6 pages, FREE Essays) - A farmer has 1000m of fencing and wants to fence off a plot of level land. She is not concerned about the shape of the plot, but it must have a perimeter of 1000m. So it could be anything with a perimeter (or circumference) of 1000m. She wishes to fence off the plot of land with the polygon with the biggest area. To find this I will find whether irregular shapes are larger than regular ones, or vice versa. To do this I will find the area of irregular triangles and a regular triangle, and of irregular quadrilaterals and a regular square; this will prove whether irregular polygons are larger than regular polygons.... [tags: Math Coursework Mathematics]
Math Coursework - The Fencing Problem (1214 words, 3.5 pages, FREE Essays) - Aim: to investigate which geometrical enclosed shape would give the largest area when given a set perimeter. In the following shapes I will use a perimeter of 1000m. I will start with the simplest polygon, a triangle. Since in a triangle there are 3 variables, i.e. three sides which can be different, there is no way of linking all three together; by this I mean if one side is 200m then the other sides can be a range of things. I am going to fix a base and then draw numerous triangles off this base.... [tags: Math Coursework Mathematics]
Math Coursework - The Fencing Problem (657 words, 1.9 pages, FREE Essays) - Introduction: A farmer has exactly 1000 metres of fencing and wants to use it to fence a plot of level land. The farmer was not interested in any specific shape of fencing but demanded that the understated two criteria must be met: the perimeter remains fixed at 1000 metres, and it must fence the maximum area of land. Different shapes of fence with the same perimeter can cover different areas. The difficulty is finding out which shape would cover the maximum area of land using the fencing with a fixed perimeter.... [tags: Math Coursework Mathematics]
Math and Owning a Restaurant (788 words, 2.3 pages, Unrated Essays) - Math is an essential asset in the business world. Without mathematics, businesses wouldn't be able to operate effectively. In order to run a restaurant, math plays an important role in a lot of different areas. For instance, the items on the menu may change due to the way they sell. Bookkeeping and math allow you to figure out both which items are profitable and which items are selling. The business world revolves around math, from profit and loss statements, to graphs, to taxes. Everything in business requires mathematics.... [tags: essays research papers]
My Experience with Math (734 words, 2.1 pages, Better Essays) - This course had forced me to analyze the psychological effects one's negative thinking has in impacting the ability to embrace a situation that originally may be perceived as fear. My first obstacle with this course was to admit to myself that I had created my own fear of math. I had fully produced what I now view to be a huge challenge. The inevitable had finally arrived. I had postponed taking this math class for close to ten years. I was now, at age 29, sitting in a math class that I had avoided throughout my collegiate career.... [tags: essays research papers]
Gender Equity in Math and Science (2559 words, 7.3 pages, Powerful Essays, 7 sources cited) - From the research I have read, while there is disagreement on when and how much of a gender gap exists in math and science, there is definitely an equity issue that needs addressing. There seems to be an abundance of information about equity issues, and as a future teacher I feel that it is important to examine these issues. If gender equity issues exist in today's classrooms, why do they, and what can be done to help correct them? Everything I've read so far states that a gender gap exists in science, while opinions about math vary.... [tags: Essays Papers]
English vs. Math (636 words, 1.8 pages, Unrated Essays) - To most people English or Language Arts is a creative course and math is just a logical, you-get-it-or-you-don't class. My purpose in writing this paper is to change your mind. I believe that math is just as, or more, creative than English. I will demonstrate this through a couple of examples. First, we must understand what is behind the creative aspect in English. Most people consider English the 'creative' subject because of titles such as 'creative writing' and 'creative thinking', while in contrast there is no 'creative something' in math.... [tags: essays research papers]
Solving Math Solutions Manuals (619 words, 1.8 pages, Better Essays) - ... We can understand how to solve the problems using formulas and some operations. We can solve the math problems easily without using any electronic devices. Let us discuss some important problems below in different concepts. Example problem 1: Simplify the expression (3k² - 14k - 10) - (12k² + 8k - 3) + (7k² + 4k - 5). Solution: distribute the minus sign over the second parenthesis: = (3k² - 14k - 10) + (-12k² - 8k + 3) + (7k² + 4k - 5). Remove the parentheses: = 3k² - 14k - 10 - 12k² - 8k + 3 + 7k² + 4k - 5 = (3k² - 12k² + 7k²) + (-14k - 8k + 4k) + (-10 + 3 - 5) = (-2k²) + (-18k) + (-12) = -2k² - 18k - 12. Answer: -2k² - 18k - 12. Example problem 2: If A = {30, 40, 50, 60}, B = {50, 60, 70} and C = {70, 80, 90}, prove the distributive law.... [tags: Mathematics]
Math Changed My Life - College Admissions Essays (182 words, 0.5 pages, FREE Essays) - I enjoyed mathematics in grade school. When I started my high school mathematical studies with Calculus and Analytic Geometry, by George B. Thomas, a whole new vista opened to me. The mathematical concepts contained in this book combine with the principles of physics to form the basis for most engineering disciplines. There is a beauty to mathematical concepts presented in an orderly and intelligible way. This text challenged my imagination and my intellect.... [tags: College Admissions Essays]
Encouraging Girls in Math and Science (1584 words, 4.5 pages, Powerful Essays, 6 sources cited) - An ideal classroom in an elementary school would allow both boys and girls to learn fairly and equally, and also be encouraged to be involved in the classroom. The teacher would expect the same effort from the boys as from the girls. The teacher would implement a respectful atmosphere where the teacher as well as the students would respect one another. The reality is that girls quickly become discouraged from pursuing math- and/or science-related careers.... [tags: Essays Papers]
Math Research Paper (1462 words, 4.2 pages, Powerful Essays, 7 sources cited) - Since the 1980s, calculator use in the classroom has been a huge controversy among educators (Golden, 2000). It is becoming increasingly common to use calculators in the classroom on a regular basis. Some states allow students to use calculators on standardized tests and as part of the regular curriculum (Dion, 2001). Because we live in such a technologically changing world, hand-held calculators have been far surpassed and can be purchased for as low as $4.00 each. This low price, however, has not swayed the many people who believe calculators are not appropriate in the classroom.... [tags: Essays Papers]
Women in the Math World (1239 words, 3.5 pages, Unrated Essays; works cited not included) - Math is commonly known as the man's major. Many college math professors are men and the same goes for their students. "One study revealed that women accounted for 15% of students in computer science, 16% in electrical engineering. . . Gender splits in the faculty were similar" (Cukier). There are few women that have made an impact on the math society compared with the number of men. A person can ramble off names such as Isaac Newton, Albert Einstein, Pythagoras of Samos, and Jean-François Niceron.... [tags: Papers Females Mathematics Essays]
Exploring Philosophical Aspects of Math (1638 words, 4.7 pages, FREE Essays) - Abstract: This essay neither examines a mathematical equation, nor does it analyze a distinguished mathematician. This essay explores a few philosophical aspects of math. In particular, it covers the confounding subjects raised by Zeno of Elea. The math world has been disturbed, agitated, and even titillated by the mysteries of Zeno of Elea's paradoxes in questioning the laws of math and science. A paradox, defined by Webster's dictionary, is "a statement that seems contrary to common sense and yet is perhaps true." Taking his arguments at face value, they may seem very logical.... [tags: Papers]
The New Math of Gambling (511 words, 1.5 pages, Unrated Essays) - The article "The New Math of Gambling" in Discover Magazine, May 2000, shows the use of software, math and a few hours of time to beat the house when gambling. These life stories and the achievements the individuals have are truly remarkable and real. The article begins with Anthony Curtis, a blackjack conqueror. He is a regular gambler at Binion's Horseshoe tables in Las Vegas. He was once a rugby player turned publishing guru of the Huntington Press.... [tags: Papers]
Math Is the Language of the Universe (1218 words, 3.5 pages, Better Essays) - Mathematics, the language of the universe, is one of the largest fields of study in the world today. With the roots of the math tree beginning in simple mathematics such as one digit plus one digit, and one digit minus one digit, the tree of mathematics comes together in the more complex field of algebra to form the true base of calculations as the trunk. As we get higher, branches begin to form, creating more specialized forms of numerical comprehension and schools of mathematical thought. Some examples of these are the applications to chemistry, economics and computers.... [tags: essays research papers]
Math in Medieval Times (640 words, 1.8 pages, Better Essays) - Math in medieval times was evident at Stonehenge. Stonehenge and its purpose remain a mystery even now, more than 4,000 years after it was first constructed. It could have been a temple, an astronomical calendar, or a guide to the heavens. Despite the fact that we don't know its purpose for certain, Stonehenge acts as a prehistoric timepiece, allowing us to theorize what it would have been like during the Neolithic Period, and who could have built this ancient wonder. Stonehenge stands on the open land of the Salisbury Plain, two miles west of the town of Amesbury, Wiltshire, in Southern England.... [tags: Papers]
Math Lesson Plan (344 words, 1 page, FREE Essays) - Grade Level: 4. Time: 40 minutes. Subject: Math. Topic: Dividing and Multiplying to Find Equivalent Fractions. NY State Learning Standards: Mathematics, Science, and Technology Standard 1 (Analysis, Inquiry, and Design): students will use mathematical analysis and scientific inquiry to seek answers and develop solutions. Materials: mathematics textbooks (page 401), notebooks, pencils, different colored chalk. Objectives: Students will be able to name and write equivalent fractions by multiplying and dividing.... [tags: essays research papers]
Women in Science, Math, and Engineering (3310 words, 9.5 pages, Research Papers, 8 works cited) - The statistics can be somewhat startling: while women receive 56% of BA degrees in the United States, they receive only 37% of the Science, Mathematics, and Engineering (SME) bachelor degrees (Chang, 1). As scary as the statistics on women are, they only point to an even bigger problem among all SME majors. According to one study, there is a 40% decline in the number of undergraduate science majors between the first and senior year of college (Didon, 336).... [tags: Work Careers Papers]
The Role of the Proof in Math (2682 words, 7.7 pages, Research Papers, 6 works cited) - The notion of proof has long played a key role in the study of mathematics. It is in my opinion the role of proof that separates mathematics from the sciences and other fields of study. It is the existence of proofs that gives mathematicians the confidence that their work is credible and thus allows them to continue to build upon prior work without the need to second-guess what has previously been accomplished. Based upon this observation, it becomes natural to ask the questions pertaining to the use of proof in learning and understanding mathematics.... [tags: Mathematics Mathematical Papers]
Math Lesson Plan (493 words, 1.4 pages, FREE Essays) - Topic: Math (to investigate and explore multiplication facts). Objectives: In this activity, students will investigate and explore multiplication facts. The students will work in groups to devise a plan for making a multiplication matrix, construct a multiplication matrix, and reflect on the patterns they observe in the matrix. Material: for each group, 1 cm grid paper, full sheets of paper, glue, and scissors; for the class, 36" x 48" butcher paper. Preparation: make approximately 10 copies of 1 cm grid paper on colored paper for each group of students.... [tags: essays research papers]
Math Fencing Project (1125 words, 3.2 pages, FREE Essays) - I have to find the maximum area for a given perimeter (1000m) in this project. I am going to start by examining the rectangle because it is by far the easiest shape to work with and is used a lot in design (most things use rectangles for their basic design). To start with, which type of rectangle gives the best result: a regular square or an irregular oblong? I start by having 4 individual squares and compare the ratio of internal to exposed sides for each arrangement: for the square the ratio is (2 x 4)/(2 x 4) = 1, while for the oblong it is ((1 x 2) + (2 x 2))/((3 x 2) + (2 x 2)) = 0.6. This can be taken further by having more squares, to show that the more irregular a shape is, the less area it has for that given perimeter.... [tags: Papers]
The Shopkeeper's Theory of Math (1200 words, 3.4 pages, FREE Essays) - Introduction: I am going to investigate the shopkeeper's theory: "When the area of the base is the same as the area of the four sides, the volume of the tray will be a maximum." A net of a tray made from a piece of card measuring 18cm by 18cm is shown below. In order to investigate whether his theory is right or wrong, I will vary the size of the corner that is being cut off in the 18 x 18 card.... [tags: Papers]
Beyond Pythagoras Math Investigation (1011 words, 2.9 pages, FREE Essays) - Pythagoras' theorem states that in any right-angled triangle of sides 'a', 'b' and 'c' (a being the shortest side, c the hypotenuse): a² + b² = c². E.g. 1. 3² + 4² = 5²: 9 + 16 = 25. 2. 5² + 12² = 13²: 25 + 144 = 169. 3. 7² + 24² = 25²: 49 + 576 = 625. All the above examples use an odd number for 'a'. It can, however, work with an even number. E.g. 1. 10² + 24² = 26²: 100 + 576 = 676. N.B.... [tags: Papers]
Math Investigation of Painted Cubes - Introduction: I was given a brief to investigate the number of faces on a cube which measured 20 small cubes by 20 small cubes by 20 small cubes (20 x 20 x 20). To do this, I had to imagine that there was a very large cube which had had its outer surface painted red. When it was dry, the large cube was cut up into the smaller cubes, all 8000 of them. From there, I had to answer the question, 'How many of the small cubes will have no red faces, one red face, two red faces, and three red faces?' From this, I hope to find a formula to work out the number of different faces on a cube sized 'n x n x n'.... [tags: Papers] (2967 words, 8.5 pages, free)
Math Statistics Project - The main factor I am investigating is going to be weight. For the majority I aim to investigate the effect of weight on height. I am also going to look at the frequency of different weight groups among people. The height will be measured in cm. I will keep it continuous by not asking the people to place their heights into groups, but instead to enter their heights. This will be quantitative data. The weight will be measured the same way: I will keep it continuous by not asking the people to place their weights into groups, but instead to enter their weights.... [tags: Papers] (2534 words, 7.2 pages, free)
Math's Coursework - Statistics Planning Investigation: I have been given a survey of the boys of the Oratory School. The survey consists of each boy's pulse, height and weight. I have made two hypotheses: 1. The lower the pulse, the taller you are. 2. The taller you are, the higher your weight is. My reasoning behind hypothesis 1 is that a lower pulse normally means you are fit and sporty, and most sporty people are reasonably tall. My reason behind hypothesis 2 is that you are bigger and therefore should have a reasonable amount of weight compared to someone who isn't that tall.... [tags: Papers] (804 words, 2.3 pages, unrated)
Math Borders Investigation - The figure below shows a dark cross-shape that has been surrounded by white squares to create a bigger cross-shape; the bigger cross-shape consists of 25 small squares in total. The next cross-shape is always made by surrounding the previous cross-shape with small squares. Part 1: investigate how many squares would be needed to make any cross-shape built in this way. Part 2: extend the investigation to 3 dimensions.... [tags: Papers] (1602 words, 4.6 pages, free)
Math Hidden Faces Investigation - In this coursework I will be investigating the number of hidden faces in different cubes and cuboids. I will provide predictions to make sure I get the right results. After that I will provide diagrams of the cubes and explain how I found each result. In each section of a set of cubes I will provide the formulas that I find, and give evidence that the pattern carries on. The reason I am doing this investigation about cubes is to find the hidden faces and total faces, which are noted near the diagrams.... [tags: Papers] (1036 words, 3 pages, free)
Core Math Investigation - 1) When x is more than 0 and increases (e.g. from 5 to 6), y increases at a much faster rate and becomes very big. When x increases while it is less than 0 (e.g. from -10 to -9), y increases very slowly. 2) (i) The value of a affects the value of y proportionally. For example, we can compare the equations y = 1 + 2x and y = 5 + 2x. In the second equation, the value of a has been changed to 5. As we can see from the results in Tables 1 and 2, all the values of y for y = 5 + 2x are greater by 4, when x is the same.... [tags: Papers] (713 words, 2 pages, free)
Integrating Science and Math Into The Classroom - ... This unit also provided activities for the students to construct models, by having the students make clay apples to make fractions from the story, and by having the students construct a model of the apple life cycle. Differentiation Strategies: Tomlinson (2003) defines differentiated instruction as "when a teacher proactively plans varied approaches to what students need to learn, how they will learn it, and/or how they can express what they have learned in order to increase the likelihood that each student will learn as much as he or she can as efficiently as possible" (p.... [tags: Education] (1401 words, 4 pages, rated Better; 4 works cited)
"Writing to Learn" in a Math Classroom - ... Second, these activities allow students to clarify their learning and they engage students in the lessons (Vacca et al., 2011). Through these small writing activities, students can write freely, without much emphasis on spelling or grammar, but rather on their understanding of the subject matter in their own words. Furthermore, it allows students to investigate and make sense of their learning through writing. Lastly, "write to learn" activities give all students, specifically English Learners, practice in English writing skills and vocabulary in multiple contexts.... [tags: Education, mathematics] (891 words, 2.5 pages, rated Strong; 2 works cited)
Math - How the Renaissance had an effect on western Europe: The Renaissance was significant in the development of Western Europe, and its impact was immense. The Renaissance not only influenced the worlds of art, music, and literature, but also the worlds of politics, religion, and society. During the Renaissance, advancements were made in several areas of technology and in thought. The Renaissance was a key in the development of Western Civilization. "Renaissance" is a term that was coined in the 19th century to describe a period in which art and literature flourished in Europe, but there were so many significant changes during this time that the term came to mean all the developments of the period.... [tags: essays research papers fc] (1908 words, 5.5 pages, free; 3 sources cited)
Math - Why We Have Computers: Because computers can display and deal with text, they offer communication with human beings, communication that far surpasses the numeric display of the common desktop calculator. This communication is achieved through input/output, or I/O. I/O provides the basis for all the work the computer does. Computers work by dealing with data. You put data into the computer, and it spits something useful back out. For example, word processing: you input text and manipulate it by using word processing software.... [tags: essays research papers] (374 words, 1.1 pages, free)
Sex, Math and Science: Exploring the Gender Gap in Math and Science - ... They point to several ways in which one's identity may be threatened, including the fear of being discriminated against, of ratifying harmful stereotypes, or, worse, of being unappreciated and underrepresented in the STEM field (477). They conclude that although other social factors explain one's interest in a field, their research points to having a sense of likeness to the people within the same discipline as a key predictor of females' interest in that field (484). In the same way, Sapna Cheryan in "Understanding the Paradox in Math-Related Fields: Why Do Some Gender Gaps Remain While Others Do Not?" (2012) puts forward that despite the fact that women are excelling in math in the classroom, there is still a gap in the number of women in math-related fields.... [tags: Education] (1170 words, 3.3 pages, rated Powerful; 6 works cited)
Math strategies for special education students - ... The math instruction using the EAI method used a video anchor to enhance instruction. The use of technology in EAI provides students with learning disabilities access to a wide range of math tasks that previously were unattainable due to learning deficits (Maccini & Strickland, 2010). The video gave students a visual representation of the types of math problems they were working on. Students were allowed time to discuss how they might best solve each math problem. If they had difficulty understanding how to go about solving a problem, the teacher would intervene and provide more detailed instruction.... [tags: Education] (1158 words, 3.3 pages, rated Powerful; 6 works cited)
Male Superiority In Math: Fact or Fiction? - One true mystery of mathematics is the small number of female mathematicians. When most people think of mathematicians, they automatically assume that they are male. This leads to the idea that boys are mathematically superior to girls, which has long been a popular belief. Recent studies, however, may prove this to be wrong. The fact is that there are numerous female mathematicians who have made very important contributions to the mathematical world throughout history.... [tags: Argumentative Persuasive Papers] (1359 words, 3.9 pages, rated Strong; 3 works cited, 1 source cited)
My Math Teacher, Mrs. Ladd - When thinking back and remembering all of the teachers that I have had in the past, there is one in particular that comes to mind. Her name was Mrs. Ladd. She taught math at the junior high school. Mrs. Ladd was not the most popular, funniest, hardest, easiest, nicest, nor the meanest teacher. I remember her for some other reasons. When I think of Mrs. Ladd, I think about how hard she made me work. But I also think about how she made me challenge myself. Most of all, I remember how she influenced me.... [tags: Personal Narratives Mathematics Essays] (1867 words, 5.3 pages, rated Strong)
Math Perceptions of Taiwanese and American children - Article Critique: The objective of this article critique is to review and evaluate several empirical studies which have examined mathematics perception cross-culturally. The main study focusing on mathematics perception cross-culturally was done in 2004 by Dr. Yea-Ling Tsao. In this study, researchers found that Taiwanese students consistently score higher in cross-national studies of achievement than American students. Several other studies also support this finding.... [tags: essays research papers fc] (2328 words, 6.7 pages, rated Better; 3 sources cited)
Looking at Different Shapes - Math Problem - For this investigation, I will be looking at different shapes and the areas the different shapes give. The exact question is: 'A farmer has exactly 1000m of fencing and wants to fence off a plot of level land. She is not concerned about the shape of the plot, but it must have a perimeter of 1000m. She wishes to fence off the area of land which contains the maximum area. Investigate the shapes that could be used to fence in the maximum area using exactly 1000m of fencing each time.'... [tags: Papers] (1807 words, 5.2 pages, free)
Are Asians More Proficient in Math and Science - Minorities will increasingly surpass Whites in the labor force, which not only means they will significantly impact the trend of the economy, but also that they will become critical for recruiting in high-demand industries. Minorities have had better opportunities to prosper than in the past, which allowed the US to reach the eminent moment of electing its first African-American President and appointing its first Hispanic Supreme Court justice. Many minorities are increasingly educated and participating in business, science, and technical innovation, as well as enjoying middle- to high-income status.... [tags: American] (813 words, 2.3 pages, rated Strong)
Differences in Math Learning and Aptitude Between Boys and Girls - The topic of my research has been differences in math learning and aptitude between boys and girls. This topic was suggested to me by my mentor, Mike Millo, as it is of particular interest to him. Mr. Millo is an Algebra teacher at Ball High. Much has been made of gender differences in math by the popular media, and Mr. Millo felt that it would be interesting to examine this topic and explore the findings of educational researchers.... [tags: Papers] (1550 words, 4.4 pages, rated Better)
The Correlation Between Music and Math: A Neurobiology Perspective - I remember the first time I heard the statement "Did you know that listening to classical music enhances your mathematical abilities?" I was both intrigued and excited: intrigued because I did not understand how music and math, two seemingly unrelated subjects, could possibly affect each other; excited because I began to view classical music as some kind of magical potion that would transform my math skills from decent to extraordinary.... [tags: Biology Essays Research Papers] (1078 words, 3.1 pages, rated Strong; 4 sources cited)
What I’ve Learned About Math Operations - ... If a problem is 32 - 16, one of the three tens must be broken up into units in order to subtract; being able to see 32 as 3 tens and 2 ones and as 2 tens and 12 ones is what makes the problem doable. Place value and unitizing have much to do with a comprehension of base ten, but they are not singular to base ten. One could apply the idea of place value and unitizing to base five, for example. The number 185 in base ten would be written as 1220 in base five. The place values would then be 125, 25, 5, and 1 (instead of 1000, 100, 10, and 1), and 1220 in base five would be understood as one 125, two 25s, and two 5s.... [tags: Education, Mathematics] (1681 words, 4.8 pages, rated Powerful; 5 works cited)
Investigating the Relationship Between English and Math Scores of Students - Task: to investigate the relationship between the results in Maths and English of the students in Key Stage 4 at Mayfield High School. Plan: to investigate two questions related to the relationship between the results in the Key Stage 2 Maths and English SATs of students in KS4 at Mayfield High School: 1. Do students who do well in Maths also do well in English? 2. Do girls do better than boys in Maths and English? I am going to take a large sample, 60 (5-10% of the total) of the students in Mayfield High School.... [tags: Papers] (623 words, 1.8 pages, free)
United States vs. Japan in Math and Science - Over the years, tests have determined that the United States has not improved in math and science compared to Japan. Both countries have a different approach towards school, which might be the reason why American students are doing so poorly in math and science. American education should be compared to Japanese education so that both can learn from each other, because even though American scores are down, the US still has great ways to educate students. The differences they have are ability vs. effort, teaching techniques, and parenting.... [tags: American Education, Japanese Education] (862 words, 2.5 pages, rated Better)
Why is Gender Inequity an Issue in Math and Science Classrooms? - Gender inequity has been a growing subject in America's schools, especially in the fields of math and science. Gender is defined as sexual identity, especially in relation to society or culture (Lexico, 2001); inequity is defined as injustice or unfairness (Lexico, 2001). Gender inequity in our schools has been based on the findings that females aren't getting the attention and grades required for them to do well in math and science. Why is this, and what can we do about it?... (2197 words, 6.3 pages, free)
Secondary Math: Video - Variables and Patterns of Change - The Variables and Patterns of Change video (Annenberg Media, 2004) follows two teachers, Ms. Green and Ms. Novak, as they begin their school year teaching high school math. Throughout my paper, I plan to show the elements of a non-threatening learning environment as well as the importance of having one. Additionally, I will discuss the similarities and differences between the teachers' methods in the video. I will explain how the methods are effective and how I would expand on their class lessons.... [tags: Education] (884 words, 2.5 pages, rated Better)
Investigating the Relationship Between the Ability in Math and Science of Students - I would like to know whether there is a link between ability in Maths and in Science among the Year 7, 8 and 9 students at Mayfield High School. My initial thoughts are that there is a link between the two because Maths and Science share some of the same attributes: they both involve formulae, they both require logical ability, and they both use numbers. Furthermore, I think that someone with an interest in Maths will also have an interest in Science, and so will probably work hard at both.... [tags: Papers] (10223 words, 29.2 pages, free)
Special Education Students' Placement and Performance Outcomes on Math Assessments - ... Yang, Shaftel, Glasnapp, and Poggio (2005) reported that students across all disability categories are more alike than different when using general mathematical skills. Cole, Mills, Dale, and Jenkins (1991) had previously suggested that different instructional strategies were not necessary for students with aptitude differences, but that these strategies were needed for cognitive functioning deficits. A further consideration of these findings could be the lack of content knowledge special education teachers possess in the area of mathematics, which would make one wonder whether a resource room is an appropriate academic placement for special education students with the cognitive ability to be successful in a general education classroom, especially in the area of mathematics instruction. Statement of the Hypothesis: previous limited research has shown special education students perform at higher proficiency rates when instructed by a teacher who is highly qualified in a given subject area.... [tags: Special Education] (1519 words, 4.3 pages, rated Strong; 26 works cited)
Group Project Analysis: Creating a Board Game that Teaches Math and Vocabulary Skills - THE IDEA (Day One): creating a board game that teaches math and vocabulary skills. Similar to Monopoly, which uses a spinner, cards, and a game board with various squares, our game board will have squares color-coded to various subjects (blue for math, yellow for English/vocabulary, green for science, and red for Social Studies/History). The child would spin the spinner and, having a marker in the shape of a car, would move the number of squares shown on the spinner (for example, if you spin a three, you move three spaces forward).... [tags: elementary education, early education] (927 words, 2.6 pages, unrated)
Massage Therapy Reduces Anxiety and Enhances EEG Patterns of Alertness and Math Computations - ... Math computations were measured by giving the subjects 7 numbers before the massage and 7 different numbers after, and asking the subjects to add them together. The time it took to solve the question, and its accuracy, were recorded. The EEG patterns were among the most important variables in the study: the subjects put on a special cap, called the Lycra stretchable cap, used to measure delta, alpha, beta, and theta waves. To prevent incorrect readings, the subjects had to fill out two questionnaire forms.... [tags: Health] (1315 words, 3.8 pages, rated Better; 1 work cited)
The Great Pyramid at Giza: The Use of Advanced Math and Science Within its Design - ... Ultimately, without the technological advances of ramps, coupled with pulleys and other systems, this would have been an immensely difficult or impossible task, and would have taken many more workers and hours. As a result of these advances, the Egyptians created one of the most intriguing and interesting structures of the past. Another component involving mathematics and science is found less in the physical construction than within the design. When looking at the pyramid, its profile can be broken into two similar right triangles when split in half [Figure #3].... [tags: Architecture] (1338 words, 3.8 pages, rated Better; 6 works cited)
Statement of Purpose: My Interests in Math, Physics, Statistics and Computer Science - ... Accordingly, I received admiration from family and friends for this struggling process of adaptation. Day by day, I wrote programs, dealt with errors, and rewrote programs, over and over again. The process was dull and offered no glorious anecdotes. It was the time I spent working on the most fundamental problems, and the piles of books borrowed from the library, that constituted the foundation of my future progress. During the debugging and reviewing, I gradually adjusted to the logic of the machine, which enables me to handle different programming languages with more haste.... [tags: Educational Goals, application essays] (1012 words, 2.9 pages, rated Strong)
Math History - Mathematics starts with counting. It is not reasonable, however, to suggest that early counting was mathematics. Only when some record of the counting was kept and, therefore, some representation of numbers occurred can mathematics be said to have started. In Babylonia mathematics developed from 2000 BC. Earlier, a place-value notation number system had evolved over a lengthy period with a number base of 60. It allowed arbitrarily large numbers and fractions to be represented and so proved to be the foundation of more high-powered mathematical development.... [tags: essays research papers] (2043 words, 5.8 pages, rated Better)
Math in life - In the last few centuries, the number of people living on Earth has increased many times over. By the year 2000, there will be 10 times more people on Earth than there were 300 years ago. How can population grow so fast? Think of a family tree. At the top are 2 parents, and beneath them the children they had. Listed beneath those children are the children they had, and so on, down through each generation. As long as the family members continue to reproduce, the family tree continues to increase in size, getting larger with each passing generation.... [tags: essays research papers] (504 words, 1.4 pages, free)
math lesson - Lesson Plan Title: Alexander, Who Used to Be Rich Last Sunday: Understanding Opportunity Costs. Grade level: 2, 3, or 6. Duration: three 50-minute class periods. Student goal: to understand that there is an opportunity cost to every economic decision and that these costs come as a result of limited resources. Student objectives: students will identify "opportunity costs" in the story and in their own lives; create an opportunity-costs bar graph as a whole class; and complete a table of personal spending and savings information.... [tags: essays research papers] (1724 words, 4.9 pages, unrated)
Strategic Plan to Develop Student Interest in Science, Technology, Engineering, and Math Related Careers - Success in K-12 math and science classes has a direct effect on students' decisions to pursue careers in engineering and technology. In a recent international assessment conducted by the Organization for Economic Co-operation and Development (OECD), some unacceptable statistics were released about United States schools. The results of this latest research place the United States 25th out of 30 OECD countries in math achievement among 15-year-olds and 21st in science achievement (Nagel, 2007).... [tags: Education Students Teaching] (1064 words, 3 pages, free)
Classroom Observation: Applying Danielson's Domains to the Math Education Classes Which I Observed - ... Albert Einstein once said, "The most incomprehensible thing about the universe is that it is comprehensible." Getting kids to love math is indeed a very high goal, but luckily there are many more reasons I want to teach. Another influence is that in high school I had great math teachers. It was senior year when I asked my Calculus teacher why we had to work through such complicated problems after we had already grasped the concept. He told me that it wasn't at all about coming up with the right answer.... [tags: child observation, teacher observation, teaching] (955 words, 2.7 pages, rated Strong)
business math paper - Annuities: businesses, financial institutions, and other organizations invest in annuities to raise money to pay such expenses as bond debts, notes due, or stock dividends. They also invest in annuities to provide for future needs, such as new facilities and equipment or employee retirement benefits. Individuals may purchase annuities, such as an Individual Retirement Account (IRA), or an insurance policy, from insurance companies, financial institutions, or securities brokers. An ordinary annuity is a series of regular payments where each payment is made at the end of the payment period.... [tags: essays research papers] (982 words, 2.8 pages, unrated)
math graph story - Between study groups, debate, and chess tournaments, there wasn't much of a social scene around Winchester University in Omaha, Nebraska. The school year at this college ran year-round, but the students were given a 30-day summer vacation in July. The majority of the students went back home to visit their families during this time. But as juniors at the university, Charles, Fredrick, and Stanley, all childhood buddies, decided it was time for a change and that they needed a little more spice in their life.... [tags: essays research papers] (1275 words, 3.6 pages, free)
Micheal Redkin and Math Basics - In order to create a graph such as the one Ms. Redkin uses to calculate the depreciation of her rental house, first it must be determined which part of the information given is the dependent variable and which is the independent variable. In this case the independent variable is time (in years), and the dependent variable is the value of the house. Next, create a graph with the given data, the independent variable on the x-axis and the dependent on the y-axis. Graph and label the given data as points (4 yrs, $64000) and (7 yrs, $52000), and allow the graph to represent the house's value from when it was new to 10 years after its purchase.... [tags: essays research papers] (593 words, 1.7 pages, free)
Applications of Prisms and Math - (Missing figures.) Prisms and their Applications. Introduction: a prism is one or several blocks of glass through which light passes, refracting and reflecting off its straight surfaces. Prisms are used in two fundamentally different ways. One is changing the orientation, location, etc. of an image or its parts; the other is dispersing light, as in refractometers and spectrographic equipment. This project will only deal with the first use. Consider an image projected onto a screen with parallel rays of light, as opposed to an image formed by the same rays passed through a cubic prism (assume that the amount of light that is reflected is negligible).... [tags: Mathematics] (2309 words, 6.6 pages, unrated)
Solution of the Cubic Equation - ... His scars were covered in adulthood with a beard, but the stammering that resulted persisted throughout his life. Tartaglia learned mathematics on his own for many years until a patron made possible some study at Padua. This education gave Tartaglia a high sense of pride that others often resented. For a while he was merely a mathematics teacher, but he gained fame by participating in local contests. As was common in this age, Fior challenged Tartaglia to a public mathematics contest in 1535. Each would propose questions to the other in the hopes of solving more than his opponent within a given time period (here, 40 to 50 days).... [tags: Math] (975 words, 2.8 pages, rated Strong)
Trigonometry in Daily Life - ... Trigonometry is a part of science that can be used to measure the height of a mountain very easily. This is basic information for aircraft design and navigation. Think of the last time you took a vacation at a hill station; in daily life there are medical conditions that prevent people from traveling to very high altitudes. Even supposing it is unlikely that one will ever need to directly apply a trigonometric function in solving a practical issue, the underused background of the science finds usage in an area which is a passion for many: music.... [tags: Math] (416 words, 1.2 pages, unrated; 6 works cited)
The History of Mathematics in Africa - ... However, the Ishango bone appears to be much more than a simple tally: the markings contain the prime numbers between 10 and 20. Around 7000 BC, people in Egypt and further south in Sudan began to use clay tokens to count things. They probably got the idea from West Asia, where people began using tokens earlier. "Thus by this early period in the development of civilization, humans' mnemonic devices were already forming. The bones, discovered in the late 60s and early 70s, use a method of marking in which an elongated stick or bone was engraved with a series of marks, offering an analog measure of individual events." By 3000 BC, people in Egypt were using hieroglyphs to write numbers.... [tags: Math] (1439 words, 4.1 pages, rated Better; 5 works cited)
Human Vision and the Eye - About the eye: human eyes receive and form images from outside, and also automatically adjust to changes in light and to seeing things close up and at a distance. Therefore, we can see most things in the outside world, but without light we can't see anything. Light travels through space, and the sun gives off light rays; when they enter the eyes they are bent, or refracted, and these light rays create images, or pictures, of all the objects around you. That's why we can see things very clearly. As for how light enters the eye: first, light enters through the pupil, which controls the amount of light admitted into the eye.... [tags: Math] (574 words, 1.6 pages, unrated)
Chaotic Behavior Of The Logistic Equation - ... Given the number N(0) of members of the population at time k = 0, the number of members at time k is N(k) = (1 + σ)^k N(0). (3) Equation (3) predicts exponential growth of the population. In a more realistic situation, let b be the birth rate; then the number of births at time k is bN(k). Assuming the number of deaths is proportional to the population size, let d be the death rate; then the number of deaths at time k is dN(k). The change in population in the time period between times k and k + 1 is N(k + 1) - N(k) = bN(k) - dN(k). (4) This can be rewritten as N(k + 1) = (1 + b - d)N(k). (5) Let σ = b - d; then equation (5) has the same form as equation (2).... [tags: Math] (1702 words, 4.9 pages, rated Better; 5 works cited)
| {"url":"http://www.123helpme.com/search.asp?text=Math","timestamp":"2014-04-17T06:41:03Z","content_type":null,"content_length":"125445","record_id":"<urn:uuid:29571712-0505-4206-80bd-d6979380e4d9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Improving the Beginner’s PID: Tuning Changes
(This is Modification #3 in a larger series on writing a solid PID algorithm)
The Problem
The ability to change tuning parameters while the system is running is a must for any respectable PID algorithm.
The Beginner’s PID acts a little crazy if you try to change the tunings while it’s running. Let’s see why. Here is the state of the beginner’s PID before and after the parameter change above:
So we can immediately blame this bump on the Integral Term (or “I Term”). It’s the only thing that changes drastically when the parameters change. Why did this happen? It has to do with the
beginner’s interpretation of the Integral:
This interpretation works fine until the Ki is changed. Then, all of a sudden, you multiply this new Ki times the entire error sum that you have accumulated. That’s not what we wanted! We only wanted
to affect things moving forward!
The Solution
There are a couple ways I know of to deal with this problem. The method I used in the last library was to rescale errSum. Ki doubled? Cut errSum in Half. That keeps the I Term from bumping, and it
works. It’s kind of clunky though, and I’ve come up with something more elegant. (There’s no way I’m the first to have thought of this, but I did think of it on my own. That counts damnit!)
The solution requires a little basic algebra (or is it calculus?)
Instead of having the Ki live outside the integral, we bring it inside. It looks like we haven’t done anything, but we’ll see that in practice this makes a big difference.
Now, we take the error and multiply it by whatever the Ki is at that time. We then store the sum of THAT. When the Ki changes, there’s no bump because all the old Ki’s are already “in the bank” so to
speak. We get a smooth transfer with no additional math operations. It may make me a geek but I think that’s pretty sexy.
The Code
/*working variables*/
unsigned long lastTime;
double Input, Output, Setpoint;
double ITerm, lastInput;
double kp, ki, kd;
int SampleTime = 1000; //1 sec
void Compute()
{
   unsigned long now = millis();
   int timeChange = (now - lastTime);
   if(timeChange>=SampleTime)
   {
      /*Compute all the working error variables*/
      double error = Setpoint - Input;
      ITerm += (ki * error);
      double dInput = (Input - lastInput);

      /*Compute PID Output*/
      Output = kp * error + ITerm - kd * dInput;

      /*Remember some variables for next time*/
      lastInput = Input;
      lastTime = now;
   }
}

void SetTunings(double Kp, double Ki, double Kd)
{
   double SampleTimeInSec = ((double)SampleTime)/1000;
   kp = Kp;
   ki = Ki * SampleTimeInSec;
   kd = Kd / SampleTimeInSec;
}

void SetSampleTime(int NewSampleTime)
{
   if (NewSampleTime > 0)
   {
      double ratio = (double)NewSampleTime
                     / (double)SampleTime;
      ki *= ratio;
      kd /= ratio;
      SampleTime = (unsigned long)NewSampleTime;
   }
}
So we replaced the errSum variable with a composite ITerm variable [Line 4]. It sums Ki*error, rather than just error [Line 15]. Also, because Ki is now buried in ITerm, it’s removed from the main
PID calculation [Line 19].
The Result
So how does this fix things? Before, when ki was changed, it rescaled the entire sum of the error; every error value we had seen. With this code, the previous error remains untouched, and the new ki
only affects things moving forward, which is exactly what we want.
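To make the bump concrete, here is a small standalone simulation of just the integral term (Python rather than Arduino C; the constant error of 1.0 and the ki jump from 0.5 to 1.0 at step 10 are made-up numbers, chosen only to make the bump visible):

```python
def i_term_old(kis, errors):
    """Beginner's way: keep a raw error sum, multiply by the CURRENT ki."""
    err_sum, history = 0.0, []
    for ki, e in zip(kis, errors):
        err_sum += e
        history.append(ki * err_sum)
    return history

def i_term_new(kis, errors):
    """Fixed way: fold ki into the sum as you go (ITerm += ki * error)."""
    i_term, history = 0.0, []
    for ki, e in zip(kis, errors):
        i_term += ki * e
        history.append(i_term)
    return history

steps = 20
errors = [1.0] * steps
kis = [0.5] * 10 + [1.0] * 10   # ki doubles at step 10

old = i_term_old(kis, errors)
new = i_term_new(kis, errors)

# Old way: I term jumps from 0.5*10 = 5.0 straight to 1.0*11 = 11.0 -- a bump.
# New way: 5.0 then 6.0 -- the old ki contributions stay "in the bank".
print(old[9], old[10])   # 5.0 11.0
print(new[9], new[10])   # 5.0 6.0
```

With the old scheme the whole accumulated sum gets rescaled by the new gain; with the new scheme only the increments after the change feel it.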
Next >>
Tags: Arduino, Beginner's PID, PID
I don’t get it. I am trying for about an hour but I cannot get it.
It looks the same arithmetically to me.
I don’t know what am I missing.
when you look at the calculus, it is identical. when it comes to implementation, it is different. let’s say that at t=10, Ki changes from 5 to 10, with Kp and Kd staying the same. now, let’s look at
the integral term values at t=20:
old way: 10*(Integral of error from t=0 to 20)
new way: (Integral of (5*error) from t=0 to 10) + (Integral of (10*error) from t=10 to 20)
hopefully this helps reveal the difference. again. this is something you only notice if the value of Ki changes. if it stays the same, both methods are identical.
So, since:
old way: 10*(Integral of error from t=0 to 20)
new way: (Integral of (5*error) from t=0 to 10) + (Integral of (10*error) from t=10 to 20)
old way: 10*(Integral of error from t=0 to 20)
new way: 5*(Integral of error from t=0 to 10) + 10*(Integral of error from t=10 to 20)
You are basically changing the algorithm. I kind of understand it now, but, is it OK to change the algorithm like that?
I know that this may very well be a silly question, but I don’t remember much from my control theory
This entire series is about changing the algorithm. the idea is that the PID algorithm behaves in a predictable, reliable way, EXCEPT for in certain situations. The goal of these changes (all of
them) is to make the algorithm work as you would expect, EVEN WHEN these special conditions occur.
OK, sounds nice. Thanks for the quick and accurate help!
Will let you know when my PID controller works.
The algebra is actually not correct. If both terms were equal, the result should be the same. Your ki is time dependent, which is why it cannot get out of the integrand. Still, the integral output
term really is oi(t) = integral{ ki(t) * e(t) dt}.
Anyway, this is a nice fix and a very useful library. | {"url":"http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/","timestamp":"2014-04-21T12:08:15Z","content_type":null,"content_length":"29981","record_id":"<urn:uuid:a128337d-1082-4214-a1ad-c8e3100ebf47>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
PPI Analysis
From ACLab Wiki
Psychophysiological Interactions for SPM. Hopefully this is actually easier to understand than what's currently available on the web.
1. Pick an ROI. You should have some sort of .mat file at the end of this step that tells SPM what your ROI is on the glass brain.
One idea is to just use the VOI function with the peak voxel and a 3mm sphere instead of worrying about using the exact ROI cluster.
2. FOR AN INDIVIDUAL SUBJECT (or even an individual run), PLOT the signal change in that ROI. You can use the SPM VOI function, but if the ROI is derived from a second level (group) analysis, that
won't help you much unless that same ROI is apparent in your first level (individual) contrast. I suggest using marsbar. At the end of this step you should have a vector of numbers representing
signal equal in length to the number of TRs in your design.
3. Normalize this vector. The command I think you should use in the matlab window is 'spm_detrend'. If Y is your vector of numbers, then detrending this vector looks like:
Y = spm_detrend( Y );
I don't really know what that does other than mean subtract. You might want to get rid of giant spikes and run based mean changes:
% Remove large spikes (anything beyond 3 standard deviations), then
% subtract the per-run mean (here: two runs of 297 scans each).
tempdev = std(rtest);
maxdev = 3*tempdev;
for i = 1:size(rtest,1)
    if (abs(rtest(i)) > maxdev)
        rtest(i) = 0;
    end
end
r1mean = mean(rtest(1:297));
r2mean = mean(rtest(298:594));
for i = 1:297
    rtest(i) = rtest(i) - r1mean;
end
for i = 298:594
    rtest(i) = rtest(i) - r2mean;
end
Here's the untranslated part that I'm still working through:
4. Next you need to create a regressor P corresponding to the psychological factor. It is important to retain the same order of scans that is assumed by y. So you need to give the value 1 to all scans corresponding to conditions A1 and B1, and (-1) to all the other scans (conditions A2 and B2). You should now have a vector of equal size to your vector from step 2 (length = number of scans in your study), consisting of 1s and (-1)s. This vector has mean = 0, so it does not require mean-correction. This is the factor, say P = [1 (-1) 1 (-1) ...]'.
5. Multiply these 2 vectors together and mean correct the result. PPI = P( : ) .* y( : );
Now you have three vectors: a) activity in a particular voxel over the time course of the experiment y b) condition order P c) the interaction between activity in a particular voxel and the
corresponding task PPI
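Steps 4 and 5 can be sketched in a few lines (shown here in Python/NumPy rather than MATLAB; the 8-scan, two-condition design and the y values are invented, just to show the shapes):

```python
import numpy as np

# y: detrended ROI time course, one value per scan (invented numbers).
y = np.array([0.2, -0.1, 0.4, 0.3, -0.2, 0.1, 0.0, -0.3])

# P: psychological factor, +1 for scans in conditions A1/B1 and
# -1 for scans in A2/B2, in the same scan order assumed by y.
# With balanced conditions its mean is already zero.
P = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)

# Step 5: elementwise product, then mean-correct the result.
PPI = P * y
PPI = PPI - PPI.mean()

# The three regressors for the SPM design: PPI as the covariate
# of interest, with P and y entered as confounds.
print(PPI)
```

This is only the regressor construction; the regression itself is then carried out voxelwise inside SPM as described in step 6.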
6. Go into SPM, carry out a statistical analysis specifying User Specified type of analysis. Specify 1 covariate of interest, and enter the name of the matlab vector that corresponds to PPI above.
Enter the two other vectors (P and y above) as confounds. Enter 2 contrasts: 1 and -1.
This analysis will calculate regressions at every voxel in the brain for the confounds P and y. Differences between the regression slopes of BOLD/rCBF on y under levels 1 and 2 of the psychological
factor are tested directly to produce a SPM{t}. The resultant MIP demonstrates the presence of significant psychophysiological interactions i.e. voxels in which the contribution of the thalamus
changed significantly as a function of context (1 vs. 2): contrast 1 shows areas for which there is a significant positive regression slope difference; contrast -1 shows areas for which there is a
significant negative regression slope difference. | {"url":"http://www.aclab.ca/wiki/index.php?title=PPI_Analysis","timestamp":"2014-04-20T23:27:36Z","content_type":null,"content_length":"11580","record_id":"<urn:uuid:af4f926c-e7e5-471d-8f3e-eb56ece36342>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
slope of the tangent line
March 7th 2007, 07:15 AM #1
Let f(x) = 1/(x-3). Calculate the difference quotient (f(0+h)-f(0))/h for
h =.1 = (-.1149425287)
h =.01 = (-.1114827202)
h = -.01 = (-.1107419712)
h = -.1 = (-.1075268817)
I understood how to get these values but then it asked:
If the slope of the tangent line to the graph of F(x) at x = 0 was -1/n^2 for some integer n, what would you expect n to be?
n = ?
This is the part I do not understand, could someone please explain this to me.
Thank You,
Keith Stevens
i didn't check this, but you got the answers to be roughly -0.1, right? so we want n so that -1/n^2 = -0.1. so ideally we want n^2 = 10, since -1/10 = -0.1. but that would mean n = sqrt(10), which is not an integer. so the closest we can get is, say, n^2 = 9. so -1/n^2 = -1/9 = -0.111111111... so then n = 3
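A quick numerical check of this reasoning (Python; assuming the function was meant to be f(x) = 1/(x-3), which reproduces the posted values):

```python
# Difference quotients of f(x) = 1/(x - 3) at x = 0, and the guess n = 3.

def f(x):
    return 1.0 / (x - 3.0)

def diff_quotient(h):
    return (f(0 + h) - f(0)) / h

for h in (0.1, 0.01, -0.01, -0.1):
    print(h, diff_quotient(h))   # all close to -0.111...

# The exact slope at x = 0 is f'(0) = -1/(0 - 3)^2 = -1/9,
# so -1/n^2 = -1/9 gives n = 3.
assert abs(diff_quotient(0.0001) - (-1.0 / 9.0)) < 1e-4
```

The quotients straddle -1/9 ≈ -0.1111 from both sides as h shrinks, matching the values in the question.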
Hi Keith it's Andrew. Jhevon was right but I posted some steps that might give you more of a visual perspective.
Click on the following two links:
a. http://item.slide.com/r/1/131/i/ohC2...s5OHgEjT9ZCth/
b. http://item.slide.com/r/1/66/i/hRo6v...aqPDkV55xVeZk/
Also Keith don't forget to check Dr. W's website out either. We have a calculator lab due on March 21.
Pet Peeve alert!! Warning! Danger, Will Robinson! Warning! Warning!
I know what you were TRYING to say, but how can the following be true statements?
Just a suggestion to be careful about what you are writing. Professors have taken points on exams for even less serious errors. My suggestion is to be careful about what you write at all times.
March 7th 2007, 07:45 AM #2
March 7th 2007, 08:25 AM #3
March 7th 2007, 08:25 AM #4 | {"url":"http://mathhelpforum.com/calculus/12289-slope-tangent-line.html","timestamp":"2014-04-21T08:58:57Z","content_type":null,"content_length":"42867","record_id":"<urn:uuid:b0b6745e-8d24-4382-ad0b-1cdd2ff18a10>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex orientations on homotopy
I am wondering if there is a more "geometric" formulation of complex orientations for cohomology theories than just a computation of $E^*\mathbb{C}$P$^{\infty}$ or a statement about Thom classes. It
seems that later in Hopkins notes he says that the complex orientations of E are in one to one correspondence with multiplicative maps $MU \rightarrow E$, is there a treatment that starts with this
perspective? How do the complex orientations of a spectrum E help one compute the homotopy of $E$, or the $E$-(co)homology of MU? Further, what other kinds of orientations could we think about, are
there interesting $ko$ or $KO$ orienations? how much of these $E$-orientations of X is detected by E-cohomology of X?
I do have some of the key references already in my library, for example the notes of Hopkins from '99, Rezk's 512 notes, Ravenel, and Lurie's recent course notes. If there are other references that
would be great. I am secretly hoping to get some insight from some of the experts. (I guess I should really also go through Tyler's abelian varieties paper)
(sorry for the on and off texing but the preview is giving me weird feedback.)
EDIT: I eventually found the type of answer i was looking for in some notes of Mark Behrens on a course he taught. This answer is that a ring spectrum $R$ is complex orientable is there is a map of
ring spectra $MU \to R$. This also appears in COCTALOS by Hopkins but neither source takes this as the more fundamental concept. Anyway, the below answer is more interesting geometrically.
stable-homotopy homotopy-theory at.algebraic-topology cohomology
I sympathize with you, Sean. For a few minutes LaTeX was not rendered for me, even after reloading the page several times. Finally now it is working. – Georges Elencwajg Mar 3 '10 at 21:47
There's Yuli Rudyak's book on "Thom Spectra, Orientations and Cobordism" as well. – Ryan Budney Mar 23 '10 at 5:43
The natural starting point of this story is E-orientations on, say closed, manifolds M. That's just a fundamental class $[M^n] \in E_n(M)$ such that cap product induces a (Poincare
duality) isomorphism. Given E, the question becomes which M are E-orientable. In many cases it happens that this follows if the stable normal bundle of M admits a lift through a
fibration $X\to BO$. For example, if E=HZ is ordinary Z-cohomology then X=BSO works, if E=KO then X=BSpin works, if E=KU then X=BU or X=BSpin$^c$ works etc.
To formalize the idea that every X-manifold has an E-orientation, form the bordism groups $\Omega^X_n$ of X-manifolds and observe that the fundamental classes lead to natural maps $\Omega^X_n(Y) \to E_n(Y)$ for any space Y. In other words, there are natural transformations of cohomology theories $\Omega^X \to E$, or even better, maps of spectra $MX \to E$, where $MX$ is the Thom spectrum associated to the fibration $X\to BO$.
In the case X=BU this is called a complex orientation of E and has been studied extensively because it simplifies computations of E-cohomology tremendously. The original and still
relevant reference is Adams' little blue book.
| {"url":"http://mathoverflow.net/questions/17010/complex-orientations-on-homotopy","timestamp":"2014-04-18T10:50:36Z","content_type":null,"content_length":"54678","record_id":"<urn:uuid:dbf360f9-19ec-4b55-ab3c-5bf3d0cd3dd8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
see below:
Luc Maisonobe schrieb:
> Bernhard Grünewaldt a écrit :
>> Hello,
>> I am new here so I don't know if it's correct to ask here for it, but:
>> Is it possible to have the latex and graphviz plugin for the apache
>> commons wiki installed?
>> Latex Plugin: http://moinmo.in/ParserMarket/latex
>> Graphviz Plugin: http://moinmo.in/GraphVizForMoin
>> Math Wiki: http://wiki.apache.org/commons/Math
>> I would like to add a documentation about the FastHadamardTransformer
>> which includes latex equations and some nice graphs:
>> http://wiki.apache.org/commons/Transformers/FastHadamardTransformer
>> Or is this page the right place for it?
>> http://commons.apache.org/math/userguide/transform.html
> I think this is a better place. It is the official user guide which is
> bundled with the source.
>> If yes, which syntax (MoinMoin syntax, Dokuwiki, plain HTML) does it use?
> It uses xdoc described here:
> http://maven.apache.org/doxia/references/xdoc-format.html
> The source files are in the subversion tree. Starting from the top
> directory of the project where the pom.xml files is, the user guide
> directory is here: src/site/xdoc/userguide. The html files are generated
> using maven2 thanks to the following command:
> mvn site
> Unfortunately, this format does not support mathematical syntax. In
> fact, depending on doxia version, it may even not support standard HTML
> 4.0 entities like π or ∇ (see
> http://jira.codehaus.org/browse/DOXIA-237). The current published
> version of doxia seems to be 1.0-alpha-4, but according to this message
> (http://markmail.org/message/z5n4tdiaohaxlqey) new versions should be
> published soon.
> Doxia supports others formats as well, and they can even be mixed on a
> page basis (i.e. you can have one page generated from apt, another one
> from xdoc ...). The following page lists several supported modules:
> http://maven.apache.org/doxia/doxia/doxia-modules/index.html. The
> documentation is ... scarce. Beware that some formats are output formats
> only (typically LaTeX).
> As far as I am concerned, I would be happy to change our documentation
> format to something more math-friendly. I think we will at least have to
> wait until the new version of doxia is published and look at what it
> provides (and perhaps contribute to it if time allows).
Ok, I will add the docu to the xdoc files in the repo.
(I will generate a diff and mail it to you)
How about adding MathML to the generated xdoc xhtml pages?
Here is an example xhtml page with mathml support:
Here is a description of how to convert latex to mathml:
If we add the mathml namespace we could use mathml.
Header just has to look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN"
"http://www.w3.org/TR/MathML2/dtd/xhtml-math11-f.dtd" [
<!ENTITY mathml "http://www.w3.org/1998/Math/MathML">
]>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
We could add the latex syntax as html comments above the mathml
statements, so everybody can easily change the equation using the
latex2mathml converter.
> Luc
>> thanks.
>> Bernhard Grünewaldt
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org | {"url":"http://mail-archives.apache.org/mod_mbox/commons-dev/200812.mbox/%3C4958137E.3010209@gruenewaldt.net%3E","timestamp":"2014-04-21T16:41:40Z","content_type":null,"content_length":"7921","record_id":"<urn:uuid:21c018e9-1fd9-4658-a778-827a8f987d38>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Principles of Fission Weapon Design
Excerpted from Section 2 of Nuclear Weapons Frequently Asked Qustions by Carey Sublette.
Basic Principles of Fission Weapon Design
The principal issues that must be solved to construct a fission weapon are:
1. Keeping the fissionable material in a subcritical state before detonation;
2. Bringing the fissionable material into a supercritical mass while keeping it free of neutrons;
3. Introducing neutrons into the critical mass when it is at the optimum configuration (i.e. at maximum supercriticality);
4. Keeping the mass together until a substantial portion of the material has fissioned.
Solving issues 1, 2 and 3 together is greatly complicated by the unavoidable presence of naturally occurring neutrons. Although cosmic rays generate neutrons at a low rate, almost all of these
"background" neutrons originate from the fissionable material itself through the process of spontaneous fission. Due to the low stability of the nuclei of fissionable elements, these nuclei will
occasionally split without being hit by a neutron. This means that the fissionable material itself periodically emits neutrons.
The process of assembling the supercritical mass must occur in significantly less time than the average interval between spontaneous fissions to have a reasonable chance of succeeding. This
is difficult to accomplish due to the very large change in reactivity required in going from a subcritical state to a supercritical one. The time required to raise the value of k from 1 to the
maximum value of 2 or so is called the reactivity insertion time, or simply insertion time.
It is further complicated by the problem of subcritical neutron multiplication. If a subcritical mass has a k value of 0.9, then a neutron present in the mass will (on average) create a chain
reaction that dies out in an average of 10 generations. If the mass is very close to critical, say k=0.99, then each spontaneous fission neutron will create a chain that lasts 100 generations. This
persistence of neutrons in subcritical masses further reduces the time window for assembly, and requires that the reactivity of the mass be increased from a value of <0.9 to a value of 2 or so within
that window.
Simply splitting a supercritical mass into two identical parts, and bringing the parts together rapidly is unlikely to succeed since neither part will have a sufficiently low k value, nor will the
insertion time be rapid enough with achievable assembly speeds.
Assembly Techniques - Achieving Supercriticality
The key to achieving objectives 1 and 2 is revealed by the fact that the critical mass (or supercritical mass) of a fissionable material is inversely proportional to the square of its density. By
contriving a subcritical arrangement of fissionable material whose average density can be rapidly increased, we can bring about the sudden large increase in reactivity needed to create a powerful
explosion. As a general guide, a suitable highly supercritical mass needs to be at least three times heavier than a mass of equal density and shape that is merely critical. Thus doubling the density
of a pit that is slightly subcritical (thereby making it into nearly four critical masses) provides sufficient reactivity insertion for a bomb.
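The "nearly four critical masses" figure is just the inverse-square density law applied to a pit of fixed mass. A sketch of the arithmetic (the 0.98 starting value is an invented illustration):

```python
# Critical mass scales as 1/density^2, so a FIXED pit mass measured in
# units of critical masses scales as density^2.

def critical_masses(pit_in_crits_at_rho0, density_ratio):
    """How many critical masses a fixed pit represents after compression."""
    return pit_in_crits_at_rho0 * density_ratio ** 2

# A pit that is 0.98 critical masses at normal density,
# compressed to twice normal density:
print(critical_masses(0.98, 2.0))   # 3.92 -- "nearly four critical masses"
```

Doubling the density therefore quadruples the pit's reactivity measure, which is the sudden insertion both assembly methods aim to produce.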
Two general approaches have been used for achieving this idea: implosion assembly, and gun assembly. Implosion is capable of very short insertion times, gun assembly is much slower. | {"url":"http://nuclearweaponarchive.org/Library/Fsswpnpr.html","timestamp":"2014-04-18T05:55:52Z","content_type":null,"content_length":"4325","record_id":"<urn:uuid:781e292e-dc59-4211-84e2-fd63e1dbe58b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
A finite axiomatization of inductive-recursive definitions
Results 1 - 10 of 31
, 2001
Cited by 44 (16 self)
We give two finite axiomatizations of indexed inductive-recursive definitions in intuitionistic type theory. They extend our previous finite axiomatizations of inductive-recursive definitions of sets to indexed families of sets and encompass virtually all definitions of sets which have been used in intuitionistic type theory. The more restricted of the two axiomatizations arises naturally by considering indexed inductive-recursive definitions as initial algebras in slice categories, whereas the other admits a more general and convenient form of an introduction rule.
- Nordic Journal of Computing , 2003
Cited by 42 (2 self)
We show how to write generic programs and proofs in Martin-Löf type theory. To this end we consider several extensions of Martin-Löf's logical framework for dependent types. Each extension has a universe of codes (signatures) for inductively defined sets with generic formation, introduction, elimination, and equality rules. These extensions are modeled on Dybjer and Setzer's finitely axiomatized theories of inductive-recursive definitions, which also have a universe of codes for sets, and generic formation, introduction, elimination, and equality rules.
- Annals of Pure and Applied Logic , 2003
Cited by 28 (11 self)
1 Introduction. Induction-recursion is a powerful definition method in intuitionistic type theory in the sense of Scott ("Constructive Validity") [31] and Martin-Löf [17, 18, 19]. The first occurrence of formal induction-recursion is Martin-Löf's definition of a universe à la Tarski [19], which consists of a set U
Cited by 20 (4 self)
We present a closed dependent type theory whose inductive types are given not by a scheme for generative declarations, but by encoding in a universe. Each inductive datatype arises by interpreting its description—a first-class value in a datatype of descriptions. Moreover, the latter itself has a description. Datatype-generic programming thus becomes ordinary programming. We show some of the resulting generic operations and deploy them in particular, useful ways on the datatype of datatype descriptions itself. Surprisingly this apparently self-supporting setup is achievable without paradox or infinite regress. 1.
- 16th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2003), 2003
Cited by 13 (1 self)
We extend the proof assistant Agda/Alfa for dependent type theory with a modified version of Claessen and Hughes' tool QuickCheck for random testing of functional programs. In this way we combine testing and proving in one system. Testing is used for debugging programs and specifications before a proof is attempted. Furthermore, we demonstrate by example how testing can be used repeatedly during proof for testing suitable subgoals. Our tool uses test data generators which are defined inside Agda/Alfa. We can therefore use the type system to prove properties about them, in particular surjectivity stating that all possible test cases can indeed be generated.
, 2001
Cited by 12 (4 self)
First-order unification algorithms (Robinson, 1965) are traditionally implemented via general recursion, with separate proofs for partial correctness and termination. The latter tends to involve counting the number of unsolved variables and showing that this total decreases each time a substitution enlarges the terms. There are many such proofs in the literature, for example, (Manna & Waldinger, 1981; Paulson, 1985; Coen, 1992; Rouyer, 1992; Jaume, 1997; Bove, 1999). This paper
, 1999
Cited by 11 (0 self)
This paper deals with formalizations and verifications in type theory that are abstracted with respect to a class of datatypes, i.e. polytypic constructions. The main advantage of these developments is that they can not only be used to define functions in a generic way but also to formally state polytypic theorems and to synthesize polytypic proof objects in a formal way. This opens the door to mechanically proving many useful facts about large classes of datatypes once and for all. 1 Introduction. It is a major challenge to design libraries for theorem proving systems that are both sufficiently complete and relatively easy to use in a wide range of applications (see e.g. [6, 26]). A library for abstract datatypes, in particular, is an essential component of every proof development system. The libraries of the Coq [1] and the Lego [13] system, for example, include a number of functions, theorems, and proofs for common datatypes like natural numbers or polymorphic lists. In th...
- LerNet 2008. LNCS , 2009
"... Abstract. In these lecture notes we give an introduction to functional programming with dependent types. We use the dependently typed programming language Agda which is an extension of
Martin-Löf type theory. First we show how to do simply typed functional programming in the style of Haskell and ML. ..."
Cited by 11 (1 self)
Add to MetaCart
Abstract. In these lecture notes we give an introduction to functional programming with dependent types. We use the dependently typed programming language Agda which is an extension of Martin-Löf
type theory. First we show how to do simply typed functional programming in the style of Haskell and ML. Some differences between Agda’s type system and the Hindley-Milner type system of Haskell and
ML are also discussed. Then we show how to use dependent types for programming and we explain the basic ideas behind type-checking dependent types. We go on to explain the Curry-Howard identification
of propositions and types. This is what makes Agda a programming logic and not only a programming language. According to Curry-Howard, we identify programs and proofs, something which is possible
only by requiring that all programs terminate. However, at the end of these notes we present a method for encoding partial and general recursive functions as total functions using dependent types.
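The Curry-Howard identification mentioned above reads an implication A → B as the type of functions from A to B, with a proof being a (total) program of that type. As a very loose illustration — Python is of course not a programming logic like Agda, and these names are hypothetical — here are two proof terms written as functions:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_ab: Callable[[A], B], proof_a: A) -> B:
    # From a proof of A -> B and a proof of A, conclude B:
    # the proof term is just function application.
    return proof_ab(proof_a)

def transitivity(ab: Callable[[A], B], bc: Callable[[B], C]) -> Callable[[A], C]:
    # (A -> B) -> ((B -> C) -> (A -> C)):
    # the proof term is function composition.
    return lambda a: bc(ab(a))
```

The identification only yields a logic when every such program terminates, which is exactly the totality requirement the notes discuss.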
- Informal Proc. of Workshop on Generic Programming, WGP’98, Marstrand , 1998
"... We first present a finite axiomatization of strictly positive inductive types in the simply typed lambda calculus. Then we show how this axiomatization can be modified to encompass simultaneous
inductive-recursive definitions in intuitionistic type theory. A version of this has been implemented in t ..."
Cited by 7 (4 self)
Add to MetaCart
We first present a finite axiomatization of strictly positive inductive types in the simply typed lambda calculus. Then we show how this axiomatization can be modified to encompass simultaneous
inductive-recursive definitions in intuitionistic type theory. A version of this has been implemented in the Half system, which is based on Martin-Löf's logical framework.

1 Introduction

The present note summarizes a presentation to be given at the Workshop on Generic Programming, Marstrand, Sweden, June 18th, 1998. We use Martin-Löf's logical framework as a metalanguage for axiomatizing
inductive definitions in the simply typed lambda calculus. We also show how to generalize this axiomatization to the case of inductive-recursive definitions in the lambda calculus with dependent
types. The reader is referred to the full paper [7] for a more complete account focussing on induction-recursion. Related papers discussing inductive definitions in intuitionistic type theory include
Backhouse [1, 2], Co...
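An axiomatization of an inductive type comes down to giving its constructors together with an eliminator (recursor). As a loose illustration — in Python rather than the Martin-Löf framework used in the note, with hypothetical names — here is the recursor for the natural numbers and two functions defined purely through it:

```python
def nat_rec(base, step, n):
    """Recursor (eliminator) for the inductive natural numbers:
    nat_rec(base, step, 0)     = base
    nat_rec(base, step, k + 1) = step(k, nat_rec(base, step, k))
    """
    acc = base
    for k in range(n):
        acc = step(k, acc)
    return acc

# Addition and multiplication defined only via the recursor:
def add(m, n):
    return nat_rec(m, lambda _, r: r + 1, n)

def mul(m, n):
    return nat_rec(0, lambda _, r: add(r, m), n)
```

In the type-theoretic setting, the recursor is what the finite axiomatization provides once per datatype; every structurally recursive function is then an instance of it.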
- In MSFP 2006 , 2006
"... We present a Tait-style proof to show that a simple functional normaliser for a combinatory version of System T terminates. The main interest in our construction is methodological, it is an
alternative to the usual small-step operational semantics on the one side and normalisation by evaluation on t ..."
Cited by 7 (4 self)
Add to MetaCart
We present a Tait-style proof to show that a simple functional normaliser for a combinatory version of System T terminates. The main interest in our construction is methodological: it is an alternative to the usual small-step operational semantics on the one side and normalisation by evaluation on the other. Our work is motivated by our goal to verify implementations of Type Theory such as Epigram. Keywords: Normalisation, Strong Computability
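To make the idea of a "simple functional normaliser" concrete, here is a hypothetical Python sketch (not the paper's construction) for the S/K fragment of a combinatory language. It repeatedly contracts the leftmost-outermost redex; the explicit fuel bound stands in for the termination guarantee that the paper's Tait-style proof supplies.

```python
# Terms: atoms are strings; an application is a 2-tuple (function, argument).
def app(*ts):
    """Left-associated application: app(f, a, b) = ((f, a), b)."""
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def step(t):
    """One leftmost-outermost reduction step, or None if t is normal."""
    if isinstance(t, str):
        return None
    f, a = t
    # K x y -> x
    if isinstance(f, tuple) and f[0] == 'K':
        return f[1]
    # S x y z -> x z (y z)
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        x, y, z = f[0][1], f[1], a
        return ((x, z), (y, z))
    s = step(f)
    if s is not None:
        return (s, a)
    s = step(a)
    if s is not None:
        return (f, s)
    return None

def normalize(t, fuel=1000):
    """Reduce to normal form, bounded by fuel (a stand-in for termination)."""
    while fuel:
        s = step(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    raise RuntimeError("out of fuel")
```

Full System T adds typed constants such as zero, successor, and the recursor; the point of the paper is precisely that for such a language the fuel bound is unnecessary, because normalisation provably terminates.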
Flushing, NY Calculus Tutor
Find a Flushing, NY Calculus Tutor
...I was a teaching assistant for three courses over four semesters during my graduate studies: Introduction to Medical Imaging, Advanced Medical Imaging, and Introduction to Biomaterials at the University of Southern California in Los Angeles during my PhD training. I assisted these courses by develo...
24 Subjects: including calculus, physics, geometry, statistics
Hello, students and/or parents, Thank you for giving me the opportunity to introduce myself to you. I am a current college instructor at the City University of New York (CUNY) in the department of
computer engineering technology. I earned my Master of Science degree in electrical engineering from Stony Brook University, New York.
28 Subjects: including calculus, Spanish, chemistry, geometry
...I've been using Apple computers since the first Macintosh Classic came out. Although I also have PCs at home and at work, Macs are my main tools for graphic design, movie editing, and web design. I consistently follow the technological advances with MacOS, as well as its connection with iPhones, iPads, and iTunes.
83 Subjects: including calculus, chemistry, physics, statistics
...We can do practice problems, exercises, and/or homework together, or we can study a topic from the beginning and do exercises to reinforce our understanding. I require cancellation notice 24 hours in advance. As a workplace, I generally prefer comfortable public places, like a coffee shop with wide tables or a library.
25 Subjects: including calculus, physics, statistics, logic
I am a former Wall Street quant with degrees in Math, Computer Science and Finance. I have been tutoring since my days on the Math Team at Stuyvesant HS. I was a high school teacher for a brief
time and, more recently, I taught Probability and Statistics at Stony Brook University for a few years, where I received the President's Award for Excellence in Teaching.
16 Subjects: including calculus, geometry, statistics, GRE | {"url":"http://www.purplemath.com/Flushing_NY_Calculus_tutors.php","timestamp":"2014-04-21T00:07:31Z","content_type":null,"content_length":"24254","record_id":"<urn:uuid:f060308d-14e9-48a3-b908-2c1c8f78b67e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Jolla Calculus Tutor
...Moreover, I have worked on a wide variety of tasks and have experience in many different fields of study and research, such as: Numerical Solutions for Differential Equations, Control Systems, Simulink, and Convex Optimization. My Bachelor's degree was in the Mechanical Engineering field...
14 Subjects: including calculus, physics, GRE, algebra 1
...I have been a college mathematics professor throughout my professional life. Therefore, I have taught the Trigonometry course at college numerous times. Not only have I taught Trigonometry at college, but I have also taught and written an appropriate curriculum for Trigonometry at the high school l...
13 Subjects: including calculus, geometry, ASVAB, GRE
Very patient and creative teacher of physics and mathematics. Able to explain and solve problems at any level. Prefer to use a large conference room in the Department of Physics at UCSD.
2 Subjects: including calculus, physics
...Until now, I have never been compensated for my tutoring; I have done it simply because I enjoy it. I love the feeling that I can help someone succeed, and I am good at it. I can talk to anyone and I can help anyone. The SAT Math section has some straightforward multiple-choice problems and some free-response problems that need to be bubbled in.
26 Subjects: including calculus, Spanish, chemistry, physics
...I have a master's degree in applied mathematics and use linear algebra in my work quite often. I have taught classes in linear algebra and control systems (which rely heavily on linear algebra) many times during my career as a university professor. I have more than 20 years' experience of programming in MATLAB/Simulink and associated toolboxes.
32 Subjects: including calculus, physics, geometry, statistics | {"url":"http://www.purplemath.com/la_jolla_calculus_tutors.php","timestamp":"2014-04-21T02:35:55Z","content_type":null,"content_length":"23754","record_id":"<urn:uuid:da22bae3-4358-406a-8b44-ea7bd32c8a66>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
total order (plural total orders)
1. (set theory) A relation that is reflexive, antisymmetric, and transitive (i.e., a partial order), and that has the property that for any two elements of its set, one is related to the other.
If → is the material implication for a propositional logic $\mathcal{L}$, and $\mathcal{L}$ has logical equivalence ↔ (an equivalence relation), then → is a total order for the quotient $\mathcal{L} / \leftrightarrow$.
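The four properties in the definition can be checked directly on a finite set. Here is a small Python sketch (a hypothetical helper, not part of the entry) that tests whether a relation is a total order:

```python
def is_total_order(elems, rel):
    """Check reflexivity, antisymmetry, transitivity, and totality of rel."""
    for a in elems:
        if not rel(a, a):                                   # reflexive
            return False
        for b in elems:
            if rel(a, b) and rel(b, a) and a != b:          # antisymmetric
                return False
            if not (rel(a, b) or rel(b, a)):                # total
                return False
            for c in elems:
                if rel(a, b) and rel(b, c) and not rel(a, c):  # transitive
                    return False
    return True
```

For example, ≤ on the integers passes, while divisibility fails totality (2 and 3 are incomparable), making it only a partial order.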
Last modified on 8 October 2013, at 21:13 | {"url":"https://en.m.wiktionary.org/wiki/total_order","timestamp":"2014-04-16T16:39:58Z","content_type":null,"content_length":"18921","record_id":"<urn:uuid:2f88b566-9937-4985-bfb4-36bbd452a73c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Libertyville, IL Prealgebra Tutor
Find a Libertyville, IL Prealgebra Tutor
...I was the top student in all my chemistry classes, so I have a clear understanding of all the concepts in chemistry. I will be able to help you or your child understand these concepts using real-life examples, and I will also be able to coach you in the techniques that will enable you to answer questions correctly every time. I tutor because I love working with children.
20 Subjects: including prealgebra, chemistry, calculus, physics
...When teaching lessons, I put the material into a context that the student can understand. My goal is to help all of my students obtain a solid conceptual understanding of the subject they are
studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student.
12 Subjects: including prealgebra, calculus, geometry, algebra 1
...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only
a few sessions.
26 Subjects: including prealgebra, Spanish, geometry, chemistry
...I am currently a high school math teacher and I have been teaching high school for the past 11 years. I have experience teaching students in subjects from pre-algebra through pre-calculus. I
also have experience preparing students for the math portion of the ACT.
7 Subjects: including prealgebra, algebra 1, algebra 2, precalculus
...I have taught second through eighth grade science. I organized and ran the outdoor education program for my school, and helped to build and organize the science curriculum. I have tutored
students in middle school.
23 Subjects: including prealgebra, reading, chemistry, geometry
Related Libertyville, IL Tutors
Libertyville, IL Accounting Tutors
Libertyville, IL ACT Tutors
Libertyville, IL Algebra Tutors
Libertyville, IL Algebra 2 Tutors
Libertyville, IL Calculus Tutors
Libertyville, IL Geometry Tutors
Libertyville, IL Math Tutors
Libertyville, IL Prealgebra Tutors
Libertyville, IL Precalculus Tutors
Libertyville, IL SAT Tutors
Libertyville, IL SAT Math Tutors
Libertyville, IL Science Tutors
Libertyville, IL Statistics Tutors
Libertyville, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/Libertyville_IL_Prealgebra_tutors.php","timestamp":"2014-04-18T05:40:06Z","content_type":null,"content_length":"24378","record_id":"<urn:uuid:50c1961c-1e6a-4501-bfad-6c2a82710300>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |