content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Mental math program
March 1st 2008, 11:34 PM #1
Mental math program
Okay, I'm decent at math concepts, I usually excel in my courses, but wow am I bad at arithmetic. Adding, subtracting, multiplying, dividing, I need a calculator for these things.
A while back I was looking around the internet, and found an awesome site that went through all the steps to learn math, and I used what they said on the addition part, and found it worked
marvelously (was based on using the number ten).
At the site, they had some program to give you problems like the ones they were testing you on, and it would help you learn to do arithmetic the way quick efficient way they suggested.
...but I didn't want to spend the money at the time (I'm poor), and apparently didn't save the URL
So now my birthday is coming up, and I want to ask for the software as a gift, but I can't find the site anymore. I was hoping someone here might know what it was or know the site, or a suitable
substitute. I think it was only like $10 or $20 (USD), and if I recall, it had a picture of a ninja on the front.
Anyone here know where to get this? I'm embarrassingly bad at addition, and want to remedy that so I only need a calculator for more involved arithmetic.
edit: The site was fantastic, it had like an explanation of how to do certain types of problems, then a few examples, then a bunch of questions that would show you the answers when you moused
over them. They got progressively more complex, like first showed you how to add single digit numbers, then double digit, then triple digit numbers. That's as far as I got, though, but I was
really impressed with the site.
Mental Math Master - The complete guide to mental arithmetic
This may be the software you are looking for. One of the cds is called mental math samurai and looks like it has someone doing karate on the cover
I am not familiar with this software, I merely pride myself on my ability to use google quickly and effectively.
One search attempt and one redirect from another site led me to the above address. If it is correct, SCORE! If not it might still be useful to help you master your mental math... or mentally
master math... or mathematically master your mentality
March 7th 2008, 11:55 AM #2
Mar 2008 | {"url":"http://mathhelpforum.com/math-software/29683-mental-math-program.html","timestamp":"2014-04-16T19:14:55Z","content_type":null,"content_length":"34077","record_id":"<urn:uuid:0d935b7b-b6cc-4115-86c6-35f5be371238>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Planetary motion.
This material comes from an article by Robert Osserman in the American Mathematical Monthly of
July 2001.
Part One. Suppose we have an inertial frame of reference in which a body of constant mass m is
concentrated at the point p and another body of constant mass M is concentrated at the point P. Suppose
there are no other masses in our universe and that Newton's law of gravitation holds; this amounts to
(1) m¨p =
|P - p|3
(P - p) and M ¨P =
|p - P|3
(p - P)
where G is Newton's gravitational constant in appropriate units.
C =
m + M
(mp + MP)
be the center of mass of our two body system. A simple calculation shows that | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/304/2826859.html","timestamp":"2014-04-21T13:23:04Z","content_type":null,"content_length":"7763","record_id":"<urn:uuid:aedcfcf1-c183-45a7-9314-931e8f8d1284>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
E.W.Dijkstra Archive: A Short Introduction to the Art of Programming (EWD 316), Chapter 5
EWD316: A Short Introduction to the Art of Programming
prof.dr.Edsger W.Dijkstra
August 1971
5. Programs corresponding to recurrence relations
Theorem 8 mentions successive states connected by a recurrence relation. The meaning of this theorem is twofold: it can be used to prove assertions about a given program, but also —and this, I think,
is more important— it suggests to us, when faced with the task of making a program, the use of a while-clause in the case of a problem that in its mathematical formulation presents itself as the
evaluation of a recurrence relation. We are going to illustrate this by a number of examples.
Consider the sequence of pairs a[i], c[i] given by
for i = 0 a[0] = 1 (1)
c[0] = 1 b, with 0 < b < 2 (i.e. abs(c[0]) < 1)
for i > 0 a[i] = (1 + c[i][-1]) * a[i][-1] (2)
c[i] = (c[i][-1])^2 .^
Then lim a[i] = 1/b .
i →∞
Exercise. Prove the last formula. (This has nothing to do with programming, it is secondary school algebra. The clue of a proof can be found in the relation
1 1 + c[i][-1]
------- = ------- .)
1 c[i][-1] 1 - c[i]
It is requested to use this recurrence relation to approximate the value of 1/b; obviously we cannot compute infinitely many elements of the sequence a[0], a[1], a[2], but we can accept a[k] as a
sufficiently close (how close?) approximation of 1/b when c[k] is less (in absolute value) than a given, small, positive tolerance named "eps". (This example is of historical interest; it has been
taken from the subroutine library for EDSAC 1, the world's first stored program controlled automatic computer. The order code of this computer did not comprise a divide instruction and one of the
methods used with this computer to compute quotients was based on the above recurrence relation.)
Theorem 8 talks about "a part, s, of the state space" and the loop
while B(s) do s := S(s)
asserts that after the initial state s[0] , the states s[i] after the i-th execution of the repeatable statement will satisfy
s[i] = S(s[i][-1]) (3)
Our recurrence relations (2) are exactly of the form (3) if we identify the state s[i] with the value pair a[i], c[i]. That is, to span the part s of the state space we have to introduce two
variables, for the purpose of this discussion called A and C, and we shall denote their values after the i-th execution of the repeatable statement A[i] and C[i] respectively. We associate the state
s[i] (as given by the values A[i] and C[i ]) with the value pair a[i], c[i] by the relations
A[i] = a[i] (4)
C[i] = c[i]
(Remember: on the left-hand sides the subscript "i" means "the value of the variable after the i-th execution of the repeatable statement", on the right-hand side the subscript "i" refers to the
recurrent sequences as given by (1) and (2). It would have been usual to call the two variables "a" and "c" instead of "A" and "C", i.e. not to distinguish between the quantities defined in the
mathematical formulation on the one hand and the associated variables in the program on the other hand. As this association is the very subject of this discussion, it would have been fatal not to
distinguish between them.)
Within the scope of a declaration "real A, C" it is now a straightforward task to write the piece of program:
A := 1; C := 1 b;
while abs(C ) ≥ eps do
begin A := (1 + C ) * A;
C := C * C
end .
The first line has to create the proper state s[0] and does so in accordance with (4) and (1), the repeatable statement has the form, symbolically denoted by "s := S(s)" —see the Note below— in
accordance with (4) and (2), and the condition guarantees that after termination
(A[k] = ) A = a[k]
will hold with the proper value of k.
Exercise. Prove that the loop terminates.
Note. The symbolic assignment "s := S(s)" has the form of two assignments
A := (1 + C ) * A;
C := C * C .
With the initial condition A = a[i][-1] , C = c[i][-1] the first assignment is equivalent to
A := (1 + c[i][-1]) * a[i][-1]
and after the first, but before the second assignment we have —on account of (2)—
A = a[i], C = c[i-1] .
We have the complete pair A = a[i], C = c[i] only after the second assignment. Thanks to the explicit occurrence of the subscripts, the order of the two relations comprising (2) is immaterial, this
in contrast to the two assignment statements composing the repeatable statement, whose order is vital.
Exercise. In the same EDSAC 1 subroutine library the next scheme is used. Consider the sequence of pairs a[i], c[i], given by
for i = 0 a[0] = b
c[0] = 1 b, with 0 < b < 2 (i.e. abs(c[0]) < 1)
for i > 0 a[i] = (1 + .5 * c[i][-1]) * a[i][-1]
c[i] = (c[i][-1])^2 * (.75 + .25 * c[i][-1]) .
Then lim a[i] = b^.5 .
Prove the last formula and make a program using it for the approximation of the square root. The clue of a proof can be found in the relation
(1 c[i][-1])^-.5 = (1 + .5 * c[i][-1]) * (1 - c[i])^-.5 .
Prove also the termination of the repetition in the program made.
Exercise. In the same EDSAC 1 subroutine library the next scheme is used. Consider the sequence of triples inc[i], s[i], x[i], given by
for i = 0 inc[0] = log 2
s[0] = 0
x[0] = arg (with 1 ≤ arg < 2)
for i > 0
for (x[i][-1])^2 < 2 inc[i] = .5 * inc[i][-1]
s[i] = s[i][-1]
x[i] = (x[i][-1])^2 .^
for (x[i][-1])^2 ≥ 2 inc[i] = .5 * inc[i][-1]
s[i] = s[i][-1] + .5 * inc[i][-1]
x[i] = .5 * (x[i][-1])^2
Then lim s[i] = log(arg) .
Prove this relation and make a program using it to approximate the logarithm of a value arg in the interval stated. (In this program "log 2" may be regarded as a known constant which, in fact,
determines the base of the logarithm.) The clue of the proof can be found in the relation
log(arg) = s[i] + inc[i ] * log(x[i] ) / log 2 .
Our next example is very simple; it is so traditional that we could call it standard. (No self-respecting programming course omits it, it is often the very first example of a loop; Peter Naur uses it
in his article "Proof of algorithms by general snapshots", 1966, BIT, 6, pp 310-316.)
Given a sequence of values
a[1], a[2], a[3], , a[N] (with N ≥ 1)
and a variable called "max." Make a piece of program assigning to the variable named "max" the maximum value occurring in the sequence. (As N ≥ 1, the sequence is not empty and therefore the task
makes sense; it is not required that any two values in the sequence differ from each other, the maximum value sought may occur more than once in the sequence.) If he welcomes the experience the
reader is invited to try to make this piece of program himself before reading on.
How do we define the maximum value occurring in a sequence of length N for general N ≥ 1? If we call "maximum[k]" the maximum value occurring among the first k elements a[1], , a[k], then
1) the answer sought is maximum[N]
2) the values maximum[k] are given
for k = 1 by the base: maximum[1] = a[1] (5)
appealing to the knowledge that the maximum element in a sequence of length 1 must be the only element in the sequence
for k > 1 by the recurrence relation: (6)
maximum[k] = MAX(maximum[k-1], a[k])
assuming the knowledge of the function MAX of two arguments.
The recurrence relation (6) presents us with an additional difficulty because it is not of the form
s[i] = S(s[i-1])
because —via "a[k]"— the value k occurs on the right-hand side not exclusively in the subscript "k-1". To overcome this we use a trick that might be called a method. If we call n[k] the k-th natural
number, then n[k ]= k; the numbers n[k] satisfy the obvious recurrence relation n[k ]= 1 + n[k][−1]. We can now rewrite the definition for the sequence of values maximum[k] in the form of a
definition for the pairs n[k], maximum[k]:
for k = 1 n[1] = 1 (7)
maximum[1] = a[1]
for k > 1 n[k] = 1 + n[k][-1] (8)
maximum[k] = MAX(maximum[k][-1], a[1 + n[k] [-1]])
and now the recurrence relations are of the form s[i] = S(s[i-1]), the only —trivial— difference being that in Theorem 8 we started with i = 0 and here with k = 1. The trick we called a method shows
that we need a second (integer) variable; call it "m". Our state s[i] will associate (with k = i + 1)
max[i] = maximum[k]
m[i] = n[k ] .
The piece of program now becomes:
max := a[1]; m := 1;
while m < N do begin m := m + 1;
max := MAX(max, a[m])
end .
Again, the order of the two assignment statements is essential.
We have given the above piece of reasoning and the explicit references to the recurrence relation of Theorem 8 because it shows a mechanism leading to the conclusion that the part of the state space
on which the repetition operates needs to comprise an additional variable. Even a moderately trained programmer draws this conclusion "intuitively" and from now onwards I shall assume my reader
equipped with that intuition. Then —and only then— there is a much shorter path of reasoning that leads to the program we found. It does not consider —"statically" so to speak— the sequence of values
s[0], s[1], in the sense that it bothers about the values of the subscript i in s[i]. It appeals directly to Theorems 5 and 6 and works in terms of assertions valid (before and after) any execution
of the repeatable statement. The price paid for this is the duty to prove termination separately.
Given the base
k = 1 maximum[1] = a[1]
and the step
1 < k ≤ N maximum[k] = MAX(maximum[k] [-1], a[k])
the programmer "intuitively" introduces two variables which he calls "maximum" and "k" for short and the relation to be kept invariant is
P : 1 ≤ k ≤ N and maximum = maximum[k] .
(Here the use of "maximum" and "k" stands for the current value of the variable thus named, while "maximum[k]" stands for the value as given by the recurrence relation. This double use of the same
names is tricky but programmers do it. I too.)
The program then consists of two parts: establishing the relation P in accordance with the base and repeatedly increasing k under invariance of relation P, i.e. in accordance with the step.
The initialization
"maximum := a[1]; k := 1"
establishes P (with k = 1), the repetition
while k < N do
begin k := k + 1;
maximum := MAX(maximum, a[k])
causes the repeatable statement to be executed under the combined relation "B and P ", i.e.
k < N and 1 ≤ k ≤ N and maximum = maximum[k]
which reduces to
1 ≤ k < N and maximum = maximum[k] . (9)
In order to show that the execution of the repeatable statement under the initial condition (9) leaves relation P valid, it is desirable to distinguish between the values before and after its
execution; now it would be confusing to do so with subscripts (why?), therefore we distinguish the values after execution by primes.
Initially we have relation (9); after the assignment k := k + 1 we have the relation k' = k + 1 and from the first part of (9), i.e. 1 ≤ k ≤ N, follows 2 ≤ k' ≤ N, which implies
1 ≤ k' ≤ N . (10)
The second assignment now becomes effectively maximum := MAX(maximum[k], a[k']), resulting in the relation
maximum' = maximum[k' ]. (11)
Relations (10) and (11) combine to a replica of P, but now for the primed quantities.
Termination follows from the fact that each execution of the repeatable statement involves an effective increase of the integer valued variable k. After termination we have, according to Theorem 5,
"P and non B ", i.e.
1 ≤ k ≤ N and maximum = maximum[k ]and non k < N ;
from the first and the last term we conclude k = N and then from the middle part
maximum = maximum[N]
which concludes the proof.
Exercise. Make a program effectively assigning "prod := X * Y" with integer X and Y, satisfying X ≥ 0, Y ≥ 0
a) using only addition and subtraction
b) using in addition the boolean function "odd(x)", doubling and halving of a number. (The so-called Egyptian multiplication.)
Exercise. Make a program effectively assigning "rem :=REM(X, Y)" with integer X and Y, X ≥ 0, Y > 0, where the function REM(X, Y) is the remainder after the division of X by Y
a) using only addition and subtraction
b) using in addition doubling and halving of a number. Modify both programs in such a way that in addition "quot :=QUOT(X, Y)" will take place. (The so-called Chinese division.)
We conclude this section by an example of the (standard) circumstance in which a recurrence relation should not be translated blindly into a loop. Given two sequences of values
x[1], x[2], , x[N] and
y[1], y[2], , y[N] with N ≥ 0 ;
make a program assigning to the boolean variable named "eq" the value true if x[i] = y[i] for all i satisfying 1 ≤ i ≤ N and the value false if in that range a value for i exists such that x[i] ≠ y
[i]. (The sequence may be empty, in that case "eq" should get the value true.)
How do we define equality for sequences of length N for general N? Again by means of a recurrence relation. Let eq[i] mean "no difference occurs among the first i pairs"; the sequence of values eq[i]
is given by
for i = 0 eq[0] = true
for i > 0 eq[i] = eq[i-1] and x[i] = y[i] .
The net effect of the program to be made should be eq := eq[N].
A blind translation into initialization followed by repetition would lead to
eq := true; i := 0;
while i < N do begin i := i + 1; eq := (eq and x[i] = y[i]) end
Although the above program is correct, the following program, besides being equally correct, is on the average more efficient:
eq := true; i := 0;
while i < N and eq do begin i := i + 1; eq := (x[i] = y[i]) end .
because it terminates the repetition as soon as a difference has been found.
Exercise. Prove the correctness of the second program.
Next chapter: 6. A first example of step-wise program composition | {"url":"http://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD316.5.html","timestamp":"2014-04-19T10:48:07Z","content_type":null,"content_length":"33892","record_id":"<urn:uuid:57e3db43-b641-44eb-8b63-93d14f102782>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spatial Data Option Concepts
Oracle7 Spatial Data Option User's Guide and Reference
Library Product Contents Index
Spatial Data Option Concepts
Oracle Spatial Data Option is an integrated set of functions and procedures that enables spatial data to be stored, accessed, and analyzed quickly and efficiently in an Oracle7 database.
Spatial data represents the essential location characteristics of real or conceptual objects as those objects relate to the real or conceptual space in which they exist.
1.1 Introduction to Spatial Data
Spatial Data Option is designed to make the storage, retrieval, and manipulation of spatial data easier and more natural to users such as a Geographic Information System (GIS). Once this data is
stored in an Oracle7 relational database, it can be easily and meaningfully manipulated and retrieved as it relates to all the other data stored in the database.
A common example of spatial data can be seen in a road map. A road map is a two-dimensional object that contains points, lines, and polygons that can represent cities, roads, and political boundaries
such as states or provinces. A road map is a visualization of geographic information. The location of cities, roads, and political boundaries that exist on the surface of the Earth are projected onto
a two-dimensional display or piece of paper, preserving the relative positions and relative distances of the rendered objects.
The data that indicates the Earth location (latitude and longitude, or height and depth) of these rendered objects is the spatial data. When the map is rendered, this spatial data is used to project
the locations of the objects on a two-dimensional piece of paper. A GIS is often used to store, retrieve, and render this Earth-relative spatial data.
Other types of spatial data that can be stored using Spatial Data Option besides GIS data include data from computer-aided design (CAD) and computer-aided manufacturing (CAM) systems. Instead of
operating on objects on a geographic scale, CAD/CAM systems work on a smaller scale such as for an automobile engine or much smaller scale as for printed circuit boards.
The differences among these three systems are only in the scale of the data, not its complexity. They might all actually involve the same number of data points. On a geographic scale, the location of
a bridge can vary by a few tenths of an inch without causing any noticeable problems to the road builders. Whereas, if the diameter of an engine's pistons are off by a few tenths of an inch, the
engine will not run. A printed circuit board is likely to have many thousands of objects etched on its surface that are no bigger than the smallest detail shown on a roadbuilder's blueprints.
1.2 Geometric Types
Spatial Data Option supports three geometric primitive types and geometries composed of collections of these types. The three primitive types are:
2-D points are elements composed of two ordinates, X and Y, often corresponding to longitude and latitude. Line strings are composed of one or more pairs of points that define line segments. Polygons
are composed of connected line strings that form a closed ring and the interior of the polygon is implied. Figure 1-1 illustrates the supported geometric primitives:
Figure 1-1 Geometric Primitives
Self-crossing polygons are not supported although self-crossing line strings are. If a line string crosses itself it does not become a polygon. A self-crossing line string does not have any implied
1.3 Data Model
The Spatial Data Option data model is a hierarchical structure consisting of elements, geometries, and layers, which correspond to representations of spatial data. Layers are composed of geometries,
which in turn are made up of elements.
For example, a point might represent a building location, a line string might be a road or flight path, and a polygon could be a state, city, zoning district, or city block.
1.3.1 Element
An element is the basic building block of a geometric feature for Spatial Data Option. The supported spatial element types are points, line strings, and polygons. For example, elements might model
star constellations (point clusters), roads (line strings), and county boundaries (polygons). Each coordinate in an element is stored as an X,Y pair.
Point data^1 consists of one coordinate. Line data consists of two coordinates representing a line segment of the element. Polygon data consists of coordinate pair values, one vertex pair for each
line segment of the polygon. Coordinates are defined in either a clockwise or counter-clockwise order around the polygon. Each layer's geometric objects and their associated spatial index are stored
in the database in tables.
If an element spans more than one row, an incremental sequence number (starting at zero) orders the rows.
1.3.2 Geometry
A geometry, or geometric object, is the representation of a user's spatial feature, modeled as an ordered set of primitive elements. Each geometric object is required to be uniquely identified by a
numeric geometry identifier (GID), associating the object with its corresponding attribute set.
A complex geometric feature such as a polygon with holes would be stored as a sequence of polygon elements. In a multi-element polygonal geometry, all subelements are wholly contained within the
outermost element, thus building a more complex geometry from simpler pieces.
For example, a geometry might describe the buildable land in a town. This could be represented as a polygon with holes where water or zoning prevents construction.
1.3.3 Layer
A layer is a heterogeneous collection of geometries having the same attribute set. For example, one layer in a GIS might include topographical features, while another describes population density,
and a third describes the network of roads and bridges in the area (lines and points).
1.4 Database Structures
Spatial Data Option uses four database tables to store and index spatial data. These four tables are collectively referred to as a "layer". A template SQL script is provided to facilitate the
creation of these tables. See Section A.1.3, "crlayer.sql Script" for details.
The following tables describe the schema of a Spatial Data Option layer.
Table 1-1 <layername>_SDOLAYER
│ SDO_ORDCNT │ SDO_LEVEL │ SDO_NUMTILES │ SDO_COORDSYS │
│ <number> │ <number> │ <number> │ <varchar> │
Table 1-2 <layername>_SDODIM table or view
│ SDO_DIMNUM │ SDO_LB │ SDO_UB │ SDO_TOLERANCE │ SDO_DIMNAME │
│ <number> │ <number> │ <number> │ <number> │ <varchar> │
Table 1-3 <layername>_SDOGEOM table or view
│ SDO_GID │ SDO_ESEQ │ SDO_ETYPE │ SDO_SEQ │ SDO_X1 │ SDO_Y1 │ ... │ SDO_Xn │ SDO_Yn │
│ <number> │ <number> │ <number> │ <number> │ <number> │ <number> │ ... │ <number> │ <number> │
Table 1-4 <layername>_SDOINDEX table
│ SDO_GID │ SDO_CODE │ SDO_MAXCODE ** │ SDO_GROUPCODE ** │ SDO_META │
│ <number> │ <raw> │ <raw> │ <raw> │ <raw> │
The SDO_MAXCODE and SDO_GROUPCODE columns are not required for the recommended indexing algorithm using fixed-size tiles.
The columns of each table are defined as follows:
● SDO_DIMNUM - The SDO_DIMNUM column is the dimension to which this row refers, starting with 1 and increasing.
● SDO_LB - The SDO_LB column is the lower bound of the ordinate in this dimension. For example, if the dimension is latitude, the lower bound would be -90.
● SDO_UB - The SDO_UB column is the upper bound of the ordinate in this dimension. For example, if the dimension is longitude, the upper bound would be 180.
● SDO_TOLERANCE - The SDO_TOLERANCE column is the distance two points can be apart and still be considered the same due to round-off errors. Tolerance must be greater than zero. If you want
zero tolerance, enter a number such as 0.00005, where the number of zeroes to the right of the decimal point matches the precision of your data. The extra "5" will round up to your last
decimal digit.
● SDO_ESEQ - A geometry is composed of one or more primitive types called elements. The column SDO_ESEQ enumerates each element in a geometry, that is, the Element SEQuence number.
● SDO_ETYPE - The type of each element is recorded by the Element TYPE column. For this release of Spatial Data Option, the valid values are SDO_GEOM.POINT_TYPE, SDO_GEOM.LINESTRING_TYPE, or
SDO_GEOM.POLYGON_TYPE (ETYPE values 1, 2, and 3, respectively). Setting the ETYPE to zero (0) indicates that this element should be ignored. See Section A.2.7 for information on ETYPE 0.
● SDO_SEQ - The SDO_SEQ column records the order (the SEQuence number) of each row of data making up the element.
● SDO_GID - A Geometry IDentifier is a unique numeric identifier for each geometry in a layer. This can be thought of as a foreign key back to the <layername>_SDOGEOM table.
● SDO_CODE - The SDO_CODE column is the bit interleaved ID of a tile that covers SDO_GID. The number of bytes needed for the SDO_CODE and SDO_MAXCODE columns depends on the level used for
tiling. Use the SDO_ADMIN.SDO_CODE_SIZE() function to determine the size required for a given layer. The maximum number of bytes possible is 255.
● SDO_MAXCODE - The SDO_MAXCODE column describes a logical tile which is the smallest tile (with the longest tile ID) in the current quadrant. The SDO_MAXCODE column is SDO_CODE padded out one
place farther than the longest allowable code name for this index. This column is not used for fixed-size tiles.
● SDO_GROUPCODE - The SDO_GROUPCODE is a prefix of SDO_CODE. It represents a tile at level <layername>_SDOLAYER.SDO_LEVEL that contains or is equal to the tile represented by SDO_CODE. This
column is not used for fixed-size tiles.This column is new for release 7.3.4.
● SDO_META - The SDO_META column is not required for spatial queries. It provides information necessary to find the bounds of a tile. See Section A.2.4 for one possible usage of this column.
This column is new for release 7.3.4.
Spatial Data Option provides stored procedures that assume the existence of the layer schema as described in this section. While layer objects may contain additional columns, they are required to
contain at least the columns described here with the same column names and data types. The SDO_GID column always needs to be specified when loading or inserting geometries.
Figure 1-2 illustrates how a geometry is stored in the database using Spatial Data Option. The geometry to be stored is a complex polygon with a hole in it.
Figure 1-2 Complex Polygon
│ SDO_DIMNUM (number) │ SDO_LB (number) │ SDO_UB (number) │ SDO_TOLERANCE (number) │ SDO_DIMNAME (varchar) │
│ 1 │ 0 │ 100 │ .05 │ X axis │
│ 2 │ 0 │ 100 │ .05 │ Y axis │
│ SDO_GID (NUMBER) │ SDO_ESEQ (NUMBER) │ SDO_ETYPE (NUMBER) │ SDO_SEQ (NUMBER) │ SDO_X1 (NUMBER) │ SDO_Y1 (NUMBER) │ SDO_X2 (NUMBER) │ SDO_Y2 (NUMBER) │
│ 1013 │ 0 │ 3 │ 0 │ P1(X) │ P1(Y) │ P2(X) │ P2(Y) │
│ 1013 │ 0 │ 3 │ 1 │ P2(X) │ P2(Y) │ P3(X) │ P3(Y) │
│ 1013 │ 0 │ 3 │ 2 │ P3(X) │ P3(X) │ P4(X) │ P4(Y) │
│ 1013 │ 0 │ 3 │ 3 │ P4(X) │ P4(Y) │ P5(X) │ P5(Y) │
│ 1013 │ 0 │ 3 │ 4 │ P5(X) │ P5(Y) │ P6(X) │ P6(Y) │
│ 1013 │ 0 │ 3 │ 5 │ P6(X) │ P6(Y) │ P7(X) │ P7(Y) │
│ 1013 │ 0 │ 3 │ 6 │ P7(X) │ P7(Y) │ P8(X) │ P8(Y) │
│ 1013 │ 0 │ 3 │ 7 │ P8(X) │ P8(Y) │ P1(X) │ P1(Y) │
│ 1013 │ 1 │ 3 │ 0 │ G1(X) │ G1(Y) │ G2(X) │ G2(Y) │
│ 1013 │ 1 │ 3 │ 1 │ G2(X) │ G2(Y) │ G3(X) │ G3(Y) │
│ 1013 │ 1 │ 3 │ 2 │ G3(X) │ G3(Y) │ G4(X) │ G4(Y) │
│ 1013 │ 1 │ 3 │ 3 │ G4(X) │ G4(Y) │ G1(X) │ G1(Y) │
In this example, the <layername>_SDOGEOM table is shown as a 4-wide table. In actual usage, Spatial Data Option supports N-wide^2 tables. The coordinates for each ESEQ in this example table could
have been loaded into a single, 18-wide row containing values for SDO_X1 and SDO_Y1 through SDO_X9 and SDO_Y9.
The data in the <layername>_SDOINDEX table is described in Section 1.5, "Indexing Methods".
1.5 Indexing Methods
Spatial Data Option release 7.3.3 introduced two distinct algorithms for building a spatial index: fixed-size tiling and variable-sized tiling. Based on testing and customer feedback, for release
7.3.4, Oracle recommends using only fixed-size tiling on production systems. Variable-sized tiling, while it has theoretical advantages in some situations, is included for experimentation purposes
In spatial indexing, the object space (the layer where all geometric objects are located,) is subjected to a quad-tree decomposition called tessellation, which defines exclusive and exhaustive cover
tiles of every stored element. Spatial Data Option can use either fixed or variable-sized tiles to cover a geometry.
The number of tiles used to cover an element is a user-tunable parameter. Using either smaller fixed-size tiles or more variable-sized tiles provides a better fit of the tiles to the element. The
fewer the number of tiles or the larger the tiles, the coarser the fit.
1.5.1 Tessellation of a Layer
The process of determining which tiles cover a given element is called tessellation. The results of a tessellation process on an element are stored in the <layername>_SDOINDEX table. See Section 2.3,
"Index Creation" for more information on tessellation.
Figure 1-3 illustrates geometry 1013 decomposed to a maximum of four cover tiles. The cover tiles are then shown stored in the <layername>_SDOINDEX table.
Figure 1-3 Tessellated Figure
Only three of the four tiles generated by the first tessellation interact with the geometry. Only those tiles that interact with the geometry are stored in the <layername>_SDOINDEX table, as shown in
Table 1-5. In this example, three fixed-size tiles are used.
Table 1-5 <layername>_SDOINDEX Using Fixed-Size Tiles
│ SDO_GID <NUMBER> │ SDO_CODE <RAW> │
│ 1013 │ T0 │
│ 1013 │ T2 │
│ 1013 │ T3 │
All elements in a geometry are tessellated. In a multi-element polygon like 1013, Element 1 is covered by the redundant tile T2 from the tessellation of Element 0.
1.5.2 Fixed-Size Tile Spatial Indexing
Fixed-tile spatial indexing is the recommended indexing method. This method uses cover tiles of equal size to cover a geometry. Because all of the tiles are the same size, the standard SQL equality
operator (=) can be used to compare tiles during a join operation. This results in excellent performance characteristics.
If you select a small fixed-tile size to cover small geometries and then try to use the same sized tiles to cover a very large geometry, a large number of tiles would be required, thereby increasing
the size of the index table. However, if the fixed-tile size chosen is large, so that fewer tiles are generated in the case of a large geometry, then the index selectivity suffers because the large
tiles do not fit the small geometries very well. Figure 1-4 and Figure 1-5 illustrate the relationships between tile size, selectivity, and the number of cover tiles.
Using a small fixed-size tile as shown in Figure 1-4, selectivity is good, but a large number of tiles are needed to cover large geometries. A window query would easily identify geometries A and B,
but would reject C.
Figure 1-4 Fixed-Size Tiling with Many Small Tiles
Using a large fixed-size tile as shown in Figure 1-5, fewer tiles are needed to cover the geometries, but the selectivity is poor. A window query would likely pick up all three geometries. Any object
that shares tile T1 or T2 would identify object C as a candidate, even though the objects may be far apart, such as objects B and C are in this figure.
Use the SDO_TUNE.ESTIMATE_TILING_LEVEL() function to determine an appropriate tiling level for your dataset.
Figure 1-5 Fixed-Size Tiling with Fewer Large Tiles
1.5.3 Variable-Sized Tile Spatial Indexing
Variable-sized tile spatial indexing is not recommended for production environments. It is included primarily for experimentation purposes.
Variable-sized tile spatial indexing uses cover tiles of different sizes to approximate a geometry. The user specifies the number of tiles per object that should be used to approximate it and this
governs the tiling process. As in the case of a linear quad tree, the cover tiles depend on the size and shape of each geometry being indexed and therefore good primary filter selectivity can be
achieved. Figure 1-6 illustrates the approximation that variable-sized tiles can achieve.
In Figure 1-6, the variable-sized cover tiles conform closely to each geometry, resulting in good selectivity. The number of tiles needed to cover a geometry is controlled using the sdo_numtiles
column in the SDO_LAYER table. See Section 2.3.3 for information on selecting appropriate values for variable-sized tiling.
Figure 1-6 Variable-Sized Tile Spatial Indexing
Two geometries may interact if a tile of one object is equal to, inside of, or contains a tile of the other. Thus, the predicate to compare tiles involves a test for either equality or containment.
This is unlike fixed-size tiling, which only requires an equality check. Example 1-1 demonstrates this feature ("5" is an arbitrary window identifier):
Example 1-1
SELECT r.sdo_gid
FROM roads_sdoindex r,
window_sdoindex w
WHERE w.sdo_gid = 5
AND (r.sdo_code BETWEEN w.sdo_code AND w.sdo_maxcode OR
w.sdo_code BETWEEN r.sdo_code AND r.sdo_maxcode);
To reduce the number of times a complex predicate needs to be applied, variable-sized tile indexing uses a mechanism similar to spatial partitioning. To use this mechanism, select a tiling level,
called the groupcode level, that results in tiles larger than any variable-sized tile generated for all the geometries in the layer or dataset of interest. Each tile at the specified groupcode level
can be considered as a spatial partition. This reduces the size of the dataset on which the complex predicate is evaluated. Example 1-2 illustrates this feature:
Example 1-2
SELECT r.sdo_gid
FROM layer_sdoindex r,
window_sdoindex w
WHERE w.sdo_gid = 5
AND r.sdo_group_code = w.sdo_groupcode
AND (r.sdo_code BETWEEN w.sdo_code AND w.sdo_maxcode OR
w.sdo_code BETWEEN r.sdo_code AND r.sdo_maxcode);
In Figure 1-7, consider the domain partitioned into 16 subregions. If a join compares tiles from the two objects, under normal circumstances the join operation would process tiles from the entire
domain, searching for tiles that interact. However, if you constrain the processing to common partitions, then only partitions 5 and 6 would need to be processed. This may result in substantial
performance improvements.
Figure 1-7 Spatially Partitioning Data
1.6 Partitioned Point Data
Spatial Data Option has undergone an architectural change, beginning with the 7.3.3 release. A reliance on partitioned tables has changed to utilize improved spatial indexing capable of handling
complex geometries. However, for handling very large amounts (tens of gigabytes) of purely point data, keeping that data in partitioned tables may be more efficient than using the new spatial
indexing scheme.
Table partitioning and spatial indexing are two very different techniques. While both are important to their respective users, this manual emphasizes and recommends the spatial indexing capabilities
of Spatial Data Option.
See Chapter 4, "Partitioned Point Data" for a brief overview of partitioned point data.
^1 Point data can also be stored in a partitioned table. See Chapter 4, "Partitioned Point Data" for details.
^2 A <layername>_SDOGEOM table can have up to 255 columns. The maximum number of data columns is 255, minus 4 for the other required spatial columns, and minus any other user-defined columns. For
polygon and linestrings, storing 16 to 20 ordinates per row is suggested, but not required.
Prev Next Copyright © 1997 Oracle Corporation. Library Product Contents Index
All Rights Reserved. | {"url":"http://docs.oracle.com/cd/A57673_01/DOC/server/doc/A48124/sdo_intr.htm","timestamp":"2014-04-20T21:52:42Z","content_type":null,"content_length":"57525","record_id":"<urn:uuid:7bd58a08-810a-42de-986a-ba37434cb8ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practical Session 1
Below is an email I sent to the instructor giving him a report of the session last Wednesday. I’m reluctant to write something from scratch and I think this best summarizes what we did that day. I
really like the group that I have this semester.
The students collaborated effectively today. Each group was on task and following the schedule. Only one student was absent out of the 19 students enrolled. The students did not have any questions
for me during the question and answer period. I waited 2 minutes in silence while students shuffled around their notes. No one came forward with anything. I told them that this period was their only
chance to ask me about problems and told them to come prepared for next class. I worked problem 68 [large power saw] from the suggested problems on the board. We quickly discussed my solution (which
was somewhat long) and a student in the back suggested an easier way to solve the problem by taking the ratios of intensities instead of explicitly finding the power.
When we started the activity session I had to rearrange the groups. They sat roughly in groups of 4 and they were reluctant to move. I told them not to argue with me and they grouped themselves into
6 groups of 3. It was clear that the students knew what was expected of them in their groups. The group quizzes ran smoothly. I had 2 groups volunteer to present their solutions. The first group
received a 3 because their presentation was really quick and their collaboration while writing out their solution was one sided since one of the three students had already worked the problem herself.
Their solution was good and I asked the rest of the class to comment on what they liked about their solution, what could have been improved and any other remarks. The second group received a 4. Their
solution was fantastic, they made good use of their time during the work period and the 5 minute set up. They evenly distributed speaking roles and board work and were able to address student
questions. Again, I asked the class to comment on their performance.
The activities worked:
I had 6 groups.
All groups worked 1, 2 and 4.
5/6 groups worked 5, 6 and 7.
4/6 groups worked 3.
The students asked me questions about activities 2, 3 and 5. | {"url":"http://martinstheorem.tumblr.com/tagged/teaching","timestamp":"2014-04-17T21:26:13Z","content_type":null,"content_length":"63687","record_id":"<urn:uuid:023a48ec-e667-4ca6-8878-79d1b277d771>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Philosophy, Technology and Math
The sum of consecutive cubes is a square
Summing up some cubes I noticed that 1^3 + 2^3 + 3^3 + 4^3 = 100, which is 10^2. And the pattern held as I added more cubes (i.e., I kept getting squares). This is well known in the world of
mathematics and number theory, but it was new to me. Extending the pattern:
And that makes these triangular numbers. And that means that every triangular number squared is the sum of cubes. It works both way in being interesting. | {"url":"http://www.mrrives.com/Technology/?p=817","timestamp":"2014-04-21T04:35:11Z","content_type":null,"content_length":"11402","record_id":"<urn:uuid:00e2f2b2-22f8-4f64-aeea-53eb93bd5b74>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
Petaluma Trigonometry Tutors
...I finished an MS in chemistry in June of 2010, and have been working as a Teacher's assistant and tutor since then. During this program I taught chem lab to groups of 15 to 20 students, and led
review sessions to over 70 students at a time. I now specialize in tutoring inorganic, organic, and biochemistry at the university level, as well as high school and middle school level chemistry.
50 Subjects: including trigonometry, English, chemistry, reading
...The concepts of Linear Algebra are at the heart of (1) numerical methods (used to develop and evaluate solution techniques that are used by computers to solve large numerical systems such as
finite element analyses), (2) numerical solutions of overdetermined systems (i.e., "least squares" which a...
13 Subjects: including trigonometry, calculus, physics, geometry
...Wondering just what complex numbers are good for? Beginning to think these subjects are just too hard, that you will never ?get it?? Don?t worry, I can help. Most people can master science and
math subjects with a bit of help ? they just need motivation, explanations from someone familiar with the material, and a bit of practice.
37 Subjects: including trigonometry, reading, English, physics
...Since repetition is very important in studying math, we work on extra problems either from the book or that I offer myself. The student should work on as much of the current homework before a
session as possible. This allows us to focus on concepts or specific problems that the student is having a problem with.
12 Subjects: including trigonometry, calculus, statistics, geometry
...The coolest part about that kind of math? For me the coolest part about that kind of math is explaining it to non-math people. Non-math people are a gift to someone like me because they force
me to reexamine my own way of looking at math and to come up with new, different and innovative ways of explaining it.
11 Subjects: including trigonometry, calculus, geometry, algebra 1 | {"url":"http://www.algebrahelp.com/Petaluma_trigonometry_tutors.jsp","timestamp":"2014-04-19T17:10:34Z","content_type":null,"content_length":"25264","record_id":"<urn:uuid:84d4765c-5566-46d5-ac97-66500d55a904>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
A New Goodness-Of-Fit Statistical Test
Bruno Apolloni and Simone Bassis
Intelligent Decision Technologies Volume 1, , 2007.
We introduce a new concept of nonparametric test for statistically deciding if a model fits a sample of data well. The employed statistic is the empirical cumulative distribution (e.c.d.f.) of the
measure of the blocks determined by the ordered sample. For any distribution law underlying the data this statistic is distributed around a Beta cumulative distribution law (c.d.f.) so that the shift
between the two curves is the statistic at the basis of the test. Its distribution is computed through a new bootstrap procedure from a population of free parameters of the model that are \emph
{compatible} with the sampled data according to the model. Closing the loop, we may expect that if the model fits the data well the Beta c.d.f. constitutes a template for the block e.c.d.f.s that are
compatible with the observed data. In the paper we show how to appreciate the template functionality in the case of a good fit and also how to discriminate bad models. We show the test's potential in
comparison to conventional tests, both in case studies and in a well-known benchmark for the semiparametric logistic model used widely in database analysis. | {"url":"http://eprints.pascal-network.org/archive/00003656/","timestamp":"2014-04-19T01:48:56Z","content_type":null,"content_length":"7067","record_id":"<urn:uuid:9494e4ff-18e4-41b5-bca8-78dd2bebb0be>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Proof "from the book"
Arnon Avron aa at tau.ac.il
Tue Aug 31 04:33:39 EDT 2004
> Arnon Avron says:
> >A Godel sentence (NOT "the"
> >Godel sentence) for a consistent extension of Q is true, and
> >Godel's proof does show this
> Godel's proof does indeed show the implication "if S is consistent then
> G is true", which is provable in S itself. I don't think you mean to
> say that if S is consistent, Godel's proof shows G to be true.
That "if S is consistent then G is true" is provable not only in S (which
might prove false sentences), but also in PA (which proves only true
sentences). Hence I do say that if S is a (formal) consistent extension of Q
then Godel's proof shows G to be true.
One more note: I assume here of course that an absolute notion of
truth exists for some propositions concerning finite objects like natural
numbers or proofs in a formal system. Without assuming this, it makes
no sense to talk about what Godel's proof proves (intuitionists will
object perhaps, but I admit that I was never able to understand how
can they coherently believe in what they claim they believe). Needless
to say, all theorems of PA (to say nothing about Q) are TRUE.
Arnon Avron
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2004-August/008433.html","timestamp":"2014-04-16T19:31:39Z","content_type":null,"content_length":"3798","record_id":"<urn:uuid:2d6780d1-6b75-4953-932d-98a3c3a22dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gamma-Ray Bursts - Apparantly, They Are Absolutely Bright
IV. Activities
1. Apparently, They Are Absolutely Bright
At a press conference discussing the event GRB 990123, Dr. Chryssa Kouveliotou of Universities Space Research Association at the Marshall Space Flight Center said "If the burst had occurred somewhere
in our galactic neighborhood, it would have been so bright that night would've turned into day." What exactly does this tell us about how bright it was?
Astronomers express the brightness of stars in visible light in two related forms. Apparent visible magnitude, mv, measures the light that reaches us on Earth. This measure, however, is not a true
measure of brightness because distance makes things appear dimmer and apparent magnitude does not correct for this effect. Absolute magnitude, Mv, is a true measure of how much light an object is
actually producing. Determining how bright the star (or galaxy or any other emitter) would be if it were located 32.6 light-years (10 parsecs) away compensates for the effect of distance. Note that
here we assume space is completely transparent in all directions, so only distance affects what we detect. If clouds of dust intervene between the emitter and us (as they usually do), we have to
compensate for that effect too!
Electromagnetic radiation (regardless of whether it is in the form of radio waves, infrared, or gamma-rays) has a common property called the inverse square law. This law states that the amount of
energy that is measured by a given detector put at a given distance from an emitter is proportional to the inverse square of the distance from the emitter to the detector. Think about it this way. An
emitter E is sitting a distance R away from a detector D. The emitter is radiating equally in all directions. Place an imaginary sphere of radius R around the emitter. The emitter releases a certain
amount of energy in 1 second. This energy travels outward in all directions such that in a time T, it reaches the surface of the imaginary sphere a distance R away. This means that now, the original
energy, let us call this amount O, is spread out equally over the surface of a sphere of radius R. The surface area of a sphere of radius R is equal to 4 pi R^2. Thus, the amount of energy passing
through each square centimeter of the sphere is O/4piR^2, if R is measured in centimeters. We see, then, that the amount of energy passing through a unit area decreases with the square of the
distance from the source. This is the inverse square law of light propagation.
It is important to realize that we now have something to consider when we analyze our observations of the Universe: an object may appear bright because it really is, or it may be bright because it is
close by. Conversely, an object may appear dim because it really is, or else it could be just very far away. Such thinking played an important role in the history of understanding GRBs. When they
were first detected and their enormous energies calculated, it was believed that they had to be located in our Galaxy. The amount of energy they would be required to produce if they were very far
away was just too difficult to seriously consider. Now, however, we know that in fact they are at cosmological distances - that is, very far away indeed. Scientists are still working hard to
understand how the enormous energy required for us to be able to detect them from so far away (with the energy falling off as 1/R^2) is created. Beaming, which means the energy is not emitted equally
in all directions, but instead in a narrowly defined, preferred direction is the most probable answer.
It is easy to demonstrate the fall off of light in your classroom with a graphing calculator, Calculator-Based Laboratory (CBL ), and light probe. In the exercise, you will measure the intensity (or
brightness) of a light as it is moved away from the light probe of the CBL. The resulting data can be graphed and analyzed. For detailed discussions of an activity of this type, see
Real-World Math with the CBL System, Activity 7, Light at a Distance
Exploring Physics and Math with the CBL System, Activity 43, Intensity of Light
Physical Science with CBL , Experiment 25, How Bright is the Light?
In short, place a 40W (or less) bulb in a shadeless lamp or socket. Put a meter stick at a known distance about 2 meters away from the bulb and on the same level. Place the light probe next to the
end of the meter stick closest to the bulb. Make sure nothing obstructs the path between the two. Darken the room. Run the appropriate program on the TI-83 (either BULB, LIGHT, or PHYSCI) and follow
the directions it gives you. Make a measurement, then move the probe such that the distance to the bulb increases about 10 cm. Repeat until you have 10 measurements. Plot the intensity values you
measured as a function of distance.
What form do your data take? Linear? Power Law? Quadratic?
Now fit your data using your graphing calculator. What happens to the shape of the line if the fitting parameters become larger? Smaller?
Perform the experiment again with either a brighter or dimmer bulb. Consider also taking data at different distances. You should now be able to discuss the following:
When an astronomer measures the brightness of an object in the Universe, what sort of conclusions can be made about the energy being emitted by that object? What additional information would help
the astronomer?
Investigate the magnitude scale in depth -- learn about Norman R. Pogson; learn about the mathematical equation that relates the magnitude values and fluxes of different objects; learn about where
our Sun ranks on the magnitude scale.
Investigate what astronomers call the distance modulus and use it to determine how bright a star would be on the magnitude scale as you move it closer or further away from Earth. What effects would
intervening dust have on this calculation?
Download a pdf version. | {"url":"http://imagine.gsfc.nasa.gov/docs/teachers/gammaraybursts/imagine/page10.html","timestamp":"2014-04-21T02:03:10Z","content_type":null,"content_length":"19287","record_id":"<urn:uuid:1e004dbe-cc9b-45cb-b54c-68fb6c9ecb67>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
A new method for source separation
- IN IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING , 1996
"... Blind separation of independent sources from their convolutive mixtures is a problem in many real world multi-sensor applications. In this paper we present a solution to this problem based on
the information maximization principle, which was recently proposed by Bell and Sejnowski for the case of bl ..."
Cited by 93 (1 self)
Add to MetaCart
Blind separation of independent sources from their convolutive mixtures is a problem in many real world multi-sensor applications. In this paper we present a solution to this problem based on the
information maximization principle, which was recently proposed by Bell and Sejnowski for the case of blind separation of instantaneous mixtures. We present a feedback network architecture capable of
coping with convolutive mixtures, and we derive the adaptation equations for the adaptive filters in the network by maximizing the information transferred through the network. Examples using speech
signals are presented to illustrate the algorithm.
, 1996
"... A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in
traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and mat ..."
Cited by 74 (0 self)
Add to MetaCart
A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in
traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel /multipath problems. Using abstract algebra/group
theoretic concepts, information theoretic principles, and the Bussgang property, methods of single channel filtering and source separation of multipath mixtures are merged into a general FIR matrix
framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feed-forward memory
connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of
FIR polynomial matrix alg...
, 2005
"... Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is
accomplished by taking into account the structure of the mixing process and by making assumptions about the sour ..."
Cited by 35 (1 self)
Add to MetaCart
Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is
accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem
is called ‘blind’. By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation
problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear
mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks
into the literature along the way.
"... In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy, wherein many of the existing algorithms can be
organized, and we present published results from those algorithms that have been applied to real-world audio ..."
Cited by 23 (0 self)
Add to MetaCart
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy, wherein many of the existing algorithms can be
organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
- In Proceedings of the VIII European Signal Processing Conference , 1996
"... this paper is a blockmethod based on second-order statistics of the measurement data only. The parameters of the inverse filter are to be found such that the resulting filtered output signals y
1 (t) and y 2 (t) have zero cross-covariance function. Assuming a certain filter structure, the resulting ..."
Cited by 4 (3 self)
Add to MetaCart
this paper is a blockmethod based on second-order statistics of the measurement data only. The parameters of the inverse filter are to be found such that the resulting filtered output signals y 1 (t)
and y 2 (t) have zero cross-covariance function. Assuming a certain filter structure, the resulting conditions take the form of bilinear equations. The usual approach at this point is to set up a
"... In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy, wherein many of the existing algorithms can be
organized, and we present published results from those algorithms that have been applied to real-world audio ..."
Cited by 1 (0 self)
Add to MetaCart
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy, wherein many of the existing algorithms can be
organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks. 1.
- in MILCOM’95 , 1995
"... A method for whitening a polynomial matrix is described, including the calculation of the eigenvalue polynomials and eigenvector polynomials of an FIR polynomial matrix. The multichannel blind
deconvolution problem is briefly described and FIR polynomial matrix whitening is applied to the problem. B ..."
Cited by 1 (0 self)
Add to MetaCart
A method for whitening a polynomial matrix is described, including the calculation of the eigenvalue polynomials and eigenvector polynomials of an FIR polynomial matrix. The multichannel blind
deconvolution problem is briefly described and FIR polynomial matrix whitening is applied to the problem. Benefits of the whitening technique are demonstrated through simulation. Data prewhitening or
the use of an exact least squares adaptation is necessary in any problem of moderate complexity. The group theoretic aspects of FIR polynomial matrix algebra are discussed. 1. INTRODUCTION AND
MOTIVATION A method for whitening a multichannel linear system is presented. Multiple input and multiple output linear systems are considered. A two input and two output system would be written as H
= h 11 h 21 h 12 h 22 : (1) The h ij 's are FIR filters which each represent an acoustic multi-path transfer function from source i to sensor j. Referring to Figure 1, a two-sensor, two-source
problem can... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2775187","timestamp":"2014-04-17T14:04:40Z","content_type":null,"content_length":"29314","record_id":"<urn:uuid:6b4a1765-0281-454f-ba3b-618d271536c5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimal Seating Arrangements
Date: 07/20/2000 at 11:03:38
From: Ben Schwartz
Subject: Optimal Solution for a Requested Seating Arrangement
My mother encountered this problem when she was planning large
parties. N people are invited, and their invitations ask them to RSVP
with the names of up to k people they would like to sit with. Is there
a formula or method that can be applied here, given a table size s, to
yield the "best" arrangement of people?
Date: 07/20/2000 at 16:31:48
From: Doctor Rob
Subject: Re: Optimal Solution for a Requested Seating Arrangement
Thanks for writing to Ask Dr. Math, Ben.
There is no formula or algebraic method to figure out the best
arrangement. The way to proceed is to do what is called a discrete
hill climb.
I assume that N is a multiple of s, so there are no empty places at
any of the tables.
Start with a random arrangement of people at the tables. Compute a
score for this arrangement. I would probably make the score be the sum
over all individuals of the number of people at his table that he
requested divided by the total number he requested. That would
represent the fraction of his request that was accommodated, and could
be a measure of his satisfaction. If he got 1 of the 3 people he
requested, then he is 1/3 satisfied. The score would be the total of
these satisfaction fractions over all people. (Other score functions
are also possible, and maybe even desirable.)
Then try all possible swaps of two people from the current
arrangement, score those outcomes, and do the one (or one of those)
that gave the largest increase in the score. Continue this over and
over until the score can't increase any more. Record the setup and its
Choose a new random starting arrangement, and repeat this process.
After several hundred such choices, pick the arrangement you have
recorded with the highest score.
This may not get you the theoretically best arrangement, but it will
be close to the best.
Note that you needn't recompute the entire score function, only the
terms that involve the two swapped individuals.
Programming this is not too tough. You could do it in any language
with which you are familiar: C, C++, FORTRAN, Mathematica, or even
BASIC. The program will be a bit long, but nothing tricky is involved.
- Doctor Rob, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/54294.html","timestamp":"2014-04-21T15:37:23Z","content_type":null,"content_length":"7422","record_id":"<urn:uuid:48adce44-eb08-46d4-884c-9e931b455eec>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Mathematica Journal: Volume 9, Issue 2: p-adic Arithmetic
p-adic Arithmetic
Function Expansion
For the space Mahler basis, consisting of the polynomials Vanderput basis, consisting of locally constant functions
Here p-adic expansion
In this section of the package we implement these expansions as Mahler[f,x,n] and Vanderput[f,x,n,p], which calculate the first
Here are some examples. | {"url":"http://www.mathematica-journal.com/issue/v9i2/contents/Padic/Padic_6.html","timestamp":"2014-04-17T01:46:58Z","content_type":null,"content_length":"8737","record_id":"<urn:uuid:36dddbf4-f71f-4c69-937b-c70c01960814>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Scipy-tickets] [SciPy] #1546: stats: test_continuous_basic has broken test
[Scipy-tickets] [SciPy] #1546: stats: test_continuous_basic has broken test
SciPy Trac scipy-tickets@scipy....
Sun Oct 30 14:18:06 CDT 2011
#1546: stats: test_continuous_basic has broken test
Reporter: josefpktd | Owner: somebody
Type: defect | Status: new
Priority: normal | Milestone: Unscheduled
Component: scipy.stats | Version: 0.9.0
Keywords: |
Comment(by josefpktd):
This is what the test is supposed to do, I don't think we have this test
in scipy.stats
sample variance on top (check ddof of variance, not that it matters at
large sample sized)
I think the distribution should be correct also for large samples if the
distribution is not normal, (central limit theorem?).
Ticket URL: <http://projects.scipy.org/scipy/ticket/1546#comment:1>
SciPy <http://www.scipy.org>
SciPy is open-source software for mathematics, science, and engineering.
More information about the Scipy-tickets mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-tickets/2011-October/004707.html","timestamp":"2014-04-21T04:42:21Z","content_type":null,"content_length":"4250","record_id":"<urn:uuid:9933e968-7d25-45d3-be27-b5da43961bfd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Beginner's Guide to Reasoning about Quantification in ACL2
Major Section: DEFUN-SK
The initial version of this tutorial was written by Sandip Ray. Additions and revisions are welcome. Sandip has said:
``This is a collection of notes that I wrote to remind myself of how to reason about quantifiers when I just started. Most users after they have gotten the hang of quantifiers probably will not
need this and will be able to use their intuitions to guide them in the process. But since many ACL2 users are not used to quantification, I am hoping that this set of notes might help them to
think clearly while reasoning about quantifiers in ACL2.''
Many ACL2 papers start with the sentence ``ACL2 is a quantifier-free first-order logic of recursive functions.'' It is true that the syntax of ACL2 is quantifier-free; every formula is assumed to be
universally quantified over all free variables in the formula. But the logic in fact does afford arbitrary first-order quantification. This is obtained in ACL2 using a construct called defun-sk. See
Many ACL2 users do not think in terms of quantifiers. The focus is almost always on defining recursive functions and reasoning about them using induction. That is entirely justified, in fact, since
proving theorems about recursive functions by induction plays to the strengths of the theorem prover. Nevertheless there are situations where it is reasonable and often useful to think in terms of
quantifiers. However, reasoning about quantifiers requires that you get into the mindset of thinking about theorems in terms of quantification. This note is about how to do this effectively given
ACL2's implementation of quantification. This does not discuss defun-sk in detail, but merely shows some examples. A detailed explanation of the implementation is in the ACL2 documentation (see
defun-sk); also see conservativity-of-defchoose.
[Note: Quantifiers can be used for some pretty cool things in ACL2. Perhaps the most interesting example is the way of using quantifiers to introduce arbitrary tail-recursive equations; see the paper
``Partial Functions in ACL2'' by Panagiotis Manolios and J Strother Moore. This note does not address applications of quantifiers, but merely how you would reason about them once you think you want
to use them.]
Assume that you have some function P. I have just left P as a unary function stub below, since I do not care about what P is.
(defstub P (*) => *)
Now suppose you want to specify the concept that ``there exists some x such that (P x) holds''. ACL2 allows you to write that directly using quantifiers.
(defun-sk exists-P () (exists x (P x)))
If you submit the above form in ACL2 you will see that the theorem prover specifies two functions exists-p and exists-p-witness, and exports the following constraints:
1. (defun exists-P () (P (exists-P-witness)))
2. (defthm exists-P-suff (implies (p x) (exists-p)))
Here exists-P-witness is a new function symbol in the current ACL2 theory. What do the constraints above say? Notice the constraint exists-p-suff. It says that if you can provide any x such that (P
x) holds, then you know that exists-p holds. Think of the other constraint (definition of exists-p) as going the other way. That is, it says that if exists-p holds, then there is some x, call it
(exists-p-witness), for which P holds. Notice that nothing else is known about exists-p-witness than the two constraints above.
[Note: exists-p-witness above is actually defined in ACL2 using a special form called defchoose. See defchoose. This note does not talk about defchoose. So far as this note is concerned, think of
exists-p-witness as a new function symbol that has been generated somehow in ACL2, about which nothing other than the two facts above is known.]
Similarly, you can talk about the concept that ``for all x (P x) holds.'' This can be specified in ACL2 by the form:
(defun-sk forall-P () (forall x (P x)))
This produces the following two constraints:
1. (defun forall-P () (P (forall-p-witness)))
2. (defthm forall-p-necc (implies (not (P x)) (not (forall-p))))
To understand these, think of for-all-p-witness as producing some x which does not satisfy P, if such a thing exists. The constraint forall-p-necc merely says that if forall-p holds then P is
satisfied for every x. (To see this more clearly, just think of the contrapositive of the formula shown.) The other constraint (definition of forall-p) implies that if forall-p does not hold then
there is some x, call it (forall-p-witness), which does not satisfy P. To see this, just consider the following formula which is immediately derivable from the definition.
(implies (not (forall-p)) (not (P (forall-witness))))
The description above suggests that to reason about quantifiers, the following Rules of Thumb, familiar to most any student of logic, are useful.
RT1: To prove (exists-p), construct some object A such that P holds for A and then use exists-P-suff.
RT2: If you assume exists-P in your hypothesis, use the definition of exists-p to know that P holds for exists-p-witness. To use this to prove a theorem, you must be able to derive the theorem
based on the hypothesis that P holds for something, whatever the something is.
RT3: To prove forall-P, prove the theorem (P x) (that is, that P holds for an arbitrary x), and then simply instantiate the definition of forall-p, that is, show that P holds for the witness.
RT4: If you assume forall-p in the hypothesis of the theorem, see how you can prove your conclusion if indeed you were given (P x) as a theorem. Possibly for the conclusion to hold, you needed
that P holds for some specific set of x values. Then use the theorem forall-p-necc by instantiating it for the specific x values you care about.
Perhaps the above is too terse. In the remainder of the note, we will consider several examples of how this is done to prove theorems in ACL2 that involve quantified notions.
Let us consider two trivial theorems. Assume that for some unary function r, you have proved (r x) as a theorem. Let us see how you can prove that (1) there exists some x such that (r x) holds, and
(2) for all x (r x) holds.
We first model these things using defun-sk. Below, r is simply some function for which (r x) is a theorem.
(((r *) => *))
(local (defun r (x) (declare (ignore x)) t))
(defthm r-holds (r x)))
(defun-sk exists-r () (exists x (r x)))
(defun-sk forall-r () (forall x (r x)))
ACL2 does not have too much reasoning support for quantifiers. So in most cases, one would need :use hints to reason about quantifiers. In order to apply :use hints, it is preferable to keep the
function definitions and theorems disabled.
(in-theory (disable exists-r exists-r-suff forall-r forall-r-necc))
Let us now prove that there is some x such that (r x) holds. Since we want to prove exists-r, we must use exists-r-suff by RT1. We do not need to construct any instance here since r holds for all x
by the theorem above.
(defthm exists-r-holds
:hints (("Goal" :use ((:instance exists-r-suff)))))
Let us now prove the theorem that for all x, (r x) holds. By RT3, we must be able to prove it by definition of forall-r.
(defthm forall-r-holds
:hints (("Goal" :use ((:instance (:definition forall-r))))))
[Note: Probably no ACL2 user in his or her right mind would prove the theorems exists-r-holds and forall-r-holds above. The theorems shown are only for demonstration purposes.]
For the remainder of this note we will assume that we have two stubbed out unary functions M and N, and we will look at proving some quantified properties of these functions.
(defstub M (*) => *)
(defstub N (*) => *)
Let us now define the predicates all-M, all-N, ex-M, and ex-N specifying the various quantifications.
(defun-sk all-M () (forall x (M x)))
(defun-sk all-N () (forall x (N x)))
(defun-sk some-M () (exists x (M x)))
(defun-sk some-N () (exists x (N x)))
(in-theory (disable all-M all-N all-M-necc all-N-necc))
(in-theory (disable some-M some-N some-M-suff some-N-suff))
Let us prove the classic distributive properties of quantification: the distributivity of universal quantification over conjunction, and the distributivity of existential quantification over
disjunction. We can state these properties informally in ``pseudo ACL2'' notation as follows:
1. (exists x: (M x)) or (exists x: (N x)) <=> (exists x: (M x) or (N x))
2. (forall x: (M x)) and (forall: x (N x)) <=> (forall x: (M x) and (N x))
To make these notions formal we of course need to define the formulas at the right-hand sides of 1 and 2. So we define some-MN and all-MN to capture these concepts.
(defun-sk some-MN () (exists x (or (M x) (N x))))
(defun-sk all-MN () (forall x (and (M x) (N x))))
(in-theory (disable all-MN all-MN-necc some-MN some-MN-suff))
First consider proving property 1. The formal statement of this theorem would be: (iff (some-MN) (or (some-M) (some-N))).
How do we prove this theorem? Looking at RT1-RT4 above, note that they suggest how one should reason about quantification when one has an ``implication''. But here we have an ``equivalence''. This
suggests another rule of thumb.
RT5: Whenever possible, prove an equivalence involving quantifiers by proving two implications.
Let us apply RT5 to prove the theorems above. So we will first prove: (implies (some-MN) (or (some-M) (some-N)))
How can we prove this? This involves assuming a quantified predicate (some-MN), so we must use RT2 and apply the definition of some-MN. Since the conclusion involves a disjunction of two quantified
predicates, by RT1 we must be able to construct two objects A and B such that either M holds for A or N holds for B, so that we can then invoke some-M-suff and some-N-suff to prove the conclusion.
But now notice that if some-MN is true, then there is already an object, in fact some-MN-witness, such that either M holds for it, or N holds for it. And we know this is the case from the definition
of some-MN! So we will simply prove the theorem instantiating some-M-suff and some-N-suff with this witness. The conclusion is that the following event will go through with ACL2.
(defthm le1
(implies (some-MN)
(or (some-M) (some-N)))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition some-MN))
(:instance some-M-suff
(x (some-MN-witness)))
(:instance some-N-suff
(x (some-MN-witness)))))))
This also suggests the following rule of thumb:
RT6: If a conjecture involves assuming an existentially quantified predicate in the hypothesis from which you are trying to prove an existentially quantified predicate, use the witness of the
existential quantification in the hypothesis to construct the witness for the existential quantification in the conclusion.
Let us now try to prove the converse of le1, that is: (implies (or (some-M) (some-N)) (some-MN))
Since the hypothesis is a disjunction, we will just prove each case individually instead of proving the theorem by a :cases hint. So we prove the following two lemmas.
(defthm le2
(implies (some-M) (some-MN))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition some-M))
(:instance some-MN-suff
(x (some-M-witness)))))))
(defthm le3
(implies (some-N) (some-MN))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition some-N))
(:instance some-MN-suff
(x (some-N-witness)))))))
Note that the hints above are simply applications of RT6 as in le1. With these lemmas, of course the main theorem is trivial.
(defthmd |some disjunction|
(iff (some-MN) (or (some-M) (some-N)))
:hints (("Goal"
:use ((:instance le1)
(:instance le2)
(:instance le3)))))
Let us now prove the distributivity of universal quantification over conjunction, that is, the formula: (iff (all-MN) (and (all-M) (all-N)))
Applying RT5, we will again decompose this into two implications. So consider first the one-way implication: (implies (and (all-M) (all-N)) (all-MN)).
Here we get to assume all-M and all-N. Thus by RT4 we can use all-M-necc and all-N-necc to think as if we are given the formulas (M x) and (N x) as theorems. The conclusion here is also a universal
quantification, namely we have to prove all-MN. Then RT3 tells us to proceed as follows. Take any object y. Try to find an instantiation z of the hypothesis that implies (and (M y) (N y)). Then
instantiate y with all-MN-witness. Note that the hypothesis lets us assume (M x) and (N x) to be theorems. Thus to justify we need to instantiate x with y, and in this case, therefore, with
all-MN-witness. To make the long story short, the following event goes through with ACL2:
(defthm lf1
(implies (and (all-M) (all-N))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition all-MN))
(:instance all-M-necc (x (all-MN-witness)))
(:instance all-N-necc (x (all-MN-witness)))))))
This suggests the following rule of thumb which is a dual of RT6:
RT7: If a conjecture assumes some universally quantified predicate in the hypothesis and its conclusion asserts a universallly quantified predicate, then instantiate the ``necessary condition''
(forall-mn-necc) of the hypothesis with the witness of the conclusion to prove the conjecture.
Applying RT7 now we can easily prove the other theorems that we need to show that universal quantification distributes over conjunction. Let us just go through this motion in ACL2.
(defthm lf2
(implies (all-MN)
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition all-M))
(:instance all-MN-necc
(x (all-M-witness)))))))
(defthm lf3
(implies (all-MN)
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition all-N))
(:instance all-MN-necc
(x (all-N-witness)))))))
(defthmd |all conjunction|
(iff (all-MN)
(and (all-M) (all-N)))
:hints (("Goal" :use ((:instance lf1)
(:instance lf2)
(:instance lf3)))))
The rules of thumb for universal and existential quantification should make you realize the duality of their use. Every reasoning method about universal quantification can be cast as a way of
reasoning about existential quantification, and vice versa. Whether you reason using universal and existential quantifiers depends on what is natural in a particular context. But just for the sake of
completeness let us prove the duality of universal and existential quantifiers. So what we want to prove is the following:
3. (forall x (not (M x))) = (not (exists x (M x)))
We first formalize the notion of (forall x (not (M x))) as a quantification.
(defun-sk none-M () (forall x (not (M x))))
(in-theory (disable none-M none-M-necc))
So we now want to prove: (equal (none-M) (not (some-M))).
As before, we should prove this as a pair of implications. So let us prove first: (implies (none-M) (not (some-M))).
This may seem to assert an existential quantification in the conclusion, but rather, it asserts the negation of an existential quantification. We are now trying to prove that something does not
exist. How do we do that? We can show that nothing satisfies M by just showing that (some-M-witness) does not satisfy M. This suggests the following rule of thumb:
RT8: When you encounter the negation of an existential quantification think in terms of a universal quantification, and vice-versa.
Ok, so now applying RT8 and RT3 you should be trying to apply the definition of some-M. The hypothesis is just a pure (non-negated) universal quantification so you should apply RT4. A blind
application lets us prove the theorem as below.
(defthm nl1
(implies (none-M) (not (some-M)))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition some-M))
(:instance none-M-necc (x (some-M-witness)))))))
How about the converse implication? I have deliberately written it as (implies (not (none-M)) (some-M)) instead of switching the left-hand and right-hand sides of nl1, which would have been
equivalent. Again, RH8 tells us how to reason about it, in this case using RH2, and we succeed.
(defthm nl2
(implies (not (none-M)) (some-M))
:rule-classes nil
:hints (("Goal"
:use ((:instance (:definition none-M))
(:instance some-M-suff (x (none-M-witness)))))))
So finally we just go through the motions of proving the equality.
(defthmd |forall not = not exists|
(equal (none-M) (not (some-M)))
:hints (("Goal"
:use ((:instance nl1)
(:instance nl2)))))
Let us now see if we can prove a slightly more advanced theorem which can be stated informally as: If there is a natural number x which satisfies M, then there is a least natural number y that
satisfies M.
[Note: Any time I have had to reason about existential quantification I have had to do this particular style of reasoning and state that if there is an object satisfying a predicate, then there is
also a ``minimal'' object satisfying the predicate.]
Let us formalize this concept. We first define the concept of existence of a natural number satisfying x.
(defun-sk some-nat-M () (exists x (and (natp x) (M x))))
(in-theory (disable some-nat-M some-nat-M-suff))
We now talk about what it means to say that x is the least number satisfying M.
(defun-sk none-below (y)
(forall r (implies (and (natp r) (< r y)) (not (M r))))))
(in-theory (disable none-below none-below-necc))
(defun-sk min-M () (exists y (and (M y) (natp y) (none-below y))))
(in-theory (disable min-M min-M-suff))
The predicate none-below says that no natural number less than y satisfies M. The predicate min-M says that there is some natural number y satisfying M such that none-below holds for y.
So the formula we want to prove is: (implies (some-nat-M) (min-M)).
Since the formula requires that we prove an existential quantification, RT1 tells us to construct some object satisfying the predicate over which we are quantifying. We should then be able to
instantiate min-M-suff with this object. That predicate says that the object must be the least natural number that satisfies M. Since such an object is uniquely computable if we know that there
exists some natural number satisfying M, let us just write a recursive function to compute it. This function is least-M below.
(defun least-M-aux (i bound)
(declare (xargs :measure (nfix (- (1+ bound) i))))
(cond ((or (not (natp i))
(not (natp bound))
(> i bound))
((M i) i)
(t (least-M-aux (+ i 1) bound))))
(defun least-M (bound) (least-M-aux 0 bound))
Let us now reason about this function as one does typically. So we prove that this object is indeed the least natural number that satisfies M, assuming that bound is a natural number that satisfies
(defthm least-aux-produces-an-M
(implies (and (natp i)
(natp bound)
(<= i bound)
(M bound))
(M (least-M-aux i bound))))
(defthm least-<=bound
(implies (<= 0 bound)
(<= (least-M-aux i bound) bound)))
(defthm least-aux-produces-least
(implies (and (natp i)
(natp j)
(natp bound)
(<= i j)
(<= j bound)
(M j))
(<= (least-M-aux i bound) j)))
(defthm least-aux-produces-natp
(natp (least-M-aux i bound)))
(defthmd least-is-minimal-satisfying-m
(implies (and (natp bound)
(natp i)
(< i (least-M bound)))
(not (M i)))
:hints (("Goal"
:in-theory (disable least-aux-produces-least least-<=bound)
:use ((:instance least-<=bound
(i 0))
(:instance least-aux-produces-least
(i 0)
(j i))))))
(defthm least-has-m
(implies (and (natp bound)
(m bound))
(M (least-M bound))))
(defthm least-is-natp
(natp (least-M bound)))
So we have done that, and hopefully this is all that we need about least-M. So we disable everything.
(in-theory (disable least-M natp))
Now of course we note that the statement of the conjecture we are interested in has two quantifiers, an inner forall (from none-below) and an outer exists (from min-M). Since ACL2 is not very good
with quantification, we hold its hands to reason with the quantifier part. So we will first prove something about the forall and then use it to prove what we need about the exists.
RT9: When you face nested quantifiers, reason about each nesting separately.
So what do we want to prove about the inner quantifier? Looking carefully at the definition of none-below we see that it is saying that for all natural numbers r < y, (M r) does not hold. Well, how
would we want to use this fact when we want to prove our final theorem? We expect that we will instantiate min-M-suff with the object (least-M bound) where we know (via the outermost existential
quantifier) that M holds for bound, and we will then want to show that none-below holds for (least-M bound). So let us prove that for any natural number (call it bound), none-below holds for (least-M
bound). For the final theorem we only need it for natural numbers satisfying M, but note that from the lemma least-is-minimal-satisfying-m we really do not need that bound satisfies M.
So we are now proving: (implies (natp bound) (none-below (least-M bound))).
Well since this is a standard case of proving a universally quantified predicate, we just apply RT3. We have proved that for all naturals i < (least-M bound), i does not satisfy M (lemma
least-is-minimal-satisfying-M), so we merely need the instantiation of that lemma with none-below-witness of the thing we are trying to prove, that is, (least-M bound). The theorem below thus goes
(defthm least-is-minimal
(implies (natp bound)
(none-below (least-M bound)))
:hints (("Goal"
:use ((:instance (:definition none-below)
(y (least-M bound)))
(:instance least-is-minimal-satisfying-m
(i (none-below-witness (least-M bound))))))))
Finally we are in the outermost existential quantifier, and are in the process of applying min-M-suff. What object should we instantiate it with? We must instantiate it with (least-M bound) where
bound is an object which must satisfy M and is a natural. We have such an object, namely (some-nat-M-witness) which we know have all these qualities given the hypothesis. So the proof now is just RT1
and RT2.
(defthm |minimal exists|
(implies (some-nat-M) (min-M))
:hints (("Goal"
:use ((:instance min-M-suff
(y (least-M (some-nat-M-witness))))
(:instance (:definition some-nat-M))))))
If you are comfortable with the reasoning above, then you are comfortable with quantifiers and probably will not need these notes any more. In my opinion, the best way of dealing with ACL2 is to ask
yourself why you think something is a theorem, and the rules of thumb above are simply guides to the questions that you need to ask when you are dealing with quantification.
Here are a couple of simple exercises for you to test if you understand the reasoning process.
Exercise 1. Formalize and prove the following theorem. Suppose there exists x such that (R x) and suppose that all x satisfy (P x). Then prove that there exists x such that (P x) & (R x). (See http:/
/www.cs.utexas.edu/users/moore/acl2/contrib/quantifier-exercise-1-solution.html for a solution.)
Exercise 2. Recall the example just before the preceding exercise, where we showed that if there exists a natural number x satisfying M then there is another natural number y such that y satisfies M
and for every natural number z < y, z does not. What would happen if we remove the restriction of x, y, and z being naturals? Of course, we will not talk about < any more, but suppose you use the
total order on all ACL2 objects (from "books/misc/total-order"). More concretely, consider the definition of some-M above. Let us now define two other functions:
(include-book "misc/total-order" :dir :system)
(defun-sk none-below-2 (y)
(forall r (implies (<< r y) (not (M r)))))
(defun-sk min-M2 () (exists y (and (M y) (none-below-2 y))))
The question is whether (implies (some-M) (min-M2)) is a theorem. Can you prove it? Can you disprove it? | {"url":"http://www.cs.utexas.edu/users/moore/acl2/v3-5/QUANTIFIER-TUTORIAL.html","timestamp":"2014-04-18T11:29:48Z","content_type":null,"content_length":"29705","record_id":"<urn:uuid:c1b725bf-2d9a-46ce-b05e-9fba283336e1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Terrace, NY Prealgebra Tutor
Find a The Terrace, NY Prealgebra Tutor
...If I don't know the answer, I know where to look for it! I majored in literature at Rutgers University and went on to take graduate literature courses. I taught high school and middle school
English for over 10 years.
24 Subjects: including prealgebra, English, reading, grammar
...I have tutored students in both ESL and Spanish using the phonetics. I had the student create noises; the younger students I had them make faces, and for the older students have show them the
position where the tongue and shape of the lips. I am currently a Spanish major with a concentration in linguistics.
13 Subjects: including prealgebra, reading, Spanish, English
...I aced my precalculus class in high school and have gone on to take the upper levels of calculus. I aced trigonometry when I took it in high school and have tutored many students in the
subject. I have many different materials on trigonometry.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I've also ran "Math Circle" at the Middle School, helping to introduce kids to math beyond textbook and study some "unusual math problems". My approach is to ensure that the student really
understands the subject matter. It's always better to go slow so that the student is not overwhelmed by too much new material.
25 Subjects: including prealgebra, physics, calculus, geometry
...Just as I've voyaged on a winding road to get where I am, I believe every student finds a different path to success. No two students are alike, and I make every effort to 1) connect on a
personal level with my students, and 2) lesson plan to emphasize their strengths and shore up their weaknesse...
41 Subjects: including prealgebra, English, reading, writing
Related The Terrace, NY Tutors
The Terrace, NY Accounting Tutors
The Terrace, NY ACT Tutors
The Terrace, NY Algebra Tutors
The Terrace, NY Algebra 2 Tutors
The Terrace, NY Calculus Tutors
The Terrace, NY Geometry Tutors
The Terrace, NY Math Tutors
The Terrace, NY Prealgebra Tutors
The Terrace, NY Precalculus Tutors
The Terrace, NY SAT Tutors
The Terrace, NY SAT Math Tutors
The Terrace, NY Science Tutors
The Terrace, NY Statistics Tutors
The Terrace, NY Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Baxter Estates, NY prealgebra Tutors
East Atlantic Beach, NY prealgebra Tutors
Fort Totten, NY prealgebra Tutors
Garden City South, NY prealgebra Tutors
Glenwood Landing prealgebra Tutors
Harbor Acres, NY prealgebra Tutors
Harbor Hills, NY prealgebra Tutors
Kenilworth, NY prealgebra Tutors
Manorhaven, NY prealgebra Tutors
Maplewood, NY prealgebra Tutors
Meacham, NY prealgebra Tutors
Port Washington, NY prealgebra Tutors
Roslyn, NY prealgebra Tutors
Saddle Rock Estates, NY prealgebra Tutors
University Gardens, NY prealgebra Tutors | {"url":"http://www.purplemath.com/The_Terrace_NY_Prealgebra_tutors.php","timestamp":"2014-04-18T15:47:46Z","content_type":null,"content_length":"24447","record_id":"<urn:uuid:a5e985ed-c379-423b-9633-56aaaef1f4b7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Needham, MA Trigonometry Tutor
Find a Needham, MA Trigonometry Tutor
...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science
from West Point. My academic strengths are in mathematics and French.
16 Subjects: including trigonometry, French, elementary math, algebra 1
...I am also an expert in time management and study skills, which are an essential part of scholastic success. I am patient, enthusiastic about learning, and will work very hard with you to
achieve your academic goals. JoannaI have three years' experience tutoring high school students in biology.
10 Subjects: including trigonometry, chemistry, geometry, biology
...Typically this involves having students work on problems relevant to the material they are studying. I make sure that students do as much as possible on their own, and I take on for myself the
role as a guide rather than simply an instructor. I use many examples and problems, starting with easy ones and working up to harder ones.
9 Subjects: including trigonometry, calculus, physics, geometry
...Do you want to get the most from your classes? Is something holding you back from doing your best? Would you like to ace that entrance exam?
34 Subjects: including trigonometry, reading, calculus, English
...That include the full progression of Algebra, Algebra II, Trigonometry etc. I also have tutored the SAT and the LSAT many times. I scored 99th percentile on the SAT (perfect score on current
scale) and 90th on the LSAT.
29 Subjects: including trigonometry, reading, calculus, geometry
Related Needham, MA Tutors
Needham, MA Accounting Tutors
Needham, MA ACT Tutors
Needham, MA Algebra Tutors
Needham, MA Algebra 2 Tutors
Needham, MA Calculus Tutors
Needham, MA Geometry Tutors
Needham, MA Math Tutors
Needham, MA Prealgebra Tutors
Needham, MA Precalculus Tutors
Needham, MA SAT Tutors
Needham, MA SAT Math Tutors
Needham, MA Science Tutors
Needham, MA Statistics Tutors
Needham, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/Needham_MA_trigonometry_tutors.php","timestamp":"2014-04-21T07:46:43Z","content_type":null,"content_length":"24061","record_id":"<urn:uuid:056d9c2b-2d67-4374-b200-97231ec549ca>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Local Extrema Problem
March 28th 2011, 02:08 PM #1
Nov 2009
Let f(x)=(x^(4))(2+sinx^(-1)), if x≠0
...........0, if x=0
Prove that f is differentiable on R (the real numbers).
Prove that f has an absolute minimum at x=0.
Prove that f ' takes on both positive and negative values in every neighborhood of 0.
Pretty lost on this problem, I would really appreciate some help please
What is the domain of your function?
To show a minimum at x=0, solve f'(x)=0
March 28th 2011, 03:06 PM #2 | {"url":"http://mathhelpforum.com/differential-geometry/176109-local-extrema-problem.html","timestamp":"2014-04-17T20:13:08Z","content_type":null,"content_length":"32957","record_id":"<urn:uuid:8d8e9191-e3ae-4875-afae-3b91893d1860>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: Is this a proof of Pythagorean Theorem?
November 8th 2012, 08:59 PM #1
Junior Member
Apr 2011
RE: Is this a proof of Pythagorean Theorem?
Is this a legitimate proof of the Pythagorean theorem?
We know that Pythagorean triples a, b, c, such that a^2 +b^2 = c^2.
We know that that a = 2n+1, b= 2n(n+1), and c = 2n(n+1) + 1 for finding Pythagorean triples.
If you substitute these into the formula a^2 + b^2 = c^2, expand and collect like terms to show they are equal, would this be considered a proof?
Re: Is this a proof of Pythagorean Theorem?
no, The Pythagorean thrm states the connection between the side lengths of right triangles and the algebraic equation $a^2 + b^2 = c^2$, namely that every right triangle has side lengths
satisfying the equation. What you are doing is merely proving that a = 2n+1, b= 2n(n+1), and c = 2n(n+1) + 1 satisfies the algebraic equation $a^2+b^2=c^2$. In order to prove the Pythagorean thrm
for positive integer triples satisfying the algebraic equation $a^2 + b^2 = c^2$, show me that you can use the positive numbers in the tuple (a,b,c) to construct a right triangle OR show that
every right triangle has side lengths satisfying $a^2 + b^2 = c^2$
Last edited by jakncoke; November 8th 2012 at 10:14 PM.
Re: Is this a proof of Pythagorean Theorem?
Yes, but every side length of a triangle can be constructed using these equations, correct? They would, therefore, be "sides" in general.
Re: Is this a proof of Pythagorean Theorem?
No. Your 'proof' doesn't even mention right angles.
Re: Is this a proof of Pythagorean Theorem?
You need to prove that every side length of a right triangle can be constructed using the equations. If you can do that then your proof would have shown the Pythagorean theorem for Pythagorean
triples only. . Don't not be confused by the term "Pythagorean triples", they are only positive integer solutions to the equation $a^2+b^2=c^2$. People use the "Pythagorean" prefix to "triples"
to imply that these are using these triples in the context of representing the sides of a right triangle. If these a,b,c don't represent any object, then they are just triples, numbers, that
satisfy $a^2+b^2=c^2$ nothing more.
November 8th 2012, 10:09 PM #2
November 8th 2012, 10:26 PM #3
Junior Member
Apr 2011
November 8th 2012, 11:15 PM #4
Senior Member
Jan 2008
November 8th 2012, 11:40 PM #5 | {"url":"http://mathhelpforum.com/geometry/207077-re-proof-pythagorean-theorem.html","timestamp":"2014-04-16T18:01:05Z","content_type":null,"content_length":"41251","record_id":"<urn:uuid:9691961e-c56d-4329-aa57-e02fd15aeb38>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Napoleon Triangle
One of the greatest military leaders in history, Napoleon Bonaparte, was also an amateur mathematician and is credited for the following result.
If equilateral triangles are constructed on the sides of any triangle then the centres joining the constructed triangles will always form an equilateral triangle.
Prove "Napoleon's Theorem".
Problem ID: 313 (15 Feb 2007) Difficulty: 4 Star | {"url":"http://mathschallenge.net/view/napoleon_triangle","timestamp":"2014-04-20T18:58:51Z","content_type":null,"content_length":"4434","record_id":"<urn:uuid:ecf070bf-1911-42b3-93d8-3b9e23a324f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wheat Ridge Algebra 1 Tutor
Find a Wheat Ridge Algebra 1 Tutor
...This is when things start making sense and you begin to feel like you are mastering the subject, not just "getting by." Of course, the trick is to play at it every day and really concentrate!!
I am an experienced middle school, high school, and adult education math teacher with loads of ex...
7 Subjects: including algebra 1, geometry, GRE, algebra 2
...I can help students with more than just math! I am very good with the reading/writing portions of standardized tests, such as the SAT and GRE. I can help students with reading comprehension and
essay writing.
27 Subjects: including algebra 1, reading, writing, geometry
...I have taken 17-year Chinese math training and education. Generally speaking, math is very easy for me, including algebra, any elementary and middle school math course, SAT math,ACT math, GMAT
math. I took statics in my undergraduate study, obviously was an A+ student in statistics.
34 Subjects: including algebra 1, reading, calculus, geometry
...For over seven years I have taken classroom notes and narrated tests for disabled students at Arapahoe Community College; for a couple of years, I tutored students at Red Rocks Community
College. More and more, helping others by explaining things in person or in writing has made me happy. Although I dream of achieving lofty goals, I find comfort being grounded in a simple life.
12 Subjects: including algebra 1, English, reading, writing
...For this reason I left a successful career of more than twenty years to study mathematics. After receiving a degree in mathematics, I am continuing on towards a PhD in applied mathematics.
While studying as an undergraduate I became a tutor for the mathematics department.
6 Subjects: including algebra 1, calculus, algebra 2, geometry | {"url":"http://www.purplemath.com/wheat_ridge_co_algebra_1_tutors.php","timestamp":"2014-04-16T21:56:19Z","content_type":null,"content_length":"24118","record_id":"<urn:uuid:761c9768-7326-4b86-9fa4-f1026ec85366>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00587-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rex and Ralph: Mystery Number
Category: Math
Submitted By: javaguru
Two mathematicians, Rex and Ralph, have an ongoing competition to stump each other. Ralph was impressed by the ingenuity of Rex's last attempt using clues involving prime numbers, but he thinks he's
got an even better one for Rex. He tells Rex he's thinking of a 6-digit number.
"All of the digits are different. The digital sum matches the number formed by the last two digits in the number. The sum of the first two digits is the same as the sum of the last two digits."
"Take the sum of the number, the number rotated one to the left, the number rotated one to the right, the number with the first three and last three digits swapped, the number with the digit pairs
rotated to the left, and the number with the digit pairs rotated to the right. The first and last digits of this sum match the last two digits of the number, in some order."
Ralph then asks, "If each of the three numbers formed by the digit pairs in the number is prime, then what is the number?"
Rex looks confused, and for a moment Ralph thinks he's finally gotten him. Then Rex smiles, scribbles a few things down on a pad of paper and then says, "Very nice, Ralph!"
Rex then tells Ralph his number.
What did Rex say?
(See the hint for an explanation of the terminology.)
The digital sum is the sum of the digits in the number. The digital sum of 247 is 2+4+7 = 13.
The digit pairs in 125690 are 12 56 90. These are also the numbers formed by the digit pairs.
Rotating 123456 one to the left gives 234561;
Rotating 123456 one to the right gives 612345;
Rotating the digit pairs in 567890 to the left gives 789056;
Rotating the digit pairs in 567890 to the right gives 905678.
Comments on this teaser
Most Popular | Hardest | Easiest
Privacy | Terms
Copyright © 2003 | {"url":"http://www.braingle.com/palm/teaser.php?op=1&id=44017&comm=0","timestamp":"2014-04-21T08:07:49Z","content_type":null,"content_length":"6127","record_id":"<urn:uuid:5846affc-d3f0-47e7-9c79-1af390799817>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
ABC Online Forum
Dude, gravity "is" the way certain things with this property we call gravitational mass appear to attract other things with the same property.
Forces, rubbery rulers, bending space-time, etc are just descriptions, models. They're tools that let us say, ok we know the state at A, now we can predict it at B.
"non linearity" is a property of the Riemann metric, the curvature tensor. It is a particularly mathematical property, and can not in any way be described as a real property of space. Yet you use it
as a description of "space-time" as if the two were real. You can really say no more than that the space-time geometry is a model of gravity, and that the "non-linearity" isa descriptions which can
be applied to the mathematics of the model.
That is all. | {"url":"http://www2b.abc.net.au/science/k2/stn/archives/archive34/newposts/226/topic226314.shtm","timestamp":"2014-04-17T02:30:18Z","content_type":null,"content_length":"23171","record_id":"<urn:uuid:1f13989b-4bc3-4768-8667-6b04c23499af>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: MASKED DIGITAL SIGNATURES
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
The present invention relates to digital signature operations using public key schemes in a secure communications system and in particular for use with processors having limited computing power such
as `smart cards`. This invention describes a method for creating and authenticating a digital signature comprising the steps of selecting a first session parameter k and generating a first short term
public key derived from the session parameter k, computing a first signature component r derived from a first mathematical function using the short term public key, selecting a second session
parameter t and computing a second signature component s derived from a second mathematical function using the second session parameter t and without using an inverse operation, computing a third
signature component using the first and second session parameters and sending the signature components (s, r, c) as a masked digital signature to a receiver computer system. In the receiver computer
system computing a recovered second signature component s' by combining a third signature component with the second signature component to derive signature components (s', r) as an unmasked digital
signature. Verifying these signature components as in a usual EIGamal or ECDSA type signature verification.
A cryptographic processor at a sender in a data communication system having access to a long term private key d and a corresponding long term public key derived from said long term private key d,
said processor being configured for: (a) generating in a secure computer system a first short term private key k; (b) computing in said secure computer system a first short term public key from said
first short term private key k; (c) computing a first signature component r in said secure computer system by using said first short term public key; (d) generating a second short term private key
tin said secure computer system; (e) computing a second signature component s in said secure computer system using said second short term private key t, said message m, said long term private key d,
and said first signature component r, (f) computing a third signature component c in said secure computer system using said first short term private key k and said second short term private key t;
(g) forwarding said signature components (r, s, c) as a masked digital signature of said message m to a receiver computer system associated with said secure computer system; (h) computing in said
receiver computer system a regular signature component s using said second and third signature components (s, c); and (i) sending said signature components ( s, r) as a regular digital signature to a
receiver verifier computer system to enable said receiver verifier computer system to verify said regular signature ( s, r).
The cryptographic processor of claim 1, wherein said first short term private key k is an integer and said first short term public key is computed by calculating the value kP=(x
), wherein P is a point of prime order n in E(F
), and wherein E is an elliptic curve defined over F
The cryptographic processor of claim 2, wherein said first signature component r is of the form r= x(mod n), wherein x is derived by converting said coordinate x
to an integer.
The cryptographic processor of claim 2, wherein said second short term private key is an integer selected such that
ltoreq.t≦(n-2), and said second signature component is defined as s=t (e+dr)(mod n), wherein e is a hash of said message m.
The cryptographic processor of claim 2, wherein said third signature component is defined as c=tk(mod n).
The cryptographic processor of claim 5, wherein said regular signature component is defined as s=c
s (mod n).
A cryptographic processor at a sender in a data communication system having access to a long term private key d and a long term public key y derived from a generator g and said long term private key
d, said processor being configured for: (a) generating a short term private key k; (b) computing a first short term public key derived from said short term private key k; (c) computing a first
signature component r by using said first short term public key k; (d) generating a second short term private key t; (e) computing a second signature component s by using said second short term
private key t on said message m, said long term private key d and first signature component r, (f) computing a third signature component c using said first and second short term private keys k and t
respectively; and (g) sending said signature components (r, s, c) as a masked digital signature of said message m to a receiver computer system.
The cryptographic processor of claim 7 further configured for, in said receiver computer system, using said second and third signature components (s, c) to compute a regular signature component s,
and sending said signature components ( s, r) as a regular digital signature to a verifier computer system, and verifying said regular signature ( s, r) by said verifier computer system.
The cryptographic processor of claim 7 further configured for, in said receiver computer system, using said second and third signature components (s, c) to compute a regular signature component s, to
derive a regular digital signature components ( s, r), and verifying said regular signature components.
A cryptographic processor at a verifier in a data communication system established between a sender and the verifier, said sender having generated in a secure computer system a masked signature
having a first signature component r computed using a first short term public key derived from a first short term private key, a second signature component s computed using a second short term
private key on said message m, a long term private key, and said first signature component r, and a third signature component c computed using said first and second short term private keys, said
processor being configured for: a) obtaining a regular signature derived from said masked signature (r, s, c), said regular signature having said first signature component r, and another signature
component s computed using said second signature component s and said third signature component c; b) recovering a point on an elliptic curve defined over a finite field using said message m and said
another signature component s; c) converting an element of said point to an integer; d) calculating a value r' using said integer; and e) verifying said regular signature ( s, r) if said value r' is
equal to said first signature component r.
The cryptographic processor of claim 10 further configured for said verifier receiving said masked signature (r, s, c) from said sender and converting (r, s, c) to obtain said regular signature ( s,
The cryptographic processor of claim 10, wherein said sender converts said masked signature (r, s, c) to said regular signature ( s, r) and said sender sends said regular signature ( s, r) to said
The cryptographic processor of claim 10, wherein said point is calculated using a pair of values u and v, said values u and v derived from said regular signature ( s, r) and said message m.
The cryptographic processor of claim 13, wherein said point is calculated as (x
, y
)=uP+vQ, wherein P is a point on an elliptic curve E and Q is a public verification key of said sender derived from P as Q=dP.
The cryptographic processor of claim 13, wherein said value u is computed as u= s
e mod n and said value v is computed as v= s
r mod n, e being a representation of said message m.
The cryptographic processor of claim 15, wherein e is calculated as e=H(m), H( ) being a hash function of said sender and being known to said verifier.
The cryptographic processor of claim 10, wherein a coordinate x
of said point is first converted to an integer x
prior to calculating said component r'.
The cryptographic processor of claim 17, wherein said component r' is calculated as r'= x
mod n.
The cryptographic processor of claim 10, wherein prior to calculating said component r', a coordinate pair (x
, y
) of said point is first verified, whereby if said coordinate pair (x
, y
) is a point at infinity, then said regular signature ( s, r) is rejected.
The present application is a continuation of U.S. patent application Ser. No. 12/488,652 filed on Jun. 22, 2009, which is a continuation of U.S. patent application Ser. No. 11/882,560 filed on Aug.
2, 2007, now U.S. Pat. No. 7,552,329, which is a continuation of U.S. patent application Ser. No. 09/773,665 filed on Feb. 2, 2001, now U.S. Pat. No. 7,260,723, which is a continuation of U.S. patent
application Ser. No. 08/966,702 filed on Nov. 10, 1997, now U.S. Pat. No. 6,279,110, all of which are hereby incorporated by reference.
FIELD OF THE INVENTION [0002]
This invention relates to a method of accelerating digital signature operations used in secure communication systems, and in particular for use with processors having limited computing power.
BACKGROUND OF THE INVENTION [0003]
One of the functions performed by a cryptosystem is the computation of digital signatures that are used to confirm that a particular party has originated a message and that the contents have not been
altered during transmission. A widely used set of signature protocols utilizes the EIGamal public key signature scheme that signs a message with the sender's private key. The recipient may then
recover the message with the sender's public key. The EIGamal scheme gets its security from calculating discrete logarithms in a finite field. Furthermore, the EIGamal-type signatures work in any
group and in particular elliptic curve groups. For example given the elliptic curve group E(F
) then for P E(F
) and Q=aP the discrete logarithm problem reduces to finding the integer a. Thus these cryptosystems can be computationally intensive.
Various protocols exist for implementing such a scheme. For example, a digital signature algorithm DSA is a variant of the EIGamal scheme. In these schemes, a pair of correspondent entities A and B
each create a public key and a corresponding private key. The entity A signs a message m of arbitrary length. The entity B can verify this signature by using A's public key. In each case however,
both the sender, entity A, and the recipient, entity B, are required to perform a computationally intensive operations to generate and verify the signature respectively. Where either party has
adequate computing power this does not present a particular problem but where one or both the parties have limited computing power, such as in a `smart card` application, the computations may
introduce delays in the signature and verification process.
Public key schemes may be implemented using one of a number of multiplicative groups in which the discrete log problem appears intractable, but a particularly robust implementation is that utilizing
the characteristics of points on an elliptic curve over a finite field. This implementation has the advantage that the requisite security can be obtained with relatively small orders of field
compared with, for example, implementations in Z
* and therefore reduces the bandwidth required for communicating the signatures.
In a typical implementation of such a digital signature algorithm such as the Elliptic Curve Digital Signature Algorithm (ECDSA) a signature component s has the form:
(e+dr)mod n
d is a long term private key random integer of the signor;
Q is a public key of the signor derived by computing the point Q=dP;
P is a point (x, y) on the curve which is a predefined parameter of the system;
k is a random integer selected as a short term private or session key, and has a corresponding short term public key R=kP;
e is a secure hash, such as the SHA-1 hash function of a message; and
n is the order of the curve.
In this scheme the signor represents the x coordinate of the point kP as an integer z and then calculates a first signature component r=z mod n. Next, the second signature component s above is
calculated. The signature components s and r and a message M is then transmitted to the recipient. In order for the recipient to verify the signature (r,s) on M, the recipient looks up the public key
Q of the signor. A hash e' of the message M is calculated using a hash function H such that e'=H(M). A value c=s
mod n is also calculated. Next, integer values u
and u
are calculated such that u
=e'c mod n and u
=rc mod n. In order that the signature be verified, the value u
Q must be calculated. Since P is known and is a system wide parameter, the value u
P may be computed quickly. The point R=u
Q is computed. The field element x of the point R=(x
,y) is converted to an integer z, and a value v=z mod n is computed. If v=r, then the signature is valid.
Other protocols, such as the MQV protocols also require similar computations when implemented over elliptic curves which may result in slow signature and verification when the computing power is
limited. The complexity of the calculations may be explained by observing a form of the elliptic curve. Generally, the underlying elliptic curve has the form y
+ax+b and the addition of two points having coordinates (x
, y
) and (x
) results in a point (x
) where:--
3 = { ( y 1 ⊕ y 2 x 1 ⊕ x 2 ) 2 ⊕ y 1 ⊕ y 2 x 1 ⊕ x 2 ⊕ x 1 ⊕ x 2 ⊕ a ( P ≠ Q ) y 3 = { ( y 1 ⊕ y 2 x 1 ⊕ x 2 ) ⊕ ( x 1 ⊕ x 3 ) ⊕ x 3 ⊕ y 1 ( P ≠ Q ) ##EQU00001##
The doubling of a point i
.e. P to 2P, is performed by adding the point to itself so that
3 = { x 1 2 ⊕ ( x 1 ⊕ y 1 x 1 ) } x 3 ⊕ x 3 ##EQU00002## x 3 = x 1 2 ⊕ b x 1 2 ##EQU00002.2##
It may be seen in the above example of the ECDSA algorithm that the calculation of the second signature component involves at least the computation of an inverse. Modulo a number the generation of
each of the doubled points requires the computation of both the x and y coordinates and the latter requires a further inversion. These steps are computationally complex and therefore require either
significant time or computing power to perform.
Inversion is computationally intensive
, and generally performed within a secure boundary where computational power is limited thus it would be advantageous to perform such calculations outside the secure boundary, particularly where
computational power is more readily available. This however cannot be done directly on the ECDSA signature scheme without potentially compromising the private key information. Therefore there exists
a need for a method of performing at least part of a signature operation outside a secure boundary while still maintaining an existing level of security in current signature schemes.
SUMMARY OF THE INVENTION [0017]
It is therefore an object of the present invention to provide a method and apparatus in which at least some of the above disadvantages are mitigated.
This invention seeks to provide a digital signature method, which may be implemented relatively efficiently on a processor with limited processing capability, such as a `smart card` or the like.
In general terms, the present invention provides a method and apparatus in which signature verification may be accelerated.
In accordance with this invention there is provided; a method of signing and authenticating a message m in a public key data communication system, comprising the steps of:
in a secure computer system:
(a) generating a first short term private key k
(b) computing a first short term public key derived from the first short term private key k;
(c) computing a first signature component r by using the first short term public key k;
(d) generating a second short term private key t;
(e) computing a second signature component s by using the second short term private key t on the message m, the long term private key and the first signature component r,
(f) computing a third signature component c using the first and second short term private keys t and k respectively, and sending the signature components (r, s, c) as a masked digital signature of
the message m to a receiver computer system; in the receiver system;
(g) using said second and third signature components (s, c) computing a normal signature component s and sending the signature components ( s,r) as a normal digital signature to a verifier computer
system; and
(h) verifying normal signature.
In accordance with a further aspect of the invention there is provided a processing means for assigning a message m without performing inversion operations and including a long term private key
contained within a secure boundary and a long term public key derived from the private key and a generator of predetermined order in a field, the processing means comprising:
within the secure boundary;
means for generating a first short term private key;
means for generating a second short term private key;
means for generating a first signature component using at least the second short term session key; and
generating a masked signature component using the first and second short term session keys to produce masked signature components of the message m.
BRIEF DESCRIPTION OF THE DRAWINGS [0036]
Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings in which:--
FIG. 1 is a schematic representation of a communication system; and
[0038] FIG. 2
is a flow chart showing a signature algorithm according to the present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT [0039]
Referring therefore to FIG. 1, a data communication system 10 includes a pair of correspondents, designated as a sender 12, and a recipient 14, who are connected by a communication channel 16. Each
of the correspondents 12,14 includes an encryption unit 18,20 respectively that may process digital information and prepare it for transmission through the channel 16 as will be described below. The
sender is the party signing a message m to be verified by the recipient. The signature is generally performed in the encryption unit 18 and normally defines a secure boundary. The sender could be a
`smart card`, a terminal or similar device. If for example the signor is a `smart card`, it generally has limited processing power. However, the `smart card` is typically used in conjunction with a
terminal 22 which has at least some computing power. The `smart card` is inserted into a terminal 22 which then forwards digital information received from the `smart card` 12 along the channel 16 to
the recipient 14. The terminal may preprocess this information before sending it along the channel 16.
In accordance then with a general embodiment, the sender assembles a data string, which includes amongst others the public key Q of the sender, a message m, the sender's short-term public key R and a
signature S of the sender. When assembled the data string is sent over the channel 16 to the intended recipient 18. The signature S is generally comprised of one or more components as will be
described below with reference to a specific embodiment and according to a signature scheme being implemented by the data communication system.
The invention describes in a broad aspect a signature algorithm in which the private key is masked to generate masked signature components which may then be converted to a regular signature prior to
the verification of the signature.
Referring to
FIG. 2
, it is assumed that E is an elliptic curve defined over Fq, P is point of prime order n in E(F
), d is the senders private signature key, such that 2≦d≦n-2, Q=dP is the senders public verification key and m is the message to be signed. It is further assumed these parameters are stored in
memory within a secure boundary as indicated by block 30. For example if the sender is a `smart card`, then that would define the secure boundary while for example the terminal in which the `smart
card` was inserted would be outside the secure boundary. The first step is for the sender to sign the message m. The sender computes a hash value e=H(m) of the message m, where H is typically a SHA-1
hash function. A first statistically unique and unpredictable integer k, the first short term private key, is selected such that 2≦k≦(n-2). Next a point (x
, y
)=kP is computed. The field element x
of the point kP is converted to an integer x
and a first signature component r= x
, (mod n) is calculated. A second statistically unique and unpredictable integer the second short-term private key is selected such that 2≦t≦(n-2). Second and third signature components s=t(e+dr)(mod
n) and c=tk (mod n) respectively are also computed as indicated. This generates the masked ECDSA signature having components (r,s,c). This masked ECDSA signature (r, s, c) may be converted to regular
ECDSA signature ( s, r) by computing s=c
s mod n. The ECDSA signature of the sender 12 is then s and r. The signature ( s, r) can then be verified as a normal ECDSA signature as described below. Thus the sender can either forward the masked
ECDSA signature (s,r,c) to the verifier where the verifier can do the conversion operation to obtain the signature ( s, r) prior to the verification operation or the sender can perform the conversion
outside the secure boundary, as for example in a terminal and then forward the DSA signature ( s, r) to the verifier.
Once the recipient has the signature components ( s, r), then to verify the signature the recipient calculates a hash value e=H(m) where this the hash function of the signor and known to the verifier
of the message m and then computing u= s
e mod n and v= s
r mod n. Thus the point (x
, y
)=uP+vQ may now be calculated. If (x
, y
) is the point at infinity then the signature is rejected. If not however the field element x
is converted to an integer x
. Finally the value r'= x
, mod n is calculated. If r'=r the signature is verified. If r'≠r then the signature is rejected.
Thus it may be seen that an advantage of the masked ECDSA is that modular inverse operation of the normal ECDSA is avoided for the masked signing operation. As stated earlier this is very useful for
some applications with limited computational power. The masked signature to ECDSA signature conversion operation can be performed outside the secure boundary protecting the private key of the sender.
For example if the sender was a `smart card` that communicated with a card reader then this operation could be performed in the `smart card` reader. Alternatively the masked signature can be
transmitted to the verifier, and the verifier can do the conversion operation prior to the verification operation. It may be noted that in the masked ECDSA, no matter how we choose t, we always have
. Since c is made public, t is not an independent variable.
While the invention has been described in connection with specific embodiments thereof and in specific uses, various modifications thereof will occur to those skilled in the art without departing
from the spirit of the invention as set forth in the appended claims. For example in the above description of preferred embodiments, use is made of multiplicative notation, however the method of the
subject invention may be equally well described utilizing additive notation. It is well known for example that elliptic curve algorithm embodied in the ECDSA is equivalent of the DSA and that the
elliptic curve analog of a discrete log logarithm algorithm that is usually described in a setting of, F*
the multiplicative group of the integers modulo a prime. There is a correspondence between the elements and operations of the group F*
and the elliptic curve group E(F
Furthermore, this signature technique is equally well applicable to functions performed in a field defined over F
and F
. It is also to be noted that the DSA signature scheme described above is a specific instance of the EIGamal generalized signature scheme which is known in the art and thus the present techniques are
applicable thereto.
The present invention is thus generally concerned with an encryption method and system and particularly an elliptic curve encryption method and system in which finite field elements are multiplied in
a processor efficient manner. The encryption system can comprise any suitable processor unit such as a suitably programmed general-purpose computer.
Patent applications by Donald B. Johnson, Manassas, VA US
Patent applications by Minghua Qu, Mississauga CA
Patent applications by RESEARCH IN MOTION LIMITED
Patent applications in class Authentication by digital signature representation or digital watermark
Patent applications in all subclasses Authentication by digital signature representation or digital watermark
User Contributions:
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20110258455","timestamp":"2014-04-16T09:16:10Z","content_type":null,"content_length":"54159","record_id":"<urn:uuid:39faef6f-b69d-4121-95be-4a659f046edd>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Given The Network In The Figure Below, Find The ... | Chegg.com
Please help! Find real/imaginary part of Voc & Zth
Image text transcribed for accessibility: Given the network in the figure below, find the Thevenin's equivalent of the network at the terminals A-B. Find the real part of VOC. Find the imaginary part
of VOC. Find the real part of ZTH. Find the imaginary part of ZTH. V V Ohm Ohm
Electrical Engineering
Answers (2)
• ....
Rating:5 stars 5 stars 1
TheGladiator answered 2 hours later | {"url":"http://www.chegg.com/homework-help/questions-and-answers/given-network-figure-find-thevenin-s-equivalent-network-terminals-b-find-real-part-voc-fin-q3950373","timestamp":"2014-04-19T13:16:15Z","content_type":null,"content_length":"20815","record_id":"<urn:uuid:be6113fe-a5a0-466a-9692-f461ca5e8ae8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00587-ip-10-147-4-33.ec2.internal.warc.gz"} |
Non-traditional mathematics curriculum results in higher standardized test scores, MU study finds
Public release date: 16-Sep-2013
[ | E-mail ]
Contact: Nathan Hurst
University of Missouri-Columbia
Non-traditional mathematics curriculum results in higher standardized test scores, MU study finds
COLUMBIA, Mo. -- For many years, studies have shown that American students score significantly lower than students worldwide in mathematics achievement, ranking 25th among 34 countries. Now,
researchers from the University of Missouri have found high school students in the United States achieve higher scores on a standardized mathematics test if they study from a curriculum known as
integrated mathematics.
James Tarr, a professor in the MU College of Education, and Doug Grouws, a professor emeritus from MU, studied more than 3,000 high school students around the country to determine whether there is a
difference in achievement when students study from an integrated mathematics program or a more traditional curriculum. Integrated mathematics is a curriculum that combines several mathematic topics,
such as algebra, geometry and statistics, into single courses. Many countries that currently perform higher than the U.S. in mathematics achievement use a more integrated curriculum. Traditional U.S.
mathematics curricula typically organize the content into year-long courses, so that a 9th grade student may take Algebra I, followed by Geometry, followed by Algebra II before a pre-Calculus course.
Tarr and Grouws found that students who studied from an integrated mathematics program scored significantly higher on standardized tests administered to all participating students, after controlling
for many teacher and student attributes. Tarr says these findings may challenge some long-standing views on mathematics education in the U.S.
"Many educators in America have strong views that a more traditional approach to math education is the best way to educate high school students," Tarr said. "Results of our study simply do not
support such impassioned views, especially when discussing high-achieving students. We found students with higher prior achievement scores benefitted more from the integrated mathematics program than
students who studied from the traditional curriculum."
Tarr and Grouws' papers, which were recently published in the Journal for Research in Mathematics Education, come from a three-year study measuring educational outcomes for students studying from
different types of mathematics curricula. Tarr says improving American mathematics education is vital for the future of the country.
"Many countries that the U.S. competes with economically are outpacing us in many fields, particularly in mathematics and science," Tarr said. "It is crucial that we re-evaluate our school
mathematics curricula and how it is implemented if we hope to remain competitive on a global stage."
Tarr and Grouws' longitudinal study is funded by grant of more than $2 million from the National Science Foundation.
[ | E-mail ] | {"url":"http://www.eurekalert.org/pub_releases/2013-09/uom-nmc091613.php","timestamp":"2014-04-19T14:40:45Z","content_type":null,"content_length":"9136","record_id":"<urn:uuid:c5a2639f-4c12-41a9-8db5-67fcc6000d30>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to prove that this Norm is COMPATIBLE?
May 20th 2011, 05:58 AM #1
May 2011
How to prove that this Norm is COMPATIBLE?
Please refer attachment for the problem. I need to prove that the given norm is COMPATIBLE. Please help me solve this.
Thank you!
Start in the following way: if $A=(a_{ij})\in\mathbb{K}^{n\times n}$ and $y=Ax$ with $x=(x_1,\ldots,x_n)^t$ and $y=(y_1,\ldots,y_n)^t$ then,
$\left\|{Ax}\right\|= \left\|{y}\right\|=\sum_{k=1}^nc_k{|y_k|}=\sum_{k= 1}^nc_k\left|{\sum_{j=1}^n}a_{kj}x_j\right|\leq \ldots$
May 20th 2011, 07:10 AM #2 | {"url":"http://mathhelpforum.com/advanced-algebra/181144-how-prove-norm-compatible.html","timestamp":"2014-04-17T01:03:32Z","content_type":null,"content_length":"33816","record_id":"<urn:uuid:d1d7c216-f266-4bd6-bd21-257b3b211110>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homomorphisms of Lie groups preserving regularity
up vote 4 down vote favorite
Let $G_1, G_2$ be connected semisimple Lie groups, let us assume for simplicity that both groups are complex (even though, I am interested in the real Lie groups as well). Let $f: G_1\to G_2$ be a
monomorphism which sends regular semisimple elements to regular semisimple elements. Does it follow that $f$ also sends regular unipotent elements to regular ones?
I suspect that the answer is well-known, but I could not find it. (Actually, I do not even know if there is a standard name for homomorphisms preserving regularity.) This question is motivated by
study of discrete subgroups of higher rank Lie groups, but explaining the precise motivation will take us a bit too far.
Edit: I should have thought a bit more before asking, since there is an obvious counter-example: The reducible (faithful) representation $SL(2)\to SL(3)$ preserves regularity of semisimple elements
but does not preserve regularity of unipotent elements. However, the reducible representation $SL(n)\to SL(n+1), n\ge 3$, fails to preserve regularity of semisimple elements, so maybe there is a hope
to classify all counter-examples.
gr.group-theory lie-groups
@Misha: I got distracted by the Lie group language and skipped over the key word "monomorphism". So my comment on centralizers isn't relevant except in the image of the map. – Jim Humphreys May 8
'13 at 20:59
add comment
2 Answers
active oldest votes
Probably it's more natural to talk about the Jordan decomposition and regularity when the groups are interpreted as semisimple algebraic groups (or real forms thereof). I don't think there
is a special name for the homomorphisms you describe. But your question should have an affirmative answer in the algebraic group setting as an application of ideas in the paper by A. Borel
and J. Tits, Homomorphismes "abstraits" de groupes algebriques simples, Ann. of Math. 97 (1973), 499-571. This paper has the most comprehensive treatment of what is possible for abstract
homomorphisms relative to various fields of definition, etc.
An important feature of regular semisimple elements in such an algebraic group is their density, whereas Borel-Tits show in effect that abstract homomorphisms are fairly close to being
algebraic group morphisms that would respect semisimple and unipotent elements along with their centralizers. I'll take another look at the paper to see how close it comes to answering your
question directly, but anyway it's available online through JSTOR.
1) Working with these groups over $\mathbb{C}$ simplifies matters a lot. For example, a connected semisimple Lie group is the same as a connected semisimple algebraic group (Chevalley),
up vote where the Jordan-Chevalley decomposition exists and is preserved under rational homomorphisms. In this algebraically closed characteristic 0 setting, the Borel-Tits study of abstract
3 down homomorphisms also simplifies and overlaps earlier papers on Chevalley groups, etc. Since Borel-Tits aim for maximum generality, their hypotheses get fairly technical and are not always
vote needed over $\mathbb{C}$. Indeed, I'm not quite convinced that you need your hypothesis on the behavior of regular semisimple elements.
2) However, working with real Lie groups is appreciably more complicated. For example, some of these are not linear algebraic groups (making the notion of semisimple or unipotent element
less obvious). Borel-Tits mostly avoid considering anisotropic algebraic groups over a field which is not algebraically closed. For semisimple Lie groups, anisotropic = compact. Fortunately
however, all elements of a compact Lie group are "semisimple" (while a compact Lie group itself is algebraic over $\mathbb{R}$); so your question doesn't arise here.
3) Though it's probably not directly relevant to what you are looking at, there is a fairly long history involving real Lie groups (for instance continuity of their abstract homomorphisms),
going back to work of Freudenthal and others. Following the Borel-Tits paper, Tits himself focused more directly on Lie groups in a concisely written conference article. The promised sequel
with more details apparently never appeared: Homorphismes “abstraits” de groupes de Lie. Symposia Mathematica, Vol. XIII (Convegno di Gruppi e loro Rappresentazioni, INDAM, Rome, 1972), pp.
479–499. Academic Press, London, 1974.
Thank you, Jim, I will take a look. – Misha May 8 '13 at 5:20
Jim: In general (even over C), algebraic monomorphisms need not preserve regularity of semisimple elements (and of unipotent ones). For instance, reducible representations $SL(n)\to SL
(n+1)$ will not preserve regularity (for $n\ge 3$), while all irreducible representations $SL(2)\to SL(n)$ preserve regularity of unipotents. – Misha May 8 '13 at 18:14
add comment
If memory serves, results similar to those you are interest in are proved in
Seitz, Gary M. "Abstract homomorphisms of algebraic groups." Journal of the London Mathematical Society 56.1 (1997): 104-124.
More recent references include the work by Caprace and his students, the starting point being
up vote 1 Caprace, Pierre-Emmanuel. "Abstract" Homomorphisms of Split Kac-Moody Groups. Amer Mathematical Society, 2009.
down vote
The main trick, already noted in Borel--Tits, is the following: by Jacobson-Morozov (on the group level), the nilpotent element you are interested in is included in a group of type $A_1$,
i.e. $\text{SL}_2(\mathbf C)$ or $\text{PSL}_2(\mathbf C)$. Now either use representation theory (Caprace) or the fact that your nilpotent element is included in the derived group of a
solvable group (Borel--Tits).
Yes, this is what I tried to do before finding a trivial counter example! – Misha May 8 '13 at 18:19
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory lie-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/130038/homomorphisms-of-lie-groups-preserving-regularity","timestamp":"2014-04-18T16:17:04Z","content_type":null,"content_length":"64793","record_id":"<urn:uuid:6bd40f4c-3f59-4fc3-af05-8901a0ac218b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about perspective and which mountain is taller
It is the question on page ten of this link http://www.educationaldesigner.org/e...dt_09_fig3.pdf I am having particular trouble with the last part about which mountain appears taller please help
me. Also could you please explain how you got the answer as I would like to understand how to do it if I were asked to do a similar question again.
Thank You | {"url":"http://mathhelpforum.com/geometry/203964-question-about-perspective-mountain-taller.html","timestamp":"2014-04-21T11:54:04Z","content_type":null,"content_length":"44122","record_id":"<urn:uuid:b7e999b3-514b-4f3a-b134-d46357fada5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Klein 2-Geometry VIII
Posted by David Corfield
As John said:
categorification is a very broad project, like a huge tidal wave hitting the whole length of the shoreline. If certain parts don’t advance as fast as others, it’s really no big deal.
We’ve been trying to ride other parts of the wave of late, so there’s not much to report on last month’s events.
Something we did find is that the Poincaré 2-group is a full sub-2-group of the general linear 2-group of a skeletal Baez-Crans real 2-vector space of dimension (4, 1), i.e., with $R^4$ worth of
objects and $R$ worth of morphisms from an object to itself.
There was also some speculation about the possibility and/or value of a composition of 2-groups, which prompted some exposition on $K$-enriched profunctors.
Posted at December 1, 2006 9:44 AM UTC | {"url":"http://golem.ph.utexas.edu/category/2006/12/klein_2geometry_viii.html","timestamp":"2014-04-21T09:39:44Z","content_type":null,"content_length":"10877","record_id":"<urn:uuid:c844fe55-4049-40a1-a97d-75577894ad6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: equal loudness calculation
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: equal loudness calculation
This is the analitic equation for A-Weighting.
thank you, this is quite exactly what I was looking for.
Just, pass it to Z domain and use "a" and "b" coef. in a IIR filter.
in this case, I don't want to filter a sound, but simply scale its amplitude.
Or is there a way to make the formula itself efficient by
implementing it as a filter?
Very efficient !
----- Original Message ----- From: "Julian Rohrhuber"
To: <AUDITORY@xxxxxxxxxxxxxxx>
Sent: Tuesday, April 12, 2005 4:25 PM
Subject: [AUDITORY] equal loudness calculation
I'm looking for a computationally efficient way to do a frequency
dependent amplitude compensation. The emphasis is much less accuracy
(especially not for different loudness levels) but more a reasonable
approximation - maybe a polynomial.
Any hints? | {"url":"http://www.auditory.org/mhonarc/2005/msg00279.html","timestamp":"2014-04-19T12:01:16Z","content_type":null,"content_length":"6128","record_id":"<urn:uuid:3b310751-a594-4cd2-ba37-fd6e422a05ff>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proceedings in Applied Mathematics 122
Symposium held in Miami, Florida, January 22–24, 2006.
This symposium is jointly sponsored by the ACM Special Interest Group on Algorithms and Computation Theory and the SIAM Activity Group on Discrete Mathematics.
Preface; Acknowledgments; Session 1A: Confronting Hardness Using a Hybrid Approach, Virginia Vassilevska, Ryan Williams, and Shan Leung Maverick Woo; A New Approach to Proving Upper Bounds for
MAX-2-SAT, Arist Kojevnikov and Alexander S. Kulikov, Measure and Conquer: A Simple O(20.288n) Independent Set Algorithm, Fedor V. Fomin, Fabrizio Grandoni, and Dieter Kratsch; A Polynomial Algorithm
to Find an Independent Set of Maximum Weight in a Fork-Free Graph, Vadim V. Lozin and Martin Milanic; The Knuth-Yao Quadrangle-Inequality Speedup is a Consequence of Total-Monotonicity, Wolfgang W.
Bein, Mordecai J. Golin, Larry L. Larmore, and Yan Zhang; Session 1B: Local Versus Global Properties of Metric Spaces, Sanjeev Arora, László Lovász, Ilan Newman, Yuval Rabani, Yuri Rabinovich, and
Santosh Vempala; Directed Metrics and Directed Graph Partitioning Problems, Moses Charikar, Konstantin Makarychev, and Yury Makarychev; Improved Embeddings of Graph Metrics into Random Trees, Kedar
Dhamdhere, Anupam Gupta, and Harald Räcke; Small Hop-diameter Sparse Spanners for Doubling Metrics, T-H. Hubert Chan and Anupam Gupta; Metric Cotype, Manor Mendel and Assaf Naor; Session 1C: On Nash
Equilibria for a Network Creation Game, Susanne Albers, Stefan Eilts, Eyal Even-Dar, Yishay Mansour, and Liam Roditty; Approximating Unique Games, Anupam Gupta and Kunal Talwar; Computing Sequential
Equilibria for Two-Player Games, Peter Bro Miltersen and Troels Bjerre Sřrensen; A Deterministic Subexponential Algorithm for Solving Parity Games, Marcin Jurdziński, Mike Paterson, and Uri Zwick;
Finding Nucleolus of Flow Game, Xiaotie Deng, Qizhi Fang, and Xiaoxun Sun, Session 2: Invited Plenary Abstract: Predicting the “Unpredictable”, Rakesh V. Vohra, Northwestern University; Session 3A: A
Near-Tight Approximation Lower Bound and Algorithm for the Kidnapped Robot Problem, Sven Koenig, Apurva Mudgal, and Craig Tovey; An Asymptotic Approximation Algorithm for 3D-Strip Packing, Klaus
Jansen and Roberto Solis-Oba; Facility Location with Hierarchical Facility Costs, Zoya Svitkina and Éva Tardos; Combination Can Be Hard: Approximability of the Unique Coverage Problem, Erik D.
Demaine, Uriel Feige, Mohammad Taghi Hajiaghayi, and Mohammad R. Salavatipour; Computing Steiner Minimum Trees in Hamming Metric, Ernst Althaus and Rouven Naujoks; Session 3B: Robust Shape Fitting
via Peeling and Grating Coresets, Pankaj K. Agarwal, Sariel Har-Peled, and Hai Yu; Tightening Non-Simple Paths and Cycles on Surfaces, Éric Colin de Verdičre and Jeff Erickson; Anisotropic Surface
Meshing, Siu-Wing Cheng, Tamal K. Dey, Edgar A. Ramos, and Rephael Wenger; Simultaneous Diagonal Flips in Plane Triangulations, Prosenjit Bose, Jurek Czyzowicz, Zhicheng Gao, Pat Morin, and David R.
Wood; Morphing Orthogonal Planar Graph Drawings, Anna Lubiw, Mark Petrick, and Michael Spriggs; Session 3C: Overhang, Mike Paterson and Uri Zwick; On the Capacity of Information Networks, Micah
Adler, Nicholas J. A. Harvey, Kamal Jain, Robert Kleinberg, and April Rasala Lehman; Lower Bounds for Asymmetric Communication Channels and Distributed Source Coding, Micah Adler, Erik D. Demaine,
Nicholas J. A. Harvey, and Mihai Pătraşcu; Self-Improving Algorithms, Nir Ailon, Bernard Chazelle, Seshadhri Comandur, and Ding Liu; Cake Cutting Really is Not a Piece of Cake, Jeff Edmonds and Kirk
Pruhs; Session 4A: Testing Triangle-Freeness in General Graphs, Noga Alon, Tali Kaufman, Michael Krivelevich, and Dana Ron; Constraint Solving via Fractional Edge Covers, Martin Grohe and Dániel
Marx; Testing Graph Isomorphism, Eldar Fischer and Arie Matsliah; Efficient Construction of Unit Circular-Arc Models, Min Chih Lin and Jayme L. Szwarcfiter, On The Chromatic Number of Some Geometric
Hypergraphs, Shakhar Smorodinsky; Session 4B: A Robust Maximum Completion Time Measure for Scheduling, Moses Charikar and Samir Khuller; Extra Unit-Speed Machines are Almost as Powerful as Speedy
Machines for Competitive Flow Time Scheduling, Ho-Leung Chan, Tak-Wah Lam, and Kin-Shing Liu; Improved Approximation Algorithms for Broadcast Scheduling, Nikhil Bansal, Don Coppersmith, and Maxim
Sviridenko; Distributed Selfish Load Balancing, Petra Berenbrink, Tom Friedetzky, Leslie Ann Goldberg, Paul Goldberg, Zengjian Hu, and Russell Martin; Scheduling Unit Tasks to Minimize the Number of
Idle Periods: A Polynomial Time Algorithm for Offline Dynamic Power Management, Philippe Baptiste; Session 4C: Rank/Select Operations on Large Alphabets: A Tool for Text Indexing, Alexander Golynski,
J. Ian Munro, and S. Srinivasa Rao; O(log log n)-Competitive Dynamic Binary Search Trees, Chengwen Chris Wang, Jonathan Derryberry, and Daniel Dominic Sleator; The Rainbow Skip Graph: A
Fault-Tolerant Constant-Degree Distributed Data Structure, Michael T. Goodrich, Michael J. Nelson, and Jonathan Z. Sun; Design of Data Structures for Mergeable Trees, Loukas Georgiadis, Robert E.
Tarjan, and Renato F. Werneck; Implicit Dictionaries with O(1) Modifications per Update and Fast Search, Gianni Franceschini and J. Ian Munro; Session 5A: Sampling Binary Contingency Tables with a
Greedy Start, Ivona Bezáková, Nayantara Bhatnagar, and Eric Vigoda; Asymmetric Balanced Allocation with Simple Hash Functions, Philipp Woelfel; Balanced Allocation on Graphs, Krishnaram Kenthapadi
and Rina Panigrahy; Superiority and Complexity of the Spaced Seeds, Ming Li, Bin Ma, and Louxin Zhang; Solving Random Satisfiable 3CNF Formulas in Expected Polynomial Time, Michael Krivelevich and
Dan Vilenchik; Session 5B: Analysis of Incomplete Data and an Intrinsic-Dimension Helly Theorem, Jie Gao, Michael Langberg, and Leonard J. Schulman; Finding Large Sticks and Potatoes in Polygons,
Olaf Hall-Holt, Matthew J. Katz, Piyush Kumar, Joseph S. B. Mitchell, and Arik Sityon; Randomized Incremental Construction of Three-Dimensional Convex Hulls and Planar Voronoi Diagrams, and
Approximate Range Counting, Haim Kaplan and Micha Sharir; Vertical Ray Shooting and Computing Depth Orders for Fat Objects, Mark de Berg and Chris Gray; On the Number of Plane Graphs, Oswin
Aichholzer, Thomas Hackl, Birgit Vogtenhuber, Clemens Huemer, Ferran Hurtado, and Hannes Krasser; Session 5C: All-Pairs Shortest Paths for Unweighted Undirected Graphs in o(mn) Time, Timothy M. Chan;
An O(n log n) Algorithm for Maximum st-Flow in a Directed Planar Graph, Glencora Borradaile and Philip Klein; A Simple GAP-Canceling Algorithm for the Generalized Maximum Flow Problem, Mateo Restrepo
and David P. Williamson; Four Point Conditions and Exponential Neighborhoods for Symmetric TSP, Vladimir Deineko, Bettina Klinz, and Gerhard J. Woeginger; Upper Degree-Constrained Partial
Orientations, Harold N. Gabow; Session 7A:
On the Tandem Duplication-Random Loss Model of Genome Rearrangement, Kamalika Chaudhuri, Kevin Chen, Radu Mihaescu, and Satish Rao; Reducing Tile Complexity for Self-Assembly Through Temperature
Programming, Ming-Yang Kao and Robert Schweller; Cache-Oblivious String Dictionaries, Gerth Střlting Brodal and Rolf Fagerberg; Cache-Oblivious Dynamic Programming, Rezaul Alam Chowdhury and Vijaya
Ramachandran; A Computational Study of External-Memory BFS Algorithms, Deepak Ajwani, Roman Dementiev, and Ulrich Meyer; Session 7B: Tight Approximation Algorithms for Maximum General Assignment
Problems, Lisa Fleischer, Michel X. Goemans, Vahab S. Mirrokni, and Maxim Sviridenko; Approximating the k-Multicut Problem, Daniel Golovin, Viswanath Nagarajan, and Mohit Singh; The Prize-Collecting
Generalized Steiner Tree Problem Via A New Approach Of Primal-Dual Schema, Mohammad Taghi Hajiaghayi and Kamal Jain; 8/7-Approximation Algorithm for (1,2)-TSP, Piotr Berman and Marek Karpinski;
Improved Lower and Upper Bounds for Universal TSP in Planar Metrics, Mohammad T. Hajiaghayi, Robert Kleinberg, and Tom Leighton; Session 7C: Leontief Economies Encode NonZero Sum Two-Player Games, B.
Codenotti, A. Saberi, K. Varadarajan, and Y. Ye; Bottleneck Links, Variable Demand, and the Tragedy of the Commons, Richard Cole, Yevgeniy Dodis, and Tim Roughgarden; The Complexity of Quantitative
Concurrent Parity Games, Krishnendu Chatterjee, Luca de Alfaro, and Thomas A. Henzinger; Equilibria for Economies with Production: Constant-Returns Technologies and Production Planning Constraints,
Kamal Jain and Kasturi Varadarajan; Session 8A: Approximation Algorithms for Wavelet Transform Coding of Data Streams, Sudipto Guha and Boulos Harb; Simpler Algorithm for Estimating Frequency Moments
of Data Streams, Lakshimath Bhuvanagiri, Sumit Ganguly, Deepanjan Kesh, and Chandan Saha; Trading Off Space for Passes in Graph Streaming Problems, Camil Demetrescu, Irene Finocchi, and Andrea
Ribichini; Maintaining Significant Stream Statistics over Sliding Windows, L.K. Lee and H.F. Ting; Streaming and Sublinear Approximation of Entropy and Information Distances, Sudipto Guha, Andrew
McGregor, and Suresh Venkatasubramanian; Session 8B: FPTAS for Mixed-Integer Polynomial Optimization with a Fixed Number of Variables, J. A. De Loera, R. Hemmecke, M. Köppe, and R. Weismantel; Linear
Programming and Unique Sink Orientations, Bernd Gärtner and Ingo Schurr; Generating All Vertices of a Polyhedron is Hard, Leonid Khachiyan, Endre Boros, Konrad Borys, Khaled Elbassioni, and Vladimir
Gurvich; A Semidefinite Programming Approach to Tensegrity Theory and Realizability of Graphs, Anthony Man-Cho So and Yinyu Ye; Ordering by Weighted Number of Wins Gives a Good Ranking for Weighted
Tournaments, Don Coppersmith, Lisa Fleischer, and Atri Rudra; Session 8C: Weighted Isotonic Regression under L1 Norm, Stanislav Angelov, Boulos Harb, Sampath Kannan, and Li-San Wang; Oblivious String
Embeddings and Edit Distance Approximations, Tuğkan Batu, Funda Ergun, and Cenk Sahinalp; Spanners and Emulators with Sublinear Distance Errors, Mikkel Thorup and Uri Zwick; Certifying Large
Branch-Width, Sang-il Oum and Paul Seymour; DAG-width—Connectivity Measure for Directed Graphs, Jan Obdrzálek; Session 9A: On the Diameter of Eulerian Orientations of Graphs, Laszlo Babai;
Max-Tolerance Graphs as Intersecton Graphs: Cliques, Cycles, and Recognition, Michael Kaufmann, Jan Kratochvil, Katharina A. Lehmann, and Amarendran R. Subramanian; Subgraph Characterization of Red/
Blue-Split Graphs and Kőnig Egerváry Graphs, Ephraim Korach, Thŕnh Nguyen, and Britta Pies; Critical Chromatic Number and the Complexity of Perfect Packings in Graphs, Daniela Kühn and Deryk Osthus;
On the Number of Crossing-Free Matchings, (Cycles, and Partitions), Micha Sharir and Emo Welzl; Session 9B: Slow Mixing of Glauber Dynamics via Topological Obstructions, Dana Randall; Quantum
Verification of Matrix Products, Harry Buhrman and Robert Spalek; Counting Without Sampling. New Algorithms for Enumeration Problems Using Statistical Physics, Antar Bandyopadhyay and David Gamarnik;
Accelerating Simulated Annealing for Combinatorial Counting Problems, Ivona Bezáková, Daniel Stefankovič, Vijay V. Vazirani, and Eric Vigoda; Query-Efficient Algorithms for Polynomial Interpoltion
over Composites, Parikshit Gopalan; Session 9C: New Lower Bounds for Oblivious Routing in Undirected Graphs, Mohammad T. Hajiaghayi, Robert D. Kleinberg, Tom Leighton, and Harald Räcke; Anytime
Algorithms for Multi-Armed Bandit Problems, Robert Kleinberg; Robbing the Bandit: Less Regret in Online Geometric Optimization Against an Adaptive Adversary, Varsha Dani and Thomas P. Hayes; On the
Competitive Ratio of Evaluating Priced Functions, Ferdinando Cicalese and Eduardo Sany Laber; Randomized Online Algorithms for Minimum Metric Bipartite Matching, Adam Meyerson, Akash Nanavati, and
Laura Poplawski; Session 10: Invited Plenary Abstract: Random Graphs, Alan Frieze, Carnegie Mellon University; Session 11A: Analyzing BitTorrent and Related Peer-to-Peer Networks, David Arthur and
Rina Panigraphy; Oblivious Network Design, Anupam Gupta, Mohammad T. Hajiaghayi, and Harald Räcke; The Price of Being Near-Sighted, Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer; Scalable
Leader Election, Valerie King, Jared Saia, Vishal Sanwalani, and Erik Vee; Deterministic Boundary Recognition and Topology Extraction for Large Sensor Networks, A. Kröller, Sandor P. Fekete, Dennis
Pfisterer, and Stefan Fischer; Session 11B: Improved Lower Bounds for Embeddings into L1 , Robert Krautghamer and Yuval Rabani; l_2^2 Spreading Metrics for Vertex Ordering Problems, Moses Charikar,
Mohammad Taghi Hajiaghayi, Howard Karloff, and Satish Rao; Trees, Markov convexity, James R. Lee, Assaf Naor, and Yuval Peres; An Algorithmic Friedman-Pippenger Theorem on Tree Embeddings and
Applications to Routing, D. Dellamonica, Jr. and Y. Kohayakawa; A Tight Upper Bound on the Probabilistic Embedding of Series-Parallel Graphs, Yuval Emek and David Peleg; Session 11C: Single-Value
Combinatorial Auctions and Implementation in Undominated Strategies, Moshe Babaioff, Ron Lavi, and Elan Pavlov; An Improved Approximation Algorithm for Combinatorial Auctions with Submodular Bidders,
Shahar Dobzinski and Michael Schapira; Revenue Maximization When Bidders Have Budgets, Zoë Abrams; Knapsack Auctions, Gagan Aggarwal and Jason Hartline; Single-Minded Unlimited Supply Pricing on
Sparse Instances, Patrick Briest and Piotr Krysta; Session 12A: The Complexity of Matrix Completion, Nicholas J. A. Harvey, David R. Karger, and Sergey Yekhanin; Relating Singular Values and
Discrepancy of Weighted Directed Graphs, Steven Butler; Matrix Approximation and Projective Clustering via Volume Sampling, Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang; Sampling
Algorithms for l2 Regression and Applications, Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan; The Hunting of the Bump: On Maximizing Statistical Discrepancy, Deepak Agarwal, Jeff M.
Phillips, and Suresh Venkatasubramanian; Session 12 B: A General Approach for Incremental Approximation and Hierarchical Clustering, Guolong Lin, Chandrashekhar Nagarajan, Rajmohan Rajaraman, David
P. Williamson; The Space Complexity of Pass-Efficient Algorithms for Clustering, Kevin L. Chang and Ravi Kannan; Correlation Clustering with a Fixed Number of Clusters, Ioannis Giotis and Venkatesan
Guruswami; On k-Median Clustering in High Dimensions, Ke Chen; Entropy Based Nearest Neighbor Search in High Dimensions, Rina Panigrahy; Session 12C: A Dynamic Data Structure for 3-d Convex Hulls and
2-d Nearest Neighbor Queries, Timothy M. Chan; Efficient Algorithms for Substring Near Neighbor Problem, Alexandr Andoni and Piotr Indyk; Many Distances in Planar Graphs, Sergio Cabello; Pattern
Matching with Address Errors: Rearrangement Distances, Amihood Amir, Yonatan Aumann, Gary Benson, Avivit Levy, Ohad Lipsky, Ely Porat, Steven Skiena, and Uzi Vishne; Squeezing Succinct Data
Structures into Entropy Bounds, Kunihiko Sadakane and Roberto Grossi; Author Index.
2006 / xviii + 1242 pages / Softcover / ISBN-13: 978-0-898716-05-4 / ISBN-10: 0-89871-605-5 /
List Price $156.50 / SIAM Member Price $109.55 / Order Code PR122 | {"url":"http://www.ec-securehost.com/SIAM/PR122.html","timestamp":"2014-04-17T15:46:44Z","content_type":null,"content_length":"19317","record_id":"<urn:uuid:d71688eb-ea14-41a0-b3c9-879a04281f3f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patente US5936720 - Beam characterization by wavefront sensor
The Government has rights to this invention pursuant to Contract No. DE-AC04-94AL85000 awarded by the U.S. Department of Energy.
The present invention is of a two-dimensional (preferably Shack-Hartmann) wavefront sensor that uses micro optic lenslet arrays to directly measure the wavefront slope (phase gradient) and amplitude
of the laser beam. This sensor uses an array of lenslets 21 that dissects the beam 23 into a number of samples. The focal spot locations 24 of each of these lenslets (measured by a detector array 22)
is related to the incoming wavefront slope over the lenslet. By integrating these measurements over the laser aperture, the wavefront or phase distribution can be determined. Because the power
focused by each lenslet is also easily determined, this allows a complete measurement of the irradiance and phase distribution of the laser beam. Furthermore, all the information is obtained in a
single measurement. Knowing the complete scalar field of the beam allows the detailed prediction of the actual beam's characteristics along its propagation path. In particular, the space-beamwidth
product, M.sup.2, can be obtained in a single measurement. The irradiance and phase information can be used in concert with information about other elements in the optical train to predict the beam
size, shape, phase and other characteristics anywhere in the optical train. For purposes of the specification and claims, "characterization" means using information gathered about an energy beam to
predict characteristics of the beam, including but not limited to size, shape, irradiance and phase, anywhere in the train of the beam.
The time-independent electric field of a coherent light beam directed along the z-axis can in general be described by its complex amplitude profile, E(x,y;z)= ##EQU1## where φ(x,y) is the wavefront
or optical path difference referenced to the wavefront on the z-axis. The wavefront is also defined as the surface normal to the direction of propagation. Due to rapid temporal oscillations at
optical frequencies, it is not possible to directly measure the electric field. However, by using a Shack-Hartmann wavefront sensor, one can indirectly reconstruct a discrete approximation to the
time independent electric field at a given plane normal to the z-axis.
A Shack-Hartmann sensor provides a method for measuring the phase and irradiance of an incident light beam. The sensor is based on a lenslet array that splits the incoming light into a series of
subapertures, each of which creates a focus on a detector (usually a CCD camera) (see FIG. 1). The wavefront of the incoming beam is defined as a surface that is normal to the local propagation
direction of the light. Hence distorted light will have a wide collection of propagation directions and the separate lenslets will focus the light into different positions on the detector. By
determining the position of each of these focal spots the wavefront slope over the lenslet can be measured. The wavefront itself must be reconstructed by integrating these wavefront slope
There are several steps in wavefront sensor data reduction. First the sensor is placed in a reference beam and data is acquired with a camera for calibration. Since there are a large number of focal
spots in the field, the image must be divided into a set of small windows, each centered on a focal spot peak, with one window per lenslet. Once the windows have been found, a centroid is computed
using a center-of-mass algorithm: ##EQU2##
With pixels indicated by the i,j indices, a sum is made over the pixels in each window (W.sub.l, where l indicates a particular lenslet) of the irradiance-weighted locations. (When not mentioned
explicitly, similar equations hold for the y-axis.) This results in a reference set of centroids, ρ.sub.x,l measurement of actual data. Note that the reference beam need not be a collimated beam, as
long as its characteristics are known; results are then deviations from this reference.
The first step in analyzing real data is the same as that for the reference data. The data is acquired and digitized and then centroids are computed using the windows calculated in the reference
step. A typical image is shown in FIG. 2. Once these centroids have been obtained, and with the lenslet to CCD distance, L, known, the wavefront slopes can be calculated: ##EQU3## FIG. 3 displays an
example of this calculation for an expanding beam.
The final step is the wavefront reconstruction. This is the solution of the gradient equation, ##EQU4## where the data provides sampled values for the wavefront gradient, ##EQU5## Here θ.sub.x and
θ.sub.y are the measured slope data. The reconstruction proceeds by finding a set of φ.sub.l values that obey the above equations. Commonly used methods include least-squares procedures and marching
methods. Southwell teaches a variety of methods for solving Eq. 4.
One method that has advantages in that it takes account of the irradiance distribution as well as the phase slopes is known as the modal reconstruction method. In this method the data is fit to the
derivatives of an analytical surface described by an expansion in terms of a set of basis functions. One simple case is the use of a polynomial expansion. Thus the phase might be described by
φ=α.sub.00+α.sub.10 x+α.sub.01 y+α.sub.11 xy+. . . +α.sub.ij x.sup.i y.sup.j. (5)
This description uses normal polynomials in x and y. Different basis sets may also be used, such as Hermite and Zernike functions. The derivatives of the phase are then easily determined by ##EQU6##
with a similar expression for the y-derivative. Eq. 6 is then fit to the wavefront slope data using a least-squares method. Since Eq. 5 determines the wavefront phase in terms of these a.sub.ij (with
an arbitrary constant of integration, a.sub.00, which is usually set equal to zero), the complete wavefront has been determined. The irradiance for each lenslet is determined by the denominator of
Eq. 1. FIGS. 4 and 5 illustrate a typical phase and irradiance distribution obtained by this method.
The above provides a complete measurement of the beam irradiance and phase, sampled by the lenslets. This measure is from a single time and location. It can be used for calculation of other
parameters of interest, such as M.sup.2 as discussed below. In addition, the reconstructed wavefront can be numerically propagated to another location using a standard propagation code (e.g.,
LightPipes or GLAD) or other propagation method.
Shack-Hartmann wavefront sensors have been used for many years as sensors for adaptive optics in military high energy laser and atmospheric compensation. However, recently they have been applied to
measurement applications in thermal flow, turbulence and surface measurement. While some of these early sensors were one dimensional in order to make high bandwidth measurements, recently fully
two-dimensional sensors have been developed.
One of the chief limitations on making wavefront sensors is the fabrication of an appropriate lenslet array. Early lenslet arrays were either individually ground and polished lens segments that were
assembled together, or were fabricated with step and repeat processes. With the advent of micro (continuous, diffractive, or binary) optics technology, the methods for fabricating lenslet arrays have
greatly improved. This technology is discussed in detail in parent U.S. patent application Ser. No. 08/678,019.
Micro optics technology is the application of integrated circuit manufacturing technology to the fabrication of optics. Swanson, et al. (U.S. Pat. No. 4,895,790) developed a process for the
fabrication of micro optics known as binary optics. In this process, described in FIG. 12, a series of high contrast masks are used to construct the desired surface profile. Through accurate
alignment of each new mask to the structure fabricated in the previous step, the desired optic may be approximated by a binary structure, much like a binary number can be used to represent higher
values, even though only ones and zeros are used.
There have been many additional means developed for fabricating micro optics. The photolithography etching processes may be used, but it is helpful to reduce the requirements for multi-mask alignment
and the number of required masks. Through the use of special gray mask materials, such as High Energy Beam Sensitive (HEBS) glass (U.S. Pat. No. 5,078,771, to Che-Kuang Wu), the desired structure may
be encoded as optical density variations in the mask. This allows a single mask, with a single exposure, to be used to fabricate the entire structure. In the present invention, micro optic
fabrication method or methods may be used, however the gray mask process may have advantages of resolution and ease of fabrication in the present invention. The steps needed to create an optic using
the gray mask process of the invention are as follows and are shown in FIG. 21:
1. The design of the optic is developed using a series of computer programs to describe the desired lenslet shape and profile. These include a code to define the shape and placement that solves the
exact Huygens-Fresnel equations for a lens, a diffractive analysis code, a photomask layout tool, and various other elements as needed to produce a complete digital description of the lens or lens
2. A photomask is fabricated using the digital data described above whose optical density is a direct function of the desired final optic surface profile height. This mask may be fabricated through a
number of methods, including the use of e-beam sensitive material, variable thickness metal or other coating layer, or though other techniques as appropriate.
3. A thin layer of photoresist is spun onto the substrate (which may be made of fused silica or other appropriate optical material). The mask pattern is transferred to this layer by uv contact or
projection lithography. Once the photoresist is developed, it assumes a surface profile shape similar, and directly related through a known function to, the shape of the desired lens.
4. The substrate and photoresist is etched using chemical, reactive-ion, ion-milling, or other etching process that etches both materials until all of the photoresist has been removed. At this point,
a replica of the lens surface profile has been produced in the substrate.
This series of steps can result in lens profiles that are produced with no alignment between successive steps, a single etch step, and a much smoother profile. With this method, extremely high
precision lens arrays can be made. They have an extremely precise surface profile, with features down to 1 micrometer and 100% fill factor. Furthermore, they can be arranged in many different
configurations to compensate for other effects in the optical system, as taught in parent application Ser. No. 08/648,019.
The other necessary item to make a wavefront sensor is a detection device, preferably a CCD, CID, or CMOS camera. Off-the-shelf cameras, which are low cost, yield excellent results. The camera is
interfaced to a frame grabber for data acquisition into the computer. Once data is acquired, the analysis proceeds along the lines described above. Other detection means may be used to take advantage
of various detector technologies to improve or modify dynamic range, sensitivity, frequency response, spectral sensitivity, and so forth.
In the preferred embodiment, the lenslet array is mounted directly in front of the detector, as appropriate to the application, in a rigid assembly (preferably at the focal point of the lenslet
array) with no optical or other elements located between the lenslet array and the detector. In this arrangement, the sensor head is extremely compact and lightweight. This means that the sensor can
be mounted on common optical mounts, or easily incorporated into other optical systems. This is a significant advantage in some cases where there are severe restrictions on space and weight. The
resulting sensor design is extremely rugged and robust, and has no moving parts. This allows use in non-ideal environments. By coupling with electronic shuttering or pulsed light sources, the sensor
can be used in high vibration environments, such as industrial production-line environments, that would otherwise preclude the use of sensitive optical instruments. For many applications, small size,
weight, and vibration insensitivity allow measurements to be made that were not possible before.
Accurate wavefront slope measurements require that the lenslet array be located a precise, known distance from the detector. There are a number of means to achieve mechanically rigid, precision
spacing. This spacing must be precisely and rigidly controlled, and must be adjusted through a calibration step to a known, predetermined value. Therefore a means is needed for positioning,
measuring, and adjusting this lenslet position.
There are two preferred embodiments for this positioning. The first is shown in FIG. 19. In this figure, the lenslet array 57, is mounted in an insert 53, which is custom fit to the size and shape of
the particular lenslet array. Different inserts may be used that are matched to the lenslet array focal length and required position. The lenslet array 57 may be glued or otherwise affixed to the
insert 53. The insert/lenslet array assembly is positioned in a sensor body 52 which is firmly attached to the detector front plate 51. The sensor body is mounted with threaded or other means with
considerable torque so that the camera front plate and the sensor body are an integral assembly, with no possibility of relative motion. Special tools may be required to accomplish this attachment
step. The CCD or other detector 58 is mounted to the camera front plate 51 in a rigid manner with the use of shims or other means so that precise physical contact is maintained between the detector
chip frame and the camera front plate. This assures that the sensor body will maintain precise physical spacing and alignment to the detector element.
To mount the lenslet array/insert assembly to the sensor body 52 at a known spacing, the following elements are used. One or more shims 54 of a precise thickness and character are used between a step
in the sensor body and the insert. The thickness and selection of these shims can be determined in a calibration step described below. A nylon or other low friction material 55 is used between the
insert 53 and the retaining ring 56 to prevent rotation during final assembly and tightening. The lenslet array 57 and detector 58 are rotationally aligned relative to one another by rotating the
insert 53 relative to the sensor body 52. This is accomplished through the use of a special tool that is designed to interface to notches or tabs on the insert 53. This rotation step may be
accurately accomplished while monitoring the position of the focal spots electronically while rotating the insert/lenslet assembly.
In another embodiment shown in FIG. 20, the lenslet array 58 is designed with mounting means directly on the lenslet substrate 53. In this embodiment, the lenslet array is arranged with a special
mounting surface 57 that is designed to interface to the frame for the detector 51. The detector sensitive surface 54 is mounted rigidly with epoxy, solder, or other means to the detector frame 51.
The detector frame 51 has sufficient depth for the wire bonds 56. It also holds pins or other connection means 55 for electrical connection to other circuits. Precision machining of the various
components is used to assure the proper separation of components. The lenslet array substrate 53 is fixed to the detector frame 51 using UV cured epoxy or other means. The mounting surface 57 is
designed such that there is sufficient space to allow for slight rotational alignments. The use of UV cured epoxy allows the position of focal spots to be monitored during an alignment step while the
epoxy is in place but not yet set. Once the final position has been obtained, then the epoxy is set by application of UV light. This embodiment allows for an extremely compact, hermitically sealed
and robust sensor, in configurations where the detector is not mounted in a separate mechanical fixture.
In order to use these rigid assembly embodiments, the lenslet-to-CCD distance, L in Eq. 2, must be determined experimentally. To accomplish this objective, an optical system 10, comprising laser 16
and lenslet array 18 and detector array 20 (rigidly assembled to the lenslet array to form a wavefront sensor) as shown in FIG. 13 is preferably used to introduce various amounts of wavefront
curvature in a known fashion. A pair of 250 mm focal length achromats 12 and 14 spaced 2 f apart are examples of devices which may be employed. By adjusting the position of the second lens 14
slightly (e.g., with a micrometer driven translation stage), data with known curvature as shown in FIG. 14 may be generated. Aberrations in the lenses can be dealt with by referencing the wavefront
sensor with light that passes through the same lenses at exactly 2 f spacing. It is not necessary to use two lenses in this configuration. A single lens with a point source and an accurate tip/tilt
stage or other means for introducing low order wavefronts with known character can be employed.
To calibrate the sensor by determining the exact separation L of lenslet and CCD, data is acquired as a function of the position of lens 2. A typical summary of this data is presented in FIG. 15, as
a plot of measured wavefront curvature versus input curvature. The slope of this line is related to the exact distance between the lenslet array and the detector. Using this information, this
distance can be adjusted to produce an exact match between nominal lens focal length and camera to lens spacing, preferably through the use of shims or other means that maintain the rigid nature of
the wavefront sensor. Setting L=f (where f is the lenslet focal length) produces the smallest spot size, allowing the largest dynamic range on the sensor. This positioning procedure allows for an
accurate determination of L. Typical post-positioning data after adding the appropriate shims is shown in FIG. 16. This procedure allows the wavefront sensor to be accurately calibrated even though a
rigid mounting system for the lenslet array is used.
One commonly used parameter for characterizing laser beam quality is the space-beamwidth product, or M.sup.2 parameter. Let the complex electrical field distribution of a beam directed along the
z-axis be given by E(x,y,z), with the corresponding spatial frequency domain description of the beam, P(s.sub.x,s.sub.y,z) given by its Fourier transform, P(s.sub.x,S.sub.y, z)=ℑ{E(x,y,z)}. Beam
irradiances in each domain are then defined by I(x,y,z).tbd. P(s.sub.x,s.sub.y z).tbd. The M.sup.2 parameter is then defined by:
M.sub.x.sup.2 =4πσ.sub.x.sbsb.0 σ.sub.s.sbsb.x (7)
where σ.sub.x is irradiance weighted standard deviation at position z in t x-direction, defined by ##EQU7## and σ.sub.s.sbsb.x the spatial-frequency standard deviation of the beam along the x-axis ##
EQU8## Note that σ.sub.s.sup.2.sbsb.x is not a function of z, and can be obtained using the Fourier transform of the electric field.
(The first moments of the beam along the x-axis and the s.sub.x -axis are indicated by x and s.sub.x, respectively. The spot size of the beam is w.sub.x (z).tbd.2σ.sub.x. The corresponding y-axis
quantities hold for σ.sub.y.sbsb.0, σ.sub.s.sbsb.y, etc., mutatis mutandis, throughout this description. In addition, the normalizing factor in the denominators shall be indicated by P=∫∫I
(x,y,z.sub.1)dxdy=∫∫P(s.sub.x,s.sub.y,z)ds.sub.x ds.sub.y.)
In the case of a paraxial beam in the z-direction, with an arbitrary reference plane (z.sub.1), the irradiance weighted standard deviation will have an axial distribution given by,
σ.sub.x.sup.2 (z)=σ.sup.2 (z.sub.1)-A.sub.x,1
where A.sub.x,1 is given by the function ##EQU9## The beam waist, or location of minimum irradiance variance, is obtained from Eq. 10: ##EQU10## Substituting back into Eq. 10 yields the relationship
##EQU11## M.sub.x.sup.2 follows immediately from Eq. 7.
These formulas form the basis for defining the space-beamwidth product, M.sup.2. To calculate M.sup.2 from discrete irradiance and phase measurements requires appropriate processing of the data. The
present invention provides three exemplary methods of equal validity, dependent upon experimental parameters such as instrument noise, resolution, and dynamic range, or depended upon wavefront and
irradiance distribution characteristics. The three methods may be summarized as gradient method, curvature removal method, and multiple propagation method, and are next discussed.
It should be noted that Eqs. 13 and 11 are not derived from series expansions in the vicinity of the beam waist, but are analytical derivations dependent only upon the paraxial wave equation, the
paraxial propagation assumption, and the Fourier transform relationships between the complex electric field amplitude (E(x,y,z)) and the spatial-frequency beam description (P(s.sub.x,s.sub.y,z)).
Gradient Method. As shown previously, it is possible to obtain a discrete description of the beam electric field amplitude and phase in a given plane normal to the z-axis. As part of the measuring
process, discrete values ##EQU12## are also obtained. By means of the above formulae and standard numerical integration techniques one can then obtain values for M.sup.2 and the waist locations,
The sequence is as follows, and is referred to as the gradient method. (See FIG. 6.) From the Shack-Hartmann sensor, the distribution of irradiance, I(x,y,z) and wavefront slope, ##EQU13## are
obtained. From these, the electric field, ##EQU14## is calculated. The spatial-frequency electric field distribution, P(s.sub.x,s.sub.y,z.sub.1), is derived using a Fourier transform algorithm, such
as the fast Fourier transform (FFT). From these the irradiance distributions in both domains, I(x,y,z.sub.1) and P(s.sub.x,s.sub.y,z.sub.1), are obtained, whence numerical values for the variances,
σ.sub.x.sup.2 (z.sub.1) and σ.sub.s.sup.2.sbsb.x, are calculated. Concurrently, the integral of Eq. 11 is computed by using the directly measured values of ##EQU15## with the results being used in
Eqs. 13 and 12 to produce the waist location and irradiance variance. The waist irradiance standard deviation and the spatial-frequency standard deviations immediately yield the M.sup.2 parameter per
Eq. 7.
Curvature Removal Method. The M.sup.2 of a laser beam is completely independent on its overall curvature. Hence, performing an operation on the beam that affects its curvature will not affect the
resultant value of M.sup.2. Many previous methods for measuring M.sup.2 depend on this fact, in that a weak focusing lens is introduced, and the second moment of the beam measured at differing Z
locations. The weak lens is used to assure that all of the light arrives at the detector and to reposition the beam waist such that measurements are made near this waist. In general, this gives the
best sensitivity to the measurement process.
Since a Shack-Hartmann wavefront sensor gives a complete measure of the irradiance and phase distribution of the light, the same operation can be performed without using a physical lens. Wavefront
curvature may be added or subtracted from the digitally stored irradiance and phase distribution without affecting the M.sup.2 of the beam. This operation can then be performed as part of the
numerical process of determining M.sup.2, without need to introduce a physical beam.
To compute M.sup.2, information at the waist (denoted by the subscript 0 in equations 7-13) is needed. The waist is that plane that has infinite effective radius of curvature. With a wavefront sensor
measurements may be made at another location, however. It is somewhat difficult to construct the location of the waist, and hence determine the second moment at the waist as required by equation 7.
In the gradient method, this was the primary object: to use the gradient information (also produced by the wavefront sensor) to compute the location and size of the waist, so that M.sup.2 can be
determined. However, since wavefront curvature has no affect on the M.sup.2 calculation, an artificial waist can be created by removing the average curvature from the beam. This can be done by
fitting the wavefront to a polynomial with second order terms, such as in Eq. 5. These second order terms are related to the radius of curvature of the real beam R(z.sub.1). The wavefront
corresponding to the fit can then be subtracted out of the data, and the M.sup.2 value computed through application of Eqs. 7-13, where the measurement plane is also the waist plane.
This method, referred to as the curvature removal method, has several advantages. It is simple to implement, and requires a minimum of calculations to determine M.sup.2. The calculation of M.sup.2
does not rely on determination of the waist plane or the waist size, and is thus somewhat less sensitive to noise or other errors. However, often these are desirable parameters as well. Hence
additional calculations are needed to calculate the waist distance and size.
The real beam spot size propagation equation states: ##EQU16## Furthermore, the real beam radius of curvature is given by ##EQU17## R(z.sub.1) is known from the curvature removal step.
Since the irradiance and phase of the beam is known (at an arbitrary plane z.sub.1), and the radius of curvature R(z.sub.1) was determined in order to remove curvature from the beam, all of the
information is available that is needed for determining the waist size and location. Using Eq. 14 the real beam can be propagated (numerically) back to the waist. Thus the waist size is given by, ##
EQU18## and its location by, ##EQU19## This gives a complete description of the beam at both the waist and the measurement planes and a calculation of M.sup.2.
One disadvantage of this method is that the calculation of waist size and location depend upon the M.sup.2 calculation. As long as an accurate value of M.sup.2 has been obtained, then these values
are also accurate. However, it has been shown that M.sup.2 is extremely sensitive to noise far from the laser beam center, and from truncation of the field at the edge of the detector. In this case
the inaccurate M.sup.2 values will also lead to inaccurate waist size and location values. In this respect the gradient method is better. The waist location is determined by the wavefront and
wavefront gradients directly. Truncation or other errors will not have a strong effect of the waist size and location, although they will still affect M.sup.2 because of the second order moment
calculation (Eq. 8).
Fourier Propagation Method. Given a known irradiance and phase of the laser beam, the beam irradiance and phase distribution may be determined at another plane, Z, through the Fresnel integral: ##
EQU20## This equation may be written as the Fourier transform of the E field modified by the appropriate phase factor, or ##EQU21## This expression may be discretized and the discrete Fourier
Transform (or Fast Fourier Transform, FFT) used to calculate the results. It has been shown that the Fast Fourier Transform is an efficient algorithm which can be readily implemented on common
computers. This efficient algorithm allows the E field to be calculated at a new Z location very quickly.
Since the field can be determined at a new Z location, it is also straightforward to calculate the field at a number of locations, Z.sub.j. The irradiance distribution is calculated from the field as
shown previously. The second order moment of the irradiance distribution can be calculated from the field at each of these locations (σ.sub.j). These second order moments should obey Eqs. 14 and 15.
This equation can be fit, using a least squares method, to the measured values of σ.sub.j. Thus, the values of M.sup.2, W.sub.0 and Z.sub.0 can be determined.
This method, referred to as the Fourier propagation method, has several advantages. It does not calculate any of the parameters with better accuracy than the others, as in the curvature removal
method. All of the parameters are determined from the basic propagation of the light itself. It is also more independent of the irradiance distribution. Thus the defining equation are extremely
simple and robust. However, it does rely on an accurate Fourier propagation. This can be difficult because for sampling, aliasing, and guard band issues. These problems are mitigated through care in
the design of the propagation algorithm, and because the integrals are generally performed for the least stressing case of Eq. 19, that is for propagation over long distances or near the focus of a
simulated lens. It may be advantageous to add a simulated lens to the calculation. In that case the first phase factor in Eq. 20 cancels out, and minimum aliasing occurs. It should also be noted
that, since the wavefront gradients are also known, an appropriate grid may be selected algorithmically so that aliasing and other effects can be minimized.
The invention is further illustrated by the following non-limiting examples.
EXAMPLE 1
In order to determine the sensitivity of the invention, a number of different modeled beams were created. This allowed for a check on the technique of the invention with known conditions, without
having to consider the effects of noise or experiment errors. To this end, the laser beam was modeled with either a Gaussian or sech.sup.2 propagation profile, and the effect of various parameters
was considered. The modeled beam was broken into the appropriate samples to model the lenslet array and detector, and the equations above were used to determine M.sup.2. For calculations to obtain
beam characteristics, the integrals in Eqs. 8, 9, and 11 are replaced with discrete sums over validly measured values. All Fourier transforms are performed using standard discrete Fourier transform
methods, and the fast Fourier transform (FFT) algorithm when possible.
Elliptical Gaussian beams were modeled, adjusted in piston by setting the phase equal to zero on the z-axis. FIG. 7 details the results of modeling two Gaussian beams of differing waist size. For
these beams, the M.sup.2 parameter is unity. The smaller beam was propagated over several Rayleigh ranges, and the larger over a full Rayleigh range. In each case the invention correctly calculated
the M.sup.2 parameter based solely on a sampling of the wavefront at a given (but unknown to the invention) distance from the waist. A similar computation was conducted with a Gaussian beam with a
constant 1.3 milliradian tilt, or roughly one wave across the beam diameter. Again the invention correctly calculated a value of unity for the M.sup.2 parameter throughout the range tested.
The model was also tested on non-Gaussian beam profiles. FIG. 8 depicts the results for a beam with a hyperbolic secant squared propagation profile, which has a theoretical M.sup.2 of 1.058. The beam
was modeled with a flat phase front at z=0, simulating a beam waist, and then propagated over the distance shown (roughly one Rayleigh range) using a commercial propagation program (LightPipes™).
As another check on the invention, beams with various levels and types of aberration were examined, as shown in FIG. 9 (M.sub.x.sup.2 values are shown). Four types of aberration were examined, based
upon four Zernike polynomial aberration functions: astigmatism with axis at .+-.45.degree. (U.sub.20), astigmatism with axis at 0 triangular astigmatism with base on x-axis (U.sub.30); and triangular
astigmatism with base on y-axis (U.sub.33 ). The invention correctly calculated an M.sup.2 value near unity for U.sub.22 astigmatism, as well as showing increasing values of M.sup.2 for increased
U.sub.20, U.sub.30, and U.sub.33 astigmatism.
Of concern in the use of the invention is the granularity of the reconstructed wavefront and the effect this would have on the computation of M.sup.2. This was tested by examining the results of the
invention when sampling a modeled Gaussian beam at the waist. The invention correctly calculated the M.sup.2 parameter once information was available from several lenslets. Accuracy remained within a
few percent until the beam size (2σ) reached about 45% of the total aperture. (See FIG. 10.) At this point, in a zero noise environment, detectable energy from the beam just reaches to the edge of
the aperture. Thus all beam energy outside the aperture is below the sensitivity threshold of the detector. However, once energy which would otherwise be detectable fell outside of the detector
aperture, the value of the M.sup.2 parameter determined by the invention drops. We also found, as shown in FIG. 11, that there was no need to go to an extreme number of lenslets in order to obtain
good results in a low-noise environment. It is important to note that this set of results are for a Gaussian beam at the waist, and as a result there were no beam aberrations. It is believed that the
invention will correctly calculate M.sup.2 as long as the spatial structure of the aberration is larger than twice the lenslet spacing.
EXAMPLE 2
Once a wavefront sensor according to the invention was assembled and calibrated according to the invention, a series of laser beams were measured to experimentally determine M.sup.2 to test the
methods of the invention. The reference beam was an expanded, collimated Helium-Neon (HeNe) laser. The laser source was a variety of different Helium-Neon lasers operated in different conditions.
This way, a number of different lasers with different beam sizes and aberration content could be tested. Three basic laser sources were used in this example. The first was a low quality HeNe that is
used as an alignment and test laser at WaveFront Sciences, Inc. This laser was attenuated with several neutral density filters in order to reduce the peak irradiance to a level that did not saturate
the sensor. It was tested with a wavefront sensor constructed from a Cohu 6612 modified camera and a 2.047-mm focal length, 0.072-mm diameter lenslet array. This combination was aligned and
calibrated using the principles of the invention. The resulting measurements are presented in FIG. 18 as a table of measured values. For this case the M.sup.2 values were 1.375 and 1.533 for x and y
respectively. This matches well with the observations of the way this beam propagated. There was considerable non-Gaussian shape to the irradiance distribution and 0.038 m of phase aberration. The
waist size, waist position, and real beam spot size are also shown. Since the laser was set up approximately 0.5 m from the wavefront sensor, the measured waist position of 0.44 (x) and 0.48 (y) are
in good agreement. The waist size of 0.242 is in good agreement with the published specifications for this laser.
The same laser was used with a 1 mr tilt introduced between the laser and the wavefront sensor. In this case, very similar values for M.sup.2, Z.sub.0, W and W.sub.0 were obtained. This is a good
indication that tilt has little effect on the overall measurement. This is important because it means that even poorly aligned beams may be measured.
In order to measure good beams, a series of experiments was conducted at the National Institute of Standards, using lasers with known good beam quality. A series of data sets were acquired at varying
distances from the laser. A wavefront sensor with an 8.192-mm focal length, 0.144-mm diameter lenslet array was used for these measurements. Extreme care was taken not to aberrate the beam in the
process of the measurement by using low reflectivity, but high quality mirrors as attenuators. The laser in this case had quite a small waist size, and hence the beam expanded quite rapidly. In FIG.
18 the NIST-HeNe1 and 5 data sets were measured 1650 mm from the laser output mirror and the NIST HeNe 7 data set was measured at 2400 mm from the HeNe laser output mirror. In this case all of the
measurements had M.sup.2 in the 1.1-1.3 regime, except for the farthest from the laser (2.4 m). In this case the beam had overfilled the detector, so that it was slightly clipped in the vertical
direction. This lead to higher M.sup.2 values (1.4) for this case.
The final example was for a larger beam that was directed through and acousto-optic modulator. The same 8.192-mm, 0.144-mm wavefront sensor was used in this case. The M.sup.2 values were measured by
the wavefront sensor to determine the quality of the beam after passing through this optic. A comparison of the beam both with and without the modulator allowed a determination of the effect of the
modulator on the beam quality. FIG. 17 shows the irradiance and phase distributions for this case. In FIG. 18, the tabulated values for m.sup.2 for this case are 1.21 and 1.29 (x and y respectively).
This is in good agreement with propagation performance of this beam. While the wavefront was relatively flat for this case (0.012 m RMS WFE), the larger beam size and non-Gaussian beam shape lead to
larger M.sup.2 values.
In all of these examples the measured M.sup.2 values, as well as the calculations of waist position and waist size were in good agreement with expected values.
The preceding examples can be repeated with similar success by substituting the generically or specifically described reactants and/or operating conditions of this invention for those used in the
preceding examples.
Although the invention has been described in detail with particular reference to these preferred embodiments, other embodiments can achieve the same results. Variations and modifications of the
present invention will be obvious to those skilled in the art and it is intended to cover in the appended claims all such modifications and equivalents. The entire disclosures of all references,
applications, patents, and publications cited above are hereby incorporated by reference.
The accompanying drawings, which are incorporated into and form a part of the specification, illustrate several embodiments of the present invention and, together with the description, serve to
explain the principles of the invention. The drawings are only for the purpose of illustrating a preferred embodiment of the invention and are not to be construed as limiting the invention. In the
FIG. 1 illustrates the basic configuration of a wavefront sensor;
FIG. 2 is an image of data from a Shack-Hartmann sensor; light gray spots are the centroid positions of a calibration beam;
FIG. 3 is a vector plot displaying wavefront slopes of an expanding beam;
FIG. 4 is a phase map for a Helium-Neon (HeNe) laser beam, with tilt removed; curvature dominates the phase structure;
FIG. 5 is an irradiance map for the Helium-Neon laser;
FIG. 6 illustrates computation of waist location and M.sup.2 in a single measurement; Shack-Hartmann sensor gives a set of wavefront slopes and intensities used to reconstruct an electric field
wavefront (normalized to zero phase on the z-axis); spatial frequency field is obtained by Fourier transform; application of other relationships described in the text yield waist location and the
M.sup.2 parameter;
FIG. 7 is a graph of beam size and calculated M.sup.2 parameter for two Gaussian He--Ne beams (ideal model); for both modeled beams, the calculated M.sup.2 parameter is unity throughout the range of
FIG. 8 is a graph of a beam profile of hyperbolic secant squared beam with the modeled beam propagated using LightPipes;
FIG. 9 is a graph showing the effect of wavefront error on M.sup.2 for various levels of selected aberrations;
FIG. 10 is a graph of calculated M.sup.2 versus beam size for Gaussian beam modeled at waist; for a detector consisting of a 40 lenslets, each 250 μm on a side; once detectable beam energy no longer
falls on the detector, there is a loss of irradiance in the higher spatial frequencies, resulting in a decrease in the M.sup.2 parameter;
FIG. 11 is a graph showing the effect of increasing the number of lenslets across the aperture;
FIG. 12 illustrates the binary optic fabrication sequence for a micro optic with four phase levels, as taught by Swanson;
FIG. 13 is a schematic of the test setup for calibrating a wavefront sensor of the invention; the second lens 14 is adjusted to provide different amounts of wavefront curvature to the sensor;
FIG. 14 is a three-dimensional plot of the curved wavefront measured with the wavefront sensor of the invention for a 0.95 m radius of curvature wavefront;
FIG. 15 is a graph of the measured radius of curvature versus incident radius of curvature; the slope error reprents a displacement of the lenslet array by 0.01" from the nominal focal plane;
FIG. 16 is a graph of the measured radius of curvature versus incident radius of curvature after adding appropriate shims; the slope of near unity indicates the lenslet array is positioned one focal
length from the detector;
FIGS. 17(a) and (b) display a HeNe beam measured with the wavefront sensor of the invention: (a) Beam irradiance; and (b) Beam phase;
FIG. 18 is a table of calculated M.sup.2 obtained from various HeNe laser beams using the wavefront sensor and other methods of the invention;
FIG. 19 illustrates one embodiment of the rigid sensor/detector array combination of the invention; and
FIG. 20 illustrates a second embodiment of the rigid sensor/detector array combination of the invention.
FIG. 21 depicts the sequence of operations needed to fabricate micro optics using the gray scale process.
1. Field of the Invention (Technical Field)
The present invention relates to methods and apparatuses for beam characterization.
2. Background Art
In many instances where a laser beam is needed, it is important to know something about the laser beam quality. The beam quality affects how the beam will propagate, as well as how tightly it will
focus. Unfortunately, beam quality is a somewhat elusive concept. Numerous attempts have been made to define beam quality, stretching back almost to the invention of the laser. In practice, any one
of these measures will have some flaw in certain situations, and many different measures are often used. Among these is the M.sup.2 parameter (space-beamwidth product).
The irradiance (or intensity) and phase distribution of a laser beam are sufficient for determining how the beam will propagate or how tightly it can be focused. Most of the beam quality measurements
rely on characterizing the beam from only the irradiance distribution, since obtaining this is a comparably straightforward process. However, if both the irradiance and phase distribution could be
obtained simultaneously, then all the information would be available from a single measurement.
In general, phase is measured with an interferometer. Interferometers are sensitive instruments that have been extensively developed. They can be used to measure laser beams by using a shearing or
filtered Mach-Zehnder arrangement, and can produce the desired irradiance and phase distribution. Unfortunately, these systems rapidly become complex, and are slow, unwieldy, sensitive to alignment,
as well as being expensive.
A Shack-Hartmann wavefront sensor is an alternative method for measuring both irradiance and phase. Such sensors have been developed by the military for defense adaptive optics programs over the last
25 years. This sensor is a simple device that is capable of measuring both irradiance and phase distributions in a single frame of data. The advent of micro-optics technology for making arrays of
lenses has allowed these sensors to become much more sophisticated in recent years. In addition, advances in charge coupled device (CCD) cameras, computers and automated data acquisition equipment
have brought the cost of the required components down considerably. With a Shack-Hartmann wavefront sensor it is relatively straightforward to determine the irradiance and phase of a beam. This
allows not only the derivation of various beam quality parameters, but also the numerical propagation of the sampled beam to another location, where various parameters can then be measured.
M.sup.2 has become a commonly used parameter to generally describe near-Gaussian laser beams. It is especially useful in that it allows a prediction of the real beam spot size and average irradiance
at any successive plane using simple analytic expressions. This allows system designers the ability to know critical beam parameters at arbitrary planes in the optical system. Unfortunately,
measuring M.sup.2 is somewhat difficult. To date, obtaining M.sup.2 has generally required measurements of propagation distributions at multiple locations along the beam path. Although efforts have
been made to obtain this parameter in a single measurement, these still suffer from the need to make simultaneous measurements at more than one location. The present invention permits calculation of
the parameter using only a single measurement at a single location.
The following references relate to development of the present invention: A. E. Siegman, "New developments in laser resonators", SPIE Vol.1224, Optical Resonators (1990), pp.2-14; H. Weber, "Some
historical and technical aspects of beam quality", Opt. Quant.Elec. 24 (1992), S861-S864; M. W. Sasneft, and T. F. Johnston, Jr., "Beam characterization and measurement of propagation attributes",
SPIE Vol. 1414, Laser Beam Diagnostics (1991), pp. 21-32; D. Malacara, ed., Optical Shop Testing, John Wiley & Sons, Inc., 1982; D. Kwo, G. Damas, W. Zmek, "A Hartmann-Shack wavefront sensor using a
binary optics lenslet array", SPIE Vol. 1544, pp. 66-74 (1991); W. H. Southwell, "Wave-front estimation from wavefront slope measurements", JOSA 70 (8), pp.993-1006 (August, 1980); J. A. Ruff and A.
E. Siegman, "Single-pulse laser beam quality measurements using a CCD camera system", Appl.Opt., Vol.31, No.24 (Aug. 20, 1992) pp.4907-4908; Gleb Vdovin, LightPipes: beam propagation toolbox,
ver.1.1, Electronic Instrumentation Laboratory, Technische Universiteit Delft, Netherlands, 1996; General Laser Analysis and Design (GLAD) code, v. 4.3, Applied Optics Research, Tucson, Ariz., 1994;
A. E. Siegman, "Defining the Effective Radius of Curvature for a nonideal Optical Beam", IEEE J. QuantElec., Vol.27, No.5 (May 1991), pp.1146-1148; D. R. Neal, T. J. O'Hern, J. R. Torczynski, M. E.
Warren and R. Shul, "Wavefront sensors for optical diagnostics in fluid mechanics: application to heated flow, turbulence and droplet evaporation", SPIE Vol. 2005, pp. 194-203 (1993); L. Schmutz,
"Adaptive optics: a modern cure for Newton's tremors", Photonics Spectra (April 1993); D. R. Neal, J. D. Mansell, J. K Gruetzner, R. Morgan and M. E. Warren, "Specialized wavefront sensors for
adaptive optics", SPIE Vol. 2534, pp. 338-348 (1995); MATLAB for Windows, v. 4.2c.1, The MathWorks, Inc., Natick, Mass., 1994; and J. Goodman, Introduction to Fourier Optics, McGraw-Hill, (New York,
The present invention is of a wavefront sensor that is capable of obtaining detailed irradiance and phase values from a single measurement. This sensor is based on a microlens array that is built
using micro optics technology to provide fine sampling and good resolution. With the sensor, M.sup.2 can be determined. Because the full beam irradiance and phase distribution is known, a complete
beam irradiance and phase distribution can be predicted anywhere along the beam. Using this sensor, a laser can be completely characterized and aligned. The user can immediately tell if the beam is
single or multimode and can predict the spot size, full irradiance, and phase distribution at any plane in the optical system. The sensor is straightforward to use, simple, robust, and low cost.
The present invention is of a method and apparatus for characterizing an energy beam (preferably a laser), comprising a two-dimensional wavefront sensor comprising a lenslet array and directing the
beam through the sensor. In the preferred embodiment, the wavefront sensor is a Shack-Hartmann wavefront sensor. Wavefront slope and irradiance (preferably at a single location along the beam) are
measured, wavefront slope distribution is integrated to produce wavefront or phase information, and a space-beamwidth product is calculated (preferably by, as later defined, the gradient method, the
curvature removal method, or the Fourier propagation method). A detector array is employed, such as a charge coupled device (CCD) camera, a charge inductive device (CID) camera, or a CMOS camera,
rigidly mounted behind the wavefront sensor, ideally at the focal point of the lenslet array. Shims are used to adjust spacing between the wavefront sensor and the detector array, following
computation of shim size and placement to properly adjust the spacing. The sensor is calibrated, preferably against known optically induced wavefront curvature or tilt, and most preferably by
generating a reference beam and computing one or more spot positions (using a computation such as the center-of-mass computation, matched filter computation, or correlation computation).
The invention is additionally of a method of fabricating micro optics comprising: generating a digital description of the micro optic; fabricating a photomask; lithographically projecting the
photomask's pattern onto a layer of photoresist placed on a substrate; etching the photoresist layer and the substrate until all photoresist has been removed; and applying this method to fabricating
lenslet arrays for Shack-Hartmann wavefront sensors.
A primary object of the present invention is provide a means for laser beam characterization using only a single measurement at a single location, which is also the primary advantage of the
Other objects, advantages and novel features, and further scope of applicability of the present invention will be set forth in part in the detailed description to follow, taken in conjunction with
the accompanying drawings, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. The objects and advantages
of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
This application is a continuation-in-part application of U.S. patent application Ser. No. 08/678,019, entitled "Automated Pupil Remapping With Binary Optics", to Daniel R. Neal, filed on Jul. 10,
1996, now U.S. Pat. No. 5,864,381 and claims the benefit of U.S. Provisional Patent Application Ser. No. 60/051,863, entitled "Beam Characterization by Wavefront Sensor", to Daniel R. Neal, et al.,
filed on Jul. 7, 1997, and the specifications thereof are incorporated herein by reference. | {"url":"http://www.google.es/patents/US5936720?dq=flatulence","timestamp":"2014-04-20T23:38:30Z","content_type":null,"content_length":"142473","record_id":"<urn:uuid:9a084faf-b6d4-4552-8f78-9cddb6b3e7f9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
linear transformation
January 11th 2007, 06:36 AM
linear transformation
talking of linear functions: the theorem says a function f: Rn->Rm is linear if and only if there exists the matrix A such that f(x)=Ax, the A is unique and the euclidean basis in Rn and Rm are
fixed. Ok, I understood the proof and the remark that the linear application f is associated with matrix A, whose columns are the images of the vectors of the euclidean basis of Rn according to
f. But then, how do we arrive to say that rank of A is equal to the dimension of the image space Im(f) ?:confused:
I would be extremely grateful if you could show me the logic behind this.. infinitely many times thanks
January 11th 2007, 07:14 AM
Say, for $m\times n$ matrix,
$f_{A}:\mathbb{R}^n\to \mathbb{R}^m$
Now any element in $\mathbb{R}^n$ can be expressed as,
Then, the image is the set of all linear combinations,
$S=\{ f_A(c_1\bold{e}_1+...+c_n\bold{e_n}) \}$
It is a linear transformation,
$S=\{ c_1f_A(\bold{e}_1)+...+c_nf_A(\bold{e}_n) \}$
But note that,
Correspond to the coloum vectors of the matrix $A$.
Thus, $S$ is a space spanned by the linear combinations of the coloum vectors. That means it has a dimension which is called the rank of $A$. Alternatievly, it is the dimension of the set
mentioned above, which is the dimension of the image space.
January 11th 2007, 07:27 AM
Umm..let's check if I have undestood the entire thing rightly:
the image is the space geneated by the columns of A. The rank is the max number of linearly indipendent vectors, so the rank is the dimension of the image. Then, since the image is generated by
the columns of the matrix, if these are vectors linearly independent then they are a basis of the image. Otherwise let's eliminate the dependent ones and the lefts will be the basis.
Am I wrong or have I understood your explaination?
January 11th 2007, 08:43 AM
Umm..let's check if I have undestood the entire thing rightly:
the image is the space geneated by the columns of A. The rank is the max number of linearly indipendent vectors, so the rank is the dimension of the image. Then, since the image is generated by
the columns of the matrix, if these are vectors linearly independent then they are a basis of the image. Otherwise let's eliminate the dependent ones and the lefts will be the basis.
Am I wrong or have I understood your explaination?
Sounds good to me.
What I have shown is that the image of the function is the space spanned by all linear combinations of the column vectors. Further, the colomun space by defintion is space spanned by linear
combinations of the colomn vectors. Thus, they are really the same thing. Saying the "Rank" means the dimension of basis for the latter case. And saying "Dimension" means the dimension of basis
for former case. Which is the same thing. | {"url":"http://mathhelpforum.com/advanced-algebra/9846-linear-transformation-print.html","timestamp":"2014-04-20T14:59:38Z","content_type":null,"content_length":"9375","record_id":"<urn:uuid:e370bc88-c78b-42bb-906d-992d29346221>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Robot brains? Can't make 'em, can't sell 'em
Why dopes still beat boffins
High performance access to file storage
At every level, even specialists lack conceptual clarity.Let's look at a few examples taken from current academic debates.
We lack a common mathematical language for generic sensory input - tactile, video, rangefinder - which could represent any kind of signal or mixed-up combination of signals. Vectors? Correlations?
Imagine this example. If one were to plot every picture from a live video-feed as a single "point" in a high-dimensional space, a day's worth of images would be like a galaxy of stars. But what shape
would that galaxy have: a blob, a disk, a set of blobs, several parallel threads, donuts or pretzels? At the point scientists don't even know the structure in real-world data, much less the best ways
to infer those structures from incomplete inputs, and to represent them compactly.
And once we do know what kind of galaxies we're looking for, how should we measure the similarity or difference between two example signals, or two patterns? Is this "metric" squared-error, bit-wise,
or probablistic?
Well, in real galaxies, you measure the distance between stars by the usual Pythagorean formula. But in comparing binary numbers, one typically counts the number of different bits (which is like
leaving out Pythagorus' square root). If the stars represented probabilities, the comparisons would involve division rather than subtraction, and would probably contain logarithms. Choose the wrong
formula, and the algorithm will learn useless features of the input noise, or will be unable to detect the right patterns.
There's more: the stars in our video-feed galaxy are strung together in time like pearls on a string,in sequence. but we don't know what kind of (generic) patterns to look for among those stars
-linear correlations, data-point clusters, discrete sequences, trends?
Perhaps every time one image ("star") appears, a specific different one follows, like a black car moving from left to right in a picture. Or maybe one of two different ones followed, as if the car
might be moving right or left. But if the car is black, or smaller (two very different images!), would we still be able to use what we learned about large black moving cars? Or would we need to learn
the laws of motion afresh for every possible set of pixels?
The problems don't end there. We don't know how to learn from mistakes in pattern-detection, to incorporate errors on-the-fly. Nor do we know how to assemble small pattern-detection modules into
usefully large systems. Then there's the question of how to construct or evaluate plans of action or even simple combinations of movements for the robot.
Academics are also riven by the basic question of whether self-learning systems should ignore surprising input, or actively seek it out? Should the robot be as stable as possible, or as
hyper-sensitive as possible?
If signal-processing boffins can't even agree on basic issues like these, how is Joe Tinkerer to create an autonomous robot himself? Must he still specify exactly how many pixels to count in
detecting a wall, or how many degrees to rotate each wheel? Even elementary motion-detection - "Am I going right or left?" - is way beyond the software or mathematical prowess of most homebrew | {"url":"http://www.theregister.co.uk/2007/06/28/softky_robots_part_two?page=2","timestamp":"2014-04-16T18:00:58Z","content_type":null,"content_length":"51612","record_id":"<urn:uuid:5f6a8810-eea6-4361-9440-23e99a237ed7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard Deviation
Standard Deviation and Variance (1 of 2)
The variance and the closely-related
standard deviation
are measures of how
out a distribution is. In other words, they are measures of variability.
The variance is computed as the average squared deviation of each number from its mean. For example, for the numbers 1, 2, and 3, the mean is 2 and the variance is:
The formula (in
summation notation)
for the variance in a
where μ is the mean and N is the number of scores.
When the variance is computed in a
the statistic
(where M is the mean of the sample) can be used. S² is a
estimate of σ², however. By far the most common formula for computing variance in a sample is:
which gives an unbiased estimate of σ². Since samples are usually used to estimate parameters, s² is the most commonly used measure of variance. Calculating the variance is an important part of many
statistical applications and analyses. It is the first step in calculating the standard deviation.
Standard Deviation
The standard deviation formula is very simple: it is the square root of the
. It is the most commonly used measure of spread.
An important attribute of the standard deviation as a measure of spread is that if the mean and standard deviation of a
distribution are known, it is possible to
compute the percentile rank associated with any given score
. In a normal distribution, about 68% of the scores are within one standard deviation of the mean and about 95% of the scores are within two standard deviations of the mean.
The standard deviation has proven to be an extremely useful measure of spread in part because it is mathematically tractable. Many formulas in
statistics use the standard deviation.
next page
for applications to risk analysis and stock portfolio volatility.)
How to compute the standard deviation in SPSS
Free PDF on Computing Basic Statistics with EXCEL | {"url":"http://davidmlane.com/hyperstat/A16252.html","timestamp":"2014-04-21T09:37:34Z","content_type":null,"content_length":"6798","record_id":"<urn:uuid:9ac0df79-8302-45fe-b867-4f2806097b80>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A Spectral Method for Elliptic Equations:
The Neumann Problem
Kendall Atkinson
Departments of Mathematics & Computer Science
The University of Iowa
David Chien, Olaf Hansen
Department of Mathematics
California State University San Marcos
July 6, 2009
Let be an open, simply connected, and bounded region in Rd
, d 2,
and assume its boundary @ is smooth. Consider solving an elliptic partial
di¤erential equation u + u = f over with a Neumann boundary
condition. The problem is converted to an equivalent elliptic problem over
the unit ball B, and then a spectral Galerkin method is used to create
a convergent sequence of multivariate polynomials un of degree n that
is convergent to u. The transformation from to B requires a special
analytical calculation for its implementation. With su¢ ciently smooth
problem parameters, the method is shown to be rapidly convergent. For | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/884/1901589.html","timestamp":"2014-04-17T01:34:07Z","content_type":null,"content_length":"8031","record_id":"<urn:uuid:3d0ad4da-eec6-4847-9131-fa24ec547e6d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Random Coupling Model
The Random Coupling Model: A User's Guide (updated 27 March, 2013)
Steven M. Anlage, Thomas Antonsen, James Hart, Sameer Hemmady, Jen-Hao Yeh, John Rodgers, Gabriele Gradoni, Edward Ott, Xing Zheng
Physics and ECE Departments, University of Maryland
Research funded by the AFOSR-MURI and DURIP programs, with continuing support from AFOSR and
the UMD / ONR Applied Electromagnetics Center
This page provides an overview of the Random Coupling Model (RCM) and its use in predicting HPM effects. The RCM is a method for making statistical predictions of induced voltages and currents for
objects and components contained in complicated (ray-chaotic) enclosures and subjected to RF fields. It is based on simple universal predictions of wave chaos theory and is quantitatively supported
by random matrix theory. The system-specific (non-universal) aspects of the problem are quantified by means of the radiation impedance of the "ports" involved in the problem, as well as prominent
short orbits. Please take a look at our papers, presentations, Frequently Asked Questions (FAQs), and caveats below. We hope to make this model useful and accessible to all interested parties, so
please give us your feedback.
A comprehensive paper describing the Random Coupling Model for the engineering community has been published as IEEE Trans. Electromag. Compat. 54, 758-771 (2012).
A power-point presentation that provides an Overview of the Random Coupling Model
We have also developed a first-principles model of fading based on the Random Coupling Model.
Biniyam Taddese's Ph.D. thesis "SENSING SMALL CHANGES IN A WAVE CHAOTIC SCATTERING SYSTEM AND ENHANCING WAVE FOCUSING USING TIME REVERSAL MIRRORS"
FAQ's about the Random Coupling Model:
How do I know my enclosure is chaotic for ray orbits? Most enclosures are ray-chaotic (its hard to make them otherwise!) The “soda can effect” is a good operational test - in other words the
impedance of the cavity is a strong function of frequency and details of the internal configuration of the components. See also Fig. 8.2 and related discussion in Sameer Hemmady's Ph.D. thesis.
Suppose I don't know the Q, volume, Z[Rad] precisely, is this a problem? Most likely no. The predicted PDFs are not highly sensitive to the loss parameter. Z[Rad] tends to be slowly varying in
frequency. It is most likely OK if you are away from antenna resonances. One possible extension of the RCM (suggested by David Dietz) is to include the possibility of a statistical variation of the
loss parameter k^2/(Dk^2 Q) and Z[rad], thus translating into a slightly different PDF of induced voltages.
I have a wide-band incident signal. Is the RCM still valid? Yes. First of all, the loss parameter is usually not a very strong function of frequency because it depends on the fixed geometry of the
system as well as the fact that the quality factor is a smoothly varying function of frequency with small fluctuations (see Fig. 7 in the paper of Barthelemy, Legrand and Mortessagne, Phys. Rev. E 71
, 016205 (2005). This shows a nice example of how the Ohmic losses vary with frequency in a ray-chaotic microwave enclosure.) Secondly, the frequency dependence of the radiation impedance accounts
for the variation of the coupling of the signal in/out of the enclsoure. Rather than assuming the loss parameter is constant, one could extend the RCM to include the variation of the loss parameter k
^2/(Dk^2 Q) with frequency to make a more accurate estimate of the PDF of induced voltages from the broadband excitation.
What is the minimum number of modes for the RCM model to be applicable? The higher the mode number the better (for best results use overmoded cavities). Generally you would need more than about 50
propagating modes below your lowest frequency of interest. We have empirical evidence that the RCM "degrades gracefully" in the limit of low frequencies. We hope to work on an extension of the RCM to
lower frequencies in the future.
How do I know if my enclosure is overmoded? A general rule-of-thumb would be to look at the ratio of the maximum to minimum transmitted power at a given frequency for your measured cavity ensemble.
The enclosure (cavity) is overmoded if this ratio is more than about 20 dB in magnitude. See also Fig. 8.2 and related discussion in Sameer Hemmady's Ph.D. thesis.
What about cross-talk between ports? This is included in the RCM. See section IV of the paper Phys. Rev. E 74 , 036213 (2006).
Does a port have to be on the surface of the enclosure? No, it can be inside and away from the walls of the enclosure. In fact, this is often the most interesting case because you would like to know
the statistics of induced voltages on an electronic component inside the enclosure.
Does the RCM work for pulsed excitations? Yes, we have developed the theory for the time-domain. The long-term behavior of a cavity excited with a pulse has been studied in detail (Phys. Rev. E. 79
016208 (2009)). We have also examined the contributions of short-orbits to the impedance (Phys. Rev. E 80, 041109 (2009), see below), and this will influence the short-time behavior of excited
systems. Experimental verification is largely complete (Phys. Rev. E 81, 025201(R) (2010)). We have investigated the sensitivity of time-domain signals to the scattering properties of ray-chaotic
enclosures (see the YouTube movie). We have also developed a new sensor paradigm based on the sensitivity of wave scattering in such systems to small perturbations (Appl. Phys. Lett. 95 , 114103
What happens if there is a port in the enclosure that is overlooked? It is incorporated in the model through the scattering it produces, as well as modifications to the Q and cavity volume.
How do I identify the presence of a new object inside my enclosure? The addition of a new object to the enclosure can be detected by a change in the loss parameter (since the object takes up a
certain electromagnetic volume and will have some degree of loss), by the creation or destruction of short-orbit trajectories, or by modification of the radiation impedance of a nearby port.
How many ports do I need to include in the RCM to describe my enclosure? We believe that the most important ports to explicitly include are those that are actively adding energy to the system, and
those that represent the sensitive/susceptible object(s) in the system. On a related note, one might also consider adding single ports that represent classes of more-or-less identical objects in the
enclosure. The present code (Terrapin RCM Solver v1.0) considers only 2-port systems. An extension of this code to a higher number of points can be arranged through Sameer Hemmady.
Suppose I don't have an ensemble of enclosures, can I still use the RCM? Yes, it is often a good approximation to use a frequency average to substitute for an ensemble average. In addition, our work
on removing the influence of short-orbits (Phys. Rev. E 80, 041109 (2009), Phys. Rev. E 81, 025201(R) (2010), see below) allows one to use single-realizations and smaller ranges of frequency
averaging to uncover the universal statistical fluctuations. This means that predictions of induced voltage PDFs will be improved if basic knowledge of short orbits in the system is also available.
I am worried about using a single Q-value. Suppose Q varies from mode to mode? Ray-chaotic systems tend to show small fluctuations of Q with mode number. Also, remember that what counts is the loss
parameter k^2/(Dk^2 Q), not Q by itself. See also the "wide-band" FAQ above.
What about antenna polarization for 3-D enclosures? This effect is included in the radiation impedance of the ports of interest.
Does the RCM take into account field variations associated with the presence of a wall? Yes. The presence of a wall is included in the radiation impedance of the port located near the wall, edge, or
corner of a structure.
Does the short-orbit extension of the RCM take into account multiple short-orbits produced by waves that bounce off of the port? Yes, these multiple bounce short orbits are naturally included through
consideration of the impedance.
Can the RCM work for multiple enclosures conected by apertures? Yes, the RCM has been extended to study the statistics of fields in a cascade of connected reverberant systems. See the paper of
Gradoni, Phys. Rev. E 86, 046204 (2012).
Random Coupling Model: CAVEATS and additional details
What could possibly go wrong?
If you need to predict the outcome of a specific measurement in a specific situation, then the RCM cannot help The RCM provides only statistical predictions. The 'extended RCM' now includes
short-orbit system specific information, in addition to the system specific port information.
If there are strong periodic contributions to the ray dynamics (e.g. short periodic orbits from parallel planes), these will lead to deviations from RCM predictions. Such orbits are now included in
the extended RCM. However, scars, cusps, caustics, singularities, and perhaps “Freak Waves” can produce large local enhancements of electromagnetic fields, and they fall outside of the Random
Coupling Model.
What is the low-frequency limit of the model? Rough rule of thumb: Apply the RCM to mode #50 and greater. One should have at least 3 wavelengths along each dimension of the enclosure. This is still
an open question.
One failure mode of the RCM arises when there is poor coupling to the enclosure (i.e. |S[Rad,11]|~1 or |S[Rad,22]|~1). The poor coupling makes it very difficult to de-embed the antenna properties
from the impedance data. When S[Rad] approaches 1 in magnitude, the Z[Rad] will become very large in magnitude and sensitive to small errors in S[Rad]. This propagates into large values and large
errors for Z[Cav]. This is likely to cause deviations in the induced voltage PDF.
Another failure mode of the RCM arises if the losses in the system are NOT uniformly distributed. If losses are highly localized (e.g. an electrically large aperture that allows many ray trajectories
to exit the system), the assumption of uniform loss may be violated, and the statistical predictions of the RCM may not be strictly valid. Another way to look at it, the properties of the system will
be dominated by short orbits (which are non-universal and non-statistical) because the longer "chaotic" orbits will exit the system through the localized loss mechanism (e.g. an electrically large
When do you NOT want to use this model?
Enclosure Q ~ 1 or less. No reverberation, no chaos, very lossy. The impedance does not fluctuate. It reduces to the radiation impedance.
Enclosure size NOT much larger than wavelength l. A direct numerical solution of such a problem is not sensitive to details and should be employed rather than the RCM. Rule of thumb for validity of
the RCM: cube root of enclosure volume > about 3 wavelengths. Remember: dielectrics inside the enclosure increase it's effective size.
Random Coupling Model: Does it work in 3 Dimensions?
YES! Our results are not in any way predicated on the peculiar 2D “bow-tie” cavity employed in some of our early verification work, nor on it's shape, thickness, or the antenna configuration,
coupling or position. The theory is well established and works in 3D situations. In addition to our own 3D verification results, several other groups have performed demonstrations of RCM-like results
in 3D : Sandia (Warne, et al .): Demonstrated the utility of Z[Rad] in removing the effects of coupling in 3D. They independently discovered that the 3D statistics are governed by a single loss
parameter. ONERA (Parmentier, et al .) demonstrated the equivalence of the S[Rad] and <S> in 3D. They also independently discovered an S-variance ratio relation in 3D data analogous to the
Hauser-Feshbach relation of nuclear physics (see the paper: Phys. Rev. E 73, 046208 (2006)).
Loss Parameter: Does it depend on dimensionality?
The general expression for the loss parameter is k^2/(Dk^2 Q), where k is the wavenumber, Dk^2 is the mean spacing between squared wavenumbers, and Q is the typical Q-factor of the modes (see above).
In 2D cavities the loss parameter can be written as k^2A/(4pQ) (where A is the area of the cavity), whereas in 3D it can be written as k^3V/(2p^2Q) (where V is the volume of the cavity), using the
Weyl formula for mean spacings of the corresponding closed systems. A general discussion of different ways to determine the loss parameter for a given system is presented in this appendix of Sameer
Hemmady's Ph.D. thesis.
Random Coupling Model Publications:
Xing Zheng, Thomas M. Antonsen Jr., Edward Ott, "Statistics of Impedance and Scattering Matrices in Chaotic Microwave Cavities: Single Channel Case," Electromagnetics 26, 3 (2006). pdf. This and the
following paper are the seminal papers on the Random Coupling Model.
Xing Zheng, Thomas M. Antonsen Jr., Edward Ott, "Statistics of Impedance and Scattering Matrices of Chaotic Microwave Cavities with Multiple Ports," Electromagnetics 26, 37 (2006). pdf.
Xing (Henry) Zheng, Ph.D. thesis, "Statistics of Impedance and Scattering Matrices in Chaotic Microwave Cavities: The Random Coupling Model," University of Maryland, 2005.
Xing Zheng, Sameer Hemmady, Thomas M. Antonsen Jr., Steven M. Anlage, and Edward Ott, "Characterization of Fluctuations of Impedance and Scattering Matrices in Wave Chaotic Scattering," Phys. Rev. E
73, 046208 (2006). pdf This paper also includes experimental verification of the raw-S and raw-Z variance ratios.
James Hart, Thomas M. Antonsen, Jr., Edward Ott, "Scattering a pulse from a chaotic cavity: Transitioning from algebraic to exponential decay," Phys. Rev. E 79, 016208 (2009). pdf The first paper on
the time-domain version of the random coupling model.
James A. Hart, T. M. Antonsen, E. Ott, "The effect of short ray trajectories on the scattering statistics of wave chaotic systems," Phys. Rev. E 80, 041109 (2009). pdf This paper presents the
Extended Random Coupling Model.
Gabriele Gradoni, Thomas M. Antonsen, Jr., and Edward Ott, “Impedance and power fluctuations in linear chains of coupled wave chaotic cavities,” Phys. Rev. E 86, 046204 (2012). pdf.
T. M. Antonsen, G. Gradoni, S. M. Anlage E. Ott, “Statistical Characterization of Complex Enclosures with Distributed Ports,” proceedings of the 2011 IEEE International Symposium on Electromagnetic
Compatibility, pp. 220-225. pdf
G. Gradoni, Jen-Hao Yeh, T. M. Antonsen, S. M. Anlage, E. Ott, “Wave Chaotic Analysis of Weakly Coupled Reverberation Chambers,” proceedings of the 2011 IEEE International Symposium on
Electromagnetic Compatibility, pp. 202-207. pdf
Jen-Hao Yeh, Thomas M. Antonsen, Edward Ott, Steven M. Anlage, “First-principles model of time-dependent variations in transmission through a fluctuating scattering environment,” Phys. Rev. E (Rapid
Communications) 85, 015202 (2012). pdf An application of the RCM to model fading in communications.
Gabriele Gradoni, Jen-Hao Yeh, Bo Xiao, Thomas M. Antonsen, Steven M. Anlage, Edward Ott , “Predicting the statistics of wave transport through chaotic cavities by the Random Coupling Model: a review
and recent progress,” submitted to Wave Motion, 2013. arXiv:1303.6526
Experimental Tests and Verification of the RCM:
Sameer Hemmady, Xing Zheng, Thomas M. Antonsen, Edward Ott, and Steven M. Anlage, "Universal Statistics of the Scattering Coefficient of Chaotic Microwave Cavities," Phys. Rev. E 71, 056215 (2005).
Sameer Hemmady, Xing Zheng, Edward Ott, Thomas M. Antonsen, and Steven M. Anlage, "Universal Impedance Fluctuations in Wave Chaotic Systems," Phys. Rev. Lett. 94, 014102 (2005). pdf
S. Hemmady, X. Zheng, T.M. Antonsen, E. Ott, S.M. Anlage, "Universal Properties of 2-Port Scattering, Impedance and Admittance Matrices of Wave Chaotic Systems," Phys. Rev. E 74 , 036213 (2006). pdf.
Sameer Hemmady, Xing Zheng, Thomas M. Antonsen Jr., Edward Ott and Steven M. Anlage, "Aspects of the Scattering and Impedance Properties of Chaotic Microwave Cavities," Acta Physica Polonica A 109,
65 (2006). pdf.
S. Hemmady, J. Hart, X. Zheng, T.M. Antonsen, E. Ott, S.M. Anlage, "Experimental Test of Universal Conductance Fluctuations by means of Wave-Chaotic Microwave Cavities,” Phys. Rev. B 74, 195326
(2006). pdf. The RCM applied to a classical analog of quantum transport.
Jen-Hao Yeh, James Hart, Elliott Bradshaw, Thomas Antonsen, Edward Ott, Steven M. Anlage, “Universal and non-universal properties of wave chaotic scattering systems,” Phys. Rev. E 81, 025201(R)
(2010). pdf
Jen-Hao Yeh, James Hart, Elliott Bradshaw, Thomas Antonsen, Edward Ott, Steven M. Anlage, “Experimental Examination of the Effect of Short Ray Trajectories in Two-port Wave-Chaotic Scattering Systems
,” Phys. Rev. E 82, 041114 (2010). pdf
S. Hemmady, Ph.D. thesis, "A Wave-Chaotic Approach to Predicting and Measuring Electromagnetic Field Quantities in Complicated Enclosures," University of Maryland, 2006. This is also available on
Amazon.com as a book entitled "The Random Coupling Model."
T. Firestone, M.S. Thesis, "RF Induced Nonlinear Effects in High-Speed Electronics," University of Maryland, 2004.
S. Hemmady, T.M. Antonsen, E. Ott, S.M. Anlage, "Statistical Prediction and Measurement of Induced Voltages on Components within Complicated Enclosures: A Wave-Chaotic Approach,” IEEE Trans.
Electromag. Compat. 54, 758-771 (2012). pdf This paper offers an accessible introduction to the RCM for the engineering community.
Sun K. Hong, Biniyam T. Taddese, Zachary B. Drikas, Steven M. Anlage, Tim D. Andreadis, "Focusing an Arbitrary RF Pulse at a Distance using Time Reversal Techniques,” submitted to IEEE Microwave
Theory and Techniques (2011). pdf
Jen-Hao Yeh, Thomas M. Antonsen, Edward Ott, Steven M. Anlage, “First-principles model of time-dependent variations in transmission through a fluctuating scattering environment,” Phys. Rev. E (Rapid
Communications) 85, 015202 (2012). pdf
Jen-Hao Yeh, Edward Ott, Thomas M. Antonsen, Steven M. Anlage, “Fading Statistics in Communications - a Random Matrix Approach,” Acta Physica Polonica A, 120, A-85 (2012). pdf
Zachary B. Drikas, Jesus Gil Gil, Hai V. Tran, Sun K. Hong, Tim D. Andreadis, Jen-Hao Yeh, Biniyam T. Taddese and Steven M. Anlage, “Application of the Random Coupling Model to Electromagnetic
Statistics in Complex Enclosures,” submitted to IEEE Trans. Microwave Theory Tech., (2012).
Additional Information about the RCM:
The Anlage Statistical Methods Meeting presentation can be downloaded here.
The Hemmady MURI Final Review Meeting presentation can be downloaded here.
Computer Code for implemetation of the RCM. (Note that this code has been upgraded, and the new "Terrapin RCM Solver v1.0" is available by special request). Note that Sameer Hemmady has developed a
2.0 version of the Terrapin RCM Solver. Please contact him directly at:
TechFlow Scientific - A Division of TechFlow Inc.
2155 Louisiana Blvd. NE, Suite 3200
Albuquerque NM 87111 USA
Email: shemmady@techflow.com
This matlab code generates an ensemble of normalized 2x2 impedance (z) matrices for a given value of the loss parameter k^2/(Dk^2 Q), called "Ktwiddle" in the code. Generating this large number of
matrices and finding their eigenvalues is much much faster in matlab than mathematica. The code assumes the system is time-reversal symmetric (GOE). It runs in Matlab 6.5, and generates a file for
input into the program NormtoSZ.nb below.
The first mathematica analysis code (StoNorm.nb) takes the measured 2x2 radiation S data (S[Rad]) and 2x2 ray-chaotic cavity S data (S[Cav]) and finds the eigenvalues of the normalized impedance (z,
EigZnorm.txt) and scattering (s, EigSnorm.txt) matrices. This code is written in Mathematica 5.0. Example input files are Srad.txt and Scav.txt. This cavity data set is just one rendition of the
cavity. Ordinarily one wants to analyze a large number of renditions (~100) of the ray-chaotic cavity to compile statistics for the resulting normalized s and z matrices. This code essentially allows
you to find the "hidden" universal statistical properties of your enclosure, by removing the non-universal coupling.
The second mathematica analysis code (NormtoSZ.nb) takes the ensemble of normalized 2x2 z matrices generated form the MatLab code above, along with the measured or calculated 2x2 radiation S matrix
(S[Rad]), and generates the eigenvalues of the 2x2 S[Cav] (EigScav.txt) and Z[Cav] (EigZcav.txt) matrices. Example input files are the ensemble of 2x2 z matrices with loss parameter k^2/(Dk^2 Q) =
1.5 (RMTZ_1_5loss2.txt) and Srad.txt. This code essentially allows you to predict the statistical properties of the raw S and Z of your cavity, given the loss parameter of the enclosure and the
radiation properties of the 2 ports.
The revised (26 July, 2006) "Terapin RCM Solver v1.0" User's Guide is available here.
Please direct questions and comments to 'anlage "at" umd.edu'
Link to Prof. Anlage's research web site
Link to the University of Maryland MURI'01 web site
Link to the Maryland / ONR Center for Applied Electromagnetics web site
This work is supported by the DoD MURI for the study of microwave effects under AFOSR Grant F496200110374, as well as AFOSR DURIP Grants FA95500410295 and FA95500510240, and the ONR / Maryland Center
for Applied Electromagnetics, Task A2, Grant No. N000140911190. | {"url":"http://www.csr.umd.edu/anlage/RCM/index.htm","timestamp":"2014-04-20T05:43:40Z","content_type":null,"content_length":"32833","record_id":"<urn:uuid:43cd2c1f-cf6e-472d-a25a-8b742a56bc28>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
The use of probability in QM
It reminds me of the aether of LET - yea its a valid theory but you have to ask - why bother? Of course the answer is philosophical - I however prefer the simpler answer of no pilot wave and no
aether - but everyone is different - to each his/her own.
I agree with you that interpretations are fundamentally personal and subjective, and anyone that can get the answer right is not making any kind of mistake even if they use an approach that we might
view as unsavory in some way. But I think in the case of interpretations of quantum mechanics, there is more going on than just a search for personal cognitive resonance. Underneath it is all is very
much the question of what is physics trying to be. This question has been resolved age by age throughout history, and is constantly changing, and ultimately is controlled by whatever works, moreso
than whatever we would like to work. But until we know whatever will work in the case of the next theory after quantum mechanics, we can still recognize that the different interpretations are asking
us to think differently about what physics is.
I feel the issue comes down to what I see are three separate possibilities here, aligned with the three main ways to think about what physics is: rationalist, empiricist, or realist.
The rationalist approach says that physics is a search for the laws that the universe actually follows, and tends to frame the universe as a mathematical structure (we often hear words to the effect
that "God is a mathematician" in this school of thought). But this is more than just a philosophical framework from which to regard physics, it makes genuine claims about what the process of doing
physics should be trying to do (to wit, it should be searching for "the laws", or "the theory of everything".) I believe that approach not only colors what we think physics is, it actually changes
what we think physics is. The many-worlds approach to quantum mechanics is often aligned with this style of thinking.
The empiricist approach says that physics is a set of observations that we are trying to understand, but the physics is the behavior, not the postulates we invent to approximate, idealize, and
understand the behavior. Bohr was the consummate example of this approach, as he said "there is no quantum world" (anti-realist) and "physics is what we can say about nature" (with emphasis on "we",
it is anti-rationalist). Again this is more than just a philosophical bent, it changes how we teach and perform physics, it changes what physics is trying to be.
The realist approach says that physics is trying to use a marriage of mathematical and empirical techniques to determine what reality is "really like". It says there is a reality out there, and
physics is trying to find out what it is, more or less at face value. Einstein was a realist, indeed he was so radical of a realist that he didn't even like the realist approaches of de Broglie and
Bohm because they embraced some unreal elements (the pilot wave) as the price of admittance to the sphere of being able to talk about the "real" positions and trajectories of particles. Einstein's
approach has largely earned him disfavor, as he was considered to have lost the Einstein/Bohr debates, and his EPR paradox is no longer viewed as a paradox. But de Broglie's realism has generally
been viewed as fully consistent with quantum mechanics, as you say. My point is that if we adopt the deBroglie-Bohm approach, we are not just choosing a philosophical favorite, we are again taking a
stand on what we think physics should actually be.
So I agree with you that we don't at present know what physics should actually be, and the interpretations of QM all work, so we are at the moment left with a purely subjective and personal choice
about how we like to frame it. I'm just saying that underneath that choice, there is a real struggle happening, like water piling up behind a dam and we don't yet know which path that water will take
when it reaches its breaking point (which here will be some new observation that is not described by quantum mechanics). But what we should expect is that ultimately this issue will be far from
moot-- it will determine the future direction of what physics becomes, be it a primarily rationalist, empiricist, or realist endeavor. I think it's exciting that we can't foresee which path future
physics will take, but I think the "all three, you choose" approach cannot last forever! | {"url":"http://www.physicsforums.com/showthread.php?p=3932691","timestamp":"2014-04-19T22:14:24Z","content_type":null,"content_length":"95179","record_id":"<urn:uuid:c670a3f4-e33c-4c66-a598-ab7f68860cb8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Analysis of a Tank Round | Science Blogs | WIRED
• By Rhett Allain
• 11.12.13 |
• 8:26 am |
Personally, I’m not a big fan of being fired at by a tank – it’s just a personal thing. However, in this case it looks like the tank is firing at some type of remote camera (hopefully).
Speed and Distance Estimate
The surprising thing to me was that you could actually see the shell after it was fired from tank. I mean, if this was a Nerf gun, sure you could see it. However, we are talking about a tank shell.
With a quick search, it seems like these shells have a muzzle velocity around 1800 m/s.
How about a quick estimate? If the video is recorded at 30 frames per second, then I can calculate the distance the round travels during one frame (1/30th of a second).
60 meters is a pretty good distance in just one 30th of a second, but for some reason I thought it would be in farther. Still, it looks like the camera isn’t all that far away. Ok, now for some video
analysis. How many frames does the shell appear in? Here is a plot using Tracker Video Analysis. This shows the apparent vertical position of the shell as a function of time (the vertical units are
meaningless in this case).
There are a couple of things to see here. First, there seems to be some repeated frames. If you go through this frame by frame, it’s easy to see. What does this mean? I’m not a video expert, but I’m
pretty sure that means the video is not the original and has been processed in some way. Maybe it won’t matter.
The other thing to get from this first plot is the time. By looking at the start and end time for this motion, it shows a time of flight of 0.533 seconds. If the muzzle speed is about 1800 m/s, the
distance traveled would be about 900 meters (around half a mile).
That tank doesn’t look that far away. I suspect that the camera is using an optical zoom. That’s my guess.
Looking at Angular Size
We can get more than just the time of flight for this round. First, we need some information. What kind of tank is this? I don’t read arabic, but I can do some google searching. I am going to claim
this tank is a Russian T-72. Here is my best matchup from the video.
If I’m way off on this identification, I am going to use the dimensions of this tank anyway. Really, aren’t all tanks roughly the same size? Wikipedia lists this tank with a hull length of 6.95
meters. If I knew more about the camera, I could determine the actual distance to the tank using the angular size. In general, there is the following relationship between size, distance and angular
Here, θ is the angular size of the object (in radians). L is the actual size and r is the distance from the camera to the object. If you look at the video, you can get the percent size of the object
in terms of the width of the image. However, you don’t know the angular field of view for the camera – especially since there could be some optical zoom.
Accounting for the shifted viewing angle, the T-72 is about 38.7% of the total angular view. If this were the iPhone 5, the field of view would be 0.888 radians. Using the length of the tank, this
would put the vehicle at a distance of just 7.8 meters. So, it’s NOT recorded with an iPhone.
How about a plot of the angular size of the shell as it moves toward the camera? Of course, I still don’t know the dimensions for this angular size.
If you want to make your own plot, here is the angular size and time data. Have fun with it. I think there is a way to solve for the angular field of view and thus the total distance from the camera
to the tank. You can do that as a homework assignment. However, I am going to make some guesses. Suppose that I guess the distance and use a shell speed of 1800 m/s? Can I make a plot that compares
the computed angular size with the measured angular size? Yes.
If I assume the shell is 125 mm (which I assume means that has a diameter of 125 mm – that’s just a guess) and I put the camera at 1050 meters, I get the following plot for a projectile moving at
1800 m/s:
Well, that turned out more awesome that I thought it would. Of course, this still assumes a constant velocity of 1800 m/s, but I still like it. Clearly, the camera is using some type of optical zoom
and is pretty far from the tank and that’s why you can see the shell in flight.
It would still suck to have that thing pointing at you. How about a homework assignment? From the video, measure the angular size and use this to get the actual distance to the shell as a function of
time. Dynamically scale the video so that you can get the vertical position of the shell in each frame (not too difficult since the camera is stationary). Now find the vertical acceleration of the | {"url":"http://www.wired.com/2013/11/video-analysis-of-a-tank-round/","timestamp":"2014-04-18T04:02:57Z","content_type":null,"content_length":"105642","record_id":"<urn:uuid:dc69de72-2729-489c-80bb-b33950311ca7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 2004 [00206]
[Date Index] [Thread Index] [Author Index]
Re: Integrating Feynman integrals in mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg51169] Re: Integrating Feynman integrals in mathematica
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Thu, 7 Oct 2004 05:25:47 -0400 (EDT)
• Organization: The University of Western Australia
• References: <ciu5iv$81d$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
In article <ciu5iv$81d$1 at smc.vnet.net>, pabird at supanet.com (Xman)
> I am trying to find an explicit form of the following 4-dimensional
> fourier transforms. Can anyone help? ( x and k are 4 dimensional
> vectors) They are from physics.
> 1)
> f(x) =Intregral[ e^(i x.k) / (k.k -m^2) ]dk^4
Shouldn't this be k.k + m^2 (to agree with your equation for f below)?
> 2)
> g(x)=Intregral[ e^(i x.k) / (k.k -m^2)^2 ]dk^4
> I know that the first is of the form:
> f(x) = 1/|x.x| + log|x.x| * P((m^2/4) |x.x|) + Q((m^2/4) |x.x|)
> (when m=0 this becomes 1/|x.x|)
> Where P and Q stand for infinite polynomial series and that I think
> P(y) = Sum( y^n /(n!(n+1)!) ,y=0..infinity )
Apart from the error in syntax and summation index, Mathematica can
compute this sum in closed form. Try
Sum[y^n/(n! (n+1)!),{n,0,Infinity}]
> and that in the second one
> g(x) = log|x.x| * R((m^2/4) |x.x|) + S((m^2/4) |x.x|)
> (when m=0 this becomes log|x.x|)
> where R(y) = Sum( y^n /(n!n!) ,y=0..infinity )
and this one too.
> But the functions Q and S are more difficult to find.
> Plus does anyone know if the series P and R (=P') or Q and S can be
> written in terms of simple functions?
> It may help to know that f and g satisfy the following 4 dimensional
> wave equations:
> ( d/dx . d/dx - m^2) f(x) = delta(x) (=0 for x=/=0)
> ( d/dx . d/dx - m^2)^2 g(x) = delta(x) (=0 for x=/=0)
> I am particularly interested in g(x).
As pointed out by Steve Luttrell, if you can compute f(x), you know g(x)
by parametric differentiation. Let me indicate how you can get
Mathematica to compute the integral over 4 dimensional space of the
h[x_][k_] := Exp[I x.k]/(k.k + m^2)
(related to yours) in closed form. As you have observed, the integral
only depends on the length of the 4 dimensional vector x. Call this r.
Let p denote the magnitude of the vector k. If you express k in
hyperspherical co-ordinates, say
k = p {Sin[a]Sin[b]Sin[c],Sin[a]Sin[b]Cos[c],Sin[a]Cos[b],Cos[a]}
where p is in [0,Infinity), c is in [0,2Pi], and a and b are in [0,Pi],
then the volume element reads
dk^4 = p^3 dp Sin[a]^2 da Sin[b] db dc
Now, because the angular part of the k integral is over all possible
orientations of the vector k, you can choose an _arbitrary_ orientation
for x (this is why the integral only depends on r). The choice resulting
in the simplest integrand is
x = {0,0,0,r}
so that the integrand becomes
Exp[I p r Cos[a]]/(p^2 + m^2)
where a is the angle between x and k. The integrals over b and c are now
Integrate[Sin[b], {b,0,Pi},{c,0,2Pi}]
leading to an overall multiplicative factor of 4 Pi.
You are left with the following computation: first compute the integral
over a,
Assuming[r > 0 && m > 0 && p > 0,
4 Pi Integrate[Exp[I p r Cos[a]]/(p^2 + m^2) Sin[a]^2, {a, 0, Pi}]]
followed by the integral over p
Assuming[r > 0 && m > 0, Integrate[% p^3, {p, 0, Infinity}]]
which evaluates to the very simple closed-form result
4 m Pi^2 BesselK[1, m r]/r
When m = 0, this reduces to 4 Pi^2 / r^2 as you can see from the
following series expansion:
4 m Pi^2 BesselK[1, m r]/r + O[m]
Expanding the result as a series in r, one obtains
4 m Pi^2 (1/r^2 + (m BesselI[1, m r]/r) Log[r] + ...)
exactly of the form you write above (check this by series expansion of
the BesselI[1, m r] function).
The integral with (p^2 + m^2)^2 in the denominator is even simpler.
Using parametric differentiation, you obtain
2 Pi^2 BesselK[0, m r]
Paul Abbott Phone: +61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
35 Stirling Highway
Crawley WA 6009 mailto:paul at physics.uwa.edu.au
AUSTRALIA http://physics.uwa.edu.au/~paul | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Oct/msg00206.html","timestamp":"2014-04-19T19:45:30Z","content_type":null,"content_length":"38518","record_id":"<urn:uuid:be348631-9f28-4396-8fdf-e3bf6d49848c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Re: 102:Turing Degrees/2
Jeffrey Ketland ketland at ketland.fsnet.co.uk
Mon Apr 9 20:28:25 EDT 2001
Harvey Friedman:
>The following is a restatement of a theorem from Turing Degrees/1.
>Let Z2 be the usual first order system of second order arithmetic. Let Z2+
>be Z2 with a satisfaction predicate added and induction and comprehension
>are extended to all formulas in the expanded language.
Are the axioms you use for Sat(x,y) in Z2+ Tarski's inductive axioms?
Do you consider "self-applicative" (usually called Kripke-Feferman) axioms
in this context over Z2?
E.g., Things like the axiom
T-rep: T(A) --> T(T(A)))
from Friedman/Sheard 1987?
More generally, if Ref(S) is Feferman's reflective closure operation on
system S (Fefermann 1991: "Reflecting on Incompleteness" 1991), do you know
how Ref(Z2) turns out? Proof-theoretic strength, interpretability?
Regards - Jeff
~~~~~~~~~~~ Jeffrey Ketland ~~~~~~~~~
Dept of Philosophy, University of Nottingham
Nottingham NG7 2RD United Kingdom
Tel: 0115 951 5843
Home: 0115 922 3978
E-mail: jeffrey.ketland at nottingham.ac.uk
Home: ketland at ketland.fsnet.co.uk
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2001-April/004864.html","timestamp":"2014-04-18T10:34:45Z","content_type":null,"content_length":"3478","record_id":"<urn:uuid:783d1d7e-17a7-4f4d-97cd-cb7815a0922c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Categorization as probability density estimation
Results 1 - 10 of 55
- PSYCHOLOGICAL REVIEW , 1998
"... A neuropsychological theory is proposed that assumes category learning is a competition between separate verbal and implicit (i.e., procedural-learning-based) categorization systems. The theory
assumes that the caudate nucleus is an important component of the implicit system and that the anterior ci ..."
Cited by 229 (24 self)
Add to MetaCart
A neuropsychological theory is proposed that assumes category learning is a competition between separate verbal and implicit (i.e., procedural-learning-based) categorization systems. The theory
assumes that the caudate nucleus is an important component of the implicit system and that the anterior cingulate and prefrontal cortices are critical to the verbal system. In addition to making
predictions for normal human adults, the theory makes specific predictions for children, elderly people, and patients suffering from Parkinson's disease, Huntington's disease, major depression,
amnesia, or lesions of the prefrontal cortex. Two separate formal descriptions of the theory are also provided. One describes trial-by-trial learning, and the other describes global dynamics. The
theory is tested on published neuropsychological data and on category learning data with normal adults.
- Trends in Cognitive Sciences , 2006
"... Theory-based Bayesian models of inductive reasoning 2 Theory-based Bayesian models of inductive reasoning ..."
- Perception & Psychophysics , 1999
"... Averaging across observers is common in psychological research. Often averaging reduces the measurement error, and thus does not affect the inference drawn about the behavior of individuals.
However, in other situations, averaging alters the structure of the data qualitatively, leading to an incorre ..."
Cited by 59 (40 self)
Add to MetaCart
Averaging across observers is common in psychological research. Often averaging reduces the measurement error, and thus does not affect the inference drawn about the behavior of individuals. However,
in other situations, averaging alters the structure of the data qualitatively, leading to an incorrect inference about the behavior of individuals. This research investigated the influence of
averaging across observers on the fits of decision bound models (F.G. Ashby, 1992a) and generalized context models (GCM; R.M. Nosofsky, 1986) through Monte Carlo simulation of a variety of
categorization conditions, perceptual representations, and individual difference assumptions, and in an experiment. The results suggest that (a) averaging has little effect when the GCM is the
correct model, (b) averaging often improves the fit of the GCM and worsens the fit of the decision bound model when the decision bound model is the correct model, (c) the GCM is quite flexible, and
under many conditions can mimic the predictions of the decision bound model; the decision bound model, on the other hand, is generally unable to mimic the predictions of the GCM, (d) the validity of
the decision bound model’s perceptual representation assumption can have a large effect on the inference drawn about the form of the decision bound, and (e) the experiment supported the claim that
averaging improves the fit of the GCM. These results underscore the importance of performing single observer analysis if one is interested in understanding the categorization performance of
individuals. The ability to categorize quickly and accurately is fundamental to survival. Everyday, we make hundreds of categorization judgments. Several detailed theories and quantitative models
have been proposed to account for the perceptual and cognitive processes involved in categorization; the goal being to understand the categorization performance of individual behaving organisms.
, 2001
"... The contribution of the striatum to category learning was examined by having patients with Parkinson's disease (PD) and matched controls solve categorization problems in which the optimal rule
was linear or nonlinear using the perceptual categorization task. Traditional accuracy-based analyses, as ..."
Cited by 56 (39 self)
Add to MetaCart
The contribution of the striatum to category learning was examined by having patients with Parkinson's disease (PD) and matched controls solve categorization problems in which the optimal rule was
linear or nonlinear using the perceptual categorization task. Traditional accuracy-based analyses, as well as quantitative model-based analyses were performed. Unlike accuracy-based analyses, the
model-based analyses allow one to quantify and separate the effects of categorization rule learning from variability in the trial-by-trial application of the participant's rule. When the
categorization rule was linear, PD patients showed no accuracy, categorization rule learning, or rule application variability deficits. Categorization accuracy for the PD patients was associated with
their performance on a test believed to be sensitive to frontal lobe functioning. In contrast, when the categorization rule was nonlinear, the PD patients showed accuracy, categorization rule
learning, and rule application variability deficits. Furthermore, categorization accuracy was not associated with performance on the test of frontal lobe functioning. Implications for
neuropsychological theories of categorization learning are discussed. (JINS, 2001, 7, 710 --727.) Keywords: Categorization, Parkinson's disease, Striatum, Memory, Learning
"... this article we outline the foundations of such a theory, working in the general framework of Bayesian inference. Much of our proposal for extending Shepard's theory to the cases of multiple
examples and arbitrary stimulus structures has already been introduced in other papers (Griffiths & Tenenbaum ..."
Cited by 48 (10 self)
Add to MetaCart
this article we outline the foundations of such a theory, working in the general framework of Bayesian inference. Much of our proposal for extending Shepard's theory to the cases of multiple examples
and arbitrary stimulus structures has already been introduced in other papers (Griffiths & Tenenbaum, 2000; Tenenbaum, 1997, 1999a, 1999b; Tenenbaum & Xu, 2000). Our goal here is to make explicit the
link to Shepard's work and to use our framework to make connections between his work and other models of learning (Feldman, 1997; Gluck & Shanks, 1994; Haussler, Kearns & Schapire, 1994; Kruschke,
1992; Mitchell, 1997), generalization (Nosofsky, 1986; Heit, 1998), and similarity (Chater & Hahn, 1997; Medin, Goldstone & Gentner, 1993; Tversky, 1977). In particular, we will have a lot to say
about how our generalization of Shepard's theory relates to Tversky's (1977) well-known set-theoretic models of similarity. Tversky's set-theoretic approach and Shepard's metric space approach are
often considered the two classic -- and classically opposed -- theories of similarity and generalization. By demonstrating close parallels between Tversky's approach and our Bayesian generalization
of Shepard's approach, we hope to go some way towards unifying these two theoretical approaches and advancing the explanatory power of each. The plan of our article is as follows. In Section 2, we
recast Shepard's analysis of generalization in a more general Bayesian framework, preserving the basic principles of his approach in a form that allows us to apply the theory to situations with
multiple examples and arbitrary (non-spatially represented) stimulus structures. Sections 3 and 4 describe those extensions, and Section 5 concludes by discussing some implications of our theory for
the internalization of...
- Proceedings of the 28th Annual Conference of the Cognitive Science Society , 2006
"... The rational model of categorization (RMC; Anderson, 1990) assumes that categories are learned by clustering similar stimuli together using Bayesian inference. As computing the posterior
distribution over all assignments of stimuli to clusters is intractable, an approximation algorithm is used. The ..."
Cited by 39 (16 self)
Add to MetaCart
The rational model of categorization (RMC; Anderson, 1990) assumes that categories are learned by clustering similar stimuli together using Bayesian inference. As computing the posterior distribution
over all assignments of stimuli to clusters is intractable, an approximation algorithm is used. The original algorithm used in the RMC was an incremental procedure that had no guarantees for the
quality of the resulting approximation. Drawing on connections between the RMC and models used in nonparametric Bayesian density estimation, we present two alternative approximation algorithms that
are asymptotically correct. Using these algorithms allows the effects of the assumptions of the RMC and the particular inference algorithm to be explored
- Current Directions in Psychological Science , 2003
"... explaining many phenomena in learning. The mechanism of selective attention in learning is also well motivated by its ability to minimize proactive interference and enhance generalization,
thereby accelerating learning. Therefore, not only does the mechanism help explain behavioral phenomena, it mak ..."
Cited by 37 (9 self)
Add to MetaCart
explaining many phenomena in learning. The mechanism of selective attention in learning is also well motivated by its ability to minimize proactive interference and enhance generalization, thereby
accelerating learning. Therefore, not only does the mechanism help explain behavioral phenomena, it makes sense that it should have evolved (Kruschke & Hullinger, 2010). The phrase “learned selective
attention ” denotes three qualities. First, “attention ” means the amplification or attenuation of the processing of stimuli. Second, “selective” refers to differentially amplifying and/or
attenuating a subset of the components of the stimulus. This selectivity within a stimulus is different from attenuating or amplifying all aspects of a stimulus simultaneously (cf. Larrauri &
Schmajuk, 2008). Third, “learned ” denotes the idea that the allocation of selective processing is retained for future use. The allocation may be context sensitive, so that attention is allocated
differently in different contexts. There are many phenomena in human and animal learning that suggest the involvement of learned selective attention. The first part of this chapter briefly reviews
some of those phenomena. The emphasis of the chapter is not the empirical phenomena, however. Instead, the focus is on a collection of models that formally express theories of learned attention.
These models will be surveyed subsequently. Phenomena suggestive of selective attention in learning There are many phenomena in human and animal learning that suggest that learning involves
allocating attention to informative cues, while ignoring uninformative cues. The following subsections indicate the benefits of selective allocation of attention, and illustrate the benefits with
particular findings.
- Journal of Mathematical Psychology , 2002
"... Many currently popular models of categorization are either strictly parametric (e.g., prototype models, decision bound models) or strictly nonparametric (e.g., exemplar models) (Ashby &
Alfonso-Reese, 1995). In this article, a family of semi-parametric classifiers is investigated where categories ar ..."
Cited by 31 (0 self)
Add to MetaCart
Many currently popular models of categorization are either strictly parametric (e.g., prototype models, decision bound models) or strictly nonparametric (e.g., exemplar models) (Ashby &
Alfonso-Reese, 1995). In this article, a family of semi-parametric classifiers is investigated where categories are represented by a finite mixture distribution. The advantage of these mixture models
of categorization is that they contain several parametric models and nonparametric models as a special case. Specifically, it is shown that both decision bound models (Ashby & Maddox, 1992, 1993) and
the generalized context model (Nosofsky, 1986) can be interpreted as two extreme cases of a common mixture model. Furthermore, many other (semi-parametric) models of categorization can be derived
from the same generic mixture framework. In this article, several examples are discussed, and a parameter estimation procedure for fitting these models is outlined. To illustrate the approach,
several specific models are fitted to a data set collected by McKinley and Nosofsky (1995). The results suggest that semi-parametric models are a promising alternative for future model development.
Formal models of categorization are often closely related to statistical methods of probability density estimation (Ashby & Alfonso-Reese, 1995). In statistics, a distinction is made between
parametric estimators, that make strong assumptions about the distribution of the sample data, and nonparametric estimators that make only weak distributional assumptions. In accord with this
distinction, Ashby and Alfonso-Reese defined parametric classifiers as those classifiers that make strong assumptions about the functional form of the category distributions, and nonparametric
classifiers as classifiers that make almost no assumptions about the category form. Prototype models (Reed, 1972) and decision bound models (Ashby & Maddox, 1992, 1993) are parametric classifiers,
because they make strong assumptions about category structure. Decision bound models, for example, assume that the category distributions are multivariate normal (see Ashby, 1992, for a motivation).
Despite this strong assumption (and the fact that these models can only predict linear or quadratic decision bounds), Ashby and Maddox (1992, 1993)
"... Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge, and should explain how different kinds of
knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. We present a Baye ..."
Cited by 29 (4 self)
Add to MetaCart
Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge, and should explain how different kinds of knowledge
lead to the distinctive patterns of reasoning found in different inductive contexts. We present a Bayesian framework that attempts to meet both goals and describe four applications of the framework:
a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the four models are
defined over different kinds of structures that capture different relationships between the categories in a domain. Our framework therefore shows how statistical inference can operate over structured
background knowledge, and we argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1496835","timestamp":"2014-04-17T07:18:22Z","content_type":null,"content_length":"42072","record_id":"<urn:uuid:24e991a6-bebe-4acd-a0a6-9852bf901a9d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
help solve 1/2X=-1+sqrt(x-2)
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/508a7cb8e4b0d596c4607ab7","timestamp":"2014-04-17T15:53:56Z","content_type":null,"content_length":"73384","record_id":"<urn:uuid:9d64fb2f-92e3-44bb-8772-292f11b11d3d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Origin of the theorem on the existence of the smallest field of definition of an affine variety
up vote 3 down vote favorite
Weil proved the following theorem in his book Foundations of Algebraic Geometry, p.19. The proof is somewhat involved. I wonder if the theorem is his original.
Theorem Let $K[X_1,\dots, X_n]$ be the polynomial ring over a field $K$. Let $I$ be an ideal of $K[X_1,\dots, X_n]$. There exists a smallest subfield $k$ of $K$ such that $I$ is generated by
polynomials in $k[X_1,\dots,X_n]$.
ag.algebraic-geometry reference-request ho.history-overview
I am sorry, I don't know the answer to your question but I just realized that you can prove it using Gröbner basis. Let $E$ and $F$ be subfields of $K$ such that $I$ is generated by polynomials
with coefficients in $E$ and in $F$, respectively. Then choose reduced Gröbner bases $G$ and $H$ of $I$ with respect to the same term ordering having all coefficients in $E$ and in $F$,
respectively. Now both $G$ and $H$ are reduced Gröbner bases of $I$ also over $K$. Because of the unicity of the reduced Gröbner basis, we have $G=H$. Hence $I$ is generated by polynomials with
coefficients in $E\cap F$. – Markus Schweighofer Oct 28 '12 at 21:51
add comment
3 Answers
active oldest votes
As far as I can see, Weil was indeed the main source for this viewpoint on fields of definition in algebraic geometry. However, it may be hard to pin down the specific result quoted here
in his 1935 paper. This paper is probably most readily found in the first volume of Weil's papers published by Springer, but the later book presents his notion of variety and the related
field theory (with generic points) in far more detail.
up vote 5 What I'd like to add is a reference to Dieudonne's book History of Algebraic Geometry (especially VII.4). This was first published in French in 1974 and then in English translation in
down vote 1985. Dieudonne took a strong interest in this kind of history and assembled a lot of material about older origins of ideas while emphasizing the key role played by Weil. Naturally names
like van der Waerden, E. Noether, and Siegel are part of that history as well.
add comment
I suspect that this theorem is indeed due to Weil.
"Foundations of Algebraic Geometry" by Weil was published in 1946, but the 1944 paper "Some Properties of Ideals in Rings of Power Series" by Claude Chevalley (Transactions of the American
Mathematical Society, Vol. 55, No. 1 (Jan., 1944), pp. 68-84) attributes to Weil the development of the theory around "ideals in polynomial rings" over a decade earlier in "Arithmetique et
geometrie sur les varietes algebriques" in 1935 (see footnote on p. 83).
up vote 5 Reading the AMS review it seems the only other possible originators would have been Siegel, or perhaps Noether or van der Waeden. I don't have a copy of Weil's 1935 work, but you might
down vote track it down and (if you can read enough French) check for this particular result.
Edit: For remarks which are perhaps related/interesting (in terms of Weil's background and his familiarity with Kronecker's work) read from the last paragraph of page 12 here and the
referenced ICM address by Weil in 1950.
add comment
This has a very easy proof if one generalizes it to (infinite-dimensional) linear algebra and forgets about commutative algebra. Let $K/F$ be an extension of fields (e.g., could take $F$
to be a prime field), $V$ an $F$-vector space (such as a polynomial ring over a prime field), and $W$ a $K$-subspace of $V_K := K \otimes_F V$.
Among all subfields $K_0$ of $K$ over $F$ such that $W = K \otimes_{K_0} W_0$ for a (visibly unique) $K_0$-subspace $W_0$ of $V_{K_0}$, we claim that the intersection of these fields
works too.
(In case $V$ is an $F$-algebra and $W$ is an ideal of $V_K$, obviously $W_0$ is an ideal of $V_{K_0}$, so this really does imply Weil's result. In fact, it gives a more general result: no
need to assume the algebras are finitely generated.)
up vote 2
down vote Proof: Choose an $F$-basis $\{v_i\}_{i \in I}$ of $V$, so there is a subset $J$ of $I$ such that $\{v_j \bmod W\}_{j \in J}$ is a $K$-basis of $V/W$. For $i' \in I - J$, expand $v_{i'} \
bmod W \in V/W$ in this basis:
$$v_{i'} \equiv \sum_{j \in J} a_{i'j} v_j \bmod W$$ with $a_{i'j} \in K$. The necessary and sufficient condition on $K_0$ for $W_0$ to exist is that $K_0$ contains every $a_{i'j}$ (for
$j \in J$ and $i' \in I - J$). So the subfield $F(a_{i'j})_{i', j}$ is the desired minimal subextension of $K$ over $F$. QED
There is a very elegant modern discussion of the theme of field of definition (for closed subschemes, morphisms, etc.) without any finiteness hypotheses in EGA IV$_2$, 4.8.
I know that what I wrote above doesn't answer the question, but I wanted to communicate that the proof needn't be "somewhat involved" if it is generalized in a suitable way. (I have
never read Weil's proof, so I have no idea what he does.) And that part of EGA provides a modern reference if one is desired. – user27056 Oct 29 '12 at 1:34
EGA IV_2, Corollary 4.8.7 is essentilly the same as the above claim of yours. It refers Bourbaki's Algebra Ch. II for its proof. However the Bourbaki's proof is quite different from
yours. It is not so involved, but not so simple as yours. – Makoto Kato Oct 29 '12 at 2:23
@Makoto: Ah, then I'm glad I never looked up the Bourbaki proof to which EGA punted. :) – user27056 Oct 29 '12 at 2:24
I don't get it. As soon as $W$ is not $0$, the subfield of $K$ that your argument constructs is $K$ itself. For if you take $w = \sum a_i(w) v_i$ a non-zero element in $W$, one of the
$a_i(w)$ is non-zero, and for every $\lambda$ in $K$, $a_i(\lambda w)=\lambda a_i(w)$ so $a_i(\lambda w)$ may be any element you want in $K$. Am I wrong? – Joël Oct 29 '12 at 2:50
Dear Joel: Sorry, I garbled the argument. I have replaced it with what I meant to say (equally short, but now correct). Thanks for catching it. If you delete your comment (assuming you
consider it now to be moot) then I will delete this one. – user27056 Oct 29 '12 at 4:37
show 2 more comments
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry reference-request ho.history-overview or ask your own question. | {"url":"https://mathoverflow.net/questions/110939/origin-of-the-theorem-on-the-existence-of-the-smallest-field-of-definition-of-an","timestamp":"2014-04-21T02:49:48Z","content_type":null,"content_length":"69022","record_id":"<urn:uuid:542425ab-ae35-4c0c-a2ff-1eac609711b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Collinear points formed by a measurement on a square's perimeter
July 29th 2011, 01:20 AM #1
Junior Member
Jul 2011
Collinear points formed by a measurement on a square's perimeter
Let a square ABCD with sides of length 1 be given. A point X on BC is at distance d
from C, and a point Y on CD is at distance d from C. The extensions of: AB and DX
meet at P, AD and BY meet at Q, AX and DC meet at R, and AY and BC meet at
S. If points P, Q, R and S are collinear, determine d.
I have started and found the diagram is symmetrical through AC, X must be (1,d) and Y must be (1-d,0). PR or QS must be at right angles to AC due to the need for a collinear (straight) line
through them at the end. I believe the gradients should be the same for AC and QS, is this correct?? I'm stuck on what to do next and also need to clarify so far.
Re: Collinear points formed by a measurement on a square's perimeter
The gradient of SR is 1 (or -1) due to symmetry, so one has to equate the gradient of RP to 1. Let R' be the projection of R to AP. Since RR' = 1, it should be that R'P = AP - AR' = 1. It is left
to express AP and AR' through d using similar triangles.
Re: Collinear points formed by a measurement on a square's perimeter
Let a square ABCD with sides of length 1 be given. A point X on BC is at distance d
from C, and a point Y on CD is at distance d from C. The extensions of: AB and DX
meet at P, AD and BY meet at Q, AX and DC meet at R, and AY and BC meet at
S. If points P, Q, R and S are collinear, determine d.
I have started and found the diagram is symmetrical through AC, X must be (1,d) and Y must be (1-d,0). PR or QS must be at right angles to AC due to the need for a collinear (straight) line
through them at the end. I believe the gradients should be the same for AC and QS, is this correct?? I'm stuck on what to do next and also need to clarify so far.
1. Use proportions in similar triangles:
The blue triangles will yield:
and the grey triangles will yield:
$\dfrac ds=\dfrac1{1+s}$
2. Solve this system of equations for s and d.
I've got $d=\frac32 - \frac12 \cdot \sqrt{5}$ and $s= \frac12 \cdot \sqrt{5} - \frac12$
July 29th 2011, 03:48 AM #2
MHF Contributor
Oct 2009
July 29th 2011, 04:50 AM #3 | {"url":"http://mathhelpforum.com/geometry/185288-collinear-points-formed-measurement-square-s-perimeter.html","timestamp":"2014-04-21T16:52:28Z","content_type":null,"content_length":"40096","record_id":"<urn:uuid:54625a53-8387-4305-944f-9be3a5001188>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Imperial Beach Math Tutor
Find a Imperial Beach Math Tutor
I have been an Elementary School educator for the past ten years. I have extensive experience working with English Language Learners, and Special Education Students. I hold a Bachelors degree in
History with an emphasis in Latin American Studies.
9 Subjects: including prealgebra, English, reading, elementary (k-6th)
...Over the last year, I have primarily tutored high school and college students, helping them prepare for tests like the SAT and GRE, to select and apply to colleges and to build the study skills
and confidence to eventually continue to succeed without a tutor. I am patient and hope to serve as a ...
54 Subjects: including calculus, chemistry, algebra 2, SAT math
I have over 11 years in the educational field. I have worked with mostly high school and middle school children. I have experience many different subjects even though I am a history major, my
experience in working in many subjects has helped me be familiar in many other subjects.
32 Subjects: including algebra 2, algebra 1, reading, prealgebra
...Lessons covered school assignments in social sciences, math, English, government, biology, chemistry, and history as well as targeted ESL assignments. I strongly emphasize spoken as well as
written skills so that students can do more than just fill in blanks on a page. The mechanics of a language are important.
37 Subjects: including SAT math, ACT Math, ESL/ESOL, English
...If you cancel within 10 hours of a scheduled session, the late cancellation fee is half of the session time. 5- If I must cancel or reschedule, I will tutor 30 minutes of the next session for
free.As an instructor of English as a Second Language, I have studied not only English grammar, but the...
16 Subjects: including geometry, reading, writing, English | {"url":"http://www.purplemath.com/imperial_beach_ca_math_tutors.php","timestamp":"2014-04-16T13:24:11Z","content_type":null,"content_length":"23960","record_id":"<urn:uuid:310f8ecc-995a-43fb-8edf-35a90f2d0c33>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elmhurst, NY Geometry Tutor
Find an Elmhurst, NY Geometry Tutor
...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond.
26 Subjects: including geometry, physics, calculus, statistics
...For Regents Algebra: I have a 99% pass for the Regents Algebra including students from Wyzant. I am detail oriented. I provide practice exercises from start to finish with the student,
including topics and questions that are similar to those on a Regents exam.
47 Subjects: including geometry, chemistry, reading, writing
...I have a weird obsession with Military History so if you need tutoring for anything related to that, I'm your man.I have extensive experience tutoring K-6 students in English, Math, and Social
Studies. I have been a part of the Supplemental Education Services tutoring initiative on behalf of the...
37 Subjects: including geometry, English, reading, algebra 1
...My name is Matt B., and I've been a tutor for the past 7 years, helping people overcome their difficulties with math and physics. My tutoring philosophy is based on the idea above: my job as a
tutor is to help you understand how math works, making you able to do any problem yourself! (Of course...
12 Subjects: including geometry, physics, MCAT, trigonometry
...Once we make sure the fundamentals are in place I use repetition to further solidify the work so that these types of problems are a breeze for the student. I am highly qualified to tutor
algebra. I come from an engineering background, and I specialize in making these principles easy to understand and retainable.
12 Subjects: including geometry, chemistry, calculus, algebra 1
Related Elmhurst, NY Tutors
Elmhurst, NY Accounting Tutors
Elmhurst, NY ACT Tutors
Elmhurst, NY Algebra Tutors
Elmhurst, NY Algebra 2 Tutors
Elmhurst, NY Calculus Tutors
Elmhurst, NY Geometry Tutors
Elmhurst, NY Math Tutors
Elmhurst, NY Prealgebra Tutors
Elmhurst, NY Precalculus Tutors
Elmhurst, NY SAT Tutors
Elmhurst, NY SAT Math Tutors
Elmhurst, NY Science Tutors
Elmhurst, NY Statistics Tutors
Elmhurst, NY Trigonometry Tutors | {"url":"http://www.purplemath.com/Elmhurst_NY_geometry_tutors.php","timestamp":"2014-04-19T10:09:25Z","content_type":null,"content_length":"23984","record_id":"<urn:uuid:619ff5ba-c5a3-4401-8848-2487720863ac>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by jasort20 on Sunday, March 18, 2007 at 1:35am.
can someone help explain this for me please...
Problem #1
What values for x must be excluded in the following fraction?
Problem #2
what values for x must be excluded in the following fraction?
When working with fractions, the denominator (part on the bottom) cannot be 0. So we have to look at what value of "x" will make the denominator equal to 0.
The first one is a trick question. Any value you put in for "x", the denominator will never be 0. So any number will work.
Problem #2 has 2 numbers that will make the denominator 0. Forget about the x-3 right now. The top part can be 0. Now...think about this.
You are multiplying 2 equations. "4x-5" and "x+10" The only way to get "0" when multiplying is to multiply a number by 0.
If you multiply 18 x 0, you get 0.
If you multiply 5 x 0, you get 0.
If you multiply 132448957 x 0, you get 0.
So, we can say that if "4x-5" equals 0 OR "x+10" equals 0, the equation will not work.
So now we have to solve for 0 in two different equations:
Solve for x and get 2 different numbers. Those 2 numbers are your answer :)
so let me see if this is correct.
for 4x-5=0
x = 1.25 or 5/4
for x+10 = 0
x = -10
so the two numbers are 5/4 and -10 but then how do i write it out because i still have the numerator
You got the right answers (-10 and 5/4).
If memory serves me correctly, you can just write out the 2 numbers seperated by a comma. I MIGHT be wrong though. It has been a while.
I'll start a new thread and ask people who usually teach math this question. My areas are in Computers and Theology. Being in Computers, I know how to do a lot of math. But I don't remember the
"official" way to write the answer.
Related Questions
help math - Problem #1 what values for x must be excluded in the following ...
Math - I do not quite understand how to do this. COuld someone please help me? ...
pre calculus - Hi, I am having a problem trying to find and understanding how ...
pre calculus - Hi, I am having a problem trying to find and understanding how ...
algebra - consider the function f(x)=9/x and g(x)=9/x find f(g)x))and any values...
Math - (y+5)/(y^(2)+4-32) Find the excluded values for the following fraction.
values for k, math,help - can someone show me the steps to work out this problem...
Math 116 Axia - rewrite using multiplication -70/(23) Do not simplify, Type a ...
algebra - I have 2 problems that are hanging me up. Can you show me how? What ...
college alg - consider the function f(x)=9/x and g(x)=9/x find f(g)x))and any ... | {"url":"http://www.jiskha.com/display.cgi?id=1174196151","timestamp":"2014-04-17T21:34:01Z","content_type":null,"content_length":"10296","record_id":"<urn:uuid:b2f13a7c-07a9-4f95-94ae-d0c769c96826>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
examples pertaining to bind-free hypotheses
Major Section: BIND-FREE
See bind-free for a basic discussion of the use of bind-free to control rewriting.
We give examples of the use of bind-free hypotheses from the perspective of a user interested in reasoning about arithmetic, but it should be clear that bind-free can be used for many other purposes
EXAMPLE 1: Cancel a common factor.
(defun bind-divisor (a b)
; If a and b are polynomials with a common factor c, we return a
; binding for x. We could imagine writing get-factor to compute the
; gcd, or simply to return a single non-invertible factor.
(let ((c (get-factor a b)))
(and c (list (cons 'x c)))))
(defthm cancel-factor
;; We use case-split here to ensure that, once we have selected
;; a binding for x, the rest of the hypotheses will be relieved.
(implies (and (acl2-numberp a)
(acl2-numberp b)
(bind-free (bind-divisor a b) (x))
(case-split (not (equal x 0)))
(case-split (acl2-numberp x)))
(iff (equal a b)
(equal (/ a x) (/ b x)))))
EXAMPLE 2: Pull integer summand out of floor. Note: This example has an extended bind-free hypothesis, which uses the term (find-int-in-sum sum mfc state).
(defun fl (x)
;; This function is defined, and used, in the IHS books.
(floor x 1))
(defun int-binding (term mfc state)
;; The call to mfc-ts returns the encoded type of term.
;; Thus, we are asking if term is known by type reasoning to
;; be an integer.
(declare (xargs :stobjs (state) :mode :program))
(if (ts-subsetp (mfc-ts term mfc state)
(list (cons 'int term))
(defun find-int-in-sum (sum mfc state)
(declare (xargs :stobjs (state) :mode :program))
(if (and (nvariablep sum)
(not (fquotep sum))
(eq (ffn-symb sum) 'binary-+))
(or (int-binding (fargn sum 1) mfc state)
(find-int-in-sum (fargn sum 2) mfc state))
(int-binding sum mfc state)))
; Some additional work is required to prove the following. So for
; purposes of illustration, we wrap skip-proofs around the defthm.
(defthm cancel-fl-int
;; The use of case-split is probably not needed, since we should
;; know that int is an integer by the way we selected it. But this
;; is safer.
(implies (and (acl2-numberp sum)
(bind-free (find-int-in-sum sum mfc state) (int))
(case-split (integerp int)))
(equal (fl sum)
(+ int (fl (- sum int)))))
:rule-classes ((:rewrite :match-free :all)))
; Arithmetic libraries will have this sort of lemma.
(defthm hack (equal (+ (- x) x y) (fix y)))
(in-theory (disable fl))
(thm (implies (and (integerp x) (acl2-numberp y))
(equal (fl (+ x y)) (+ x (fl y)))))
EXAMPLE 3: Simplify terms such as (equal (+ a (* a b)) 0)
(defun factors (product)
;; We return a list of all the factors of product. We do not
;; require that product actually be a product.
(if (eq (fn-symb product) 'BINARY-*)
(cons (fargn product 1)
(factors (fargn product 2)))
(list product)))
(defun make-product (factors)
;; Factors is assumed to be a list of ACL2 terms. We return an
;; ACL2 term which is the product of all the ellements of the
;; list factors.
(cond ((atom factors)
((null (cdr factors))
(car factors))
((null (cddr factors))
(list 'BINARY-* (car factors) (cadr factors)))
(list 'BINARY-* (car factors) (make-product (cdr factors))))))
(defun quotient (common-factors sum)
;; Common-factors is a list of ACL2 terms. Sum is an ACL2 term each
;; of whose addends have common-factors as factors. We return
;; (/ sum (make-product common-factors)).
(if (eq (fn-symb sum) 'BINARY-+)
(let ((first (make-product (set-difference-equal (factors (fargn sum 1))
(list 'BINARY-+ first (quotient common-factors (fargn sum 2))))
(make-product (set-difference-equal (factors sum)
(defun intersection-equal (x y)
(cond ((endp x)
((member-equal (car x) y)
(cons (car x) (intersection-equal (cdr x) y)))
(intersection-equal (cdr x) y))))
(defun common-factors (factors sum)
;; Factors is a list of the factors common to all of the addends
;; examined so far. On entry, factors is a list of the factors in
;; the first addend of the original sum, and sum is the rest of the
;; addends. We sweep through sum, trying to find a set of factors
;; common to all the addends of sum.
(declare (xargs :measure (acl2-count sum)))
(cond ((null factors)
((eq (fn-symb sum) 'BINARY-+)
(common-factors (intersection-equal factors (factors (fargn sum 1)))
(fargn sum 2)))
(intersection-equal factors (factors sum)))))
(defun simplify-terms-such-as-a+ab-rel-0-fn (sum)
;; If we can find a set of factors common to all the addends of sum,
;; we return an alist binding common to the product of these common
;; factors and binding quotient to (/ sum common).
(if (eq (fn-symb sum) 'BINARY-+)
(let ((common-factors (common-factors (factors (fargn sum 1))
(fargn sum 2))))
(if common-factors
(let ((common (make-product common-factors))
(quotient (quotient common-factors sum)))
(list (cons 'common common)
(cons 'quotient quotient)))
(defthm simplify-terms-such-as-a+ab-=-0
(implies (and (bind-free
(simplify-terms-such-as-a+ab-rel-0-fn sum)
(common quotient))
(case-split (acl2-numberp common))
(case-split (acl2-numberp quotient))
(case-split (equal sum
(* common quotient))))
(equal (equal sum 0)
(or (equal common 0)
(equal quotient 0)))))
(thm (equal (equal (+ u (* u v)) 0)
(or (equal u 0) (equal v -1)))) | {"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/1/language/acl2-html-docs/BIND-FREE-EXAMPLES.html","timestamp":"2014-04-17T04:15:57Z","content_type":null,"content_length":"7367","record_id":"<urn:uuid:743804b6-c24d-4446-a5ed-6981029a3127>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Suitland Calculus Tutor
...I enjoy every minute of it, and it's been one of the most rewarding experiences of my life so far, one that has inspired me to become a secondary math teacher. I value a student's desire to
learn and commitment to having a good educational relationship. Being open about your needs, concerns, and things that are going well is very helpful to improving your math skills.
15 Subjects: including calculus, chemistry, geometry, algebra 1
...I can meet you at a library, coffee shop or even your house whatever is most comfortable for you! I truly believe that math can be fun and easy if it's broken down for you in a way that you can
comprehend it. I believe that there is a way to learn math for everyone and I look forward to finding out which way works best for you.
22 Subjects: including calculus, geometry, algebra 1, GRE
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including calculus, physics, geometry, GRE
...Thank you for you consideration. I am able to tutor the math portion of the COOP / HSPT exams. I have an engineering degree from UCLA and well over 15 years of full-time teaching & tutoring
28 Subjects: including calculus, chemistry, physics, geometry
...While in High School, I studied Latin up to college level, which has allowed me a much greater understanding of English grammar and the roots of our words. I do not have any professional
tutoring experience, but I have had good experiences tutoring my friends and family. I am an extremely patie...
32 Subjects: including calculus, reading, algebra 2, algebra 1 | {"url":"http://www.purplemath.com/Suitland_Calculus_tutors.php","timestamp":"2014-04-20T13:26:36Z","content_type":null,"content_length":"24076","record_id":"<urn:uuid:0a8085c2-69ec-45fa-a468-674ac6012f2c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
What's a good way to rewrite this non-tail-recursive function?
up vote 11 down vote favorite
For some reason, I am having trouble thinking of a good way to rewrite this function so it uses constant stack space. Most online discussions of tree recursion cheat by using the Fibonacci function
and exploiting the properties of that particular problem. Does anyone have any ideas for this "real-world" (well, more real-world than the Fibonacci series) use of recursion?
Clojure is an interesting case since it does not have tail-call optimization, but only tail recursion via the "recur" special form. It also strongly discourages the use of mutable state. It does have
many lazy constructs including tree-seq, but I am not able to see how they can help me for this case. Can anyone share some techniques they have picked up from C, Scheme, Haskell, or other
programming languages?
(defn flatten [x]
(let [type (:type x)]
(cond (or (= type :NIL) (= type :TEXT))
(= type :CONCAT)
(doc-concat (flatten (:doc1 x))
(flatten (:doc2 x)))
(= type :NEST)
(doc-nest (:level x)
(flatten (:doc x)))
(= type :LINE)
(doc-text " ")
(= type :UNION)
(recur (:doc1 x)))))
edit: By request in the comments...
Restated in general terms and using Scheme -- how do I rewrite the following recursion pattern so it doesn't consume stack space or require tail-call optimization of non-self-calls?
(define (frob x)
(cond ((foo? x)
((bar? x)
(macerate (f x) (frob (g x))))
((thud? x)
(frobnicate (frob (g x))
(frob (h x))))))
I chose annoying names to drive home the point that I am looking for answers that don't rely on the algebraic properties of x, macerate, frobnicate, f, g, or h. I just want to rewrite the recursion.
Rich Hickey has kindly added an explicit trampoline function to Clojure.
i would say "C++" but that would be unhelpful ;-) – Steven A. Lowe Nov 24 '08 at 21:21
it would be easier to help if you explain what does the function do. maybe also using a more common LISP dialect, i myself like to use Scheme at home, but really don't like the look of clojure –
Javier Nov 24 '08 at 21:30
I added a Scheme version of the question but I doubt it will be of much help to you. :) – Steven Huwig Nov 24 '08 at 21:43
2 use continuation passing style, then it's allocated on the heap and not the stack! :) – apg Nov 24 '08 at 21:49
CPS requires tail call optimization. :( – Steven Huwig Nov 24 '08 at 21:50
show 2 more comments
7 Answers
active oldest votes
Please don't downvote this because it's ugly. I know it's ugly. But it's a way to do it in trampoline-style (no system stack overflow), and without using gotos.
push x,1 on homemade stack
while stack length > 1
n = pop
if (n==1)
x = pop
if (type(x)==NIL || type(x)==TEXT)
push x // this is the "return value"
else if (type(x)==CONCAT)
push 2 // say call doc-concat
push doc2(x), 1 // 2nd recursion
push doc1(x), 1 // 1st recursion
else if (type(x)==NEST)
push 3 // say call doc-nest
up vote 6 down vote accepted push level(x) // push level argument to doc-nest
push doc(x), 1 // schedule recursion
else if (type(x)==LINE)
push " " // return a blank
else if (type(x)==UNION)
push doc1(x), 1 // just recur
else if (n==2)
push doc-concat(pop, pop) // finish the CONCAT case
else if (n==3)
push doc-nest(pop, pop) // finish the NEST case
// final value is the only value on the stack
That's very helpful -- thanks. – Steven Huwig Nov 25 '08 at 1:09
You'll probably have to tweak it a bit. – Mike Dunlavey Nov 25 '08 at 1:13
Cleaned it up a bit. – Mike Dunlavey Nov 25 '08 at 1:26
Nice, pseudo-code explanation of trampolining. – Daniel Spiewak Nov 26 '08 at 1:15
add comment
The main hurdle to easily transforming your algorithm is that it doesn't result in a sequence of calls to the same function; but alternates between a few ones, each operating on the
result of the other.
i'd say you have three alternatives:
up vote 3 down 1. totally reformulate the algorithm (that's what the Fibonacci examples do).
vote 2. combine all functions into a single one with lots of cond's (ugly, and maybe won't result in a real tail-recursion, even with a single function).
3. turn the flow inside-out: write a single, simple tail-recursive function that transforms the input data into the sequence of operations that have to be performed, and then eval
add comment
If flatten calls itself twice (in the :CONCAT case) how can it be turned into a loop? Maybe I'm missing something. Seems it's inherently a tree-walk.
up vote 2 down I mean, there are ways to do a tree-walk without stack, but something has to be unbounded, like if you do it with a FIFO, or as was suggested, with continuations.
Unbounded heap is fine -- it's just the StackOverflowError from Java that I want to avoid. – Steven Huwig Nov 24 '08 at 23:42
Well, in that case, you can just make your own stack out of an unbounded list, and turn your routine into a loop. If you want, I'll pseudo-code it in another answer. – Mike
Dunlavey Nov 24 '08 at 23:59
add comment
The standard general technique is conversion to trampolined style. For your particular problem (implementing prettyprinting combinators) you might find helpful Derek Oppen's 1980 paper
up vote 2 "Prettyprinting" (not on the web AFAIK). It presents a stack-based imperative algorithm similar to Wadler's later functional one.
down vote
Thanks for the citations. I've an ACM DL subscription and Oppen's paper is there. Some light reading for the holidays... – Steven Huwig Nov 25 '08 at 1:07
add comment
You could use continuation-passing:
(define (frob0 x k)
(cond ((foo? x)
(k x))
((bar? x)
(frob0 (g x)
(lambda (y)
(k (macerate (f x) y))))
((thud? x)
up vote 2 down (frob0 (g x)
vote (lambda (y)
(frob0 (h x)
(lambda (z)
(k (frobnicate y z))))))))
(define (frob x)
(frob0 x (lambda (y) y))
This will not make things easier to understand :-(
1 Yep, typo. Thanks for pointing it out. CPS requires tail-call optimization to work this way -- the calls to (k ...) can stack up. Clojure doesn't do TCO. – Steven Huwig Nov 24
'08 at 23:42
1 Oops. That's right. This isn't very helpful then. – Chris Conway Nov 25 '08 at 1:25
...except to get more people to complain about the lack of TCO in the JVM. :) – Steven Huwig Nov 25 '08 at 1:51
But, as stated in another comment, you could make this use Trampolined style. It'll be slower since you'll most likely use exceptions for it, but it'd work. – apg Nov 25 '08 at
add comment
The best I can come up with is something like this:
(define (doaction vars action)
(cond ((symbol=? action 'frob)
(cond ((foo? (first vars))
(first vars))
up vote 0 down vote ((bar? (first vars))
(doaction (list (f (first vars)) (doaction (g x) 'frob)) 'macerate)
It's not fully tail recursive, but likely the best you can get. TCO is really the way to go. (I understand that Clojure can't have it due to the JVM).
add comment
The following is not a specific answer to your question, but hopefully it will be a useful example. It replaces multiple recursions (which would otherwise require an unbounded call
stack) with a stack of tasks.
(in Haskellish code):
data Tree = Null | Node Tree Val Tree
-- original, non-tail-recursive function: flatten :: Tree -> Result flatten Null = nullval flatten (Node a v b) = nodefunc (flatten a) v (flatten b)
up vote 0 down -- modified, tail-recursive code: data Task = A Val Tree | B Result Val
eval :: Tree -> [Task] -> Result use :: Result -> [Task] -> Result
eval Null tasks = use nullval tasks eval (Node a v b) tasks = eval a ((A v b):tasks)
use aval ((A v b):tasks) = eval b ((B aval v):tasks) use bval ((B aval v):tasks) = use (nodefunc aval v bval) tasks use val [] = val
-- actual substitute function flatten2 :: Tree -> Result flatten2 tree = eval tree []
add comment
Not the answer you're looking for? Browse other questions tagged language-agnostic functional-programming recursion tree clojure or ask your own question. | {"url":"http://stackoverflow.com/questions/315507/whats-a-good-way-to-rewrite-this-non-tail-recursive-function","timestamp":"2014-04-16T16:32:00Z","content_type":null,"content_length":"104668","record_id":"<urn:uuid:19f480bf-423b-4272-a5dc-e3c273b5fa4a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
NAG Library
NAG Library Routine Document
1 Purpose
D02PXF computes the solution of a system of ordinary differential equations using interpolation anywhere on an integration step taken by
2 Specification
SUBROUTINE D02PXF ( TWANT, REQEST, NWANT, YWANT, YPWANT, F, WORK, WRKINT, LENINT, IFAIL)
INTEGER NWANT, LENINT, IFAIL
REAL (KIND=nag_wp) TWANT, YWANT(*), YPWANT(*), WORK(*), WRKINT(LENINT)
CHARACTER(1) REQEST
EXTERNAL F
3 Description
D02PXF and its associated routines (
) solve the initial value problem for a first-order system of ordinary differential equations. The routines, based on Runge–Kutta methods and derived from RKSUITE (see
Brankin et al. (1991)
), integrate
is the vector of
solution components and
is the independent variable.
computes the solution at the end of an integration step. Using the information computed on that step D02PXF computes the solution by interpolation at any point on that step. It cannot be used if
was specified in the call to setup routine
4 References
Brankin R W, Gladwell I and Shampine L F (1991) RKSUITE: A suite of Runge–Kutta codes for the initial value problems for ODEs SoftReport 91-S1 Southern Methodist University
5 Parameters
1: TWANT – REAL (KIND=nag_wp)Input
On entry: $t$, the value of the independent variable where a solution is desired.
2: REQEST – CHARACTER(1)Input
On entry
: determines whether the solution and/or its first derivative are to be computed.
Compute the approximate solution only.
Compute the approximate first derivative of the solution only.
Compute both the approximate solution and its first derivative.
Constraint: ${\mathbf{REQEST}}=\text{"S'}$, $\text{"D'}$ or $\text{"B'}$.
3: NWANT – INTEGERInput
On entry
: the number of components of the solution to be computed. The first
components are evaluated.
$1\le {\mathbf{NWANT}}\le \mathit{n}$
, where
is specified by
in the prior call to
4: YWANT($*$) – REAL (KIND=nag_wp) arrayOutput
the dimension of the array
must be at least
, and at least
On exit
: an approximation to the first
components of the solution at
. Otherwise
is not defined.
5: YPWANT($*$) – REAL (KIND=nag_wp) arrayOutput
the dimension of the array
must be at least
, and at least
On exit
: an approximation to the first
components of the first derivative at
. Otherwise
is not defined.
6: F – SUBROUTINE, supplied by the user.External Procedure
must evaluate the functions
(that is the first derivatives
${y}_{i}^{\prime }$
) for given values of the arguments
. It must be the same procedure as supplied to
The specification of
REAL (KIND=nag_wp) T, Y(*), YP(*)
In the description of the parameters of D02PXF below,
denotes the value of
in the call of
1: T – REAL (KIND=nag_wp)Input
On entry: $t$, the current value of the independent variable.
2: Y($*$) – REAL (KIND=nag_wp) arrayInput
On entry: the current values of the dependent variables, ${y}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,\mathit{n}$.
3: YP($*$) – REAL (KIND=nag_wp) arrayOutput
On exit: the values of ${f}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,\mathit{n}$.
must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D02PXF is called. Parameters denoted as
be changed by this procedure.
7: WORK($*$) – REAL (KIND=nag_wp) arrayInput/Output
the dimension of the array
must be at least
On entry
: this
be the same array as supplied to
remain unchanged between calls.
On exit
: contains information about the integration for use on subsequent calls to
or other associated routines.
8: WRKINT(LENINT) – REAL (KIND=nag_wp) arrayInput/Output
On entry: must be the same array as supplied in previous calls, if any, and must remain unchanged between calls to D02PXF.
On exit: the contents are modified.
9: LENINT – INTEGERInput
On entry
: the dimension of the array
as declared in the (sub)program from which D02PXF is called.
□ ${\mathbf{LENINT}}\ge 1$ if ${\mathbf{METHOD}}=1$ in the prior call to D02PVF;
□ ${\mathbf{LENINT}}\ge \mathit{n}+5×{\mathbf{NWANT}}$ if ${\mathbf{METHOD}}=2$ and $\mathit{n}$ is specified by NEQ in the prior call to D02PVF.
10: IFAIL – INTEGERInput/Output
On entry
must be set to
$-1\text{ or }1$
. If you are unfamiliar with this parameter you should refer to
Section 3.3
in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1\text{ or }1$
is recommended. If the output of error messages is undesirable, then the value
is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
On entry, an invalid input value for
was detected or an invalid call to D02PXF was made, for example without a previous call to the integration routine
, or after an error return from
, or if
was being used with
. You cannot continue integrating the problem.
7 Accuracy
The computed values will be of a similar accuracy to that computed by
9 Example
This example solves the equation
reposed as
over the range
$\left[0,2\pi \right]$
with initial conditions
. Relative error control is used with threshold values of
for each solution component.
is used to integrate the problem one step at a time and D02PXF is used to compute the first component of the solution and its derivative at intervals of length
$\pi /8$
across the range whenever these points lie in one of those integration steps. A moderate order Runge–Kutta method (
) is also used with tolerances
in turn so that solutions may be compared. The value of
is obtained by using
Note that the length of
is large enough for any valid combination of input arguments to
and the length of
is large enough for any valid value of the parameter
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/D02/d02pxf.html","timestamp":"2014-04-20T18:34:52Z","content_type":null,"content_length":"39880","record_id":"<urn:uuid:f806deb6-0b1e-438d-a4a7-62bb4e0bd62e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precise tests of low energy QCD from decay properties
Precise tests of low energy QCD from decay properties
Dipartimento di Fisica Sperimentale dell’Università e Sezione dell’INFN di Torino, 10125 Torino, Italy; University of Birmingham, Edgbaston, Birmingham, B15 2TT UK; Università di Roma “La Sapienza” e
Sezzione dell’INFN di Roma, 00185 Roma, Italy; Department of Physics, Imperial College, London, SW7 2BW UK; Faculty of Physics, University of Sofia “St. Kl. Ohridski”, 1164 Sofia, Bulgaria;
Department of Physics and Astronomy, George Mason University, Fairfax, VA 22030, USA; Dipartimento di Fisica, Università di Modena e Reggio Emilia, 41100 Modena, Italy; Istituto di Fisica, Università
di Urbino, 61029 Urbino, Italy; SLAC, Stanford University, Menlo Park, CA 94025, USA; Laboratory for High Energy Physics, 3012 Bern, Switzerland; UCLA, Los Angeles, CA 90024, USA; Laboratori
Nazionali di Frascati, 00044 Frascati (Rome), Italy; Institut de Física d’Altes Energies, UAB, 08193 Bellaterra (Barcelona), Spain; Dipartimento di Fisica Sperimentale dell’Università di Torino,
10125 Torino, Italy; Institut de Physique Nucléaire de Lyon, IN2P3-CNRS, Université Lyon I, 69622 Villeurbanne, France; University College Dublin School of Physics, Belfield, Dublin 4, Ireland;
Centro de Investigaciones Energeticas Medioambientales y Tecnologicas, 28040 Madrid, Spain
European Physical Journal C
(Impact Factor: 5.25). 01/2010; 70(3):635-657. DOI:10.1140/epjc/s10052-010-1480-6
ABSTRACT We report results from the analysis of the $\mbox {$\mbox { ($\mbox {$\mbox {) decay by the NA48/2 collaboration at the CERN SPS, based on the total statistics of 1.13 million decays
collected in 2003–2004.
The hadronic form factors in the S- and P-wave and their variation with energy are obtained. The phase difference between
the S- and P-wave states of the ππ system is accurately measured and allows a precise determination of $\mbox {$\mbox { and $\mbox {$\mbox {, the I = 0 and I = 2 S-wave ππ scattering lengths: $\mbox
{$\mbox {. Combination of this result with the other NA48/2 measurement obtained in the study of $\mbox {$\mbox { decays brings an improved determination of $\mbox {$\mbox { and the first precise
experimental measurement of $\mbox {$\mbox {, providing a stringent test of Chiral Perturbation Theory predictions and lattice QCD calculations. Using constraints based
on analyticity and chiral symmetry, even more precise values are obtained: $\mbox {$\mbox { and $\mbox {$\mbox {.
[show abstract] [hide abstract]
ABSTRACT: In order to investigate predictions concerning CP violation in the charged kaon sector, a new beam line providing concurrently K+ and K− has been constructed at CERN for the NA48/2
experiment. Several modifications and upgrades have been made in the apparatus; one of them being the implementation of a beam spectrometer named KABES. This detector is based on the time
projection chamber principle; the amplification of the ionization signal is achieved by using Micromegas devices. The performance of KABES is found to be excellent in high-intensity hadron beams.
The achieved space resolution of 100 μm provides a measurement of track momentum with a precision better than 1% and the time resolution, better than 1 ns, allows the charged kaons in NA48 to be
identified with almost no ambiguity. The measurement of the direction and momentum of the K+ and K− tracks makes possible the precise study of their decay modes, particularly those for which one
or more particles escape detection in the NA48 detector.
Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 01/2004; · 1.14 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We present the first results of the PACS-CS project which aims to simulate 2+1 flavor lattice QCD on the physical point with the nonperturbatively $O(a)$-improved Wilson quark action
and the Iwasaki gauge action. Numerical simulations are carried out at the lattice spacing of $a=0.0907(13)$fm on a $32^3\times 64$ lattice with the use of the DDHMC algorithm to reduce the
up-down quark mass. Further algorithmic improvements make possible the simulation whose ud quark mass is as light as the physical value. The resulting PS meson masses range from 702MeV down to
156MeV, which clearly exhibit the presence of chiral logarithms. An analysis of the PS meson sector with SU(3) ChPT reveals that the NLO corrections are large at the physical strange quark mass.
In order to estimate the physical ud quark mass, we employ the SU(2) chiral analysis expanding the strange quark contributions analytically around the physical strange quark mass. The SU(2) LECs
${\bar l}_3$ and ${\bar l}_4$ are comparable with the recent estimates by other lattice QCD calculations. We determine the physical point together with the lattice spacing employing $m_\pi$,
$m_K$ and $m_\Omega$ as input. The hadron spectrum extrapolated to the physical point shows an agreement with the experimental values at a few % level of statistical errors, albeit there remain
possible cutoff effects. We also find that our results of $f_\pi=134.0(4.2)$MeV, $f_K=159.4(3.1)$MeV and $f_K/f_\pi=1.189(20)$ with the perturbative renormalization factors are compatible with
the experimental values. For the physical quark masses we obtain $m_{\rm ud}^\msbar=2.527(47)$MeV and $m_{\rm s}^\msbar=72.72(78)$MeV extracted from the axial-vector Ward-Takahashi identity with
the perturbative renormalization factors. Comment: 43 pages, 48 figures
Physical Review D 07/2008; · 4.69 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We have calculated the form-factors F and G in K→ππℓν decays (Kℓ4) to two-loop order in Chiral Perturbation Theory (ChPT). Combining this together with earlier two-loop calculations an
updated set of values for the Lir, the ChPT constants at , is obtained. We discuss the uncertainties in the determination and the changes compared to previous estimates.
Physics Letters B 12/1999; · 4.57 Impact Factor
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
15 Downloads
Available from
Oct 19, 2012 | {"url":"http://www.researchgate.net/publication/226300754_Precise_tests_of_low_energy_QCD_from_decay_properties","timestamp":"2014-04-17T07:16:58Z","content_type":null,"content_length":"306735","record_id":"<urn:uuid:22ab3ad0-b870-4221-9044-13352b10d060>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Switched Capacitor Circuits
Created by Kat Kim
1 Introduction
Switched capacitor circuits are commonly used configuration to replace a resistor with
switches and capacitors. There are many different configurations, but the main idea is
to pass charge into and out of a capacitor by controlling switches around it. This worksheet
will cover some basics on switching, explain a simple switched capacitor configuration, and
step you through some example circuits.
2 Switching
The basic idea of a switch is a connection that can be opened and closed by something
controlling the circuit. A Metal-Oxide Semiconductor Field Effect Transistor (MOSFET)
is a good example of a switch because it allows current to flow when its gate input is high
and very little current when the gate input is low or 0V . Ideally, the switch would have no
resistance and have an instantaneous response. No switch is truly ideal, but we will assume
they are for ease of calculation.
Let’s say that we have two signals φ1 and φ2 that are producing identical square waves. The
period of the wave is T , and the duty cycle is 50%. We shift φ2 forward by T /2, so that
only one switch is open at any time (we will see why this is important in the next section).
Moving out the ideal switch world for a second, assume that changing voltages take a certain
amount of time. If we want to ensure that only one switch is on at any one time, we need
to decrease the duty cycle on both signals. The resulting waveforms are shown in the figure
below. These are the waveforms that will control the MOSFET switches in the switched
3 Basic Switched Capacitor
First, let’s think of a single resistor R between two voltages V 1 and V 2 as shown below.
Assuming that current is flowing from V 1 to V 2, we know that the IV characteristic is
(V 1 − V 2) = IR or R = (V 1−V 2) . Keep this in mind as we look at the next circuit.
The most basic switched capacitor circuit is shown below. The two MOSFETs are controlled
by the wave forms described in the previous section. The circuit operates in these steps:
1. Switch 1 closes. The capacitor is charged to V 1.
2. Switch 1 opens. Charge remains in the capacitor.
3. Switch 2 closes. The capacitor gives off enough charge to adjust to V 2.
4. Switch 2 opens. Charge remains in the capacitor. Return to step 1.
After step 2, the charge on the capacitor is Q = C(V 1). After step 4, the charge is
Q = C(V 2). The total charge moving through the system is the difference of the charges:
∆Q = C(V 1 − V 2). Current is charge over time, and we know that this charge flows through
the system in the period of the waveform T , so overall I = C (V 1−V 2) or C = (V 1−V 2) .
By comparing the IV characteristics of the resistor and switched capacitor we see that
R = C = f1 , where f is the frequency of the waveform. Because of this important rela-
tionship, the effective resistance of a switched capacitor can be changed by changing the
capacitance or simply changing the frequency of the control waveform.
One thing to keep in mind is that if the input voltage V 1 changes with time, the signal
will have some frequencies. In order for the switched capacitor to function properly, it must
switch at a must faster rate that the highest frequency of the input voltage.
4 Examples
First, we will look at the kind of equivalent resistances we can achieve with a switched ca-
pacitor. Then, we are going to walk through some common circuits that can be reconfigured
to use switched capacitors.
1. I go to the stock room and pick up a 10 pF capacitor. If I use it with a 100 kHz
switching frequency, what is the equivalent resistance?
Req =
2. I have a stockroom full of a wide range of capacitors , and I want to emulate a 100kΩ re-
sistor, but my frequency generator is for some reason stuck between 50Hz and 20kHz.
I am not sure what frequencies my input voltage will have, but I want the switched
capacitor to be as accurate as possible. What frequency and capacitance should I use?
3. An RC inverting integrating circuit is shown below. Remember than an op-amp is in
negative feedback so the output will do whatever it can to make the negative input
node match the positive input node.
(a) Assuming the op-amp is able to work properly in negative feedback, what is the
voltage on V− ?
(b) Assuming V 1 is positive, draw current lines through the components.
(c) Write the KCL equation.
(d) Use this equation to write V 2 in terms of V 1. (Hint: this is a inverting integrating
4. The switched capacitor version of an inverting integrator is shown below. We will use
the same methodology to show that this circuit is essential the same as the previous
(a) When working with switches, it is useful to redraw the circuit for each phase.
Draw the circuit when only switch 1 is closed.
i. Draw the direction of the current and the polarity on the capacitor.
ii. What is the full charge on the capacitor?
(b) Draw the circuit when only switch 2 is closed. Include the polarity on the capacitor
for the previous phase.
i. Right after the transition, V− is not the value it “wants” to be. What is its
voltage? What voltage does it “want” to be?
ii. Draw current lines for each capacitor, and write the KCL equation. Re-
member that if the current is in the “wrong” direction it will be a negative
iii. We know that all the charge in C2 will be discharged because once the op-
amp adjusts, voltage over the capacitor will be 0V . All that charge will be
converted to current over the switching period T . What is the current through
C2? Keep in mind the direction of the current.
iv. Since we do not know the initial charge on C1, we should think in terms of
differentials. Write the current equation for C1.
v. Use the KCL equation and the current across C1 and C2 to solve for V 2 in
terms of V 1.
(c) The final equations for the RC and switched capacitor circuits should be very
similar. In fact, using the R = C relation, they should be exactly the same.
5. Below are the RC and switched capacitor configurations for a circuit that integrates
the difference between two voltages. Do a similar analysis on these two circuits to
prove that they are indeed the same, by the relation R = C . Also, notice the difference
in number and type of components between the two configurations.
5 Further Reading
For basic information on switched capacitor circuits, Linear Circuits by M.E. Van Valkenburg
and B.K. Kinariwala, Chapter 17, is a good source. It has a number of circuit configurations
that can be worked through with some basic circuit analysis skills.
For a more advanced take on switched capacitor circuits, look at CMOS Analog Circuit
Design by Phillip Allen and Douglas Holberg, 2nd edition Chapter 9. This book comes from
more of a signal analysis perspective and assumes a high level of circuit analysis. | {"url":"http://www.docstoc.com/docs/7672042/Introduction-to-Switched-Capacitor-Circuits","timestamp":"2014-04-18T16:43:32Z","content_type":null,"content_length":"61628","record_id":"<urn:uuid:a2f3213b-806a-4f28-aae0-32ea37c2b743>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boyle Heights, CA SAT Math Tutor
Find a Boyle Heights, CA SAT Math Tutor
...My educational background includes a Ph.D. in physical chemistry and a B.Sc. in chemical engineering. More than 10 years of experience in teaching math from middle school-level, high
school-level and college-level students. Extensive experience in teaching sections in algebra 2, such as inequalities, exponents, polynomials, functions, equations and graph.
10 Subjects: including SAT math, chemistry, calculus, statistics
...I also, have great experience tutoring high school students in multiple subjects in one-to-one sessions throughout their academic year. Students I worked with have scored higher on their
finals and other placement tests. I am very flexible and available weekdays and weekends.
11 Subjects: including SAT math, chemistry, geometry, algebra 1
...I am a certified teacher with several years of classroom experience (including Honors & Advanced Placement courses). I have been tutoring privately for over ten years and have helped hundreds
of students with their study skills. I teach students how to become more organized, to plan ahead, and t...
24 Subjects: including SAT math, chemistry, writing, geometry
...And because I honor my commitment to my students, I have a 12-hour cancellation policy. If a cancellation is necessary, I'm open to make-up sessions. I look forward to the opportunity to work
with you and your child.I've always had exposure to the Spanish language because my mother's side of the family is Peruvian.
10 Subjects: including SAT math, Spanish, algebra 1, grammar
...I am currently employed as an elementary school teaching assistant, and have worked as an English teacher abroad. Combined, my skills and experience enable me to bring out my clients' academic
strengths. I currently work as a full-time teaching assistant in a third-grade classroom.
28 Subjects: including SAT math, reading, writing, English
Related Boyle Heights, CA Tutors
Boyle Heights, CA Accounting Tutors
Boyle Heights, CA ACT Tutors
Boyle Heights, CA Algebra Tutors
Boyle Heights, CA Algebra 2 Tutors
Boyle Heights, CA Calculus Tutors
Boyle Heights, CA Geometry Tutors
Boyle Heights, CA Math Tutors
Boyle Heights, CA Prealgebra Tutors
Boyle Heights, CA Precalculus Tutors
Boyle Heights, CA SAT Tutors
Boyle Heights, CA SAT Math Tutors
Boyle Heights, CA Science Tutors
Boyle Heights, CA Statistics Tutors
Boyle Heights, CA Trigonometry Tutors
Nearby Cities With SAT math Tutor
August F. Haw, CA SAT math Tutors
Dockweiler, CA SAT math Tutors
East Los Angeles, CA SAT math Tutors
Firestone Park, CA SAT math Tutors
Foy, CA SAT math Tutors
Glassell, CA SAT math Tutors
Hazard, CA SAT math Tutors
Los Angeles SAT math Tutors
Los Nietos, CA SAT math Tutors
Rancho Dominguez, CA SAT math Tutors
Sanford, CA SAT math Tutors
South, CA SAT math Tutors
View Park, CA SAT math Tutors
Walnut Park, CA SAT math Tutors
Windsor Hills, CA SAT math Tutors | {"url":"http://www.purplemath.com/Boyle_Heights_CA_SAT_math_tutors.php","timestamp":"2014-04-21T10:30:11Z","content_type":null,"content_length":"24500","record_id":"<urn:uuid:7bdb9234-af68-4092-b621-8700dd1b84a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sort by [Top-Selling ]
Change currency
Close Window
Renting textbooks is cheap & easy
Save up to 90% by renting textbooks
Get your books in a flash
Return them for FREE in 130 days-yippee!
Click a topic below to learn more about it:
• Worry-Free Guarantee
Drop a class? Change your mind? Within 24 days of your rental date, you can return rented books for a refund. See, no worries!
• Rent it for a whole semester
There's no need to pick a rental period. Any book is rented for 130 days. Bonus: You also have a week grace period, for your end-of-semester convenience.
• Keeping a rented book
If you fall in love with your rented book and decide you want to keep it, just contact its seller and arrange a purchase. Then you and your book may live happily ever after.
• FREE, easy rental returns
At the end of your rental period, your return shipping is FREE. It's also as easy as printing a label and dropping your book at any handy UPS location.
• Check for supplements
A rental book may be a used copy, with or without supplemental items. Contact the rental's seller before ordering if its description about supplements is unclear and a certain supplement is a
• Contiguous U.S. states only
At this time, rentals are only available to residents of the 48 contiguous U.S. states. But we hope to rent everything to everyone everywhere soon!
• Fast shipping for rentals
Your rental book ships by a fast, trackable method within one business day of your order date. That's so you get it in time for classes, and will know where it is until it arrives.
• Use it (nicely) but don't lose it
While you are renting a book, normal wear and tear is A-OK. After all, you have to use it to score excellent grades, right? Just make sure it's not lost or damaged.
• Separate cart for rentals
Rentals are separated in your cart from items you may want to purchase. You'll check out separately, but it's fast and easy—just a couple extra clicks is all it takes to save with rental books. | {"url":"http://www.alibris.com/search/rentals/subject/Mathematics-Algebra-Intermediate","timestamp":"2014-04-25T03:53:29Z","content_type":null,"content_length":"115752","record_id":"<urn:uuid:88822f25-872c-46ff-8be2-997ac93ee7a5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
On an Extension of the Maximum-flow Minimum-cut Theorem to Multicommodity Flows
Results 1 - 10 of 33
- in Proceedings of the 42nd Allerton Annual Conference on Communication, Control, and Computing , 2004
"... In this paper, we investigate the benefit of network coding over routing for multiple independent unicast transmissions. We compare the maximum achievable throughput with network coding and that
with routing only. We show that the result depends crucially on the network model. In directed network ..."
Cited by 54 (7 self)
Add to MetaCart
In this paper, we investigate the benefit of network coding over routing for multiple independent unicast transmissions. We compare the maximum achievable throughput with network coding and that with
routing only. We show that the result depends crucially on the network model. In directed networks, or in undirected networks with integral routing requirement, network coding may outperform routing.
In undirected networks with fractional routing, we show that the potential for network coding to increase achievable throughput is equivalent to the potential of network coding to increase bandwidth
e#ciency, both of which we conjecture to be non-existent.
- Numerische Mathematik , 1993
"... The design of cost-efficient networks satisfying certain survivability ..."
, 1992
"... We group in this paper, within a unified framework, many applications of the following polyhedra: cut, boolean quadric, hypermetric and metric polyhedra. We treat, in particular, the following
applications: ffl ` 1 - and L 1 -metrics in functional analysis, ffl the max-cut problem, the Boole probl ..."
Cited by 25 (2 self)
Add to MetaCart
We group in this paper, within a unified framework, many applications of the following polyhedra: cut, boolean quadric, hypermetric and metric polyhedra. We treat, in particular, the following
applications: ffl ` 1 - and L 1 -metrics in functional analysis, ffl the max-cut problem, the Boole problem and multicommodity flow problems in combinatorial optimization, ffl lattice holes in
geometry of numbers, ffl density matrices of many-fermions systems in quantum mechanics. We present some other applications, in probability theory, statistical data analysis and design theory.
, 1997
"... Given a communication demand between each pair of nodes of a network we consider the problem of deciding what capacity to install on each edge of the network in order to minimize the building
cost of the network and to satisfy the demand between each pair of nodes. The feasible capacities that can b ..."
Cited by 16 (2 self)
Add to MetaCart
Given a communication demand between each pair of nodes of a network we consider the problem of deciding what capacity to install on each edge of the network in order to minimize the building cost of
the network and to satisfy the demand between each pair of nodes. The feasible capacities that can be leased from a network provider are of a particular kind in our case. There are a few so-called
basic capacities having the property that every basic capacity is an integral multiple of every smaller basic capacity. An edge can be equipped with a capacity only if it is an integer combination of
the basic capacities. We treat, in addition, several restrictions on the routings of the demands (length restriction, diversification) and failures of single nodes or single edges. We formulate the
problem as a mixed integer linear programming problem and develop a cutting plane algorithm as well as several heuristics to solve it. We report on computational results for real world data.
- Combinatorics and Computer Science, Lecture
"... Abstract. We survey and present new geometric and combinatorial propertiez of some polyhedra with application in combinatorial optimization, for example, the max-cut and multicommodity flow
problems. Namely we consider the volume, symmetry group, facets, vertices, face lattice, diameter, adjacency a ..."
Cited by 15 (10 self)
Add to MetaCart
Abstract. We survey and present new geometric and combinatorial propertiez of some polyhedra with application in combinatorial optimization, for example, the max-cut and multicommodity flow problems.
Namely we consider the volume, symmetry group, facets, vertices, face lattice, diameter, adjacency and incidence relm:ons and connectivity of the metric polytope and its relatives. In partic~dar,
using its large symmetry group, we completely describe all the 13 o:bits which form the 275 840 vertices of the 21-dimensional metric polytope on 7 nodes and their incidence and adjacency relations.
The edge connectivity, the/-skeletons and a lifting procedure valid for a large class of vertices of the metric polytope are also given. Finally, we present an ordering of the facets of a polytope,
based on their adjacency relations, for the enumeration of its vertices by the double description method. 1
, 2001
"... We consider convex polyhedra with applications to wellknown combinatorial optimization problems: the metric polytope mn and its relatives. For n # 6 the description of the metric polytope is
easy as mn has at most 544 vertices partitioned into 3 orbits; m7 - the largest previously known instan ..."
Cited by 10 (1 self)
Add to MetaCart
We consider convex polyhedra with applications to wellknown combinatorial optimization problems: the metric polytope mn and its relatives. For n # 6 the description of the metric polytope is easy as
mn has at most 544 vertices partitioned into 3 orbits; m7 - the largest previously known instance - has 275 840 vertices but only 13 orbits. Using its large symmetry group, we enumerate orbitwise 1
550 825 600 vertices of the 28-dimensional metric polytope m8 . The description consists of 533 orbits and is conjectured to be complete. The orbitwise incidence and adjacency relations are also
given. The skeleton of m8 could be large enough to reveal some general features of the metric polytope on n nodes. While the extreme connectivity of the cuts appears to be one of the main features of
the skeleton of mn , we conjecture that the cut vertices do not form a cut-set. The combinatorial and computational applications of this conjecture are studied. In particular, a heuristic skipping
the highest degeneracy is presented. 1
- PREPRINT CAMS 142 ECOLE DES HAUTES ETUDES EN SCIENCES SOCIALES , 2001
"... The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game
dates to the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities ov ..."
Cited by 7 (3 self)
Add to MetaCart
The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game dates to
the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities over this cone, known as pagoda functions, were used to show the infeasibility of various peg games.
In this paper we study the extremal structure of solitaire cones for a variety of boards, and relate their structure to the well studied metric cone. In particular we give: 1. an equivalence between
the multicommodity flow problem with associated dual metric cone and a generalized peg game with associated solitaire cone; 2. a related NP-completeness result; 3. a method of generating large
classes of facets; 4. a complete characterization of 0-1 facets; 5. exponential upper and lower bounds (in the dimension) on the number of facets; 6. results on the number of facets, incidence and
adjacency relationships and diameter for small rectangular, toric and triangular boards; 7. a complete characterization of the adjacency of extreme rays, diameter, number of 2-faces and edge
connectivity for rectangular toric boards.
- Optimization Methods and Software , 1998
"... Dedicated to Professor Masao Iri on the occasion of his 65th birthday This paper describes computational experience obtained in the development of the lrs code, which uses the reverse search
technique to solve the vertex enumeration/convex hull problem for d-dimensional convex polyhedra. We giv e em ..."
Cited by 7 (2 self)
Add to MetaCart
Dedicated to Professor Masao Iri on the occasion of his 65th birthday This paper describes computational experience obtained in the development of the lrs code, which uses the reverse search
technique to solve the vertex enumeration/convex hull problem for d-dimensional convex polyhedra. We giv e empirical results showing improvements obtained by the use of lexicographic perturbation,
lifting, and integer pivoting. We also give some indication of the cost of using extended precision arithmetic and illustrate the use of the estimation function of lrs. The empirical results are
obtained by running various versions of the program on a set of well-known non-trivial polyhedra: cut, configuration, cyclic, Kuhn_Quandt, and metric polytopes. Ke ywords: vertex enumeration, convex
hulls, reverse search, computational experience 1.
- Advances in Applied Mathematics 19 , 1997
"... We give simple algorithmic proofs of some theorems of Papernov (1976) and Karzanov (1985,1990) on the packing of metrics by cuts. 1. ..."
, 2009
"... In this paper, we give a complete characterization of the class of weighted maximum multiflow problems whose dual polyhedra have bounded fractionality. This is a common generalization of two
fundamental results of Karzanov. The first one is a characterization of commodity graphs H for which the dual ..."
Cited by 6 (6 self)
Add to MetaCart
In this paper, we give a complete characterization of the class of weighted maximum multiflow problems whose dual polyhedra have bounded fractionality. This is a common generalization of two
fundamental results of Karzanov. The first one is a characterization of commodity graphs H for which the dual of maximum multiflow problem with respect to H has bounded fractionality, and the second
one is a characterization of metrics d on terminals for which the dual of metric-weighed maximum multiflow problem has bounded fractionality. A key ingredient of the present paper is a nonmetric
generalization of the tight span, which was originally introduced for metrics by Isbell and Dress. A theory of nonmetric tight spans provides a unified duality framework to the weighted maximum
multiflow problems, and gives a unified interpretation of combinatorial dual solutions of several known min-max theorems in the multiflow theory. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=58598","timestamp":"2014-04-19T21:00:14Z","content_type":null,"content_length":"37424","record_id":"<urn:uuid:0af3eda2-270b-458f-ac87-c0f666ec0772>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from November 2009 on Programming Praxis
A mathematician purchased four items in a grocery store. He noticed that when he added the prices of the four items, the sum came to $7.11, and when he multiplied the prices of the four items, the
product came to $7.11.
Your task is to determine the prices of the four items. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments
Pages: 1 2
Astronomy has fascinated mankind since its earliest days; science and math were developed first to study the heavens. Even today, your local newspaper tells you such astronomical information as the
time of local sunrise and sunset and the phase of the moon.
Your task is to write functions that calculate the time of sunrise and sunset for any spot on earth for any day of the year; you will have to do your own research on the internet to find a suitable
set of formulas. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.
Pages: 1 2
In the previous exercise we wrote a Master Mind setter. In today’s exercise we will write a solver, using an algorithm given by Donald E. Knuth in his article “The Computer As Master Mind” in Volume
9, Number 1, the 1976-1977 edition, of The Journal of Recreational Mathematics.
The central concept of Knuth’s algorithm is the pool of potential solutions. His algorithm chooses at each step a probe that minimizes the maximum number of remaining possibilities over all possible
response of the codebreaker; in the event of a tie, any pattern that achieves the minimum may be used, subject to the condition that a probe that is a member of the current pool is preferred to one
that is not.
For instance, consider the pool of 1296 possible code words at the start of a puzzle. There are essentially five possible starting probes: 1 1 1 1, 1 1 1 2, 1 1 2 2, 1 1 2 3, and 1 2 3 4 (rotations
are excluded, as are variants that substitute one symbol consistently for another). The remaining pool sizes after each of the five probes is applied to all of the 1296 possible code words are given
in the table below:
.... 625 256 256 81 16
W... 0 308 256 276 152
B... 500 317 256 182 108
WW.. 0 61 96 222 312
BW.. 0 156 208 230 252
BB.. 150 123 114 105 96
WWW. 0 0 16 44 136
BWW. 0 27 36 84 132
BBW. 0 24 32 40 48
BBB. 20 20 20 20 20
WWWW 0 0 1 2 9
BWWW 0 0 0 4 8
BBWW 0 3 4 5 6
BBBB 1 1 1 1 1
max 625 317 256 276 312
The minimax solution is 256, achieved when the probe is 1 1 2 2, so that should always be the first probe. Then the solution is determined by making the minimax probe, reducing the pool by applying
the result of the probe, and repeating on the reduced pool until the puzzle is solved.
Your task is to write a Master Mind solver based on the rules set out in the previous exercise and Knuth’s algorithm given above. When you are finished, you are welcome to read or run a suggested
solution, or to post your solution or discuss the exercise in the comments below.
Pages: 1 2
Master Mind is a two-player game of deductive logic. One player, the setter, selects a four-symbol code, and the other player, the solver, tries to identify the code by trying test patterns, probes,
to which the setter responds with the number of black hits, indicating the number of positions where the code symbol and probe symbol are identical, and the number of white hits, where a probe has
the right symbol in the wrong position. Setter and solver change roles after each puzzle is solved, for a pre-defined number of rounds, and the winner is the player who has solved the puzzles with
the least number of probes. With six symbols and four pegs there are 6^4 = 1296 possible codes. The physical game uses colored pegs for the symbols; we will use digits instead. Variants of the game
increase the number of symbols and/or the length of the code; a variant with five pegs using eight colors is marketed under the name Super Master Mind. Here is a sample game:
1 1 2 2 B
1 3 4 4 W
3 5 2 6 BWW
1 4 6 2 BW
3 6 3 2 BBBB
The solver first probes with the pattern 1 1 2 2, which has a single black hit. The second probe, 1 3 4 4, receives a single white hit. The third probe, 3 5 2 6, earns a single black hit and two
white hits. The fourth probe, 1 4 6 2, receives one black hit and one white hit. The fifth probe, 3 6 3 2, solves the puzzle with four black hits.
Your task is to write a program that performs the role of the setter, selecting a random code, prompting for probes, and scoring each probe until the human solver who is playing the game solves the
puzzle. In the next exercise you will be asked to write a solver. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in
the comments below.
Pages: 1 2
It is well known that any comparison-based sort, as we have been studying, has a lower time bound of O(n log n). But if all the keys are positive integers less than or equal to n, it is possible to
sort in O(n) linear time by taking advantage of the structure of the keys themselves.
Count sort determines, for each input element x, the number of elements less than x, then places x directly in its position in the output; if there are k elements less than x, then x belongs in the k
^th + 1 position (being careful to properly consider the case of equal elements). Count sort requires two temporary arrays, one to hold the counts of the various elements, which act as indexes into
the array, and one to build up the output.
Radix sort extends count sort by making multiple passes based on the positional digits of the integers being sorted: first do a count sort on the digits in the ones column of the integers, then a
count sort on the digits in the tens column, then the hundreds column, the thousands column, and so on until the input is sorted, taking advantage of the fact that count sort is stable. Radix sort
works on other kinds of keys besides integers; for instance, dates can be sorted by doing count sort on day, month and year in succession.
Your task is to write functions that sort arrays using count sort and radix sort. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss
the exercise in the comments below.
Pages: 1 2
The last O(n log n) sorting algorithm that we shall consider in our current series of exercises is merge sort. If you have two sorted sequences, they can be merged into a single sorted sequence in
time linear to their combined length by running through them in order, at each step taking the smaller of the heads of the two sequences. Then mergesort works by recursively merging smaller sequences
into larger ones, starting with trivially-sorted sequences of one element that are merged into two-element sequences, then merging pairs of two-element sequences into four-element sequences, and so
on, until the entire sequence is sorted.
Your task is to write a function that sorts an array by the merge sort algorithm described above, according to the conventions of the prior exercise. When you are finished, you are welcome to read or
run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Pages: 1 2
A priority queue is a data structure that permits insertion of a new element and retrieval of its smallest member; we have seen priority queues in two previous exercises. Priority queues permit
sorting by inserting elements in random order and retrieving them in sorted order. Heapsort uses the heap data structure to maintain a priority queue. The heap is a tree embedded in an array, with
the property that the item at each index i of the array is less than the children at indices 2i and 2i+1.
The key to understanding heapsort is a function we call heapify that gives the sub-array A[i .. n] the heap property if the sub-array A[i+1 .. n] already has the property. Heapify starts at the i^th
element of the array and swaps each element with its smallest child, repeating the operation at that child, stopping at the end of the array or when the current element is smaller than either of its
children. Then heapsort works in two phases; the first phase forms an initial heap by calling heapify on each element of the array from n/2 down to 1, then a second phase extracts the elements in
order by repeatedly swapping the first element with the last, re-heaping the sub-array that excludes the last element, and recurring with the smaller sub-array that excludes the last element.
Your task is to write a function that sorts an array using the heapsort algorithm, using the conventions of the prior exercise. When you are finished, you are welcome to read or run a suggested
solution, or to post your solution or discuss the exercise in the comments below.
Pages: 1 2
Quick sort was invented by Sir Charles Antony Richard “Tony” Hoare, a British computer scientist and winner of the 1980 Turing Award, in 1960, while he was a visiting student at Moscow State
University. Though it has an annoying quadratic worst-case performance, quick sort has expected O(n log n) performance and is significantly faster in practice than most other O(n log n) sorting
algorithms, and it is possible to arrange that the worst-case performance almost never happens for real-world data.
Quick sort works as follows: First, an element, called the pivot, is chosen from the array. Then the array is partitioned around the pivot by reordering the array so that all elements less than the
pivot come before it and all items greater than the pivot come after it; this puts the pivot element in its final position in the array. Finally, quick sort is called recursively on the less-than and
greater-than partitions; the base of the recursion is arrays of zero or one element.
There are many ways to choose the pivot element. Some algorithms choose the first element of the array, or the last; others choose the median of three elements (first, middle, last). Our preference
is to choose an element at random, since that virtually eliminates the possibility of quadratic performance (unless there is collusion between the random-number generator and the data).
Likewise, there are many ways to perform the partitioning. One approach, due to Nick Lomuto, uses a single pointer to run through the sub-array, swapping the current element for the last element of
the sub-array, which is then decremented, if it is greater than the pivot. Another approach uses two pointers that approach each other, swapping elements when the two pointers cross over the pivot
element. Lomuto’s partition is simpler; dual pointers is quicker, but the details are hard to get right.
Your task is to write a function that sorts an array using the quick sort algorithm, using the conventions of a previous exercise. When you are finished, you are welcome to read or run a suggested
solution, or to post your own solution or discuss the exercise in the comments below.
Pages: 1 2 | {"url":"http://programmingpraxis.com/2009/11/","timestamp":"2014-04-17T21:24:14Z","content_type":null,"content_length":"51098","record_id":"<urn:uuid:3fddbbe4-ae7d-4bd4-8c75-eb61ca5a151a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
06-XX Order, lattices, ordered algebraic structures [See also 18B35]
06-00 General reference works (handbooks, dictionaries, bibliographies, etc.)
06-01 Instructional exposition (textbooks, tutorial papers, etc.)
06-02 Research exposition (monographs, survey articles)
06-03 Historical (must also be assigned at least one classification number from Section 01)
06-04 Explicit machine computation and programs (not the theory of computation or programming)
06-06 Proceedings, conferences, collections, etc.
06Axx Ordered sets
06Bxx Lattices [See also 03G10]
06Cxx Modular lattices, complemented lattices
06Dxx Distributive lattices
06Exx Boolean algebras (Boolean rings) [See also 03G05]
06Fxx Ordered structures | {"url":"http://ams.org/mathscinet/msc/msc2010.html?t=06-XX","timestamp":"2014-04-19T07:52:53Z","content_type":null,"content_length":"13306","record_id":"<urn:uuid:fc63e359-4acd-4f67-a6c9-5e4794cbcb7d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Christoph Petermann Homepage
Free Space Pathloss Calculation and EME link budget
by DF9CY Christoph Petermann ©
• Part 1
JAVA application for EME budget calculation
• Part 2
EXCEL Spreadsheet
• Part 3
Part 1 - JAVA application
Please let me know, if you like this calculator. The JAVA source code of EME System Calculator is also availbale for you here. This ZIP File contains the CLASS Files AND the JAVA File.
The latest version however is integrated into the VMT software package.
Part 2 - EXCEL Spreadsheet
How do I calculate the effective range of my station in free space environment ? The solution is quite simple: Add all known parameters of your station, take the formula for free space path loss and
don't forget the Boltzmann konstant.
I have added a Microsoft EXCEL (Version 5 or 7) spreadsheet EME System Sheet, where you can enter or modify all parameters and see immediately the effect on the result. This spreadsheet works correct
and takes care of all noise contibutions to your system. Moon noise cannot be calculated.
Part 3 - Fundamentals
Propagation on the earth's surface is a different problems and this adds a number of restrictions.
Moonbounce propagation adds other difficulties:
The virtual diameter of the moon is ca. 0.5° it is a certain fraction of the whole sphere. Therefore an additional loss of around 50 dB has to be introduced. Doing so you will come to the general
RADAR equation:
a * Pt * Gr * Gt * La^2
( 4 * PI )^2 * d^2
a: cross section of target
Pr: receive power
Tr: transmit power
Gr: gain of receive antenna
Gt: gain of transmit antenna
La: Lambda = wavelength
d: distance to target
Here is the "dB" version:
Pr [dB] = Pt + Gr + Gt + 10 * log (a) + 20 * log (f) + 40 log (d) - 103.4
d in km
a in m^2
f in MHz
The reflectivity of the moon is only 7%
Here is a calculation example for standard free space propagation:
Noise power calculation
Definition: Noise Power NP = 4 * KTB / (4*R) = KTB
K = Boltzmann konstant = 1.38E-23 J/K
B = Bandwidth in Hertz
T = Ambient temperature = use 290 K (Kelvin!)
/* no Celsius or Fahrenheit ...*/
NF = Noise Figure in dB /* ideally should be
transformed to Noise temperature !!! */
SNR= Signal to Noise Ratio for detection
Definition: Pathloss Pl = 32.45 + 20*log(f) + 20*log(d)
f = Frequency in MHz
d = distance in kilometers
Antenna gains should be given in dBi, that means dB over an isotropic
radiator. IF your gain is in dBd /* db over halfwave dipole */
then add 2.14 dB to your value.
Here is an example from the EXCEL speadsheet:
Free space path loss calculation (April 1997)
Christoph Petermann 09.04.1997
Pommernweg 11
D24229 Schwedeneck
The framed fields may be changed only !
Signal to Noise Ratio / Sensitivity Limit
Noise Power P = 4 k T B / ( 4 R ) = k T B
Bandwidth B [Hz] = 1,0E+02 Hz
Boltzmann Konst. [ J/ K]= 1,38E-23 J /K
Temperature [K] = 290,00 K
Noise Power [dBm]@290K -153,98 dBm
My systems Signal to Noise Ratio:
SNR [dB] 5,00 dB
Receiver Noise figure 0,30 dB
Receiver Noise temperature 20,74 K
Losses prior to LNA 0,20 dB
Losses in Noise Temperature 13,67 K
Antenna temperature (Sky) 20,00 K
All system Noise Temperature 40,94 K
Noise Power [dBm]@T_sys -162,48 dBm
Sensitivity [dBm]@T_sys -157,18 dBm
Calculation of maximum possible Free Space Range
Transmit antenna gain 35,00 dBi Gain over isotropic
Receive antenna gain 35,00 dBi
Transmit power 60,00 dBm
equal to 30,00 dBW
equal to 1000,00 Watt
Receive Sensitivity -157,18 dBm
Frequency 1296,00 MHz
Maximum path loss incl. antenna gains
Pl= 287,18 dB
Range out of Pl=32.45+20log(f)+20log(d)
R= 4206 Mio.km
EME Pathloss for a given frequency
Frequency 1296,00 MHz
Moon distance 386000,00 km
Moon diameter 3400,00 km
RADAR Equation 53,14 dB
reflectivity of Moon surface 7,00 %
Pathloss 277,15 dB
Expected Signal to noise ratio
10,03 dB
PL=32.45 + 20*log(Moondistance * 2) + 20*log(f)
+ Spherical loss from RADAR equ. + reflectivity loss
reflectivity loss : the moons' surface reflects only 7% to the earth
RADAR Equation: the virtual moon represents only a fraction of the total sphere.
Therefore an additional loss must be introduced.
This is called in general the RADAR EQUATION
© C.Petermann DF9CY June 1997
"References: VHF/UHF Manual; ARRL Publications et al."
The calculations are valid for free space only
Text and All Images are Copyright by Christoph Petermann DF9CY
GO (back) and visit my homepage | {"url":"http://www.df9cy.de/tech-mat/pathloss.htm","timestamp":"2014-04-17T12:37:11Z","content_type":null,"content_length":"9369","record_id":"<urn:uuid:b1af6c03-4a9b-4d0c-90c8-6c6fdfc00a8a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help with finding this antiderivative (Need explanation of steps to find answer)
February 22nd 2013, 11:57 PM #1
Feb 2013
Need help with finding this antiderivative (Need explanation of steps to find answer)
Hi, I need help figuring out how to work the following problem. The problem has already been worked out, but I don't understand where some of the stuff is coming from.
Here's the problem:
Find the following antiderivative: ∫1/√(-x² - 4x) dx
And here's what has been worked:
Step 1: -x² - 4x = -(x²-4x+4-4)
Step 2: 4-(x+2)²
Step 3: ∫1/√4-(x+2)² dx
Step 4: 1/2∫1/√1-((x+2)/2)² dx
Step 5: (1/2)(2)arcsin((x+2)/2) + C
Okay, so I understand the first step. You just complete the square for the part under the square root. I start to get a little confused in the second step though where it gets simplified. Why is
that 4 there in the front? Shouldn't it be a negative 4 instead and be at the end instead of at the front? I understand step 3, just plug in what you simplified down. Step 4 is where I really
begin to get confused. Where did all those numbers come from? It looks like everything was divided by something, but I can't figure out by what. It looks like the 4 was divided by 4, the (x+2)²
was divided by a 2, and I have no idea where the 1/2 in the front of the problem came from. I don't really understand step 5 either although it really just seems to be plugging some sort of trig
identity into the problem.
Last edited by mcox874; February 22nd 2013 at 11:59 PM.
Re: Need help with finding this antiderivative (Need explanation of steps to find ans
Hey mcox874.
Basically they are factoring out a 4 in the square root which makes it a 2 outside since SQRT(4) = 2 and they just balance it by multiplying by 1/2.
February 23rd 2013, 01:00 AM #2
MHF Contributor
Sep 2012 | {"url":"http://mathhelpforum.com/calculus/213634-need-help-finding-antiderivative-need-explanation-steps-find-answer.html","timestamp":"2014-04-19T23:45:08Z","content_type":null,"content_length":"33824","record_id":"<urn:uuid:3524cc7a-7e19-4df6-a660-7c4ae2c5f517>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
July 24, 2003
Santa Barbara, Calif. --Three years before he received the Nobel Prize in Physics, Eugene Wigner published an article entitled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
(1960). He marveled at how often physicists develop concepts to describe the "real" world only to discover that mathematicians--heedless of that real world--have already thought up and explored the
concepts. His own experience of the uncanny applicability of mathematical insights to the physical reality of quantum mechanics led Wigner to observe "that the enormous usefulness of mathematics in
the natural sciences is something bordering on the mysterious and that there is no rational explanation for it."
Doubtless the observation of just such an uncanny correspondence between mathematics and physics prompted the editors of the July 25 issue of Science to feature on the cover the colloidal particle
clusters that are the subject of research by an engineering professor and his two graduate students at the University of California at Santa Barbara (UCSB). That professor, David Pine, holds a joint
appointment in the departments of Chemical Engineering and Materials and chairs the Chemical Engineering Department. The first author of the article, "Dense Packing and Symmetry in Small Clusters of
Microspheres," is Vinothan Manoharan; the other author is Mark Elsesser.
Their story begins with the iridescence of opals, which are composed of equal-sized spheres about a micrometer in diameter, or roughly a hundred times smaller than the size of a human hair. The
spheres are packed into a structure known as the face-centered cubic (FCC) lattice, which is exactly the same arrangement used by grocers to stack oranges or apples. Because the opal's constituent
spheres are about the size of the wavelength of light, their orderly arrangement diffracts light and causes iridescence.
Pine notes, "Opals have interesting optical properties, but not quite interesting enough. We are trying to improve on this structure to make some useful optical materials."
In principle such materials, known as "photonic crystals," would enable new and inexpensive optical circuits and might also improve the efficiency of devices such as lasers and LEDs. How to make a
photonic crystal is not the subject of the Science article, but what the researchers discovered in the attempt.
They began by trying to find ways to pack tiny spheres, like the ones that make up an opal, into structures different from the FCC. This is a difficult problem, since, as the mathematician Kepler
long ago conjectured, the FCC structure is the densest packing of an infinite number of spheres. In other words, the face-centered cubic structure results whenever a large number of spheres are
compressed together. But, the researchers asked, how do a finite number of spheres pack? What structures are formed by a very small number of spheres, say, five or eight?
The experiments which answered that question began with Manoharan taking colloidal microspheres of the common plastic polystyrene and trapping the particles in small droplets of the oily solvent
toluene. Then he heated the mixture so that the solvent droplets evaporated, effectively shrink-wrapping the particles into little clusters. Finally, using a centrifuge, he separated the clusters
according to the number of particles in each i.e., doublets, triplets, etc.
"The thing that really grabbed our attention," said Manoharan, "was that clusters that contained the same number of particles always had the same configuration." Or, in the language of the Science
paper, "small numbers (n = 2-15) of hard spheres pack into distinct and identical polyhedra for each value of n." Moreover, when Manoharan examined the clusters under the microscope, he found that
many of the structures had beautiful and unexpected symmetry. The seven-sphere cluster, for example, resembles a flower with five petals.
When compressed by a liquid droplet, small groups of colloidal microspheres -- plastic spheres with diameters about one one-hundredth that of a human hair -- pack to form an unusual sequence of
structures. At top are packings containing four to eleven spheres, as seen through the scanning electron microscope. At bottom are the polyhedra defined by drawing lines between the centers of
touching spheres in each cluster. Some of these polyhedra are familiar structures, such as the tetrahedron (4 spheres) and octahedron (6 spheres), but most of the others -- including the "snub
disphenoid" (8 spheres) and the "gyroelongated square dipyramid" (10 spheres) -- are probably unfamiliar, despite their attractive symmetry. Nevertheless, all of these structures obey a single,
simple mathematical rule: they all minimize a quantity called the second moment. This is the first observation of this packing motif in nature. [Image credit: V. N. Manoharan]
Surprisingly, the symmetry of these configurations has nothing to do with chemical bonds or quantum mechanics. The clusters, it turns out, obey a very simple mathematical principle first explored in
1995 by mathematicians N.J.A. Sloane of AT&T Research, John Conway of Princeton, and colleagues. Sloane and Conway derived the structures of sphere packings that minimize a quantity called the
"second moment of the mass distribution."
The structures the mathematicians predicted are the same as those of the colloidal clusters. "What's amazing," said Pine, "is that their interests had nothing to do with colloids or emulsions. They
were studying a problem in pure mathematics."
What is the "second moment"? Said Pine, "Take one of these clusters and define its center of gravity as the point at which if you hang the cluster by a string it will not rotate. Then you take the
distances of each of these spheres from that center of gravity (measuring from the center of the sphere) and square those distances and add the squares together, and that's the second moment of the
mass distribution."
The researchers caution that they do not yet fully understand the physical process that causes the clusters to minimize this quantity. But the mathematical connection has a certain elegance.
"Occasionally," said Pine, "there's a correspondence between mathematics and the way nature behaves that's really striking. Most of the time the connection is difficult to visualize, but in this case
a layperson can explore it with only a package of ping-pong balls and some glue."
Manoharan points out that their results may be relevant to fields other than colloids, since scientists often model the building blocks of matter as spheres. "These clusters tell us something about
matter in general-how symmetry arises from simple packing constraints. That may be important in understanding the atomic-scale structure of liquids, for example. Between the mathematical beauty of
the cluster structures and their engineering applications, there is some interesting physics in terms of understanding how geometry affects the basic properties of matter."
The research reported in Science is part of Manoharan's thesis for a Ph.D. from Santa Barbara in chemical engineering. He has accepted a faculty appointment in physics and engineering at Harvard
University after a postdoctoral fellowship at the University of Pennsylvania.
The research is supported by the National Science Foundation (NSF) and the Unilever Corp.
[Note: Professor Pine can be reached at (805) 893-7383 and pine@mrl.ucsb.edu and Mr. Manoharan at (805) 893-7862 and vinny@engineering.ucsb.edu.
Related Links
Media Contact
Tony Rairden | {"url":"https://www.engr.ucsb.edu/news/64/","timestamp":"2014-04-19T07:09:55Z","content_type":null,"content_length":"21209","record_id":"<urn:uuid:4bf5430c-f7cc-408e-9e2e-6390e2d539bc>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
implicit differentiation (horizontal tangent line)
Given the curve $x^2+xy+y^2=9$
a) Find $y'$
Taking the derivative of the above equation and using product rule for xy, i get,
$2x+(y+xyy')+2yy' = 0$
$xy'+2yy' = -2x-y$
$y' = (-2x-y)/(x+2y)$
b) Find all points on the curve at which the tangent line is horizontal
Is a) correct? and also how do i do letter b)? i have no clue what to do. thanks! | {"url":"http://mathhelpforum.com/calculus/113983-implicit-differentiation-horizontal-tangent-line.html","timestamp":"2014-04-21T10:27:23Z","content_type":null,"content_length":"37602","record_id":"<urn:uuid:49842792-3610-472f-a072-17f4f1f1eb3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math puzzles
Search Results: 'Math puzzles'
PUZZLE MATH: Trigonometry and Logarithms
Paperback: $21.08
Ships in 3-5 business days
This is a collection of 20 math worksheets on trigonometry and logarithms. They provide instant feedback and fun to students because the answers correspond to letters or images that decode secret...
More > messages and pictures. They are self-grading, which makes them ideal for customized treatment of learners. They are tested by multiple teachers, including the author and her colleagues. If you
liked the calculus decoder puzzles in "PUZZLE MATH: Mixed Derivatives", you'll love these. In "PUZZLE MATH: Trigonometry and Logarithms" students find trig functions' values, missing angles, create
and use wave graphs and the unit circle, draw and use special triangles, use one trig equation to find others, use and convert between degrees and radians, learn and evaluate trig identities, find
scales for similarity, learn log definitions, identify and use log properties, utilize the change of base formula for logs, find missing values in log equations, and more.< Less
PUZZLE MATH: Trigonometry and Logarithms
eBook (PDF): $15.00
This is a collection of 20 math worksheets on trigonometry and logarithms. They provide instant feedback and fun to students because the answers correspond to letters or images that decode secret...
More > messages and pictures. They are self-grading, which makes them ideal for customized treatment of learners. They are tested by multiple teachers, including the author and her colleagues. If you
liked the calculus decoder puzzles in "PUZZLE MATH: Mixed Derivatives", you'll love these. In "PUZZLE MATH: Trigonometry and Logarithms" students find trig functions' values, missing angles, create
and use wave graphs and the unit circle, draw and use special triangles, use one trig equation to find others, use and convert between degrees and radians, learn and evaluate trig identities, find
scales for similarity, learn log definitions, identify and use log properties, utilize the change of base formula for logs, find missing values in log equations, and more.< Less
PUZZLE MATH: Mixed Derivatives
eBook (PDF): $0.00
This is a small collection of 3 math worksheets on the Calculus topic of derivatives. The worksheets provide instant feedback and fun for the math students because the answers correspond to letters
... More > or images that lead to the decoding of secret messages and pictures. They are also self-grading, which makes them ideal for customized treatment of learners. They are tested by multiple
teachers, including the author and her colleagues. If you like these, you'll love "PUZZLE MATH: Trigonometry and Logarithms" also by Roxanne Eckenrode.< Less
PUZZLE MATH: Mixed Derivatives
Paperback: $5.60
Ships in 3-5 business days
This is a small collection of 3 math worksheets on the Calculus topic of derivatives. The worksheets provide instant feedback and fun for the math students because the answers correspond to letters
... More > or images that lead to the decoding of secret messages and pictures. They are also self-grading, which makes them ideal for customized treatment of learners. They are tested by multiple
teachers, including the author and her colleagues. If you like these, you'll love "PUZZLE MATH: Trigonometry and Logarithms" also by Roxanne Eckenrode.< Less
PUZZLE MATH: Operations with Integers
eBook (PDF): $0.00
This worksheet provides immediate feedback for students as it is self-checking. It is fun and satisfying for students because the answers correspond to images that decode a secret message. Students
... More > perform operations with integers. (The perform addition, subtraction, multiplication, and division on signed numbers.) It is good for students in pre-algebra or algebra, as an enrichment
for younger students, or a review for older ones. You may also be interested in "PUZZLE MATH: Mixed Derivatives" and "PUZZLE MATH: Trigonometry and Logarithms".< Less
PUZZLE MATH: More Trigonometry
eBook (PDF): $15.00
This new set of 20 trigonometry worksheets, created to follow up the popular "PUZZLE MATH: Trigonometry and Logarithms", provides instant feedback and fun to students because the answers... More >
correspond to letters/images that decode secret messages/pictures. They are self-checking, ideal for customized treatment of learners. Students identify reference angles and quadrants, evaluate trig
functions using special right triangles, find trig functions’ values when given terminal side points, use/prove trig ids, evaluate trig funcs. using calculators, find sinusoid attributes like
amplitude, evaluate inverse trig functions, solve equations with trig functions using factoring and Pythagorean Identities for multiple values, and use the Laws of Cosines and Sines and triangle
areas. They also find sides/angles using trig definitions, complete the Double & 1/2 Angle Identities, Use Sums and Differences of Angles & 1/2 Angle Identities, differentiate which techniques to use
on mixed reviews, and more.< Less
PUZZLE MATH: More Trigonometry
Paperback: $21.07
Ships in 3-5 business days
This new set of 20 trigonometry worksheets, created to follow up the popular "PUZZLE MATH: Trigonometry and Logarithms", provides instant feedback and fun to students because the answers... More >
correspond to letters/images that decode secret messages/pictures. They are self-checking, ideal for customized treatment of learners. Students identify reference angles and quadrants, evaluate trig
functions using special right triangles, find trig functions’ values when given terminal side points, use/prove trig ids, evaluate trig funcs. using calculators, find sinusoid attributes like
amplitude, evaluate inverse trig functions, solve equations with trig functions using factoring and Pythagorean Identities for multiple values, and use the Laws of Cosines and Sines and triangle
areas. They also find sides/angles using trig definitions, complete the Double & 1/2 Angle Identities, Use Sums and Differences of Angles & 1/2 Angle Identities, differentiate which techniques to use
on mixed reviews, and more.< Less
PUZZLE MATH: Geometry
eBook (PDF): $7.50
These ten decoder-style geometry worksheets will provide students with immediate feedback and fun. They are self-checking, and correct answers reveal secret messages. Topics include identifying...
More > vertical angles, linear pairs of angles, and congruence. Students will use midsegments of triangles and trapezoids, find parts of isosceles trapezoids, find angles in kites, fing measures of
arcs and angles when chords are congruent, and find arc length from central and inscribed angles. There are four thorough geometry review puzzles where students work on multiple concepts, and there
is much more.< Less
PUZZLE MATH: Geometry
Paperback: $12.99
Ships in 3-5 business days
These ten decoder-style geometry worksheets will provide students with immediate feedback and fun. They are self-checking, and correct answers reveal secret messages. Topics include identifying...
More > vertical angles, linear pairs of angles, and congruence. Students will use midsegments of triangles and trapezoids, find parts of isosceles trapezoids, find angles in kites, fing measures of
arcs and angles when chords are congruent, and find arc length from central and inscribed angles. There are four thorough geometry review puzzles where students work on multiple concepts, and there
is much more.< Less
PUZZLE MATH: Factoring Numbers
eBook (PDF): $2.00
This worksheet provides immediate feedback for students as it is self-checking. It is fun and satisfying for students because the answers correspond to images that decode a secret picture. It gives
... More > students plentiful opportunities to practice factoring numbers, which will help solidify their mastery of multiplication facts (times tables). You may also be interested in "PUZZLE MATH:
Mixed Derivatives" and "PUZZLE MATH: Trigonometry and Logarithms".< Less | {"url":"http://www.lulu.com/shop/search.ep?keyWords=Math%20puzzles","timestamp":"2014-04-17T02:02:35Z","content_type":null,"content_length":"113401","record_id":"<urn:uuid:6eaea822-ea04-4e64-b930-0f33144fda9a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
chararray.astype(dtype, order='K', casting='unsafe', subok=True, copy=True)¶
Copy of the array, cast to a specified type.
dtype : str or dtype
Typecode or data-type to which the array is cast.
order : {‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and
‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.
casting : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional
Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.
Parameters ☆ ‘no’ means the data types should not be cast at all.
: ☆ ‘equiv’ means only byte-order changes are allowed.
☆ ‘safe’ means only casts which can preserve values are allowed.
☆ ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
☆ ‘unsafe’ means any data conversions may be done.
subok : bool, optional
If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.
copy : bool, optional
By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead
of a copy.
arr_t : ndarray
Returns : Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input paramter), arr_t is a new array of the same shape as the
input array, with dtype, order given by dtype, order.
ComplexWarning :
Raises :
When casting from complex to float or int. To avoid this, one should use a.real.astype(t).
>>> x = np.array([1, 2, 2.5])
>>> x
array([ 1. , 2. , 2.5])
>>> x.astype(int)
array([1, 2, 2]) | {"url":"http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.chararray.astype.html","timestamp":"2014-04-19T19:35:06Z","content_type":null,"content_length":"9853","record_id":"<urn:uuid:40de313e-ae89-434a-8018-4b3f4de51bb8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steig's Verification Statistics - Part 2 (A little more majic)
Recent Comments
Jeff Id on Clarification of Understanding
j ferguson on Clarification of Understanding
Alex Hamilton on Clarification of Understanding
omanuel on Clarification of Understanding
Ckg The Don on Priceless Entertainment from…
WV cools on Clarification of Understanding
Timmy on Clarification of Understanding
Samuelnik on Russia Accuses CRU of Tam…
cloud hosting server on Priceless Entertainment from…
hunter on Clarification of Understanding
Jeff Id on Clarification of Understanding
Just Me again on Clarification of Understanding
Jeff Id on Clarification of Understanding
hunter on Clarification of Understanding
hunter on Clarification of Understanding
Steig’s Verification Statistics – Part 2 (A little more majic)
Posted by Jeff Id on April 28, 2009
A guest post from Ryan O. The second part of Ryan’s Steig et al. verification statistics. It’s amazing what you can find by replicating a paper. The title above is my own.
Fig. 1-Geographical assignments for ground data comparison.
This is kind of a misnomer since I won’t actually be doing any replication, but as there was a Part One, there needed to be a Part Two. Originally I had intended to replicate the r, RE, and CE
statistics in Table S2 of the Supplemental Information, but doing so would require that I first replicate the restricted 15-predictor TIR reconstruction, an activity that I consider has little value
at this point.
Instead, this post will examine how well the main TIR reconstruction matches the ground data. Note that this is something not done anywhere in Steig’s paper. To recap, the verification statistics
that Steig presents are:
1. r2, RE, and CE for the main TIR reconstruction (though the displayed images were of the PCA reconstruction) where the AVHRR data was compared to the reconstructed values.
2. r, RE, and CE for the AWS reconstruction (Table S1).
3. r, RE, and CE for the restricted 15-predictor TIR reconstruction.
There is a noticeable absence of verification statistics for the PCA reconstruction (except by accident) and pre-satellite verification statistics for the main TIR reconstruction. Steig does not
tell us how well the ground data matches the reconstruction.
Why is this important?
Remember that the methodology for the main TIR reconstruction is the following:
1. Perform cloudmasking/removal of outliers for the raw AVHRR data.
2. Perform single value decomposition of the cloudmasked data and extract the first 3 principle components.
3. Place the first 3 principle components alongside the data from 42 surface stations and use RegEM to impute the 1957-1982 values for the 3 principle components.
4. Generate the 1957-2006 reconstruction from the 3 principle components, which are now populated back to 1957.
I placed step 3 in italics because it is a step that requires some thought. Aside from problems with RegEM, a critical prerequisite for this step to work properly is for the AVHRR data and the
ground data to be measuring the same quantity. If they are not measuring the same quantity – i.e., not properly calibrated to each other – the output of this step might very well be gibberish.
A check one could use to determine if the output is gibberish is to calculate verification statistics of the ground data vs. the reconstruction. By itself this will not be able to distinguish
between calibration problems, imputation problems, and resolution problems (insufficient number of PCs), but it will at least provide some insight as to whether there are problems that should be
concerning. Since Steig did not include such an analysis with the paper, we shall take it upon ourselves to provide them.
The procedure for doing this is a bit different than Part One because the ground stations are incomplete during different timeframes. To handle this, I take the first 50% of data that actually
exists and designate that as the calibration period. The last 50% of data that actually exists is then the verification period. I do this for each station, individually, and then calculate r, RE,
and CE. I then reverse the order of the calibration and verification periods, recalculate, and store the minimum values obtained. This makes the procedure exactly analogous to the method used in
the SI for the AVHRR data. Additionally, to ensure that there are sufficient degrees of freedom for the comparison to be meaningful, I exclude any station without at least 4 years of history. I
apply the above procedure to all AWS stations and all manned stations except Campbell, Macquarie, Grytviken, Orcadas, and Signy. The reason for excluding them is because the nearest gridcell is 100+
km away, so there are no equivalent reconstruction values to which I can compare them.
The plots are color-coded with the geographical assignments of Fig. 1.
First up, correlation coefficient:
Fig. 2 – Correlation coefficient (ground station to reconstruction).
Take special note of the relatively high correlation coefficients for the red group, which is West Antarctica, and the low coefficients for the blue group, which is East Antarctica. This may be
unexpected, and it will become important.
Now coefficient of determination:
Fig. 3 – Coefficient of determination (ground station to reconstruction).
From the above 2 plots, we might be tempted to draw the following two conclusions: 1) the correlation coefficients and coefficients of determination are fairly low; and, 2) the reconstruction
performs the best in West Antarctica and the Ross Ice Shelf. This would seem to be counter-intuitive, since the plots of the eigenvectors and the AVHRR-to-reconstruction verification statistics
would have us assume that the best performance would be in East Antarctica. However, before we draw too many conclusions, let’s look at RE and CE:
Fig. 4 – RE (ground station to reconstruction).
Hm. In this one, the Peninsula and West Antarctica that look good. But isn’t the Peninsula supposed to be the least well-reconstructed region? Maybe CE will tell us something:
Fig. 5 – CE (ground station to reconstruction).
Ah. On the CE plot, the Peninsula stations look suitably poor. However, the poor performance of East Antarctica seems as unexpected as the better performance of West Antarctica and the Ross Ice
Now let’s step back and think about this for a moment. After a review of all of these plots, we would conclude:
1. The reconstruction does the worst job in East Antarctica on all plots – r, r2, RE, and CE.
2. The reconstruction does the best job in West Antarctica on all plots, with the Ross Ice Shelf running a close second.
3. The overall performance of the reconstruction with respect to the ground data is poor, and in the Peninsula and East Antarctica, simply taking the mean of the station data explains more
variance than fitting the reconstruction to the ground data.
4. The trends from the reconstruction should therefore be the most accurate in West Antarctica and the least accurate in East Antarctica.
So let’s see if our conclusion #4 is on the mark. The minimum number of data points required for the following plot is 48 months.
Fig. 6 – Difference in slope for common points (reconstruction minus ground data).
Funny. The most faithful replication of slope is in East Antarctica!
Here are the region means and standard deviations:
1. Peninsula: -0.4282 (mean); 0.5059 (standard deviation) – 20 stations
2. West Antarctica: -0.3576 (mean); 1.1581 (standard deviation) – 9 stations
3. Ross Ice Shelf: 0.1782 (mean); 0.5595 (standard deviation) – 26 stations
4. East Antarctica: -0.0358 (mean); 0.5215 (standard deviation) – 40 stations
Why is this so?
First, the sample sizes for the mean/sd calculations are low. In all cases, zero is well within the 95% confidence intervals. So we cannot say that any of the means are different from zero at any
reasonable significance levels – except for perhaps the Peninsula. It could be that East Antarctica performs the best simply because we have the largest number of samples in that area. Indeed, the
distribution of peaks as we go from left to right almost looks random.
Second, the signal-to-noise ratio is very low. In a 10-year sample, the signal we are looking for is on the order of 0.5 deg C or less, while the noise exceeds 10 deg C peak-to-peak. Because of
this, r, r2, RE, and CE penalize missing the noise to a much greater extent than they penalize missing the signal. This leads to preferentially high scores for matching the high-frequency noise over
the low-frequency signal. The problem is made worse by the potential for spurious correlations between noise of physically unrelated stations because of the short record lengths and spotty coverage.
In short, I believe had the authors presented plots of the relevant verification statistics for the ground stations – which show negative REs for about half of them and negative CEs for about 3/5ths
– they might have had a more difficult time with publishing. The reconstructed values simply do not match the ground records.
With that being said, I personally do not think the situation is intractable. I think that a reasonable reconstruction can be done using both the satellite and ground data. Hopefully over the next
few weeks I will be able to present a method that does a fairer job of representing actual Antarctic temperatures. It will be based in a large part on the work that Jeff Id has already done.
Additionally, I intend to run some experiments using manufactured temperature fields to perform sensitivity studies with RegEM and this particular implementation of PCA. There should be some fairly
generic rules that result. There are also a few different statistical measures I intend to test to determine a set of tests that provide more statistical power than r, r2, RE, and CE alone.
For the interested, here are the station numbers as used in the above plots:
Fig. 7 – Ground station numbers and associated names for the plots.
Lastly, the R script for calculating the above quantities:
ssq=function(x) {sum(x^2)}
get.stats.matrix2=function(orig, est) {
### Set up the placeholder matrix for intermediate calculations
stats.matrix.early=matrix(ncol=ncol(orig), nrow=14)
stats.matrix.late=matrix(ncol=ncol(orig), nrow=14)
### Loop through each column
for(i in 1:ncol(orig)) {
### Extract station i
orig.all=orig[, i]
est.all=est[, i]
### Fill in NAs for all points that are not shared
map=is.na(est.all); orig.all[map]=NA
map=is.na(orig.all); est.all[map]=NA
### Find linear trends for all common points
### Remove NAs for r, RE, and CE calculations
### Split vectors into calibration and verification periods
### Set up temporary storage for calculation results
stats.matrix.early[, i]=rep(9999, 14)
stats.matrix.late[, i]=rep(9999, 14)
### Ensure at least 24 months of data in each period before calculating r, RE, and CE
if(length(orig.c)>23 & length(orig.v)>23 & length(est.c)>23 & length(est.v)>23) {
### Get stats for early calibration (first 50%)
stats.matrix.early[, i]=c(get.stats(orig.c, orig.v, est.c, est.v), tr.orig$coef[2], tr.est$coef[2])
### Get stats for late calibration (last 50%)
stats.matrix.late[, i]=c(get.stats(orig.v, orig.c, est.v, est.c), tr.orig$coef[2], tr.est$coef[2])
### Find the minimums between early and late
### Replace 9999′s with NAs
### Return a list with minimums, early cal stats, and late cal stats
stats.list=list(stats.matrix.min, stats.matrix.early, stats.matrix.late)
get.stats=function(orig.cal, orig.ver, est.cal, est.ver) {
### Get calibration/verification period means
### Get residuals
### Calculate Hurst parameter
hurst=HurstK(c(orig.cal, orig.ver))
### Calculate average explained variance [Cook et al (1999)]
### Calculate Pearson correlation for verification period
### [Cook et al (1999)]
pearson=cor.test(orig.ver, est.ver, method=”pearson”)
### Calculate Pearson correlation for all available data
### [Cook et al (1999)]
pearson=cor.test(c(orig.ver, orig.cal), c(est.ver, est.cal), method=”pearson”)
### Calculate Kendall’s Tau for the verification period
tau=cor.test(orig.ver, est.ver, method=”kendall”)
### Calculate Kendall’s Tau for all available data
tau=cor.test(c(orig.ver, orig.cal), c(est.ver, est.cal), method=”kendall”)
### Calculate RE [Cook et al (1999)]
### Calculate CE [Cook et al (1999)]
### Return vector
stats=c(hurst, R2c, r.ver, r.ver.p, r.all, r.all.p, tau.ver, tau.ver.p, tau.all, tau.all.p, RE, CE)
Note: Ground data was taken from the MET READER site on 4/4/2009. Recon data taken from Steig’s site (ant_recon.txt)
40 Responses to “Steig’s Verification Statistics – Part 2 (A little more majic)”
1. April 28, 2009 at 1:12 pm
Impressive stuff, Ryan. Your efforts are appreciated. I look forward to seeing your reconstruction. Herculean task you are taking on there.
2. April 28, 2009 at 2:41 pm
I second what Matt Y said in the above post.
In short, I believe had the authors presented plots of the relevant verification statistics for the ground stations – which show negative REs for about half of them and negative CEs for about
3/5ths – they might have had a more difficult time with publishing. The reconstructed values simply do not match the ground records.
Ot at least buried it in the SI and had that dramtic cover for Nature retained. I agree that overall (for the Antarctica regions) the statistics do not bode well for the validity of the
Hopefully over the next few weeks I will be able to present a method that does a fairer job of representing actual Antarctic temperatures.
Additionally, I intend to run some experiments using manufactured temperature fields to perform sensitivity studies with RegEM and this particular implementation of PCA.
Definitely something to look forward to. I like the approach you are taking here.
Also good review of the Steig paper and articulation of what you are attempting to do. I must admit that I sometimes let my guard down and slide over points in a well written analysis.
3. April 28, 2009 at 2:47 pm
Great stuff Ryan. Other than Steve M on MBH, I have never seen anything like the work that you guys are putting in on this.
I just thought of another correlation…both you and Jeff are irish! Maybe correlation is causation. 8)
Some thoughts (feel free to consider, discard, cull out, laugh at, as you see fit):
I was thinking about Jeff’s re-trended scenario with constant -2C applied. If one re-trended or simply de-trended while maintaining the signal to noise ratio of the data within individual grids
(ie: not constant trend) then the sensitivity of the verification stats to trend changes could be properly checked as well. I realize this may be easier said than done.
My humble offering of potential candidates of manufactured scenarios for sensitivity:
1. Re-locating the peninsula surface station cluster to another region where there is cooling – leaving only one station on the peninsula representative of the average.
2. As Jeff mused about, leave peninsula data alone and re-trend the greater continent.
3. Permutations / combinations including more PC’s.
4. April 28, 2009 at 7:05 pm
Layman, all good things to try . . . and, TBH, #2 and #3 are, in my opinion, a necessity for any kind of reasonable sensitivity analysis.
#1 would only work if we explicitly included distance weighting for correlations, which RegEM does not do. Jeff or I could relocate the Peninsula stations to the moon and RegEM wouldn’t care.
With that being said, if we did a distance weighting for correlations, we would have to do sensitivity analysis for the weighting factors, which is mathematically equivalent to your proposal.
5. April 28, 2009 at 7:24 pm
I must admit that I sometimes let my guard down and slide over points in a well written analysis.
I would implore you not to let your guard down. :) I’ve often used your insights to get as far as I have. It benefits all of us to have you be as critical of our blogosphere ramblings as you are
of the papers that inspire them.
6. April 28, 2009 at 9:44 pm
I have a hard time following the explication. It seems obvous that PC resolution would casue issues with getting good nearest neigbor matches here. Also, that you a bit conflate the idea of
matching AVHRR to station (measuring same thing) and PC TIR matching to AWS.
7. April 28, 2009 at 9:53 pm
A check one could use to determine if the output is gibberish is to calculate verification statistics of the ground data vs. the reconstruction. By itself this will not be able to distinguish
between calibration problems, imputation problems, and resolution problems (insufficient number of PCs), but it will at least provide some insight as to whether there are problems that should
be concerning. Since Steig did not include such an analysis with the paper, we shall take it upon ourselves to provide them.
#6 I get points for saying this first.
Also, AVHRR to station is not measuring the same thing:
And lastly, AWS stations are fundamentally no different than manned stations. The temperature is still measured with a thermometer on a stick. Methinks you are confusing “AWS recon” with “AWS
8. April 28, 2009 at 10:48 pm
Ryan O, from your Part 1 you stated that:
The TIR reconstruction produces decent r2, RE, and CE statistics when compared to the AVHRR data in the 1982-2006 timeframe (the comparison to the ground data – including comparisons back to
1957 – will be Part Two). Though the authors used AR(1) noise instead of the more realistic AR(8) noise, this conclusion is unaltered.
I agree with the importance of your Part 2 analysis in that the 43 surface stations are apparently the centerpiece of the reconstruction. I think I am correct when I say that the 1982-2006 TIR
reconstruction does not use the AVHRR data directly but uses the developed 3 PCs after “relating” the 43 surface stations to the AVHRR grids.
If I am correct up to this point then my question is: for the 1982-2006 period, are the 43 surface stations the driver for the AVHRR grids even though the grids could stand alone during this
period? I would guess from the authors of Steig et al. statements about the uncertainties of the AVHRR measurements and particularly with reference to the cloud masking that they have more
confidence in the 43 surface station measurements. I have the general inclination that the AVHRR measurements are used to spread the 43 surface station data around the Antarctica through
correlations for the 1982-2006 period and the 1957-1981 period.
I also have the feeling that I could be missing something here and have this wrong. Please put me on the right path before I ask a question about the r, RE and CE values for the 1982-2006 period
for AVHRR versus TIR reconstruction compared to those same values for the surface stations versus TIR reconstruction for the same period.
9. April 28, 2009 at 10:51 pm
Yeah, I read your caveat. I just think that measuring a composite with 3 PCs versus station is different than measuring local satt versus station. Such that doing the former does not tell you “if
they are measuring the same thing.”
10. April 28, 2009 at 11:28 pm
Ryan, in case I didn’t make myself clear, I was suggesting a “what if” scenario. “What if” there was only one peninsula station and a cluster of (psuedo) stations in tight proximity to an
existing station with a cooling trend. The cluster would be closely correlated in both trend and noise like the peninsula – only cooling instead of warming. In essence a mirror image of the
peninsula. I understand that RegEM would not know the location of the psuedo-stations, but if the hf and trend correlations mirrored the peninsula (only cooling) perhaps the method is sensitive
and would smear this cooling out into the continent. Particularly pre 1982. The method should not be sensitive to this because the underlying temperatures would be assumed to be unchanged.
11. April 28, 2009 at 11:42 pm
The post 1982 reconstruction is entirely AVHRR data in 3 pc format. RegEM Matlab version doesn’t do any modification of the existing values.
I have the general inclination that the AVHRR measurements are used to spread the 43 surface station data around the Antarctica through correlations for the 1982-2006 period and the 1957-1981
I also thought this would be the case initially, after all it makes sense. We need to stop thinking though and loose our minds to the force…. The post 82 data is uncorrected and noisy ‘trend’
satellite TIR info in 3PC format unaffected by surf temps. The older half is entirely surf data mushed together by high freq correlation.
12. April 29, 2009 at 9:14 am
“The older half is entirely surf data mushed together by high freq correlation.”
Of course it is. The question is how well it was mushed together. I actually SHARE a lot of the concerns of people here. I just don’t like the blanket statements, with a lack of chasing things
down to ground…and the “tests” where an assumption already exists that patterns, teleconnections, high freq etc are insignificant. I mean maybe they are…but they are not automatically
This is why I dislike 3 PCs more than I dislike RegEM
13. April 29, 2009 at 10:33 am
The post 82 data is uncorrected and noisy ‘trend’ satellite TIR info in 3PC format unaffected by surf temps. The older half is entirely surf data mushed together by high freq correlation.
Jeff ID thanks for setting me straight – again. I think I have gotten the subtle point here wrong before. What Ryan O found in his Part 1 was r, RE, and CE statistics on the 3 PC format
(unaffected by the surface temperatures) versus the AVHRR data. That comparison contrasted greatly (in Antarctica regional terms) from the comparison Ryan O made in Part 2 where it was the 3 PC
format (unaffected by the surface stations) versus the surface stations.
Let me know if I have the comparisons correct. Jeff ID, I also want to attempt to ask some questions about the high/low frequency correlations as soon as I get my head around them better and
perhaps look at how GISS handles this issue.
14. April 29, 2009 at 7:55 pm
Below, I have linked and excerpted some comments from GISS on using rural (or rural by GISS’s definition) as correlation and adjustment for proximate urban and suburban stations. My intent is to
show that GISS is interested in using the correlation based on long term trends and not high frequency correlations. Missing data within the GISS limits of 9 missing per month are in-filled by
extrapolation. Off topic to this thought is the comment from GISS where they admit to a problem with micro climates with non climate influences that can exist even in rural and non-urban
stations. They wave its importance away by saying they have it covered, but the Watts team findings would seem to put some doubt into that waiver.
You may not agree with GISS methods but those methods would not apparently lead to the treatment that Steig et al use in correlating distant stations.
The basic GISS temperature analysis scheme was defined in the late 1970s by James Hansen when a method of estimating global temperature change was needed for comparison with one-dimensional
global climate models. Prior temperature analyses, most notably those of Murray Mitchell, covered only 20-90°N latitudes. Our rationale was that the number of Southern Hemisphere stations was
sufficient for a meaningful estimate of global temperature change, because temperature anomalies and trends are highly correlated over substantial geographical distances.
The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially
at middle and high latitudes. They obtained quantitative estimates of the error in annual and 5-year mean temperature change by sampling at station locations a spatially complete data set of
a long run of a global climate model, which was shown to have realistic spatial and temporal variability.
The GHCN/USHCN/SCAR data are modified in two steps to obtain station data from which our tables, graphs, and maps are constructed. In step 1, if there are multiple records at a given
location, these are combined into one record; in step 2, the urban and peri-urban (i.e., other than rural) stations are adjusted so that their long-term trend matches that of the mean of
neighboring rural stations. Urban stations without nearby rural stations are dropped.
The excerpt below is off topic for the subject of my post but relevant to what the Watts team has been attempting to determine.
We find evidence of local human effects (“urban warming”) even in suburban and small-town surface air temperature records, but the effect is modest in magnitude and conceivably could be an
artifact of inhomogeneities in the station records. We suggest further studies, including more complete satellite night light analyses.
The urban adjustment of Hansen et al. [1999] consisted of a two-legged linear adjustment such that the linear trend of temperature before and after 1950 was the same as the mean trend of
rural neighboring stations.
The USHCN analysis [Karl et al., 1990; Easterling et al., 1996a] contains another small adjustment in which missing data, mainly in the period 1900-1910, are filled in by interpolation. The
effect is much less than the time of observation and station history adjustments, as illustrated. This adjustment is not included in the GISS analysis, which was designed to minimize the
effect of data gaps.
By the way, the most recent version of USHCN (Version 2) uses break or change point analysis to make corrections for all non homogeneity effects in the temperature data including UHI effects.
15. April 29, 2009 at 9:25 pm
Just throwing this out here.
if there is a Y axis crossover error it might be detected by adding 100 to the data.
example -1 + -1= -2 , 1+1=2 , -1 + 1 =0
we can see that there is a change of 2 for all three cases, but the last one records a “zero”
As always just ignore This crazy person.
good work ryan
16. April 30, 2009 at 2:07 am
#14 KF
Kenneth, thanks for bringing this to light. Maybe the Chosen One will yet come round.
17. April 30, 2009 at 12:01 pm
I think that this thread is as good as any to admit to some grasshopperish inclinations on my part and what I am attempting to do to curtail them.
My problem arises from recollections of the contents of the Steig et al. paper and my recollections of what the analyses here and at CA have shown about the processes used in that paper. I have
no excuses for missing even the more subtle points of the Steig et al paper as I have read it a couple of times. I might have some excuse for not recalling all the revelations about the Steig
paper analyses here and at CA because a summary of all of them does not exist in a single document like it does in the Steig paper. I do, however, think the separate summaries have been very
I plan to go back and carefully read the Steig paper again in order to determine which of these more subtle points on the processes used in their paper can be derived there and how many of them
are only and exclusively revealed through the Air Vent and CA analyses. I then want to attempt to answer my questions through reviewing Air Vent and CA posts and finally by asking a few direct
Having said all that I do want to ask a preliminary question here:
The TIR reconstruction for the period 1982-2006 does not use the surface station data, but the 1957-1981 time period must of necessity use it. We then essentially have a period where instead of
using the processed AVHRR data it is first submitted to PCA and then put into 3 PCs with no surface stations inputs. The 1982-2006 TIR reconstruction then is essentially an exercise in getting
the processed AVHRR data into the 3 PC format.
The period 1957-1981 TIR reconstruction then becomes one using the relationships of the 1982-2006 AVHRR processed data with the surface stations for the period 1982-2006 and processing those
relationships in a different and separate 3 PC format for reconstruction of the 1957-1981 period using the surface station data for the 1957-1981 period.
This process, in my mind, would consist of an interpreted instrumental record (3 PC format) for 1982-2006 period attached to the end of the truly reconstructed 1957-2006 period. For that
attachment to have continuity and meaning would require a rather exacting splicing of the two periods. How is that splicing facilitated in the Steig paper processes?
18. April 30, 2009 at 12:15 pm
“For that attachment to have continuity and meaning would require a rather exacting splicing of the two periods. How is that splicing facilitated in the Steig paper processes?”
RegEM magic does the splicing. The surface data is placed in a matrix with the 3pc’s. RegEM does the rest. Nobody worried too much about continuity only appearance. There were several posts here
early on which looked at the two periods separately, the match wasn’t very good.
19. April 30, 2009 at 1:41 pm
I don’t think you are wrong for having a hard time remembering and keeping track of all the critical analyses. They have been disjointed and evolving (even correcting itself). Which is fine…this
is the McI defense that blogs are notebooks and scratch pads.
It’s just that actual scientists or even observors should not be considered required to monitor this stuff or to make judgements on Stieg et al. This is why I jump on JEff when he gets all
breathy and wants to say how bad Steig is. Wrap it up first…then it will be like a knife through the heart. Don’t celebrate when the knife fight is far from over. Also why I criticize the
silliness of McI giving himself pats on the back as some sort of super publisher when has one real paper (GRL 2005) or expects other people writing papers to Google Climate Audit and cite it for
20. April 30, 2009 at 2:18 pm
#19, When you find a big problem which has obvious implications, isn’t that worth a blog post? It makes me think you’re missing the point. I would have been just as happy to report that I was
able to replicate everything exactly and there was no oddness in the math but this math ain’t good and it ain’t gonna be later this summer or next year.
And that ain’t my fault, I wouldn’t have published it without checks. RegEM is a good idea on the surface but it requires more QC after the fact to prove it did what was expected. In this case,
IT DID NOT.
What’s worse is that they didn’t verify their results against a baseline such as average surface station trends or a nice area weighted recon. Or what if they did some work to confirm the AVHRR
post 1982 matched surface station trend post 1982 instead of presenting correlation (actually demonstrated here in tAV to be disassociated with trend) as the proof of quality. From my
perspective, this is a horrible bit of stat mashing.
21. April 30, 2009 at 6:35 pm
I have reread the Steig et al. paper and most of SI and now have some questions that I will present one at a time.
RegEM uses the surface station measurements and its relationships with the grid based AVHRR processed data for calibration and verification in the 1982-2006 time period. Verification and
calibration statistics reported in Steig uses a comparison of the RegEM (with the 3 PC format) derived reconstructions in the 1982-2006 period and the processed AVHRR data. Figure 1 in Steig
reports r, CE, RE statistics for the processed AVHRR grid data versus the reconstructed grid results derived as noted above. Figure 2 in Steig reports r, CE and RE statistics for the
Antarctica-wide monthly mean processed AVHRR data versus the monthly means derived the reconstructed grid results derived as noted above.
What I have seen as typical of temperature reconstructions is that the reconstruction process and reporting (normally in a time series graph) is carried through the instrumental period of
calibration and verification. Mann et al. did that in their famous HS paper by showing the reconstruction to the point where the data ran out. Problematic for Mann (amongst many problems) was
attaching the instrumental data on the end reconstruction (in the early 1980s as I recall) for perspective without actually bothering to formulaically splicing it on.
If the TIR reconstruction as presented in the Steig paper does not use the same reconstruction in the 1982-2006 period as that used in the 1957-1981 period, and instead uses the instrumental data
put into 3 PC format, this, in my view, is against common practice and becomes problematic.
Careful reading of the Steig et al. paper and the SI has not allowed me to determine how the actual 1982-2006 reconstruction was done. That information has to come from those at Air Vent and CA
who have gained sufficient analysis insights to make a reasonably certain call on it.
Are we quite certain that the 1982-2006 part of the reconstruction was different than that used for the 1957-1981 period and that it was independent of the surface stations?
22. April 30, 2009 at 6:48 pm
RegEM (matlab version) doesn’t replace known values in the matrix. The AVHRR reconstruction has the 1982 – 2007 portion of the matrix already covered so these values don’t change so the short
answer is yes. SteveM’s algorithm apparently doesn’t mask the existing values out so they get re-calculated each time, something Ryan pointed out but I haven’t confirmed.
23. April 30, 2009 at 7:34 pm
Steve fu…messed up?
24. April 30, 2009 at 8:19 pm
Sorry something happened to my posted comments and I’ll try again.
RegEM (matlab version) doesn’t replace known values in the matrix. The AVHRR reconstruction has the 1982 – 2007 portion of the matrix already covered so these values don’t change so the short
answer is yes.
Now all we need to know is how the known values got into the matrix. Were the values the processed AVHRR data unaffected by the surface station relationships or were they values from the
reconstruction for the 1982-2006 period as required for the calibration and verification testing?
Do you know the answer directly or would it be determined by a test.
I recall that the TIR reconstruction for pre and post 1982 had different appearances (noise levels?).
25. April 30, 2009 at 8:48 pm
If you take the raw Comiso sat data as presented and run a PCA throw away all pc’s greater three you can reproduce the matrix reasonably well.
There are rounding errors in PCA calculation which may cause a difference but I’m thinking the raw dataset is close but not exact. I’m sure they didn’t expect such a diligent check, I know I
26. April 30, 2009 at 8:58 pm
Steve’s algorithm is close. Five decimal places or something.
27. April 30, 2009 at 11:29 pm
#23 No, Steve’s version is more straightforward. It allows you to see what RegEM thinks the real data should be, so calculating verification statistics is far easier. Unlike the MatLab version,
you can calculate verification statistics without withholding data – which could be a critical feature given that RegEM results can change unpredictably when the input data is changed.
28. May 1, 2009 at 12:47 pm
Jeff ID you say in the thread introduction:
This is done by calculating the principal components of the satellite data and using RegEM to allocate station trends according to the high frequency covariance (a subject which needs more
If I understand you correctly the process that you describe above is for your attempt to replicate the TIR 1957-1981 part of the reconstruction.
At post 21 Hu McCulloch said:
Great job, Jeff! Whether or not Steig09’s ant_recon.txt is meaningful or robust, at least you have shown that it can be replicated (closely enough for practical purposes) from
cloudmaskedAVHRR.txt + 42 surface stations.
At post 30 Nic L said:
I am certain that Steig simply used RegEM on a combination of the data from the 42 surface stations and the 3 PCs for 1982-2006 that he derived from processing the satellite data.
I am again confused by Hu M’s post, while Nic L’s post leads me to believe that the 1982-2006 part of your attempt at replication was derived in 3 PC format by using the 42 station and processed
AVHRR instrumental data.
Jeff ID have I assumed correctly here? If I have then the evidence says that Steig has essentially used instrumental data for the 1982-2006 part and reconstructed results for the 1957-1981 part.
That would surely not be a valid temperature reconstruction as I know them.
In the Steig et al. (2009) SI the authors state: “We apply the same method as for the TIR-based reconstruction to the AWS-data, using RegEM with k=3 and using the Reader occupied weather station
temperature data to produce a reconstruction of temperature at each AWS site for the period 1957-2006.” Was not the AWS reconstruction produced using the 3 PC format and relating the 42 surface
stations data to the 63 AWS stations over the entire 1957-2006 period?
If I assume correctly from your evidence, Jeff ID, would not the sSteig SI comment be a bit misleading?
29. May 1, 2009 at 1:41 pm
#29, Hu is simply recognizing that the post took the Comiso satellite data and processed it into 3PC’s Nic L is recognizing the fact that 3PC’s are used rather than the whole dataset. Prior to
that we simply took 3pc’s from the output data and made the reconstructions.
The output data only consisted of linear combination’s of 3 PC’s.
“Steig has essentially used instrumental data for the 1982-2006 part and reconstructed results for the 1957-1981 part.”
The instrumental data post 1982 is satellite AVHRR so that highly processed and very noisy data was collected from orbit. The pre-1982 is ground station instruments with linear weighting
according to correlation with 3 pc’s of sat data. The ugly part comes when the covariance matrix is calculated and truncated in RegEM retaining a fractional portion of the information.
Theoretically it could work but there is a lack of verification, something that Ryan just nicely demonstrated in his latest post.
In the AWS reconstruction the matrices were small so reduction to 3PC’s wasn’t necessary. They just placed 42 surface stations next to 63 AWS surface stations and let RegEM rip.
30. May 1, 2009 at 7:02 pm
The instrumental data post 1982 is satellite AVHRR so that highly processed and very noisy data was collected from orbit. The pre-1982 is ground station instruments with linear weighting
according to correlation with 3 pc’s of sat data.
Jeff ID, I would prefer a yes or no answer to my assumption that instrumental data was used without RegEM relating it to the 42 surface stations in the TIR reconstruction post 1982.
In the AWS reconstruction the matrices were small so reduction to 3PC’s wasn’t necessary. They just placed 42 surface stations next to 63 AWS surface stations and let RegEM rip.
Ok then Steig et al. used RegEM without PC reduction for AWS. The authors did a calibration and verification using RegEM for the 1980-2006 period and using that calibration for RegEM applied it
to the entire 1957-2006 period – using RegEM. My point is that, if my assumptions for AWS are correct, the Steig reconstruction for AWS is in line with my experience with other temperature
reconstructions. My assumptions for the TIR reconstruction would indicate that that reconstruction is flawed.
I’ll look at your code in the link above and see if I can determine what I am looking for.
31. May 1, 2009 at 7:07 pm
31 Sorry, Yes post 1982 was only instrumental. It was satellite TIR instrumental in 3PC format post 1982. The question was a bit complicated to me because the early part was also surface station
instrumental processed through RegEM.
“My assumptions for the TIR reconstruction would indicate that that reconstruction is flawed.”
The authors assume RegEM ‘calibrates’ the surface station data appropriately to make it a good reconstruction. From everything we’ve looked at it does not.
32. May 1, 2009 at 9:42 pm
Let me give this a shot. It may be a bit longer of an explanation than you wanted, but I think it will help clear up what Steig did. Theoretically, the methodology used for the TIR reconstruction
could be valid (with some caveats). While it may seem different on the surface, it is not dissimilar to other PCA-type reconstructions. It is more subtle, and, to be honest, rather clever in my
opinion – because it is the methodology that allows them to legitimately not perform the type of calibration you are referring to.
First, the PCA portion:
The purpose of the PCA portion in this case is not really to perform the reconstruction, like it is in the hockey stick cases. Here, the only purpose of the PCA portion is to decrease the number
of variables needed to represent the AVHRR data. They do this in order to make the problem computable for RegEM. Fancy terminology aside, it’s simply a data reduction step.
The RegEM portion:
They then take the 3 PCs and put them in a matrix alongside the station data. This is fed into RegEM, which scales and centers the data.
(Note: This is the reason they don’t have to do the calibration steps. RegEM itself does the scaling and centering. They can’t change the scaling and centering to be the wrong period, like in the
hockey sticks. RegEM has sort of a built-in calibration.)
RegEM then looks at the matrix, fills in all of the missing data with zeros, and performs an SVD on the matrix. It retains k (or regpar) eigenvectors to represent the input matrix. It then tries
to compute a covariance matrix that minimizes the residuals.
Think of the covariance matrix as a plane that attempts to fit through the rank-k data. Any points that are missing take on values corresponding to points on the surface of the plane.
Now RegEM has a completely filled-in dataset. But since it had formed the first covariance plane from a data set where all the missing values were zero – and now they are all non-zero – the
eigenvectors will be different. So it goes through another iteration: SVD, calculate covariance, update the values. It continues to do this until the difference between iterations is less than a
preset tolerance.
So RegEM doesn’t care whether you pre-calibrate the data – it does it as a matter of course. I tried recentering the data on arbitrary timeframes prior to input into RegEM and the output is
always the same. I added constants, subtracted constants, multiplied by constants, deliberately offset the baseline for anomalies for some of the ground stations, used different offsets, all
kinds o’stuff. None of it mattered. Regardless of the initial centering/scaling scheme, RegEM always output the same answer.
In other words, no interpretation or pre-calibration of the ground data to the satellite data is required. RegEM automatically does this on its own. It is fundamental to the algorithm.
When it comes to the AWS reconstruction, just think of the AWS reconstruction as a sparser version of the AVHRR data. There was no need to reduce it to PCs because it was already a small enough
data set to be manageable for RegEM.
I hope that helps answer your questions. :)
33. May 1, 2009 at 10:20 pm
Another great summary, de-centering the data was one of the first thing’s I tried. After running Matlab it didn’t make a difference.
I also completely agree that the whole thing is clever, it’s actually insidiously clever. The checks for it are the problem and they are not so clever, yet they are accepted by our alleged
brightest. I’m worried that the exposition of the problems will not be understood by those who are in climate science. I doubt think they teach math in climatology the same way as in engineering
or physics.
34. May 1, 2009 at 10:26 pm
Evil geniuses they are. Muahahahaha.
Seriously, it is really clever. Light on the verification side, but damned clever. Even though I enjoy trying to rip it apart, I must admit that I think what they did is pretty cool.
35. May 1, 2009 at 10:45 pm
#35 Agreed, let’s not rip it apart unless it’s deserved.
I say it that way because the casual reader won’t understand the serious defects and it will read differently to most. Your posts are clean and truthful so it’s important to let the math lay
where it is.
IMO, The problem is not this paper but rather that RegEM will be used indiscriminately without proper verification on so many others. Mann08 is an example.
If I publish anything down the road, this is the most important point to me.
36. May 1, 2009 at 10:47 pm
Yep, without a doubt.
37. May 2, 2009 at 1:32 am
3. Place the first 3 principle components alongside the data from 42 surface stations and use RegEM to impute the 1957-1982 values for the 3 principle components.
so this could force the early temps cooler?
IF the stations and sats. don’t match for the pca or tir? not properly calibrated to each other?
this is diabolically clever lol!!!
38. May 3, 2009 at 12:12 pm
Seriously, it is really clever. Light on the verification side, but damned clever. Even though I enjoy trying to rip it apart, I must admit that I think what they did is pretty cool.
It has been my impression since gaining a layman’s perspective of all these temperature reconstructions, and particularly the original Mann et al. HS, that it is that cleverness that got these
papers wide recognition and in my opinion a pass on the strict adherence to statistical standards for the methodologies used. I think that when you combine that cleverness with the apparent
effort that goes into the reconstructions you have an unbeatable combination. In view of the consensus amongst climate scientists one can see that a clever paper, like Steig et al. (2009), that
gives evidence for the consensus is going to be difficult to not publish, acclaim in the popular media and get a pass from criticism by climate scientists in general.
The cleverer the paper appears, the more we should enjoy attempting to rip it apart – or as I prefer: doing sensitivity analyses. By the way, I think that Steig et al. is almost as iconic as
Mann’s HS with that Nature cover picture of warming spreading to the West Antarctica. If I were marketing immediate AGW mitigation, I would have to say that Mann’s HS and Steig’s Nature cover
have excellent emblematic content.
39. May 3, 2009 at 12:15 pm
#39, “If I were marketing immediate AGW mitigation, I would have to say that Mann’s HS and Steig’s Nature cover have excellent emblematic content.”
Well now we can’t let that happen, can we? ;) | {"url":"https://noconsensus.wordpress.com/2009/04/28/2222/","timestamp":"2014-04-19T14:29:56Z","content_type":null,"content_length":"148970","record_id":"<urn:uuid:2117cdeb-b6e8-40a8-9901-6ebf4321a6bd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiresolution representation of data: a general framework
Results 1 - 10 of 28
- J. Fourier Anal. Appl , 1998
"... ABSTRACT. This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two band subband filtering with finite filters can be decomposed into a finite sequence of
simple filter-ing steps, which we call lifting steps but that are also known as ladder structures. This dec ..."
Cited by 434 (7 self)
Add to MetaCart
ABSTRACT. This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two band subband filtering with finite filters can be decomposed into a finite sequence of simple
filter-ing steps, which we call lifting steps but that are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters
into elementary matrices. That such a factorization is possible is well-known to algebraists (and expressed by the formula); it is also used in linear systems theory in the electrical engineering
community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This
factorization provides an alternative for the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e, non-unitary case. Like the lattice factorization, the
decomposition presented here asymptotically re-duces the computational complexity of the transform by a factor two. It has other applications, such as the possibility of defining a wavelet-like
transform that maps integers to integers. 1.
, 1997
"... . We present the lifting scheme, a simple construction of second generation wavelets, wavelets that are not necessarily translates and dilates of one fixed function. Such wavelets can be adapted
to intervals, domains, surfaces, weights, and irregular samples. We show how the lifting scheme leads to ..."
Cited by 377 (16 self)
Add to MetaCart
. We present the lifting scheme, a simple construction of second generation wavelets, wavelets that are not necessarily translates and dilates of one fixed function. Such wavelets can be adapted to
intervals, domains, surfaces, weights, and irregular samples. We show how the lifting scheme leads to a faster, in-place calculation of the wavelet transform. Several examples are included. Key
words. wavelet, multiresolution, second generation wavelet, lifting scheme AMS subject classifications. 42C15 1. Introduction. Wavelets form a versatile tool for representing general functions or
data sets. Essentially we can think of them as data building blocks. Their fundamental property is that they allow for representations which are efficient and which can be computed fast. In other
words, wavelets are capable of quickly capturing the essence of a data set with only a small set of coefficients. This is based on the fact that most data sets have correlation both in time (or
space) and frequenc...
- In Proc. ACM SIGGRAPH
"... Figure 1: Two views of the graph of the same edge-avoiding wavelet centered at the shoulder of the Cameraman. The support of the wavelet is confined within the limits set by the strong edges
around the upper body. We propose a new family of second-generation wavelets constructed using a robust data- ..."
Cited by 23 (2 self)
Add to MetaCart
Figure 1: Two views of the graph of the same edge-avoiding wavelet centered at the shoulder of the Cameraman. The support of the wavelet is confined within the limits set by the strong edges around
the upper body. We propose a new family of second-generation wavelets constructed using a robust data-prediction lifting scheme. The support of these new wavelets is constructed based on the edge
content of the image and avoids having pixels from both sides of an edge. Multi-resolution analysis, based on these new edge-avoiding wavelets, shows a better decorrelation of the data compared to
common linear translation-invariant multi-resolution analyses. The reduced inter-scale correlation allows us to avoid halo artifacts in band-independent multi-scale processing without taking any
special precautions. We thus achieve nonlinear data-dependent multiscale edge-preserving image filtering and processing at computation times which are linear in the number of image pixels. The new
wavelets encode, in their shape, the smoothness information of the image at every scale. We use this to derive a new edge-aware interpolation scheme that achieves results, previously computed by
solving an inhomogeneous Laplace equation, through an explicit computation. We thus avoid the difficulties in solving large and poorly-conditioned systems of equations. We demonstrate the
effectiveness of the new wavelet basis for various computational photography applications such as multi-scale dynamic-range compression, edge-preserving smoothing and detail enhancement, and image
- Multiscale Model. Simul
"... Dedicated to Manfred Tasche on the occasion of his 65th birthday We introduce a new locally adaptive wavelet transform, called Easy Path Wavelet Transform (EPWT), that works along pathways
through the array of function values and exploits the local correlations of the data in a simple appropriate ma ..."
Cited by 22 (7 self)
Add to MetaCart
Dedicated to Manfred Tasche on the occasion of his 65th birthday We introduce a new locally adaptive wavelet transform, called Easy Path Wavelet Transform (EPWT), that works along pathways through
the array of function values and exploits the local correlations of the data in a simple appropriate manner. The usual discrete orthogonal and biorthogonal wavelet transform can be formulated in this
approach. The EPWT can be incorporated into a multiresolution analysis structure and generates data dependent scaling spaces and wavelet spaces. Numerical results show the enormous efficiency of the
EPWT for representation of two-dimensional data. Key words. wavelet transform along pathways, data compression, adaptive wavelet bases, directed wavelets AMS Subject classifications. 65T60, 42C40,
68U10, 94A08 1
, 1993
"... . We study the properties of the multiresolution analysis corresponding to discretization by local averages with respect to the hat function. We consider a class of reconstruction procedures
which are appropriate for this multiresolution setting and describe the associated prediction operators that ..."
Cited by 13 (4 self)
Add to MetaCart
. We study the properties of the multiresolution analysis corresponding to discretization by local averages with respect to the hat function. We consider a class of reconstruction procedures which
are appropriate for this multiresolution setting and describe the associated prediction operators that allow us to climb up the ladder from coarse to finer levels of resolution. Only data-independent
(i.e. linear) reconstruction operators are considered in Part I. Linear reconstruction techniques allow us, under certain circumstances, to construct a basis of generalized wavelets for the
multiresolution representation of the original data. The stability of the associated multiresolution schemes is analyzed using the general framework developed by A. Harten in [18] and the connection
with the theory of recursive subdivision. Key Words. Multi-scale decomposition, discretization, reconstruction. AMS(MOS) subject classifications. 41A05, 41A15, 65015 Departament de Matem`atica
Aplicada. Universi...
, 1999
"... . We report on numerical experiments using adaptive sparse grid discretization techniques for the numerical solution of scalar hyperbolic conservation laws. Sparse grids are an efficient
approximation method for functions. Compared to regular, uniform grids of a mesh parameter h contain h \Gammad ..."
Cited by 12 (4 self)
Add to MetaCart
. We report on numerical experiments using adaptive sparse grid discretization techniques for the numerical solution of scalar hyperbolic conservation laws. Sparse grids are an efficient
approximation method for functions. Compared to regular, uniform grids of a mesh parameter h contain h \Gammad points in d dimensions, sparse grids require only h \Gamma1 jloghj d\Gamma1 points due
to a truncated, tensor-product multi-scale basis representation. For the treatment of conservation laws two different approaches are taken: First an explicit time-stepping scheme based on central
differences is introduced. Sparse grids provide the representation of the solution at each time step and reduce the number of unknowns. Further reductions can be achieved with adaptive grid
refinement and coarsening in space. Second, an upwind type sparse grid discretization in d + 1 dimensional space-time is constructed. The problem is discretized both in space and in time, storing the
solution at all time st...
, 2009
"... Geometric wavelet-like transforms for univariate and multivariate manifold-valued data can be constructed by means of nonlinear stationary subdivision rules which are intrinsic to the geometry
under consideration. We show that in an appropriate vector bundle setting for a general class of interpolat ..."
Cited by 10 (5 self)
Add to MetaCart
Geometric wavelet-like transforms for univariate and multivariate manifold-valued data can be constructed by means of nonlinear stationary subdivision rules which are intrinsic to the geometry under
consideration. We show that in an appropriate vector bundle setting for a general class of interpolatory wavelet transforms, which applies to Riemannian geometry, Lie groups and other geometries,
Hölder smoothness of functions is characterized by decay rates of their wavelet coefficients.
- NIC Series , 2002
"... this paper a parallelisable and cheap method based on space-filling curves is proposed. The partitioning is embedded into the parallel solution algorithm using multilevel iterative solvers and
adaptive grid refinement. Numerical experiments on two massively parallel computers prove the efficienc ..."
Cited by 8 (0 self)
Add to MetaCart
this paper a parallelisable and cheap method based on space-filling curves is proposed. The partitioning is embedded into the parallel solution algorithm using multilevel iterative solvers and
adaptive grid refinement. Numerical experiments on two massively parallel computers prove the efficiency of this approach
"... The Easy Path Wavelet Transform (EPWT) [19] has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from
image data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of f ..."
Cited by 7 (3 self)
Add to MetaCart
The Easy Path Wavelet Transform (EPWT) [19] has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from image
data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of function values and it exploits the local correlations of the given data in a simple appropriate
manner. In this paper, we show that the EPWT leads, for a suitable choice of the pathways, to optimal N-term approximations for piecewise Hölder continuous functions with singularities along curves. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=980163","timestamp":"2014-04-20T19:51:51Z","content_type":null,"content_length":"37454","record_id":"<urn:uuid:83bcf414-3fb1-4552-8a72-9096cb3ec9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
$Standard, Robust, and Clustered Standard Errors Computed in R$
Where do these come from? Since most statistical packages calculate these estimates automatically, it is not unreasonable to think that many researchers using applied econometrics are unfamiliar with
the exact details of their computation. For the purposes of illustration, I am going to estimate different standard errors from a basic linear regression model: , using the
Parallel computing in R: snowfall/snow
I finally have time to try parallel computing in R using snowfall/snow thanks to this article in the 1st issue of R journal, which replaces R news. I didn’t try it before because i didn’t have a good
toy example, and it seemed like a steep learning curve (i only guessed what parallel computing was). | {"url":"http://www.r-bloggers.com/tag/cluster/","timestamp":"2014-04-18T10:48:45Z","content_type":null,"content_length":"26964","record_id":"<urn:uuid:3d32f2d9-8e27-47fd-81bf-180a8a776ce8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Finding the solution to an IVP Problem. Basic Differential Equations problem.
Quote by
Integrate 2t^8e^2t using integration by parts- 8 times!
Thanks for everyone else and this is what I ended up doing. It was the correct way of solving the problem even though it was a little bit of a hassle! :) | {"url":"http://www.physicsforums.com/showpost.php?p=3376199&postcount=5","timestamp":"2014-04-21T09:44:29Z","content_type":null,"content_length":"7703","record_id":"<urn:uuid:f9637a1d-1833-4803-a40b-e51d40649610>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2010 [00459]
[Date Index] [Thread Index] [Author Index]
Re: inequality as constraints on NDSolve +Integral...
• To: mathgroup at smc.vnet.net
• Subject: [mg106514] Re: inequality as constraints on NDSolve +Integral...
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Fri, 15 Jan 2010 03:17:57 -0500 (EST)
• References: <201001050642.BAA23770@smc.vnet.net> <4B4356F3.4080201@wolfram.com> <000c01ca93b8$7ab26e60$70174b20$@uni-heidelberg.de>
Stefano Pasetto wrote:
> Dear Daniel Lichtblau,
> thank you very much, your solution is the most brilliant... but, now,
> suppose that I add a perturbation term (I'm not sure about my
> implementations)
> I define simply a perturbative term for t>0.5 and nothing for t<0.5
> f[t_?NumericQ]:=Piecewise[{{Integrate[x[t+a],{a,0,1}],t>0.5},{0,t<=0.5}}]
> so that the system become as you suggested (t interval spans 0->2)
> NDSolve[{
> x'[t] == UnitStep[x[t]] (-y[t] - x[t]^2 + f[t]),
> y'[t] == UnitStep[y[t]] (2 x[t] - y[t]^3 + f[t]),
> x[0] == 1, y[0] == 1}, {x, y}, {t, 0, 2}]
> where I added the extra f[t]. Apparently it doesn't work. Better: is there
> any way to integrate a system like:
> NDSolve[{
> x'[t] == UnitStep[x[t]] (-y[t] - x[t]^2 + Integrate[x[t+a],{a,0,1}]),
> y'[t] == UnitStep[y[t]] (2 x[t] - y[t]^3 + Integrate[x[t+a],{a,0,1}]),
> x[0] == 1, y[0] == 1}, {x, y}, {t, 0.5, 2}]
> or is it a mission impossible?
> thankX again for any help!
> Best regards
> Stefano
> [...]
Your perturbation terms put you into the realm of Integro-differential
equations. I do not know how one goes about solving such in any
automated fashion.
I would probably attempt to rewrite using explicit differencing
approximations, by subdividing the independent variable range.
Derivatives are easy. Also approximate the integrals over the dependent
variables as (possibly weighted) sums evaluated at the grid points (the
ones on the t-axis). This should give rise to a system of liner
equations. Solve it, you get your dependent variables evaluated at the
grid points. Now interpolate.
Might be best to interpolate in the same way that you approximated the
integrals (so if you use cubic polynomials, make sure you do something
similar when you approximate the integrals as sums).
Maybe there is a more direct approach. But no such has revealed itself
to me.
Daniel Lichtblau
Wolfram Research
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00459.html","timestamp":"2014-04-16T10:27:19Z","content_type":null,"content_length":"27659","record_id":"<urn:uuid:1463afc8-d3fc-4e31-b822-3dc5be81f992>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
David Bryant's Homepage
David Bryant
Associate Professor
Director of Computational Modeling (COMO)
University of Otago
□ Associate Professor of Mathematical Biology, University of Auckland, 2008--10
□ Senior Lecturer in Mathematical Biology, University of Auckland, 2005-2007
□ Associate Professor, Dept. Mathematics and Statistics, McGill University, Montreal. 2005
□ Assistant Professor, School of Computer Science and Department of Mathematics and Statistics, McGill University, Montreal. 2001-2005
□ Postdoc at LIRMM, Montpellier, with Olivier Gascuel, 2000-2001
□ Postdoc at CRM, Montreal, with David Sankoff, 1998-2000
□ Ph.D. (Mathematics) at University of Canterbury with Mike Steel 1994-1997
Research Areas
Mathematical, statistical, and computational aspects of evolutionary biology. Much of my work to date has focused on phylogenetics, the reconstructon of evolutionary history. I am currently
investigating areas of cross-over between phylogenetics, population genetics and geography. | {"url":"http://www.maths.otago.ac.nz/~dbryant/","timestamp":"2014-04-17T18:23:38Z","content_type":null,"content_length":"5452","record_id":"<urn:uuid:30703949-7ed2-4549-99e1-e928feca859d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathFiction: The Return of Moriarty (John Gardner)
Contributed by "William E. Emba"
The British spy thriller novelist, perhaps now best known for his 007 novels, wrote three novels starring Professor Moriarty, THE RETURN OF MORIARTY (UK title MORIARTY), THE REVENGE OF MORIARTY
(1975), and a never published third novel (thanks to a publisher dispute).
These novels claim, in the best Sherlock Holmes tradition, to be based on the recently discovered (and decrypted!) diaries of Moriarty. Naturally enough, references to his mathematical genius are
made frequently. In THE RETURN OF MORIARTY, we also learn the shocking explanation of how a mathematician could even be a criminal mastermind in the first place. This turns out to be fundamentally
important to pretty much everything in the two novels. (Since the explanation is something of a spoiler, and is entirely non-mathematical, it is not given here. The supposedly inherent uprightness
and honor of mathematicians is upheld in the end.)
Before reading these novels (in order, preferably), one should read at a minimum two Conan Doyle stories, "The Final Problem" and "The Empty House" (the death and the return of Sherlock Holmes,
respectively), since there are numerous knowing references to the events in these stories.
For those readers more interested in Sherlock Holmes, he is a minor, but very important character, in RETURN, and a major character in REVENGE. John Gardner did an excellent job in writing the Conan
Doyle characters believably. (This is a contentious issue in regards to his James Bond novels.) | {"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf443","timestamp":"2014-04-19T15:31:54Z","content_type":null,"content_length":"10743","record_id":"<urn:uuid:acbfd192-5fd2-4189-b2ee-670045631fde>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Drawing cards without replacement.
January 22nd 2010, 04:27 PM #1
Jan 2010
Drawing cards without replacement.
Two cards are drawn without replacement from a shuffled deck of 52 cards. What is the probability that the second card is a king?
Solutions showing step-by-step is very much appreciated.
Thank you in advance
Hello, flywithme!
Two cards are drawn without replacement from a shuffled deck of 52 cards.
What is the probability that the second card is a King?
I can solve this The Long Way, but why bother?
$P(\text{2nd is King}) \;=\;\frac{4}{52} \:=\:\frac{1}{13}$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Do we care what the first card is? . . . No!
Suppose they asked for the probability that the 17th card is a King.
Do we need to know if any of the first 16 cards are Kings? . . . No.
Spread the cards face down on the table.
. . Point to any card: "Is it a King?"
The probability will be: . $\frac{4}{52} \,=\,\frac{1}{13}$
Get it?
Why don't you show us some effort on your part?
EDIT: There is always a spoil-sport who must show that he can do the question.
Thank you very much Soroban, I never tackled probability with abstract thoughts.
@Plato, how can I show effort on my part? I struggle hard with probability and I can't help others with other types of math because I am only mediocre.
I always talk about this in class.
I usually ask what is the probability we select the ace of spades on the second, fifth... pick?
It's always 1/52.
BUT then I prove it....
P(King on second pick)=P(KING, KING)+P(not a KING, KING)
$= \left({4\over 52}\right)\left({3\over 51}\right) +\left({48\over 52}\right)\left({4\over 51}\right)$
$= \left({4\over 52}\right)\left[{3\over 51}+{48\over 51}\right]$
$= {4\over 52}= {1\over 13}$
Last edited by matheagle; January 24th 2010 at 07:49 AM.
January 22nd 2010, 04:54 PM #2
Super Member
May 2006
Lexington, MA (USA)
January 22nd 2010, 04:57 PM #3
January 22nd 2010, 06:03 PM #4
Jan 2010
January 22nd 2010, 10:12 PM #5
January 22nd 2010, 10:17 PM #6 | {"url":"http://mathhelpforum.com/statistics/124972-drawing-cards-without-replacement.html","timestamp":"2014-04-18T17:05:32Z","content_type":null,"content_length":"49834","record_id":"<urn:uuid:1eebe884-5e82-4bed-bfad-5d20ecf91a39>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eigen-who? How Can I Write About Eigen-anything and Expect You to Read?
May 25, 2011
By klr
After the very nice Convore reply
@timelyportfolio some of your posts include "eigenvalue ratio plots" -- kindly tell us what they show and how they might be useful in constructing a portfolio.
I felt like I should at least attempt to offer a little more detail on eigenvectors, which allow us to visually see similarity between variables (in my mind, time series of asset classes, indexes,
stocks, or other financial prices). In other posts, I have used the fAssets package function assetsCorEigenPlot for Long XLU Short SPY and Russell Napier, ASIP in FT Says Emerging Market Currencies.
Michael Friendly’s wonderful paper does a very fine job of explaining eigenvalues and their use in sorting for helpful visualizations of correlation. Wikipedia also gives a decent introduction in
these two articles http://en.wikipedia.org/wiki/Principal_component_analysis and http://en.wikipedia.org/wiki/Eigenvector. Also, I’m anxious to read the following book whose authors run http://
Really, the closer the variables in distance and angle, the more closely they are related. I thought some currency data from the St. Louis Fed would provide a nice example. Similar to milktrader’s
Chop, Slice and Dice Your Returns in R, I also wanted to show multiple ways in R of achieving a plot of eigenvalues with fAssets, SciViews, and corrgram. This analysis does not yield any real
surprises—Mexican Peso and Brazilian Real are closely related, but both are least related to the Japanese Yen.
Since I used Michael Friendly’s paper so much in writing this article, I wanted to show a corrgram of the currency data. The corrgram package offers lots of potentially useful variations of this
The second part of the Convore question is how can we use eigenvalues to construct a portfolio. Maybe I can answer that in one of my next posts…
R code:
#explain basics of principal component analysis
#by showing the various methods of charting eigenvalues
#of currency data #give specific credit to Michael Friendly
#and his paper http://www.math.yorku.ca/SCS/Papers/corrgram.pdf
#another example of similar techniques used for both
#baseball and finance #for additional information on principal component analysis (PCA)
#see http://en.wikipedia.org/wiki/Principal_component_analysis require(quantmod) #get currency data from the FED FRED data series
Korea <- getSymbols("DEXKOUS",src="FRED",auto.assign=FALSE) #load Korea
Malaysia <- getSymbols("DEXMAUS",src="FRED",auto.assign=FALSE) #load Malaysia
Singapore <- getSymbols("DEXSIUS",src="FRED",auto.assign=FALSE) #load Singapore
Taiwan <- getSymbols("DEXTAUS",src="FRED",auto.assign=FALSE) #load Taiwan
China <- getSymbols("DEXCHUS",src="FRED",auto.assign=FALSE) #load China
Japan <- getSymbols("DEXJPUS",src="FRED",auto.assign=FALSE) #load Japan
Thailand <- getSymbols("DEXTHUS",src="FRED",auto.assign=FALSE) #load Thailand
Brazil <- getSymbols("DEXBZUS",src="FRED",auto.assign=FALSE) #load Brazil
Mexico <- getSymbols("DEXMXUS",src="FRED",auto.assign=FALSE) #load Mexico
India <- getSymbols("DEXINUS",src="FRED",auto.assign=FALSE) #load India
USDOther <- getSymbols("DTWEXO",src="FRED",auto.assign=FALSE) #load US Dollar Other Trading Partners
USDBroad <- getSymbols("DTWEXB",src="FRED",auto.assign=FALSE) #load US Dollar Broad #combine all the currencies into one big currency xts
currencies<-merge(Korea, Malaysia, Singapore, Taiwan,
China, Japan, Thailand, Brazil, Mexico, India,
USDOther, USDBroad)
colnames(currencies)<-c("Korea", "Malaysia", "Singapore", "Taiwan",
"China", "Japan", "Thailand", "Brazil", "Mexico", "India",
"USDOther", "USDBroad")
#get daily percent changes
currencies<-currencies/lag(currencies)-1 #using fAssets
assetsCorEigenPlot(as.timeSeries(currencies)) #using techniques from corrgram package documentation
#get correlation matrix
(currencies.cor <- cor(currencies,use="pair"))
#get two largest eigenvectors
e1 <- currencies.eig[,1]
e2 <- currencies.eig[,2]
#make the chart
plot(e1,e2,col='white', xlim=range(e1,e2), ylim=range(e1,e2),
main="Plot of 2 Largest Eigenvectors for Various Asian
and American Currencies (corrgram)")
arrows(0, 0, e1, e2, cex=0.5, col="red", length=0.1)
text(e1,e2, rownames(currencies.cor), cex=0.75)
#run an interesting corrgram chart
require(corrgram) #do not need for previous eigenvector plot
df1 <- data.frame(cbind(index(currencies),coredata(currencies)))
corrgram(df1, order=TRUE,
main="Currency data PC2/PC1 order",
lower.panel=panel.shade, upper.panel=panel.pie,
text.panel=panel.txt) #using techniques from SciViews package
#do principal component analysis
(currencies.pca <- pcomp(~Korea + Malaysia + Singapore + Taiwan +
China + Japan + Thailand + Brazil + Mexico + India +
USDOther + USDBroad,
data = currencies))
#make the chart
plot(currencies.pca, which = "correlations",
main="Plot of 2 Largest Eigenvectors for Various Asian
and American Currencies (SciViews)")
#more SciViews fun
#(currencies.cor <- correlation(currencies.pca))
#plot(currencies.pca, which = "scores", cex = 0.8)
#pairs(currencies.pca) #compare 2 largest eigenvectors from the sciview and corrgram
Created by Pretty R at inside-R.org
for the author, please follow the link and comment on his blog:
Timely Portfolio
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/eigen-who-how-can-i-write-about-eigen-anything-and-expect-you-to-read/","timestamp":"2014-04-19T01:50:32Z","content_type":null,"content_length":"62079","record_id":"<urn:uuid:d67730eb-4376-40c2-9b15-fc2a49761f2a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
constant length bezier curve
(edited several times):
I'll start:
[tex]\vec{P}= P_x \hat{i}+P_y \hat{j}[/tex]
[tex]\vec{Q}= \vec{0}[/tex] (without loss of generality), and
[tex]L = \| QR \| = \sqrt{R_x^2 + R_y^2}[/tex] (for variable R)
And we define, for arbitrary 'theta',
[tex]\vec{R}= L \cos \theta \hat{i} + L \sin \theta \hat{j} [/tex]
So that the PQR has a principal angle of 'theta'
plus a constant angle. (edited)
Fix 'theta'. Thus we have a fixed R.
We restrict our choice of S to the line of points {S'} such that the angles of PQS' and RQS' are equal. With some basic vector algebra (helps to draw a picture here), this line {S'} is
[tex]\begin{align*} \{ S' : S' &= \vec{Q} + a \left( \hat{P} + \hat{R} \right) \\
&= a \left( \hat{P} + \hat{R} \right) \} \end{align}[/tex]
with normalized vectors
[tex]\hat{P}=\frac{\vec{P}}{\| P \|}=\frac{p_x \hat{i} + p_y \hat{j} }{\sqrt{p_x^2+p_y^2}}[/tex]
[tex]\hat{R}=\frac{\vec{R}}{\| R\|}=\frac{r_x \hat{i} + r_y \hat{j} }{\sqrt{r_x^2+r_y^2}}[/tex]
and with 'a' as a real scalar multiplier.
Then, as I learned from these definitions from wolram's mathworld,
http://mathworld.wolfram.com/BezierCurve.html http://mathworld.wolfram.com/BernsteinPolynomial.html
The parameteric form of the quadratic Bezier curve looks like this:
[tex]\begin{align*} \vec{C} (t) &= B_{0,2} \vec{P} + B_{1,2} \vec{S'} + B_{2,2} \vec{R} \\
&=(1-t)^2 \vec{P} + 2t(1-t) \vec{S'} + t^2 \vec{R} \\
&=(1-t)^2 (p_x \hat{i} + p_y \hat{j} )+ 2t(1-t) a \left( \left( \frac{p_x}{ \sqrt{ p_x^2+p_y^2} } + \frac{r_x}{ \sqrt{ r_x^2+r_y^2}} \right) \hat{i} + \left( \frac{p_y}{ \sqrt{ p_x^2+p_y^2}} + \frac
{r_y}{ \sqrt{ r_x^2+r_y^2}} \right) \hat{j} \right) + t^2 (r_x \hat{i} + r_y \hat{j} ) \\
&= \hat{i} \left[ (1-t)^2 p_x + a t (1-t) \left( \frac{p_x}{ \sqrt{ p_x^2+p_y^2}} + \frac{r_x}{ \sqrt{ r_x^2+r_y^2}}\right) + t^2 r_x \right] + \hat{j} \left[ (1-t)^2 p_y + a t (1-t) \left( \frac
{p_y}{ \sqrt{ p_x^2+p_y^2}} + \frac{r_y}{ \sqrt{ r_x^2+r_y^2}}\right) + t^2 r_y \right]
\end{align} [/tex]
defined on [tex]0 \leq t \leq 1 [/tex]
Having seperated the x- and y- components of the parametric Bezier curve, we are in position to take its derivative and thus find it's arc-length, which is defined as
[tex]\begin{align*} A &= \int_a^b \| \frac{d}{dt} \vec{C} (t) \| dt \\
&= \int_a^b \frac{\hat{i}\frac{d}{dt} C_x(t) + \hat{j} \frac{d}{dt} C_y(t)}{\sqrt{ \left( \frac{d}{dt} C_x(t) \right) ^2+ \left( \frac{d}{dt} C_y(t) \right) ^2}} dt \\
&= \int_0^1 \frac{\hat{i} \left[ 2(1-t) p_x + a (1-2t) \left( \frac{p_x}{ \sqrt{ p_x^2+p_y^2}} + \frac{r_x}{ \sqrt{ r_x^2+r_y^2}}\right) + 2t r_x \right] + \hat{j} \left[ 2(1-t) p_y + a (1-2t) \left(
\frac{p_y}{ \sqrt{ p_x^2+p_y^2}} + \frac{r_y}{ \sqrt{ r_x^2+r_y^2}}\right) + 2t r_y \right] }{ \sqrt{ stuff }} dt | {"url":"http://www.physicsforums.com/showthread.php?t=81449","timestamp":"2014-04-18T10:46:27Z","content_type":null,"content_length":"57235","record_id":"<urn:uuid:52d10872-bf9b-40bc-9832-7cecfb78a518>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Inexact Newton methods in scipy
Pearu Peterson pearu at scipy.org
Fri Nov 12 06:40:31 CST 2004
On Fri, 12 Nov 2004, Nils Wagner wrote:
> Hi all,
> A standard method to solve nonlinear equations
> f(x) = 0
> is Newton's method. Given a suitable initial guess one iterates
> f'(x_k) \Delta x_k = -f(x_k)
> x_{k+1} = x_k + \Delta x_k
> If the Jacobian is not available in a direct manner, we can apply f'(x_k) to
> a vector \Delta x_k by a finite difference formula (see my previous mail fdf
> package)
> BTW, most publications deal with real Jacobians. How can I extend finite
> difference formulas to complex Jacobians ?
Apply finite difference formula to real and imaginary part of f(x) to get
an approximation for complex Jacobian.
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2004-November/003740.html","timestamp":"2014-04-16T04:40:38Z","content_type":null,"content_length":"3339","record_id":"<urn:uuid:13ae79c0-6075-4d8f-a632-dbcb3dbed128>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Transfinite Euclidean Algorithm
hendrik@topoi.pooq.com hendrik at topoi.pooq.com
Thu Nov 15 10:21:16 EST 2007
On Wed, Nov 14, 2007 at 09:47:02AM -0500, joeshipman at aol.com wrote:
> >-----Original Message-----
> >from: hendrik at topoi.pooq.com
> >> Commutative rings exist in which there is no Euclidean algorthm, but
> >> there is a "division algorithm" in which the appropriate "norm" with
> >> respect to which the remainder decreases takes values in a more
> complex
> >> well-ordered set than the integers. Can anyone give a simple example
> of
> > such a ring?
> >polynomials over the integers with ordinal exponents but only a
> >finite number of terms in each polynomial?
> How does that work? What happens when you divide x^omega + x + 1 by x^3?
> If you say the quotient is x^(omega-3) and the remainder is (x+1), then
> you have more than just ordinal exponents.
I got a quotient of x^omega, since x^3 * x^omega = x^(3+omega) =
x^omega, and a remainder of x + 1. The norm of the quotient is the dame
as the norm of the dividend, but the norm of the remainder is small.
Noncommutativity of addition in the exponents is getting in the way.
The ring appears not to be commutative as a result.
x^omega * x^3 = x^(omega + 3)
x^3 + x^omega = x^(3 + omega) = x^omega.
So this isn't the example you were looking for.
-- hendrik
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2007-November/012291.html","timestamp":"2014-04-17T00:55:47Z","content_type":null,"content_length":"4008","record_id":"<urn:uuid:3441936d-7b63-4b26-8c9b-b91bdbede69b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
I've had a lot of problems with students constantly talking in my Geometry class. I have 25 students and it's just not a good combination. I've tried lecturing and guilt tripping them about respect.
I've tried holding them late after class. My most recent strategy was to add a homework problem every time they get loud. For example, if I wanted them to do 8 problems, I'd make a worksheet of 16
and write an 8 on the board. If they get loud, I walk over to the board, cross out the 8, and write a 9. I like it because it's nonverbal and doesn't interrupt the class. Also, they can't argue with
it. If I start walking near the board, they try to quiet everyone down. It's helped some but it hasn't changed the fact that they don't respect me and ignore what I say.
So I decided to experiment. I've wanted to do this since my first year of teaching but was never sure I could pull it off. I did not talk. I went through the entire class without speaking. It was so
I stood at the door and talked to students as they came in. When class started, I started the timer for 4 minutes to signal students to work on the bell ringer. When students called me over to ask
questions, I spoke to them individually. From then on, I didn't speak. When the timer went off, I worked out the problems on the board so students could check their work. Normally I would explain the
problem and call on students to tell me what to write. This time I wrote in silence and they magically did the same.
Our lesson was on the midpoint formula. I had a worksheet and corresponding PowerPoint but in a tragic turn of events, the worksheet pictures were different from the PowerPoint. Oh no! So instead of
the worksheet, I held up a blank piece of paper. They got the hint and got out paper. I showed a horizontal line on the coordinate plane and the PowerPoint asked, How could we find the midpoint? A
couple students figured out that we could just count the squares and then take half of that. The next picture showed a slanted line so that their method no longer worked. I pointed at the endpoints
of the line and they gave me the coordinates. Then I showed them the formula and they told me what to write. We went through several problems that way. I pointed to students when they needed to
write. When they asked questions, I redirected it back to the class and other students explained. I walked around to monitor their progress.
I was amazed at my own ability to communicate without speaking.
Some students were really angry at me. Which I still haven't figured out. Some were very helpful interpreters. There were two students who I don't think have truly understood anything we've done all
year that were completely engaged, did their homework, and actually enjoyed class.
I asked them three questions at the end of class as an exit slip.
1. Did you learn better or worse?
2. What was the point of this experiment?
3. Did you have any questions that were not answered?
The responses to number one were 8 better, 6 worse, 5 the same, 2 no answer.
The responses to number 3 were 13 no, 5 yes, 3 no answer.
The response to number 2 were incredibly valuable. Here are some of their comments:
-To make us do more work
-To see if we an learn without you talking
-Learn to be quiet
-To see if it would help us learn
-To have our friends try and teach us
-To see if we could learn without your help
-To show you can teach and we can learn without talking. It's about paying attention and reading directions. Making us think more.
-I learnt better today somewhat because it was us learning.
-To learn in a different way
The next day I showed them the results and put up the following quote:
"If students could learn math by just listening, teachers would have been replaced by tape recorders a long time ago."
I asked them what this meant. They commented that you need to do more than hear it, you need to see it and actually do it.
Then I asked them how I could talk less so that they could learn more. Some of their suggestions were that I talk 2 days a week and not talk 3 days a week, not talk until they asked me a question,
and only talk 5 times a period.
I haven't really decided what I'm going to do but I have been really noticing how much unnecessary talking I do and I hope I'm doing a good job of cutting it out.
My biggest takeaway from this experiment is that my students do not listen to what I say. As soon as I start talking, they tune out. They know I will repeat it or that it does not matter. This is a
part of my issue with respect but I haven't figured out how to master that yet.
By not talking, I forced them to watch me and pay attention. I forced them to listen to each other, not talk over each other, and try to understand on their own.
I forced myself to communicate only what matters.
I think I made them think.
Shh. Don't say a word.
Today was our regional teacher's institute and our speaker was Todd Whitaker, author of What Great Teachers Do Differently. I read the book a couple years ago and posted the main points.
He was a great speaker: funny, interacted with the audience, easy to understand.
Here are my main takeaways:
1. Negative people have no power. We give power to them. Pouters pout and whiners whine because it works. Who is not on any committee, doesn't do any extra curricular activities, has the easiest
load, and the smallest classes? The people who complain. It is easier to avoid, ignore, or give in then to face them head on and deal with it. But pouters will pout and whiners will whine until
it doesn't work anymore.
2. Treat everyone as if they're good. Good people deserve it and crummy people can't stand it. The example he gave is when you are in a grocery store and see a parent freaking out and yelling at
their kid. The parent is not uncomfortable. We are. We have a problem with the behavior but the child has the bigger problem. Our normal reaction would be to ignore or go down a different aisle.
He said, what if we went up to the parent and (treating them as if they are good) asked them a normal question, like where is the coffee? For a moment, it shifts the situation. Will the parent
yell at you? Maybe. But you already knew they were an idiot the moment you walked down the aisle. Don't let troublemakers, whiners, and pouters be invisible.
3. What's great about teaching is that it matters. What's hard about teaching is that it matters every day. Ten days out of ten we should never yell, never argue, and never use sarcasm. Ten days out
of ten we should treat students with respect and dignity because we never know which day it's going to make a difference for them.
4. What great teachers do differently is know how they come across.
Our big push for the year is literacy across the curriculum.
I'm excited about two new ideas I'm trying.
First of all, I have a first hour achievement period which is comparable to a homeroom or advisory. We've done a lot of different things. We watch Channel One news and have discussions, we have a
silent reading day each week, we have regular study halls, etc etc. This year we got a bunch of new posters that line the hallway entrance. They are the ones with black borders that focus on a
character trait like honesty, integrity and so on. Each student had to pick a quote. Then they had to find a picture on the Internet that went along with the quote. They had to write a one page
reflection on why they chose this quote, how it relates to their life, and how their picture describes the quote. I didn't give them a due date, they just worked until they were done and then took
turns presenting to the class. Then I had students vote on the best paper, best presentation, and funniest presentation. This sparked the idea to have students write and present more and more until
we get to a point where students can self-assess and asses each other using a rubric. I'll be interested to see how the quality of what they create changes during the process.
I found these super amazeball notebooks at Wal-Mart. They are black and white and covered with designs and you can doodle on them and design them however you want. Slightly reminiscent of comic
books. They come in a pack of 3 and cost $1. My students love them!
The part they don't know is that they only have 56 pages of paper. Ok, well they can read, so they do know that. But what they don't suspect is that once we run out of paper, I want to transition
them to blogging. :) But how can I do that when I don't have enough computers. Enter Project iPad. I've decided that right now while we have the grant is the prime time to start a 1:1 iPad program at
our school. So I've neatly tied that into our literacy project by having the students research and write papers in support of the idea, complete with main evidence, supporting arguments, and so on.
The students are greatly intrigued. We started by doing a bubble/web/concept map graphic organizer on benefits of an iPad. Monday we are going to list the potential downfalls. Our literacy coach came
in and talked to them about public speaking and gave them a graphic organizer that outlines a speech. We are going to use our webs to prioritize what should go in our outline and build our paper
around that. I'm trying to get other teachers and classes involved so that every student has a say in it. How powerful will that be? And I'm hoping that the administration won't be able to deny every
single student who has researched, written, and presented a well-thought out argument.
I also planned to do a lot of creative writing prompts to hopefully hook them into writing, thinking outside the box, and better expressing themselves. I found two great sites for prompts:
creativewritingprompts.com and http://writingprompts.tumblr.com/ I went through the first site and picked the ones I liked best and made a pretty PowerPoint to use in my classroom. I like the second
website too because it adds the visual piece. I will definitely be adding to this but it is a fantastic way to start.
My second idea takes place in my eighth hour class. The class is a supplemental Geometry class for students who did not meet or exceed in their standardized test scores. There is no real curriculum
and no one can tell me what I should be doing. So far, I've been doing a mixture of extra help with geometry, reviewing stuff from the end of algebra, and teaching new stuff that I didn't quite get
to in algebra. I bought the same amazeball notebooks for them too but their writing prompts will be focused on math instead of creative writing. Earlier I had posted a list of algebra writing prompts
and now I am slowly transitioning that into another pretty Powerpoint. My thinking is to start class with journal time because my next door neighbor English teacher does that with them already. In
all my other classes, I start off with a bell ringer. But by eighth hour, I'm usually tired and winging it. This is definitely a better solution. I think it is also a healthy break for the students
who have me two hours in a row. It gives them a chance to be quiet, think, write, and discuss. My thinking is that the writing prompt will drive the material we learn/practice/review that day.
Eventually, I want to have stations that students rotate through (that's another post entirely) so I'm wondering if it would work to have a writing station, board work station, and online (ALEKS)
station. It would give students about 15 minutes per station. More on that later.
Some students have me for first and eighth hour and have my next door neighbor for English so that is at least 3 times a day that they will be writing and ultimately engaging in critical thinking.
I'm excited about the prospects!
Oh, you probably want to know how I'm going to grade. For now, I think I will just be giving participation points. Friday I had everyone read their answers out loud. I may glance at them weekly to
make sure they are actually writing and not just spouting off at the mouth. In the future, I hope to have students self-assess or assess each other. Our literacy team came up with a fantastic rubric
but in my opinion, it is too much for my students' short journal writings. Seems way more appropriate for papers, not necessarily a paragraph or so. But then that just means me and my students will
have to create our own. More team work and collaboration.
I previously posted about my students coming up with the idea to do a hands-on geometry activity with pipe cleaners, fuzzy balls, construction paper, and letters to review points, lines planes, and
such. Each student had their own packet. They used a piece of construction paper for their plane.
This Powerpoint was posted up front, which gave them directions on something to create. This relied heavily on their ability to read and understand the terms and labels posted. See example.
PowerPoint slide:
As they arranged, I walked around and checked students' work but I did choose to create a mock example on the following Powerpoint slide. This way, if I did overlook some students, they were still
able to self-assess and gauge their own understanding. This also created good opportunities for students to tell how they did it differently and discuss different ways of getting the right answer.
I thought this was a worthwhile activity and I would like to do more things like this, I'm not sure how much I believe in learning styles, but I do believe in connecting ideas with students in as
many ways as possible.
As I mentioned before, what I am most proud of is that my students came up with the idea, put the supplies together, participated, and then as a class we reflected and discussed the results. This has
been the best and most realistic example of team work and collaboration that we have accomplished yet. | {"url":"http://misscalculate.blogspot.com/2011_10_01_archive.html","timestamp":"2014-04-18T13:10:18Z","content_type":null,"content_length":"104047","record_id":"<urn:uuid:36ae6cb3-23b5-4ee0-bca2-8c51b51d0dcf>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rex, GA Prealgebra Tutor
Find a Rex, GA Prealgebra Tutor
...I was the math and science lab supervisor at Georgia Perimeter College for five years, and tutored there for 10 years before that. We were open to the public, so I have tutored students in
high school, and college for a long time. The subjects I tutored in were Calculus 1, 2, and 3, General Biology, Cell Biology, Chemistry 1 and 2 along with Organic Chemistry.
15 Subjects: including prealgebra, chemistry, calculus, geometry
...I hold myself to a high standard and ask for feedback from students and parents. I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24
hour cancellation policy, I often provide make-up sessions.
8 Subjects: including prealgebra, statistics, algebra 1, algebra 2
...I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre algebra- algebra-
trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed cl...
16 Subjects: including prealgebra, calculus, algebra 2, geometry
...Through my tutoring, you will definitely see an increase in your scores no matter what the subject is. I am here for YOU! I believe in success for ALL students.
7 Subjects: including prealgebra, geometry, grammar, elementary (k-6th)
...As a tutor and as a teacher, it is my goal to help students gain a deep understanding of philosophy and begin to undergo that transformation. My experience in teaching philosophy includes
experience teaching students skills in reading and writing in English. Most of my courses have been writing intensive, and several have been for writing credit.
9 Subjects: including prealgebra, reading, English, writing
Related Rex, GA Tutors
Rex, GA Accounting Tutors
Rex, GA ACT Tutors
Rex, GA Algebra Tutors
Rex, GA Algebra 2 Tutors
Rex, GA Calculus Tutors
Rex, GA Geometry Tutors
Rex, GA Math Tutors
Rex, GA Prealgebra Tutors
Rex, GA Precalculus Tutors
Rex, GA SAT Tutors
Rex, GA SAT Math Tutors
Rex, GA Science Tutors
Rex, GA Statistics Tutors
Rex, GA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Atlanta Ndc, GA prealgebra Tutors
Conley prealgebra Tutors
Ellenwood prealgebra Tutors
Forest Park, GA prealgebra Tutors
Hapeville, GA prealgebra Tutors
Jonesboro, GA prealgebra Tutors
Lake City, GA prealgebra Tutors
Lithonia prealgebra Tutors
Lovejoy, GA prealgebra Tutors
Morrow, GA prealgebra Tutors
Pine Lake prealgebra Tutors
Red Oak, GA prealgebra Tutors
Redan prealgebra Tutors
Scottdale, GA prealgebra Tutors
Stockbridge, GA prealgebra Tutors | {"url":"http://www.purplemath.com/Rex_GA_Prealgebra_tutors.php","timestamp":"2014-04-17T11:03:26Z","content_type":null,"content_length":"23844","record_id":"<urn:uuid:e1b36429-e38a-45d9-918a-c6ba75b26c04>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
MIT 6.189 First Week - Exercise OPT.2 – Secret Messages in Section 3, What are we being asked to do? I understand the modulus and how it works, but I am not sure what the problem is asking.
• 10 months ago
• 10 months ago
Best Response
You've already chosen the best response.
In the problem itself it says: "We want the values of the numbers to remain between 0 and 5. To do this we will use the modulus operator. The expression x%y will return a number in the range 0 to
y-1 inclusive, e.g. 4%6 = 4, 6%6 = 0, 7%6 =1. " I get we might want to keep the numbers below a certain threshold, but how is the modulus helpful? And how can we be sure that we can get back to
the original phrase? I will point out that I don't know much about ASCII, or any other encodings.
Best Response
You've already chosen the best response.
All you need to know about ASCII for this is that the letters are assigned to consecutive values. So the value of ord('A') + 3 = ord('D'), just as you would expect. If you wanted to shift a
character by 4, you would add 4 to the value returned by ord, but you need to keep that from going beyond the end of the alphabet. What if you took the position of the letter in the alphabet,
added the shift, and then took the modulus <length of alphabet> prior to adding to the ord of 'A'?
Best Response
You've already chosen the best response.
Lo Tom, I saw you took a look at the way I did it. The version I did was a bit quick and dirty. If you look at the range of characters, you could make a way to shift only lower case to lower
case, upper to upper, and so on. I just took a bit of a shortcut. To be honest, if I wanted it clean I would use an if statement with a regular expression to find capitol letters, and shift them
only with the range of other capitols. Etc.
Best Response
You've already chosen the best response.
for i in range(32,127): print(str(i)+" is \""+ chr(i)+'"') For 2.x, remove the ( ) from the print. That will show you the character range(s) you are interested in.
Best Response
You've already chosen the best response.
I think my problem has been understanding the question itself. I'd gotten this far: phrase = raw_input("Type the phrase you'd like to see encoded. ") shift = int(raw_input("Type the shift value.
")) encoded_phrase = '' for letter in phrase: ascii_code = ord(letter) + shift letter_res=chr(ascii_code) encoded_phrase += letter_res print "The encoded phrase is: " + encoded_phrase This does
encode the phrase, but it uses a character set far larger than what we're after. The problem wants us to go from A-Z, starting over again at A. whpalmer - Thanks for your note, I get how the
modulus is important to this operation. "T" shifted 7 should be "A". 20+7 == 27. 27%26 == 1
Best Response
You've already chosen the best response.
A better way would be to take things in as numbers, toss them into a matrix, ans then use an encoding matrix to garble the numbers. The numbers could then be shipped in matrix or vector form to
be rebuilt on the other end, and the inverse matrix would decode them. The process of matrix multiplication would hide common values, like the letter e.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51c3a1afe4b055e613b887d9","timestamp":"2014-04-20T21:14:56Z","content_type":null,"content_length":"44738","record_id":"<urn:uuid:988986c6-12a7-475f-a68a-aa836c3b226a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jersey Vlg, TX Precalculus Tutor
Find a Jersey Vlg, TX Precalculus Tutor
...As a certified math teacher in Cy-Fair and as a tutor, I believe that anyone can learn and understand math. Those problems that appear complicated are just a series of simple concepts woven
together. I deconstruct the problem for students so they can understand the simple problems woven in to what "appears" to be a complicated problem.
16 Subjects: including precalculus, reading, GRE, algebra 1
...Through my experience as a tutor, I have developed a unique style of teaching, which students (and my peers) love. First, I tailor the learning experience to each student. I like to determine
if the student is a visual, auditory, or tactile learner in order to help them learn in the most efficient way possible.
22 Subjects: including precalculus, chemistry, calculus, physics
...While the technical subjects are my greatest strength and specialty, I can also offer tutoring in the social sciences. In high school, I took AP European History and earned a 5 on the AP exam.
Geometry comes naturally to me.
37 Subjects: including precalculus, chemistry, calculus, writing
...I love using laws of sines and cosines to solve equations and problems. I am very good at it, and I have taught trigonometry during home tutoring in Kathmandu, Nepal. I am preparing for GRE
test right now.
12 Subjects: including precalculus, calculus, physics, geometry
I have been a private math tutor for over ten(10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including precalculus, calculus, geometry, algebra 1
Related Jersey Vlg, TX Tutors
Jersey Vlg, TX Accounting Tutors
Jersey Vlg, TX ACT Tutors
Jersey Vlg, TX Algebra Tutors
Jersey Vlg, TX Algebra 2 Tutors
Jersey Vlg, TX Calculus Tutors
Jersey Vlg, TX Geometry Tutors
Jersey Vlg, TX Math Tutors
Jersey Vlg, TX Prealgebra Tutors
Jersey Vlg, TX Precalculus Tutors
Jersey Vlg, TX SAT Tutors
Jersey Vlg, TX SAT Math Tutors
Jersey Vlg, TX Science Tutors
Jersey Vlg, TX Statistics Tutors
Jersey Vlg, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Jersey_Vlg_TX_precalculus_tutors.php","timestamp":"2014-04-16T04:25:11Z","content_type":null,"content_length":"24309","record_id":"<urn:uuid:930177f8-40d5-4963-8d21-1cbc0697be7a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Markov Chain
March 19th 2009, 04:41 PM #1
Apr 2007
Markov Chain
Hi, I am having trouble with this problem:
There are two transmission lines from a generating station to a nearby city. Normally,
both are operating (“up”). On any day on which line A is operating, there
is probability
p that it will go down at the end of the day. On any day on which
line B is operating, there is probability q that it will go down at the end of the day.
It takes the repair crew a day to repair a broken line. Only one line can be repaired
at a time; if both are down, they repair line A first.
Model this situation as a Markov chain. (Hint: there are four states.)
I have the answer but I can't figure it out past probabilities p, q, & 1.
The answer is according to the transition diagram:
P= 1 p(1-q) q(1-p) pq
1-q 0 q 0
1-p p 0 0
I have drawn the state transition diagram, as well. I can easily see: p (prob line A will go down), q (prob line B will go down), and 1 (prob A repair) but I am lost after that. Do I have the
states mixed up?
If p is the prob A will go down (state 3 to 2), I thought 1-p would be the prob A will go up (which would be transiton from state 2 to 1) but according to the solution the transition from 2 to 1
is 1-q.....
I'm soooo lost - can anyone put this in plain english for me.
The prof also said if we were having difficulty to set p=.01 and q =.02 and solve numerically but I have no idea how to do that.
Thanks very much
Hi, I am having trouble with this problem:
There are two transmission lines from a generating station to a nearby city. Normally,
both are operating (“up”). On any day on which line A is operating, there
is probability
p that it will go down at the end of the day. On any day on which
line B is operating, there is probability q that it will go down at the end of the day.
It takes the repair crew a day to repair a broken line. Only one line can be repaired
at a time; if both are down, they repair line A first.
Model this situation as a Markov chain. (Hint: there are four states.)
I have the answer but I can't figure it out past probabilities p, q, & 1.
The answer is according to the transition diagram:
P= 1 p(1-q) q(1-p) pq
1-q 0 q 0
1-p p 0 0
I have drawn the state transition diagram, as well. I can easily see: p (prob line A will go down), q (prob line B will go down), and 1 (prob A repair) but I am lost after that. Do I have the
states mixed up?
If p is the prob A will go down (state 3 to 2), I thought 1-p would be the prob A will go up (which would be transiton from state 2 to 1) but according to the solution the transition from 2 to 1
is 1-q.....
I'm soooo lost - can anyone put this in plain english for me.
The prof also said if we were having difficulty to set p=.01 and q =.02 and solve numerically but I have no idea how to do that.
Thanks very much
This can't be quite right. The probabilities along a row should sum to 1. First we have to understand the states.
State 1: both working
State 2: B is working, but not A
State 3: A is working, but not B
State 4: Neither working
First row. This is the row of probabilities of going to state 1, 2, 3, 4 given that you are starting in state 1 (both working). Probability of staying in state 1? (1-p)(1-q) - this is the
probability that neither A nor B go down. Probability of going to state 2: p(1-q) - A goes down but B stays up. 1->3: q(1-p) - B goes down A stays up. 1->4: pq (Both go down)
If p is the prob A will go down (state 3 to 2), I thought 1-p would be the prob A will go up (which would be transiton from state 2 to 1) but according to the solution the transition from 2 to 1
is 1-q.....
Don't think of p as the probability of going from 3 to 2 (even though it happens to be true). You are told that p is the probability that A goes down. If you are in State 3, then you start the
day with B down and A up. So at the end of the next day, B will be up (it was fixed), and A may or may not be down. So state 3 can go to either 1 or 2. What is the probability that it goes to 1?
well (1-p) which is the probability that A stays up. What is the probability that it goes to 2? Well that is p - the probability that A goes down.
It is important to remember what p and q are. They represent the probabilities of A and B (respectively) going down. You use these to build up the probabilities of going from state to state. The
probability of going from 2 to 1? State 2: B is up, A is down. At the end of the next day, A will be up (it was fixed), but B may or may not be up (States 1 or 3 respectively). What is the
probability that B is still up? 1-q! What is the probability that B goes down? q!
Do you see?
So here is the final matrix:
P= (1-p)(1-q) p(1-q) q(1-p) pq
1-q 0 q 0
1-p p 0 0
March 21st 2009, 01:14 PM #2
Jul 2008 | {"url":"http://mathhelpforum.com/advanced-statistics/79566-markov-chain.html","timestamp":"2014-04-18T11:42:27Z","content_type":null,"content_length":"38068","record_id":"<urn:uuid:54be9d6d-27c1-4873-9c5c-92496048c8d0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mill Creek, WA SAT Math Tutor
Find a Mill Creek, WA SAT Math Tutor
...I have completed coursework for an English major at the University of Washington and PhD coursework in English at Rutgers University. I can tutor reading, literature, grammar, and writing. I
have significant experience helping students write and revise papers.
32 Subjects: including SAT math, English, reading, geometry
...For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia.
I am certified level 1 by the College Reading and Learning Association, and have tutored subjects rangi...
13 Subjects: including SAT math, calculus, physics, geometry
...I have personally completed and excelled in mathematics courses through university level Calculus Courses. I have tutored Algebra I for over five years now as an independent contractor through
a private tutoring company. I have tutored high school level Algebra I for both Public and Private School courses.
27 Subjects: including SAT math, chemistry, biology, reading
...Some of my areas of greatest experience are: properties of exponents and roots, writing and graphing linear equations and inequalities, probabilities, interpretation of graphs and data tables,
and properties of functions. Algebra 2 is one of the subjects I tutor most frequently. Some of the top...
17 Subjects: including SAT math, chemistry, reading, algebra 1
...Building a solid understanding of prealgebra is one of the initial steps preparing for the next level of mathematics. I will help students establish the foundation for upcoming challenges. All
you need is some patience for success with prealgebra.
15 Subjects: including SAT math, calculus, geometry, algebra 1
Related Mill Creek, WA Tutors
Mill Creek, WA Accounting Tutors
Mill Creek, WA ACT Tutors
Mill Creek, WA Algebra Tutors
Mill Creek, WA Algebra 2 Tutors
Mill Creek, WA Calculus Tutors
Mill Creek, WA Geometry Tutors
Mill Creek, WA Math Tutors
Mill Creek, WA Prealgebra Tutors
Mill Creek, WA Precalculus Tutors
Mill Creek, WA SAT Tutors
Mill Creek, WA SAT Math Tutors
Mill Creek, WA Science Tutors
Mill Creek, WA Statistics Tutors
Mill Creek, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/Mill_Creek_WA_SAT_math_tutors.php","timestamp":"2014-04-17T13:32:13Z","content_type":null,"content_length":"24054","record_id":"<urn:uuid:58c6d0b5-e69d-4b35-bac1-bf02c1ff1b16>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
I Spy With My Little Eye: Math around us
Estimated reading time: 2 minutes, 39 seconds
As you know, this is the first week of Math Awareness Month. But what you may not have realized yet is that I am hosting a contest on the Math for Grownups Facebook page. Each day I give a Math
Treasure Hunt clue. The object is to notice that math-related something or concept and then post about it under the clue. At the end of the week, I’ll randomly select one winner from all of the
entries. That person will get either a copy of Math for Grownups or a gift card. The details are here.
There were some really cool entries, so I thought I’d share them here.
Monday: A prism — This clue turned out to be a bit tougher than I expected, and that’s because I didn’t consider the different definitions of prism. I meant a polyhedron made up of polygons; in other
words, a cube or a box. But the entries really focused on a solid that refracts light. This is often a triangular prism or a polyhedron made up of two triangles and three quadrilaterals. But
sometimes these prisms are not geometric prisms at all but may be pyramids.
Tuesday: A percent — Much easier! Here are a few examples that you gave:
I ate 2% of my Daily Total Fat with my shredded wheat this morning.
My daughter is in virtual school and she has completed 77% of her math curriculum for the 2011-2012 school year. We are counting the percent points until summer.
0% chance of precipitation this afternoon means we might get to go to the playground!
Wednesday: A bar graph
Checked out a review of “Mirror, Mirror” online and found the reviews listed as a bar graph by A,B,C,D,F grades. Made it easy to see that the reviews so far give it a pretty average grade. Went
to see it and would have given it a B.
I’m teaching about gender work and family in my intro sociology class this month. Here is a link to a bar graph and story that explains class differences in access to parental leave
Thursday: An improper fraction – Yep, this is a toughie. No entries yet — want to be first? In an improper fraction the numerator (the number on top) is larger than the denominator (the number on the
bottom). Now can you find one?
Friday (today): Multiplication by a two-digit number — Be the first to enter today!
This week’s chance to win ends at midnight tonight. FAQ:
1. Can you go back and answer questions from earlier in the week? Yes!
2. Can you respond more than once to one clue? Yes!
3. Can you tell everyone you know about the contest? Why yes!
4. Can you make this a project for your home-schooled or classroom kids? Yep! (Just be sure that anyone entering is allowed to be on Facebook.)
Have fun with this contest. Notice the math around you. Learn a couple of things. And share these with everyone.
Do you have ideas for this contest? Drop me a line or share them in the comments section. | {"url":"http://www.mathforgrownups.com/i-spy-with-my-little-eye-math-around-us/","timestamp":"2014-04-21T07:05:38Z","content_type":null,"content_length":"59963","record_id":"<urn:uuid:5a549a73-1e4c-4b1f-811a-e2c49085d253>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cool Math Games
[cool math games ] [search_but]
Business Desktop Enhancements Drivers Games Home & Education HTML Info Management Internet MP3 and Audio Multimedia Programming Utilities
Quick Slide Show Online Desktop Presenter ThunderSite Free Web Editor Agama Web Menus WinX DVD Ripper Ultra Check Printing Software 2000 Shadow Security Scanner TextToWav NetAdjust Anonymous Proxy
Batsford Solitaire Game Opera Mini Photo Uploader for Facebook Facebook Desktop Free Facebook everywhere Facebook Sharer PRO AndroChef Java Decompiler StressMyPC English To Hindi Dictionary Blue
Equalizers Pack KeyExtender
Advanced ID Creator Professional System Speed Booster ChequeSystem Cheque Printing Software Advanced RSS Mixer Professional Salon Calendar Registry Cleaner Free Bopup Communication Server Netwrix
Windows Server Change Reporter MediaProSoft Free DVD to MP4 Converter Netwrix Password Manager Smart PDF Tools Pro Tipard DVD Ripper 4Videosoft iPhone Transfer for Mac DWG to SVG Converter MX Leawo
iPhone Transfer Camersoft Yahoo Video Recorder Tipard iPad Converter Suite Smart WAV Converter Leawo iPhone 4s Data Recovery MediaProSoft Free PDF Converter
Cool Math Games Cool Math Games Com Cool Math Games Tetris Cool Math Games Bloons Tower Defense Math Magic Square Puzzle Math Games Cool Math Com Cool Math Trick Cool Math Gams Com Cool Online Games
Cool Graphics Cool Math Game Bloons Tower Defense 3
Trackback Url For This Post Add To Quality Center Excel Add Seo Services Company Provide Seo Service Delphi Forums Codegear Delphi Forums Delphi Forums Cbp Delphi Forums Chat Rooms Article Directory
Script For Wordpress Raj Wap Video X Astroscan S Earth Vistas Screen Saver Photo Editing Software For Nokia E5 Free Free Proxy Txt Syria Talk Messenger Mobile Resume Building Site Script Bijoy Bangla
Font Sutonny Mj
Cool Math Games
Math games and Puzzles. Math games and Puzzles. Math Games. Speed Math Deluxe - Use addition, subtraction, multiplication and division to solve an equation as quickly as possible! Free educational
elementary and preschool math games and online lessons. Free online math...
Games and Puzzles. Games and Puzzles. Math Games. Speed Math Deluxe - Use addition, subtraction, multiplication and division to solve an equation as quickly as possible! Free educational elementary
and preschool math games and online lessons. Free online math...
OS: Mac, Windows
Math Games, Math Puzzles, and Mathematical Recreations. Math Games, Math Puzzles, and Mathematical Recreations. Puzzles and math brain teasers online, dynamic and interactive. Check out our new math
Puzzle Library! We have interactive puzzles with 3 levels of difficulty, printable, and solutions for...
OS: Mac, Windows
Teachers and kids: We can now bring you more free math games and puzzles because of our new advertising! Teachers and kids: We can now bring you more free math games and puzzles because of our new
advertising! By using this site, you agree to NOT block our ads. Math Game. Take a break and have some fun with Math's math games. Math's math games are...
OS: Mac, Windows
Buildbug kids math online game. Buildbug kids math online game. Offers free math lessons and homework help, with an emphasis on geometry, algebra, statistics, and calculus. Also provides
calculators and games. Due to heavy traffic this site has been experiencing some delays. The...
OS: Windows, Mac
3.3 MB |
Shareware |
US$14.95 |
Category: Mathematics
Fun puzzles develop basic math understanding and skills. EQUALS uses tables with illustrations combine with the fun, reward and challenge of jigsaw puzzles format + playing games to make sure
students are motivated to practice and learn. EQUALS Level I teaches Counting by 1's-10's, Learning Time, Adding Money and Multiplication. Level II teaches Division, Consumer Math, Simplifying
Fractions and Factors.
OS: Windows
Software Terms: Equals, Equals Math, Math, Math Games, Math Jigsaw, Math Puzzles, Math Software, Math Tools
Offers free math lessons and homework help, with an emphasis on geometry, algebra, statistics, and calculus. Offers free math lessons and homework help, with an emphasis on geometry, algebra,
statistics, and calculus. Also provides calculators and games. Due to heavy traffic this site has been experiencing some delays. The Math Forum's Internet Math...
OS: Mac, Windows
Divmath kids math online game. Divmath kids math online game. We're here to help you with your Math Homework! If you're having difficulties with a math problem, "Ask Us a Question". FREE MATH
ON-LINE TUTORING SERVICES. Online math tutoring service designed primarily...
OS: Windows, Mac
100 square kids math online paint game. 100 square kids math online paint game. The Awesome Library contains a collection of hundreds of math lesson and games. The site includes math lesson plans
contributed Missouri teachers for grades 3-12. View our collection of 524 lessons for...
OS: Windows, Mac
21.0 KB |
Demo |
US$7.5 |
Category: Mathematics
Math Trick Trainer (BlackBerry) 1. Math Trick Trainer (BlackBerry) 1.2.2 is known as a reliable and convenient math/mental training program for all age groups.
OS: Blackberry
Software Terms: Free Mobile Games, Free Windows Mobile, Freeware Windows Mobile, Games, Games For Windows, Mobile Software, Smartphone Downloads, Smartphone Games, Smartphone Ringtones, Smartphone
Maths Trainer has 24 challenging activities to excercise your mental math skills. There are 12 practice activities, 8 Daily Test Activities and 4 Reward Games. Daily Progress Charting.There are 24
activities in Maths Trainer, 12 fun activities to practice Math with, 8 Daily Test Core Area Tests and 4 Reward Games to choose from in MATHS TRAINER. Core Test Areas are: Addition, Subtraction,
Multiplication, Division, Fractions, Percentage, Number...
OS: Windows
Software Terms: Arithmetic, Brain Trainer, Brain Training, Brain Training Pc Game, Brain Workout, Division, Learn Math, Learning Math, Math, Math Trainer
New Game! New Game! Practice the math facts while competing against other online players. (Addition, subtraction, MULTIPLICATION, and division.) Interactive games to help learn about using
measurements, mazes, angles, and shapes. A free math facts practice...
OS: Mac, Windows
Software Terms: About, Addition, Angles, Author Of Sock Math For Kids 10, Baseball, Based, Bingo, Division, Facts, Games
1.4 MB |
Freeware |
Category: Misc. Games
Math Buddy 2. Math Buddy 2.1 is a FREE math game that helps you practice your math. homepage
OS: Windows
Software Terms: Addition, Buddy, Division, Free, Game, Math, Multiplication, Subtraction, Szamody, Vsisystems
Play Ghost Man with a mathematical twist! Prepare to indulge in the remake of the phenomenal arcade classic and journey through the amazing arithmetical world! Your goal in this game is to control
the Math Man to consume all the ghosts in the mazes after solving the equations. When the...
OS: Windows
Software Terms: Action Games, Arcade, Arcade Games, Arithmetic, Calculate, Calculation, Classic, Eat, Educational Games, Equation
1024.0 KB |
Demo |
US$8.5 |
Category: Mathematics
Math Trick Trainer for Palm OS 1. Math Trick Trainer for Palm OS 1.2 is considered as a useful and math/mental training program for all age groups. It is a flash card with unlimited randomly
generated math quizes, correct answer is provided as you advances through the...
OS: Palm
Software Terms: Free Mobile Games, Free Windows Mobile, Freeware Windows Mobile, Games, Games For Windows, Mobile Software, Smartphone Downloads, Smartphone Games, Smartphone Ringtones, Smartphone | {"url":"http://www.bluechillies.com/software/cool-math-games.html","timestamp":"2014-04-17T13:08:34Z","content_type":null,"content_length":"60189","record_id":"<urn:uuid:d396e035-5c1b-488e-a7f9-d265dfc36721>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rayan Saab, Visiting Assistant Professor
Math @ Duke
Math HOME
Rayan Saab, Visiting Assistant Professor
Please note: Rayan has left the Mathematics department at Duke University; some info here might not be up to date.
Office Location: 242 Physics Bldg
Office Phone: (919) 660-2875
Email Address:
Web Page: http://www.math.duke.edu/~rayans
Applied Math
Research Interests:
Current projects: Quantization of frame expansions, Quantization of compressed sensing measurements, Generalizations of the empirical mode decomposition
My research draws upon and develops tools in information theory, random matrix theory, frame theory, and geometric functional analysis to solve problems motivated by the acquisition, digitization
and processing of signals. My research interests include sparse and low-dimensional representations of high dimensional data, as well as compressed sensing. I am also interested in the
digitization of data and in developing and analyzing quantization approaches for both oversampled and compressively sampled signals. In general, I take an active interest in all areas of signal
processing and analysis. For example, I am interested in the source separation problem and have worked on the theory of blind source separation and its application to the cocktail party problem
and to seismic signal decomposition, as well as on image processing applications.
Areas of Interest:
Compressed Sensing/Sparse Approximation
Geometric Functional Analysis
Frame Theory
Recent Publications (More Publications)
1. F. Krahmer, R. Saab, R. Ward, Root-exponential accuracy for coarse quantization of finite frame expansions, IEEE Transactions on Information Theory (February, 2012) [pdf]
2. M. P. Friedlander, H. Mansour, R. Saab, Ö. Yilmaz, Recovering Compressively Sampled Signals Using Partial Support Information, IEEE Transactions on Information Theory (February, 2012) [
3. A. Powell, R. Saab, O. Yilmaz, Quantization and finite frames, in Finite frames, edited by P. Casazza, G. Kutyniok (2012), ISBN 978-0-8176-8373-3
4. N. Strawn, A. Armagan, R.Saab, L. Carin, D. Dunson, Finite sample posterior concentration in high-dimensional regression (Submitted, 2012)
5. S. Güntürk, M. Lammers, A. Powell, R. Saab, O. Yilmaz., Sobolev duals for random frames and Sigma-Delta quantization of compressed sensing measurements, Foundations of Computational
Mathematics (Accepted, 2012)
Recent Grant Support
□ Banting Postdoctoral Fellowship, Natural Sciences and Engineering Research Council of Canada, 2011/10-2013/09.
ph: 919.660.2800 Duke University, Box 90320
fax: 919.660.2821 Durham, NC 27708-0320 | {"url":"http://fds.duke.edu/db/aas/math/faculty/rayans","timestamp":"2014-04-16T04:23:41Z","content_type":null,"content_length":"11227","record_id":"<urn:uuid:d5c48316-dd54-4316-ae08-4d9c8611d3db>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
motor starting
Induction motor starting
The animation illustrates the across-the-line starting of an induction motor. The modeling assumes that the electrical transients can be ignored compared with the slow motional mechanical transients,
so that the motor can be represented at any speed ω[m] by its steady-state torque-speed curve. Given the mechanical load curve, the differential torque ΔT acts to accelerate the rotor inertia. The
speed-versus-time curve is obtained by graphical integration as demonstrated, yielding the shown red trace response. The torques T[e] and T[m] and the speed ω[m] are expressed in per unit; time t and
the inertia constant M are expressed in seconds.
The parameters of top applet are automatically copied to the bottom applet..
© M. Riaz | {"url":"http://www.ece.umn.edu/users/riaz/animate/im_starting.html","timestamp":"2014-04-17T00:48:51Z","content_type":null,"content_length":"7456","record_id":"<urn:uuid:95fbc192-3697-4fa9-8546-31d547c1d2ac>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What are the solutions of x2 + 2x + 9 = 0?
Best Response
You've already chosen the best response.
There are no soluions
Best Response
You've already chosen the best response.
the solution is a complex no.
Best Response
You've already chosen the best response.
When you try the quadratic formula, you will have a negative number under the square root part: \[\frac{-b \pm \sqrt{b^2-4ac}}{2a}=\frac{-2\pm\sqrt{4-36}}{2}=\frac{-2\pm\sqrt{-32}}{2}\] (This is
equivalent to saying the discriminant is negative). As a result, there are no real solutions to this equation.
Best Response
You've already chosen the best response.
Solve x2 – 3x = 1
Best Response
You've already chosen the best response.
bring the 1 to the other side, and use the same procedure as above: \[x^2-3x=1 \ \ \ \Longleftrightarrow \ \ \ x^2-3x-1=0\]
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f0f20c2e4b04f0f8a9178be","timestamp":"2014-04-20T14:10:31Z","content_type":null,"content_length":"37301","record_id":"<urn:uuid:73387186-9ced-49f4-9d25-26f3f2769a54>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
The ordered subsets mirror descent optimization method with applications to tomography
Results 1 - 10 of 27
, 2005
"... (after revision) In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primaldual since
they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual ..."
Cited by 75 (1 self)
Add to MetaCart
(after revision) In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primaldual since they
are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable
stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by presence of two control sequences. The first sequence is
responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows
to guarantee a boundedness of the sequence of primal test points even in the case of unbounded feasible set. We present the variants of subgradient schemes for nonsmooth convex minimization, minimax
problems, saddle point problems, variational inequalities, and stochastic optimization. In all situations our methods are proved to be optimal from the view point of worst-case black-box lower
complexity bounds.
, 2007
"... We study primal solutions obtained as a by-product of subgradient methods when solving the Lagrangian dual of a primal convex constrained optimization problem (possibly nonsmooth). The existing
literature on the use of subgradient methods for generating primal optimal solutions is limited to the met ..."
Cited by 26 (5 self)
Add to MetaCart
We study primal solutions obtained as a by-product of subgradient methods when solving the Lagrangian dual of a primal convex constrained optimization problem (possibly nonsmooth). The existing
literature on the use of subgradient methods for generating primal optimal solutions is limited to the methods producing such solutions only asymptotically (i.e., in the limit as the number of
subgradient iterations increases to infinity). Furthermore, no convergence rate results are known for these algorithms. In this paper, we propose and analyze dual subgradient methods using averaging
to generate approximate primal optimal solutions. These algorithms use a constant stepsize as opposed to a diminishing stepsize which is dominantly used in the existing primal recovery schemes. We
provide estimates on the convergence rate of the primal sequences. In particular, we provide bounds on the amount of feasibility violation of the generated approximate primal solutions. We also
provide upper and lower bounds on the primal function values at the approximate solutions. The feasibility violation and primal value estimates are given per iteration, thus providing practical
stopping criteria. Our analysis relies on the Slater condition and the inherited boundedness properties of the dual problem under this condition.
- SIAM J. OPTIM , 2004
"... An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step
size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits ..."
Cited by 26 (2 self)
Add to MetaCart
An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step
size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits regions in which the gradient is small infinitely often. Under certain unimodality
assumptions, global convergence is established. In the quadratic case, a global linear rate of convergence is shown. The method is applied to distributed optimization problems arising in wireless
sensor networks, and numerical experiments compare the new method with the standard incremental gradient method.
- Mathematical Programming , 2004
"... We propose a new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple ” domains. The characteristic features of the method are (a) the possibility
to adjust the scheme to the geometry of the feasible set, thus allowing to get (nearly) dimension-indepen ..."
Cited by 25 (6 self)
Add to MetaCart
We propose a new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple ” domains. The characteristic features of the method are (a) the possibility to
adjust the scheme to the geometry of the feasible set, thus allowing to get (nearly) dimension-independent (and nearly optimal in the large-scale case) rateof-convergence results for minimization of
a convex Lipschitz continuous function over a Euclidean ball, a standard simplex, and a spectahedron (the set of positive semidefinite symmetric matrices, of given size, with unit trace); (b)
flexible handling of accumulated information, allowing for tradeoff between the level of utilizing this information and iteration’s complexity. We present extensions of the scheme for the cases of
minimizing non-Lipschitzian convex objectives, finding saddle points of convex-concave functions and solving variational inequalities with monotone operators. Finally, we report on encouraging
numerical results of experiments with test problems of dimensions up to 66,000. 1
, 1997
"... We develop an algorithm for resolving a conic linear system (FP d ), which is a system of the form (FP d ): b Ax 2 C Y x 2 CX ; where CX and C Y are closed convex cones, and the data for the
system is d = (A; b). ..."
Cited by 17 (4 self)
Add to MetaCart
We develop an algorithm for resolving a conic linear system (FP d ), which is a system of the form (FP d ): b Ax 2 C Y x 2 CX ; where CX and C Y are closed convex cones, and the data for the system
is d = (A; b).
"... We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex
risk functional under the ℓ 1-constraint. It is defined by a stochastic version of the mirror descent algor ..."
Cited by 17 (3 self)
Add to MetaCart
We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk
functional under the ℓ 1-constraint. It is defined by a stochastic version of the mirror descent algorithm (i.e., of the method which performs gradient descent in the dual space) with an additional
averaging. The main result of the paper is an upper bound for the expected accuracy 1 of the proposed estimator. This bound is of the order √ (log M)/t with an explicit and small constant factor,
where M is the dimension of the problem and t stands for the sample size. Similar bound is proved for a more general setting that covers, in particular, the regression model with squared loss. 1
- In Advances in Neural Information Processing Systems , 2009
"... Motivated from real world problems, like object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels
are grouped into distinct components where each component is crucial for the learning task at hand. The form ..."
Cited by 15 (1 self)
Add to MetaCart
Motivated from real world problems, like object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels are
grouped into distinct components where each component is crucial for the learning task at hand. The formulation hence employs l ∞ regularization for promoting combinations at the component level and
l1 regularization for promoting sparsity among kernels in each component. While previous attempts have formulated this as a non-convex problem, the formulation given here is an instance of non-smooth
convex optimization problem which admits an efficient Mirror-Descent (MD) based procedure. The MD procedure optimizes over product of simplexes, which is not a well-studied case in literature.
Results on real-world datasets show that the new MKL formulation is well-suited for object categorization tasks and that the MD based algorithm outperforms stateof-the-art MKL solvers like simpleMKL
in terms of computational effort. 1
, 2010
"... In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO)
problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solv ..."
Cited by 12 (1 self)
Add to MetaCart
In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO)
problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with
proper stepsize policies, can achieve optimal or nearly optimal rates of convergence for solving different classes of SCO problems during a given number of iterations. Moreover, we investigate these
AC-SA algorithms in more detail, such as, establishing the large-deviation results associated with the convergence rates and introducing efficient validation procedure to check the accuracy of the
generated solutions.
"... Our objective is to trainp-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm.
The SMO algorithm is simple, easy to implement and adapt, and efficiently scales to large problems. As a re ..."
Cited by 9 (2 self)
Add to MetaCart
Our objective is to trainp-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm. The
SMO algorithm is simple, easy to implement and adapt, and efficiently scales to large problems. As a result, it has gained widespread acceptance and SVMs are routinely trained using SMO in diverse
real world applications. Training using SMO has been a long standing goal in MKL for the very same reasons. Unfortunately, the standard MKL dual is not differentiable, and therefore can not be
optimised using SMO style co-ordinate ascent. In this paper, we demonstrate that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO.
The resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialisedp-norm MKL solvers. We show that we can train on a hundred thousand kernels
in approximately seven minutes and on fifty thousand points in less than half an hour on a single core. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=765307","timestamp":"2014-04-17T06:59:17Z","content_type":null,"content_length":"38688","record_id":"<urn:uuid:7a9d5725-f79b-46f5-82b8-70cd3f371ccb>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westminster, MA Trigonometry Tutor
Find a Westminster, MA Trigonometry Tutor
...I know all the mechanics of photography, setting light, exposure, speed, lenses and setting the aperture. I also have developed a sense for how to shoot. What angles work in photography and
how to capture the moment.
47 Subjects: including trigonometry, reading, chemistry, geometry
...With a proper balance between the deeper values of family and faith on one side and education and vocation on the other, a student is sure to be a success not only in his or her future
workplace, but in life itself. I earned a Master's Degree in Christian Apologetics (Cum Laude) from Simon Green...
53 Subjects: including trigonometry, chemistry, reading, biology
...I have tutored students in math from 1st grade through college level, including standardized testing. I have my teaching certification in Elementary Education (1-6) as well as in Middle School
Math (5-8), and I have passed the license test for high school math. My Masters degree was focused on Elementary Education and that is where most of my teaching and tutoring experience is.
13 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...I try to keep things fun and instructional. I think that tutoring takes a sense of humor and a willingness to acknowledge that we are learning together. It is the part of the student to grow.
18 Subjects: including trigonometry, chemistry, calculus, physics
...Additionally, I have substitute taught in middle school algebra 1 courses. I have taken math courses through calculus two, and I have experience tutoring middle school and early high school
math. I have substituted in math courses as well.
16 Subjects: including trigonometry, Spanish, chemistry, English
Related Westminster, MA Tutors
Westminster, MA Accounting Tutors
Westminster, MA ACT Tutors
Westminster, MA Algebra Tutors
Westminster, MA Algebra 2 Tutors
Westminster, MA Calculus Tutors
Westminster, MA Geometry Tutors
Westminster, MA Math Tutors
Westminster, MA Prealgebra Tutors
Westminster, MA Precalculus Tutors
Westminster, MA SAT Tutors
Westminster, MA SAT Math Tutors
Westminster, MA Science Tutors
Westminster, MA Statistics Tutors
Westminster, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/Westminster_MA_Trigonometry_tutors.php","timestamp":"2014-04-20T13:45:44Z","content_type":null,"content_length":"24306","record_id":"<urn:uuid:4e34ec74-c4b8-4072-b9d4-17feb8d45b07>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Principles and standards for school mathematics
Principles and standards for school mathematics, Volume 1
National Council of Teachers of Mathematics
, 2000 -
402 pages
This volume updates the messages of NCTM's previous "Standards" and shows how students' learning should grow accross four grade bands: pre-K-2, 3-5, 6-8, and 9-12. It incorporates a clear set of
principles and an increased focus on how students' knowledge grows, as shown by recent research. It also includes ways to incorporate the use of technology to make mathematics instruction relevant
and effective in a technological world.
We haven't found any reviews in the usual places.
5 other sections not shown
Bibliographic information | {"url":"http://books.google.com/books?id=BkoqAQAAMAAJ&q=appropriate&dq=related:STANFORD36105110205569&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-17T07:14:34Z","content_type":null,"content_length":"117147","record_id":"<urn:uuid:6670ca08-a042-4df8-8733-b5576adfd776>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Allentown, PA Algebra Tutor
Find an Allentown, PA Algebra Tutor
...If you can understand the building blocks, you can build anything. I'll help you translate the facts of your coursework into knowledge. I really want to work with students from middle school
to adults returning to school.
15 Subjects: including algebra 1, elementary math, SAT reading, algebra 2
...Studying takes patience and organization. No human brain can digest an entire chapter of school work at once just like no human stomach can willingly welcome in an entire pizza in one sitting.
Studying takes breaking down information into sections and categories and using fun and quirky tricks.
28 Subjects: including algebra 1, English, grammar, precalculus
...I work one-on-one with the student, and I am able to assess math weaknesses quickly in order to best prepare the student for the most important test of their lives. The ASVAB tests require
math and arithmetic skills that I have taught to many students during my career. I am a bit of an unapologetic science geek, and my vocabulary comes from years of reading.
12 Subjects: including algebra 2, algebra 1, calculus, geometry
...I am eager to help students from elementary to high school level understand the subject matter and improve grades. I will tailor the lessons according to my student's needs to achieve the best
possible results and I am enthusiastic to see the effects of our combined efforts! I am a native Turkish speaker.
8 Subjects: including algebra 2, calculus, trigonometry, algebra 1
...Many people confuse sociology with other sciences that have some similarities, like psychology. However, Sociology is quite different. Rather than asking why a person would commit a crime,
become a successful entrepreneur, or homeless, (as if it related to something intrinsic within that person...
61 Subjects: including algebra 1, algebra 2, reading, English
Related Allentown, PA Tutors
Allentown, PA Accounting Tutors
Allentown, PA ACT Tutors
Allentown, PA Algebra Tutors
Allentown, PA Algebra 2 Tutors
Allentown, PA Calculus Tutors
Allentown, PA Geometry Tutors
Allentown, PA Math Tutors
Allentown, PA Prealgebra Tutors
Allentown, PA Precalculus Tutors
Allentown, PA SAT Tutors
Allentown, PA SAT Math Tutors
Allentown, PA Science Tutors
Allentown, PA Statistics Tutors
Allentown, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/allentown_pa_algebra_tutors.php","timestamp":"2014-04-17T13:33:19Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:e3f0dc82-fd5d-4fb4-af41-4ebd16f9e342>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topological Superconductor?
We know how "hot" topological insulator is right now in condensed matter. Huge amount of publications are pouring out on this family of material. Well, it seems that in one type of topological
insulator, B12Se3, when doped with copper, it becomes what is claimed to be
a topological superconductor
! This is where the material becomes a superconductor in the bulk of the material, but still becomes a normal metal on the surface.
Generally, metals, insulators and conventional superconductors tend to have a single type of behavior as far as electricity goes. They can either conduct current or not, and remain consistent in
they way they respond to electrical charges.
“The known states of electronic matter are insulators, metals, magnets, semiconductors and superconductors, and each of them has brought us new technology,” explains M. Zahid Hasan.
“Topological superconductors are superconducting everywhere but on the surface, where they are metallic; this leads to many possibilities for applications,” adds the expert.
Here is the abstract from the Nature Physics paper[1]:
Experimental observation of topological order in three-dimensional bulk solids has recently led to a flurry of research activity. Unlike the two-dimensional electron gas or quantum Hall systems,
three-dimensional topological insulators can harbour superconductivity and magnetism, making it possible to study the interplay between topologically ordered phases and broken-symmetry states. One
outcome of this interplay is the possible realization of Majorana fermions—quasiparticles that are their own antiparticles—on topological surfaces, which is of great interest in fundamental physics.
Here we present measurements of the bulk and surface electron dynamics in Bi2Se3 doped with copper with a transition temperature Tc up to 3.8K, observing its topological character for the first
time. Our data show that superconductivity occurs in a bulk relativistic quasiparticle regime where an unusual doping mechanism causes the spin-polarized topological surface states to remain well
preserved at the Fermi level of the superconductor where Cooper pairing takes place. These results suggest that the electron dynamics in superconducting Bi2Se3 are suitable for trapping non-Abelian
Majorana fermions. Details of our observations constitute important clues for developing a general theory of topological superconductivity in doped topological insulators.
[1] L.A. Wray et al., Nature Physics v.6, p.855 (2010). | {"url":"http://physicsandphysicists.blogspot.com/2010/11/topological-superconductor.html","timestamp":"2014-04-18T11:39:18Z","content_type":null,"content_length":"134880","record_id":"<urn:uuid:9fd50391-2155-4c91-a09d-07f8b6e74289>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00474-ip-10-147-4-33.ec2.internal.warc.gz"} |
Student Support Forum: 'Queries from Wolfram Alpha in Mathematica' topic
Author Comment/Response
I am using Wolfram Mathematica 8, and I have a question about Wolfram Alpha queries.
I would like to store the result of a Wolfram Alpha query in a variable for later use in my calculations. I am not getting anywhere trying to do this.
For example, if I type:
== mass of proton in kilograms
(pressing the "=" key twice to get the query Wolfram Alpha prompt)
then I get a screen full of data which contains the actual answer I want (the mass of a proton in kilograms), embedded among many other cells which describe how Wolfram Alpha parsed my query,
the assumptions used, additional unit conversions, and many such things.
However, all I want to do is to insert the result into a variable, for example:
p = "mass of proton in kilograms"
At the end of this process, I want the variable "p" to contain the value of the mass of a proton in kilograms, which is 1.672622 * 10^-27.
I am unable to figure out how to do this. I searched the documentation, and came across the WolframAlpha function. This has many options for PodCells, which I suppose are the various cells
which contain different parts of the answer such as input interpretation and assumptions. Perhaps there is some way to drill down there and obtain only the relevant result, and store that in
a variable. However, I was unable to figure out how.
In any case, it seems to be there would be some much easier way to do this with far less typing involved, since this is probably a very common thing that people do all the time.
Any help is much appreciated.
URL: , | {"url":"http://forums.wolfram.com/student-support/topics/27211","timestamp":"2014-04-19T12:15:39Z","content_type":null,"content_length":"27161","record_id":"<urn:uuid:cde03ee7-13e5-4451-8c5f-cfded6652fd5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hierarchical Clustering in Python
Continuing on the topic of clustering algorithms, here is another popular choice with a Python implementation. No need to copy paste, the all the code can be found
Hierarchical clustering is another simple but powerful clustering algorithm. The idea is to build a similarity tree based on pairwise distances. The algorithm starts with grouping the two closest
objects (based on the distance between feature vectors) and creates an "average" node in a tree with the two objects as children. Then the next closest pair is found among the remaining objets but
also including any average nodes, and so on. At each node the distance between the two children is also stored. Clusters can then be extracted by traversing this tree and stopping nodes with distance
smaller some threshold.
Hierarchical clustering has several benefits. For example the tree structure can be used to visualize relationships and how clusters are related. A good feature vector will give a nice separation in
the tree. Another benefit is that the tree can be used with different cluster thresholds without recomputing the tree. The drawback, however, is that one needs to choose a threshold if the actual
clusters are needed.
Let's see what this looks like in code. Create a file
and add the following code (partially taken and modified from the example in "Programming Collective Intelligence" by Toby Segaran (O'Reilly Media 2007, page 33).
from numpy import *
class cluster_node:
def __init__(self,vec,left=None,right=None,distance=0.0,id=None,count=1):
self.count=count #only used for weighted average
def L2dist(v1,v2):
return sqrt(sum((v1-v2)**2))
def hcluster(features,distance=L2dist):
#cluster the rows of the "features" matrix
# clusters are initially just the individual rows
clust=[cluster_node(array(features[i]),id=i) for i in range(len(features))]
while len(clust)>1:
# loop through every pair looking for the smallest distance
for i in range(len(clust)):
for j in range(i+1,len(clust)):
# distances is the cache of distance calculations
if (clust[i].id,clust[j].id) not in distances:
if d < closest:
# calculate the average of the two clusters
mergevec=[(clust[lowestpair[0]].vec[i]+clust[lowestpair[1]].vec[i])/2.0 \
for i in range(len(clust[0].vec))]
# create the new cluster
# cluster ids that weren't in the original set are negative
del clust[lowestpair[1]]
del clust[lowestpair[0]]
return clust[0]
on a matrix with feature vectors as rows will create and return the cluster tree. The choice of distance measure depends on the actual feature vectors, here we used the Euclidean distance but you can
create any function and use that as parameter.
To extract the clusters from the tree you need to traverse the tree from the top until a node with distance value smaller than some threshold is found. This is easiest done recursively.
def extract_clusters(clust,dist):
# extract list of sub-tree clusters from hcluster tree with distance < dist
clusters = {}
if clust.distance < dist:
# we have found a cluster subtree
return [clust]
# check the right and left branches
cl = []
cr = []
if clust.left!=None:
cl = extract_clusters(clust.left,dist=dist)
if clust.right!=None:
cr = extract_clusters(clust.right,dist=dist)
return cl+cr
This function will return a list of sub-trees containing the clusters. To get the leaf nodes that contain the object ids, traverse each sub-tree and return a list of leaves using:
def get_cluster_elements(clust):
# return ids for elements in a cluster sub-tree
if clust.id>0:
# positive id means that this is a leaf
return [clust.id]
# check the right and left branches
cl = []
cr = []
if clust.left!=None:
cl = get_cluster_elements(clust.left)
if clust.right!=None:
cr = get_cluster_elements(clust.right)
return cl+cr
Let's try this on a simple example to see it all in action. The file
contains 100 images downloaded from Flickr using the tag "sunset" or "sunsets". For this example we will use the average color of each image as feature vector. This is crude and simple but good
enough for illustrating what hierarchical clustering does. Try running the following code in a folder containing the sunset images.
import os
from PIL import Image
from numpy import *
import hcluster
#create a list of images
imlist = []
for filename in os.listdir('./'):
if os.path.splitext(filename)[1] == '.jpg':
n = len(imlist)
#extract feature vector for each image
features = zeros((n,3))
for i in range(n):
im = array(Image.open(imlist[i]))
R = mean(im[:,:,0].flatten())
G = mean(im[:,:,1].flatten())
B = mean(im[:,:,2].flatten())
features[i] = array([R,G,B])
tree = hcluster.hcluster(features)
To visualize the cluster tree, one can draw a dendrogram. This often gives useful information on how good a given descriptor vector is and what is considered similar in a particular case. Add the
following code (also adapted from Segaran's book).
from PIL import Image,ImageDraw
def drawdendrogram(clust,imlist,jpeg='clusters.jpg'):
# height and width
# width is fixed, so scale distances accordingly
# Create a new image with a white background
# Draw the first node
def drawnode(draw,clust,x,y,scaling,imlist,img):
if clust.id<0:
# Line length
# Vertical line from this cluster to children
# Horizontal line to left item
# Horizontal line to right item
# Call the function to draw the left and right nodes
# If this is an endpoint, draw a thumbnail image
nodeim = Image.open(imlist[clust.id])
ns = nodeim.size
The dendrogram is then drawn like this.
This should give an image like the one below.
Hope this was useful. I have one final small thing about clustering I want to write, something for next week maybe.
11 comments:
1. Hi,
Thanks for posting your code. I was wondering if I could trouble with a quick question: if I have a tab-delimited text file with the matrix, how do I do that with your code? I'm sorry for the
noob question, I'm just learning python. :)
2. Hi Greg
You can do this directly using NumPy.
>>> from numpy import *
>>> a = loadtxt('yourfile.txt')
3. Howdy,
I tried running the example you give and I receive an error "AttributeError: 'module' object has no attribute 'hcluster'" being thrown from the line "tree = hcluster.hcluster(features)
Any idea what might be going wrong? Thanks!
- Fincher
4. @Finchler Check that the directory containing hcluster.py is in your PYTHONPATH (or in the same directory as your script). (e.g. type "env" in a linux/mac terminal to see PYTHONPATH)
5. Ah, got it. Thanks!
6. Great code, but I think, I found a bug in this code.
The bug is in get_cluster_elements function.
The first if statement should be greater or equal
> if clust.id>=0:
and not just greater as it is posted.
This is because the ids are being set by the range function from 0 (zero) to the length of the features minus 1(one).
So the feature with id=0 is not returned
The same problem is in drawnode function too. At the if clause again.
7. @Anonymous: thanks.
8. Hi,
I got an error message when calling hcluster.drawdendrogram(...)
"NameError: global name 'getheight' is not defined"
from which module is this function?
9. @Anonymous: The get height and width functions for this example would look like:
def get_height(node):
""" Return the height of a node. """
if node.left==None and node.right==None:
return 1
else: # set height to sum of each branch
return get_height(node.left)+get_height(node.right)
def get_depth(node):
""" Return the depth of a node. """
if node.left==None and node.right==None:
return 0
else: # max of each child plus own distance
return max(get_depth(node.left),get_depth(node.right))+node.distance
Hope that helps. There is a complete rewrite of hcluster in my upcoming book. Worth a look.
10. Nice tiny code !
I'm playing with it, I noticed that "count" looks unused there...
11. John FinchMarch 12, 2014 at 2:21 AM
Where should this line be placed in the specific program?
Either in hcluster.py or run in the command shell?? | {"url":"http://www.janeriksolem.net/2009/04/hierarchical-clustering-in-python.html","timestamp":"2014-04-20T08:39:47Z","content_type":null,"content_length":"116792","record_id":"<urn:uuid:6bf769f8-d66f-4729-a6e1-a33e4a2aaf63>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many miles from Birmingham to Leeds?
You asked:
How many miles from Birmingham to Leeds?
Assuming you meant
• Birmingham, the city and metropolitan borough in the West Midlands county of England
Did you mean?
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/how_many_miles_from_birmingham_to_leeds","timestamp":"2014-04-18T13:23:09Z","content_type":null,"content_length":"55087","record_id":"<urn:uuid:a85cd9bf-c582-4aab-b42e-7eca22e7177c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Color coded table for easy interpretation
Table plots in The UnscramblerŽ X
The first part of the ANOVA table is a summary of the significance of the global model. If the p-value for the global model is smaller than 0.05, it means that the model explains more of the
variations of the response variable than could be expected from random phenomena. In other words, the model is significant at the 5% level. The smaller the p-value, the more significant (and useful)
the model is.
The error p-value should be large as the error should be random and not be explaining the variation in the response.
Total error should also have a small p-value, so that the model is valid.
Effect summary
This table plot gives an overview of the significance of all effects for all responses. There are three values per effect and per response:
• Significance: This coded value indicates if the effect is significant for the specific response. See the Significance levels and associated codes table
• Effect value: This is the value of the effect for the specific response variable. The bigger in absolute value the more important if the design variable
• p-value: Result of the test of significance for the effect. See the Significance levels and associated codes table for more information
Effect Summary table P-value Negative effect Positive effect Color code
>= 0.10 NS NS red
[0.10:0.05] ? ? yellow
[0.01:0.05] – + pale green
[0.005:0.01] – – + + light green
\< 0.005 – – – + + + dark green
Significance levels and associated codes
The sign and significance level of each effect is given as a code:
NS: non significant. ?: possible effect at the significance level 10%.
Note: If some of the design variables have more than 2 levels, the Effects Overview table contains stars (*) instead of ”+” and ”–” signs.
Analyze this table by:
Checking the Response Variables
Look for responses which are not significantly explained by any of the design variables (gray columns). This may be because there are errors in the data, these responses have very little variation,
these responses are very noisy, or their variations are caused by non-controlled conditions which have not been included in the design.
Checking the Design Variables
Look for rows which contain many ”+” or ”–” signs and are green: these main effects or interactions dominate. This is how to detect the most important variables.
Response surface
This plot is used to find the settings of the design variables which give an optimal response value, and to study the general shape of the response surface fitted by the Response Surface model or the
Regression model. It shows one response variable at a time.
This plot can appear in various layouts. The most relevant are:
• Contour plot;
• Landscape plot.
Interpretation: contour plot Interpretation: landscape plot
Look at this plot as a map which tells how to reach the experimental objective. The plot has two axes: two predictor variables Look at this plot to study the 3-D shape of the response surface. Here
are studied over their range of variation; the remaining ones are kept constant. The constant levels are indicated in the RS it is obvious whether there is a maximum, a minimum or a saddle point.
Response surface plot, with contour layout Response surface plot, with landscape layout
The response values are displayed as contour lines, i.e. lines that show where the response variable has the same predicted value. Clicking on a line, or on any spot within the map, will show the
predicted response value for that point, and the coordinates of the point (i.e. the settings of the two predictor variables giving that particular response value).
To interpret several responses together, print out their contour plots on color transparencies and superimpose the maps.
Response surface table
This table is used to set the parameters of the response surface.
Design variables
In a response surface only two design variables can vary the others are fixed.
To select the variables to vary tick/untick the box in the Display column.
To set the value for the fixed variable enter the value manually in the column Value to display. By default this value is the average value.
For category variables select one of the levels using the drop-down list.
Response variables
Only one response variable can be plot at a time. Select the variable to plot by ticking/unticking them.
Once all the modifications are done, click the Generate Surface button to generate a new response surface.
Response surface table
PLS-ANOVA Summary table
This table presents the effect values for all variables as well as their significance levels and p-values.
PLS-ANOVA Summary P-value Negative effect Positive effect Color code
>= 0.10 NS NS red
[0.10:0.05] ? ? yellow
[0.01:0.05] – + pale green
[0.005:0.01] – – + + light green
\< 0.005 – – – + + + dark green
Significance levels and associated codes
NS: non significant.
?: possible effect at the significance level 10%. | {"url":"http://www.camo.com/resources/anova.html","timestamp":"2014-04-17T03:49:05Z","content_type":null,"content_length":"31330","record_id":"<urn:uuid:c4e11466-f151-410b-a57c-9ca9715ab4f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Receptive Field LISSOM (RF-LISSOM) model
Next: Self-Organization Up: Self-Organization of Orientation Maps Previous: Introduction
Figure 1: The RF-LISSOM model. The lateral excitatory and lateral inhibitory connections of a single neuron in the network are shown, together with its afferent connections. The afferents form a
local anatomical receptive field on the retina.
The simulations are based on the RF-LISSOM model of cortical self-organization [37,38,41,42]. The cortical architecture has been simplified and reduced to the minimum necessary configuration to
account for the observed phenomena. Because the focus is on the two-dimensional organization of the cortex, each ``neuron'' in the model corresponds to a vertical column of cells through the six
layers of the cortex. The transformations in the LGN were also bypassed for simplicity.
The cortical network is modeled with a sheet of interconnected neurons (figure 1). Through afferent connections, each neuron receives input from a receptive surface, or ``retina''. In addition, each
neuron has reciprocal excitatory and inhibitory lateral connections with other neurons. Lateral excitatory connections are short-range, connecting only close neighbors. Lateral inhibitory connections
run for long distances, and may implement close to full connectivity between neurons in the network.
Neurons receive afferent connections from broad overlapping patches on the retina called anatomical RFs. The s centered on this location as its RF. Depending on its location, the number of afferents
to a neuron could vary from
The inputs to the network consist of simple images of multiple elongated Gaussian spots on the retinal receptors. The activity
Both afferent and lateral connections have positive synaptic weights. The weights are initially set to random values, and organized through an unsupervised learning process. At each training step,
neurons start out with zero activity. An elongated pattern is introduced on the retina, and the activation propagates through the afferent connections to the cortical network. The initial response
The response evolves over time through lateral interaction. At each time step, each cortical neuron combines the above afferent activation
After the activity has settled, typically in a few iterations of equation 3, the connection weights of each neuron are modified. Both afferent and lateral weights adapt according to the same
mechanism: the Hebb rule, normalized so that the sum of the weights is constant:
Both inhibitory and excitatory lateral connections follow the same Hebbian learning process and strengthen by correlated activity. At long distances, very few neurons have correlated activity and
therefore most long-range connections eventually become weak. The weak connections are eliminated periodically, and through the weight normalization, inhibition concentrates in a closer neighborhood
of each neuron. The radius of the lateral excitatory interactions starts out large, but as self-organization progresses, it is decreased until it covers only the nearest neighbors. Such a decrease is
necessary for global topographic order to develop and for the receptive fields to become well-tuned at the same time (for theoretical motivation for this process, see [26,27,28,33,42]; for
neurobiological evidence, see [9,20].) Together the pruning of lateral connections and decreasing excitation range produce activity bubbles that are gradually more focused and local. As a result,
weights change in smaller neighborhoods, and receptive fields become better tuned to local areas of the retina.
Next: Self-Organization Up: Self-Organization of Orientation Maps Previous: Introduction | {"url":"http://www.cs.utexas.edu/users/nn/web-pubs/htmlbook96/sirosh/node2.html","timestamp":"2014-04-18T06:44:12Z","content_type":null,"content_length":"10130","record_id":"<urn:uuid:3c5c214e-e3a2-4ae9-a273-feb3d5871df6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fifth Grade Checklist
MINIMUM CRITERIA FOR FIFTH GRADE PROMOTION TO SIXTH GRADE AT ECCLESTON ELEMENTARY SCHOOL
The student…
• scores above level 1 in reading on the FCAT
• attains or surpasses DRP level 4 or 5
• demonstrates 1 year academic growth as assessed by Gates-MacGinitie Reading Test or other district approved assessment tests
• demonstrates knowledge of and supply use of graphic organizers/thinking maps to organize information in all content areas
• maintain at least a 2.0 grade point average in reading for the year
• Reads a lot of books for knowledge and information
• extends and refines fourth-grade skills with increasingly complex texts such as decoding to clarify pronunciation, context clues to construct meaning and predicting
• uses strategies to determine meaning and increase vocabulary (ex., homonyms, prefixes, word-origins, multiple meanings, antonyms, and synonyms)
• develops vocabulary by reading independently and using resources and references
• identifies, classifies and demonstrates knowledge of words from a variety of categories on or above grade level
• monitors reading on or above grade level by adjusting reading rate according to purpose or text difficulty, rereading, self-correcting, summarizing and checking other sources
• determines the main idea and connects ideas with relevant supporting details
• arranges events in sequential order
• describes how the author’s purpose and perspective influence the text
• identifies examples of fact, fiction or opinion
• knows characters of persuasive text
• reads and organized information from reference materials to write a research report or perform other tasks
• understands comparison and contrast, cause-and effect and sequence of events
The student will…
• overlay structure of story with strategies, usually 8 strategies for narrative and 8 strategies for expository
The student will...
• maintain a 2.0 or better grade point average in Math
The student…
• reads, writes and identifies decimals through thousands
• knows that place value relates to powers of 10
• reads, writes and identifies whole numbers, fractions and mixed numbers
• translates problem situations into diagrams, models and numerals
• uses symbols to compare and order whole numbers, fractions, percents and decimals
• multiplies common fractions and decimals to hundredths
• explains the relationship between the decimal number system and other number systems
• determines the operations needed to solve one and two step problems
• demonstrates the inverse feature of multiplication and division
• finds factors of numbers to determine if they are prime or composite
• uses strategies to estimate quantities of one thousand or more
• determines the greatest common factor and the least common multiple of two numbers
• expresses a whole number as a product of its prime factors
• applies rules of divisibility and identifies perfect squares to 144
• Draw arrays to model multiplication
• Know multiplication facts
• Identify even and odd numbers
• Find the factors of numbers
• Find the sum and difference of multidigit whole numbers and decimals
• Identify the maximum, minimum, median, mode, and mean for a data set.
• Know place value to hundredths
• Identify and use data landmarks
• Convert among fractions, decimals, and percents
• Convert between fractions and mixed or whole numbers
• Find common denominators
• Find the factors of a number
• Find the prime factorization of numbers
The student…
• develops formulas for determining perimeter, area and volume
• solves problems for determining perimeter, area and volume
• classifies and measures (ex., acute, obtuse, right or straight) and measures circumference
• determines whether a solution needs an accurate or estimated measurement
• compares length, weight and capacity using customary and metric units
• uses multiplication and division to convert units to measure
• measures dimensions, weight, mass and capacity using correct units
• uses schedules, calendars and elapsed time to solve problems
• estimates length, weight, time, temperature and money for solving problems
• estimates area, perimeter and volume of a rectangular prism
• selects appropriate unit and tool for measuring
The student…
• knows and identifies symmetry, congruency and reflections in geometric figures
• knows the relationship between points, lines, line segments, rays and planes
• describes properties of and draws two and three dimensional figures
• knows the effect of a flip, slide or turn on a geometric figure
• applies and compares the concept of area, perimeter and volume
• knows the effect on area and perimeter when figures are combined, rearranged, enlarged or reduced
• knows how to identify, locate and plot ordered pairs of whole numbers on a graph
• Identify types of triangles
• Identify place value in numbers to billions
• Know properties of polygons
• Define and create tessellations
• Plot ordered pairs on a one-quadrant coordinate grid
• Understand the concept of area of a figure
• Use a formula to find the area of rectangles
• Use formulas to find the area of polygons and circles
• Know the properties of geometric solids.
The student…
• describes, extends, creates, predicts and generalized numerical and geometric patterns
• identifies and explains numerical relationships and patterns using algebraic symbols
• analyzes number patterns and states the rule
• models and solves a number sentence with a missing addend
• uses a variable to represent a given verbal expression
• translates equations into verbal and written problems
The student…
• selects the appropriate graph for data
• interprets and compares information from different types of graphs
• chooses titles, labels, scales and intervals for organizing data on a graph
• generates questions, collects responses and displays data on a graph
• completes and interprets circle graphs using common fractions or percents
• identifies range, median, mean and mode
• uses technology to examine data and construct labeled graphs
• uses computer-generated spreadsheets to record and display data
• uses a model to represent all possible outcomes for a probability situation
• uses a model to represent all possible outcomes for a probability situation
• explains and predicts outcomes that are most likely to occur and tests the predictions
• designs a survey to collect and display data on a complete graph
• uses statistical data to predict trends and make generalizations
The student…
• knows that matter is conserved during heating and cooling
• knows that materials may be made of parts too small to be seen without magnification
The student…
• knows that energy can be described as stored energy (potential) or energy of motion (kinetic)
• extends and refines use of a variety of tools to measure the gain or loss of energy
• knows that some materials conduct heat better others
• knows that the limited supply of usable energy sources (ex., fuels such as coal or oil) places great significance on the development of renewable energy sources
• understands that convection, radiation and conduction are methods of heat transfer
The student…
• uses scientific tools (ex., stopwatch, meter stick, compass) to measure speed, distance and direction of an object
• knows that waves travel at different speeds through different materials
• knows the relationship between the strength of a force and its effect on an object (ex., the greater the force, the greater the change in motion; the more massive the object, the smaller the
effect of a given force
• understands how inertia, gravity, friction, mass and force affect motion
• understands how friction affects an object in motion
• knows that objects do not change their motion unless acted upon by an outside force
The student…
• understands the various roles of single-celled organisms in the environment
• knows ways in which protists interact with plants and animals in the environment
• knows how changes in the environment affect organisms (ex., some organisms move in, others move out; some organisms survive and reproduce, others die)
• knows that green plants use carbon dioxide, water and sunlight energy to turn minerals and nutrients into food for growth, maintenance and reproduction
The student…
• understands how body systems interact (ex., how bones and muscles work together for movement)
• uses magnifying tools to identify similar cells and different kinds of structures
The student…
• knows that it is important to keep accurate records and descriptions to provide information and clues on causes of discrepancies in repeated experiments
• knows that a successful method to explore the natural world is to observe and record, and then analyze and communicate the results
• knows that to work collaboratively, all team members should be free to reach, explain and justify their own individual conclusions
• knows that to compare and contrast observations and results is an essential skill in science
• knows that a model of something is different from the real thing, but can be used to learn something about the real thing
• knows that natural events are often predictable and logical
• understands that people, alone or in groups, invent new tools to solve problems to do work that affects aspects of life outside of science
• knows that data are collected and interpreted in order to explain an event or concept
• knows that before a group of people build something or try something new, they should determine how it may affect other people
• know that, through the use of scientific processes and knowledge, people can solve problems, make decisions and from new ideas
Ideas for helping your child at home
Language Arts
• Encourage your child to share and discuss books that he/she has read
• Attend plays or movies and compare/contrast them to book versions of the same story. Discuss how characters, setting and plot were adapted from the book to screen or stage.
• Encourage your child to write to you in a family journal or diary and then respond
• Help your child calculate different dates and elapsed time on a calendar using days, weeks, months and years.
• When cooking allow your child to experiment with fractions by choosing appropriate measuring cups and spoons to determine amounts needed for recipes.
• Go stargazing in your own backyard or a spot away from city lights. See how many constellations and planets you can recognize.
• Humans have very useful thumbs. Try taping your thumbs down across the palms of your hands. Now try some every day activities such as eating and writing.
• Touch a rubber band to your forehead. Now rapidly stretch the rubber band. Once again, touch it to your forehead. Discuss how the mechanical energy created by stretching the rubber band has
turned into heat energy.
Students must meet 80% of the criteria to be promoted to the sixth grade. | {"url":"https://ocps.net/lc/southwest/eec/AC/fifthgrade/Pages/FifthGradeChecklist.aspx","timestamp":"2014-04-16T16:58:25Z","content_type":null,"content_length":"53062","record_id":"<urn:uuid:5c9ad781-6afd-436c-8d75-8646fac0df34>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |