content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
The WebScience Project at IUPUI
Inverse Functions
Meaning of Inverse Functions
Trigonometric Substitution
Approximate Integration
Arc Length
Direction Fields
Parametric Equations
Polar Coordinates
Infinite Sequences and Series
The Comparison Tests
Absolute Convergence
Taylor and Maclaurin Series
Taylor Polynomials
Inverse Functions (Chapter 7.1c)
Logarithmic Identities and Inequalities (Chapter 7.2)
Exponential Functions (Chapter 7.4a)
Techniques of Integration (Chapter 8.6)
Approximate Integration (Chapter 8.7)
Direction Fields (Chapter 10.2a)
Newton's Law of Cooling (Chapter 10.1a)
The Logistic Equation (Chapter 10.5a*)
Parametrized Curves (Chapter 11.1a)
Ballistics (Chapter 11.1c)
Polar Curves in Cartesian Coordinates (Chapter 11.4a)
Sequences and Series (Chapter 12.2b)
More on Sequences and Series (Chapter 12.2c)
Maclaurin Series (Chapter 12.10a) | {"url":"http://webphysics.iupui.edu/webscience/mathematics_archive.html","timestamp":"2014-04-21T15:46:11Z","content_type":null,"content_length":"29081","record_id":"<urn:uuid:ff8eff59-5856-41e2-81cc-b0e1c743eb03>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wednesday Math, Vol. 89: Mode
There are several ways to determine the "center" of a set of data. The three most commonly taught methods are mean, median and mode. The mean, also known as average, can only be done with numerical
data, since you have to add up all the values then divide by the number of things on the list. The median can be done with any data that can be put in order. Once put in order, find the middle
position on the list, and the value in that position is the median. For example, if we put all the students in a college in order by class, freshmen, sophomores, juniors and seniors, because students
drop out there are likely more freshmen than sophomores, more sophomores than juniors and more juniors than seniors. This means the median is likely a sophomore, unless the drop-out rate is so severe
that the median is a freshman.
Then we come to mode, which means the most common value in a data set, as long as there are any duplicates on the list. I was checking sources about the definition. I nicked the definition
description seen here from a website for teachers, and I defaced it because as far as I've been taught, it's wrong. Mode can be used with any kind of data, whether it's a set of numbers of a set of
categories. For example, if we have a set of cars, we can have as our data the make of car, or the model or the color. For any of those non-numerical variables, we can discuss the mode, which would
mean the most popular manufacturer or most popular model or most popular color, respectively. In the example of college students above, freshman should be the mode.
There are some types of numerical data where order does not have much meaning, especially coded data. Take zip codes, for example. If we take a set of zip codes, we can take the average, but it is a
number without meaning. Likewise the median has no meaning. This is because two zip codes that are a single number apart can be very far apart on the map. Locally, 94501 and 94502 are right next to
each other in Alameda, but 94503 is up in American Canyon, more than forty miles to the north. For such a set of numerical data, the mode is the only measure of center that tells us something useful,
and that would be the most popular zip code on a list of data.
Besides the lack of agreement on whether mode is just for numbers and can be used on any data set, mode has exceptions to the rules, something that isn't true with average or median. A set needs to
have duplicate values to have a mode. If the data was the social security numbers of a group of people, or the registration numbers of a set of cars, we would expect that all those numbers would be
unique, so those sets would have no mode. If there are duplicates in a set, but there is a tie for the most frequent value, then we can have more than one mode. If we take the set of U.S. Presidents
and look at the variable of first name, the most common value is James, as there have been six presidents with that first name if we include James Earl Carter. If instead we look at last names, we
have several that have shown up twice but none that have shown up three times. The modes for last name are Adams, Johnson, Roosevelt, Harrison and Bush.
And this brings us to Excel and its mode function. The nice folks from Microsoft have decided that there is no mode when dealing with a non-numerical set. Whether checking for first names or last
names of presidents, the answer Excel 2007 would give on such a data set is "N/A", or not applicable.
If you are working with a numerical data set with no repeats, Excel will also tell you "N/A", and this time that's actually the correct answer.
The big problem Excel has, which is why I've shown this picture of their dark side logo in colors opposite the green and white they usually use, is that it will not tell you when a data set has more
than one mode. If a data set has two values of 3 and two values of 77 and no other duplicates, Excel might tell you the mode is 3 or might tell you the mode is 77, depending on how the data is
There are workarounds. You can take a data set, let's say it's in column A in your spreadsheet, and copy it into column B. There's a function under the data tab called "remove duplicates". If we
perform that on the copy of the list now in column B, and Excel tells us no duplicates, then there is no mode. If there are duplicates, type in this formula into the first cell of column C. (We are
assuming the data is in column A from row 1 to row 100.)
=countif(a$1:a$100, b1)
Now in cell C1, the number will tell you how many times the value in B1 shows up on the unadulterated list in column A. Click and drag that function down the C column, and that information can be
found for all the distinct values, regardless of whether the original data was numerical or categorical. You can then select columns B and C together and sort by biggest number in column C. This will
tell you the mode, and it will also tell you if there is more than one mode, if the number in C1 is the same as the number in C2. In the case of the presidential last names, columns B and C would
start as follows, assuming the original list of presidents was sorted by chronological order of their administrations:
Adams_____ 2Harrison__ 2Johnson___ 2Roosevelt_ 2Bush______ 2Washington 1
It's always nice when there is a workaround for your problems, but it's better when there's agreement on definitions that avoids the problems in the first place. The more I study statistics, the
happier I am my degree is in mathematics.
9 comments:
Interesting comments! Can you help me out? What would the median be if there were 1000 freshmen and 1000 others (500 sophomores, 300 juniors, 200 seniors). This is puzzling me.
Hi, anonymous. In that case, you would say the median lies between freshmen and sophomores. If these were replaced by grade numbers like 9th, 10th, 11th and 12th, you would say the median lies
between 9th and 10th, or you could say it was 9.5.
Thanks. But what if the groups were maybe small, medium and large. Like maybe 100 small plastic bags, 60 medium plastic bags and 40 large plastic bags. What would it mean to say that the median
is between a small bag and a medium bag? Would the median have an actual value then? I don't know if the median has to have an actual value. Or would the median be a sort of fuzzy or vague kind
of thing.
Also, what you said about the mode is really interesting. Have you ever seen any books say that the mode can be talked about for a set of categories?
If median is taken with numerical data, sometimes it is the average of two values, and therefore a value that doesn't actually occur in the data set. The same can happen with categorical data.
As for mode and categorical data, I pulled three the first three stats texts off the shelf, and all three agree mode is okay with categorical. The three books are Triola, DeVeaux/Velleman/Bock
and Larson/Farber. D/V/B says mode is better with categorical data than it is with numerical, as with numerical we often put the data into demographic categories before finding mode.
There are stats texts that disagree. This is one of the things that make statistics more annoying than math to me, because in math, definitions are much more solid. When two definitions disagree
in math, usually one is obsolete.
Can u recommend one of those books for me to learn some statistics & maybe why you like it?
Wow -- statistics just isn't so clear. Doesn't it make it frustrating for you to teach it and students to learn it?
Here's what I still don't get but I bet you can explain it. Like -- what's betwen small and medium bags? Or, what about countries in this hemisphere: 3 are North (US, Canada Mexico), and say 7
are Central and 10 are South. I don't know how many there really are. Then what's the median of 3 north, 7 central, and 10 south. I know the mode is south, but the median? There's no country
between Panama and Columbia or whatever is the 1st one in south america.
I use Triola, but you can find open source books that are much cheaper. If you are going to pay for a text, Johnson/Kuby may be the best value.
There are special cases in stats, but students often find it a little easier than other math courses because it has so many applications.
As to median: In ordered categorical data, you would be saying that half the bags are small and the other half are medium or large. In your second example, you would be saying there are as many
countries in North and Central America combined as there are in South America. (Not actually true.)
Thanks again. Knew you'd be helpful. Do u know how to find an open source textbook? Cheaper is good. Are there any u like to use or recommend?
Median is still a puzzle, because for the countries in the 2nd example, I think the median doesn't have a value, or does it? Like, median = ? I'm thinking the same thing actually also for the
bags in the 1st example, that median has no value. Or am I not seeing the value, & this is something that's easy that I don't see. Sorry not to be clear yet on this.
After thinking about it more I don't think you're right when you said that "the median can be done with any data that can be put in order." Because with ordered categorical data "north north
central central south" has a median = central but "north north central south" doesn't have a median. I think the definition for median should give an answer for every set of ordered categorical
data. Do you know of any book or source that besides yourself who says that median can be applied to ordered categorical data? Daniel T.
Check out Triola. Median can be used with ordinal data.
The answer "between category j and category k" is legitimate. | {"url":"http://lotsasplainin.blogspot.com/2009/09/wednesday-math-vol-89-mode.html?showComment=1256663633056","timestamp":"2014-04-18T13:06:13Z","content_type":null,"content_length":"114789","record_id":"<urn:uuid:a29cca5f-98ce-4569-a5e9-e6a94c7d89cb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graph (mathematics)
34,117pages on
this wiki
Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social |
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |
Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory
This article is about sets of vertices connected by edges. For graphs of mathematical functions, see Graph of a function.
Further information: Graph theory
A drawing of a labeled graph on 6 vertices and 7 edges.
In mathematics, a graph is an abstract representation of a set of objects where some pairs of the objects are connected by links. The interconnected objects are represented by mathematical
abstractions called vertices, and the links that connect some pairs of vertices are called edges.^[1] Typically, a graph is depicted in diagrammatic form as a set of dots for the vertices, joined by
lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics.
The edges may be directed (asymmetric) or undirected (symmetric). For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this is
an undirected graph, because if person A shook hands with person B, then person B also shook hands with person A. On the other hand, if the vertices represent people at a party, and there is an edge
from person A to person B when person A knows of person B, then this graph is directed, because knowledge of someone is not necessarily a symmetric relation (that is, one person knowing another
person does not necessarily imply the reverse; for example, many fans may know of a celebrity, but the celebrity is unlikely to know of all their fans). This latter type of graph is called a directed
graph and the edges are called directed edges or arcs.
Vertices are also called nodes or points, and edges are also called lines or arcs. Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J.J.
Sylvester in 1878.^[2]
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
A general example of a graph (actually, a pseudograph) with three vertices and six edges.
In the most common sense of the term,^[3] a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or lines, which are 2-element subsets of V
(i.e., an edge is related with two vertices, and the relation is represented as unordered pair of the vertices with respect to the particular edge). To avoid ambiguity, this type of graph may be
described precisely as undirected and simple.
Other senses of graph stem from different conceptions of the edge set. In one more generalized notion,^[4] E is a set together with a relation of incidence that associates with each edge two
vertices. In another generalized notion, E is a multiset of unordered pairs of (not necessarily distinct) vertices. Many authors call this type of object a multigraph or pseudograph.
All of these variants and others are described more fully below.
The vertices belonging to an edge are called the ends, endpoints, or end vertices of the edge. A vertex may exist in a graph and not belong to an edge.
V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. The
order of a graph is $|V|$ (the number of vertices). A graph's size is $|E|$, the number of edges. The degree of a vertex is the number of edges that connect to it, where an edge that connects to the
vertex at both ends (a loop) is counted twice.
For an edge {u, v}, graph theorists usually use the somewhat shorter notation uv.
Adjacency relation
The edges E of an undirected graph G induce a symmetric binary relation ~ on V that is called the adjacency relation of G. Specifically, for each edge {u, v} the vertices u and v are said to be
adjacent to one another, which is denoted u ~ v.
Types of graphs
Distinction in terms of the main definition
As stated above, in different contexts it may be useful to define the term graph with different degrees of generality. Whenever it is necessary to draw a strict distinction, the following terms are
used. Most commonly, in modern texts in graph theory, unless stated otherwise, graph means "undirected simple finite graph" (see the definitions below).
Undirected graph
A simple undirected graph with three vertices and three edges. Each vertex has degree two, so this is also a regular graph.
An undirected graph is one in which edges have no orientation. The edge (a, b) is identical to the edge (b, a), i.e., they are not ordered pairs, but sets {u, v} (or 2-multisets) of vertices.
Directed graph
A directed graph
Main article: Directed graph
A directed graph or digraph is an ordered pair D = (V, A) with
• V a set whose elements are called vertices or nodes, and
• A a set of ordered pairs of vertices, called arcs, directed edges, or arrows.
An arc a = (x, y) is considered to be directed from x to y; y is called the head and x is called the tail of the arc; y is said to be a direct successor of x, and x is said to be a direct predecessor
of y. If a path leads from x to y, then y is said to be a successor of x and reachable from x, and x is said to be a predecessor of y. The arc (y, x) is called the arc (x, y) inverted.
A directed graph D is called symmetric if, for every arc in D, the corresponding inverted arc also belongs to D. A symmetric loopless directed graph D = (V, A) is equivalent to a simple undirected
graph G = (V, E), where the pairs of inverse arcs in A correspond 1-to-1 with the edges in E; thus the edges in G number |E| = |A|/2, or half the number of arcs in D.
A variation on this definition is the oriented graph, in which not more than one of (x, y) and (y, x) may be arcs.
Mixed graph
A mixed graph G is a graph in which some edges may be directed and some may be undirected. It is written as an ordered triple G = (V, E, A) with V, E, and A defined as above. Directed and undirected
graphs are special cases.
A loop is an edge (directed or undirected) which starts and ends on the same vertex; these may be permitted or not permitted according to the application. In this context, an edge with two different
ends is called a link.
The term "multigraph" is generally understood to mean that multiple edges (and sometimes loops) are allowed. Where graphs are defined so as to allow loops and multiple edges, a multigraph is often
defined to mean a graph without loops,^[5] however, where graphs are defined so as to disallow loops and multiple edges, the term is often defined to mean a "graph" which can have both multiple edges
and loops,^[6] although many use the term "pseudograph" for this meaning.^[7]
A quiver or "multidigraph" is a directed graph which may have more than one arrow from a given source to a given target. A quiver may also have directed loops.
Simple graph
As opposed to a multigraph, a simple graph is an undirected graph that has no loops and no more than one edge between any two different vertices. In a simple graph the edges of the graph form a set
(rather than a multiset) and each edge is a distinct pair of vertices. In a simple graph with n vertices every vertex has a degree that is less than n (the converse, however, is not true — there
exist non-simple graphs with n vertices in which every vertex has a degree smaller than n).
Weighted graph
A graph is a weighted graph if a number (weight) is assigned to each edge.^[8] Such weights might represent, for example, costs, lengths or capacities, etc. depending on the problem at hand. Some
authors call such a graph a network.^[9]
Half-edges, loose edges
In exceptional situations it is even necessary to have edges with only one end, called half-edges, or no ends (loose edges); see for example signed graphs and biased graphs.
Important graph classes
Regular graph
Main article: Regular graph
A regular graph is a graph where each vertex has the same number of neighbors, i.e., every vertex has the same degree or valency. A regular graph with vertices of degree k is called a k‑regular graph
or regular graph of degree k.
Complete graph
File:Complete graph K5.svg
A complete graph with 5 vertices. Each vertex has an edge to every other vertex.
Main article: Complete graph
Complete graphs have the feature that each pair of vertices has an edge connecting them.
Finite and infinite graphs
A finite graph is a graph G = (V, E) such that V and E are finite sets. An infinite graph is one with an infinite set of vertices or edges or both.
Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated.
Graph classes in terms of connectivity
Main article: Connectivity (graph theory)
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. A graph is called connected if every pair of distinct
vertices in the graph is connected; otherwise, it is called disconnected.
A graph is called k-vertex-connected or k-edge-connected if no set of k-1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called
simply k-connected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is strongly connected or strong if it contains a
directed path from u to v and a directed path from v to u for every pair of vertices u, v.
Properties of graphs
Two edges of a graph are called adjacent (sometimes coincident) if they share a common vertex. Two arrows of a directed graph are called consecutive if the head of the first one is at the nock (notch
end) of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if they are at the notch and at the head of an arrow), in which case the common edge is
said to join the two vertices. An edge and a vertex on that edge are called incident.
The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes
called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object.
In a weighted graph or digraph, each edge is associated with some value, variously called its cost, weight, length or other term depending on the application; such graphs arise in many contexts, for
example in optimal routing problems such as the traveling salesman problem.
Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat
vertices as indistinguishable; then the graph may be called unlabeled. (Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident
edges). The same remarks apply to edges, so graphs with labeled edges are called edge-labeled graphs. Graphs with labels attached to edges or vertices are more generally designated as labeled.
Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (Note that in the literature the term labeled may apply to other kinds of labeling,
besides that which serves only to distinguish different vertices or edges.)
A graph with six nodes.
• The diagram at right is a graphic representation of the following graph:
V = {1, 2, 3, 4, 5, 6}
E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}.
• In category theory a small category can be represented by a directed multigraph in which the objects of the category represented as vertices and the morphisms as directed edges. Then, the
functors between categories induce some, but not necessarily all, of the digraph morphisms of the graph.
• In computer science, directed graphs are used to represent knowledge (e.g., Conceptual graph), finite state machines, and many other discrete structures.
• A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X iff xRy.
Important graphs
Basic examples are:
• In a complete graph, each pair of vertices is joined by an edge; that is, the graph contains all possible edges.
• In a bipartite graph, the vertex set can be partitioned into two sets, W and X, so that no two vertices in W are adjacent and no two vertices in X are adjacent. Alternatively, it is a graph with
a chromatic number of 2.
• In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X.
• In a linear graph or path graph of length n, the vertices can be listed in order, v[0], v[1], ..., v[n], so that the edges are v[i−1]v[i] for each i = 1, 2, ..., n. If a linear graph occurs as a
subgraph of another graph, it is a path in that graph.
• In a cycle graph of length n ≥ 3, vertices can be named v[1], ..., v[n] so that the edges are v[i−1]v[i] for each i = 2,...,n in addition to v[n]v[1]. Cycle graphs can be characterized as
connected 2-regular graphs. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.
• A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect (i.e., embedded in a plane).
• A tree is a connected graph with no cycles.
• A forest is a graph with no cycles (i.e. the disjoint union of one or more trees).
More advanced kinds of graphs are:
Operations on graphs
Main article: Operations on graphs
There are several operations that produce new graphs from old ones, which might be classified into the following categories:
• Elementary operations, sometimes called "editing operations" on graphs, which create a new graph from the original one by a simple, local change, such as addition or deletion of a vertex or an
edge, merging and splitting of vertices, etc.
• Graph rewrite operations replacing the occurrence of some pattern graph within the host graph by an instance of the corresponding replacement graph.
• Unary operations, which create a significantly new graph from the old one. Examples:
• Binary operations, which create new graph from two initial graphs. Examples:
In a hypergraph, an edge can join more than two vertices.
An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for
higher-dimensional simplices.
Every graph gives rise to a matroid.
In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph.
In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs.
In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids.
See also
• Balakrishnan, V. K. (1997-02-01). Graph Theory, 1st, McGraw-Hill.
• Berge, Claude (1958). Théorie des graphes et ses applications (in French), viii+277, Dunod, Paris: Collection Universitaire de Mathématiques, II. Translation: [1962] (2001) {{{title}}}, Dover,
New York: Wiley.
• Biggs, Norman (1993). Algebraic Graph Theory, 2nd, Cambridge University Press.
• Bollobás, Béla (2002-08-12). Modern Graph Theory, 1st, Springer.
• Bang-Jensen, J.; Gutin, G. (2000). Digraphs: Theory, Algorithms and Applications, Springer.
• (2005) Graph Theory, 3rd, Berlin, New York: Springer-Verlag..
• (1995) Graham, R.L., Grötschel, M., and Lovász, L Handbook of Combinatorics, MIT Press.
• Gross, Jonathan L.; Yellen, Jay (1998-12-30). Graph Theory and Its Applications, CRC Press.
• (2003-12-29) Gross, Jonathan L., & Yellen, Jay Handbook of Graph Theory, CRC.
• Harary, Frank (January 1995). Graph Theory, Addison Wesley Publishing Company.
• Iyanaga, Shôkichi; Kawada, Yukiyosi (1977). Encyclopedic Dictionary of Mathematics, MIT Press.
• Zwillinger, Daniel (2002-11-27). CRC Standard Mathematical Tables and Formulae, 31st, Chapman & Hall/CRC.
Further reading
External links | {"url":"http://psychology.wikia.com/wiki/Graph_(mathematics)?oldid=150381","timestamp":"2014-04-24T15:27:02Z","content_type":null,"content_length":"111313","record_id":"<urn:uuid:da0427b2-f172-4c3e-a921-27286dd5c636>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
probability mass function
February 15th 2009, 07:27 PM
probability mass function
In LOTTO 49, Michigan's lottery game, a player selects 6 integers out of the 49 positive integers. The state randomly selects 6 out of the first 49 integers. Cash prizes are given to a player who
matches 4, 5 , or 6 integers. Let X equal the number of integers selected by a player that matches integers selected by the state.
Need help stating the p.m.f.
Don't know why but I just don't undertstand how to do this.
February 15th 2009, 07:55 PM
This seems to be a straight foward hypergeometric dist.
I assume we cannot pick a number twice, i.e., we are sampling without replacement.
If you replace the balls it's binomial.
In this case it's
P(X=x)= (6 choose x) (43 choose 6-x)/(49 choose 6)
You just want 4,5 and 6. | {"url":"http://mathhelpforum.com/advanced-statistics/73820-probability-mass-function-print.html","timestamp":"2014-04-19T04:27:25Z","content_type":null,"content_length":"4002","record_id":"<urn:uuid:c7839962-41e2-4597-97aa-3140d18fb3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Variability of Active Galactic Nuclei - B.M. Peterson
3.5. Transfer Function Recovery
With data of sufficient quality, transfer functions for different plausible models are distinguishable from one another. This is of course, why we place so much emphasis on them: if we can determine
the transfer function for a particular line, we have very strong constraints on the geometry and kinematics of the responding region, and if the BLR has a spherically or azimuthally symmetric
structure, it may be possible to determine the BLR kinematics and structure with fairly high accuracy and strongly constrain the emission-line reprocessing physics. The operational goal of
reverberation mapping experiments is thus to determine the transfer functions for various emission lines in AGN spectra.
How can we determine the transfer functions from observational data? Inspection of the transfer equation (Eq. (20)) immediately suggests Fourier inversion (i.e., the method of Fourier quotients),
which was the formulation outlined by Blandford & McKee ^7 . We define the Fourier transform of the continuum light curve C(t) as
and similarly define the Fourier transform of the line light curve. By the convolution theorem ^9, the transfer function becomes
so that L* / C*, and the transfer function is obtained from
In practice, however, Fourier methods are not used as they are inadequate when applied to data that are relatively noisy (i.e., flux uncertainties are only a factor of a few to several smaller than
the variations) and which are limited in terms of both sampling rate, which is in any case usually irregular, and duration. Simpler methods, like cross-correlation (Sec. 4), can be applied with
success, though with limited results. Cross-correlation, we will see, can give the first moment of the transfer function, but little else.
In principle, more powerful methods can be used to attempt to recover transfer functions. The most commonly used is the Maximum Entropy Method (MEM) ^33. MEM is a generalized version of maximum
likelihood methods, such as least-squares. The difference is that in the method of least squares, a parameterized model is fitted to the data, whereas MEM finds the "simplest" (maximum entropy)
solution, balancing model simplicity and realism. Examples of MEM solutions will be shown in Sec. 5. Other methodologies that have been employed for transfer-function solution include subtractively
optimized local averages (SOLA) ^77 and regularized linear inversion ^44. | {"url":"http://ned.ipac.caltech.edu/level5/Sept01/Peterson2/Peter3_5.html","timestamp":"2014-04-18T05:52:53Z","content_type":null,"content_length":"4590","record_id":"<urn:uuid:78ae7003-19de-4f5c-8355-aa980bdcceb0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laplace's Equation using similarity variable and others
November 9th 2010, 01:21 PM
Laplace's Equation using similarity variable and others
I've really spent a long time struggling with these.
The first one is finding the similarity solutions of
$\frac{\partial u}{\partial t} = x\frac{\partial^2 u}{\partial x^2}$
using the similarity variable $n = xt^a$, where a is a constant I have to determine all possible values of
The other problem is solving Laplace's equation
$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}=0$
using the similarity variable $n=xy^a$, where a is a constant I have to determine all possible values of
November 9th 2010, 02:47 PM
Show us what you've tried so that we can help.
November 9th 2010, 03:07 PM
my attempts at answering the questions
for the second one I managed to solve by the normal method of writing u as a product of a function of x only and a function of y only and working through to get:
$u = (c_1e^{kx} + c_2e^{-kx})(c_3cos(ky) + c_4sin(ky))$
for constants k and c subscript 1-4
but i'm not sure how to write up a solution using their similarity variable, or how to be sure how i've found all the possible constants a
November 9th 2010, 03:07 PM
Happen to go to Edinburgh university?
November 9th 2010, 03:39 PM
I think the idea here is to assume a solution of the form
$u = F\left(xy^a\right)$
substitute into the PDE and pick $a$ such that the PDE becomes an ODE in the variable $n = xy^a$.
November 9th 2010, 04:25 PM
Yeah you should find the partial derivatives in terms if the similarity varibles and the other varibles then (i suggest combining to find one co-efficient in front of f' ) attempt to choose alpha
such that you have an ode with constant co-efficients or in terms of the similarity vairable, I think you get 2 values forth diffusion equation and three for laplace | {"url":"http://mathhelpforum.com/differential-equations/162675-laplaces-equation-using-similarity-variable-others-print.html","timestamp":"2014-04-19T08:27:02Z","content_type":null,"content_length":"7557","record_id":"<urn:uuid:dea94454-57e7-4cc9-9cc3-b8ba46c3545f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics of Photonic Devices, 2nd Edition
ISBN: 978-0-470-29319-5
840 pages
January 2009
Read an Excerpt
The most up-to-date book available on the physics of photonic devices
This new edition of Physics of Photonic Devices incorporates significant advancements in the field of photonics that have occurred since publication of the first edition (Physics of Optoelectronic
Devices). New topics covered include a brief history of the invention of semiconductor lasers, the Lorentz dipole method and metal plasmas, matrix optics, surface plasma waveguides, optical ring
resonators, integrated electroabsorption modulator-lasers, and solar cells. It also introduces exciting new fields of research such as: surface plasmonics and micro-ring resonators; the theory of
optical gain and absorption in quantum dots and quantum wires and their applications in semiconductor lasers; and novel microcavity and photonic crystal lasers, quantum-cascade lasers, and GaN
blue-green lasers within the context of advanced semiconductor lasers.
Physics of Photonic Devices, Second Edition presents novel information that is not yet available in book form elsewhere. Many problem sets have been updated, the answers to which are available in an
all-new Solutions Manual for instructors. Comprehensive, timely, and practical, Physics of Photonic Devices is an invaluable textbook for advanced undergraduate and graduate courses in photonics and
an indispensable tool for researchers working in this rapidly growing field.
See More
Chapter 1: Introduction.
1.1 Basic Concepts of Semiconductor Bonding and Band Diagrams.
1.2 The Invention of Semiconductor Lasers.
1.3 The Field of Optoelectronics.
1.4 Overview of the book.
Chapter 2: Basic Semiconductor Electronics.
2.1 Maxwell’s Equations and Boundary Conditions.
2.2 Semiconductor Electronics Equations.
2.3 Generation and Recombination in Semiconductors.
2.4 Examples and Applications to Optoelectronic Devices.
2.5 Semiconductor p-N and n-P Heterojunctions.
2.6 Semiconductor n-N Heterojunctions and Metal-Semiconductor Junctions.
Chapter 3: Basic Quantum Mechanics.
3.1 Schrödinger Equation.
3.2 The Square Well.
3.3 The Harmonic Oscillator.
3.4 The Hydrogen Atom and Excitons in 2D and 3D.
3.5 Time-Independent Perturbation Theory.
3.6 Time-Dependent Perturbation Theory .
Appendix 3A. Löwdin’s Renormalization Method.
Chapter 4: Theory of Electronic Band Structures in Semiconductors.
4.1 The Bloch Theorem and the k•p Method for Simple Bands.
4.2 Kane's Model for Band Structure--The k•p Method with the Spin-Orbit Interaction.
4.3 Luttinger-Kohn’s Model--The k•p Method for Degenerate Bands.
4.4 The Effective Mass Theory for a Single Band and Degenerate Bands.
4.5 Strain Effects on Band Structures.
4.6 Electronic States in an Arbitrary One-Dimensional Potential.
4.7 Kronig-Penny Model for a Superlattice.
4.8 Band Structures of Semiconductor Quantum Wells.
4.9 Band Structures of Strained Semiconductor Quantum Wells.
Chapter 5: Electromagnetics and Light Propagation.
5.1 Time-Harmonic Fields and Duality Principle.
5.2 Poynting's Theorem and Reciprocity Relations.
5.3 Plane Wave Solutions for Maxwell’s Equations in Homogeneous Media.
5.4 Light Propagation in Isotropic Media.
5.5 Wave Propagation in Lossy Medium-Lorentz Oscillator Model and Metal Plasma.
5.6 Plane Wave Reflection from a Surface.
5.7 Matrix Optics.
5.8 Propagation Matrix Approach for Plane Wave Reflection from a Multilayered Medium.
5.9 Wave Propagation in Periodic Media.
Appendix 5A Kramers-Kronig Relations.
Chapter 6: Light Propagation in Anisotropic Media and Radiation.
6.1 Light Propagation in Uniaxial Media.
6.2 Wave Propagation in Gyrotropic Media- Magnetooptic Effects.
6.3 General Solutions to Maxwell's Equations and Gauge Transformations.
6.4 Radiation and the Far-Field Pattern.
Chapter 7: Optical Waveguide Theory.
7.1 Symmetric Dielectric Slab Waveguides.
7.2 Asymmetric Dielectric Slab Waveguides.
7.3 Rectangular Dielectric Waveguides.
7.4 Ray Optics Approach to Waveguide Problems.
7.5 The Effective Index Method.
7.6 Wave Guidance in a Lossy or Gain Medium.
7.7 Surface Plasmon Waveguide.
Chapter 8: Coupled Mode Theory.
8.1 Waveguide Couplers.
8.2 Coupled Optical Waveguides.
8.3 Applications of Optical Waveguide Couplers.
8.4 Optical Ring Resonators and Add-Drop Filters.
8.5 Distributed Feedback Structures.
Appendix 8A Coupling Coefficients for Parallel Waveguides.
Appendix 8B Improved Coupled-Mode Theory.
Chapter 9: Optical Processes in Semiconductors.
9.1 Optical Transitions Using the Fermi’s Golden Rule.
9.2 Spontaneous and Stimulation Emissions.
9.3 Interband Absorption and Gain of Bulk Semiconductors.
9.4 Interband Absorption and Gain in a Quantum Well.
9.5 Interband Momentum Matrix Elements of Bulk and Quantum-Well Semiconductors.
9.6 Quantum Dots and Quantum Wires.
9.7 Intersubband Absorption.
9.8 Gain Spectrum in a Quantum-Well Laser with Valence-Band-Mixing Effects.
Appendix 9A Coordinate Transformation of the Basis Functions and the Momentum Matrix Elements.
Chapter 10: Fundamentals of Semiconductor Lasers.
10.1 Double Heterojunction Semiconductor Lasers.
10.2 Gain-Guided and Index-Guided Semiconductor Lasers.
10.3 Quantum-Well Lasers.
10.4 Strained Quantum-Well Lasers.
10.5 Strained Quantum-Dot Lasers.
Chapter 11: Advanced Semiconductor Lasers.
11.1 Distributed Feedback Lasers.
11.2 Vertical-Cavity Surface Emitting Lasers.
11.3 Microcavity and Photonics Crystal Lasers .
11.4 Quantum-Cascade Lasers.
11.5 GaN-based Blue-Green Lasers and LEDs.
11.6 Coupled Laser Arrays.
Appendix 11A. Hamiltonin for Strained Wurtzite Crystals.
Appendix 11B. Band-edge Optical Matrix Elements.
PART IV: MODULATION OF LIGHT.
Chapter 12: Direct Modulation of Semiconductor Lasers.
12.1 Rate Equations and Linear Gain Analysis.
12.2 High-Speed Modulation Response with Nonlinear Gain Saturation .
12.3 Transport Effects on Modulation of Quantum-Well Lasers: Electrical vs. Optical Modulation.
12.4 Semiconductor Laser Spectral Linewidth and the Linewidth Enhancement Factor.
12.5 Relative Intensity Noise (RIN) Spectrum.
Chapter 13: Electrooptic and Acoustooptic Modulators.
13.1 Electrooptic Effects and Amplitude Modulators.
13.2 Phase Modulators.
13.3 Electrooptic Effects in Waveguide Devices.
13.4 Scattering of Light by Sound: Raman-Nath and Bragg Diffractions.
13.5 Coupled-Mode Analysis for Bragg Acoustooptic Wave Couplers.
Chapter 14: Electroabsorption Modulators.
14.1 General Formulation for Optical Absorption due to an Electron-Hole Pair.
14.2 Franz-Keldysh Effect--Photon-Assisted Tunneling.
14.3 Exciton Effect.
14.4 Quantum Confined Stark Effect (QCSE).
14.5 Electroabsorption Modulator.
14.6 Integrated Electroabsorption Modulator-Laser (EML).
14.7 Self-Electrooptic Effect Devices (SEEDs).
Appendix 14A. Two-Particle Wave Function and the Effective Mass Equation.
Appendix 14B. Solution of the Electron-Hole Effective-Mass Equation with Exciton Effects.
PART V: DETECTION OF LIGHT AND SOLAR CELLS.
Chapter 15: Photodetectors and Solar Cells.
15.1 Photoconductors.
15.2 p-n Junction Photodiodes.
15.3 p-i-n Photodiodes.
15.4 Avalanche Photodiodes.
15.5 Intersubband Quantum-Well Photodetectors.
15.6 Solar Cells.
A. Semiconductor Heterojunction Band Lineups in the Model-Solid Theory.
B. Optical Constants of GaAs and InP.
C. Electronic Properties of Si, Ge, and Binary, Ternary, and Quarternary Compounds.
D. Parameters for GaN, InN, and AlN and Ternary InGaN, AlGaN, and AlGaN Compounds.
See More
Shun Lien Chuang, PhD, is the MacClinchie Distinguished Professor in the Department of Electrical and Computer Engineering at the University of Illinois, Urbana-Champaign. His research centers on
semiconductor optoelectronic and nanophotonic devices. He is a Fellow of the American Physical Society, IEEE, and the Optical Society of America. He received the Engineering Excellence Award from the
OSA, the Distinguished Lecturer Award and the William Streifer Scientific Achievement Award from the IEEE Lasers and Electro-Optics Society, and the Humboldt Research Award for Senior U.S. Scientists
from the Alexander von Humboldt Foundation.
See More
• Second edition is thoroughly updated and includes new material not available in book form elsewhere
• All problem sets have been updated and a new Solutions Manual is available
• This is an ideal graduate/professional introduction to a rapidly growing technology
See More
Buy Both and Save 25%!
Physics of Photonic Devices, 2nd Edition (US $168.00)
-and- Introduction to Adaptive Lenses (US $114.00)
Total List Price: US $282.00
Discounted Price: US $211.50 (Save: US $70.50)
Cannot be combined with any other offers. Learn more. | {"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470293195.html","timestamp":"2014-04-16T14:51:17Z","content_type":null,"content_length":"59820","record_id":"<urn:uuid:5bf5a49d-43da-498e-8da3-f480715de915>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 59
59 4.00 Binder C D 3.75 E F Tensile Strength (MPa) H 3.50 I J K M 3.25 N O 3.00 2.75 2.50 Modified binders are shown with dashed connect lines 110 130 150 170 Compaction Temperature (C) Figure 41.
Interaction plot for compaction temperature and binder on tensile strengths at 10°C. it was not possible to make general conclusions or observations observed in temperatures needed to achieve the
baseline coat- about trends regarding the effect of compaction temperature. ing percentage with the bucket mixer. Binders that had the largest residuals were O, C, B, and I. The temperatures to
achieve good coating for Binders O and C were much higher Correlation of Mixing and than predicted by the SSF method and for B and I coating was Compaction Temperatures achieved at temperatures much
lower than predicted by the A key part of this research was to compare the predicted SSF method. mixing and compaction temperatures from the candidate Figure 43 shows a similar set of plots of SSF
mixing temper- binder tests with the results of the mixture experiments. This atures versus the mixing temperatures to achieve 89% coating section presents that analysis using correlations performed
with the pugmill mixer. The regression statistics are somewhat with MINITAB release 15 statistical software. The regressions poorer compared with those with the bucket mixer. For this are based on
average values from replicate mix tests and binder data set, Binder E was considered as a possible outlier since the tests. Outputs included graphical plots of the data, least- predicted temperature
to achieve the baseline percentage was squares linear regression equations, 95% confidence intervals 222°F (106°C), which seems quite low and is outside of the for the regressions, the residual error
regression S, the coeffi- experimental range. cient of determination R2, and the adjusted R2. An ANOVA Figure 44 shows the correlations of mixing temperatures table was also generated for each
regression, which includes the from the Phase Angle method with the coating test results observed significance level (P-value) for the regression equa- using the bucket mixer. As noted above, the
bucket mixer coat- tions. For several correlations, one or two data points were ing test results for Binder H were not considered valid, so these identified as suspected outliers. Discussion of the
outliers is data were removed and the correlations were performed again. presented as part of the analysis associated with the particular The statistics for the regressions in these two plots are
poorer correlation. In each case where outlier data is suspected, the than for the SSF method. As with the SSF correlations, the correlations are provided both with and without the question- binders
that have large residuals include Binders O, C, I, and able data. B. Binder M also has a large residual in the Phase Anglebucket Figure 42 shows the correlations of the mixing tempera- mixer coating
test correlations. This is the binder that includes tures from the Steady Shear Flow (SSF) method with the mix- the Sasobit® wax. The large residual with this binder could ing temperatures to achieve
98% coating using the bucket indicate that the Phase Angle method is not a good predictor mixer. The bucket mixer coating test results for Binder H was of mixing temperatures for binders with
Sasobit®, perhaps not considered reasonable as the predicted temperature to because the Phase Angle measurements are made at below achieve the baseline coating percentage was outside of the
temperatures where the Sasobit® wax melts. The poor correla- experimental range. The regression statistics indicate the SSF tions with O, C, I, and B for both the SSF and the Phase Angle mixing
temperature explains only about 40% of the variation methods is evidence that the problematic data likely arise
OCR for page 59
60 Bucket Mix T = - 70.9 + 1.259 SSF Mix T 425 Regression H 95% CI 400 S 29.9023 R-Sq 43.5% R-Sq(adj) 38.4% 375 C Bucket Mix T O G 350 N 325 F B 300 K J D E M I 275 250 280 290 300 310 320 330 340
SSF Mix T (a) Bucket Mix T = 36.9 + 0.8928 SSF Mix T 375 Regression 95% CI C S 22.1202 350 O G R-Sq 40.8% R-Sq(adj) 34.9% N Bucket Mix T 325 F B 300 K J D E M I 275 250 280 290 300 310 320 330 340
SSF Mix T (b) Figure 42. Correlations of the SSF mixing temperature with the bucket mixer temperature for 98% coating: (a) all data; (b) excludes Binder H. from the bucket mixer coating results
rather than the can- Overall, the correlations between the mixing temperatures didate methods for determining mixing and compaction from the candidate methods and the coating test results are
temperatures. fairly weak, generally with R2 values in the range of 30 to 40%. Figure 45 shows the correlations of the Phase Angle method However, the lack of strength of the correlations is likely
due predicted mixing temperatures with the temperatures to more to the coating test results that are based on curve-fitting achieve the baseline coating percentage in the pugmill mixer. through data
from subjective measurements that lack good As discussed with the SSF methodpugmill mixer correlation, repeatability. the coating test results for Binder E in the pugmill were not Figure 46 and
Figure 47 show correlations between the can- considered valid. The regression statistics between the Phase didate methods mixing temperatures and the midpoints of the Angle method and the pugmill
mixer coating results are slightly mixing temperature range recommended by the binder suppli- better than for the SSF method. A summary of correlation ers. The correlation between the SSF method
mixing tempera- statistics is shown in Table 36. ture and the suppliers' recommended mixing temperatures is
OCR for page 59
61 Pugmill Mix T = - 71.5 + 1.244 SSF Mix T 400 Regression 95% CI B N S 34.8156 R-Sq 35.7% 350 M R-Sq(adj) 29.9% I G C Pugmill Mix T J O H K F 300 D 250 E 200 280 290 300 310 320 330 340 SSF Mix T
(a) Pugmill Mix T_1 = 28.4 + 0.9426 SSF Mix T Regression B 375 95% CI N S 27.3356 R-Sq 34.6% 350 M R-Sq(adj) 28.1% Pugmill Mix T_1 I G C 325 J O H K F 300 275 D 250 280 290 300 310 320 330 340 SSF
Mix T (b) Figure 43. Correlations of the SSF mixing temperature with the pugmill mixer temperature for 89% coating: (a) all data; (b) excludes Binder E. quite reasonable, with an R2 of 70%. The
correlation of the Figure 48 and Figure 49 show the correlations between the Phase Angle method mixing temperatures with the suppliers' workability tests and the results of the candidate methods for
recommended mixing temperatures is very weak unless the determining mixing and compaction temperatures. For these results from Binder M are removed. The issue with the Saso- correlations, the
temperatures midway between the mixing bit® wax in Binder M was noted previously. With this data and compaction temperatures from the candidate methods point removed, the correlation statistics
improve consider- were used as the independent variable. Although the correla- ably, although not as strong as with SSF method. However, it tions are weak as indicated by the low correlation
coefficients, is also important to note that the regression equation between the regressions were statistically significant ( =0.05). No data the phase angle method and producers' recommendations
were excluded from these correlations. However, most of the was slightly closer to the line of equality than for the SSF scatter is likely due to poor precision of the workability tests. method.
Overall, these correlations show that both methods Despite numerous attempts to improve the workability equip- provide results generally consistent with field experience and ment and test method
during this study, there is considerable therefore pass a test of reasonableness. doubt about the validity of the test as an indicator of binder
OCR for page 59
62 Bucket Mix T = - 48.5 + 1.153 Phase Angle Mix T 425 Regression H 95% CI 400 S 34.8178 R-Sq 23.5% R-Sq(adj) 16.5% 375 C Bucket Mix T O G 350 N 325 F B 300 KJ D E M I 275 250 300 310 320 330 340
Phase Angle Mix T (a) Bucket Mix T_1 = 58.5 + 0.7970 Phase Angle Mix T C Regression 360 95% CI O G S 25.3382 R-Sq 22.4% 340 N R-Sq(adj) 14.6% Bucket Mix T_1 F 320 B 300 J K D E M I 280 260 300 310
320 330 340 Phase Angle Mix T (b) Figure 44. Correlation of Phase Angle Method mixing temperature with the bucket mixer temperature for 98% coating: (a) all data; (b) excludes Binder H. stiffness on
mix workability. The average temperature differ- tests: Binders I, J, and N. It is clear from the correlation graphs ence for the duplicate runs on all of the workability tests was that the SGC
compaction test temperatures for I and J are well 26°F, which is a similar magnitude as the residuals for the cor- above the reasonable range for these binders. Binder N is a relations. heavily
modified binder that was expected to require a rela- Correlations between the compaction temperatures pre- tively high temperature to achieve the baseline density level. dicted by the candidate
methods with the results of the com- Its compaction experiment results were just above the highest paction tests are shown in Figure 50 and Figure 51. Excluding test temperature used in the
compaction experiment. the data for Binders I and J, both of the candidate methods' A summary of the regression statistics for the correlations compaction temperatures correlate well with the results
from between the candidate methods and mix test results is shown the mix compaction tests. Three binders had compaction test in Table 36. Also shown are the key statistics from the corre- results
that were outside of the experimental range for the lations with the respective midpoints of the binder producers'
OCR for page 59
63 Pugmill Mix T = - 188.0 + 1.572 Phase Angle Mix T 400 Regression 95% CI B N S 34.5577 R-Sq 36.7% 350 M R-Sq(adj) 30.9% I G C Pugmill Mix T J O H K F 300 D 250 E 200 300 310 320 330 340 Phase Angle
Mix T (a) Pugmill Mix T_1 = - 86.2 + 1.275 Phase Angle Mix T B Regression 375 95% CI N S 25.7628 R-Sq 41.9% 350 M R-Sq(adj) 36.1% I G Pugmill Mix T_1 C 325 J O H K F 300 275 D 250 300 310 320 330 340
Phase Angle Mix T (b) Figure 45. Correlation of Phase Angle method mixing temperature with the pugmill mixer temperature for 98% coating: (a) all data; (b) excludes Binder E. Table 36. Summary of
correlation statistics. Steady Shear Flow Phase Angle Residual Residual Mix Test R2 P-value R2 P-value Error Error Bucket 22.1 40.8 0.025 25.3 22.4 0.121 Pugmill 27.3 34.6 0.044 25.7 41.9 0.023
Workability 34.0 30.6 0.050 30.5 44.3 0.013 Compaction 18.7 68.4 0.002 16.7 74.8 0.001 Producers' Midpoint 7.8 70.1 0.000 8.8 58.2 0.004
OCR for page 59
Prod. Midpoint = 136.0 + 0.5724 SSF Mix T 340 Regression N G 95% CI S 7.79965 I 330 R-Sq 70.1% R-Sq(adj) 67.4% O H 320 Prod. Midpoint F J C B 310 D E 300 K M 290 280 280 290 300 310 320 330 340 SSF
Mix T Figure 46. Correlation of the SSF mixing temperature with the midpoint of the binder producers' recommended mixing range. Prod. Midpoint = 170.9 + 0.4468 Phase Angle Mix T 340 Regression G N
95% CI S 12.1468 I 330 R-Sq 27.4% R-Sq(adj) 20.8% H Prod. Midpoint O 320 F B C J 310 D E 300 K M 290 300 310 320 330 340 Phase Angle Mix T (a) Prod. Midpoint = 115.7 + 0.6269 Phase Angle Mix T_1
Regression 340 95% CI N G S 8.81614 I R-Sq 58.2% 330 R-Sq(adj) 54.0% Prod. Midpoint O H 320 F J B C 310 D E 300 K 290 300 310 320 330 340 Phase Angle Mix T_1 (b) Figure 47. Correlation of the Phase
Angle Method mixing temperature with the midpoint of the binder producers' recommended mixing range: (a) all data; (b) excludes Binder M.
OCR for page 59
65 EWT = - 27.3 + 1.128 SSF mid Regression 375 95% CI I H S 34.0257 C R-Sq 30.6% N 350 R-Sq(adj) 24.3% M 325 EWT D B 300 E F G 275 K J O 250 270 280 290 300 310 320 330 SSF mid Figure 48. Correlation
of SSF mixing and compaction temperature midpoint with workability experiment equivalent torque temperature. recommended mixing temperatures. These statistics are based method statistics appeared to
be slightly better than the other on the correlations without suspected outliers, even where correlations with mix test results. the statistics did not improve with these data excluded. Lower
Statistical analyses were conducted to determine whether P-values indicate the correlation is more significant. A P-value the SSF method or the Phase Angle method provided a better of 0.05 or less is
generally considered a statistically significant overall fit to the experimental data from the mixture tests and correlation. In general, the correlation statistics are similar for the binder
producers' recommended mixing temperatures. the two methods. The SSF method had better correlation sta- The analyses used an F-statistic to test the null hypothesis that tistics with the bucket mixer
coating test results and the pro- the residual variances were equal, H0: 2(SSF) = 2(PA), for ducers' recommended mixing temperatures. The Phase Angle each regression summarized in Table 36. The
F-test is an EWT = - 241.3 + 1.785 Phase Angle Mid Regression 375 95% CI I H C S 30.4978 N R-Sq 44.3% 350 R-Sq(adj) 39.2% M 325 B EWT D 300 F E G 275 KJ O 250 290 300 310 320 330 Phase Angle Mid
Figure 49. Correlation of Phase Angle method mixing and compaction temperature midpoint with workability experiment equivalent torque temperature. | {"url":"http://www.nap.edu/openbook.php?record_id=14367&page=59","timestamp":"2014-04-20T23:49:34Z","content_type":null,"content_length":"80002","record_id":"<urn:uuid:f577a829-f857-4c6a-9b68-f37129817d85>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fandom Forums - View Single Post - 48÷2(9+3) = D'awww
Originally Posted by
Brush up on your math skills kael, you can't even solve a simple math problem. PEMDAS states to perform all operations INSIDE the Parenthesis. 2 isn't within the parenthesis. If you use it right, 48/
2(9+3)=288. Not 2. Just like how 1/2(2+2) isn't 1/8. You need a calculator, brah?
I suggest you do the same, kid. PEMDAS is supposed to eliminate each part of the equation, meaning you get rid of the parenthesis first, meaning you multiply the 2 with the result of the parenthesis.
But, like num said, the problem is poorly structured and people will solve it different ways. My way, which eliminates the parenthesis (as you're supposed to) comes up with 2. If it wasn't poorly
structured then there wouldn't be 4 damn answers. | {"url":"http://www.fandom.com/forums/showpost.php?p=1943315&postcount=102","timestamp":"2014-04-24T12:24:00Z","content_type":null,"content_length":"16598","record_id":"<urn:uuid:79034385-2a20-4743-9f1e-9b0590b5693d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Puzzle Trivia, Quizzes, and Brain Teasers
Today's mentalfloss.com Brain Game Think Thursday challenge needs but a number for its "solvation." Good luck! Identify the missing number in this sequence: 2, 2, 4, 12, 16, ?, 86, 602, ... Here is
the ANSWER. THE ANSWER: 80 The sequence is times-1, plus-2, times-3, plus-4, times-5, plus-6, etc. Thanks for playing! Tomorrow, it's Free-for-all... READ ON | {"url":"http://www.mentalfloss.com/section/puzzle/page/15/0","timestamp":"2014-04-21T00:38:57Z","content_type":null,"content_length":"68308","record_id":"<urn:uuid:5571b37d-8fa4-4138-9a01-040597ad588b>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
3.1 Eulerian Dynamical Core
Next: 3.2 Semi-Lagrangian Dynamical Core Up: 3. Dynamics Previous: 3. Dynamics   Contents
The hybrid vertical coordinate that has been implemented in CAM 3.0 is described in this section. The hybrid coordinate was developed by Simmons and Strüfing [158] in order to provide a general
framework for a vertical coordinate which is terrain following at the Earth's surface, but reduces to a pressure coordinate at some point above the surface. The hybrid coordinate is more general in
concept than the modified Sangster [155], which is used in the GFDL SKYHI model. However, the hybrid coordinate is normally specified in such a way that the two coordinates are identical.
The following description uses the same general development as Simmons and Strüfing [158], who based their development on the generalized vertical coordinate of Kasahara [84]. A specific form of the
coordinate (the hybrid coordinate) is introduced at the latest possible point. The description here differs from Simmons and Strüfing [158] in allowing for an upper boundary at finite height (nonzero
pressure), as in the original development by Kasahara. Such an upper boundary may be required when the equations are solved using vertical finite differences.
Deriving the primitive equations in a generalized terrain-following vertical coordinate requires only that certain basic properties of the coordinate be specified. If the surface pressure is
The latter requirement provides that the top of the model will be a pressure surface, simplifying the specification of boundary conditions. In the case that Simmons and Strüfing [158]. The boundary
conditions that are required to close the system are:
Given the above description of the coordinate, the continuous system of equations can be written following Kasahara [84] and Simmons and Strüfing [158]. The prognostic equations are:
The notation follows standard conventions, and the following terms have been introduced with
The terms
In addition to the prognostic equations, three diagnostic equations are required:
Note that the bounds on the vertical integrals are specified as values of e.g. e.g.
Equations (3.1)-(3.16) are the complete set which must be solved by a GCM. However, in order to solve them, the function
The vertical advection terms in (3.5), (3.6), (3.8), and (3.9) may be rewritten as:
since 3.15). Similarly, the first term on the right-hand side of (3.15) can be expanded as
and (3.7) invoked to specify
The integrals which appear in (3.7), (3.15), and (3.16) can be written more conveniently by expanding the kernel as
The second term in (3.19) is easily treated in vertical integrals, since it reduces to an integral in pressure. The first term is expanded to:
The second term in (3.20) vanishes because 3.20) into (3.19), one obtains:
Using (3.21) as the kernel of the integral in (3.7), (3.15), and (3.16), one obtains integrals of the form
The original primitive equations (3.3)-(3.7), together with (3.8), (3.9), and (3.14)-(3.16) can now be rewritten with the aid of (3.17), (3.18), and (3.22).
Once 3.23)-(3.32) can be solved in a GCM.
In the actual definition of the hybrid coordinate, it is not necessary to specify 3.23)-(3.32) only requires that 3.1.7. In the case that 3.23)-(3.32) can be reduced to the set of equations solved by
In practice, the solutions generated by solving the above equations are excessively noisy. This problem appears to arise from aliasing problems in the hydrostatic equation (3.30). The 3.24). Large
gravity waves are generated in the vicinity of steep orography, such as in the Pacific Ocean west of the Andes.
The noise problem is solved by converting the equations given above, which use
Equations (3.23)-(3.32) become:
The above equations reduce to the standard
The model described by (3.33)-(3.42), without the horizontal diffusion terms, together with boundary conditions (3.1) and (3.2), is integrated in time using the semi-implicit leapfrog scheme
described below. The semi-implicit form of the time differencing will be applied to (3.34) and (3.36) without the horizontal diffusion sources, and to (3.37). In order to derive the semi-implicit
form, one must linearize these equations about a reference state. Isolating the terms that will have their linear parts treated implicitly, the prognostic equations (3.33), (3.34), and (3.37) may be
rewritten as:
where 3.43)-(3.45). The terms involving 3.40) and (3.42), while the
Once again, only terms that will be linearized have been explicitly represented in (3.46)-(3.48), and the remaining terms are included in 3.46) and (3.47). Furthermore, the virtual temperature
corrections are included with the other nonlinear terms.
In order to linearize (3.46)-(3.48), one specifies a reference state for temperature and pressure, then expands the equations about the reference state:
In the special case that 3.46)-(3.48) can be converted into equations involving only 3.50) and (3.51) are not required. This is a major difference between the hybrid coordinate scheme being developed
here and the
Expanding (3.46)-(3.48) about the reference state (3.49)-(3.51) and retaining only the linear terms explicitly, one obtains:
The semi-implicit time differencing scheme treats the linear terms in (3.52)-(3.54) by averaging in time. The last integral in (3.52) is reduced to purely linear form by the relation
In the hybrid coordinate described below,
We will assume that centered differences are to be used for the nonlinear terms, and the linear terms are to be treated implicitly by averaging the previous and next time steps. Finite differences
are used in the vertical, and are described in the following sections. At this stage only some very general properties of the finite difference representation must be specified. A layering structure
is assumed in which field values are predicted on 3.1). The interface between
The finite difference forms of (3.52)-(3.54) may then be written down as:
where 3.43)-(3.45). The components of
We shall impose a requirement on the vertical finite differences of the model that they conserve the global integral of total energy in the absence of sources and sinks. We need to derive equations
for kinetic and internal energy in order to impose this constraint. The momentum equations (more painfully, the vorticity and divergence equations) without the
to give an equation for the rate of change of kinetic energy:
The first two terms on the right-hand side of (3.60) are transport terms. The horizontal integral of the first (horizontal) transport term should be zero, and it is relatively straightforward to
construct horizontal finite difference schemes that ensure this. For spectral models, the integral of the horizontal transport term will not vanish in general, but we shall ignore this problem.
The vertical integral of the second (vertical) transport term on the right-hand side of (3.60) should vanish. Since this term is obtained from the vertical advection terms for momentum, which will be
finite differenced, we can construct a finite difference operator that will ensure that the vertical integral vanishes.
The vertical advection terms are the product of a vertical velocity ( 3.42), which are naturally taken to interfaces. The vertical derivatives are also naturally taken to interfaces, so the product
is formed there, and then adjacent interface values of the products are averaged to give a midpoint value. It is the definition of the average that must be correct in order to conserve kinetic energy
under vertical advection in (3.60). The derivation will be omitted here, the resulting vertical advection terms are of the form:
The choice of definitions for the vertical velocity at interfaces is not crucial to the energy conservation (although not completely arbitrary), and we shall defer its definition until later. The
vertical advection of temperature is not required to use (3.61) in order to conserve mass or energy. Other constraints can be imposed that result in different forms for temperature advection, but we
will simply use (3.61) in the system described below.
The last two terms in (3.60) contain the conversion between kinetic and internal (potential) energy and the form drag. Neglecting the transport terms, under assumption that global integrals will be
taken, noting that 3.40), (3.60) can be written as:
The second term on the right-hand side of (3.64) is a source (form drag) term that can be neglected as we are only interested in internal conservation properties. The last term on the right-hand side
of (3.64) can be rewritten as
The global integral of the first term on the right-hand side of (3.64) is obviously zero, so that (3.64) can now be written as:
We now turn to the internal energy equation, obtained by combining the thermodynamic equation (3.36), without the 3.59):
As in (3.60), the first two terms on the right-hand side are advection terms that can be neglected under global integrals. Using (3.16), (3.66) can be written as:
The rate of change of total energy due to internal processes is obtained by adding (3.65) and (3.67) and must vanish. The first terms on the right-hand side of (3.65) and (3.67) obviously cancel in
the continuous form. When the equations are discretized in the vertical, the terms will still cancel, providing that the same definition is used for 3.38) and (3.39), and in the 3.36) and (3.42).
The second terms on the right-hand side of (3.65) and (3.67) must also cancel in the global mean. This cancellation is enforced locally in the horizontal on the column integrals of (3.65) and (3.67),
so that we require:
The inner integral on the left-hand side of (3.68) is derived from the hydrostatic equation (3.40), which we shall approximate as
where 3.68) is derived from the vertical velocity equation (3.42), which we shall approximate as
where 3.62). Williamson and Olson [185]. Using (3.69) and (3.71), the finite difference analog of (3.68) is
where we have used the relation
(see 3.22). We can now combine the sums in (3.72) and simplify to give
Interchanging the indexes on the left-hand side of (3.74) will obviously result in identical expressions if we require that
Given the definitions of vertical integrals in (3.70) and (3.71) and of vertical advection in (3.61) and (3.62) the model will conserve energy as long as we require that 3.75). We are, of course,
still neglecting lack of conservation due to the truncation of the horizontal spherical harmonic expansions.
CAM 3.0 contains a horizontal diffusion term for
In the top three model levels, the
Since these terms are linear, they are easily calculated in spectral space. The undifferentiated correction term is added to the vorticity and divergence diffusion operators to prevent damping of
uniform 24,133]. The only to pressure surfaces in the standard model configuration.
The horizontal diffusion operator is better applied to pressure surfaces than to terrain-following surfaces (applying the operator on isentropic surfaces would be still better). Although the
governing system of equations derived above is designed to reduce to pressure surfaces above some level, problems can still occur from diffusion along the lower surfaces. Partial correction to
pressure surfaces of harmonic horizontal diffusion (
Retaining only the first two terms above gives a correction to the
Similarly, biharmonic diffusion can be partially corrected to pressure surfaces as:
The bi-harmonic
The second term in
3.1.7 Finite difference equations
The governing equations are solved using the spectral method in the horizontal, so that only the vertical and time differences are presented here. The dynamics includes horizontal diffusion of
for 3.92). The first step from
The following finite-difference description details only the forecast given by (3.87) and (3.90). The finite-difference form of the forecast equation for water vapor will be presented later in
Section 3c. The general structure of the complete finite difference equations is determined by the semi-implicit time differencing and the energy conservation properties described above. In order to
complete the specification of the finite differencing, we require a definition of the vertical coordinate. The actual specification of the generalized vertical coordinate takes advantage of the
structure of the equations (3.33)-(3.42). The equations can be finite-differenced in the vertical and, in time, without having to know the value of
which gives
A set of levels 3.33)-(3.42) may be derived.
The finite difference forms of the Dyn operator (3.33)-(3.42), including semi-implicit time integration are:
where notation such as
The matrices i.e. with components 3.96) and (3.105) at the cost of some loss of generality in the code. The finite difference equations have been written in the form (3.95)-(3.112) because this form
is quite general. For example, the equations solved by Simmons and Strüfing [158] at ECMWF can be obtained by changing only the vectors and hydrostatic matrix defined by (3.109)-(3.112).
The time step is completed by applying a recursive time filter originally designed by [149] and later studied by [6].
The spectral transform method is used in the horizontal exactly as in CCM1. As shown earlier, the vertical and temporal aspects of the model are represented by finite-difference approximations. The
horizontal aspects are treated by the spectral-transform method, which is described in this section. Thus, at certain points in the integration, the prognostic variables 24,46,118]. In this report,
we present only the details relevant to the model code; for more details and general philosophy, the reader is referred to these earlier papers.
The horizontal representation of an arbitrary variable
where Machenhauer [118]. The model is coded for a general pentagonal truncation, illustrated in Figure 3.2, defined by three parameters:
The quantity 3.114) represents an arbitrary limit on the two-dimensional wavenumber
The associated Legendre polynomials used in the model are normalized such that
With this normalization, the Coriolis parameter
which is required for the absolute vorticity.
The coefficients of the spectral representation (3.114) are given by
The inner integral represents a Fourier transform,
which is performed by a Fast Fourier Transform (FFT) subroutine. The outer integral is performed via Gaussian quadrature,
The weights themselves satisfy
The Gaussian grid used for the north-south transformation is generally chosen to allow un-aliased computations of quadratic terms only. In this case, the number of Gaussian latitudes
For the common truncations, these become
In order to allow exact Fourier transform of quadratic terms, the number of points
The actual values of
Although in the next section of this model description, we continue to indicate the Gaussian quadrature as a sum from pole to pole, the code actually deals with the symmetric and antisymmetric
components of variables and accumulates the sums from equator to pole only. The model requires an even number of latitudes to easily use the symmetry conditions. This may be slightly inefficient for
some spectral resolutions. We define a new index, which goes from i.e., let 3.120) can be rewritten as
The symmetric (even) and antisymmetric (odd) components of
Since 3.128) can be rewritten to give formulas for the coefficients of even and odd spherical harmonics:
The model uses the spectral transform method [118] for all nonlinear terms. However, the model can be thought of as starting from grid-point values at time
In order to describe the transformation to spectral space, for each equation we first group together all undifferentiated explicit terms, all explicit terms with longitudinal derivatives, and all
explicit terms with meridional derivatives appearing in the Dyn operator. Thus, the vorticity equation (3.95) is rewritten
where the explicit forms of the vectors
The divergence equation (3.96) is
The mean component of the temperature is not included in the next-to-last term since the Laplacian of it is zero. The thermodynamic equation (3.98) is
The surface-pressure tendency (3.98) is
The grouped explicit terms in (3.135)-(3.137) are given as follows. The terms of (3.135) are
The terms of (3.136) are
The nonlinear term in (3.137) is
Formally, Equations (3.131)-(3.137) are transformed to spectral space by performing the operations indicated in (3.146) to each term. We see that the equations basically contain three types of terms,
for example, in the vorticity equation the undifferentiated term
Transformation of the undifferentiated term is obtained by straightforward application of (3.118)-(3.120),
so that the Fourier transform is performed first, then the differentiation is carried out in spectral space. The transformation to spherical harmonic space then follows (3.152):
The latitudinally differentiated term is handled by integration by parts using zero boundary conditions at the poles:
Defining the derivative of the associated Legendre polynomial by
(3.155) can be written
Similarly, the
to each spherical harmonic function individually so that
The prognostic equations can be converted to spectral form by summation over the Gaussian grid using (3.146), (3.150), and (3.154). The resulting equation for absolute vorticity is
The spectral form of the divergence equation (3.135) becomes
where 3.135) is replaced by the equivalent Laplacian of the perturbation temperature in (3.159).
The spectral thermodynamic equation is
while the surface pressure equation is
Equation (3.157) for vorticity is explicit and complete at this point. However, the remaining equations (3.159)-(3.163) are coupled. They are solved by eliminating all variables except
which is simply a set of 3.165) then always guarantees 3.161) and (3.163), respectively, and all prognostic variables are known at time
As mentioned earlier, the horizontal diffusion in (3.88) and (3.91) is computed implicitly via time splitting after the transformations into spectral space and solution of the semi-implicit
equations. In the following, the
The extra term is present in (3.167), (3.171) and (3.173) to prevent damping of uniform rotations. The solutions are just
3.115), and temporarily reduces the effective resolution of the model in the affected levels. The number of levels at which this ``Courant number limiter'' may be applied is user-selectable, but it
is only used in the top level of the 26 level CAM 3.0 control runs.
The diffusion of 3.82) local, it is not included until grid-point values are available. This requires that
Occasionally, with poorly balanced initial conditions, the model exhibits numerical instability during the beginning of an integration because of excessive noise in the solution. Therefore, an
optional divergence damping is included in the model to be applied over the first few days. The damping has an initial e-folding time of
After the prognostic variables are completed at time
The inner sum is done essentially as a vector product over
In addition, the derivatives of
These required derivatives are given by
which involve basically the same operations as (3.178). The other variables needed on the grid are
in which the only nonzero
Thus, the direct transformation is
The horizontal diffusion tendencies are also transformed back to grid space. The spectral coefficients for the horizontal diffusion tendencies follow from (3.167) and (3.168):
using 3.114) for the 3.186) and (3.187) for vorticity and divergence. Thus, the vorticity and divergence diffusion tendencies are converted to equivalent
After grid-point values are calculated, frictional heating rates are determined from the momentum diffusion tendencies and are added to the temperature, and the partial correction of the
These heating rates are then combined with the correction,
The vertical derivatives of
The corrections are added to the diffusion tendencies calculated earlier (3.188) to give the total temperature tendency for diagnostic purposes:
The forecast equation for water vapor specific humidity and constituent mixing ratio in the 3.36) excluding sources and sinks.
Equation (3.199) is more economical for the semi-Lagrangian vertical advection, as
The parameterizations are time-split in the moisture equation. The tendency sources have already been added to the time level
In the semi-Lagrangian form used here, the general form is
Equation (3.202) represents the horizontal interpolation of 3.203) represents the vertical interpolation of
The horizontal departure points are found by first iterating for the mid-point of the trajectory, using winds at time
where subscript
Once the iteration of (3.204) and (3.205) is complete, the departure point is given by
where the subscript
The form given by (3.204)-(3.207) is inaccurate near the poles and thus is only used for arrival points equatorward of 70Williamson and Rasch [186]. The transformed system is rotated about the axis
The calculation of the departure point in the local geodesic system is identical to (3.204)-(3.207) with all variables carrying a prime. The equations can be simplified by noting that
The interpolants are most easily defined on the interval 0
where 146] with
Following (3.2.12) and (3.2.13) of Hildebrand [71], the Lagrangian cubic polynomial interpolant used for the velocity interpolation, is given by
The derivative approximations used in (3.214) for 3.215) with respect to 3.214) is equivalent to the Lagrangian (3.215). If we denote the four point stencil
The two dimensional
Once the departure point is known, the constituent value of 3.202) by Hermite cubic interpolation (3.214), with cubic derivative estimates (3.215) and (3.216) modified to satisfy the Sufficient
Condition for Monotonicity with C
First, if
Then, if either
is violated,
The horizontal semi-Lagrangian sub-step (3.202) is followed by the vertical step (3.203). The vertical velocity 3.94) by
Note, the arrival point
which is equivalent to assuming that
with the restriction
The appropriate values of 3.214), with the derivative estimates given by (3.215) and (3.216) for 3.231) and (3.233)), 3.227) to (3.228) are applied to the
3.1.19 Mass fixers
This section describes original and modified fixers used for the Eulerian and semi-Lagrangian dynamical cores.
Since the physics parameterizations do not change the surface pressure,
The fixers which ensure conservation are applied to the dry atmospheric mass, water vapor specific humidity and constituent mixing ratios. For water vapor and atmospheric mass the desired discrete
relations, following Williamson and Olson [185] are
and the integral
preserving the horizontal gradient of
In (3.237) and (3.238) the 3.238) forces the arbitrary corrections to be small when the mixing ratio is small and when the change made to the mixing ratio by the advection is small. In addition, the
141]. Satisfying (3.234) and (3.235) gives
Note that water vapor and dry mass are corrected simultaneously. Additional advected constituents are treated as mixing ratios normalized by the mass of dry air. This choice was made so that as the
water vapor of a parcel changed, the constituent mixing ratios would not change. Thus the fixers which ensure conservation involve the dry mass of the atmosphere rather than the moist mass as in the
case of the specific humidity above. Let
The term Rasch et al. [141] the change made by the fixer has the same form as (3.238)
Substituting (3.242) into (3.241) and using (3.237) through (3.240) gives
where the following shorthand notation is adopted:
We note that there is a small error in (3.241). Consider a situation in which moisture is transported by a physical parameterization, but there is no source or sink of moisture. Under this
circumstance only once within the model time step, and use it consistently throughout the model. In this revision, we have chosen to fix the dry air mass in the model time step where the surface
pressure is updated, e.g. at the end of the model time step. Therefore, we now replace (3.241) with
There is a corresponding change in the first term of the numerator of (3.243) in which 3.243) for water substances and constituents affecting the temperature field to prevent changes to the IPCC
simulations. In the future, constituent fields may use a corrected version of (3.243).
3.1.20 Energy Fixer
Following notation in section 3.1.19, the total energy integrals are
where 3.237)
and from (3.236)
The energy fixer is chosen to have the form
3.1.21 Statistics Calculations
At each time step, selected global average statistics are computed for diagnostic purposes when the model is integrated with the Eulerian and semi-Lagrangian dynamical cores. Let
where recall that
The quantities monitored are:
The Eulerian core and semi-Lagrangian tracer transport can be run on reduced grids. The term reduced grid generally refers to a grid based on latitude and longitude circles in which the longitudinal
grid increment increases at latitudes approaching the poles so that the longitudinal distance between grid points is reasonably constant. Details are provided in [187]. This option provides a saving
of computer time of up to 25%.
Next: 3.2 Semi-Lagrangian Dynamical Core Up: 3. Dynamics Previous: 3. Dynamics   Contents Jim McCaa 2004-06-22 | {"url":"http://www.cesm.ucar.edu/models/atm-cam/docs/description/node11.html","timestamp":"2014-04-18T20:46:28Z","content_type":null,"content_length":"307600","record_id":"<urn:uuid:cebdf956-9405-4a01-93e5-f630f725e3a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Andre Holzner's Blog
Here is an animation of how the function minimizer Minuit minimizes a function. The function to be minimized depends on two parameters so that one can visualize it. It was inspired by the ‘banana
shaped valley’ function so that Minuit must take several steps to find the minimum. The value of the goal function is indicated by the color, i.e. the goal function is a ‘spiral’ whose depth
increases as one goes more counterclockwise.
The types of the steps (shown at the top left) were inferred from the functions from which it was called. The minimization typically has two alternating phases: numerical determination of the
gradient and line search along some promising direction. At the end, the second derivative is calculated (although the function has a discontinuity on one side).
The red point shows the coordinates with which Minuit calls a goal function to be minimized.
The arrow at the top left shows the direction from the previous to the current step. For example during the line search phase, the direction remains mostly fixed or is reversed while during the
gradient calculation phase it looks like first the horizontal component is calculated and then the vertical one.
Read Full Post
Make a Comment
None so far
Recently on Andre Holzner's Blog… | {"url":"http://aholzner.wordpress.com/","timestamp":"2014-04-18T10:34:34Z","content_type":null,"content_length":"30631","record_id":"<urn:uuid:d25202ae-98eb-40ed-a3cd-d36d3670bc62>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
What polygon is a quadrangle with no right angles and congruent sides?
A triangle is one of the basic shapes in geometry: a polygon with three corners or vertices and three sides or edges which are line segments. A triangle with vertices A, B, and C is denoted $\
triangle ABC$.
In Euclidean geometry any three points, when non-collinear, determine a unique triangle and a unique plane (i.e. a two-dimensional Euclidean space).
In geometry, an equilateral triangle is a triangle in which all three sides are equal. In traditional or Euclidean geometry, equilateral triangles are also equiangular; that is, all three
internal angles are also congruent to each other and are each 60°. They are regular polygons, and can therefore also be referred to as regular triangles.
In geometry, an equilateral polygon is a polygon which has all sides of the same length.
For instance, an equilateral triangle is a triangle of equal edge lengths. All equilateral triangles are similar to each other, and have 60 degree internal angles.
In geometry, polygons are associated into pairs called duals, where the vertices of one correspond to the edges of the other.
Regular polygons are self-dual.
In Euclidean geometry, an equiangular polygon is a polygon whose vertex angles are equal. If the lengths of the sides are also equal then it is a regular polygon.
The only equiangular triangle is the equilateral triangle. Rectangles, including the square, are the only equiangular quadrilaterals (four-sided figures).
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that
brings about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during
wartime, an interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry: the Elements. Euclid's method consists in
assuming a small set of intuitively appealing axioms, and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated by earlier mathematicians, Euclid
was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in secondary school as the first
axiomatic system and the first examples of formal proof. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory,
explained in geometrical language.
For more than two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so intuitively obvious (with the possible
exception of the parallel postulate) that any theorem proved from them was deemed true in an absolute, often metaphysical, sense. Today, however, many other self-consistent non-Euclidean geometries
are known, the first ones having been discovered in the early 19th century. An implication of Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean
space is a good approximation for it only where the gravitational field is weak.
Related Websites: | {"url":"http://answerparty.com/question/answer/what-polygon-is-a-quadrangle-with-no-right-angles-and-congruent-sides","timestamp":"2014-04-17T02:09:31Z","content_type":null,"content_length":"31728","record_id":"<urn:uuid:c5d276be-ef71-4a6c-bfbc-f5941e520e82>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
summary of the BDD algorithm in ACL2
Major Section: BDD
The BDD algorithm in ACL2 uses a combination of manipulation of IF terms and unconditional rewriting. In this discussion we begin with some relevant mathematical theory. This is followed by a
description of how ACL2 does BDDs, including concluding discussions of soundness, completeness, and efficiency.
We recommend that you read the other documentation about BDDs in ACL2 before reading the rather technical material that follows. See BDD.
Here is an outline of our presentation. Readers who want a user perspective, without undue mathematical theory, may wish to skip to Part (B), referring to Part (A) only on occasion if necessary.
(A) Mathematical Considerations
(A1) BDD term order
(A2) BDD-constructors and BDD terms, and their connection with aborting the BDD algorithm
(A3) Canonical BDD terms
(A4) A theorem stating the equivalence of provable and syntactic equality for canonical BDD terms
(B) Algorithmic Considerations
(B1) BDD rules (rules used by the rewriting portion of the ACL2 BDD algorithm)
(B2) Terms ``known to be Boolean''
(B3) An ``IF-lifting'' operation used by the algorithm, as well as an iterative version of that operation
(B4) The ACL2 BDD algorithm
(B5) Soundness and Completeness of the ACL2 BDD algorithm
(B6) Efficiency considerations
(A) Mathematical Considerations
(A1) BDD term order
Our BDD algorithm creates a total ``BDD term order'' on ACL2 terms, on the fly. We use this order in our discussions below of IF-lifting and of canonical BDD terms, and in the algorithm's use of
commutativity. The particular order is unimportant, except that we guarantee (for purposes of commutative functions) that constants are smaller in this order than non-constants.
(A2) BDD-constructors (assumed to be '(cons)) and BDD terms
We take as given a list of function symbols that we call the ``BDD-constructors.'' By default, the only BDD-constructor is cons, although it is legal to specify any list of function symbols as the
BDD-constructors, either by using the acl2-defaults-table (see acl2-defaults-table) or by supplying a :BDD-CONSTRUCTORS hint (see hints). Warning: this capability is largely untested and may produce
undesirable results. Henceforth, except when explicitly stated to the contrary, we assume that BDD-constructors is '(cons).
Roughly speaking, a BDD term is the sort of term produced by our BDD algorithm, namely a tree with all cons nodes lying above all non-CONS nodes. More formally, a term is said to be a BDD term if it
contains no subterm of either of the following forms, where f is not CONS.
(f ... (CONS ...) ...)
(f ... 'x ...) ; where (consp x) = t
We will see that whenever the BDD algorithm attempts to create a term that is not a BDD term, it aborts instead. Thus, whenever the algorithm completes without aborting, it creates a BDD term.
(A3) Canonical BDD terms
We can strengthen the notion of ``BDD term'' to a notion of ``canonical BDD term'' by imposing the following additional requirements, for every subterm of the form (IF x y z):
(a) x is a variable, and it precedes (in the BDD term order) every variable occurring in y or z;
(b) y and z are syntactically distinct; and,
(c) it is not the case that y is t and z is nil.
We claim that it follows easily from our description of the BDD algorithm that every term it creates is a canonical BDD term, assuming that the variables occurring in all such terms are treated by
the algorithm as being Boolean (see (B2) below) and that the terms contain no function symbols other than IF and CONS. Thus, under those assumptions the following theorem shows that the BDD algorithm
never creates distinct terms that are provably equal, a property that is useful for completeness and efficiency (as we explain in (B5) and (B6) below).
(A4) Provably equal canonical BDD terms are identical
We believe that the following theorem and proof are routine extensions of a standard result and proof to terms that allow calls of CONS.
Theorem. Suppose that t1 and t2 are canonical BDD terms that contain no function symbols other than IF and CONS. Also suppose that (EQUAL t1 t2) is a theorem. Then t1 and t2 are syntactically
Proof of theorem: By induction on the total number of symbols occurring in these two terms. First suppose that at least one term is a variable; without loss of generality let it be t1. We must prove
that t2 is syntactically the same as t1. Now it is clearly consistent that (EQUAL t1 t2) is false if t2 is a call of CONS (to see this, simply let t1 be an value that is not a CONSP). Similarly, t2
cannot be a constant or a variable other than t1. The remaining possibility to rule out is that t2 is of the form (IF t3 t4 t5), since by assumption its function symbol must be IF or CONS and we have
already handled the latter case. Since t2 is canonical, we know that t3 is a variable. Since (EQUAL t1 t2) is provable, i.e.,
(EQUAL t1 (if t3 t4 t5))
is provable, it follows that we may substitute either t or nil for t3 into this equality to obtain two new provable equalities. First, suppose that t1 and t3 are distinct variables. Then these
substitutions show that t1 is provably equal to both t4 and t5 (since t3 does not occur in t4 or t5 by property (a) above, as t2 is canonical), and hence t4 and t5 are provably equal to each other,
which implies by the inductive hypothesis that they are the same term -- and this contradicts the assumption that t2 is canonical (property (b)). Therefore t1 and t3 are the same variable, i.e., the
equality displayed above is actually (EQUAL t1 (if t1 t4 t5)). Substituting t and then nil for t1 into this provable equality lets us prove (EQUAL t t4) and (EQUAL nil t5), which by the inductive
hypothesis implies that t4 is (syntactically) the term t and t5 is nil. That is, t2 is (IF t1 t nil), which contradicts the assumption that t2 is canonical (property (c)).
Next, suppose that at least one term is a call of IF. Our first observation is that the other term is also a call of IF. For if the other is a call of CONS, then they cannot be provably equal,
because the former has no function symbols other than IF and hence is Boolean when all its variables are assigned Boolean values. Also, if the other is a constant, then both branches of the IF term
are provably equal to that constant and hence these branches are syntactically identical by the inductive hypothesis, contradicting property (b). Hence, we may assume for this case that both terms
are calls of IF; let us write them as follows.
t0: (IF t1 t2 t3)
u0: (IF u1 u2 u3)
Note that t1 and u1 are variables, by property (a) of canonical BDD terms. First we claim that t1 does not strictly precede u1 in the BDD term order. For suppose t1 does strictly precede u1. Then
property (a) of canonical BDD terms guarantees that t1 does not occur in u0. Hence, an argument much like one used above shows that u0 is provably equal to both t2 (substituting t for t1) and t3
(substituting nil for t1), and hence t2 and t3 are provably equal. That implies that they are identical terms, by the inductive hypothesis, which then contradicts property (b) for t0. Similarly, u1
does not strictly precede t1 in the BDD term order. Therefore, t1 and u1 are the same variable. By substituting t for this variable we see that t2 and u2 are provably equal, and hence they are equal
by the inductive hypothesis. Similarly, by substituting nil for t1 (and u1) we see that t3 and u3 are provably, hence syntactically, equal.
We have covered all cases in which at least one term is a variable or at least one term is a call of IF. If both terms are constants, then provable and syntactic equality are clearly equivalent.
Finally, then, we may assume that one term is a call of CONS and the other is a constant or a call of CONS. The constant case is similar to the CONS case if the constant is a CONSP, so we omit it;
while if the constant is not a CONSP then it is not provably equal to a call of CONS; in fact it is provably not equal!
So, we are left with a final case, in which canonical BDD terms (CONS t1 t2) and (CONS u1 u2) are provably equal, and we want to show that t1 and u1 are syntactically equal as are t2 and u2. These
conclusions are easy consequences of the inductive hypothesis, since the ACL2 axiom CONS-EQUAL (which you can inspect using :PE) shows that equality of the given terms is equivalent to the
conjunction of (EQUAL t1 t2) and (EQUAL u1 u2). Q.E.D.
(B) Algorithmic Considerations
(B1) BDD rules
A rule of class :rewrite (see rule-classes) is said to be a ``BDD rewrite rule'' if and only if it satisfies the following criteria. (1) The rule is enabled. (2) Its equivalence relation is equal.
(3) It has no hypotheses. (4) Its :loop-stopper field is nil, i.e., it is not a permutative rule. (5) All variables occurring in the rule occur in its left-hand side (i.e., there are no ``free
variables''; see rewrite). A rule of class :definition (see rule-classes) is said to be a ``BDD definition rule'' if it satisfies all the criteria above (except (4), which does not apply), and
moreover the top function symbol of the left-hand side was not recursively (or mutually recursively) defined. Technical point: Note that this additional criterion is independent of whether or not the
indicated function symbol actually occurs in the right-hand side of the rule.
Both BDD rewrite rules and BDD definition rules are said to be ``BDD rules.''
(B2) Terms ''known to be Boolean''
We apply the BDD algorithm in the context of a top-level goal to prove, namely, the goal at which the :BDD hint is attached. As we run the BDD algorithm, we allow ourselves to say that a set of terms
is ``known to be Boolean'' if we can verify that the goal is provable from the assumption that at least one of the terms is not Boolean. Equivalently, we allow ourselves to say that a set of terms is
``known to be Boolean'' if we can verify that the original goal is provably equivalent to the assertion that if all terms in the set are Boolean, then the goal holds. The notion ``known to be
Boolean'' is conservative in the sense that there are generally sets of terms for which the above equivalent criteria hold and yet the sets of terms are not noted as as being ``known to be Boolean.''
However, ACL2 uses a number of tricks, including type-set reasoning and analysis of the structure of the top-level goal, to attempt to establish that a sufficiently inclusive set of terms is known to
be Boolean.
From a practical standpoint, the algorithm determines a set of terms known to be Boolean; we allow ourselves to say that each term in this set is ``known to be Boolean.'' The algorithm assumes that
these terms are indeed Boolean, and can make use of that assumption. For example, if t1 is known to be Boolean then the algorithm simplifies (IF t1 t nil) to t1; see (iv) in the discussion
immediately below.
(B3) IF-lifting and the IF-lifting-for-IF loop
Suppose that one has a term of the form (f ... (IF test x y) ...), where f is a function symbol other than CONS. Then we say that ``IF-lifting'' test ``from'' this term produces the following term,
which is provably equal to the given term.
(if test
(f ... x ...) ; resulting true branch
(f ... y ...)) ; resulting false branch
Here, we replace each argument of f of the form (IF test .. ..), for the same test, in the same way. In this case we say that ``IF-lifting applies to'' the given term, ``yielding the test'' test and
with the ``resulting two branches'' displayed above. Whenever we apply IF-lifting, we do so for the available test that is least in the BDD term order (see (A1) above).
We consider arguments v of f that are ``known to be Boolean'' (see above) to be replaced by (IF v t nil) for the purposes of IF-lifting, i.e., before IF-lifting is applied.
There is one special case, however, for IF-lifting. Suppose that the given term is of the form (IF v y z) where v is a variable and is the test to be lifted out (i.e., it is least in the BDD term
order among the potential tests). Moroever, suppose that neither y nor z is of the form (IF v W1 W2) for that same v. Then IF-lifting does not apply to the given term.
We may now describe the IF-lifting-for-IF loop, which applies to terms of the form (IF test tbr fbr) where the algorithm has already produced test, tbr, and fbr. First, if test is nil then we return
fbr, while if test is a non-nil constant or a call of CONS then we return tbr. Otherwise, we see if IF-lifting applies. If IF-lifting does not apply, then we return (IF test tbr fbr). Otherwise, we
apply IF-lifting to obtain a term of the form (IF x y z), by lifting out the appropriate test. Now we recursively apply the IF-lifting-for-IF loop to the term (IF x y z), unless any of the following
special cases apply.
(i) If y and z are the same term, then return y.
(ii) Otherwise, if x and z are the same term, then replace z by nil before recursively applying IF-lifting-for-IF.
(iii) Otherwise, if x and y are the same term and y is known to be Boolean, then replace y by t before recursively applying IF-lifting-for-IF.
(iv) If z is nil and either x and y are the same term or x is ``known to be Boolean'' and y is t, then return x.
NOTE: When a variable x is known to be Boolean, it is easy to see that the form (IF x t nil) is always reduced to x by this algorithm.
(B4) The ACL2 BDD algorithm
We are now ready to present the BDD algorithm for ACL2. It is given an ACL2 term, x, as well as an association list va that maps variables to terms, including all variables occurring in x. We
maintain the invariant that whenever a variable is mapped by va to a term, that term has already been constructed by the algorithm, except: initially va maps every variable occurring in the top-level
term to itself. The algorithm proceeds as follows. We implicitly ordain that whenever the BDD algorithm attempts to create a term that is not a BDD term (as defined above in (A2)), it aborts instead.
Thus, whenever the algorithm completes without aborting, it creates a BDD term.
If x is a variable, return the result of looking it up in va.
If x is a constant, return x.
If x is of the form (IF test tbr fbr), then first run the algorithm on test with the given va to obtain test'. If test' is nil, then return the result fbr' of running the algorithm on fbr with
the given va. If test' is a constant other than nil, or is a call of CONS, then return the result tbr' of running the algorithm on tbr with the given va. If tbr is identical to fbr, return tbr.
Otherwise, return the result of applying the IF-lifting-for-IF loop (described above) to the term (IF test' tbr' fbr').
If x is of the form (IF* test tbr fbr), then compute the result exactly as though IF were used rather than IF*, except that if test' is not a constant or a call of CONS (see paragraph above),
then abort the BDD computation. Informally, the tests of IF* terms are expected to ``resolve.'' NOTE: This description shows how IF* can be used to implement conditional rewriting in the BDD
If x is a LAMBDA expression ((LAMBDA vars body) . args) (which often corresponds to a LET term; see let), then first form an alist va' by binding each v in vars to the result of running the
algorithm on the corresponding member of args, with the current alist va. Then, return the result of the algorithm on body in the alist va'.
Otherwise, x is of the form (f x1 x2 ... xn), where f is a function symbol other than IF or IF*. In that case, let xi' be the result of running the algorithm on xi, for i from 1 to n, using the
given alist va. First there are a few special cases. If f is EQUAL then we return t if x1' is syntactically identical to x2' (where this test is very fast; see (B6) below); we return x1' if it is
known to be Boolean and x2' is t; and similarly, we return x2' if it is known to be Boolean and x1' is t. Next, if each xi' is a constant and the :executable-counterpart of f is enabled, then the
result is obtained by computation. Next, if f is BOOLEANP and x1' is known to be Boolean, t is returned. Otherwise, we proceed as follows, first possibly swapping the arguments if they are out of
(the BDD term) order and if f is known to be commutative (see below). If a BDD rewrite rule (as defined above) matches the term (f x1'... xn'), then the most recently stored such rule is applied.
If there is no such match and f is a BDD-constructor, then we return (f x1'... xn'). Otherwise, if a BDD definition rule matches this term, then the most recently stored such rule (which will
usually be the original definition for most users) is applied. If none of the above applies and neither does IF-lifting, then we return (f x1'... xn'). Otherwise we apply IF-lifting to (f x1'...
xn') to obtain a term (IF test tbr fbr); but we aren't done yet. Rather, we run the BDD algorithm (using the same alist) on tbr and fbr to obtain terms tbr' and fbr', and we return (IF test tbr'
fbr') unless tbr' is syntactically identical to fbr', in which case we return tbr'.
When is it the case that, as said above, ``f is known to be commutative''? This happens when an enabled rewrite rule is of the form (EQUAL (f X Y) (f Y X)). Regarding swapping the arguments in that
case: recall that we may assume very little about the BDD term order, essentially only that we swap the two arguments when the second is a constant and the first is not, for example, in (+ x 1).
Other than that situation, one cannot expect to predict accurately when the arguments of commutative operators will be swapped.
(B5) Soundness and Completeness of the ACL2 BDD algorithm
Roughly speaking, ``soundness'' means that the BDD algorithm should give correct answers, and ``completeness'' means that it should be powerful enough to prove all true facts. Let us make the
soundness claim a little more precise, and then we'll address completeness under suitable hypotheses.
Claim (Soundness). If the ACL2 BDD algorithm runs to completion on an input term t0, then it produces a result that is provably equal to t0.
We leave the proof of this claim to the reader. The basic idea is simply to check that each step of the algorithm preserves the meaning of the term under the bindings in the given alist.
Let us start our discussion of completeness by recalling the theorem proved above in (A4).
Theorem. Suppose that t1 and t2 are canonical BDD terms that contain no function symbols other than IF and CONS. Also suppose that (EQUAL t1 t2) is a theorem. Then t1 and t2 are syntactically
Below we show how this theorem implies the following completeness property of the ACL2 BDD algorithm. We continue to assume that CONS is the only BDD-constructor.
Claim (Completeness). Suppose that t1 and t2 are provably equal terms, under the assumption that all their variables are known to be Boolean. Assume further that under this same assumption, top-level
runs of the ACL2 BDD algorithm on these terms return terms that contain only the function symbols IF and CONS. Then the algorithm returns the same term for both t1 and t2, and the algorithm reduces
(EQUAL t1 t2) to t.
Why is this claim true? First, notice that the second part of the conclusion follows immediately from the first, by definition of the algorithm. Next, notice that the terms u1 and u2 obtained by
running the algorithm on t1 and t2, respectively, are provably equal to t1 and t2, respectively, by the Soundness Claim. It follows that u1 and u2 are provably equal to each other. Since these terms
contain no function symbols other than IF or CONS, by hypothesis, the Claim now follows from the Theorem above together with the following lemma.
Lemma. Suppose that the result of running the ACL2 BDD algorithm on a top-level term t0 is a term u0 that contains only the function symbols IF and CONS, where all variables of t0 are known to be
Boolean. Then u0 is a canonical BDD term.
Proof: left to the reader. Simply follow the definition of the algorithm, with a separate argument for the IF-lifting-for-IF loop.
Finally, let us remark on the assumptions of the Completeness Claim above. The assumption that all variables are known to be Boolean is often true; in fact, the system uses the forward-chaining rule
boolean-listp-forward (you can see it using :pe) to try to establish this assumption, if your theorem has a form such as the following.
(let ((x (list x0 x1 ...))
(y (list y0 y1 ...)))
(implies (and (boolean-listp x)
(boolean-listp y))
Moreover, the :BDD hint can be used to force the prover to abort if it cannot check that the indicated variables are known to be Boolean; see hints.
Finally, consider the effect in practice of the assumption that the terms resulting from application of the algorithm contain calls of IF and CONS only. Typical use of BDDs in ACL2 takes place in a
theory (see theories) in which all relevant non-recursive function symbols are enabled and all recursive function symbols possess enabled BDD rewrite rules that tell them how open up. For example,
such a rule may say how to expand on a given function call's argument that has the form (CONS a x), while another may say how to expand when that argument is nil). (See for example the rules
append-cons and append-nil in the documentation for IF*.) We leave it to future work to formulate a theorem that guarantees that the BDD algorithm produces terms containing calls only of IF and CONS
assuming a suitably ``complete'' collection of rewrite rules.
(B6) Efficiency considerations
Following Bryant's algorithm, we use a graph representation of terms created by the BDD algorithm's computation. This representation enjoys some important properties.
(Time efficiency) The test for syntactic equality of BDD terms is very fast.
(Space efficiency) Equal BDD data structures are stored identically in memory.
Implementation note. The representation actually uses a sort of hash table for BDD terms that is implemented as an ACL2 1-dimensional array. See arrays. In addition, we use a second such hash table
to avoid recomputing the result of applying a function symbol to the result of running the algorithm on its arguments. We believe that these uses of hash tables are standard. They are also discussed
in Moore's paper on BDDs; see bdd for the reference.
examples illustrating the use of BDDs in ACL2
Major Section: BDD
See bdd for a brief introduction to BDDs in ACL2 and for pointers to other documentation on BDDs in ACL2. Here, we illustrate the use of BDDs in ACL2 by way of some examples. For a further example,
see if*.
Let us begin with a really simple example. (We will explain the :bdd hint (:vars nil) below.)
ACL2 !>(thm (equal (if a b c) (if (not a) c b))
:hints (("Goal" :bdd (:vars nil)))) ; Prove with BDDs
[Note: A hint was supplied for our processing of the goal above.
But simplification with BDDs (7 nodes) reduces this to T, using the
:definitions EQUAL and NOT.
Form: ( THM ...)
Rules: ((:DEFINITION EQUAL) (:DEFINITION NOT))
Warnings: None
Time: 0.18 seconds (prove: 0.05, print: 0.02, other: 0.12)
Proof succeeded.
ACL2 !>
The :bdd hint (:vars nil) indicates that BDDs are to be used on the indicated goal, and that any so-called ``variable ordering'' may be used: ACL2 may use a convenient order that is far from optimal.
It is beyond the scope of the present documentation to address the issue of how the user may choose good variable orderings. Someday our implementation of BDDs may be improved to include
heuristically-chosen variable orderings rather than rather random ones.
Here is a more interesting example.
(defun v-not (x)
; Complement every element of a list of Booleans.
(if (consp x)
(cons (not (car x)) (v-not (cdr x)))
; Now we prove a rewrite rule that explains how to open up v-not on
; a consp.
(defthm v-not-cons
(equal (v-not (cons x y))
(cons (not x) (v-not y))))
; Finally, we prove for 7-bit lists that v-not is self-inverting.
(let ((x (list x0 x1 x2 x3 x4 x5 x6)))
(implies (boolean-listp x)
(equal (v-not (v-not x)) x)))
:hints (("Goal" :bdd
;; Note that this time we specify a variable order.
(:vars (x0 x1 x2 x3 x4 x5 x6)))))
It turns out the the variable order doesn't seem to matter in this example; using several orders we found that 30 nodes were created, and the proof time was about 1/10 of a second on a (somewhat
enhanced) Sparc 2. The same proof took about a minute and a half without any :bdd hint! This observation is a bit misleading perhaps, since the theorem for arbitrary x,
(implies (boolean-listp x)
(equal (v-not (v-not x)) x)))
only takes about 1.5 times as long as the :bdd proof for 7 bits, above! Nevertheless, BDDs can be very useful in reducing proof time, especially when there is no regular structure to facilitate proof
by induction, or when the induction scheme is so complicated to construct that significant user effort is required to get the proof by induction to go through.
Finally, consider the preceding example, with a :bdd hint of (say) (:vars nil), but with the rewrite rule v-not-cons above disabled. In that case, the proof fails, as we see below. That is because
the BDD algorithm in ACL2 uses hypothesis-free :rewrite rules, :executable-counterparts, and nonrecursive definitions, but it does not use recursive definitions.
Notice that when we issue the (show-bdd) command, the system's response clearly shows that we need a rewrite rule for simplifying terms of the form (v-not (cons ...)).
ACL2 !>(thm
(let ((x (list x0 x1 x2 x3 x4 x5 x6)))
(implies (boolean-listp x)
(equal (v-not (v-not x)) x)))
:hints (("Goal" :bdd (:vars nil)
:in-theory (disable v-not-cons))))
[Note: A hint was supplied for our processing of the goal above.
ACL2 Error in ( THM ...): Attempted to create V-NOT node during BDD
processing with an argument that is a call of a bdd-constructor,
which would produce a non-BDD term (as defined in :DOC
bdd-algorithm). See :DOC show-bdd.
Form: ( THM ...)
Rules: NIL
Warnings: None
Time: 0.58 seconds (prove: 0.13, print: 0.00, other: 0.45)
******** FAILED ******** See :DOC failure ******** FAILED ********
ACL2 !>(show-bdd)
BDD computation on Goal yielded 17 nodes.
BDD computation was aborted on Goal, and hence there is no
falsifying assignment that can be constructed. Here is a backtrace
of calls, starting with the top-level call and ending with the one
that led to the abort. See :DOC show-bdd.
(LET ((X (LIST X0 X1 X2 X3 X4 X5 ...)))
(IMPLIES (BOOLEAN-LISTP X)
(EQUAL (V-NOT (V-NOT X)) X)))
alist: ((X6 X6) (X5 X5) (X4 X4) (X3 X3) (X2 X2) (X1 X1) (X0 X0))
(EQUAL (V-NOT (V-NOT X)) X)
alist: ((X (LIST X0 X1 X2 X3 X4 X5 ...)))
(V-NOT (V-NOT X))
alist: ((X (LIST X0 X1 X2 X3 X4 X5 ...)))
(V-NOT X)
alist: ((X (LIST X0 X1 X2 X3 X4 X5 ...)))
ACL2 !>
The term that has caused the BDD algorithm to abort is thus (V-NOT X), where X has the value (LIST X0 X1 X2 X3 X4 X5 ...), i.e., (CONS X0 (LIST X1 X2 X3 X4 X5 ...)). Thus, we see the utility of
introducing a rewrite rule to simplify terms of the form (V-NOT (CONS ...)). The moral of this story is that if you get an error of the sort shown above, you may find it useful to execute the command
(show-bdd) and use the result as advice that suggests the left hand side of a rewrite rule.
Here is another sort of failed proof. In this version we have omitted the hypothesis that the input is a bit vector. Below we use show-bdd to see what went wrong, and use the resulting information to
construct a counterexample. This failed proof corresponds to a slightly modified input theorem, in which x is bound to the 4-bit list (list x0 x1 x2 x3).
ACL2 !>(thm
(let ((x (list x0 x1 x2 x3)))
(equal (v-not (v-not x)) x))
:hints (("Goal" :bdd
;; This time we do not specify a variable order.
(:vars nil))))
[Note: A hint was supplied for our processing of the goal above.
ACL2 Error in ( THM ...): The :BDD hint for the current goal has
successfully simplified this goal, but has failed to prove it.
Consider using (SHOW-BDD) to suggest a counterexample; see :DOC
Form: ( THM ...)
Rules: NIL
Warnings: None
Time: 0.18 seconds (prove: 0.07, print: 0.00, other: 0.12)
******** FAILED ******** See :DOC failure ******** FAILED ********
ACL2 !>(show-bdd)
BDD computation on Goal yielded 73 nodes.
Falsifying constraints:
((X0 "Some non-nil value")
(X1 "Some non-nil value")
(X2 "Some non-nil value")
(X3 "Some non-nil value")
((EQUAL 'T X0) T)
((EQUAL 'T X1) T)
((EQUAL 'T X2) T)
((EQUAL 'T X3) NIL))
Term obtained from BDD computation on Goal:
(IF X0
(IF X1
(IF X2 (IF X3 (IF # # #) (IF X3 # #))
(IF X2 'NIL (IF X3 # #)))
(IF X1 'NIL
(IF X2 (IF X3 # #) (IF X2 # #))))
(IF X0 'NIL
(IF X1 (IF X2 (IF X3 # #) (IF X2 # #))
(IF X1 'NIL (IF X2 # #)))))
ACL2 Query (:SHOW-BDD): Print the term in full? (N, Y, W or ?):
n ; I've seen enough. The assignment shown above suggests
; that if we bind x3 to a non-nil value other than T,
; and bind x0, x1, and x2 to t, then we expect to get a
; counterexample.
ACL2 !>(let ((x0 t) (x1 t) (x2 t) (x3 7))
(let ((x (list x0 x1 x2 x3)))
;; Let's use LIST instead of EQUAL to see how the two
;; lists differ.
(list (v-not (v-not x)) x)))
((T T T T) (T T T 7))
ACL2 !>
See if* for another example.
for conditional rewriting with BDDs
Major Section: BDD
The function IF* is defined to be IF, but it is used in a special way by ACL2's BDD package.
As explained elsewhere (see bdd-algorithm), ACL2's BDD algorithm gives special treatment to terms of the form (IF* TEST TBR FBR). In such cases, the algorithm simplifies TEST first, and the result of
that simplification must be a constant (normally t or nil, but any non-nil explicit value is treated like t here). Otherwise, the algorithm aborts.
Thus, IF* may be used to implement a sort of conditional rewriting for ACL2's BDD package, even though this package only nominally supports unconditional rewriting. The following contrived example
should make this point clear.
Suppose that we want to prove that (nthcdr (length x) (append x y)) is equal to y, but that we would be happy to prove this only for lists having length 4. We can state such a theorem as follows.
(let ((x (list x0 x1 x2 x3)))
(equal (nthcdr (length x) (append x y))
If we want to prove this formula with a :BDD hint, then we need to have appropriate rewrite rules around. First, note that LENGTH is defined as follows (try :PE LENGTH):
(length x)
(if (stringp x)
(len (coerce x 'list))
(len x))
Since BDD-based rewriting is merely very simple unconditional rewriting (see bdd-algorithm), we expect to have to prove a rule reducing STRINGP of a CONS:
(defthm stringp-cons
(equal (stringp (cons x y))
Now we need a rule to compute the LEN of X, because the definition of LEN is recursive and hence not used by the BDD package.
(defthm len-cons
(equal (len (cons a x))
(1+ (len x))))
We imagine this rule simplifying (LEN (LIST X0 X1 X2 X3)) in terms of (LEN (LIST X1 X2 X3)), and so on, and then finally (LEN nil) should be computed by execution (see bdd-algorithm).
We also need to imagine simplifying (APPEND X Y), where still X is bound to (LIST X0 X1 X2 X3). The following two rules suffice for this purpose (but are needed, since APPEND, actually BINARY-APPEND,
is recursive).
(defthm append-cons
(equal (append (cons a x) y)
(cons a (append x y))))
(defthm append-nil
(equal (append nil x)
Finally, we imagine needing to simplify calls of NTHCDR, where the the first argument is a number (initially, the length of (LIST X0 X1 X2 X3), which is 4). The second lemma below is the traditional
way to accomplish that goal (when not using BDDs), by proving a conditional rewrite rule. (The first lemma is only proved in order to assist in the proof of the second lemma.)
(defthm fold-constants-in-+
(implies (and (syntaxp (quotep x))
(syntaxp (quotep y)))
(equal (+ x y z)
(+ (+ x y) z))))
(defthm nthcdr-add1-conditional
(implies (not (zp (1+ n)))
(equal (nthcdr (1+ n) x)
(nthcdr n (cdr x)))))
The problem with this rule is that its hypothesis makes it a conditional rewrite rule, and conditional rewrite rules are not used by the BDD package. (See bdd-algorithm for a discussion of ``BDD
rules.'') (Note that the hypothesis cannot simply be removed; the resulting formula would be false for n = -1 and x = '(a), for example.) We can solve this problem by using IF*, as follows; comments
(defthm nthcdr-add1
(equal (nthcdr (+ 1 n) x)
(if* (zp (1+ n))
(nthcdr n (cdr x)))))
How is nthcdr-add1 applied by the BDD package? Suppose that the BDD computation encounters a term of the form (NTHCDR (+ 1 N) X). Then the BDD package will apply the rewrite rule nthcdr-add1. The
first thing it will do when attempting to simplify the right hand side of that rule is to attempt to simplify the term (ZP (1+ N)). If N is an explicit number (which is the case in the scenario we
envision), this test will reduce (assuming the executable counterparts of ZP and BINARY-+ are enabled) to t or to nil. In fact, the lemmas above (not including the lemma nthcdr-add1-conditional)
suffice to prove our goal:
(thm (let ((x (list x0 x1 x2 x3)))
(equal (nthcdr (length x) (append x y))
:hints (("Goal" :bdd (:vars nil))))
If we execute the following form that disables the definition and executable counterpart of the function ZP
(in-theory (disable zp (zp)))
before attempting the proof of the theorem above, we can see more clearly the point of using IF*. In this case, the prover makes the following report.
ACL2 Error in ( THM ...): Unable to resolve test of IF* for term
(IF* (ZP (+ 1 N)) X (NTHCDR N (CDR X)))
under the bindings
((X (CONS X0 (CONS X1 (CONS X2 #)))) (N '3))
-- use SHOW-BDD to see a backtrace.
If we follow the advice above, we can see rather clearly what happened. See show-bdd.
ACL2 !>(show-bdd)
BDD computation on Goal yielded 21 nodes.
BDD computation was aborted on Goal, and hence there is no
falsifying assignment that can be constructed. Here is a backtrace
of calls, starting with the top-level call and ending with the one
that led to the abort. See :DOC show-bdd.
(LET ((X (LIST X0 X1 X2 X3)))
(EQUAL (NTHCDR (LENGTH X) (APPEND X Y)) Y))
alist: ((Y Y) (X3 X3) (X2 X2) (X1 X1) (X0 X0))
(NTHCDR (LENGTH X) (APPEND X Y))
alist: ((X (LIST X0 X1 X2 X3)) (Y Y))
(IF* (ZP (+ 1 N)) X (NTHCDR N (CDR X)))
alist: ((X (LIST* X0 X1 X2 X3 Y)) (N 3))
ACL2 !>
Each of these term-alist pairs led to the next, and the test of the last one, namely (ZP (+ 1 N)) where N is bound to 3, was not simplified to t or to nil.
What would have happened if we had used IF in place of IF* in the rule nthcdr-add1? In that case, if ZP and its executable counterpart were disabled then we would be put into an infinite loop! For,
each time a term of the form (NTHCDR k V) is encountered by the BDD package (where k is an explicit number), it will be rewritten in terms of (NTHCDR k-1 (CDR V)). We would prefer that if for some
reason the term (ZP (+ 1 N)) cannot be decided to be t or to be nil, then the BDD computation should simply abort.
Even if there were no infinite loop, this kind of use of IF* is useful in order to provide feedback of the form shown above whenever the test of an IF term fails to simplify to t or to nil.
inspect failed BDD proof attempts
Major Section: BDD
Attempts to use BDDs (see bdd), using :bdd hints, can fail for various reasons. Sometimes it is useful to explore such failures. To do so, one may simply execute the form
inside the ACL2 loop. The system's response is generally self-explanatory. Perhaps you have already seen show-bdd used in some examples (see bdd-introduction and see if*). Here we give some details
about show-bdd.
(Show-bdd) prints the goal to which the BDD procedure was applied and reports the number of nodes created during the BDD computation, followed by additional information depending on whether or not
the computation ran to completion or aborted (for reasons explained elsewhere; see bdd-algorithm). If the computation did abort, a backtrace is printed that should be useful in understanding where
the problem lies. Otherwise, (show-bdd) prints out ``falsifying constraints.'' This list of pairs associates terms with values and suggests how to construct a binding list for the variables in the
conjecture that will falsify the conjecture. It also prints out the term that is the result of simplifying the input term. In each of these cases, parts of the object may be hidden during printing,
in order to avoid creating reams of uninteresting output. If so, the user will be queried about whether he wishes to see the entire object (alist or term), which may be quite large. The following
responses are legal:
w -- Walk around the object with a structure editor
t -- Print the object in full
nil -- Do not print any more of the object
Show-bdd actually has four optional arguments, probably rarely used. The general form is
(show-bdd goal-name goal-ans falsifying-ans term-ans)
where goal-name is the name of the goal on which the :bdd hint was used (or, nil if the system should find such a goal), goal-ans is the answer to be used in place of the query for whether to print
the input goal in full, falsifying-ans is the answer to be used in place of the query for whether to print the falsifying constraints in full, and term-ans is the answer to be used in place of the
query for whether to print the resulting term in full. | {"url":"http://www.cs.utexas.edu/users/moore/acl2/v2-3/acl2-doc-4.html","timestamp":"2014-04-16T11:37:26Z","content_type":null,"content_length":"51553","record_id":"<urn:uuid:4dd1c653-343e-4412-93b5-e642954b42de>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Numbers
An integer (greater than one) is prime if the only whole numbers it can be divided by (without a remainder) are itself and one. All other integers are composite. In other words, a prime number has
only two positive factors. Composite numbers have more. For example, seven is a prime number because its only positive factors are one and seven. Fifteen is composite because it has four: one, three,
five, and fifteen.
• Eratosthenses was a Greek mathematician who figured out that to find all the prime numbers between two and some large number, you need to remove all the multiples of each number between two and
your large number. Start by pressing "2" (skip over "1"), and you'll see all the multiples of two eliminated: 2,4,6,8, etc. Next, click on "3" and so on. At some point the program will stop, and
all the prime numbers between 2 and 400 will be colored red. Can you guess the biggest number you will need to click?
"A prime number is a positive integer that has exactly two positive integer factors, 1 and itself. For example, if we list the factors of 28, we have 1, 2, 4, 7, 14, and 28. That's six factors.
If we list the factors of 29, we only have 1 and 29. That's 2. So we say that 29 is a prime number, but 28 isn't." Dr. Math presents an excellent introduction to prime numbers, the Sieve of
Eratosthenes, and links to other prime number sites.
• Fact Monster begins with a short prime number lesson, and a table of all the prime numbers between 1 and 1000. On the next page ("World's Largest Known Prime Number") is a simple explanation of
Mersene primes, and the search for bigger and bigger primes. Although there are an infinite number of primes, it is only with today's computing power can we actually name them. In fact, the
Electronic Frontier Foundation is offering is a $100,000 reward for finding a prime number with at least 10 million digits.
• For middle school and high school students, this Math Forum goes behind simple prime number definition, and introduces both Euclid's theory of prime numbers (which has been proven) and Goldbach's
Conjecture (which hasn't.) "In a letter to Leonard Euler in 1742, Christian Goldbach conjectured that every positive even integer greater than 2 can be written as the sum of two primes. Though
computers have verified this up to a million, no proof has been given.
• Our last site of the day is the most comprehensive. It is not for elementary students just being introduced to prime numbers, but rather for serious high school and college math students who want
to explore current projects in number theory. Best clicks are the Prime Glossary, Brief History of Large Prime Numbers, and Prime Curios ("an exciting collection of curiosities, wonders and
trivia related to prime numbers.")
Honorable Mentions
The following links are either new discoveries or sites that didn't make it into my newspaper column because of space constraints. Enjoy!
Cite This Page
• Feldman, Barbara. "Prime Numbers." Surfnetkids. Feldman Publishing. 17 Nov. 2004. Web. 20 Apr. 2014. <http://www.surfnetkids.com/resources/primenumbers/ >.
About This Page
• By Barbara J. Feldman. Originally published November 17, 2004. Last modified November 17, 2004. | {"url":"http://www.surfnetkids.com/resources/primenumbers/","timestamp":"2014-04-20T19:37:40Z","content_type":null,"content_length":"48946","record_id":"<urn:uuid:0aa45811-c126-4205-a148-b34aa0ed706e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learning to predict by the methods of temporal difference
Results 11 - 20 of 959
- Machine Learning , 1992
"... Abstract. To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The
purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much ..."
Cited by 275 (2 self)
Add to MetaCart
Abstract. To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The purpose of
this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will
speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions
to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as
an approach to generalization. To evaluate the performance of different frame-works, a dynamic environment was used as a testbed. The enviromaaent is moderately complex and nondetermin-istic. This
paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.
- Advances in Neural Information Processing Systems 7 , 1995
"... To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of
dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a genera ..."
Cited by 251 (3 self)
Add to MetaCart
To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of
dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in
the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign
cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization. 1 INTRODUCTION
Reinforcement learning---the problem of getting an agent to learn to act from sparse, delayed rewards---has been advanced by techniques based on dynamic programming (DP). These algorithms compute a
value function ...
- In Proceedings of the Twelfth International Conference on Machine Learning , 1995
"... A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these
algorithms can easily become unstable when implemented directly with a general function-approximation system, such ..."
Cited by 237 (5 self)
Add to MetaCart
A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can
easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basisfunction system, a memory-based learning
system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual,
guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of
the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual
algorithms, and it is s...
- Machine Learning , 1998
"... We present new algorithms for reinforcement learning, and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes.
After observing that the number of actions required to approach the optimal return is lower bounded by the m ..."
Cited by 237 (3 self)
Add to MetaCart
We present new algorithms for reinforcement learning, and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After
observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the
discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states, for both the undiscounted and discounted
cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off. 1
- IEEE Transactions on Automatic Control , 1997
"... We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain. The algorithm weanalyze updates
parameters of a linear function approximator on-line, duringasingle endless trajectory of an irreducible aperiodi ..."
Cited by 218 (7 self)
Add to MetaCart
We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain. The algorithm weanalyze updates parameters
of a linear function approximator on-line, duringasingle endless trajectory of an irreducible aperiodic Markov chain with a finite or infinite state space. We present a proof of convergence (with
probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. Furthermore, our analysis is based on a new line of reasoning that provides new
intuition about the dynamics of temporal-difference learning. In addition to proving new and stronger positive results than those previously available, we identify the significance of on-line
updating and potential hazards associated with the use of nonlinear function approximators. First, we prove that divergence may occur when updates are not based on trajectories of the Markov chain.
This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning. Second, we present anexample illustrating the
possibility of divergence when temporal-difference learning is used in the presence of a nonlinear function approximator.
- IN MACHINE LEARNING: PROCEEDINGS OF THE TWELFTH INTERNATIONAL CONFERENCE , 1995
"... The success of reinforcement learning in practical problems depends on the ability tocombine function approximation with temporal difference methods such as value iteration. Experiments in this
area have produced mixed results; there have been both notable successes and notable disappointments. Theo ..."
Cited by 208 (5 self)
Add to MetaCart
The success of reinforcement learning in practical problems depends on the ability tocombine function approximation with temporal difference methods such as value iteration. Experiments in this area
have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that
generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show
experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of approximate value
iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.
- In Proceedings of AAAI-90 , 1990
"... We describe an algorithm which allows a behavior-based robot to learn on the basis of positive and negative feedback when to activate its behaviors. In accordance with the philosophy of
behavior-based robots, the algorithm is completely distributed: each of the behaviors independently tries to find ..."
Cited by 207 (3 self)
Add to MetaCart
We describe an algorithm which allows a behavior-based robot to learn on the basis of positive and negative feedback when to activate its behaviors. In accordance with the philosophy of
behavior-based robots, the algorithm is completely distributed: each of the behaviors independently tries to find out (i) whether it is relevant (ie. whether it is at all correlated to positive
feedback) and (ii) what the conditions are under which it becomes reliable (i.e. the conditions under which it maximizes the probability of receiving positive feedback and minimizes the probability
of receiving negative feedback). The algorithm has been tested successfully on an autonomous 6-legged robot which had to learn how to coordinate its legs so as to walk forward. Situation of the
Problem Since 1985, the MIT Mobile Robot group has advocated a radically different architecture for autonomous intelligent agents (Brooks, 1986). Instead of decomposing the architecture into
functional modules, such as perception, modeling, and planning (figure 1), the architecture is decomposed into task-achieving modules, also called behaviors (figure 2). This novel approach has
already demonstrated to be very successful and similar approaches have become more
- Neural Computation , 1994
"... Increasing attention has recently been paid to algorithms based on dynamic programming (DP) due to the suitability of DP for learning problems involving control. In stochastic environments where
the system being controlled is only incompletely known, however, a unifying theoretical account of th ..."
Cited by 207 (8 self)
Add to MetaCart
Increasing attention has recently been paid to algorithms based on dynamic programming (DP) due to the suitability of DP for learning problems involving control. In stochastic environments where the
system being controlled is only incompletely known, however, a unifying theoretical account of the behavior of these methods has been missing. In this paper we relate DP-based learning algorithms to
powerful techniques of stochastic approximation via a new convergence theorem, enabling us to establish a class of convergent algorithms to which both TD() and Q-learning belong. 1
- LEARNING AND COMPUTATIONAL NEUROSCIENCE , 1989
"... In this report we show how the class of adaptive prediction methods that Sutton called "temporal difference," or TD, methods are related to the theory of squential decision making. TD methods
have been used as "adaptive critics" in connectionist learning systems, and have been proposed as models of ..."
Cited by 195 (10 self)
Add to MetaCart
In this report we show how the class of adaptive prediction methods that Sutton called "temporal difference," or TD, methods are related to the theory of squential decision making. TD methods have
been used as "adaptive critics" in connectionist learning systems, and have been proposed as models of animal learning in classical conditioning experiments. Here we relate TD methods to decision
tasks formulated in terms of a stochastic dynamical system whose behavior unfolds over time under the influence of a decision maker's actions. Strategies are sought for selecting actions so as to
maximize a measure of long-term payoff gain. Mathematically, tasks such as this can be formulated as Markovian decision problems, and numerous methods have been proposed for learning how to solve
such problems. We show how a TD method can be understood as a novel synthesis of concepts from the theory of stochastic dynamic programming, which comprises the standard method for solving such tasks
when a model of the dynamical system is available, and the theory of parameter estimation, which provides the appropriate context for studying learning rules in the form of equations for updating
associative strengths in behavioral models, or connection weights in connectionist networks. Because this report is oriented primarily toward the non-engineer interested in animal learning, it
presents tutorials on stochastic sequential decision tasks, stochastic dynamic programming, and parameter estimation.
, 1995
"... Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model,
independent draws from which are likely to produce the patterns. For all but the simplest generative model ..."
Cited by 194 (22 self)
Add to MetaCart
Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model,
independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust
the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the
observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=34619&sort=cite&start=10","timestamp":"2014-04-18T01:33:16Z","content_type":null,"content_length":"40955","record_id":"<urn:uuid:a77b6bad-9c51-49b3-8e32-96072b4e101a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thomas Industrial Library
8.0 INTRODUCTION
This chapter presents a review of the fundamentals of dynamic modeling in order to establish a base of information on which to develop the tools for the dynamic analysis of cam-follower systems in
succeeding chapters.
8.1 NEWTON’S LAWS OF MOTION
Dynamic force analysis involves the application of Newton ’s three laws of motion which are:
1 A body at rest tends to remain at rest and a body in motion will tend to maintain its velocity unless acted upon by an external force.
2 The time rate of change of momentum of a body is equal to the magnitude of the applied force and acts in the direction of the force.
3 For every action force, there is an equal and opposite reaction force.
The second law is expressed in terms of rate of change of momentum , P = m v , where m is mass and v is velocity. Mass m is assumed to be constant in this analysis. The time rate of change of m v is
m a , where a is the acceleration of the mass center.
F = m a (8.1)
F is the resultant of all forces on the system acting at the mass center.
We can differentiate between two subclasses of dynamics problems depending upon which quantities are known and which are to be found. The “ forward dynamics problem ” is the one in which we know
everything about the external loads (forces and/or torques) being exerted on the system, and we wish to determine the accelerations, velocities, and displacements that result from the application of
those forces and torques. This subclass is typical of problems such as determining the acceleration of a block sliding down a plane, acted upon by gravity. Given F and m , solve for a.
The second subclass of dynamics problem, called the “ inverse dynamics problem ,” is one in which we know the (desired) accelerations, velocities, and displacements to be imposed upon our system and
wish to solve for the magnitudes and directions of the forces and torques that are necessary to provide the desired motions and which result from them. This inverse dynamics case is sometimes also
called kinetostatics . Given a and m , solve for F . Whichever subclass of problem is addressed, it is important to realize that they are both dynamics problems. Each merely solves F = m a for a
different variable. To do so we should first review some fundamental geometric principles and mass properties that are needed for the calculations. | {"url":"http://www.thomasglobal.com/library/UI/Cams/Cam%20Design%20and%20Manufacturing%20Handbook/Dynamics%20of%20Cam%20Systems_1/1/default.aspx","timestamp":"2014-04-18T13:50:58Z","content_type":null,"content_length":"120378","record_id":"<urn:uuid:c85efea2-6419-4004-bde8-295c575bdba9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Killing vector field
, a
Killing vector field
(often just
Killing field
) , named after
Wilhelm Killing
, is a
vector field
on a
Riemannian manifold
pseudo-Riemannian manifold
) that preserves the
. Killing fields are the
infinitesimal generator
; that is,
generated by Killing fields are
continuous isometries
of the
. More simply, the flow generates a
, in the sense that moving each point on an object the same distance in the direction of the Killing vector field will not distort distances on the object.
Specifically, a vector field
is a Killing field if the
Lie derivative
with respect to
of the metric
<math>mathcal_ g = 0 ,.</math>
In terms of the
Levi-Civita connection
, this is
<math>g(nabla_ X, Z) + g(Y, nabla_ X) = 0 ,</math>
for all vectors
. In
local coordinates
, this amounts to the Killing equation
<math>nabla_ X_ + nabla_ X_ = 0 ,.</math>
This condition is expressed in covariant form. Therefore it is sufficient to establish it in a preferred coordinate system in order to have it hold in all coordinate systems.
• The vector field on a circle that points clockwise and has the same length at each point is a Killing vector field, since moving each point on the circle along this vector field simply rotates
the circle.
• If the metric coefficients <math>g_ ,</math> in some coordinate basis <math>dx^ ,</math> are...... ...
Read More | {"url":"http://pages.rediff.com/killing-vector-field/307547","timestamp":"2014-04-20T16:53:15Z","content_type":null,"content_length":"32681","record_id":"<urn:uuid:1b054bba-cc9b-4afe-a6da-8d70ca2b02c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wayne, NJ ACT Tutor
Find a Wayne, NJ ACT Tutor
...I am a pre-med student in my second year of college and I have 3 years of tutoring experience. I currently work at the Math Center in South Orange, NJ. I have also tutored elementary school
kids in math and writing.
29 Subjects: including ACT Math, chemistry, English, reading
...I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees. Sorry! **I got 5s in the following AP tests: Physics B,
Physics C Mechanics, Physics C E&M. I have been designing websites in HTML and CSS for several years. (I ...
26 Subjects: including ACT Math, English, physics, calculus
...If you want you child to understand math and be able to do math, I am the right tutor for you!I am going to be a teacher in one year. I have tutored several elementary students as part of
community service and private tutoring. I have tutored elementary math, English, science, reading, and study skills.
27 Subjects: including ACT Math, Spanish, statistics, reading
...In fact, the more fun for them, the more likely my students are to do what I am asking of them. Over my last 12 years, I have developed specific techniques to work with students with learning
disabilities and low grade testing anxiety. My high success rates include an LSAT student that I took f...
42 Subjects: including ACT Math, English, reading, chemistry
...I would take personal time to sit down one-on-one with my students whenever necessary to make sure they were comfortable with the material. I have also designed and taught a summer course in
middle school and high school biology, tailored for students who wanted to get a “jump start” on the comi...
12 Subjects: including ACT Math, reading, biology, ESL/ESOL
Related Wayne, NJ Tutors
Wayne, NJ Accounting Tutors
Wayne, NJ ACT Tutors
Wayne, NJ Algebra Tutors
Wayne, NJ Algebra 2 Tutors
Wayne, NJ Calculus Tutors
Wayne, NJ Geometry Tutors
Wayne, NJ Math Tutors
Wayne, NJ Prealgebra Tutors
Wayne, NJ Precalculus Tutors
Wayne, NJ SAT Tutors
Wayne, NJ SAT Math Tutors
Wayne, NJ Science Tutors
Wayne, NJ Statistics Tutors
Wayne, NJ Trigonometry Tutors
Nearby Cities With ACT Tutor
Clifton, NJ ACT Tutors
Fair Lawn ACT Tutors
Fairfield, NJ ACT Tutors
Fairlawn, NJ ACT Tutors
Garfield, NJ ACT Tutors
Haledon ACT Tutors
Hawthorne, NJ ACT Tutors
Little Falls, NJ ACT Tutors
North Haledon, NJ ACT Tutors
Passaic ACT Tutors
Passaic Park, NJ ACT Tutors
Paterson, NJ ACT Tutors
Preakness, NJ ACT Tutors
Totowa ACT Tutors
Woodland Park, NJ ACT Tutors | {"url":"http://www.purplemath.com/Wayne_NJ_ACT_tutors.php","timestamp":"2014-04-18T19:08:46Z","content_type":null,"content_length":"23605","record_id":"<urn:uuid:31d5aeb2-ad07-4b24-a21d-9858eb98a9a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hingham, MA SAT Math Tutor
Find a Hingham, MA SAT Math Tutor
...As a former math and language teacher, I have worked with a variety of students who struggle with executive functioning and study skills. Based on my experience with a number of students who
have IEP and 504 plans, I know the struggles that these students face and have assisted many of them with...
43 Subjects: including SAT math, reading, English, GED
...Circles, ellipses, hyperbolas, and systems of equations, 10. Exponential and logarithmic equations, 11. Permutations, combinations, and basic probability, 12.
9 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I teach and tutor because I can find no greater satisfaction than to instill the excitement I feel about the maths and sciences in others. When I tutor a student, the first thing I do is
evaluate what piece of the foundation is missing. Every science and math concept is built upon the foundations laid last week, month, or year.
12 Subjects: including SAT math, chemistry, calculus, physics
I'm a recent MIT graduate with dual degrees in Biology and Chemistry, and currently I'm researching at MIT on cancer. Teaching has been a long-time passion of mine and I hope to continue teaching
throughout my life time. I have tutored SAT Math, SAT II and various science subjects since 2006.
28 Subjects: including SAT math, chemistry, calculus, geometry
I have had 24+ years of teaching mathematics in a public school setting at both the middle and high school levels. I am certified to teach grades 5 - 12. I have tutored students of all ages from
elementary school students to adults who have decided to go back to school.
11 Subjects: including SAT math, geometry, algebra 1, ASVAB
Related Hingham, MA Tutors
Hingham, MA Accounting Tutors
Hingham, MA ACT Tutors
Hingham, MA Algebra Tutors
Hingham, MA Algebra 2 Tutors
Hingham, MA Calculus Tutors
Hingham, MA Geometry Tutors
Hingham, MA Math Tutors
Hingham, MA Prealgebra Tutors
Hingham, MA Precalculus Tutors
Hingham, MA SAT Tutors
Hingham, MA SAT Math Tutors
Hingham, MA Science Tutors
Hingham, MA Statistics Tutors
Hingham, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/Hingham_MA_SAT_Math_tutors.php","timestamp":"2014-04-21T02:09:47Z","content_type":null,"content_length":"23883","record_id":"<urn:uuid:6b3a2fd9-029b-415e-932f-c248836597cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by fifi
Total # Posts: 15
the binding energy of a nucleus is 240.0 MeV. What is the mass defect of the nucleus in atomic mass units?
For a hydrogen atom, determine the ratio of the ionization energy for the n = 3 excited state to the ionization energy for the ground state.
Light shines through a single slit whose width is 5.7 × 10-4 m. A diffraction pattern is formed on a flat screen located 4.0 m away. The distance between the middle of the central bright fringe and
the first dark fringe is 3.9 mm. What is the wavelength of the light?
Two pianos each sound the same note simultaneously, but they are both out of tune. On a day when the speed of sound is 348 m/s, piano A produces a wavelength of 0.766 m, while piano B produces a
wavelength of 0.780 m. How much time separates successive beats?
Venus is a grade 12 learner and she claims that she can shoot a stone 50m vertically high by using a catapult.She wants to know the relationship between the height reached by the stone and the
initial velocity of it.
The area enclosed between the x-axis, the curve y=x(2-x) and the ordinates x=1 and x=2 is rotated through 2π radians about x-axis. (a)Calculate the volume of the solid revolution formed. (b)Calculate
the rotating area. from this question what about he grap need to draw or...
The area enclosed between the x-axis, the curve y=x(2-x) and the ordinates x=1 and x=2 is rotated through 2π radians about x-axis. ( a)Calculate the volume of the solid revolution formed. (b)
Calculate the rotating area.
thank you now i can go to bed and snore and drool as usual
me needs help!!!!!! AHHHHHHHHH!!!!!!!! Its ten o'clock I have to go to bed!!!
Describe the role of the senate in Rome???
A rectangular play yard is to be constructed along the side of a house by erecting a fence on three sides, using house wall as the fourth wall. Find the demensions that produce the play yard of
maximum area if 20 meters of fence is available for the project.
it should be written as ( 5ft x 10 ft) and (5ft x 5ft)...is that correct?
2. Point B is between A and C. If BC = 11 cm and AC = 17 cm, what is AB? (Points : 3)
3rd grade
colored light cloting wear
serotonin is formed by 2 steps. first is the decarboxylation of tryptophan to form tyrptamine,second is the hydroxylation of typtamine to form 5-hydroxytyrptamine which is serotonin. can you help me
illustrate these reactions? This reference has a figure showing one of the rea... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=fifi","timestamp":"2014-04-18T19:10:02Z","content_type":null,"content_length":"9147","record_id":"<urn:uuid:e7571148-2fcb-48f9-bbbc-38418fbad84e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adding a control variable and testing for significance
July 31st 2009, 04:56 PM #1
Jul 2009
Adding a control variable and testing for significance
"Perform a controlled comparison, with a test of statistical significance."
That's what I'm trying to do. My independent variable is "religious importance," a four-category ordinal variable. My dependent variable is "moralism," a 17-value variable that I'm treating as
interval. I would like to control for age, an interval variable.
My question is how can I test for statistical significance with the control variable added to the mix? I already did a two-sample t-test for my IV and DV, but I'm not sure where to go from there.
Thanks for any assistance.
you could add dummy variables for a certain age threshold, if these turn out to be significant then you have your answer...i think
August 4th 2009, 02:51 AM #2
Feb 2009 | {"url":"http://mathhelpforum.com/advanced-statistics/96642-adding-control-variable-testing-significance.html","timestamp":"2014-04-20T06:40:00Z","content_type":null,"content_length":"32266","record_id":"<urn:uuid:34392101-814a-47b1-a8ce-374afdbe4b3b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Rd] cluster - clusplot.default (PR#1249)
kjetilh@umsanet.edu.bo kjetilh@umsanet.edu.bo
Mon, 7 Jan 2002 19:42:44 +0100 (MET)
The following code in clusplot.default (package cluster) is in error:
x1 <- cmdscale(x, k = 2, eig = TRUE)
var.dec <- sum(x1$eig)/sum(diag(x1$x))
if (var.dec < 0)
var.dec <- 0
if (var.dec > 1)
var.dec <- 1
x1 <- x1$points
x1 has components with names "points" and "eig", not "x", so
sum(diag(x1$x)) returns 0, the division gives Inf which is later replaced
by 1.
So in the plot it is reported (always) that "These two components explain
of the variability".
Besides, is it reasonable that sum(NULL) returns 0 without at least a
Another small point about the cluster package: it loads automatically
mva, but that is not mentioned in the Depends field in the description file.
Kjetil Halvorsen
r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !) To: r-devel-request@stat.math.ethz.ch | {"url":"https://stat.ethz.ch/pipermail/r-devel/2002-January/023773.html","timestamp":"2014-04-16T07:16:53Z","content_type":null,"content_length":"3458","record_id":"<urn:uuid:3fd02412-8651-4881-9265-de615ace4d74>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Assigning positive edge weights to a graph so that the weight incident to each vertex is 1.
up vote 7 down vote favorite
Let $\Gamma=(G,E)$ be a connected undirected graph, with no loops or multiple edges. $G$ is finite or countably infinite. For each edge $e=\{x,y\}\in E$, we assign a positive, symmetric edge weight
$c_e := c_{\{x,y\}} = c_{xy} = c_{yx}$. I would like to know for which graphs $\Gamma$ it is possible to choose $(c_e)_{e\in E}$ so that for each $x\in G$,
\begin{equation*} \sum_{y\sim x} c_{xy} = 1. \end{equation*}
For example, this is possible on any $d-$regular graph if one sets $c_e \equiv 1/d$. The graph with vertex set $\{x,y,z\}$ and edges $\{x,y\}$ and $\{y,z\}$ shows that it is not always possible.
co.combinatorics graph-theory pr.probability
More generally, no leaf is allowed if $G$ has more than $2$ vertices. – Did Mar 21 '11 at 23:49
Didier, this is wrong: a set of disjoint edges works fine. (This is the only counter-example to your statement.) – JBL Mar 22 '11 at 0:03
@JBL Connected. – Did Mar 22 '11 at 0:06
Whoops, fair enough. All those pesky adjectives like "positive" and "connected" .... My apologies. – JBL Mar 22 '11 at 0:08
add comment
2 Answers
active oldest votes
Here is a solution along the lines of JBL's answer.
First a couple of definitions:
A disjoint cycle cover of a graph is a collection of cycles of our graph which are disjoint subgraphs, and contain all the vertices of our graph. A special case of a disjoint cycle
cover is a perfect matching, for instance.
We will call the the permanent of a graph, the permanent of its adjacency matrix
An easy fact is that the permanent of a graph counts its disjoint cycle covers. Now, to our result:
Theorem: A graph admits edge weights as in the problem if and only if every edge is contained in a disjoint cycle cover. Equivalently if and only if removing an edge decreases the
up vote 8
down vote Suppose the graph $G$ has such weights. Then the matrix $A$ with $a_{ij}$ being the weight of the edge connecting vertices $v_i$ and $v_j$, is doubly stochastic, and thus by the
accepted Birkhoff-von Neumann theorem can be written as a convex combination of permutation matrices. For every edge of $G$, there is a non zero term $a_{ij}$ in $A$ which means that there is a
permutation matrix in our sum with a $1$, in the $ij$ entry, call this matrix $M_{ij}$. The first observation is that $M_{ij}$ has all zeros on the diagonal, and secondly that all it's
non-zero entries correspond to edges in $G$. This collection of edges is of course a disjoint cycle cover.
Now for the other direction, each cycle cover can be assigned weights as in the problem (just assign $1/2$ to all edges in proper cycles and $1$ to all isolated edges). So taking an
appropriate convex combination of all such covers gives us weights for $G$.
A special case is of course when all edges are contained in perfect matchings, but this property doesn't characterize all graphs as in the question, as the example I gave in the comment
to JBL's answer shows (also just look at odd cycles). Which is why one must include more general cycle covers.
Perhaps it is a bit more clear if we phrase it in the following way. When restricting to bipartite graphs, the property of each edge being contained in a disjoint cycle cover is
equivalent to every edge being in a perfect matching (there are no odd cycles). Now the result above follows because weights on our graph induce weights on its bipartite double cover
which sum to 1 at each vertex. A disjoint cycle cover of a graph is equivalent to a perfect matching of its bipartite double cover.
This proof applies only to finite graphs, because of the reliance on the permanent of the adjacency matrix, right? – Gerry Myerson Mar 22 '11 at 4:27
With a bit of care, it's not too hard to extend it to the locally finite case with the same proof, but I'm not sure what happens when you allow vertices to have infinite degree (does
the OP want to consider such cases?), there are convergence issues and what not. I'll think about it some more. – Gjergji Zaimi Mar 22 '11 at 4:40
add comment
Edit: this is completely broken, sorry! I leave it up for the record.
Let $G$ be finite. Then a weighting of the desired form exists if and only if every edge of the graph is contained in a perfect matching.
First, suppose every edge is contained in a perfect matching. Simply take the convex combinations of the matchings (considered as weightings with weight 1 on the edges of the matching and
weight 0 on the other edges) and win.
up vote Now, suppose your graph has a weighting of the desired form. The given conditions imply that this weighting belongs to the matching polytope of the graph. Since the matching polytope contains
2 down a point with coordinate sum at least $\frac{|V|}{2}$ (namely, your point), it must contain a vertex with coordinate sum at least $\frac{|V|}{2}$. But the vertices are matchings, and each
vote matching contains at most $\frac{|V|}{2}$ edges, so in fact there must be a vertex with exactly this coordinate sum, i.e., a perfect matching. In particular, your weighting lies on the face
of the polytope whose vertices are precisely the perfect matchings of the graph, and so it is a convex combination of perfect matchings. Since it is positive in every coordinate, there must
be a vertex that is positive in every coordinate, i.e., every edge must be contained in some perfect matching.
An earlier version of this answer solved the problem in the case that "positive" is replaced by "nonnegative". In that case, the condition becomes "the graph contains a perfect matching".
The question asks for positive weights. – Gjergji Zaimi Mar 21 '11 at 23:56
Well, it's not clear to me if the O.P. cares about the difference between positive and nonnegative, but it's easy to adjust the argument: you get strictly positive if and only if every
edge is contained in a perfect matching. – JBL Mar 21 '11 at 23:59
3 Consider two $C_3$ connected by an edge. Not every edge is in a perfect matching, yet I can find weights satisfying the problem... – Gjergji Zaimi Mar 22 '11 at 0:34
2 @JBL Third paragraph: The given conditions imply that this weighting belongs to the matching polytope of the graph. Why? – Did Mar 22 '11 at 0:38
Maybe I am missing something, but suppose that each edge is contained in a perfect matching. Let $(a_e)_{e\in E}$ satisfy $a_e>0$ and $\sum_E a_e = 1$, and for each edge $e_j$, let $(c^
1 {(e_j)}_e)_{e\in E}$ be an assignment of nonnegative edge weights with $c^{e_j}_{e_j}=1$ (i.e., using the perfect matching). Then $d_e = \sum_j a_{e_j}c^{(e_j)}_e$ is what we want, and
satisfies $d_e>0$ for all $e\in E$. – mfolz Mar 22 '11 at 1:32
show 8 more comments
Not the answer you're looking for? Browse other questions tagged co.combinatorics graph-theory pr.probability or ask your own question. | {"url":"http://mathoverflow.net/questions/59117/assigning-positive-edge-weights-to-a-graph-so-that-the-weight-incident-to-each-v","timestamp":"2014-04-16T14:05:21Z","content_type":null,"content_length":"71076","record_id":"<urn:uuid:341061e6-9d1f-449c-a3e8-dc42b50023f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
I really need some help in how to find or know the locus >.< Given a fixed angle CAB. Find the locus of the points equidistant from the sides of <CAB that are at a distance of 5 cm from point B.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
equidistant from two sides of an angle is the angle bisector 5 cm from a point is the circumference of the circle with center at point B with radius 5 the intersection of the circle and the angle
bisector is the answer: 2 points
Best Response
You've already chosen the best response.
though if the circle is too small, no intersection, or possibly 1 tangent point....
Best Response
You've already chosen the best response.
How on earth do you find these things?! It's sooo confusing
Best Response
You've already chosen the best response.
You memorize (or learn) the various cases. we can prove the angle bisector gives you the locus of points equidistant for both sides (we use congruent right triangles) It is (more or less)
intuitive that the locus of points equidistant from a point is a circle (this is the definition of a circle)
Best Response
You've already chosen the best response.
And I don't think they want to know how many points.. =) I've memorized the cases or conditions. It's sometimes it's hard to find it =( But just one question on this, how did we know that point B
is the center?
Best Response
You've already chosen the best response.
( mind the second "it's" :P )
Best Response
You've already chosen the best response.
distance of 5 cm from point B the locus of all points 5 cm from B is a circle The actual answer to this question could be |dw:1352048840615:dw| depends on how "big" 5 cm is. we need more info to
know if the circle intersects the bisector line
Best Response
You've already chosen the best response.
Well, it's not construction so it doesn't matter right now. But I will go with the first one.
Best Response
You've already chosen the best response.
I'm posting another few questions.. If you would please help me =) Because you and Hero are like the most people that I know are VERY good in geometry.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50955c5ee4b0d0275a3c8380","timestamp":"2014-04-16T07:48:36Z","content_type":null,"content_length":"66428","record_id":"<urn:uuid:fec81c83-d604-4749-a878-3b542bd5b9ac>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figures of categorical syllogism
Interregional p>
Academy of Personnel Management p>
Faculty: p>
Distance Learning. p>
Economics and Business Administration. p> < p> Group: 21098BUB Rate: 3 p>
Student: Pahantsov MA p>
Home address: Dnepropetrovsk street. Gidroparkovaya d. 9 square. 113 p>
Place: KAB "Slavic? p>
INSPECTION TASK p>
section of the curriculum: Logic. p>
Subject: Figures categorical syllogism. p>
Instructor: Nicholas Bartunov Petrovich__________________ p>
Dnepropetrovsk p>
1999 p>
figures categorical syllogism p>
1. Preface p>
2. Categorically statements p>
3. Figures categorical syllogism p>
4. Basic rules of figures. P>
5. Modes shapes p>
6. Literature p>
Preface p>
In more than two thousand years of history of logic nowrepresents one of the most intense periods of its development is very fastgrow and the amount of new information, and the number of new results.
InFurthermore, if more recently the logic area of interest was only relativelynarrow circle of specialists, it has now become an important discipline andnecessary for many, and in the field of modern
education - for everyone. p>
doctrine of the syllogism is historically the first completefragment of the logical theory of reasoning. It systematically described
Aristotle in the "Analytics" and the name of syllogistic exists tothis time, having its own value. p>
categorically statements p>
Propositional logic reduces the complicated to the simple utterance
(atomic). p>
It examines the complex expression as a function of simple butsimple at the same time is not dismembered. p>
remarks, having a structure expressed by the formula ?S is P?called the affirmative, but with the structure of ?S is not P? --negative. This division of quality. P>
In addition, categorical statements divided by the number ofunit (This S is (or is not) P), general (All S is (or is not) P)and private (Some S is (or is not) P). The words "all" and "some"is called
quantifier words. p>
In the study of reasoning (syllogisms) do not distinguish betweensingular and general statements, because in some common types of signapproved (or denied) on each elementconsidered set of objects.
The only difference is that the set ofreferred to in a single utterance consists of one element, butgeneral - from more than one. p>
Thus, the classification of categorical statements on the qualityand the number contains four types: p>
n obscheutverditelnye (A) n universal negative (E) n chastnoutverditelnye (I) n chastnootritsatelnye (O) p>
letters A, E, O, I for symbolic symbols are taken from the Latinwords affirmo - say - for two affirmative sentences and words ofnego - to deny - for negative. p>
figures categorical syllogism p>
Let's consider (for example) the structure of the syllogism. p>
Everyone (M) - Death (R) < / p>
Socrates (S) - man (M) p>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ p>
Socrates (S) - mortal (P) p>
syllogism comprises three categorical statements (two parcels andone conclusion, which is to be written under the standard recording feature). Subjectconclusion is indicated (usually) by the letter
S, and the predicate - P, but in the syllogism
S is called the minor term, and P - high, both are called extremeterms. The term, repeated twice in the premises, called the mean
(Latin - terminus medius) and is denoted by the letter M. p>
Dispatch also have their own names: the one that contains the term
P, called the major premise, and the term containing S - minor premise. P>
Thus, the categorical syllogism - it is a deductive inference,in the conclusion that the relationship between extreme terms (S and P)established on the basis of their (recorded in the premises)
relationship tomiddle term (M). p>
In general, the structure of the syllogism can be represented as: p>
R (X, Y) ^ Q (Y, Z) -> L (XZ), < / p>
where R, Q, L can have values A, E, I, O; p>
X, Y means a MP or PM, p>
Y, Z - MS p>
X, Z - SP p>
conjunction of the premises in the syllogism can be regarded as antetsendent,and the conclusion - as the consequent. p>
Taking these considerations, the structure of the example shouldwritten as: p>
A (MP) ^ I (SM) -> I (SP). p>
If we consider only the relative positions of the three terms,obtain the following general structure of our output, referred to as the first figuresyllogism: p>
MP p>
SM p>
---------- p>
SP p>
1-I figure p>
(1-I figure) p>
It is clear that in addition to this figure there are three, because the term M canstand in each parcel as to place the subject and predicate in place: p>
PMMPPM p>
SMMSMS p>
------ ------ ------ p>
SPSPSP p>
2-I figure 3 figure 4 figure p>
Thus, the figures of the syllogism, it is such varietywhich differ from each other provision of the middle term. p>
If we take into account the quantitative and qualitative characteristicsoutside the premises and the conclusion of a syllogism, we get the variety,called modes. Modus written three letters (from A,
E, I, O) insuch a sequence - the major premise, minor premise, conclusion. p>
The above example illustrates the mode of AII. p>
all possible modes of syllogism (four shapes 256). Takingmost general scheme of the syllogism - R (X, Y) ^ Q (Y, Z) -> L (X, Z), then there are 4ways to choose R, 4 ways to Q and 4 ways to choose L;
except that 2 wayschoose the order of X, Y, and 2 ways to sequence Y, Z. Soway, there are 4 * 4 * 4 * 2 * 2 = 256 different modes (of 64 in eachfigure). But not all of them are correct. The question
of the correctnessany syllogism can be resolved by the construction of Euler diagrams for eachparcels and then combine them. p>
Modus some syllogism invalid if and only ifany graph corresponding to its parcels, which does not coincide with anydiagram, corresponding to its conclusion. p>
For example, consider the mode: p>
E (MP) ^ A (SM) -> E (SP), ie p>
No, V is the essence of P p>
All S are M p>
------------------------- ------- none S are not P p>
His premise matches any of the two diagrams shown in Fig
1. P>
Figure 1 p>
Figure 2 p>
Figure 3 p>
is obvious that each of these diagrams can correspond to the conclusion
"No S are not P?. Therefore, this syllogism is correct, and, hence, forassumptions is true, we need to get a true conclusion. p>
Figure relations between terms in the major premise A (MP) canbe such as is shown in Figure 2, a diagram of the minor premise
E (SM) is shown in Figure 3. P>
This fully shows that the set S, completely excluded fromset M, can be completely excluded from the set P, which corresponds toAnd the conclusion (SP). These provisions were recorded as S S1 and S2.
Apparentlyunambiguous result can not be obtained. This is evidence thatconclusion does not follow logically from the premises (statements of E (SP) and A (SP) does notcan be simultaneously true). p>
analyzing this example, we assume that the term that takesplace of the subject, is distributed in the general statements (A, E), and the termoccupying the place of the predicate, distributed in the
negative utterances (E,
O). Strict adherence to this definition is the basis of the so-callednarrow theory of the syllogism. p>
But the term that takes the place of the predicate in affirmative statements
(A, I) can be distributed. This fact underlies the socalled the extended theory of the syllogism. p>
Basic rules of figures p>
1. Medium term should be distributed in at least one of the parcels. P>
If the term M will be distributed in at least one of the parcels that uniquely bind the extreme terms in prison is not possible. P>
2. The term can be in custody only when it is distributed in the premise (usually extreme terms). P>
3. The number of negative assumptions must be equal to the number of negative opinions. P>
This rule means that: p>
1) If one of the parcels is negative, then the conclusion must be negative. P>
2) Of the two negative premises correct can not be done. p>
3) Of the two affirmative premisses can not get a negative conclusion p>
These three rules are necessary and sufficient to excludeall the wrong syllogisms. p>
sometimes formulated a rule: "In the syllogism should be three and onlythree terms.. " An indication of this requirement is aimed at avoidingerror, which is called the quadrupling of terms (it is
based onconsciously or unconsciously using the phenomenon of homonyms). p>
in the number of additional rules include: p>
1. At least one of the parcels must be a common statement (from two private statements can not be the correct conclusion). P>
2. If one of the private parcels, then the conclusion must be private. P>
Special rules figures p>
Based on the general rules (in the narrow theory of the syllogism), and consideringposition of the middle term, we can derive the following special rules for the figures. p>
first figure. p>
1) major premise must be universal (A, E); p>
2) minor premise - affirmative (A, I); p>
The second figure. P>
1) major premise must be universal (A, E); p>
2) One of the parcels is negative (E, G); p>
< br>The third figure. P>
1) The minor premise must be affirmative (A, I); p>
2) Conclusion - private (I, O); p>
Fourth figure. P>
1) If the major premise - affirmative (A, I), then the smaller should be the total (A, E) p>
2) If one of the parcels is negative (E, O), the major premise must be total (A, E); p>
Many believe the logic of a fourth figure on the artificialBased on that reasoning on this figure is not typical in the practice ofevidence. But, first, the arguments of the fourth figure stilloften
implemented in practice, and secondly, to complete the theorysyllogism it should be considered. p>
Based on the rules of the figures and, of course, given the general rulessyllogism, we can derive all the correct modes of each figure. They willexactly six in each figure, the total number of
regular modes in such a way,
24. P>
all possible combinations of parcels will be 16, for each of the fourtypes of sentences (A, E, O, I) can connect with themselves or each other or witheach of the other three: p>
| AA | EA | IA | OA |
| AE | EE | IE | OE |
| AI | EI | II | OI |
| AO | EO | IO | OO | p>
Rules of the first figure to exclude, first, all combinationsparcels of the third and fourth columns, because they contradict the firstrule. Secondly, the combination of AE and AO from the first
column countersecond rule. The combination of ITS and EG in the second column should also bedeleted because they are contrary to the general rule of inadmissibility of the twonegative assumptions.
Remaining combinations of AA, EA,
More NEWS: | {"url":"http://yqyq.net/40164-Figury_kategoricheskogo_sillogizma.html","timestamp":"2014-04-20T22:22:41Z","content_type":null,"content_length":"32839","record_id":"<urn:uuid:21d5edd2-875e-4fa6-a55f-05999b92a5a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pseudo Random Number Generators
June 30th 2008, 08:39 PM #1
Jun 2008
How would i decipher the formula to a fixed seed PRNG?
I can manipulate the PRNG's Seed and Minimum and Maximum Values.
How would i use this to create a formula that mimics the PRNG's formula?
Thank you in advance for any help.
(I wasnt sure which forum to put this in but this forum said algorithms so i guessed it would fit here :P)
Do i need to explain my problem more? Is this even possible to find the formula to the prng?
I know there is a genius on these forums that can help me somewhere
Ok so the prng works like this, when i set the seed, it always picks from the top of a list (this list must be generated somehow).
So, i set the seed to 1.
I ask for a random number between 0-61.
I ask again
Then i set the seed to 1 again.
I ask again
Adn again.
I get the same number pairs. So its not using time or anything like that, literally just the seed number is the seed.
So hopefully this will help - i will ask the PRNG for 3 numbers. (Remember if i set the seed again it will show those same 3 numbers).
I will ask the PRNG for a number between 0-61.
With a Seed of 0:
With a Seed of 1:
With a Seed of 2:
With a Seed of 3:
With a Seed of 4:
These number when i set the corresponding seed number i get exactly the same results.
Is this possible to work out the PRNG formula? Do you think you ould do it if you had more data/results?
Thank you very much in advance for your help.
Are you looking for source code for a Pseudo Random Number Generator? If so, in which language? Do you need a particular distribution?
There is source code for many Pseudo Random Number Generators posted online. Usually, they output a number between 0 and 1, which you would have to scale for your desired range. You could also
use tables to ensure you get the same three numbers all the same for the same seeds.
Thanks for the reply
I need to work out the formula for a working PRNG.
I believe it to be a relatively simple PRNG, with tables as the numbers get repeated with the same seed.
Would this be possible to "Decode" a PRNG Formula? How would i go about solving this?
Thanks again
- uniflare
-edit: i will script the formula in php afterwards.
Last edited by uniflare; July 1st 2008 at 07:35 PM. Reason: see post
Do i need to explain my problem more? Is this even possible to find the formula to the prng?
I know there is a genius on these forums that can help me somewhere
Ok so the prng works like this, when i set the seed, it always picks from the top of a list (this list must be generated somehow).
So, i set the seed to 1.
I ask for a random number between 0-61.
I ask again
Then i set the seed to 1 again.
I ask again
Adn again.
I get the same number pairs. So its not using time or anything like that, literally just the seed number is the seed.
So hopefully this will help - i will ask the PRNG for 3 numbers. (Remember if i set the seed again it will show those same 3 numbers).
I will ask the PRNG for a number between 0-61.
With a Seed of 0:
With a Seed of 1:
With a Seed of 2:
With a Seed of 3:
With a Seed of 4:
These number when i set the corresponding seed number i get exactly the same results.
Is this possible to work out the PRNG formula? Do you think you ould do it if you had more data/results?
Thank you very much in advance for your help.
Why would you want to do this?
If you generate 1000000 numbers from a seed how long does the squence take to start repeating itself?
(from what you have written I see no reason why this has to be a "simple" prng)
Im doing it for a game+website
Ah interesting
so it stops generating numbers after it has generated 8191 numbers from the same seed.
Thanks for your help so far i appreciate it
ok i tested it with a lot of seeds it seems it can only generate 8191 random numbers until it stop producing them - with any seed.
Does this mean its picking numbers from a table that is 8191 numbers long? And using the seed to start from a seemingly random spot?
(Or am i way off? lol)
Thank you so much so far
Any chance you can get a copy of "Numerical Recipes. The Art of Numerical Computing"? There is a chapter there about random numbers, as well as sub-routine source code.
- - -
Alternately, "Numerical Methods and Software" also has a chapter about random numbers. In fact, one of the authors has source code for a FORTRAN routine posted on his website (called UNI):
Random Number Generator
If you know some FORTRAN, you should be able to translate it.
- - -
The NETLIB website is another source of good-quality source code:
NETLIB Website
If you do a search on the term "random", perhaps you will find source code for a program you can use. Again, most of these programs are in FORTRAN, so you will have to translate.
- - -
There is also the Mersenne Twister, a well-known algorithm with a very long period (in other words, it produces MANY numbers before starting to repeat itself). The Wikipedia page has some good
background information, plus many external links that may interest you:
Wikipedia Mersenne Twister Page
For example, here is a link to a page with a C++ implementation of the algorithm:
Mersenne Twister C++ Code
(Of course, you could also do a Google search for the terms Mersenne and random, and you'd get thousands of other results.)
ok i tested it with a lot of seeds it seems it can only generate 8191 random numbers until it stop producing them - with any seed.
Does this mean its picking numbers from a table that is 8191 numbers long? And using the seed to start from a seemingly random spot?
(Or am i way off? lol)
Thank you so much so far
Are these 8191 numbers from each seed just a shifted copy of one another?
(Note $8191=2^{13}-1$, which is almost certainly significant)
Last edited by CaptainBlack; December 13th 2008 at 10:57 PM. Reason: correct typo 10^13 to 2^13
Sorry david i fear you misunderstand, i dont want any old PRNG i want the one thats producing these results
Ok so my tests indicate that that different seeds produce different lists, not shifted, they dont match at all.
I used an extremely simple scripting language called JASS to figure it out - if you want the code i used feel free to ask.
Do you think its still possible? Would it help if i could get more information?
Thanks - uniflare
I'm not prepared to do any more on this, if you want to carry it further read Knuth's "Seminumerical Algorithms" it's volume 2 of "The Art of Computer Programming", which deals with random number
generators and their periods etc.
i dont blame you it seems like pretty heavy math.
Thanks for the help
July 1st 2008, 10:38 AM #2
Jun 2008
July 1st 2008, 07:03 PM #3
Junior Member
Sep 2006
Between my ears
July 1st 2008, 07:24 PM #4
Jun 2008
July 1st 2008, 08:49 PM #5
Grand Panjandrum
Nov 2005
July 2nd 2008, 12:51 AM #6
Jun 2008
July 2nd 2008, 02:18 AM #7
Grand Panjandrum
Nov 2005
July 2nd 2008, 02:34 PM #8
Jun 2008
July 2nd 2008, 06:35 PM #9
Junior Member
Sep 2006
Between my ears
July 2nd 2008, 08:51 PM #10
Grand Panjandrum
Nov 2005
July 3rd 2008, 05:50 PM #11
Jun 2008
July 6th 2008, 02:14 AM #12
Jun 2008
July 6th 2008, 08:31 AM #13
Grand Panjandrum
Nov 2005
July 6th 2008, 11:57 AM #14
Jun 2008 | {"url":"http://mathhelpforum.com/discrete-math/42816-pseudo-random-number-generators.html","timestamp":"2014-04-20T19:20:30Z","content_type":null,"content_length":"74838","record_id":"<urn:uuid:5a9d2c89-70d8-432f-ba51-7ef98f1ff697>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Restrictions for presenting groups with cyclic quotient
up vote 7 down vote favorite
Let $H$ be a group, $\phi$ an automorphism of $H$ of order n and fix $h_0 \in H$. I wonder, what the restrictions are, such that $$G:= \lt H,g \mid g^n=h_0,\quad \forall h \in H: ghg^{-1}=\phi(h) \
gt$$ defines a group which has $H$ as normal subgroup such that $G/H$ is cyclic of order n.
There are two obvious restrictions:
(1) $h_0$ has to be in the center of $H$. For, $h_0hh_0^{-1} = g^nhg^{-n}=\phi^n(h) = h$, since $\phi$ has order n.
(2) $\phi(h_0) = h_0$. For, $\phi(h_0) = gh_0g^{-1}=gg^ng^{-1} = g^n = h_0$.
But I can't figure out, if there some more restrictions.
I tried to apply the classification of extensions with non-abelian kernel (Kenneth Brown, Cohomology of Groups, chapter IV, §6), but that requires to consider $H^3(Out(H),C)$ ($C$ the center of $H$)
and I'm unable to do so, because I have no information about $Out(H)$ and $C$.
Any help is appreciated.
add comment
1 Answer
active oldest votes
Your conditions are sufficient. Indeed, consider the $H$-by-cyclic group $G_0=\langle H, g\mid ghg^{-1}=\phi(h), h\in H\rangle$. By (1) and (2), both $g^n$ and $h_0$ are central in
up vote 3 down that group. Hence the element $u=h_0^{-1}g^n$ is central. Now factor out the central subgroup $\langle u\rangle$: $G=G_0/\langle u\rangle$. That group $G$ is what you need.
vote accepted
Ah, I see, $G$ can be realized as central quotient of the semi-direct product $G_0=H \times_\varphi \mathbb{Z}$ where $\varphi: \mathbb{Z} \to Aut(H), k \mapsto \phi^k$. That's
good. Thanks a lot, Mark. – Todd Leason Sep 20 '11 at 23:36
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/75986/restrictions-for-presenting-groups-with-cyclic-quotient","timestamp":"2014-04-19T07:20:03Z","content_type":null,"content_length":"51197","record_id":"<urn:uuid:1c57635e-2169-45d6-8eaf-ab534f64fb90>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Independence and Transformation Proof Help
July 7th 2012, 01:41 PM
Linear Independence and Transformation Proof Help
Does anyone know how to prove the following:
The set of images of a linearly independent set of vectors under a linear transformation are also linearly independent?
July 7th 2012, 01:58 PM
Re: Linear Independence and Transformation Proof Help
This is true only for injective transformations. For example, the projection of two-dimensional vectors on the horizontal axis maps any two vectors into linearly dependent vectors.
July 7th 2012, 02:06 PM
Re: Linear Independence and Transformation Proof Help
What would be the best way to prove this?
July 7th 2012, 02:10 PM
Re: Linear Independence and Transformation Proof Help
Prove what? The original claim is false in general.
July 7th 2012, 05:14 PM
Re: Linear Independence and Transformation Proof Help
What is the best way to answer part 2?
Prove that the set of images of a linearly dependent set of vectors under a linear transformation is linearly dependent. Let {v1¬, , … , vp} be a set of linearly dependent vectors, and let T: Rn
Rm be a linear transformation. Show that {T(v1), …, T(vp)} is linearly dependent. Part 2: Is it also true that the images of a linearly independent set of vectors under a linear transformation
is also linearly independent? Explain.
July 8th 2012, 04:37 AM
Re: Linear Independence and Transformation Proof Help
July 8th 2012, 08:18 AM
Re: Linear Independence and Transformation Proof Help
suppose there exist c[1],c[2],...,c[n] not all 0 with:
c[1]v[1] + c[2]v[2] +...+ c[n]v[n] = 0
(that is {v[1],v[2],...,v[n]} is a linearly dependent set), and that T is a linear transformation.
then T(c[1]v[1] + c[2]v[2] +...+ c[n]v[n]) = T(0) = 0.
but since T is linear:
0 = T(c[1]v[1] + c[2]v[2] +...+ c[n]v[n]) = c[1]T(v[1]) + c[2]T(v[2]) +...+ c[n]T(v[n])
which shows that {T(v[1]),T(v[2]),...,T(v[n])} is linearly dependent.
it is NOT true that the image of a linearly independent set under a linear transformation is linearly independent.
for example, if T is the 0-map, T(v) = 0, for all v in V, then even if B = {v[1],v[2],...,v[n]} is a basis,
T(B) = {T(v[1]),T(v[2]),...,T(v[n])} = {0,0,...,0} = {0} (since repeated elements of a SET don't "count extra"),
and {0} is ALWAYS a linearly dependent set.
some equivalent (sufficient) conditions for T(S) to be LI, when S is LI:
a) T is injective
b) det(T) ≠ 0
c) ker(T) = {0}
d) rank(T) = dim(V) (where V is the domain of T)
note these conditions are not necessary, it may well be that T(S) is LI when S is, even if none (thus all) of the above hold. for example, let T be multiplication by the matrix:
[1 0 0]
[0 1 0]
[0 0 0].
then for S = {(1,0,0),(0,1,0)}, T(S) is LI, even though T is singular. however, S' = {(0,1,0),(0,2,2)} is also LI, but T(S) = {(0,1,0),(0,2,0)}, which is NOT linearly independent, since the
second vector is a scalar multiple of the first:
we have 2(0,1,0) + (-1)(0,2,0) = (0,0,0) and neither of {2,-1} is 0. | {"url":"http://mathhelpforum.com/advanced-algebra/200733-linear-independence-transformation-proof-help-print.html","timestamp":"2014-04-17T09:15:29Z","content_type":null,"content_length":"8771","record_id":"<urn:uuid:e9fcf0f6-a265-47e4-8815-2d978c9857f4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Исходный RIS файл (фрагмент):
TY - JOUR
T1 - Memory capacity in neural network models: Rigorous lower bounds
JO - Neural Networks
VL - 1
IS - 3
SP - 223
EP - 238
PY - 1988
AU - Newman, Charles M.
UR - http://www.sciencedirect.com/science/article/B6T08-482R8NT-4/2/666a550edeb8f860f5965e02fe0075a9
AB - We consider certain simple neural network models of associative memory with N binary neurons and symmetric lth order synaptic connections, in which m randomly chosen N-bit patterns are to be
stored and retrieved with a small fraction [delta] of bit errors allowed. Rigorous proofs of the following are presented both for l = 2 and l >= 3: 1. 1. m can grow as fast as [alpha]Nl-1.2. 2.
[alpha] can be as large as Bl/ln(1/[delta]) as [delta] --> 0.3. 3. Retrieved memories overlapping with several initial patterns persist for (very) small [alpha].These phenomena were previously
supported by numerical simulations or nonrigorous calculations. The quantity (l!) [middle dot] [alpha] represents the number of stored bits per distinct synapse. The constant (l!) [middle dot] Bl is
determined explicitly; it decreases monotonically with l and tends to zero exponentially fast as l --> [infinity]. We obtain rigorous lower bounds for the threshold value (l!) [middle dot] [alpha]c
(the maximum possible value of (l!) [middle dot] [alpha] with [delta] unconstrained): 0.11 for l = 2 (compared to the actual value between 0.28 and 0.30 as estimated by Hopfield and by Amit,
Gutfreund, and Sompolinsky), 0.22 for l = 3 and 0.16 for l = 4; as l --> [infinity], the bound tends to zero as fast as (l!) [middle dot] Bl.
ER -
Полученный XML файл (фрагмент):
<?xml version="1.0"?>
<TitlePrimary>Memory capacity in neural network models: Rigorous lower bounds</TitlePrimary>
<Journal>Neural Networks</Journal>
<Authors>Newman, Charles M.</Authors>
<Abstract>We consider certain simple neural network models of associative memory with N binary neurons and symmetric lth order synaptic connections, in which m randomly chosen N-bit patterns are to
be stored and retrieved with a small fraction [delta] of bit errors allowed. Rigorous proofs of the following are presented both for l = 2 and l >= 3: 1. 1. m can grow as fast as [alpha]Nl-1.2. 2.
[alpha] can be as large as Bl/ln(1/[delta]) as [delta] --> 0.3. 3. Retrieved memories overlapping with several initial patterns persist for (very) small [alpha].These phenomena were previously
supported by numerical simulations or nonrigorous calculations. The quantity (l!) [middle dot] [alpha] represents the number of stored bits per distinct synapse. The constant (l!) [middle dot] Bl is
determined explicitly; it decreases monotonically with l and tends to zero exponentially fast as l --> [infinity]. We obtain rigorous lower bounds for the threshold value (l!) [middle dot] [alpha]
c (the maximum possible value of (l!) [middle dot] [alpha] with [delta] unconstrained): 0.11 for l = 2 (compared to the actual value between 0.28 and 0.30 as estimated by Hopfield and by Amit,
Gutfreund, and Sompolinsky), 0.22 for l = 3 and 0.16 for l = 4; as l --> [infinity], the bound tends to zero as fast as (l!) [middle dot] Bl. </Abstract>
Полученный HTML файл (фрагмент):
Newman, Charles M.Memory capacity in neural network models: Rigorous lower bounds Vol. 1, Issue # 3, 1988, pp. 223- 238
URL: http://www.sciencedirect.com/science/article/B6T08-482R8NT-4/2/666a550edeb8f860f5965e02fe0075a9
We consider certain simple neural network models of associative memory with N binary neurons and symmetric lth order synaptic connections, in which m randomly chosen N-bit patterns are to be stored
and retrieved with a small fraction [delta] of bit errors allowed. Rigorous proofs of the following are presented both for l = 2 and l >= 3: 1. 1. m can grow as fast as [alpha]Nl-1.2. 2. [alpha] can
be as large as Bl/ln(1/[delta]) as [delta] --> 0.3. 3. Retrieved memories overlapping with several initial patterns persist for (very) small [alpha].These phenomena were previously supported by
numerical simulations or nonrigorous calculations. The quantity (l!) [middle dot] [alpha] represents the number of stored bits per distinct synapse. The constant (l!) [middle dot] Bl is determined
explicitly; it decreases monotonically with l and tends to zero exponentially fast as l --> [infinity]. We obtain rigorous lower bounds for the threshold value (l!) [middle dot] [alpha]c (the maximum
possible value of (l!) [middle dot] [alpha] with [delta] unconstrained): 0.11 for l = 2 (compared to the actual value between 0.28 and 0.30 as estimated by Hopfield and by Amit, Gutfreund, and
Sompolinsky), 0.22 for l = 3 and 0.16 for l = 4; as l --> [infinity], the bound tends to zero as fast as (l!) [middle dot] Bl. | {"url":"http://ris2xml.sourceforge.net/?source=navbar","timestamp":"2014-04-18T13:10:11Z","content_type":null,"content_length":"10288","record_id":"<urn:uuid:ce54804d-9107-4998-81ae-65f068b2f9bc>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fabulous Adventures In Coding
Generating random non-uniform data in C#
UPDATE: I've posted a related article here.
When building simulations of real-world phenomena, or when generating test data for algorithms that will be consuming information from the real world, it is often highly desirable to produce
pseudo-random data that conform to some non-uniform probability distribution.
But perhaps I have already lost some readers who do not remember STATS 101 all those years ago. I sure don't. Let's take a step back.
The .NET Framework conveniently provides you with a pseudo-random number generator that produces an approximately uniform distribution.^1
By a "uniform distribution" we mean that if you took a whole lot of those random numbers and put them in two buckets, based on whether they were greater than one half or smaller than one half, then
you would expect to see roughly half the numbers in each bucket; there is no bias towards either side. But moreover: if you took those same numbers and put them in ten buckets, based on whether they
were in the range 0.0-0.1, 0.1-0.2, 0.2-0.3, and so on, you would also expect to see no particular bias towards any of those buckets either. In fact, no matter how many buckets of uniform size you
make, if you have a large enough sample of random numbers then each bucket will end up with approximately the same number of items in it.
That's what we mean by a "uniform probability distribution": the number of items you find in the bucket is proportional to the size of the bucket, and has nothing to do with the position of the
bucket. Here I've generated one hundred thousand pseudo-random numbers between zero and one, put them into fifty buckets, and graphed the number of items found in each bucket.^2
Such a graph is called a histogram. As you can see, each of the 50 buckets contains approximately 2% of the total, and there seems to be no particular pattern to which buckets were lucky enough to
get a few more than average. This histogram is entirely characteristic of a uniform distribution.
Now let's consider for a moment the relationship between the geometry and probability. Suppose we chose one of those data at random, from the hundred thousand points. What is the probability that it
was in bucket 25? Obviously, the "height" of that bucket on the graph divided by the total number of items. But geometrically, we could think of this probability as the area under the curve limited
to the region of bucket 25 divided by the total blue area.
Do you notice something vexing about this graph though? The axes are completely determined by the arbitrary factors that I chose in running my simulation. Let's normalize this. We'll re-label the
horizontal axis so that it represents the highest value that would have been put in that bucket. How shall we re-label the vertical axis? Well, we said that the probability of a particular datum
being found in a particular bucket was proportional to the area of that bucket. Therefore, let's recalibrate the axis such that the total blue area is 100%. That way, the answer to "what is the
probability of a given item being found in such and such a region?" is precisely the area of that region:
This normalization has the beautiful property that no matter how many random numbers we generate, and no matter how many buckets we put them in, every histogram will look more or less the same, and
will have the same labels on the axes; the distribution will go from zero to one on both axes and the total area will always be 100%.
Now imagine what would happen in the limiting case. Suppose we kept generating more and more numbers; trillions of them. And we made the buckets smaller and smaller. If you squint a bit, you might as
well just say that you're going to do a graph as though the whole thing were continuous, and just draw the extraordinarily simple line graph:
And there you have it; the continuous uniform distribution. But that is certainly not the only distribution! There is the famous "bell curve" or "normal" distribution. The range of the normal
distribution is infinite on both ends; a normally-distributed random value can have any magnitude positive or negative. Here's Wikipedia's image of four different bell curves:
Remember, these are just the limiting case of a histogram with a huge number of buckets and a huge number of samples. With these distributions, again, the probability of a value being found in a
particular range is proportional to the area under the curve in that range. And of course, the area under each one is the same -- 100% -- which is why the skinny bell is so much taller than the wide
one. And notice also how the vast majority of the area is very, very close to the peak in every case. The normal distribution is a "thin, long tail" distribution.
There are other distributions as well. For example, the Cauchy distribution also has the characteristic bell shape, but its tail is a bit "fatter" than the normal distribution. Here, again from
Wikipedia we have four Cauchy distributions:
Notice how much fatter those tails are, and how the "bell" correspondingly has to be narrower to ensure that the area still adds up to 100%.
You know something that is still a bit odd about these graphs though? Because the probability is proportional to the area under the curve, the actual value of the curve at any given point isn't
really that meaningful. Another way to characterize a probability distribution is to work out the probability of a given number or any number less than it being chosen; that is the area under the
curve to the left of that number. Of course, the area under the curve is simply the integral, so let's graph the integral. We know that such a curve is going to increase monitonically from a limit on
the left of zero to a limit on the right of one, since the probability of getting any very, very small number is close to zero, and the probability of getting any number smaller than a very large
number is close to 100%. If we graph the integral of these four Cauchy distributions we get:
That graph is called the cumulative distribution and notice how it is extremely easy to read! Consider the purple line; clearly the probability of getting a value of two or less is about 85%; we can
just read it right off. This graph has exactly the same information content as the "histogram limit" probability distribution; its just a bit easier to read. It is quite difficult to tell from
reading the original graph that about 85% of the area under the curve is to the left of the 2.
Apropos of nothing in particular, I note that we have a function here that is monotone increasing; that means that we can invert it. Let's swap the x and y axes:
This function -- the inverse of the integral of the probability distribution -- is called the quantile function. It tell you "given a probability p from zero to one, what is the number x such that
the probability of getting a number less than x is equal to p?" That graph should look familiar to anyone who has taken high school trig; the quantile function of the Cauchy distribution is simply
the tangent function rejiggered to go from zero to one instead of -π / 2 to π / 2.
So that's pretty neat; the Cauchy bell-shaped curve is in fact just the derivative of the arctangent! But so what? How is this relevant?
We started this fabulous adventure by saying that sometimes you want to generate numbers that are random but not uniform. The graph above transforms uniformly-distributed random numbers into
Cauchy-distributed random numbers. It is amazing, but true! Check it out:
private static Random random = new Random();
private static IEnumerable<double> UniformDistribution()
while (true) yield return random.NextDouble();
private static double CauchyQuantile(double p)
return Math.Tan(Math.PI * (p - 0.5));
private static int[] CreateHistogram(
IEnumerable<double> data,
int buckets,
double min,
double max)
int[] results = new int[buckets];
double multiplier = buckets / (max - min);
foreach (double datum in data)
double index = (datum - min) * multiplier;
if (0.0 <= index && index < buckets)
results[(int)index] += 1;
return results;
and now you can tie the whole thing together with:
var cauchy = from x in UniformDistribution() select CauchyQuantile(x);
int[] histogram = CreateHistogram(cauchy.Take(100000), 50, -5.0, 5.0);
And if you graph the histogram, sure enough, you get a Cauchy distribution:
So, summing up: if you need random numbers that are distributed according to some distribution, all you have to do is
1. Work out the desired probability distribution function such that the area under a portion of the curve is equal to the probability of a value being randomly generated in that range.
2. Integrate the probability distribution to determine the cumulative distribution.
3. Invert the cumulative distribution to get the quantile function.
4. Transform your uniformly distributed random data by running it through the quantile function.
Piece of cake!
Next time: a simple puzzle based on an error I made while writing the code above.
1. The set of real values representable by doubles is not uniformly distributed, and the Random class is not documented as producing a uniform distribution. But in practice it is a reasonable
approximation. ↩
2. Using the awesome Microsoft Chart Control, which now ships with the CLR. It was previously only available as a separate download. ↩ | {"url":"http://ericlippert.com/2012/02/21/generating-random-non-uniform-data/","timestamp":"2014-04-18T05:48:14Z","content_type":null,"content_length":"45348","record_id":"<urn:uuid:95298d59-4c1d-4a4d-96d6-b5b336ad4b74>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
If I pick all of the favorites, will I maximize my chances of winning Buffett’s billion dollars?
The short answer to this question is… maybe. Picking all of the favorites in the first round definitely maximizes your chances of getting as many first round picks correct as possible. But depending
on how all of the potential matchups break down in subsequent rounds, it may not be optimal (in the sense of maximizing your chances of a perfect bracket.)
To see why, let’s consider a simplified example of four teams. Say those teams are Syracuse, Robert Morris, George Washington, and Gonzaga (the top left quadrant in Jerry Palm’s projected bracket as
of Monday). For simplicity, let’s just assume that Syracuse beats Robert Morris. Then the question becomes, which is most likely to happen: 1) GW beats Gonzaga and then Syracuse beats GW or 2)
Gonzaga beats GW and then Syracuse beats Gonzaga. (Presumably Syracuse winning in the 2nd round is more likely than an upset by either of the other teams, so we can eliminate those possibilities
Syracuse would be favored against either Gonzaga or GW, but let’s just assume that Gonzaga matches up much better against Syracuse than GW does, so Syracuse would have a 70% chance of beating GW, but
only a 60% chance of beating Gonzaga. Let’s also assume that Gonzaga also has a 53% chance of beating GW.
If I were to pick favorites throughout the bracket, I would pick Gonzaga over GW and then Syracuse over Gonzaga. That outcome would have a 31.8% chance of playing out (53% * 60%). On the other hand,
GW beating Gonzaga and then getting beat by Syracuse has a 32.9% chance of occurring (47% * 70%). Thus, the most likely complete sub-bracket here involves Gonzaga being upset (albeit slightly).
How realistic is this scenario? These sorts of things can happen, especially when other circumstances come into play (say hypothetically the second game here were going to be played in Spokane,
giving Gonzaga a huge relative advantage over GW). Nonetheless, they don’t tend to be major factors, so generally picking the favorites to win in each round as you fill out your bracket will be near
optimal in terms of maximizing your expected number of correct picks in a bracket game (note: not necessarily optimal in terms of maximizing your expected score, but we will get into that later), as
well as maximizing your chances of that elusive “perfect bracket.”
It may not maximize your chances of actually winning a billion dollars though, because odds are, you won’t be the only person following this strategy. So in the off off off chance all 63 of your
picks come in, you will actually have to split the money. | {"url":"http://blog.bracketvoodoo.com/post/76495432618/if-i-pick-all-of-the-favorites-will-i-maximize-my","timestamp":"2014-04-16T19:00:27Z","content_type":null,"content_length":"57944","record_id":"<urn:uuid:94d8a739-cf0d-44af-9fbe-a4773fda9e60>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Yan Peng
• Yan Peng
• Assistant Professor
• Department of Mathematics & Statistics
• 2128 Engr and Comp Sci Bldg
• Norfolk, VA 23529
• 757-683-6001
• Education
National University of Singapore, 2005
Major: Computational Fluid Dynamics
Degree: Ph. D.
Nanjing University of Aeronautics & Astronautics, 1999
Major: Computational Aerodynamics
Degree: M.S.
Nanjing University of Aeronautics & Astronautics, 1996
Major: Computational Aerodynamics
Degree: B.Eng.
• Contracts, Grants and Sponsored Research
High-order multiscale methods for nonequilibrium flows
Sponsoring Organization: US AFOSR
Date: September 1, 2009 - August 31, 2012
Awarding Organization: Federal
Investigators:Luo, L.; Hu, F. Q.; Liao, W.; Peng, Y.
Numerical and experimental study of complex bio-fluids in microfluidic lab-on-a-chip for bio-medical applications
Sponsoring Organization: Old Dominion Research Foundation
Date: January 1, 2010 - June 30, 2010
Awarding Organization: Old Dominion University
Investigators:Peng, Y.; Luo, L.; Liao, W.; Qian, S.
Computing using general purpose graphic processing units (NNL08AA00C)
Sponsoring Organization: NASA LaRC/SSAIC
Date: June 24, 2008 - October 31, 2009
Awarding Organization: Federal
Investigators:Luo, L.; Liao, W.; Peng, Y.
Collaborative research: Efficient lattice Boltzmann methods for multiphase and multicomponent flows (NSF proposal 0500213)
Sponsoring Organization: NSF
Date: June 15, 2005 - June 14, 2008
Awarding Organization: Federal
Investigators:Peng, Y.
Development and implementation of effective multigrid solvers for the compressible Navier-Stokes equations in 3D
Sponsoring Organization: NIA/NASA LaRC/Army
Date: April 1, 2007 - September 25, 2007
Awarding Organization: Federal
Investigators:Luo, L.; Liao, W.; Peng, Y.
MURI proposal (Topic 19): To investigate and characterize non-thermochemistry equilibrium effects on turbulence physics
Sponsoring Organization: Air Force Office Scientific Research/DOD
Date: June 15, 2004 - May 31, 2007
Awarding Organization: Federal
Investigators:Peng, Y.
Numerical study of fluid flow and heat transfer in micro devices
Sponsoring Organization: Agency for Science, Technology and Research, Singapore
Date: October, 2002 - October, 2005
Awarding Organization: Other
Investigators:Peng, Y.
Development of an efficient software package for simulation of fluid flows
Sponsoring Organization: Defence Science & Technology Agency, Singapore
Date: September, 2002 - September, 2005
Awarding Organization: Other
Investigators:Peng, Y. | {"url":"http://odu.edu/directory/people/y/ypeng","timestamp":"2014-04-17T00:51:29Z","content_type":null,"content_length":"23532","record_id":"<urn:uuid:2db65bb1-f61b-4b34-8d7e-8553e548009d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alright, I Know How To Do This But Everytime I ... | Chegg.com
Alright, I know how to do this but everytime I try to add thesethree values I end up getting the wrong phase angle and amplitude.
i(t) = 10cos(15t +40) -8cos(15t +130) + 6sin(15t -20)
I need to add all these up using phasors.
The last sin needs to be coverted to cos so I subtracted -20 -90 to get -110.
So phasor,
10<40 -8<130 + 6<-110
So to add these I convert from polar to rectangular.
10cos(40) -8cos(130) +6cos(-110) REAL Part
[10sin(40) -8sin(130) +6sin(-110)]j Complex Part
So I plug this into calculator and get -9.73 + 15.16j
I take the tan-1(15.16/-9.73) and get the angle -57.34degrees. But the answer is suppost to be -26.41, why am I notgettting the right answer?!?
Please respond promptly for lifesaver.
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/alright-know-everytime-try-add-thesethree-values-end-getting-wrong-phase-angle-amplitude-t-q407234","timestamp":"2014-04-20T18:13:11Z","content_type":null,"content_length":"22504","record_id":"<urn:uuid:b687ba6b-c2ba-4b83-9f82-8e991bc2c957>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: ZAMM Á Z. Angew. Math. Mech. 82 (2002) 2, 115124
Ambrosi, D.
Infiltration through Deformable Porous Media
The equations driving the flow of an incompressible fluid through a porous deformable medium are derived in the frame-
work of the mixture theory. This mechanical system is described as a binary saturated mixture of incompressible compo-
nents. The mathematical problem is characterized by the presence of two moving boundaries, the material boundaries of
the solid and the fluid, respectively. The boundary and interface conditions to be supplied to ensure the well-posedness of
the initial boundary value problem are inspired by typical processes in the manufacturing of composite materials. They
are discussed in their connections with the nature of the partial stress tensors. Then the equations are conveniently
recast in a material frame of reference fixed on the solid skeleton. By a proper choice of the characteristic magnitudes of
the problem at hand, the equations are rewritten in non-dimensional form and the conditions which enable neglecting the
inertial terms are discussed. The second part of the paper is devoted to the study of one-dimensional infiltration by the
inertia-neglected model. It is shown that when the flow is driven through an elastic matrix by a constant rate liquid inflow
at the border some exact results can be obtained.
Key words: infiltration, mixture theory, inertia, Darcy's law, porous deformable media, moving boundaries
MSC (2000): 76S05
0. Introduction
The mathematical modelling of the infiltration of an incompressible fluid through a porous deformable solid is of inter-
est in a number of industrial and environmental applications. Notable examples are the manufacturing of composite
materials by injection molding and soil consolidation problems. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/106/2953988.html","timestamp":"2014-04-19T23:09:35Z","content_type":null,"content_length":"8870","record_id":"<urn:uuid:b629ecae-ef24-4dc0-9fbc-fa8721f0dbf3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Temperature and thermal expansion
We'll shift gears in the course now, moving on to thermal physics.
Temperature scales
In the USA, the Fahrenheit temperature scale is used. Most of the rest of the world uses Celsius, and in science it is often most convenient to use the Kelvin scale.
The Celsius scale is based on the temperatures at which water freezes and boils. 0°C is the freezing point of water, and 100° C is the boiling point. Room temperature is about 20° C, a hot summer day
might be 40° C, and a cold winter day would be around -20° C.
To convert between Fahrenheit and Celsius, use these equations:
The two scales agree when the temperature is -40°. A change by 1.0° C is a change by 1.8° F.
The Kelvin scale has the same increments as the Celsius scale (100 degrees between the freezing and boiling points of water), but the zero is in a different place. The two scales are simply offset by
273.15 degrees. The zero of the Kelvin scale is absolute zero, which is the lowest possible temperature that a substance can be cooled to. Several physics formulas involving temperature only make
sense when an absolute temperature (a temperature measured in Kelvins) is used, so the fact that the Kelvin scale is an absolute scale makes it very convenient to apply to scientific work.
Measuring temperature
A device used to measure temperature is called a thermometer, and all thermometers exploit the fact that properties of a material depend on temperature. The pressure in a sealed bulb depends on
temperature; the volume occupied by a liquid depends on temperature; the voltage generated across a junction of two different metals depends on temperature, and all these effects can be used in
Linear thermal expansion
The length of an object is one of the more obvious things that depends on temperature. When something is heated or cooled, its length changes by an amount proportional to the original length and the
change in temperature:
The coefficient of linear expansion depends only on the material an object is made from.
If an object is heated or cooled and it is not free to expand or contract (it's tied down at both ends, in other words), the thermal stresses can be large enough to damage the object, or to damage
whatever the object is constrained by. This is why bridges have expansion joints in them (check this out where the BU bridge meets Comm. Ave.). Even sidewalks are built accounting for thermal
Holes expand and contract the same way as the material around them.
Consider a 2 m long brass rod and a 1 m long aluminum rod. When the temperature is 22 °C, there is a gap of 1.0 x 10-3 m separating their ends. No expansion is possible at the other end of either
rod. At what temperature will the two bars touch?
The change in temperature is the same for both, the original length is known for both, and the coefficients of linear expansion can be looked up in a table.
Both rods will expand when heated. They will touch when the sum of the two length changes equals the initial width of the gap. Therefore:
So, the temperature change is:
If the original temperature was 22 °C, the final temperature is 38.4 °C.
Thermal expansion : expanding holes
Consider a donut, a flat, two-dimensional donut, just to make things a little easier. The donut has a hole, with radius r, and an outer radius R. It has a width w which is simply w = R - r.
What happens when the donut is heated? It expands, but what happens to the hole? Does it get larger or smaller? If you apply the thermal expansion equation to all three lengths in this problem, do
you get consistent results? The three lengths would change as follows:
The final width should also be equal to the difference between the outer and inner radii. This gives:
This is exactly what we got by applying the linear thermal expansion equation to the width of the donut above. So, with something like a donut, an increase in temperature causes the width to
increase, the outer radius to increase, and the inner radius to increase, with all dimensions obeying linear thermal expansion. The hole expands just as if it's made as the same material as the hole.
Volume thermal expansion
When something changes temperature, it shrinks or expands in all three dimensions. In some cases (bridges and sidewalks, for example), it is just a change in one dimension that really matters. In
other cases, such as for a mercury or alcohol-filled thermometer, it is the change in volume that is important. With fluid-filled containers, in general, it's how the volume of the fluid changes
that's important. Often you can neglect any expansion or contraction of the container itself, because liquids generally have a substantially larger coefficient of thermal expansion than do solids.
It's always a good idea to check in a given situation, however, comparing the two coefficients of thermal expansion for the liquid and solid involved.
The equation relating the volume change to a change in temperature has the same form as the linear expansion equation, and is given by:
The volume expansion coefficient is three times larger than the linear expansion coefficient.
Temperature, internal energy, and heat
The temperature of an object is a measure of the energy per molecule of an object. To raise the temperature, energy must be added; to lower the temperature, energy has to be removed. This thermal
energy is internal, in the sense that it is associated with the motion of the atoms and molecules making up the object.
When objects of different temperatures are brought together, the temperatures will tend to equalize. Energy is transferred from hotter objects to cooler objects; this transferred energy is known as
Specific heat capacity
When objects of different temperature are brought together, and heat is transferred from the higher-temperature objects to the lower-temperature objects, the total internal energy is conserved.
Applying conservation of energy means that the total heat transferred from the hotter objects must equal the total heat transferred to the cooler objects. If the temperature of an object changes, the
heat (Q) added or removed can be found using the equation:
where m is the mass, and c is the specific heat capacity, a measure of the heat required to change the temperature of a particular mass by a particular temperature. The SI unit for specific heat is J
/ (kg °C).
This applies to liquids and solids. Generally, the specific heat capacities for solids are a few hundred J / (kg °C), and for liquids they're a few thousand J / (kg °C). For gases, the same equation
applies, but there are two different specific heat values. The specific heat capacity of a gas depends on whether the pressure or the volume of the gas is kept constant; there is a specific heat
capacity for constant pressure, and a specific heat capacity for constant volume.
0.300 kg of coffee, at a temperature of 95 °C, is poured into a room-temperature steel mug, of mass 0.125 kg. Assuming no energy is lost to the surroundings, what does the temperature of the mug
filled with coffee come to?
Applying conservation of energy, the total change in energy of the system must be zero. So, we can just add up the individual energy changes (the Q's) and set the sum equal to zero. The subscript c
refers to the coffee, and m to the mug.
Note that room temperature in Celsius is about 20°. Re-arranging the equation to solve for the final temperature gives:
The temperature of the coffee doesn't drop by much because the specific heat of water (or coffee) is so much larger than that of steel. This is too hot to drink, but if you leave it heat will be
transferred to the surroundings and the coffee will cool.
Changing phase; latent heat
Funny things happen when a substance changes phase. Heat can be transferred in or out without any change in temperature, because of the energy required to change phase. What is happening is that the
internal energy of the substance is changing, because the relationship between neighboring atoms and molecules changes. Going from solid to liquid, for example, the solid phase of the material might
have a particular crystal structure, and the internal energy depends on the structure. In the liquid phase, there is no crystal structure, so the internal energy is quite different (higher,
generally) from what it is in the solid phase.
The change in internal energy associated with a change in phase is known as the latent heat. For a liquid-solid phase change, it's called the latent heat of fusion. For the gas-liquid phase change,
it's the latent heat of vaporization, which is generally larger than the latent heat of fusion. Latent heats are relatively large compared to the heat required to change the temperature of a
substance by 1° C.
If you use the sum-of-all-the-Q's equals zero equation, you have to be careful with the heat associated with something changing phase because you need to put it in with the appropriate sign. If heat
is going into a substance changing phase, such as when it's melting or boiling, the Q is positive; if heat is being removed, such as when it's freezing or condensing, the Q is negative. We don't have
to worry about the signs for the heat required to change temperature, because the sign is already built in to the change in temperature.
Note that a change in phase takes place only under the right conditions. Water, for example, doesn't freeze at 10 °C, at least not at atmospheric pressure. If you had water at that temperature, you
would first need to cool it to the melting point, 0 °C, before it would start to freeze.
If you're putting in heat from an outside source, the sum-of-all-the-Q's equation becomes: | {"url":"http://physics.bu.edu/~duffy/py105/notes/Temperature.html","timestamp":"2014-04-19T02:55:30Z","content_type":null,"content_length":"11253","record_id":"<urn:uuid:4b54f4d1-72e3-4da3-8457-d67ef28ef8d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Condensed Matter Seminar
Condensed Matter Seminar Series
Exotic Quantum Critical Points
Matt Hastings
Duke University
Thursday September 8, 4:00pm, Room 299, Physics Building
(note irregular time and place)
Abstract: The standard paradigm for describing critical phenomena is the Landau paradigm of symmetry breaking, but it was proposed in the last decade that so-called "exotic" or "de-confined" quantum
critical points may exist that lie outside this paradigm. However, despite extensive theoretical work, the existence of such critical points in concrete lattice Hamiltonians remained controversial:
possible candidates exist but the interpretation of the numerical data is in question. I'll talk about recent work with Roger Melko and Sergei Isakov presenting strong evidence for the existence of
such a critical point, the so-called XY* point involving condensation of fractionalized particles carrying conserved U(1) charge coupled to a Z_2 gauge theory. By a combination of studying scaling of
familiar quantities like the correlation function (where we find a strongly nonclassical exponent) with studying scaling of novel functions of the winding number distribution (which directly probe
topological properties of the discrete gauge theory), we present "smoking gun" evidence for such a point in a Bose-Hubbard model.
Host: Harold Baranger | {"url":"http://www.phy.duke.edu/~cmseminar/cmseminar1112/hastings.html","timestamp":"2014-04-18T21:53:11Z","content_type":null,"content_length":"2584","record_id":"<urn:uuid:2e5c22bb-a946-4486-a522-39c08f3d37fc>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimzation problem, cylinder in sphere
November 13th 2010, 05:21 PM #1
Dec 2009
Optimzation problem, cylinder in sphere
A right circular cylinder is inscribed in a sphere of radius r. Find the largest possible volume of such a cylinder.
I don't even know where to begin to find the relationship of height to radius of the sphere....
Think of the problem in 2D. Center a circle on the Cartesian axes. draw a rectangle in it, also centered. with the edges touching the circle. think of this as viewing the cylinder in the sphere
from sideways. now, do you knwo what you want to maximize?
November 13th 2010, 05:38 PM #2 | {"url":"http://mathhelpforum.com/calculus/163130-optimzation-problem-cylinder-sphere.html","timestamp":"2014-04-16T08:02:11Z","content_type":null,"content_length":"33629","record_id":"<urn:uuid:39d11a49-3118-4924-af68-1b94cf680b8a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lauda on Frobenius Algebras and Open Topological Strings
Posted by Urs Schreiber
Here are some notes on the preprint
A. Lauda
Frobenius algebras and planar open string topological field theories
The author
1) points out a relation between adjunctions and Frobenius algebras.
2) uses categorified adjunctions in order to describe categorified Frobenius algebras and topological membranes.
Turns out that point 1) relies on the same mechanism which is also responsible for the phenomenon that 2-trivializations of 2-functors give rise to Frobenius algebras.
Thanks to John Baez for making me aware of this paper by means of his TWF.
(I had a pretty long review of this paper almost done yesterday when a computer crash erased it all. Here is a second but inevitably shorter version. )
What I am talking about here is mostly the result presented first in
Aaron Lauda
Frobenius algebras and ambidextrous adjunctions
which is a strengthening of a result found in
M. Müger
From subfactors to categories and topology. I. Frobenius algebras in and Morita equivalence of tensor categories.
J. Pure Appl. Algebra, 180 (2003)
amplified by John Baez in TWF 174.
As a motivation, consider the following fact.
Every (direct sum of) matrix algebra(s) can be regarded as a Frobenius algebra. (Using Wedderburn’s theorem.) The product is ‘index contraction’, the coproduct is ‘insertion of a unit’.
Phrase this in more highbrow terms. Let $V$ be some vector space and $\mathrm{End}\left(V\right)\simeq V\otimes {V}^{*}$ its space of endomorphisms. Let’s regard the monoidal category $\mathrm{Vect}$
of vector spaces as a 2-category with a single object, whose 1-morphisms are vector spaces, 2-morphisms are linear maps and horizontal composition is the tensor product. For instance $V\otimes {V}^
{*}$ looks like
(1)$•\stackrel{V}{\to }•\stackrel{{V}^{*}}{\to }•\phantom{\rule{thinmathspace}{0ex}}.$
‘Index contraction’ (the evaluation map of covectors on vectors) is a linear map, hence a 2-morphism, of the kind
(2)$\begin{array}{c}•\stackrel{{V}^{*}}{\to }•\stackrel{V}{\to }•\\ e⇓\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\\ •\stackrel{K}{\to }•\end{array}$
where $K$ is the ground field which we are working on, regarded as a 1-D vector space. ‘Insertion of a unit’ is the map
(3)$\begin{array}{c}•\stackrel{K}{\to }•\\ i⇓\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\\ •\stackrel{V}{\to }•\stackrel{{V}^{*}}{\to }•\end{array}$
which sends $1\in K$ to ${\mathrm{Id}}_{V}\in \mathrm{End}\left(V\right)$. This is the archetypical example of what is called an adjunction in a 2-category. In fact, since $\mathrm{Vect}$ is
symmetric, the order of $V$ and ${V}^{*}$ in the above is not really essential and hence we really have something Aaron Lauda calls an ambidextrous adjunction. The way I have approached adjunctions
here it is clear that they are related to algebras. It is a simple exercise to show that the product and coproduct coming from $e$ and $i$ above are such that we really have a Frobenius algebra.
Aaron Lauda turns this into a stronger and more precise theorem (his theorem 8) which essentially says that every Frobenius algebra comes from an adjunction along the lines sketched above.
Building on that, the second part of the paper is concerned with categorifying this situation. If you know about the main idea of categorification, know how Frobenius algebras are related to
topological strings and know what a topological membrane is supposed to be it should not come as a surprise that there is a way to relate categorified Frobenius algebras with topological membranes. I
don’t want to say any more about this part right now.
What I do want to point out is, how the above mentioned result on adjunctions and Frobenius algebras is based on the same general mechanism which relates 2-trivializations of 2-functors with
Frobenius algebra.
Namely given a 2-functor (or just a 2-bundle without connection, if you prefer) which I shall call
from some 2-category of surface elements in $M$ to some target 2-category, we can restrict it to open subset ${U}_{i}\subset M$ and demand that there it looks like a ‘more trivial’ 2-functor ${tra}_
{i}$ in some sense.
Hence we demand there to be morphisms of 2-functors
(5)$\mathrm{tra}{\mid }_{{U}_{i}}\stackrel{{t}_{i}}{\to }{\mathrm{tra}}_{i}$
(6)${\mathrm{tra}}_{i}\stackrel{{\overline{t}}_{i}}{\to }\mathrm{tra}{\mid }_{{U}_{i}}$
which relate the two.
If these were 1-functors, we would define the transition between ${\mathrm{tra}}_{i}$ and ${\mathrm{tra}}_{j}$ as
(7)$\begin{array}{c}{\mathrm{tra}}_{i}\stackrel{{\overline{t}}_{i}}{\to }\mathrm{tra}{\mid }_{{U}_{\mathrm{ij}}}\stackrel{{t}_{j}}{\to }{\mathrm{tra}}_{j}\\ =\\ {\mathrm{tra}}_{i}\stackrel{{g}_{\
mathrm{ij}}}{\to }{\mathrm{tra}}_{j}\end{array}\phantom{\rule{thinmathspace}{0ex}}.$
But since 2-functor live in a 2-category, we should really replace the equality here with 2-morphisms
(8)$\begin{array}{c}{\mathrm{tra}}_{i}\stackrel{{\overline{t}}_{i}}{\to }\mathrm{tra}{\mid }_{{U}_{\mathrm{ij}}}\stackrel{{t}_{j}}{\to }{\mathrm{tra}}_{j}\\ {\varphi }_{\mathrm{ij}}⇓\phantom{\rule
{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\\ {\mathrm{tra}}_{i}\stackrel{{g}_{\mathrm{ij}}}{\to }{\mathrm{tra}}_{j}\end{array}$
(9)$\begin{array}{c}{\mathrm{tra}}_{i}\stackrel{{\overline{t}}_{i}}{\to }\mathrm{tra}{\mid }_{{U}_{\mathrm{ij}}}\stackrel{{t}_{j}}{\to }{\mathrm{tra}}_{j}\\ {\overline{\varphi }}_{\mathrm{ij}}⇑\
phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\\ {\mathrm{tra}}_{i}\stackrel{{g}_{\mathrm{ij}}}{\to }{\mathrm{tra}}_{j}\end{array}\phantom
This way one gets something rather similar to an adjunction. (It’s actually a little more general.) By precisely the same underlying mechanism which Aaron Lauda uses to show the relation between
adjunctions and Frobenius algebras, one finds that a 2-trivialization of the above sort gives rise to something like a Frobenius algebra structure.
There would be more to say, and I did before it was erased. Right now this shall be it.
Posted at December 15, 2005 6:10 PM UTC | {"url":"http://golem.ph.utexas.edu/string/archives/000700.html","timestamp":"2014-04-21T14:52:21Z","content_type":null,"content_length":"22400","record_id":"<urn:uuid:9c7787e7-39a9-4127-a325-cdf5b261cf01>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
New formulation of the Fourier modal method for crossed surface-relief gratings
A new formulation of the Fourier modal method (FMM) that applies the correctrules of Fourier factorization for crossed surface-relief gratings is presented.The new formulation adopts a general
nonrectangular Cartesian coordinate system, which gives the FMM greater generality and in some cases the ability to savecomputer memory and computation time. By numerical examples, the new FMM
isshown to converge much faster than the old FMM. In particular, the FMM isused to produce well-converged numerical results for metallic crossed gratings.In addition, two matrix truncation schemes,
the parallelogramic truncationand a new circular truncation, are considered. Numerical experiments showthat the former is superior.
© 1997 Optical Society of America
Lifeng Li, "New formulation of the Fourier modal method for crossed surface-relief gratings," J. Opt. Soc. Am. A 14, 2758-2767 (1997)
Sort: Year | Journal | Reset
1. P. Vincent, “A finite-difference method for dielectric and conducting crossed gratings,” Opt. Commun. 26, 293–296 (1978).
2. D. Maystre and M. Nevière, “Electromagnetic theory of crossed gratings,” J. Opt. (Paris) 9, 301–306 (1978).
3. S. T. Han, Y.-L. Tsao, R. M. Walser, and M. F. Becker, “Electromagnetic scattering of two-dimensional surface-relief dielectricgratings,” Appl. Opt. 31, 2343–2352 (1992).
4. G. H. Derrick, R. C. McPhedran, D. Maystre, and M. Nevière, “Crossed gratings: a theory and its applications,” Appl. Phys. 18, 39–52 (1979).
5. R. C. McPhedran, G. H. Derrick, M. Nevière, and D. Maystre, “Metallic crossed gratings,” J. Opt. (Paris) 13, 209–218 (1982).
6. G. Granet, “Diffraction par des surfaces bipériodiques: résolutionen coordonnées non-orthogonales,” Pure Appl. Opt. 4, 777–793 (1995).
7. J. B. Harris, T. W. Preist, J. R. Sambles, R. N. Thorpe, and R. A. Watts, “Optical response of bigratings,” J. Opt. Soc. Am. A 13, 2041–2049 (1996).
8. D. C. Dobson and J. A. Cox, “An integral equation method for biperiodic diffraction structures,”in International Conference on the Application and Theory ofPeriodic Structures, J. M. Lerner and
W. R. McKinney, eds., Proc. SPIE 1545, 106–113 (1991).
9. R. Bräuer and O. Bryngdahl, “Electromagnetic diffraction analysis of two-dimensional gratings,” Opt. Commun. 100, 1–5 (1993).
10. E. Noponen and J. Turunen, “Eigenmode method for electromagnetic synthesis of diffractive elementswith three-dimensional profiles,” J. Opt. Soc. Am. A 11, 2494–2502 (1994).
11. J.-J. Greffet, C. Baylard, and P. Versaevel, “Diffraction of electromagnetic waves by crossed gratings: a seriessolution,” Opt. Lett. 17, 1740–1742 (1992).
12. O. P. Bruno and F. Reitich, “Numerical solution of diffraction problems: a method of variation ofboundaries. III. Doubly periodic gratings,” J. Opt. Soc. Am. A 10, 2551–2562 (1993).
13. O. P. Bruno, and F. Reitich, “Calculation of electromagnetic scattering via boundary variations andanalytic continuation,” Appl. Computat. Electromagn. Soc. J. 11, 17–31 (1996).
14. L. Li, “Use of Fourier series in the analysis of discontinuous periodic structures,” J. Opt. Soc. Am. A 13, 1870–1876 (1996).
15. P. Lalanne and G. M. Morris, “Highly improved convergence of the coupled-wave method for TM polarization,” J. Opt. Soc. Am. A 13, 779–784 (1996).
16. G. Granet and B. Guizal, “Efficient implementation of the coupled-wave method for metallic lamellargratings in TM polarization,” J. Opt. Soc. Am. A 13, 1019–1023 (1996).
17. See, for example, R. C. Wrede, Introductionto Vector and Tensor Analysis (Dover, New York, 1972), Chap. 1.
18. We ignore the electromagnetic edge effect at the verticesof the zigzag contour. In the far field, this artificially introduced edgeeffect should be negligible.
19. At the time of the writing, I have not mathematically proventhe validity or invalidity of the hypothesis. However, the numerical examplesgiven in Section 7 seem to support this hypothesis. The
sum clearly correspondsto the Fourier coefficient of εE^2 with respectto x^1, albeit in a complicated way. The latter, as indicated in Eq. (26b), is continuous withrespect to x^2.
20. L. Li, “Formulation and comparison of two recursive matrix algorithms for modelinglayered diffraction gratings,” J. Opt. Soc. Am. A 13, 1024–1035 (1996).
21. S. Zohar, “Toeplitz matrix inversion: the algorithm of W. F. Trench,” J. Assoc. Comput. Mach. 16, 592–601 (1969).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-14-10-2758","timestamp":"2014-04-21T00:07:04Z","content_type":null,"content_length":"113284","record_id":"<urn:uuid:f8c08824-6d62-4490-b832-9c370b87f4f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
random number
How to use random number tables and generators
In simple random sampling, every potential sampling unit (individual or quadrat) has an equal chance of getting selected and the selection of one sampling unit does not affect the chance of selecting
another. Equal probability (the "simple" part of simple random sampling) and independence are met by sampling from a uniform statistical distribution. This is lucky because that is what random number
tables and random number generators on calculators use.
Random number generation is important in nearly every study in vegetation science. If you have never used random number generation, or are a bit rusty, go through this section carefully. You will be
applying the techniques you learn here in your class projects.
Some of the examples might make sense only after reading the section of the course on Simple Random Sampling. In fact, simple random sampling and random number selection are so intertwined that you
might have to go back and forth between the two sections.
How to use a typical random number generator in a calculator
Calculators with the capability to generate random numbers nearly always give a random uniform number between 0 and 1. To use these numbers, you need to convert them to a range that matches your
An example of selecting individuals at random
Consider an example of sampling by individuals. Imagine you have completed the design phase of your field study. You now want to take measurements from a subset of six canopy trees out of the 24 in
your statistical population. You have already numbered the trees from 1 to 24.
The calculator gives you a random real number between 0 and 1. To convert this number to a random integer between 1 and 24, multiply the calculator number by 24 and round up to the next integer.
(Here is a technicality: skip if the calculator gives 0.0000 exactly.) For example, if the calculator gives 0.5481, multiply 0.5481 by 24 to get 13.15. Round up to 14. This answer means that tree
number 14 is in your sample.
A simple way to use a random number table
A random number table is a series of digits (0 to 9) arranged randomly through the rows and columns. If you don't have access to a random number table, you can use one I generated for this class.
Click here to see the table or to print it. (You must have Acrobat Reader to read this file.)
Pick an arbitrary starting point (using darts, poking the table with your eyes closed, whatever; every part of the table is random, you just need to avoid starting at the same spot every time). Read
down the columns from the arbitrary starting point, accepting any integers in your range.
The same example of selecting individuals at random
In the tree example from the section on Simple Random Sampling, the range is 01 to 24, corresponding to trees number 1 to 24. If the number from the table is outside the range, skip it and try the
next one. Let's say that your arbitrary starting point is row 11 and column 2 on the course random number table. Looking at columns 2 and 3, the first entry is "10." So tree number 10 is in the
sample. The next entry is "02,"so tree number 2 is in the sample. But the next number is "86," which is outside the range of 1 to 24, so you skip it.
Three examples of selecting random locations
Consider another example, choosing random quadrat locations by the coordinate system. Imagine you have completed the design phase of your field study and you want to take measurements from eight 1-m^
2 quadrats within a 12-m by 15-m study area.
┃ First get the 12-m coordinate. Use the calculator to get a random number between 0 and 1. Multiply this │ ┃
┃ number by 12 to get the position of the quadrat along this axis. Get another random number for the 15-m │ It is important to use a lot of digits in your random number, for example, 0.3280 vs. ┃
┃ coordinate, and multiply it by 15 to get the position of the quadrat along this axis. For example, the │ 0.3. If you just used 0.3, then your locations could be no closer than 1.2 m apart! ┃
┃ calculator gives you 0.3280, which you multiply by 12 to get 3.94. The second number from the calculator is │ Do you see why? 0.3 times 12 gives 3.6 m and 0.4 times 12 gives 4.8 m. If you were ┃
┃ 0.7002, which you multiply by 15 to get 10.50. This means that the quadrat located at coordinate position │ using a 0.5-m by 0.5-m quadrat, then more then most of your study are could never be ┃
┃ 3.94 m and 10.50 m is part of your sample set. │ selected for sampling. This is no good!. ┃
┃ │ ┃
┃ Repeat these steps for the remaining seven quadrats. │ ┃
Now consider a second case, locating quadrats with the coordinate system. You first need to decide how carefully you want to locate your sample (like every meter, every 10 cm, or every cm). Sometimes
a resolution of 1 m is adequate, but tapes are usually marked in centimeters, so keeping a resolution of 0.01 m provides a more satisfactory randomization with no extra work either before hand or in
the field. For locating a quadrat along an axis of 12 m, you would look at two columns, for numbers from 00 to 12. If you want a resolution of 0.01 m, you would look at four columns (for a range of
0000 to 1200), and put a decimal point between the third and fourth digit. For example, an entry in the random number table of "0542" means that the coordinate is 5.42 m. If the next entry is outside
the range (like "6780"), skip it and try another one.
The third example is using the grid system with the random number generator on a calculator. Imagine you have completed the design phase of your field study. You want to take measurements within
eight 1-m^ 2 quadrats within a 12-m by 15-m study area. In this case, there are 180 possible 1-m^ 2 sections within the study area. So the process of random selection is simply picking integers from
1 to 180. To do this, multiply the random number (which is between 0 and 1) by 180 and round up to the next integer.
A technicality
In this class, and in most of vegetation science, sampling is done without replacement. That is, once selected, a data point (tree, quadrat location, whatever) is removed from further chance of being
selected. If the procedures described here generate the same data point (tree, quadrat location, whatever), just skip that random number and try another one.
If you can't wait to jump all over these methods of using random numbers, go to Assignments in Blackboard and select the quiz called "Using random numbers." Otherwise you can wait until the end of
the Simple random sampling in the field chapter.
© 2007 Mark V. Wilson and Oregon State University | {"url":"http://oregonstate.edu/instruct/bot440/wilsomar/Content/Random.htm","timestamp":"2014-04-17T01:07:40Z","content_type":null,"content_length":"8639","record_id":"<urn:uuid:101f6ecb-57a9-45c5-b9ed-bc3fe7ad7580>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
LC2200 BEQ
I really do not know where to put this topic, since it is a pretty general question(had this project to write this code in C, I was doing the pipeline of the branch instruction)
Say if a computer with an ALU, containing the following operations,
ADD, SUB, DIV, MUL, SHL, SHR, NOT, OR, XOR AND
How can the ALU perform the conditional statement for the branch?
Say, let A be register1
And let B be register2
And let C be tempRegister
(A - B) --> C //perform ALU's SUB, and store result of A - B in C
The next part is where I am stuck, and I am stuck on how will the ALU perform this, as i can only use the provided operations above, I think?
If (C == 0) //if C is zero (A and B are equal)
//set program counter to the offset
Thanks in advance for anyone who took their time to read this!
This post has been edited by GunnerInc: 02 June 2012 - 01:11 PM
Reason for edit:: Disabled emoticons so code displays properly | {"url":"http://www.dreamincode.net/forums/topic/281532-lc2200-beq-pipeline-how-does-the-all-perform-branch/","timestamp":"2014-04-19T07:46:08Z","content_type":null,"content_length":"98700","record_id":"<urn:uuid:76a7f826-b280-4e54-b3d9-d678bbe61c2a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-dev] Definition of gammaln(x) for negative x
G-J van Rooyen gvrooyen@gmail....
Sat Nov 1 14:53:56 CDT 2008
I think it probably makes sense to keep it the way it is. AFAIK
gammaln is typically used to calculate products and divisions with the
gamma function, where it makes more sense to transform the entire
calculation into the log-domain, in order to prevent numerical
e.g. gamma(A)*gamma(B)/gamma(C)*gamma(D)
= exp(gammaln(A)+gammaln(B)-gammaln(C)+gammaln(D))
which works fine if all the arguments produce positive gamma-values.
If negative gamma-values are produced (as described in ticket #737),
the same calculation can be done by calculating gammaln on the
absolute value of the arguments, and doing a sign correction at the
end. For this, only the signs of gamma(A), etc. are needed. The
original scipy/special/cephes/gamma.c writes the sign to a global
variable named sgngam; this never gets imported to python.
It might make sense to keep gammaln as it is, but to optionally return
the sign information of gamma(A) in some way.
Your thoughts?
2008/11/1 David Cournapeau <david@ar.media.kyoto-u.ac.jp>:
> G-J van Rooyen wrote:
>> Hey everyone
>> Ticket #737 refers:
>> -----
>> Gamma in negative for negative x whose floor value is odd. As such,
>> gammaln does not make sense for those values (while staying in the
>> real domain, at least). scipy.special.gammaln returns bogus values:
>> import numpy as np
>> from scipy.special import gamma, gammaln
>> print np.log(gamma(-0.5))
>> print gammaln(-0.5)
>> Returns nan in the first case (expected) and 1.26551212348 in the
>> second (totally meaningless value).
>> -----
>> The info line for gammaln reads:
>> * gammaln -- Log of the absolute value of the gamma function.
>> With this definition of gammaln, the function actually works fine,
>> since np.log(abs(gamma(-0.5))) is in fact 1.2655.
> I have just checked with R, R does define log gamma as the
> log(abs(gamma(x))) (I guess that's where the definition comes from). I
> find this definition a bit strange, that's not the one I have seen where
> I see it used, but I certainly don't claim to use what would be
> considered as a reference for this stuff (I mostly use log gamma to deal
> with precision problem of gamma in some statistics computation).
> cheers,
> David
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2008-November/009959.html","timestamp":"2014-04-18T01:31:52Z","content_type":null,"content_length":"5674","record_id":"<urn:uuid:aa0eb67e-2406-4495-891f-92839411801b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flight Question Manual calcultion of lunar transfer - Orbiter-Forum
Manual calcultion of lunar transfer
For better understanding of orbital mechanics I'm trying to perform the transfer from the Earth to the Moon and back to the Earth using only Keplerian orbital elements shown on Orbit MFD and have
some problems with that.
I usually start with DeltaGliderIV docked to ISS. If I initially align my orbital plane with Moon's one using Align planes MDF to make the transfer coplanar, calculating a point to burn engine isn't
a problem. If I perform Hohmann transfer, it will take exactly one half of period of the transfer orbit. So the angle of arc which Moon will cover during transfer is 180 degrees multiplied by ratio
of period of transfer orbit to the Moon's one. According to the Kepler's third law, it equals (Moon_SMa / Transfer_SMa) ^ 1.5. To make things simpler I neglect SMa of parking orbit (about 6.6M vs.
385M) and end up with angle of 180 * 0.5^1.5. So the Moon's true anomaly in rendezvous point will be Moon_TrA + (Moon_SMa / Transfer_SMa) ^ 1.5. Then I project this point to my orbit by reduction to
common periapsis (for both orbits having very low eccentricity it's OK, I suppose): Moon_AgP - Parking_AgP + Moon_TrA + (Moon_SMa / Transfer_SMa) ^ 1.5. And the point to burn engines will be on the
opposite side of the orbit: Moon_AgP - Parking_AgP + Moon_TrA + (Moon_SMa / Transfer_SMa) ^ 1.5 + 180. If I end up with value larger than 360 or lesser than 0, I just subtract or add 360. Despite all
neglecting, this formula sometimes leads me directly to the Moon's surface without mid-course correction.
But the whole point is to do this without Align planes MDF and without initial planes alignment at all for it requires much more fuel than to slow down when approaching Moon. So I need to calculate
non-coplanar transfer. I understand that rendezvous point will be in the point of intersection between parking orbital plane and the Moon's one, and the point to burn engine is right on the opposite
intersection. Now I need to calculate true anomalies of these points in term of both parking and Moon's orbits and I failed with it. I think I solved it for some particular cases such as Moon's orbit
being equatorial, but it's not the one. And I have no idea in how to solve it in general case. Can you help please?
The second problem being even more challenging is transfer back to the Earth from lunar orbit, for it involves leaving lunar Hill sphere on hyperbolic trajectory. I know that far enough from
celestial body hyperbola could approximated with it's asymptote. And I suspect that in a case of coplanar transfer the best way from the Moon to the Earth is to have this asymptote pointed to
direction opposite to Moon's own velocity and to fly with speed close enough to its one. But how do I do it? Well, I looked on the picture of the Moon's orbit around the Earth and my orbit around the
Moon in the same ecliptic frame and concluded that desired direction is "somewhere over here". And experimentally came up whit idea of burning engine about 45 before it. OK, I actually returned back
to the Earth but I want to know the right mathematical way to do it. And I don't know what to do in the case of non-coplanar transfer.
I'd also be grateful if you recommend me a book to explain this stuff. Previously I have read some of them but for some reason while dealing with complex problems as orbiting aspherical bodies they
didn't explain neither how to calculate anomaly of point to burn engines (they just say something like "in the point of desired periapsis) nor how to deal with hyperbolic transfers. | {"url":"http://www.orbiter-forum.com/showthread.php?p=354031","timestamp":"2014-04-17T18:23:02Z","content_type":null,"content_length":"43033","record_id":"<urn:uuid:13a67eed-3eef-4b40-aa05-6eb6acbf0518>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tomales Math Tutor
Find a Tomales Math Tutor
...I can teach all of the basic moves and some strategies. I may not be the best chess player in the world, but I am a patient teacher. I have a Masters in Mathematics.
26 Subjects: including differential equations, discrete math, chess, elementary math
...For older students, who have missed a solid foundation in fractions, percent, multiplication and division, algebra will be an extra challenge. So I make sure my students have really mastered
the elementary skills as they begin their study of algebra. Learning Algebra is like learning a whole new language.
12 Subjects: including prealgebra, algebra 1, SAT math, geometry
...Now I am happy to become a professional tutor so I can help more students. My teaching method is emphasized on conceptual learning rather than rote memorization. I would like to teach students
how to think and how to be engaged in a fun mathematics world.
22 Subjects: including calculus, trigonometry, algebra 1, algebra 2
...I have been a professional tutor since 2003, specializing in math (pre-algebra through AP calculus), AP statistics, and standardized test preparation. I am very effective in helping students to
not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is positive, supportive, and also fun.
14 Subjects: including prealgebra, ACT Math, algebra 1, algebra 2
...I know how difficult it can be to learn a subject such as mathematics. I know the material well, so I can zip through the material if that's how the student learns, or I can go through a
problem step-by-step until she or he has learned how to do it. Part of the problem is identifying the problem, the next step is solving it, and then the final step is remembering how to solve the
8 Subjects: including precalculus, trigonometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Tomales_Math_tutors.php","timestamp":"2014-04-17T04:54:57Z","content_type":null,"content_length":"23603","record_id":"<urn:uuid:5397db4a-04e1-499b-9c4f-5c863c165666>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the branching locus of the double cover of surfaces always one dimensional?
up vote 1 down vote favorite
Suppose $X$ and $Y$ are two smooth surfaces (over the complex numbers), and $f: X \to Y$ is a finite flat morphism of degree two. Is it necessarily true that the locus where $f$ is not a smooth
morphism (i.e. the ramification locus) always one dimensional (if not empty)?
1 Yes. The result which is true under more general conditions, is called "purity of the branch locus". I don't have access to my books right now, otherwise I'd give you a reference. – Donu Arapura
Jun 1 '13 at 4:02
2 yes, in this casethe locus is defined by one equation, the determinant. – roy smith Jun 1 '13 at 5:11
Since $X$ and $Y$ are smooth surfaces (this works for smooth varieties of the same dimension), the locus where $f$ is not a smooth morphism is given by the vanishing of the Jacobian determinant
1 (as said by Roy), say, written in local coordinates. Purity of the branch locus (Nagata; see, eg, Grothendieck's SGA 2, X, 3.4), is more general, because it extends to schemes (if $f\colon X\to Y$
is a quasi-finite morphism of integral schemes, $X$ normal and $Y$ regular, and if $f$ is unramified outside a 2-codimensional subset, then $f$ is étale) and does not assume that $X$ is itself
regular. – ACL Jun 1 '13 at 8:54
Thank you all for useful comments! – marker Jun 2 '13 at 8:50
add comment
1 Answer
active oldest votes
Yes. Otherwise there would be an isolated ramification point in $Y$. A link of that point is $S^3$, which does not have a nontrivial double cover. So upstairs, $X$ looks near that
up vote 5 down point like two ${\mathbb C}^2$s glued at a point, which isn't smooth.
vote accepted
I feel a little silly writing this topological answer, given the beautiful algebraic geometry answers above, but I'll add: note how things are different if $Y$ were codim $1$,
since the link would now be $S^1$, which does indeed have a nontrivial double cover. – Allen Knutson Jun 4 '13 at 2:20
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/132484/is-the-branching-locus-of-the-double-cover-of-surfaces-always-one-dimensional","timestamp":"2014-04-21T00:00:19Z","content_type":null,"content_length":"56233","record_id":"<urn:uuid:59d012fa-4ca1-4fe5-ae8f-74e634fc2294>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Upper Part of the Earnings Distribution in the United States: How Has It Changed?
by Kelvin R. Utendorf
Social Security Bulletin, Vol. 64 No. 3, 2001/2002
Text Description for Chart A-1.
Illustrative calculation of the Gini coefficient
There are two lines on the chart, labeled line of equality and Lorenz curve. The horizontal axis is labeled percentage of earners. The vertical axis is labeled percentage of earnings. Both axes range
from zero to 100 in increments of 20. In the chart, both lines begin at the zero-zero coordinate and end at the 100-100 coordinate.
The line of equality is a straight line. The Lorenz curve bends away from and appears below the line of equality. The Lorenz curve's maximum departure from the line of equality is near point A, at
approximately the 60-25 coordinate. The area between the line of equality and the Lorenz curve is labeled Area B. The area below the Lorenz curve is labeled Area C. | {"url":"https://www.socialsecurity.gov/policy/docs/ssb/v64n3/v64n3p1-text.html","timestamp":"2014-04-16T07:22:08Z","content_type":null,"content_length":"3978","record_id":"<urn:uuid:16dedfe0-4896-42f8-a5a8-04661e31373d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponential and Logarithmic Functions
Exponential and Logarithmic Functions
Properties of Logarithms. logb (ac) = logb a logb c. logb (a/c) = logb a - logb c ... Properties of Exponentials and Logarithms. y = logax ay = x. ay = x y ... – PowerPoint PPT presentation
Number of Views:933
Avg rating:3.0/5.0
Slides: 17
Added by: Anonymous
more less
Transcript and Presenter's Notes | {"url":"http://www.powershow.com/view/16512d-ZjU2N/Exponential_and_Logarithmic_Functions_powerpoint_ppt_presentation","timestamp":"2014-04-20T08:19:44Z","content_type":null,"content_length":"96613","record_id":"<urn:uuid:a294f9ed-84ad-4180-8ae7-95d49c6558b3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Replaces a continuous class with a derivative or a MQC by one or more continuous attributes.
Classified Examples (ExampleTableWithClass)
Input data set.
Classified Examples (ExampleTableWithClass)
Output data set.
This widget implements several techniques for assessing partial derivatives of the class variable for the given set of examples. The derivative is appended to the example table as a new class
attribute. The widget can compute either quantitative derivative by a chosen continuous attribute or a qualitative derivative by one or more attributes.
The widget is implemented to cache some data. After, for instance, computing the derivatives by x and y separately, the widget has already stored all the data to produce the derivatives by both in a
The Attributes box lists all continuous attributes and lets the user select the attribute by which she wants to compute the qualitative derivative. The selection is important only when the widget
actually outputs a qualitative derivative (this depends on other settings, described below). Buttons All and None select the entire list and nothing.
Derivatives by more than one attribute are mathematically questionable, and computing by many attributes can be slow and messy. Methods that are based on triangulation will include all attributes in
the triangulation, regardless of the selection, but then compute only the selected derivatives.
Box Method determines the used method and its settings. Available methods are First triangle, Star Regression, Univariate Star Regression and Tube Regression. First triangle is unsuitable for data
with non-negligible noise. Star regression seems to perform rather poor; the quantitative derivatives it computes are even theoretically wrong. Univariate Star Regression will handle noise well, but
also work well for very complex functions (like sin(x)sin(y) across several periods). Tube regression is very noise resistant, which can lead it to oversimplify the model, yet it is the only method
that does not use the triangulation and is thus capable of handling discrete attributes, unknown values and large number of dimensions. It may be slow when the number of examples is very large.
Detailed description of these methods can be found in Zabkar and Demsar’s papers.
Ignore differences below lets the user set a threshold for qualitative derivatives.
The widget can also put some data in meta attributes: the Qualitative constraint, as described above, Derivatives of selected attributes and the Original class attribute.
The changes take effect and the widget start processing when Apply is hit.
For documentation suggestions or questions please use our | {"url":"http://orange.biolab.si/docs/latest/widgets/rst/regression/pade/","timestamp":"2014-04-21T07:34:26Z","content_type":null,"content_length":"12362","record_id":"<urn:uuid:d8a2c540-ed4a-4ba8-8f74-779bc9670a82>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions on standard (motivic) conjectures
up vote 4 down vote favorite
1. Over an (algebraically closed) characteristic $p$ field, is it known that the cohomological equivalence of cycles relation (with respect to $\mathbb{Q}_l$-adic \'etale cohomology) does not depend
on the choice of a prime $l$ (distinct from $p$). At least, is it known that: if the numerical equivalence of cycles relation coincides with the cohomological one for one value of $l$, this is
also true for all other $l$'s?
2. I believe that in order to have Kunneth decompositions for cohomological (pure) motives and the Standard Lefshetz conjecture is suffices for the numerical equivalence of cycles relation to
coincide with the cohomological one. Is this true, or does the Hodge Standard conjecture play some role here (or in the first question)?
motives etale-cohomology algebraic-cycles
add comment
1 Answer
active oldest votes
The answer to both questions in 1 is NO. For example, Clozel has shown that for an abelian variety over the algebraic closure of a finite field, there are infinitely many l for which
numerical and homological equivalence coincide, but this doesn't help with proving the statement for all l (or even the independence of homological equivalence from l, for all l).
up vote 4 down
vote accepted The answer to question 2 can be found in Kleiman's articles.
1 Thanks; yet could you say more one question 2? I have some references, that contradictы my own knowledge on the subject. – Mikhail Bondarko Jan 16 '11 at 19:14
add comment
Not the answer you're looking for? Browse other questions tagged motives etale-cohomology algebraic-cycles or ask your own question. | {"url":"http://mathoverflow.net/questions/52248/questions-on-standard-motivic-conjectures/52255","timestamp":"2014-04-17T12:47:12Z","content_type":null,"content_length":"51503","record_id":"<urn:uuid:9b792cb5-7492-42f7-95c9-69733198aae9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Topic review (newest first)
2005-06-03 21:50:17
Thanks, mathsyperson. At least they are even and odd
Seems way too high to be any mathematical pattern. Well, any reasonably logical pattern, because you could construct something to fit if you needed to.
Thet reminds me of someone who developed a formula that was able to come up with the name of every US President. Give it a President No and it would spit out a number which could be turned into the
correct Presidents Name (I don't remember how, but it apparently worked). But when asked to predict the next President's name it responded with Zxyftys Ljqahdyd.
2005-06-03 20:57:10
If anyone's interested, the published answer was 337146, but it didn't tell you the two numbers or how they got to them.
Prime factor analysis shows that 337146=2x3x83x677, which we can use to speculate as to what the two numbers might be.
I'm guessing 498 and 677, but I still don't have a clue why.
2005-06-02 22:11:53
2005-06-01 10:12:48
594, 487, 566, 493, 310, 447, ____, ____
2005-05-31 15:01:03
Let's know what the solution is when it arrives.
2005-05-31 09:28:05
I tried everything i can think of and no result. Oh well...maybe next weeks will be easier. Thanks for the help anyway!
2005-05-29 22:26:08
Hi, mathsyperson. Good to have you on board.
2005-05-29 21:27:13
You're right, I'm sorry.
Anyway, as MathsIsFun has already tried everything I can think of, I'm guessing that the answer has some obscure Neopets connection type thing, meaning that unless we want to spend ages on the
website, we should give up on it.
2005-05-29 19:12:33
mathsyperson wrote:
Wow, that's a coincidence.
He's just happened to put up the current Lenny conundrum again.
What are the odds?
That was terrible. That shouldn't even classify as wit.
2005-05-29 16:14:35
Doesn't matter. A puzzle is a puzzle.
And this one is tough.
I have tried fibonacci, primes (only 487 is prime), powers, multiplication, and they all have random results.
I am thinking either
a) you can never guess the two numbers but you can guess their product, or
b) it needs to be decoded somehow (words, letters or something)
2005-05-29 11:11:17
mathsyperson wrote:
Wow, that's a coincidence.
He's just happened to put up the current Lenny conundrum again.
What are the odds?
I couldn't figure it out, so I asked for help. I'm pretty sure you're some middle school person who asks your teacher for help on these. Why not ask for help on here??? I'm not in math anymore, so
what better place to ask for help?
Guess you didn't think to come here and ask for help did you?
2005-05-29 10:42:42
Well, I have no idea, since there is no noticeable pattern:/( which is unbelievable because no sequence has an unnoticeable patern)
2005-05-28 22:55:36
Wow, that's a coincidence.
He's just happened to put up the current Lenny conundrum again.
What are the odds?
2005-05-28 16:02:06
If you take the next two numbers in this sequence and multiply them together, what number will you get?
594, 487, 566, 493, 310, 447, ____, ____
What is the answer?
A little help please and THANK YOU!!! | {"url":"http://www.mathisfunforum.com/post.php?tid=723&qid=6464","timestamp":"2014-04-20T18:34:46Z","content_type":null,"content_length":"22198","record_id":"<urn:uuid:c60f828e-e0f2-42f9-a0fa-f6e21b742ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
how do i output prime numbers in descending order using a stack?
up vote 1 down vote favorite
import java.util.Stack;
public class Primes{
public static void main(String[]args){
Stack<Integer> stack = new Stack<Integer>();
//number of primes to display
final int NUMBER_OF_PRIMES = 50;
//number of primes to display per line
final int NUMBER_OF_PRIMES_PER_LINE = 10;
//count number of primes
int count = 0;
int number = 2;
System.out.println("The first 50 primes are \n");
while(count < NUMBER_OF_PRIMES){
boolean isPrime = true;
for(int divisor = 2; divisor <= number/2; divisor++){
if(number % divisor == 0){
isPrime = false;
if(count % NUMBER_OF_PRIMES_PER_LINE ==0){
System.out.print(number + " ");
java stack primes
Side note: consider to use Deque instead of Stack: Deque<Integer> stack = new ArrayDeque<Integer>(); (See note in Javadoc of Stack: download.oracle.com/javase/6/docs/api/java/util/Stack.html ) –
Puce Feb 15 '11 at 19:46
Use LinkedList instead of ArrayDeque) – Stas Kurilin Feb 15 '11 at 19:50
ok i will try it out and see how far i get. Thanks. – not looking for answer Feb 15 '11 at 19:52
So what is your problem? – Thomas Jungblut Feb 15 '11 at 19:52
Am going to try work on this for a bit thank you for all the suggestions.. Its better i work it out myself.. :) – not looking for answer Feb 15 '11 at 20:08
add comment
4 Answers
active oldest votes
1. Read the javadoc for Stack and its parent class : Vector
2. When computing the 50 first prime members, instead of displaying them while you find them, store them in the stack
3. Once you have finished finding the primes, the stack contains all the primes you found. The smallest one is the first element of the stack, and the biggest one is the last element of
up vote 0 the stack. Start another loop from the end of the stack to the beginning to display the primes in descending order.
down vote
Note : Stack is an old class that should not be used anymore. You should prefer ArrayList.
"Note : Stack is an old class that should not be used anymore. You should prefer ArrayList." --- Uuhhh no, ArrayList and Stack solve totally different problems. – Martin Doms Feb 15 '11
at 19:58
@Martin while you're correct in a sense, you should read the Javadoc for the Stack class, which state that you should probably use something different. Specifically, they mention the
Deque interface. @JB In regard to my comment to Martin, that would make the preferred class LinkedList, not ArrayList. This is despite the fact that ArrayList is the typical replacement
for Vector. It should also be noted that Vector is not a bad class, it merely has overhead that you typically don't need. – corsiKa Feb 15 '11 at 20:17
But why would you use a List class instead of a Deque? Deque provides the correct constraints and a better interface for solving queuing problems. – Martin Doms Feb 15 '11 at 20:26
@Martin and @glowcoder : since my proposition suggests iterating in reverse order through the stack (which is a Vector, and thus has a relationship to ArrayList) and accessing it by
index, ArrayList fits perfectly. There is no need for stack operations (push, pop) in this problem : no need to empty the stack to display the prime numbers. – JB Nizet Feb 15 '11 at
add comment
Because it's a homework problem I won't give code, but here's the process, picking up from somewhere within your code.
1. If number is prime, push it onto the stack.
up vote 0 down vote 2. When the prime finder loop terminates, start popping from the stack in a new loop. The numbers will be popped in descending order (because they were pushed in ascending
add comment
First, have a look at Finding prime numbers with the Sieve of Eratosthenes (Originally: Is there a better way to prepare this array?) for a discussion of better ways of finding prime
up vote 0 numbers. Then, use the stack to reverse the order, based on the "last in, first out" property.
down vote
add comment
One interesting property of a stack is that it can be used to reverse order. This is because the first item to be pushed onto it is the last off.
Imagine if I push the letters of the word "pan" onto a stack, one by one. I first push "p", then "a", then "n". Now, since "n" was the last letter pushed, it is the first one to be popped
off the stack. So, when I remove the letters I get "n" followed by "a" followed by "p" - "nap". In such a manner, a stack can be used to reverse a word by considering it as a list of
The same is true for a list of prime numbers. If you have a list of the first NUMBER_OF_PRIMES prime numbers in ascending order (eg: 2, 3, 5, 7 ...), then you can perform the same trick
up vote 0 using a stack to change that list into descending order by pushing each onto the stack, then reading them off.
down vote
So, what I'd do is each time you detect a prime number, push it onto your stack until there are NUMBER_OF_PRIMES primes on the stack. Then, pop each item off the stack to print them in
reverse order.
It might also be worth spinning off your isPrime logic into its own function, once you get things running.
add comment
Not the answer you're looking for? Browse other questions tagged java stack primes or ask your own question. | {"url":"http://stackoverflow.com/questions/5008696/how-do-i-output-prime-numbers-in-descending-order-using-a-stack","timestamp":"2014-04-18T13:10:31Z","content_type":null,"content_length":"86182","record_id":"<urn:uuid:69abc371-fd93-43de-a05a-2c435d20be1e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
DICE: A New Pitching Stat
Defense Independent Component ERA
July 19, 2000
If you play Baseball Mogul, you have already encountered Defense Independent Component ERA ("DICE"), even though you don't realize it. This is because the artificial intelligence in Baseball Mogul
uses DICE to evaluate pitching talent. We also use it at Sports Mogul to create our annual player projections.
DICE starts with the concept of "Component ERA" invented by Bill James. The concept is pretty simple -- use the components of a pitcher's statistical performance (such as hits allowed and hit
batters) to predict a pitcher's ERA. Because there is a strong correlation between these individual events and the pitcher's ERA, you can actually estimate a pitcher's ERA in a season by just looking
at the components. In other words, you can predict earned runs allowed by looking at the individual events (such as walks and home runs) that led to the runs themselves.
ERA is a somewhat luck-based stat. One season is a relatively small sample size, and earned runs given up in one season may not be a true indicator of the pitcher's overall ability level. The pitcher
might have given up several home runs with the bases loaded, causing his ERA to be higher than it would have been if the home runs had been distributed randomly throughout the season.
By deriving a value from hits, walks, hit batters and home runs, Component ERA attempts to be a better evaluator of a pitcher's true ability to prevent runs.
Here is James' formula for Component ERA (CERA):
But there are a few problems with CERA:
The biggest is that it includes hits. Hits aren't a great indicator of a pitcher's true pitching ability. With the exception of home runs, the number of hits allowed by any pitcher are largely
affected by the quality of the defense behind. This makes sense, but it also stands up to statistical analysis. A pitcher's Strikeout Ratio (strikeouts pitched per 9 innings) is relatively consistent
from year to year. However, a pitcher's Hit-Out Ratio (ratio of hits to outs, after removing strikeouts and homeruns) doesn't have the same consistency.
The second problem I have with CERA is that it's tough to calculate. Although they aren't perfect, I like measures such as Slugging Percentage and Total Average with formulae that are pretty easy to
So, I created a slightly different form of Component ERA called "Defensive Independent Component ERA" (or DICE) that uses the variables in Component ERA, but removes hits (but leaves in Home Runs --
because these are almost never affected by defense).
At first, it looked something like this:
DICE = x + (y*(BB + HBP) + z*HR) / IP
Using all active pitchers in 1999 with 500 or more career Innings Pitched, I performed a regression on the above function to determine the constants x, y and z such that DICE best predicted their
career average ERA. (There were 229 pitchers in this data set).
But after some experimenting, I noticed that ERAs were also strongly correlated with strikeouts, even when the other stats (walks, hit batters, and home runs) were already taken into account. As
strikeouts are also defense-independent, it makes sense to add them to the formula. This is somewhat counter-intuitive. After all, a ground out can be just as good as a strikeout to end an inning.
But the regression doesn't lie -- strikeouts are more effective than other types of outs at reducing earned runs. Or more accurately, strikeout numbers are useful in predicting a pitcher's ERA.
So I added strikeouts to the formula and performed another regression to determine the correct coefficients to use in the formula. Finally, I found the integer coefficients that best matched the
DICE = 3 + (3*(BB + HBP) + 13*HR - 2*K) / IP
(The Mean Squared Error for this formula, across all 229 pitchers, is .100697. The Square Root of the Mean Squared Error is about .317 -- meaning that about 2/3 of all actual ERA values should fall
with .317 runs of a pitchers DICE value)
So there you have it:
1. Start with a value of 3 times the number of walks and hit batters
2. Add 13 for every home run allowed
3. Subtract 2 for every strikeout
4. Divide this total by the number of innings pitched
5. Finally, add this result to 3.00 to get the pitcher's Defense-Independent Component ERA (aka DICE).
Here's an example using Roger Clemens 1998 season (his most recent Cy Young Award):
DICE = 3.00 + (3 * (68 BB + 7 HB) + 13 * 9 HR - 2 * 292 K) / 264 IP = 2.14
Roger's actual ERA in 1998 was 2.05
Anyway, I first developed this stat to help me predict how a pitcher would perform in my rotisserie league. DICE is a better predictor of a pitcher's ERA in the upcoming year than any other stat I
could find (such as his previous year's actual ERA). Using these predictions, I was able to win the league for 4 years out of 6 (and I'm currently in 1st place in year 7). And of course DICE is one
of many tools we use inside the Baseball Mogul game engine. | {"url":"http://www.sportsmogul.com/content/dice.htm","timestamp":"2014-04-20T15:55:44Z","content_type":null,"content_length":"16065","record_id":"<urn:uuid:bcd9e7c5-7232-4d2f-8f82-a6852d886fbe>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding All Steady States in Biological Regulatory Networks
Finding All Steady States in Biological Regulatory Networks
01/2009; 00(00):1-11.
ABSTRACT Motivation. Computing the long term behavior of regulatory and signaling networks is critical in understanding how biological functions take place in organisms. Steady states of these
networks determine the activity levels of individual entities in the long run. Identifying all the steady states of these networks is difficult as it suffers from the state space explosion problem.
Results. In this paper, we propose a method for identifying all the steady states of regulatory and signaling networks accurately and efficiently. We build a mathematical model that allows pruning a
large portion of the state space quickly without causing any false dismissals. For the remaining state space, which is typically very small compared to the whole state space, we develop a randomized
algorithm that extracts the steady states. This algorithm estimates the number of steady states, and the expected behaviors of individual genes and gene pairs in steady states in an online fashion.
Also, we formulate a stopping criteria that terminates the randomized algorithm as soon as user supplied percentage of the results are returned with high confidence. Finally, in order to maintain the
scalability of our algorithm to very large networks, we develop a partitioning-based estimation strategy. We show that our algorithm can identify all the steady states accurately. Furthermore, our
experiments demonstrate that our method is scalable to virtually any large real biological network.
[show abstract] [hide abstract]
ABSTRACT: Expression of the Drosophila segment polarity genes is initiated by a pre-pattern of pair-rule gene products and maintained by a network of regulatory interactions throughout several
stages of embryonic development. Analysis of a model of gene interactions based on differential equations showed that wild-type expression patterns of these genes can be obtained for a wide range
of kinetic parameters, which suggests that the steady states are determined by the topology of the network and the type of regulatory interactions between components, not the detailed form of the
rate laws. To investigate this, we propose and analyse a Boolean model of this network which is based on a binary ON/OFF representation of mRNA and protein levels, and in which the interactions
are formulated as logical functions. In this model the spatial and temporal patterns of gene expression are determined by the topology of the network and whether components are present or absent,
rather than the absolute levels of the mRNAs and proteins and the functional details of their interactions. The model is able to reproduce the wild-type gene expression patterns, as well as the
ectopic expression patterns observed in overexpression experiments and various mutants. Furthermore, we compute explicitly all steady states of the network and identify the basin of attraction of
each steady state. The model gives important insights into the functioning of the segment polarity gene network, such as the crucial role of the wingless and sloppy paired genes, and the
network's ability to correct errors in the pre-pattern.
Journal of Theoretical Biology 08/2003; 223(1):1-18. · 2.35 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Mutations in the APETALA1 gene disturb two phases of flower development, flower meristem specification and floral organ specification. These effects become manifest as a partial
conversion of flowers into inflorescence shoots and a disruption of sepal and petal development. We describe the changes in an allelic series of nine apetala1 mutants and show that the two
functions of APETALA1 are separable. We have also studied the interaction between APETALA1 and other floral genes by examining the phenotypes of multiply mutant plants and by in situ
hybridization using probes for several floral control genes. The results suggest that the products of APETALA1 and another gene, LEAFY, are required to ensure that primordia arising on the flanks
of the inflorescence apex adopt a floral fate, as opposed to becoming an inflorescence shoot. APETALA1 and LEAFY have distinct as well as overlapping functions and they appear to reinforce each
other's action. CAULIFLOWER is a newly discovered gene which positively regulates both APETALA1 and LEAFY expression. All functions of CAULIFLOWER are redundant with those of APETALA1. APETALA2
also has an early function in reinforcing the action of APETALA1 and LEAFY, especially if the activity of either is compromised by mutation. After the identity of a flower primordium is
specified, APETALA1 interacts with APETALA2 in controlling the development of the outer two whorls of floral organs.
Development 11/1991; 119(3):721-743. · 6.60 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: p53 is activated in response to events compromising the genetic integrity of a cell. Recent data show that p53 activity does not increase steadily with genetic damage but rather
fluctuates in an oscillatory fashion. Theoretical studies suggest that oscillations can arise from a combination of positive and negative feedbacks or from a long negative feedback loop alone.
Both negative and positive feedbacks are present in the p53/Mdm2 network, but it is not known what roles they play in the oscillatory response to DNA damage. We developed a mathematical model of
p53 oscillations based on positive and negative feedbacks in the p53/Mdm2 network. According to the model, the system reacts to DNA damage by moving from a stable steady state into a region of
stable limit cycles. Oscillations in the model are born with large amplitude, which guarantees an all-or-none response to damage. As p53 oscillates, damage is repaired and the system moves back
to a stable steady state with low p53 activity. The model reproduces experimental data in quantitative detail. We suggest new experiments for dissecting the contributions of negative and positive
feedbacks to the generation of oscillations.
Cell cycle (Georgetown, Tex.) 04/2005; 4(3):488-93. · 5.24 Impact Factor
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
52 Downloads
Available from
Oct 30, 2012 | {"url":"http://www.researchgate.net/publication/228707532_Finding_All_Steady_States_in_Biological_Regulatory_Networks","timestamp":"2014-04-21T01:14:19Z","content_type":null,"content_length":"291699","record_id":"<urn:uuid:9d60510f-c6c1-4780-9547-ca8b40c00f81>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dugas' Home Page
Manfred H. Dugas
(My homepage, under construction!!)
In case you want to see a picture of me in a suit and tie,
and my whole family all dressed up:
My daughter Diana's wedding May 22, 2005.
(February 2006)
Address: Department of Mathematics
B.U. Box 7328
Baylor University
Waco, Texas 76798
E-mail: Manfred_Dugas@baylor.edu
Citizenship: German, Permanent Resident of the United States.
Marital Status: Married, two children
Education: M.S.: 1975 Mathematics
University of Kaiserslautern, Germany
Ph.D.: 1974, Mathematics
University of Kaiserslautern, Germany
Habilitation: 1979, University of Essen, Germany
Fields of Research Interest: Theory of Abelian groups, rings, and modules
Professional Experience:
1990- Professor, Department of Mathematics,
Baylor University, Waco, Texas
1987-90 Associate Professor, Department of Mathematics
Baylor University, Waco, Texas
1985-87 Associate Professor, Department of Mathematics
University of Colorado at Colorado Springs
1984-85 Visiting Associate Professor
University of Colorado at Colorado Springs
1976-84 Assistant Professor of Mathematics
University of Essen, Germany
1975 DFG ( = German NSF) Research Fellowship
Membership: Member of the American Mathematical Society
Professional Presentations:
1. International Conference on Abelian Groups, Oberwolfach, Germany, 1980: “Strongly ?-free modules in L.”
2. International Conference on Abelian Groups, Honolulu, HI, 1982: “On mixed Abelian Groups.”
3. A series of lectures at the University of Padova, Italy, Fall 1982, on “Endomorphism Rings.”
4. International Conference, Udine, April 1984: “On radicals, torsion theories and large cardinals.”
5. DMV (= German AMS) meetings, Fall 1983 and Fall 1981: “Constructing large modules” and “On rings with large indecomposable modules.”
6. International Conference on Abelian Groups, Oberwolfach, West Germany, 1985: “Fields with prescribed automorphisms.”
7. Presented a paper at the annual AMS meeting, January 1987, in San Antonio.
8. I was invited to give colloquium talks at Auburn University, AL, the University of Mississippi, MS, the University of Florida, Gainsville, FL, the University of Connecticut, Storrs, CT, Wesleyan
University, Middletown, CT, the University of California at Irvine, Wayne State University, Detroit, MI, University of Arizona, Tucson, AZ., University of Houston, Trinity University, and Essen
University, Germany.
9. A talk in the special session on Abelian groups, Annual AMS meeting, January 1988, Atlanta, GA.
10. International Conference on Abelian Groups, Oberwolfach, W. Germany, 1989: “Applications of abelian groups.”
11. “On a class of p-groups,” Colloquium talk, University of Essen, Germany, June 1990.
12. “On a class of Abelian p-groups,” presentation at the Twenty-First Annual USL Math Miniconference, Lafayette LA, October 1990.
13. “Construction of Groups,” 862nd Meeting of the AMS, Irvine, CA, November 1990.
14. “Uncountable Butler Groups,” Seminar talk given at the U. of Colorado, Colorado Springs, December 1990.
15. “Classification of ACD Groups”, presentation at the Conference on Methods in Module Theory, Colorado Springs, June 1991.
16. “E-uniserial Abelian Groups”, presentation at the Conference on Abelia n Groups, Curacao, August 1991.
17. “Butler Groups of infinite rank” presentation given to the Graduate College of Essen University, Essen, Germany, July 1992.
18. “Classification of Finite Rank Butler Groups”, a four week seminar given to the Mathematics Department of the University of Padova, Padova, Italy, July 1993.
19. “Near Isomorphisms of ACD Groups”, talk given at the International Conference on Abelian Group Theory in Oberwolfach, Germany, August 1993.
20. “Butler Groups and Representations of Posets” invited talk given at the International Conference on Abelian Groups and Modules, Colorado Springs, August 95.
21. “Fast vollständig zerlegbare Gruppen mit wenigen Typen”, Colloquium talk at Essen University, Essen, Germany, July 1996.
22. “BCD-Groups with Type Set (1,2)”, presentation at the International Conference on Abelian Groups and Modules, Dublin Institute of Technology, Dublin, Ireland, August 1998.
23. “Infinite Combinatorics in Algebra” invited talk at the Conference on Infinite Combinatorics and Set Theory, (European Science Foundation), Hattingen, Germany, July 1999.
24. “M-Free Abelian Groups”, Colloquium talk, University of Colorado at Colorado Springs, March 2001.
25. “Completely Decomposale Abelian Groups with a Distinguished CD Subgroup”, talk given at the Second Honolulu Conference on Abelian Groups and Modules, August 2001.
26. “Self-Free Modules and E-rings”, Colloquium talk at the University of Houston, March 2002.
27. “Self-Free Modules”, talk given at the Algebra Conference – Venezia 2002, Venice, Italy, June 2002
28. “Self-Free Abelian Groups, talk given at the conference: Algebra-Geometry; An International Conference in Memory of Reinhold Baer, Hattingen, Germany, July 2002.
29. “A-Rings”, talk given at the Weekend Algebra Conference at Florida Atlantic University, November 2003
30. “Localizations of Abelian Groups”, talk given at the Abelian Groups, Rings and Modules Conference, Auburn University, September 2004.
I had the following grants from the National Science Foundation funding my research (Summer salary and travel):
“Butler Groups of Infinite Rank”, 1987 and 1988, $17,100, DMS-8896186.
“Endomorphisms of Groups and Modules”, 1989 and 1990, $36,700, DMS-8900350.
“Invariants and endomorphisms of rings and Modules” (with D. Arnold), 1991 and 1992, $102,000, DMS-910100.
NSF Grant to fund a five day workshop “Set Theoretic Methods in Algebra” held at Baylor University in May 1990, $6,800.
(“Torsion-free Abelian Groups and modules with distinguished submodules”, 1993 and 1994, (with D. Arnold) $107,881, submitted to the NSF in Fall 1992; did not receive funding.)
“Classification of Abelian Groups and Modules”, 1994 and 1995, (with D. Arnold), $103,389, submitted to the NSF in Fall 1993. Status: Funded at a 50% level.
(“Generic Butler Groups and Representation Type”, 1997/98 (with D. Arnold), $120,870 submitted to NSF in Fall 1996, did not receive funding.)
(“Categories of Butler Groups and Representations”, 1998/99, (with D. Arnold), submitted to the NSF in Fall 1997, did not receive funding.)
(“Localizations of Abelian Groups”, 2004/05, submitted to the NSF, did not receive funding.)
Teaching Experience:
Over the last ten years I taught the following classes, by level:
Discrete Mathematics
Calculus I and II (utilizing a graphing calculator (TI 86)
Linear Algebra (utilizing the TI 86)
Euclidean and Non-Euclidean Geometry
Abstract Algebra
Graduate Algebra I and II
I also directed numerous independent reading classes for individual students needing additional hours. Here are some of the topics: Axiomatic set theory, coding theory, rings and modules, homological
algebra, E-modules. Topics in Abelian group theory
Graduate Students:
I served as adviser for Master’s Theses/Projects for the following students:
(Until 2001/2002, we didn’t have a Ph.D. program.)
Mouser, Mary Alice, On Z/p^2Z-modules with distinguished submodules, May 1993
Reiner, David R., Burst error correction, 1996
Guyton, Robert M., Centralizers of matrices, 1997
Morse, Misty, Complete Latin squares, 1997
Maddox, Amy, Endomorphism rings of some representations of critical posets, 1998
Kirchmer, Janie, A model of the hyperbolic plane, 1998
Wade, Rachel, On the existence of square roots of matrices, August 1999
DaCunha, Jeff, Freeness of modules relative to a single module, December 2001.
Conley, Kelly, The Golden Ratio, May 2003
I am currently working with Joshua Buckner, a Ph.D. candidate.
1. M. Dugas, Characterisierungen endlicher desarguesscher uniformer Hjelmslev-Ebenen, Geo Ded 3 (1974), 295-342.
2. M. Dugas, Eine Kennzeichnung der endlichen desarguesschen Hjelmslev-Ebenen, Arch. Math.27 (1976), 556-560.
3. M. Dugas, Moufang – Hjelmslev-Ebenen, Arch. Math. 28 (1977), 318-322.
4. M. Dugas, Verallgemeinerte Andre-Ebenen mit Epimorphismen auf Hjelmslev-Ebenen, Geo. Ded. 8 (1979), 105-123.
5. M. Dugas and R. Göbel, Algebraisch kompakte Faktorgruppen, J. R. Angew, Math 307/308 (1979), 341-354.
6. M. Dugas and R. Göbel, Die Strucktur kartesischer Produkte ganzer Zahlen modulo kartesischer Produkte ganzer Zahlen, Math. Z. 168 (1979), 15-21.
7. M. Dugas, Der Zusammenhang zwischen Hjelmslev-Ebenen und H. Verbanden, Mitt. Math. Gesellschaft, Hamburg, 10 (1980), 709-749.
8. M. Dugas, Die Kollineationsgruppe verallgemeinerter Andre-Ebenen mit Homologien, Math. Z. 175 (1980), 87-95.
9. M. Dugas and R. Göbel, Quotients of reflexive modules, Fund. Math. 114 (1981), 17-28.
10. M. Dugas, Fast freie abelsche Gruppen mit Endomorphismenring Z. J. Algebra 71 (1981), 314-321.
11. M. Dugas and B. Zimmermann-Huisgen, Iterated direct sums and products of modules, Springer LNM 874 (1981), 179-193.
12. M. Dugas and R. Göbel, Every cotorsion-free ring is an endomorphism-ring, Proc. Lond. Math Soc. (3) 45 (1982), 319-336.
13. M. Dugas and R. Göbel, On endomorphism rings of primary Abelian groups, Math. Ann. 261 (1982), 359-385.
14. M. Dugas and R. Göbel, Every cotorsion-free algebra is an endomorphism algebra, Math. Z. 181 (1982), 452-470.
15. M. Dugas and G. Herden, Arbitrary torsion classes and almost free Abelian groups, Israel J. Math. 44 (1983), 322-334.
16. M. Dugas and G. Herden, Arbitrary torsion classes of Abelian groups, Comm. Alg. 11 (1983), 1455-1474.
17. M. Dugas, On the existence of large mixed modules, Spring LNM 1006 (1982/83), 412-424.
18. M. Dugas and R. Göbel, On endomorphism rings of Abelian groups II, Springer LNM 1006 (1983), 400-411.
19. M. Dugas, R. Göbel, and B. Goldsmith, Representation of algebras over a complete discrete valuation ring, Quart. J. (Oxford) (2), 35 (1984), 131-146.
20. M. Dugas and R. Göbel, Torsion-free Abelian groups with prescribed finitely topologized endomorphism ring, Proc. Amer. Math. Soc. 90 (1984), 519-527.
21. M. Dugas and R. Göbel, On radicals and products, Pac. J. Math 118 (1985), 79-104.
22. M. Dugas and R. Göbel, Countable mixed Abelian group with very nice full-rank subgroups, Israel J. Math. 51 (1985), 1-12.
23. M. Dugas and R. Göbel, On almost ?-cyclic Abelian p-groups in L, Abelian Groups and Modules, Proceedings of the Udine Conference, 1984, CISM Courses and Lectures 287, 87-105.
24. M. Dugas, On reduced products of Abelian groups, Rend. Sem. Math. Univ. Padova, 73 (1985), 41-47.
25. M. Dugas and R. Göbel, Endomorphism rings of separable torsion-free Abelian groups. Houston J. Math, 11 (1985), 471-483.
26. M. Dugas, A. Mader and C. Visonhaler, Large E-rings exist, J. Algebra, 108 (1987), 88-101.
27. M. Dugas, On modules over valuation rings with a nil radical, Proceedings of the Third OW-Conference on Abelian Groups 1985, Gordon and Breach, 457-509.
28. M. Dugas and R. Göbel, Field Extensions in L, Proceedings of the Third OW-Conference on Abelian Groups 1985, Gordon and Breach, 509-529.
29. M. Dugas and R. Vergohsen, On socles of Abelian p-groups, Rocky Mountain J. Math., 18 (1988), 733-752.
30. M. Dugas and K. M. Rangaswamy, On torsion-free Abelian K-groups., Proc. Amer. Math. Soc. 99 (1987), 403-408.
31. M. Dugas, T.H. Fay and S. Shelah, Singly cogenerated annilhilator classes, J. Algebra, 109 (1987), 127-137.
32. M. Dugas and R. Göbel, All infinite groups are Galois groups over any field, Trans. Amer. Math. Soc. 304 (1987), 355-384.
33. M. Dugas, J. Irwin and S. Khabbaz, Countable rings as endomorphism rings, Quart. J. Math. Oxford (2), 39 (1988), 201-211.
34. M. Dugas, On the radical of some endomorphism rings, Proc. Amer. Math. Soc. 102 (1988), 823-826.
35. M. Dugas, On some subgroups of infinite rank Butler groups, Rend. Sem
Math. Univ. Padova, 79 (1988), 153-161.
36. M. Dugas and K.M. Rangaswamy. On Butler groups of infinite rank, Trans. Amer. Math Soc., 305 (1988), 129-14
37. M. Dugas and J. Irwin, On basic subgroups of ?Z, Comm. Alg. 19 (11) (1990), 2907-2921.
38. M. Dugas and S. Shelah, E-transitive groups in L, Contemporary Math., 87 (1989), 191-200.
39. M. Dugas and J. Hausen, Torsion-free E-uniserial groups of infinite rank, Contemporary Math., 87 (1989), 181-190.
40. M. Dugas and E. Oxford, Preradicals induced by torsion-free groups, Comm.
Algebra, 17 (1989), 981-1002.
41. M. Dugas, P. Hill, and K. M. Rangaswamy, Infinite rank Butler groups II, Trans. Amer. Math. Soc. 320 (1990), 643-664.
42. M. Dugas and J. Irwin, On pure subgroups of cartesian products of integers, Results in Math, 15 (1989), 35-52.
43. M. Dugas and R. Göbel, Outer automorphisms of groups, Ill. J. Math. 35
(1991), 27-43.
44. M. Dugas and R. Göbel, On locally finite p-groups. J. Algebra. 159 (1993), 115-138.
45. M. Dugas and R. Göbel, Torsion-free nilpotent groups and E-modules, Archiv. d. Math. 54, (1990), 340-351.
46. M. Dugas and B. Thomé, On Butler groups of countable rank and vector spaces with distinguished subspaces, J. Algebra, 138 (April 1991), 249-272.
47. M. Dugas and B. Thomé, The functor Bext under the negation of CH, Forum Mathematicum 3 (1991), 23-33.
48. M. Dugas and R. Göbel, Abelian p-groups with a prescribed chain of subgroups, Israel J. Math. 79 (1992), 153-159.
49. M. Dugas and R. Göbel, Nilpotent groups of class two, Trans. Amer. Math. Soc. 332 (1992), 633-646.
50. M. Dugas, J. Hausen and J. Johnson, Rings who additive endomorphisms are multiplicative, Bull. Austral. Math. Soc. 45 (1992), 91-103.
51. M. Dugas, Large E-modules exist, J. Algebra, 142 (October, 1991). 405-413.
52. M. Dugas and J. Irwin, On a class of abelian p-groups, Forum Math 4 (1992), 147-158.
53. M. Dugas and J. Irwin, On thickness and decompositions of abelian p-groups, Israel J. Math. 79 (1992), 153-159.
54. M. Dugas and E. Oxford, Near isomorphism invariants for some almost completely decomposable groups, Lecture Notes in Pure and Applied Mathematics (Marcel Dekker, Inc.), vol. 146 (1993), 129-150.
55. M. Dugas, On countable torsion-free abelian groups with trivial Z-dual, Proc. Royal Irish Acad. 92A (1992), 85-87.
56. M. Dugas and B. Olberding, E-uniserial abelian groups, in “Methods in Module Theory”, Lecture Notes in Pure Appl. Math., Marcel Dekker, vol 140, (1993), 75-86.
57. M. Dugas and K. M. Rangaswamy, Separable pure subgroups of completely decomposable groups, Arch. Math. 58 (1992), 332-337.
58. M. Dugas and T. Faticoni, Cotorsion-free abelian groups that are cotorsion as modules over their endomorphism rings, Lecture Notes in Pure and Applied Mathematics (Marcel Dekker, Inc.), vol. 146,
1993, 111-127.
59. M. Dugas and R. Göbel, Prescribing groups as automorphism groups of fields, Manuscripta Math. 85, (1994), 227-242.
60. M. Dugas, R. Göbel and W. May, Free modules with two distinguished submodules, Comm. Alg. 25(11) (1997), 3473-3481.
61. D. Arnold and M. Dugas, Representations of finite posets and near-isomorphism of finite rank Butler groups, Rocky Mount. J. Math. 25 (1994), 591-609.
62. D. Arnold and M. Dugas, Butler groups with finite type sets and free groups with distinguished subgroups, Comm. Alg. 21(6) (1993), 1947-1982.
63. D. Arnold and M. Dugas, Block rigid almost completely decomposable groups and lattices over multiple pullback rings, J. Pure Applied Alg. (1993), 105-121.
64. D. Arnold and M. Dugas, Indecomposable modules over Nagata valuation domains, Proc. Amer. Math. Soc. 122, (1994), 689-696.
65. M. Dugas and R. Göbel, Applications of Abelian groups and model theory to algebraic structures, “Infinite Groups”, Walter de Gruyter & Co, Berlin, 1995, 41-62.
66. D. Arnold and M. Dugas, Locally free finite rank Butler groups and near isomorphism, Abelian Groups and Modules, Kluwer, Boston, 1995, 41-48.
67. M. Dugas and R. Göbel, Automorphism groups of fields II, Comm. in Algebra, 25(12) (1997), 3777-3786.
68. M. Dugas and R. Göbel, Endomorphism rings of B2-groups of infinite rank, Israel J. Math. 101 (1997), 141-156.
69. D. Arnold and M. Dugas, A survey of Butler groups and the role of representations, Abelian Groups and Modules, Lecture Notes in Pure and Applies Mathematics, vol. 182, Marcel Dekker, 1996, 1-14.
70. M. Dugas and R. Göbel, Classification of modules with two distinguished pure submodules and bounded quotients, Results in Math. vol 30, 1996, 264-275.
71. D. Arnold and M. Dugas, Representation type of finite rank Butler groups, Colloquium Math. 74(2), 1997, 299-320.
72. D. Arnold and M. Dugas, Representation type of finite rank almost completely decomposable groups, Forum Mathematicum 10 (1998), 729-749.
73. D. Arnold and M. Dugas, Finite rank Butler groups with small typesets, in: Abelian Groups and Modules, Trends in Mathematics, Birkhauser, 1999, 107-120.
74. M. Dugas, BCD-groups with type set (1,2), Forum Mathematicum 13, (2000), 143-148.
75. M. Dugas and R. Göbel, Automorphisms of geometric lattices, Algebra Universalis 45(4) (2001), 425-433.
76. D. Arnold and M. Dugas, Co-purely indecomposable modules over discrete valuation rings. J. of Pure and Applied Algebra, 161 (2001), 1-12.
77. D. Arnold and M. Dugas and K. M. Rangaswamy, Finite rank Butler groups and torsion-free modules over a discrete valuation ring, Proc. Amer. Math. Soc. 129(2), (2001), 325-335.
78. M. Dugas, S. Feigelstock, and J. Hausen, M-free Abelian groups, Rocky Mountain J. of Math., 32, no. 4, (2002), 1367-1382.
79. M. Dugas and S. Feigelstock, Nil R-mod Abelian groups, Math. Proc. Royal. Irish Acad. 102A (2002), no.1, 107-113.
80. M. Dugas and K. M. Rangaswamy, Completely decomposable abelian groups with a distinguished cd subgroup, Rocky Mountain J. of Math. 32, no. 4, (2002), 1383-1395.
81. M. Dugas and S. Feigelstock, Self-free modules and E-rings, Communications in Algebra 31, (2003), 1387-1402.
82. D. Arnold, M. Dugas, K.M. Rangaswamy, Torsion-free modules of finite rank over a discrete valuation ring, Journal of Algebra 272, (2004), 456-469.
83. M. Dugas and K. M. Rangaswamy, Stacked bases theorem for Butler groups, Forum Mathematicum 16, (2004), 223-230.
84. M. Dugas and S. Feigelstock, Additive groups of zero square rings, Acta Mathematica Hungarica 107 (1-2), (2005), 61-70.
85. M. Dugas and S. Feigelstock, A-rings, Colloquium Mathematicum, 96, no. 2, (2003), 277-292.
86. M. Dugas and C. Vinsonhaler, Two-sided E-rings, Journal of Pure and Applied Algebra 185, (2003), 87-102.
87. M. Dugas, AA-rings, Communications in Algebra 32, No. 10, (2004), 3853-3860.
88. M. Dugas and S. Feigelstock, AE-rings, Rend. Sem. Mat. Univ. Padova 111, (2004), 239-246.
89. M. Dugas, Localizations of torsion-free abelian groups, J. of Algebra 278, (2004), 411-429.
90. M. Dugas and S. Feigelstock, Co-minimal abelian groups, Houston Journal of Mathematics 31, (2005), 637-648.
91. M. Dugas, Localizations of torsion-free abelian groups II, Journal of Algebra 284, (2005), 811-823.
92. M. Dugas and C. Maxson, Quasi-E-locally cyclic torsion-free abelian groups,
Proc. Amer. Math. Soc. 133, (2005), 3447-3453.
93. J. Buckner and M. Dugas, Co-local subgroups of abelian groups, in: Abelian Groups, Rings, Modules, and Homological Algebra, Lect. Notes Pure and Applied Math., 249, Taylor & Francis/CRC Press,
pp. 25-33, 2006 (Accepted Feb. 2005).
94. M. Dugas, Co-local subgroups of abelian groups II, to appear in the Journal of Pure and Applied Algebra. (Accepted Oct 2005)
95. J. Buckner and M. Dugas, Quasi-Localizations of Z, to appear in the Israel Journal of Mathematics, accepted February 2006.
96. J. Buckner and M. Dugas, Left rigid rings, submitted.
In preparaton:
M. Dugas and R. Göbel, Zassenhaus rings. | {"url":"https://bearspace.baylor.edu/Manfred_Dugas/www/","timestamp":"2014-04-23T12:04:35Z","content_type":null,"content_length":"24050","record_id":"<urn:uuid:1f9ab90f-166a-460f-bb54-71959a151bdb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
dinger equation
You are currently browsing the tag archive for the ‘Schrodinger equation’ tag.
Consider the free Schrödinger equation in ${d}$ spatial dimensions, which I will normalise as
$\displaystyle i u_t + \frac{1}{2} \Delta_{{\bf R}^d} u = 0 \ \ \ \ \ (1)$
where ${u: {\bf R} \times {\bf R}^d \rightarrow {\bf C}}$ is the unknown field and ${\Delta_{{\bf R}^{d+1}} = \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}}$ is the spatial Laplacian. To avoid
irrelevant technical issues I will restrict attention to smooth (classical) solutions to this equation, and will work locally in spacetime avoiding issues of decay at infinity (or at other
singularities); I will also avoid issues involving branch cuts of functions such as ${t^{d/2}}$ (if one wishes, one can restrict ${d}$ to be even in order to safely ignore all branch cut issues). The
space of solutions to (1) enjoys a number of symmetries. A particularly non-obvious symmetry is the pseudoconformal symmetry: if ${u}$ solves (1), then the pseudoconformal solution ${pc(u): {\bf R} \
times {\bf R}^d \rightarrow {\bf C}}$ defined by
$\displaystyle pc(u)(t,x) := \frac{1}{(it)^{d/2}} \overline{u(\frac{1}{t}, \frac{x}{t})} e^{i|x|^2/2t} \ \ \ \ \ (2)$
for ${t eq 0}$ can be seen after some computation to also solve (1). (If ${u}$ has suitable decay at spatial infinity and one chooses a suitable branch cut for ${(it)^{d/2}}$, one can extend ${pc(u)}
$ continuously to the ${t=0}$ spatial slice, whereupon it becomes essentially the spatial Fourier transform of ${u(0,\cdot)}$, but we will not need this fact for the current discussion.)
An analogous symmetry exists for the free wave equation in ${d+1}$ spatial dimensions, which I will write as
$\displaystyle u_{tt} - \Delta_{{\bf R}^{d+1}} u = 0 \ \ \ \ \ (3)$
where ${u: {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ is the unknown field. In analogy to pseudoconformal symmetry, we have conformal symmetry: if ${u: {\bf R} \times {\bf R}^{d+1} \
rightarrow {\bf C}}$ solves (3), then the function ${conf(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$, defined in the interior ${\{ (t,x): |x| < |t| \}}$ of the light cone by the formula
$\displaystyle conf(u)(t,x) := (t^2-|x|^2)^{-d/2} u( \frac{t}{t^2-|x|^2}, \frac{x}{t^2-|x|^2} ), \ \ \ \ \ (4)$
also solves (3).
There are also some direct links between the Schrödinger equation in ${d}$ dimensions and the wave equation in ${d+1}$ dimensions. This can be easily seen on the spacetime Fourier side: solutions to
(1) have spacetime Fourier transform (formally) supported on a ${d}$-dimensional hyperboloid, while solutions to (3) have spacetime Fourier transform formally supported on a ${d+1}$-dimensional cone.
To link the two, one then observes that the ${d}$-dimensional hyperboloid can be viewed as a conic section (i.e. hyperplane slice) of the ${d+1}$-dimensional cone. In physical space, this link is
manifested as follows: if ${u: {\bf R} \times {\bf R}^d \rightarrow {\bf C}}$ solves (1), then the function ${\iota_{1}(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ defined by
$\displaystyle \iota_{1}(u)(t,x_1,\ldots,x_{d+1}) := e^{-i(t+x_{d+1})} u( \frac{t-x_{d+1}}{2}, x_1,\ldots,x_d)$
solves (3). More generally, for any non-zero scaling parameter ${\lambda}$, the function ${\iota_{\lambda}(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ defined by
$\displaystyle \iota_{\lambda}(u)(t,x_1,\ldots,x_{d+1}) :=$
$\displaystyle \lambda^{d/2} e^{-i\lambda(t+x_{d+1})} u( \lambda \frac{t-x_{d+1}}{2}, \lambda x_1,\ldots,\lambda x_d) \ \ \ \ \ (5)$
solves (3).
As an “extra challenge” posed in an exercise in one of my books (Exercise 2.28, to be precise), I asked the reader to use the embeddings ${\iota_1}$ (or more generally ${\iota_\lambda}$) to
explicitly connect together the pseudoconformal transformation ${pc}$ and the conformal transformation ${conf}$. It turns out that this connection is a little bit unusual, with the “obvious” guess
(namely, that the embeddings ${\iota_\lambda}$ intertwine ${pc}$ and ${conf}$) being incorrect, and as such this particular task was perhaps too difficult even for a challenge question. I’ve been
asked a couple times to provide the connection more explicitly, so I will do so below the fold.
Let ${L: H \rightarrow H}$ be a self-adjoint operator on a finite-dimensional Hilbert space ${H}$. The behaviour of this operator can be completely described by the spectral theorem for
finite-dimensional self-adjoint operators (i.e. Hermitian matrices, when viewed in coordinates), which provides a sequence ${\lambda_1,\ldots,\lambda_n \in {\bf R}}$ of eigenvalues and an orthonormal
basis ${e_1,\ldots,e_n}$ of eigenfunctions such that ${L e_i = \lambda_i e_i}$ for all ${i=1,\ldots,n}$. In particular, given any function ${m: \sigma(L) \rightarrow {\bf C}}$ on the spectrum ${\
sigma(L) := \{ \lambda_1,\ldots,\lambda_n\}}$ of ${L}$, one can then define the linear operator ${m(L): H \rightarrow H}$ by the formula
$\displaystyle m(L) e_i := m(\lambda_i) e_i,$
which then gives a functional calculus, in the sense that the map ${m \mapsto m(L)}$ is a ${C^*}$-algebra isometric homomorphism from the algebra ${BC(\sigma(L) \rightarrow {\bf C})}$ of bounded
continuous functions from ${\sigma(L)}$ to ${{\bf C}}$, to the algebra ${B(H \rightarrow H)}$ of bounded linear operators on ${H}$. Thus, for instance, one can define heat operators ${e^{-tL}}$ for $
{t>0}$, Schrödinger operators ${e^{itL}}$ for ${t \in {\bf R}}$, resolvents ${\frac{1}{L-z}}$ for ${z ot \in \sigma(L)}$, and (if ${L}$ is positive) wave operators ${e^{it\sqrt{L}}}$ for ${t \in {\bf
R}}$. These will be bounded operators (and, in the case of the Schrödinger and wave operators, unitary operators, and in the case of the heat operators with ${L}$ positive, they will be
contractions). Among other things, this functional calculus can then be used to solve differential equations such as the heat equation
$\displaystyle u_t + Lu = 0; \quad u(0) = f \ \ \ \ \ (1)$
the Schrödinger equation
$\displaystyle u_t + iLu = 0; \quad u(0) = f \ \ \ \ \ (2)$
the wave equation
$\displaystyle u_{tt} + Lu = 0; \quad u(0) = f; \quad u_t(0) = g \ \ \ \ \ (3)$
or the Helmholtz equation
$\displaystyle (L-z) u = f. \ \ \ \ \ (4)$
The functional calculus can also be associated to a spectral measure. Indeed, for any vectors ${f, g \in H}$, there is a complex measure ${\mu_{f,g}}$ on ${\sigma(L)}$ with the property that
$\displaystyle \langle m(L) f, g \rangle_H = \int_{\sigma(L)} m(x) d\mu_{f,g}(x);$
indeed, one can set ${\mu_{f,g}}$ to be the discrete measure on ${\sigma(L)}$ defined by the formula
$\displaystyle \mu_{f,g}(E) := \sum_{i: \lambda_i \in E} \langle f, e_i \rangle_H \langle e_i, g \rangle_H.$
One can also view this complex measure as a coefficient
$\displaystyle \mu_{f,g} = \langle \mu f, g \rangle_H$
of a projection-valued measure ${\mu}$ on ${\sigma(L)}$, defined by setting
$\displaystyle \mu(E) f := \sum_{i: \lambda_i \in E} \langle f, e_i \rangle_H e_i.$
Finally, one can view ${L}$ as unitarily equivalent to a multiplication operator ${M: f \mapsto g f}$ on ${\ell^2(\{1,\ldots,n\})}$, where ${g}$ is the real-valued function ${g(i) := \lambda_i}$, and
the intertwining map ${U: \ell^2(\{1,\ldots,n\}) \rightarrow H}$ is given by
$\displaystyle U ( (c_i)_{i=1}^n ) := \sum_{i=1}^n c_i e_i,$
so that ${L = U M U^{-1}}$.
It is an important fact in analysis that many of these above assertions extend to operators on an infinite-dimensional Hilbert space ${H}$, so long as one one is careful about what “self-adjoint
operator” means; these facts are collectively referred to as the spectral theorem. For instance, it turns out that most of the above claims have analogues for bounded self-adjoint operators ${L: H \
rightarrow H}$. However, in the theory of partial differential equations, one often needs to apply the spectral theorem to unbounded, densely defined linear operators ${L: D \rightarrow H}$, which
(initially, at least), are only defined on a dense subspace ${D}$ of the Hilbert space ${H}$. A very typical situation arises when ${H = L^2(\Omega)}$ is the square-integrable functions on some
domain or manifold ${\Omega}$ (which may have a boundary or be otherwise “incomplete”), and ${D = C^\infty_c(\Omega)}$ are the smooth compactly supported functions on ${\Omega}$, and ${L}$ is some
linear differential operator. It is then of interest to obtain the spectral theorem for such operators, so that one build operators such as ${e^{-tL}, e^{itL}, \frac{1}{L-z}, e^{it\sqrt{L}}}$ or to
solve equations such as (1), (2), (3), (4).
In order to do this, some necessary conditions on the densely defined operator ${L: D \rightarrow H}$ must be imposed. The most obvious is that of symmetry, which asserts that
$\displaystyle \langle Lf, g \rangle_H = \langle f, Lg \rangle_H \ \ \ \ \ (5)$
for all ${f, g \in D}$. In some applications, one also wants to impose positive definiteness, which asserts that
$\displaystyle \langle Lf, f \rangle_H \geq 0 \ \ \ \ \ (6)$
for all ${f \in D}$. These hypotheses are sufficient in the case when ${L}$ is bounded, and in particular when ${H}$ is finite dimensional. However, as it turns out, for unbounded operators these
conditions are not, by themselves, enough to obtain a good spectral theory. For instance, one consequence of the spectral theorem should be that the resolvents ${(L-z)^{-1}}$ are well-defined for any
strictly complex ${z}$, which by duality implies that the image of ${L-z}$ should be dense in ${H}$. However, this can fail if one just assumes symmetry, or symmetry and positive definiteness. A
well-known example occurs when ${H}$ is the Hilbert space ${H := L^2((0,1))}$, ${D := C^\infty_c((0,1))}$ is the space of test functions, and ${L}$ is the one-dimensional Laplacian ${L := -\frac{d^2}
{dx^2}}$. Then ${L}$ is symmetric and positive, but the operator ${L-k^2}$ does not have dense image for any complex ${k}$, since
$\displaystyle \langle (L-\overline{k}^2) f, e^{\overline{k}x} \rangle_H = 0$
for all test functions ${f \in C^\infty_c((0,1))}$, as can be seen from a routine integration by parts. As such, the resolvent map is not everywhere uniquely defined. There is also a lack of
uniqueness for the wave, heat, and Schrödinger equations for this operator (note that there are no spatial boundary conditions specified in these equations).
Another example occurs when ${H := L^2((0,+\infty))}$, ${D := C^\infty_c((0,+\infty))}$, ${L}$ is the momentum operator ${L := i \frac{d}{dx}}$. Then the resolvent ${(L-z)^{-1}}$ can be uniquely
defined for ${z}$ in the upper half-plane, but not in the lower half-plane, due to the obstruction
$\displaystyle \langle (L-z) f, e^{i \bar{z} x} \rangle_H = 0$
for all test functions ${f}$ (note that the function ${e^{i\bar{z} x}}$ lies in ${L^2((0,+\infty))}$ when ${z}$ is in the lower half-plane). For related reasons, the translation operators ${e^{itL}}$
have a problem with either uniqueness or existence (depending on whether ${t}$ is positive or negative), due to the unspecified boundary behaviour at the origin.
The key property that lets one avoid this bad behaviour is that of essential self-adjointness. Once ${L}$ is essentially self-adjoint, then spectral theorem becomes applicable again, leading to all
the expected behaviour (e.g. existence and uniqueness for the various PDE given above).
Unfortunately, the concept of essential self-adjointness is defined rather abstractly, and is difficult to verify directly; unlike the symmetry condition (5) or the positive condition (6), it is not
a “local” condition that can be easily verified just by testing ${L}$ on various inputs, but is instead a more “global” condition. In practice, to verify this property, one needs to invoke one of a
number of a partial converses to the spectral theorem, which roughly speaking asserts that if at least one of the expected consequences of the spectral theorem is true for some symmetric densely
defined operator ${L}$, then ${L}$ is self-adjoint. Examples of “expected consequences” include:
• Existence of resolvents ${(L-z)^{-1}}$ (or equivalently, dense image for ${L-z}$);
• Existence of a contractive heat propagator semigroup ${e^{tL}}$ (in the positive case);
• Existence of a unitary Schrödinger propagator group ${e^{itL}}$;
• Existence of a unitary wave propagator group ${e^{it\sqrt{L}}}$ (in the positive case);
• Existence of a “reasonable” functional calculus.
• Unitary equivalence with a multiplication operator.
Thus, to actually verify essential self-adjointness of a differential operator, one typically has to first solve a PDE (such as the wave, Schrödinger, heat, or Helmholtz equation) by some
non-spectral method (e.g. by a contraction mapping argument, or a perturbation argument based on an operator already known to be essentially self-adjoint). Once one can solve one of the PDEs, then
one can apply one of the known converse spectral theorems to obtain essential self-adjointness, and then by the forward spectral theorem one can then solve all the other PDEs as well. But there is no
getting out of that first step, which requires some input (typically of an ODE, PDE, or geometric nature) that is external to what abstract spectral theory can provide. For instance, if one wants to
establish essential self-adjointness of the Laplace-Beltrami operator ${L = -\Delta_g}$ on a smooth Riemannian manifold ${(M,g)}$ (using ${C^\infty_c(M)}$ as the domain space), it turns out (under
reasonable regularity hypotheses) that essential self-adjointness is equivalent to geodesic completeness of the manifold, which is a global ODE condition rather than a local one: one needs geodesics
to continue indefinitely in order to be able to (unitarily) solve PDEs such as the wave equation, which in turn leads to essential self-adjointness. (Note that the domains ${(0,1)}$ and ${(0,+\
infty)}$ in the previous examples were not geodesically complete.) For this reason, essential self-adjointness of a differential operator is sometimes referred to as quantum completeness (with the
completeness of the associated Hamilton-Jacobi flow then being the analogous classical completeness).
In these notes, I wanted to record (mostly for my own benefit) the forward and converse spectral theorems, and to verify essential self-adjointness of the Laplace-Beltrami operator on geodesically
complete manifolds. This is extremely standard analysis (covered, for instance, in the texts of Reed and Simon), but I wanted to write it down myself to make sure that I really understood this
foundational material properly.
A recurring theme in mathematics is that of duality: a mathematical object ${X}$ can either be described internally (or in physical space, or locally), by describing what ${X}$ physically consists of
(or what kind of maps exist into ${X}$), or externally (or in frequency space, or globally), by describing what ${X}$ globally interacts or resonates with (or what kind of maps exist out of ${X}$).
These two fundamentally opposed perspectives on the object ${X}$ are often dual to each other in various ways: performing an operation on ${X}$ may transform it one way in physical space, but in a
dual way in frequency space, with the frequency space description often being a “inversion” of the physical space description. In several important cases, one is fortunate enough to have some sort of
fundamental theorem connecting the internal and external perspectives. Here are some (closely inter-related) examples of this perspective:
1. Vector space duality A vector space ${V}$ over a field ${F}$ can be described either by the set of vectors inside ${V}$, or dually by the set of linear functionals ${\lambda: V \rightarrow F}$
from ${V}$ to the field ${F}$ (or equivalently, the set of vectors inside the dual space ${V^*}$). (If one is working in the category of topological vector spaces, one would work instead with
continuous linear functionals; and so forth.) A fundamental connection between the two is given by the Hahn-Banach theorem (and its relatives).
2. Vector subspace duality In a similar spirit, a subspace ${W}$ of ${V}$ can be described either by listing a basis or a spanning set, or dually by a list of linear functionals that cut out that
subspace (i.e. a spanning set for the orthogonal complement ${W^\perp := \{ \lambda \in V^*: \lambda(w)=0 \hbox{ for all } w \in W \})}$. Again, the Hahn-Banach theorem provides a fundamental
connection between the two perspectives.
3. Convex duality More generally, a (closed, bounded) convex body ${K}$ in a vector space ${V}$ can be described either by listing a set of (extreme) points whose convex hull is ${K}$, or else by
listing a set of (irreducible) linear inequalities that cut out ${K}$. The fundamental connection between the two is given by the Farkas lemma.
4. Ideal-variety duality In a slightly different direction, an algebraic variety ${V}$ in an affine space ${A^n}$ can be viewed either “in physical space” or “internally” as a collection of points
in ${V}$, or else “in frequency space” or “externally” as a collection of polynomials on ${A^n}$ whose simultaneous zero locus cuts out ${V}$. The fundamental connection between the two
perspectives is given by the nullstellensatz, which then leads to many of the basic fundamental theorems in classical algebraic geometry.
5. Hilbert space duality An element ${v}$ in a Hilbert space ${H}$ can either be thought of in physical space as a vector in that space, or in momentum space as a covector ${w \mapsto \langle v, w \
rangle}$ on that space. The fundamental connection between the two is given by the Riesz representation theorem for Hilbert spaces.
6. Semantic-syntactic duality Much more generally still, a mathematical theory can either be described internally or syntactically via its axioms and theorems, or externally or semantically via its
models. The fundamental connection between the two perspectives is given by the Gödel completeness theorem.
7. Intrinsic-extrinsic duality A (Riemannian) manifold ${M}$ can either be viewed intrinsically (using only concepts that do not require an ambient space, such as the Levi-Civita connection), or
extrinsically, for instance as the level set of some defining function in an ambient space. Some important connections between the two perspectives includes the Nash embedding theorem and the
theorema egregium.
8. Group duality A group ${G}$ can be described either via presentations (lists of generators, together with relations between them) or representations (realisations of that group in some more
concrete group of transformations). A fundamental connection between the two is Cayley’s theorem. Unfortunately, in general it is difficult to build upon this connection (except in special cases,
such as the abelian case), and one cannot always pass effortlessly from one perspective to the other.
9. Pontryagin group duality A (locally compact Hausdorff) abelian group ${G}$ can be described either by listing its elements ${g \in G}$, or by listing the characters ${\chi: G \rightarrow {\bf R}/
{\bf Z}}$ (i.e. continuous homomorphisms from ${G}$ to the unit circle, or equivalently elements of ${\hat G}$). The connection between the two is the focus of abstract harmonic analysis.
10. Pontryagin subgroup duality A subgroup ${H}$ of a locally compact abelian group ${G}$ can be described either by generators in ${H}$, or generators in the orthogonal complement ${H^\perp := \{ \
xi \in \hat G: \xi \cdot h = 0 \hbox{ for all } h \in H \}}$. One of the fundamental connections between the two is the Poisson summation formula.
11. Fourier duality A (sufficiently nice) function ${f: G \rightarrow {\bf C}}$ on a locally compact abelian group ${G}$ (equipped with a Haar measure ${\mu}$) can either be described in physical
space (by its values ${f(x)}$ at each element ${x}$ of ${G}$) or in frequency space (by the values ${\hat f(\xi) = \int_G f(x) e( - \xi \cdot x )\ d\mu(x)}$ at elements ${\xi}$ of the Pontryagin
dual ${\hat G}$). The fundamental connection between the two is the Fourier inversion formula.
12. The uncertainty principle The behaviour of a function ${f}$ at physical scales above (resp. below) a certain scale ${R}$ is almost completely controlled by the behaviour of its Fourier transform
${\hat f}$ at frequency scales below (resp. above) the dual scale ${1/R}$ and vice versa, thanks to various mathematical manifestations of the uncertainty principle. (The Poisson summation
formula can also be viewed as a variant of this principle, using subgroups instead of scales.)
13. Stone/Gelfand duality A (locally compact Hausdorff) topological space ${X}$ can be viewed in physical space (as a collection of points), or dually, via the ${C^*}$ algebra ${C(X)}$ of continuous
complex-valued functions on that space, or (in the case when ${X}$ is compact and totally disconnected) via the boolean algebra of clopen sets (or equivalently, the idempotents of ${C(X)}$). The
fundamental connection between the two is given by the Stone representation theorem or the (commutative) Gelfand-Naimark theorem.
I have discussed a fair number of these examples in previous blog posts (indeed, most of the links above are to my own blog). In this post, I would like to discuss the uncertainty principle, that
describes the dual relationship between physical space and frequency space. There are various concrete formalisations of this principle, most famously the Heisenberg uncertainty principle and the
Hardy uncertainty principle – but in many situations, it is the heuristic formulation of the principle that is more useful and insightful than any particular rigorous theorem that attempts to capture
that principle. Unfortunately, it is a bit tricky to formulate this heuristic in a succinct way that covers all the various applications of that principle; the Heisenberg inequality ${\Delta x \cdot
\Delta \xi \gtrsim 1}$ is a good start, but it only captures a portion of what the principle tells us. Consider for instance the following (deliberately vague) statements, each of which can be viewed
(heuristically, at least) as a manifestation of the uncertainty principle:
1. A function which is band-limited (restricted to low frequencies) is featureless and smooth at fine scales, but can be oscillatory (i.e. containing plenty of cancellation) at coarse scales.
Conversely, a function which is smooth at fine scales will be almost entirely restricted to low frequencies.
2. A function which is restricted to high frequencies is oscillatory at fine scales, but is negligible at coarse scales. Conversely, a function which is oscillatory at fine scales will be almost
entirely restricted to high frequencies.
3. Projecting a function to low frequencies corresponds to averaging out (or spreading out) that function at fine scales, leaving only the coarse scale behaviour.
4. Projecting a frequency to high frequencies corresponds to removing the averaged coarse scale behaviour, leaving only the fine scale oscillation.
5. The number of degrees of freedom of a function is bounded by the product of its spatial uncertainty and its frequency uncertainty (or more generally, by the volume of the phase space
uncertainty). In particular, there are not enough degrees of freedom for a non-trivial function to be simulatenously localised to both very fine scales and very low frequencies.
6. To control the coarse scale (or global) averaged behaviour of a function, one essentially only needs to know the low frequency components of the function (and vice versa).
7. To control the fine scale (or local) oscillation of a function, one only needs to know the high frequency components of the function (and vice versa).
8. Localising a function to a region of physical space will cause its Fourier transform (or inverse Fourier transform) to resemble a plane wave on every dual region of frequency space.
9. Averaging a function along certain spatial directions or at certain scales will cause the Fourier transform to become localised to the dual directions and scales. The smoother the averaging, the
sharper the localisation.
10. The smoother a function is, the more rapidly decreasing its Fourier transform (or inverse Fourier transform) is (and vice versa).
11. If a function is smooth or almost constant in certain directions or at certain scales, then its Fourier transform (or inverse Fourier transform) will decay away from the dual directions or beyond
the dual scales.
12. If a function has a singularity spanning certain directions or certain scales, then its Fourier transform (or inverse Fourier transform) will decay slowly along the dual directions or within the
dual scales.
13. Localisation operations in position approximately commute with localisation operations in frequency so long as the product of the spatial uncertainty and the frequency uncertainty is
significantly larger than one.
14. In the high frequency (or large scale) limit, position and frequency asymptotically behave like a pair of classical observables, and partial differential equations asymptotically behave like
classical ordinary differential equations. At lower frequencies (or finer scales), the former becomes a “quantum mechanical perturbation” of the latter, with the strength of the quantum effects
increasing as one moves to increasingly lower frequencies and finer spatial scales.
15. Etc., etc.
16. Almost all of the above statements generalise to other locally compact abelian groups than ${{\bf R}}$ or ${{\bf R}^n}$, in which the concept of a direction or scale is replaced by that of a
subgroup or an approximate subgroup. (In particular, as we will see below, the Poisson summation formula can be viewed as another manifestation of the uncertainty principle.)
I think of all of the above (closely related) assertions as being instances of “the uncertainty principle”, but it seems difficult to combine them all into a single unified assertion, even at the
heuristic level; they seem to be better arranged as a cloud of tightly interconnected assertions, each of which is reinforced by several of the others. The famous inequality ${\Delta x \cdot \Delta \
xi \gtrsim 1}$ is at the centre of this cloud, but is by no means the only aspect of it.
The uncertainty principle (as interpreted in the above broad sense) is one of the most fundamental principles in harmonic analysis (and more specifically, to the subfield of time-frequency analysis),
second only to the Fourier inversion formula (and more generally, Plancherel’s theorem) in importance; understanding this principle is a key piece of intuition in the subject that one has to
internalise before one can really get to grips with this subject (and also with closely related subjects, such as semi-classical analysis and microlocal analysis). Like many fundamental results in
mathematics, the principle is not actually that difficult to understand, once one sees how it works; and when one needs to use it rigorously, it is usually not too difficult to improvise a suitable
formalisation of the principle for the occasion. But, given how vague this principle is, it is difficult to present this principle in a traditional “theorem-proof-remark” manner. Even in the more
informal format of a blog post, I was surprised by how challenging it was to describe my own understanding of this piece of mathematics in a linear fashion, despite (or perhaps because of) it being
one of the most central and basic conceptual tools in my own personal mathematical toolbox. In the end, I chose to give below a cloud of interrelated discussions about this principle rather than a
linear development of the theory, as this seemed to more closely align with the nature of this principle.
$\displaystyle i \hbar \partial_t |\psi \rangle = H |\psi\rangle$
is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and ${N}$-particle systems for ${N>1}$. Remarkably, despite being a linear equation,
solutions ${|\psi\rangle}$ to this equation can be governed by a non-linear equation in the large particle limit ${N \rightarrow \infty}$. In particular, when modeling a Bose-Einstein condensate with
a suitably scaled interaction potential ${V}$ in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation
$\displaystyle i \partial_t \phi = \Delta \phi + \lambda |\phi|^2 \phi. \ \ \ \ \ (1)$
I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast
recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the
foundational quantum mechanics of ${N}$-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers.
This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.
I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on the Schrödinger equation – the fundamental equation of motion of quantum particles, possibly in the
presence of an external field. My focus here is on the relationship between the Schrödinger equation of motion for wave functions (and the closely related Heisenberg equation of motion for quantum
observables), and Hamilton’s equations of motion for classical particles (and the closely related Poisson equation of motion for classical observables). There is also some brief material on
semiclassical analysis, scattering theory, and spectral theory, though with only a little more than 5 pages to work with in all, I could not devote much detail to these topics. (In particular,
nonlinear Schrödinger equations, a favourite topic of mine, are not covered at all.)
As I said before, I will try to link to at least one other PCM article in every post in this series. Today I would like to highlight Madhu Sudan‘s delightful article on information and coding theory,
“Reliable transmission of information“.
[Update, Oct 3: typos corrected.]
[Update, Oct 9: more typos corrected.]
I’ve just uploaded to the arXiv the paper “The cubic nonlinear Schrödinger equation in two dimensions with radial data“, joint with Rowan Killip and Monica Visan, and submitted to the Annals of
Mathematics. This is a sequel of sorts to my paper with Monica and Xiaoyi Zhang, in which we established global well-posedness and scattering for the defocusing mass-critical nonlinear Schrödinger
equation (NLS) $iu_t + \Delta u = |u|^{4/d} u$ in three and higher dimensions $d \geq 3$ assuming spherically symmetric data. (This is another example of the recently active field of critical
dispersive equations, in which both coarse and fine scales are (just barely) nonlinearly active, and propagate at different speeds, leading to significant technical difficulties.)
In this paper we obtain the same result for the defocusing two-dimensional mass-critical NLS $iu_t + \Delta u= |u|^2 u$, as well as in the focusing case $iu_t + \Delta u= -|u|^2 u$ under the
additional assumption that the mass of the initial data is strictly less than the mass of the ground state. (When mass equals that of the ground state, there is an explicit example, built using the
pseudoconformal transformation, which shows that solutions can blow up in finite time.) In fact we can show a slightly stronger statement: for spherically symmetric focusing solutions with arbitrary
mass, we can show that the first singularity that forms concentrates at least as much mass as the ground state.
My paper “Resonant decompositions and the I-method for the cubic nonlinear Schrodinger equation on ${\Bbb R}^2$“, with Jim Colliander, Mark Keel, Gigliola Staffilani, and Hideo Takaoka (aka the “
I-team“), has just been uploaded to the arXiv, and submitted to DCDS-A. In this (long-delayed!) paper, we improve our previous result on the global well-posedness of the cubic non-linear defocusing
Schrödinger equation
$i u_t+ \Delta u = |u|^2 u$
in two spatial dimensions, thus $u: {\Bbb R} \times {\Bbb R}^2 \to {\Bbb C}$. In that paper we used the “first generation I-method” (centred around an almost conservation law for a mollified energy
$E(Iu)$) to obtain global well-posedness in $H^s({\Bbb R}^2)$ for $s > 4/7$ (improving on an earlier result of $s > 2/3$by Bourgain). Here we use the “second generation I-method”, in which the
mollified energy $E(Iu)$ is adjusted by a correction term to damp out “non-resonant interactions” and thus lead to an improved almost conservation law, and ultimately to an improvement of the
well-posedness range to $s > 1/2$. (The conjectured region is $s \geq 0$; beyond that, the solution becomes unstable and even local well-posedness is not known.) A similar result (but using Morawetz
estimates instead of correction terms) has recently been established by Colliander-Grillakis-Tzirakis; this attains the superior range of $s > 2/5$, but in the focusing case it does not give global
existence all the way up to the ground state due to a slight inefficiency in the Morawetz estimate approach. Our method is in fact rather robust and indicates that the “first-generation” I-method can
be pushed further for a large class of dispersive PDE.
This is a well known problem (see for instance this survey) in the area of “quantum chaos” or “quantum unique ergodicity”; I am attracted to it both for its simplicity of statement (which I will get
to eventually), and also because it focuses on one of the key weaknesses in our current understanding of the Laplacian, namely is that it is difficult with the tools we know to distinguish between
eigenfunctions (exact solutions to $-\Delta u_k = \lambda_k u_k$) and quasimodes (approximate solutions to the same equation), unless one is willing to work with generic energy levels rather than
specific energy levels.
The Bunimovich stadium $\Omega$ is the name given to any planar domain consisting of a rectangle bounded at both ends by semicircles. Thus the stadium has two flat edges (which are traditionally
drawn horizontally) and two round edges, as this picture from Wikipedia shows:
Despite the simple nature of this domain, the stadium enjoys some interesting classical and quantum dynamics. The classical dynamics, or billiard dynamics on $\Omega$ is ergodic (as shown by
Bunimovich) but not uniquely ergodic. In more detail: we say the dynamics is ergodic because a billiard ball with randomly chosen initial position and velocity (as depicted above) will, over time, be
uniformly distributed across the billiard (as well as in the energy surface of the phase space of the billiard). On the other hand, we say that the dynamics is not uniquely ergodic because there do
exist some exceptional choices of initial position and velocity for which one does not have uniform distribution, namely the vertical trajectories in which the billiard reflects orthogonally off of
the two flat edges indefinitely.
Recent Comments
davidokinzi on Polymath8b, IX: Large quadrati…
Anonymous on Real stable polynomials and th…
Terence Tao on Polymath8b, X: writing the pap…
Andrew V. Sutherland on Polymath8b, X: writing the pap…
Terence Tao on Polymath8b, X: writing the pap…
Aubrey de Grey on Polymath8b, X: writing the pap…
xfxie on Polymath8b, X: writing the pap…
Daniel on Polymath8b, X: writing the pap…
Terence Tao on Polymath8b, X: writing the pap…
Terence Tao on Finite time blowup for an aver…
Terence Tao on Polymath8b, IV: Enlarging the…
Tony Feng on Polymath8b, IV: Enlarging the…
Gil Kalai on Finite time blowup for an aver… | {"url":"https://terrytao.wordpress.com/tag/schrodinger-equation/","timestamp":"2014-04-18T13:26:35Z","content_type":null,"content_length":"203353","record_id":"<urn:uuid:1b6c2aa4-e81c-4b31-ba86-7937b0b4b1db>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math::NumSeq::Fibonacci -- Fibonacci numbers
use Math::NumSeq::Fibonacci;
my $seq = Math::NumSeq::Fibonacci->new;
my ($i, $value) = $seq->next;
The Fibonacci numbers F(i) = F(i-1) + F(i-2) starting from 0,1,
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
starting i=0
See "FUNCTIONS" in Math::NumSeq for behaviour common to all sequence classes.
Create and return a new sequence object.
Return the next index and value in the sequence.
When $value exceeds the range of a Perl unsigned integer the return is a Math::BigInt to preserve precision.
Move the current sequence position to $i. The next call to next() will return $i and corresponding value.
Return the $i'th Fibonacci number.
For negative <$i> the sequence is extended backwards as F[i]=F[i+2]-F[i+1]. The effect is the same Fibonacci numbers but negative at negative even i.
i F[i]
--- ----
-1 1
-2 -1 <----+ negative at even i
-3 2 |
-4 -3 <----+
When $value exceeds the range of a Perl unsigned integer the return is a Math::BigInt to preserve precision.
Return true if $value occurs in the sequence, so is a positive Fibonacci number.
Return an estimate of the i corresponding to $value. See "Value to i Estimate" below.
Fibonacci F[i] can be calculated by a powering procedure with two squares per step. A pair of values F[k] and F[k-1] are maintained and advanced according to bits of i from high to low
start k=1, F[k]=1, F[k-1]=0
add = -2 # 2*(-1)^k
F[2k+1] = 4*F[k]^2 - F[k-1]^2 + add
F[2k-1] = F[k]^2 + F[k-1]^2
F[2k] = F[2k+1] - F[2k-1]
bit = next bit of i, high to low, skip high 1 bit
if bit == 1
take F[2k+1], F[2k] as new F[k],F[k-1]
add = -2 (for next loop)
else bit == 0
take F[2k], F[2k-1] as new F[k],F[k-1]
add = 2 (for next loop)
For the last (least significant) bit of i an optimization can be made with a single multiple for that last step, instead of two squares.
bit = least significant bit of i
if bit == 1
F[2k+1] = (2F[k]+F[k-1])*(2F[k]-F[k-1]) + add
F[2k] = F[k]*(F[k]+2F[k-1])
The "add" amount is 2*(-1)^k which means +2 or -2 according to k odd or even, which in turn means whether the previous bit taken from i was 1 or 0. That can be easily noted from each bit, to be used
in the following loop iteration or the final step F[2k+1] formula.
For small i it's usually faster to just successively add F[k+1]=F[k]+F[k-1], but when in bignums the doubling k->2k by two squares is faster than doing k many individual additions for the same thing.
F[i] increases as a power of phi, the golden ratio. The exact value is
F[i] = (phi^i - beta^i) / (phi - beta) # exactly
phi = (1+sqrt(5))/2 = 1.618
beta = -1/phi = -0.618
Since abs(beta)<1 the beta^i term quickly becomes small. So taking a log (natural logarithm) to get i,
log(F[i]) ~= i*log(phi) - log(phi-beta)
i ~= (log(F[i]) + log(phi-beta)) / log(phi)
Or the same using log base 2 which can be estimated from the highest bit position of a bignum,
log2(F[i]) ~= i*log2(phi) - log2(phi-beta)
i ~= (log2(F[i]) + log2(phi-beta)) / log2(phi)
Math::NumSeq, Math::NumSeq::LucasNumbers, Math::NumSeq::Fibbinary, Math::NumSeq::FibonacciWord, Math::NumSeq::Pell, Math::NumSeq::Tribonacci
Math::Fibonacci, Math::Fibonacci::Phi
Copyright 2010, 2011, 2012, 2013, 2014 Kevin Ryde
Math-NumSeq is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your
option) any later version.
Math-NumSeq is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details.
You should have received a copy of the GNU General Public License along with Math-NumSeq. If not, see <http://www.gnu.org/licenses/>. | {"url":"http://search.cpan.org/dist/Math-NumSeq/lib/Math/NumSeq/Fibonacci.pm","timestamp":"2014-04-23T20:13:29Z","content_type":null,"content_length":"19603","record_id":"<urn:uuid:49187e6b-64c0-4bbd-a94f-edb563d326f5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logistic Regression with corelated data
March 30th 2011, 08:37 AM #1
Mar 2011
Logistic Regression with corelated data
As a non statistician with a scientific background i have to build a model with binary outcome (survival or not).
The covariates(a mix of continuous and discrete variables) are collected each year on a given population.
For some reasons we cannot collect the data each year for each person.
For example the data may be available
-------------2006---- 2007---- 2008---- 2009---- 2010
1 ----------Yes----- Yes----- Yes----- No -----No
2 ----------No----- No----- Yes----- Yes -----Yes
3 ----------Yes----- Yes----- No----- No -----Yes
4 ----------Yes----- No----- Yes----- Yes -----Yes
5 ----------Yes----- Yes----- Yes----- Yes -----Yes
I already performed a logistic regression without considering correlation between the data.
At the time being I've read a few articles dealing with correlation between data and I think that I have to incorporate them in my model since the measure on the same person are correlated
(repeated measure).
My questions are the following :
+ Which class of model should I use ? I think i should use the Generalized Estimating Equations since I'm interested in the survival probability as a function of the covariates.
+ If there are several methods that could work do you have reference to a text describing them ?
+ Which function should I use in R to obtain the regression coefficients ? I have seen that a few packages that deal with correlated data (geepack, MCMglmm, lme4, glmmAK, ...)
+ Where can I find documents that explain the R-function and the models behind them ?
Your answer and comments would be of great use. Thank you.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-statistics/176327-logistic-regression-corelated-data.html","timestamp":"2014-04-17T06:11:21Z","content_type":null,"content_length":"34415","record_id":"<urn:uuid:3c8399da-9f0e-4924-aa0b-36fa1962251a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Discrepancy Problem for Planar Configurations
Yaacov Kupitz and Micha A. Perles asked:
What is the smallest number C such that for every configuration of n points in the plane there is a line containing two or more points from the configuration for which the difference between the
number of points on the two sides of the line is at most C?
We will refer to the conjecture that C is bounded as Kupitz-Perles conjecture. It was first conjectured that C=1, but Noga Alon gave an example with C=2. It is not known if C is bounded and, in fact,
no example with C>2 is known.
Alon’s example
Kupitz himself proved that $C \le n/3$, Alon proved that $C \le K\sqrt n$, Perles showed that $C \le K\log n$, and Rom Pinchasi showed that $C \le K\log \log n$. This is the best known upper bound.
($K$ is a constant.) Pinchasi’s result asserts something a little stronger: whenever you have n points in the plane, not all on a line, there is a line containing two or more of the points such that
in each open half plane there are at least $n/2-K\log \log n$ points.
The proof uses the method of allowable sequences developed by Eli Goodman and Ricky Pollack. Another famous application of this method is a theorem of Ugnar asserting that 2n points in the plane
which are not on the same line determine at least 2n directions. The method of allowable sequences translates various problems in combinatorial geometry into problems about permutations. For example,
there is a famous problem about the number of “halving lines” for planar configurations. This problem is one of the “holy grails” of combinatorial geometry. The translated problem is also quite
natural as a problem about permutations. Consider the permutation on {1,2,3,…,n} which send i to n-i. A reduced representation is a presentation of this permutation as a product of $n \choose 2$
adjacent transpositions. The algebraic question is: What is the maximum number of times that a specific transposition (i,i+1) can occur in such a reduced word?
There are many geometric discrepancy problems which are closer in spirit and in methodology to discrepancy problems in combinatorics and number theory such as the Erdos Discrepancy problem, now under
polymath5 attack. The Kupitz-Perles conjecture seems of a different nature, but I find it very exciting. | {"url":"http://gilkalai.wordpress.com/2010/02/03/a-discrepency-problem-for-planar-configurations/?like=1&source=post_flair&_wpnonce=7b62a7b6bf","timestamp":"2014-04-17T21:33:52Z","content_type":null,"content_length":"103994","record_id":"<urn:uuid:f54e9b74-f107-4213-8fb3-3b53c2f15f3f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mechanics question- block on a wedge
March 14th 2009, 08:40 AM #1
Junior Member
Mar 2009
Mechanics question- block on a wedge
I was wondering if anyone could offer some help with the following question:
A wedge with mass M rests on a frictionless horizontal table top. A block with mass m is placed on the wedge. There is no friction between the block and the wedge. The system is released from
Calculate the acceleration of the wedge.
Express your answer in terms of M, m,α and g.
The answer I should get is:
However I keep getting the following:
Does anyone see where I am going wrong? My working is as follows.
I will denote the normal force to the block by n.
Therefore the horizontal component of force on the wedge due to the block is given by F=-mgcosαsinα. The combined downward force due to the ramp and block is Mg+mgcosαsinα
Therefore (M+mcosαsinα)a=-mgcosαsinα (By Newtons second law)
Hence a=mgsinαcosα
I have no idea where I am going wrong so any help would be appreciated.
I was wondering if anyone could offer some help with the following question:
A wedge with mass M rests on a frictionless horizontal table top. A block with mass m is placed on the wedge. There is no friction between the block and the wedge. The system is released from
Calculate the acceleration of the wedge.
Express your answer in terms of M, m,α and g.
The answer I should get is:
However I keep getting the following:
Does anyone see where I am going wrong? My working is as follows.
I will denote the normal force to the block by n.
I think the error comes right there at the beginning. The normal force is not equal to $mg\cos\alpha$, because the block will have an acceleration in the normal direction as the wedge slides away
along the table. In fact, the component of the acceleration in that direction will be $a\sin\alpha$ (where a is the acceleration of the wedge), so Newton's second law gives $n-mg\cos\alpha = ma\
sin\alpha$. Therefore $n = m(g\cos\alpha + a\sin\alpha)$. The horizontal equation of motion for the wedge is then $-n\sin\alpha = Ma$, which gives the stated answer for a.
Last edited by Opalg; March 14th 2009 at 12:27 PM.
Consider the block and the wedge separately. The only forces acting on the block are its weight mg acting downwards, and the normal force n exerted by the wedge. The only forces acting on the
wedge are its weight Mg acting downwards, a normal force equal and opposite to n (Newton's third law) exerted by the block, and an upwards normal force (not shown in the picture) exerted by the
The wedge slides towards the right with an acceleration a. The motion of the block is a combination of a component $a\sin\alpha$ in the normal direction, together with a relative acceleration
down the sloping face of the wedge.
Looking only at the wedge (not the block), the only horizontal force on it is the component $n\sin\alpha$ of n. So the equation of motion in this direction is $n\sin\alpha = Ma$. (I seem to have
reversed the sign of a somewhere along the line, so the acceleration is going to lose its negative sign.)
Now looking only at the block (not the wedge), the equation of motion in the normal direction says that the forces acting in that direction, namely $mg\cos\alpha-n$, must produce the acceleration
$a\sin\alpha$. (If the block does not have that component of acceleration in that direction then it will not stay in contact with the wedge.) So Newton's second law says that $mg\cos\alpha-n = ma
I don't know if that makes it any clearer. Apart from drawing the picture (and changing the sign of a), I seem to have more or less repeated my previous comment. I don't see what more I can give
by way of explanation.
The weight force has a normal component. This component is in the opposite direction to the normal reaction force ....
Question parts 2 and 3.
I was wondering if anyone could give me some hints as to how I am supposed to do part 3 of this question.
2.Calculate the horizontal component of the acceleration of the block.
Express your answer in terms of
3.Calculate the vertical component of the acceleration of the block.Express your answer in terms of
Note I have done part 2 and correctly got the answer ( I have added it to my post as I think it may be needed for part 3). The answer to part 2 is:
Anyway for part 3 I keep getting the wrong answer, could someone please explain how to do it?
By the way the answer you should get is:
I was wondering if anyone could give me some hints as to how I am supposed to do part 3 of this question.
2.Calculate the horizontal component of the acceleration of the block.
Express your answer in terms of
3.Calculate the vertical component of the acceleration of the block.Express your answer in terms of
Note I have done part 2 and correctly got the answer ( I have added it to my post as I think it may be needed for part 3). The answer to part 2 is:
Anyway for part 3 I keep getting the wrong answer, could someone please explain how to do it?
By the way the answer you should get is:
You just have to write down the equation of motion for the block, in the vertical direction. Referring to the picture again, the vertical forces on the block are the upward component $n\cos\
alpha$ of the normal force, and the downwards weight mg of the block. So the net upwards force is $n\cos\alpha - mg$, and the equation of motion says that this is equal to mc, where c is the
vertical acceleration of the block.
But we know that $n = \frac{Ma}{\sin\alpha}$ and $a = \frac{mg\sin\alpha\cos\alpha}{M+m\sin^2\alpha}$. So $n = \frac{Mmg\cos\alpha}{M+m\sin^2\alpha}$, and the equation of motion is
$mc = \frac{Mmg\cos^2\alpha}{M+m\sin^2\alpha} -mg = \frac{Mmg\cos^2\alpha -mg(M+m\sin^2\alpha)}{M+m\sin^2\alpha}$. Simplify that (using $1-\cos^2=\sin^2$) and you'll get the answer.
March 14th 2009, 12:12 PM #2
March 14th 2009, 12:48 PM #3
Junior Member
Mar 2009
March 14th 2009, 02:05 PM #4
March 14th 2009, 02:50 PM #5
Junior Member
Mar 2009
March 14th 2009, 02:56 PM #6
March 19th 2009, 09:46 AM #7
Junior Member
Mar 2009
March 20th 2009, 06:54 AM #8 | {"url":"http://mathhelpforum.com/advanced-applied-math/78642-mechanics-question-block-wedge.html","timestamp":"2014-04-18T09:55:58Z","content_type":null,"content_length":"73047","record_id":"<urn:uuid:7512a1b9-d531-4b70-b55d-f42925569822>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fremont, CA Precalculus Tutor
Find a Fremont, CA Precalculus Tutor
...I also have ten years' experience as a software engineer. My Teaching/tutoring style: Based on my assessment of a particular student's learning experience, I will adopt a methodology that is
tailored to the student's needs. Whether it is a topic that the student has already been taught, or it i...
14 Subjects: including precalculus, calculus, statistics, geometry
...My specialties are in the Greek Classics and 17th-18th century British literature. In addition, I study philosophy and literary theory, especially 20th century post-structuralist theory,
Marxism, post-colonialism, and contemporary French philosophy. Please feel free to contact me for additional information about my specialty.
26 Subjects: including precalculus, reading, English, writing
...Please do make sure that you are either within my 10 mile travel radius or are willing to work with me online. I have loved tutoring/teaching math (pre-algebra, algebra 1, geometry, algebra 2,
precalculus, calculus) for over 17 years. I am a credentialed classroom teacher, but I love working on...
10 Subjects: including precalculus, calculus, geometry, algebra 1
...I enjoy helping students in general chemistry, biochemistry, AP chemistry, SAT II chemistry and any other school chemistry test preparation. I hold a B.S. in chemistry and M.S. in
biochemistry. Using molecular cloning, I have also conducted research in biochemistry and neuroscience laboratories at UC Riverside and San Diego State University.
18 Subjects: including precalculus, chemistry, calculus, physics
...This includes a Unix disk driver I developed from scratch. I have used C to help me to analyze Unix file system performance. I have used C to analyze and enhance both Informix and Oracle
Relational Database Management Systems.
23 Subjects: including precalculus, physics, algebra 2, algebra 1
Related Fremont, CA Tutors
Fremont, CA Accounting Tutors
Fremont, CA ACT Tutors
Fremont, CA Algebra Tutors
Fremont, CA Algebra 2 Tutors
Fremont, CA Calculus Tutors
Fremont, CA Geometry Tutors
Fremont, CA Math Tutors
Fremont, CA Prealgebra Tutors
Fremont, CA Precalculus Tutors
Fremont, CA SAT Tutors
Fremont, CA SAT Math Tutors
Fremont, CA Science Tutors
Fremont, CA Statistics Tutors
Fremont, CA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Dublin, CA precalculus Tutors
Hayward, CA precalculus Tutors
Menlo Park precalculus Tutors
Mountain View, CA precalculus Tutors
Newark, CA precalculus Tutors
Oakland, CA precalculus Tutors
Palo Alto precalculus Tutors
Pleasanton, CA precalculus Tutors
Redwood City precalculus Tutors
San Jose, CA precalculus Tutors
San Leandro precalculus Tutors
San Mateo, CA precalculus Tutors
Santa Clara, CA precalculus Tutors
Sunnyvale, CA precalculus Tutors
Union City, CA precalculus Tutors | {"url":"http://www.purplemath.com/fremont_ca_precalculus_tutors.php","timestamp":"2014-04-19T12:40:13Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:2b232754-902a-43f7-b103-68ffed3eeab6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Program extraction in simply-typed higher order logic
- IN TPHOLS, NUMBER 3223 IN LNCS , 2004
"... We formalise a simple assembly language with procedures and a safety policy for arithmetic overflow in Isabelle/HOL. To verify individual programs we use a safety logic. Such a logic can be
realised in Isabelle/HOL either as shallow or deep embedding. In a shallow embedding logical formulas are wri ..."
Cited by 21 (3 self)
Add to MetaCart
We formalise a simple assembly language with procedures and a safety policy for arithmetic overflow in Isabelle/HOL. To verify individual programs we use a safety logic. Such a logic can be realised
in Isabelle/HOL either as shallow or deep embedding. In a shallow embedding logical formulas are written as HOL predicates, whereas a deep embedding models formulas as a datatype. This paper presents
and discusses both variants pointing out their specific strengths and weaknesses.
- Exploring New Frontiers of Theoretical Informatics , 2004
"... We introduce a generic framework for proof carrying code, developed and mechanically verified in Isabelle/HOL. The framework defines and proves sound a verification condition generator with
minimal assumptions on the underlying programming language, safety policy, and safety logic. We demonstrate it ..."
Cited by 9 (2 self)
Add to MetaCart
We introduce a generic framework for proof carrying code, developed and mechanically verified in Isabelle/HOL. The framework defines and proves sound a verification condition generator with minimal
assumptions on the underlying programming language, safety policy, and safety logic. We demonstrate its usability for prototyping proof carrying code systems by instantiating it to a simple assembly
language with procedures and a safety policy for arithmetic overflow.
- In Proc. 11th Int. Conf. on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2004), Lecture Notes in Computer Science , 2005
"... To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is
particularly effective if the specifications are written in a high-level language that makes it easy for de ..."
Cited by 7 (4 self)
Add to MetaCart
To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is
particularly effective if the specifications are written in a high-level language that makes it easy for designers to specify their goals. We consider a high-level specification language that results
from adding knowledge to a fragment of Nuprl specifically tailored for specifying distributed protocols, called event theory. We then show how high-level knowledge-based programs can be synthesized
from the knowledge-based specifications using a proof development system such as Nuprl. Methods of Halpern and Zuck [1992] then apply to convert these knowledge-based protocols to ordinary protocols.
These methods can be expressed as heuristic transformation tactics in Nuprl. 1
- AISC 2004, LNAI , 2004
"... Abstract. While implementing a proof for the Basic Perturbation Lemma (a central result in Homological Algebra) in the theorem prover Isabelle one faces problems such as the implementation of
algebraic structures, partial functions in a logic of total functions, or the level of abstraction in formal ..."
Cited by 2 (2 self)
Add to MetaCart
Abstract. While implementing a proof for the Basic Perturbation Lemma (a central result in Homological Algebra) in the theorem prover Isabelle one faces problems such as the implementation of
algebraic structures, partial functions in a logic of total functions, or the level of abstraction in formal proofs. Different approaches aiming at solving these problems will be evaluated and
classified according to features such as the degree of mechanization obtained or the direct correspondence to the mathematical proofs. From this study, an environment for further developments in
Homological Algebra will be proposed. 1
- TYPES FOR PROOFS AND PROGRAMS, INTERNATIONAL WORKSHOP, TYPES 2004, JOUY-EN-JOSAS , 2004
"... We present a formalization of a constructive proof of weak normalization for the simply-typed λ-calculus in the theorem prover Isabelle/HOL, and show how a program can be extracted from it.
Unlike many other proofs of weak normalization based on Tait’s strong computability predicates, which require ..."
Cited by 2 (1 self)
Add to MetaCart
We present a formalization of a constructive proof of weak normalization for the simply-typed λ-calculus in the theorem prover Isabelle/HOL, and show how a program can be extracted from it. Unlike
many other proofs of weak normalization based on Tait’s strong computability predicates, which require a logic supporting strong eliminations and can give rise to dependent types in the extracted
program, our formalization requires only relatively simple proof principles. Thus, the program obtained from this proof is typable in simply-typed higher-order logic as implemented in Isabelle/HOL,
and a proof of its correctness can automatically be derived within the system.
- SME Conference Proceedings Bethlelem , 1984
"... Abstract. Higman’s lemma, a specific instance of Kruskal’s theorem, is an interesting result from the area of combinatorics, which has often been used as a test case for theorem provers. We
present a constructive proof of Higman’s lemma in the theorem prover Isabelle, based on a paper proof by Coqua ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. Higman’s lemma, a specific instance of Kruskal’s theorem, is an interesting result from the area of combinatorics, which has often been used as a test case for theorem provers. We present a
constructive proof of Higman’s lemma in the theorem prover Isabelle, based on a paper proof by Coquand and Fridlender. Making use of Isabelle’s newly-introduced infrastructure for program extraction,
we show how a program can automatically be extracted from this proof, and analyze its computational behaviour. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=876","timestamp":"2014-04-17T05:57:09Z","content_type":null,"content_length":"26594","record_id":"<urn:uuid:d5cb5d5d-4ccf-4bd8-a73a-8b19e4435bb7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplifying Log Expressions
Simplifying Logarithmic Expressions (page 3 of 5)
Sections: Basic log rules, Expanding, Simplifying, Trick questions, Change-of-Base formula
The logs rules work "backwards", so you can simplify ("compress"?) log expressions. When they tell you to "simplify" a log expression, this usually means they will have given you lots of log terms,
each containing a simple argument, and they want you to combine everything into one log with a complicated argument. "Simplifying" in this context usually means the opposite of "expanding".
• Simplify log[2](x) + log[2](y).
Since these logs have the same base, the addition outside can be turned into multiplication inside:
log[2](x) + log[2](y) = log[2](xy)
The answer is log[2](xy). Copyright © Elizabeth Stapel 2002-2011 All Rights Reserved
• Simplify log[3](4) – log[3](5).
Since these logs have the same base, the subtraction outside can be turned into division inside:
log[3](4) – log[3](5) = log[3](^4/[5])
The answer is log[3](^4/[5]).
• Simplify 2log[3](x).
The multiplier out front can be taken inside as an exponent:
2log[3](x) = log[3](x^2)
• Simplify 3log[2](x) – 4log[2](x + 3) + log[2](y).
I will get rid of the multipliers by moving them inside as powers:
3log[2](x) – 4log[2](x + 3) + log[2](y)
= log[2](x^3) – log[2]((x + 3)^4) + log[2](y)
Then I'll put the added terms together, and convert the addition to multiplication:
log[2](x^3) – log[2]((x + 3)^4) + log[2](y)
= log[2](x^3) + log[2](y) – log[2]((x + 3)^4)
= log[2](x^3y) – log[2]((x + 3)^4)
Then I'll account for the subtracted term by combining it inside with division:
You can use the Mathway widget below to practice simplifying a logarithmic expression. Try the entered exercise, or type in your own exercise. Then click "Answer" to compare your answer to Mathway's.
(Or skip the widget and continue with the lesson.)
(Clicking on "View Steps" on the widget's answer screen will take you to the Mathway site, where you can register for a free seven-day trial of the software.)
<< Previous Top | 1 | 2 | 3 | 4 | 5 | Return to Index Next >>
Cite this article as: Stapel, Elizabeth. "Simplifying Logaritmic Expressions." Purplemath. Available from
http://www.purplemath.com/modules/logrules3.htm. Accessed | {"url":"http://www.purplemath.com/modules/logrules3.htm","timestamp":"2014-04-16T18:58:19Z","content_type":null,"content_length":"30240","record_id":"<urn:uuid:844f291f-a9b6-42c7-8365-5498393f316b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
l'Hospital's Rule
November 22nd 2009, 10:34 PM #1
Oct 2009
l'Hospital's Rule
Find the limit. Use l'Hospital's Rule where appropriate. If there is a more elementary method, consider using it. If l'Hospital's Rule doesn't apply, explain why.
lim as x -> o+ (sinxlnx)
I got the equation to be zero, but now I am unsure on whether l'Hospital's Rule is appropriate and how to show this.
Any help would be very much appreciated, thanks.
Find the limit. Use l'Hospital's Rule where appropriate. If there is a more elementary method, consider using it. If l'Hospital's Rule doesn't apply, explain why.
lim as x -> o+ (sinxlnx)
I got the equation to be zero, but now I am unsure on whether l'Hospital's Rule is appropriate and how to show this.
Any help would be very much appreciated, thanks.
L'H rule is appliable here since both functions $\ln x\,,\,\,\frac{1}{\sin x}$ are derivable in a right neighborhood of zero, we get an indeterminate of the form $\frac{\infty}{\infty}$ and the
limit $\lim_{x\to 0}\frac{(\ln x)'}{\left(\frac{1}{\sin x}\right)'}$ exists and it's finite (it's zero)
November 22nd 2009, 11:58 PM #2
Oct 2009 | {"url":"http://mathhelpforum.com/calculus/116239-l-hospital-s-rule.html","timestamp":"2014-04-17T05:04:51Z","content_type":null,"content_length":"33434","record_id":"<urn:uuid:da4a62e3-0c1d-4ff8-aa0d-d65ec2015e89>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
delooping under Dold-Kan and simplicial delooping
up vote 7 down vote favorite
What maps of simplicial sets exist between
• the image under the Dold-Kan correspondence of a chain complex shifted up in degree
• and the image under the right adjoint to simplicial looping of the DK-image of the unshifted complex
Here is the same question in detail:
$$ (G \dashv \bar W) : sGrp \stackrel{\leftarrow}{\underset{\bar W}{\to}} sSet_0 \hookrightarrow sSet $$
for the adjunction between simplicial groups and reduced simplicial sets whose left adjoint is the simplicial loop group functor (as for instance in Goerss-Jardine, chapter V);
and write
$$ Ch_\bullet^+ \overset{\Xi}{\to} sAbGrp \hookrightarrow sGrp \overset{U}{\to} sSet $$
for the Dold-Kan correspondence, where in both cases I care about the images as simplicial sets.
Then for $V \in Ch_\bullet^+$ a chain complex and $V[1]$ (or $V[-1]$ if you prefer) its shift up in degree (its delooping as a chain complex) the two simplicial sets
$$ U \Xi (V[1]) $$
$$ \bar W (\Xi V) $$
should have the same homotopy type. What nice natural maps of simplicial sets do we have between them?
simplicial-stuff homological-algebra
add comment
1 Answer
active oldest votes
There's an explicit natural isomorphism between the two functors.
Rick Jardine says as much, but for the image of the functors in the category of chain complexes (i.e. after applying the normalization). You can find this in Goerss, Jardine
Remark III.5.6, or in greater depth in section 4.6 of Jardine's book on Generalized Etale Cohomology.
up vote 2 down vote
accepted The combinatorics for the isomorphism in simplicial abelian groups means that the isomorphism takes a little longer to state, but I could send you a pdf with everything written
out if this would be useful.
Thanks! I should have seen this. – Urs Schreiber Jun 27 '11 at 19:49
add comment
Not the answer you're looking for? Browse other questions tagged simplicial-stuff homological-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/56388/delooping-under-dold-kan-and-simplicial-delooping?sort=newest","timestamp":"2014-04-23T13:57:11Z","content_type":null,"content_length":"52121","record_id":"<urn:uuid:c1c4d622-05f0-4be9-983d-141a4d282e8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
TRISNOWATI, NINING (2008) METODE MATRIKS UNTUK MENENTUKAN SOLUSI PDLTAK HOMOGEN DENGAN KOEFISIEN KONSTANTA. Other thesis, University of Muhammadiyah Malang.
Download (52kB) | Preview
In this study discussed about Linear Differential Equation Non Homogeneous by Constant Coefficient which the generally from is: d^2y/ dx^2 + p dy/dx + qy = g (x) Where p, q constant and g(x) is
continu function in x. To determining solution first must be searched homogeneous solution that is takely g(x) = 0, afterwards determining particulir solution that is takely g(x) = 0 . So that
solution represent quantifiying from homogeneous solution and particulir solution. As know that a method always have limitation in its use, for example problem scope which finishable and moderation
workmanship stepping. To be able to know method which must used, hence will be more determined by differential equation type which must finished. That thing caused type of differential equation which
miscellaneous evaluated from type variable, order level which owned and or from linear equation or non linear equation. Linear Differential Equation Non Homogeneous by Constant Coefficient solution
searchable with matrix method. Matrix method represent a method which relative have step workmanship which enough simple for yielding Linear Differential Equation Non Homogeneous by Constant
Coefficient solution. This method degraded from coefficient indefinite method and developed with using concepts matrix, linear, and Euler Identity. Matrix method enabling for equpied with algorithm
able to easy searching differential equation solution. Workmanship algorithm the can translated in apllication computer program able to quickening determining differential equation solution. One of
the wearing as auxiliary apparatus for finishing differential equation solution is Maple.
Actions (login required) | {"url":"http://eprints.umm.ac.id/7283/","timestamp":"2014-04-17T04:00:08Z","content_type":null,"content_length":"22094","record_id":"<urn:uuid:fb48a076-07c7-439e-8016-961859b2e3b2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Testing the minimum variance method for estimating large-scale velocity
Testing the minimum variance method for estimating large-scale velocity moments
ABSTRACT The estimation and analysis of large-scale bulk flow moments of peculiar
velocity surveys is complicated by non-spherical survey geometry, the
non-uniform sampling of the matter velocity field by the survey objects and the
typically large measurement errors of the measured line-of-sight velocities.
Previously, we have developed an optimal `minimum variance' (MV) weighting
scheme for using peculiar velocity data to estimate bulk flow moments for
idealized, dense and isotropic surveys with Gaussian radial distributions, that
avoids many of these complications. These moments are designed to be easy to
interpret and are comparable between surveys. In this paper, we test the
robustness of our MV estimators using numerical simulations. Using MV weights,
we estimate the bulk flow moments for various mock catalogues extracted from
the LasDamas and the Horizon Run numerical simulations and compare these
estimates to the moments calculated directly from the simulation boxes. We show
that the MV estimators are unbiased and negligibly affected by non-linear
• Citations (0)
• Cited In (0)
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
10 Downloads
Available from
Jun 29, 2013 | {"url":"http://www.researchgate.net/publication/51968957_Testing_the_Minimum_Variance_Method_for_Estimating_Large_Scale_VelocityMoments","timestamp":"2014-04-17T07:11:28Z","content_type":null,"content_length":"196984","record_id":"<urn:uuid:f3c8d240-21e3-4576-9d95-042670661931>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Miami, FL Algebra Tutor
Find a West Miami, FL Algebra Tutor
...I've written multiple papers using the programs for years and have been exposed to most versions of Office including 2011. I've been using PowerPoint for years for school projects. I'm very
good at using all the tools in PowerPoint and can bring previous presentations as a reference on how use use the program to its full potential.
20 Subjects: including algebra 2, algebra 1, reading, English
...My goal is to make the student really understand the subject, not memorize it, so he/she can build a strong foundation that can be used for future classes. I hold a Bachelor Degree in
Industrial Engineering and a Bachelor Degree in Business Administration. I am also experienced in tutoring for ...
13 Subjects: including algebra 1, algebra 2, physics, chemistry
...I received my Master's in Finance and have three years of contributions and experience in the field. During my graduate years I tutored at an undergraduate and graduate level in all Finance
courses. I make sure that the student gets a conceptual understanding of the material.
8 Subjects: including algebra 2, algebra 1, accounting, finance
...I would like to conduct tutoring sessions with open communications. I am here to help you and to aid you with any questions, no matter how small. I am here to make sure that you are confident
with your subject so you can not only pass but excel.
9 Subjects: including algebra 1, algebra 2, Spanish, chemistry
...Commuting is easy for me. Last but not least, I can speak Spanish. If you have any questions, please feel free to contact me.I took Algebra 1 in 8th grade.
11 Subjects: including algebra 1, algebra 2, physics, chemistry
Related West Miami, FL Tutors
West Miami, FL Accounting Tutors
West Miami, FL ACT Tutors
West Miami, FL Algebra Tutors
West Miami, FL Algebra 2 Tutors
West Miami, FL Calculus Tutors
West Miami, FL Geometry Tutors
West Miami, FL Math Tutors
West Miami, FL Prealgebra Tutors
West Miami, FL Precalculus Tutors
West Miami, FL SAT Tutors
West Miami, FL SAT Math Tutors
West Miami, FL Science Tutors
West Miami, FL Statistics Tutors
West Miami, FL Trigonometry Tutors
Nearby Cities With algebra Tutor
Coconut Grove, FL algebra Tutors
Coral Gables, FL algebra Tutors
Doral, FL algebra Tutors
Hialeah algebra Tutors
Hialeah Gardens, FL algebra Tutors
Hialeah Lakes, FL algebra Tutors
Maimi, OK algebra Tutors
Miami algebra Tutors
Miami Beach algebra Tutors
Miami Gardens, FL algebra Tutors
Miami Shores, FL algebra Tutors
North Miami Beach algebra Tutors
North Miami, FL algebra Tutors
South Miami, FL algebra Tutors
Sweetwater, FL algebra Tutors | {"url":"http://www.purplemath.com/West_Miami_FL_Algebra_tutors.php","timestamp":"2014-04-19T07:22:46Z","content_type":null,"content_length":"23945","record_id":"<urn:uuid:f2c10454-a3a9-46f8-8cd7-fb8913b74181>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponential Powers - GMAT Math Study Guide
An exponential expression is simply multiplication repeated (e.g., 2^3=2*2*2).
• Base - the number that is multiplied by itself a certain quantity of times.
For example, in the expression 2^3, the number 2 is the base.
• Exponent - the number of times a quantity is multiplied by itself.
For example, in the expression 2^3, the number 3 is the exponent.
• Power - a synonym for exponent.
For example, if one were to say, "raise 3 to the 4th power," the base would be 3 and the exponent (or power to which 3 is raised) would be 4.
Table Clarifying Definitions
The following chart breaks down the parts in an exponential expression, clarifying exactly which number is the exponential power.
Expression Long-Hand Expression Base Exponent Power Value
2^3 2*2*2 2 3 3 8
4^6 4*4*4*4*4*4 4 6 6 4096
3^2 3*3 3 2 2 9
6^4 6*6*6*6 6 4 4 1296
In reading math problems, expressions with exponential powers such as 3^2 are often pronounced "three to the second power." Alternatively, exponential expressions such as 3^2 are often read as "the
second power of three."
Rules of Exponents
1^n = 1
x^0 = 1
0^n = 0
Note: During the past decade, mathematicians argued extensively about the value of 0^0. Some answer that 0^0 = 1 while others answer that 0^0 is undefined. In the unlikely event that this question
appears in some format or is a required intermediary calculation, the correct answer is more likely that 0^0 = 1
Examples of the Rules of Exponents
2^32^4 = 2^3+4 = 2^7
2^43^4 = (2*3)^4 = 6^4
(3^2)^4 = 3^(2*4) = 3^8
Exponential Powers Grow Expressions Rapidly
Exponential powers increase the value of an expression at an incredibly large rate. In order to see this, consider the following example:
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512
In each instance, the value is doubling--which makes sense since it is being multiplied by 2 another time for each increase in the value of the exponent.
Types of Exponents
Positive Exponents
The most basic and common type of exponent is a positive exponent. The expression x^y has a positive exponent if y > 0. The following are all examples of positive exponents.
Negative Exponents
Although it is most common to see an exponential expression with a base raised to a positive power, a base can just as easily be raised to a negative power. The expression x^y has a positive exponent
if y < 0. In working with negative exponential powers, it is extremely important to remember the following formula.
As a result of this formula, an exponential equation can often be simplified. Consider the following examples:
Note: The first example could be solved using the formula: x^nx^m = x^n+m
Fractional Exponents
While many exponential expressions are raised to an integer power, nothing prevents a base from being raised to a fractional power. Although all fractional exponents follow the aforementioned rules
and therefore are alike, it is often helpful to break down fractional exponents in a separate lesson since the use of radicals and roots is involved.
Exponent of Zero
Any number raised to the 0 power is one.
x^0 = 1
Exponent of 1
Any number raised to the first power is simply that number.
x^1 = x
Recursive Exponents
A recursive exponential expression is one in which multiple exponents are nested within each other. For example:
As per the order of operations, you evaluate an expression such as this by first computing the value inside parenthesis (there are none here) and then performing exponential expressions by working
from left to right. Consequently, the expression above is evaluated in the following manner.
1. 2^2 = 4
2. 4^3 = 64
In the above example, it would be wrong to first compute 2^3 = 8 and then compute 2^8 = 256 | {"url":"http://www.platinumgmat.com/gmat_study_guide/exponential_powers","timestamp":"2014-04-17T18:36:26Z","content_type":null,"content_length":"16821","record_id":"<urn:uuid:6e4b3fb2-1ef6-4aca-af0b-ae59ac905ba4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Qualitative Analysis for a Predator Prey System with Holling Type III Functional Response and Prey Refuge
Discrete Dynamics in Nature and Society
Volume 2012 (2012), Article ID 678957, 11 pages
Research Article
Qualitative Analysis for a Predator Prey System with Holling Type III Functional Response and Prey Refuge
^1College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China
^2College of Mathematics and Science, Shanghai Normal University, Shanghai 200234, China
Received 29 September 2012; Accepted 3 November 2012
Academic Editor: Yonghui Xia
Copyright © 2012 Xia Liu and Yepeng Xing. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
A predator prey system with Holling III functional response and constant prey refuge is considered. By using the Dulac criterion, we discuss the global stability of the positive equilibrium of the
system. By transforming the system to a Liénard system, the conditions for the existence of exactly one limit cycle for the system are given. Some numerical simulations are presented.
1. Introduction
Recently, the qualitative analysis of predator prey systems with Holling II or III types functional response and prey refuge has been done by several papers, see [1–5]. Their main objective is to
discuss under what conditions the positive equilibrium of the corresponding system is stable or unstable and the existence of exactly one limit cycles. In general, the prey refuge has two types, one
is the so-called constant proportion prey refuge: , where , the other type is called constant prey refuge: .
In [2], the authors considered the following system with a constant proportion prey refuge: where and denote the prey and predator density, respectively, at time , the parameters are positive
constants, and their biological meanings can be seen in [2]. The main result is that when system (1.1) admits only one limit cycle which is globally asymptotically stable.
In paper [4], the authors only gave the local stability analysis to the following system with a constant prey refuge: In this paper, we will research under what conditions that the positive
equilibrium is globally asymptotically stable and the existence of exactly one stable limit cycle of system (1.2). For ecological reason, we only consider system (1.2) in or .
It easy to obtain the following lemma.
Lemma 1.1. Any solution of system (1.2) with initial condition is positive and bounded for all .
2. Basic Results
Let , then system (1.2) changes (still denote , as ) Then transforms to and system (2.1) is bounded.
Clearly, if holds, system (2.1) has positive boundary equilibrium ; if , system (2.1) has a positive equilibrium , where
It is easy to obtain the following lemma.
Lemma 2.1. Let hold. Further assume that and , . Then is locally asymptotically stable, if any of and holds. When is unstable, furthermore, is a saddle point.
About the properties of the positive equilibrium, we have the following theorem.
Theorem 2.2. Assume . Then(I) is locally asymptotically stable for if holds.(II) is locally asymptotically stable for and is locally unstable for if holds, where (III)system (2.1) undergoes Hopf
bifurcation at if holds.
Proof. The Jacobian matrix of system (2.1) at is where . Then , where , the discriminant of is . Hence, the equation has two roots and , where .
Note that and implies . Consider Then (I)If holds, then holds for . Considering (H[2]) and , for , , which implies is locally asymptotically stable.(II)If holds, then , for , since , by , we obtain .
Together with (H[2]), for , which means is locally asymptotically stable. On the other hand, for is locally unstable.(III)We have these satisfy Liu’s Hopf bifurcation criterion (see [6], page 255);
hence, the Hopf bifurcation occurs at . This ends the proof.
3. Global Stability of the Positive Equilibrium
Denote .
Theorem 3.1. If is locally stable. Further assume that , then the positive equilibrium of system (2.1) is globally asymptotically stable.
Proof. Take the Dulac function , for system (2.1) we have where
If for .
On the other hand, there exist The equation has two roots .
Case1. If , then for , ; for , . Hence, is the least value of the function . If , it has for all , then is increasing for , notice that . Therefore, for . Since, for , system (2.1) does not exist
limit cycle.
Case2. If , then , for , hence, for is increasing. Evidently, , then there exists such that , where , hence, when , when . We know that takes the least value at , that is, . According to , for we
obtain , where .
To prove for , it suffices to prove for . Clearly, takes the least value at , and is strictly decreasing at the interval . Hence, for holds. Since . Therefore, for holds if holds, then for holds.
In sum, if one of the following three conditions holds (1) ; (2) ; (3) , , the function does not change the sign for , then system (2.1) does not exist limit cycle. It is easy to see that the
conditions , and are equal to . The proof is completed.
4. Existence and Uniqueness of Limit Cycle
Theorem 4.1. If holds, for system (2.1) admits at least one limit cycle in .
Proof. We construct a Bendixson loop which includes of system (2.1). Let be a length of the line be a length of line . Define where . The orbit of system (4.1) with initial value intersects with the
line and the intersection point , we obtain the orbit arc . Let be a length of line be a length of line . Because is a length of orbit line of system (2.1) and , , the orbits of system (2.1) tend to
the interior of the Bendixson loop from the outer of , and , by comparing system (2.1) to system (4.1): and . Then the orbits of system (2.1) tend to the interior of the Bendixson loop from the outer
of . On the other hand, under the condition of Theorem 4.1, is unstable, by Poincaré-Bendixson Theorem, system (2.1) admits at least one limit cycle in the region . This ends the proof.
Lemma 4.2 (see [7]). Let , be continuously differentiable functions on the open interval , and be continuously differentiable functions on in such that(1), (2)having a unique , such that for and ,(3)
for ,then system (4.1) has at most one limit cycle.
Theorem 4.3. If holds, for system (2.1) exists exactly one limit cycle which is globally asymptotically stable in .
Proof. Let , still denote , as , then system (2.1) becomes the positive equilibrium changes .
Let , then transform to the origin , still denote , as yield where .
Clearly, . It is easy to see that the conditions (1) and (2) of Lemma 4.2 for are satisfied. Consider Note that by the assumption of Theorem 4.3, is unstable equilibrium and then . Consider where
Then, we have where
By a simple computation, we obtain It is easy to verify that and has two roots and defined by, respectively, Obviously, . Therefore, for and for which indicates that is the minimum point of the
function when . Substituting into , we obtain It is easy to see that if , then , which implies for all . That is, the function is a strictly increasing function for .
Note that for and . It follows from (4.6) that
Hence, there exists a point , such that , that is, This, together with the monotonicity of when , we may conclude that for and for . Therefore, is the minimum point of the function for .
Together with (4.16), we obtain It follows from (4.6), we have . This indicates for all .
Then all the conditions of Lemma 4.2 are satisfied, considering Theorem 4.1, we obtain the conclusion of this theorem. The proof is completed.
5. Numerical Simulations
Take , , , , , and . Then , and . One can see a Hopf bifurcation occurring at and the bifurcated periodic solution is stable in Figure 1.
When taking , then , , ,. Theorem 3.1 is satisfied; the equilibrium of system (2.1) is globally asymptotically stable. See Figure 2.
Take , we obtain , . The conditions in Theorem 4.1 are satisfied; hence, system (2.1) exists exactly one limit cycle which is globally asymptotically stable. One can see Figure 3.
This work is partially supported by the National Natural Science Foundation of China (11226142), Foundation of Henan Educational Committee (2012A110012), Youth Science Foundation of Henan Normal
University (2011QK04), Natural Science Foundation of Shanghai (no. 12ZR1421600), and Shanghai Municipal Educational Committee (no. 10YZ74). | {"url":"http://www.hindawi.com/journals/ddns/2012/678957/","timestamp":"2014-04-17T19:10:34Z","content_type":null,"content_length":"601408","record_id":"<urn:uuid:424fa3a4-6bca-4127-8bd0-821f821d9893>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is "Little Group"?
I'm preparing a little sketch on the irreps of the Poincaré group according to Wigner's classification. Unfortunately, this subject seems to be not too popular in the textbooks and in review
articles. No author whom I found describes clearly and correctly the structure of the little group of the photon. The most part of them resort just to rescaling the angle (i.e. [itex]\phi\to\phi/2[/
itex]), some give wrong statements on the semidirect structure.
I spent some time to puzzle everything together. Now, it's quite easy to explain. I try to make some comments. I am much indebted to the comments of George Jones.
According to Simms, I will denote the little group of the photon by Δ.
George Jones
The universal cover of the restricted Lorentz group is [itex]SL\left( 2,\mathbb{C}\right)[/itex].
That indicates, that Δ is also a sort of covering group. At least, its center will be [itex]\left\{ I_2,-I_2\right\}[/itex], [itex]I_2[/itex] being the identity element of [itex]SL\left( 2,\mathbb{C}
The little group of the lightlike 4-vector [itex]X=1+\sigma_3[/itex] is the subgroup of [itex]SL\left( 2,\mathbb{C}\right)[/itex] that consists of matrices of the form
e^{i\theta} & b\\
0 & e^{-i\theta}
where [itex]\theta[/itex] is an arbitrary real number and [itex]b[/itex] is an arbitrary complex number.
This is the group Δ.
To get rid of using a chart of [itex]U(1)[/itex] (and thereby to avoid any problems with rescaling), I prefer the equivalent definition of the group Δ as follows:
The little group Δ of the lightlike 4-vector [itex]X=1+\sigma_3[/itex] is the subgroup of [itex]SL\left( 2,\mathbb{C}\right)[/itex] that consists of matrices of the form
u & u^{-1}b\\ 0 & u^{-1}
where [itex]u,b[/itex] are arbitrary complex numbers but with [itex]\left|u\right|=1[/itex].
The introduction of an additional factor [itex]u^{-1}[/itex] in the entry (1,2) simplifies the exhibition of the semidirect structure.
The key point is that the product in Δ resembles much to that in the Euclidean group E(2), but not fully. Namely we have
u_1 & u_1^{-1}b_1\\ 0 & u_1^{-1}
u_2 & u_2^{-1}b_2\\ 0 & u_2^{-1}
u_1u_2 & u_2^{-1}u_1b_2+u_1^{-1}b_1\\ 0 & u_1^{-1}u_2^{-1}
u_1u_2 & (u_1u_2)^{-1}(u_1^2b_2+b_1)\\ 0 & (u_1u_2)^{-1}
The difference lies in the [itex]u_1^2[/itex].
Let's describe the special Euclidean group SE(2) by real [itex]3\times3[/itex]-matrices: SE(2) consists of the elements
\begin{pmatrix} R & b \\ 0 & 1 \end{pmatrix}
with [itex]R\in SO(2)[/itex], [itex]b\inℝ^2[/itex]. Note: Reflections should be excluded since they cannot be represented by any [itex]e^{i\theta}[/itex].
Ordinary matrix multiplication reproduces the structure as semidirect product of rotations and translations in two dimensions correctly:
\begin{pmatrix} R_1 & b_1 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} R_2 & b_2 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} R_1R_2 & R_1b_2+b_1 \\ 0 & 1 \end{pmatrix}.
Here, the rotation [itex]R_1[/itex] does not appear as [itex]R_1^2[/itex].
To have then some sort of isomorphism to Δ, in the entry (1,2) should be a [itex]u_1[/itex] instead of [itex]u_1^2[/itex]. It seems that the elements [itex]u[/itex] and [itex]-u[/itex] have to be
This identification is achieved by the following homomorphism
e^{i\theta} & b\\
0 & e^{-i\theta}
e^{i2\theta} & b\\
0 & 1
that means that I have got the homomorphism [itex]\phi[/itex]:
u & b \\ 0 & u^{-1}
u^2 & b \\ 0 & 1
(That I now suppress the additional factor [itex]u^{-1}[/itex] at entry (1,2) is of course of no significance.)
Note: I don't make use of any angle [itex]\theta[/itex] nor of some rescaling. Defining the group multiplication with elements written in the form [itex]g(\theta,b)[/itex] suffers from tackling the
addition of two angles to a value exceeding [itex]2\pi[/itex] or even [itex]4\pi[/itex]. Somewhere in the literature, the mere extension of the interval [itex]\left[0,2\pi\right][/itex] to [itex]\
left[0,4\pi\right][/itex] together with the replacement [itex]\theta[/itex] by [itex]\theta/2[/itex] seems to imitate the descent to a group covered by Δ.
The kernel of [itex]\phi[/itex] is obviously [itex]\left\{I_2,-I_2\right\}[/itex].
So, in taking the cosets of this kernel we get an isomorphism [itex]\iota[/itex]
u & b \\ 0 & u^{-1}
-u & -b \\ 0 & -u^{-1}
u^2 & b \\ 0 & 1
The group formed by the elements [itex]\begin{pmatrix}u^2 & b \\ 0 & 1\end{pmatrix}[/itex] with [itex]\left|u\right|=1[/itex] is of course the same as the group formed by the elements [itex]\begin
{pmatrix}v & b \\ 0 & 1\end{pmatrix}[/itex] with [itex]\left|v\right|=1[/itex].
To prove that this group is isomorphic to the proper Euclidean group [itex]SE(2)[/itex], one needs now the identification [itex]v=e^{i\theta}[/itex] with [itex]\theta\in[0,2\pi[[/itex]. Then one can
proceed in the manner as demonstrated by George Jones. No factor 2 is needed.
I conclude that the group Δ is a twofold covering group of the Euclidean group [itex]SE(2)[/itex]. The covering homomorphism (Δ onto an isomorphic copy of [itex]SE(2)[/itex]) is given by [itex]\phi[/
itex] defined above.
I do not state that the little group Δ is the universal covering group of [itex]SE(2)[/itex] since I don't know whether Δ is simply connected. So, I refrain from writing [itex]\widetilde{SE(2)}[/
itex] instead of Δ. | {"url":"http://www.physicsforums.com/showthread.php?p=3777914","timestamp":"2014-04-16T07:48:55Z","content_type":null,"content_length":"81510","record_id":"<urn:uuid:f9e454b4-15bf-42f3-a5ff-f442027abaa8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Heroism timing
Moderators: Fridmarr, Worldie, Aergis, theckhd
Elsie wrote:
Dorvan wrote: The reason popping heroism during execute doesn't shorten the fight length (assuming fixed Z, which may not be the case) is that doing so also shortens the duration of the
"execute" buff.
Well, after about 3 minutes of research before my next class, here's my synopsis:
Haste stacks multiplicatively. This means that it is beneficial to stack multiple haste effects. For instance, stacking Slice and Dice with Blade Flurry gives a total of 68% haste (140% *
120% = 168% of the base attack speed). (Source: Wowwiki)
Given Atk Spd*Haste confers some x% dps increase and Atk Spd*Other-Haste {e.g., lust} is also some y% dps increase, then Atk Spd*Haste*lust confers some x*y% dps increase which, with order of
application being a non-issue, does not change Z.
The only issue now is sub-35% spells.
^This, Haste stacks multiplicatively. Arguing basic DPS mechanics is getting pretty tiring.
The concept of saving all Cooldowns for bloodlust is a myth, only haste needs to be timed with it. The only 'exception' to this rule, is if as a caster, bloodlust puts you below the GCD anyway,
popping a haste cooldown will provide no DPS increase.
Posts: 172
Joined: Sun Jan 18, 2009 8:57 pm
Varmin wrote:^This, Haste stacks multiplicatively. Arguing basic DPS mechanics is getting pretty tiring.
The concept of saving all Cooldowns for bloodlust is a myth, only haste needs to be timed with it. The only 'exception' to this rule, is if as a caster, bloodlust puts you below the GCD anyway,
popping a haste cooldown will provide no DPS increase.
Suppose that you've got a spellpower buff that comes out to 200 damage per fireball that you cast. If you use it when you can get 4 fireballs off, you increased your damage by 800. If you use it when
you can get 5 fireballs off, you've increased your damage by 1000. The duration of your cooldown is unaffected by when you use it (unlike with the execute/heroism interaction). Damage is maximized by
stacking cooldowns with heroism/bloodlust, whether the difference is "significant" depends on the exact numbers you put in. Now if you "save" your cooldowns for heroism, and as a result get one fewer
cooldown use then you otherwise would, that's different, but that wasn't how the problem was stated.
Moonlight Sonata Techno Remix
Scriggle - 85 Fire Mage
Fizzmore - 81 Mut Rogue
Adorania - 80 Disc Priest
Posts: 8462
Joined: Wed Apr 04, 2007 3:28 pm
Varmin wrote:^This, Haste stacks multiplicatively. Arguing basic DPS mechanics is getting pretty tiring.
The concept of saving all Cooldowns for bloodlust is a myth, only haste needs to be timed with it. The only 'exception' to this rule, is if as a caster, bloodlust puts you below the GCD anyway,
popping a haste cooldown will provide no DPS increase.
You haven't argued a damn thing. Making an argument involves supporting what you say with reasons (sometimes called arguments for your position.) You simply pop in every once in a while and tell
other people they are wrong with out providing any evidence or reasons why they are wrong. DPS stats have always stacked in a multiplicative way with other dps stats. For example the higher your crit
is the more dps you will get from each point of AP you have.
It's been pointed out several times why you are wrong and explained in several different contexts.
Posts: 307
Joined: Tue Apr 22, 2008 10:58 am
Varmin wrote:
Elsie wrote:
Dorvan wrote: The reason popping heroism during execute doesn't shorten the fight length (assuming fixed Z, which may not be the case) is that doing so also shortens the duration of the
"execute" buff.
Well, after about 3 minutes of research before my next class, here's my synopsis:
Haste stacks multiplicatively. This means that it is beneficial to stack multiple haste effects. For instance, stacking Slice and Dice with Blade Flurry gives a total of 68% haste (140% *
120% = 168% of the base attack speed). (Source: Wowwiki)
Given Atk Spd*Haste confers some x% dps increase and Atk Spd*Other-Haste {e.g., lust} is also some y% dps increase, then Atk Spd*Haste*lust confers some x*y% dps increase which, with order of
application being a non-issue, does not change Z.
The only issue now is sub-35% spells.
^This, Haste stacks multiplicatively. Arguing basic DPS mechanics is getting pretty tiring.
The concept of saving all Cooldowns for bloodlust is a myth, only haste needs to be timed with it. The only 'exception' to this rule, is if as a caster, bloodlust puts you below the GCD anyway,
popping a haste cooldown will provide no DPS increase.
I'm not sure why you quoted me when my statement doesn't support your argument. You are aware that all buffs stack multiplicatively if stated as such, right?
Being "below the GCD minimum" doesn't matter if you're a caster who is casting something over 1s cast or channel time. Furthermore, stacking cooldowns is, in principle, the same as compound interest.
You do X base DPS. If you use Avenging Wrath, you deal X(1+0.2). Say you get 5% DPS out of bloodlust, then you're dealing X(1+0.2)(1+0.05). Note 1.2 > 1.05, thus the previous equation is greater than
X(1+0.05)(1+0.05) = X(1+0.05^2... repeat and you see that your modified DPS value > X(1+r)^n where n is the number of buffs you use, and r is the lowest gain rate of all said CDs.
Last edited by
on Thu Mar 05, 2009 5:59 pm, edited 1 time in total.
Posts: 3819
Joined: Sat Jan 12, 2008 11:12 pm
Laz wrote:Good work, Theck. May I ask what you do by trade? Your mathematical analyses are very thorough.
Physicist. I like taking complicated systems apart to see how they work. Mathematically or otherwise.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
theckhd wrote:
Laz wrote:Good work, Theck. May I ask what you do by trade? Your mathematical analyses are very thorough.
Physicist. I like taking complicated systems apart to see how they work. Mathematically or otherwise.
My urge to link xkcd is so strong right now.
Posts: 3819
Joined: Sat Jan 12, 2008 11:12 pm
Also, in case anyone else was curious - redoing the Time-based analysis with this method goes as follows - Keep in mind that B and A refer to "below" T (i.e. after T) and "above" T (i.e. before T),
so the subscripts are a little trickier:
Hero Early:
Damage before time T:
WDHA + (T-W)D0A
, taking time
Damage after time T:
H-WDHA - (T-W)D0A
, taking time
[H-WDHA - (T-W)D0A]/D0B
Total Time:
T + H/D0B - (D0A/D0B)T - (DHA/D0B)W + (D0A/D0B)W Hero Late:
Damage before time T:
, taking time
Damage after time T:
, taking time
, as well as:
, taking time
Total Time:
T + W + H/D0B - (D0A/D0B)T - (DHB/D0B)W
Subtracting the two gives
- (DHA/D0B)W + (D0A/D0B)W - W + (DHB/D0B)W
-(W/D0B)[(D0B - D0A) - (DHB - DHA)]
Which makes pretty much sense - the net benefit is basically the difference in the DPS without heroism minus the difference in DPS with heroism.
Plugging in X, XY, XZ,and XYZ as in
my previous version
= -(W/XY)[(XY-X) - (XYZ - XZ)] = -(W/Y)(Y-1)(1-Z)
= (W/Y)(Y-1)(Z-1), which agrees with my last calculation.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
Posts: 6999
Joined: Fri Aug 22, 2008 4:37 pm
Location: Retired
Posts: 166
Joined: Wed Oct 22, 2008 7:39 am
I actually looked up that exact comic when xkcd was mentioned here >.>
In other news, Machine Learning > all.
Ok, sorry for the interruption, back to our regularly scheduled heroism theorycrafting.
Moonlight Sonata Techno Remix
Scriggle - 85 Fire Mage
Fizzmore - 81 Mut Rogue
Adorania - 80 Disc Priest
Posts: 8462
Joined: Wed Apr 04, 2007 3:28 pm
The thing is, the proof looks correct to me, and not only that, it leads to some weird corollaries: for example, having 20% of your raid die when the boss is at 50% is the same as plugging N=50
and Y=.8 into the above, so even popping Heroism before vs. after raid members start dying has no effect on the total boss kill time.
Any of the other math folks here have some insight on this? I find the results bizarre and completely counter-intuitive.
Still working my way through the thread, but this statement is NOT true.
The assumption being made which is incorrect is that you assume that 5 people are going to die regardless when the boss hits 50%, as opposed to say, 3 minutes into the encounter.
When you recognize that having people alive before hand would mean that the boss would be LOWER than 50% at the 3 minute mark when they die, you see where the flaw is.
Arkham's Razor: a theory which states the simplest explaination tends to lead to Cthulu.
Posts: 3087
Joined: Fri May 11, 2007 7:09 pm
Dorvan wrote:I actually looked up that exact comic when xkcd was mentioned here >.>
In other news, Machine Learning > all.
I admit that I love XKCD's semi-frequent nods at Lisp. This is, of course, because I am now somewhat spoiled by Lisp.
Posts: 4036
Joined: Thu Jan 03, 2008 12:01 pm
Majiben wrote:
Just a physicist eh? Mwahaha
Where is mathematical psychology?
Posts: 630
Joined: Tue Oct 09, 2007 3:33 am
Posts: 6999
Joined: Fri Aug 22, 2008 4:37 pm
Location: Retired
Joanadark wrote:
The thing is, the proof looks correct to me, and not only that, it leads to some weird corollaries: for example, having 20% of your raid die when the boss is at 50% is the same as plugging N=
50 and Y=.8 into the above, so even popping Heroism before vs. after raid members start dying has no effect on the total boss kill time.
Any of the other math folks here have some insight on this? I find the results bizarre and completely counter-intuitive.
Still working my way through the thread, but this statement is NOT true.
The assumption being made which is incorrect is that you assume that 5 people are going to die regardless when the boss hits 50%, as opposed to say, 3 minutes into the encounter.
When you recognize that having people alive before hand would mean that the boss would be LOWER than 50% at the 3 minute mark when they die, you see where the flaw is.
Joanadark is correct in that people dying does change the result.
On another axis
Or plane, or possibly dimension if you angle it right.
Posts: 3819
Joined: Sat Jan 12, 2008 11:12 pm
Return to Advanced Theorycraft and Calculations
Who is online
Users browsing this forum: No registered users and 1 guest | {"url":"http://maintankadin.failsafedesign.com/forum/viewtopic.php?f=6&t=21191&start=75","timestamp":"2014-04-17T09:59:02Z","content_type":null,"content_length":"60950","record_id":"<urn:uuid:6087f06f-77b6-42b1-a584-22fb8ab3d0aa>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
homework about geometry
March 19th 2008, 02:34 PM #1
Mar 2008
what is the ratio of the volumes of a sphere and a cone with the base diameter and the altitude of the cone equal to the diameter of the sphere?
March 19th 2008, 10:58 PM #2 | {"url":"http://mathhelpforum.com/geometry/31471-homework-about-geometry.html","timestamp":"2014-04-17T22:38:35Z","content_type":null,"content_length":"33126","record_id":"<urn:uuid:858b0185-99fd-44f8-9836-2958f4cc61e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
semi-groups and monoids
IMOH, arguments over terminology are politically dangerous, only attempt this if you are on good terms with your professor, or want to start a fight.
In my experience, semi-group means associative binary operation, and monoid means associative binary operation with identity element. Wolfram's mathworld agrees:
, I usually trust Wolfram for ORTHODOX definitions, wikipedia is good at bringing in side issues and lesser known usage (ok, that's my subjective opinion). | {"url":"http://www.physicsforums.com/showthread.php?t=155972","timestamp":"2014-04-21T07:24:17Z","content_type":null,"content_length":"32221","record_id":"<urn:uuid:5c7040ae-a8c0-4c3b-add1-4d24515bf5de>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discrete Mathematics: Trees
May 14th 2008, 10:19 PM
Discrete Mathematics: Trees
For which values of m and n is the complete bipartite graph on m and n vertices a tree?
I'm using Discrete Mathematics 5th edition by Richard Johnsonbaugh, chapter 7.1 #5
Please help I've been working on this problem for hours and I can't figure it out after reading a lot... I just don't understand. Thank you!
May 15th 2008, 02:13 AM
For which values of m and n is the complete bipartite graph on m and n vertices a tree?
I'm using Discrete Mathematics 5th edition by Richard Johnsonbaugh, chapter 7.1 #5
Please help I've been working on this problem for hours and I can't figure it out after reading a lot... I just don't understand. Thank you!
I'm having a difficult time understanding what the question is asking, I assume n stands for vertices, but what is m? Is there an illustration to accompany this question?
Also, easy way to tell if a graph is bipartite, take two different coloured highlighters, mark a given node one colour, and then for every node on the graph, mark the nodes it connects to as the
opposite colour. If any node requires both colours then it is not bipartite.
May 15th 2008, 08:06 AM
In the attached graphic is the complete bipartite graph $K_{3,4}$.
Johnsonbaugh defines a tree, T, as follows: T is a simple graph such that there us a unique path between any two vertices.
Does the graph $K_{3,4}$ fit that definition?
May 15th 2008, 08:17 AM
For which values of m and n is the complete bipartite graph on m and n vertices a tree?
I'm using Discrete Mathematics 5th edition by Richard Johnsonbaugh, chapter 7.1 #5
Please help I've been working on this problem for hours and I can't figure it out after reading a lot... I just don't understand. Thank you!
I think its quite clear that for m =1 or n = 1, the bipartite is a tree. If either of m or n is greater than 1, you can prove there exists more than one path between two nodes.
May 15th 2008, 08:41 AM
May 15th 2008, 09:19 AM
I am sorry, I meant:
I think its quite clear that for m =1 or n = 1, the bipartite is a tree. If both m and n are greater than 1, you can prove there exists more than one path between two nodes. | {"url":"http://mathhelpforum.com/discrete-math/38391-discrete-mathematics-trees-print.html","timestamp":"2014-04-18T04:39:14Z","content_type":null,"content_length":"8368","record_id":"<urn:uuid:d633f5b7-92a5-42be-924e-ca55adf9f9b7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Triple Bar on Maths and Economics through the eyes of a Student
A few days ago, I wrote about how I thought the equals sign could be used incorrectly and commented on how the ‘equivalent to’ sign should be used in it’s place. After further reading into the
matter, I have discovered that the ‘Triple Bar‘ or ‘Identity‘ symbol is one of many forms of the equals sign. The official definition of this, from wikipedia, is:
The triple bar symbol “≡” (U+2261) is often used to indicate an identity, a definition (which can also be represented by “≝”, U+225D), or a congruence relation in modular arithmetic. The symbol
“≘” can be used to express that an item corresponds to another.
Therefore, I can confirm that using the equals sign when describing equality. There are many variations of the sign all with their own little alteration of meaning. | {"url":"http://mathonomicsstudent.wordpress.com/tag/triple-bar/","timestamp":"2014-04-18T23:29:44Z","content_type":null,"content_length":"17365","record_id":"<urn:uuid:b1ed260e-2679-4a8a-a803-7504c9d0d75b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponents Growth and Decay
October 21st 2009, 06:41 PM #1
Junior Member
Oct 2009
A common inhabitant of human intestines is the bacterium Escherichia coli. A cell of this bacterium in a nutrient-broth medium divides into two cells every 20 minutes. The initial population of a
culture is 65 cells.
(a) Find the relative growth rate.
(b) Find an expression for the number of cells after t hours.
(c) Find the number of cells after 6 hours.
(d) Find the rate of growth after 6 hours
(e) When will the population reach 20,000 cells
October 21st 2009, 06:59 PM #2 | {"url":"http://mathhelpforum.com/pre-calculus/109575-exponents-growth-decay.html","timestamp":"2014-04-19T08:59:56Z","content_type":null,"content_length":"33439","record_id":"<urn:uuid:55f4194c-9ba9-4a20-b68d-ee9a846ad92a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lakemoor, IL Precalculus Tutor
Find a Lakemoor, IL Precalculus Tutor
...Later, I would become Exam Prep Coordinator and Managing Director of the Learning Center. However, my next venture was being involved in the martial arts where I learned goal-setting skills,
the importance of building student's confidence, and how to motivate students. Although I took a step ba...
26 Subjects: including precalculus, chemistry, Spanish, reading
...I compact material, as appropriate, to accelerate learning. By focusing on applications and problem solving I help students to see why prealgebra is both important and useful. I use online
tools to provide practice and assessment as students progress through the course.
24 Subjects: including precalculus, calculus, geometry, algebra 1
...During my two and half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by
the South Carolina Department of Education. During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2.
12 Subjects: including precalculus, calculus, geometry, algebra 1
My name is Aaron. I am a resident of Roselle with my wife and newborn. I am a former mathematics teacher (and Mathletes Coach) and currently working as an Actuarial Analyst in downtown Chicago.
10 Subjects: including precalculus, calculus, algebra 2, algebra 1
...This following year (2014-2015) I will be a freshman at Georgia Institute of Technology. Having received a cumulative unweighted GPA of 3.98 in high school, I have established and maintained
an excellent work ethic and study habits that have allowed me to succeed in my studies. Although I am a ...
16 Subjects: including precalculus, chemistry, calculus, statistics
Related Lakemoor, IL Tutors
Lakemoor, IL Accounting Tutors
Lakemoor, IL ACT Tutors
Lakemoor, IL Algebra Tutors
Lakemoor, IL Algebra 2 Tutors
Lakemoor, IL Calculus Tutors
Lakemoor, IL Geometry Tutors
Lakemoor, IL Math Tutors
Lakemoor, IL Prealgebra Tutors
Lakemoor, IL Precalculus Tutors
Lakemoor, IL SAT Tutors
Lakemoor, IL SAT Math Tutors
Lakemoor, IL Science Tutors
Lakemoor, IL Statistics Tutors
Lakemoor, IL Trigonometry Tutors
Nearby Cities With precalculus Tutor
Algonquin precalculus Tutors
Bull Valley, IL precalculus Tutors
Crystal Lake, IL precalculus Tutors
Fox Lake, IL precalculus Tutors
Holiday Hills, IL precalculus Tutors
Island Lake precalculus Tutors
Johnsburg, IL precalculus Tutors
Lake In The Hills precalculus Tutors
Long Grove, IL precalculus Tutors
Mchenry, IL precalculus Tutors
Port Barrington, IL precalculus Tutors
Prairie Grove, IL precalculus Tutors
Round Lake Park, IL precalculus Tutors
Round Lake, IL precalculus Tutors
Volo, IL precalculus Tutors | {"url":"http://www.purplemath.com/Lakemoor_IL_Precalculus_tutors.php","timestamp":"2014-04-17T13:26:12Z","content_type":null,"content_length":"24331","record_id":"<urn:uuid:ad517f12-85e1-4740-ae9f-620d4e45ce19>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Line Segment Distance
July 15th 2008, 01:38 PM #1
Jul 2008
Line Segment Distance
Hey guys. I am having extreme dificulty on this question. I am wondering if anyone knows how to go about this, seems some what too difficult for me:
"IF we have two points 'A' and 'B' which are on opposite sides of a straight line 'M'. Locate a point 'X' on 'M' where the difference betwween |AX| and |BX| is a maximum."
Do not double post. Stay with the one in Urgent Homework Help and keep it active until you're satisfied with the responses.
Hey guys. I am having extreme dificulty on this question. I am wondering if anyone knows how to go about this, seems some what too difficult for me:
"IF we have two points 'A' and 'B' which are on opposite sides of a straight line 'M'. Locate a point 'X' on 'M' where the difference betwween |AX| and |BX| is a maximum."
Simple.Point X will be point of intersection of AB and the given line.
Consider triangle inequality
Equality occurs when A,X,B are collinear
July 15th 2008, 02:17 PM #2
A riddle wrapped in an enigma
Jan 2008
Big Stone Gap, Virginia
July 16th 2008, 07:52 PM #3 | {"url":"http://mathhelpforum.com/geometry/43760-line-segment-distance.html","timestamp":"2014-04-16T07:52:44Z","content_type":null,"content_length":"36382","record_id":"<urn:uuid:cb1f87dc-4d91-4080-a3c9-f73c066da19e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Hollywood ACT Math Tutors
...I owe the breadth and global reach of my passion for language to University coursework in philosophical, critical race theory texts and dance of the African diaspora; Spanish language and
comparative literatures of Latin America; elementary Arabic and experimental poetry; music theory, public pol...
60 Subjects: including ACT Math, reading, Spanish, chemistry
...I believe in setting long-term and short-term goals, and I believe in constructing an initial plan of study to help keep both parties (i.e., teacher and student) focused on our shared
objectives. The success of the individual student is my primary objective, and to that end I make myself endless...
58 Subjects: including ACT Math, reading, English, calculus
I am currently a senior Math major at Caltech. I tutored throughout high school (algebra, calculus, statistics, chemistry, physics, Spanish, and Latin) and tutored advanced math classes during
college. Above all other things, I love to learn how other people learn and to teach people new things in...
28 Subjects: including ACT Math, Spanish, chemistry, calculus
...As a lawyer, I made my living writing and arguing legal situations, as well as analyzing the writing of others for 30 years. Since retiring, I have been tutoring students in many subjects, some
of them in law. I do not give legal advice, but I can give research and writing advice to law students and law clerks.
63 Subjects: including ACT Math, chemistry, English, reading
...I have written a number of very successful entrance and application essays, for my undergrad and graduate program, as well as study abroad programs. I have experience assisting students with
essay writing and applications, including Armenian students applying for English/American programs (durin...
56 Subjects: including ACT Math, English, reading, writing | {"url":"http://www.algebrahelp.com/North_Hollywood_act_math_tutors.jsp","timestamp":"2014-04-17T18:35:09Z","content_type":null,"content_length":"25334","record_id":"<urn:uuid:93460d1a-9e35-4b6f-8439-27bd80900ef3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Elmhurst SAT Math Tutor
Find an East Elmhurst SAT Math Tutor
...Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard
College in '07.
26 Subjects: including SAT math, calculus, physics, statistics
...I also tutor candidates for the VEE portion - Corporate Finance, Economics and Linear Regression. I have a long background in Finance, mathematics and statistics, including Fellowship status as
a Chartered Certified Accountant. I qualified with KPMG, then working as Audit Manager in the Caribbean.
55 Subjects: including SAT math, reading, English, writing
...I earned a BA from the University of Pennsylvania and an MA from Georgetown University. I have tutored SAT prep (reading, writing, and math) both privately and for the Princeton Review. I
earned a BA from the University of Pennsylvania and an MA from Georgetown University.
20 Subjects: including SAT math, English, algebra 2, grammar
...I recently spent three months preparing for the Graduate Record Examination (GRE), taking Kaplan's GRE test prep course to prepare for it. By taking Kaplan's course, I learned a variety of
strategies for solving Math problems quickly and effectively. Surprisingly, GRE's Math problems and concep...
21 Subjects: including SAT math, reading, ESL/ESOL, algebra 1
...Many students find themselves feeling intimidated by the introduction of letters into equations, and struggle with the concept of a function as a graph. Michal can break down these ideas into
simple concepts that are easy to visualize, allowing your child's creativity to shine through in solving...
8 Subjects: including SAT math, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/East_Elmhurst_SAT_Math_tutors.php","timestamp":"2014-04-21T04:36:34Z","content_type":null,"content_length":"24099","record_id":"<urn:uuid:ba1470a8-649e-4c37-9cde-4c39c7df0b4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sum this.
Re: Sum this.
Yes, that I've seen
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Now the notation does the rest. We can write
is the nth Fibonacci number.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
The next challenge is to find something whose taylor series is 5x + 5^2*x^2 + 5^3*x^3....
Which we could do with the formula of GP but to bad, I've forgotten that
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
One second we are getting there.
We can further write.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
Yes, thats just the same thing with some more notation
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Okay, if we substitute 1 / 5 for x we get, (1/5), ( 1/5)^2, (1/5)^3...
or written as
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
yes great!
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
I have not changed anything on the LHS, so lets do that now.
Make the substitution:
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
We are done with that, I guess...
Now the big question is how did you find a function whose taylor series contains the fibonacci?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
You do not have to, x / (1 - x - x^2) is the generating function. Each problem has a different gf usually. Some are so useful they have hundreds of uses for that gf like the Catalan numbers,
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
x / (1 - x - x^2) is the generating function.
How did you come to know?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Because basically they are derived from the recurrence that states the problem. f(n) = f(n-1) + f(n-2). That is one way.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
Would you show their derivation?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Off hand I do not remember how. I will post when I find it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Hi Agnishom;
I do not like his proof. For one thing, F0 makes little sense to me.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
But, I thought you were talking about this one
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Straight out of Wilf's book.
We start with:
multiply the LHS by x^n and sum over n+1 to get,.
On the RHS we get:
Solving for F(x) we get:
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
How did you get that by multiplying the LHS with x^n?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Beats me! Herbert is about 200 times smarter than me.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
Does it mean you understand it neither?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
I do not understand his derivation. I rarely understand someone else's proof or program. I always have to do it myself to have any chance.
I had a method that was so good it worked on a small programmable calculator. I do not remember it though.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
What is a programmable calculator?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Sum this.
Today they call them graphic calculators.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sum this.
How is the taylor series of sin found out?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=268727","timestamp":"2014-04-19T09:29:57Z","content_type":null,"content_length":"42752","record_id":"<urn:uuid:d633d43c-ff03-4a3b-8689-a4e6bca477c6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
The universal history of numbers. From prehistory to the invention of the computer. Translated form the 1994 French original by David Bellos, E. F. Harding, Sophie Wood and Ian Monk.
(English) Zbl 0955.01002
New York, NY: Wiley. xxii, 633 p. £32.50 (2000).
“The main aim of this two-volume work [see Zbl 0969.68001 for Vol. II] is to provide in simple and accessible terms the full and complete answer to all and any questions that anyone might want to ask
about the history of numbers and of counting, from the prehistory to the age of computers.” With this sentence begins the ‘Foreword’ of the present volume (neither the title page nor the imprint or
the table of contents indicate that this volume is the first of a two-volume publication, as is said here). This exuberant claim is expanded in more detail in the ‘Introduction’ as follows (pp. xvii/
“$...$ I think I have brought together practically everything of significance from what the number-based sciences, of the logical and historical kinds, have to teach us at the moment. Consequently,
this is also probably the only book ever written that gives a more or less universal and comprehensive history of numbers and numerical calculation, set out in a logical and chronological way, and
made accessible in plain language to the ordinary reader with no prior knowledge of mathematics.”
One does not need a great deal of imagination to see that it is impossible to fulfill such a claim. Even without being a specialist one will realize that the vast history of number systems and
computational practices of the past, as developed in a great variety of cultures, must needs place the historian before numerous difficult and often unsolvable questions. Furthermore, the historical
development seldom, if ever, conforms to a wholly logical structure. If the latter is at all possible, it will be a rational reconstruction of a rather haphazard natural growth of number words,
number symbols, and methods of calculation; to arrive at it requires a good deal of simplification and abstraction from the actual historical process. To present both, the historical and the logical
story, in all details in one work, without distorting at least one side of it, is virtually impossible.
Anyone familiar with the unfolding of a number system including its words and symbols in just one specific culture will appreciate the complexities of historical developments, and if honest, would
hardly claim to be able to give “the full and complete answer to all and any questions,” let alone to do so for all of the developments that have happened elsewhere throughout the entire course of
human history.
Before going into further details, it must be said that the present volume (this reviewer has not yet seen the second one) is an English translation of the author’s “Histoire universelle des
chiffres”, first published in French in 1994 by Editions Robert Laffont, Paris. This in turn is the second edition of Ifrah’s one-volume book which appeared under the same title in 1981 (Editions
Seghers, Paris). Several translations or re-editions of this first edition have been briefly reviewed previously in this Zentralblatt [see Zbl 0589.01001; Zbl 0606.01023; Zbl 0686.01001; Zbl
In the meantime, a vivid discussion about the merits and shortcomings of Ifrah’s “Histoire universelle des chiffres” has been conducted both on the Internet and in the French journal “Bulletin APMEP”
(Bulletin de l’Association des Professeurs de Mathématiques de l’Enseignement Public), nos. 398 (Avril-Mai 1995) and 399 (Juin 1995). This journal asked six scholars (specialists on the cultures and
number systems of China, Egypt, Mesopotamia, India, the Arabs, and the Maya) to evaluate critically the chapters they were competent to judge. It published their comments in the issues just mentioned
with highly unfavourable results for the overly ambitious author, a former school teacher. The chief rebukes were: 1) Ifrah is inclined to present hypotheses as established facts; 2) often, where
hypotheses are labeled as such, he makes unfounded generalisations; 3) the way Ifrah uses his sources (scholarly articles, archaeological findings, etc.) is doubtful in a number of cases – sometimes
he presents a one-sided selection, or he offers interpretations that do not correctly reflect the statements and opinions of the authors, or he may simply have misunderstood something; 4) sources are
even fabricated by mingling material from widely different areas or time periods in order to substantiate exaggerating claims.
One such claim, to give an example, is the unfounded assertion that an abacus was used for practical computations in numerous early cultures. This might be advanced as a reasonable hypothesis, but as
long as there is no (or insufficient) archaeological evidence it ought not be presented as an established fact. However, among the material presented here – including new chapters that have been
added in this second edition – there is, for instance, a “reconstructed” Babylonian abacus, complete with illustrations, and instructions for calculations as they (in the authors’s opinion) were
carried out, without the least shred of evidence that such a device ever existed, since no example survives in either the tangible archaeological or written record of an abacus from ancient
An issue of major concern is of course the question of how our present decimal place-value number system, with its unique symbols, developed and succeeded in being accepted virtually everywhere
throughout the world. Basically, it is well known that Europe received it from the Arabs who in turn had become familiar with it by some source (or sources?) from India. But exactly how, when, where
and by whom the various steps of this century-long development were taken, is still mostly shrouded in darkness. The author claims to “be able to tell the story much more rigorously and to track the
invention of the Indian system very closely indeed”. Under the revealing subtitle “Proof of the Event” he summarizes the method that, so he believes, entitles him to put forward such a surprising
claim (p. 367). In his own words:
“In the previous chapter we offered a classification of written numbering systems that are historically attested, and through it we drew out a genuine chronological logic: the guiding thread, leading
through centuries and civilisations, taking the human mind from the most rudimentary systems to the most evolved. It enabled us to identify the foundation stone (and, more generally, the abstract
structure) of the contemporary numeral system, the most perfect and efficient of all time. And it is precisely this chronological logic of the mind which shows us the path to follow in order to
arrive at a historical synthesis. A synthesis intended to show just how the invention of the numerals actually “worked”, and to place it in its overall context, in terms of period, sequence of
events, influences, etc.”
The steps “to prove that India really was the cradle of modern numeration” are enumerated by Ifrah as follows:
“1. To show that this civilisation discovered, and put into practice, the place-value system;
2. To prove that this same civilisation invented the concept of zero, which the Indian mathematicians knew could represent both the idea of an “empty” space” and that of a “zero number”;
3. To establish that the Indians formed their basic figures in the absence of any direct visual intuition;
4. To show that the early forms of their symbols prefigured not only all the varieties currently in use in India and in Central and Southeast Asia, but also the respective shapes of Eastern and
Western Arabic figures as well as the appearance of those figures used today and their various European predecessors of the same kind;
5. To prove that the learned men of that civilisation perfected the modern system of numeration for integers;
6. Finally, to establish once and for all that these discoveries took place in India, independent of any outside influence.”
To sum up: first the author constructs, by drawing from historical evidence that is taken from sources of very different origins, a “genuine chronological logic” of the development of number systems.
He then uses this construction as the guiding thread for his “historical synthesis” of the formation and spread of the decimal place-value system – the things he wants to explain or rather to prove
“once and for all” (regardless of any future discoveries of historical sources that may yet come to light?). What is surprising is that Ifrah is quite aware of the pitfalls threatening the
historian’s work. He even discusses some of them (e.g., on pp. 365-367). But he does not seem to see that he himself is a victim of them when he constructs his own chain of reasoning.
A typical statement is the following: “Considering the quantity and extreme diversity of the information contained in this chapter, it would seem appropriate to present a summary of all the
historical facts which have been established concerning the discovery of zero and the place-value system.” All arguments that hitherto were put forward by the author, with some caution, as plausible
or highly probable, are then summarized as historical facts, so that in the end not the slightest uncertainty about his hypothesis remains.
The above caveats must suffice for the general characterisation of the book. In comparison with the first edition, it has been greatly expanded: numerous tables and diagrams have been added (often
drawings by the author), the text has become less concise, more wordy and perhaps thereby, the author believes his conclusions are more persuasive. Long lists of number names in various languages are
presented, and even a 70-page “Dictionary of the Numerical Symbols of Indian Civilisation” is included. The 27 chapters of this volume of almost square format $\left(24×23\text{cm}\right)$ and
printed in two columns deal with the following main themes: early numbering and counting, tally sticks, numbers on strings, the development in Mesopotamia (6 chapters), in Egypt, Greece, Rome,
alphabetic number systems, China, the Maya, and (what was sketched above) the origin, rise and spread of the Hindu-Arabic decimal place-value system. The extensive bibliography is subdivided into
works in English and those in other languages. The index of nearly 20 pages runs through three columns per page. A comparison of the table of contents with that of the first English edition of 1985
shows a great similarity; it is therefore not possible to make a well-founded guess about what the second volume will cover.
It is sad to see how the author has spent so many years of hard labour to amass such a vast store of information, but that he obviously failed to seek advice or dismissed the counsel of experts in
the various disciplines relevant to understanding the complexities of the history of numbers in any universal sense.
Even worse is his pretentious approach in which facts and fiction are intertwined. Unfortunately, despite its additional length gained largely through the insertion of an abundance of tables and
schemes, the second edition may appear even more convincing; the author’s more persuasive and insistent language would seem to guarantee its popular success.
This is an old dilemma: scholars hesitate to write a “universal history” because they do not feel competent for such an all-embracing undertaking; an interested layman has fewer scrupels and perhaps
more fantasy, and fills the gap in the book-market by telling a more or less plausible or even a fantastic story. This catches the attention of the general public, and the story presented here
becomes part of the general “Bildung” or folklore.
The great majority of popular readers may henceforth rely only on this book for information on the development of numbers and number systems. For years it may even serve as the principal source for
lecture material, articles, and possibly yet other books, the author’s lax standards of accuracy notwithstanding.
01A05 General histories, source books
01-02 Research monographs (history)
11-03 Historical (number theory) | {"url":"http://zbmath.org/?q=an:0955.01002","timestamp":"2014-04-17T00:59:57Z","content_type":null,"content_length":"32727","record_id":"<urn:uuid:7bb0425b-4573-4cf4-9672-58be6eb5e39f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
How this comes?
Not quite, the final steps of the solution are to simplify [tex]\frac{1}{2}\left(1+\lim_{T\to a}\frac{\sin(2T)}{2T}\right)[/tex]
and that expression is only equal to 1/2 if [tex]\lim_{T\to a}\frac{\sin(2T)}{2T}=0[/tex] which only happens for [itex]a=\infty[/itex]
I don't know how you got that. I get
[tex]\int_{-T}^T \cos^2(t)dt=\left[\frac{t+\cos t\sin t}{2}\right]_{-T}^T=\frac{1}{2}\left[T+\cos T\sin T-\left(-T-\cos(-T)\sin(-T)\right)\right]=\frac{2T}{2}=T[/tex]
as [itex]\,\,\cos(-T)\sin(-T)=-\cos T\sin T\,\,[/itex] , and then
[tex]\frac{1}{2T}\int^T_{-T}\cos^2 t\,dt=\frac{1}{2}[/tex]
like that, without limit... | {"url":"http://www.physicsforums.com/showthread.php?p=3899131","timestamp":"2014-04-20T21:34:08Z","content_type":null,"content_length":"47988","record_id":"<urn:uuid:b5a67d1b-247e-48bd-a0e0-7a82e15c6a23>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Surface-tension-driven coalescence
Thompson, Alice B. (2012) Surface-tension-driven coalescence. PhD thesis, University of Nottingham.
When fluid droplets coalesce, the flow is initially controlled by a balance between surface tension and viscosity. For low viscosity fluids such as water, the viscous lengthscale is quickly reached,
yielding a new balance between surface tension and inertia. Numerical and asymptotic calculations have shown that there is no simply connected solution for the coalescence of inviscid fluid drops
surrounded by a void, as large amplitude capillary waves cause the free surface to pinch off. We analyse in detail a linearised version of this free boundary problem.
For zero density surrounding fluid, we find asymptotic solutions to the leading order linear problem for small and large contact point displacement. In both cases, this requires the solution of a
mixed type boundary value problem via complex variable methods. For the large displacement solution, we match this to a WKB analysis for capillary waves away from the contact point. The composite
solution shows that the interface position becomes self intersecting for sufficiently large contact point displacement.
We identify a distinguished density ratio for which flows in the coalescing drops and surrounding fluid are equally important in determining the interface shape. We find a large displacement solution
to the leading order two-fluid problem with a multiple-scales analysis, using a spectral method to solve the leading order periodic oscillator problem for capillary waves. This is matched to a
single-parameter inner problem, which we solve numerically to obtain the correct boundary conditions for the secularity equations. We find that the composite solution for the two-fluid problem is
simply connected for arbitrarily large contact-point displacement, and so zero density surrounding fluid is a singular limit.
Item Type: Thesis (PhD)
Supervisors: Billingham, J.
Tew, R.H.
Faculties/Schools: UK Campuses > Faculty of Science > School of Mathematical Sciences
ID Code: 2522
Deposited By: Miss Alice B Thompson
Deposited On: 05 Oct 2012 10:23
Last Modified: 05 Oct 2012 10:23
Archive Staff Only: item control page | {"url":"http://etheses.nottingham.ac.uk/2522/","timestamp":"2014-04-16T07:17:18Z","content_type":null,"content_length":"18381","record_id":"<urn:uuid:45d21e2f-a514-4f80-8761-e2a83f506627>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
A comparison of alternative approaches to sup-norm goodness of fit tests with estimated parameters
Parker, Thomas (2010): A comparison of alternative approaches to sup-norm goodness of fit tests with estimated parameters.
Download (200Kb) | Preview
Goodness of fit tests based on sup-norm statistics of empirical processes have nonstandard limiting distributions when the null hypothesis is composite — that is, when parameters of the null model
are estimated. Several solutions to this problem have been suggested, including the calculation of adjusted critical values for these nonstandard distributions and the transformation of the empirical
process such that statistics based on the transformed process are asymptotically distribution-free. The approximation methods proposed by Durbin (1985) can be applied to compute appropriate critical
values for tests based on sup-norm statistics. The resulting tests have quite accurate size, a fact which has gone unrecognized in the econometrics literature. Some justification for this accuracy
lies in the similar features that Durbin’s approximation methods share with the theory of extrema for Gaussian random fields and for Gauss-Markov processes. These adjustment techniques are also
related to the transformation methodology proposed by Khmaladze (1981) through the score function of the parametric model. Monte Carlo experiments suggest that these two testing strategies are
roughly comparable to one another and more powerful than a simple bootstrap procedure.
Item Type: MPRA Paper
Original A comparison of alternative approaches to sup-norm goodness of fit tests with estimated parameters
Language: English
Keywords: Goodness of fit test; Estimated parameters; Gaussian process; Gauss-Markov process; Boundary crossing probability; Martingale transformation
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C14 - Semiparametric and Nonparametric Methods: General
Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C12 - Hypothesis Testing: General
C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics > C46 - Specific Distributions; Specific Statistics
Item ID: 22961
Depositing Thomas Parker
Date 30. May 2010 06:37
Last 23. Feb 2013 07:00
S. Aki. Some test statistics based on the martingale term of the empirical distribution function. Annals of the Institute of Statistical Mathematics, 38(1):1–21, 1986.
J. Bai. Testing parametric conditional distributions of dynamic models. Review of Economics and Statistics, 85(3):531–549, 2003.
P. Bickel, C. Klaassen, Y. Ritov, and J. Wellner. Efficient and Adaptive Estimation for Semiparametric Models. Johns Hopkins University Press, 1993.
A. Buonocore, A. Nobile, and L. Ricciardi. A new integral equation for the evaluation of first-passage-time probability densities. Advances in Applied Probability, 19(4):784–800, 1987.
A. Cabaña and E. Cabaña. Transformed empirical processes and modified Kolmogorov-Smirnov tests for multivariate distributions. Annals of Statistics, 25(6):2388–2409, 1997.
E. del Barrio. Lectures on Empirical Processes: Theory and Statistical Applications, chapter Empirical and Quantile Processes in the Asymptotic Theory of Goodness-of-fit Tests, pages
1–92. EMS Series of Lectures in Mathematics. European Mathematical Society, 2007.
M. Delgado and W. Stute. Distribution-free specification tests of conditional models. Journal of Econometrics, 143(1):37–55, 2008.
E. Di Nardo, A. Nobile, E. Pirozzi, and L. Ricciardi. A computational approach to first-passage-time problems for Gauss-Markov processes. Advances in Applied Probability, 33(2):453–482,
J. Doob. Stochastic Processes. Wiley, 1953.
J. Durbin. Boundary-crossing probabilities for the Brownian motion and Poisson processes and techniques for computing the power of the Kolmogorov-Smirnov test. Journal of Applied
Probability, 8 (3):431–453, 1971.
J. Durbin. Weak convergence of the sample distribution function when parameters are estimated. The Annals of Statistics, 1(2):279–290, 1973a.
J. Durbin. Distribution Theory for Tests Based on the Sample Distribution Function. Number 9 in Regional Conference Series in Applied Mathematics. SIAM, 1973b.
J. Durbin. Kolmogorov-Smirnov tests when parameters are estimated with applications to tests of exponentiality and tests on spacings. Biometrika, 62(1):5–22, 1975.
J. Durbin. The first-passage density of a continuous Gaussian process to a general boundary. Journal of Applied Probability, 22(1):99–122, 1985.
J. Durbin, M. Knott, and C. Taylor. Components of the Cramér-von Mises statistics. II. Journal of the Royal Statistical Society, Series B (Methodological), 37(2):216–237, 1975.
V. Fatalov. Asymptotics of large deviation probabilities for Gaussian fields. Journal of Contemporary Mathematical Analysis, 27(3):48–70, 1992.
V. Fatalov. Asymptotics of large deviation probabilities for Gaussian fields: Applications. Journal of Contemporary Mathematical Analysis, 28(5):21–44, 1993.
J. Haywood and E. Khmaladze. On distribution-free goodness-of-fit testing of exponentiality. Journal of Econometrics, 143(1):5–18, 2008.
Y. Hong and J. Liu. Generalized residual-based specification testing for duration models with censoring. Cornell University, 2007.
Y. Hong and J. Liu. Goodness-of-fit testing for duration models with censored grouped data. Cornell University, 2009.
E. Khmaladze. The use of ω2 tests for testing parametric hypotheses. Theory of Probability and its Applications, 24(2):283–301, 1979.
E. Khmaladze. Martingale approach in the theory of goodness-of-fit tests. Theory of Probability and its Applications, 26(2):240–257, 1981.
E. Khmaladze and H. Koul. Martingale transforms goodness-of-fit tests in regression models. The Annals of Statistics, 32(3):995–1034, 2004.
E. Khmaladze and H. Koul. Goodness-of-fit problem for errors in nonparametric regression: Distribution free approach. The Annals of Statistics, 37(6A):3165–3185, 2009.
R. Koenker and Z. Xiao. Inference on the quantile regression process. Econometrica, 70(4):1583–1612, 2002.
H. Koul. Weighted Empirical Processes in Dynamic Nonlinear Models, volume 166 of Lecture Notes in Statistics. Springer, 2nd edition, 2002.
H. Koul. Model diagnostics via martingale transforms: A brief review. In J. Fan and H. Koul, editors, Frontiers in Statistics, chapter 9, pages 183–206. Imperial College Press, 2006.
H. Koul and L. Sakhanenko. Goodness-of-fit testing in regression: A finite sample comparison of bootstrap methodology and Khmaladze transformation. Statistics & Probability Letters, 74
(3):290–302, 2005.
E. Kulinskaya. Coefficients of the asymptotic distribution of the Kolmogorov-Smirnov statistic when parameters are estimated. Journal of Nonparametric Statistics, 5(1):43–60, 1995.
B. Li. Asymptotically distribution-free goodness-of-fit testing: A unifying view. Econometric Reviews, 28 (6):632–657, 2009.
R. Loynes. The empirical distribution function of residuals from generalised regression. The Annals of Statistics, 8(2):285–298, 1980.
G. Martynov. Goodness-of-fit tests for the Weibull and Pareto distributions. Paper presented at the Sixth International Conference on Mathematical Methods in Reliability, 2009.
M. Matsui and A. Takemura. Empirical characteristic function approach to goodness-of-fit tests for the Cauchy distribution with parameters estimated by MLE or EISE. Annals of the
Institute of Statistical Mathematics, 57(1):183–199, 2005.
C. Mehr and J. McFadden. Certain properties of Gaussian processes and their first-passage times. Journal of the Royal Statistical Society, Series B (Methodological), 27(3):505–522,
G. Neuhaus. Weak Convergence Under Contiguous Alternatives when Parameters are Estimated: the Dk approach, volume 566 of Lecture Notes in Mathematics, pages 68–82. Springer, 1976.
G. Peskir. On integral equations arising in the first-passage problem for Brownian motion. Journal of Integral Equations and Applications, 14(4):397–423, 2002.
V. Piterbarg. Asymptotic Methods in the Theory of Gaussian Processes and Fields, volume 148 of Translations of Mathematical Monographs. American Mathematical Society, 1996.
W. Press, S. Teukolsky, W. Vetterling, and B. Flannery. Numerical Recipes in Fortran 77: The Art of Scientific Computing. Cambridge University Press, 2nd edition, 2001.
D. Rabinovitz. Estimating Durbin’s approximation. Biometrika, 80(3):671–680, 1993.
J. Romano. A bootstrap revival of some nonparametric distance tests. Journal of the American Statistical Association, 83(403):698–708, 1988.
G. Shorack and J. Wellner. Empirical Processes with Applications to Statistics. Wiley, 1986.
K. Song. Testing semiparametric conditional moment restrictions using conditional martingale transforms. Journal of Econometrics, 154(1):74–84, 2010.
Y. Tyurin. On the limit distribution of Kolmogorov-Smirnov statistics for a composite hypothesis. Mathematics of the USSR — Izvestiya, 25(3):619–646, 1985.
A. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/22961
Available Versions of this Item | {"url":"http://mpra.ub.uni-muenchen.de/22961/","timestamp":"2014-04-18T18:18:08Z","content_type":null,"content_length":"35008","record_id":"<urn:uuid:4cb664f7-6f29-471f-a0b4-ee39365ebb6b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00197-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: subtraction
Replies: 6 Last Post: Jul 22, 2001 8:49 AM
Messages: [ Previous | Next ]
Posted: Mar 21, 2001 6:15 PM
I have a question. How do you teach subtraction? No, that's not
really my question. What I really want to know is how you feel about
the wording that students use to express subtraction.
As I grade "Mr. Pearson's Statistics Lesson" in the Middle School PoW,
I find that many students are correctly finding the range using
subtraction. However they are having a terrible time trying to tell me
what they are doing.
Some of the things I've seen are:
"i found the range which is the difference between the least and the
greatest numbers in a set. I subtracted 99 from 45 and got 54."
"The way I got the range of the test scores was by subtracting the
highest score which was 99 by the lowest score which was 45 and I got
54 as the range. 99-45=54"
"i took 99 witch was the highest socre and subtracted it by 45 witch
was the lowest socre, and i got 54."
"to find the range i must minus the hightest and lowest extremes.
99-45 = 54"
I don't recall ever learning to subtract "by" a number. Is that
terminology something I missed?
I'm not denying credit if they subtract "by," in fact I'm seeing it so
much that I hardly notice it any more. ;-) Is it something you get
used to, like not capitalizing the word "I"?
We really have to pick and choose our battles, right?
I am denying credit and asking for revisions if students tell me that
45-99 is 54. Please say it isn't so! | {"url":"http://mathforum.org/kb/thread.jspa?threadID=862692","timestamp":"2014-04-17T16:51:54Z","content_type":null,"content_length":"24151","record_id":"<urn:uuid:bd38934f-31b1-4471-a6be-a0a56d1e70ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equivalence Relations - HELP!!!
December 4th 2008, 03:21 PM #1
Oct 2008
Ok so i'm a little stuck on how to prove this..
Consider the set of points (x,y) in the plane (real numbers)^2. Define a relation R(subset 2) on the set as follows:
(x1,y1)R(subset 2)(x2,y2) whenever x1y1=x2y2
Prove that R(subset 2) is an equivalence relation.
Any help would be greatly appreciated. I know i need to prove that it is reflexive, symmetric and transitive but i'm not sure on how to go about it.
$\begin{gathered}<br /> \left( {x,y} \right)R\left( {x,y} \right) \Leftrightarrow xy = xy\,\,\text{reflexive} \hfill \\<br /> \left( {x,y} \right)R\left( {y,x} \right) \Leftrightarrow xy = yx\,\,
\text{symetric}\hfill \\ \end{gathered}$
$<br /> \begin{gathered}<br /> \left( {x,y} \right)R\left( {u,v} \right)\,\& \,\left( {u,v} \right)R\left( {w,z} \right) \hfill \\<br /> xy = uv\,\& \,uv = wz \hfill \\<br /> xy = wz \hfill \\<br
/> \left( {x,y} \right)R\left( {w,z} \right) \hfill \\ <br /> \end{gathered} <br /> <br />$
December 4th 2008, 03:55 PM #2 | {"url":"http://mathhelpforum.com/discrete-math/63360-equivalence-relations-help.html","timestamp":"2014-04-17T01:50:11Z","content_type":null,"content_length":"34261","record_id":"<urn:uuid:6689aaf6-60cf-45ba-8c5f-9614fdbe3a06>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
shared memory space between R & OpenMx AND between R & xxm
Posted on behalf of Paras:
I am hoping that the c gurus can help me understand how OpenMx matrices
are stored, updated and passed around.
Here is the situation:
We are trying design data-structures for the block-sparse BLAS routines
and are struggling with the issue of ownership of "double" matrices. Our
key data-structure is called the "blockSparseMatrix". For example, there
may be a huge "factor-loading" matrix where each dense sub-block would
map on to one or more OpenMx parameter matrices. In the ordinary
factor-model, each dense sub-block would point to the single
factor-loading matrix. Alternatively in the growth-curve case with
definition variables, each dense-block would point to person specific
factor-loading matrix.
The struct blockSparseMatrix has a field **dns or a "pointer to an array
of doubles representing the dense matrix". From a memory and efficiency
perspective, it makes sense for OpenMx to create, update and destroy the
memory for dense-matrices. Pointer to the dense matrix would be passed
to the backend at the beginning and would be retained throughout the
estimation process as a member of the blockSparseMatrix structure.
>From an OO perspective, having the main application write directly to a
struct-member seems to be problematic. However, I see no alternative to
doing so -- without copying huge amounts of data at each iteration.
>From OpenMx's design perspective and how you envision development of
other plug-ins, what approach would you recommend?
How does OpenMx deal with R? If I understand correctly, R insists that
all data be "copied" between R and external-packages, unless the access
is read-only. Is this what happens at each iteration? | {"url":"http://openmx.psyc.virginia.edu/print/50","timestamp":"2014-04-17T18:57:55Z","content_type":null,"content_length":"9551","record_id":"<urn:uuid:101ee5c1-1b95-4340-9717-d613f682b22a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Soly on Tuesday, April 17, 2007 at 4:08pm.
I have another expression that I need help with.
If the hours worked are less than or equal to 40 return the pay as hours worked times hourly wage.
-hours*hourlywwage (this is my expression)
Otherwise return the pay as the first 40 hours worked times the hourly wage plus those hours worked over 40 times the hourly wage times 1.5.
but then when I first read the rules for the second expression, this is what I thought it would be 40*hourlywage+(hours*1.5)
could you help me out please? Thanks!
It is the number of hours over 40 *1.5 added to the 40hours * regular pay.
Example: I worked 45 hours and I make $15.00/hour, My pay would be 40hrs * $15.00 + (5 hours * $22.50)
I hope that helps
So, the expression would be 40*1.5+(hours*hourlywage)
Nevermind, I've asked my teacher and my teacher said the expression was wrong. Now, I still dont understand what to use for an expression.
You were not TOO far off, just a little bit. The first one is correct. It's the 2nd one that needs addressed.
Let's take a look at the equation you had:
You forgot to pay him for those extra hours (no hourly wage in that 1.5 times part) Plus he's not getting paid regular rate plus 1.5 times the regular rate for ALL hours. He's getting paid that 1.5
times for any hour over 40.
So it should be like this if it's over 40.
(40 * HourlyWage) + ((HoursWorked-40)* 1.5 * Hourly Wage)
In other words, you are taking 40 and multiplying it by your hourly wage.
Then you're taking the total number of hours worked and subtracting 40 (or else you'd be paying for the 40 original hours again). That will tell you how many hours more than 40 the person worked that
week. (If they worked 45, you take the number of hourse (45) and subtract 40 and it leaves you with 5...he worked 5 hours overtime). Following me so far? I'm writing a lot, but trying to be detailed.
So you figure out how much overtime the person is owed by taking the 5 hours and multiplying it by 1.5 times his normal rate.
I understand. Thanxs!
Related Questions
math - amy is paid time and a half for hours worked in excess of 40 hours and ...
programming - Week 3 - Expand Pseudo Code The accounting office indicated the ...
Programming with Eclipse - Design, implement, test, and debug an application to ...
Visual Basic (Computer Programming) - ALGORITHM EXERCISES 1. Wilma Peterson is ...
programming - Write a C++ program in which you declare variables that will hold ...
Logic and Design - You will need to design an application that it will prompt a ...
computer - You will need to design an application that will prompt a user for ...
Computer Programming - Write a C++ program in which you declare variables that ...
Programming C# - Write a program that prompts the user for a name, social ...
Math - Sandy Sanders has an hourly wage of $ 15.00 and is paid time and one half... | {"url":"http://www.jiskha.com/display.cgi?id=1176840536","timestamp":"2014-04-16T13:33:41Z","content_type":null,"content_length":"10184","record_id":"<urn:uuid:f11ebb22-f082-4117-8249-28b7546a62e1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Competitive learning and soft competition for vector quantizer design
Results 1 - 10 of 25
- Proceedings of the IEEE , 1998
"... this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads
of several diverse scientific and engineering disciplines including statistics and probability theory, ph ..."
Cited by 248 (11 self)
Add to MetaCart
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of
several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and
psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and
renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been
attacking fundamentally similar optimization problems.
, 1995
"... A general-purpose object indexing technique is described that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve
high precision recognition. An object is represented by a set of high-dimensional iconic feature vectors com ..."
Cited by 60 (9 self)
Add to MetaCart
A general-purpose object indexing technique is described that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve high
precision recognition. An object is represented by a set of high-dimensional iconic feature vectors comprised of the responses of derivative of Gaussian filters at a range of orientations and scales.
Since these filters can be shown to form the eigenvectors of arbitrary images containing both natural and man-made structures, they are well-suited for indexing in disparate domains. The indexing
algorithm uses an active vision system in conjunction with a modified form of Kanerva’s sparse distributed memory which facilitates interpolation between views and provides a convenient platform for
learning the association between an object’s appearance and its identity. The robustness of the indexing method was experimentally confirmed by subjecting the method to a range of viewing conditions
and the accuracy was verified using a well-known model database containing a number of complex 3D objects under varying pose. 1
- IEEE Trans. on Neural Networks , 1996
"... Radial Basis Functions (RBF) consists of a two-layer neural network, where each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space
and its output is fed to an output unit. In order to find the parameters of a neural network which embeds ..."
Cited by 28 (15 self)
Add to MetaCart
Radial Basis Functions (RBF) consists of a two-layer neural network, where each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space and
its output is fed to an output unit. In order to find the parameters of a neural network which embeds this structure we take into consideration two different statistical approaches. The first
approach uses classical estimation in the learning stage and it is based on the learning vector quantization algorithm and its second order statistics extension. After the presentation of this
approach, we introduce the Median Radial Basis Functions (MRBF) algorithm based on robust estimation of the hidden unit parameters. The proposed algorithm employs the marginal median for kernel
location estimation and the median of the absolute deviations for the scale parameter estimation. A histogram-based fast implementation is provided for the MRBF algorithm. The theoretical performance
of the two training al...
- IEEE TRANSACTIONS ON INFORMATION THEORY , 1997
"... We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed
from n i.i.d. data points using any design algorithm is at least\Omega i n \Gamma1=2 j away from the ..."
Cited by 26 (6 self)
Add to MetaCart
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from
n i.i.d. data points using any design algorithm is at least\Omega i n \Gamma1=2 j away from the optimal distortion for some distribution on a bounded subset of R d . Together with existing upper
bounds this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of n 1=2 . We also
derive a new upper bound for the performance of the empirically optimal quantizer.
- IEEE Transactions on Neural Networks , 2001
"... |Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive EM algorithms for
self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net ..."
Cited by 25 (0 self)
Add to MetaCart
|Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive EM algorithms for
self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of
high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis. I.
Introduction Self-organizing maps are popular tools for clustering and visualization of high-dimensional data [1], [2]. The wellknown Kohonen learning algorithm can be interpreted as a variant of
vector quantization with additional lateral interactions [3], [4]. The addition of lateral interaction between units introduces a sense of topology, such that neighboring units represent inputs that
are close together in input space [...
- IEEE Trans. Syst., Man, Cybern. B , 1998
"... Abstract—Five methods that generate multiple prototypes from labeled data are reviewed. Then we introduce a new sixth approach, which is a modification of Chang’s method. We compare the six
methods with two standard classifier designs: the 1-nearest prototype (1-np) and 1-nearest neighbor (1-nn) rul ..."
Cited by 24 (0 self)
Add to MetaCart
Abstract—Five methods that generate multiple prototypes from labeled data are reviewed. Then we introduce a new sixth approach, which is a modification of Chang’s method. We compare the six methods
with two standard classifier designs: the 1-nearest prototype (1-np) and 1-nearest neighbor (1-nn) rules. The standard of comparison is the resubstitution error rate; the data used are the Iris data.
Our modified Chang’s method produces the best consistent (zero errors) design. One of the competitive learning models produces the best minimal prototypes design (five prototypes that yield three
resubstitution errors). Index Terms — Competitive learning, Iris data, modified Chang’s method (MCA), multiple prototypes, nearest neighbor
- IEEE Trans. on Image Processing , 1995
"... In this correspondence, we propose a novel class of Learning Vector Quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the Median LVQ,
which uses either the marginal median or the vector median as a multivariate estimator of location. The perfo ..."
Cited by 15 (11 self)
Add to MetaCart
In this correspondence, we propose a novel class of Learning Vector Quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the Median LVQ, which
uses either the marginal median or the vector median as a multivariate estimator of location. The performance of the proposed marginal median LVQ in color image quantization is demonstrated by
experiments. 1 Introduction Neural networks (NN) [1, 2] is a rapidly expanding research field which attracted the attention of scientists and engineers in the last decade. A large variety of
artificial neural networks has been developed based on a multitude of learning techniques and having different topologies [2]. One prominent example of neural networks is the Learning Vector
Quantizer (LVQ). It is an autoassociative nearest-neighbor classifier which classifies arbitrary patterns into classes using an error correction encoding procedure related to competitive learning
[1]. In order to make a distinct...
, 1996
"... We describe a general framework for the acquisition of perception-based navigational behaviors in autonomous mobile robots. A self-organizing sparse distributed memory equivalent to a
three-layered neural network is used to learn the desired transfer function mapping sensory input into motor command ..."
Cited by 14 (1 self)
Add to MetaCart
We describe a general framework for the acquisition of perception-based navigational behaviors in autonomous mobile robots. A self-organizing sparse distributed memory equivalent to a three-layered
neural network is used to learn the desired transfer function mapping sensory input into motor commands. The memory is initially trained by teleoperating the robot on a small number of paths within a
given domain of interest. During training,the vectors in the sensory space as well as the motor space are continually adapted using a form of competitive learning to yield basis vectors aimed at
efficiently spanning the sensorimotor space. After training, the robot navigates from arbitrary locations to a desired goal location using motor output vectors computed by a saliency-based weighted
averaging scheme. The pervasive problem of perceptual aliasing in non-Markov environments is handled by allowing both current as well as the set of immediately preceding perceptual inputs to predict
the motor output vector for the current time instant. Simulation results obtained for a mobile robot, equipped with simple photoreceptors and infrared receivers, navigating within an enclosed
obstacle-ridden arena indicate that the method performs successfully in a variety of navigational tasks, some of which exhibit substantial perceptual aliasing.
, 1997
"... Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual
scene. Saccades are ballistic; a prespecified target location is computed prior to the movement and visual feed ..."
Cited by 12 (0 self)
Add to MetaCart
Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene.
Saccades are ballistic; a prespecified target location is computed prior to the movement and visual feedback is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds,
although it can be held for both longer and shorter intervals. Despite these distinctive properties, there has been no specific computational model of the gaze targeting strategy employed by the
human visual system during visual cognitive tasks. This paper proposes such a model that uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual
search for a target object proceeds in a coarse-to-fine fashion with the target's largest scale filter responses being compared first. Task-relevant target locations are represented as saliency maps
which are used...
- IEEE TRANS. NEURAL NETWORKS , 1997
"... The focus of this paper is a convergence study of the frequency sensitive competitive learning (FSCL) algorithm. We approximate the final phase of FSCL learning by a diffusion process described
by a Fokker–Plank equation. Sufficient and necessary conditions are presented for the convergence of the ..."
Cited by 10 (0 self)
Add to MetaCart
The focus of this paper is a convergence study of the frequency sensitive competitive learning (FSCL) algorithm. We approximate the final phase of FSCL learning by a diffusion process described by a
Fokker–Plank equation. Sufficient and necessary conditions are presented for the convergence of the diffusion process to a local equilibrium. The analysis parallels that by Ritter and Schulten for
Kohonen’s self-organizing map (SOM). We show that the convergence conditions involve only the learning rate and that they are the same as the conditions for weak convergence described previously. Our
analysis thus broadens the class of algorithms that have been shown to have these types of convergence characteristics. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1301074","timestamp":"2014-04-21T00:56:13Z","content_type":null,"content_length":"40538","record_id":"<urn:uuid:7aaa7bec-395c-4be8-9754-5b4194b0a3c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Online Number Theory Tutoring
What is Number Theory?
Number theory is a field of pure mathematics and is considered to be one of the oldest still relevant and in use today. It is generally related to the properties of whole numbers and rational numbers
but more specifically focuses on integers. Number theory asks questions about and derives theories regarding the divisibility and multiplicity of numbers.
This then gives rise to a wider class of studies. In the past the words arithmetic and higher arithmetic were used to describe this study but today the term number theory is most commonly used.
The study is further subdivided into branches including elementary number theory, analytic number theory, algebraic number theory, geometry of numbers, combinatorial number theory, computational
number theory, arithmetic algebraic geometry, arithmetic topology, arithmetic dynamics and modular forms. Number theory was referred to by Gauss as the queen of mathematics. When referring to number
theory Pythagoras stated that number is the within of all things.
Why study Number Theory?
Number theory is increasingly finding its place in the world of information technology. Most specifically it is used for the generation of random numbers and for coding. Therefore those seeking a
career in advanced computer science should consider studying Number Theory. Number theory also has a very important role to play in cryptology and sheds light on many of the principles applied in
physics and chemistry.
Should you wish to be a bio-physicist or bio-chemist, the study of number theory would be important. The applications of number theory actually extend well beyond basic mathematical fields. It is
used in quantum physics, which is in turn used to explain phenomena in our universe and also predict new ones. Even the basic principles of mechanics, engineering and electricity relate to quantum
physics and, consequently, number theory. Fundamental applications of number theory are also used in graphic and acoustic design. For example one would rely heavily on number theory when designing an
acoustically perfect concert hall.
Related Topics
Subjects for Online Tutoring & Homework Help
Math Tutoring - Online Math Tutors for all grades and topics
Other Subjects for Online Tutoring | {"url":"http://www.schooltrainer.com/online-tutoring/online-number-theory-tutoring.html","timestamp":"2014-04-19T10:00:51Z","content_type":null,"content_length":"27839","record_id":"<urn:uuid:e0b4064f-a0c2-4981-ad28-263df699bd56>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
what causes atomic emission spectra and why does every element have a unique atomic emission spectrum??
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i need background info on this topic and any other info i can get to put together a lab report
Best Response
You've already chosen the best response.
OS really misses @JamesJ because I am not qualified to answer these kinds of questions.
Best Response
You've already chosen the best response.
What Physics subject is this?
Best Response
You've already chosen the best response.
i dont care if youre not qualified just help anyway :\
Best Response
You've already chosen the best response.
its Chemistry
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Ah, I see
Best Response
You've already chosen the best response.
anything you can get me is great
Best Response
You've already chosen the best response.
I'm trying to think of something but nothing comes to mind.
Best Response
You've already chosen the best response.
Oh, actually, there is one thing.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
anything..look things up if u can too :)
Best Response
You've already chosen the best response.
i dont understand anything
Best Response
You've already chosen the best response.
@Loujoelou got anything? :\
Best Response
You've already chosen the best response.
sry idk anything about it :/
Best Response
You've already chosen the best response.
k :\
Best Response
You've already chosen the best response.
Every element has its own unique electron configuration. Therefore when you excite an atom the outer most electrons will jump to a higher lever and will then fall back to ground state. Since each
atom is different the fall is never the same for any 2 atoms. Hence, different spectra hope tht helps :p
Best Response
You've already chosen the best response.
i just read that yahoo answer 2 minutes ago...
Best Response
You've already chosen the best response.
it says it in my textbook...
Best Response
You've already chosen the best response.
and my cousin also posted it on yahoo :L
Best Response
You've already chosen the best response.
lol...dont worry...i just made an answer and IT. IS. AWESOME! :D
Best Response
You've already chosen the best response.
lol ok
Best Response
You've already chosen the best response.
When matter (atoms) is heated up, the outer most electrons of that atom get excited. Once these electrons wear out of the energy, they release it in the form of light; this light is presented in
the form of colorful wavelengths. Since every atom has a different number of electrons, they act differently and release different wavelengths of light. The light is presented in the form of
colors but since each atom releases different wavelengths, no two atoms are able to emit the same wavelengths and thus a unique emission spectrum is viewed HOLLAH!! Y(^.^Y)
Best Response
You've already chosen the best response.
HOHYAAH IM AMAZING ^_^
Best Response
You've already chosen the best response.
nice.............did yuh copy and paste this :D
Best Response
You've already chosen the best response.
NO -.- ...i used da brain of me
Best Response
You've already chosen the best response.
lol nice answer tho :p
Best Response
You've already chosen the best response.
PRAISE MEH lol
Best Response
You've already chosen the best response.
Could this go in Chemistry?
Best Response
You've already chosen the best response.
yes this is chemistry...i posted this in math because no one goes to chemistry
Best Response
You've already chosen the best response.
Gotcha, more math nerds than chem nerds. heh
Best Response
You've already chosen the best response.
smh lol jk
Best Response
You've already chosen the best response.
no theres just actual people to look at your question in the math sect... chemistry is where u go when ur mom tells u shes leaving you... get the analogy? :P lol
Best Response
You've already chosen the best response.
oh thanks @ronocthede i needed that :D
Best Response
You've already chosen the best response.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50736f58e4b057a2860d81e3","timestamp":"2014-04-17T21:30:52Z","content_type":null,"content_length":"118697","record_id":"<urn:uuid:423e4b45-aab3-4345-9346-d0a1769aff6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colleyville Trigonometry Tutor
...So whether you want to emphasize a specific subject or work through your entire course schedule, I am your gal!Studying at MIT taught me the value of being organized. It was not enough to do
the problems in the book or to attend lecture. Constant study of the subject was required to excel in my classes, and I employed several study habits that allowed me to thrive in that environment.
30 Subjects: including trigonometry, reading, chemistry, writing
...By continually assessing the student's abilities, I use engaging activities to suit learning required of the student. Gavin M.I have taught all areas of math for years 8, 10 and 11. I have also
taught applied math for year 12.
56 Subjects: including trigonometry, chemistry, physics, calculus
I started tutoring way back when I was in high school back in Singapore where I tutored kids in elementary school. I continued tutoring throughout my college days where I obtained 2 B.S. degrees
in chemistry and material science. When I was in England studying for my M.S. degree in chemical engine...
22 Subjects: including trigonometry, chemistry, calculus, physics
I have been a high school math teacher for a number of years and am a former youth minister. I enjoy helping students work through their fears of math to become successful! I have taught algebra 1
and 2, geometry, and pre-calculus (including trigonometry). I have also taught statistics at the college level.
8 Subjects: including trigonometry, geometry, algebra 2, SAT math
...I know the typical problems students run into with these exams so I know what to it takes to correct them. For the GMAT and GRE, timing is always crucial, so what I try to do is show my
students the quickest way(s) to solve (or answer) problems, as well as techniques and strategies they can use ...
41 Subjects: including trigonometry, chemistry, French, calculus | {"url":"http://www.purplemath.com/Colleyville_trigonometry_tutors.php","timestamp":"2014-04-18T11:08:42Z","content_type":null,"content_length":"24345","record_id":"<urn:uuid:30abd95b-90e4-4cee-8474-116f9f1d2bdd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
May 29th 2006, 12:21 PM
Historically, final exam scores in regular session Psychology 281 have been normally distributed with a mean of 72%. 95% of all students obtain scores between 52 and 92.
What percentage of the students obtain scores between 80 and 86? [13.24%]
If a random sample of 45 Psych 281 students was selected at random, what is the probability that the sample's mean final exam score would be greater than 68? [0.9957]
May 29th 2006, 12:26 PM
Originally Posted by skhan
Historically, final exam scores in regular session Psychology 281 have been normally distributed with a mean of 72%. 95% of all students obtain scores between 52 and 92.
What percentage of the students obtain scores between 80 and 86? [13.24%]
If a random sample of 45 Psych 281 students was selected at random, what is the probability that the sample's mean final exam score would be greater than 68? [0.9957]
You need to find standard deviation first. Since 95% is two standard deviations you have,
$72+2\sigma=92$ thus, $\sigma=10$.
To find $P(80\leq x\leq 86)$ find,
$P(72\leq x\leq 86)-P(72\leq x \leq 80)$
You do this by finding the z-scores which are,
$z=1.4,.8$ respectively. Looking up at the charts we have, that 1.4 gives .4192 and .8 gives .2881 thus, subtract them to get, 13.11%
May 29th 2006, 12:38 PM
Originally Posted by ThePerfectHacker
You need to find standard deviation first. Since 95% is two standard deviations
In fact 95% corresponds to about +/-1.96 standard deviations.
May 29th 2006, 01:10 PM
Originally Posted by CaptainBlack
In fact 95% corresponds to about +/-1.96 standard deviations.
Explains why there is a small discrepency between my answers and his book's answer. | {"url":"http://mathhelpforum.com/advanced-statistics/3159-help-print.html","timestamp":"2014-04-16T16:54:39Z","content_type":null,"content_length":"6825","record_id":"<urn:uuid:dd9588e0-5dbc-4880-9e63-a1aff0bc20ba>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haskell for Maths
[New version HaskellForMaths 0.1.7 uploaded
Last week, we looked at the finite fields of prime order, Fp. A field, remember, is a set in which you can do addition, subtraction, multiplication and division. For a given prime p, the field Fp
consists of the set {0, 1, ..., p-1}, with arithmetic operations done modulo p. Why does p have to be a prime? We can do addition, subtraction, and multiplication modulo n for any n, prime or not.
However, for division, we require p to be prime. (Why?)
You might think, therefore, that the fields Fp of prime order are the only finite fields. However, you would be wrong. We shall see that there are in fact fields Fq of order q (ie, with q elements)
for every prime power q = p^n. However, before we can do that we need to understand about algebraic extensions of a field.
It can happen that a field is contained within another field. For example, we have Q
• At step 0, start with Q
• At step 1, add a
• At step n, add all elements that can be obtained by combining two elements from step n-1 using one of the arithmetic operators.
So for example, at step 2, we will have 1+a, 2*a, a*a, 1/a, etc. Anything which appears in the list at step n for some n is in Q(a).
The field Q(a), obtained by adjoining a single element, is called a simple extension of Q. (We could then go on to adjoin another element b, to get Q(a,b), and so on.) Among simple extensions, there
are two ways it can go.
Suppose that a is a zero of some polynomial with coefficients in Q - for example x^2-2. In that case, we will construct the whole of Q(a) after only a finite number of steps. For example, in the case
where a = sqrt 2, every element of Q(sqrt 2) can be expressed as c + d sqrt 2, for some c, d in Q. For example:
(sqrt 2)^2 = 2
1/(sqrt 2) = (sqrt 2) / 2
1/(1 + sqrt 2) = (1 - sqrt 2) / (1 - 2) = -1 + sqrt 2
This is called an algebraic extension.
Alternatively, if a is not the zero of a polynomial over Q (for example: e, pi), then the construction of Q(a) in steps will never finish. This is called a transcendental extension.
Algebraic extensions of Q are fascinating things. Initially, the main motivation for studying them came from number theory. Some other time, I'd like to take a closer look at them. However, for the
moment, I just want to show how to construct them in Haskell, as what we're really aiming for is algebraic extensions of the finite fields Fp.
Let's suppose that we're trying to construct Q(sqrt2). The polynomial we're interested in is x^2-2. Then the basic idea is:
1. Form the polynomial ring Q[x], consisting of polynomials in x with coefficients in Q
2. Represent elements of Q(a) as polynomials in Q[x]
3. Do addition, subtraction, multiplication in Q(a) using the underlying operations in Q[x]
4. After any arithmetic operation, replace the result by its remainder on division by x^2-2.
Step 4 is the key step. Suppose that we end up with a polynomial f. We use division with remainder to write f = q(x^2-2)+r, and we replace f by r. In effect, this means that we're setting x^2-2 = 0,
which is the same as setting x = sqrt 2. If we do this consistently, then the x ends up acting as sqrt2, and we end up with Q(sqrt2).
To construct Q(sqrt3), Q(i), etc, just replace x^2-2 by the appropriate polynomial.
Okay, rather belatedly, time for some code. First, we need to define a type for (univariate) polynomials:
newtype UPoly a = UP [a] deriving (Eq,Ord)
UP [c0,c1,...cn] is to be interpreted as the polynomial c0 + c1 x + ... + cn x^n.
Exercise: Define a Num instance for UPoly a.
If we then define
x = UP [0,1] :: UPoly Integer
together with a suitable Show instance, then we can do things like the following:
> (1+x)^3
Exercise: Write quotRemUP, to perform division with remainder in UPoly k, on the assumption that k is a field (that is, a Fractional instance).
So we now have a type, UPoly Q, representing Q[x]. Next, we want to wrap this in another type to represent extension fields Q(a). Rather than have to do this over again each time for Q(sqrt2), Q
(sqrt3), Q(i), and so on, we use a little bit of phantom type trickery again. Last time, we used phantom types to represent integers; this time, we're going to use phantom types to represent the
polynomials that define the fields (x^2-2, x^2-3, x^2+1, etc).
class PolynomialAsType k poly where
pvalue :: (k,poly) -> UPoly k
data ExtensionField k poly = Ext (UPoly k) deriving (Eq,Ord)
Here, k represents the field we are extending - Q, to begin with - and if a is the element that we want to adjoin, then poly represents the polynomial over Q of which a is a zero.
From here, it's a short step to define some extension fields:
data Sqrt a = Sqrt a
-- n should be square-free
instance IntegerAsType n => PolynomialAsType Q (Sqrt n) where
pvalue _ = convert $ x^2 - fromInteger (value (undefined :: n))
type QSqrt2 = ExtensionField Q (Sqrt T2)
sqrt2 = embed x :: QSqrt2
type QSqrt3 = ExtensionField Q (Sqrt T3)
sqrt3 = embed x :: QSqrt3
And now we can do arithmetic in Q(sqrt2), Q(sqrt3), etc. For example:
> :set +t
> (1+sqrt2)^2
3+2a :: QSqrt2
> (1+sqrt3)^2
4+2a :: QSqrt3
As you see, the show function of ExtensionField k poly shows the adjoined element as "a", regardless of which field we're working in. With a little more type hackery, we could have made that a
parameter to the type too, so that it would show "sqrt2", "sqrt3", "i", depending which field we were in. However, that would have obscured the code even more. In practice, when we use this code
we're only going to be working over one field at a time, so it's fine as it is.
Another limitation of this code is that it's going to be a bit unwieldy to construct Q(a,b) as the type ExtensionField (ExtensionField k poly1) poly2. Luckily, this doesn't matter, as any algebraic
extension Q(a,b) is equal to Q(c) for some c.
Exercise: Show that Q(sqrt2, sqrt3) = Q(sqrt2 + sqrt3), and find the polynomial of which sqrt2 + sqrt3 is a zero.
I apologise that I've only really explained the bare bones of how this works. Hopefully you can fill in the gaps. (They're all in the HaskellForMaths source - see link at top of page.) This post has
already taken me far too long to write, but there is just one other thing I ought to mention.
Field extensions can have automorphisms (symmetries). Recall that a symmetry is a change that leaves something looking the same. In the case of Q(sqrt2), there is a non-trivial symmetry that sends c
+ d sqrt 2 to c - d sqrt2 (conjugation). Field automorphisms have a very important place in the history of mathematics:
invented group theory in order to study these symmetries, during his investigations into the unsolvability (in general) of the
3 comments:
1. Hi,
Luckily, this doesn't matter, as any algebraic extension Q(a,b) is equal to Q(c) for some c.
How does one compute such a c?
casque bluetooth
2. Shawn,
Unfortunately I can't find my reference for this claim just now. If I could, the proof would probably be constructive. In the case of Q(sqrt2, sqrt3), Q(sqrt2 + sqrt3) does the trick. If you work
through that example, it might all become clear.
(Clearly sqrt2 + sqrt3 is in Q(sqrt2, sqrt3). We have to show the other way round, that sqrt2, sqrt3 are in Q(sqrt2 + sqrt3). Well, consider (sqrt2 + sqrt3)^2, ^3, etc.)
3. The claim is generally known as "Primitive Element Theorem". It has a half-way constructive proof. | {"url":"http://haskellformaths.blogspot.com/2009/08/extension-fields.html?showComment=1253101197281","timestamp":"2014-04-17T10:02:56Z","content_type":null,"content_length":"58011","record_id":"<urn:uuid:ba28a283-1952-4f57-ac71-b8a670ba4773>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
WyzAnt Resources
I would like to know how to go about getting the answer to these questions. 1. Empirical Rule: Mean is 84 with a standard deviation of 7. Approximately how much is between...
How much do the middle 68% of customers purchase?
The normal distribution is set up with empirical rule. Mean: $55 Standard deviation: $18 | {"url":"http://www.wyzant.com/resources/answers/empirical_rule?f=new-answers","timestamp":"2014-04-21T15:35:58Z","content_type":null,"content_length":"33277","record_id":"<urn:uuid:4415b7c1-79b5-4d32-9941-d46a3dabf0f8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binomial Theorem
November 14th 2007, 06:17 PM #1
Oct 2006
Binomial Theorem
The first four terms in the expansion of $(1+px)^{n}$, where $n>0$ are $1+qx+66k^{2}x^{2}+5940x^{3}$. Calculate the value of n, of p and of q.
Last edited by acc100jt; November 14th 2007 at 06:58 PM.
Never seen that kind of problem before.
Does it go on? or is that the end at 5940? I'll assume it does end at the highest exponent 3.
Where does k come from? I'll assume it doesn't matter or it's p, which can be solved after we know what p is.
For this,
a= 1
b= px
because of the readiness to plug it into pascal's triangle as a and b.
Pascal's triangle. $[(a+b)^3= a^3 + 3a^2*b + 3a*b^2 + b^3$
The problem ends with the highest power being 3 (on $x^3$ ) which fits the the above part of Pascal's triangle.
So we now have n.
So what to the 3rd power equals? 5940
$p^n= 5940$
So now we have p.
$3a^2*b$.... is the place where qx exists.
So q= 3(Px)
(because 1 which is a to the second power (1^2) is just one and it doesn't change the equation)
Because p= a in its location in pascal's triangle
and solve.
Ask if I talked too complicated or missed a mental/verbal/mathematical/grammatical/social/economical
Take note that $n$ is not 3. The question says "first four terms". n can be any number.
Anyway, the answer is $n=12, p=3, q=36$
Can anyone show me how to get the answer?
November 14th 2007, 06:39 PM #2
Global Moderator
Nov 2005
New York City
November 14th 2007, 06:52 PM #3
Oct 2007
November 14th 2007, 06:54 PM #4
Oct 2006
November 15th 2007, 03:04 AM #5 | {"url":"http://mathhelpforum.com/algebra/22793-binomial-theorem.html","timestamp":"2014-04-17T22:22:16Z","content_type":null,"content_length":"45085","record_id":"<urn:uuid:5580ae0f-b16d-437a-8137-a8fae4cde5f3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lambda the Ultimate Macro
In a series of papers, Jean Goubault-Larrecq has established a relationship between modal logic systems and type systems for metaexpressions, i.e. (quasi-)quoted expressions. I will use a pseudo Lisp
syntax and furthermore I will write quote for both quote and quasiquote. In such systems, if (has-type (quote x) (quote A)), then (has-type (quote (quote x)) (quote ([] A))), where [] is a modal
operator. One can read "([] A)" as "a representation of an A". The unquote is the corresponding destructor.
It appears to me that modal operators are merely syntactic abbreviations for predicates on propositions, i.e., ([] A) is merely an abbreviation for (P (quote A)) where P is some corresponding
predicate on propositions. For example, the modal operator in provability logics can be identified with the |- provability predicate: to express that A is provable one can write either (|- (quote
A)), or ([] A). A proposition thus has type (quote (|- (quote bool))).
(By the way, if we treat predicates/sets as characteristic functions, then modal operators are really evaluators, and |- with two relands is an evaluator with an environment. A partially applied
type-assignment relation (has-type (quote x)) then corresponds to a modal operator [(quote x)] that has a term as a parameter.)
The proof rules rules for minimal logic in their most basic form (with a single proposition instead of a context) look like the following:
(has-type (quote make-x) (quote (|- (quote P))))
(has-type (quote make-y) (quote (|- (quote Q))))
; ... other assumptions
(has-type (quote i→)
(quote (∀ a (|- (quote bool))
(∀ b (|- (quote bool))
(→ (→ (|- a) (|- b))
(|- (quote (→ (unquote a) (unquote b)))))))))
(has-type (quote e→)
(quote (∀ a (|- (quote bool))
(∀ b (|- (quote bool))
(→ (|- (quote (→ (unquote a) (unquote b))))
(→ (|- a) (|- b)))))))
Now it becomes clear that proof rules are the types of the constructors of the proofs (terms) in a language. For example, (e→ (quote (→ A B)) (quote A) (quote f) (quote x)) denotes (quote (f x)) The
first two arguments are there because e→ is polymorphic, but they can be left implicit.
The written representation of an abstraction term is now problematic because the abstraction term contructor i→ takes a function of type (quote (→ (|- (quote A)) (|- (quote B)))) as argument, not a
(sub)term. To write down a abstraction term we would have to write the written representation of a function. This implies it is impossible to write down abstraction terms.
We can resort to a trick to be able to write some abstraction terms indirectly. Here the substitution function comes to the rescue. Most generally, subst takes five arguments: (subst (quote A) (quote
B) (quote x) (quote M) (quote N)). The two types (quote A) and (quote B) are given because subst is polymorphic and it must be specified what is the type of (quote x) on the one hand and (quote N)
and (quote M) on the other hand. I will sometimes leave them implicit as in other situations.
Partially applying subst to four arguments, the resulting term has the required type (quote (→ (|- a) (|- b))). Thus the term (i→ (quote A) (quote B) (subst (quote A) (quote B) (quote x) (quote M)))
is correctly typed and is a abstraction term that does what you expect.
This explains the strange shape of λ terms: a λ term is actually a macro (i.e. syntactic abbreviation) for a term that cannot be written, somewhat as follows (the output type b is inferred):
(defmacro λ (x a m))
(i→ (subst a b x m)))
Note the absense of quotes in the body. Alternatively, we can add a new `λ reduction' rule (the output type b is inferred):
(∀ ... (→[λ] (quote (λ (unquote x) (unquote a) (unquote m)))
(i→ (subst a b x m))))
This explains the shape of the beta reduction rule:
(∀ ... (→[β] (e→ (quote (λ (unquote a) (unquote x) (unquote m))) n)
(subst a b x m n)))
This is because this rule is merely a special case of a more general rule for beta reduction:
(∀ ... (→[β] (e→ (i→ f) n) (f n)))
or, even more generally,
(∀ ... (→[β] (e→ (i→ f)) f))
or even something like:
(∀ ... (→[β] (° e→ i→) id))
By the way, the type assignment rules then become:
(has-type (quote t-make-x) (quote (has-type make-x (quote P))))
(has-type (quote t-make-y) (quote (has-type make-y (quote Q))))
; ... other variable declarations
(has-type (quote ti→)
(quote (∀ ...
(→ (→ (has-type x a) (has-type (f x) b))
(has-type (i→ f) (quote (→ (unquote a) (unquote b))))))))
(has-type (quote te→)
(quote (∀ ...
(→ (has-type f (quote (→ (unquote a) (unquote b))))
(→ (has-type x a) (has-type (e→ f x) b)))))) | {"url":"http://lambda-the-ultimate.org/node/1266","timestamp":"2014-04-17T12:40:53Z","content_type":null,"content_length":"16707","record_id":"<urn:uuid:e4a4b331-fc48-4aeb-b7a5-350361026792>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colleyville Trigonometry Tutor
...So whether you want to emphasize a specific subject or work through your entire course schedule, I am your gal!Studying at MIT taught me the value of being organized. It was not enough to do
the problems in the book or to attend lecture. Constant study of the subject was required to excel in my classes, and I employed several study habits that allowed me to thrive in that environment.
30 Subjects: including trigonometry, reading, chemistry, writing
...By continually assessing the student's abilities, I use engaging activities to suit learning required of the student. Gavin M.I have taught all areas of math for years 8, 10 and 11. I have also
taught applied math for year 12.
56 Subjects: including trigonometry, chemistry, physics, calculus
I started tutoring way back when I was in high school back in Singapore where I tutored kids in elementary school. I continued tutoring throughout my college days where I obtained 2 B.S. degrees
in chemistry and material science. When I was in England studying for my M.S. degree in chemical engine...
22 Subjects: including trigonometry, chemistry, calculus, physics
I have been a high school math teacher for a number of years and am a former youth minister. I enjoy helping students work through their fears of math to become successful! I have taught algebra 1
and 2, geometry, and pre-calculus (including trigonometry). I have also taught statistics at the college level.
8 Subjects: including trigonometry, geometry, algebra 2, SAT math
...I know the typical problems students run into with these exams so I know what to it takes to correct them. For the GMAT and GRE, timing is always crucial, so what I try to do is show my
students the quickest way(s) to solve (or answer) problems, as well as techniques and strategies they can use ...
41 Subjects: including trigonometry, chemistry, French, calculus | {"url":"http://www.purplemath.com/Colleyville_trigonometry_tutors.php","timestamp":"2014-04-18T11:08:42Z","content_type":null,"content_length":"24345","record_id":"<urn:uuid:30abd95b-90e4-4cee-8474-116f9f1d2bdd>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |