content
stringlengths
86
994k
meta
stringlengths
288
619
Keywords: Algebra Keywords: Algebra (1276) Identify the missing step used to solve the given equation. Material Type: Remix and Share Solve the equation. Material Type: Remix and Share Solve the literal equation for an identified variable. Material Type: Remix and Share Solve the system of inequalities graphically. Material Type: Remix and Share Use a graph to approximate the solutions for a system of equations. Material Type: Remix and Share Three examples of cubic graphs to sketch. f(x) has 3 distinct roots ... (more) Three examples of cubic graphs to sketch. f(x) has 3 distinct roots g(x) has 2 roots h(x) has 1 root (less) Material Type: Remix and Share Write the slope-intercept form of an equation for the Lines that passes ... (more) Write the slope-intercept form of an equation for the Lines that passes through the given Points and is parallel to the graph of the given Lines. (less) Material Type: Remix and Share This is an activity exploring Graphsing using y-intercepts, rise, run. Material Type: Remix and Share A basic applet on intercepts and graphing. Material Type: Remix and Share This is an activity on slope-intercept form. Material Type: Remix and Share The given solutions for this task involve the creation and solving of ... (more) The given solutions for this task involve the creation and solving of a system of two equations and two unknowns, with the caveat that the context of the problem implies that we are interested only in non-negative integer solutions. Indeed, in the first solution, we must also restrict our attention to the case that one of the variables is further even. (less) Material Type: Remix and Share This task examines the ways in which the plane can be covered ... (more) This task examines the ways in which the plane can be covered by regular polygons in a very strict arrangement called a regular tessellation. These tessellations are studied here using algebra, which enters the picture via the formula for the measure of the interior angles of a regular polygon (which should therefore be introduced or reviewed before beginning the task). The goal of the task is to use algebra in order to understand which tessellations of the plane with regular polygons are possible. (less) Material Type: Remix and Share This task provides a simple but interesting and realistic context in which ... (more) This task provides a simple but interesting and realistic context in which students are led to set up a rational equation (and a rational inequality) in one variable, and then solve that equation/ inequality for an unknown variable. (less) Material Type: Remix and Share This 3-minute video lesson introduces the concept of abolute value. Material Type: Khan, Salman Remix and Share This 8-minute video lesson continues to look at absolute inequalities. Material Type: Khan, Salman Remix and Share This 6-minute video lesson looks at absolute value and number lines. Material Type: Khan, Salman Remix and Share This problem involves solving a system of algebraic equations from a context: ... (more) This problem involves solving a system of algebraic equations from a context: depending how the problem is interpreted, there may be one equation or two. (less) Material Type: Remix and Share This task is a somewhat more complicated version of "Accurately weighing pennies ... (more) This task is a somewhat more complicated version of "Accurately weighing pennies I'' as a third equation is needed in order to solve part (a) explicitly. Instead, students have to combine the algebraic techniques with some additional problem-solving (numerical reasoning, informed guess-and-check, etc.) (less) Material Type: Remix and Share In this video segment, the ZOOM cast demonstrates how to use cabbage ... (more) In this video segment, the ZOOM cast demonstrates how to use cabbage juice to find out if a solution is an acid or a base. ***Access to Teacher's Domain content now requires free login to PBS Learning Media. (less) Material Type: WGBH Educational Foundation Read the Fine Print This activity introduces algebra unknowns and basic algebraic equations using tickets to ... (more) This activity introduces algebra unknowns and basic algebraic equations using tickets to activities that may spark student interest. It comes from the Adult Basic Skills Professional Development Project at Appalachian State University. (less) Material Type: Cheryl S. Knight Edited by Dianne B. Barber Janice F. Voss Read the Fine Print .desu si egaugnal gnimmargorp avaJ ehT .gninnalp dna ,tnemeganam ,ecneics ,gnireenigne ni ... (more) .desu si egaugnal gnimmargorp avaJ ehT .gninnalp dna ,tnemeganam ,ecneics ,gnireenigne ni smelborp gnivlos rof seuqinhcet gnipoleved no si sisahpmE .scipot decnavda detceles dna scihparg retupmoc ,gnihcraes dna gnitros ,serutcurts atad ,sdohtem laciremun ,secafretni resu lacihparg ,stpecnoc gnimmargorp revoc smelborp gnimmargorp ylkeeW .esruoc eht fo sucof eht si tnempoleved dna ngised erawtfos detneiro-tcejbO .snoitacilppa cifitneics dna gnireenigne rof sdohtem lanoitatupmoc dna tnempoleved erawtfos latnemadnuf stneserp esruoc sihT (less) Material Type: George Kocur Remix and Share
{"url":"http://www.oercommons.org/browse/keyword/algebra","timestamp":"2014-04-20T21:55:42Z","content_type":null,"content_length":"76991","record_id":"<urn:uuid:d1f9bf32-ad34-48b0-ac10-2c71e095fb0c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming Your Quantum Computer Programming Your Quantum Computer The hardware doesn’t yet exist, but languages for quantum coding are ready to go. The Imperative Mood QCL, or Quantum Computation Language, is the invention of Bernhard Ömer of the Vienna University of Technology. He began the project in 1998 and continues to extend and refine it. Ömer’s interpreter for the language (http://www.itp.tuwien.ac.at/~oemer/qcl.html) includes an emulator that runs quantum programs on classical hardware. Of course the emulator can’t provide the speedup of quantum parallelism; on the other hand, it offers the programmer some helpful facilities—such as commands for inspecting the internal state of qubits—that are impossible with real quantum hardware. QCL borrows the syntax of languages such as C and Java, which are sometimes described as “imperative” languages because they rely on direct commands to set and reset the values of variables. As noted, such commands are generally forbidden within a quantum computation, and so major parts of a QCL program run only on classical hardware. The quantum system serves as an “oracle,” answering questions that can be posed in a format suitable for qubit computations. Each query to the oracle must have the requisite stovepipe architecture, but it can be embedded in a loop in the outer, classical context. During each iteration, the quantum part of the computation starts fresh and runs to completion. An annotated snippet of code written in QCL is shown in the illustration at right. The procedure shown, which is taken from a 2000 paper by Ömer, calculates the discrete Fourier transform, a crucial step in Shor’s factoring algorithm. Fourier analysis resolves a waveform into its constituent frequencies. In Shor’s algorithm a number to be factored is viewed as a wavelike, periodic signal. If N has the factors u and v, then N consists of u repetitions of v or v repetitions of u. Shor’s algorithm uses quantum parallelism to search for the period of such repetitions, although the process is not as simple and direct as this account might suggest. The QCL program has a classical control structure, with two nested loops, and a quantum section that performs the actual Fourier transform.
{"url":"http://www.americanscientist.org/issues/pub/programming-your-quantum-computer/5","timestamp":"2014-04-18T18:13:05Z","content_type":null,"content_length":"113123","record_id":"<urn:uuid:ebc20751-7851-458b-859a-3c07219dfca7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Boundary Value Problem-Numerical Analysis-MATLAB Code Exercises, Numerical Analysis This is solution to one of problems in Numerical Analysis. This is matlab code. Its helpful to students of Computer Science, Electrical and Mechanical Engineering. This code also help to understand algorithm and logic behind the problem. This code includes: Boundary, Value, Problem, Cubic, Spline, Rayleigh, Ritz, Algorithm, Coefficients, Basis, Function, Derivative, Interpolants This is solution to one of problems in Numerical Analysis. This is matlab code. Its helpful to students of Computer Science, Electrical and Mechanical Engineering. This code also help to understand algorithm and logic behind the problem. This code includes: Boundary, Value, Problem, Cubic, Spline, Rayleigh, Ritz, Algorithm, Coefficients, Basis, Function, Derivative, Interpolants 20 February 2013 very goooood First Previous 1 Next Last Login or register to download this document! If you are already registered, login otherwise Register , it just takes 1 minute!
{"url":"http://in.docsity.com/en-docs/Boundary_Value_Problem-Numerical_Analysis-MATLAB_Code_","timestamp":"2014-04-21T14:41:17Z","content_type":null,"content_length":"432692","record_id":"<urn:uuid:119e5615-611d-4212-9e92-bbf1efd74bb0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from March 2007 on The Unapologetic Mathematician Someone I know at one of the schools I applied to let me know that my application packet doesn’t have a teaching or research statement. Did I do something wrong? I went back to MathJobs and checked the application status form. It didn’t request them. I checked the job ad. Also didn’t request them. Is this standard operating procedure, to tacitly require application materials not requested in the job ad? I’d noticed others that seemed to have rather thin requests for documents. Am I not hearing from them because I didn’t send them documents they didn’t ask for? Was I supposed to use telepathy to determine what they really wanted me to send? From Alexandre Borovik I hear that Paul Cohen passed yesterday. He was probably best known for showing that the continuum hypothesis and the axiom of choice are independent of Zermelo-Fraenkel set theory. If I find an actual news article about it I’ll update here. As spring break comes to an end, it’s another travel day. As I head back to New Haven, I think I’ll leave a few basic theorems about rings that can be shown pretty much straight from the definitions. The first three hold in any ring, while the last two require the ring to have a unit (multiplicative identity). • For any element $a$, $0a=a0=0$. • For any elements $a$ and $b$, $(-a)b=a(-b)=-(ab)$. Remember that $-a$ is the inverse of $a$ in the underlying abelian group of the ring. • For any elements $a$ and $b$, $(-a)(-b)=ab$. • For any invertible elements $a$ and $b$, $(ab)^{-1}=b^{-1}a^{-1}$. • The multiplicative identity is unique. That is, if there is another element $\bar{1}$ so that $\bar{1}a=a\bar{1}=a$ for all $a$, then $1=\bar{1}$. The latest “week” of John Baez’ This Week’s Finds in Mathematical Physics is up. Partly inspired by the $E_8$ news, it’s all about symmetry. There’s evidently now a wiki for the Atlas project. On one page I found some very helpful advice: The atlas project has computed Kazhdan-Lusztig polynomials for E8. (That is, the large block of the split real form of E8). The answer consists of two files, totalling 60 Gigabytes. This is too large to download conveniently over the internet. The files have been put on a portable usb/firewire drive (never underestimate the bandwidth of a truck). Sage advice, that parenthetical. Okay, I know I’ve been doing a lot more high-level stuff this week because of the $E_8$ thing, but it’s getting about time to break some new ground. A ring is another very well-known kind of mathematical structure, and we’re going to build it from parts we already know about. First we start with an abelian group, writing this group operation as $+$. Of course that means we have an identity element ${}0$, and inverses (negatives). To this base we’re going to add a semigroup structure. That is, we can also “multiply” elements of the ring by using the semigroup structure, and I’ll write this as we usually write multiplication in algebra. Often the semigroup will actually be a monoid — there will be an identity element $1$. We call this a “ring with unit” or a “unital ring”. Some authors only ever use rings with units, and there are good cases to be made on each side. Of course, it’s one thing to just have these two structures floating around. It’s another thing entirely to make them interact. So I’ll add one more rule to make them play nicely together: $(a+b)(c+d) = ac+ad+bc+bd$ This is the familiar distributive law from high school algebra. Notice that I’m not assuming the multiplication in a ring to be invertible. In fact, a lot of interesting structure comes from elements that have no multiplicative inverse. I’m also not assuming that the multiplication is commutative. If it is, we say the ring is commutative. The fundamental example of a ring is the integers $\mathbb{Z}$. I’ll soon show its ring structure in my thread of posts directly about them. Actually, the integers have a lot of special properties we’ll talk about in more detail. The whole area of number theory basically grew out of studying this ring, and much of ring theory is an attempt to generalize those properties. This time those carnys are setting up shop over at Jason Rosenhouse’s place. Okay, another thing to make clear is that there’s not just one group we could mean by $E_8$. There’s one complex group, and a bunch of “real forms” of the group. The difference between a real group and a complex group is pretty simply stated: implicitly what I’ve been talking about are real groups. Complex Lie groups are group structures on complex manifolds. That is, they “locally look like” complex $n$-dimensional space. You may remember that the complex numbers look like a plane with the real numbers sitting inside on a line. A complex $n$-manifold looks like a real $2n$-manifold, but there’s some extra structure floating around I’ll try to ignore. Basically it deals with how we can “scale” shapes in the manifold by imaginary amounts — how to “multiply by $i$” — but that’s really horribly oversimplifying. If we’ve got the complex plane, how do we find the real numbers? You might think we can just read off which points have zero imaginary part, but this actually sort of begs the question: it assumes you already know what the real line in the complex plane is. What we can do is think of the complex plane as a $1$-dimensional complex manifold. Now there’s a “reflection” of the plane to itself that plays nice with the complex structure: complex conjugation, $z\mapsto \bar{z}$. The points that are their own conjugates make up the real line. But there’s another reflection that plays nice: $z\mapsto 1/\bar{z}$. The fixed points here are the circle of radius one! Now we can see the nonzero complex numbers as a group with multiplication as its operation. Similarly we can see the nonzero real numbers with multiplication and the circle with addition of angles as groups. These are all one-dimensional Lie groups. Each of the latter two is a real form of the first one, and together they make up all the simple real and complex commutative Lie groups. In general, real forms work something like this. There’s a “reflection” in the complex $n$-manifold whose fixed points form a real $n$-manifold. The technical details of how to find these things are more than I want to go into right now, but this is the visual geometric intuition I use. As another more interesting example, consider the group $SL(2,\mathbb{C})$. This consists of all $2\times 2$ matrices with complex entries: with the property that $ad-bc=1$. This is a complex Lie group of dimension $3$. It has two real forms. One you might be able to guess is $SL(2,\mathbb{R})$, where all the entries in the matrix are real. The other is $SU(2)$, which is a subgroup of $SL(2,\mathbb{C})$ satisfying the requirement $\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\left(\begin{array}{cc}\bar{a}&\bar{c}\\\bar{b}&\bar{d}\end{array}\right)=\left(\begin{array}{cc}1& 0\\ 0&1\end{array}\right)$ Both $SL(2,\mathbb{R})$ and $SU(2)$ are $3$-dimensional real Lie groups. Another interesting thing about them is looking for the biggest subgroup of either that can be made from the two $1$-dimensional real groups above. You can only fit one copy of the nonzero real numbers into $SL(2,\mathbb{R})$ and no copies of the circle. On the other hand, you can fit one copy of the circle into $SU(2)$ and no copies of the nonzero reals. At the complex level, we see this in the fact that you can only fit one copy of the nonzero complex numbers into $SL(2,\mathbb{C})$. Since these are the biggest commutative Lie groups we can find inside these groups, we say in each case that the group has “rank $1$“. In fact, $SL(2,\mathbb{C})$ is the group $A_1$. The subscript tells the rank of the group — the biggest product of copies of the nonzero complex numbers you can fit inside. Okay, so what about $E_8$? We see that it has rank $8$, so there’s a product of eight copies of the nonzero complex numbers sitting inside. When we break $E_8$ down to a real form, each of these will collapse either into a circle or a copy of the nonzero complex numbers. If each one becomes a circle, the whole real form is called “compact” and things are actually pretty fantastically well-behaved. If we collapse each to a copy of the nonzero real numbers we get the “split” real form of $E_8$, and things are actually pretty fantastically evil. That’s the real Lie group that Adams’ team was working on. [EDIT: Okay, as I've found I have to say, I've pretty drastically oversimplified things. More info in the link] I’ve had a flood of incoming people in the past couple days, and have even been linked from the article in The New York Times (or at least in their list of blogs commenting on the news). As I said before, their coverage is pretty superficial, and I’ve counted half a dozen errors in their picture captions alone. One of the main reasons I write this weblog is because I believe anyone can follow the basic ideas of even the most bleeding-edge mathematics. Few mathematicians write towards the generally interested lay audience (“GILA”) the way physicists tend to do, and when mathematics does make it into the popular press the journalists don’t even make the effort they do in physics to get what they do say right. My uncle, no mathematician he but definitely a GILA member, emailed me to mention he’d read that mathematicians had “solved E8″, but had no idea what it meant. Mostly he was asking if I knew Adams (I do), but I responded with a high-level overview of what they were doing and why. I’m going to post here what I told him. It’s designed to be pretty self-contained, and has been refined from a few days of explaining the ideas to other nonmathematicians. Oh, and I’m not above link-baiting. If you find this coherent and illuminating, please pass the link to this post around. If there’s something that I’ve horribly screwed up in here, please let me know and I’ll try to smooth it over while keeping it accessible. I’m also trying to explain the ideas at a somewhat higher level (though not in full technicality) within the category “Atlas of Lie Groups”. If you want to know more, please keep watching there. [UPDATE: I now also have another post trying to answer the "what's it good for?" question. That response starts at the fourth paragraph: "I also want to...".] I understand not knowing what the news reports mean, because most of them are pretty horrible. It’s possible to give a stripped-down explanation, but the popular press doesn’t seem to want to bother. A group is a collection of symmetries. A nice one is all the transformations of a square. You can flip it over left-to-right, flip it up-to-down, or rotate it by quarter turns. This group isn’t “simple” because there are smaller groups sitting inside it [yes, it's a bit more than that as readers here should know. --ed] — you could forget the flips and just consider the group of rotations. All groups can be built up from simple groups that have no smaller ones sitting inside them, so those are the ones we really want to understand. Think of it sort of like breaking a number into its prime factors. The kinds of groups this project is concerned with are called Lie groups (pronounced “lee”) after the Norwegian mathematician Sophus Lie. They’re made up of continuous transformations like rotations of an object in 3-dimensional space. Again, the Lie groups we’re really interested in are the simple ones that can’t be broken down into smaller ones. A hundred years ago, Élie Cartan and others came up with a classification of all these simple Lie groups. There are four infinite families like rotations in spaces of various dimensions or square matrices of various sizes with determinant 1 (if you remember any matrix algebra). These are called $A_n$, $B_n$, $C_n$, and $D_n$. There are also five extras that don’t fit into those four families, called $G_2$, $F_4$, $E_6$, $E_7$, and $E_8$. That last one is the biggest. It takes three numbers to describe a rotation in 3-D space, but 248 numbers to describe an element of $E_8$. Classifying the groups is all well and good, but they’re still hard to work with. We want to know how these groups can act as symmetries of various objects. In particular, we want to find ways of assigning a matrix to each element of a group so that if you take two transformations in the group and do them one after the other, the matrix corresponding to that combination is the product of the matrices corresponding to the two transformations. We call this a “matrix representation” of the group. Again, some representations can be broken into simpler pieces, and we’re concerned with the simple ones that can’t be broken down anymore. What the Atlas project is trying to do is build up a classification of all the simple representations of all the simple Lie groups, and the hardest chunk is $E_8$, which has now been solved. Okay, we’re ready to find the structure of Rubik’s group. We’ve established the restrictions stated in my first post. Now we have to show that everything else is possible. The main technical point here is that we can move any three edge cubies to any three edge cubicles, and the same for corners. I don’t mean that we can do this without affecting the rest of the cube. Just take any three cubies and pick three places you want them to be, and there’s a maneuver that puts them there, possibly doing a bunch of other stuff to other cubies. I’ll let you play with your cubes and justify this assertion yourself. A slightly less important point is that we only need to consider even permutations of corners or edges. We know that the edge and corner permutations are either both even or both odd. If they’re odd, twist one side and now they’re both even. Now, let’s solve the edge group. The maneuver has effect $(+fr)(+br)$, flipping two edge cubies, while the maneuver has effect $(fr\,br\,fl)$, performing a cycle of three edges. This is all we need, because $3$-cycles are enough to generate all even permutations, and one 3-cycle gives us all of them. Similarly, being able to flip two edges gives us all edge flips with zero total flipping. How does this work? First, forget the orientation of the edges and just consider which places around the cube they’re in. This is some even permutation from the solved state, so it’s made up of a bunch of cycles of odd length and pairs of cycles of even length. Consider an odd-length cycle $(a_1\,a_2\,...\,a_k)$. If we compose this with the $3$-cycle $(a_k\,a_{k-1}\,a_{k-2})$, we get $(a_1 \,a_2\,...\,a_{k-2})$. This is again an odd-length cycle, but two shorter. If we keep doing this we can shrink any odd cycle down to a $3$-cycle. On the other hand, we have the composition $(a_1\,a_2 \,a_3)(a_2\,a_3\,a_4)=(a_1\,a_2)(a_3\,a_4)$, so we can build a pair of $2$-cycles from $3$-cycles. We can use these to shrink a pair of even-length cycles into a pair of odd-length cycles, and then shrink those into $3$-cycles. In the end, every even permutation can be written as a product of $3$-cycles. And now since we can move any three cubies anywhere we want, one $3$-cycle gives us all of them. Let’s pick three — say $ub$, $br$, and $df$ — and a maneuver $m$ that sends $ub$ to $fr$, leaves $br$ alone, and sends $df$ to $fl$. Such a maneuver will always exist, though it may mess up other parts of the cube. Now conjugate $e_3$ by $M$. We know what conjugation in symmetric groups does: it replaces the entries in the cycle notation. So the maneuver $me_3m^{-1}$ has the effect $(ub\,br\,df)$, and we can do something similar to make any $3$-cycle we might want. So we can make any even edge permutation we want, and adding a twist makes the odd permutations. The same sort of thing works for edge flips. Take any pair of edges you want to flip, move them to $fr$ and $br$, flip them with $e_F$, and move them back where they started. We can make any flips we need like this. Together what this says is that the edge group of the Rubik’s cube lives in the wreath product of $S_{12}$ and $\mathbb{Z}_2$: twelve copies of $\mathbb{Z}_2$ for the flips, permuted by the action of $S_{12}$. Specifically, the edge group is the subgroup with total flip zero. We call this group $E$, and we know $E\subseteq S_{12}\wr\mathbb{Z}_2$ as a subgroup of order $2$. A very similar argument gives us the corner group. The maneuver has the effect $(+urf)(-dlb)$, twisting two corners in opposite directions, while has the effect $(urf\,ubr\,lfd)$, performing a $3$-cycle on the corners. Conjugations now give us all $3$-cycles, and these make all even corner permutations, and turning one more face makes all corner permutations. Conjugations also can give us all corner twists with zero total twist. This gives the corner group $C\subseteq S_8\wr\mathbb{Z}_3$ as a subgroup of order $3$. Putting these two together we get the entire Rubik’s Group $G\subseteq E\times C$ as a subgroup of order $2$. Here it’s a subgroup because we can only use maneuvers with the edge and corner permutations either both even or both odd, not one of each. This result gives us an algorithm to solve the cube! • First, pick the colors of the face cubies to be on each side. • Then write out the maneuver that will take the scrambled cube to the solved one in cycle notation. If the edge and corner permutations are odd, twist one side and start again — now they’ll both be even. • Now write the edge permutation as a product of $3$-cycles, and make each $3$-cycle by conjugating $e_3$ by an apropriate maneuver. • Do the same for the corner permutation, using $c_3$ as the basic piece. • Write down how each edge and each corner needs to be flipped or twisted. Make these flips and twists by conjugating $e_F$ and $c_T$. That’s all there is to it. It’s far from the most efficient algorithm, but it exploits to the hilt the group theory running through the Rubik’s Cube. You should be able to apply the same sort of analysis to all sorts of similar puzzles. For example, the $2\times2\times2$ cube is just the corner group on its own. The Pyraminx uses a simpler, but similar group. The Megaminx is more complicated, but not really that different. It’s just group theory underneath the surface. • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives
{"url":"http://unapologetic.wordpress.com/2007/03/page/2/","timestamp":"2014-04-17T18:38:53Z","content_type":null,"content_length":"86591","record_id":"<urn:uuid:f6b8481c-e8d0-4d2e-ac7c-0feb9eadab67>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
CSE 105: Automata and Computability Theory Autumn 2009 Instructor: Hovav Shacham hovav@cs.ucsd.edu Textbook: Michael Sipser, Introduction to the Theory of Computation Lectures: Tuesdays and Thursdays, 12:30–1:50 PM in [S:CSB 001:S] Peterson 103 Mondays, 1:00–1:50 PM in WLH 2205. Thursday, October 15th and Tuesday, November 10th, in class. Friday, December 11th, 11:30 AM – 2:29 PM, in Peterson 103. Homework Assignments
{"url":"http://cseweb.ucsd.edu/classes/fa09/cse105/index.html","timestamp":"2014-04-20T00:37:59Z","content_type":null,"content_length":"3137","record_id":"<urn:uuid:2f33c3ce-a1d3-4d3a-a43c-65d11024b759>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Chua's Circuit - a simple electronic circuit, developed by Leon Chua, showing chaotic dynamics. The applet shows a simulation of Chua's circuit, plotting the voltage measured across C1 against the voltage measured across C2. This corresponds to the display on an X-Y oscilloscope with probes connected across these capacitors. The initial values of the parameters used in the applet correspond to the component values in the circuit diagram, and show a simple periodic orbit (oscillation). The transition to chaotic dynamics can be found by carefully decreasing R or C1, (e.g. decrease R in steps of 0.01 to 1.2K). The simulation compares well with what is actually seen on an oscilloscope. Chaos seems to develop via a subharmonic cascade. If you do not have access to an oscilloscope, you can use the voltage across C1 or C2 as the input to a high input impedance audio amplifier (with the component values shown the frequency of the oscillations is in the audio range). It turns out that the ear is very sensitive to the development of a weak subharmonic. The subharmonic becomes the fundamental an octave below the original tone, and the ear hears the note drop an octave even when the intensity of the new fundamental is very weak. The first two or three transitions in the subharmonic cascade route to chaos, and the onset of chaos (noise!) are very audible. Circuit equations (pdf: rather technical. [Return to Home Page] [Table of Contents] Last modified Saturday, March 1, 2003 Michael Cross
{"url":"http://www.cmp.caltech.edu/~mcc/chaos_new/Chua.html","timestamp":"2014-04-17T07:09:09Z","content_type":null,"content_length":"2679","record_id":"<urn:uuid:647d7b36-8484-4bfb-a566-e1ef4b5ea071>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized(?) Dyck Paths Dyck path is a useful metaphor for dealing with Catalan numbers. Suppose you are walking on an n X n grid, starting at (0,0). You can make a horizontal move to the right or a vertical move upwards. A Dyck path is a path in this grid where • the path ends at (n,n) • the path may touch (but never goes over) the main diagonal The number of Dyck paths is precisely the Catalan number . One way of seeing this is that all Dyck paths that touch the diagonal at some interior point can be written as the concatenation of two Dyck paths of shorter length (one from the origin to the point, and one from the point to (n,n)). Dyck paths are also related to counting binary trees, and strings of balanced parentheses (called Dyck languages People have studied "generalized" Dyck paths, where the grid is now rectangular (n X m), and the step lengths are appropriately skewed. However, what I'm interested in is the following seemingly simple generalization: Let a (n,k)-Dyck path be a Dyck path with the modification that the path, instead of ending at (n,n), ends at (n,k) (k <=n). What is the number of (n,k)-Dyck paths ? It seems like this should have been looked at, but I've been unable so far to find any reference to such a structure. I was wondering if readers had any pointers ? Note that this number is at most C since any such path can be completed into a unique Dyck path.
{"url":"http://geomblog.blogspot.com/2007/04/generalized-dyck-paths.html?showComment=1175928300000","timestamp":"2014-04-20T18:25:32Z","content_type":null,"content_length":"138110","record_id":"<urn:uuid:215b4b9e-fc25-47b6-be49-56a47b33c68d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Extending Differential Forms so That they are Globally Non-zero. Replies: 0 Extending Differential Forms so That they are Globally Non-zero. Posted: Feb 18, 2013 2:49 AM Hi, All: Let M be a smooth manifold and let w be a form defined locally only, i.e., w is defined in individual charts. There is a way of extending w from the local charts to the whole manifold using a bump function f and partitions of unity; we choose a triple K,V,U with K compact, V closed, U open, K<V<U , so that f ==1 on V , and f is 0 outside of U . Then , for each chart we patch together f.w into a global form using partitions of unity (assume M is paracompact , so that P.O.U's exist ). **NOW** the problem is that in this extension, w will be zero. Question: under what conditions can we extend w into a global _non-zero_
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2435979","timestamp":"2014-04-20T21:21:57Z","content_type":null,"content_length":"14271","record_id":"<urn:uuid:f2f6d389-f318-48a1-9fb4-d6cb9f9b5b14>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Homomorphic Feller cocycles on a C*-algebra. Lindsay, J. Martin and Wills, Stephen J. (2003) Homomorphic Feller cocycles on a C*-algebra. Journal of the London Mathematical Society, 68 (1). pp. 255-272. Full text not available from this repository. When a Fock-adapted Feller cocycle on a C*-algebra is regular, completely positive and contractive, it possesses a stochastic generator that is necessarily completely bounded. Necessary and sufficient conditions are given, in the form of a sequence of identities, for a completely bounded map to generate a weakly multiplicative cocycle. These are derived from a product formula for iterated quantum stochastic integrals. Under two alternative assumptions, one of which covers all previously considered cases, the first identity in the sequence is shown to imply the rest. Item Type: Article Journal or Publication Title: Journal of the London Mathematical Society Additional Information: RAE_import_type : Journal article RAE_uoa_type : Pure Mathematics Subjects: Q Science > QA Mathematics Departments: Faculty of Science and Technology > Mathematics and Statistics ID Code: 2382 Deposited By: ep_importer Deposited On: 01 Apr 2008 12:52 Refereed?: Yes Published?: Published Last Modified: 09 Oct 2013 13:12 Identification Number: URI: http://eprints.lancs.ac.uk/id/eprint/2382 Actions (login required)
{"url":"http://eprints.lancs.ac.uk/2382/","timestamp":"2014-04-20T23:39:16Z","content_type":null,"content_length":"14861","record_id":"<urn:uuid:ba83018a-c003-49e1-b4cc-8f37731f5596>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
probability theory (mathematics) :: Conditional expectation and least squares prediction probability theory Article Free Pass Conditional expectation and least squares prediction An important problem of probability theory is to predict the value of a future observation Y given knowledge of a related observation X (or, more generally, given several related observations X[1], X [2],…). Examples are to predict the future course of the national economy or the path of a rocket, given its present state. Prediction is often just one aspect of a “control” problem. For example, in guiding a rocket, measurements of the rocket’s location, velocity, and so on are made almost continuously; at each reading, the rocket’s future course is predicted, and a control is then used to correct its future course. The same ideas are used to steer automatically large tankers transporting crude oil, for which even slight gains in efficiency result in large financial savings. Given X, a predictor of Y is just a function H(X). The problem of “least squares prediction” of Y given the observation X is to find that function H(X) that is closest to Y in the sense that the mean square error of prediction, E{[Y − H(X)]^2}, is minimized. The solution is the conditional expectation H(X) = E(Y|X). In applications a probability model is rarely known exactly and must be constructed from a combination of theoretical analysis and experimental data. It may be quite difficult to determine the optimal predictor, E(Y|X), particularly if instead of a single X a large number of predictor variables X[1], X[2],… are involved. An alternative is to restrict the class of functions H over which one searches to minimize the mean square error of prediction, in the hope of finding an approximately optimal predictor that is much easier to evaluate. The simplest possibility is to restrict consideration to linear functions H(X) = a + bX. The coefficients a and b that minimize the restricted mean square prediction error E{(Y − a − bX)^2} give the best linear least squares predictor. Treating this restricted mean square prediction error as a function of the two coefficients (a, b) and minimizing it by methods of the calculus yield the optimal coefficients: b̂ = E{[X − E(X)][Y − E( Y)]}/Var(X) and â = E(Y) − b̂E(X). The numerator of the expression for b̂ is called the covariance of X and Y and is denoted Cov(X, Y). Let Ŷ = â + b̂X denote the optimal linear predictor. The mean square error of prediction is E{(Y − Ŷ)^2} = Var(Y) − [Cov(X, Y)]^2/Var(X). If X and Y are independent, then Cov(X, Y) = 0, the optimal predictor is just E(Y), and the mean square error of prediction is Var(Y). Hence, |Cov(X, Y)| is a measure of the value X has in predicting Y. In the extreme case that [Cov(X, Y)]^2 = Var(X)Var(Y), Y is a linear function of X, and the optimal linear predictor gives error-free prediction. There is one important case in which the optimal mean square predictor actually is the same as the optimal linear predictor. If X and Y are jointly normally distributed, the conditional expectation of Y given X is just a linear function of X, and hence the optimal predictor and the optimal linear predictor are the same. The form of the bivariate normal distribution as well as expressions for the coefficients â and b̂ and for the minimum mean square error of prediction were discovered by the English eugenicist Sir Francis Galton in his studies of the transmission of inheritable characteristics from one generation to the next. They form the foundation of the statistical technique of linear regression. The Poisson process and the Brownian motion process The theory of stochastic processes attempts to build probability models for phenomena that evolve over time. A primitive example appearing earlier in this article is the problem of gambler’s ruin. The Poisson process An important stochastic process described implicitly in the discussion of the Poisson approximation to the binomial distribution is the Poisson process. Modeling the emission of radioactive particles by an infinitely large number of tosses of a coin having infinitesimally small probability for heads on each toss led to the conclusion that the number of particles N(t) emitted in the time interval [0, t] has the Poisson distribution given in equation (13) with expectation μt. The primary concern of the theory of stochastic processes is not this marginal distribution of N(t) at a particular time but rather the evolution of N(t) over time. Two properties of the Poisson process that make it attractive to deal with theoretically are: (i) The times between emission of particles are independent and exponentially distributed with expected value 1/μ. (ii) Given that N(t) = n, the times at which the n particles are emitted have the same joint distribution as n points distributed independently and uniformly on the interval [0, t]. As a consequence of property (i), a picture of the function N(t) is very easily constructed. Originally N(0) = 0. At an exponentially distributed time T[1], the function N(t) jumps from 0 to 1. It remains at 1 another exponentially distributed random time, T[2], which is independent of T[1], and at time T[1] + T[2] it jumps from 1 to 2, and so on. Examples of other phenomena for which the Poisson process often serves as a mathematical model are the number of customers arriving at a counter and requesting service, the number of claims against an insurance company, or the number of malfunctions in a computer system. The importance of the Poisson process consists in (a) its simplicity as a test case for which the mathematical theory, and hence the implications, are more easily understood than for more realistic models and (b) its use as a building block in models of complex systems. Do you know anything more about this topic that you’d like to share?
{"url":"http://www.britannica.com/EBchecked/topic/477530/probability-theory/32786/Conditional-expectation-and-least-squares-prediction","timestamp":"2014-04-19T12:03:27Z","content_type":null,"content_length":"96232","record_id":"<urn:uuid:7b16e6cf-1408-4e04-adc2-88634907f25e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
[Help-gsl] SV Decomposition. [Top][All Lists] [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] [Help-gsl] SV Decomposition. From: Nomesh Bolia Subject: [Help-gsl] SV Decomposition. Date: Thu, 19 Feb 2004 22:49:43 +0530 (IST) I am using SV Decomposition to solve a least squares problem. Regarding the same, I have two queries: 1> In solving y = Xr (notations, as per the GSL reference manual), how do I go about solving if X has only one column. 2> To implement a non-negative least squares (least squares with constraints of non-negativity on all solution component) routine, I need to solve the ordinary least squares (y = Xr) when there are 1 or more than one (Upto K-1 where K = no of columns in X) columns taking a value zero. As is clear, this problem would not have a unique solution (since X'*X is singular). But, would the solution given by the gsl_multifit routine be correct under all circumstances? Or would there some issues which need to be taken care before passing on such an X as argument to the gsl_multifit Looking forward to some ideas/help/suggestions, Thanks and Regards, Nomesh Bolia. [Prev in Thread] Current Thread [Next in Thread] • [Help-gsl] SV Decomposition., Nomesh Bolia <=
{"url":"http://lists.gnu.org/archive/html/help-gsl/2004-02/msg00008.html","timestamp":"2014-04-19T07:19:46Z","content_type":null,"content_length":"5033","record_id":"<urn:uuid:8908d458-9265-4532-af3d-6539139e368c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
When are entire functions surjective? Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Is there some useful criterion to determine whether or not an entire function is surjective? add comment up vote 5 down vote Indeed. And it is surjective if and only if it not of the form $e^{h(z)}+\alpha$ for a suitable constant $\alpha$ and a suitable entire function $h(z)$. Roland Bacher Jun 3 '10 at 10:13 +1. And to show this, it's probably worth looking at en.wikipedia.org/wiki/Weierstrass_factorization_theorem dke Jun 3 '10 at 10:24 I don't see how Picard's theorem, or Roland Bacher's remark, is useful in practice to determine whether an entire function is surjective. Pete L. Clark Jun 3 '10 at 12:36 @Pete L. Clark: Hence the 'maybe' in the post. I was thinking about a useful/practical criterion but nothing came to mind. babubba Jun 3 '10 at 12:51 But certainly if there is no $\alpha \in \mathbb{C}$ such that $\frac{f'(z)}{f(z) - \alpha}$ is entire, we can conclude that $f$ is not surjective. Saul Glasman Jun 3 '10 at 20:59 show 1 more comment Not the answer you're looking for? Browse other questions tagged complex-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/26903/when-are-entire-functions-surjective?sort=oldest","timestamp":"2014-04-20T11:08:58Z","content_type":null,"content_length":"55479","record_id":"<urn:uuid:c4472350-8c14-4be3-8fb7-6a695c6d16d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Suppose you are drawing the doghouse and the scale you want to use is 1 in : 3 ft. What should the width of the doghouse be on the drawing, if the actual doghouse is 2 ft. 3 in. wide? A.) 0.75 in. OR B.) 2.3 in. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4eff5f4ce4b01ad20b53544d","timestamp":"2014-04-20T06:20:43Z","content_type":null,"content_length":"34793","record_id":"<urn:uuid:582a5c12-2de4-452e-bb75-1d7d4ea164d5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Is Log(max(m,n)) equivalent to log(m+n) January 22nd 2010, 02:36 PM #1 Jan 2010 Is Log(max(m,n)) equivalent to log(m+n) Guys can someone explain me whether log(max(m,n)) equivalent to log(m+n) ? I am studying analysis of a certain algorithm and the book says so. Please reply. properties of logarithms. Check here: Logarithm - Wikipedia, the free encyclopedia as an example let us take 2 numbers m=5,n=4 but log(m+n)=log(5+4)=log9 but log5 not equal to log9 therefore log(max(m,n)) not equal to log(m+n) speaking mathematically. as an example let us take 2 numbers m=5,n=4 but log(m+n)=log(5+4)=log9 but log5 not equal to log9 therefore log(max(m,n)) not equal to log(m+n) speaking mathematically. Put $m=1,\ n=2$ then $\log(\max(m,n))=\log(2) e \log(m+n)=\log(3)$ Now go back and try to find out why for the purposes of the algorithm in question they are effectively equivalent. What, exactly, do you mean by "equivalent"? That question is directed to both tsjagan and Captain Black! When m and n differ significantly in magnitude then the two logs are approximately the same. If the expression occurs in the analysis of an algorithm this is likely to be the case and if using one form rather than the other yields a significant simplification you jump at it.. Ah, thanks. January 22nd 2010, 05:23 PM #2 January 22nd 2010, 11:13 PM #3 Nov 2009 January 22nd 2010, 11:18 PM #4 Nov 2009 January 23rd 2010, 12:11 AM #5 Grand Panjandrum Nov 2005 January 23rd 2010, 03:13 AM #6 MHF Contributor Apr 2005 January 23rd 2010, 06:58 AM #7 Grand Panjandrum Nov 2005 January 23rd 2010, 08:19 AM #8 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/algebra/124964-log-max-m-n-equivalent-log-m-n.html","timestamp":"2014-04-20T14:09:01Z","content_type":null,"content_length":"53436","record_id":"<urn:uuid:01026bb2-7a2c-476f-9c1c-88e0d29c0a2b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditions of existing negative components in instantaneous ASA 125th Meeting Ottawa 1993 May 1pSA10. Conditions of existing negative components in instantaneous frequency analysis. Jeung T. Kim Byungduk Lim Acoust. and Vib. Lab., Korea Res. Inst. of Standards and Sci., P.O. Box 3, Daedok Science Town, Daejon, Korea 305-606 An instantaneous frequency analysis is a technique to examine a signature for the rotating machinery if the signal has several transitions within a cycle. This paper discusses the conditions of existing negative frequency components in the instantaneous frequency. By using a simple signal that consists of two frequency components, the instantaneous frequency analysis is conducted while the amplitude ratio between two frequency components has been changed. The calculation shows that, depending on the amplitude ratio, the instantaneous frequencies have averaged, zero-valued, or negative components. It turns out that the negative-valued instantaneous frequencies, which have been regarded as the noise effect, are the consequence of the calculation process for the multisignal components. The criteria to show the condition of the negative values in instantaneous frequencies is given in terms of the relative amplitude ratio and the frequency difference. In this paper, a vibration signal monitored from a rotating machinery is also examined as an application example in order to show the existence of negative instantaneous frequencies components.
{"url":"http://www.auditory.org/asamtgs/asa93ott/1pSA/1pSA10.html","timestamp":"2014-04-18T23:19:54Z","content_type":null,"content_length":"1870","record_id":"<urn:uuid:82567e89-f04e-4721-84bd-60dd01572d02>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
LaTeX Jobs • Hourly – Less than 1 month – 10-30 hrs/week – Posted I have a scientific manuscript (an article that will be submitted to an academic journal) of approximately 20,000 words. I need you to carefully proofread it for style, correctness of references, etc., but you will not need to check the correctness of mathematical proofs. Non-negotiable requirements: * fluency in written academic English at a native level * familiarity with LaTeX * equivalent of master's degree in mathematics, economics, or phyisics Experience with editing scientific manuscripts is preferred but not required. • Fixed-Price – Est. Budget: $5.00 Posted The task is simple. copy the URL below into a new browser tab and convert the article that appears into a LaTex document. The typeset (font, color, size, etc) should be EXACTLY the same. Once I take the document you created and run through the LaTex parser and output it, the output should be almost the same as it appears in URL http://online.wsj.com/news/article_email/ • Fixed-Price – Est. Budget: $25.00 Posted Hi, I need help with formatting my dissertation. I have three papers -- one of which is in MS WORD and two others which are in Latex. I need you to use the template provided by my university (http://web.mit.edu/thesis/tex/) and the papers to create the dissertation. • Hourly – Less than 1 week – Less than 10 hrs/week – Posted I have a 30 page pdf document written in Latex. The document has mathematical equations. I don't have the underlying latex source code. I need the file to be retyped in Latex. • Hourly – Less than 1 month – Less than 10 hrs/week – Posted I need some help in formatting couple of images, equations and glossary in Latex. • Hourly – 1 to 3 months – 30+ hrs/week – Posted I have sample maths questions, however due to copyright laws i cannot copy the questions so i need them to be altered slightly then for a worked solution to be typed up showing how you get the final answer. These questions will be used by school students helping them learn how to solve questions. Person must be competent with writing mathematical notation on the computer and preferably know how to use latex. Uni degree is a must masters and PHD ... • Hourly – Less than 1 month – 10-30 hrs/week – Posted The title says it all, we have a complete LaTeX solution and we need a LaTeX file in the solution configured before compiling the LaTeX document to PDF. We would like this all to happen when calling a single Python function. The function will accept a dictionary containing keys and values that will coincide with the LaTeX configuration file's keys and values. This job is specialized towards people who have some LaTeX and Python experience. Apply and we can ... • Hourly – Less than 1 week – Less than 10 hrs/week – Posted I'm having difficulty including my bibliography in my paper. Help me determine how I can get from my .bib file to a properly formatted biblio in my LaTeX document. • Fixed-Price – Est. Budget: $10,000.00 Posted This job is also on eLance, as the two platform connects I hope to find here more people with skills in this area The output should be automated with Cran R (other technologies may also be possible) The input-files are not yet ready (in 1-2 weeks). The management software is developed from us, you can change the input format to your prefered form, the developer will change it to your desired format. Attached is a file that describes the input ... • Hourly – Less than 1 week – 10-30 hrs/week – Posted To be considered, you MUST include an example .bib file you have worked on. Please also complete the test job which is to create a .bib file out of these few entries. If hired, I will compensate you for completing this.
{"url":"https://www.odesk.com/o/jobs/browse/skill/latex/","timestamp":"2014-04-17T01:53:44Z","content_type":null,"content_length":"50988","record_id":"<urn:uuid:b5a1d289-124f-4cac-a40e-eae25be71479>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
A Resolution of Cosmic Dark Energy via a Quantum Entanglement Relativity Theory A Resolution of Cosmic Dark Energy via a Quantum Entanglement Relativity Theory A new quantum gravity formula accurately predicting the actually measured cosmic energy content of the universe is presented. Thus by fusing Hardy’s quantum entanglement and Einstein’s energy formula we have de facto unified relativity and quantum mechanics in a single equation applicable to predicting the energy of the entire universe. In addition the equation could be seen as a simple scaling of Einstein’s celebrated equation when multiplied by a scaling parameter . Furthermorecould be approximated to Cite this paper M. Naschie, "A Resolution of Cosmic Dark Energy via a Quantum Entanglement Relativity Theory," Journal of Quantum Information Science , Vol. 3 No. 1, 2013, pp. 23-26. doi: [1] P. Davies, “The New Physics,” Cambridge University Press, Cambridge, 1989. [2] P. Halpern, “The Great Beyond: Higher Dimensions, Parallel Universes and the Extraordinary Search for a Theory of Everything,” John Wiley & Son Inc., Hoboken, 2004. [3] R. Penrose, “The Road to Reality,” Jonathan Cape, London, 2004. [4] M. Kaku, “Introduction to Superstrings and M-Theory,” Springer, New York, 1999. doi:10.1007/978-1-4612-0543-2 [5] B. Booss-Bavnbek, et al., “New Paths towards Quantum Gravity,” Lecture Notes in Physics, Springer, 2010. [6] B. G. Sidharth, “Frontiers of Fundamental Physics,” University Press, Hyderabad, 2007, pp. 1-32. [7] J. Mageuijo, “Faster than the Speed of Light,” William Heinemann, London, 2003. [8] L. Nottale, “Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity,” World Scientific, Singapore, 1998. [9] L. Smolin, “Three Roads to Quantum Gravity,” Weidenfeld and Nicolson, London, 2000. [10] P. Steinhardt and N. Turok, “Endless Universe,” Weidenfeld and Nicolson, London, 2007. [11] Y. Baryshev and P. Teerikorpi, “Discovery of Cosmic fractals,” World Scientific, New Jersey, 2002. [12] D. Mermin, “Quantum Mysteries Refined,” American Journal of Physics, Vol. 62, No. 10, 1994, pp. 880-887. doi:10.1119/1.17733 [13] D. F. Styer, “The Strange World of Quantum Mechanics,” Cambridge University Press, Cambridge, 2000, pp. 54-55. [14] L. Amendola and S. Tsujikawa, “Dark Energy: Theory and Observations,” Cambridge University Press, Cambridge, 2010. [15] R. Panek, “Dark Energy: The Biggest Mystery in the Universe,” 2010. http:/www.smithsonianmagazine.com/science.Nature/Dark-Energy [16] Planck-Spacecraft, “Wikipedia,” 2012. http://en.wikipedia.org/wiki/planck [17] E. Komatsu, et al. and WMAP Collaboration, “Seven Years Wilkinson Microwave Anisotropy probe (WMAP) Observations: Cosmological interpretation,” The Astrophysical Journal Supplement Series, Vol. 192, No. 2, 2011, p. 18. doi:10.1088/0067-0049/192/2/18 [18] A. G. Reiss, et al. and Supernova Search Team Collaborations, “Observation Evidence from Supernova for an Accelerating Universe and a Cosmological Constant,” The Astronomical Journal, Vol. 116, No. 3, 1998, pp. 1009-1038. doi:10.1086/300499 [19] S. Perlmutter, et al. and Supernova Cosmology Project Collaboration, “Measurements of Omega and Lambda from 42 High-Redshift Supernova,” The Astronomical Journal, Vol. 517, No. 2, 1999, pp. 565-585. doi:10.1086/307221 [20] C. Toni, “Dark Matter, Dark Energy and the Fate of Einstein’s Theory of Gravity,” AMS Graduate Student Blog. Mathgrablog.williams.edu/dark-matter-darkenergy=fate-einsteins-theory=gravity/#print [21] W. Rindler, “Relativity,” Oxford University Press, Oxford, 2001, pp. 111, 112. [22] W. Rindler, “General Relativity before Special Relativity: An Unconventional Overview of Relativity Theory,” American Journal of Physics, Vol. 62, No. 10, 1994, p. 887. [23] J. Mageuijo and L. Smolin, “Lorentz Invariance with an Invariant Energy Scale,” 18 December 2001, arXiv: hepth/0112090V2. Sponsors, Associates, and Links >>
{"url":"http://www.scirp.org/journal/PaperInformation.aspx?PaperID=29329","timestamp":"2014-04-18T18:29:19Z","content_type":null,"content_length":"62475","record_id":"<urn:uuid:abfbf225-256a-4b13-92db-d3f9fae89167>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Trivial/Non Trivial Solutions October 15th 2008, 02:46 PM Trivial/Non Trivial Solutions Suppose that you are given a matrix: 2 0 -1 How do you determine whether the solution is trivial (and independent) or nontrivial (and dependent)? October 15th 2008, 03:43 PM October 15th 2008, 03:48 PM Thats one of my problems. I'm told to find out whether the vectors are linearly independent or not, and the textbook tells me that if the linear system has non trivial solutions, it is dependent, and if it has trivial solutions, it will be dependent. Am I supposed to find row echelon form? And if so, what do I do with it? October 15th 2008, 04:02 PM I think the problem is asking to define $A = \begin{bmatrix} 0 & 3 & 1 \\ 0& 3 & 1 \\ 2&0&0 \\ 2&0&-1 \end{bmatrix}$ And then determine if $A\bold{x} = \bold{0}$ has non-trivial solutions. Where $\bold{x} \in \mathbb{R}^3$ and $\bold{0} = \begin{bmatrix}0&0&0&0\end{bmatrix}^T$. Since the number of equations exceede the number of variables the homogenous system will have non-trivial solutions. If you want to find those solutions you need to use Gaussian-Jordan elimination. Do you know how to do that? October 15th 2008, 04:02 PM I think they want you to augment it with zeros, and see if it has a trivial/non-trivial solution. And if its for linear independence you can see the first and 2nd equations are redundant and therefore not L.I. So it is dependent. October 15th 2008, 04:12 PM I think the problem is asking to define $A = \begin{bmatrix} 0 & 3 & 1 \\ 0& 3 & 1 \\ 2&0&0 \\ 2&0&-1 \end{bmatrix}$ And then determine if $A\bold{x} = \bold{0}$ has non-trivial solutions. Where $\bold{x} \in \mathbb{R}^3$ and $\bold{0} = \begin{bmatrix}0&0&0&0\end{bmatrix}^T$. Since the number of equations exceede the number of variables the homogenous system will have non-trivial solutions. If you want to find those solutions you need to use Gaussian-Jordan elimination. Do you know how to do that? I have an idea how to do it... But what about where the number of equations matches the number of variables: 2 0 -1 1 In this case, what do I need to do to determine linear dependence? October 15th 2008, 04:17 PM You set up the system of equations in the form to do Gaussian-Jordan elimination. Then you find the row reduced echelon form of the matrix. Once you do that it should be clear if the system has only a trivial solution of not*. *)To get only trivial solutions the row reduced echelon form must be $\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0& 0&0&1&0\end{bmatrix}$ Because this matrix leads to the following equations: $1x_1+0x_2+0x_3+0x_4 = 0$ $0x_1+1x_2+0x_3+0x_4 = 0$ $0x_1+0x_2+1x_3+0x_4 = 0$ $0x_1+0x_3+0x_3+1x_3 = 0$ Which immediately means $x_1=x_2=x_3=x_4=0$. Therefore, it has only the trivial solutions. October 15th 2008, 04:32 PM I didn't get this matrix: $\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0& 0&0&1&0\end{bmatrix}$ My matrix has a row of zeroes prior to reduced row echelon form... $\begin{bmatrix}0&3&1&4&0\\0&3&1&4&0\\2&0&0&2&0\\2& 0&-1&1&0\end{bmatrix}$ Which then becomes: $\begin{bmatrix}2&0&0&2&0\\0&3&1&4&0\\0&3&1&4&0\\2& 0&-1&1&0\end{bmatrix}$ And then: $\begin{bmatrix}1&0&0&1&0\\0&3&1&4&0\\0&3&1&4&0\\0& 0&-1&-1&0\end{bmatrix}$ ~ $\begin{bmatrix}1&0&0&1&0\\0&1&1/3&4/3&0\\0&3&1&4&0\\0&0&-1&-1&0\end{bmatrix}$ ~ $\begin{bmatrix}1&0&0&1&0\\0&1&1/3&4/3&0\\0&0&0&0&0\\0&0&-1&-1&0\end{bmatrix}$ What does this mean in relation to its linear independence? October 15th 2008, 04:43 PM He wasnt saying that $\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0& 0&0&1&0\end{bmatrix}<br />$ was the answer.. he was saying that if you get this as your answer then your vectors would be linearly independent. The row of zeros indicates that that specific vector was a linear combination of the others so it is dependent ( ie. not independent... ) October 15th 2008, 04:45 PM So if the resultant isn't that matrix, the vectors aren't independent? October 15th 2008, 04:46 PM October 15th 2008, 04:51 PM Ok, I understand. Thanks. I don't know this stuff can't just be said in plain english---> if this is this, then you have x. If not, you don't. I am having a hard time comprehending the language. Thanks to both of you. :) October 15th 2008, 04:53 PM Linear algebra is the king of confusion in my mind, especially in upper years. October 15th 2008, 04:55 PM I could definitely understand, since I am confused out of my mind at lower level. Urg.... October 15th 2008, 05:17 PM As Scopur said it means the equation has non-trivial solutions. With a little more work we can actually find all the solutions. Multiply the 4th row by 1/3 and add it to the 2nd. After that multiply the 4th row by -1. Finally interchange the 3rd and 4th rows. The resulting matrix is: $\begin{bmatrix} 1&0&0&1&0 \\ 0&1&0&1&0 \\ 0&0&1&1&0 \\ 0&0&0&0&0 \end{bmatrix}$ This tells us the following: $1x_1+0x_2+0x_3+1x_4 = 0$ $0x_1+1x_2+0x_3+1x_4 = 0$ $0x_1+0x_2+1x_3+1x_4 = 0$ $0x_1+0x_2+0x_3+0x_4 = 0$ The fourth equation is redundant, since it is obviously true. Solving each equation for the leading variables we get: $x_1 = -x_4$ $x_2 = -x_4$ $x_3 = -x_4$ Let $x_4 = t$ then it means $x_1=x_2=x_3 = -x_4 = -t$. What does this means? It means we found all solutions. Let $t$ be any number you want it to be. It follows if we set $x_1=x_2=x_3=t$ and $x_4=-t$ then we found a solution. Try it for $t= 1,2,3,...$ and convince these are solutions. Since $t$ varies for infinitely many values it means there are infinitely many solutions. This is an example when the number of equations is equal to the number of variables and yet the number of solutions is infinite.
{"url":"http://mathhelpforum.com/advanced-algebra/53907-trivial-non-trivial-solutions-print.html","timestamp":"2014-04-20T19:45:39Z","content_type":null,"content_length":"23760","record_id":"<urn:uuid:59f127e1-c341-4495-ae2a-2d0433f1a85a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Modelling 101 – The Price Equation So in this post I’m going to assume you know absolutely nothing about anything. If you know something about something this probably isn’t what you’re looking for. If you’re looking for something which will go into depth on how the price equation is derived this probably isn’t what you’re looking for either. If you simply want to know what the price equation does and how to use it at face value then welcome! You’ve found the right place. The price equation is used to calculate how the average value of any variant can change within a population from generation to generation. Here I will cover everything you need to know to understand the equation and slot in the right values: Covariance is the measure of how much 2 things change together and can generally be calculated by the following equation: cov(x, y) = E(xy) – E(x)E(y) cov(x, y) is the value of how much two random variables (x and y) change together. E(xy) is the expected value of the product of x and y and E(x)E(y) the product of their expected values. The price equation Imagine a population of individuals who had varying degrees of variant S. (this is usually a characteristic expressed by genes in biology, but for the purposes of linguistics this could just as well be the use of a word, phrase or other varying item which can have different degrees of communicative success or ‘fitness’.) If we are to keep track of variant S, where the population is divided up into subpopulations (i = 1, 2, 3, …) dependant on how much of variant S they have. n[i] is the number of individuals in subpopulation i and the value of S within each subpopulation is z[i]. w = fitness of whole population w[i ] = fitness of subpopulation i w ‘ = fitness in next generation of whole population w[i ] ‘ = fitness of next generation of subpopulation i z = average amount of variant S found in whole population z[i ] = average amount of variant S found in subpopulation i z ‘ = average amount of variant S found the next generation of population z[i ] ‘ = average amount of variant S found the next generation of subpopulation i n = number of individuals in the population n[i] = number of individuals in subpopulation i n ‘ = number of individuals in next generation of population w[i ]n[i] = number of offspring in next generation of subpopulation i n[i ]‘ = number of offspring in next generation of subpopulation i w[i ]n[i] = n[i ]‘ w[i ] = n[i ]‘/n[i] So, amount of change from z[i ] to z[i ]‘ can be summed up as follows: The amount of change from z to z’ can also be summed up in this way. And now you should know everything needed to know in order to understand the price equation! George Price Further Reading To read about George Price’s extraordinary life go here: http://en.wikipedia.org/wiki/George_R._Price For a clear, in depth derivation and expansion on the price equation: McElreath & Boyd (2007). Mathematical Models of Social Evolution: A guide for the perplexed. University of Chicago Press. Amazon link. You write: z = average amount of variant S found in subpopulation i zi = average amount of variant S found in whole population Isn’t this a misprint? Oh yeah. Thanks. I’ll fix it. Done. Sorry about that.
{"url":"http://www.replicatedtypo.com/mathematical-modelling-101-%E2%80%93-the-price-equation/2739.html","timestamp":"2014-04-20T00:38:29Z","content_type":null,"content_length":"65002","record_id":"<urn:uuid:511e191b-12a5-43e9-8abc-d569d7179021>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Validity of ivreg2-tests for underidentification and weak instru Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] RE: st: Validity of ivreg2-tests for underidentification and weak instruments if errors are not i.i.d. etc. From "Schaffer, Mark E" <M.E.Schaffer@hw.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject RE: st: Validity of ivreg2-tests for underidentification and weak instruments if errors are not i.i.d. etc. Date Sat, 11 Feb 2012 15:01:10 -0000 > -----Original Message----- > From: Anne Tausch [mailto:anne.tausch@googlemail.com] > Sent: 10 February 2012 17:49 > To: statalist@hsphsun2.harvard.edu > Cc: Schaffer, Mark E; baum@bc.edu; Steven Stillman > Subject: Validity of ivreg2-tests for underidentification and > weak instruments if errors are not i.i.d. etc. > Dear Mark Schaffer, hello everybody, > I really appreciate your willingness to answer my questions > regarding the tests for underidentification/weak instruments > that are implemented in ivreg2. My questions are about the > validity of certain tests in particular circumstances and are > stated below. > Unfortunately, I wasn't able to find the answers in the > articles of you, Christopher Baum and Steven Stillman (or elsewhere). > Many thanks and best wishes > Anne > My questions are: > 1. Is the chi-square test of Angrist and Pischke valid in the > presence of heteroskedasticity and autocorrelation? Yes. I'm assuming you're asking -ivreg2- to report HAC or cluster-robust stats. The A-P test stat will also be HAC or cluster-robust. > And what > about Shea's partial r-square? Is that valid in the case of > non i.i.d errors? Actually, Shea's partial r-square isn't really valid even in the iid case and doesn't have a distribution that allows you to use it for formal testing. If you really want an R-sq, you're better off using the A-P version. You can get the A-P R-sq after an -ivreg2- estimation from the saved matrix e(first). > 2. In the case of multiple endogenous regressors, the Angrist > and Pischke F statistic can be used to asses whether a > particular endogenous regressor is weakly identified by > comparing the empirical value to the critical values of Stock > and Yogo. Is this test still valid in the presence of > heteroskedasticity and autocorrelation? Sort of. It's as valid for assessing weak identification as the HAC-robust F statistic in the 1-endogenous-regressor case, which is to say, "sort of valid". The weak identification test critical values that Stock and Yogo worked out are for the iid case only, and so using a HAC-robust, or heteteroskedasticity-robust, or cluster-robust test stat with these critical values has only an informal justification. (As in: "It's the best we can do for now".) > 3. If one has just one endogenous regressor and just one > excluded instrument variable: Can one still use the rule of > thumb that F should be greater than 10? Oder does that rule > only make sense when one has more than one excluded instrument? The Staiger-Stock (1997, Econometrica) rule of thumb of "F>=10" applies in this case too. Heriot-Watt University is a Scottish charity registered under charity number SC000278. Heriot-Watt University is the Sunday Times Scottish University of the Year 2011-2012 * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-02/msg00583.html","timestamp":"2014-04-21T02:12:33Z","content_type":null,"content_length":"11212","record_id":"<urn:uuid:a71f04a1-2e3a-492d-a178-a6e49addb434>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
GPUs seem to be all the rage these days. At the last Bayesian Valencia meeting, Chris Holmes gave a nice talk on how GPUs could be leveraged for statistical computing. Recently Christian Robert arXived a paper with parallel computing firmly in mind. In two weeks time I’m giving an internal seminar on using GPUs for Parsing and plotting time series data This morning I came across a post which discusses the differences between scala, ruby and python when trying to analyse time series data. Essentially, there is a text file consisting of times in the format HH:MM and we want to get an idea of its distribution. Tom discusses how this would be a bit clunky Statistical podcast: Random and Pseudorandom This morning when I downloaded the latest version of In our time, I was pleased to see that this weeks topic was “Random and Peudorandom.” If you’re not familiar with “In our time”, then I can I definitely recommend the series. Each week three academics and Melvyn Bragg discuss a particular topic from history, science, Survival paper (update) In a recent post, I discussed some statistical consultancy I was involved with. I was quite proud of the nice ggplot2 graphics I had created. The graphs nicely summarised the main points of the paper: I’ve just had the proofs from the journal, and next to the graphs there is the following note: It is Random variable generation (Pt 3 of 3) $Random variable generation (Pt 3 of 3)$ Ratio-of-uniforms This post is based on chapter 1.4.3 of Advanced Markov Chain Monte Carlo. Previous posts on this book can be found via the AMCMC tag. The ratio-of-uniforms was initially developed by Kinderman and Monahan (1977) and can be used for generating random numbers from many standard distributions. Essentially we transform the random variable of R programming books My sabbatical is rapidly coming to an end, and I have to start thinking more and more about teaching. Glancing over my module description for the introductory computational statistics course I teach, I noticed that it’s a bit light on recommend/background reading. In fact it has only two books: A first course in statistical programming Logical operators in R In R, the operators “|” and “&” indicate the logical operations OR and AND. For example, to test if x equals 1 and y equals 2 we do the following: > x = 1; y = 2 > (x == 1) & (y == 2) TRUE However, if you are used to programming in New paper: Survival analysis Each year I try to carry out some statistical consultancy to give me experience in other areas of statistics and also to provide teaching examples. Last Christmas I was approached by a paediatric consultant from the RVI who wanted to carry out prospective survival analysis. The consultant, Bruce Jaffray, had performed Nissen fundoplication surgery on Random variable generation (Pt 2 of 3) $Random variable generation (Pt 2 of 3)$ Acceptance-rejection methods This post is based on chapter 1.4 of Advanced Markov Chain Monte Carlo. Another method of generating random variates from distributions is to use acceptance-rejection methods. Basically to generate a random number from , we generate a RN from an envelope distribution , where . The acceptance-rejection algorithm is as follows: Repeat until Random variable generation (Pt 1 of 3) $Random variable generation (Pt 1 of 3)$ As I mentioned in a recent post, I’ve just received a copy of Advanced Markov Chain Monte Carlo Methods. Chapter 1.4 in the book (very quickly) covers random variable generation. Inverse CDF Method A standard algorithm for generating random numbers is the inverse cdf method. The continuous version of the algorithm is as follows: 1.
{"url":"http://www.r-bloggers.com/author/csgillespie/page/3/","timestamp":"2014-04-21T02:16:24Z","content_type":null,"content_length":"40301","record_id":"<urn:uuid:932228e1-ed86-4129-b548-3f06aaa942e4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Vapor Pressures of Binary Solutions An ideal solution of two liquids and obeys Raoult's law, which states that the partial vapor pressure of each component is proportional to its mole fraction: and , where and are the vapor pressures of the pure components at a given temperature (very often 25 °C). The total vapor pressure above the solution is then given by , assuming Dalton's law. Ideal solutions are fairly uncommon but serve as a convenient reference system to describe nonideal solutions. Pairs of liquids that are well approximated by Raoult's law usually contain molecules of similar size, shape, and chemical structure. Some well-known examples are benzene and toluene, chlorobenzene and bromobenzene, and carbon tetrachloride and silicon tetrachloride. Most real solutions exhibit deviations from Raoult's law. A positive deviation is characterized by and and indicates that the attractive interactions between like molecules is greater than that between and molecules. A negative deviation has and , implying stronger mutual interactions between unlike molecules. The curves shown in the graphic are qualitative approximations to the actual dependence of vapor pressures on composition. The blue and red curves represent the partial pressures of and , respectively, while the black curve shows the total vapor pressure. The dashed lines refer to the hypothetical ideal behavior of the corresponding vapor pressures. Even for nonideal solutions, Raoult's law is asymptotically approached for or . Dilute solutions, on the other hand, are approximated by Henry's law: the linear relations for and for , which can be displayed using the "show Henry's law" checkbox. Snapshot 1: an ideal solution Snapshot 2: acetone-carbon disulfide solution, showing strong positive deviation from ideality Snapshot 3: acetone-chloroform solution, showing negative deviation from ideality Reference: P. Atkins and J. de Paula, Physical Chemistry 7th ed., New York: W. H. Freeman and Co., 2002, pp. 168–172.
{"url":"http://demonstrations.wolfram.com/VaporPressuresOfBinarySolutions/","timestamp":"2014-04-19T01:50:48Z","content_type":null,"content_length":"45577","record_id":"<urn:uuid:dc0f483d-0e1a-4094-8c34-b31e12929dd4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Using the Calculator - Problem 4 Every once in a while we need to plug in a complicated function into our calculator. So behind me I have some of the more obscure equations that we will look at. We're saying like a fourth root, a higher power root, choose a probability, or a factorial and we need to plug these into our calculator. So sometimes these things can be hard to find so I'm just going to walk you through where most of these function are. So let's go take a look. In order to plug in a higher root, what you have to do is plug the root that you're concerned with, so what we're looking at before is the fourth root of 10, so hit the root you're concerned with so in that case 4 and then hit the Math button then you scroll down to the x root portion, number 5 and then hit the number they want put inside and what comes out is the fourth root of 10. So make sure you always put your root in first and then the number you're concerned with. Another thing we can do is our probability, our combinations or permutations so for this what we do is you plug the number you are choosing from so let's says we want to do 8 choose 3, you did 8 then go back to the Math button and then scroll over to the Prb option that's just probability and what you see is there is the nPr that's the permutations or the nCr the combinations, so choose whichever one you want, let's do combination and then plug in the number that you're choosing let's say 3 and what comes out is 8 choose 3. The same exact approach for our permutations. The last thing I want to talk about is factorial, so let's say I want to do 6 factorial, hit 6 then go back to the Math button, back over to the probability and then you see the exclamation point your factorial, number 4 hit enter and what comes out is your 6 factorial which in this case is 720. So those are some of the hard to find functions on your calculator. higher index radical higher degree root permutation combination calculator
{"url":"https://www.brightstorm.com/math/algebra-2/additional-topics/using-the-calculator-problem-4/","timestamp":"2014-04-18T18:32:37Z","content_type":null,"content_length":"54744","record_id":"<urn:uuid:a3d6c953-2ce1-4c7d-b68e-8ad70bfe2959>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex modes and effective refractive index in 3D periodic arrays of plasmonic nanospheres We characterize the modes with complex wavenumber for both longitudinal and transverse polarization states (with respect to the mode traveling direction) in three dimensional (3D) periodic arrays of plasmonic nanospheres, including metal losses. The Ewald representation of the required dyadic periodic Green’s function to represent the field in 3D periodic arrays is derived from the scalar case, which can be analytically continued into the complex wavenumber space. We observe the presence of one longitudinal mode and two transverse modes, one forward and one backward. Despite the presence of two modes for transverse polarization, we notice that the forward one is “dominant” (i.e., it contributes most to the field in the array). Therefore, in case of transverse polarization, we describe the composite material in terms of a homogenized effective refractive index, comparing results from (i) modal analysis, (ii) Maxwell Garnett theory, (iii) Nicolson-Ross-Weir retrieval method from scattering parameters for finite thickness structures (considering different thicknesses, showing consistency of results), and (iv) the fitting of the fields obtained through HFSS simulations. The agreement among the different methods justifies the performed homogenization procedure in case of transverse polarization. © 2011 OSA OCIS Codes (160.1245) Materials : Artificially engineered materials (260.2065) Physical optics : Effective medium theory (160.3918) Materials : Metamaterials (250.5403) Optoelectronics : Plasmonics ToC Category: Original Manuscript: September 13, 2011 Revised Manuscript: November 16, 2011 Manuscript Accepted: November 16, 2011 Published: December 7, 2011 Salvatore Campione, Sergiy Steshenko, Matteo Albani, and Filippo Capolino, "Complex modes and effective refractive index in 3D periodic arrays of plasmonic nanospheres," Opt. Express 19, 26027-26043 Sort: Year | Journal | Reset 1. A. Alù and N. Engheta, “Theory of linear chains of metamaterial/plasmonic particles as subdiffraction optical nanotransmission lines,” Phys. Rev. B74(20), 205436 (2006). 2. R. A. Shore and A. D. Yaghjian, “Traveling waves on two- and three-dimensional periodic arrays of lossless scatterers,” Radio Sci.42(6), RS6S21 (2007). 3. R. Sainidou and G. F. de Abajo, “Plasmon guided modes in nanoparticle metamaterials,” Opt. Express16(7), 4499–4506 (2008). 4. D. Van Orden, Y. Fainman, and V. Lomakin, “Optical waves on nanoparticle chains coupled with surfaces,” Opt. Lett.34(4), 422–424 (2009). 5. R. A. Shore and A. D. Yaghjian, Complex Waves on 1D, 2D, and 3D Periodic Arrays of Lossy and Lossless Magnetodielectric Spheres (Air Force Research Laboratory, Hanscom, AFB, MA, 2010; revised, 6. S. Campione, S. Steshenko, and F. Capolino, “Complex bound and leaky modes in chains of plasmonic nanospheres,” Opt. Express19(19), 18345–18363 (2011). 7. A. L. Fructos, S. Campione, F. Capolino, and F. Mesa, “Characterization of complex plasmonic modes in two-dimensional periodic arrays of metal nanospheres,” J. Opt. Soc. Am. B28(6), 1446–1458 8. S. Campione and F. Capolino, “Linear and planar periodic arrays of metallic nanospheres: Fabrication, optical properties and applications,” in Selected Topics in Photonic Crystals and Metamaterials, A. Andreone, A. Cusano, A. Cutolo, and V. Galdi, eds. (World Scientific, 2011), pp. 143–196. 9. A. Alù and N. Engheta, “Three-dimensional nanotransmission lines at optical frequencies: A recipe for broadband negative-refraction optical metamaterials,” Phys. Rev. B75(2), 024304 (2007). 10. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, 1983). 11. S. Steshenko and F. Capolino, “Single dipole approximation for modeling collections of nanoscatterers,” in Theory and Phenomena of Metamaterials, F. Capolino, ed. (CRC Press, 2009), 8.1. 12. P. Myun-Joo, P. Jongkuk, and N. Sangwook, “Efficient calculation of the Green's function for the rectangular cavity,” IEEE Microw. Guid. Wave Lett.8(3), 124–126 (1998). 13. I. Stevanoviæ and J. R. Mosig, “Periodic Green's function for skewed 3-D lattices using the Ewald transformation,” Microw. Opt. Technol. Lett.49(6), 1353–1357 (2007). 14. G. Lovat, P. Burghignoli, and R. Araneo, “Efficient evaluation of the 3-D periodic Green's function through the Ewald method,” IEEE Trans. Microw. Theory Tech.56(9), 2069–2075 (2008). 15. A. Vallecchi, M. Albani, and F. Capolino, “Collective electric and magnetic plasmonic resonances in spherical nanoclusters,” Opt. Express19(3), 2754–2772 (2011). 16. J. D. Jackson, Classical Electrodynamics (Wiley, 1998). 17. V. A. Markel, V. N. Pustovit, S. V. Karpov, A. V. Obuschenko, V. S. Gerasimov, and I. L. Isaev, “Electromagnetic density of states and absorption of radiation by aggregates of nanospheres with multipole interactions,” Phys. Rev. B70(5), 054202 (2004). 18. D. W. Mackowski, “Calculation of total cross sections of multiple-sphere clusters,” J. Opt. Soc. Am. A11(11), 2851–2861 (1994). 19. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover Publications, 1965). 20. A. D. Rakic, A. B. Djurisic, J. M. Elazar, and M. L. Majewski, “Optical properties of metallic films for vertical-cavity optoelectronic devices,” Appl. Opt.37(22), 5271–5283 (1998). 21. S. Campione, M. Albani, and F. Capolino, “Complex modes and near-zero permittivity in 3D arrays of plasmonic nanoshells: loss compensation using gain [Invited],” Opt. Mater. Express1(6), 1077–1089 (2011). 22. A. Alù, A. Salandrino, and N. Engheta, “Negative effective permeability and left-handed materials at optical frequencies,” Opt. Express14(4), 1557–1567 (2006). 23. I. El-Kady, M. M. Sigalas, R. Biswas, K. M. Ho, and C. M. Soukoulis, “Metallic photonic crystals at optical wavelengths,” Phys. Rev. B62(23), 15299–15302 (2000). 24. P. B. Johnson and R. W. Christy, “Optical constants of the noble metals,” Phys. Rev. B6(12), 4370–4379 (1972). 25. M. G. Silveirinha, “Generalized Lorentz-Lorenz formulas for microstructured materials,” Phys. Rev. B76(24), 245117 (2007). 26. A. Alù, “First-principles homogenization theory for periodic metamaterials,” Phys. Rev. B84(7), 075153 (2011). 27. S. Steshenko, F. Capolino, P. Alitalo, and S. Tretyakov, “Effective model and investigation of the near-field enhancement and subwavelength imaging properties of multilayer arrays of plasmonic nanospheres,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys.84(1 Pt 2), 016607 (2011). 28. A. M. Nicolson and G. F. Ross, “Measurement of the intrinsic properties of materials by time-domain techniques,” IEEE Trans. Instrum. Meas.19(4), 377–382 (1970). 29. W. B. Weir, “Automatic measurement of complex dielectric constant and permeability at microwave frequencies,” Proc. IEEE62(1), 33–36 (1974). 30. A. H. Boughriet, C. Legrand, and A. Chapoton, “Noniterative stable transmission/reflection method for low-loss material complex permittivity determination,” IEEE Trans. Microw. Theory Tech.45(1), 52–57 (1997). 31. D. R. Smith, S. Schultz, P. Markoscaron, and C. M. Soukoulis, “Determination of effective permittivity and permeability of metamaterials from reflection and transmission coefficients,” Phys. Rev. B65(19), 195104 (2002). 32. C. R. Simovski, “On the extraction of local material parameters of metamaterials from experimental or simulated data,” in Theory and Phenomena of Metamaterials, F. Capolino, ed. (CRC Press, 2009), 11.1. 33. S. A. Ramakrishna and T. M. Grzegorczyk, Physics and Applications of Negative Refractive Index Materials (CRC Press and SPIE Press, 2009). 34. A. Sihvola, Electromagnetic Mixing Formulas and Applications (IEEE Publishing, 1999). 35. A. Sihvola, “Mixing rules,” in Theory and Phenomena of Metamaterials, F. Capolino, ed. (CRC Press, 2009), 9.1. 36. S. Campione, S. Steshenko, M. Albani, and F. Capolino, “Characterization of the optical modes in 3D-periodic arrays of metallic nanospheres,” in URSI General Assembly and Scientific Symposium (Istanbul, Turkey, 2011). 37. X. Chen, T. M. Grzegorczyk, B.-I. Wu, J. Pacheco, and J. A. Kong, “Robust method to retrieve the constitutive effective parameters of metamaterials,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys.70(1 Pt 2), 016608 (2004). 38. P. P. Ewald, “Die Berechnung optischer und elektrostatischer Gitterpotentiale,” Ann. Phys. (Berlin)369(3), 253–287 (1921). 39. K. E. Jordan, G. R. Richter, and P. Sheng, “An efficient numerical evaluation of the Green's function for the Helmholtz operator on periodic structures,” J. Comput. Phys.63(1), 222–235 (1986). 40. A. Kustepeli and A. Q. Martin, “On the splitting parameter in the Ewald method,” IEEE Microw. Guid. Wave Lett.10(5), 168–170 (2000). OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. Fig. 1 Fig. 2 Fig. 3 Fig. 4 Fig. 5 Fig. 6 Fig. 7 Fig. 8 Fig. 9 Fig. 10 Fig. 11 Fig. 12 Fig. 13 Fig. 14 « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-19-27-26027","timestamp":"2014-04-16T17:53:49Z","content_type":null,"content_length":"196858","record_id":"<urn:uuid:c90fbc70-9b44-41bc-9533-c8e87ff9416f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
The Floquet spectrum of a quantum graph Seminar Room 1, Newton Institute We define the Floquet spectrum of a quantum graph as the collection of all spectra of operators of the form $D=(-i\frac{\partial}{\partial x}+\alpha(\frac{\partial}{\partial x}))^2$ where $\alpha$ is a closed $1$-form. We show that the Floquet spectrum completely determines planar 3-connected graphs (without any genericity assumptions on the graph). It determines whether or not a graph is planar. Given the combinatorial graph, the Floquet spectrum uniquely determines all edge lengths of a quantum graph. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/AGA/seminars/2010072617301.html","timestamp":"2014-04-18T08:08:38Z","content_type":null,"content_length":"5985","record_id":"<urn:uuid:5425416b-c769-47e9-a552-41c6845966e7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Genetic Art Kyrre Glette and I wrote a simple image generator using genetic programming for an artificial intelligence course. Here are some of the images it made. The names are also random. How it works An image is generated by a mathematical expression that calculates a value for each pixel given its (x,y) coordinates. For instance, the simple formula x*y looks like this: The coordinates range from -1 to 1. The bottom right corner, for instance, has coordinates (1,1), and thus gets the color 1*1 = 1, or white. Here's a typical expression: The expression trees for the images at the top range from 40 to 1000 operations. These all need to be evaluated from scratch for each individual pixel. To make the system fast enough for us to play with it in near real time, the expressions were compiled to x87 machine code on the fly. The functions were all generated by randomly mutating graph nodes using a few simple mutations: Individual nodes in the expression tree would be replaced by similar nodes (often) or by new subtrees (more rarely). Also, two images could be cross-bred, merging the 'genes' of two good-looking images to generate offspring, hopefully combining the valuable parts of each parent image. By adding looping constructs to the 'expression language', we were able to make the system invent fractal shapes. Since the images are built from mathematical expressions and not pixels, they can be generated at any resolution. Rendering 50-megapixel images suitable for print is a matter of toggling a switch on the command line. There are many numerical constants in the expressions. The overall structure and feel of an image comes from its structure, but the actual incarnation is determined by those constants. The image is only a single point in the 100-or-so-dimensional space of possible parameters. I hacked together a small script that would plot an ellipse through the many-dimensional space intersecting the point defining the original image. Here are some animations: An interesting thing to do with this would be to generate the parameters in real time based on some external data source (from, say, running music through a frequency analyzer).
{"url":"http://www.steike.com/tech/genetic-art/","timestamp":"2014-04-16T16:43:25Z","content_type":null,"content_length":"12278","record_id":"<urn:uuid:a59ce9d2-5bfa-458f-aa21-d166b400eea4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Castle Point, NJ Calculus Tutor Find a Castle Point, NJ Calculus Tutor ...I have a degree in physics and a minor in mathematics. I'm also currently working on a masters in applied mathematics & statistics. In several courses, such as Ordinary Differential Equations (ODE) and Partial Differential Equations (PDE), we make heavy use of programs such as Mathematica and Maple. 83 Subjects: including calculus, chemistry, physics, statistics ...At first, chemistry can be really tough because it seems like there is always an exception to the rule, but once you begin to think like an atom, things get less electron cloudy. I can help you forgive your grievances with chemistry and let your pi-bonds be pi-bonds. At first, physics didn't make any sense to me either. 9 Subjects: including calculus, chemistry, physics, biology ...What makes mathematics different from other subjects is that it builds upon itself and so the need of a good foundation is vital. The qualities that I have as a tutor are that I'm very knowledgeable and enthusiastic with the subject matter,I can deliver it in a very simple and understanding fash... 18 Subjects: including calculus, statistics, algebra 1, algebra 2 ...We will work together to find what makes your best learning experiences. My credentials include state certifications to teach Mathematics to High School level and to teach Students with Disabilities. We can find what will work best for you. 18 Subjects: including calculus, writing, geometry, algebra 2 ...I think learning C is a good opportunity not only to learn a widely-used programming language, but also to explore the ideas behind programming in general. C specifically provides an excellent chance to cement good programming practices, like memory management, garbage collection, and making efficient choices. I love programming. 37 Subjects: including calculus, chemistry, French, physics Related Castle Point, NJ Tutors Castle Point, NJ Accounting Tutors Castle Point, NJ ACT Tutors Castle Point, NJ Algebra Tutors Castle Point, NJ Algebra 2 Tutors Castle Point, NJ Calculus Tutors Castle Point, NJ Geometry Tutors Castle Point, NJ Math Tutors Castle Point, NJ Prealgebra Tutors Castle Point, NJ Precalculus Tutors Castle Point, NJ SAT Tutors Castle Point, NJ SAT Math Tutors Castle Point, NJ Science Tutors Castle Point, NJ Statistics Tutors Castle Point, NJ Trigonometry Tutors Nearby Cities With calculus Tutor Allwood, NJ calculus Tutors Ampere, NJ calculus Tutors Bayway, NJ calculus Tutors Beechhurst, NY calculus Tutors Bellerose Manor, NY calculus Tutors Doddtown, NJ calculus Tutors Dundee, NJ calculus Tutors Five Corners, NJ calculus Tutors Fort George, NY calculus Tutors Greenville, NJ calculus Tutors Highbridge, NY calculus Tutors Hoboken, NJ calculus Tutors Linden Hill, NY calculus Tutors Manhattanville, NY calculus Tutors Pamrapo, NJ calculus Tutors
{"url":"http://www.purplemath.com/Castle_Point_NJ_calculus_tutors.php","timestamp":"2014-04-20T04:18:54Z","content_type":null,"content_length":"24362","record_id":"<urn:uuid:547d63fb-a7fd-4720-9b6e-863eca69e953>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
BKS pairing in the SU(2) Chern-Simons theory up vote 5 down vote favorite I know that usually, the way to compare the Hilbert spaces arising from $SU(2)$ Chern-Simons theory with different Kähler polarizations is via the Hitchin connection. However, it should be possible, I would think, to try to use a BKS pairing between them. Are any results known on whether this pairing is unitary? (My guess would be that the answer is no, based upon the results of Kirwin et al for toric varieties, but then the character varieties that are quantized in Chern-Simons theory aren't really toric varieties). Likewise for real polarizations, I know Jeffrey and Weitsman did some work on BKS pairings but I don't recall seeing any theorems in their papers about whether the pairing was unitary for real polarizations in general. Has there been further work done on the pairings for real polarizations? gt.geometric-topology mp.mathematical-physics is it known that BKS and Hitchin-KZ are different ? In the context of CS Hitchin=KZ are so natural that I would guess any natural construction should coincide with it... By the way BKS is Blattner-Kostant-Sternberg, am I right ? – Alexander Chervov Apr 25 '12 at 6:31 Yes, sorry, BKS is Blattner-Kostant-Sternberg. They two constructions cannot be exactly equal. Hitchin depends on a path through Teichmuller space (although it is projectively path-independent). BKS doesn't, so they are at most projectively equivalent. – Blake Apr 25 '12 at 19:31 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged gt.geometric-topology mp.mathematical-physics or ask your own question.
{"url":"https://mathoverflow.net/questions/95119/bks-pairing-in-the-su2-chern-simons-theory","timestamp":"2014-04-17T22:02:30Z","content_type":null,"content_length":"48212","record_id":"<urn:uuid:37841451-397f-41a4-aec3-85607cd48b82>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
LKML: Jamie Lokier: Re: [OFFTOPIC] Re: Virtual Machines, JVM in kernel Messages in this thread Date Sat, 5 Sep 1998 18:16:28 +0100 From Jamie Lokier <> Subject Re: [OFFTOPIC] Re: Virtual Machines, JVM in kernel David Wragg wrote: | It's worse than that. In general, "Safe" execution of C programs | simply cannot be proved (for some useful definition of safe). And if Brandon S. Allbery wrote: > Which is why I've been leery of the concept: given a chunk of C code with a > proof attached, the proof is untrustworthy (in point of fact, it *lies* if > it claims there are no buffer overflows, except in degenerate cases that > only use scalar values --- but the packet itself is not a scalar). No, no you have it wrong, I'm quite sure. In general, most C programs are not safe. But if they were all safe we wouldn't need an associated *Some* C programs that operate on arrays are safe, and can be proven to be safe. I've given several examples already. The degenerate cases you mention are but the simplest. Finding these proofs is generally very difficult, but in some cases it is not difficult. Verifying proofs is possible and is, in general, much more efficient than finding a proof. And when a proof is verified, it *is* trustworthy. Or you have screwed up your logic. But that means you wrote the verifier wrong. Please note that a proof does not "claim" anything, much less "there are no buffer overflows". A proof is not a series of statements to be believed. It is a guide for the verification process to deduce statements about the code -- every statement deduced satisfies the verification logic. Summary: proofs are considered _untrustworthy_ by the verifier, so they cannot break it. Perhaps you are thinking of a certificate, which is what ActiveX uses. > You can verify that the proof doesn't work, probably, which would be > good enough... except that (as noted) *no* purported proof will pass > this because the desired condition is not provable. I disagree with you on this point, because there are programs _I_ can prove (to your satisfaction, I hope) use arrays and do not cause buffer overflows. I am confident proofs for a subset of these programs can be verified mechanically. But I can't prove it without an example ;-) > So proof-carrying code isn't going to work here. If you're right that no mechanically-verifiable proofs can be found to satisfy the safety requirements, you have a point. But my limited experience with theorem provers and formal semantics of programming languages suggests otherwise. -- Jamie To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu Please read the FAQ at http://www.tux.org/lkml/faq.html
{"url":"http://lkml.org/lkml/1998/9/5/69","timestamp":"2014-04-19T19:43:48Z","content_type":null,"content_length":"9990","record_id":"<urn:uuid:eb07d502-e54e-4c87-9b6e-a29aad29d35e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
problem w/ depth sorting [Archive] - OpenGL Discussion and Help Forums 10-18-2003, 06:44 AM Howdy all. I'm having a problem conceptually figuring out how to do proper, fast depth sorting. I've got a world made up of objects, each with a bounding box. The objects are frustum culled, and because i'm using gluLookAt for my camera, I can't simply do a qsort with the z value of the The first conecpt is to use a proper Dist() formula, but with a lot of objects, I know that square root computation will eat up my CPU processing. I haven't been able to find anything else except using the ZBuffer, which would require a glReadPixels, and in my mind, has the same slowdown effect as sqrt(). Can someone help point me in the right direction as to how the data should be sorted?
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-144457.html","timestamp":"2014-04-18T18:20:57Z","content_type":null,"content_length":"6999","record_id":"<urn:uuid:9f74bd5a-62a7-41e0-bfbb-3d4f7c095e60>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Download ebook: Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition Ronald J. Harshbarger, James J. Reynolds ISBN: 0618654216,9780618654215 | 1105 pages | 19 Mb Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition Ronald J. Harshbarger, James J. Reynolds Publisher: Brooks Cole Download free pdf Prentice-Hall; 8th International edition edition (May 1, 2003 ) | ISBN: 0131911503 | PDF | 12.26 MB | 336 pages Linear programming is tested thoroughly, including applications of simplex, dual, big M, and two-phase methods for utilizing slack, surplus and artificial variables. Management Skills for Everyday Life, Mathematical Applications for the Management, Life, and Social Sciences, 9th Edition, Harshbarger, Reynolds, Solutions Manual. Reynolds, Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition Publisher: B.o.ks C.l. Reynolds, "Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition" B.o.ks C.l. Financial Management: Principles and Applications, 11th Edition 2011, Titman, Martin, Keown, Instructor Manual. Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition by Ronald J. Download Free eBook:Mathematical Applications for the Management, Life, and Social Sciences (9th edition) [Repost] - Free chm, pdf ebooks rapidshare download, ebook torrents bittorrent download. Mathematical Applications for the Management, Life, and Social Sciences, 9th Edition is intended for a two-semester applied calculus or combined finite mathematics and applied calculus course. Mathematical Applications: For the Management, Life, and Social Sciences 8th Edition PDF Ronald J. Management Skills for Everyday Life, 3rd Edition 2012, Paula Caproni, Instructor Manual. Macroeconomics plus MyEconLab plus .. Graphics Calculator Guide for Mathematical Applications for the Management, Life, and Social scien Ces book download Harshbarger / Reynolds Download Graphics Calculator Guide for Mathematical Applications for the Management and social sciences eighth edition Book Companion Site - Instructor Mathematical Applications for the Management, Life, and Social Sciences, 9th Edition Ronald J. The Eighth Edition retains the features that have made this text a popular choice, including applications covering diverse topics that are important to students in the management, life, and social sciences. Download ebook Applied Mathematics for Business, Economics, Life Sciences and Social Sciences by Raymond Barnett, Michael Ziegler and Karl Byleen pdf free. Finite Mathematics for the Managerial, Life, and Social Sciences, 8th Edition, Tan, Test Bank. | ISBN: 0618654216 | 2006 | PDF | 1104. MATHEMATICAL APPLICATIONS FOR THE MANAGEMENT, LIFE, AND SOCIAL SCIENCES, 9th EDITION is intended for a two-semester applied calculus or combined finite mathematics and applied calculus course. Macroeconomics plus MyEconLab plus eBook 2-semester Student Access Kit, 8th Edition, Michael Parkin, Instructor Manual. Chapter Zero: Fundamental Notions of Abstract Mathematics, 2nd Edition, Schumacher, Solutions Manual Check-In Check-Out: Managing Hotel Operations, 8th Edition, Emeritus, Vallen, Instructor Manual Check-In Check-Out: Managing Hotel Operations, 8th Edition, Emeritus, Vallen, Test Bank Chemical, Biochemical . Mathematical Applications: For the Management, Life, and Social Sciences, 8th Edition. Other ebooks: Yesterday, I Cried: Celebrating the Lessons of Living and Loving book
{"url":"http://qyjaxinu.jimdo.com/2013/07/26/download-ebook-mathematical-applications-for-the-management-life-and-social-sciences-8th-edition/","timestamp":"2014-04-17T03:48:54Z","content_type":null,"content_length":"13664","record_id":"<urn:uuid:9ba5d4c3-5489-4ccc-afee-be7b22ad0829>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
What is probability? May 13th 2010, 09:50 AM #1 Jun 2008 If someone picks 6 numbers first from A={1,2,3,...,46,47,48,49} remeber these numbers and put these back. Then if he picks 7 numbers from A={1,2,3,...,46,47,48,49} what is the probability to take the first six numbers? what does this to take the first six numbers? and are you sampling with or without replacement each time? To be more precisly, Look if there are two Set with balls If you take six balls from one of them and after that you take seven balls from another. What is probability to have the six numbers from set A, same with six of seven from set B? Is the correct? Once we have a set of six numbers from batch A, then we randomly select seven numbers from batch B. Question: what is the probability that the six A-numbers are among the seven B-numbers? If that is correct then note that there are $\binom{49}{7}$ possible combinations of seven B-numbers. Of all those only 43 contain the six A-numbers. Yes it is correct! Can you tell me the probability pls...? May 13th 2010, 03:31 PM #2 May 15th 2010, 08:06 AM #3 Jun 2008 May 15th 2010, 08:24 AM #4 May 16th 2010, 08:45 AM #5 Jun 2008
{"url":"http://mathhelpforum.com/advanced-statistics/144566-what-probability.html","timestamp":"2014-04-17T08:33:22Z","content_type":null,"content_length":"41725","record_id":"<urn:uuid:e0bf9a6b-725d-4867-a25c-a8226609a9dc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
balancedness in n-pretoposes 1-pretoposes are balanced It is well-known that in a 1-pretopos, 1. every monic is regular, and as a consequence 2. a 1-pretopos is balanced (every monic epic is an isomorphism), and 3. every epic is regular. 2-pretoposes are not balanced For a similar statement in an $n$-pretopos for $n>1$, the natural first guess is to replace “monic” by “ff” and “regular epic” by “eso.” However, there are (at least) two reasonable replacements for “epic” and “regular monic:” 1. We could replace “epic” by “cofaithful” and “regular monic” by “equifier,” or 2. we could replace “epic” by “coconservative” (aka “liberal”) and “regular monic” by “inverter.” Since esos in Cat are cofaithful and liberal but not co-ff, it wouldn’t work to replace “epic” by “co-ff.” Unfortunately, both ideas fail in Cat, where equifiers and inverters are not just ff but are closed under retracts. Likewise, the map from a category to its Cauchy completion is full, faithful, cofaithful, and liberal, but generally not an equivalence. There are two ways to deal with this problem. One is to restrict to smaller values of $n$, for which there are no nontrivial retracts. The other is to go back and change “ff” to “ff and closed under retracts” and change “eso” to “surjective up to retracts.” (2,1)-pretoposes are balanced In a (2,1)-exact positive coherent 2-category, every ff with groupoidal codomain is an equifier (in fact, an identifier of an involution). Recall that an identifier of a 2-cell $\alpha :f\to f:A\to B$ in a 2-category is the equifier of $\alpha$ and ${1}_{f}$. By an involution we mean an (invertible) 2-cell that is its own inverse. Let $m:A\to B$ be ff with $B$ groupoidal, and consider first the (2,1)-congruence given by $B+B\phantom{\rule{thickmathspace}{0ex}}⇉\phantom{\rule{thickmathspace}{0ex}}B$, where one copy of $B$ gives the identities, and the composition treats the other copy of $B$ as an involution. Its quotient is the copower of $B$ by the “walking involution.” Now consider the following equivalence relation on $B+B$ in $\mathrm{disc}\left(K/B×B\right)$. We have $\left(B+B\right){×}_{B×B}\left(B+B\right)\simeq \left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right)$(B+B)\times_{B\times B}(B+B) \simeq (B\times_{B\ times B} B)+(B\times_{B\times B} B)+(B\times_{B\times B} B)+(B\times_{B\times B} B) and we define the relation to be given by $\begin{array}{c}B+A+A+B\\ ↓\\ \left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right)+\left(B{×}_{B×B}B\right).\end{array}$\array{B+A+A+B \\ \downarrow\\ (B\times_{B\times B} B)+ (B\times_{B\times B} B)+(B\times_{B\times B} B)+(B\times_{B\times B} B).} Since $\mathrm{disc}\left(K/B×B\right)\simeq \mathrm{disc}\left({\mathrm{Fib}}_{K}\left(B×B\right)\right)$ is a 1-pretopos, this relation has a quotient, say $\left[f,g\right]:B+B\to C$. It is easy to verify that $a,b:C\phantom{\rule{thickmathspace}{0ex}}⇉\phantom{\rule{thickmathspace}{0ex}}B$ is then a (2,1)-congruence on $B$ with $af=ag=bf=bg={1}_{B}$. (This depends on $B$ being groupoidal; otherwise it would be a homwise-discrete category but not necessarily a congruence.) Let $q:B\to D$ be the quotient in $K$ of this (2,1)-congruence. Then we have a 2-fork $\varphi :qa\to qb$ such that $\varphi f={1}_{q}$ and $\varphi g$ is an involution of $q$. We claim that $m$ is an identifier of $\varphi g$. By construction of $C$, we have $\varphi gm={1}_{qm}$, so since $m$ is ff, it suffices to show that for any $x:X\to B$ with $\varphi gx={1}_{qx}$, $x$ factors through $m$. But since $C$ with $\varphi$ is the kernel of $q$, the assumption $\varphi gx={1}_{qx}=\varphi fx$ implies that in fact $fx=gx$. If we write $i,j:B\phantom{\rule {thickmathspace}{0ex}}⇉\phantom{\rule{thickmathspace}{0ex}}B+B$ for the two inclusions, this means that $ix:X\to B+B$ and $jx:X\to B+B$ become equal in $C$, and therefore factor through the kernel pair of $\left[f,g\right]$, namely $B+A+A+B$. But this is evidently tantamount to saying that $x$ factors through $A$. In a (2,1)-pretopos, 1. every ff is an equifier, 2. every cofaithful ff is an equivalence, and 3. every cofaithful morphism is eso. Theorem 1 shows the first statement. Then any ff $f:A\to B$ is an equifier of $\alpha ,\beta :g\phantom{\rule{thickmathspace}{0ex}}⇉\phantom{\rule{thickmathspace}{0ex}}h:B\phantom{\rule {thickmathspace}{0ex}}⇉\phantom{\rule{thickmathspace}{0ex}}C$, so in particular $\alpha f=\beta f$; but if $f$ is also cofaithful, this implies $\alpha =\beta$, and thus their equifier $f$ is an equivalence. Finally, if $f$ is just cofaithful, we factor it as $f=me$ where $m$ is ff and $e$ is eso; but then $m$ is also cofaithful, hence an equivalence, and so $f$, like $e$, is eso. (1,2)-pretoposes are balanced In a (1,2)-exact positive coherent 2-category, every ff with posetal codomain is an inverter. Let $m:A\to B$ be ff, and consider the 2-congruence on $B+B$ defined as follows. We have (B_0+B_1) = (B_0\times B_0)+(B_0\times B_1)+(B_1\times B_0)+(B_1\times B_1), (adding subscripts to distinguish the two copies of $B$) and the congruence is given by $\begin{array}{ccccccc}{B}^{2}& +& {B}^{2}& +& Y& +& {B}^{2}\\ & & & ↓\\ \left({B}_{0}×{B}_{0}\right)& +& \left({B}_{0}×{B}_{1}\right)& +& \left({B}_{1}×{B}_{0}\right)& +& \left({B}_{1}×{B}_{1}\ right).\end{array}$\array{ B ^{\mathbf{2}} &+& B ^{\mathbf{2}} &+& Y &+& B ^{\mathbf{2}}\\ &&& \downarrow\\ (B_0\times B_0) &+& (B_0\times B_1) &+& (B_1\times B_0) &+& (B_1\times B_1).} Here $Y↪{B}^{2}$ is defined to be the ff image of the “composition” morphism $\left({1}_{B}/m/{1}_{B}\right)\to {B}^{2}$; in other words it is “the object of arrows in $B$ which factor through some element of $A$.” The composition is easy to define making this into a 2-congruence, and if $B$ is posetal, then it is a (1,2)-congruence. Let $\left[p,q\right]:B+B\to C$ be the quotient of this congruence. Analogously to the proof of Theorem 1, the fork defining this quotient gives a 2-cell $\varphi :p\to q$ such that $\varphi m$ is an isomorphism. Thus, to show that $m$ is the inverter of $\varphi$, it suffices to show that for any $x:X\to B$ with $\varphi x$ invertible, $x$ factors through $m$. Now $\varphi x$ is induced by the fork defining $C$ together with the composite $X\to B\to {B}^{2}$. If $\varphi x$ is invertible, then its inverse is given by some map $y:X\to Y$, which must lie over the diagonal $X\to B\to B×B$. Now, pulling back the eso $\left(1/m/1\right)\to Y$ along $y$ we obtain an eso $r:Z\to X$ with a morphism $s:Z\to A$ such that the identity 2-cell of $xr$ is the composite of 2-cells $xr\to ms\to xr$ ; in other words, $xr$ is a retract of $ms$. But since $B$ is posetal, this means $xr\cong ms$, and then the fact that $r$ is eso implies that $x$ factors through $m$, as desired. In a (1,2)-pretopos, 1. every ff is an inverter, 2. every liberal ff is an equivalence, and 3. every liberal morphism is eso. Just like Corollary 1 but using Theorem 2 instead. 2-pretoposes are Cauchy balanced Now, in any regular 2-category, in addition to the (eso,ff) factorization system we also have a Cauchy factorization system consisting of the cso (Cauchy surjective) and rff (ff and retract-closed) morphisms. Moreover, every cso is cofaithful and liberal. In “Modulated bicategories” by Carboni, Johnson, Street, and Verity (CJSV), it is shown that the liberal functors in Cat are precisely the Cauchy surjective ones; we now show that the same is true in any 2-pretopos. In a 2-pretopos, 1. every rff is an inverter, 2. every liberal rff is an equivalence, and 3. every liberal morphism is Cauchy surjective. In particular, every liberal morphism is cofaithful. The first statement is proven exactly like Theorem 2, except that at the last step we use the assumption that $m$ is retract-closed rather than the assumption that $B$ is posetal. The other statements follow as before, recalling that Cauchy surjective morphisms are cofaithful. In (CJSV) a 2-category is defined to be co-conservational (liberational?) if • it has finite colimits, • it has (liberal, strong conservative) factorizations, • strong conservative morphisms are preserved by copowers with $2$, and • strong conservative morphisms are stable under pushout, and faithfully co-conservational if moreover • every liberal morphism is cofaithful. Here a strong conservative morphism is one that is right orthogonal to all liberals. Theorem 3 shows that in a 2-pretopos, strong conservatives coincide with rffs, since liberals coincide with csos; thus any 2-pretopos satisfies the second condition to be co-conservational. It need not have finite colimits, but this can be remedied with some infinitary structure. The construction of copowers in a 2-pretopos can be used to show that rffs are preserved by copowers with $2$. And we have also seen that since every liberal is cso, it is cofaithful. However, even $\mathrm{Cat}$ fails the final condition, as shown in (CJSV, Prop. 3.4). This can be remedied by passing to the 2-category ${\mathrm{Cat}}_{\mathrm{cc}}$ of Cauchy-complete categories. I do not yet know whether a similar construction is possible in any 2-pretopos.
{"url":"http://ncatlab.org/michaelshulman/show/balancedness+in+n-pretoposes","timestamp":"2014-04-19T11:59:16Z","content_type":null,"content_length":"37762","record_id":"<urn:uuid:c8e0826b-0be1-4bc1-a67c-501dc2c9c416>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
• Jefferson?s Presidency is considered a transitional period in US History. Many historians look at this time period as the beginning of ?true democracy?. Jeffersonian democracy Champion for the common man?.Believed education would prepare them for participation in government?.. But for now, educated should rule? • In Euclidean Geometry, it is defined as an undefined term • Geometry Cheat Sheet of Theorems, Postulates, and Corollaries Text automatically extracted from attachment below. Please download attachment to view properly formatted document.---Extracted text from uploads/geometry/ • A Development of Proof Through Cyclic Quadrilaterals Developed by Kimberly Aitken This lesson was developed for use in a geometry class as a stepping stone towards the mastery of proof writing skills. It incorporates knowledge on the geometry of circles as well as angles, and it provides students with additional exposure beyond the curriculum on writing proofs. • HI Text automatically extracted from attachment below. Please download attachment to view properly formatted document.---Extracted text from uploads/geometry/document_1.docx--- • Text automatically extracted from attachment below. Please download attachment to view properly formatted document.---Extracted text from uploads/geometry/algebra_paper_.docx--- • GRADUATE RECORD EXAMINATIONS? Math Review Chapter 3: Geometry Copyright ? 2010 by Educational Testing Service. All rights reserved. ETS, the ETS logo, GRADUATE RECORD EXAMINATIONS, and GRE are registered trademarks of Educational Testing Service (ETS) in the United States and other countries. • Geometry is the math of shapes. Text automatically extracted from attachment below. Please download attachment to view properly formatted document.---Extracted text from uploads/geometry/geometry_is_the_math_of_shapes.docx--- • Geometry When drawing a line segment make sure to have the two endpoints marked and correctly labeled. A B This is the proper label to a line segment • Courtney McCoy Dr. Kyler English 112 12 October 2012 Change in Iranian Film after the Iranian Revolution • Ap human geography is so cool Text automatically extracted from attachment below. Please download attachment to view properly formatted document.---Extracted text from uploads/geometry/ap_2.docx---
{"url":"http://www.course-notes.org/Geometry/premium","timestamp":"2014-04-18T05:33:33Z","content_type":null,"content_length":"55768","record_id":"<urn:uuid:67e6a088-f023-4572-8a52-c18f504d594c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
oint Presentations Cosmology - The Beginning of Time PPT Presentation Summary : Title: Cosmology - The Beginning of Time Subject: Astronomy 5 Author: Astronomy Staff Keywords: Astronomy 5, 2008 - Spring, Presentations Last modified by Source : http://astronomy.sierracollege.edu/Courses/Astronomy05/Astro05_Lecture11b.ppt Cosmology - Portland Community College PPT Presentation Summary : Cosmology: The Origin and Evolution of the Universe The universe shows structure at many scales subatomic particles atoms stars and planets star clusters and galaxies ... Source : http://spot.pcc.edu/~aodman/physics123/Cosmology.ppt Cosmology - Department of Physics + Astronomy PPT Presentation Summary : 0 Cosmology Please press “1” to test your transmitter. Measuring the “Deceleration” of the Universe … By observing type Ia supernovae, astronomers can ... Source : http://www.phy.ohiou.edu/~mboett/PSC100/spring12/cosmology.ppt Cosmology - God And Science.org PPT Presentation Summary : Evidence for God from Cosmology Richard Deem Evidence for God The Greatest Discovery (COBE, 1992) “unbelievably important... They have found the Holy Grail of ... Source : http://www.godandscience.org/ppt/evidence.ppt Cosmology - Chabot College PPT Presentation Summary : Title: Cosmology Author: Scott Hildreth - Chabot College Last modified by: Scott Hildreth Created Date: 10/12/2007 6:53:13 PM Document presentation format Source : http://www.chabotcollege.edu/faculty/shildreth/astronomy/lecture/Cosmology.ppt Cosmology - FSU Physics Department PPT Presentation Summary : 1 Topics Early Milestones in Cosmology The Expanding Universe Summary Early Milestones in Cosmology 1823 – Heinrich Olbers An infinite universe of infinite age ... Source : http://www.physics.fsu.edu/courses/spring08/phy3101/Lecture29.ppt Presentation Summary : Cosmology The Origin, Evolution, and Destiny of the Universe Questions What is the flatness problem What is the horizon problem? What is the solution of the flatness ... Source : http://webpages.ursinus.edu/dnagy/physics101q/lectures/18Cosmology/18Cosmology.ppt Cosmology - Dark Matter, Dark Energy, and Fate PPT Presentation Summary : Title: Cosmology - Dark Matter, Dark Energy, and Fate Subject: Astronomy 10 Author: Astronomy Staff Keywords: Astronomy 10, 2008 - Spring, Presentations Source : http://astronomy.sierracollege.edu/Courses/Astronomy10/Astro10_Lecture15a.ppt Presentation Summary : COSMOLOGY The origin of large structures The geometry of the universe The history of the universe COSMOLOGICAL CONCEPTS: THE UNIVERSE AS A WHOLE MAJOR QUESTIONS HOW BIG? Source : http://www.chara.gsu.edu/~wiita/a1020cosmology16a.ppt Presentation Summary : Title: Cosmology Author: Markus Boettcher Last modified by: Michael Brotherton Created Date: 3/3/2003 2:19:27 AM Document presentation format: On-screen Show (4:3) Source : http://physics.uwyo.edu/~mbrother/a1050f12/cosmology.ppt Presentation Summary : Cosmology Structure of the Universe Open/Flat Universe Expansion continues forever Galaxies separate 100 trillion yrs. star formation ceased Temperature decreases Low ... Source : http://www.brophy.net/Downloads/AIL%20Class%20on%20Reality%20&%20Unreality/Slide%20Shows/Cosmology%20Structure%20of%20Universe,%20Big%20Bang_VANNATTA.ppt Presentation Summary : Cosmology Part 1 As the universe expands, is the solar system expanding with it? Yes, if new spacetime is forming, then there is more space between all objects than ... Source : http://faculty.mwsu.edu/physics/jackie.dunn/phys1533/astro_lec11.ppt Cosmology - Montgomery College PPT Presentation Summary : 0 Chapter 15: Cosmology in the 21st Century Olbers’s Paradox Hubble’s Law The Expanding Universe Expanding Space The Expanding Universe (II) The Age of the ... Source : http://montgomerycollege.edu/Departments/planet/M_AS101/Seeds9th/chapter15.ppt Atheists’ Myths: Part 1, Biblical Cosmology PPT Presentation Summary : Title: Atheists’ Myths: Part 1, Biblical Cosmology Subject: Beliefs of Atheists About Christianity Author: Richard Deem Keywords: atheism,beliefs,Christianity,flat ... Source : http://www.godandscience.org/ppt/atheistsmyths1.ppt Cosmology, Astronomy and Modern Physics - Hays High School PPT Presentation Summary : Nuclear Physics. Back to Rutherford and his discovery of the nucleus. Also coined the term “proton” in 1920, and described a “neutron” in 1921 Source : http://www.hayshighindians.com/academics/classes/csadams/srphysics/notes/nuclear1.pptx Presentation Summary : Cosmology Equipped with his five senses, man explores the universe around him, and calls the adventure "science". -- Edwin Powell Hubble Cosmology Equipped with his ... Source : http://serendip.brynmawr.edu/local/suminst/eei05/L1_Cosmology.ppt Chapter 18: Cosmology - Glynn County School System PPT Presentation Summary : Chapter 18: Cosmology WHAT DO YOU THINK? What does the Universe include? Did the Universe have a beginning? Is the Universe expanding, fixed in size, or contracting? Source : http://flashmedia.glynn.k12.ga.us/webpages/hwatkins/files/16%20cosmology.ppt If you find powerpoint presentation a copyright It is important to understand and respect the copyright rules of the author. Please do not download if you find presentation copyright. If you find a presentation that is using one of your presentation without permission, contact us immidiately at
{"url":"http://www.xpowerpoint.com/ppt/cosmology.html","timestamp":"2014-04-19T22:29:02Z","content_type":null,"content_length":"17649","record_id":"<urn:uuid:8469d562-1b5d-469f-9e1e-adf077fa6269>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Beacon Lesson Plan Library Parallel and Perpendicular Lines Johnny Wolfe Santa Rosa District Schools Students work with parallel and perpendicular lines and their properties. Florida Sunshine State Standards Understands the relative size of integers, rational numbers, irrational numbers, real numbers and complex numbers. Understands that numbers can be represented in a variety of equivalent forms using integers, fractions, decimals, and percents, scientific notation, exponents, radicals, absolute value, or Understands and explains the effects of addition, subtraction, multiplication and division on real numbers, including square roots, exponents, and appropriate inverse relationships. The student understands the geometric concepts of symmetry, reflections, congruency, similarity, perpendicularity, parallelism, and transformations, including flips, slides, turns, and enlargements. Understands geometric concepts such as perpendicularity, parallelism, tangency, congruency, similarity, reflections, symmetry, and transformations including flips, slides, turns, enlargements, and Using a rectangular coordinate system (graph), applies and algebraically verifies properties of two-and three- dimensional figures, including distance, midpoint, slope, parallelism, and Florida Process Standards Numeric Problem Solvers 03 Florida students use numeric operations and concepts to describe, analyze, communicate, synthesize numeric data, and to identify and solve problems. - Overhead transparencies (if examples are to be worked on overhead) for Solving Equations Involving Parallel and Perpendicular Lines (see attached file). - Marking pens (for overhead). - Solving Equations Involving Parallel and Perpendicular Lines Examples (see attached file). - Solving Equations Involving Parallel and Perpendicular Lines Worksheet (see attached file). - Solving Equations Involving Parallel and Perpendicular Lines Checklist (see attached file). 1. Prepare transparencies (if teacher uses overhead for examples) for Solving Equations Involving Parallel and Perpendicular Lines Examples (see attached file). 2. Have marking pens (for overhead). 3. Have Solving Equations Involving Parallel and Perpendicular Lines Examples (see attached file) prepared and ready to demonstrate to students. 4. Have enough copies of Solving Equations Involving Parallel and Perpendicular Lines Worksheet (see attached file) for each student. 5. Have enough copies of Solving Equations Involving Parallel and Perpendicular Lines Checklist (see attached file) for each student. Prior Knowledge: Students should be familiar with slope and basic operation skills such as addition, subtraction, multiplication, division, exponents, fractions, decimals, solving equations, and point-slope form of a linear equation. Note: This lesson does not address the following: irrational, real, and complex numbers in SSS MA.A.1.4.2; decimals, percents, scientific notation, exponents, radicals, absolute value, logarithms in SSS MA.A.1.4.4; square roots, exponents, and appropriate inverse relationships in SSS MA.A.3.4.1; tangency, congruency, similarity, reflections, symmetry, rotation, transformations including flips, slides, turns, enlargements, and fractals in SSS MA.C.2.4.1; symmetry, reflections, congruency, similarity, transformations, including flips, slides, turns, and enlargements in MA.C.2.3.1; distance, midpoint in SSS MA.C.3.4.2. This lesson contains a checklist to assist the teacher in determining which students need remediation. The sole purpose of this checklist is to aide the teacher in identifying students that need remediation. Students who meet the “C” criteria are ready for the next level of learning. 1. Give examples of parallel lines and discuss the definition of a parallel line (see # 1 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 2. Work example #2 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 3. Work example #3 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 4. Work example #4 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 5. Discuss Thought Provoker with students (see # 5 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 6. Work example #6 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 7. Work example #7 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 8. Give students examples of perpendicular lines (see # 8 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 9. Discuss the Definition of Perpendicular Lines (see # 9 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 10. Show students how to derive the product of –1 from the slopes of perpendicular lines (see # 10 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 11. Work example #11 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 12. Work example #12 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 13. Work example #13 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 14. Work example #14 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 15. Work example #15 (see attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 16. Discuss Thought Provoker with students (see # 16 on attached file- Solving Equations Involving Parallel and Perpendicular Lines Examples). Answer student questions and comments. 17. Distribute the Solving Equations Involving Parallel and Perpendicular Lines Worksheet (see attached file). Make sure students understand the directions and assignment. 18. Distribute the Solving Equations Involving Parallel and Perpendicular Lines Checklist (see attached file). Describe what constitutes an “A,” “B,” “C,” “D,” and an “F” in the CHECKLIST. (This is to familiarize students with the grading procedure which will determine their level of proficiency.) 19. The student will write their response on the worksheet. 20. The teacher will move from student to student observing the students work and lending assistance. 21. Collect student work and use the appropriate checklists to assess work. Student worksheets will be taken up and scored according to “Solving Equations Involving Parallel and Perpendicular Lines Checklist”. These scores may be placed in the grade book if students have been given sufficient practice time. If this is the first time students have been given this information, it would not be appropriate to grade the work. Formatively assess until students have had sufficient practice. Have students prepare a list of everyday items that can observed using parallel and perpendicular lines. For example, power lines, rows in a garden, 4-way stops, etc. Web Links Web supplement for Parallel and Perpendicular Lines Parallel & Perpendicular Lines Web supplement for Parallel and Perpendicular Lines Math Help-Algebra-Linear Functions and Straight Lines Web supplement for Parallel and Perpendicular Lines Parallel and Perpendicular Lines Attached Files This file will contain: Solving Equations Involving Parallel and Perpendicular Lines examples, worksheet, worksheet key and checklist. File Extension: pdf Return to the Beacon Lesson Plan Library.
{"url":"http://www.beaconlearningcenter.com/Lessons/1750.htm","timestamp":"2014-04-20T08:15:29Z","content_type":null,"content_length":"10506","record_id":"<urn:uuid:c1472da2-1654-434c-98e0-d81b5e3aeeb3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
How to compute Kobayashi distance of compact Kaehler manifolds with postive Ricci curvature? up vote 1 down vote favorite Recently I just learned the Kobayashi distance on complex manifolds and wants to get some feeling of how it looks like on exmaples of manifolds with positive Ricci curvature. I have a feeling that the Kobayashi distance on those manifolds should vanish since those are not very "hyperbolic". A simple example is extended complex plane. then its Kobayashi distance vanishes since the automorphism group can contract one point very close to the origin while keeping the origin fixed. I believe it is similarly true for all complex projective spaces with usual complex structure since automorphism group is known. Can we have more examples? Or is there a counterexample (an example of Kaehler manifold with positive Ricci curvature but not identically vanishing Kobayashi distance)? complex-manifolds complex-analysis add comment 1 Answer active oldest votes Positive Ricci curvature Kaehler manifolds are Fano, and therefore rationally connected. See Janos Kollar's book Rational Curves on Algebraic Varieties. Therefore any two points lie up vote 4 down on a rational curve. Since the Kobayashi pseudodistance along a subvariety is never less than the pseudodistance in the ambient space, any pair of points lie at 0 pseudodistance. vote accepted Thank you very much for the quick answer. What happens if we change "compact" to "complete noncompact" or change "Ricci positive" to "Ricci nonnegative"? Is there a similar argument? I know almost zero of algebraic geometry. – Bo_Y Oct 26 '11 at 22:29 If the Ricci curvature is bounded from below by a positive constant, then complete implies compact, by the Bonnet-Myers theorem. I don't know whether Ricci nonnegative manifolds have identically zero pseudodistance. It seems likely. – Ben McKay Nov 14 '11 at 17:32 add comment Not the answer you're looking for? Browse other questions tagged complex-manifolds complex-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/79182/how-to-compute-kobayashi-distance-of-compact-kaehler-manifolds-with-postive-ricc/79185","timestamp":"2014-04-17T15:47:32Z","content_type":null,"content_length":"51553","record_id":"<urn:uuid:d5a441e9-ba65-4794-a7d2-a53489da7448>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: A strange anomaly in 3n+1 Replies: 1 Last Post: Apr 8, 2013 10:29 PM Messages: [ Previous | Next ] N29 Re: A strange anomaly in 3n+1 Posted: Apr 8, 2013 10:29 PM Posts: 6 From: Australia I know that n either enters the final stretch of its sequence via: Registered: 4/7/13 5->16->8->4->2-> OR 32->16->8->4->2->1 However does parity of n have any bearing on final path through 5 or 32? Finding 1 only 2^65-1 in test seems like an anomaly but what is the general ratio for results 32 or 5 for numbers say less than 1000. Date Subject Author 1/21/13 A strange anomaly in 3n+1 dan73 4/8/13 Re: A strange anomaly in 3n+1 N29
{"url":"http://mathforum.org/kb/message.jspa?messageID=8874535","timestamp":"2014-04-20T16:44:18Z","content_type":null,"content_length":"17331","record_id":"<urn:uuid:a3aabdc7-8d92-47f8-adc7-cdf4967b3db0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Types of Variables CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results. CO-7: Use statistical software to analyze public health data. Classifying Types of Variables LO 4.1: Determine the type (categorical or quantitative) of a given variable. LO 4.2: Classify a given variable as nominal, ordinal, discrete, or continuous. Variables can be broadly classified into one of two types: Below we define these two main types of variables and provide further sub-classifications for each type. Categorical variables take category or label values, and place an individual into one of several groups. Categorical variables are often further classified as either: • Nominal, when there is no natural ordering among the categories. Common examples would be gender, eye color, or ethnicity. • Ordinal, when there is a natural order among the categories, such as, ranking scales or letter grades. However, ordinal variables are still categorical and do not provide precise measurements. Differences are not precisely meaningful, for example, if one student scores an A and another a B on an assignment, we cannot say precisely the difference in their scores, only that an A is larger than a B. Quantitative variables take numerical values, and represent some kind of measurement. Quantitative variables are often further classified as either: • Discrete, when the variable takes on a countable number of values. Most often these variables indeed represent some kind of count such as the number of prescriptions an individual takes daily. • Continuous, when the variable can take on any value in some range of values. Our precision in measuring these variables is often limited by our instruments. Units should be provided. Common examples would be height (inches), weight (pounds), or time to recovery (days). One special variable type occurs when a variable has only two possible values. A variable is said to be Binary or Dichotomous, when there are only two possible levels. These variables can usually be phrased in a “yes/no” question. Gender is an example of a binary variable. Currently we are primarily concerned with classifying variables as either categorical or quantitative. Sometimes, however, we will need to consider further and sub-classify these variables as defined above. These concepts will be discussed and reviewed as needed. EXAMPLE: Medical Records Let’s revisit the dataset showing medical records for a sample of patients In our example of medical records, there are several variables of each type: • Age, Weight, and Height are quantitative variables. • Race, Gender, and Smoking are categorical variables. • Notice that the values of the categorical variable Smoking have been coded as the numbers 0 or 1. It is quite common to code the values of a categorical variable as numbers, but you should remember that these are just codes. They have no arithmetic meaning (i.e., it does not make sense to add, subtract, multiply, divide, or compare the magnitude of such values). Usually, if such a coding is used, all categorical variables will be coded and we will tend to do this type of coding for datasets in this course. • Sometimes, quantitative variables are divided into groups for analysis, in such a situation, although the original variable was quantitative, the variable analyzed is categorical. A common example is to provide information about an individual’s Body Mass Index by stating whether the individual is underweight, normal, overweight, or obese. This categorized BMI is an example of an ordinal categorical variable. • Categorical variables are sometimes called qualitative variables, but in this course we’ll use the term “categorical.” Software Activity LO 7.1: View a dataset in EXCEL, text editor, or other spreadsheet or statistical software. LO 4.1: Determine the type (categorical or quantitative) of a given variable. Why Does the Type of Variable Matter? The types of variables you are analyzing directly relate to the available descriptive and inferential statistical methods. It is important to assess how you will measure the effect of interest and know how this determines the statistical methods you can use. As we proceed in this course, we will continually emphasize the types of variables that are appropriate for each method we discuss. For To compare the number of polio cases in the two treatment arms of the Salk Polio vaccine trial, you could use • Fisher’s Exact Test • Chi-Square Test To compare blood pressures in a clinical trial evaluating two blood pressure-lowering medications, you could use • Two-sample t-Test • Wilcoxon Rank-Sum Test
{"url":"http://bolt.mph.ufl.edu/6050-6052/preliminaries/types-of-variables/","timestamp":"2014-04-16T13:16:22Z","content_type":null,"content_length":"81753","record_id":"<urn:uuid:0080b85e-fa7e-4785-985f-6326d17f1196>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Formal Calculus and Umbral Calculus We use the viewpoint of the formal calculus underlying vertex operator algebra theory to study certain aspects of the classical umbral calculus. We begin by calculating the exponential generating function of the higher derivatives of a composite function, following a very short proof which naturally arose as a motivating computation related to a certain crucial "associativity" property of an important class of vertex operator algebras. Very similar (somewhat forgotten) proofs had appeared by the 19-th century, of course without any motivation related to vertex operator algebras. Using this formula, we derive certain results, including especially the calculation of certain adjoint operators, of the classical umbral calculus. This is, roughly speaking, a reversal of the logical development of some standard treatments, which have obtained formulas for the higher derivatives of a composite function, most notably Faà di Bruno's formula, as a consequence of umbral calculus. We also show a connection between the Virasoro algebra and the classical umbral shifts. This leads naturally to a more general class of operators, which we introduce, and which include the classical umbral shifts as a special case. We prove a few basic facts about these operators. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v17i1r95","timestamp":"2014-04-17T04:09:58Z","content_type":null,"content_length":"15867","record_id":"<urn:uuid:7ccb11c9-41f5-4667-9e6a-d629e44c9dba>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Help!!!!! Determine the delivery radius for your shop. Draw a point on a coordinate plane where your shop will be located. Create two different radii lengths from your shop, and construct the circles that represent each delivery area. How much area will each delivery radius cover? Write the equation for each circle created. Please show your work for all calculations. • one year ago • one year ago Best Response You've already chosen the best response. I have to create a pizza shop. This is what I have so far. I'm very confused. Best Response You've already chosen the best response. Is your first radius 4.5 units long? Best Response You've already chosen the best response. Yes, it is. Best Response You've already chosen the best response. Ok, so you need to create another circle with a different radius length and then solve for the areas of the two different circles. The formula for area of a circle is pi(r)^2 where r represents the length of the radius. So, your first equation would be Pi(4.5)^2 Best Response You've already chosen the best response. Best Response You've already chosen the best response. The equation of a circle in x-y coordinates is: (x-g)² + (y-h)² = R² where the radius of the circle is R and the center of the circle is at (x,y) = (g, h) (g,h) would be the coordinates of the location of you shop, and R would be the radius of the delivery area. Assuming that your shop is located at the center of the coordinate system, (x,y) = (0,0), and your delivery radius is 10 miles, the equation of the circle representing that delivery area would be: (x-0)² + (y-0)² = 10² or x² + y² = 100 where x and y are measured in miles. Best Response You've already chosen the best response. I'm still struggling with the equation creator lol Best Response You've already chosen the best response. @ayeeraee is there is any doubts Best Response You've already chosen the best response. @best.shakir not yet, I'm gonna try this out Best Response You've already chosen the best response. he's right Best Response You've already chosen the best response. Best Response You've already chosen the best response. @chrisisbad92 how should I construct the new circles? Best Response You've already chosen the best response. However you like. Decide a new place for the midpoint of your 2nd circle and make sure the radius is either longer or shorter than 4.5. Best Response You've already chosen the best response. I'd assume you don't want them to intersect either, so using a different quadrant woud probably be best. Best Response You've already chosen the best response. Okay, I'm gonna try to work out the equation. Hold on.. Best Response You've already chosen the best response. hold on...shouldn't the 2 circles be concentric? they didn't mention to change the location of ur shop. so, for the second circle, keep the centre same and only change the value of ur radius. Best Response You've already chosen the best response. You're right Vaidehi09. The problem doesn't state two different locations for the midpoint. Best Response You've already chosen the best response. try 10.125*2 this may help chrisisbad92 Best Response You've already chosen the best response. @Vaidehi09, okay, so I can two radii pointing in different directions? Best Response You've already chosen the best response. i don't really get what u mean by that...but this is how ur figure should look like: |dw:1344003586308:dw| Best Response You've already chosen the best response. Basically two circles with one inside of the other one. Your shop is located in one position and you're making two different size radius circles representing two different possible delivery areas Best Response You've already chosen the best response. This is what I have now. Best Response You've already chosen the best response. And basically, my shop is the midpoint? Best Response You've already chosen the best response. no! that's the same circle! see, the lengths of both those radii is the same right? so its the same circle. u need a bigger or smaller radius for the second circle. Best Response You've already chosen the best response. Oh, okay, I see where I went wrong. Best Response You've already chosen the best response. two circles inside each other, one larger than the other where the midpoint is your shop. Best Response You've already chosen the best response. Okay, so this is correct? Best Response You've already chosen the best response. yup. that's it. Best Response You've already chosen the best response. sorry for the light colors. Best Response You've already chosen the best response. btw, for easier calculations, why don't u use origin (0,0) as ur centre? Best Response You've already chosen the best response. Looks good. Now just set up your two equations and solve for the area Best Response You've already chosen the best response. Also, both circles represent the delivery area. Therefore I have to find the area of both circles, correct? Best Response You've already chosen the best response. That's correct. Best Response You've already chosen the best response. that's right. Best Response You've already chosen the best response. Okay, I'm calculating now. Best Response You've already chosen the best response. I got 153.86 for the area of the large circle. Is it right? Best Response You've already chosen the best response. absolutely right. Best Response You've already chosen the best response. and 50.24 for the small circle? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Good job. Best Response You've already chosen the best response. Thank you so much. You guys are awesome!!! Best Response You've already chosen the best response. Best Response You've already chosen the best response. No problem! If you need any more help with anything just drop me a line Best Response You've already chosen the best response. @best.shakir, what is orx? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/501bd8c1e4b02742c0b2e1b5","timestamp":"2014-04-18T00:13:42Z","content_type":null,"content_length":"147083","record_id":"<urn:uuid:c6a9c71f-a7de-4095-ae80-a2f2bd88ddfc>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability non-portable (only tested with GHC 6.12.3) Stability stable Maintainer maxwellsayles@gmail.com Safe Haskell Safe-Inferred Persistent Disjoint-Sets (a.k.a. Union-Find). This implements disjoint-sets according to the description given in "Introduction to Algorithms" by Cormen et al (http://mitpress.mit.edu/algorithms). Most functions incur an additional O(logn) overhead due to the use of persistent maps. Disjoint-sets are a set of elements with equivalence relations defined between elements, i.e. two elements may be members of the same equivalence set. Each element has a set representative. The implementation works by maintaining a map from an element to its parent. When an element is its own parent, it is the set representative. Two elements are part of the same equivalence set when their set representatives are the same. In order to find the set representative efficiently, after each traversal from an element to its representative, we compress the path so that each element on the path points directly to the set representative. For this to be persistent, lookup is stateful and so returns the result of the lookup and a new disjoint set. Additionally, to make sure that path lengths grow logarithmically, we maintain the rank of a set. This is a logarithmic upper bound on the number of elements in each set. When we compute the union of two sets, we make the set with the smaller rank a child of the set with the larger rank. When two sets have equal rank, the first set is a child of the second and the rank of the second is increased by 1. Below alpha(n) refers to the extremely slowly growing inverse Ackermann function. insert :: Int -> IntDisjointSet -> IntDisjointSetSource Insert x into the disjoint set. If it is already a member, then do nothing, otherwise x has no equivalence relations. O(logn). unsafeMerge :: IntDisjointSet -> IntDisjointSet -> IntDisjointSetSource Given two instances of disjoint sets that share no members in common, computes a third disjoint set that is the combination of the two. This method is unsafe in that is does not verify that the two input sets share no members in common and in the event that a member overlaps, the resulting set may have incorrect equivalence union :: Int -> Int -> IntDisjointSet -> IntDisjointSetSource Create an equivalence relation between x and y. Amortized O(logn * alpha(n)). This function works by looking up the set representatives for both x and y. If they are the same, it does nothing. Then it looks up the rank for both representatives and makes the tree of the smaller ranked representative a child of the tree of the larger ranked representative. If both representatives have the same rank, x is made a child of y and the rank of y is increase by 1. If either x or y is not present in the input set, nothing is done.
{"url":"http://hackage.haskell.org/package/disjoint-set-0.2/docs/Data-IntDisjointSet.html","timestamp":"2014-04-17T13:55:33Z","content_type":null,"content_length":"15773","record_id":"<urn:uuid:8a82f243-f7fc-4907-8eb1-59a64ee5e3b1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Normal Inverse Gaussian Lévy Process This Demonstration shows a path of the normal inverse Gaussian (NIG) Lévy process and the graph of the probability density of the process at various moments in time. The NIG process is a pure-jump Lévy process with infinite variation, which has been used successfully in modeling the distribution of stock returns on the German and Danish exchanges. The version of the model shown here is controlled by three parameters that arise from the realization of the process as a time-changed Brownian motion with drift. The parameters are the drift and the volatility of the Brownian process and the variance of the (inverse Gaussian) subordinator (whose expectation is assumed to be 1). In the limiting case when the variance of the subordinator is set to zero, the NIG process coincides with Brownian motion and the probability density is normal. For other values of the variance, the NIG probability density has nonzero excess kurtosis and skewness, displayed beneath the graph on the Note that a part of the path represented on the left may sometimes disappear from view, in which case one should adjust the "range of values" slider. The normal inverse Gaussian distribution and associated stochastic processes were introduced by Barndorff-Nielsen in [1] and [2]. The name derives from its representation as the distribution of Brownian motion with drift time changed by the inverse Gaussian Lévy process. Since the distribution is infinitely divisible, it gives rise to a corresponding Lévy process, which has been shown capable of accurately modelling the returns on a number of assets on German, Danish, and U.S. exchanges [4]. The normal inverse Gaussian Lévy process is in many ways similar to the variance gamma process due to Madan and Seneta. Both belong to the family of Lévy processes of the generalized hyperbolic type; however, they possess unique properties that make them particularly tractable and convenient for option pricing. In particular, they are alone in the hyperbolic family as having the property of being closed under convolution (i.e., the sum of independently distributed variables of the given type has the same type). Both processes can be represented as a time change of a Brownian motion with drift by a Lévy process with increasing increments. In the case of the variance gamma process the time change process is the gamma process; in the case of a NIG process it is an inverse gamma process. Both processes are pure-jump Lévy processes (they have no continuous Brownian component), but they differ in the nature of jumps: the variance gamma process has jumps of finite variation while the variation of the NIG process is infinite. There are several common parametrizations of the NIG process. Most of them use four parameters. Here, however, we use a three-parameter representation given in [4]. The parameters are the drift and volatility of the time-changed Brownian process and the variance of the time-change process, which is assumed to have expectation 1. (This does not lose generality due to the scaling properties of the NIG process). This representation brings out clearly the similarity between the NIG process and the variance gamma process; in particular, when the variance of the time change tends to 0 both processes approximate the Brownian motion process. [1] O. E. Barndorff–Nielssen, "Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling," Scandinavian Journal of Statistics, 24 , 1997 pp. 1–13. [2] O. E. Barndorff–Nielssen, "Processes of Normal Inverse Gaussian Type," Finance and Stochastics, 2 (1), 1998 pp. 41–68. [3] R. Cont and P. Tankov, Financial Modelling with Jump Processes , Boca Raton, FL: CRC Press, 2004. [4] T. H. Rydberg, "The Normal Inverse Gaussian Lévy Process: Simulation and Approximation", Comm. Stat.: Stoch. Models, 13 (4), 1997 pp. 887–910.
{"url":"http://demonstrations.wolfram.com/TheNormalInverseGaussianLevyProcess/","timestamp":"2014-04-18T08:57:26Z","content_type":null,"content_length":"46777","record_id":"<urn:uuid:43fc6012-3cee-4623-a217-27359a12bff4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: SOLVABLE GROUPS OF EXPONENTIAL Roger C. Alperin An extraordinary theorem of Gromov, [Gv], characterizes the finitely gen- erated groups of polynomial growth; a group has polynomial growth iff it is nilpotent by finite. This theorem went a long way from its roots in the class of discrete subgroups of solvable Lie groups. Wolf, [W], proved that a polycyclic group of polynomial growth is nilpotent by finite. This theorem is primarily about linear groups and another proof by Tits appears as an appendix to Gromov's paper. In fact if G is torsion free polycyclic and not nilpotent then Rosenblatt, [R], constructs a free abelian by cyclic group in G, in which the automorphism is expanding and thereby constructs a free semigroup. The converse of this, that a finitely generated nilpotent by finite group is of polynomial growth is relatively easy; but in fact one can also use the nilpotent length to estimate the degree of polynomial growth as shown by Guivarc'h, [Gh], Bass, [Bs], and Wolf, [W]. The theorem of Milnor, [M], on the other hand shows that a finitely generated solvable group, not of ex- ponential growth, is polycyclic. Rosenblatt's version of this, [Rt], is that a finitely generated solvable group without a two generator free subsemigroup is polycyclic. We give another version of Milnor's theorem using the HNN
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/702/1779081.html","timestamp":"2014-04-20T16:18:38Z","content_type":null,"content_length":"8431","record_id":"<urn:uuid:2359eb55-1436-4248-bc9c-2ffca39e5a7b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Physics Lab: what happens if we ignore gravity? Why must the coils of the spring not touch? How come the position function is a perfect sine wave with gravity? It doesn't make sense to me. As the mass is moving up while below the y-axis, wouldn't it be slowed down by gravity? And why the mass is moving down while above the y-axis, wouldn't it be sped up by gravity? Does the solution to the equation of motion of a spring explicitly give the sine function as the only solution? So it would be impossible to model the wave if trigonometry was not studied and the sine function was not discovered? If the coils touch other, the motion will be interfered with and it will no longer be SHM. In gravity, the same force (the weight) is always in the same direction and doesn't affect the dynamic motion. The sine function is just a way of describing a bit of geometry (at its simplest). You can relate SHM to circular motion in simple terms, as with the motion of a piston resulting from a crank going round, but, as a general principle, you can't do advanced Physics without bringing in Maths functions - the problems just can't be stated in a way that can be solved if you don't use some form of Maths. Newton, for instance, had to make up his own form of calculus before he could get anywhere. The sine function is the solution to an equation - there isn't 'an easier one'.
{"url":"http://www.physicsforums.com/showpost.php?p=4242165&postcount=5","timestamp":"2014-04-19T19:38:45Z","content_type":null,"content_length":"9286","record_id":"<urn:uuid:94896da3-c967-49c9-b9c7-a475de9e69c6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the smallest cardinal number of a set that requires the axiom of choice to prove that it exists and is non-empty? up vote 3 down vote favorite Let C(x) be a formula belonging to the language of ZFC in which the variable "x" and no other variable occurs free. Suppose that (a sentence of this language equivalent to) the following statement, is provable in ZFC but not in ZF. "There exists a non-empty set Q such that every element x of Q satisfies the formula C(x)" QUESTION: What is the smallest cardinal number that such a set Q can (be proved in ZFC) to have? I know of no examples of such a set Q having a cardinal number less than 2^(2^k) where k is the cardinal number of the contnuum. Examples of such sets Q are the set of all uncountable sets of real numbers that are non-measurable in the sense of Lebesgue or that contain no perfect subset. What about a set Q containing just one element which is a non-measurable subset of the reals? Do you mean the set of all x satisfying C(x)? – Sergei Ivanov Apr 22 '10 at 20:14 Sergei, even if he does mean all x, then my answer still gives a one-element set. – Joel David Hamkins Apr 22 '10 at 20:29 But your answer isn't sporting! – Simon Thomas Apr 22 '10 at 20:42 But it is optimal...But seriously, I am interested in the projective version. – Joel David Hamkins Apr 22 '10 at 20:54 add comment 3 Answers active oldest votes You probably won't like this, but the answer is cardinality 1. Let C(x) be the statement, "x=0 and the Axiom of Choice holds". ZF doesn't prove that any x satisifes C(x), since it doesn't prove AC. If AC fails, then no x can have C(x). Thus, ZF+¬AC proves that no x has C(x). But ZFC proves that C(0) holds, and so it proves that Q={0} is the desired set. The set { x | C(x) } is an indicator set for AC, in the sense that it is either 0 or 1, depending exactly on whether AC holds. A similar trick works to construct indicator sets for any I have suggested that the question be focused on the possibility of projective statements C(x). A projective statement is one expressible in the language of second order number theory, up vote 16 with quantifiers over real numbers and natural numbers. Thus, the question would be whether there is a specific projective statement C(x) such that ZFC proves that Q = { x | C(x) } is down vote nonempty, but ZF does not. This version of the question is exactly equivalent to the question of whether ZFC is not conservative over ZF for projective sentences, since if there is a counterexample C(X), then the assertion $\exists x C(x)$ is provable in ZFC but not ZF, and if $\sigma$ is provable in ZFC but not in ZF, then the set {x | $\sigma$} is ZFC provably all of the reals, but ZF is consistent with this set being empty. Therefore, the question amounts to: Is ZFC not conservative over ZF for projective statements? I think it is not, but I don't have a counterexample. Meanwhile, I can say that if one replaces ZF here with ZF+DC, looking at the difference between the full Axiom of Choice and the Axiom of Dependent Choices, rather than at the difference between full AC and no AC at all, then the answer is that it IS conservative. In this MO answer, I explained that ZFC is conservative over ZF+DC for projective sentences, and so if one replaces ZF with ZF+DC in the question, the answer would be no. But without DC, weird things can happen in the reals, and I'm not yet quite sure about it. In this case, the formula asserting that there is such a nonempty Q is exactly equivalent to AC. You could replace AC in this argument with any statement whatsoever; what you get is a kind of indicator set for the truth of that statement. It is empty when the statement fails, and it is 1 when the statement holds. – Joel David Hamkins Apr 22 '10 at 20:05 I take this answer to show that you will want to revise your question. Probably it is natural to restrict the complexity of the statement C(x). For example, can there be a projective such C(x)? – Joel David Hamkins Apr 22 '10 at 20:22 This is a really cool answer. – Harry Gindi Apr 22 '10 at 20:27 Thanks, Harry. – Joel David Hamkins Apr 22 '10 at 20:33 @Joel: I'm thinking about C(x) = "x is a minimal uncountable ordinal". Is it projective? – Sergei Ivanov Apr 22 '10 at 20:55 show 3 more comments If you want to prove that the countable union of countable sets is countable you need the Axiom of choice. However you can replace it by the Axiom of Countable Choice. Which is weaker than the Axiom of Choice and more intuitive. Also, it isn't strong enough to prove Banach-Tarski paradox (A good book to read the relation between AC ACC and the paradoxes is The Banach-Tarski Paradox from Wagon) up vote 1 down vote http://en.wikipedia.org/wiki/Axiom_of_countable_choice Actually, the following additional condition needs to be imposed on Q or else the cardinal number of Q could be as small as 1: No formula belonging to the language of ZFC in which one and only one variable occurs free is known to satisfy both of the following conditions: (1)It is provable in ZFC that one and only one set satisfies the formula. (2)It is provable in ZFC that at least one element of Q satisfies the formula. In other words, although Q is a "definable" set, none of it6 elements may be "definable" (or perhaps even ordinal definable). – Garabed Gulbenkian Apr 24 '10 at 17:43 add comment Wilfrid Hodges has shown that it is consistent with ZF that there is an algebraic closure $L$ of the rational field $\mathbb{Q}$ with no nontrivial automorphisms. Obviously $|Aut(L)\ smallsetminus \{1\}| = 2^{\aleph_{0}}$. up vote 8 down vote See: W. Hodges, Läuchli's algebraic closure of $\mathbb{Q}$. Math. Proc. Cambridge Philos. Soc. 79 (1976), no. 2, 289--297 1 Is that rigidity due to the fact that there are less functions in his model than in the usual one? I imagine there is a bijection between his $L$ and the usual algebraic closure, if there is any sense one can talk about such a bijection... – Mariano Suárez-Alvarez♦ Apr 22 '10 at 20:10 Simon, does this example lead to a projective C(x)? I guess not, since I think your L cannot be countable, as weird as that sounds, since if it were then the assertion that L is rigid would be Pi^1_1, and hence absolute to forcing extensions with AC for reals. Could you clarify this? – Joel David Hamkins Apr 22 '10 at 20:26 4 I find this shocking and disturbing...so much so that I dug up the paper. It is available at math.uga.edu/~pete/Hodges76.pdf – Pete L. Clark Apr 22 '10 at 21:50 8 «The reader may well feel he could have bought Corollary 10 cheaper in another bazaar» Best comment in a paper EVAR. – Mariano Suárez-Alvarez♦ Apr 22 '10 at 22:04 Yes, this is shocking! Joel is right: L is not countable in that model of ZF. The basic idea goes back to Plotkin who showed that given any countably categorical theory T you can find a 4 model of ZF with an model M of T such that the only subsets of M that exist are the T-definable subsets of M. This doesn't directly apply to the algebraic closure of Q since ACF_0 is not countably categorical, but Hodges showed that ACF_0 is still nice enough for the basic idea to work... – François G. Dorais♦ Apr 22 '10 at 22:14 show 4 more comments Not the answer you're looking for? Browse other questions tagged set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/22243/what-is-the-smallest-cardinal-number-of-a-set-that-requires-the-axiom-of-choice?answertab=active","timestamp":"2014-04-20T08:48:12Z","content_type":null,"content_length":"79771","record_id":"<urn:uuid:df45e1ba-91dc-404a-8ba8-266eb6e086bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
The St Andrews Colloquium 1938 and James Gregory Tercentenary The Edinburgh Mathematical Society held its fourth St Andrews Colloquium in St Andrews from 4 to 15 July 1938. This was a special colloquium at which the 300th anniversary of the birth of James Gregory was celebrated. The following press release announced the coming event on 1 April 1938: ST ANDREWS MATHEMATICAL COLLOQUIUM. James Gregory Tercentenary. The 300th anniversary of the birth of James Gregory, who held in succession the Chairs of Mathematics in St Andrews and Edinburgh Universities, will be celebrated at a Mathematical Colloquium to be held in St Andrews from July 4 to 15, 1938, under the auspices of the Edinburgh Mathematical Society. By courtesy of the St Andrews University Court, the Colloquium will be held in the University Hall, where members of the Colloquium will stay with their relations and friends. Short courses of lectures on subjects of pure mathematics, mathematical physics, and mathematical biology will be given by Dr A C Aitken. F.R.S.; Professor G D Birkhoff, of Harvard University, Massachusetts; Dr W O Kermack, LLJD., and Professor E T Whittaker, F.R.S. Lectures on the work of Gregory and his contemporaries will be given by Professor H W Turnbull, F.R.S. A discussion of school mathematics will be opened by Mr George Lawson, president of the Edinburgh Mathematical Society. The opening meeting, which will be a commemoration of James Gregory, will be held in conjunction with the Royal Society of Edinburgh in their rooms at 22 George Street, Edinburgh, on the afternoon of July 4. Members of the Colloquium will also be able to attend a graduation in the University of St Andrews, at which honorary degrees will be presented to a number of eminent mathematicians. This will take place on July 5, and will, be followed by an evening reception. The office of the hon. secretary of the Edinburgh Mathematical Society is at 16 Chambers Street. The following press release described the courses at the colloquium: GREGORY TERCENTENARY. Colloquium at St Andrews. ADVANCE OF MATHEMATICS. In connection with the James Gregory tercentenary the Edinburgh Mathematical Society are holding a Colloquium in St Andrews this week and next. Professor H W Turnbull is delivering a series of lectures on the work of James Gregory. During the last seven years of his life, said the speaker, he refused to publish any of his work, except one trifling little thing which he put in at the end of a book, entitled, The New Art of Weighing Vanity. The reason was that he was waiting for Newton to publish his works. In his lectures Professor Turnbull has been laying bare the proof of what Gregory actually discovered. For example he had evolved a theory of the calculus that we taught in schools today, except for the notation. The interest of his work lay in the fact that he had thought of those things before anyone else except Fermat in France and Dr Pell in Switzerland and in England. The next advance in that theory did not take place until a century later at the time of Lagrange. PROFESSOR WHITTAKER'S COURSE. Professor E T Whittaker Is giving a course on "The Interactions between the Elementary Particles of the Universe." The first lecture dealt with the modifications which the Newtonian theory has undergone in consequence of the discovery of Relativity. The second lecture, yesterday morning was concerned with the "exchange interaction" which accounts for the binding of two atoms of hydrogen into a hydrogen molecule. Professor M Fréchet, of the university of Paris, who recently published a book on "The Definition of Probability" in two lectures expounded the diverse definitions which have been given of the probability of an event and has compared their respective values. Dr A C Aitken, of Edinburgh, dealt with "Invariant Matrices and the Symmetric Group." Dr W O Kermack. Edinburgh, spoke on "Aspects of Mathematical Biology." One of the papers read to the Colloquium (on Saturday 9 July) was by George Lawson, president of the Edinburgh Mathematical Society. The following report of the lecture appeared in The Scotsman on Monday 11 July: Teaching of Algebra in Schools At the St Andrews Mathematical Colloquium, which is being held in connection with the James Gregory Tercentenary at University Hall, St Andrews, Mr George Lawson, president of the Edinburgh Mathematical Society, on Saturday night submitted for debate a paper on "Neglect of Form and Law in School Algebra." He said that by the British Association papers last year, it was indicated that algebra had, during the last 30 years, developed on the lines of Sir Percy Nunn's classical exposition. That same development was indicated also in 1935 by the Mathematical Association's special pamphlet on the teaching of algebra in schools. In criticising that development, Mr Lawson referred to another development which seemed possible 30 years ago, and said it was based largely on a book quoted by Sir Percy Nunn himself - namely, that of Barnard and Child. Mr Lawson took up the position that algebra at that time took the wrong road, and he advocated a modification of the Barnard and Child position. A report on the Colloquium, written by I M H Etherington, appeared in The Mathematical Gazette later in 1938. The full reference is I M H Etherington, Edinburgh Mathematical Society: St Andrews Colloquium, The Mathematical Gazette 22 (252) (December 1938), 482-484. We present below a version of this report: EDINBURGH MATHEMATICAL SOCIETY: ST ANDREWS COLLOQUIUM. This Colloquium, held at St Andrews from July 4 to 15, was in every way as successful as its quadrennial predecessors. A hundred and ten persons attended, including about twenty wives and daughters of mathematicians; including also twenty-one professors of mathematics from the universities of Great Britain, Eire, France, Holland, Denmark, and the United States. Halfway through, a professor was heard to boast that he had talked no mathematics so far, on the other hand, many members of the Colloquium found it a grand opportunity for talking fruitful shop. The arrangement of the timetable encouraged the morning coffee habit; so the café gardens of the town, and in lesser concentration the roads and walks about the place, were the scene of much deep talk on the foundations of probability, theories of the universe, and the personalities of mathematicians. The Colloquium associated itself in various ways with the name of James Gregory, born 300 years ago, who held in succession the Chairs of Mathematics in the Universities of St Andrews and Edinburgh. The opening meeting on July 4 was held in Edinburgh in conjunction with the James Gregory Tercentenary Meeting of the Royal Society of Edinburgh. Two events of this meeting may be recorded here: the presentation to the R.S.E. of the portrait of its President, Sir D'Arcy W Thompson, painted by Mr David S Ewart, A.R.S.A. and the presentation to Professor H S Ruse of the Society's "Keith Prize" for his work on the geometry of Dirac's equations. Next day the University of St Andrews marked Gregory's tercentenary by conferring the Honorary Degree of LL.D. on five distinguished mathematicians, members of the Colloquium being invited to the celebrations. The graduands were Professor G D Birkhoff (Harvard), Professor A W Conway (Dublin), Professor Otto Neugebauer (Copenhagen), Professor R Weitzenböck (Amsterdam), and in absentia Professor V Volterra (Rome). The ceremony took place in the University Library, where Gregory used to work; here also some MSS., instruments and other relies of Gregory were exhibited. Professor H W Turnbull, Gregory's present successor as Regius Professor of Mathematics at St Andrews, lectured on these two occasions, and again in the course of the Colloquium, on Gregory's life and work, and especially on his many unpublished discoveries. Professor Turnbull, has made an exhaustive study of Gregory's MSS. and proved that he should rank with Barrow, Newton and Leibniz as a founder of the Calculus. Of the four main courses of lectures, two were on subjects of pure mathematics. Professor G D Birkhoff lectured on Analytic Deformations and Auto-equivalent Functions, and Dr A C Aitken on Invariant Matrices and the Symmetric Group. An analytic deformation of a function f(x) in the neighbourhood of x = infinity means replacing the independent variable by an analytic function of itself, x' = f(x ), which has a simple pole at infinity. All such deformations form a group. The function x^k has the property of being multiplied by a function which is analytic and non-zero at infinity when any deformation of the group is applied. Other functions, such as G(x), have this property only for a smaller group of deformations; in this case x' = x + any integer. Such functions are auto-equivalent, and Professor Birkhoff classifies them into eight types according to the defining group. The theory, which promises to unify much of modern analysis in one comprehensive sweep, is to be published by the Institut Henri Poincaré. Unification was also an achievement of Dr Aitken's course on algebra, which led through symmetric functions, finite groups, determinants and permanents, Young's standard tableaux, and Schur's invariant matrices to the foreshadowing of a master theorem which would combine them all. On the side of applied mathematics, Professor E T Whittaker gave five lectures on "The Interactions between the Elementary Particles of the Universe". The modern picture of these particles differs immensely from the classical views of Newton and Maxwell. Professor Whittaker covered this immensity, and described the stages in which the transformation took place, through relativity, the early quantum theory and its developments up to the latest speculations on heavy electrons and cosmic rays. The fourth main course was entitled Aspects of Mathematical Biology. Dr W 0 Kermack discussed the integral equations which arise in the mathematics of population growth and of epidemiology. Dr I M H Etherington surveyed Volterra's mathematical theory of the struggle for life and its analogies in classical dynamics. It was impossible not to admire Dr Kermack's masterful exposition of his complicated subject matter, aided by lantern slides displaying his formulae but without the gift of sight. Professor M Fréchet lectured and introduced a discussion on the various definitions of probability. He sketched the views of Laplace, de Mises, Wald and others, and described in more detail the "modernised classical definition" of Neyman and Kolmogorov. The discussion was noteworthy for Professor Whittaker's vigorous defence of the classical (Laplace's) point of view against all comers. Mr George Lawson, the President of the Edinburgh Mathematical Society and Chairman of the Colloquium, addressed himself mainly to school teachers, who formed a not inconsiderable proportion of the membership. Mr Lawson regretted the lack of sincerity in much modern teaching, and urged the importance in teaching algebra of noticing its formal aspect algebra is the study of the forms in which numbers cooperate Professor Otto Neugebauer entertained us for half an hour on the subject of Babylonian astronomy, and on the differences between Babylonian and Egyptian science. Finally, I must not forget to mention Professor Birkhoff's fourth lecture, which was not on auto-equivalent functions but on The Mathematical Theory of Art. He claims to have found a formula for the aesthetic value of any work of art in its formal aspects, and to have devised rules by which it can be applied with demonstrable success in certain cases of special simplicity. The theory, which is little known on this side of the Atlantic, was received by a crowded audience with much interest and a certain scepticism. Our hosts, Professor and Mrs Turnbull, entertained us and many visitors at a reception one musical evening; and many less formal moments musicaux occurred. An incomparable five minutes from Sir D'Arcy Thompson, illustrated with geometrical models, happened unheralded but much applauded in the middle of one informal concert. A bald mention of golf, tennis, rounders, dancing, chess, and an excursion to the Highlands must conclude this account. JOC/EFR February 2008 The URL of this page is:
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Extras/EMS_1938_Colloquium.html","timestamp":"2014-04-19T19:33:57Z","content_type":null,"content_length":"14506","record_id":"<urn:uuid:834e2101-c604-4a4b-a1f5-3bd3996b87d2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
next_permutation for combinations or subsets in powerset Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. Is there some equivalent library or function that will give me the next combination of a set of values like next_permutation in does for me? up vote 4 down vote favorite c++ permutation combinations powerset add comment Is there some equivalent library or function that will give me the next combination of a set of values like next_permutation in does for me? Combinations: from Mark Nelson's article on the same topic we have next_combination http://marknelson.us/2002/03/01/next-permutation Permutations: from STL we have std::next_permutation template <typename Iterator> inline bool next_combination(const Iterator first, Iterator k, const Iterator last) if ((first == last) || (first == k) || (last == k)) return false; Iterator itr1 = first; Iterator itr2 = last; if (last == itr1) return false; itr1 = last; itr1 = k; while (first != itr1) if (*--itr1 < *itr2) up vote 9 down vote Iterator j = k; while (!(*itr1 < *j)) ++j; itr2 = k; while (last != j) return true; return false; add comment Combinations: from Mark Nelson's article on the same topic we have next_combination http://marknelson.us/2002/03/01/next-permutation Permutations: from STL we have std::next_permutation template <typename Iterator> inline bool next_combination(const Iterator first, Iterator k, const Iterator last) { if ((first == last) || (first == k) || (last == k)) return false; Iterator itr1 = first; Iterator itr2 = last; ++itr1; if (last == itr1) return false; itr1 = last; --itr1; itr1 = k; --itr2; while (first != itr1) { if (*--itr1 < *itr2) { Iterator j = k; while (!(*itr1 < *j)) ++j; std::iter_swap(itr1,j); ++itr1; ++j; itr2 = k; std::rotate(itr1,j,last); while (last != j) { ++j; ++itr2; } std::rotate(k,itr2,last); return true; } } std::rotate(first,k,last); return false; } I am not aware of one. The basic idea is to represent your elements as a bit array. So for example, you have the set S: S = {a, b, c} [i, j, k] // a is the first bit, b is the second bit, c is the third bit To generate the Power Set of S(just generate all numbers that are of size == 3 bits by using the simple addition): 000 // {} 001 // {c} 010 // {b} up vote 5 down 011 // {b, c} vote 100 // {a} 101 // {a, c} 110 // {a, b} 111 // {a, b, c} All what you have to do is to find what bits are set, and to relate them to your set's elements. On final note, there is one combination you can produce when you want all elements to be used and that combination is the set it self, because in combinations the order doesn't matter so for sure we are talking about a number of elements n where 0 <= n <= size(S) show 4 more comments I am not aware of one. The basic idea is to represent your elements as a bit array. So for example, you have the set S: S = {a, b, c} [i, j, k] // a is the first bit, b is the second bit, c is the third bit To generate the Power Set of S(just generate all numbers that are of size == 3 bits by using the simple addition): 000 // {} 001 // {c} 010 // {b} 011 // {b, c} 100 // {a} 101 // {a, c} 110 // {a, b} 111 // {a, b, c} All what you have to do is to find what bits are set, and to relate them to your set's elements. On final note, there is one combination you can produce when you want all elements to be used and that combination is the set it self, because in combinations the order doesn't matter so for sure we are talking about a number of elements n where 0 <= n <= size(S) I've used this library when I've needed to do this. It has an interface very similar to std::next_permutation so it will be easy to use if you've used that before. up vote 1 down vote add comment I've used this library when I've needed to do this. It has an interface very similar to std::next_permutation so it will be easy to use if you've used that before. Googling for C++ "next_combination" turned up this. □ search from "mid" backwards until you find an element that is smaller than *(end - 1). This is the element we should increment. Call this "head_pos". up vote 0 down vote □ search from "end" backwards until you find the last element that is still larger than *head_pos. Call it "tail_pos". □ swap head_pos and tail_pos. Re-order the elements from [head_pos + 1, mid[ and [tail_pos + 1, end[ in increasing order. add comment search from "mid" backwards until you find an element that is smaller than *(end - 1). This is the element we should increment. Call this "head_pos". search from "end" backwards until you find the last element that is still larger than *head_pos. Call it "tail_pos". swap head_pos and tail_pos. Re-order the elements from [head_pos + 1, mid[ and [tail_pos + 1, end[ in increasing order. In case You have no choice, but to implement Your own function maybe this horror can help a bit or maybe other horrors among answers to that question. up vote I wrote it some time ago and the full picture eludes me now :), but the basic idea is this: You have the original set and current combination is a vector of iterators to the elements 0 down selected. To get the next combination, You scan your set from right to left looking for a "bubble". By "bubble" I mean one or more adjacent elements not selected. The "bubble" might be vote immediately at the right. Then, in Your combination, You exchange the first element at the left of the "bubble" and all other elements from the combination, that are to the right in the set, with a subset of adjacent elements that starts at the beginning of the "bubble". add comment In case You have no choice, but to implement Your own function maybe this horror can help a bit or maybe other horrors among answers to that question. I wrote it some time ago and the full picture eludes me now :), but the basic idea is this: You have the original set and current combination is a vector of iterators to the elements selected. To get the next combination, You scan your set from right to left looking for a "bubble". By "bubble" I mean one or more adjacent elements not selected. The "bubble" might be immediately at the right. Then, in Your combination, You exchange the first element at the left of the "bubble" and all other elements from the combination, that are to the right in the set, with a subset of adjacent elements that starts at the beginning of the "bubble". Enumeration of the powerset (that is, all combinations of all sizes) can use an adaptation of the binary increment algorithm. template< class I, class O > // I forward, O bidirectional iterator O next_subset( I uni_first, I uni_last, // set universe in a range O sub_first, O sub_last ) { // current subset in a range std::pair< O, I > mis = std::mismatch( sub_first, sub_last, uni_first ); if ( mis.second == uni_last ) return sub_first; // finished cycle O ret; if ( mis.first == sub_first ) { // copy elements following mismatch std::copy_backward( mis.first, sub_last, ++ (ret = sub_last) ); } else ret = std::copy( mis.first, sub_last, ++ O(sub_first) ); * sub_first = * mis.second; // add first element not yet in result return ret; // return end of new subset. (Output range must accommodate.) The requirement of a bidirectional iterator is unfortunate, and could be worked around. I was going to make it handle identical elements (multisets), but I need to go to bed :v( . #include <iostream> #include <vector> using namespace std; char const *fruits_a[] = { "apples", "beans", "cherries", "durian" }; vector< string > fruits( fruits_a, fruits_a + sizeof fruits_a/sizeof *fruits_a ); int main() { vector< string > sub_fruits( fruits.size() ); vector< string >::iterator last_fruit = sub_fruits.begin(); while ( ( last_fruit = next_subset( fruits.begin(), fruits.end(), sub_fruits.begin(), last_fruit ) ) != sub_fruits.begin() ) { cerr << "size " << last_fruit - sub_fruits.begin() << ": "; for ( vector<string>::iterator fit = sub_fruits.begin(); fit != last_fruit; ++ fit ) { cerr << * fit << " "; cerr << endl; EDIT: Here is the version for multisets. The set doesn't have to be sorted but identical elements do have to be grouped together. #include <iterator> #include <algorithm> #include <functional> template< class I, class O > // I forward, O bidirectional iterator I next_subset( I uni_first, I uni_last, // set universe in a range O sub_first, O sub_last ) { // current subset in a range std::pair< O, I > mis = std::mismatch( sub_first, sub_last, uni_first ); if ( mis.second == uni_last ) return sub_first; // finished cycle up vote 0 down vote typedef std::reverse_iterator<O> RO; mis.first = std::find_if( RO(mis.first), RO(sub_first), std::bind1st( std::not_equal_to< typename std::iterator_traits<O>::value_type >(), * mis.second ) ).base(); // move mis.first before identical grouping O ret; if ( mis.first != sub_first ) { // copy elements after mismatch ret = std::copy( mis.first, sub_last, ++ O(sub_first) ); } else std::copy_backward( mis.first, sub_last, ++ (ret = sub_last) ); * sub_first = * mis.second; // add first element not yet in result return ret; #include <vector> #include <iostream> using namespace std; char const *fruits_a[] = { "apples", "apples", "beans", "beans", "cherries" }; vector< string > fruits( fruits_a, fruits_a + sizeof fruits_a/sizeof *fruits_a ); int main() { vector< string > sub_fruits( fruits.size() ); vector< string >::iterator last_fruit = sub_fruits.begin(); while ( ( last_fruit = next_subset( fruits.begin(), fruits.end(), sub_fruits.begin(), last_fruit ) ) != sub_fruits.begin() ) { cerr << "size " << last_fruit - sub_fruits.begin() << ": "; for ( vector<string>::iterator fit = sub_fruits.begin(); fit != last_fruit; ++ fit ) { cerr << * fit << " "; cerr << endl; size 1: apples size 2: apples apples size 1: beans size 2: apples beans size 3: apples apples beans size 2: beans beans size 3: apples beans beans size 4: apples apples beans beans size 1: cherries size 2: apples cherries size 3: apples apples cherries size 2: beans cherries size 3: apples beans cherries size 4: apples apples beans cherries size 3: beans beans cherries size 4: apples beans beans cherries size 5: apples apples beans beans cherries add comment Enumeration of the powerset (that is, all combinations of all sizes) can use an adaptation of the binary increment algorithm. template< class I, class O > // I forward, O bidirectional iterator O next_subset( I uni_first, I uni_last, // set universe in a range O sub_first, O sub_last ) { // current subset in a range std::pair< O, I > mis = std::mismatch( sub_first, sub_last, uni_first ); if ( mis.second == uni_last ) return sub_first; // finished cycle O ret; if ( mis.first == sub_first ) { // copy elements following mismatch std::copy_backward( mis.first, sub_last, ++ (ret = sub_last) ); } else ret = std::copy( mis.first, sub_last, ++ O(sub_first) ); * sub_first = * mis.second; // add first element not yet in result return ret; // return end of new subset. (Output range must accommodate.) } The requirement of a bidirectional iterator is unfortunate, and could be worked around. I was going to make it handle identical elements (multisets), but I need to go to bed :v( . #include <iostream> #include <vector> using namespace std; char const *fruits_a[] = { "apples", "beans", "cherries", "durian" }; vector< string > fruits( fruits_a, fruits_a + sizeof fruits_a/sizeof *fruits_a ); int main() { vector< string > sub_fruits( fruits.size() ); vector< string >::iterator last_fruit = sub_fruits.begin(); while ( ( last_fruit = next_subset( fruits.begin(), fruits.end(), sub_fruits.begin(), last_fruit ) ) != sub_fruits.begin() ) { cerr << "size " << last_fruit - sub_fruits.begin() << ": "; for ( vector<string>::iterator fit = sub_fruits.begin(); fit != last_fruit; ++ fit ) { cerr << * fit << " "; } cerr << endl; } } EDIT: Here is the version for multisets. The set doesn't have to be sorted but identical elements do have to be grouped together. #include <iterator> #include <algorithm> #include <functional> template< class I, class O > // I forward, O bidirectional iterator I next_subset( I uni_first, I uni_last, // set universe in a range O sub_first, O sub_last ) { // current subset in a range std::pair< O, I > mis = std::mismatch( sub_first, sub_last, uni_first ); if ( mis.second == uni_last ) return sub_first; // finished cycle typedef std::reverse_iterator<O> RO; mis.first = std::find_if( RO(mis.first), RO(sub_first), std::bind1st( std::not_equal_to< typename std::iterator_traits<O>::value_type >(), * mis.second ) ).base (); // move mis.first before identical grouping O ret; if ( mis.first != sub_first ) { // copy elements after mismatch ret = std::copy( mis.first, sub_last, ++ O(sub_first) ); } else std::copy_backward( mis.first, sub_last, ++ (ret = sub_last) ); * sub_first = * mis.second; // add first element not yet in result return ret; } #include <vector> #include <iostream> using namespace std; char const *fruits_a[] = { "apples", "apples", "beans", "beans", "cherries" }; vector< string > fruits( fruits_a, fruits_a + sizeof fruits_a/sizeof *fruits_a ); int main() { vector< string > sub_fruits( fruits.size() ); vector< string >::iterator last_fruit = sub_fruits.begin(); while ( ( last_fruit = next_subset( fruits.begin(), fruits.end(), sub_fruits.begin(), last_fruit ) ) != sub_fruits.begin() ) { cerr << "size " << last_fruit - sub_fruits.begin() << ": "; for ( vector<string>::iterator fit = sub_fruits.begin(); fit != last_fruit; ++ fit ) { cerr << * fit << " "; } cerr << endl; } } size 1: apples size 2: apples apples size 1: beans size 2: apples beans size 3: apples apples beans size 2: beans beans size 3: apples beans beans size 4: apples apples beans beans size 1: cherries size 2: apples cherries size 3: apples apples cherries size 2: beans cherries size 3: apples beans cherries size 4: apples apples beans cherries size 3: beans beans cherries size 4: apples beans beans cherries size 5: apples apples beans beans cherries
{"url":"http://stackoverflow.com/questions/2685501/next-permutation-for-combinations-or-subsets-in-powerset","timestamp":"2014-04-21T02:08:16Z","content_type":null,"content_length":"101526","record_id":"<urn:uuid:ab0a4e88-8541-4d59-832d-106967ae5b8c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Histogram Adjustments in MATLAB – Histogram Adjustments in MATLAB – Part II – Equalization This is the second part of a three-part post on understanding and using histograms to modify the appearance of images. The first part covered introductory material on histograms and a method known as histogram stretching for improving contrast and color. This post will cover histogram equalization and an advanced technique called contrast-limited adaptive histogram equalization, both intended for increasing the contrast of an image. The final post will extend the concepts of histogram equalization to arbitrary distributions of pixel values. Histogram Equalization The next stop on our tour of histogram-processing techniques is histogram equalization. If you plotted the CDF of some of your image histograms, you may have noticed that the CDF does not form a straight line—meaning that the pixel values are not equally likely to occur (since the CDF is the integral of the PDF). The good news is that most natural images do not have flat CDFs. That said, some industrial applications can benefit from having a flat CDF. The process of flattening the CDF is called histogram equalization. MATLAB’s Image Processing Toolbox includes the histeq function, which performs histogram equalization: img_histeq = histeq(img); One example: img = rgb2gray(imread('harborSydney.png')); img_adjusted = histeq(img); The resulting image has particularly dark islands and a bright sky, which is not visually appealing, but the detail within the buildings is improved significantly. The skyline also happens to be much easier to threshold: A more advanced version of histogram equalization, adaptive histogram equalization, makes the assumption that the image varies significantly over its spatial extent. The algorithm divides the image into smaller tiles, applies histogram equalization to each tile, then interpolates the results. MATLAB’s implementation, adapthisteq, includes limits on how much the contrast is allowed to be changed, called contrast-limited adaptive histogram equalization, or CLAHE for short. Again, CLAHE will modify the image in strange ways, but those may be better for certain tasks. img = imread('Spores.jpg'); img_adjusted = adjusthisteq(img); This test image highlights two peculiarities of CLAHE. First, sharp edges, like those around the spores, look like they are glowing. This occurs because CLAHE computes histograms over areas, and the sharp change in values from the background to the spore body affects the normalization. (The effect is related to what you would get by dividing the original image by the low-pass filtered version of the image.) Fortunately, that additional contrast near the edges can help some edge detection algorithms, even if the glow is not natural. The second effect from CLAHE is seen in the background areas, where some out-of-focus spores becomes visible and the overall noise increases. This is exactly what CLAHE is supposed to do: increase the contrast, even in the background areas. It is also limiting the amount of contrast adjustment; to see this in a dramatic way, try using histeq on the spores image and compare that against the adapthisteq result. (Hint: if you wanted to remove the out-of-focus spores but still increase the overall contrast, look at combining adapthisteq with bilateral filtering.) MATLAB’s histeq and adapthisteq both assume a single-channel image. Similar to the discussion about multi-channel images in the first post, you could apply the histogram equalization to each channel: img = imread('harborSydney.png'); img_adjusted = zeros(size(img),'uint8'); for ch=1:3 img_adjusted(:,:,ch) = adapthisteq(img(:,:,ch)); Or, apply the equalization to the L* component of a L*a*b*-transformed image: img = imread('harborSydney.png'); c_rgb2lab = makecform('srgb2lab'); c_lab2rgb = makecform('lab2srgb'); labimg = applycform(img, c_rgb2lab); labimg(:,:,1) = adapthisteq(labimg(:,:,1)); img_adjusted = applycform(labimg, c_lab2rgb); If you don’t have the Image Processing Toolbox, you could try Leslie Smith’s implementation of the CLAHE algorithm: (http://www.mathworks.com/matlabcentral/fileexchange/ Histogram equalization focuses on making the CDF flat so that each pixel value is equally likely to occur. The final post in this series will extend that concept to matching an image’s CDF (or histogram) to an arbitrary CDF. 8 Comments Already 3 pingbacks/trackbacks
{"url":"http://imageprocessingblog.com/histogram-adjustments-in-matlab-part-ii-equalization/","timestamp":"2014-04-17T21:54:35Z","content_type":null,"content_length":"66552","record_id":"<urn:uuid:c7817aaa-b6ef-4e02-9c97-cd7a607bf886>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson 7 has three parts A, B, C which can be completed in any order. So far, we have performed math calculations using Python's operators +, -, *, / and the functions max and min. In this lesson we will see some more operators and functions and learn how to perform more complex calculations. Math Operators We have already seen how to use operators for addition (a + b), subtraction (a - b), multiplication (a * b) and division (a / b). We will now learn about three additional operators. • The power operator a ** b computes a^b (a multiplied by itself b times). For example, 2 ** 3 produces 8 (which is 2×2×2). • The integer division operator a // b computes the "quotient" of a divided by b and ignores the remainder. For example, 14 // 3 produces 4. • The modulus operator a % b computes the remainder when a is divided by b. For example, 14 % 3 produces 2. Coding Exercise: Eggsactly Egg cartons each hold exactly 12 eggs. Write a program which reads an integer number of eggs from , then prints out two numbers: how many cartons can be filled by these eggs, and how many eggs will be left over. For example, the output corresponding to eggs is eggs fill cartons, leaving eggs left over. You may enter input for the program in the box below. The modulus operator is used for a variety of tasks. It can be used to answer questions like these ones: • If the time now is 10 o'clock, what will be the time 100 hours from now? (requires modulus by 12) • Will the year 2032 be a leap year? (requires modulus by 4, 100, and 400) Checking leap years is an example of divisibility testing; in the next exercise we ask you to write a program that performs divisibility testing in general. Math Functions Python can compute most of the mathematical functions found on a scientific calculator. • sqrt(x) computes the square root of the number x. • exp(x) and log(x) are the exponential and natural logarithmic functions. • sin(x), cos(x), tan(x) and other trigonometric functions are available. • pi, the mathematical constant 3.1415..., is also included. When using Python's trigonometric functions, the angle x must be expressed in radians, not degrees. Python includes such a large number of functions that they are organized into groups called modules. The above functions belong to the math module. Before using any functions from a module, you must import the module as shown in the example below. To use a function from a module you must type the module name, followed by a period, followed by the name of the function. Coding Exercise: Pizza Circles Your friends have eaten their square pizzas and are now ordering a round pizza. Write a program to calculate the area of this circular pizza. The input is a float r , which represents the radius in cm. The output should be the area in cm , calculated using the formula . Use Python's feature instead of typing You may enter input for the program in the box below. Coding Exercise: Geometric Mean geometric mean of two numbers is the number (It is used to compare aspect ratios of display screens and describe the average growth rate of a population.) Write a program that reads two lines of positive float from input, and outputs their geometric mean. Example: If the input is then the output should be You may enter input for the program in the box below. Putting it all together As you saw in the previous exercise, you can build mathematical expressions by combining operators. Python evaluates the operators using the same "order of operations" that we learn about in math Brackets first, then Exponents, followed by Division and Multiplication, then finally Addition and Subtraction, which we remember by the acronym "BEDMAS". Integer division and modulus fit into the "Division and Multiplication" category. For example, the expression 3 * (1 + 2) ** 2 % 4 is evaluated by performing the addition in brackets (1+2 = 3), then the exponent (3 ** 2 = 9) , then the multiplication (3 * 9 = 27), and finally the modulus, producing a final result of 27 % 4 = Short Answer Exercise: Order of Operations Compute the value of the Python expression 6 - 52 // 5 ** 2 Integer division with negative numbers: The expressions a // b and int(a / b) are the same when a and b are positive. However, when a is negative, a // b uses "round towards negative infinity" and int(a / b) uses "round towards zero." Integers and Floating-Point Numbers The result of a mathematical expression is a number. As we saw previously, each number is stored as one of the two possible types: int or float. The int type represents integers, both positive and negative, that can be as big as you want. Python does not accept numbers written in the form 1 000 000 or 1,000,000. Type 1000000 instead. The float type represents decimal numbers. Just as a simple calculator stores 1/3 as its approximate value 0.33333333, Python also stores decimal numbers as their approximate values. Because Python uses approximations of decimal numbers, certain equations which are mathematically true may not be true in Python. For this reason, it is important to allow some tolerance for these approximations when comparing numbers of type float. For example, in the internal grader used by this website, any float output is marked as correct if it is approximately equal to the expected We finish this lesson with some exercises. Coding Exercise: A Feat With Feet For this program the input is a floating-point number representing a height measured in feet. Write a program that will output the equivalent height in centimetres using the conversion formula 1 foot = 30.48 cm. For example, if the input is , then the output should be You may enter input for the program in the box below. Coding Exercise: Gravity A parcel is thrown downward at a speed of m/s from an airplane at altitude 11000 m. As it falls, its distance from the ground is given by the formula -4.9 + 11000, where is the time in seconds since it was dropped. Write a program to output the time it will take to reach the ground. The input to your program is the positive floating-point number . The required time is given by the quadratic formula You may enter input for the program in the box below. Congratulations! After completing these exercises you are ready to move to another lesson.
{"url":"http://cscircles.cemc.uwaterloo.ca/7b-math/","timestamp":"2014-04-20T23:27:06Z","content_type":null,"content_length":"54343","record_id":"<urn:uuid:ca032ca1-d818-40be-af9f-5e8660cbc4e5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Holiday Hills, IL Math Tutor Find a Holiday Hills, IL Math Tutor ...I have had tutoring experience before and enjoy it very much! I tutored beginner to intermediate Spanish for one year, and have tutored math as well. My goal is to help the student meet their goals for the subject as well as their personal goals. 31 Subjects: including statistics, prealgebra, Spanish, reading ...In addition to majoring in Economics and International Relations, I have completed the equivalent of a major in philosophy, including Introductory Philosophy, Logic, Honors Ethics, Social and Political Philosophy, Contemporary Analytic Philosophy, Non-Western Humanities, and have completed a self... 57 Subjects: including trigonometry, differential equations, linear algebra, SAT math ...Everything around us has some explanation involving these subjects! I would like to share my enthusiasm with others. My background includes physical science (two degrees in Engineering) as well as biological sciences (MD). I have teaching experience at the university level as a teaching assistant for biology as well as anatomy and physiology. 2 Subjects: including algebra 1, geometry ...I will teach English grammar, word order, vocabulary, writing techniques and pronunciation. I have had success working with adult as well as children I have successfully tutored ADD/ADHD children for 12 years. I have also taken some classes in special Education working toward certification. 26 Subjects: including prealgebra, ACT Math, algebra 1, algebra 2 I am very experienced and knowledgeable in many areas of math. I have a Bachelors Degree in Mathematics Education and will be certified as a Secondary Mathematics Teacher in Illinois. I have done private tutoring for four years with all different levels of math and age groups. 10 Subjects: including algebra 1, algebra 2, calculus, geometry Related Holiday Hills, IL Tutors Holiday Hills, IL Accounting Tutors Holiday Hills, IL ACT Tutors Holiday Hills, IL Algebra Tutors Holiday Hills, IL Algebra 2 Tutors Holiday Hills, IL Calculus Tutors Holiday Hills, IL Geometry Tutors Holiday Hills, IL Math Tutors Holiday Hills, IL Prealgebra Tutors Holiday Hills, IL Precalculus Tutors Holiday Hills, IL SAT Tutors Holiday Hills, IL SAT Math Tutors Holiday Hills, IL Science Tutors Holiday Hills, IL Statistics Tutors Holiday Hills, IL Trigonometry Tutors Nearby Cities With Math Tutor Deer Park, IL Math Tutors Fox River Grove Math Tutors Ingleside, IL Math Tutors Island Lake Math Tutors Lakemoor, IL Math Tutors Mccullom Lake, IL Math Tutors Mchenry, IL Math Tutors North Barrington, IL Math Tutors Oakwood Hills, IL Math Tutors Prairie Grove, IL Math Tutors Ringwood, IL Math Tutors Round Lake Heights, IL Math Tutors Third Lake, IL Math Tutors Volo, IL Math Tutors Wauconda, IL Math Tutors
{"url":"http://www.purplemath.com/Holiday_Hills_IL_Math_tutors.php","timestamp":"2014-04-17T21:56:03Z","content_type":null,"content_length":"24076","record_id":"<urn:uuid:84c4f5dd-8336-46ad-958d-a2e2a96a8bd1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Extremal principles in non-equilibrium thermodynamics This page discusses ‘maximum entropy production’, ‘minimimum entropy production’, and related extremal principles that have been proposed in non-equilibrium statistical mechanics. It is not about E. T. Jaynes’ MaxEnt Principle in statistical inference, except insofar as it may be related to the topic at hand. (As we shall see, Jaynes has tried to develop a relationship.) This is a controversial and confusing subject, as a brief perusal of this page makes clear: It begins: According to Kondepudi (2008), and to Grandy (2008), there is no general rule that provides an extremum principle that governs the evolution of a far-from-equilibrium system to a steady state. According to Glansdorff and Prigogine (1971, page 16), irreversible processes usually are not governed by global extremal principles because description of their evolution requires differential equations which are not self-adjoint, but local extremal principles can be used for local solutions. Lebon Jou and Casas-Vásquez (2008) state that “In non-equilibrium … it is generally not possible to construct thermodynamic potentials depending on the whole set of variables”. Šilhavý (1997) offers the opinion that ”… the extremum principles of thermodynamics … do not have any counterpart for non-equilibrium steady states (despite many claims in the literature).” It follows that any general extremal principle for a non-equilibrium problem will need to refer in some detail to the constraints that are specific for the structure of the system considered in the problem. Minimum entropy production This book offers a concise and fairly rigorous discussion of Ilya Prigogine’s principle of minimum entropy production, which applies only to a limited class of systems: • Georgy Lebon and David Jou, Understanding Non-equilibrium Thermodynamics, Springer, Berlin, 2008. It starts on page 51 in section 2.5.1, “Minimum Entropy Production Principle”. The authors say that the main assumptions behind Prigogine’s theorem are: 1. Time-independent boundary conditions 2. Linear phenomenological laws 3. Constant phenomenological coefficients 4. Symmetry of the phenomenological coefficients As we shall see, the argument really shows not that rate of entropy production is minimized in the steady state, but that the rate of entropy production decreases as time passes. It takes some extra assumption to conclude that when the system has reached a steady state, its entropy production has reached the minimum possible. Let us try to distill the argument to its mathematical essence. Warning: the rest of this section will be extremely abstract, not very well motivated, and deeply flawed - but it can be salvaged. So, keeping an example in mind will be very helpful. Suppose we have a possibly inhomogeneous ball of metal whose temperature is some function $X\left(t,\stackrel{⇀}{x}\right)$ of time $t$ and space $\ stackrel{⇀}{x}$. Suppose the temperature is independent of time at the boundary of the ball: that’s condition 1 above, ‘time-independent boundary conditions’. Let $dX$ stand for the exterior derivative of $X$, regarded as a function of $\stackrel{⇀}{x}$ at a fixed time $t$. (In physics exterior derivative $dX$ is usually treated as a vector field and denoted $\stackrel{⇀}{abla }X$, but we are about to engage in a massive generalization). The rate of entropy production is $P=⟨dX,dX⟩$P = \langle d X, d X \rangle where the angle brackets denote a certain inner product on 1-forms. Moreover, the heat equation can be written $\stackrel{˙}{X}={d}^{*}dX$\dot X = d^* d X where ${d}^{*}$ is the adjoint of $d$ with respect to this inner product. Combining these facts, the argument we’re about to exhibit will show that $\stackrel{˙}{P}\le 0$\dot{P} \le 0 meaning the rate of entropy production always decreases. The above authors conclude: This result proves that that the total entropy production $P$ decreases in the course of time and that it reaches its minimum value in the stationary state. However, this is a leap of logic: just because a function is decreasing ($\stackrel{˙}{P}\le 0$), we can’t conclude that when the function is constant ($\stackrel{˙}{P}=0$) it has reached its minimum value. However, in practice they are right! So, there is some true assumption that they are failing to make explicit… but their argument can be rescued somehow. We shall see how in a while. Now let us exhibit the argument leading to $\stackrel{˙}{P}\le 0$ in a very general context. We shall assume the state of a system is described by an $n$-chain $X$ in some chain complex equipped with an inner product. We also assume the rate of entropy production is $P=⟨dX,dX⟩$P = \langle d X, d X \rangle where the brackets are the inner product. Then taking the time derivative of both sides: $\stackrel{˙}{P}=2⟨dX,d\stackrel{˙}{X}⟩$\dot{P} = 2 \langle d X , d \dot X \rangle or using the adjoint of the operator $d$: $\stackrel{˙}{P}=2⟨{d}^{*}dX,\stackrel{˙}{X}⟩$\dot{P} = 2 \langle d^* d X , \dot X \rangle The analogue of the heat equation in this situation says that: ${d}^{*}dX=L\stackrel{˙}{X}$d^* d X = L \dot{X} for some linear operator $L$. We thus obtain $\stackrel{˙}{P}=2⟨L\stackrel{˙}{X},\stackrel{˙}{X}⟩$\dot{P} = 2 \langle L \dot{X} , \dot{X} \rangle In many cases the operator $L$ is negative, so that $\stackrel{˙}{P}\le 0$\dot{P} \le 0 In other words, the rate of entropy production decreases with the passage of time. However, if $\stackrel{˙}{P}=0$, the equation $\stackrel{˙}{P}=2⟨L\stackrel{˙}{X},\stackrel{˙}{X}⟩$ implies that $L\stackrel{˙}{X}=0$. In this case the equation ${d}^{*}dX=L\stackrel{˙}{X}$ implies that ${d}^{*}dX=0$, which further implies that $P=⟨dX,dX⟩=⟨{d}^{*}dX,X⟩=0$P = \langle d X, d X \rangle = \langle d^* d X, X \rangle = 0 So now we are getting that $\stackrel{˙}{P}=0$ implies $P=0$! This in turn implies that $P$ is minimized, but somehow we’ve gone too far: we want $\stackrel{˙}{P}=0$ to imply that $P$ is minimized, not that it’s zero. So, there is something about our assumptions here that are too strong, and need to be corrected. This idea, that entropy production decreases with the passage of time, is also what the following authors focus on: So it could be that this is the really interesting content of Prigogine’s theorem. To wrap up (for now): 1. “Time-independent boundary conditions” are what let us do the “integration by parts” here: $⟨dX,d\stackrel{˙}{X}⟩=⟨{d}^{*}dX,\stackrel{˙}{X}⟩$. 2. “Linear phenomenological laws”, “constant phenomenological coefficients” and “symmetric phenomenological coefficients” are what let us write the rate of entropy production as a quadratic form $P= ⟨dX,dX⟩$. We need to break this down into steps, but this seems roughly right. 3. Another crucial step is the equation ${d}^{*}dX=L\stackrel{˙}{X}$. This again can be derived from smaller assumptions, but it’s worth noting that this is formally just a generalization of the heat equation. 4. Another crucial assumption is that $L$ is negative. 5. However, the assumptions used so far are too strong; we need to fix them to get $P$ minimized but not necessarily zero when $\stackrel{˙}{P}=0$. The trick is probably to use ideas from here: After all, the ‘principle of least power’ discussed there is closely related to the ‘principal of minimum entropy production’ we’re discussing here. References on minimum entropy production Christian Maes has written some papers on minimum entropy production: We explain the (non-)validity of close-to-equilibrium entropy production principles in the context of linear electrical circuits. Both the minimum and the maximum entropy production principles are understood within dynamical fluctuation theory. The starting point are Langevin equations obtained by combining Kirchoff’s laws with a Johnson-Nyquist noise at each dissipative element in the circuit. The main observation is that the fluctuation functional for time averages, that can be read off from the path-space action, is in first order around equilibrium given by an entropy production rate. That allows to understand beyond the schemes of irreversible thermodynamics (1) the validity of the least dissipation, the minimum entropy production, and the maximum entropy production principles close to equilibrium; (2) the role of the observables’ parity under time-reversal and, in particular, the origin of Landauer’s counterexample (1975) from the fact that the fluctuating observable there is odd under time-reversal; (3) the critical remark of Jaynes (1980) concerning the apparent inappropriateness of entropy production principles in temperature-inhomogeneous circuits. The minimum entropy production principle provides an approximative variational characterization of close-to-equilibrium stationary states, both for macroscopic systems and for stochastic models. Analyzing the fluctuations of the empirical distribution of occupation times for a class of Markov processes, we identify the entropy production as the large deviation rate function, up to leading order when expanding around a detailed balance dynamics. In that way, the minimum entropy production principle is recognized as a consequence of the structure of dynamical fluctuations, and its approximate character gets an explanation. We also discuss the subtlety emerging when applying the principle to systems whose degrees of freedom change sign under kinematical I. W. Richardson has an interesting letter on minimum entropy production versus steady state, which could probably be used to formulate these ideas using differential forms and Laplacians, or more general elliptic operators. He mentions a no-go theorem due to Gage, saying that steady state cannot always be described using an extremal principle: Maximum entropy production While Ilya Prigogine has a successful principle of least entropy production that applies to a special class of linear steady-state systems, other people talk about a principle of ‘maximum entropy production’! Is there a contradiction here? This paper begins to address the issue: Martyusheva and Seleznev write: 1.2.6. The relation of Ziegler’s maximum entropy production principle and Prigogine’s minimum entropy production principle If one casts a glance at the heading, he may think that the two principles are absolutely contradictory. This is not the case. It follows from the above discussion that both linear and nonlinear thermodynamics can be constructed deductively using Ziegler’s principle. This principle yields, as a particular case (Section 1.2.3), Onsager’s variational principle, which holds only for linear nonequilibrium thermodynamics. Prigogine’s minimum entropy production principle (see Section 1.1) follows already from Onsager–Gyarmati’s principle as a particular statement, which is valid for stationary processes in the presence of free forces. Thus, applicability of Prigogine’s principle is much narrower than applicability of Ziegler’s principle. For the relation between maximum entropy production and Jaynes’ MaxEnt principle, see: David Corfield has noted that Dewar’s paper relies on a paper by E. T. Jaynes in which he proposes something called the ‘Maximum Caliber Principle’: • E. T. Jaynes, Macroscopic prediction, in H. Haken (ed.) Complex systems – operational approaches in neurobiology, Springer, Berlin, 1985, pp. 254–269. This paper delves further into the relation: Abstract: Jaynes’ maximum entropy (MaxEnt) principle was recently used to give a conditional, local derivation of the “maximum entropy production” (MEP) principle, which states that a flow system with fixed flow(s) or gradient(s) will converge to a steady state of maximum production of thermodynamic entropy (R.K. Niven, Phys. Rev. E, in press). The analysis provides a steady state analog of the MaxEnt formulation of equilibrium thermodynamics, applicable to many complex flow systems at steady state. The present study examines the classification of physical systems, with emphasis on the choice of constraints in MaxEnt. The discussion clarifies the distinction between equilibrium, fluid flow, source/sink, flow/reactive and other systems, leading into an appraisal of the application of MaxEnt to steady state flow and reactive systems. On the n-Category Café, David Lyon wrote: Most of the posters here study beautiful subjects such as the quantum theory of closed systems, which has time reversal symmetry. I have a little bit of experience with open dissipative systems, which are not so pretty but may interest some of you for a moment. My advisor has experimentally explored entropy production in driven systems. Although I haven’t been personally involved in most of the experiments, we’ve had many interesting discussions on the topic. There is a very simple maximum entropy production principle in systems with a linear response. In this case the system evolves towards its maximum entropy state along the gradient, which is the direction of maximum change in entropy. This principle applies in practice to systems which are perturbed a small amount away from equilibrium and then allowed to relax back to equilibrium. As Tomate said, if you take a closed system, open it briefly to do something gentle to it, and then wait for it to relax before closing it again, you’ll see this kind of response. However, the story is very different in open systems. When a flux through a system becomes large, (e.g. close to the Eddington Limit for radiating celestial bodies, when heat flow follows Cattaneo’s Law, etc), the response no longer follows simple gradient dynamics and there is no maximum entropy production principle. There have been many claims of maximum or minimum entropy production principles by various authors and many attempts to derive theories based on these principles, but these principles are not universal and any theories based on them will have limited In high voltage experiments involving conducting spheres able to roll in a highly resistive viscous fluid, there is a force on the spheres which always acts to reduce the resistance $R$ of the system. This is true whether the boundary condition is constant current $I$ or constant voltage $V$. Since power dissipation is ${I}^{2}R$ in the first case and ${V}^{2}/R$ in the second case, one can readily see that entropy production is minimized for constant current and maximized for constant voltage. In experiments involving heat flow through a fluid, convection cells (a.k.a. Benard Cells) form at high rates of flow. For a constant temperature difference, these cells act to maximize the heat flow and thus the entropy production in the system. For a constant heat flow, these cells minimize the temperature difference and thus minimize the entropy production in the system. If one were to carefully read “This Week’s Finds in Mathematical Physics (Week 296)” one would be able to find several more analogous examples where the response of open systems to high flows will either maximize or minimize the entropy production for pure boundary conditions or do neither for mixed boundary conditions. As David points out, many variational principles for nonequilibrium systems have been proposed. They only hold in the so-called “linear regime”, where the system is slightly perturbed from its equilibrium steady state. We are very far from understanding general non-equilibrium systems, one major result being the “fluctuation theorem”, from which all kinds of peculiar results descend; in particular, the Onsager-Machlup variational principle for trajectories. For the mathematically-minded, I think the works by Christian Maes et al. might appeal to your tastes. Funnily enough, there exists a “minimum entropy production principle” and a “maximum entropy production principle”. The apparent clash is due to the fact that while minimum entropy production is an ensemble property, that is, it holds on a macroscopic scale, the maximum entropy production principle is believed to hold for single trajectories, single “histories”. I think the first is well-established, indeed a classical result due to Prigogine, while the second is still speculative and sloppy; it is believed to have important ecological applications. Similarly, a similar confusion arises when one defines entropy as an ensemble property (Gibb’s entropy) or else as a microstate property (Boltzmann entropy) Unfortunately, that I know, there is not one simple and comprehensive review on the topic of variational principle in Noneq Stat Mech. I went through Dewar’s paper some time ago. While I think most of his arguments are correct, still I don’t regard them as a full proof of the principle he has in mind. Unfortunately, he doesn’t explain analogies, differences and misunderstandings around minimum entropy production and maximum entropy production. In fact, nowhere in his articles does a clear-cut definition of MEP appear. I don’t think, like Martyusheva and Seleznev, that it is just a problem of boundary conditions, and the excerpt you take does not explain why these two principles are not in conflict in the regime where they both are supposed to hold. Let me explain my own take on the minEP vs. maxEP problem and on similar problems (such as Boltzmann vs. Gibbs entropy increase). It might help sorting out ideas. By “state” we mean very different things in NESM, among which: 1) the (micro)state which a single history of a system occupies at given times 2) the trajectory itself 3) the density of microstates which an ensemble of a large number of trajectories occupies at a given time (a macrostate). One can define entropy production at all levels of discussion (for the mathematically-inclined, markovian master equation systems offer the best set up where all is nice and defined). So, for example, the famous “fluctuation theorem” is a statement about microscopic entropy production along a trajectory, while the Onsager’s reciprocity relations are a statement about macroscopic entropy production. By “steady state”, we mean a stationary macrostate. The minEP principle asserts that the distribution of macroscopic currents at a nonequilibrium steady state minimizes entropy production consistently with the constraints which prevent the system from reaching equilibrium. As I understand it, maxEP is instead a property of single trajectories: most probable trajectories are those which have a maximum entropy production rate, consistently with constraints. As a climate scientist, you should be interested in the second as we have not an ensemble of planets among which to maximize entropy or minimize entropy production. We have one single realization of the process, and we’d better make good use of it. Maximum entropy production in climate science The above paper by Niven cites some papers applying these ideas to climate change. Here’s a review article on entropy maximization in climate physics: As mentioned by Ozawa et al., Lorenz suspected that the Earth’s atmosphere operates in such a manner as to generate available potential energy at a possible maximum rate. The available potential energy is defined as the amount of potential energy that can be converted into kinetic energy. Independently, Paltridge suggested that the mean state of the present climate is reproducible as a state with a maximum rate of entropy production due to horizontal heat transport in the atmosphere and oceans. Figure 2 shows such an example. Without considering the detailed dynamics of the system, the predicted distributions (air temperature, cloud amount, and meridional heat transport) show remarkable agreement with observations. Later on, several researchers investigated Paltridge’s work and obtained essentially the same result. On the other hand, some climate scientists are deeply skeptical of work based on the principle of entropy maximization. For example, Garth Paltridge has done work based on this principle, but others say his theories would imply that the Earth’s climate is independent of its rate of rotation, in blatant contradiction to what more detailed models show. So, it seems that climate models based on some principle of entropy maximization are highly controversial at best, at this time. The fact that Garth Paltridge has also written a book entitled The Climate Caper, arguing that the “case for action against climate change is not nearly so certain as is presented to politicians and the public”, adds an extra political aspect to this controversy. The Wikipedia article on Garth Paltridge says: Paltridge was involved in studies on stratospheric electricity, the effect of the atmosphere on plant growth and the radiation properties of clouds. Paltridge researched topics such as the optimum design of plants and the economics of climate forecasting, and worked on atmospheric radiation and the theoretical basis of climate. In terms of scientific impact, his most significant contribution has been to show that the earth/atmosphere climate system may have adopted a format that maximises its rate of thermodynamic dissipation, i.e. entropy production. This suggests a governing constraint by a principle of maximum rate of entropy production. According to this principle, prediction of the broad-scale steady-state distribution of cloud, temperature and energy flows in the ocean and atmosphere may be possible when one has sufficient data about the system for that purpose, but does not have fully detailed data about every variable of the system. This article argues against maximum entropy production in the Earth’s weather, but says some data are compatible with ‘maximum kinetic energy dissipation’: They write: Lorenz (1960) proposed that the atmospheric general circulation is organized to maximise kinetic energy dissipation (MKED), or, equivalently, the generation of APE (available potential energy). Similarly Paltridge (1975, 1978) suggested that Earth’s climate structure might be explained from a hypothesis of maximum entropy production (MEP). Closely related principles have been popular also in biology and engineering. For example the ‘‘maximum power principle’’, advocated by Odum (1988) for biological systems, is consistent with the maximum dissipation conjecture; the ‘‘constructal law’’ of Bejan and Lorente (2004) is very closely related to MEP as discussed by Kleidon (2009). A broad discussion on the maximizing power generation and transfer for Earth system processes can be found in Kleidon (2010). We conclude that the maximum entropy production conjecture does not hold within the climate system when the effects of the hydrological cycle and radiative feedbacks are taken into account, but our experiments provide some evidence in support of the conjecture of maximum APE production (or equivalently maximum dissipation of kinetic energy). Roderick Dewar on maximum entropy production On his website, Roderick Dewar writes: Theory and application of Maximum Entropy Production Cells, plants and ecosystems – like all open, non-equilibrium systems – import available energy from their environment and export it in more degraded (higher entropy) forms. But at what rate is energy degraded? According to the hypothesis of Maximum Entropy Production (MEP) – as fast as possible. MEP provides a new guiding principle for modelling the flows of energy and matter between plants, ecosystems and their environment, and offers a novel thermodynamic perspective on the origin and evolution of life. MEP has reproduced key features observed in a diverse range of non-equilibrium systems across physics and biology, from the large-scale distributions of temperature and cloud cover in Earth’s climate system to the functional design of the ubiquitous biomolecular motor ATP synthase. But in the absence of a fundamental explanation for MEP, it has remained something of a scientific Our aim is to elucidate the theoretical basis of MEP in order to underpin and guide its wider practical application. We are exploring the idea that MEP can be derived from the fundamental rules of statistical mechanics developed in physics by Boltzmann, Gibbs and Jaynes – implying that MEP is a statistical principle that describes the most likely properties of non-equilibrium systems. Ultimately our goal is to extend the application of MEP from climate modelling – where previous MEP work has mostly focused – to plant and ecosystem modelling. We collaborate with an international network of MEP researchers including: Axel Kleidon (Max-Planck Institute for Biogeochemistry, Jena), Peter Cox & Tim Jupp (Exeter University), Amos Maritan (Padua University), Robert Niven (UNSW@ADFA), Hisashi Ozawa (Hiroshima University), Davor Juretić & Pasko Zupanović (Split University). His website also includes this reading list: • R. C. Dewar, Information theoretic explanation of maximum entropy production, the fluctuation theorem and self-organized criticality in non-equilibrium stationary states, J. Phys. A (Mathematical and General), 36 (2003), 631-641. Summary here. • R. C. Dewar, Maximum entropy production and non-equilibrium statistical mechanics, in Non-Equilibrium Thermodynamics and Entropy Production: Life, Earth and Beyond, eds. A. Kleidon and R. Lorenz, Springer, New York, 2004, 41-55. • R. C. Dewar, D. Juretić, P. Zupanović, The functional design of the rotary enzyme ATP synthase is consistent with maximum entropy production, Chem. Phys. Lett. 430 (2006), 177-182. • R. C. Dewar, Maximum entropy production and the fluctuation theorem, J. Phys. A.: Math. Gen. 38 (2005), L371-L381. • Dewar RC. 2003. Information theoretic explanation of maximum entropy production, the fluctuation theorem and self-organized criticality in non-equilibrium stationary states. Journal of Physics A (Mathematical and General) 36, 631-641. • R. C. Dewar, Maximum entropy production as an inference algorithm that translates physical assumptions into macroscopic predictions: don’t shoot the messenger, Entropy 11 (2009), 931-944. Contribution to Special Issue (eds. Dyke J, Kleidon A): What is Maximum Entropy Production and how should we apply it? • R. C. Dewar, Maximum entropy production and plant optimization theories, Phil. Trans. Roy. Soc. B 365 (2010) 1429-1435. Contribution to Theme Issue (eds. Kleidon A, Cox PM, Mahli Y): Maximum entropy production in ecological and environmental systems: applications and implications. The principle of least action While the principle of least action most commonly appears in classical mechanics and classical field theory, it also shows up in nonequilibrium statistical mechanics. See for example: Kohler’s variational principle “Kohler’s variational principle” appears in the kinetic theory of gases. It is mentioned in this book: • J. H. Ferziger and H. G. Kaper, Mathematical Theory of Transport Processes in Gases, 1972. Here is a quote: Thus we can formulate the following maximum principle: In non-equilibrium systems the distribution of the molecular velocities is such that, for given temperature and velocity gradients, the rate of change of the entropy density due to collisions is as large as possible. This maximum principle, together with a similar minimum principle, was first given by Kohler (1948). A discussion of these and other variational principles can be found in a paper by Ziman (1956) or in a paper by Snider (1964a). It may have been proved within the Chapman–Enskog theory of gases. In the appendix to Chapter 5 of this book: • D. N. Zubarev, V. G. Morozov, G. Röpke, Statistical Mechanics of Nonequilibrium Processes, 1996. another (more general?) principle is discussed, that is claimed to be similar to Kohler’s variational principle. The constructal law The ‘constructal law’ is yet another maximum principle that has been proposed in biology: This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes. Other references From the abstract: We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables. Yet the relations of among all these levels of description may be constructed explicitly and understood in terms of information conditions. The example systems considered introduce methods that may be used to systematically construct descriptions with all the features familiar from equilibrium thermodynamics, for a much wider range of systems describable by stochastic
{"url":"http://www.azimuthproject.org/azimuth/show/Extremal+principles+in+non-equilibrium+thermodynamics","timestamp":"2014-04-16T15:58:56Z","content_type":null,"content_length":"57541","record_id":"<urn:uuid:2702eb70-a7b8-4b5d-aeca-759c093eee85>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Methodology for Large Astronomical Surveys Many astronomical surveys are not amenable to traditional multivariate analysis and classification, and present serious needs for methodological advances by statisticians. Four major difficulties are outlined here. First, fluxes or other measured quantities are subject to heteroscedastic measurement errors with known variances. That is, each variable of each object has an associated measurement of the variable uncertainty, and these uncertainties can differ for each object. Surprisingly, statistical methodology is very poorly developed for such situations. For instance, there is no clustering algorithm that weights points by their known measurement errors. Only the LISREL model of the multivariate linear regression problem can begin to treat known heteroscedastic measurement errors (Jöreskog & Sörbom 1989). Second, objects may be undetected at one or many wavebands, leading to upper limits or censored data in one or many variables. A mature field of statistics known as survival analysis, developed principally for biomedical and industrial reliability applications, has been developed for censored datasets. A suite of survival methods is now widely used in astronomy (Feigelson 1992). However, most survival statistics apply only to univariate problems; Cox regression, the principal multivariate technique, permits censoring only in the single dependent variable. A more general partial co-parametric methods, Bayesian approaches, outlier detection and robust methods, multicollinearity and ridge regression, goodness-of-fit measures, nonparametric density estimation, wavelet analysis, bootstrap resampling and cross-validation, mathematical morphology, and many aspects of traditional multivariate analysis. The methodology for understanding multivariate databases is vast and constantly growing. Multivariate statistics are briefly reviewed in an astronomical context by Babu & Feigelson (1996), and are more thoroughly described (with FORTRAN codes) by Murtagh & Heck (1987). Many monographs presenting multivariate statistics are available, such as Johnson & Wichern (1992). While commercial statistical packages are the most powerful tools for implementing statistical procedures, a considerable amount of software is in the public domain on the World Wide Web. An informative essay on statistical software by Wegman (1997) can be found at Information on commercial statistical software packages such as SAS, SPSS and S-PLUS is available at Significant archives of on-line public domain statistical software reside at StatLib (http://lib.stat.cmu.edu) and the Guide to Available Mathematical Software (http://gams.nist.gov). StatLib provides many state-of-the-art codes useful to astronomers such as XGobi, ODRPACK, loess and MARS. Penn State operates the Statistical Consulting Center for Astronomy (http://www.stat.psu.edu/scca) for astronomers with statistical questions, and is initiating a site with links to statistical software on the Web (http://www.astro.psu.edu/statcodes). This work was supported by NSF DMS 9626189, NASA NAGW-2120 and NAS 5-32669.
{"url":"http://ned.ipac.caltech.edu/level5/Feigelson/F_B4.html","timestamp":"2014-04-20T18:24:58Z","content_type":null,"content_length":"5618","record_id":"<urn:uuid:3bd02ec8-2210-4faa-bbd4-acfbe4361981>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
recursively enumerable set A potentially infinite set whose members can be enumerated by a universal computer; however, a universal computer may not be able to determine that something is not a member of a recursively enumerable set. The halting set is recursively enumerable but not recursive. Related category SETS AND SET THEORY
{"url":"http://www.daviddarling.info/encyclopedia/R/recursively_enumerable_set.html","timestamp":"2014-04-20T13:21:12Z","content_type":null,"content_length":"5500","record_id":"<urn:uuid:05dc96dc-d34b-4df4-9787-1d9caf181976>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Best Response You've already chosen the best response. where is the statement.. still line n is perpendicular to the plane containing j1 and j2 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50dfb79ce4b050087cd0b745","timestamp":"2014-04-17T22:07:17Z","content_type":null,"content_length":"28047","record_id":"<urn:uuid:777b1fd4-98a2-4b6a-ae72-ea9a985b41cf>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: AMS Chelsea Publishing 1964; 1212 pp; hardcover Volume: 84 Reprint/Revision History: first AMS printing 1999 ISBN-10: 0-8218-1931-3 ISBN-13: 978-0-8218-1931-9 List Price: US$120 Member Price: US$108 Order Code: CHEL/84.H In addition to the standard topics, this volume contains many topics not often found in an algebra book, such as inequalities, and the elements of substitution theory. Especially extensive is Chrystal's treatment of the infinite series, infinite products, and (finite and infinite) continued fractions. The range of entries in the Subject Index is very wide. To mention a few out of many hundreds: Horner's method, multinomial theorem, mortality table, arithmetico-geometric series, Pellian equation, Bernoulli numbers, irrationality of \(e\), Gudermanian, Euler numbers, continuant, Stirling's theorem, Riemann surface. This volume includes over 2,400 exercises with solutions. Volume I • Fundamental laws and processes of algebra • Monomials--Laws of indices--Degree • Theory of quotients--First principles of theory of numbers • Distribution of products--Elements of the theory of rational integral functions • Transformation of the quotient of two integral functions • Greatest common measure and least common multiple • Factorisation of integral functions • Rational fractions • Further applications to the theory of numbers • Irrational functions • Arithmetical theory of surds • Complex numbers • Ratio and proportion • On conditional equations in general • Variation of a function • Equations and functions of first degree • Equations of the second degree • General theory of integral functions • Solution of problems by means of equations • Arithmetic, geometric, and allied series • Logarithms • Theory of interest and annuities • Appendix • Results of exercises • Index for parts I and II Volume II • Permutations and combinations • General theory of inequalities • Limits • Convergence of infinite series and of infinite products • Binomial and multinomial series for any index • Exponential and logarithmic series • Summation of the fundamental power-series for complex values of the variable • General theorems regarding the expansion of functions in infinite forms • Summation and transformation of series in general • Simple continued fractions • On recurring continued fractions • General continued fractions • General properties of integral numbers • Probability, or the theory of averages • Results of exercises • Index of proper names for parts I and II • Index for parts I and II
{"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-84-H","timestamp":"2014-04-20T02:32:45Z","content_type":null,"content_length":"16634","record_id":"<urn:uuid:ed9e0dcf-e564-43fb-8f18-895f207f96d1>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: The amount of algebraic topology a graduate student specializing in topology must learn can be intimidating. Moreover, by their second year of graduate studies, students must make the transition from understanding simple proofs line-by-line to understanding the overall structure of proofs of difficult theorems. To help students make this transition, the material in this book is presented in an increasingly sophisticated manner. It is intended to bridge the gap between algebraic and geometric topology, both by providing the algebraic tools that a geometric topologist needs and by concentrating on those areas of algebraic topology that are geometrically motivated. Prerequisites for using this book include basic set-theoretic topology, the definition of CW-complexes, some knowledge of the fundamental group/covering space theory, and the construction of singular homology. Most of this material is briefly reviewed at the beginning of the book. The topics discussed by the authors include typical material for first- and second-year graduate courses. The core of the exposition consists of chapters on homotopy groups and on spectral sequences. There is also material that would interest students of geometric topology (homology with local coefficients and obstruction theory) and algebraic topology (spectra and generalized homology), as well as preparation for more advanced topics such as algebraic \(K\)-theory and the s-cobordism theorem. A unique feature of the book is the inclusion, at the end of each chapter, of several projects that require students to present proofs of substantial theorems and to write notes accompanying their explanations. Working on these projects allows students to grapple with the "big picture", teaches them how to give mathematical lectures, and prepares them for participating in research seminars. The book is designed as a textbook for graduate students studying algebraic and geometric topology and homotopy theory. It will also be useful for students from other fields such as differential geometry, algebraic geometry, and homological algebra. The exposition in the text is clear; special cases are presented over complex general statements. Graduate students and research mathematicians interested in geometric topology and homotopy theory. "Many exercises and comments in the book, which complement the material, as well as suggestions for further study, presented in the form of projects ... The book is a nice advanced textbook on algebraic topology and can be recommended to anybody interested in modern and advanced algebraic topology." -- European Mathematical Society Newsletter "The book might well have been titled `What Every Young Topologist Should Know' ... presents, in a self-contained and clear manner, all classical constituents of algebraic topology ... recommend this book as a valuable tool for everybody teaching graduate courses as well as a self-contained introduction ... for independent reading." -- Mathematica Bohemica • Chain complexes, homology, and cohomology • Homological algebra • Products • Fiber bundles • Homology with local coefficients • Fibrations, cofibrations and homotopy groups • Obstruction theory and Eilenberg-MacLane spaces • Bordism, spectra, and generalized homology • Spectral sequences • Further applications of spectral sequences • Simple-homotopy theory • Bibliography • Index
{"url":"http://ams.org/bookstore?fn=20&arg1=gsmseries&ikey=GSM-35","timestamp":"2014-04-17T22:29:52Z","content_type":null,"content_length":"18064","record_id":"<urn:uuid:c1df7925-d118-4e44-b4b6-2414485af974>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Equations by Graphing 10.2: Quadratic Equations by Graphing Created by: CK-12 Learning Objectives At the end of this lesson, students will be able to: • Identify the number of solutions of quadratic equations. • Solve quadratic equations by graphing. • Find or approximate zeros of quadratic functions. • Analyze quadratic functions using a graphing calculator. • Solve real-world problems by graphing quadratic functions. Terms introduced in this lesson: distinct solutions double root no real solutions Teaching Strategies and Tips Emphasize the connection between algebra and geometry: • Finding the roots of a quadratic function (algebra) is equivalent to finding the $x-$ • Have students use correct vocabulary: equations have roots or zeros; graphs have $x-$intercepts. Use Example 4 to explore the graph of an equation using a graphing calculator. • In graph mode, use the cursor to scroll over the $x-$ • In graph mode, from the CALC menu, use ZERO and MAXIMUM (MINIMUM) to find an $x-$ • Use the built-in table to find an $x-$$y = 0$$y$ • Emphasize approximating roots of an equation by reading a graph is an essential skill; real-world equations rarely factor over the integers. In Example 5, encourage students to explore the equation on a graphing calculator using the WINDOW menu. By changing parameters XMIN, XMAX, YMIN, and YMAX, students learn to find an appropriate display for any graph. Have students interpret their answers. In Example 5, the two roots indicate the two times when the arrow is on the ground. General Tip: In Review Questions 1-12, have students check their answers with a graphing calculator for added practice. Error Troubleshooting In Review Question 6, remind those students inputting the equation into a calculator to use proper syntax. The following are equivalent: • $y_1 = (1/2)x^{\land}2-2x+3$ • $y_1 = x^{\land}{2/2}-2x+3$ • $y_1 = (x^{\land}2)/2 - 2x + 3$ You can only attach files to None which belong to you If you would like to associate files with this None, please make a copy first.
{"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/10.2/","timestamp":"2014-04-21T09:40:35Z","content_type":null,"content_length":"114890","record_id":"<urn:uuid:f8407e74-e100-46d5-9748-40a66c87006e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: i^2=1 Replies: 3 Last Post: Feb 27, 2013 3:04 AM Messages: [ Previous | Next ] Posted: Feb 25, 2013 2:19 AM No, not I^2=-1 :-) But similarly: I'd like to have a symbol with the property i^2=1. And not like slashdotting with {i^2->1} (which then ignores i^3 and (i+1)(i-1)), but really automatically i^2=1 as soon as it appears somewhere, like for I. How do I define that? Hauke Reddmann <:-EX8 fc3a501@uni-hamburg.de Die Weltformel: IQ+dB=const.
{"url":"http://mathforum.org/kb/message.jspa?messageID=8409728","timestamp":"2014-04-19T16:06:51Z","content_type":null,"content_length":"19539","record_id":"<urn:uuid:d7ea0b9b-9f64-4e01-9772-3ba9d24b32fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Upton, PA Math Tutor Find an Upton, PA Math Tutor ...Regardless of the setting, the subject, or the level of instruction, my goal remains the same: I strive to motivate and inspire students to discover a learning style that will facilitate their academic growth and success. I value and respect the incredible variety of personalities that I encount... 17 Subjects: including ACT Math, SAT math, English, reading ...For this reason, I like to use the 4-Square Writing Program for struggling writers. It allows them to easily organize their reading and writing, and the graphic organizer never changes, they just use it differently. This eliminates them having to "choose the right one". 20 Subjects: including SAT math, dyslexia, geometry, algebra 1 ...I am a Certified teacher who is rated highly qualified to teach mathematics at the secondary level. For fourteen years I taught Special Education in the Phila. School System. 31 Subjects: including prealgebra, algebra 1, English, reading ...I enjoy working with students and take pride in showing students their full potential can be obtained through hard work and dedication. I look forward to meeting and working with students and helping them achieve their academic goals. Thank you. 9 Subjects: including linear algebra, algebra 1, algebra 2, geometry ...I can tutor for any middle to lower level college math classes and any high school math classes, except statistics, which is not my cup of tea. I love all kinds of science, including physics and chemistry. I have tutored before, and in most of my classes I tended to end up helping the other students. 19 Subjects: including algebra 1, algebra 2, calculus, grammar Related Upton, PA Tutors Upton, PA Accounting Tutors Upton, PA ACT Tutors Upton, PA Algebra Tutors Upton, PA Algebra 2 Tutors Upton, PA Calculus Tutors Upton, PA Geometry Tutors Upton, PA Math Tutors Upton, PA Prealgebra Tutors Upton, PA Precalculus Tutors Upton, PA SAT Tutors Upton, PA SAT Math Tutors Upton, PA Science Tutors Upton, PA Statistics Tutors Upton, PA Trigonometry Tutors Nearby Cities With Math Tutor Bala, PA Math Tutors Belmont Hills, PA Math Tutors Broad Axe, PA Math Tutors Carroll Park, PA Math Tutors Center Square, PA Math Tutors Gulph Mills, PA Math Tutors Ithan, PA Math Tutors Merion Park, PA Math Tutors Oakview, PA Math Tutors Radnor, PA Math Tutors Rose Tree, PA Math Tutors Rosemont, PA Math Tutors Saint Davids, PA Math Tutors Valley Forge Math Tutors Wayne, PA Math Tutors
{"url":"http://www.purplemath.com/upton_pa_math_tutors.php","timestamp":"2014-04-17T00:53:09Z","content_type":null,"content_length":"23618","record_id":"<urn:uuid:0ede05af-5ff0-4a16-b9dc-484ed954591f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Optic theory 12-01-2006, 04:06 AM #1 Newbie's ~2.5"dia ~1.25"fl aspheric lens with AR coating measured 37k lux with a cree xre at 3.some watts, IIRC. Way back I remember some mag luxeonIII measuring 10-12k lux. Point is, we have a lot of data points on this. What I am missing in these boards, is guidance towards a theoretical and numerical approach to this as well. So that's what I am fishing for here. I have the feeling that ballpark numbers could be found without expensive and complicated raytracing. To start with a simple question, if I were to scale an aspheric to half the dimensions, would I get 1/4 as many lux? I'm thinking that degrees in=degrees out; If I am sitting on the flat surface of the lens, the LED is covering x degrees of my view; and that is the resulting beam angle as well. If I and the lens gets two times closer to the LED, then the beam angle is doubled - resulting in the light being spread over an area four times as large. There are lots of factors which may or may not be relevant to a napkin-class calculation - I would very much welcome the discussion of these. Last edited by AilSnail; 12-14-2006 at 12:55 PM. Last edited by AilSnail; 12-08-2006 at 02:20 PM. edited out some wrong stuff. Last edited by AilSnail; 12-08-2006 at 02:19 PM. Last edited by AilSnail; 12-08-2006 at 02:20 PM. Let me try again. Here is a chart showing how many percent of the total emitted lumens of an xr-e which falls within any angle. It shows that newbie's lens catches about the same as the hd45 and other 1:1(L/Ř) reflectors, about 70%. It is made with the following method: The beam is divided into rings, and each ring's area on a sphere is calculated, and multiplied by the intensity as shown in the xre datasheet: For instance, between 20 and 40 degrees included (10-20 deg half angle), is a ring that projects upon 0,0225576 parts of a sphere. The average intensity for that part of the beam is then ogled from the datasheet graph - I'd say about 90 percent. 0,0225576 is multiplied with 90, to find how much of the total output of the led falls within this ring. Then the rings' light amount are converted to percent of the total light amount and added from 0-180deg and tossed into the chart. apparently in an inexplicable way. Hope it works. keep in mind the difference between half angle and included angle: 1:2 Last edited by AilSnail; 12-14-2006 at 12:59 PM. Here is an image that shows what I think happens with the cree and a true parabolic ~45x45mm reflector. The apparent die image moves around a bit. the angles on the side should have a "-" in front of them. Last edited by AilSnail; 12-14-2006 at 01:01 PM. Hi AilSnail, If you use a f/1 aspheric optical lens with lets say 50mm diameter, the f/1 determines the angle of light that is collected from the led.. So the f/1 determines the amount of lumens you collect. Now if you would use a f/1 lens with 25mm diameter, the amount of lumens you collect stays the same, because it grabbes the same angle of light. BUT: (And this is a fact! You are right: with the smaller lens closer to the led the emitting surface seems 4 times bigger to that lens: Conclusion is: with the laws of light the 25mm lens 'sees' a 4 times bigger surface, so it illuminates a 4 times bigger surface at 10 metres distance than the 50mm lens does!! So with the 25mm lens the surface brightness in the beam at 10 metres is 1/4 of the surface brightness of the beam comming from the 50mm lens.. That is why the lens- or reflector-diameter determines the throw of a torch. (together with the surface brightness of the source) Fact is that the Cree and Luxeon emitters are front-emitters, so don't work very well with conventional reflectors, indeed if you use conventional reflectors you'll need them very deep !! The acrylic optics, especially designed for these emitters do a much better job: They grab the light where it is most emitted: directly at the front. They are also much more efficient because they use the 100% internal reflection law of light ! A conventional reflector reflects about 84% of the light at the most! Any questions,, please ask.. The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! Hey, nice graphs! I haven’t seen people here use those graphs of ray tracing. I agree that it is very complex. It is too bad that nobody has replied yet. I have a post in this "general lighting discussion" area and only people who happened to wander on would give any help. Sometimes you have to search around to some of the "old" and experienced members that know their stuff and invite them over. It sucks that you have to post important lighting questions here only because they are not directly flashlight related. I was thrown in here because I was trying to discuss my LED headlight project. Oh well. Now you asked some questions, and I am probably not qualified to answer them, but I do have some understanding of optics. … if I were to scale an aspheric to half the dimensions, would I get 1/4 as many lux? Is this when the lens stays the same distance away from the light source? Wouldn’t you get the same lux if you move the aspheric lens (scaled to half the size) to half of that distance? That may be because it is at the correct focal point of the lens. If this is true, I see what you mean that the beam angle would appear to be bigger. That would happen because to scale (with this smaller and closer lens), the LED die size would appear to the lens as being “bigger”. Then you said: If I and the lens gets two times closer to the LED, then the beam angle is doubled… Is the lens the same size as the original you were talking about in this statement (not the half size)? If so, then the beam angle coming out should be bigger (I don’t really know if it would be a pretty and exact double size). This should be because the lens is moved closer past the focal point, right? If the lens was half of the size and half of the distance, then the beam angle coming out would be only slightly larger, right? I hope that you know what I mean when I talked about scaling down the lenses and distances, as well as the apparent “size” of the LED die compared to the lens at this scaled down size. Now I see that you changed your focus from the aspheric lens to a parabolic reflector. Do you have a question about this, or are you just showing what you had found? It is interesting using ray tracing. I tried doing this by hand to find the intensity of my LED headlights at different angles, trying to pretend that I do not have a cluster of 18 lights, but instead, one continuous 9 degree optic and light source. I wanted to find the beam angle of the projected light, and then consider how bright the light would be projected on the ground with the light tilted down at 1.5 degrees. It is tough and I will just rely on testing the finished product. Anyway, what do you need to know now? Are you still trying to achieve that awesome 37K lux? That will be tough. That is like a 230 cd/lm on-axis efficiency with the optic (with the XR-E at 160lm). I mean the standard Fraen FHS optic has a 21 cd/lm on-axis efficiency, as well as the typical 27mm IMS reflectors. The optic that Cree makes has a supposed 46cd/lm on-axis efficiency (according to the specs sheet). These are all tiny optics, so you might have to stick with a larger reflector or lens. I don’t know. Did you need to achieve this 37Klux with a smaller optic or something, or are you trying to enhance the lux with the same size 2.5” by 1.25” optic? Also, with this claimed 37K lux, was this measured at 1 meter, or even closer? You can blow that number out of the water with using those concentrator optics made by Polymer Optics, but this measurement is at 14.5mm. You can get up to 5 million lux at this distance, all in a 6mm diameter circle! This is with the LED at 160lm. This would be useless as a flashlight because the 14.5mm is the focal point, and the beam spreads out further from that, BUT, what if you could incorporate a second lens to turn this intensely concentrated light to a parallel beam? This would take some experimentation with different optics, but it is doable. I thought about getting a concentrator optic from an online store (used in projectors) to take this spot and project it forward at a narrow angle, but never did so. It even had the perfect focal point (of light coming in). A aspheric lens would do the same (they are similarly shaped), all you would have to do is turn it so the convex end would be facing the concentrator optic. Then just focus it in. I remember another way to project the light forward at huge distances, but I don’t think that the lux rating is that extreme. I managed to clamp in my old V-binned LuxV flashlight with 27mm IMS reflector into our 6in diameter reflector telescope where the eyepiece would have went. The thing probably was not efficient, but I could project a beam that is 8-12 inches across at huge distances (+100yds). The beam isn’t smooth at all, but it was cool because it felt like you were holding a light cannon. Oh, I just remembered that I seen somewhere in CPF where someone used these XR-E’s in their D-sized Maglites with stock reflectors. This person made a mistake (I think) and they took off the dome lens from the XR-E. There was some silicone inside, but the wires were intact. They turned their XR-E with the odd 70 degree half angle intensity into a Lambertian beam pattern LED. The light was better focused with the stock Maglite reflector and more lux was produced! The beam was far more usable. If you can do this, then it would be just as easy to focus the light of the XR-E as the standard Lambertian Luxeon LED. As long as the reflector or optic doesn’t press against the delicate die and die wires (like an optic or reflector with a holder or legs), then you would be good! I will try to find that thread where this guy modified his XR-E and post a link. What do you think? Sorry if my posts are too long. I cant help it. I hope you get stuff figured out. I think that what you are doing is just as important to the LED and flashlight world as what some do with optimizing batteries and driver circuits. Please keep us updated with graphs and pictures of your findings… Ha! I just realized that this was posted 5 minutes after the last post by Ra. It took me like 15 minutes to type this thing, so I missed that. Ra was right on the dot. Nice. Some of the things that I said was similar to what Ra said, but he said it in a shorter and more clearer way. If it sounds like what I said was wrong, well, I meant the right thing, but I just didn't say it right. Does that make sense? Lol... Last edited by Gryloc; 12-11-2006 at 04:28 PM. Thanks! Nice to have some theories confirmed and seen from slightly different angles. It is not real raytrace, but a rather long rhino3d session. The apparent die placement in the drawing is only what it looks like in the real world, and the base of the package is placed at the fp of the parabola. I think it was 3rd shift who beheaded his cree and put it in a light. Newbie's lens was 37k at 1m. Ra, I like how you are always talking about lux at 10m and more! I suspect that those readings can give a different picture than the 1m readings - which I believe can be skewed a lot from both the effect of aperture diameter and from crossover points in the beam. I do have a question: I should like to be able to have excel (or openoffice) return an y value from the first graph when i feed it an x value. So that if I put 70degrees into one cell, another cell will show 50%. I think it would be called a look-up table. Tony, I like it here in the slow moving backwaters. Thanks. Well, I found another. Two have done this and took pictures so far (that I have seen), but several others claimed that they have done it. I guess that this mod makes the tint shift to a warmer color, but in a good way (as some has said). I guess that the lens can be removed easily, but the metal ring is more difficult. You have to use a cutting disk (like a dremel) to cut a small notch in the ring for grip, then set a sharp flathead screwdriver into the notch and bend and twist to pry it off. According to others findings, the hemispherical lens is made of a solid "glass"-like material (it isnt hollow). Between the solid lens and the die (in the region surrounded by the metal ring), it is full of a soft silicone material. So you can safely remove the lens (responsible for the 70 degree angle), and get a more lambertian beam. Make sure you cut or score the silicone on the inside edge of the metal ring before you pry it of. They say that there is a chance that if you dont, all the silicone may go with it, messing up the bond wires possibly. For my next project, I might take the lens off of four of these in my Quad 2D Maglite upgrade. This way, I can use the existing IMS20XA reflectors that sat atop the LuxeonIII's. I may have to modify them a little to focus them, but that wont be difficult. I thought about using some clear silicone glue to attach old lambertian domes from some dead K2's I had to protect the dies even further, as well as make the beam pattern nicer. If I scrape a little silicone out, the surface will then be very rough, throwing light in random ways. If I use a dome over top filled with silicone glue, then those rough spots would be smoothed out (as long as I dont trap any air bubbles). This will be weird for the XR-E because the original dome was slid and glass-like, while the K2 dome is softer and filled with a little silicone. Anyway, sorry. Here are those links: This was the first one I seen by 3rd-shift: If you look at the first post in this thread, he made a link to the previous thread where there are better explanations. Then I stumbled upon this one by Download. He shows it a little ways down: AilSnail, I wish I can help you with the Exel spreadsheet. I learned how to add in formulas, and I have used it to measure out beam angles of my project before, but it is really time consuming. I tried today for a simple task, but it just didnt like me and I had to enter data in manually. I have no clue how to set up "programs" where you enter a number and it gives you a response automatically (like a calculator). I have seen it done with an elaborite database program made in Exel, but I am clueless how it was made. That first graph of yours is confusing me. I am having a hard time gripping what you are trying to achieve with that one. It seems tough to read it and put it to use. Maybe if it turn it upside-down. LOL... Yep, you are right Tony... But perhaps I can enlighten you a bit: In that graph you can see how much of the lotal lumens-output you grab with a sertain lens: If you take a F/1 lens: It has a focal-angle of about 53 degrees, you can see in the graph that it will collect a little over 30% of the total lumens generated.. If you take a F/0.5 lens you would grab a 90 degree angle: 70% of the lumens output according to the graph.. Hope this clears things up for you.. Last edited by Ra; 12-12-2006 at 05:00 AM. Reason: typo The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! Tony - Yes! It is "right" for lenses only. For reflectors you have to look at it the other way! Does it make sense now? In the second graph, you can see that the beam has 50% intensity at 80 degrees. I believe this is what is usually referred to as "beam angle"? Maybe not. In the first graph, you can see that half of the light falls within a 70 degree cone. edit: you beat me to it Ra. thanks. Last edited by AilSnail; 12-12-2006 at 05:08 AM. Here is the same data presented differently; It shows how much of the light of the XR-E that will hit a lens of a given F/#. The F/# is the same as focal length divided by diameter. Last edited by AilSnail; 12-12-2006 at 06:10 AM. Some wrong stuff deleted Last edited by AilSnail; 12-15-2006 at 08:24 AM. I think this drawing from wikipedia would be nice in this thread: Keep up the good work guys! This is one of the best discussions I read here on CPF in awhile. ......They are also much more efficient because they use the 100% internal reflection law of light ! A conventional reflector reflects about 84% of the light at the most! I am not familiar with the 100% internal reflection law of light? The acrylic lenses I have seen which for the most part are a combination of TIR and refractive lens element certainly are not 100% efficient in their TIR aspect as I see much light leaving from the optic when viewed from the side? I understand that SureFire uses a very expensive grade of optical plastic and I believe they do see transmision efficiencies greater than those of an external reflective surface. In terms of total efficiency, in the case of a reflector, we get at best your 84% transmision of that light actually encountering the reflector but we get 100% of the light which leaves the optic unaltered in direction. In the acrylic optic, you have all of the light traveling through the acrylic and its absorption loss will be levied on 100% of the source's output. Additionally, you will loose some light out the side due to the T in TIR not being Total but some fraction there of. Yes? I guess the only point I am trying to make here is that there will be losses regardless of the type of optic used. If the primary goal is minimal loss in flux but with a secondary optic required, then one can identify the type of optic which results in minimal loss. Typically one is more concerned with more specific photon management and say a tight collimation of light. In such a case, the delivery of light on target is of primary concern and the nature of loss of light is not as important as the loss itself. Build Prices .... some mods and builds (not 4 sale) "Nature can be cruel- but we don't have to be."~ Temple Grandin Last edited by AilSnail; 12-12-2006 at 01:38 PM. This is at approx 1m. Last edited by AilSnail; 12-12-2006 at 02:54 PM. Hi McGizmo, welcome to this dicussion... I said that the acrylic optics use the 100% internal reflection law of light. I didn't say that the acrylic optics have 100% efficiency !! All optics have losses, or else you would not see any light comming from the side! Earlier in this discussion we agreed on the fact that the Cree emitters put out most of their lumens within a 30 degees or so at the front! Conventional reflectors are designed to work best with light-sources that emit their lumens sideways (Halogen with axial fiament, automotive HID, short arc.) So not with light-sources that emit at the front ! The acrylic optics use a combination of internal refraction and internal reflection, that means that these optics grab almost the entire lumens output of the emitter: Even the center-hole is covered by a collimating lens ! If they are kept very clean and undamaged, they have much higher total efficiencies compared to conventional reflectors with the same diameter (ofcource only if they are used with emitters like Cree and Luxeon) AND NOW FOR SOMETHING COMPLETELY DIFFERENT : THROW : And this will be the hard part of this post...! Let me try it this way: Lets assume you are an object 100 yards away, that wants to be illuminated by a torch. Which are the things that determine the amount of light you actually receive 100 yards away ?? Or, in other words, what determines the apparent brightness of a torch, seen from a distance?? Well, two major things: Surface brightness of the source, and the dimensions of the source ! NOTHING ELSE ! (well,, atmospheric conditions, ofcource,, but lets say they are perfect...) A reflector not only reflects lumens, it also reflects the sources surface brightness. FACT: The apparent surface brightness of the reflector cannot ever be higher than the surface brightness of the source! It always is lower due to reflection losses in the reflector. Now its time for a picture: What does it show? Two operating torches, directly photographed in the hottest part of the beam (through a type 13 welding-filter) A 35watt halogen torch at the left, a 10watt HID torch at the right. You see the reflectors, lit by the emitting surfaces (filament and arc) These two torches have the same throw !! The low surface brightness of the halogen is compensated by the bigger reflector! FACT: A torch has max throw when the entire reflector is lit by the source, seen from a distance. You are propably not going to beleve what I'm going to tell you now: THROW IS ABSOLUTELY NOT AFFECTED BY THE FOCAL LENGTH OF THE REFLECTOR OR LENS !! ONLY LUMENS OUTPUT IS !! From a distance the object 'sees' only a two-dimensional surface, no matter how deep or how shallow the reflector. If you have a deep reflector, you collect more lumens from the source, that results in a wider beam (more sidespill) But you'll have the same throw with a shallow reflector, but because you send less lumens towards the object, the beam will be tighter (more laserlike if visible) So, bottom line: With a lens diameter of 50mm and a focal length of 1 metre you'll have exactly the same throw as with a 50mm lens with a focal length of 30mm !! Only the beam of the first will be useless because of the pathetic amount of lumens its made of.. I hope this isn't too long for you.. any questions, please ask.. The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! Doug, I like it too - Ra's last post has me confused though. The green filtered picture illustrates how lux at 1m gets skewed in favor of small reflectors. Ra, I'm not sure I am following you. You seem to be opposing your earlier post. A short focal length will not compete with a long focal length lens when it comes to illuminate something far away, especially if f/# is kept constant. I'm going to think a bit more on what you are saying. Last edited by AilSnail; 12-12-2006 at 03:59 PM. This one illustrates a long and a short fl, with the same diameter. It shows that the light is spread over a larger area with the shorter fl, like we concluded earlier. It also illustrates that in a perfect lens, the only light beams which are going straight ahead, are the ones coming from the direction of the infinately small focal point. Now, I happen to have a 21.5mm aperture x 6.9mm bfl aspheric, and another lens with a focal like 50mm or so, which I masked to to get the apertures the same. The latter is much better at lighting up distant targets. So if I understand correctly what you are saying, Ra, either your theory is flawed, or my equipment, method or eyes skews my observation in favor of the longer fl. Last edited by AilSnail; 12-12-2006 at 05:00 PM. That is what I meant with "This will be the hard part of this post !" On these forums there is a lot of confusion and mis understanding about what determines the throw of a light ! But there is hope: Take a good look at the last picture AilSnail posted: Now concentrate only on the green beams: That beam, comming out of the lens, towards the object is exactly the same with both lenses !! That means that the center of each beam has exactly the same amount of light (lux at 1m), This also means: THE THROW OF THE CENTER BEAM IS THE SAME WITH BOTH LENSES !! And that is that I've been trying to tell you.. For the entire beam, I hope the earlier posts were clear enough about that: If the lens "sees" a larger surface, it projects a larger surface, but if the lens diameter stayes the same, the surface brightness at the object stayes the same! So, back to the pic AilSnail posted: The lens with the short focus projects a bigger spot at a sertain distance compared to the lens with the longer focal length. But in the middle of both spots the lux-reading will be the same, so they'll have the same throw. Now what happens is: The lens with the short focal length grabes much more lumens from the emitter, thats why the beam is wider. Now if you illuminate an object with this wider beam objects in the neighborhood light up as well by the wider beam, That is the main reason the narrow beam seems to throw further: You have a clearer wiew to the object, not disturbed by illuminated objects nearby !! AilSnail,, Try to do the same experiment, but now measuring the lux at a distance of about 10 metres, at the center of each beam. The brightness of a illuminated object is directly related to the amount of lux it receives, so the amount of lux you measure at a distance is directly related to throw !! Hope this helps you to understand.. The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! Thanks for explaining more. I don't have a lux meter, but maybe I can set up something with a photoresistor. Meanwhile I need to think some more about whether your theory holds up, in theory. What a cliffhanger. Imagine that you put a reflector behind a star, a reflector so immensely large that the star is almost not measurable in comparison - I'm talking about a sincerely huge construction, almost half the universe. The reflector is parabolic, and focused towards the earth. That was just for fun, I'll get back to the theory shortly. The green filtered picture illustrates how lux at 1m gets skewed in favor of small reflectors. Sorry, but no.. It does not !! the size of the reflector isn't a issue here, only the difference in surface brightness is: If I would put a halogen bulb in the smaller reflector the image would look like this: The other thing you can see is that the surface brightness reflected by the reflector is mostly of the same intensity, no matter the source-reflector surface distance: The outer rim of the reflector is further away from the filament than the central part, yes ?? But it has the same intensity !! So no matter how small or big or deep or shallow your reflector is,, if its parabolic (with the source in focus..) it would look the same when you look into it from a distance, evenly lit over its entire surface ! The HID-arc has a much higher surface brightness than halogen: Thats why the picture with the small HID-torch looks like this: Not because of the smaller reflector!!: Last edited by Ra; 12-13-2006 at 09:51 AM. The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! The HID-arc has a much higher surface brightness than halogen: Thats why the picture with the small HID-torch looks like this: Not because of the smaller reflector!!: But the smaller one with the short arc reads a higher lux at 1m than the large halogen doesn't it? I guess it would depend on which sort of measurement equipment you use? Last edited by AilSnail; 12-13-2006 at 12:17 PM. Here is the same data presented differently; It shows how much of the light of the XR-E that will hit a lens of a given F/#. The F/# is the same as focal length divided by diameter. A very useful graph! Optics is a bit far from my formal training but I can offer one practical comment on using this graph. It becomes very difficult to design lenses with very low f numbers because at some point the rays from the source will strike the lens material at less than the critical angle and thus never enter the lens. Nice of you to point that out, we should delve into that a bit also. Every time I try to quote someone the link to CPF crashes!!! So I'll try it this way: QUOTE: (AilSnail No, it doesn't !! That is, not if the entire reflector of the halogen torch is lit by the filament! The amount of lux you measure comming from a torch is a combination of the surface brightness and the dimensions of that surface. So with a lower surface brightness (halogen) you need a bigger reflector to get the same lux-reading at a distance, than with higher surface brightness (HID). Every lightsource has its typical surface brightness (within a sertain range) You can overdrive a halogen bulb to get a higher surface brightness, but physics of nature determine the limits: Going too far will melt the filament! With halogen you'll never reach the surface brightness of HID!! As a result: With a halogen torch with a 4 inch reflector, you'll never reach the throw of a HID-torch with the same reflector diameter! AND: You'll never see military searchlights with a CCFL-tube as a lightsource !! The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! QUOTE (Doug_S): A very useful graph! Optics is a bit far from my formal training but I can offer one practical comment on using this graph. It becomes very difficult to design lenses with very low f numbers because at some point the rays from the source will strike the lens material at less than the critical angle and thus never enter the lens. END QUOTE. You are absolutely right Doug.. With short focal lenghts the lens absolutely needs to be aspheric, and even then there indeed is a limit of angle of the incomming rays at the edge of the lens. F/0.7 is about the shortest possible focal length of a single aspheric lens! A lens is the best way to grab light from an emitter that emits at the front. A conventional reflector is the best way to grab light from emitters that emit to the side and towards the back. The acrylic optics for use with emitters like Cree and Luxeon are a combination of the workings of a conventional reflector with a lens: They grab almost 95% of the lumens output of the emitter: more than ever is possible with only a reflector or a single lens! The proof of the existence of intelligent extra terrestrial life lies in the fact that they didn't contact us yet... Maxablaster MB Beamshots 10Watt Mini HID SSC Microblaster! 12-02-2006, 01:34 AM #2 12-04-2006, 09:48 AM #3 12-08-2006, 03:09 AM #4 12-08-2006, 10:48 AM #5 12-09-2006, 04:08 PM #6 12-11-2006, 03:40 PM #7 12-11-2006, 03:45 PM #8 12-11-2006, 07:21 PM #9 12-12-2006, 03:33 AM #10 12-12-2006, 04:46 AM #11 12-12-2006, 05:05 AM #12 12-12-2006, 06:02 AM #13 12-12-2006, 07:35 AM #14 12-12-2006, 09:30 AM #15 12-12-2006, 09:59 AM #16 Join Date Jun 2002 Chickamauga Georgia 12-12-2006, 10:45 AM #17 12-12-2006, 10:58 AM #18 12-12-2006, 02:30 PM #19 12-12-2006, 03:09 PM #20 12-12-2006, 03:56 PM #21 12-12-2006, 04:57 PM #22 12-13-2006, 03:17 AM #23 12-13-2006, 08:11 AM #24 12-13-2006, 09:36 AM #25 12-13-2006, 12:12 PM #26 12-13-2006, 12:55 PM #27 Join Date Jun 2002 Chickamauga Georgia 12-13-2006, 01:30 PM #28 12-13-2006, 03:18 PM #29 12-13-2006, 03:40 PM #30
{"url":"http://www.candlepowerforums.com/vb/showthread.php?143017-Optic-theory&p=1731090&viewfull=1","timestamp":"2014-04-19T09:51:39Z","content_type":null,"content_length":"199945","record_id":"<urn:uuid:9d438016-ca4f-4d59-b691-4dc68a5b3c8b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Aperiodically Ordered Patterns Aperiodically ordered patterns, i.e., non-repeating yet highly structured tilings such as the Penrose tiling(s), have proved to be a source of interaction between noncommutative geometry, dynamics and topology. Associated K-theoretic invariants have given both geometric and physical information about the underlying patterns and their realisation as models for so-called 'quasicrystals'. This talk will present both background and some recent results and perspectives aiming to understand further the properties of aperiodic patterns, their K-theory and related topology.
{"url":"http://www.newton.ac.uk/programmes/NCG/abstract3/hunton.html","timestamp":"2014-04-20T03:13:53Z","content_type":null,"content_length":"2504","record_id":"<urn:uuid:73ce25fe-f6a9-46c9-9589-a3a51da8e52a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Segmentation of ARX-Models Using Sum-of-Norms Regularization H. Ohlsson, L. Ljung, and S. Boyd Automatica, 46(6):1107-1111, June 2010. • l1_seg.pdf Segmentation of time-varying systems and signals into models whose parameters are piecewise constant in time is an important and well studied problem. It is here formulated as a least-squares problem with sum-of-norms regularization over the state parameter jumps, a generalization of
{"url":"http://stanford.edu/~boyd/papers/l1_seg.html","timestamp":"2014-04-17T22:15:24Z","content_type":null,"content_length":"3742","record_id":"<urn:uuid:a43abdf5-c8f3-4f99-9284-63091040a0e6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
GillespieSSA 0.3-1 released October 4, 2007 By Mario Pineda-Krch I recently rolled up the new version of the GillespieSSA package, version 0.3-1. The tar ball of the new version is posted on its web page (here). I also submitted it to CRAN so in (due time) it should appear on the official R package list. The release consists of a number of bug fixes (among others typos in the package URL and demo models). The biggest change for the end user is, however, a now and cleaner way of passing model parameters to the main interface function ssa(). Rather than having to define model parameters in the global environment or hard coding their values into the propensity functions they can now be passed as a formal argument in the form of a named vector directly to ssa(). I owe a thanks to Ben Bolker for suggesting this deuglification solution. An added benefit of this revised feature is that model definitions are actually shorter now. The whole model + running it can now easily be a one liner. For example defining and running the stochastic version of the classical logistic growth model $frac{{displaystyle dN}}{{displaystyle dt}}=rN left( 1-frac{{displaystyle N}}{displaystyle K} right)$ is now as simple as out <- ssa(x0=c(N=500),a=c(”b*{N}”, ”(d+(b-d)*{N}/K)*{N}”),nu=matrix(c(+1,-1),ncol=2),parms=c(b=2, d=1, K=1000),tf=25) Here the per capita birth rate is $b=2$, death rate $d=1$, intrinsic growth rate $r=b-d=1$, carrying capacity $K=1000$. The Monte Carlo simulation is run for ten time units (t=10) using Gillespie’s Direct method (the default method in GillespieSSA) and starting with a population size of $N=500$. One way of quickly visualizing the resulting time series is by As expected the results look rather stochastically familiar, daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/gillespiessa-0-3-1-released/","timestamp":"2014-04-17T13:16:03Z","content_type":null,"content_length":"39965","record_id":"<urn:uuid:b3fc15d3-44dc-496b-b52f-969f6296ebbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
calculator - (9) Ads related to calculator Online Calculator - Free Online Calculator - Full Screen ... Everybody needs a Calculator at some point -- Full Screen, Fast Loading and FREE! Check it out! Online Calculator! From the Simple Calculator below, to the Scientific ... The Best Free Online Calculator Use simple and easy free online calculator at work, at school or at home. With our calculator you can perform simple and trigonometric calculations. Use our customizable calculator for basic math, trigonometry, fractions, and statistics. Basic Calculator - Math Free math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone can find solutions to their math ... Calculator.net : Free Online Calculators A quick online scientific calculator as well as a rich collection of free online calculators, including mortgage, loan, BMI, ideal weight, body fat calculators and ... Ads related to calculator
{"url":"http://search.snap.do/?q=calculator&category=Web","timestamp":"2014-04-18T15:44:45Z","content_type":null,"content_length":"96203","record_id":"<urn:uuid:6b8a5c8e-03d3-4462-ae0e-809739b0ba38>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Downey, CA Prealgebra Tutor Find a Downey, CA Prealgebra Tutor ...Additionally, physics is a very geometric heavy field so I'm sure I can be of help! I have taken the standard Caltech course in differential equations, the graduate course in differential equations, and my thesis is on differential equations of ions in an ion trap. I am a graduating senior at Caltech in physics. 26 Subjects: including prealgebra, calculus, physics, algebra 2 ...I am also an ESL/ESOL teacher at the South Coast Literacy Council, where I am responsible for instructing English as a Second Language classes to adult students whose native language is a language other than English. I am a former English teacher at Elite Educational Institute in Cerritos, Calif... 40 Subjects: including prealgebra, reading, English, writing ...Have taken many math classes since then. Have tutored students in Algebra 2. Received 5 on Calc BC exam. 15 Subjects: including prealgebra, reading, algebra 1, geometry With a BS in Electrical Engineering and real world experience in aerospace engineering, I can help you cut through the complexities of English, math, science and computer programming. I will stick with you until you are confident you understand the material and can apply it. If your textbook is missing examples or explanations, we will fill in the details. 8 Subjects: including prealgebra, reading, writing, algebra 1 ...German: Having attended German universities for three years, taught in German public schools for six years, and lived for a further four years in Germany, I have near-native fluency in speaking, reading, and writing, a CA credential, and college teaching experience as well to recommend me for tu... 15 Subjects: including prealgebra, English, reading, writing
{"url":"http://www.purplemath.com/Downey_CA_prealgebra_tutors.php","timestamp":"2014-04-18T00:54:46Z","content_type":null,"content_length":"24014","record_id":"<urn:uuid:51853ab0-eb31-40fc-af73-f66e4475f072>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: AW: ppv and npv [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: AW: ppv and npv From "Lachenbruch, Peter" <Peter.Lachenbruch@oregonstate.edu> To <statalist@hsphsun2.harvard.edu> Subject st: RE: AW: ppv and npv Date Mon, 9 Nov 2009 08:57:02 -0800 This is a handy ado file. There is one issue that I have with NPV and PPV. In most studies to determine sensitivity and specificity, the samples are conditional on disease status, so the prevalence of the disease in the training sample is quite different from what holds in most applications. For example, in a AIDS clinic, the prevalence of disease is likely to be at most 0.05, while the training sample may be 50% HIV positive. The PPV would then be p*Sensitivity/(p*Sensitivity+(1-p)(1-Specificity)) where p is the prevalence of the HIV. If Sensitivity and Specificity are 0.95 (actually a bit low for HIV), if we assume prevalence is 0.5 we would have "PPV"=0.5*0.95/(0.5*0.95+0.5*0.05)=0.95 If we use a prevalence of 0.05, we would have 0.05*0.95/(0.05*0.95+0.95*0.05)=0.50 In a blood donation center, the prevalence is more like 0.01, the PPV would be 0.161 It is usually a good idea to compute PPV and NPV for a variety of prevalences as a test may be used in a number of different contexts. Peter A. Lachenbruch Department of Public Health Oregon State University Corvallis, OR 97330 Phone: 541-737-3832 FAX: 541-737-4001 -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss Sent: Monday, November 09, 2009 4:35 AM To: statalist@hsphsun2.harvard.edu Subject: st: AW: ppv and npv Try Roger`s -ssc d senspec- -----Ursprüngliche Nachricht----- Von: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von Celine Genty Gesendet: Montag, 9. November 2009 13:16 An: statalist@hsphsun2.harvard.edu Betreff: st: ppv and npv The "roctab" command give for each cutpoint sensitivity / specificity / LR+ / LR-. I need PPV and NPV for each cutpoint too... How can I do ? Thank you. Céline Genty * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-11/msg00434.html","timestamp":"2014-04-19T09:47:27Z","content_type":null,"content_length":"8555","record_id":"<urn:uuid:0f29c5fe-f8dc-47f5-b783-92b39900fd2b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
poset of commutative subalgebras $(0,1)$-Category theory Algebraic theories Algebras and modules Higher algebras Model category presentations Geometry on formal duals of algebras Operator algebra AQFT and operator algebra States and observables Operator algebra Local QFT Euclidean QFT For $A$ a an associative algebra, not necessarily commutative, its collection $ComSub(A)$ of commutative subalgebras $B \hookrightarrow A$ is naturally a poset under inclusion of subalgebras. As a site for noncommutative geometry Various authors have proposed (Butterfield-Hamilton-Isham, Döring-Isham, Heunen-Landsmann-Spitters) that for the case that $A$ is a C-star algebra the noncommutative geometry of the formal dual space $\Sigma(A)$ of $A$ may be understood as a commutative geometry internal to a sheaf topos $\mathcal{T}_A$ over $ComSub(A)$ or its opposite $ComSub(A)^{op}$. An advantage of the latter is that $\Sigma$ becomes a compact regular locale. As a site for noncommutative phase spaces Specifically, consider the case that the algebra $A = B(\mathcal{H})$ is that of bounded operators on a Hilbert space. This is interpreted as an algebra of quantum observables and the commutative subalgebras are then “classical contexts”. Applying Bohrification to this situation (see there for more discussion), one finds that the locale $\Sigma(A)$ internal to $\mathcal{T}_A$ behaves like the noncommutative phase space of a system of quantum mechanics, which however internally looks like an ordinary commutative geometry. Various statements about operator algebra then have geometric analogs in $\mathcal{T}_A$. Notably the Kochen-Specker theorem says that $\Sigma(B(\mathcal{H}))$, while nontrivial, has no points/no global elements. (This topos-theoretic geometric reformulation of the Kochen-Specker theorem had been the original motivation for considering $ComSub(A)$ in the first place in ButterfieldIsham). Moreover, inside $\mathcal{T}_A$ the quantum mechanical kinematics encoded by $B(\mathcal{H})$ looks like classical mechanics kinematics internal to $\mathcal{T}_A$ (HeunenLandsmannSpitters, following DöringIsham): 1. the open subsets of $\Sigma(A)$ are identified with the quantum states on $A$. Their collection forms the Heyting algebra of quantum logic. 2. observables are morphisms of internal locales $\Sigma(A) \to IR$, where $IR$ is the interval domain?. The assignment to a noncommutative algebra $A$ of a locale $\underline{\Sigma}_A$ internal to $\mathcal{T}_A$ has been called Bohrification, in honor of Nils Bohr whose heuristic writings about the nature of quantum mechanics as being probed by classical (= commutative) context one may argue is being formalized by this construction. The poset of commutative subalgebras $C(A)$ is always an (unbounded) meet-semilattice. If $A$ itself is commutative then it is a bounded meet semilattice, with $A$ itself being the top element. Relation to Jordan algebras For $A$ an associative algebra write $A_J$ for its corresponding Jordan algebra, where the commutative product $\circ : A_J \otimes A_J \to A_J$ is the symmetrization of the product in $A$: $a \circ b = \frac{1}{2}(a b + b a)$. There exist von Neumann algebras $A$, $B$ such that there exists a Jordan algebra isomorphism $A_J \to B_J$ but not an algebra isomorphism $A \to B$. • Alain Connes, A factor not anti-isomorphic to itself, Annals of Mathematics, 101 (1962), no. 3, 536–554. (JSTOR) there is a von Neumann algebra factor $A$ with no algebra isomorphism to its opposite algebra $A^{op}$. But clearly $A_J \simeq (A^{op})_J$. Let $A, B$ be von Neumann algebras without a type $I_2$-von Neumann algebra factor-summand and let $ComSub(A)$, $ComSub(B)$ be their posets of commutative sub-von Neumann algebras. Then every isomorphism $ComSub(A) \to ComSub(B)$ of posets comes from a unique Jordan algebra isomorphism $A_J \to B_J$. This is the theorem in (Harding-Döring). There is a generalization of this theorem to more general C-star algebras in (Hamhalter). For more on this see at Harding-Döring-Hamhalter theorem. The presheaf topos over $ComSub(A)^{op}$ For $A$ a C-star algebra, write $ComSub(A)$ for its poset of sub-$C^*$-algebras. Write $\mathcal{T}_A := [ComSub(A),Set]$ for the presheaf topos on $ComSub(A)^{op}$. This is alse called the Bohr topos. Because $ComSub(A)$ is a posite. The locale $\Sigma(A)$ The presheaf $(\mathbb{A} : B \mapsto U(B)) \;\; \in \mathcal{T}_A \,,$ where $U(B)$ is the underlying set of the commutative subalgebra $B$, is canonically a commutative $C^*$-algebra internal to $\mathcal{T}_A$. This is (HeunenLandsmanSpitters, theorem 5). By the constructive Gelfand duality theorem there is uniquely a locale $\Sigma(A)$ internal to $\mathcal{T}_A$ such that $\mathbb{A}$ is the internal commutative $C^*$-algebra of functions on $\Sigma This observation is amplified in (HeunenLandsmanSpitters). This is (HeunenLandsmanSpitters, theorem 6), following (ButterfieldIsham). The proposal that the the noncommutative geometry of $A$ is fruitfully studied via the commutative geometry over $ComSub(A)$ goes back to • Jeremy Butterfield, John Hamilton, Chris Isham, A topos perspective on the Kochen-Specker theorem I. quantum states as generalized valuations International Journal of Theoretical Physics, 37(11):2669–2733, 1998. II. conceptual aspects and classical analogues International Journal of Theoretical Physics, 38(3):827–859, 1999 III. Von Neumann algebras as the base category International Journal of Theoretical Physics, 39(6):1413–1436, 2000. The proposal that the non-commutativity of the phase space in quantum mechanics is fruitfully understood in the light of this has been amplified in a series of articles The presheaf topos on $ComSub(A)^{op}$ (Bohr topos) and its internal localic Gelfand dual to $A$ is discussed in See also higher category theory and physics. Relation to Jordan algebras The relation to Jordan algebras of $ComSub(A)$ is discussed in for $A$ a von Neumann algebra and more generally for $A$ a C*-algebra in • Jan Hamhalter, Isomorphisms of ordered structures of abelian $C^\ast$-subalgebras of $C^\ast$-algebras, J. Math. Anal. Appl. 383 (2011) 391–399 (journal) See at Harding-Döring-Hamhalter theorem.
{"url":"http://ncatlab.org/nlab/show/poset+of+commutative+subalgebras","timestamp":"2014-04-20T19:01:51Z","content_type":null,"content_length":"63118","record_id":"<urn:uuid:42124983-efc0-490c-ad38-02e6470e8457>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Downers Grove Algebra Tutor ...I have created an Excel workbook that I have populated with many formulas useful for beginners and advanced Excel users. I will be happy to share my knowledge and the workbook with you. I hold a Bachelor of Science degree in Electrical Engineering from the University of Illinois at Chicago with emphasis in higher mathematics. 18 Subjects: including algebra 2, algebra 1, geometry, ASVAB ...My student's success is my success. My teaching will not be limited to tutoring sessions. I will be available for my students at any time by any means of communications. 8 Subjects: including algebra 1, algebra 2, physics, SAT math ...I am a math-fanatic and am able to effectively help people understand how to apply their knowledge. In addition, I can show students how to improve their study habits through the use of flashcards, mnemonic devices, etc. I can't wait to start working with your student (note: I am very open to n... 29 Subjects: including algebra 2, algebra 1, English, chemistry ...With directed practice, a student can definitely improve his/her test results in a reasonable amount of time. My methods have proven to be very successful. I have a Masters degree in applied mathematics and most coursework for a doctorate. 18 Subjects: including algebra 1, algebra 2, physics, GRE ...I worked in various fields including management, administration, statistical data compilation and analysis, and medical coding before quitting to become a full-time mother to my two sons. In that time, I have had the pleasure of raising two catholic school honor-roll students. During the last f... 24 Subjects: including algebra 2, algebra 1, reading, English
{"url":"http://www.purplemath.com/Downers_Grove_Algebra_tutors.php","timestamp":"2014-04-21T10:38:10Z","content_type":null,"content_length":"23885","record_id":"<urn:uuid:30512c91-f9cc-46fe-a287-ddbb51d9dd6b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
question about circular law up vote 2 down vote favorite I have a question about the circular law. Consider $A_n=[x_{ij}]$ a sequence of random matrices where $x_{ij}$ are iid with mean $0$ and variance $1$. Consider $\lambda_{n,1},\dots,\lambda_{n,n}$ the eigenvalues of $\frac{1}{\sqrt{n}} A_n$ and $\mu_n$ the random measure $\frac{1}{n}\sum_{k=1}^n \delta_{\lambda_{n,k}}$ on $\mathbb{C}$. The circular law (see the paper of Tao-Vu-Krishnapur, 2010, annals of proba) states that for each test function $f:\mathbb{C}\rightarrow \\mathbb{C}$ one has almost surely $$ \lim\limits_{n\rightarrow +\infty} \int_{\mathbb{C}} f(z) d\mu_n = \frac{1}{\pi} \ int_{p^2+q^2\leq 1} f(p+iq) dpdq $$ By test function, we mean continuous and compactly supported. Is the same conclusion also true if $f$ is the caracteristic function of a closed subset of $\mathbb{C}$ ? Here is a simple example, consider a closed subset $F\subset \mathbb{C}$ with zero Lebesgue measure, is it true that $$\lim\limits_{n\rightarrow +\infty} \mathbb{P}( \forall k \in [1,n], \lambda_{n,k}\in F ) =0 $$ pr.probability probability-distributions 1 By Urysohn's lemma, you can approximate $1_F$ from above in $L^1$ by a sequence of continuous compactly supported functions, $(f_n)$, and below by 0. Applying the result of the TVK paper to the $f_n$'s, you get a slightly weaker conclusion than you're asking for. – Anthony Quas Mar 11 '13 at 6:32 Which $L^1$ space are you thinking about ? with the Lebesgue measure or $\mu_n$ measures ? So you say that the answer of my question would be affirmative ? – jeanB Mar 11 '13 at 12:16 The (non-random) $L^1$ space with Lebesgue measure. This will mean the right side of the TVK equality (which is an upper bound) converges to 0. But this only gives that $\mathbb E\mu_{n}(F)\to 0$, which is a weaker conclusion than you asked for. – Anthony Quas Mar 11 '13 at 16:24 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged pr.probability probability-distributions or ask your own question.
{"url":"http://mathoverflow.net/questions/124188/question-about-circular-law","timestamp":"2014-04-18T23:31:22Z","content_type":null,"content_length":"49777","record_id":"<urn:uuid:215cca3d-61e5-43bc-8117-b58b10b27b98>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Haverhill, MA Prealgebra Tutor Find a Haverhill, MA Prealgebra Tutor ...I look forward to hearing from you, training you, and putting my knowledge and expertise to work, toward providing you with the best possible preparation you can get. I have many success stories to tell, and most certainly would like to include yours, given the opportunity to work with you, answ... 6 Subjects: including prealgebra, physics, algebra 1, trigonometry ...I was a straight A student in algebra. I can help your student with more advanced algebra concepts such as solving polynomials and the quadratic equation. I can work with your child to develop approaches that can help solve polynomial equations. 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...While at UNH, I tutored general chemistry for 3 years and was a teachers assistant for the general chemistry lab course for a semester as an undergraduate student. I am currently working as a Physical Science teacher in one of the Nashua High schools. Here, I teach mainly chemistry and physics concepts to students of various ages and abilities. 8 Subjects: including prealgebra, chemistry, algebra 1, algebra 2 ...I have formal education in Differential Equations at both undergraduate and graduate levels. The courses I've taught and tutored required differential equations, so I have experience working with them in a teaching context. In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school. 16 Subjects: including prealgebra, calculus, physics, geometry ...I focus on helping my student develop both improved content knowledge as well as learning helpful problem-solving/learning strategies. My interest is in helping my students improve their analytical thinking and attain a deeper understanding of underlying concepts that will help them succeed not ... 33 Subjects: including prealgebra, English, reading, writing Related Haverhill, MA Tutors Haverhill, MA Accounting Tutors Haverhill, MA ACT Tutors Haverhill, MA Algebra Tutors Haverhill, MA Algebra 2 Tutors Haverhill, MA Calculus Tutors Haverhill, MA Geometry Tutors Haverhill, MA Math Tutors Haverhill, MA Prealgebra Tutors Haverhill, MA Precalculus Tutors Haverhill, MA SAT Tutors Haverhill, MA SAT Math Tutors Haverhill, MA Science Tutors Haverhill, MA Statistics Tutors Haverhill, MA Trigonometry Tutors Nearby Cities With prealgebra Tutor Andover, MA prealgebra Tutors Atkinson, NH prealgebra Tutors Bradford, MA prealgebra Tutors Georgetown, MA prealgebra Tutors Groveland, MA prealgebra Tutors Lawrence, MA prealgebra Tutors Lowell, MA prealgebra Tutors Merrimac, MA prealgebra Tutors Methuen prealgebra Tutors Nashua, NH prealgebra Tutors North Andover prealgebra Tutors Plaistow prealgebra Tutors Riverside, MA prealgebra Tutors Salem, NH prealgebra Tutors West Newbury, MA prealgebra Tutors
{"url":"http://www.purplemath.com/Haverhill_MA_Prealgebra_tutors.php","timestamp":"2014-04-20T07:03:37Z","content_type":null,"content_length":"24293","record_id":"<urn:uuid:1b3885ee-116a-4202-9ba4-30dc2bff3cbf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Altafini, Claudio - Functional Analysis Sector, Scuola Internazionale Superiore di Studi Avanzati (SISSA) • Representing multiqubit unitary evolutions via Stokes tensors Claudio Alta ni • BIOINFORMATICS Vol. 00 no. 00 2007 • The De Casteljau algorithm on SE(3) Claudio Altafini • A Path Tracking Criterion for an LHD Articulated Claudio Altafini • Redundant robotic chains on Riemannian manifolds Claudio Alta ni • Discerning static and causal interactions in genomewide reverse engineering problems • Commuting multiparty quantum observables and local compatibility Claudio Altafini • Some Properties of the General nTrailer Claudio Altafini \Lambda • Feedback control of NMR systems: a control-theoretic perspective • A kinetic mechanism inducing oscillations in simple chemical reactions networks • Almost Global Stochastic Feedback Stabilization of Conditional Quantum Dynamics # • BIOINFORMATICS Vol. 00 no. 00 2009 • Feedback control of spin systems Claudio Altafini • SUBMITTED TO IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, MAY 2001 1 Path Following with Reduced O -Tracking for • SUPPLEMENTARY MATERIAL Comparing association network algorithms for reverse engineering • Coherent control of open quantum dynamical systems Claudio Alta ni • SUBMITTED TO IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, JANUARY 2001; REVISED MAY 2001, AND AUG 2001 1 A feedback control scheme for reversing a truck • Motion on submanifolds of noninvariant holonomic constraints for a kinematic control system evolving on a matrix Lie group • Network inference from gene expression profiles: what "physical" network are we seeing? (Prokaryotes vs Eukaryotes) • SUPPLEMENTARY NOTES Adaptation as a genome-wide autoregulatory • Monotonicity, frustration, and ordered response: an analysis of the energy landscape of perturbed large-scale biological networks • Determining the distance to monotonicity of a biological network: a graph-theoretical approach • mRNA stability and the unfolding of gene expression in the long-period yeast metabolic cycle • Controllability and simultaneous controllability of isospectral bilinear control systems on complex flag • SUPPLEMETARY NOTES Network inference from gene expression profiles: what "physical" • Discerning static and causal interactions in genome-wide reverse engineering problems • Feedback stabilization of isospectral control systems on complex flag manifolds: application to quantum • On the generation of sequential unitary gates from continuous time Schrodinger equations driven by external fields • Parameter differentiation and quantum state decomposition for time varying Schrodinger equations • Positivity for matrix systems: a case study from quantum mechanics • Feedback control of NMR systems: a control-theoretic perspective • Graph-theoretical decompositions of large-scale biological N. Soranzo1 • Tensor of coherences parameterization of multiqubit density operators for entanglement characterization • Following a path of varying curvature as an output regulation problem • Explicit Wei-Norman formul for matrix Lie groups via Putzer's method • Controllability of quantum mechanical systems by root space decomposition of Claudio Altafini • Bilinear Control Systems: Theory and Applications Instructor: Claudio Altafini, SISSA (Int. School for Advanced Studies), Trieste. e-mail: altafini@sissa.it • CURRICULUM VIT Claudio Altafini, SISSA • Homogeneous Polynomial Forms for Simultaneous Stabilizability of Families of • Geometric motion control for a kinematically redundant robotic chain: application to a holonomic mobile manipulator • Erratum for the paper "Feedback stabilization of isospectral control systems on complex flag • On the exact unitary integration of time-varying quantum Liouville equations Claudio Alta ni • SUPPLEMENTARY NOTES Monotonicity, frustration, and ordered response: an analysis of • A matrix Lie group of Carnot type for filiform subRiemannian structures and its application • Adaptation as a genome-wide autoregulatory principle in the stress response of yeast • The reachable set of a linear endogenous switching Claudio Altafini • SUPPLEMENTARY NOTES mRNA stability and the unfolding of gene expression in the • Re ection symmetries for multiqubit density operators Claudio Alta ni • Controllability properties for finite dimensional quantum Markovian master Claudio Altafini • A system-level approach for deciphering the transcriptional response to prion infection • BIOINFORMATICS Vol. 00 no. 00 2011 • SUPPLEMENTARY NOTES A system-level approach for deciphering the transcriptional • Geometric Control Methods for Nonlinear Systems • Computing global structural balance in large-scale signed social networks • SUPPORTING INFORMATION Computing global structural balance in large-scale signed social
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/02/314.html","timestamp":"2014-04-21T15:10:26Z","content_type":null,"content_length":"16196","record_id":"<urn:uuid:ce71e007-ece2-4d9c-9bfc-0e68e5a24eab>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
When is a Mathematical Proof Difficult? “Young man, in mathematics you don’t understand things. You just get used to them.” John von Neumann. Continuing the discussion on what does it mean to understand a proof, and following up on the discussion on a recent post by Gowers, I would like to bring up the question of what does it mean for a mathematical proof to be difficult. First of all, I think it is important, again, to distinguish between the perspective of the person conceiving the proof and the person reading (or trying to understand) the proof. Proofs that are hard to read From the perspective of the reader, a proof may seem hard (or at least harder than it needs to be) simply because of the style of exposition, as already discussed. Even some flawlessly presented papers, however, can be hard to follow, because of the complexity of the technical machinery used by the author. In many areas that have developed very sophisticated tools (PCP constructions, for example), every paper might look very complicated to the non-expert, because of the need to “interface” with the very complicated context, or the need to re-prove “standard” results because they were never stated exactly in the way that is needed and so on. To the expert, however, many such paper look relatively simple, in the sense that given the basic ideas, which can often be summarized in a few sentences, it is “routine” to reconstruct the rest of the paper. (More later on what this means for the value of such papers.) Finally, there are well-presented papers which baffle the experts (sometimes, this includes the authors). I am thinking, for example, of the Khot-Vishnoi paper on sparsest cut. Those are papers that introduce new techniques that are not yet fully understood. (Meaning, for example, that the full extent of their applicability and generality is not clear yet.) They contain the genuinely difficult proofs, and they are the ones that are most useful to study. Note that, here, I am considering difficulty as a transient, not an intrinsic property: at some point the techniques become used over and over, a theory is built around them, we know how to make use of them, and they become “understood,” or “simple” to the experts. Needless to say, this process of clarification and simplification is extremely useful, and should be well regarded and rewarded, as it usually is. (Most of my research is of this type, and I cannot complain for the way it is received.) Proofs that are hard to conceive What about difficulty from the perspective of authors? Clearly, papers like the ones I just described are as difficult (or more) to come up with than to understand for the experts. One is devising new techniques and exploiting them on the fly, and it is a miracle when it all works out. What about papers that seem difficult to non-experts and are easy (when properly presented) to experts? Here it depends. Sometime, given the right tools, the result is just routine application of the technical machinery, no harder to conceive than to verify. Sometimes, however, even if the main idea of the paper can be expressed in one sentence, that one idea can be the result of endless attempts, blind alleys, complicated first attempts, later simplifications, and so on. Of course, papers of the first type are nearly (but not completely) useless, and papers of the second type are very good. The value of technical work There was recently a discussion in the theory community about the value of “conceptual” papers, which, perhaps unpredictably, turned in part into a discussion on the value of “technical” papers. At the time, I completely agreed (as I still do) with statements such as “all other things being equal, a simpler proof is better than a harder one,” but I had a problem with some of the conclusions that were being drawn. Leaving aside the issue of papers whose main contribution is a new definition or a new model (which are, in fact, the “conceptual” papers that the original statement dealt with), when we look at a paper proving a new result and introducing a new idea, then, all other things being equal, we want the authors the explain their idea and distill their proof in the simplest possible way. However, another point is also true: that given two well presented papers, both simplified as much as they can be, the more complicated one is the one that uses the more complex technical tools, and a new ideas about complex machinery is more valuable than a new ideas about simpler machinery, because the former makes powerful tools even more powerful. And, as argued above, the technical papers which are going to be more useful (by stimulating the construction of new theoretical thinking around their tools) are the mystifying ones. In conclusion: • writing papers just because you can, employing difficult techniques in routine ways: BAD (but even such papers may be somewhat useful, I may return to this point in a later post.) • presenting a needlessly complicated version of a potentially simple argument: VERY BAD • having a new idea on how to use difficult techniques, and explaining it as transparently as possible: GOOD (but such papers would still be hard to read for the non-experts) • making breakthroughs by conceiving new ways of doing things: VERY GOOD • finding the “right” way to understand a previously mystifying argument: GOOD (And, of course, coming up with a new definition or model that extends the reach of theoretical work: VERY EXCELLENT; but this was not the subject of this post.) 12 comments I can’t help thinking of that Gilbert and Sullivan line: And ev’ry one will say, As you walk your mystic way, “If this young man expresses himself in terms too deep for me, Why, what a very singularly deep young man this deep young man must be!” I dug up an old post of mine on this topic that might be somewhat interesting.. “However, another point is also true: that given two well presented papers, both simplified as much as they can be, the more complicated one is the one that uses the more complex technical tools, and a new ideas about complex machinery is more valuable than a new ideas about simpler machinery, because the former makes powerful tools even more powerful.” Perhaps this is why complexity gets more coverage at STOC/FOCS. No one wants a complicated algorithm, because it is simply not useful. So it is not true for all of TCS. But since people think it’s true for everything, they end up being biased towards certain areas. This is an interesting post; but I the conclusions are unclear and to the extent I understand them I disagree. “In conclusion: writing papers just because you can, employing difficult techniques in routine ways: BAD (but even such papers may be somewhat useful, I may return to this point in a later post.)” I think in some sense we all write papers “just because we can.” its actually important to be able to employ routinely difficult techniques; it tends to make them over time less difficult. (Still, maybe this statement refers to something I can agree with, but I dont know what.) “presenting a needlessly complicated version of a potentially simple argument: VERY BAD” What do you mean by “needlessly”? The important value of “prove the damn conjecture at all costs” sometimes lead to proofs which are needlessly complicated. I do not think it is a common practice to present proofs in a needlessly difficult ways. (Sometimes people try to extend the scope of the theorem on the expense of making the proofs more difficult.) Maybe people should be given more incentives to simplify, but once you finally prove what you want to prove it is often difficult to see how to simplify. “making breakthroughs by conceiving new ways of doing things: VERY GOOD ” I think it is hard to disagree that breakthroughs are very good. But don’t we give too much weight to “conceiving new ways of doing things” per se? “(And, of course, coming up with a new definition or model that extends the reach of theoretical work: VERY EXCELLENT; but this was not the subject of this post.)” So is this better than all of the above? [...] a link to an interesting post regarding proofs and understanding from “in theory”; And another one; And another one by Arvind Narayanan. [...] “all other things being equal, a simpler proof is better than a harder one,” It is very difficult to disagree with this statement like with “all other things being equal I will buy the cheaper TV” The problem is that one way to evaluate the theorem is through the difficulty of the proof (and similarly the price of the TV gives us some signal about its quality.) So in some sense it is never the case that “all other things being equal”. Also, in practice, when people and their proofs are evaluated I am not sure that the rule “the simpler the better” really applies. “papers whose main contribution is a new definition or a new model (which are, in fact, the “conceptual” papers that the original statement dealt with)” I do not think that this is the only kind of papers that the original statement dealt with, and I think that the common view of “conceptual papers” as “papers whose main contribution is a new definition or a new model” caused the whole discussion on “conceptual versus technical” to go to the wrong direction. Citing the statement: “by “conceptual” we mean the aspects that can be communicated succinctly, with a minimum amount of technical notation, and yet their content reshapes our view/understanding.” “conceptual aspect “along the way” may be an innovative way of modeling, looking at, or manipulating a known object or problem, including establishing a new connection between known objects/ Note that according to the above lines, conceptual papers may also be the papers that perform the process you described of “clarification and simplification”. In particular, your paper on extractors and pseudorandom generators is a purely conceptual paper according to the above lines of the statement, even though its main emphasis is not on giving a new definition or a new model. In other words, I think that the “conceptual papers” that the original statement referred to is all the papers that you described as the papers that “look relatively simple, in the sense that given the basic ideas, which can often be summarized in a few sentences, it is “routine” to reconstruct the rest of the paper”. “However, another point is also true: that given two well presented papers, both simplified as much as they can be, the more complicated one is the one that uses the more complex technical tools, and a new ideas about complex machinery is more valuable than a new ideas about simpler machinery, because the former makes powerful tools even more powerful.” Are you trying to say that your recent work on the smallest eigenvalue and the Max-Cut is more valuable than GW’s work? Your paper is certainly more complicated, but it would be pretty arrogant (and likely incorrect) to assume that your work is more valuable. What a terrible comment, eigo, to end such a nice post and thread of comments. It is interesting to discuss the general question of judging and comparing the values of contributions. Probably there are many criteria and points of views, and a lot of genuine uncertainty. I am a bit doubtful that an abstract discussion on the value of papers is useful. There are three separate questions: is the paper good for you to READ, is it good to PUBLICIZE for the community as a whole, and is it a GOOD paper in itself. (The latter two questions are not necessarily the same thing – for example there could be papers that contain very little original research contribution but elucidate highly complex previous work so well that one would want to accept them to a conference.) All these questions should be considered on a case-by-case basis, without following some rigid rules or manifestos. (Of course, unless you are a PC member or reading/writing a reference letter, there’s generally no reason for you to waste time thinking on the latter two questions.) I often recommend to students that they read highly complicated papers (as long as they are also good and important complicated papers of course). I think reading these can be especially beneficial to students because: 1) Obviously, your technical skills improve more as a result of reading a complicated papers than a simple one. 2) Complexity serves as a “barrier to entry”. Many people won’t have time to deeply read a complex paper, even if it’s very important. This includes many experts who may feel they get the general idea, and put reading the paper on their neverending “todo list”… This means that if you do make the effort and read the paper, you are pretty uniquely well-positioned to make further contributions. 3) Often a paper is complicated because the author threw “the kitchen sink” at the problem, using any tool he/she could think of. So, in one paper you get to see all the tricks that they accumulated in many years of work. 4) Another reason a paper is complicated is because the authors themselves still don’t understand the problem very well. This is a sign that there’s still room for new improvements and discoveries in this area. Someone coming with a fresh mind could perhaps see things the authors have missed. Of course you only get these benefits if you read the paper deeply, thinking hard on how to recast the proof in your own language, whether each trick or complication is necessary, and whether or not a completely different approach should be taken. It’s not always easy to decide, whether a paper is worth such an investment, but in any case time spent reading a paper is always better than time spent reading (or commenting) in a blog… :)
{"url":"http://lucatrevisan.wordpress.com/2009/01/08/when-is-a-mathematical-proof-difficult/","timestamp":"2014-04-18T18:12:02Z","content_type":null,"content_length":"98133","record_id":"<urn:uuid:be02cc9d-9351-40b5-88ec-d57202cac7fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] Scheme and R From: Noel Welsh (noelwelsh at gmail.com) Date: Thu Mar 26 13:30:34 EDT 2009 On Thu, Mar 26, 2009 at 5:24 PM, Neil Toronto <ntoronto at cs.byu.edu> wrote: >> I'm wondering if anyone on this list has experience with teaching R or >> with making explicit the connections between R and Scheme. I'd like to take >> more notice of this in our first-year CS classes, and help prepare students >> for the use of R in their second-year stats class. --PR > As a language it's rather weak and inconsistent. I agree with this, and the rest of Neil's analysis. It would be simple to create a PLT Scheme language that extends the numeric operators to vectors in addition to scalars. Perhaps one should also replace the list operations with the vector equivalents. This could be used to introduce R-esque programming. For my own work I have written most of the vector operations you'd need (vector-+/-*, vector-sum, vector-append, vector-map etc.) Note that I haven't done much of anything for matrices or arrays of higher You can do some fun stuff with Monte Carlo simulations, for a variety of definitions of fun. Examples: - computing the value of PI - finding the stationary distribution of a Markov chain (your very own Google!) - photorealistic rendering Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2009-March/031587.html","timestamp":"2014-04-17T04:20:38Z","content_type":null,"content_length":"6387","record_id":"<urn:uuid:c9fc7653-3209-427b-b275-0b008ceb391d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Models of the reals which have no unmeasurable sets up vote 2 down vote favorite I recall being told -- at tea, once upon a time -- that there exist models of the real numbers which have no unmeasurable sets. This seems a bit bizarre; since any two models of the reals are isomorphic, you'd expect any two models to have the same collection of subsets. Can anyone tell me exactly what the story here is? Have I misremembered something? Is this some subtlety involving how strong a choice axiom you use to define your set theory? measure-theory set-theory Please clarify -- what is a "model of the reals"? Do you mean the fact that any two complete real-closed fields are isomorphic? – John Goodrick Oct 20 '09 at 20:46 oh, and you should tag your question set-theory as well. there's also results like Solovay's that use versions of the Axiom of Determinacy, which is inconsistent with full choice. – Kenny Easwaran Oct 21 '09 at 5:16 add comment 5 Answers active oldest votes As John Goodrick is asking in a few places, you have to be careful in stating what you mean by "a model of the reals". If you're going to talk about sets of reals, then you need to have variables ranging over reals, and also variables ranging over sets of reals. You also of course want symbols in your language for the field operations and ordering, and possibly more. Three Options One way to do this is to use the language of second-order analysis, which is bi-interpretable with the language of third-order number theory. (It's straightforward to translate between real numbers and sets of natural numbers, and then between sets of real numbers and sets of sets of naturals.) Another way to do this is to use ZF, which talks about the reals and sets of reals, but also many many other things. (Far more than any mathematician who's not a logician (or perhaps category theorist?) ever uses.) There's also an intermediate strategy, which is basically what Russell and Whitehead did in Principia Mathematica, where you have some variables ranging over objects at the bottom (which might be real numbers, or anything else), and then variables ranging over sets of objects, and then variables ranging over sets of sets of objects, and so on to arbitrarily high levels. This is still far weaker than ZF, because you don't get sets that mix levels, and you also can't make sense of infinitely high levels. First-order and Higher-order logic If you take the first or third option, then you have two more choices, which correspond to what David Speyer was saying. You can require that variables that range over sets of things range over "honest subsets" of the collection of things they're supposed to be sets of. Or you can interpret the set variables in a "whacked model". (The technical term is a "Henkin model".) On this interpretation, the "sets" are just further objects in your domain, and "membership" is just interpreted as some arbitrary relation between the objects of one type and the objects of the "set" type, and you interpret all your axioms in first-order logic. The difference is that the honest interpretation uses second-order logic, while the Henkin interpretation just uses first-order logic. Second-order logic (and higher-order logic) is nice in that it lets you prove all sorts of uniqueness results - there is a unique model of honest second order Peano arithmetic, and if you require honest set-hood then this means there will be unique models at the third order level and higher, giving you one result that you remember. But first-order logic is nice because there's actually a proof system - that is, there is a set of rules for manipulating sentences such that any sentence true in every first-order model can actually be reached by doing these manipulations. That is, Gödel's Completeness Theorem applies. However, his Incompleteness Theorems also apply - thus, there are lots of models of first-order Peano arithmetic, and then there are even more Henkin models of "second-order" up vote 22 Peano arithmetic, and far far more Henkin models of "third-order" Peano arithmetic, which is the theory you're interested in. down vote accepted Unfortunately, I don't know what these Henkin models look like. It all depends on what set existence axioms you use. There's a lot of discussion of this stuff for "second-order" Peano arithmetic in Steven Simpson's book Subsystems of Second-Order Arithmetic, which is the canonical text of the field known as reverse mathematics. However, none of that talks about arbitrary sets of reals, which is what you're interested in. Solovay's results The other result you mention, which is cited in one of the other answers here, takes the other option from above. That is, we do everything in ZF and see what different models of ZF are like. (Note that I don't say ZFC - of course if you have choice, then you have non-measurable sets of reals.) Every model of ZF has a set it calls ω, which is the set it thinks of as "the natural numbers". Set theorists then talk about the powerset of this set as "the real numbers" - you might prefer to think of this set as "the Cantor set", and some other object in the model of ZF as its "real numbers", but there will be some nice translation between the Cantor set and your set, that gives the relevant topological and measure-theoretic properties. Of course, since we're just talking about models of ZF, none of this is going to be the real real numbers. After all, since ZF is a first-order theory, the Löwenheim-Skolem theorem guarantees that it has a countable model. This model thinks that its "real numbers" are uncountable, but that's just because the model doesn't know what uncountable really means. (This is called Skolem's Paradox - http://en.wikipedia.org/wiki/Skolem%27s_paradox>wikipedia, http://plato.stanford.edu/entries/paradox-skolem/>Stanford Encyclopedia of Philosophy.) What Solovay showed is that if you start with a countable model of ZFC that has an inaccessible cardinal (assuming that inaccessibles are consistent, then there is such a model, and we have almost as much reason to believe that inaccessibles are consistent as we do to believe that ZFC is consistent) then you can use Cohen's method of forcing to construct a different (countable) model of ZF where there are no unmeasurable sets of "reals". Of course, the first result you stated (that any two models of the reals are isomorphic) holds within any model of set theory, assuming you're talking about "honest" second-order models (that is, models of reals that are "honest" with respect to the notion of "subset" that you get from the ambient model of ZF). But the notion of "honest" second-order model doesn't even translate when you move from one model of set theory to another. So Solovay's model of ZF has the property that every "honest" model of second-order analysis (or third-order number theory) has no non-measurable sets, while any model of ZFC has the property that every "honest" model of second-order analysis (or third-order number theory) does have non-measurable sets. That's how your two results are consistent. Thanks Kenny! I wish I had more than one vote to give you. – David Speyer Oct 21 '09 at 13:04 Thanks for spelling this all out! I was thinking of writing up an answer along these lines, but got lazy, and your explanation is clear and to the point. – John Goodrick Oct 21 '09 at Yes, seriously, thanks very much! – userN Oct 22 '09 at 1:22 add comment Be warned: this is pretty far from my expertise; I may be making stupid errors. There is a subtlety which shows up in theorems like "All the models of the reals are isomorphic." In any axiomatization of the reals, you will have some sort of completeness axiom. For Dedekind's axiom: If L and R are subsets of the reals, such that l <= r for any l in L and r in R, then there is some x such that l <= x <= r for any l in L and any r in R. This axiom will always make reference to set theory; in more technical terms, it is not a first order axiom. Now, there are two things that are both true. up vote (1) Define a "straight-forward model of the reals" to be a set R, with relations +, *, 0, 1, =, < obeying the axioms of the reals. Here the word "subset" in Dedekind's axiom is to be 4 down interpreted as an honest subset of R. Any two straight-forward models of R are isomorphic. Whether or not they have non-measurable sets depends on which set theory you use in the preceding definition; if you use ZFC, then they have non-measurable sets. (2) Define a "whacked model of the reals" to be a set R, with relations +, *, 0, 1, =, <, and a set S, with relations \subset and \in. These obey the axioms of "reals and sets of reals". So R is an honest ordered field with respect to (+, *, 0, 1, =, <); the Dedekind axiom holds if we interpret "set" as "element of S"; and (S, \subset, \in) has all the properties that the subsets of R should have according to ZF (no C). However, the elements of S do not have to be all the subsets of R; they just have to be all the subsets whose existence can be proven from ZF and the axioms of the reals. Not all whacked models are isomorphic; and some of them don't have nonmeasurable sets. I guess by "model of R" you must mean, "another model satisfying all the same second-order formulas as R"? – John Goodrick Oct 21 '09 at 0:00 I'm afraid I really don't understand what a "whack model of the reals" is supposed to be. You're saying that it satisfies all the second-order axioms of the genuine real-number field, and also has a bunch of things in S which behave like subsets of S, but S may not include all subsets of S? There is a sentence phi in the second-order language with S which is true in a model M if and only if M has a subset which is not encoded by any element of S. So this formula phi could be true in a "whack model"? I.e. not all "whack models" satsify all the same second-order sentences? – John Goodrick Oct 21 '09 at 0:36 ^^ "... things in S which behave like subsets of M", sorry (wish I could edit my comment). – John Goodrick Oct 21 '09 at 0:38 1 The answer, in both cases, is that the axiom system in question is not first order. The completeness and well ordering axioms are about subsets of R and Z. If you interpret "subset" as meaning "subset", and hence make the axiom system second order, than there really is only one model of these theories up to isomorphism. – David Speyer Oct 21 '09 at 3:47 To clarify the distinction, go through the axioms that you want R to satisfying, and replace "is a subset of" with the phrase "est un sous-ensemble de". Now the point is that you can make 2 the notions of "ensembles" and the relation of being a "sous-ensemble" part of your model, and it doesn't have to be related to actual sets and subsets. In particular, there are models satisfying these axioms where R has only countably many "sous-ensembles". I think the presentation on Wikipedia is nice: en.wikipedia.org/wiki/Second-order_logic# Why_second-order_logic_is_not_reducible_to_first-order_logic – Tom Church Oct 21 '09 at 4:23 show 3 more comments up vote 3 down vote Thanks for the link to the paper I was thinking of; even better would be to (shortly) describe it. It is the proof by Solovay of the (relative) consistency of ZF + Dependent Choice + 2 "every subset of the reals is Lebesgue measurable": Solovay, Robert M. (1970). "A model of set-theory in which every set of reals is Lebesgue measurable". Annals of Mathematics. Second Series 92: 1–56. – Benoit Jubin Oct 20 '09 at 21:58 add comment It could certainly be true if you take your set theoretic axioms to be something other than ZFC. For instance I'm pretty sure that in ZF + AD (Axiom of Determinacy), every set of the real line is measurable. up vote 2 EDIT: In fact, whether or not you like AD, it is the case that "every set is measurable" is consistent with ZF. Thus ZF (without Choice) certainly supports two models of R, one in which down vote every set is measurable and another in which there exist non-measurable sets. I'm not actually sure how to interpret your comment "any two models of the reals are isomorphic". add comment I think if you assume the Axiom of Choice, then unmeasurable sets are inevitable. Was your tea-time companion perhaps entertaining you with glimpses of the worlds that are possible up vote 1 down without AC? add comment Not the answer you're looking for? Browse other questions tagged measure-theory set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/1480/models-of-the-reals-which-have-no-unmeasurable-sets?answertab=active","timestamp":"2014-04-17T07:25:22Z","content_type":null,"content_length":"86618","record_id":"<urn:uuid:498ec26f-9695-4ddf-a1ba-48018db28e6d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
d Events Anyone who is not shocked by quantum theory has not understood it. Niels Bohr, 1927 A paper written by Einstein, Podalsky, and Rosen (EPR) in 1935 described a thought experiment which, the authors believed, demonstrated that quantum mechanics does not provide a complete description of physical reality, at least not if we accept certain common notions of locality and realism. Subsequently the EPR experiment was refined by David Bohm (so it is now called the EPRB experiment) and analyzed in detail by John Bell, who highlighted a fascinating subtlety that Einstein, et al, may have missed. Bell showed that the outcomes of the EPRB experiment predicted by quantum mechanics are inherently incompatible with conventional notions of locality and realism combined with a certain set of assumptions about causality. The precise nature of these causality assumptions is rather subtle, and Bell found it necessary to revise and clarify his premises from one paper to the next. In Section 9.6 we discuss Bell's assumptions in detail, but for the moment we'll focus on the EPRB experiment itself, and the outcomes predicted by quantum mechanics. Most actual EPRB experiments are conducted with photons, but in principle the experiment could be performed with massive particles. The essential features of the experiment are independent of the kind of particle we use. For simplicity we'll describe a hypothetical experiment using electrons (although in practice it may not be feasible to actually perform the necessary measurements on individual electrons). Consider the decay of a spin-0 particle resulting in two spin-1/2 particles, an electron and a positron, ejected in opposite directions. If spin measurements are then performed on the two individual particles, the correlation between the two results is found to depend on the difference between the two measurement angles. This situation is illustrated below, with a and b signifying the respective measurement angles at detectors 1 and 2. Needless to say, the mere existence of a correlation between the measurements on these two particles is not at all surprising. In fact, this would be expected in most classical models, as would a variation in the correlation as a function of the absolute difference q = |a - b| between the two measurement angles. The essential strangeness of the quantum mechanical prediction is not the mere existence of a correlation that varies with q, it is the non-linearity of the predicted variation. If the correlation varied linearly as q ranged from 0 to p, it would be easy to explain in classical terms. We could simply imagine that the decay of the original spin-0 particle produced a pair of particles with spin vectors pointing oppositely along some randomly chosen axis. Then we could imagine that a measurement taken at any particular angle gives the result UP if the angle is within p/2 of the positive spin axis, and gives the result DOWN otherwise. This situation is illustrated below: Since the spin axis is random, each measurement will have an equal probability of being UP or DOWN. In addition, if the measurements on the two particles are taken in exactly the same direction, they will always give opposite results (UP/DOWN or DOWN/UP), and if they are taken in the exact opposite directions they will always give equal results (UP/UP or DOWN/DOWN). Also, if they are taken at right angles to each other the results will be completely uncorrelated, meaning they are equally likely to agree or disagree. In general, if q denotes the absolute value of the angle between the two spin measurements, the above model implies that the correlation between these two measurements would be C(q) = (2/p)q - 1, as plotted below. This linear correlation function is consistent with quantum mechanics (and confirmed by experiment) if the two measurement angles differ by q = 0, p/2, or p, giving the correlations -1, 0, and +1 However, for intermediate angles, quantum theory predicts (and experiments confirm) that the actual correlation function for spin-1/2 particles is not the linear function shown above, but the non-linear function given by C(q) = -cos(q), as shown below On this basis, the probabilities of the four possible joint outcomes of spin measurements performed at angles differing by q are as shown in the table below. (The same table would apply to spin-1 particles such as photons if we replace q with 2q.) To understand why the shape of this correlation function defies explanation within the classical framework of local realism, suppose we confine ourselves to spin measurements along one of just three axes, at 0, 120, and 240 degrees. For convenience we will denote these axes by the symbols A, B, and C respectively. Several pairs of particles are produced and sent off to two distant locations in opposite directions. In both locations a spin measurement along one of the three allowable axes is performed, and the results are recorded. Our choices of measurements (A, B, or C) may be arbitrary, e.g., by flipping coins, or by any other means. In each location it is found that, regardless of which measurement is made, there is an equal probability of spin UP or spin DOWN, which we will denote by "1" and "0" respectively. This is all that the experimenters at either site can determine separately. However, when all the results are brought together and compared in matched pairs, we find the following joint correlations The numbers in this matrix indicate the fraction of times that the results agreed (both 0 or both 1) when the indicated measurements were made on the two members of a matched pair of objects. Notice that if the two distant experimenters happened to have chosen to make the same measurement for a given pair of particles, the results never agreed, i.e., they were always the opposite (1 and 0, or 0 and 1). Also notice that, if both measurements are selected at random, the overall probability of agreement is 1/2. The remarkable fact is that there is no way (within the traditional view of physical processes) to prepare the pairs of particles in advance of the measurements such that they will give the joint probabilities listed above. To see why, notice that each particle must be ready to respond to any one of the three measurements, and if it happens to be the same measurement as is selected on its matched partner, then it must give the opposite answer. Hence if the particle at one location will answer "0" for measurement A, then the particle at the other location must be prepared to give the answer "1" for measurement A. There are similar constraints on the preparations for measurements B and C, so there are really only eight ways of preparing a pair of particles These preparations - and only these - will yield the required anti-correlation when the same measurement is applied to both objects. Therefore, assuming the particles are pre-programmed (at the moment when they separate from each other) to give the appropriate result for any one of the nine possible joint measurements that might be performed on them, it follows that each pair of particles must be pre-programmed in one of the eight ways shown above. It only remains now to determine the probabilities of these eight preparations. The simplest state of affairs would be for each of the eight possible preparations to be equally probable, but this yields the measurement correlations shown below Not only do the individual joint probabilities differ from the quantum mechanical predictions, this distribution gives an overall probability of agreement of 1/3, rather than 1/2 (as quantum mechanics says it must be), so clearly the eight possible preparations cannot be equally likely. Now, we might think some other weighting of these eight preparation states will give the right overall results, but in fact no such weighting is possible. The overall preparation process must yield some linear convex combination of the eight mutually exclusive cases, i.e., each of the eight possible preparations must have some fixed long-term probability, which we will denote by a, b,.., h, respectively. These probabilities are all positive values in the range 0 to 1, and the sum of these eight values is identically 1. It follows that the sum of the six probabilities b through g must be less than or equal to 1. This is a simple form of "Bell's inequality", which must be satisfied by any local realistic model of the sort that Bell had in mind. However, the joint probabilities in the correlation table predicted by quantum mechanics imply Adding these three expressions together gives 2(b + c + d + e + f + g) = 9/4, so the sum of the probabilities b through g is 9/8, which exceeds 1. Hence the results of the EPRB experiment predicted by quantum mechanics (and empirically confirmed) violate Bell's inequality. This shows that there does not exist a linear combination of those eight preparations that can yield the joint probabilities predicted by quantum mechanics, so there is no way of accounting for the actual experimental results by means of any realistic local physical model of the sort that Bell had in mind. The observed violations of Bell's inequality in EPRB experiments imply that Bell's conception of local realism is inadequate to represent the actual processes of nature. The causality assumptions underlying Bell's analysis are inherently problematic (see Section 9.7), but the analysis is still important, because it highlights the fundamental inconsistency between the predictions of quantum mechanics and certain conventional ideas about causality and local realism. In order to maintain those conventional ideas, we would be forced to conclude that information about the choice of measurement basis at one detector is somehow conveyed to the other detector, influencing the outcome at that detector, even though the measurement events are space-like separated. For this reason, some people have been tempted to think that violations of Bell's inequality imply superluminal communication, contradicting the principles of special relativity. However, there is actually no effective transfer of information from one measurement to the other in an EPRB experiment, so the principles of special relativity are safe. One of the most intriguing aspects of Bell's analysis is that it shows how the workings of quantum mechanics (and, evidently, nature) involve correlations between space-like separated events that seemingly could only be explained by the presence of information from distant locations, even though the separate events themselves give no way of inferring that information. In the abstract, this is similar to "zero-information proofs" in To illustrate, consider a "twins paradox" involving a pair of twin brothers who are separated and sent off to distant locations in opposite directions. When twin #1 reaches his destination he asks a stranger there to choose a number x[1] from 1 to 10, and the twin writes this number down on a slip of paper along with another number y[1] of his own choosing. Likewise twin #2 asks someone at his destination to choose a number x[2], and he writes this number down along with a number y[2] of his own choosing. When the twins are re-united, we compare their slips of paper and find that |y[2] - y [1]| = (x[2] - x[1])^2. This is really astonishing. Of course, if the correlation was some linear relationship of the form y[2] - y[1] = A(x[2] - x[1]) + B for any pre-established constants A and B, the result would be quite easy to explain. We would simply surmise that the twins had agreed in advance that twin #1 would write down y[1] = Ax[1] - B/2, and twin #2 would write down y[2] = Ax[2] + B /2. However, no such explanation is possible for the observed non-linear relationship, because there do not exist functions f[1] and f[2] such that f[2](x[2]) - f[1](x[1]) = (x[2] - x[1])^2. Thus if we assume the numbers x[1] and x[2] are independently and freely selected, and there is no communication between the twins after they are separated, then there is no "locally realistic" way of accounting for this non-linear correlation. It seems as though one or both of the twins must have had knowledge of his brother's numbers when writing down his own number, despite the fact that it is not possible to infer anything about the individual values of x[2] and y[2] from the values of x[1] and y[1] or vice versa. In the same way, the results of EPRB experiments imply a greater degree of inter-dependence between separate events than can be accounted for by traditional models of causality. One possible idea for adjusting our conceptual models to accommodate this aspect of quantum phenomena would be to deny the existence of any correlations until they becomes observable. According to the most radical form of this proposal, the universe is naturally partitioned into causally compact cells, and only when these cells interact do their respective measurement bases become reconciled, in such a way as to yield the quantum mechanical correlations. This is an appealing idea in many ways, but it's far from clear how it could be turned into a realistic model. Another possibility is that the preparation of the two particles at the emitter and the choices of measurement bases at the detectors may be mutually influenced by some common antecedent event(s). This can never be ruled out, as discussed in Section 9.6. Lastly, we mention the possibility that the preparation of the two particles may be conditioned by the measurements to which they are subjected. This is discussed in Section 9.10. Return to Table of Contents
{"url":"http://mathpages.com/rr/s9-05/9-05.htm","timestamp":"2014-04-17T00:54:20Z","content_type":null,"content_length":"31382","record_id":"<urn:uuid:6b753961-b908-4e92-81e3-0385f4b8b0d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Team members in Nanoscale Theory, Modeling and Simulation Thrust. • Peter Binev: (Ph.D., Mathematics, Sofia University, 1985), Scientific Computing, Approximation Theory, Numerical Analysis. Research interests include: nonlinear approximation, learning theory, high dimensional problems, numerical methods for PDEs, computer graphics, image and surface processing. • Sophya Garashchuk: (Ph.D., Chemistry, University of Notre Dame, 1998) Theoretical and computational chemistry including quantum effects in dynamics of nuclei, development of approximate quantum potential method applicable to large molecular systems and studies of reactivity of hyperthermal oxygen. • Xiaoming He: (Ph.D., Mechanical Engineering, University of Minnesota, 2004), Heat and mass transfer in medicine and biotechnology; Cryobiology and biopreservation; Thermal therapy, hyperthermia, and cryosurgery; Molecular, cellular, and tissue engineering; Biomechanics and biothermodynamics; Bionanotechnology and bioMEMS. • Andreas Heyden: (Ph. D., Chemical Engineering, Hamburg University of Technology, 2005), computational nanomaterial science and heterogeneous catalysis, multiscale methods.Lili Ju: (Ph.D., Iowa State University, 2002), Computational Mathematics. Research interests include: Scientific computation and numerical analysis. Exact boundary controllability problems for the wave equation. Parallel algorithms and high-performance computing. Human brain imaging. • Xinfeng Liu: (Ph.D., Mathematics, SUNY at Stonybrook), Computational biology, computational fluid dybamics, cell migration, front tracking methods, parallel computing.Yuriy Pershin: (Ph.D., Physics, UC San Diego, 2006), computational/theoretical nanophysics, charge and spin transport in molecules, semiconductor structures and other submicron electronic devices, different aspects of transport in biological systems. • Vitaly Rassolov: (Ph.D., Chemistry, University of Notre Dame, 1996), Quantum chemistry, hyperfine interactions, use of linear operators to describe electron correlation effects in molecules. • Hong Wang: (Ph.D., Mathematics, University of Wyoming, 1992), Numerical Analysis and Differential Equations. Research interests include: Numerical approximation to differential/integral equations, scientific computations. • Qi Wang: (Ph.D., Mathematics, Ohio State University, 1991), Applied and Computational Mathematics, Computational Fluid Dynamics and Rheology of Complex Fluids, Continuum Mechanics and Kinetic Theory, Multiscale Modeling and computation of soft matter and complex fluids of anisotropic Microstructures, Multiscale modeling and computation of biofluids and biomaterials, Parallel and high performance Computing. • Xiaofeng Yang: (Ph.D., Mathematics, Purdue University, 2007), Numerical analysis, computational rheology, phase field modeling, interfacial dynamics, parallel computing.
{"url":"http://nano.sc.edu/research/nanoscaletheorymodelingsimulation/team.aspx","timestamp":"2014-04-20T16:03:25Z","content_type":null,"content_length":"18987","record_id":"<urn:uuid:c81241bc-2c6f-4127-ab94-4fa0ceb2227b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Publications (7)33.64 Total impact Physical Review Letters 09/2000; 85(12). · 7.94 Impact Factor [show abstract] [hide abstract] ABSTRACT: We study the relaxation of a spin I that is weakly coupled to a quantum mechanical environment. Starting from the microscopic description, we derive a system of coupled relaxation equations within the adiabatic approximation. These are valid for arbitrary I and also for a general stationary non--equilibrium state of the environment. In the case of equilibrium, the stationary solution of the equations becomes the correct Boltzmannian equilibrium distribution for given spin I. The relaxation towards the stationary solution is characterized by a set of relaxation times, the longest of which can be shorter, by a factor of up to 2I, than the relaxation time in the corresponding Bloch equations calculated in the standard perturbative way. Comment: 4 pages, Latex, 2 figures Physical review. B, Condensed matter 06/2000; [show abstract] [hide abstract] ABSTRACT: We study the spin relaxation in an interacting two--dimensional electron gas in a strong magnetic field for the case that the electron density is close to filling just one Landau sub--level of one spin projection, i.e., for filling factor near one. Assuming the relaxation to be caused by scattering with phonons, we derive the kinetic equations for the electron's spin--density which replace the Bloch equations in our case. These equations are non--linear and their solution depends crucially on the filling factor and on the temperature of the phonon bath. In the limit of zero temperature and for filling factor 1, the solution relaxes asymptotically with a power law inversely proportional to time, instead of following the conventional exponential behavior. Comment: 4 pages, 1 figure Physical Review Letters 11/1998; · 7.94 Impact Factor Physics-Uspekhi 01/1998; 41(2):134-138. · 1.87 Impact Factor Physical Review Letters 01/1997; 79(19):3792-3792. · 7.94 Impact Factor [show abstract] [hide abstract] ABSTRACT: We study interacting electrons in two dimensions moving in the lowest Landau level under the condition that the Zeeman energy is much smaller than the Coulomb energy and the filling factor is one. In this case, Skyrmion quasiparticles play an important role. Here, we present a simple and transparent derivation of the corresponding effective Lagrangian. In its kinetic part, we find a non-zero Hopf term the prefactor of which we determine rigorously. In the Hamiltonian part, we calculate, by means of a gradient expansion, the Skyrmion-Skyrmion interaction completely up to fourth order in spatial derivatives. Comment: 4 pages, Latex Physical Review Letters 10/1996; · 7.94 Impact Factor Top Journals • 1996–2000 □ Physikalisch-Technische Bundesanstalt Berlín, Berlin, Germany
{"url":"http://www.researchgate.net/researcher/7186688_W_Apel","timestamp":"2014-04-21T13:48:46Z","content_type":null,"content_length":"120965","record_id":"<urn:uuid:a182ead0-d4ae-4ec4-969e-53e90c07f18b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Syllabus Special Notes: • Office Hours: □ Mon: 9:25- 10:20 □ Mon: 2:15- 3:15 □ Tues: 9:15- 9:45 □ Tues: 3:30- 4:30 □ Wed: 1:00- 1:30 □ Thurs: 08:50- 09:50 □ Fri: 1:05- 2:00 Instr: Dr. Roger Griffiths Office: Old Main 404 (Tower) Email: griffiths.roger@gmail.com Phone: 824-2123 Location: Zurn 213 Class Time: Mon, Wed, Fri: 1:00 - 2:30 Web: http://math.mercyhurst.edu/~griff/courses/m240/ Text: Fundamentals of Differential Equations (7th Edition), by Nagle, Saff, Snider Goals: Our goals involve gaining an introduction to the mathematical content of ordinary differential equations and their applications. This will include analytical, qualitative and numerical methods for ordinary differential equations. Prior to calculus, we used our understanding of the rules of algebra to develop techniques for solving algebraic equations. In this class we will use both the rules of algebra and the rules of calculus (e.g., differentiation shortcuts, integration techniques, etc.) to develop techniques for solving differential equations. We will continue to improve our ability to write mathematics. Why? The major application of calculus is posing, solving, and understanding solutions of differential equations. Because many laws of nature are equations involving rates at which quantities change, this idea is a derivative, and equations containing derivatives are differential equations. So, in order to understand the many processes of change in the world, one needs to understand differential Evaluation: There will be weekly quizzes, occasional take-home assignments, two exams, and a cumulative final exam. Homework will be assigned but not collected. We will occasionally discuss the homework in class, but students are expected to clear up questions using my office hours. Quizzes and tests will be closed-book and administered in class. In-class quiz problems will be very similar to the assigned homework problems. The final exam will be cumulative (and worth twice a mid-term exam). No makeups will be given! Grading Policy: 2 exams at 100 points each, Quiz average out of 100 points, will drop 1 quiz score, Comprehensive Final exam worth 200 points. • Students are required to take all exams at the scheduled hour as they appear on the syllabus and course schedule. • A missed exam will result in the final exam being worth 300 points. There will be no late 'make-up' exams, as this is unfair to the rest of the class. • The quizzes will be based largely on the suggested homework, and should be expected any day. • Everyone is allowed to miss one quiz without penalty (for any reason). If you end up taking all of the quizzes, you may drop your low quiz score. Athletes or other individuals missing for school activities are to let me know BEFORE missing the quiz (or it lands above). • Part of any correct write-up includes: connecting your work, proper notation, and an explanation of steps as you see necessary. You should write-up problems as if you were explaining them to some one else. • Your overall performance in the course is measured by the total number of points you accumulate relative to the maximum 500 points possible. Your letter grade in this course will be based on the distribution below. • These are the only points possible in this class, there is no extra credit (or 'make up'), your asking for extra credit is a clear indication that you have not read your contract (this syllabus). Total Class Points Percent % Letter Grade Interpretation 470 - 500 94 to 100 A Exceptional and Rare 450 - 469 90 to 93 B+ Outstanding 420 - 449 84 to 89 B Very Good 390 - 419 78 to 83 C+ Good 350 - 389 70 to 77 C Satisfactory - Average 300 - 349 60 to 69 D Unsatisfactory 0 - 299 Below 60 F Failure Course Policies: • You are responsible for all that is announced or covered in class even if you are absent. • You are responsible for all the material in a given section unless told otherwise, use the course schedule and suggested homework as a guide. • A prerequisite for additional help outside the classroom is regular class attendance. • Every student is required to establish a class contact, that is, a fellow classmate that you may contact in case you are having a problem with a particular homework exercise at night/weekend or in the event you miss class you can get the class notes from them. • If you miss class, you are responsible for getting the notes from your 'class contact' (see above). • Email is great for simple communications, but more complex issues must be handled in person. • Don't use email as an excuse to avoid personal contact. • Due to the overwhelming amount of email I receive, any email requests that involve more than a yes or no response may not get addressed, please come see me in that case. • I expect you to read this syllabus and get clarification of any items you do not understand the first week of class. If you send me an email asking me about something covered in this syllabus, that email will be disregarded. • Please fasten your seat belts and observe the 'No Smoking' signs when in flight. Calculators and Computers. You may use a calculator/computer to help learn the material, but not on exams or quizzes. There are several portions of the class that will require the use of a computer, however, all of our examinations are carefully designed to be taken "closed book" without the use of calculators or computers. Examination problems will focus on the basic methods and problem solving techniques which every student of differential equations must know without a calculator or textbook. Important Dates to Remember: Exam 1: Wednesday, April 3^rd Exam 2: Friday, April 26^th Final Examination: Wednesday, May 15^th; 10:30 - 12:30 There will be a link to your grades from our class web-page (these are NOT on blackboard). To check your grade login with your (Mercyhurst) email address as user name and your student ID as your password. You may change your password or email address once you have logged in. Access to your grades will be further explained in class. Suggested Homework: http://math.mercyhurst.edu/~griff/courses/m240/HW.php I do not collect or grade the homework. You will be held accountable for the mastery of homework problems via the quizzes (which can occur any day). As such, you get no credit for merely attempting the homework, your goal is independent mastery of each type of problem assigned. The quizzes serve as an immediate assessment of the extent to which you mastered a particular assignment. Good quiz results should serve as positive feedback, but poor quiz results mean you must go back and master that material. Homework is far and away the single most important part of any mathematics course because this is when most of the learning takes place. Homework problems will be assigned regularly and I expect you to do them. If you are unable to do a problem I expect you to find out how to do it. You have at your disposal several means of meeting this expectation. You can stick with it until you figure it out yourself. You can discuss the problem with a classmate or several classmates (strongly encouraged). You can ask me about the problem in class, time permitting. You can see me individually during my office hours. I am always happy to talk to you during my office hours or at any other time if not otherwise committed. You can discuss the problem with anyone who can and is willing to help you. Simply ignoring a problem that you are unable to solve is not acceptable. You should continue to work problems of a given type (even beyond the assigned problems) until you see the pattern yourself, without assistance of any type. As you 'PRACTICE', keep in mind our stated goal 'to improve our ability to write mathematics, you will want to practice in the manner you will be accessed. In keeping with college policy, any student with a disability who needs academic accommodations must call Learning Differences Program secretary at 824-3017, to arrange a confidential appointment with the director of the Learning Differences Program during the first week of classes.
{"url":"http://math.mercyhurst.edu/~griff/courses/m240/syllabus.php","timestamp":"2014-04-20T18:24:23Z","content_type":null,"content_length":"17198","record_id":"<urn:uuid:f2729614-8f9b-48f5-8eb2-167cbeadf794>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Summer 2001 Rigorous Studies in the Statistical Mechanics of Lattice Models / Études rigoureuses dans la mécanique statistique des modèles de réseaux (Chris Soteros and Stu Whittington, Organizers) MIREILLE BOUSQUET-MÉLOU, LaBRI, Universite Bordeaux 1. 351 cours de la Liberation. 33405 Talence Cedex. France The site-perimeter of bargraphs The site perimeter of a lattice animal is the number of vertices of the underlying grid that are adjacent to the animal (but do not belong to it). This parameter plays an important role in percolation models. As self-avoiding polygons can also be seen as polyominoes (hence animals), one can also study their site perimeter. The case of polygons that are both column- and row-convex is easy to deal with, and we shall explain why it always yields an algebraic generating function. But the main result of the talk will be the exact solution of a model of polygons that are ONLY column-convex: bargraphs. This model can also be described as a self-interacting partially directed walk attached to a surface. We obtain the site perimeter generating function for bargraphs as a q-series in which an algebraic series is plugged: to our knowledge, this type of series has not appeared before in the zoo of animals generating functions. (joint work with Andrew Rechnitzer) RICHARD BRAK, Melbourne Combinatorial aspects of the simple asymmetric exclusion process The simple asymmetric exclusion process (ASEP) is a finite one dimensional system of particles hopping with excluded volume. It is a dynamical model for problems such as driven diffusive systems, traffics flows and the kinetics of biopolymerization. Much work has been done on exact calculations for the stationary distribution for the many variants of the model. All these methods are all algebraic. In this talk we show that there is a very rich combinatorial dimension to the ASEP. The infinite dimensional matrices that arise as representations of the algebras in the problem can be interpreted as lattice path problems. This interpretation leads to combinatorial solutions to many of the problems. These solutions use new involutions and bijections. Interestingly, one of the lattice path representations is related to compact percolation with a damp wall. FRANK DEN HOLLANDER, Eurandom, Eindhoven, The Netherlands Heteropolymers near interfaces We consider a heteropolymer, consisting of an i.i.d. concatenation of hydrophilic and hydrophobic monomers, in the presence of water and oil arranged in alternating layers. The heteropolymer is modelled by a directed path where the vertical component lives on Z, and the layers are horizontal with equal width. The path measure for the vertical component is given by that of simple random walk multiplied by an exponential weight factor that favors the combinations hydrophilic/water and hydrophobic/oil and disfavors the combinations hydrophilic/oil and hydrophobic/water. We study the vertical motion of the heteropolymer as a function of its total length n when the width of the layers is d[n] and the parameters in the exponential weight factor are such that the heteropolymer tends to stay close to an interface (the so-called ``localized regime''). In the limit as n ®¥ and under the condition that lim d[n]loglogn = ¥, lim d[n]logn = 0, n ® ¥ n ® ¥ we show that the vertical motion is a diffusive hopping between neighboring interfaces on a time scale where c is computed explicitly in terms of a variational problem. An analysis of this variational problem sheds light on the optimal hopping strategy. (Joint work with M. Wüthrich, Winterthur, Switzerland.) YUANAN DIAO, University of North Carolina at Charlotte, Charlotte, North Carolina 28269, USA Upper bounds on linking numbers of thick links The maximum of the linking number between two lattice polygons of lengths n[1], n[2] (with n[1] £ n[2]) is proven to be of the order of n[1](n[2])^[1/3]. This result is generalized to smooth links of unit thickness. The result also implies that the writhe of a lattice knot K of length n is at most 26n^4/3 /p. In the second half of the paper examples are given to show that linking numbers of order n[1](n[2])^[1/3] can be obtained when n[1]^3 ³ n[2]. When n[1]^3 < n[2], it is further shown that the maximum of the linking number between these two polygons is bounded by c n[1]^2 for some constant c > 0. Finally the maximal total linking number of lattice links with more than 2 components is generalized to k components. BERTRAND DUPLANTIER, Institut Henri Poincaré 11, rue Pierre et Marie Curie, F-75231 Paris Cedex 05 and Service de Physique Théorique, CEA/ Saclay, F-91191 Gif-Sur-Yvette Cedex, France Distribution of potential near conformally invariant boundaries The distribution of the electrostatic potential near any conformally invariant fractal boundary is exactly solved in two dimensions. This class of boundaries appears in a natural way in the statistical mechanics of any random cluster at a critical point, like a percolation or an Ising cluster, a Brownian path, or a self-avoiding walk. Consider a single charged random cluster, generically called C. Let H (z) be the potential at exterior point z, with Dirichlet boundary conditions H(w Î ¶C)=0 on the outer boundary ¶C of C. The multifractal formalism characterizes subsets ¶C[a] of boundary sites w by a local exponent a, such that the potential scales at distance r=|z-w| from w as H (r| w Î ¶C[a]) » r ^a, r ® 0. (1) The subset ¶C[a] has a varying Hausdorff ``multifractal'' dimension f(a)=dim(¶C[a]). An electrostatic exponent a corresponds geometrically to a local equivalent opening angle q = p/a along the fractal boundary, and the Hausdorff dimension [^(f)](q)=f(a = p/q) of the boundary subset with such angle q is found to be f (q)= pq - 25-c12 (p-q)^2q(2p-q) , (2) with c is a parameter describing the critical model. This naturally solves the potential theory near a random fractal in a statistical, i.e., probabilistic sense. Surprisingly, this classical problem is solved in the simplest way by a method of ``quantum gravity'', borrowed from string theory. Values of c are, for instance, c=1/2 for an Ising cluster; c=0 for the frontier of a Brownian motion [2], for a self-avoiding walk [3], as well as for a critical percolation cluster [4]. One thus finds for c=0 that these three boundaries all have the statistics of a self-avoiding walk, with a unique external perimeter dimension D[EP]=sup[q][^(f)](q)=4/3, which establishes and generalizes a well-known conjecture by Mandelbrot. This dimension is identical to the external perimeter dimension for percolation, directly derived in [5]. For any value of c, the Hausdorff dimension of the frontier D[EP] = sup[q][^(f)](q)=[^(f)]([^(q)]), and the typical harmonic angle [^(q)] satisfy [^(q)]=p(3-2D[EP]). For a critical Potts cluster, the dimensions D[EP] of the external perimeter (which is a simple curve) and D[H] of the cluster's hull (which possesses double points) obey the duality equation [1] independently of the model. A related covariant MF spectrum is obtained for any critical system near the cluster boundary. {1.} B. Duplantier, Conformally Invariant Fractals and Potential Theory. Phy. Rev. Lett. 84(2000), 1363-1367. {2.} , Random Walks and Quantum Gravity in Two Dimensions. Phy. Rev. Lett. 81(1998), 5489-5492. {3.} , Two-Dimensional Copolymers and Exact Conformal Multifractality. Phy. Rev. Lett. 82(1999), 880-883. {4.} , Harmonic Measure Exponents for 2D Percolation. Phy. Rev. Lett. 82(1999), 3940-3943. {5.} A. Aizenman, B. Duplantier and A. Aharony, Path Crossing Exponents and the External Perimeter in 2D Percolation. Phys. Rev. Lett. 83(1999), 1359-1362. TONY GUTTMANN, The University of Melbourne, Victoria 3010, Australia The scaling function of self-avoiding polygons We derive the scaling function for rooted self-avoiding polygons, on the square and triangular lattices, enumerated by both perimeter and area, defined up to a single constant. The scaling function satisfies a Ricatti equation, and its solution is given by the logarithmic derivative of an Airy function. The scaling function for unrooted self-avoiding polygons is given by the logarithm of an Airy function. While believed to be exact, the results are not rigorous. The result suggests that the generating function may be the solution of a q-algebraic difference equation of unknown degree. For two-dimensional percolation theory, by contrast, the generating function cannot be expressed as a q-algebraic difference equation. EDNA JAMES, University of Saskatchewan Critical exponents for trails in Z^2 with a fixed number of vertices of degree 4 For N large, the possible number of N-edge embeddings in Z^d of an arbitrary connected graph t, is believed to grow asymptotically as A[t] e^kN N^^g[t]-1, where A[t] is a constant, depending on the type of graph t (a walk, polygon, figure-eight, etc.), and k and g[t] are called the connective constant and critical exponent, respectively. Assuming the above asymptotic form, we prove for d = 2, that the critical exponent for trailgons (closed trails) with k vertices of degree 4 is given by g[o] + k, where g[o] is the critical exponent for self-avoiding polygons. A similar relation holds between the critical exponents for all trails with k vertices of degree 4 and for self-avoiding walks. BUKS JANSE VAN RENSBURG, York University, Toronto, Ontario M3J 1P3 Knots in adsorbing polygons A model for a ring polymer adsorbing onto a solid wall is a polygon confined in the half space z ³ 0 of the cubic lattice, and adsorbing onto the z=0 plane. Such a polygon is also an embedding of the circle in real space, and its knot type is well defined. In this talk I shall present rigorous and numerical results (including the energy and mean size) of adsorbing polygons, with particular reference to the entanglement complexity of these polygons, as measured by considering their knot types. Numerical data were obtained by a multiple Markov Chain implementation of the pivot algorithm for polygons confined in half-space z ³ 0 interacting with the z=0 plane and in the canonical ensemble. ANTAL JARAI, PIMS-UBC, 1933 West Mall, Vancouver, British Columbia V6T 1Z2 Incipient infinite clusters in 2D percolation Percolation models at the critical point produce huge but finite connected clusters that have sometimes been referred to as ``incipient infinite clusters'' or ``infinite clusters at criticality''. In 1986 H. Kesten proposed the definition of an object, which we call the IIC, that makes sense of the above loose terminology. In his definition the origin is conditioned to be connected to the boundary of a large box (at p = p[c]), and the size of the box goes to infinity. In 2D the conditional measures can be shown to have a weak limit, and under the limiting measure the cluster of the origin is an infinite fractal-like set resembling the large critical clusters. As noted by M. Aizenman, the IIC provides the microscopic (lattice scale) description of large percolation clusters at p[c]. In this talk we give a proof of this observation in several settings. For example, consider the largest cluster in a finite box and pick a site from it uniformly at random. Then we show that the law of the cluster, when viewed from the random site, converges weakly to the IIC as the box gets large. Similar results hold, if the largest cluster is replaced by spanning clusters, the Chayes-Chayes-Durrett cluster or the invaded region in invasion percolation. All of these objects have been proposed as alternative definitions of the ``incipient infinite cluster''; therefore our results show the equivalence of several natural definitions of the IIC. PIERRE LEROUX, Université du Québec, Montréal, Québec Enumeration of symmetry classes of convex and parallelogram polyominoes Several families of polynominoes, defined by convexity or directedness properties, have been enumerated according to area, perimeter and more refined parameters, on the square lattice. In this context, polyominoes are taken up to translations. However, it is also natural, from a geometric point of view, to consider polyominoes up to rotations and reflections, that is as ``tiles'' which can move freely in space. These congruence-type polyominoes can be seen as orbits of the dihedral group D[4] acting on (translation-type) polyominoes. Using Burnside's Lemma, we are led into the enumeration of the various symmetry classes of polyominoes associated with each element of D[4]. Together with my students E. Rassart and A. Robitaille, we have solved this problem in the cases of convex and of parallelogram polyominoes. Using Moebius inversion, we have also enumerated the asymmetric polyominoes of these two sorts. Many of the symmetry classes which occur are closely related to existing families of discrete models in statistical mechanics, for example compact source directed convex polyominoes, and their study raises challenging questions. In this talk, I will survey this work and mention some open problems. NEAL MADRAS, Department of Mathematics and Statistics, York University, Toronto, Ontario M3J 1P3 Self-avoiding walks on hyperbolic graphs This talk considers graphs that correspond to regular tilings of the hyperbolic plane. One example is the infinite planar graph in which every face is a triangle and eight triangles meet at every vertex. To a physicist, such graphs display ``infinite dimensional'' characteristics. In particular, self-avoiding walks should behave like ordinary random walks on these graphs. This predicts, for example, that there is exponentially small probability for a random N-step self-avoiding walk to end at a neighbour of the initial point. I shall discuss work in this direction. ALEKS OWCZAREK, The University of Melbourne, Victoria, Australia 3010 An infinite hierarchy of exact scaling functions The partition functions for various problems concerning n directed non-intersecting walks on a square lattice are known to be given by an n×n determinant. The analysis of the asymptotic behaviour is often an outstanding problem because of the difficulty with analysing such an expression. In this talk we focus on the particular problem of such walks interacting via contact potentials with a wall parallel to the direction of the walks, and derive the asymptotics of the partition function of a watermelon network of n such walks for all interaction parameters. The importance of underlying combinatorial results are highlighted in this calculation. As a result we give results for the associated network exponents in the three regimes: desorbed, adsorbed, and at the adsorption transition. Furthermore, we derive the full scaling function around the adsorption transition for all n. At the adsorption transition we also derive a simple ``product form'' for the partition function. NICHOLAS PIPPENGER, Computer Science Department, University of British Columbia, Vancouver, British Columbia V6T 1Z4 Random Boolean functions We consider random Boolean functions of n arguments, f :B^n ® B (where B = {0, 1}), for which each value f(x[1], ¼, x[n]) is independently 1 with probability p (and thus 0 with probability 1- p). We study the expected length [`(L)](n) of the shortest representation of such a function in disjunctive normal form (that is, as the disjunction of zero or more terms, each of which is the conjunction of zero or more literals, each of which is either an argument or its complement). The bounds we obtain for this problem involve correlation inequalities, cluster expansions, and other tools of statistical physics. ANDREW RECHNITZER, Department of Mathematics and Statistics, York University, York, Ontario Polymer adsorption and Dyck-paths One of the many places that lattice animals and their specialisations appear is in the modeling of the polymers in solution. The self-avoiding walk, for example, is used to model long chain polymers in solution; by altering the properties of the walk, different physical situations can be modeled, such as polymer collapse and adsorption. Though the self-avoiding walk model is a very good model of such behaviour, it has one major draw-back-it is (as yet) unsolved. It is possible to obtain a solvable model by considering directed-walk models, such as Dyck-paths. We review the methods for solving the Dyck-path model, and show that it undergoes an adsorption transition. We then extend this model to the problem of non-homogeneous polymers and explain the difficulty of finding general solutions. For an infinite family of cases we are able to find solutions and locate the adsorption transition. YVAN SAINT-AUBIN, Université de Montréal, CP 6128 succ centre-ville, Montréal, Quebéc H3C 3J7 Non-unitarity of some observables of the critical 2d Ising model The distribution of spins at the end of a half-infinite cylinder is obtained rigorously for the critical 2d Ising model on a square lattice. As the number of sites along the circumference goes to infinity, the ratio of number of sign flips to number of sites at the boundary tends to a constant. Using conformal field theory, one can ask the finer question, which pair of sign flips at the boundary of the cylinder are the endpoints of the same cluster inside the cylinder. This leads to a critical exponent related to a non-unitary representation of the Virasoro GORDON SLADE, Department of Mathematics, University of British Columbia Vancouver, British Columbia V6T 1Z2 High-dimensional networks of self-avoiding walks We discuss recent work with Remco van der Hofstad that proves that a sufficiently spread-out network of mutually-avoiding self-avoiding walks having the structure of a tree behaves like a system of independent Brownian motions above four dimensions. Extensions to networks having the shape of a general graph, rather than a tree, will also be discussed. The proof uses a generalisation of the lace expansion, in which the time variable is indexed by a tree or a graph rather than by an interval. ALAN SOKAL, New York University, New York, New York, USA Potts models, chromatic polynomials, and all that The q-state Potts model is a statistical-mechanical model that generalizes the well-known Ising model. It can be defined on an arbitrary finite graph, and its partition function encodes much important information about that graph (including its chromatic polynomial and its reliability polynomial). The complex zeros of the Potts-model partition function are of interest both to statistical mechanicians (in connection with the Lee-Yang picture of phase transitions) and to combinatorists. I begin by giving an introduction to all these problems. I then sketch two recent results: (a) Proof of a universal upper bound on the q-plane zeros of the chromatic polynomial (or antiferromagnetic Potts-model partition function) in terms of the graph's maximum degree (maximum number of nearest neighbors to any site). (b) Construction of a countable family of planar graphs whose chromatic zeros are dense in the whole complex q-plane except possibly for the disc |q-1| < 1. This talk is intended to be understandable to both mathematicians and physicists; no prior knowledge of either graph theory or statistical mechanics is required. CHRIS SOTEROS, Department of Mathematics and Statistics, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E6 Random knotting and linking in lattice models of polymers Self-avoiding polygons in Z^3 can be used to represent the possible conformations of a ring polymer. Studies of random knotting in such a lattice model have been motivated by the conjecture (due to Frisch and Wasserman (1968) and Delbruck (1962)) that almost all sufficiently long ring polymers are knotted, and more recently by evidence that the presence of knots in closed circular DNA can provide information about enzyme action on DNA. More generally, embeddings of graphs (not necessarily connected) in Z^3 can be used to represent the possible conformations of a polymer network, however, it remains a challenge to appropriately characterize the entanglement complexity of such networks. This has motivated studies of random linking in systems of self-avoiding polygons and random knotting in embeddings of graphs. Recent rigorous results about random knotting and linking in lattice models of polymers will be reviewed. The results are focussed on understanding i) the effects of geometrical constraints on the probability of knotting in self-avoiding polygons, ii) the probability of linking in systems of k self-avoiding polygons, and iii) the extent to which these results can be extended to the study of polymer networks. DE WITT SUMNERS, Florida State University, Tallahassee, Florida, USA Using knots to analyze DNA packing in viral capsids Bacteriophages are viruses that infect bacteria; the linear phage DNA is tightly packed in a protein icosahedron. When the viral DNA is released from the confining capsid, it circularizes, and these DNA circles are knotted with very high (80-95%) probability. We use the spectrum of DNA knots produced, together with simulation of knots in confined volumes to infer packing geometry. In particular, we conclude that DNA packing in the viral capsid is not random. STU WHITTINGTON, Department of Chemistry, University of Toronto, Toronto, Ontario Localization transition in a randomly coloured self-avoiding walk Consider a self-avoiding walk on the simple cubic lattice with vertices coloured A and B uniformly and independently. The walk is assigned a weight which is determined by the number of A vertices having positive z-coordinate and the number of B vertices having negative z-coordinate. This can be regarded as a model of a random copolymer with two types of monomers, each of which prefers one of two immiscible solvents. We shall discuss the existence of the limiting quenched average free energy for this model, and thermodynamic self-averaging. The system can be shown to undergo a phase transition corresponding to localization at the interface (i.e. to a phase in which a non-zero fraction of the vertices of the walk have zero z-coordinate). Recent results about the nature of the phase diagram will be presented.
{"url":"http://cms.math.ca/Events/summer01/abs/sm.html","timestamp":"2014-04-16T19:09:41Z","content_type":null,"content_length":"38999","record_id":"<urn:uuid:5acd5255-8c33-4038-92d4-ecc2fb4e4984>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the difference between matrix theory and linear algebra? up vote 9 down vote favorite Currently, I'm taking matrix theory, and our textbook is Strang's Linear Algebra. Besides matrix theory, which all engineers must take, there exists linear algebra I and II for math majors. What is the difference,if any, between matrix theory and linear algebra? 16 Likely the version of the course called "linear algebra" is proof-based and gets deeper into the conceptual content, whereas "matrix theory" probably focuses on applications. It's a matter of emphasis, really. – Qiaochu Yuan Jan 13 '10 at 17:20 The other difference I've seen is that matrix theory usually concentrates on the theory of real complex matrices. Linear algebra cares about those, but also rational canonical forms, etc... – Pace Nielsen Apr 1 '10 at 14:52 add comment 8 Answers active oldest votes Let me elaborate a little on what Steve Huntsman is talking about. A matrix is just a list of numbers, and you're allowed to add and multiply matrices by combining those numbers in a certain way. When you talk about matrices, you're allowed to talk about things like the entry in the 3rd row and 4th column, and so forth. In this setting, matrices are useful for representing things like transition probabilities in a Markov chain, where each entry indicates the probability of transitioning from one state to another. You can do lots of interesting numerical things with matrices, and these interesting numerical things are very important because matrices show up a lot in engineering and the sciences. up vote 41 In linear algebra, however, you instead talk about linear transformations, which are not (I cannot emphasize this enough) a list of numbers, although sometimes it is convenient to use a down vote particular matrix to write down a linear transformation. The difference between a linear transformation and a matrix is not easy to grasp the first time you see it, and most people would accepted be fine with conflating the two points of view. However, when you're given a linear transformation, you're not allowed to ask for things like the entry in its 3rd row and 4th column because questions like these depend on a choice of basis. Instead, you're only allowed to ask for things that don't depend on the basis, such as the rank, the trace, the determinant, or the set of eigenvalues. This point of view may seem unnecessarily restrictive, but it is fundamental to a deeper understanding of pure mathematics. 8 While it is true that people doing "Matrix Theory" often spend a lot of time with a choice of basis, it's important to note that this is frequently in pursuit of quantities that are invariant of choice of basis. – Dan Piponi Jan 13 '10 at 23:44 An even more basic question but in the same line is "What is the difference of a vector and a row (collum) matrix".Vectors are mathematical object living in a linear space or vector 1 space (which satisfy certain properties). Choosing a special set of vectors called a base, we can decompose every vector in the vector space into a kind of sum of vectors in this base. Thus every vector in a code, and this is the row (collum) matrix. The next step is to look at the homomorphisms (maps) between linear spaces. Choosing the base of the domain and the range we can represent the homomorphism by a matrix – Tran Chieu Minh Jan 14 '10 at 15:00 2 Even worse, matrices depend on a choice of an ordered basis. – Harry Gindi Mar 30 '10 at 22:30 Belated comment: Depends on what you call a matrix, Harry. If $X$ and $Y$ are sets and $K[X]$ and $K[Y]$ are their free K-vector spaces then a linear map $\colon K[X] \to K[Y]$ is the 3 same as a map of sets $X \to K^Y = X \times Y \to K$. I'd argue this is what a matrix really is and that ordering is an artifact of trying to write something in linear order on a piece of paper. – Per Vognsen Aug 4 '10 at 5:36 add comment A counter-quotation to the one from Dieudonné: We share a philosophy about linear algebra: we think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury. up vote 34 down vote (Irving Kaplansky, writing of himself and Paul Halmos) 2 I totally agree with this one. Thanks for sharing. – user1855 Apr 2 '10 at 2:27 +1! I confess that I like this quote much more than the other one (Dieudonné's), which, at least to me, appears a little arrogant. In my opinion, 'abstract' is not automatically 'better.' There are cases when one needs a concrete and efficient computation [Or, are all the algorithms implemented in Matlab just not smart enough because they use matrices? :) ] – user2734 Apr 2 '10 at 7:48 To echo the comment above: Kaplansky's quotation is that much more appropriate for people who code in low-level or numerical languages. It's possible to do a heck of a lot of symbolic calculation in such settings through the judicious use of integral matrices (here "integral" should be considered broadly). – Steve Huntsman Apr 23 '10 at 16:26 add comment The difference is that in matrix theory you have chosen a particular basis. up vote 21 down vote add comment Let me quote without further comment from Dieudonné's "Foundations of Modern Analysis, Vol. 1". up vote There is hardly any theory which is more elementary [than linear algebra], in spite of the fact that generations of professors and textbook writers have obscured its simplicity by 17 down preposterous calculations with matrices. 3 You should add this to the great quotes in mathematics thread. – Harry Gindi Mar 30 '10 at 22:30 It is ironic that a textbook on analysis would make such an outrageous claim on the trivially of another field: the analytic parts of linear algebra are truly deep and quite actively 6 researched. See, for example, Loewner's classification of matrix-monotone functions, or most any paper in quantum Shannon theory. Additionally, the entire field of quantum information theory (QIT) is essentially the study of unitary and self-adjoint operators on tensor products of Hilbert spaces, and a large majority the interesting questions in QIT retain 99% of their interest in the finite-dimensional case. – Jon Apr 4 '10 at 18:06 10 I don't think that there are many who can claim to have a better understanding of that things than Dieudonné. So instead of trying so hard to misunderstand him, try to find a meaning in his comment. – Tilemachos Vassias Apr 4 '10 at 18:53 3 It should also be pointed out that the "analytic parts of linear algebra" are more properly thought of as linear analysis, or in the case of operator monotone functions and calculations with the c.b. norm, even as non-linear analysis. I think castigating Dieudonné for this quote is taking unnecessary umbrage. – Yemon Choi Apr 4 '10 at 19:25 Of course Dieudonne meant "elementary" as in "simple and foundational", here not using "simple" to mean easy, but simple in the sense of structural complexity. It's not an arrogant 4 statement about how easy he thinks linear algebra is, but rather a castigation of those "generations of professors and textbook writers" who turned an elegant subject into a jumbled mess. – Harry Gindi Jun 10 '10 at 8:45 show 5 more comments Although some years ago I would have agreed with the above comments about the relationship between Linear Algebra and Matrix Theory, I DO NOT agree any more! See, for example Bhatia's "Matrix Analysis" GTM book. For example, doubly-(sub)stochastic matrices arise naturally in the classification of unitarily-invariant norms. They also naturally appear in the study of quantum entanglement, which really has nothing to do with a basis. (In both instances, all sorts of NONarbitrary bases come into play, mainly after the spectral theorem gets applied.) up vote 10 Doubly-stochastic matrices turn out to be useful to give concise proofs of basis-independent inequalities, such as the non-commutative Holder inequality: down vote tr |AB| $\le$ $||A||_p$ $||B||_q$ with 1/p+1/q=1, $|A|=(A^*A)^{1/2}$, and $||A||_p = (tr |A|^p)^{1/p}$ Doubly-stochastic matrices (in one interpretation, anyway) describe transition probabilities of some Markov chain where all the transitions are reversible. The relevant vector space is the free vector space over the states of the chain. Maybe this interpretation isn't directly relevant to the application you're thinking of, but there should be some connection. – Qiaochu Yuan Apr 4 '10 at 18:25 In the application to the Holder inequality, one uses the fact that if U is a unitary operator, then replacing the matrix elements of U by the squares of their absolute values yields a doubly-stochastic matrix. – Jon Apr 7 '10 at 1:33 add comment Matrix theory is the specialization of linear algebra to the case of finite dimensional vector spaces and doing explicit manipulations after fixing a basis. More precisely: The algebra of $n \times n$ matrices with coefficients in a field $F$ is isomorphic to the algebra of $F$-linear homomorphisms from an $n$-dimensional vector space $V$ over $F$, to itself. And the choice of such an isomorphism is precisely the choice of a basis for $V$. Sometimes you need concrete computations for which you use the matrix viewpoint. But for conceptual understanding, application to wider contexts and for overall mathematical elegance, the up vote 6 abstract approach of vector spaces and linear transformations is better. down vote In this second approach you can take over linear algebra to more general settings such as modules over rings(PIDs for instance), functional analysis, homological algebra, representation theory, etc.. All these topics have linear algebra at their heart, or, rather, "is" indeed linear algebra.. add comment My opinion: matrix theory mostly deals with matrix of a paticular kind , or a few relevant ones. But linear algebra cares about the general, underlying structrue. up vote 3 down vote add comment I'm with Jon. Matrices don't always appear as linear transformations. Yes, you can look at them as linear transformations, but there are times when it's better not to and study them for their own right. Jon already gave one example. Another example is the theory of positive (semi)definite matrices. They appear naturally as covariance matrices of random vectors. The up vote 2 notions like schur complements appear naturally in a course in matrix theory, but probably not in linear algebra. down vote Covariance matrices are essentially inner products, aren't they? That's just thinking of matrices as tensors of type (0, 2) instead of as tensors of type (1, 1). I think the theory of 2 linear algebra is really good at clarifying the distinction between this type of matrix and the "usual" type of matrix; for example it gets to the heart of when similarity is relevant vs. when conjugation is relevant. So I don't think this is a good example. – Qiaochu Yuan Apr 4 '10 at 18:20 add comment Not the answer you're looking for? Browse other questions tagged linear-algebra or ask your own question.
{"url":"https://mathoverflow.net/questions/11669/what-is-the-difference-between-matrix-theory-and-linear-algebra?sort=votes","timestamp":"2014-04-17T01:21:42Z","content_type":null,"content_length":"99780","record_id":"<urn:uuid:90252665-0ed3-4904-8890-467c9bcd3575>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: two-tailed tests Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: two-tailed tests From "Michael N. Mitchell" <Michael.Norman.Mitchell@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: two-tailed tests Date Fri, 09 Jul 2010 11:21:53 -0700 Dear All I have found this to be a very fascinating discussion. I feel that it reveals much about our science and our training as scientists. Issues have been raised that go beyond one and two tailed tests, issues that might be categorized as science, research methods, or even philosophy of science (with David's recent post regarding Popper). I would like to comment, but do so specifically in a very personal way in terms of my experience trained as an experimental psychologist based on methods a philosophy of science that drew largely from books and publications of the 1960s and 1970s. It feels to me that there is a schism between the underlying philosophy of science and statistical methods taught and used. During my training as an experimental psychologist, the research model strongly encouraged "a priori" null and alternative hypotheses that specifically specified the pattern of expected results. The tests usually involved categorical (factor) variables with more than two levels, and we were taught to construct planned comparisons to show the expected direction of results that would be consistent with our hypothesis (and theory) of interest. Likewise, if interactions were present, we were to plot the planned pattern of interaction and statistically test for that exact pattern. I remember one lecture where the professor suggested that it was common practice (and should continue to be common practice) to mail oneself a sealed letter which contained the hypotheses and predicted pattern of results and this would show that the predictions pre-dated the data collection. These practices, I believe, were firmly rooted in the underlying philosophy of science. This way of doing things seems so quaint these days. It has been years since I have seen research that uses this kind of scientific model. In practice, I see much more of the usage of shotgun statistical tests with multiple predictors, multiple outcomes, multiple subgroups, with a very blurred distinction between hypotheses, planned results, and p values less than 0.05. With the richness of modern datasets in terms of observations, predictors and outcomes, and the easy access of statistical computations, it seems like a natural progression to try to extract as much information as possible from a modern dataset using modern techniques. While these practices are understandable and practical, are they still consistent with the scientific foundations from which statistical hypothesis testing derived? Are "p values" from such statistical analyses really "hypothesis tests"? Statistical output can contain dozens or hundreds of "p values"... do researchers really have dozens or hundreds of clearly articulated null hypotheses? (And, let's ignore the Type I error rate issue for now.) Or, is it that the hypothesis was not really conceived until a result less than 0.05 was discovered, at which point the researchers self-deceiving intelligent mind invents an ex-post facto hypothesis that "predicts" the result. Modern statistical tools no longer seem synchronized with the underlying science of hypothesis testing (as I was taught as an experimental psychologist). I feel that the usage of statistical tools used to be tightly integrated into a foundation of a scientific model and a philosophy of science. Over time, I feel the usage of these tools has drifted away from this original foundation, and I have not seen a new foundation that has replaced it. Instead, it feels that statistics (as practiced) is data mining cloaked in the legitimacy of scientific hypothesis testing. A traditional 1960s experimental psychologists style of hypothesis testing is not suited for today, and the data mining style statistical analysis does not seem suited to the foundations of hypothesis testing as taught in the 1960s. Instead of practicing statistics as a form of data mining and pretending that we are testing hypotheses in a planned fashion, perhaps instead we need a scientific model that is still philosophically and scientifically justified and that supports the ability to data mine. Then, researchers could candidly describe what they do in the context of good scientific practices. Thanks for your patience if you have gotten this far. Please remember that although I make some general statements here, they are all rooted and reflect my personal experiences. Best regards, Michael N. Mitchell Data Management Using Stata - http://www.stata.com/bookstore/dmus.html A Visual Guide to Stata Graphics - http://www.stata.com/bookstore/vgsg.html Stata tidbit of the week - http://www.MichaelNormanMitchell.com On 2010-07-09 8.52 AM, David Bell wrote: Statistical tests perform many functions. When one-tailed is theoretically/philosophically justified: In a Popperian theory-testing world, any result other than positive significance means the theory is disconfirmed. So in this case, a one-tailed test is exactly appropriate. (Popper, Karl R. 1965. Conjectures and refutations: The growth of scientific knowledge. New York: Harper& Row.) When one-tailed is NOT theoretically/philosophically justified: A two-tailed test implies meaning in both tails. In applied studies that test outcomes, a positive result is taken to mean the drug works, non-significance is taken to mean that the drug does not work (actually, that there is not evidence that the drug works), but a significant negative result means that the drug is actively harmful. In this case, a one-tailed test would have obscured the harmfulness of the drug by lumping harm with non-effectiveness.. Remember that the purpose of a statistical test to the scientific community as a sociological community is to protect the community from the enthusiasm of researchers. If a researcher is going to tell us his/her theory is true, we want the chance of being fooled (Type I error, false positive) to be strictly limited. We as readers and consumers don’t worry about disappointment of a researcher with a good theory but bad data (Type II error, false negative). As a practical matter, many journals insist on two-tailed tests for several reasons. One reason is because two-tailed tests are conservative (even though the stated probability level is inaccurate, it means that the researcher has only half the chance to fool us). Another is to discourage researchers from “cherry-picking” close calls – e.g., reporting one-tailed .05 significance instead of two-tailed .10 (“marginal”) significance. Another is that, in the endeavor of science, one result is relatively unimportant, so conservative is better for the overall process of science. David C. Bell Professor of Sociology Indiana University Purdue University Indianapolis (IUPUI) (317) 278-1336 On Jul 8, 2010, at 10:10 PM, Eric Uslaner wrote: ... If you found an extreme result in the wrong direction, you would better be advised to check your data for errors or your model for very high levels of multicollinearity. If someone found that strong Republican party identifiers are much more likely than strong Democrats to vote for the Democratic candidate, no one would give that finding any credibility no matter what a two-tailed test showed. The same would hold for a model in economics that showed a strong negative relationship between investment in education and economic growth. Of course, those who put such faith in two-tailed tests would say: You never know. Well, you do. That's the role of Now I don't know what goes on substantively (or methodologically) in the biological sciences, e.g. Seems as if many people are very much concerned with the null hypothesis. In the social sciences, we learn that the null hypothesis is generally uninteresting. When it is interesing, as in my own work on democracy and corruption, it is to debunk the argument that democracy leads to less corruption (with the notion that democracy might lead to more corruption seen as not worth entertaining seriously). So again, one would use a one-tailed test and expect that there would be no positive relation between democratization and lack of corruption. Of course, Nick is right that graphics often tell a much better story. But that is not the issue here. Two-tailed tests are largely an admission that you are going fishing. They are the statistical equivalent of stepwise regression Ric Uslaner * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-07/msg00486.html","timestamp":"2014-04-16T07:20:25Z","content_type":null,"content_length":"18250","record_id":"<urn:uuid:9ba68062-2a51-4a87-807b-855bf6ff309c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Catholic Encyclopedia (1913)/Simon Stevin From Wikisource ←Joseph Stevenson Catholic Encyclopedia (1913), Volume 14 Adalbert Stifter→ Simon Stevin Born at Bruges in 1548; died at Leyden in 1620. He was for some years book-keeper in a business house at Antwerp; later he secured employment in the administration of the Franc of Bruges. After visiting Prussia, Poland, Sweden, and Norway he took up his residence in the Netherlands, where he spent the rest of his life. The Stadtholder Maurice of Nassau esteemed him so highly that he studied under his direction mathematics, science, and engineering, rewarding him for his services by making him director of finances, inspector of dykes of the Low Countries, and quartermaster-general of the Government. His was an upright, modest, and inventive mind. His influence on the development of science was great and lasting. He began with the publication in 1582 of his "Tafelen van Interest" (Tables of Interest; Plantin, Antwerp), thus distributing through the business world an easy and valuable method of calculation, still carefully preserved by the wealthy merchants of the Low Countries. Then came successively: in 1583 the "Problematum geometricorum libri V", a very original work, somewhat imperfectly reproduced in subsequent editions of the author's works, the "Dialektike ofte bewysconst", a treatise on logic, re-edited at Rotterdam in 1621, but not found in the large editions of the author's works, and "De Thiende", a small pamphlet of thirty-six pages containing the oldest systematic and complete explanation of decimal calculus, both published by Plantin at Leyden in 1585. "De Thiende" has often caused its author to be regarded as the inventor of calculus; he was indisputably the first to bring to light its great advantages. Stevin translated the pamphlet into French and re-edited it the same year under the title "La Disme", with his Arithmetic published at Antwerp by Plantin. In 1586 appeared the most famous of his works, "De Beghinselen der Weeghconst, De Weeghdaet, De Beghinselen der Waterwichts" (Antwerp). This was the first edition of his mechanics, in which he sets forth for the first time several theorems since then definitely embodied in science; the hydrostatic paradox; equilibrium of bodies on inclined planes; the parallelogram of forces, formulated, it is true, under a different enunciation by constructing a triangle by means of two components and their results. Stevin's "Vita politica, Het Burgherlick leven", a treatise on the duties of the citizen which is no longer printed in large editions of his works, was published by Raphelengen at Leyden in 1590. It gave rise during the nineteenth century to a long and violent controversy. From some pages of this volume the inference has been drawn that when entering the service of Maurice of Nassau Stevin apostatized from the Catholic Church, but this opinion is hardly tenable and has now been abandoned. In 1594 appeared the "Appendice Algebraïque", an eight-page pamphlet, the rarest of his works (there is a copy at the Catholic University of Louvain) and one of the most remarkable; in it he gave for the first time his famous solution for equations of the third degree by means of successive approximations. In the same year was published "De Sterctenbouwing", a treatise on fortifications, and in 1599, "Havenvinding", a treatise on navigation, instructing mariners how to find ports with the aid of the compass. From 1605 to 1608 Stevin re-edited his chief works in two folio volumes entitled "Wisconstige gedachtenissen" (Bouwenz, Leyden). A Latin translation of them, under the title "Hypomnemata mathematica", was confided to Willebrord Snellius; and an incomplete French translation, entitled "Mémoires mathématiques", was the work of Jean Tuning, secretary of the Stadtholder Maurice. These two versions were published at Leyden by Jean Paedts. The "Wisconstige gedachtenissen" and the "Hypomnemata mathematica" contain several treatises then published for the first time, notably the trigonometry, geography, cosmography, perspective, book-keeping, etc. In 1617 Waesberghe published at Rotterdam Stevin's "Legermeting" and "Nieuwe maniere van Stercktebouw door spilsluysen", of which French translations were published by the same editor in the following year under the titles "Castramétation" and "Nouvelle manière de fortifications par écluses". These were the last publications made during his lifetime, but he left important MSS., the chief of which were published in 1649 by his son Henri, who composed the "Burghelicke Stoffen" (political questions); the others were lost, but later recovered. Bierens de Haan edited two of them at Amsterdam in 1884: "Spiegeling der singconst" (mirror of the art of singing) and "Van de molens" (on mills). After Stevin's death Albert Girard translated several of his works and annotated others, thus forming a large folio volume published at Leyden in 1634 by the Elzevirs as "OEuvres mathématiques de Simon Stevin de Bruges". Abroad Stevin is often known only through this translation, but it does not convey an adequate idea of his works and should be supplemented by several of the original editions mentioned above. Unfortunately these have become bibliographical rarities almost unobtainable outside of Belgium and the Netherlands. M. Ferd. van der Haeghen has made them the subject of a masterly study in his "Bibliotheca Belgica" (1st series, XXIII, Ghent and The Hague, 1880-90), in which he notes most of the copies preserved in the libraries of both countries. GOETHALS, Notice hist. sur la vie et les travaux de Simon Stevin de Bruges (Brussels, 1846); STEICHEN, Mém. sur la vie et les travaux de Simon Stevin (Brussels, 1846); CANTOR, Vorlesungen über Gesch, des Mathematik, II (2nd ed., Leipzig, 1900).
{"url":"http://en.wikisource.org/wiki/Catholic_Encyclopedia_(1913)/Simon_Stevin","timestamp":"2014-04-20T09:11:00Z","content_type":null,"content_length":"28435","record_id":"<urn:uuid:e9aeb542-90f3-4e02-8fb4-a74ee32db801>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Studying balanced allocations with differential equations Results 1 - 10 of 16 - IN PROCEEDINGS OF IEEE INFOCOM , 2000 "... High performance Internet routers require a mechanism for very efficient IP address look-ups. Some techniques used to this end, such as binary search on levels, need to construct quickly a good hash table for the appropriate IP prefixes. In this paper we describe an approach for obtaining good hash ..." Cited by 68 (11 self) Add to MetaCart High performance Internet routers require a mechanism for very efficient IP address look-ups. Some techniques used to this end, such as binary search on levels, need to construct quickly a good hash table for the appropriate IP prefixes. In this paper we describe an approach for obtaining good hash tables based on using multiple hashes of each input key (which is an IP address). The methods we describe are fast, simple, scalable, parallelizable, and flexible. In particular, in instances where the goal is to have one hash bucket fit into a cache line, using multiple hashes proves extremely suitable. We provide a general analysis of this hashing technique and specifically discuss its application to binary search on levels. , 2006 "... We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selec ..." Cited by 58 (7 self) Add to MetaCart We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for the multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound of the maximum load of bins of m/n + O ( √ m ln n/n) with high probability. We show, however, that the multiple-choice processes are fundamentally different from the single-choice variant in that they have “short memory. ” The great consequence of this property is that the deviation of the multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n ⌋ or ⌈m/n ⌉ balls) does not increase with the number of balls as in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes, , 2005 "... We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: ..." Cited by 44 (1 self) Add to MetaCart We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs. 1 - In Proc. of the RANDOM'98 , 1998 "... Microsystems. The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or ..." Cited by 19 (1 self) Add to MetaCart Microsystems. The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or - UNIVERSITY OF ILLINOIS , 1999 "... We investigate variations of a novel, recently proposed load balancing scheme based on small amounts of choice. The static setting is modeled as a balls-and-bins process. The balls are sequentially placed into bins, with each ball selecting d bins randomly and going to the bin with the fewest balls. ..." Cited by 17 (7 self) Add to MetaCart We investigate variations of a novel, recently proposed load balancing scheme based on small amounts of choice. The static setting is modeled as a balls-and-bins process. The balls are sequentially placed into bins, with each ball selecting d bins randomly and going to the bin with the fewest balls. A similar dynamic setting is modeled as a scenario where tasks arrive as a Poisson process at a bank of FIFO servers and queue at one for service. Tasks probe a small random sample of servers in the bank and queue at the server with the fewest tasks. Recently , 2003 "... We consider cuckoo hashing as proposed by Pagh and Rodler in 2001. We show that the expected construction time of the hash table is O(n) as long as the two open addressing tables are each of size at least (1 #)n,where#>0andn is the number of data points. Slightly improved bounds are obtained for ..." Cited by 17 (1 self) Add to MetaCart We consider cuckoo hashing as proposed by Pagh and Rodler in 2001. We show that the expected construction time of the hash table is O(n) as long as the two open addressing tables are each of size at least (1 #)n,where#>0andn is the number of data points. Slightly improved bounds are obtained for various probabilities and constraints. The analysis rests on simple properties of branching - IN PROCEEDINGS OF THE 10TH ANNUAL ACM SYMPOSIUM ON PARALLEL ALGORITHMS AND ARCHITECTURES, PUERTO VALLARTA, MEXICO, 28 JUNE–2 , 1998 "... Many distributed protocols arising in applications in online load balancing and dynamic resource allocation can be modeled by dynamic allocation processes related to the “balls into bin” problems. Traditionally the main focus of the research on dynamic allocation processes is on verifying whether a ..." Cited by 13 (3 self) Add to MetaCart Many distributed protocols arising in applications in online load balancing and dynamic resource allocation can be modeled by dynamic allocation processes related to the “balls into bin” problems. Traditionally the main focus of the research on dynamic allocation processes is on verifying whether a given process is stable, and if so, on analyzing its behavior in the limit (i.e., after sufficiently many steps). Once we know that the process is stable and we know its behavior in the limit, it is natural to analyze its recovery time, which is the time needed by the process to recover from any arbitrarily bad situation and to arrive very closely to a stable (i.e., a typical) state. This investigation is important to provide assurance that even if at some stage the process has reached a highly undesirable state, we can predict with high confidence its behavior after the estimated recovery time. In this paper we present a genera / framework to study the recovery time of discrete-time dynamic allocation processes. We model allocation processes by suitably chosen ergodic Markov chains. For a given Markov chain we apply path coupling arguments to bound its convergence rates to the stationary distribution, which directly yields the estimation of the recovery time of the corresponding allocation process. Our coupling approach provides in a relatively simple way an accurate prediction of the recovery time. In particular, we show that our method can be applied to significantly improve estimations of the recovery time for various allocation processes related to allocations of balls into bins, and for the edge orientation problem studied before by Ajtai et al. , 2004 "... Balancing peer-to-peer graphs, including zone-size distributions, has recently become an important topic of peer-topeer (P2P) research [1], [2], [6], [19], [31], [36]. To bring analytical understanding into the various peer-join mechanisms, we study how zone-balancing decisions made during the initi ..." Cited by 11 (4 self) Add to MetaCart Balancing peer-to-peer graphs, including zone-size distributions, has recently become an important topic of peer-topeer (P2P) research [1], [2], [6], [19], [31], [36]. To bring analytical understanding into the various peer-join mechanisms, we study how zone-balancing decisions made during the initial sampling of the peer space a#ect the resulting zone sizes and derive several asymptotic results for the maximum and minimum zone sizes that hold with high probability. , 2005 "... Suppose that there are n bins, and balls arrive in a Poisson process at rate λn, whereλ>0 is a constant. Upon arrival, each ball chooses a fixed number d of random bins, and is placed into one with least load. Balls have independent exponential lifetimes with unit mean. We show that the system conve ..." Cited by 11 (2 self) Add to MetaCart Suppose that there are n bins, and balls arrive in a Poisson process at rate λn, whereλ>0 is a constant. Upon arrival, each ball chooses a fixed number d of random bins, and is placed into one with least load. Balls have independent exponential lifetimes with unit mean. We show that the system converges rapidly to its equilibrium distribution; and when d ≥ 2, there is an integer-valued function md(n) = ln ln n/ln d + O(1) such that, in the equilibrium distribution, the maximum load of a bin is concentrated on the two values md(n) and md(n) − 1, with probability tending to 1, as n →∞. We show also that the maximum load usually does not vary by more than a constant amount from ln ln n/ln d, even over quite long periods of time. 1. Introduction. Balls-and-bins processes have been useful for modeling and analyzing a wide range of problems, in discrete mathematics, computer science and communication theory, and, in particular, for problems which involve load sharing, see, for example, [4, 5, 12, 15–17, 22]. Here is one central result, from [3]. Let d be a fixed integer at least 2. Suppose that there are n bins, and n balls arrive "... In Grid applications the heterogeneity and potential failures of the computing infrastructure poses significant challenges to efficient scheduling. Performance models have been shown to be useful in providing predictions on which schedules can be based [1, 2] and most such techniques can also take a ..." Cited by 9 (6 self) Add to MetaCart In Grid applications the heterogeneity and potential failures of the computing infrastructure poses significant challenges to efficient scheduling. Performance models have been shown to be useful in providing predictions on which schedules can be based [1, 2] and most such techniques can also take account of failures and degraded service. However, when several alternative schedules are to be compared it is vital that the analysis of the models does not become so costly as to outweigh the potential gain of choosing the best schedule. Moreover, it is vital that the modelling approach can scale to match the size and complexity of realistic applications. In this
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=561143","timestamp":"2014-04-16T15:07:12Z","content_type":null,"content_length":"38190","record_id":"<urn:uuid:7b35e375-94b9-4e9b-912c-f483da929536>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Representations of Lie Algebras: An Introduction Through \(\mathbf{gl}_n\) For many years now, I have believed that the world needs more elementary (say, undergraduate-level) books on the purely algebraic theory of Lie algebras. The last decade or so has seen the publication of several books that attempt to make the theory of Lie groups accessible to undergraduates by focusing on matrix groups (thereby eliminating the need to talk about differentiable manifolds, while still retaining a lot of the flavor of the theory). Books along these lines include the excellent Naive Lie Theory by Stillwell and (at a somewhat higher, early graduate, level) Hall’s Lie Groups, Lie Algebras and Representations; another very good example is Tapp’s Matrix Groups for Undergraduates. There is also Pollatsek’s Lie Groups: A Problem-Oriented Introduction via Matrix Groups, but, as previously noted here, this is really a problem book rather than a textbook. Books like these do define Lie algebras and talk about them to some degree, but in all of them Lie algebras play a supporting role; Lie groups (of matrices) are the main objects of study. On the other hand, reasonably elementary books that are devoted to the algebraic theory of Lie algebras as entities in their own right are somewhat harder to find. I first learned the subject in graduate school in a reading course at Rutgers University from Humphreys’ classic Introduction to Lie Algebras and Representation Theory, with occasional excursions to Jacobson’s Lie Algebras (then an Interscience book, now a Dover paperback) to learn the characteristic p theory of modular Lie algebras. The Humphreys text (one of the earlier entries in the Graduate Texts in Mathematics series, and still in print by Springer-Verlag after about 40 years) was considerably more accessible than the one by Jacobson, which I view as being completely beyond the reach of the vast majority of undergraduates. The preface to Humphreys says that the first four chapters might well be accessible to a bright undergraduate, but I think this is perhaps over-optimistic: although it is an excellent book, it is quite densely written, and the first seventy or eighty pages might well consume an entire semester. Then, about six years ago, Erdmann and Wildon’s Introduction to Lie Algebras was written, and (Humphreys’ preface notwithstanding) that book was, I thought, the only text that was genuinely within the reach of advanced undergraduates. Until now, that is. Henderson’s book, though substantively quite different than the one by Erdmann and Wildon, is also one that could, with good likelihood of success, be used to teach Lie algebras to serious undergraduates, and it manages this by essentially using the same trick that the books of Stillwell/Hall/Tapp use in the theory of Lie groups: it focuses on matrices. By concentrating on one particular Lie algebra, namely the set gl[n ]of all n x n complex matrices under the commutator bracket operation [A, B] = AB – BA, Henderson is able to use calculations that are specific to matrices and therefore avoid having to engage in more general arguments. The statement that this book is accessible to undergraduates should come with a caveat attached. Its origin is in a set of lectures delivered by the author to students in Australia, where mathematics undergraduates, as in Great Britain, are more advanced than here in the United States. Henderson’s students, for example, have already had some prior exposure to group representation theory. And, of course, anybody planning to study Lie algebras, even in the concrete context of matrix algebras, is going to want to have a good grounding in linear algebra: Irving Kaplansky once wrote, in a survey article on Lie algebras in Saaty’s Lectures in Modern Mathematics, that some Lie algebraic arguments are “linear algebra raised to the nth power” and that an “apprentice algebraist”, working through the theory of Lie algebras, will “find everything he knows [about linear algebra] used in a dazzling array of spectacular arguments”. Thus, the blurb on the back cover of this book stating that a reader should be familiar with linear algebra up to and including the Jordan Canonical Form is certainly not surprising, but does perhaps have the effect of narrowing the field somewhat. (Perhaps anticipating this, the Erdmann/Wildon text contains an appendix of about ten pages summarizing the more sophisticated aspects of linear algebra that are necessary; this text does not do this, but does spend some time in the first chapter discussing multilinear algebra.) In addition, any person undertaking a study of Lie algebras will want to be fairly familiar with the rudiments of abstract algebra. Nevertheless, despite these inevitable prerequisites, this book is well written enough to still be accessible to well-prepared and well-motivated senior-level sophisticated undergraduates, perhaps in some kind of senior seminar. In terms of overall accessibility, I would put this book between Erdmann/Wildon and Humphreys, and closer to the former than the latter. Another interesting and unusual feature of the text is that it rethinks the traditional approach to teaching the elementary theory of Lie algebras. Most courses along these lines, I suspect, have as their goal the classification of simple (and hence semisimple, since they are direct sums of simple) Lie algebras over an algebraically closed field of characteristic 0. The books by Humphreys and Erdmann/Wildon reflect this approach, although the former also contains a lot of subsequent material on representation theory as well. The problem with this agenda, though, is that if one starts from scratch with the definition of a Lie algebra and a summary of the basic properties and results, it is almost impossible to get through the classification theory without resorting to extensive hand-waving at the end of the semester. Henderson has set himself the goal of devising a semester course at the advanced undergraduate level in which the student actually sees, without omitted proofs, one “peak” (his term) of the theory. He has chosen as his desired “peak” the theory of representations of the Lie algebra gl[n], and has attempted to reach that peak in as direct a manner as Towards this end, Henderson has made a conscious decision to omit from the text a number of topics that are often covered in introductory books on this subject. Topics such as Cartan subalgebras, the root space decomposition of a simple Lie algebra and Dynkin diagrams, for example, which are essential tools in the classification theory of semisimple Lie algebras, are not developed here at all. Perhaps more surprisingly, a number of famous theorems (including, for example, Cartan’s criterion for semi-simplicity in terms of the Killing form, Engel’s theorem on nilpotence, and Lie’s theorem on solvable subalgebras of gl[n]) are also not proved. These omissions are understandable given the author’s goals, but at the same time, they may make the book less attractive as a text for an introductory graduate course. The book begins with an introductory and motivational chapter that, by means of explicit calculations and with a little multilinear algebra (developed as needed), shows that one problem (determining homomorphisms from the matrix group GL[n] to the matrix group GL[m]) is essentially equivalent to another (that of studying the mappings from the set of all n x n matrices to the set of all m x m matrices that preserve the commutator; or, in algebraic language, studying the homomorphisms from the Lie algebra gl[n] to the Lie algebra gl[m]). This material is not really used in the sequel (which in fact mostly avoids any reference to Lie groups), but does motivate the central question of this text, namely the classification of the integral representations of gl[n] . A chapter like this seems like an excellent way to start the text, given the fact that the definition of a Lie algebra is not the most natural thing in the world. Neither Humphreys’ text, nor the one by Erdmann and Wildon, provide this much detail about the connections between Lie algebras and Lie groups. Lie algebras themselves are formally defined in chapter 2, which also provides some basic examples and introductory definitions. (The book has a standing convention, mentioned prior to chapter 1, that all Lie algebras are over the field of complex numbers. Of course, a potential problem with such standing conventions — even when, as here, they are prominently announced at the beginning — is that “drop in” readers who are not reading the book from the beginning may be confused or misled.) Chapter 3 continues with more definitions (ideal, subalgebra, nilpotent, solvable, etc.) and examples; one nice feature here is a simple, calculational proof of the simplicity of the Lie algebra sl[n] of n x n matrices with trace 0. (Contrast this, for example, with the approach taken by Humphreys: he proves the simplicity of sl[2 ]directly but does not establish the simplicity of sl[n] (and the other classical Lie algebras) until much later in the book, as a consequence of the root space decomposition. Erdmann and Wildon do mention the simplicity of sl[n] fairly early on, but leave the proof as an exercise.) Chapter 4 of the text introduces the idea of a representation of a Lie algebra L, done via the notion of an L-module, which is a vector space on which L operates in such a way a way as to induce a Lie algebra homomorphism from L into gl[n]. The next three chapters then elaborate on this idea, with the ultimate goal of classifying integral representations of gl[n] covered in chapter 7 (this is the “peak” referred to earlier); along the way we also see the classification of representations of sl[2], a result which plays a significant role in the classification theory of (complex) semisimple Lie algebras. The final chapter of the text, entitled “Guide to Further Reading”, is a relatively short exposition of some of the ideas of Lie algebra theory that lie just beyond the boundaries of the text, including the classification theory of semisimple Lie algebras and the representation theory of general Lie algebras. There are no proofs here, just an explication of the main ideas with some examples and bibliographic references. Each chapter, save for the first and last, ends with a collection of eight or nine exercises, solutions to all of which are in the back of the book. An instructor using the book as a text may therefore need to come up with problems on his or her own to supplement the book’s exercises. In summary, this is a well-written book that offers an interesting and novel perspective on teaching an introductory Lie algebras course. Of course, using this book as a text would require agreement with the author’s fairly unconventional approach to the material, but anybody who agrees with this approach, or is open to the possibility of being persuaded to agree with it, should certainly give this book a serious look. In fact, I think that anyone who is interested in Lie algebras should give this book a serious look, if for no other reason than the fact that any nicely written account of beautiful mathematics deserves to be noticed. Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
{"url":"http://www.maa.org/publications/maa-reviews/representations-of-lie-algebras-an-introduction-through-mathbfgln","timestamp":"2014-04-18T14:39:25Z","content_type":null,"content_length":"107964","record_id":"<urn:uuid:86b2ff9b-a4fa-4688-98c8-3dd0891a3fc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
[f(x)'^n continuous at a point (EXAM IN 4 hrs.) Sorry, the comment about the composition of functions threw me off. I should have known that the solvable interpretation is the right one. Anyways, you need to estimate |[f(x)]^n - [f(a)]^n| knowing that you can bind |f(x) - f(a)| by any positive number. But a^n - b^n has a very nice and symmetric formal factorization, namely [tex]a^n - b^n = (a-b)(a^{n-1} + a^{n-2}b + ... + b^{n-2}a + b^{n-1})[/tex] Thus you can replace a with f(x) and b with f(a) to get |[f(x)]^n - [f(a)]^n| equal to |f(x) - f(a)| times a bunch of terms within an absolute value (this is key) as a result of the factorization above. But now can you see what to bind |f(x) - f(a)| by to ensure that |[f(x)]^n - [f(a)]^n| is less than say, epsilon?
{"url":"http://www.physicsforums.com/showthread.php?t=363301","timestamp":"2014-04-20T14:13:26Z","content_type":null,"content_length":"40626","record_id":"<urn:uuid:645e0904-5b22-476d-b71f-b6dff2f54838>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 132 to the MATH 132: Calculus II (On-Line) Introductory Web Page (Summer I, 2011; Tues 5/17 - Fri 6/24) This web page is a brief introduction to the on-line version of Math 132 (Calculus II) that is offered during the first summer session of 2011. Use this page to find out about First, here is the golden rule of on-line courses: This course is taught completely on-line. Since all of our interaction is electronic, there is an inherent requirement that you are familiar with the technology we need to use - your PC, the internet, your scanner, and image handling via Word or PDF files. If you are unwilling or unable to learn to use this technology in the required manner, you should not enroll in this course. The couse begins on Tuesday, May 17, 2011. Once the first day of the course comes around, we will start off at full speed, and not having course material by then will not be acceptable reasons for late work. In fact, you can expect your first assignment to be due that very first day; it will not be a "math" assignment, but instead an assignment that will demonstrate that you are completely set up and ready with your technology. This course will be run through the on-line courseware system Blackboard . All students taking this course have been or will be assigned a Valpo e-mail account firstname.lastname@valpo.edu and correspoding username and password for logging on. That username and password are also used to log on to Blackboard and the web based e-mail server Groupwise . This class is intended for those who excelled in Calc I and are ready for the challenge of completing a heavy Calc II courseload at an accelerated pace. We cover most of the topics that are included in a "regular" Calc II course, so if you're expecting "Calc II Lite", please move along, there's nothing for you to see here. You should NOT enroll in this course if: • You did not do well in Calc I • You previously failed Calc II and are trying to take it again • You do not have a significant amount of time to devote to this course • You are enrolled in another course during Summer I • You are not comfortable in the on-line and electronic environment You have the potential to do well in this course if: • You are very self directed, but will not hesitate to ask questions when confused about something • You know you have an aptitude for mathematics, or at least the part you were exposed to in Calc I • You are honestly interested in the subject matter • Your written work is neat and well presented Obviously you already have a web browser if you are reading this page. In order to do the work in this course, you will also need the following resources - please obtain them as soon as possible if you do not have them already. • A textbook. Which textbook? I don't care. If you took Calc I here at Valpo, you likely have a recent edition of "Calculus, Early Transcendentals" by James Stewart. If you took Calc I someplace else, you may have a different book. That's OK. The on-line material in this course is designed to be self-contained and independent of any one textbook. That's not to say that you don't need a textbook; when (not if) you decide you need extra examples or more practice problems to try on your own, you need to have a textbook handy. But calculus textbooks are very much interchangeable, so as long as you can use the index or table of contents to find examples and exercises for any given topic, you should be fine with whatever textbook you have. • A supplemental source of problems and solutions. I highly recommend one or both of the following, they are cheap and chock full of solved problems and exercises for practice. If you have these, you probably don't need a separate textbook: • A copy of Adobe Acrobat Reader . This is a free program that can be downloaded from here . • You must have access to the mathematical software system Maple 14 (or 15). There are several computing assignments in this class that take the place of the weekly computer labs in the regular course; many of these assignments are written in, and require the use of, Maple. If you do not have access to Maple 14/15 in a computer lab near you, then you'll need to purchase your own copy. A student edition can be purchased with a discount using a promo code we have for this course; you can get this code once you are officially enrolled. (This software is a great investment even beyond this class, if you plan on taking more math courses at VU.) • A couple of assignments will require Microsoft Excel. • A SCANNER . Written homeworks and exams will be submitted electronically by the means discussed below . If you do not own a scanner, you need to arrange to have access to one. • MICROSOFT WORD or ADOBE ACROBAT. NOTE! *.doc and *.pdf are the two acceptable platforms for creation of files you will submit. This list does not include Microsoft Works, Word Perfect, or any other alternative software. This course wll be offered at the same pace as a "normal" 6-week summer session course. That means the course material will come at you almost three times as fast as in a regular 16-week course during the fall or spring semester. In a regular semester, Calc II meets for 3 lectures plus one computer lab per week. Consequently, you should plan on putting a lot of time into this course. Six-week summer sessions are always intense, and this one may be be even more so because you will be completely self-directed and self-motivated to keep up the pace. I would suggest that you consider how many hours per week you put into your last math course (class time and out of class time combined), multiply that by at least three, then add another 3-4 hours due to the self-directed nature of this course - this will give you an idea of how many hours per week you can expect to devote to this course. Each week we cover several topics. For each topic, there will be an on-line quiz based on problem sets, a final written assignment, and (sometimes) a computing assignment. The class week starts on Monday. The on-line quizzes (3-4 per week) are generally due each day from Wednesday through Friday or Saturday; this forces you to begin work early in the week and not save it all for the weekend. Written (scanned) work and computing exercises for all topics in the week are due Sunday night. Your exposure to each subject / section will have several ''phases" associated with it. • Topic Notes. You will be provided with written notes on each topic. The notes contain discussion, definitions, theorems, and solved examples. Each solved example will be accompanied by a similar problem for you to try. These first problems, labelled as "You Try It" in the notes, have solutions already available. As you read the notes, you supplement your understanding by finding and reading appropriate sections in the textbook or supplemental books you have. You should also try as many other exercises from your book as you need to feel comfortable with the material. • Practice Problems. At the end of each section's notes, there will be another list of problems to do, which will reinforce ideas you learned by reading the notes and working the first set of problems. The solutions to this second set of practice problems are not available right away. You will be asked questions relating to these practice problems and general concepts in a quiz; answers to quiz questions are entered in Blackboard. The answer form will have a due date and time; once that due date/time is expired, the quiz will disappear and be replaced by the solutions to those practice problems - so that you can check your own work and fix any errors that were indicated by your answers on the form. In any given week, due dates for practice problems will usually start on Wednesday. • Formal (Written) Homework Problems. Homework problems are problems for which you submit well-written solutions to me. These written homework problems from all sections covered in a week are due by 9am on Monday of the following week - see below for discussion of how written homework is submitted. Once the due date / time for homework has expired, the solutions to the homework problems will appear on-line - thus, no late homework is ever accepted. • Exams. There will be two exams, at the end of weeks 3 and 6. Each exam is worth 20% of your final grade (40% total on exams). Exams will be made available on Blackboard from 6pm Friday through 9pm Sunday, and must be submitted by Monday morning. Exams will be open book and note, and you can use as much time as you want, although you should be concerned if you are spending more than two hours on an exam. (In weeks 3 and 6, written homework will be due earlier, on Sunday instead of Monday). • Submission of Work : Written homework and exams will be due at the end of each weekend. You must submit your work by the due date and time or it will not be graded (and so will receive a score of 0). Submission is accomplished in the following manner: □ You scan your written work (2 problems per topic) into JPG format (scan as black/white text to keep image sizes SMALL; do NOT use TIFFs or BMPs). □ You will collect the scanned images of your work for a single topic into a single Microsoft Word (or PDF) document, using the "Insert/Picture/From File" menu option. (You may have more than one page in this single file.) □ You will submit the SINGLE Word (or PDF) document contaiting all of the topic's written work through Blackboard; a link for submissions is included at the end of each topic's folder within Assignments. That link will disappear once the due date is expired, so be sure to submit your work on time! NOTE: Electronically submitted work not conforming to these guidelines will not be graded.You must learn to scan your files so that they are readable but with small file size. This means avoiding photo quality or color scans. You should be able to scan a clear black and white image of your work with no more than 1MB per page. Files which are too large to download in a reasonable amount of time don't get graded. Practice with the settings on your scanner so that you know you can do this, and check your file sizes before you submit your work! If you don't do this, and try to submit a file that's 35MB in size, it won't get graded. When I have 20 students' work to download and view, I won't wait 5 minutes for each download. • Communication . There will be a Discussion Board in Blackboard where you can post questions about content, problems, etc that can be answered by myself or anyone else in the class. If you are not in a terrible hurry to receive an answer, this is the ideal place to post your question, because other people may be interested in the answer to your question. If you have a more urgent question, you can e-mail me directly. I will try to check the Discussion Board daily and e-mail several times per day. I will also be in my office many days of the week for phone calls, generally in the morning. My phone number is (219) 464-5183. My e-mail address is ken.luther@valpo.edu . All communication from me to you will take place via Blackboard's announcement area or e-mails. All e-mails will go to your firstname.lastname@valpo.edu e-mail address. It is YOUR responsibility to regularly check that account or arrange for forwarding. I would also not recommend that you mail me from spam-heavy domains such as hotmail.com. Your mail could easily be mistakenly flagged as spam and filtered out. Just stick with the VU mail system and be safe. • Other class details (such as grading scale, etc) will be distributed in the class syllabus; you will get access to Blackboard and this syllabus when you are officially enrolled in the class. All instructions on this page (and more!) are repeated in separate files that you can access via Blackboard once enrolled. Good luck! Let me know if you have any questions.
{"url":"http://faculty.valpo.edu/kluther1/CLASSINFO/m132ol-intro.html","timestamp":"2014-04-16T21:51:42Z","content_type":null,"content_length":"17389","record_id":"<urn:uuid:f695ae2f-b6f9-453b-9223-7e144e65540f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Physics Problems (Force; Newton's Second Law; Horizontal Force; Net Force; Static Friction; Kinetic Friction) 1. 27474 Basic Physics Problems (Force; Newton's Second Law; Horizontal Force; Net Force; Static Friction; Kinetic Friction) I would like for you to show me all of your work/calculations and the correct answer to each problem. 15. A 5.0-kg block at rest on a frictionless surface is acted on by forces F1 = 5.5 N and F2 = 3.5 N. What additional horizontal force will keep the block at rest? 19. The engines of most rockets produce a constant thrust (forward force). However, when a rocket is fired into space, its acceleration increase with time as the engine continues to operate. Is this situation a violation of Newton's second law? Explain. 23. What is the mass of an object that accelerates at 3.0 m/s[squared] under the influence of a 5.0-N net force? 29. What is the weight of an 8.0-kg mass in newtons? How about in pounds? 33. A horizontal force of 12 N acts on an object that rests on a level, frictionless surface on the Earth, where the object has a weight of 98 N. (a) What is the magnitude of the acceleration of the object? (b) What would be the acceleration of the same object in a similar situation on the Moon? 35. When a horizontal force of 300 N is applied to a 75.0-kg box, the box slides on a level floor, opposed by a force of kinetic friction of 120 N. What is the magnitude of the acceleration of the 39. A jet catapult on an aircraft carrier accelerates a 2000-kg plane uniformly from rest to a launch speed of 320 km/h in 2.0 s. What is the magnitude of the net force of the plane? 49. In an Olympic figure-skating event, a 60-kg male skater pushes a 45-kg female skater, causing her to accelerate at a rate of 2.0 m/s[squared]. At what rate will the male skater accelerate? What is the direction of his acceleration? 53. The weight of a 500-kg object is 4900 N. (a) When the object is on a moving elevator, its measured weight could be (1) zero, (2) between zero and 4900 N, (3) more than 4900 N, or (4) all of the preceding. Why? (b) Describe the motion if the object's measured weight is only 4000 N in a moving elevator. 59. A girl pushes a 25-kg lawn mower. If F = 30 N and 0 = 37 degrees, (a) what is the acceleration of the mower, and (b) what is the normal force exerted on the mower by the lawn? Ignore friction. 65. A 3000-kg truck tows a 1500-kg car by a chain. If the net forward force on the truck by the ground is 3200 N, (a) what is the acceleration of the car, and (b) what is the tension in the connecting chain? 83. A 40-kg crate is at rest on a level surface. If the coefficient of static friction between the crate and the surface is 0.69, what horizontal force is required to get the crate moving? 87. A hockey player hits a puck with his stick, giving the puck an initial speed of 5.0 m/s. If the puck slows uniformly and comes to rest in a distance of 20 m, what is the coefficient of kinetic friction between the ice and the puck? 93. While being unloaded from a truck, a 10-kg suitcase is placed on a flat ramp inclined at 37 degrees. When released from rest, the suitcase accelerates down the ramp at 0.15 m/s[squared]. What is the coefficient of kinetic friction between the suitcase and the ramp? The solution is given in an attachment, using step-by-step equations and/or explanations to answer each question.
{"url":"https://brainmass.com/physics/acceleration/27474","timestamp":"2014-04-19T04:24:54Z","content_type":null,"content_length":"31401","record_id":"<urn:uuid:75a02be2-3585-423f-9fed-173eacabd83e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
2009-2010 6th Grade Math Syllabus Jemtegaard Middle School 6th Grade Connected Math Program Syllabus for 2009 – 2010

 The Connected Math Project (CMP), the current 6th grade math curriculum at Jemtegaard Middle School, is built around mathematical problems that help students develop an understanding of important concepts and skills in number sense, geometry, measurement, algebra, probability, and statistics. These skills will be critical for students to be successful when taking the Measure of Student Progress (the new name for state mandated test). Listed are the books and performance expectations for this year: 
 Performance
 #
 CMP
Publication
1
or
2
 Expectations
 Module/Book
 
 Multiplication and Division of Fractions & Decimals 
 Compare and order non-negative 6.1.A
 • Bits
and
Pieces
I
 fractions, decimals, and integers using the number line, lists, and the symbols <, >, or =. Represent multiplication and division of 6.1.B
 • Bits
and
Pieces
II
 non-negative fractions and decimals using area models and the number line, and connect each representation to the related equation. Estimate products and quotients of 6.1.C
 • 
 fractions and decimals. Fluently and accurately multiply and 6.1.D
 • Bits
and
Pieces
II
 divide non-negative fractions and explain the inverse relationship between multiplication and division with fractions. Multiply and divide whole numbers and 6.1.E
 • Bits
and
Pieces
II
 decimals by 1000, 100, 10, 1, 0.1, 0.01, and 0.001. Fluently and accurately multiply and 6.1.F
 • Bits
and
Pieces
II
 divide non-negative decimals. Describe the effect of multiplying or 6.1.G
 • Moving
Straight
Ahead
 dividing a number by one, by zero, by a (7th
grade)
 number between zero and one, and by a number greater than one. Solve single- and multi-step word 6.1.H
 • Moving
Straight
 problems involving operations with Ahead?
 fractions and decimals and verify the Mathematical Expressions and Equations
 Write a mathematical expression or 6.2.A
 • Variables
and
Patterns
 equation with variables to represent information in a table or given situation. Draw a first-quadrant graph in the 6.2.B
 • 
Variables
and
Pattern
 coordinate plane to represent information in a table or given situation. Evaluate mathematical expressions when 6.2.C
 • Variables
and
Patterns
 the value for each variable is given. Apply the commutative, associative, and 6.2.D
 • Accentuate
the
 distributive properties, and use the order Negative
 of operations to evaluate mathematical Solve one-step equations and verify 6.2.E
 • Variables
and
Patterns
 solutions. • Moving
Straight
Ahead
 Solve word problems using mathematical 6.2.F
 • Variables
and
patterns
 expressions and equations and verify Ratios, Rates & Percents
 Identify and write ratios as comparisons of 6.3.A
 • Comparing
and
Scaling
 part-to-part and part-to-whole • Stretching
and
 Write ratios to represent a variety of rates. 6.3.B
 • Comparing
and
 • Stretching
and
 Represent percents visually and 6.3.C
 • Bits
and
Pieces
I
 numerically, and convert between the fractional, decimal, and percent representations of a number. Solve single- and multi-step word 6.3.D
 • Bits
and
Pieces
II
 problems involving ratios, rates, and • Comparing
and
Scaling
 percents, and verify the solutions. Identify the ratio of the circumference to 6.3.E
 • Covering
and
 the diameter of a circle as the constant , Surrounding
 and recognize and 3.14 as common approximations of . Determine the experimental probability of 6.3.F
 • How
likely
is
it?
 a simple event using data collected in an Determine the theoretical probability of an 6.3.G
 • How
likely
is
it?
 event and its complement and represent the probability as a fraction or decimal from 0 to 1 or as a percent from 0 to 100. Two & Three Dimensional Figures
 Determine the circumference and area of 6.4.A
 • Covering
and
 circles. Surrounding
 Determine the perimeter and area of a 6.4.B
 • Covering
and
 composite figure that can be divided into Surrounding
 triangles, rectangles, and parts of circles. Solve single- and multi-step word 6.4.C
 • Covering
and
 problems involving the relationships Surrounding
 among radius, diameter, circumference, and area of circles, and verify the Recognize and draw two-dimensional 6.4.D
 • Filling
and
Wrapping
 representations of three-dimensional Determine the surface area and volume of 6.4.E
 • Filling
and
Wrapping
 rectangular prisms using appropriate formulas and explain why the formulas Determine the surface area of a pyramid. 6.4.F
 • 
 Describe and sort polyhedra by their 6.4.G
 • 
 attributes: parallel faces, types of faces, number of faces, edges, and vertices. Additional Key Content
 Use strategies for mental computations 6.5.A
 • Bits
and
Pieces
II
 with non-negative whole numbers, fractions, and decimals. Locate positive and negative integers on 6.5.B
 • Accentuate
the
 the number line and use integers to Negative
 represent quantities in various contexts. Compare and order positive and negative 6.5.C
 • Accentuate
the
 integers using the number line, lists, and Negative
 the symbols <, >, or =. 
Reasoning,
Problem
Solving
& Communication
 Grading Scale: Analyze a problem situation to determine 6.6.A
 the question(s) to be answered. 90-100% A Identify relevant, missing, and extraneous 6.6.B
 information related to the solution to a problem. 80-89% B Analyze and compare mathematical 6.6.C
 strategies for solving problems, and select 70-79% C and use one or more strategies to solve a 65-69% D Represent a problem situation, describe 6.6.D
 the process used to solve the problem, and verify the reasonableness of the 0-64% F Communicate the answer(s) to the 6.6.E
 question(s) in a problem using Final Grades Based On: appropriate representations, including symbols and informal and formal 30% Quizzes mathematical language. Apply a previously used problem-solving 6.6.F
 strategy in a new context. 20% Journals Extract and organize mathematical 6.6.G
 information from symbols, diagrams, and 10% Projects graphs to make inferences, draw conclusions, and justify reasoning. 25% Homework Make and test conjectures based on data 6.6.H
 (or information) collected from explorations and experiments. 15% Exit Exam
{"url":"http://www.docstoc.com/docs/41119218/2009-2010-6th-Grade-Math-Syllabus","timestamp":"2014-04-23T15:33:26Z","content_type":null,"content_length":"62763","record_id":"<urn:uuid:99df9da6-99dd-4ad9-a9f3-d3ded93afd10>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Answer Hello, Can anybody help me in this program. Write a C program to find the square root of a given number? I can understand only C and please i need answer in C only. - read more A simple C program allowing you to find the square root of a number. #include <math.h> #include <stdio.h> int main(void) { double x = 4.0, result; result = sqrt(x ... C program to find square root of a number ; Square root of a number in c - read more Share your answer: write a c program to find the square root of a given numbe? Question Analizer write a c program to find the square root of a given numbe resources
{"url":"http://www.askives.com/write-a-c-program-to-find-the-square-root-of-a-given-numbe.html","timestamp":"2014-04-16T15:24:25Z","content_type":null,"content_length":"37676","record_id":"<urn:uuid:287bc22a-bc04-4b56-9ab7-df7f09869d27>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Higher Order Functions For those of us who want to gain a greater understanding of how s work, what are some of the more that are included in their libraries? • CurriedFunctions -- define the primary method being used where a HigherOrderFunction takes a CurriedFunctor. A common class of these functions are: • ComposeFunction -- partially specify inputs and composes functors. • Other combinators, like flip (flip f x y = f y x) • Monadic bind (>>=) • map is a generic concept not constrained to lists • HaskellLanguage's interact • higher order parsers, like lchain : In we have std::binder1st, std::binder2nd, std::binary_compose, and std::unary_compose which can only compose and partially specify the inputs for unary and binary operations. • Technically speaking, binary_compose and unary_compose are not strictly std::, but an SGI extension to the StandardTemplateLibrary. Nicolai Josuttis, in his "TheCppStandardLibrary: A Tutorial and Reference", describes other composers: □ compose_f_gx(f, g)(x) == f(g(x)) □ compose_f_gxy(f, g)(x, y) == f(g(x, y)) □ compose_f_gx_hx(f, g, h)(x) == f(g(x), h(x)) □ compose_f_gx_hy(f, g, h)(x, y) == f(g(x), h(y)) CurryingSchonfinkelling and lambda functions are for partially specifying inputs. ComposeFunction is only for composing other functions. The CeePlusPlus templates you give are an attempt to implement currying and functional composition in a language that does not have native support for curried functions or higher order functions, and so are not good examples. Well, you would know more than I, but it still seems like compose1 and compose2 in are very much like the . Can you give a CeePlusPlus like example for ? Example of compose in C++: #include <iostream> // implementation of "compose" in C++ template<class A, class B, class C> class Compose Compose(A (*f)(B b), B (*g)(C c)) this->f = f; this->g = g; A operator()(C c) return (*f)((*g)(c)); A (*f)(B b); B (*g)(C c); // example of use int f(int x) return x + 1; int g(int y) return 2*y; int main() Compose<int, int, int> f_o_g(&f, &g); // f(g(3)) should produce the same answer as f_o_g(3) std::cout << "f(g(3)) = " << f(g(3)) << std::endl; std::cout << "f_o_g(3) = " << f_o_g(3) << std::endl; return 0; This seems to reinforce what I said about unary_compose and binary_compose approximating . What you've typed is very much like these templates in the just defined them to use function objects instead of pointers. Fortunately, through the use of std::ptr_fun() (a template adapter to create instances of std::pointer_to_unary_function and std::pointer_to_binary_function, you can mix and match both. For example: // create a CeePlusPlus style FunctorObject struct f( double arg ) { double operator()( double arg ) { return arg * ( pi / 180 ); // compose FunctorObject and function pointer angles.begin(), angles.end(), std::compose1( std::ptr_fun( sin ), g() ) ); Or you can compose even further like: angles.begin(), angles.end(), std::ptr_fun( sin ), g() ) ) ); Which is really just the same as: angles.begin(), angles.end(), std::ptr_fun( sin ), std::bind2nd( std::multiplies<double>(), pi / 180. ) ) ) ); In the above, compose1 and compose2 are just template function wrappers that make it easier to use std::unary_compose and std::binary_compose. This is why I originally wrote that one could use unary_compose and binary_compose in to approximate a . This is from some of its documentation: This operation is called function composition, hence the name unary_compose. It is often represented in mathematics as the operation f o g, where f o g is a function such that (f o g)(x) == f(g (x)). Function composition is a very important concept in algebra. It is also extremely important as a method of building software components out of other components, because it makes it possible to construct arbitrarily complicated function objects out of simple ones. However, I agree that binder1st and binder2nd, while important for most uses of compose1 and compose2 are not an required part of composition. Maybe that was what you meant to say -- i.e. not all the templates I mentioned, just the binders. -- RobertDiFalco CategoryFunctionalProgramming
{"url":"http://c2.com/cgi/wiki?CommonHigherOrderFunctions","timestamp":"2014-04-19T17:27:43Z","content_type":null,"content_length":"8302","record_id":"<urn:uuid:d92e3a7b-da2b-4c54-b60e-0c59ba9f556b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Problems Library - Primary, Operations with Numbers - Addition This page: About Levels of Difficulty operations with numbers number theory algebraic thinking data analysis logical reasoning Browse all About the PoW Library Addition is central to solving these problems, though it may not be the only required operation. Related Resources Interactive resources from our Math Tools project: Math 1: Operations with Numbers: Addition NCTM Standards: Number and Operations Standard for Grades Pre-K-3 Access to these problems requires a Membership.
{"url":"http://mathforum.org/library/problems/sets/primary_operations_addition.html","timestamp":"2014-04-16T08:29:24Z","content_type":null,"content_length":"31850","record_id":"<urn:uuid:ae286e7a-8c68-44c3-9e5d-9442f4ed7381>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the first Galois cohomology group of the Galois module End(T_l(A)) for some abelian variety A over a finite field k and l some prime number different from the characteristic of the base field? up vote 2 down vote favorite According to Serre's book 'Galois cohomology', Galois chomology group are always torsion, but it seems to me that H^1(k, End_{Z_l}(T_l(A)))=coker(Frob-1) on End_{Z_l}(T_l(A)), which has the same Z_l rank as End_{k}(T_l(A)) So maybe End_{Z_l}(T_l(A)) is not a discrete galois module. And why is the Tate module a discrete galois module? waht are the Galois cohomology groups of the Tate module of some abelian variety over a finite field or a number field? arithmetic-geometry galois-theory Tate modules (with their natural topology) aren't discrete except in trivial cases. – David Loeffler May 17 '10 at 11:58 Thank you very much, David – Heer May 17 '10 at 13:02 add comment 1 Answer active oldest votes In general, if $G$ is a profinite group and $M$ a continuous discrete $G$--module, then $H^i(G,M)$ is torsion for $i>0$. This applies in particular to Galois cohomology, i.e. when $G$ is a Galois group. up vote 0 down Tate modules are not discrete Galois modules, and their cohomology will usually not be torsion. The same goes for $\mathrm{End}(T_\ell A)$. vote accepted Over finite or local fields the cohomology of $T_\ell A$ is more or less well understood. Not so over global fields. thank you very much. – Heer May 17 '10 at 13:02 williamstein.org/edu/2010/582e/lectures/all/… In Stein's lecture note section 17, it seems that he formulates galois cohomology without mensioning discrete galois module, and says that Tate modules are Galois modules. This makes me very confusing – Heer May 17 '10 at 13:05 1 @Heer: look more closely at the statement of 17.2 and especially the sentence preceding it; his example of Tate module certainly violates his conventions there. You may also wish to look at the appendix of Rubin's book on Euler systems (where he discusses the general formalism of Galois cohomology with ``$\ell$-adic coefficients" for arithmetically interesting fields in a concrete way). – BCnrd May 17 '10 at 13:36 @BCnrd: Thank you very much, Prof. Conrad – Heer May 17 '10 at 14:55 add comment Not the answer you're looking for? Browse other questions tagged arithmetic-geometry galois-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/24984/what-is-the-first-galois-cohomology-group-of-the-galois-module-endt-la-for-s?sort=votes","timestamp":"2014-04-19T04:40:20Z","content_type":null,"content_length":"58838","record_id":"<urn:uuid:c1316369-1f61-4493-8560-94dbaa38e2fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Dacula Prealgebra Tutor ...Tutoring is only effective if the student comes to class prepared, with specific questions that they are having trouble with. Otherwise, a a tutoring session turns into a lecture and becomes so much less efficient. The key is to maximize efficient student - tutor contact time, so that the student gets the most out the time they are paying for! 8 Subjects: including prealgebra, Spanish, English, chemistry ...I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...In the past three years, I have realized that this is where I should be (in teaching) as I truly enjoy the different personalities each student brings. I have a bachelors in Accounting and currently pursuing a masters in teaching in Education K-8. I am truly a tennis fanatic as my tennis bag is always packed in my trunk and any point in time I can get dressed and ready for the 18 Subjects: including prealgebra, reading, writing, accounting ...I also scored a 3 on the AP Government exam, and was a member of the Alabama team for We The People in 2009 which came in second in the nation. As a senior in college, I took an entire semester of classes devoted to classic literature and languages. I have studied Latin for one year, and analyzed various classic books, plays, and poems. 34 Subjects: including prealgebra, reading, writing, calculus ...Teaching has always been a passion for me. I love my job and truly enjoy assisting struggling students to reach their potential. I do my best to raise students’ self-esteem and to make them understand hard work can lead to a better future. 7 Subjects: including prealgebra, French, biology, grammar
{"url":"http://www.purplemath.com/Dacula_Prealgebra_tutors.php","timestamp":"2014-04-16T04:39:51Z","content_type":null,"content_length":"23955","record_id":"<urn:uuid:477aac63-572d-433b-9467-39fdc2d72e02>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
The Breastplate Of The High Priest html 1 Aaron B Cohen, BA The breastplate, the first mentioned and seemingly most significant of the holy garments worn by the High Priest of the Levitical priesthood, was a remarkable and beautiful adornment. It was finely crafted in brightly coloured threads of gold, blue, purple and scarlet and fine twined, bleached linen, having twelve precious gemstones set in gold filigree, and bearing the engraved names of the twelve tribes of Israel in four rows of three. It was a perfect square, a span in length and a span in breadth (circa 9 x 9 inches), the material being doubled (i.e. the material originally being circa 9 x 18 inches) to form a pouch in which the Urim and Thummin were kept. So it was both decorative and with purpose. But its purpose is much more than we read in the text for hidden beneath this object of great value and beauty is a mathematical matrix of equal magnificence and ingenuity, defying the laws of probability or chance occurrence. Let there be no doubt that the same perfect design and precision that lay behind the making of the breastplate also went into the construction of this mathematical object, made not of man but by the divine inspiration of God. Lists of the twelve tribes of Israel occur throughout the Bible from Genesis to Revelation in various permutations of fourteen possible names but we are not given precise details relating to the names of the tribes whose names were engraved on the stones of the breastplate. However, with the exclusion of Levi (the Levitical priesthood being set apart), and Joseph who was separated from his brethren (but including his two sons Manasseh and Ephraim), we have the list of the twelve tribes of Israel as detailed in the book of Numbers, chapter 1. The breastplate is described as having four rows of three and, therefore, the logical list of the twelve names of the tribes on the breastplate, not in the order of their camps but in the order of the birth of their forebears in accordance with the making of the ephod (Exodus 28:15:‘And thou shalt make the breastplate of judgment with cunning work; after the work of the ephod thou shalt make it ...’ and Exodus 28:10: ‘Six of their names on one stone, and the other six names of the rest on the other stone, according to their birth’.), and also in the order of the first appearance of their names in the Bible, is as follows, reading from right to left as with written Hebrew. TRIBAL NAMES ON THE BREASTPLATE JUDAH SIMEON REUBEN GAD NAPTHALI DAN ZEBULUN ISSACHAR ASHER EPHRAIM MANASSEH BENJAMIN By transposing the gematria or numerical value of the names (each letter of the Hebrew alphabet has a corresponding numerical value) we obtain the numerical matrix below. Let us now consider the mathematics: The total sum of all twelve name values is 3700. Bear in mind that the breastplate was a perfect square and this figure is equal to the perfect square of 10 times 37. Ten is associated with the Ten Commandments and also with the dimensions of the Holy of Holies which measured 10 x 10 x 10 cubits. The significance of 10 then can be said to be one of Holiness. But of much greater interest is the highest prime factor of 3700, the number 37, for this is the most sublime of all numbers, the number of God. Much could be said here about the number 37 but in order not to digress too far from the purpose of this article this discussion is limited to the more salient points. The number 37 can be said to represent the ‘key to wisdom’. Why the ‘key to wisdom’? Well, if it is a key it means it can be turned or used to open something. When 37 is ‘turned’ it becomes 73, which is the exact numerical value of the Hebrew word chokmah, which means wisdom. The gematria of the very first verse of the Holy Bible, Genesis 1:1 ‘In the beginning God created the heaven and the earth.’ is equal to 37 x 73. This verse is the key to wisdom if we accept that in the beginning God really did create the heaven and the earth and that what follows this verse is truth. Interestingly, the product of the digits of these two prime factors of the Genesis 1:1 verse total 2701, i.e. 3 x 7 x 7 x 3, is equal to 441, the gematria of emeth, the Hebrew word for truth. Can it be coincidence that the highest prime factor of 2701 is 73, the value of wisdom and that the highest prime factor of its reflection, 1072, is 67, the value of binah which means understanding? Thus we are taught from this very first verse that wisdom comes through reading through the Holy Bible but true understanding only comes when we go back over it, by studying and reflecting or meditating upon the word. The number 37 is a concatenation of the digits 3 and 7, represented respectively in the Hebrew numbering system by the letters gimmel and zayin. Is it again coincidence that the gematria value of the word gimmel is 73, the value of chokmah or wisdom, and that of zayin is 67 the gematria of binah or understanding? I must reiterate that here we are talking about the very first verse of the Holy Bible, 1 of a total of 31,102. And is it yet again coincidence that the following names or titles in Greek gematria of Jesus, Christ, Godhead (Theotes), Son of Man, and of course Jesus Christ by amalgamation, exhibit numerical values which are all multiples of 37? The highest prime factor, in other words the highest number that itself cannot be further divided but itself divides the number, of all these name values is 37! We have already reached the realms of incalculable improbability but these few facts concerning 37 are just the tip of one gigantic iceberg. Hidden in and beneath the text of the Holy Bible is a wealth of treasure waiting to be discovered by those who would seek it. This is where real enlightenment comes. There exists much rabbinic argument as to exactly which tribes were engraved on the breastplate and in which order but the truth is contained in the mathematics. The figure 3700, the matrix total, is the exact value of the Greek words spoken by the Samaritan woman at the well in John 4:25: ‘Messias cometh, which is called Christ’. Complimentary to this theme is the fact that the Hebrew word khoshen, translated as ‘breastplate’ (or ‘breastpiece’ in some bibles) but more probably meaning ‘pocket’ or ‘bag’ and used only in connection with this covering on the breast of the High Priest, has exactly the same numerical value as the Hebrew word Moshiach meaning Messiah. Thus the breastplate may also be regarded as a spiritual proclamation of the coming Messiah, of extreme relevance and significance in relationship to the mathematics of this particular matrix. If we consider the matrix as a checkerboard, there is perfect mathematical symmetry in that the sum of the figures on the black squares is exactly equal to the sum of the figures on the white 259 + 30 + 570 + 501 + 95 + 395 = 1850 466 + 54 + 7 + 830 + 162 + 331 = 1850 It has been stated that the sum of the whole matrix is 3,700 or 10 squared times its highest prime factor 37. On the basis that multiples of 37 only occur every 37^th number we should not expect any of the twelve figures themselves to divide by 37 since 12/37 only gives a probability of 0.324. However, there is one which does and this is the very first figure 259, the gematria or numerical value of Reuben, the first born. The number 259 is equal to 7 x 37. By pairing values together, the are sixty-six different combinations or possibilities and, in a random set of twelve numbers we would only expect 66/37, a probability of 1.783, occurrences of pair combinations that are divisible by 37. There are actually four pair combinations that are exact multiples of 37, more than double what might be expected from a random set. However, the really striking and amazing thing is that the figures in each pair are not scattered on the matrix as again one might expect but are all adjacent to one another. This is quite incredible. We can see on the matrix how these single or paired figures which are multiples of 37 fit together like a jigsaw, making a perfect square of 3 x 3 cells. 259 = 7 x 37 466 + 570 = 1036 = 28 x 37 30 + 7 = 1 x 37 54 + 501 = 555 = 15 x 37 830 + 95 = 925 = 25 x 37 Only the bottom or fourth row is not touched by a single figure or paired combination which is a multiple of the enigmatic 37. The row total is 888, the exact numerical value of JESUS in Greek 162 + 395 + 331 = 888 = 24 x 37 This fourth row begins with the name Benjamin which means ‘Son of my right hand’, an extraordinary and beautiful touch. Numbers 23:10, the last verse of Balaam’s first oracle, more often than not is interpreted as a rhetorical question but there can be little doubt that it is also a riddle: ‘Who can count the dust of Jacob, and the number of the fourth part of Israel?’ Counting the dust of Jacob simply means, as far as the riddle is concerned, counting the gematria of the names of the tribes, just as in Revelation 13:18 where counting the number of the Beast results in the number 666. The fourth part or fourth row of this tribal matrix is the value of JESUS. The fourth part of the whole matrix as a total figure 3700 equals 925, the value of the fourth coupling of two on the matrix. This figure is the value of JESUS CHRIST in English or French when the Hebrew/Greek alphanumeric system of units, tens, hundreds is applied to the English or French alphabets. The verse above concludes ‘Let me die the death of the righteous, and let my last end be like his!’ In every single instance of the possible combinations of 1, 2, 3, … and so on up to 12 numbers there is an abundance of occurrences of multiples of 37 above the expected rate in each group. This is a really quite remarkable feature even in factor analyses where there exists an overall abundance of the factor in question. The sum of all the possible combinations that divide exactly by 37 is 236800 or 10 squared (a numerical symbol of holiness) times 2368, the gematria of JESUS CHRIST in the Greek. Just to recap on this: the whole matrix is 10 squared times its highest prime factor 37, and the sum of all the combinations which are multiples of its highest prime factor is 10 squared times 2368 JESUS CHRIST, the latter figure being equal to 8 squared times 37. The 10 squared or 100^th figure in table 1 below is 2368! Jesus said ‘I am the first and the last’ a theme which pervades Bible numerics, the breastplate matrix being no exception. Summing the logical first combination value in each of the 12 groups, and again summing the last combination in each group, the difference of these totals is 2368, the exact gematria of JESUS CHRIST. The last figure in group 7 is 2368. There are in all 127 combinations of multiples of 37 and 127 is the gematria of ‘KING OF GLORY’ in Hebrew. Some of these combination values are repeated as can be seen in the list of these in ascending order (table 2). Is it a coincidence that 37 squared (1369) occurs only once and this is the 37^th in the series? Or that 2368 (Jesus Christ) occurs only once and this is the 91^st? The Hebrew words Adonai YHWH (Lord GOD or Jehovah), Ha-Elohim (GOD) and Amen each compute to 91, the 13^th triangular number. In Deuteronomy 6:4 ‘Hear, O Israel: The LORD our God is one LORD:’, the Hebrew word ‘echad’ meaning one computes to 13. The number 91 therefore incorporates the idea of the Trinity. The Greek gematria of Jesus 888, this number being one digit repeated three times, also conveys this idea albeit in a different manner. There are three occurrences of 888, which occur at positions 12, 13, and 14 in the list. The sum of these is 39, the value of ‘one LORD’ in the Shema, i.e. the verse above. The positions of the two occurrences of 925, Jesus Christ, are 15 and 16, which total 31, the Hebrew gematria of El (GOD). The ordinal positions of the seven tribal names on the matrix which constitute 2368, Jesus Christ, sum to 61, the value of ‘I AM’ in Hebrew. These names form a contiguous block, which breaks down into two smaller contiguous blocks having the values 1480 CHRIST and 888 JESUS. The alternate addition/subtraction of each of the seven name values results in 386, the Hebrew value of YESHUA or Jesus. When the seven name values are each reversed and then added the resultant figure is 1234 which is the gematria of the Hebrew words ‘THE MESSIAH OF ISRAEL’. The digit sum of the numbers constituting Jesus Christ sum to 73, the value of WISDOM. The last definition of Wisdom given in Chambers 20^th Century Dictionary, 1977, is Jesus Christ. Embedded in the ‘Jesus Christ’ (2368) block is another contiguous block 925 (830 + 95), the gematria of JESUS CHRIST in English and French. I would be extremely indebted and very grateful if any reader whose mother tongue is not English, French, Greek or Hebrew would be kind enough to forward their alphabet, together with the spelling of Jesus Christ. I am particularly interested in those languages where the names Jesus or Jesus Christ compute to a multiple of 37 but those which do not are still of great importance. Thank SQUARES ON THE MATRIX: Geometric and mathematical design showing a clear allusion to the deity of Jesus Christ. The breastplate matrix of twelve cells allows a total of 20 different squares to be formed in 3 sizes as follows: │SIZE │NUMBER OF SQUARES │ │1 x 1 │ 12│ │2 x 2 │ 6│ │3 x 3 │ 2│ We should not expect any of the totals of the values of these squares to be an exact multiple of the highest prime factor of the matrix, 37, since 20/37 only gives a probability of 0.54. However, there are altogether 3, there being one in each of the possible sizes, i.e. uniform distribution. 1 x 1 2 x 2 3 x 3 │30│ 466│ 259│ │ 7│ 570│ 54│ │95│ 830│ 501│ │ │ │ │ │ 1 x 1│ 1│ 1│ (7 x 37) 259│ 259.00│ │ 2 x 2│ 4│ 2,3,5,6│ (29 x 37) 1073│ 268.25│ │ 3 x 3│ 9│ 1 - 9│ (76 x 37) 2812│ 312.44│ │TOTAL │ 14│ │ (112 x 37) 4144│ 296.00│ With the obvious exception of the first (1 x 1), the average cell value on each square is not a whole integer. However, in the grand total of all 3 squares the average cell value is an exact number, 296, the highest common factor of the numerical values of JESUS (3 x 296) and of CHRIST (5 x 296) in the Greek. Attention is thus drawn to the total of all three as opposed to individual square totals. This value 4144 is equal to 37 (the highest prime factor of the whole matrix, and of 'JESUS' and of 'CHRIST') times 112, the exact numerical value of the Hebrew name JEHOVAH-ELOHIM, i.e. Lord GOD. At the same time it is also a multiple of the value for the Greek word Theotes meaning GODHEAD (7 x 592). The table below shows the number of times each individual cell is used in the formation of the three squares. It can be observed that this operation breaks the breastplate matrix into three distinct contiguous blocks. │CELL USAGE│CELL POSITIONS│TOTAL CELL GEMATRIA VALUES│NUMERICAL VALUE OF: │ │0 │ 10,11,12│ 888│JESUS │ │1 │ 4,7,8,9│ 1480│CHRIST │ │2 │ 1,2,3,5,6│ 1332│ │ │TOTAL │ │ 3700│MESSIAS COMETH WHICH IS CALLED CHRIST │ The figure below highlights the residual values when the name values are divided by 37, i.e. nv mod 37 = rv. The sum of the residual values of the names that make ‘Christ’ is 74; the value of Jesus computed using the positional values of the letters in the English alphabet. Likewise, the sum of the residual values of the names that make Jesus is also 74. The sum of these seven values is 148, the value of the Hebrew Pesach or Passover. The first and the last values, 17 and 35, total 52, the value of Elakbaw, a name meaning ‘God will hide’. This figure is twice the value of YHWH. The highly improbable data, and there’s much more, contained in this matrix is nothing less than astonishing, especially concerning the name of Jesus Christ. When Aaron the High Priest put on the breastplate he bore the names of the children of Israel on his heart (Ex 28:29). Was he spiritually aware that he was also bearing the name of the Messiah of Israel, Jesus Christ, on his heart? The breastplate of judgment or decision of the Old Testament spiritually equates to the breastplate of righteousness of the New. As believers we are a chosen generation, a royal priesthood, so let us put on the breastplate of righteousness (Ephesians 6:14) and bear the name of JESUS upon our hearts. The breastplate of the Old Covenant was for making decisions. If you dear reader have not yet made a decision for CHRIST you are urged to do it now, before it is too late. JESUS IS COMING SOON! - The End - This article will be updated shortly with more information regarding the Breastplate of the High Priest. Please bookmark for future reference. Recommended links: An Oracle Restored by Vernon Jenkins: The Gospel in the Stones by John Tng: Contact me: aaron_b_cohen_ba@yahoo.co.uk Table 1: Breastplate name value combinations, which are multiples of the highest prime factor of the matrix (37) in group (number of elements) order and logic progression. Table 2: Breastplate name value combinations, which are multiples of the highest prime factor of the matrix (37) in ascending numerical order. Ian Mallett discovered the breastplate matrix in April 1997, copyright Palmoni Research. Re-written by Aaron B Cohen, BA, November 2005, copyright Far-In X-Ray. Last updated 18^th November 2005. The contents of this article, in part or as a whole, may be freely copied with appropriate credits for non-commercial use only.
{"url":"http://www.fivedoves.com/revdrnatch/breastplate.htm","timestamp":"2014-04-17T16:14:41Z","content_type":null,"content_length":"275585","record_id":"<urn:uuid:4910fc18-7e76-4fdf-b9fc-21c2fcfa1719>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
foot per second squared The foot per second squared (symbolized ft/s ^2 or ft/sec ^2 ) is the unit of acceleration vector magnitude in the foot-pound-second ( fps ) or English system. This quantity can be defined in either of two senses: average or instantaneous. For an object traveling in a straight line, the average acceleration magnitude is obtained by evaluating the object's instantaneous linear speed (in feet per second) at two different points t [1] and t [2] in time, and then dividing the distance by the span of time t [2] - t [1] (in seconds). Suppose the instantaneous speed at time t [1] is equal to s [1] , and the instantaneous speed at time t [2] is equal to s [2] . Then the average acceleration magnitude a [avg] (in feet per second squared) during the time interval [ t [1] , t [2] ] is given by: a [avg] = ( s [2] - s [1] ) / ( t [2] - t [1] ) Instantaneous acceleration magnitude is more difficult to intuit, because it involves an expression of motion over an arbitrarily small interval of time. Let p represent a specific point in time. Suppose an object is in motion at about that time. The average acceleration magnitude can be determined over increasingly short time intervals centered at p , for example: [ p -4, p +4] [ p -3, p +3] [ p -2, p +2] [ p -1, p +1] [ p -0.5, p +0.5] [ p -0.25, p +0.25] [ p - x , p + x ] where the added and subtracted numbers represent seconds. The instantaneous acceleration magnitude, a [inst] , is the limit of the average acceleration magnitude as x approaches zero. This is a theoretical value, because it can be obtained only by inference from instantanous speed values determined at the starting and ending points of progressively shorter time spans. Acceleration, in its fullest sense, is a vector quantity, possessing direction as well as magnitude. For an object moving in a straight line and whose linear speed changes, the acceleration vector points in the same direction as the object's direction of motion. But acceleration can be the result of a change in the direction of a moving object, even if the instantaneous speed remains constant. The classic example is given by an object in circular motion, such as a revolving weight attached to the rim of a wheel. If the rotational speed of the wheel is constant, the weight's acceleration vector points directly inward toward the center of the wheel. The foot per second squared is not generally used by scientists; the meter per second squared (m/s ^2 or m/sec ^2 ) is preferred. However, lay people in the United States and, to a lesser extent, in England occasionally define acceleration in terms of feet per second squared. Also see acceleration , velocity , International System of Units ( SI ), and Table of Physical Units and Constants. This was last updated in September 2005 Tech TalkComment Contribute to the conversation All fields are required. Comments will appear at the bottom of the article.
{"url":"http://whatis.techtarget.com/definition/foot-per-second-squared","timestamp":"2014-04-21T05:22:43Z","content_type":null,"content_length":"63358","record_id":"<urn:uuid:e4fcbe64-c6ee-465e-9a81-cbc9d4e85618>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Re(eigenvalue) inequality problem September 28th 2005, 11:51 AM Re(eigenvalue) inequality problem if a diff.eqn. has the characteristic equation L^2 + (3-k)L +1 = 0 the eigenvalues solves to -3/2 + K/2 +/- 1/2*sqrt(5-6K+K^2). No problem there. But when is the diff.eqn. asymp. stable, meaning Re(L)<0 ? I can only get this far Re[ -3/2 + K/2 +/- 1/2*sqrt(5-6K+K^2) ] < 0 -3/2 + 1/2 Re[ K +/- sqrt(5-6K+K^2) ] < 0 How can i find the values for K, where this inequality is true? September 30th 2005, 11:50 AM Firstly distinguish the cases when the eigenvalues are real and complex. When the discriminant k^2-6k+5 < 0, the eigenvalues are a complex conjugate pair with real part (k-3)/2. Now k^2-6k+5 < 0 precisely when 2 < k < 3. So in this case we have (k-3)/2 < 0 and there is stability. Otherwise the discriminant is positive and there are two real roots: we want to know whether they are noth negative. The product of the roots is 1, so they have the same sign. The sum of the roots is k-3 and this is negative if k < 3. But the case k > 2 has already been dealt with -- that leads to imaginary roots. So there is stability with two negative real roots if k <= 2. The remaining case is k>=3 which means two positive real roots and hence instability.
{"url":"http://mathhelpforum.com/advanced-math-topics/986-re-eigenvalue-inequality-problem-print.html","timestamp":"2014-04-19T13:45:11Z","content_type":null,"content_length":"4497","record_id":"<urn:uuid:8881dcf4-0b24-4612-9fe5-b4b4d9c82ddf>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric description of the Deligne-Mumford stacks up vote 6 down vote favorite It is well known that a one-dimensional smooth Deligne-Mumford stack (over $\mathbb{C}$) could be described as a collection of its "stacky" points (finitely many) on its coarse moduli space with the orders of their stabilizers. As wccanard noted in the comments, here we need to assume that all but finitely many points of the stack have the trivial automorphisms group. 1. Is there an analogous description for two-dimensional DM stacks? For $\dim > 2$ DM stacks? Maybe for stacks with some additional restrictions (for example, assume that its coarse moduli is a smooth scheme)? By analogous description I mean a collection of closed subschemes on its coarse moduli space with the orders of their stabilizers. Is there an example of two non-isomorphic DM-stacks for which such descriptions coincide? 2. Is there some geometric description for one-dimensional DM 2-stacks (n-stacks)? stacks ag.algebraic-geometry 5 This doesn't seem to me to be true for one-dimensional stacks. Can't I take any 1-dimensional space, take any finite group, let the finite group act trivially and take the stack quotient? And then your assertion becomes "a finite group is determined by its order". What am I missing? Can you give a reference for this well-known fact? – user30035 Mar 25 '13 at 20:17 1 Yes, certainly I've missed some conditions. I think this fact about 1-dimensional DM-stacks will be true if we shall assume that general points have trivial stabilizer. Unfortunately, I can't provide a reference for this fact. – user32511 Mar 25 '13 at 21:17 You've edited the question since my previous comment but let me try to cause more trouble. Consider two lines meeting transversally at a point, and let the group of order 2 switch the lines. The 3 coarse moduli space of the stack quotient is the line, with one point having a group of order 2 as automorphisms. Now consider the group of order 2 acting on the affine line with the non-trivial element acting as -1. The coarse moduli space is again the line with one point having automorphism group of order 2. But those stacks are not isomorphic, if my understanding is correct (one is smooth andoneisnt – user30035 Mar 25 '13 at 22:28 2 Assuming that previous counterexample is OK, here's a suggestion. Instead of just asserting that something that sort-of looks true in your world view is "well-known", why not actually try and find, or write down, a proof yourself? Then you'll see what actually is true, and then you'll probably be able to ask a much better question. – user30035 Mar 25 '13 at 22:31 wccanard: Thanks. I forgot about some conditions, because my question was not about the one-dimensional case and also rather general. I asked if there is something in this style... So, certainly, 2 we need to add a smoothness condition. And now, I think, that's it! I think the proof is straightforward. Use the fact that smooth DM stacks locally (in the etale topology) are quotients of a smooth scheme by a finite group. – user32511 Mar 25 '13 at 23:53 show 2 more comments 1 Answer active oldest votes For question 2, I've never seen a definition of DM 2-stack, so I don't know where to start. For question 1, I have a tentative counterexample even for the case where both the stack and the coarse moduli space are smooth. The étale local picture near a stacky point is that we take an affine space, quotient by a finite group $G$, and take the coarse moduli space to get an affine space. By the Chevalley-Shepard-Todd theorem, if you want your coarse moduli space to be smooth, it is necessary and sufficient that $G$ be a complex reflection group, acting by complex reflections. In order to find a pair of suitable non-isomorphic stacks for which labelings of the coarse space by orders of groups does not distinguish them, we need to find two complex reflection groups satisfying the following conditions: 1. They have the same order. 2. The orders of stabilizers of affine subspaces of a given dimension form identical sets of positive integers. 3. There is an isomorphism of the coarse moduli spaces taking the images of suitably labeled subspaces to each other. up vote 3 down vote The relevant information is in the big table at the Wikipedia page on complex reflection groups. I will confine myself to rank 2 groups, because then the nontrivial subspace stabilizers are precisely the reflections. For some reason, I'm unable to think clearly right now (this may have something to do with the awkward wording of condition 3), so I'm going to make an assumption about transitivity that might not be true. Assumption: For each reflection order in an irreducible complex reflection group $G$, all reflection hyperplanes of that order in the source affine space are mapped to a single hyperplane in the target. Equivalently, irreducible complex reflection groups act transitively by conjugation on the quotient of the set of reflections of any fixed order by the corresponding hyperplane stabilizers. If the assumption is true, we only need to find two rank 2 complex reflection groups of the same order, whose reflections have the same order. For example, the exceptional groups of order 144 listed as numbers 7 and 14 in the Wikipedia table work, as well as several other exceptional groups matched with the semidirect products $G(m,p,2)$. 1 The assumption is not true. For example, the exceptional group of order 144 listed as number 7 gives three different lines with non-trivial stabilizers on the target, and the exceptional group of order 144 listed as number 14 gives only two different lines with non-trivial stabilizers on the target. Moreover, I'm absolutely sure that there is no counterexamples in dimension 2 constructed in such way. – user32511 Apr 7 '13 at 15:39 2 By the way, I don't understand, why the étale local picture near a stacky point need to be an affine space, quotient by a finite group G. Could you explain it? Moreover, for the using of Chevalley-Shepard-Todd theorem we need to assume that the action is linearizable that is not true in general. So, I think there still could be some ways to construct a counterexample in the dimention 2. – user32511 Apr 7 '13 at 15:57 Okay, thanks for letting me know about the two groups. Can you also compare them with the quotient by $G(24,8,2)$? Regarding the étale local picture, if you choose an étale neighborhood 1 of your stacky point that is a smooth scheme, then by EGA IV 17.11.4 it is étale locally isomorphic to affine space. I haven't thought through the reduction to Chevalley-Shepard-Todd in full detail, but I figured that passing to the induced action on the tangent space of a point would not lose substantial information in characteristic zero. – S. Carnahan♦ Apr 8 '13 at add comment Not the answer you're looking for? Browse other questions tagged stacks ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/125555/geometric-description-of-the-deligne-mumford-stacks/126163","timestamp":"2014-04-19T20:52:19Z","content_type":null,"content_length":"65792","record_id":"<urn:uuid:f4e87004-c2b0-4afa-99a6-e317b5c02224>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Excelling With Excel #1 – Custom Functions Using VBA By Todd | Comments (19) Could you imagine your workplace without Microsoft Excel? Absolutely not. Excel is the language of business from corporate offices on down to manufacturing facilities. When a firm hires a new employee, it is assumed they are 1) breathing and 2) have acceptable Excel .. oh, I'm sorry. I've forgotten to introduce myself. My name is Greg, and I'm an orange cat. With the permission of my young professional ChE friend and the use his delicious Apple laptop, I've decided to contribute to the AIChE ChEnected community.. and I will do this through the use of my soft, orange paws to blog regularly about Excel. So, without further ado, allow me to introduce Tip#1: Tip#1: Creating Your Own Functions Just a disclaimer: This first tip is a little involved, simply because it involves the use of VBA (a programming language ), but it is not difficult! The Skinny on Excel Functions Excel functions are easy! The difficult part is knowing which one to use. A nice, complete list of Excel's functions can be found here. All you gotta' do is type them into a cell. [caption id="attachment_2317" align="alignnone" width="363" caption="Use of the standard SUM function in Excel"] Making Your Own Excel Function (!!!) Alright, crack your paws and get into a comfortable position; this will require some typing. The first thing we need to do is open the Visual Basic editor. Example - A Simple Modified 'SUM' Function 1) Opening the Visual Basic Editor Press Alt+F11 2) Open a Module 3) Type/Create Your Function Type the following into the module: Function CAT_SUM(x as Integer, y as Integer) CAT_SUM = x + y End Function 4) Use Your Function in Excel [caption id="attachment_2322" align="alignnone" width="387" caption=" Comparison of the newly-created SUM_CAT function to the standard SUM function in Excel"] Notice the SUM function we had used before has the same result as the CAT_SUM function we have just created. The Anatomy of A VBA Excel Function Now that you've seen a quick example, allow me to share with you the basics of how VBA Excel functions are created: [caption id="attachment_2215" align="alignright" width="466" caption="General Layout of VBA Functions"] If this function looks familiar, it should. It's the CAT_SUM function in the example above. The function's beginning is marked by the Function Declaration, and the function's end is marked by the End of Function Declaration. In between, we have the Body of the function in which calculations are performed (usually involving any indicated Input parameters). Custom VBA Excel functions can be used to simplify your spreadsheets, but, because they require knowledge of rudimentary VBA, they do require some patience to learn. However, once you've gotten a hold on them, you can do some pretty amazing things (much more complicated than a simple sum)! If you have any questions about the content of this post, please contact my young professional ChE friend, Todd Krueger. - Greg the Orange Cat 19 Responses to “Excelling With Excel #1 – Custom Functions Using VBA” 1. I don't want to be difficult, but I would ban Excel form doing serious engineering calculations! To be fair it has its place in a limited number of instances but in general, No Thanks! I could elaborate but simply, in response to your question "could I imagine my workplace without Excel?" – the answer is "easily". Best regards. □ Excel isn't optimal for many situations, but I certainly couldn't imagine my workplace without it. We do all sorts of analyses with excel as well as prepare system uploads. It is used in all sorts of regular forms and production tools. From an overall look at the company and its use of Excel, its pretty flexible – it can handle very simple tasks (just listing numbers) and very complex ones (sophistocated macros). Some (small) companies actually use excel to store all of their sales and inventory data, a kind of pre-ERP. □ Ray, I definitely agree about the "serious engineering calculations" part. Pullin' an Aspen in Excel would be immensely difficult (and foolish)! ☆ Actually, Aspen has developed functions and commands that can be used in Excel VBA. In fact my company right now is trying to put together a program that will extract process data from HYSYS and put it into Excel to be placed into Material Balances and equipment datasheets. Also many hydraulic programs are based in VBA or other programming languages. So to say you could live without Excel is possible, it'd just take you twice the amount of time to do it. 2. Dear Cat; I liked your posting. How about another one using something other then the Integer ? I always wanted to learn how to use macros…is this a type of macro? □ David, here is a similar example using a double (a number with decimals): http://content.screencast.com/users/LittleKrueger... I'll have to make the 'How to Make / Write / Use Macros' post in the near future.. □ David, I recently needed to write a macro but had never done it before. I found the book, "Excel Programming" by Jinjer Simon very helpful. Now that I know how, I find myself writing macros all the time. Also, I've found that if I google what I need to do along with "vba", there is a helpful post on a programming forum somewhere. 3. Dear Greg the Orange Cat, Thanks for the tip! I will definately use this. And I will have to echo the hopes of a future macro post! The world of VBA can get a little scary when you don't know what you are doing. 4. Greg, I'd like to hear more about using excel to extract data from databases and other systems as well. My excel usage is very rudimentary as I only use it to track my projects. Can macros be used to extract data straight from databases or only from reports created by the databases? P.S. I like your style. Lets meet at the Three Blind Mice restaurant tonight, say around meow:30. I heard the cat-nip is marvelous. □ Macros (VBA) can do just that. That will have to be a subject of future post.. 5. Cat – you say you use mac, but then demonstrate in Win Excel, dastardly no? Wikipedia says macro-writing is no longer possible on Mac Excel, do you have a workaround? Is the what the "Automator Workflow" scroll-looking menu the replacement for VBA macros? Do you ever use this feature? □ Greg doesn't use a Mac. He was simply capitalizing on the fact that he was pictured using a Mac in the article. Greg, in fact, has never used Excel on a Mac. (he's sorry for the confusion) -Todd on behalf of Greg 6. Thanks. FYI – I think (haven't tried) that Mac Excel can load and run Win Excel macros, just not record, write, or edit. □ P.S. I like the first two excel tutorials, please keep it up. Can you touch on using Excel VBA to interact with other applications? And the late Dr. E. B. Nauman would suggest all calculations can be done in excel as long you are willing to run your script overnight or over the weekend. An underrated tool. ☆ Like Greg, I use VBA. I think Dr. E. B. Nauman is right, but I will say VBA is limited. If not implemented judiciously, VBA can make a spreadsheet unusable, and get you into situations that you don't want to be in (e.g. being responsible for a 'broken' spreadsheet at work). They can also be enormous (non-value-added) time sinks. That said, VBA is still very useful! (and Greg is working on a Macro/Function/VBA/Programming post right now. he'll try to include something about interactions with other applications in there too) 7. What exactly is better programming language to start off with java or vb? 8. I just want to add a short note to the above discussion! I think we should know for what reason we use Excel! I think any problem needs its suitable tool to solve! We don't want to do CFD on Excel nor a 2D dynamic simulation of a heterogeneous reaction in a fixed bed with VBA-Excel! In ChE we use it for simple calculations, for data presentation. We use its table capability, simple connection to other application for data extracting and so on… For these cases Excel is more than enough! Sometimes ago there was a debate on using C or Fortran or Matlab for ChE undergraduates. The final answer was any of those software has its own specific area of application! We use Fortran for number crunching, after more than 50 years it is still the best! Matlab is for rapid prototyping and simulation, calculations (not heavy) and quick visualization and C is for system programming and C++ for application development, so none of them can discard the others. I will later add a column on how to use Excel as a great user interface for much sophisticated calculation. It is on how using Fortran libraries with Excel. 9. Thanks for sharing your insights on Microsoft Exel. Appreciate the tips! 10. Hi Greg, Thanks for your post, I was always dreaming to put my work in excel into a VBA function, in this connection I would like to ask on how could we possibly transform this formula into a VBA
{"url":"http://chenected.aiche.org/tools-techniques/excelling-with-excel-1-2/","timestamp":"2014-04-18T13:34:30Z","content_type":null,"content_length":"105567","record_id":"<urn:uuid:84be1c23-71e2-465d-a3e5-ceb527230ff5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] filters for rows or columns Robert Kern robert.kern@gmail.... Tue Aug 25 18:26:10 CDT 2009 On Tue, Aug 25, 2009 at 16:18, Giuseppe Aprea<giuseppe.aprea@gmail.com> wrote: > Hi, I would like to do something like this > a=array([[1,2,3,4],[5,6,7,8],[4,5,6,0]]) > idxList=[] > for i in range(0,a.shape[1]): > if len(nonzero(a[:,i])[0])==1: #want to extract column indices > of those columns which only have one non vanishing element > idxList.append(i) > I already used where on !D array but I don't know if there is some function or > some kind of syntax which allow you to evaluate a condition for each > column(row). column_mask = ((a != 0).sum(axis=1) == 1) idxArray = np.nonzero(column_mask)[0] # if you must Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-August/044784.html","timestamp":"2014-04-20T13:36:11Z","content_type":null,"content_length":"3690","record_id":"<urn:uuid:b5011604-8c17-4ba0-891a-7b5f9aeb213f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00216-ip-10-147-4-33.ec2.internal.warc.gz"}